
Re: hardware/optimizations for a download-webserver



On Tue, 20 Jul 2004 20:05, Brett Parker <iDunno@sommitrealweird.co.uk> wrote:
> > (create large file)
> > steve@hadrian:~$ dd if=/dev/urandom of=public_html/large_file bs=1024 count=50000
> > 50000+0 records in
> > 50000+0 records out
> >
> > (get large file)
> > steve@gashuffer:~$ wget www.lobefin.net/~steve/large_file
> > [...]
> > 22:46:09 (9.61 MB/s) - `large_file' saved [51200000/51200000]
> >
> > Of course, for reasonably sized files (where reasonable is <10MB),
> > I get transfer speeds closer to 11MB/s.  YMMV, but it is not a fault
> > of the TCP protocol.  Switched 10/100 connection here.  Of course real
> > internet travel adds some latency, but that's not the point - in the
> > OP's question the bottleneck is bandwidth, not the NIC.
>
> *ARGH*... and of course, there's *definitely* no compression going on
> there, is there...

If the files come from /dev/urandom then there won't be any significant 
compression.
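
A quick way to check that claim locally (just a sketch; the file names and
sizes below are only examples, not from the original test) is to gzip a file
of urandom data next to a file of zeroes and compare the results:

(create one incompressible and one highly compressible test file)
$ dd if=/dev/urandom of=random_file bs=1024 count=10000
$ dd if=/dev/zero of=zero_file bs=1024 count=10000

(compress both and compare sizes)
$ gzip -c random_file > random_file.gz
$ gzip -c zero_file > zero_file.gz
$ ls -l random_file random_file.gz zero_file zero_file.gz

The gzipped copy of the urandom data typically comes out about the same size
as the original (often slightly larger, due to the gzip header), while the
zero-filled file shrinks to a few kilobytes.  So on-the-wire compression
couldn't plausibly explain the throughput quoted above.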

http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.1/0257.html

Once again, see the above URL for Dave S. Miller's .sig on the topic.
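
And to see whether any compression was actually negotiated on the wire, wget
can print the server's response headers (same URL as in the quoted test;
-O /dev/null just discards the downloaded body):

$ wget -S -O /dev/null http://www.lobefin.net/~steve/large_file

A "Content-Encoding: gzip" header in that output would indicate a compressed
transfer; in the normal case there is none, since wget doesn't ask for gzip
encoding by default, so the bytes travel uncompressed.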

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


