Re: Download Accelerator

On Thu, Sep 14, 2000 at 10:01:26AM +0200, Hugo Haas wrote:
> On Fri, Sep 08, 2000, Curtis Johnstone wrote:
> > Apparently it uses a facility in HTTP to grab different chunks of the file
> > at the same time. I vaguely remember ftp having this capability -- you can
> > specify where in the file you want to start downloading from (makes the
> > 'resume' feature possible). Download accelerator slices the file up and
> > downloads multiple chunks simultaneously.

Sounds like Range requests... browsers also use those requests to
resume image downloads that were interrupted.
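
Just to illustrate, a partial GET with a Range header looks
something like this in Python (the host and file name are made up;
nothing here is from any particular browser's code):

    import http.client

    conn = http.client.HTTPConnection("www.example.org")
    # ask only for the first 1000 bytes of the resource
    conn.request("GET", "/big-file.tar.gz",
                 headers={"Range": "bytes=0-999"})
    resp = conn.getresponse()
    # 206 Partial Content if the server honours the header,
    # 200 with the full body if it ignores it
    print(resp.status, resp.getheader("Content-Range"))
    first_chunk = resp.read()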

> > I was thinking it would be cool to have that built right into HTTP for Web
> > sites and basic HTML. When the browser hits a web site, the site immediately
> > responds with how many 'chunks' it has, with the appropriate markers. HTTP
> > then starts up x number of simultaneous connections and grabs different
> > pieces of the HTML file and puts it back together. Or, alternatively,
> > simultaneous connections could download the graphic files associated with a
> > site.
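
A toy version of that idea fits in a couple of dozen lines of
Python (hypothetical URL, no error handling, and it assumes the
server supports Range requests and reports a Content-Length):

    import http.client
    import threading

    HOST, PATH, N = "www.example.org", "/big-file.tar.gz", 4

    # find the total size with a HEAD request
    conn = http.client.HTTPConnection(HOST)
    conn.request("HEAD", PATH)
    size = int(conn.getresponse().getheader("Content-Length"))
    conn.close()

    parts = [None] * N

    def fetch(i):
        # byte range for the i-th slice (end offset is inclusive)
        start = i * size // N
        end = (i + 1) * size // N - 1
        c = http.client.HTTPConnection(HOST)
        c.request("GET", PATH,
                  headers={"Range": "bytes=%d-%d" % (start, end)})
        parts[i] = c.getresponse().read()
        c.close()

    threads = [threading.Thread(target=fetch, args=(i,))
               for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    data = b"".join(parts)   # the reassembled file
    assert len(data) == size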

I think all modern browsers do this for graphics... this feature
first showed up in Netscape 0.93b, I think (summer/fall 1994).
It might not have been until a bit later, though.

> HTTP/1.1 defines chunked transfer coding (RFC 2616, section 3.6.1).
>
> Every browser speaking HTTP/1.1 could use this feature to do the same. I
> am not sure why they don't do it, but in my opinion, download
> accelerators are anti-social because they use several connections which:
>
> - puts a heavy load on the server:
>     If a server used to have, say, 100 users simultaneously and needed
>     100 processes to serve their requests, it may now need 500
>     processes (assuming that each user opens 5 connections).
>
> - can be unfair to the users:
>     If my policy is to limit the number of connections to my server to
>     100, still assuming that each person opens 5 connections, only 20
>     users can be served at a time.
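
Spelling out the arithmetic behind both of those points, with the
same invented numbers:

    users, conns_per_user, conn_limit = 100, 5, 100

    # load: 500 server processes where 100 used to be enough
    processes = users * conns_per_user

    # fairness: only 20 users fit under a 100-connection cap
    max_users = conn_limit // conns_per_user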

I remember everyone flaming Netscape on Usenet when this feature
first showed up, and Andreessen posted something pointing out
that because each user would be hitting the server for one-fifth
as long, it all evens out in the end (that is, for servers that
have many concurrent users).

Note that Netscape allows you to change how many concurrent
requests it sends out (defaults to 4, I think); I haven't tried
tweaking that number to see if a higher value is any faster.

For a single large file, I don't really see how this would speed
things up, unless the multiple connections are just getting
around some kind of artificial limit per connection or something.
(Shouldn't a single download be enough to saturate whatever
bandwidth you have available?)
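
If there is such a per-connection cap, its effect is easy to
model; all the numbers here are invented:

    per_conn_cap = 20.0   # KB/s the server allows per connection
    client_link = 64.0    # KB/s the client's line can carry

    for n in (1, 2, 4, 8):
        # extra connections help only until the client's own
        # link is saturated
        print(n, "connection(s):",
              min(n * per_conn_cap, client_link), "KB/s")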

--
Gerald Oskoboiny <[email protected]>
http://impressive.net/people/gerald/

Re: Download Accelerator

On Thu, Sep 14, 2000, Gerald Oskoboiny wrote:
> For a single large file, I don't really see how this would speed
> things up, unless the multiple connections are just getting
> around some kind of artificial limit per connection or something.
> (Shouldn't a single download be enough to saturate whatever
> bandwidth you have available?)

On Thu, Sep 14, 2000, Joseph M. Reagle Jr. wrote:
> But then again, if this accelerator really does work that means the
> intermediary pipes can handle the bits (otherwise I wouldn't see a
> difference) AND the server can as well. So was the previous "inefficiency"
> arbitrary; or am I benefiting at the expense of my network peers?

This is actually a very interesting question: what is the bottleneck for
a TCP connection between two hosts?

I guess that the answer is simple: at some point along the route
between the source and the destination, there is congestion.

Being very simplistic:

If we imagine that there are 100 TCP connections at this particular
point, and that this is the only bottleneck along the way, the
bandwidth you get on this connection would be 1/100th of the
available bandwidth.

If all things are equal and you open 4 extra connections, you will get
5/104ths of the bandwidth, i.e. you will download your large file about
4.8 times faster.

Again, this is very simplistic and I'm not even sure that it would work
that well, but I guess that is the idea.
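
In numbers:

    total = 100              # connections sharing the bottleneck
    fair = 1 / total         # one connection's share: 1/100
    mine = 5 / (total + 4)   # my 5 connections out of 104: 5/104
    print(mine / fair)       # ~4.81, the speedup claimed above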

And if everybody starts using download accelerators, each user will
end up with 5/500ths of the bandwidth, i.e. the same as in the
beginning, except that there will be 5 times as many TCP control
packets as with one connection per user, so the increased overhead
actually cuts into the bandwidth left for the actual transfer.

--
Hugo Haas <[email protected]> - http://larve.net/people/hugo/
I would kill everyone in this room for a drop of sweet beer. -- Homer
J. Simpson

HURL: fogo mailing list archives, maintained by Gerald Oskoboiny