On Thu, Sep 14, 2000 at 10:01:26AM +0200, Hugo Haas wrote:
> On Fri, Sep 08, 2000, Curtis Johnstone wrote:
> > Apparently it uses a facility in HTTP to grab different chunks of the file
> > at the same time. I vaguely remember FTP having this capability -- you can
> > specify where in the file you want to start downloading from (makes the
> > 'resume' feature possible). Download accelerator slices the file up and
> > downloads multiple chunks simultaneously.
Sounds like Range requests (RFC 2616, section 14.35)... browsers also use
those requests to resume image downloads that were interrupted.
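For example (a rough Python sketch; the URL and offset are made up),
resuming is just a matter of asking for the bytes you're missing:

    import urllib.request

    url = "http://example.com/big-file.iso"  # made-up URL
    already_have = 1048576                   # bytes saved before the interruption

    # ask only for the bytes from that offset onward
    req = urllib.request.Request(url,
                                 headers={"Range": "bytes=%d-" % already_have})
    with urllib.request.urlopen(req) as resp:
        # a server that honors Range replies 206 Partial Content and
        # sends only the remainder of the file
        print(resp.status)                   # expect 206
        rest = resp.read()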
> > I was thinking it would be cool to have that built right into HTTP for web
> > sites and basic HTML. When the browser hits a web site, the site immediately
> > responds with how many 'chunks' it has, with the appropriate markers. HTTP
> > then starts up x number of simultaneous connections and grabs different
> > pieces of the HTML file and puts it back together. Or, alternatively,
> > simultaneous connections could download the graphic files associated with a
> > site.
I think all modern browsers do this for graphics... the feature
first showed up in Netscape 0.93b, I think (summer/fall 1994),
though it might not have been until a bit later.
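Something like this is all it takes (a Python sketch with made-up
image URLs; Netscape's default of 4 concurrent requests becomes
max_workers=4 here):

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    images = [                               # made-up URLs
        "http://example.com/banner.gif",
        "http://example.com/photo1.jpg",
        "http://example.com/photo2.jpg",
    ]

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return url, resp.read()

    # fetch the page's images over up to 4 connections at once
    with ThreadPoolExecutor(max_workers=4) as pool:
        for url, data in pool.map(fetch, images):
            print(url, len(data))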
> HTTP/1.1 defines chunked transfer coding (RFC2616, section 3.6.1).
>
> Every browser speaking HTTP/1.1 could use this feature to do the same. I
> am not sure why they don't, but in my opinion, download
> accelerators are anti-social because they use several connections, which:
>
> - put a heavy load on the server:
> If a server used to have, say, 100 users simultaneously and needed 100
> processes to serve their requests, it may now need 500 processes
> (assuming that each user opens 5 connections).
>
> - can be unfair to the users:
> If my policy is to limit the number of connections to my server to
> 100, still assuming that people are going to open 5 connections each
> time, there will be a limit of 20 users.
I remember everyone flaming Netscape on Usenet when this feature
first showed up, and Andreessen posted something pointing out
that because each user would be hitting the server for one-fifth
as long, it all evens out in the end (that is, for servers with
many concurrent users).
Note that Netscape allows you to change how many concurrent
requests it sends out (defaults to 4, I think); I haven't tried
tweaking that number to see if a higher value is any faster.
For a single large file, I don't really see how this would speed
things up, unless the multiple connections are just getting around
some artificial per-connection limit. (Shouldn't a single download
be enough to saturate whatever bandwidth you have available?)
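For the record, the trick itself is easy enough to reproduce. Here's
a rough Python sketch of what the accelerators do; the URL and chunk
count are made up:

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    url = "http://example.com/big-file.iso"  # made-up URL
    n = 5                                    # connections, like the accelerators use

    # ask for the file's size first
    head = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(head) as resp:
        size = int(resp.headers["Content-Length"])

    def fetch_range(i):
        start = i * size // n
        end = (i + 1) * size // n - 1        # byte ranges are inclusive
        req = urllib.request.Request(
            url, headers={"Range": "bytes=%d-%d" % (start, end)})
        with urllib.request.urlopen(req) as resp:
            return resp.read()               # each reply is 206 Partial Content

    # fetch the n pieces in parallel and stitch them back together
    with ThreadPoolExecutor(max_workers=n) as pool:
        data = b"".join(pool.map(fetch_range, range(n)))
    assert len(data) == size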
--
Gerald Oskoboiny <[email protected]>
http://impressive.net/people/gerald/