Download Accelerator

There have been quite a few software packages for the PC (mostly shareware)
in the last couple of years claiming to boost your connection speed to the
Internet. Several cats I work with have been using one called Download
Accelerator (http://www.downloadaccelorator.com). I've been using it for
about 3 months now and it is really awesome -- especially for downloading
files.

It installs itself in place of your browser's regular downloading mechanism.
When you click on a file it pops up, immediately tests the connection speed
to the server hosting the file, and gives you the option to download it now
or schedule it for later. With FTP sites it finds any available mirrors
carrying the same file and downloads it from multiple locations
simultaneously.

What is most impressive is the speed gain. I regularly see speed increases
of five times. I was getting a downstream speed of about 70 KB/sec tonight
on Gerald's site, and I downloaded a 244 Mb file in about 11 minutes.

Apparently it uses a facility in HTTP to grab different chunks of the file
at the same time. I vaguely remember FTP having this capability -- you can
specify where in the file you want to start downloading from (which makes
the 'resume' feature possible). Download Accelerator slices the file up and
downloads multiple chunks simultaneously.
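
I believe the facility in question is the byte-range request: the client
sends a Range header naming the byte offsets it wants, and the server
replies with just that slice. A rough sketch of such an exchange, with a
made-up URL and offsets:

    GET /pub/big-file.iso HTTP/1.1
    Host: example.com
    Range: bytes=1000000-1999999

    HTTP/1.1 206 Partial Content
    Content-Range: bytes 1000000-1999999/5000000
    Content-Length: 1000000

    ...second megabyte of the file...

A client that opens several connections, each asking for a different range,
can stitch the pieces back together locally.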

I was thinking it would be cool to have that built right into HTTP for web
sites and basic HTML. When the browser hits a web site, the site immediately
responds with how many 'chunks' it has, with the appropriate markers. HTTP
then starts up x number of simultaneous connections and grabs different
pieces of the HTML file and puts them back together. Or, alternatively,
simultaneous connections could download the graphic files associated with a
site.
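
To make the idea a bit more concrete, here is a rough sketch in Python of
downloading a file in parallel byte-range slices and reassembling it. The
URL, file name and connection count are made up, and it assumes the server
honours Range requests:

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example.com/pub/big-file.iso"   # placeholder URL
    CONNECTIONS = 5

    def fetch_range(start, end):
        """Fetch bytes [start, end] of the file in a single request."""
        req = urllib.request.Request(URL,
                                     headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    # Ask for the total size, then split it into roughly equal slices.
    head = urllib.request.Request(URL, method="HEAD")
    total = int(urllib.request.urlopen(head).headers["Content-Length"])
    step = -(-total // CONNECTIONS)               # ceiling division
    ranges = [(i, min(i + step, total) - 1) for i in range(0, total, step)]

    # Download the slices on separate connections and stitch them together.
    with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
        parts = pool.map(lambda r: fetch_range(*r), ranges)

    with open("big-file.iso", "wb") as out:
        out.write(b"".join(parts))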

Re: Download Accelerator

On Fri, Sep 08, 2000, Curtis Johnstone wrote:
> Apparently it uses a facility in HTTP to grab different chunks of the file
> at the same time. I vaguely remember FTP having this capability -- you can
> specify where in the file you want to start downloading from (which makes
> the 'resume' feature possible). Download Accelerator slices the file up and
> downloads multiple chunks simultaneously.
>
> I was thinking it would be cool to have that built right into HTTP for web
> sites and basic HTML. When the browser hits a web site, the site immediately
> responds with how many 'chunks' it has, with the appropriate markers. HTTP
> then starts up x number of simultaneous connections and grabs different
> pieces of the HTML file and puts them back together. Or, alternatively,
> simultaneous connections could download the graphic files associated with a
> site.

HTTP/1.1 defines chunked transfer coding (RFC2616, section 3.6.1).
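
For reference, chunked transfer coding lets a server send a response body as
a series of length-prefixed pieces without knowing the total size up front.
A minimal sketch of what a chunked response looks like on the wire (chunk
sizes are in hex):

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked

    1a
    abcdefghijklmnopqrstuvwxyz
    10
    1234567890abcdef
    0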

Every browser speaking HTTP/1.1 could use this feature to do the same. I
am not sure why they don't do it, but in my opinion, download
accelerators are anti-social because they use several connections which:

- puts a heavy load on the server:
   If a server used to have say 100 users simultaneously and needed 100
   processes to serve their requests, it may now need 500 processes
   (assuming that each user opens 5 connections).

- can be unfair to the users:
   If my policy is to limit the number of connections to my server to
   100, still assuming that people are going to open 5 connections each
   time, there will be a limit of 20 users.

I would like to know the number of people using such programs. I
occasionally see them in my logs, but not very often.

--
Hugo Haas <[email protected]> - http://larve.net/people/hugo/
- I know you feel bad about the juice incident, but I'm sure you can
make up for it somehow. - That's it! Somehow! -- Homer Jay

Re: Download Accelerator

On Thu, Sep 14, 2000 at 10:01:26AM +0200, Hugo Haas wrote:
> On Fri, Sep 08, 2000, Curtis Johnstone wrote:
> > Apparently it uses a facility in HTTP to grab different chunks of the file
> > at the same time. I vaguely remember FTP having this capability -- you can
> > specify where in the file you want to start downloading from (which makes
> > the 'resume' feature possible). Download Accelerator slices the file up and
> > downloads multiple chunks simultaneously.

Sounds like Range-Requests... browsers also use those requests to
continue downloading images that were interrupted before.
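
A rough sketch of such a resume, in Python, with a made-up URL and file name
(it assumes the server supports Range requests):

    import os
    import urllib.request

    url = "http://example.com/images/photo.jpg"    # placeholder URL
    have = os.path.getsize("photo.jpg")            # bytes already on disk
    req = urllib.request.Request(url,
                                 headers={"Range": f"bytes={have}-"})
    # Expect "206 Partial Content"; append the remaining bytes to the file.
    with urllib.request.urlopen(req) as resp:
        with open("photo.jpg", "ab") as out:
            out.write(resp.read())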

> > I was thinking it would be cool to have that built right into HTTP for web
> > sites and basic HTML. When the browser hits a web site, the site immediately
> > responds with how many 'chunks' it has, with the appropriate markers. HTTP
> > then starts up x number of simultaneous connections and grabs different
> > pieces of the HTML file and puts them back together. Or, alternatively,
> > simultaneous connections could download the graphic files associated with a
> > site.

I think all modern browsers do this for graphics... this feature
first showed up in Netscape 0.93b, I think (summer/fall 1994).
It might not have been until a bit later, though.

> HTTP/1.1 defines chunked transfer coding (RFC2616, section 3.6.1).
>
> Every browser speaking HTTP/1.1 could use this feature to do the same. I
> am not sure why they don't do it, but in my opinion, download
> accelerators are anti-social because they use several connections which:
>
> - puts a heavy load on the server:
>     If a server used to have say 100 users simultaneously and needed 100
>     processes to serve their requests, it may now need 500 processes
>     (assuming that each user opens 5 connections).
>
> - can be unfair to the users:
>     If my policy is to limit the number of connections to my server to
>     100, still assuming that people are going to open 5 connections each
>     time, there will be a limit of 20 users.

I remember everyone flaming Netscape on Usenet when this feature
first showed up, and Andreessen posted something that pointed out
that because each user would be hitting the server for one-fifth
as long, it all evens out in the end. (that is, for servers that
have many concurrent users.)

Note that Netscape allows you to change how many concurrent
requests it sends out (defaults to 4, I think); I haven't tried
tweaking that number to see if a higher value is any faster.

For a single large file, I don't really see how this would speed
things up, unless the multiple connections are just getting
around some kind of artificial limit per connection or something.
(shouldn't a single download be enough to saturate whatever
bandwidth you have available?)

--
Gerald Oskoboiny <[email protected]>
http://impressive.net/people/gerald/

Re: Download Accelerator

On Thu, Sep 14, 2000, Gerald Oskoboiny wrote:
> For a single large file, I don't really see how this would speed
> things up, unless the multiple connections are just getting
> around some kind of artificial limit per connection or something.
> (shouldn't a single download be enough to saturate whatever
> bandwidth you have available?)

On Thu, Sep 14, 2000, Joseph M. Reagle Jr. wrote:
> But then again, if this accelerator really does work, that means the
> intermediary pipes can handle the bits (otherwise I wouldn't see a
> difference) AND the server can as well. So was the previous "inefficiency"
> arbitrary, or am I benefiting at the expense of my network peers?

This is actually a very interesting question: what is the bottleneck for
a TCP connection between two hosts?

I guess that the answer is simple: at some point along the route between
the source and the destination, there is a point of congestion.

Being very simplistic:

If we imagine that there are 100 TCP connections at this particular
point, and that this is the only problem along the way, the bandwidth
you get with this connection would be 1/100th of the available
bandwidth.

If all things are equal and you open 4 extra connections, you will get
5/104th of the bandwidth, i.e. you will download your large file about
4.8 times faster.
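
A quick check of that arithmetic (a small Python snippet, just to make the
ratio explicit):

    # Share of the bottleneck with 1 connection out of 100, versus
    # 5 connections out of 104 (your 5 plus the other 99 users' 1 each).
    single = 1 / 100
    parallel = 5 / 104
    print(parallel / single)   # ~4.81, i.e. roughly 4.8 times faster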

Again, this is very simplistic and I'm not even sure that it would work
that well, but I guess that is the idea.

And if everybody starts using download accelerators, a user will end up
with 5/500th of the bandwidth, i.e. the same as at the beginning, except
that there will be 5 times as many TCP control packets as there were with
only one connection; this increased overhead will cost the user bandwidth
that could have gone to the actual transfer.

--
Hugo Haas <[email protected]> - http://larve.net/people/hugo/
I would kill everyone in this room for a drop of sweet beer. -- Homer
J. Simpson

Re: Download Accelerator

At 10:01 9/14/2000 +0200, Hugo Haas wrote:
>Every browser speaking HTTP/1.1 could use this feature to do the same. I
>am not sure why they don't do it, but in my opinion, download
>accelerators are anti-social because they use several connections

Browsers will now open multiple connections for multiple resources. But I
think it's 1-to-1 (one connection per resource), and for big files I think
you answered your own question!

>I would like to know the number of people using such programs. I
>occasionally see them in my logs, but not very often.

I installed it just to play around with, and it is convenient for large
downloads. One of the things that surprises me with a cable modem is that
usually my bandwidth is not the limiting factor on my interactions. (The
reason I say cable modem is that I'm not usually downloading movies and mp3s
at work, so I wouldn't notice this there.) Frequently, when using Scour (I
don't use Napster anymore) I'll pull out those folks with stated T1
connections and ping < 50 and still only get a 30 Kbps average. I once got
80 on Scour, so maybe that was someone on my local subnet! Regardless (and
I'm sure there is an answer), where is most of the congestion/hold-up? It
isn't my pipe, and I'd be surprised if it's the servers.

But then again, if this accelerator really does work, that means the
intermediary pipes can handle the bits (otherwise I wouldn't see a
difference) AND the server can as well. So was the previous "inefficiency"
arbitrary, or am I benefiting at the expense of my network peers?
__
Regards,          http://www.mit.edu/~reagle/
Joseph Reagle     E0 D5 B2 05 B6 12 DA 65  BE 4D E3 C1 6A 66 25 4E
MIT LCS Research Engineer at the World Wide Web Consortium.

* This email is from an independent academic account and is
not necessarily representative of my affiliations.

HURL: fogo mailing list archives, maintained by Gerald Oskoboiny