Salvador

Reputation: 16472

Download, pause and resume a download using Indy components

I'm currently using the TIdHTTP component to download a file from the internet. I'm wondering if it is possible to pause and resume the download using this component, or maybe another Indy component.

This is my current code. It works fine for downloading a file (without resume). Now I want to pause the download and close my app, and when my app restarts, resume the download from the last saved position.

var
  Http: TIdHTTP;
  MS: TMemoryStream;
begin
  Result := True;
  Http := TIdHTTP.Create(nil);
  MS := TMemoryStream.Create;
  try
    try
      Http.OnWork := HttpWork; // this event reports the progress of the download
      Http.Head(Url);
      FSize := Http.Response.ContentLength;
      AddLog('Downloading File ' + GetURLFilename(Url) + ' - ' + FormatFloat('#,', FSize) + ' Bytes');
      Http.Get(Url, MS);
      MS.SaveToFile(LocalFile);
    except
      on E: Exception do
      begin
        Result := False;
        AddLog(E.Message);
      end;
    end;
  finally
    Http.Free;
    MS.Free;
  end;
end;

Upvotes: 6

Views: 5186

Answers (2)

K.Sandell

Reputation: 1394

Maybe the HTTP RANGE header can help you here. Have a look at archive.org's copy of http://www.west-wind.com/Weblog/posts/244.aspx for more info on resuming HTTP downloads:

(2004-02-07) A couple of days ago somebody on the Message Board asked an interesting question about how to provide resumable HTTP downloads. My first response to this question was that this isn't possible since HTTP is a stateless protocol that has no concept of file pointers and thus can't resume an HTTP download.

However, it turns out HTTP 1.1 does have the ability to specify ranges in downloads by using the Range: header in the HTTP headers sent from the client. You can do things like:

Range: bytes=0-100000
Range: bytes=100000-
Range: bytes=-100000

which download the first 100,000 bytes, everything from byte 100,000 onward, or the last 100,000 bytes, respectively. There are more combinations, but the first two are the ones that are of interest for a resumable download.
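Back in the question's Delphi/Indy terms, the same idea can be sketched like this (CheckAndGetRange and the 100000 offset are made up for illustration; note that Indy prepends the bytes= unit itself when you assign TIdHTTP.Request.Range, which is why the snippets further down in this thread assign plain offsets):

// uses: IdHTTP, SysUtils, Classes
procedure CheckAndGetRange(const Url: String; Stream: TStream);
var
  Http: TIdHTTP;
begin
  Http := TIdHTTP.Create(nil);
  try
    // A HEAD request reveals whether the server advertises range support
    Http.Head(Url);
    if SameText(Http.Response.RawHeaders.Values['Accept-Ranges'], 'bytes') then
    begin
      // Ask only for bytes 100000 onward; a cooperating server
      // answers with 206 Partial Content
      Http.Request.Range := '100000-';
      Http.Get(Url, Stream);
    end
    else
      Http.Get(Url, Stream); // no range support: plain full download
  finally
    Http.Free;
  end;
end;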

To demonstrate this feature I used wwHTTP (in Web Connection/VFP) to download a first 400k chunk of a file into a file with HTTPGetEx which is meant to simulate an aborted download. Next I do a second request to pick up the existing file and download the remainder:

#INCLUDE wconnect.h
CLEAR
CLOSE DATA
DO WCONNECT

LOCAL o as wwHTTP
lcDownloadedFile = "d:\temp\wwipstuff.zip"

*** Simulate partial output
lcOutput = ""
Text=""
tnSize = 0
o = CREATEOBJECT("wwHTTP")
o.HttpConnect("www.west-wind.com")
? o.httpgetex("/files/wwipstuff.zip",@Text,@tnSize,"Range: bytes=0-400000"+CRLF,lcDownloadedFile)
o.Httpclose()

lcOutput = Text
? LEN(lcOutput)

*** Figure out how much we downloaded
lnOpenAt = FILESIZE(lcDownloadedFile)

*** Do a partial download starting at this byte count
Text=""
tnSize =0
o = CREATEOBJECT("wwHTTP")
o.HttpConnect("www.west-wind.com")
? o.httpgetex("/files/wwipstuff.zip",@Text,@tnSize,"Range: bytes=" + TRANSFORM(lnOpenAt) + "-" + CRLF)
o.Httpclose()

? LEN(Text)
*** Read the existing partial download and append current download
lcOutput = FILETOSTR(lcDownloadedFile) + Text
? LEN(lcOutput)

STRTOFILE(lcOutput,lcDownloadedFile)

RETURN

Note that this approach uses a file on disk, so you have to use HTTPGetEx (with Web Connection). The second download can also be done to disk if you choose, but things will get tricky if you have multiple aborts and you need to piece them together. In that case you might want to try to keep track of each file and add a number to it, then combine the result at the very end.
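Since this thread is about Delphi, here is a rough sketch of that "numbered parts" idea in Delphi (CombineParts and the .part naming scheme are hypothetical):

// uses: SysUtils, Classes
procedure CombineParts(const FinalFile: String; PartCount: Integer);
var
  Dst, Src: TFileStream;
  I: Integer;
begin
  Dst := TFileStream.Create(FinalFile, fmCreate);
  try
    // Append each numbered part to the final file in order
    for I := 1 to PartCount do
    begin
      Src := TFileStream.Create(FinalFile + '.part' + IntToStr(I), fmOpenRead);
      try
        Dst.CopyFrom(Src, 0); // Count = 0 copies the entire source stream
      finally
        Src.Free;
      end;
    end;
  finally
    Dst.Free;
  end;
end;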

If you download to memory using WinInet (which is what wwHTTP uses behind the scenes) you can also try to peel out the file from the Temporary Internet Files cache. Although this works I suspect this process will become very convoluted quickly so if you plan on providing the ability to resume I would highly recommend that you write your output to file yourself using the approach above.

Some additional information on WinInet and some of the requirements for this approach to work with it are described here: http://www.clevercomponents.com/articles/article015/resuming.asp.

The same can be done with wwHTTP for .Net by adding the Range header to the wwHTTP:WebRequest.Headers object.

(Randy Pearson) Say you don't know what the file size is at the server. Is there a way to find this out, so you can know how many chunks to request, for example? Would you send a HEAD request first, or does the header of the GET response tell you the total size also?

(Rick Strahl) You have to read the Content-Length: header to get the size of the file downloaded. If you're resuming, this shouldn't matter - you just use Range: (existingsize)- to get the rest. For chunked downloads you can read the content length and only download the first x bytes. This gets tricky with wwHTTP - you have to make individual calls with HTTPGetEx and set the tnBufferSize parameter to the chunk size so that it stops once that size is reached.

(Randy Pearson) Follow-up: It looks like a compliant server would send you enough to know the size. If it provides chunks it should reply with something like:

Content-Range: bytes 0-10000/85432

so you could (if desired) extract that and use it in a loop to continue with intelligent chunk requests.
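In Delphi, extracting that total could be sketched as follows (TotalSizeFromContentRange is a made-up helper; with TIdHTTP the raw header is available as Http.Response.RawHeaders.Values['Content-Range']):

// uses: SysUtils
function TotalSizeFromContentRange(const ContentRange: String): Int64;
var
  SlashPos: Integer;
begin
  // Everything after the '/' is the total entity size; '*' means unknown
  SlashPos := Pos('/', ContentRange);
  if SlashPos > 0 then
    // '*' or garbage after the slash falls back to -1 (unknown)
    Result := StrToInt64Def(Copy(ContentRange, SlashPos + 1, MaxInt), -1)
  else
    Result := -1; // header missing or malformed
end;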

Also see https://forums.embarcadero.com/message.jspa?messageID=219481 for a TIdHTTP-related discussion of the same topic:

(taken at least partly from the TFileStream.Seek and offset confusion discussion)

// Assumes Http: TIdHTTP, Url and dstFile are already set up;
// Max() lives in the Math unit.
if FileExists(dstFile) then
begin
  Fs := TFileStream.Create(dstFile, fmOpenReadWrite);
  try
    // Re-request the last 1 KB as an overlap, in case the tail of the
    // previous download was incomplete; the server resends from there
    // and the overlapping bytes are simply overwritten.
    Fs.Seek(Max(0, Fs.Size - 1024), soFromBeginning);
    // alternatively:
    // Fs.Seek(-1024, soFromEnd);
    Http.Request.Range := IntToStr(Fs.Position) + '-';
    Http.Get(Url, Fs);
  finally
    Fs.Free;
  end;
end;
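One caveat worth adding to that snippet (my note, not part of the linked discussion): not every server honors Range. If the reply comes back as 200 OK instead of 206 Partial Content, the server sent the whole file from byte zero, so the bytes just written at the seek position are misaligned and it is safer to restart cleanly:

Http.Request.Range := IntToStr(Fs.Position) + '-';
Http.Get(Url, Fs);
if Http.Response.ResponseCode = 200 then
begin
  // Server ignored the range; discard everything and download from scratch
  Fs.Size := 0;
  Http.Request.Range := '';
  Http.Get(Url, Fs);
end;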

Upvotes: 3

DaniCE

Reputation: 2421

The following code worked for me. It downloads the file in chunks:

// requires IdHTTP, IdComponent (the TWork* event types) and Math (Max)
// in the uses clause
procedure Download(Url, LocalFile: String;
  WorkBegin: TWorkBeginEvent; Work: TWorkEvent; WorkEnd: TWorkEndEvent);
var
  Http: TIdHTTP;
  fFileStream: TFileStream;
  quit: Boolean;
  FLength, aRangeEnd: Integer;
begin
  Http := TIdHTTP.Create(nil);
  try
    try
      Http.OnWorkBegin := WorkBegin;
      Http.OnWork := Work;
      Http.OnWorkEnd := WorkEnd;

      Http.Head(Url);
      FLength := Http.Response.ContentLength;
      quit := false;
      repeat

        if not FileExists(LocalFile) then
          fFileStream := TFileStream.Create(LocalFile, fmCreate)
        else begin
          fFileStream := TFileStream.Create(LocalFile, fmOpenReadWrite);
          if fFileStream.Size >= FLength then begin
            // File is already complete; requesting more would fail
            fFileStream.Free;
            Break;
          end;
          // Re-request the last 4 KB as an overlap in case the
          // previous chunk was cut short
          fFileStream.Seek(Max(0, fFileStream.Size - 4096), soFromBeginning);
        end;

        try
          // Request roughly the next 50 KB
          aRangeEnd := fFileStream.Size + 50000;

          if aRangeEnd < FLength then
            Http.Request.Range := IntToStr(fFileStream.Position) + '-' + IntToStr(aRangeEnd)
          else begin
            // Last chunk: request everything that is left
            Http.Request.Range := IntToStr(fFileStream.Position) + '-';
            quit := true;
          end;

          Http.Get(Url, fFileStream);
        finally
          fFileStream.Free;
        end;
      until quit;
      Http.Disconnect;

    except
      on E: Exception do
      begin
        //Result := False;
        //AddLog(E.Message);
      end;
    end;
  finally
    Http.Free;
  end;
end;
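For completeness, a sketch of how this procedure might be wired up from a form (TForm1, the handler names, the URL and the local path are placeholders; the signatures match Indy 10's TWorkBeginEvent/TWorkEvent/TWorkEndEvent from the IdComponent unit, which use Int64 counters - older Indy versions use Integer):

procedure TForm1.HttpWorkBegin(ASender: TObject; AWorkMode: TWorkMode;
  AWorkCountMax: Int64);
begin
  // AWorkCountMax is the size of the current chunk, not the whole file
end;

procedure TForm1.HttpWork(ASender: TObject; AWorkMode: TWorkMode;
  AWorkCount: Int64);
begin
  // AWorkCount restarts for every ranged Get; add the bytes already
  // on disk if you want overall progress
end;

procedure TForm1.HttpWorkEnd(ASender: TObject; AWorkMode: TWorkMode);
begin
  // One chunk finished; the repeat loop decides whether more follow
end;

procedure TForm1.ButtonDownloadClick(Sender: TObject);
begin
  Download('http://www.example.com/big.zip', 'c:\temp\big.zip',
    HttpWorkBegin, HttpWork, HttpWorkEnd);
end;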

Upvotes: 7
