Reputation: 53
The following code adds 3,000 to 4,000 documents of various sizes to a Firebird Blob field before running out of memory. I believe I am freeing the streams I create, but I may not be doing it correctly; streams are a new tool in my toolbox. Can anyone spot what I'm doing wrong?
procedure TfrmMigration.Button1Click(Sender: TObject);
var
  DocFile: string;
  Blob: TStream;
  fs: TFileStream;
begin
  with DM.qImageList do
  begin
    Close;
    SQL.Text := ImgListSQL;
    Open;
    First;
    while not Eof do
    begin
      { Ignore if previously done. }
      if FieldByName('Document_Files').IsNull then
      begin
        DocFile := Trim(gsDocumentPath + '\' + FieldByName('ImageName').AsString);
        if FileExists(DocFile) then
        begin
          Edit;
          { Copy the file's contents into the blob field. }
          Blob := CreateBlobStream(FieldByName('Document_Files'), bmWrite);
          try
            Blob.Seek(0, soFromBeginning);
            fs := TFileStream.Create(DocFile, fmOpenRead or fmShareDenyWrite);
            try
              Blob.CopyFrom(fs, fs.Size);
            finally
              fs.Free;
            end;
          finally
            Blob.Free;
          end;
          Post;
        end;
      end;
      Next;
    end;
  end;
end;
Upvotes: 1
Views: 145
Reputation: 53
Thank you all for pitching in. You were correct. There was nothing actually wrong with my code.
I solved the memory-depletion issue by turning off CacheBlobs in my query component. I'm using IBDAC, and all I needed to do was set that property to False. That allowed me to copy 107,389 documents into the database in a little over an hour on a fairly slow Windows 11 machine (running the app in the IDE). It seems I had been putting data into the cache faster than it was being removed.
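In case it helps anyone else, the change amounts to a single property assignment. Here is a minimal sketch, assuming the property is exposed as Options.CacheBlobs on the IBDAC query (that is where it appears in the version I'm using); the FormCreate handler is only for illustration, since I actually set it at design time in the Object Inspector:
{ Minimal sketch of the change, not the full migration form.
  Assumes the IBDAC dataset exposes the setting as Options.CacheBlobs. }
procedure TfrmMigration.FormCreate(Sender: TObject);
begin
  // With blob caching disabled, blob data is streamed to the server as each
  // record is posted instead of accumulating in the dataset's cache.
  DM.qImageList.Options.CacheBlobs := False;
end;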
Again, thank you for all your comments.
Upvotes: 1