Reputation: 10701
In my C# program, I am reading in about 350GB worth of zipped CSV files and storing the data in a SQLite v3 database. I'm using System.Data.SQLite
from NuGet.
My database is about 147GB at the moment, and I am getting an error when trying to run the next INSERT query:
(778) os_win.c:41557: (665) winWrite2(D:\System.db) - The requested operation could not be completed due to a file system limitation.
Error: near line 1: disk I/O error
The drive is 1.81TiB and has 1.37TiB free. The volume is NTFS. The DB is 146650432KiB. ChkDsk reports everything is OK, and the drive is otherwise working perfectly.
This happens for any INSERT from my program as well as from the DB Browser application.
(The database would probably shrink substantially if I ran a VACUUM, since the incremental INSERTs will have left many pages only partly filled.)
I estimate there are roughly 3.5 billion rows across 12 tables. From what I've read, a SQLite database of this size should not be a problem in itself.
Any ideas why this might be happening and how to work around it? Ideally I would not have to restart the whole import from scratch: it has taken a few days to get to this point, and it is only about 20% complete.
Upvotes: 4
Views: 5226
Reputation: 180172
Windows error code 665 (ERROR_FILE_SYSTEM_LIMITATION) means exactly what the message says: the file system has hit one of its internal limits.
The limit is not on the file size itself but on the bookkeeping for a heavily fragmented file: NTFS tracks every fragment (extent) of a file in its MFT record and overflow attribute lists, and a file split into enough fragments exhausts the space available for those mappings. (The exact same problem happens with Microsoft's own SQL Server.)
Microsoft has a hotfix that allows you to reformat the drive with larger NTFS file records (the format /L option), which reserves more space for these structures. But an easier way to fix the problem right now is to defragment the database file, or simply to make a fresh copy of it (if you have enough free space), since a newly written copy ends up in far fewer fragments.
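For example, the Sysinternals Contig tool can defragment a single file in place; a sketch, assuming Contig is installed and nothing else has the database open (the path is taken from the error message above):

:: report how many fragments the database file currently has
contig -a D:\System.db

:: rewrite the file into as few, large fragments as possible
contig D:\System.db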
If possible, try to avoid fragmenting the file further, i.e., don't write other data to the same file system while filling the database. Alternatively, to make SQLite allocate space in a few large chunks rather than many small fragments, extend the database file by a large amount up front by temporarily inserting a large row:
CREATE TABLE t(x);
-- One 500 MB zeroblob makes SQLite grow the file in a single large chunk.
INSERT INTO t VALUES(zeroblob(500*1024*1024));
-- Dropping the table frees the pages; later INSERTs reuse them instead of
-- growing the file again.
DROP TABLE t;
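If it is more convenient to do this from the import program itself, the same statements can be run through System.Data.SQLite; a minimal sketch, assuming the connection string below matches your setup:

using System.Data.SQLite;

class Preallocate
{
    static void Main()
    {
        // Path taken from the error message above; adjust to your database.
        using (var conn = new SQLiteConnection(@"Data Source=D:\System.db"))
        {
            conn.Open();

            // Same trick as the SQL above: grow the file in one 500 MB chunk,
            // then free those pages for the real INSERTs to reuse.
            var sql = "CREATE TABLE t(x);" +
                      "INSERT INTO t VALUES(zeroblob(500*1024*1024));" +
                      "DROP TABLE t;";

            using (var cmd = new SQLiteCommand(sql, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}

The freed pages stay on SQLite's freelist, so the subsequent bulk INSERTs fill them before the file has to grow again.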
Upvotes: 4