Reputation: 1
I am trying to ingest 9,000,000 rows into an Azure SQL elastic pool database with 6 vCores. Data ingestion is done with Python (pyodbc).
Since the data set is large, I am ingesting it in chunks.
I am seeing strange behaviour after the 9th chunk of the ingestion: the process disappears and then randomly reappears about an hour later.
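For reference, the chunked insert looks roughly like this (the table, columns, and connection details are placeholders):

```python
import pyodbc

# Placeholder connection string -- substitute your own server, database,
# and credentials.
CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:yourserver.database.windows.net,1433;"
    "Database=yourdb;UID=youruser;PWD=yourpassword;"
)

CHUNK_SIZE = 100_000  # rows sent per batch

def ingest(rows):
    """Insert a list of (col1, col2) tuples in chunks, committing per chunk."""
    conn = pyodbc.connect(CONNECTION_STRING)
    cursor = conn.cursor()
    # fast_executemany sends each batch as a bulk parameter array
    # instead of one round trip per row.
    cursor.fast_executemany = True
    sql = "INSERT INTO dbo.target_table (col1, col2) VALUES (?, ?)"
    for i in range(0, len(rows), CHUNK_SIZE):
        cursor.executemany(sql, rows[i:i + CHUNK_SIZE])
        conn.commit()  # commit per chunk so a stalled batch can't roll back the whole load
    cursor.close()
    conn.close()
```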
Is there any solution for this?
Upvotes: 0
Views: 88
Reputation: 15694
My suggestion: use a non-durable memory-optimized table to speed up data ingestion, and manage the In-Memory OLTP storage footprint by offloading historical data to a disk-based columnstore table. Use a scheduled job to regularly batch-offload the data to that columnstore table.
With that pattern you can obtain up to 1.4 million sustained rows per second during ingestion.
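Here is a rough sketch of that pattern driven from Python with pyodbc. The table names, columns, and bucket count are illustrative, and note that non-durable memory-optimized tables require a Premium/Business Critical elastic pool:

```python
import pyodbc

# Placeholder connection string, same shape as in the question.
CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:yourserver.database.windows.net,1433;"
    "Database=yourdb;UID=youruser;PWD=yourpassword;"
)

conn = pyodbc.connect(CONNECTION_STRING)
conn.autocommit = True
cursor = conn.cursor()

# One-time setup: a non-durable (SCHEMA_ONLY) memory-optimized staging table.
# Size BUCKET_COUNT at roughly 1-2x the expected number of in-flight rows.
cursor.execute("""
    CREATE TABLE dbo.staging_mem (
        id   BIGINT IDENTITY NOT NULL
             PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 16777216),
        col1 INT NOT NULL,
        col2 NVARCHAR(100) NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
""")

# Disk-based table with a clustered columnstore index for the offloaded history.
cursor.execute("""
    CREATE TABLE dbo.history (
        col1 INT NOT NULL,
        col2 NVARCHAR(100) NOT NULL,
        INDEX cci CLUSTERED COLUMNSTORE
    );
""")

# Run this batch from a scheduled job: copy accumulated rows to the
# columnstore table, then clear the staging table to cap the in-memory
# footprint. In production, use a transaction or an id watermark so rows
# inserted between the copy and the delete are not lost.
cursor.execute("""
    INSERT INTO dbo.history (col1, col2)
    SELECT col1, col2 FROM dbo.staging_mem;
    DELETE FROM dbo.staging_mem;
""")
```

Because SCHEMA_ONLY durability skips transaction logging entirely, inserts into the staging table are very fast; the trade-off is that any rows not yet offloaded are lost on a restart or failover, so run the offload job frequently.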
Upvotes: 0