Reputation: 11
OS: Windows Language: Python 3
A CSV file with only one column:
Column1
10000ABC89GH
10000DBC29GH
10300ABC59GK
10120ANC39LH
.
.
is in a folder on one server.
I need to get these Column1 values into a SQL Server table on a different server (not the server where the CSV file is located).
I have done this using pandas, but it is very slow: inserting 3,000 records took 2.3 minutes. That won't work, as I expect to receive around 250,000 records per file, and with multiple files the load would run for hours.
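For context, the load looked roughly like this (a simplified sketch; the real server, database, table, and path differ):

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection details; the real server/database differ.
engine = create_engine(
    "mssql+pyodbc://sqlserver01/MyDb"
    "?driver=ODBC+Driver+17+for+SQL+Server&trusted_connection=yes"
)

df = pd.read_csv(r"\\fileserver\share\incoming\data.csv")
# to_sql sends the rows as individual INSERT round trips by default,
# which is where the slowness comes from.
df.to_sql("MyTable", engine, schema="dbo", if_exists="append", index=False)
```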
The other option I tried was a BULK INSERT query, but since the path is dynamic, the file path embedded in the query ended up with double backslashes, which caused a format error. A sketch of what I was attempting:
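```python
import pyodbc

# Placeholder UNC path; BULK INSERT resolves the path on the SQL Server
# machine, so the file must be reachable from there, not from where
# this Python script runs.
csv_path = r"\\fileserver\share\incoming\data.csv"

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver01;DATABASE=MyDb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# An f-string keeps the backslashes single; escaping the path a second
# time (e.g. via repr or manual doubling) produces the "\\" that breaks
# the statement.
query = f"""
    BULK INSERT dbo.MyTable
    FROM '{csv_path}'
    WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')
"""
cursor.execute(query)
conn.commit()
```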
Please let me know the best way to proceed.
Upvotes: 1
Views: 877
Reputation: 89121
Python doesn't have a first-class bulk loading library for SQL Server. You can use BCP if you have it installed, or BULK INSERT if the file is somewhere the SQL Server can see it.
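For example, BCP can be driven from Python with subprocess (a sketch; the server, database, table, and path are placeholders, and -T assumes Windows authentication):

```python
import subprocess

# Placeholder names throughout; adjust to your server, database, and table.
cmd = [
    "bcp", "MyDb.dbo.MyTable", "in", r"\\fileserver\share\incoming\data.csv",
    "-S", "sqlserver01",  # target SQL Server instance
    "-T",                 # Windows authentication; use -U/-P for a SQL login
    "-c",                 # character mode
    "-t", ",",            # field terminator (only one column here, but explicit)
    "-F", "2",            # start at row 2 to skip the Column1 header
]
subprocess.run(cmd, check=True)
```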
Or you can send the data to SQL Server as a JSON document and parse it on the server side. It's not the absolute fastest way to load, but it's much faster than row-by-row with pandas. See e.g.: Trying to insert pandas dataframe to temporary table
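A sketch of the JSON route with pyodbc and OPENJSON (requires SQL Server 2016+; the table name, column width, and connection details are placeholders):

```python
import csv
import json
import pyodbc

# Read the single-column CSV into a list of values.
with open(r"\\fileserver\share\incoming\data.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the "Column1" header row
    values = [row[0] for row in reader]

payload = json.dumps([{"Column1": v} for v in values])

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver01;DATABASE=MyDb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
# One round trip: the server shreds the JSON document into rows.
cursor.execute(
    """
    INSERT INTO dbo.MyTable (Column1)
    SELECT Column1
    FROM OPENJSON(?) WITH (Column1 varchar(50) '$.Column1')
    """,
    payload,
)
conn.commit()
```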
Upvotes: 1