Reputation: 1037
I have code like this. It works fine, but it takes too much time to load the data into Vertica: around 10 minutes for 1,000 rows. Is there any alternative/faster way to insert the data into Vertica?
import pandas as pd
import vertica_python

conn_info = {'host': '127.0.0.1',
             'user': 'some_user',
             'password': 'some_password',
             'database': 'a_database'}
connection = vertica_python.connect(**conn_info)

df = pd.DataFrame({'User': ['101', '101', '101', '102', '102', '101', '101', '102', '102', '102'],
                   'Country': ['India', 'Japan', 'India', 'Brazil', 'Japan', 'UK', 'Austria', 'Japan', 'Singapore', 'UK']})
lists = df.values.tolist()

with connection.cursor() as cursor:
    for x in lists:
        cursor.execute("insert into test values (%s,%s)", x)
connection.commit()
Thanks
Upvotes: 1
Views: 3282
Reputation: 889
You should use cursor.copy instead of cursor.execute: COPY loads all the rows in a single bulk operation instead of making one round trip per row.
For example:
# add new import:
import cStringIO
...
# temporary buffer
buff = cStringIO.StringIO()
# convert the data frame to pipe-delimited text
for row in df.values.tolist():
    buff.write('{}|{}\n'.format(*row))
# now insert the data
with connection.cursor() as cursor:
    cursor.copy('COPY test (Country, "User") FROM STDIN COMMIT', buff.getvalue())
On my testing system the results were as follows.
your implementation:
$ time ./so.py
real 0m4.175s
user 0m0.523s
sys 0m0.101s
my implementation:
$ time ./so.py
real 0m0.814s
user 0m0.530s
sys 0m0.078s
That is about 5 times faster (4.175s vs 0.814s).
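Note that cStringIO only exists on Python 2. On Python 3 a similar approach could use io.StringIO and DataFrame.to_csv instead of the manual loop. This is just a sketch, assuming the same test table and the df/connection from the question, and that your pandas version keeps the User/Country column order from the dict:

import io

# temporary buffer holding the rows as pipe-delimited text
buff = io.StringIO()
# to_csv writes the columns in DataFrame order, here User then Country
df.to_csv(buff, sep='|', header=False, index=False)

with connection.cursor() as cursor:
    # the column list must match the order written into the buffer
    cursor.copy('COPY test ("User", Country) FROM STDIN COMMIT', buff.getvalue())

Either way, the idea is the same: one COPY call carrying all the rows instead of one INSERT per row.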
Upvotes: 1