Maksim Gayduk

Reputation: 1082

Python mysql-connector converts some strings into bytearray

I am using python3 and pandas to connect to a SQL database:

import pandas as pd
import mysql.connector

cnx = mysql.connector.connect(user='me', password='***',
                              host='***',
                              database='***')
df=pd.read_sql("select id as uid,refType from user where registrationTime>=1451606400",con=cnx)
cnx.close()

I am getting 2 columns, id and refType, both of type string (varchar in SQL terminology). However, for some reason, the refType column is correctly imported as string, but the uid column is imported as bytearray. This is how they look:

df.head()
                                             uid  

0 [49, 54, 54, 57, 55, 54, 50, 55, 64, 97, 110]
1 [49, 54, 54, 57, 55, 54, 50, 56, 64, 105, 111]
2 [49, 48, 49, 53, 51, 50, 51, 50, 57, 53, 57, 5...
3 [57, 53, 52, 52, 56, 57, 56, 56, 49, 50, 57, 5...
4 [49, 54, 54, 57, 55, 54, 50, 57, 64, 105, 111]

                                         refType  

0 adx_Facebook.IE_an_ph_u8_-.cc-ch.g-f.au-ret7.c...
1 adx_Facebook.IE_io_ph_u4_-.cc-gb.g-f.au-toppay...
2 ad_nan_1845589538__CAbroadEOScys_-.cc-ca.g-f.a...
3 ad_offerTrialPay-DKlvl10-1009
4 adx_Facebook.IE_io_ph_u4_-.cc-us.g-f.au-topspe...

And this is how uid column is supposed to look:

[i.decode() for i in df['uid'][1:5]]

['16697628@io', '10153232959751867@fb', '954489881295911@fb', '16697629@io']

I understand neither why it was converted to bytearray nor how to make it come back as a string. I couldn't find anything about this, or any similar questions, on the internet or in the pandas documentation. Of course, I can always convert that column to string after importing, but that is not preferred: the SQL query shown is just an example, and the real table can have hundreds of columns that would be incorrectly imported as bytearrays. It would be a real pain to find those columns manually and convert them to strings.

The connector itself outputs the same bytearray:

cursor = cnx.cursor()
cursor.execute('select id as uid,refType from user where registrationTime>=1451606400 LIMIT 1')
cursor.fetchall()

[(bytearray(b'16697627@an'), 'adx_Facebook.IE_an_ph_u8_-.cc-ch.g-f.au-ret7.cr-cys.dt-all.csd-291215.-')]

The data types of the columns in the SQL database are Varchar(32) for the first column (uid) and Varchar(128) for the second one (refType).
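As a workaround until the connector behaves, the bytearray columns can be decoded in one pass after `read_sql`, without listing them by hand. The sketch below assumes UTF-8 data; `decode_bytes_columns` is a hypothetical helper, not a pandas API:

```python
import pandas as pd

def decode_bytes_columns(df, encoding="utf-8"):
    """Decode every bytes/bytearray cell to str, leaving other values alone."""
    # Only object-dtype columns can hold bytes/bytearray values.
    for col in df.select_dtypes(include="object").columns:
        has_bytes = df[col].map(
            lambda v: isinstance(v, (bytes, bytearray))
        ).any()
        if has_bytes:
            df[col] = df[col].map(
                lambda v: v.decode(encoding)
                if isinstance(v, (bytes, bytearray)) else v
            )
    return df
```

Applied right after the query, e.g. `df = decode_bytes_columns(pd.read_sql(query, con=cnx))`, this fixes all affected columns regardless of how many there are.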

Upvotes: 7

Views: 3676

Answers (3)

JackByte

Reputation: 116

Had the same issue with the package "mysql-connector". Installing "mysql-connector-python" instead did the trick for me:

pip install mysql-connector-python

Upvotes: 5

DanB

Reputation: 59

Maybe try a different approach: write the SQL result to a CSV file with Python, and then read the CSV file into pandas.

import pyodbc
import pandas as pd

cnxn = pyodbc.connect('DRIVER={Server Type};SERVER=YourServer;DATABASE=YourDatabase;UID=UserId;PWD=PassWord')
cursor = cnxn.cursor()
cursor.execute("select id as uid,refType from user where registrationTime>=1451606400")
rows = {}
for row in cursor.fetchall():
    rows[row.uid] = row.refType
cnxn.close()

# Write in binary mode so bytes values round-trip unchanged
with open('C:\\file.csv', 'wb') as the_file:
    for key, value in rows.items():
        the_file.write(str(key).encode('utf-8') + b',' +
                       str(value).encode('utf-8') + b'\n')

pd.read_csv('C:\\file.csv')

Upvotes: 0

Graziad

Reputation: 1

This is strange indeed. I wonder if passing the parameter coerce_float=False to the read_sql function would help in this situation.

Upvotes: 0
