Reputation: 21
I am using SQLAlchemy to extract data from MSSQL Server 2005. I have a table with a column containing values such as 0x0000000000881C3C, 0x0000000000881C3D, 0x0000000000881C3E, and 0x0000000000881C3F.
The column type is timestamp. I used reflection to check the column type, and SQLAlchemy determines it correctly. However, when I run a query to extract the data, the values are returned as bytearray objects.
For a source row that has the following values:
0x0000000000881C3C,35,1880744, AR,2014-05-13 13:04:53.000, N, 106, 1B3448E8-4D2F-46A0-87B9-ED07411F1478, NULL, NULL
the values returned by sqlalchemy are as follows:
(bytearray(b'\x00\x00\x00\x00\x00\x88\x1c<'), 35, 1880744, 'AR', datetime.datetime(2014, 5, 13, 13, 4, 53), 'N', 106, '1B3448E8-4D2F-46A0-87B9-ED07411F1478', None, None)
How can I prevent 0x0000000000881C3C from being interpreted as a byte array? Is there a way to force SQLAlchemy to treat that column as a string?
When I try to insert the above value into MongoDB, I run into issues. I tried various combinations of decode and encode, but nothing seems to work. I am at my wits' end and would appreciate any help in this regard.
Here is the code that I am using:
from sqlalchemy import create_engine, MetaData, Table, select

engine = create_engine('mssql+pyodbc://my_uid:my_pwd@my_db')
metadata = MetaData()
conn = engine.connect()
opp_table = Table('MYTABLE', metadata, autoload=True, autoload_with=engine)
s = select([opp_table])
result = conn.execute(s)
for row in result.fetchall():
    print row
Upvotes: 2
Views: 1791
Reputation: 638
Is there a way that I can force sqlalchemy to treat that column as a string?
Use autoload and Column together to override the reflected column:
...
from sqlalchemy import Column
from sqlalchemy.types import Unicode

opp_table = Table('MYTABLE', metadata,
                  Column('your_column_name', Unicode(30)),
                  autoload=True,
                  autoload_with=engine)
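Alternatively, if you would rather keep the reflected type and just convert the value before handing it to MongoDB, a minimal sketch (the column value `ts` below is a hypothetical stand-in for the bytearray shown in the question) could hex-encode the 8-byte rowversion back into the familiar 0x... form:

```python
import binascii

# Hypothetical value, mirroring the bytearray the question shows for this column
ts = bytearray(b'\x00\x00\x00\x00\x00\x88\x1c<')

# Hex-encode the 8 bytes and prepend "0x" to recover the string form
ts_str = '0x' + binascii.hexlify(bytes(ts)).decode('ascii').upper()
print(ts_str)  # 0x0000000000881C3C
```

A plain string like this inserts into MongoDB without any BSON binary handling.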
Upvotes: 1