Mattidge

Reputation: 83

PYODBC Inserting data into datetime column produces incorrectly formatted table

I am currently writing a program that takes data from an Excel spreadsheet and inserts it into a SQL Server table that I create within the program.

I had previously declared the date column as nvarchar(250) just to get the overall program working, but when I change it to datetime, the data ends up in the wrong columns. The rest of the code worked with the nvarchar data type as well.

import pyodbc

connection_string = r'connection_string'
data = 'file_path'

conn = pyodbc.connect(connection_string)
cur = conn.cursor()

createtable = """
create table table1(
    ID Int NULL,
    Date datetime(250) NULL,
    City nvarchar(250) NULL,
    Country nvarchar(250) NULL,
    Image nvarchar(250) NULL,
    Length nvarchar(250) NULL,
    Date_Of_capture nvarchar(250) NULL,
    Comments nvarchar(1000) NULL
    )"""

truncatetable = """truncate table table1"""

with open(data) as file:
    file.readline()
    lines = file.readlines()

if cur.tables(table="table1").fetchone():
    cur.execute(truncatetable)
    for line in lines:
        cols = line.split(',')
        cols = line.replace("'", "")
        sql = "INSERT INTO table1 VALUES({}, '{}', '{}', '{}', '{}', '{}','{}','{}')".format(cols[0], cols[1],cols[2], cols[3], cols[4], cols[5], cols[6], cols[7])
        cur.execute(sql)
else:
    cur.execute(createtable)
    for line in lines:
        cols = line.split(',')
        sql = "INSERT INTO table1 VALUES({}, '{}', '{}', '{}', '{}', '{}','{}','{}')".format(cols[0], cols[1],cols[2], cols[3], cols[4], cols[5], cols[6], cols[7])
        cur.execute(sql)

conn.commit()

conn.close()

I would expect the date to be stored as a datetime data type contained within a single column; instead the table comes out wrong, with all the columns misaligned and each digit of the date ending up in a different column.

Any help is greatly appreciated. Thank you.

Upvotes: 1

Views: 727

Answers (1)

Parfait

Reputation: 107567

Consider the following best practices:

  • Always specify columns in INSERT INTO, even for INSERT ... SELECT statements: use INSERT INTO myTable (Col1, Col2, Col3, ...), which helps readability and maintainability;

  • Use parameterization with a prepared statement to avoid quote escaping and type casting issues, among other things. Additionally, Python allows a tuple (or list) to be passed as the parameters argument of cursor.execute() without formatting each individual value;

  • Use Python's csv library to traverse CSV files as lists or dictionaries for proper field alignment, and avoid the memory-intensive .readlines() call;

  • Combine the CREATE TABLE and TRUNCATE logic in one SQL batch to avoid the if conditional around the cursor fetch call.
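The parameterization point can be sketched briefly; sqlite3 is used below purely as a stand-in for pyodbc, since both follow the DB-API 2.0 interface and pyodbc uses the same qmark (?) placeholder style:

```python
import sqlite3

# sqlite3 stands in for pyodbc here; the placeholder style is identical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE demo (ID INTEGER, City TEXT)")

# String formatting would break on embedded quotes, e.g. "O'Brien":
#   "INSERT INTO demo VALUES({}, '{}')".format(1, "O'Brien")  -> SQL error

# Named columns plus placeholders: the driver handles quoting and types.
cur.execute("INSERT INTO demo (ID, City) VALUES (?, ?)", (1, "O'Brien"))
conn.commit()

print(cur.execute("SELECT City FROM demo").fetchone()[0])  # O'Brien
```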

See adjusted code.

import csv
...

action_query = """
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = N'mytable')
      BEGIN
        TRUNCATE TABLE mytable
      END
    ELSE
      BEGIN
        CREATE TABLE mytable(
           ID Int NULL,
           Date datetime NULL,
           City nvarchar(250) NULL,
           Country nvarchar(250) NULL,
           Image nvarchar(250) NULL,
           Length nvarchar(250) NULL,
           Date_Of_capture nvarchar(250) NULL,
           Comments nvarchar(1000) NULL
        )
      END
"""

cur.execute(action_query)
conn.commit()

# PREPARED STATEMENT
append_query = """INSERT INTO mytable (ID, Date, City, Country, Image, 
                                       Length, Date_Of_capture, Comments)
                  VALUES (?, ?, ?, ?, ?, ?, ?, ?)
               """

# ITERATE THROUGH CSV AND INSERT ROWS
with open(mydatafile) as f:
    next(f) # SKIP HEADERS
    reader = csv.reader(f)

    for r in reader:
        # RUN APPEND AND BIND PARAMS
        cur.execute(append_query, r)
        conn.commit()

cur.close()
conn.close()
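Incidentally, the column misalignment in the question can be reproduced in plain Python: the line `cols = line.replace("'", "")` rebinds cols from the split list back to the whole line string, so indexing it yields single characters, which is why each digit of the date lands in its own column. A minimal sketch with a hypothetical sample row:

```python
# Hypothetical sample row, shaped like the question's CSV data
line = "1,2019-05-14,London,UK,img.png,10,2019-05-14,ok\n"

cols = line.split(',')        # list of 8 fields, as intended
print(cols[1])                # 2019-05-14

cols = line.replace("'", "")  # BUG: rebinds cols to the full line string
print(cols[1])                # ',' -- indexing a string gives one character
```

Stripping the quotes from each field after splitting (for example, `cols = [c.replace("'", "") for c in line.split(',')]`) keeps cols a list, though parameterized inserts make the quote stripping unnecessary in the first place.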

Upvotes: 2
