Reputation: 319
I want to test how my MySQL database handles my future data. It is a single table with two columns: one column holds a single word, and the other holds about 30,000 characters. I copied one row and inserted it into the same table 20,000 times, and the table size now shows 2.0 GB. But when I browse the table through phpMyAdmin, nothing shows up, as if inserting that much data broke the table display. When I query it from Python, it only shows the 5 rows that were inserted before this copy. I used a script to delete rows with IDs between 5,000 and 10,000 and that works. So the data is there, but it doesn't come out. Any explanation?
import MySQLdb as mdb

con = mdb.connect('127.0.0.1', 'root', 'password', 'database')
title = []
entry = []
y = 0

with con:
    cur = con.cursor()
    cur.execute("SELECT * FROM mydatabase WHERE id='2' AND myword = 'jungleboy'")
    rows = cur.fetchall()
    for i in rows:
        title.append(i[1])
        entry.append(i[2])
    for x in range(20000):
        cur.execute("INSERT INTO mydatabase(myword,explanation) VALUES (%s,%s)", (str(title[0]), str(entry[0])))
        if x > y + 50:
            print str(x)
            y = x
Upvotes: 0
Views: 380
Reputation: 174624
I'm not sure I understand your question, but here are some tips with the code you have pasted.
After any INSERT or other query that adds, removes or changes data in a table, you need to commit the transaction with con.commit().
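A minimal runnable sketch of the commit requirement, using the standard-library sqlite3 module to stand in for MySQLdb (both follow the Python DB-API, so the pattern is identical; the table and column names here are made up for illustration):

```python
import sqlite3

# In-memory database standing in for the MySQL server.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE mydatabase "
            "(id INTEGER PRIMARY KEY, myword TEXT, explanation TEXT)")

# Bulk-insert rows, as in the question (sqlite3 uses ? placeholders
# where MySQLdb uses %s).
for x in range(100):
    cur.execute("INSERT INTO mydatabase (myword, explanation) VALUES (?, ?)",
                ("jungleboy", "some long text"))

# Without this commit, the inserts may never become visible to other
# connections (e.g. phpMyAdmin).
con.commit()

cur.execute("SELECT COUNT(*) FROM mydatabase")
row_count = cur.fetchone()[0]
```

With MySQLdb, `with con:` commits on a clean exit from the block, but an explicit con.commit() after the insert loop makes the behaviour unambiguous.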
fetchall() loads the entire result set into memory at once, which can be a problem for very large results. The cursor's arraysize attribute is the number of rows fetchmany() returns by default; you can inspect it like this:
print 'fetchmany() returns {0.arraysize} rows at a time by default.'.format(cur)
To guarantee that you are fetching every row, loop through the results, like this:
q = "SELECT .... " # some query that returns a lot of results
cur.execute(q)
row = cur.fetchone()
while row:
    print row
    row = cur.fetchone()
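Here is the same row-by-row loop in runnable form, again using sqlite3 as a stand-in for MySQL since the cursor methods are DB-API standard (the table `t` is invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t (n) VALUES (?)", [(i,) for i in range(10)])
con.commit()

cur.execute("SELECT n FROM t ORDER BY n")

# Fetch one row at a time until fetchone() returns None,
# guaranteeing every row is seen without holding them all in memory.
seen = []
row = cur.fetchone()
while row:
    seen.append(row[0])
    row = cur.fetchone()
```

For large result sets, cur.fetchmany() inside the same kind of loop fetches rows in batches of arraysize, which is usually faster than one round trip per row.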
Upvotes: 3