RomaValcer

Reputation: 2956

Is it possible to optimise Python code using multithreading?

I have an array of dictionaries (400-1600 of them), and a shortened example of how I process each one is:

import sqlite3

results = []  # collected results (renamed from "dict", which shadows the built-in type)

def explain(item):
  # Look up the item's name and image URL by its defindex.
  conn = sqlite3.connect('tf2.db')
  cursor = conn.cursor()
  cursor.execute("SELECT name, imgurl FROM items WHERE defindex = :def",
                 {'def': item['defindex']})
  temp = cursor.fetchone()
  processed = {}
  processed['name'] = temp[0]
  processed['imgurl'] = temp[1]
  results.append(processed)

Is it possible to get any worthwhile speedup by using multithreading instead of

for item in file['result']['items']:
  explain(item)
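
Roughly, what I have in mind is something like this (just a sketch, assuming Python 3's concurrent.futures or the futures backport; the worker count is an arbitrary guess):

from concurrent.futures import ThreadPoolExecutor

# Each explain() call opens its own sqlite3 connection, so nothing is shared
# between threads; list(...) consumes the iterator so any exception surfaces.
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(explain, file['result']['items']))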

Upvotes: 0

Views: 52

Answers (1)

sirbrialliance

Reputation: 3702

Yes, there could be some benefit, depending on how sqlite3 and its Python bindings are written. You can do as mgilson suggests and time it.
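
For example (just a rough sketch, not a proper benchmark; the thread count is arbitrary and both runs append to the same results list), compare the plain loop against a thread-pool version:

import time
from concurrent.futures import ThreadPoolExecutor

items = file['result']['items']

# Time the plain sequential loop.
start = time.time()
for item in items:
    explain(item)
print("sequential: %.3f s" % (time.time() - start))

# Time the same work spread over a thread pool.
start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(explain, items))
print("threaded:   %.3f s" % (time.time() - start))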

However, parallelization is only one of many, many ways to optimize a bottleneck in your code.

It looks like your use case involves looking up TF2 item information. It appears that's only about 2,000 items to worry about and new items aren't added "very often". Assuming this roughly fits your use case, you can fit it all in memory easily!

When you start your application, load all the definitions from the database into a dictionary keyed by defindex (same table and columns as in your question):

allItems = {}
conn = sqlite3.connect('tf2.db')
cursor = conn.cursor()
for defindex, name, imgurl in cursor.execute("SELECT defindex, name, imgurl FROM items"):
    allItems[defindex] = {'name': name, 'imgurl': imgurl}

Then, when you need to look up a bunch of items you can just look them up in a normal dictionary:

userItems = []
for item in file['result']['items']:
    userItems.append(allItems[item['defindex']])

This will run much more quickly than asking sqlite to look up the data each time.

Upvotes: 1
