Reputation:
I have developed a chess engine in Python using the minimax algorithm, and I have also used alpha-beta pruning to optimize it further. Currently I am searching to a depth of 4, which isn't a lot, but it still takes 10-60 s to think of a move.
The main thing that slows the program down is iterating again and again. I first generate all possible moves into a collections.deque(), then iterate through it once to validate them. Then I iterate through it once more to evaluate the moves and compare them to get the best possible move. I am using for loops throughout, and the format of all possible moves is a collection (mainmoves) of collections (moves).
What can I do to optimize this and reduce the time taken to generate a move?
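Since the actual move generator isn't shown, here is a minimal sketch of the idea of fusing the three passes (generate, validate, evaluate) into one loop, so each move is touched once and no intermediate collection is built. The helpers gen_moves, is_legal and evaluate are made-up placeholders standing in for the engine's own routines:

```python
import math

def gen_moves(board):
    # Yield candidate moves one at a time instead of
    # materialising them all in a deque first.
    for move in board:
        yield move

def is_legal(move):
    return move % 2 == 0       # placeholder legality test

def evaluate(move):
    return -abs(move - 4)      # placeholder score; higher is better

def best_move(board):
    best, best_score = None, -math.inf
    # One pass: generate, validate and evaluate each move together.
    for move in gen_moves(board):
        if not is_legal(move):
            continue           # skip illegal moves immediately
        score = evaluate(move)
        if score > best_score:
            best, best_score = move, score
    return best
```

With the placeholder helpers above, best_move(range(10)) picks 4, the legal move with the highest score, after a single traversal.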
def minimaxRoot(depth, isMaximizing):
    global board
    possibleMoves = gen(False)
    bestMove = -math.inf
    bestMoveFinal = None
    for move in possibleMoves:
        orig = normperform(move)
        # Score the move directly; wrapping it in max(bestMove, ...)
        # was redundant and hid the comparison below.
        value = minimax(depth - 1, not isMaximizing, -math.inf, math.inf)
        undo(move, orig)
        if value > bestMove:
            bestMove = value
            bestMoveFinal = move
    return bestMoveFinal
def minimax(depth, ismax, alpha, beta):
    global board
    if depth == 0:
        return calcpoints()
    if ismax:
        maxeval = -math.inf
        mainmoves = gen(False)
        if mainmoves == 'mate':
            return 8000
        for move in mainmoves:
            orig = normperform(move)
            score = minimax(depth - 1, False, alpha, beta)
            undo(move, orig)
            maxeval = max(score, maxeval)
            alpha = max(alpha, score)
            if beta <= alpha:
                break
        return maxeval
    else:
        mineval = math.inf
        mainmoves2 = gen(True)
        if mainmoves2 == 'mate':
            return 8000
        for move2 in mainmoves2:
            orig2 = normperform(move2)
            score2 = minimax(depth - 1, True, alpha, beta)
            undo(move2, orig2)
            mineval = min(mineval, score2)
            beta = min(beta, score2)
            if beta <= alpha:
                break
        return mineval
Upvotes: 0
Views: 1868
Reputation: 46
I was attempting to do the same thing with Python, and here's what I did to make my engine search a board at a depth of 5 in about 30 s: cache the generated moves with functools.lru_cache so the library won't have to generate them repeatedly. This was the step that enabled my engine to jump from a depth of 4 to a depth of 5. Note that lru_cache keys its entries with a hash table, but chess.Board isn't hashable by default. This can be fixed with chess.Board.__hash__ = chess.polyglot.zobrist_hash, however. The same trick lets you cache the ordered move list to further improve its speed.
You're probably just making this in Python for fun and experimentation, so I do encourage you to try this in Python first. But if this were for serious usage, you may want to implement it in C or C++, or use an existing engine like Stockfish.
Well, that's about all. Hope it helps. ;)
Upvotes: 2
Reputation: 464
There are two main issues which lead to your chess engine's low search depth:
Upvotes: 0