Reputation: 1
I have a problem concerning a board game I'm creating. I plan on making an artificial player for the game using MiniMax with AlphaBeta pruning, but I'm not sure how to evaluate whether the player is actually good at the game. Since it's a new game, I can't find a strong player to give feedback. So I'd like to know if there is any technique to determine objectively whether the artificial player is actually good. Thank you in advance.
Upvotes: 0
Views: 71
Reputation: 55609
Properly automating quality testing for an AI is difficult, because you'd need something as good as or better than the AI itself (at least in certain positions - e.g. to spot obviously stupid moves, which isn't that difficult to do manually). So you'd have to write an AI that's better than your AI, which you'd have to test by writing an even better AI, which you'd have to test... Needless to say, this obviously won't work so well.
The options for testing the quality of AI are (to my knowledge):
Manually - Get good at the game yourself and provide the feedback. Play complete games and/or start from specific positions and make sure the AI doesn't do really stupid things.
Basic check - Test it against other, more basic, artificial players and make sure it wins practically all the time (see the harness sketch after this list). For more established games, you should be able to find pretty decent AI written by others.
Test against data - Not applicable for a brand-new game, but worth knowing about: find well-known games played by experts and check that your AI matches a lot of the moves the experts made in a given position.
Brute-force it - Check all possibilities for a given game state (one that's close to the end), objectively determine the best move, and compare this against what your AI does (a sketch of this check appears after the summary below). Your final code will effectively do this near the end of the game anyway, but you don't want it in the version you're checking, otherwise the check proves nothing. Since this is basically what mini-max already does, it doesn't help all that much.
AI rush! - Write a bunch of different AI bots, all using different approaches, and have a giant show-down.
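For the "basic check" and "AI rush" ideas, a simple match-up harness is enough. Here's a minimal sketch in Python, using a trivial Nim variant as a stand-in for your game; every name here (`legal_moves`, `play_game`, `win_rate`, the greedy "AI", ...) is made up for illustration, so swap in your own rules and your mini-max player:

```python
import random

# Nim stand-in for your game: the state is the number of stones left and
# a move removes 1-3 stones; whoever takes the last stone wins. Replace
# these rules with your own game's.

def legal_moves(state):
    return [m for m in (1, 2, 3) if m <= state]

def apply_move(state, move):
    return state - move

def is_over(state):
    return state == 0

def random_player(state):
    # The baseline opponent: a uniformly random legal move.
    return random.choice(legal_moves(state))

def greedy_player(state):
    # Stand-in for the AI under test: always takes as many stones as allowed.
    return max(legal_moves(state))

def play_game(first, second, start=21):
    state, players, turn = start, (first, second), 0
    while not is_over(state):
        state = apply_move(state, players[turn](state))
        turn = 1 - turn
    return 1 - turn  # index of the player who made the winning (last) move

def win_rate(candidate, baseline, games=1000):
    wins = sum(play_game(candidate, baseline) == 0 for _ in range(games))
    return wins / games

if __name__ == "__main__":
    print(win_rate(greedy_player, random_player))
```

For a fair comparison, also run the match-up with the baseline moving first, since in many games the first player has a built-in advantage; a strong player should still win almost every game against a random one.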
So, to summarize (for your scenario), I'd suggest:
Test it yourself.
Write some really basic artificial players (even ones that just move randomly) and study any game your AI loses to find the flaws.
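And here's a sketch of the "brute-force it" check mentioned above, reusing the Nim stand-in from the harness. `solve` searches all the way to the end of the game (no depth cutoff, no evaluation function), so it only makes sense on positions close to the end; again, all the names are just illustrative:

```python
from functools import lru_cache

# Exhaustive endgame check: solve small positions perfectly and compare
# the objectively best moves against whatever your AI chooses there.

def legal_moves(state):
    return [m for m in (1, 2, 3) if m <= state]

@lru_cache(maxsize=None)
def solve(state):
    # True if the player to move can force a win from `state`.
    if state == 0:
        return False  # the previous player took the last stone; we lost
    return any(not solve(state - m) for m in legal_moves(state))

def best_moves(state):
    # Every move that leaves the opponent in a position they cannot win.
    return [m for m in legal_moves(state) if not solve(state - m)]

if __name__ == "__main__":
    for state in range(1, 13):
        optimal = best_moves(state)
        # ai_move = my_minimax_player(state)  # plug your engine in here
        # assert not optimal or ai_move in optimal
        print(state, optimal or "lost position")
```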
Specific to mini-max:
This is not proper AI (at least in my opinion); you're just exploring some states and finding the best one (so the above is still applicable, but not to such a large extent). Your main concerns will be:
(the first 3 are pretty standard AI problems)
Is my code right? If it isn't, you should be able to spot stupid moves from the AI within a small number of games.
Is my evaluation function right / good enough? This can always be tweaked, but playing a few games will tell you whether it's OK.
My AI is useless - is my code or my evaluation function wrong? Assuming a decent implementation: if it makes bad moves all the time, it's probably your code; if it makes some good and some bad moves, it's probably your evaluation function. But it really could be either. Remember that the evaluation function is usually a lot less code to check than the rest of the program.
Is it fast enough, or can I explore deeper? Check how long a move takes (see the timing sketch below). If it's a split second, you can probably afford to increase the explore depth by one; if it takes a few minutes, you may want to decrease it.
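For that last concern, a common approach is to time an iterative-deepening loop and stop before the next iteration would blow a time budget. A minimal sketch, where `search` stands in for your alpha-beta search and is assumed to return `(move, score)`; the `fake_search` below only exists to make the demo runnable:

```python
import time

def timed_best_move(state, search, budget=1.0, max_depth=64):
    start = time.perf_counter()
    best = None
    for depth in range(1, max_depth + 1):
        best, _score = search(state, depth)
        elapsed = time.perf_counter() - start
        print(f"depth {depth}: {elapsed:.3f}s")
        # Each extra ply costs roughly a branching-factor times more, so
        # stopping at half the budget is a crude but workable cutoff.
        if elapsed > budget / 2:
            break
    return best

if __name__ == "__main__":
    def fake_search(state, depth):
        time.sleep(0.001 * 3 ** depth)  # cost grows ~3x per extra ply
        return ("some-move", 0)

    print(timed_best_move(None, fake_search, budget=0.5))
```

As a bonus, iterative deepening pairs nicely with alpha-beta: trying the best move from the previous depth first tends to improve pruning, so the extra shallow searches largely pay for themselves.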
Upvotes: 3