[SIZE="3"]More AI. Unit-tests. [/SIZE]
Last time we talked about AI and alpha-beta pruning, and we can now happily report that the AI work is done! We haven't written anything about the evaluation function yet, so we'll cover that today.
One of the requirements of AI for a turn-based game is that every position must be evaluated; otherwise we don't have the information needed to choose a move. To build an evaluation function, we need to understand what parameters we will be working with. Basically, they are:
- the number of our pieces on the board;
- the number of our opponent's pieces on the board;
- the number of pieces in our reserve; and
- the number of pieces in the opponent's reserve.
Now, in a more advanced version we could build heuristics, such as evaluating the number of rows that can be removed in 1 (2, ...) moves, key position points occupied, and so forth. Right now, however, we don't use any of them; with a good depth of search they are redundant. Even with only 4 parameters, writing an evaluation function isn't such an easy task. After all, which should be considered the stronger position: 10 pieces on the board + 2 in reserve, or 2 pieces on the board + 10 in reserve? In which case does adding 1 piece give more power? Unfortunately, we aren't gurus of GIPF theory, so we needed another way to figure this all out.
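To make the four parameters concrete, here is a minimal sketch of a linear evaluation function in C. The struct, field names, and weights are all our own illustrative assumptions (the post doesn't show the real code); in particular, weighting reserve pieces slightly higher than board pieces is just one possible answer to the "10 on board + 2 in reserve vs 2 on board + 10 in reserve" question, not the tuned answer.

```c
#include <assert.h>

/* Hypothetical position snapshot -- these field names are our own,
 * not the actual structs from the project. */
typedef struct {
    int our_board;    /* our pieces on the board */
    int opp_board;    /* opponent's pieces on the board */
    int our_reserve;  /* our pieces in reserve */
    int opp_reserve;  /* opponent's pieces in reserve */
} Position;

/* A minimal linear evaluation over the four parameters. The weights
 * (reserve worth slightly more than board presence, since running
 * out of reserve loses the game) are illustrative guesses. */
static int evaluate(const Position *p)
{
    const int W_BOARD = 2, W_RESERVE = 3;
    return W_BOARD   * (p->our_board   - p->opp_board)
         + W_RESERVE * (p->our_reserve - p->opp_reserve);
}
```

With these guessed weights, 10 on board + 2 in reserve evaluates as weaker than 2 on board + 10 in reserve; picking weights that actually play well is exactly what the AI-vs-AI matches were for.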
Practice: AI vs AI
The method we settled on was pairing one AI against another AI using a different evaluation function and then testing all the combinations we could invent. Practice is the best criterion of truth! Actually, trial and error can be a very good way to check that an AI mechanism is correct: pair an AI that searches n turns deep against an AI that searches m turns deep. Here are some cool stats from our work (we counted each player's move as a separate turn, so the sequence player 1 – player 2 – player 1 is 3 turns):
A 5-level AI beats a 3-level AI in ~60 turns.
A 5-level AI beats a 4-level AI in ~130 turns.
A 5-level AI vs a 5-level AI played over 200 turns!
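The pairing method above can be sketched as a small self-play driver. Everything here is a toy stand-in, assuming nothing about the real code: a counter game instead of GipfGame, a fake "search" that ignores its depth. The point is only the shape of the harness and the turn-counting convention where each player's move is a separate turn.

```c
#include <assert.h>

/* Toy stand-in so the driver is runnable: players alternately
 * subtract 1 from a counter; the game ends at 0. The real project
 * would plug in GipfGame and its alpha-beta search here. */
typedef struct { int counter; } ToyGame;

static int toy_over(const ToyGame *g) { return g->counter == 0; }

/* Stand-in for "search to the given depth and apply the best move";
 * the depth parameter is ignored by the toy. */
static void toy_ai_move(ToyGame *g, int depth)
{
    (void)depth;
    g->counter -= 1;
}

/* Pair two AIs with different depths; count turns the way the post
 * does, i.e. each player's move is its own turn. */
static int self_play(ToyGame *g, int depth_a, int depth_b)
{
    int turns = 0;
    while (!toy_over(g)) {
        toy_ai_move(g, (turns % 2 == 0) ? depth_a : depth_b);
        turns++;
    }
    return turns;
}
```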
Pure C: Profit!
This probably isn't going to be one of the more disputed points in this process, but we still asked ourselves: was pure C worth it? Certainly, yes!
Our Python prototype started to lag at depth level 4; we had to wait roughly a minute for an AI move at that level. Basically, allocations and copies took too much time and processing power. In our C version, with no heap allocation, memcpy-only struct copies, and optimized calculation code, a depth-level-4 analysis took roughly a second (~0.6 s – 1.2 s).
In other words, we got a roughly 60-fold performance improvement by using pure C, which made the game much friendlier for users (a user waits a couple of seconds, not minutes). So the answer is a resounding yes; going with pure C was worth the effort.
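A sketch of what "no heap alloc, struct memcpy-only" can look like on a search path. The Board layout and function names are hypothetical, not the project's real structs; the idea is that when the whole state lives in one flat struct, copying is a single memcpy into a stack variable and undo is just copying it back, with no malloc/free anywhere in the hot loop.

```c
#include <string.h>
#include <assert.h>

/* Illustrative flat game state (field names are our own guesses). */
typedef struct {
    signed char cells[37];   /* hypothetical hex-board storage */
    int reserve[2];          /* pieces in reserve per player */
} Board;

static void apply_move(Board *b, int cell, int player)
{
    b->cells[cell] = (signed char)player;
    b->reserve[player] -= 1;
}

/* Search step: save, mutate, inspect, restore -- all on the stack. */
static int try_move(Board *b, int cell, int player)
{
    Board saved;
    memcpy(&saved, b, sizeof *b);   /* stack copy, no heap */
    apply_move(b, cell, player);
    int changed = (b->cells[cell] != saved.cells[cell]);
    memcpy(b, &saved, sizeof *b);   /* undo by restoring the copy */
    return changed;
}
```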
Another topic we want to cover here is unit tests. Playing with the evaluation function and watching how the two AI players interacted, we were able to discover some bugs. Yep, we aren't perfect. Thankfully, we planned for our imperfections when we decided to add regression testing, and we spent a couple of hours adding the ability to set up any game scenario and execute the AI from that position.
In just 2 days we already had about 20 tests. About half of them caught bugs, and the other half covered restrictions we built in to save us from another dozen bugs. Of course, with AI it is almost impossible to expect exact, perfect moves in the middle of the game, but at the end of the game some moves are 100% forced. For example, when a player has only 1 piece left, his optimal strategy is to remove his pieces, lest he lose. This was not always how the AI played, which led us to a few bugs in the endgame.
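The scenario-style tests might look roughly like the following sketch. The fixture struct, the move enum, and the trivial "AI" are all our own stand-ins (the post doesn't show the test harness); the pattern is what matters: build an exact position, run the chooser, and assert on the move that is forced.

```c
#include <assert.h>

/* Hypothetical scenario fixture -- names are ours, not the project's. */
typedef struct {
    int our_reserve;     /* pieces we have left in reserve */
    int removable_row;   /* is there a completed row we can take back? */
} Scenario;

enum Move { MOVE_PUSH, MOVE_REMOVE_ROW };

/* Toy chooser encoding one forced endgame rule: with one piece left,
 * removing a completed row (which returns pieces to reserve) is the
 * only move that avoids losing. */
static enum Move choose_move(const Scenario *s)
{
    if (s->our_reserve <= 1 && s->removable_row)
        return MOVE_REMOVE_ROW;
    return MOVE_PUSH;
}
```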
To give you an idea of what we are talking about, here is one of those bugs. Sometimes the AI managed to find a very good turn at depth 4 and collected many pieces. The problem, however, was that it was out of pieces at depth 2, meaning it had actually already lost the game. Clearly, fixing this error in logic led to better gameplay. The bottom line is this: if you ask us "is unit testing worth it?", the answer is "yes, absolutely!"
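One way to picture the fix for that bug: check the loss condition at every node, before searching deeper, so a promising score at depth 4 can never mask a loss already reached at depth 2. This negamax skeleton is our own illustration over a toy tree, not the project's code; the node struct and the loss score are assumptions.

```c
#include <assert.h>
#include <limits.h>

#define MAX_KIDS 4

/* Toy search node: `lost` means the side to move here has no pieces
 * left and has therefore already lost. */
typedef struct Node {
    int lost;
    int value;                 /* static evaluation at the leaf */
    int nkids;
    struct Node *kids[MAX_KIDS];
} Node;

static int negamax(const Node *n, int depth)
{
    if (n->lost)               /* terminal check FIRST, at every node */
        return -10000 - depth; /* a loss closer to the root scores worse */
    if (depth == 0 || n->nkids == 0)
        return n->value;
    int best = INT_MIN;
    for (int i = 0; i < n->nkids; i++) {
        int v = -negamax(n->kids[i], depth - 1);
        if (v > best)
            best = v;
    }
    return best;
}
```

Without the early `lost` check, the recursion would happily keep evaluating material below a node where the game is already over, which is exactly the shape of the bug described above.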
Before wrapping up, let's look at our current status.
- AI module
- Core MVC architecture basis according to the docs
- Model library (GipfGame) -> GameController
- Added scene controllers for all scenes
- Basic player model layer (PlayerBase, UserPlayer, CPUPlayer)
LOC and files:
$ find . "(" -name "*.m" -or -name "*.c" -or -name "*.h" ")" -print | xargs wc -l