# Heuristic Analysis

For my game_agent, I tested the three heuristics listed below. Each heuristic assumes I am Player 1.

• Aggressive: chooses moves for Player 1 that reduce the number of moves available to Player 2, weighting the opponent's mobility with a constant multiplier.
• Defensive: chooses the moves that best increase the number of moves available to Player 1, applying the same multiplier to Player 1's own mobility.
• 'Close to you': combines aggression with proximity by penalizing moves by their distance from the opponent. It has the effect of creeping closer to Player 2.
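
The three heuristics can be sketched roughly as follows. The function names, the MULTIPLIER constant, and the argument shapes here are my own illustrative choices, not the project's exact score-function signature:

```python
MULTIPLIER = 1.5  # the "Modifier" referred to in the results below

def aggressive_score(own_moves, opp_moves):
    # Penalize the opponent's mobility more heavily than we reward our own.
    return float(own_moves - MULTIPLIER * opp_moves)

def defensive_score(own_moves, opp_moves):
    # Reward our own mobility more heavily than we penalize the opponent's.
    return float(MULTIPLIER * own_moves - opp_moves)

def close_to_you_score(own_moves, opp_moves, own_pos, opp_pos):
    # Aggressive base score minus a penalty for distance to the opponent,
    # so the agent tends to creep toward Player 2.
    distance = abs(own_pos[0] - opp_pos[0]) + abs(own_pos[1] - opp_pos[1])
    return float(own_moves - MULTIPLIER * opp_moves) - distance
```

With a modifier of 1.5, an aggressive move that leaves us 4 moves and the opponent 2 scores `4 - 1.5 * 2 = 1.0`; the 'Close to you' variant then subtracts the Manhattan distance between the two players.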

### Analysis

Each agent was run 5 times in a row, and the results averaged out over the 5 runs.

• Aggressive ( Modifier 1.5 ): ID_Improved: 78.29%
• Very Aggressive ( Modifier 4 ): ID_Improved: 71.62%
• Defensive ( Modifier 1.5 ): ID_Improved: 77.42%
• Very Defensive ( Modifier 4 ): ID_Improved: 72.81%
• Close to you: ID_Improved: 77.856%

Highest average score: 78.29%, from the Aggressive agent.

In [1]:

```python
import pandas as pd

# Raw win percentages for each agent over the 5 runs
scores = {
    'ID_Improved': [68.56, 80.71, 71.43, 79.29, 77.14],
    'custom_score_close_to_you': [80.00, 78.57, 78.57, 85.00, 67.14],
    'custom_score_defensive': [79.29, 72.86, 85.71, 78.57, 70.71],
    'custom_score_aggressive': [78.57, 78.57, 77.14, 77.86, 79.29],
}
avgs = pd.DataFrame(scores).transpose()
avgs.columns = ['Run 1', 'Run 2', 'Run 3', 'Run 4', 'Run 5']
avgs['Average'] = avgs.mean(numeric_only=True, axis=1)
avgs
```

| | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Average |
|---|---|---|---|---|---|---|
| ID_Improved | 68.56 | 80.71 | 71.43 | 79.29 | 77.14 | 75.426 |
| custom_score_aggressive | 78.57 | 78.57 | 77.14 | 77.86 | 79.29 | 78.286 |
| custom_score_close_to_you | 80.00 | 78.57 | 78.57 | 85.00 | 67.14 | 77.856 |
| custom_score_defensive | 79.29 | 72.86 | 85.71 | 78.57 | 70.71 | 77.428 |

I also tried a few other things, such as weighting the SCORE of each move by the DEPTH it was found at, so a depth of 3 would multiply the score by 3 (score = score * depth). This didn't hurt performance, but it didn't help either. I tried variations on this: adding a simple + DEPTH instead of multiplying, as well as an exponential depth weight. I also tried randomly selecting one of the above heuristics on each call, which gave a score of 59.52%.
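
The depth-weighting variants described above amount to something like the following sketch (function names are hypothetical; `score` is the heuristic value and `depth` the search depth it was computed at):

```python
def depth_weighted(score, depth):
    # Multiplicative weighting: a score found at depth 3 is tripled.
    return score * depth

def depth_added(score, depth):
    # Additive variant: simply add the depth to the score.
    return score + depth

def depth_exponential(score, depth):
    # Exponential variant: weight grows as 2 ** depth.
    return score * (2 ** depth)
```

None of these moved the win rate meaningfully in my runs, which is why the plain heuristics above were kept.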

## Final Result

As you can see, the Aggressive agent won by a hair, but all three were very close. The highest single-match score came from the custom close_to_you heuristic, as did the lowest.

For choosing the final agent to play, I concluded that the Aggressive agent was the best because:

• Consistent behavior. Throughout the tests (and more not shown above) the Aggressive agent scored consistently high with very few valleys.
• Most reliable. When comparing agents, the Aggressive agent scored consistently high against all of the test suites, whereas some agents only did well against the MM suite of tests.
• Highest win percentage. The Aggressive agent scored the highest win percentage for this set of runs, as well as over the many runs during testing not shown here.

Beating the ID_Improved score proved very difficult: it took many tries and heuristics to get an agent that came close, and it finally won out by a hair. All my heuristics are 'lightweight', trying to maximize the depth of the search rather than find the perfect move at each level.