Let's take RIT (https://github.com/ogrnv/random-intelligence-tests) sample tasks of different complexity and compare the human results with the results of a Monte Carlo (MC) algorithm (https://github.com/ogrnv/Creating-sample-means-for-measurement-standards-of-intelligence).
It will then be seen that the human intelligence that AGI tries to recreate achieves better results when there are more degrees of freedom (d.f.) on the board.
For example, these are the mean numbers of moves per step over at least 130 tests (a minimal sketch of how such means can be computed follows the figures):
8x8 board, 42 chips (more d.f.) of 7 types, 5 rounds x 12 steps per round:
MC 4810.561
HM (human) 2.8112
8x8 board, 59 chips (less d.f.) of 7 types, 5 rounds x 12 steps per round:
MC 1587.414
HM (human) 4.1397
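As an illustration, here is a minimal sketch of how such a sample mean of moves per step could be computed from recorded results. The data layout and the numbers in `mc_tests`/`hm_tests` are hypothetical, not taken from the linked repositories.

```python
from statistics import mean

def mean_moves_per_step(tests):
    """Average number of moves per step across a set of tests.

    `tests` is a hypothetical layout: a list of tests, where each test is a
    list of move counts, one entry per step (5 rounds x 12 steps = 60 entries
    per test in the configurations above).
    """
    all_steps = [moves for test in tests for moves in test]
    return mean(all_steps)

# Made-up move counts, only to show the calculation; real data would come
# from the RIT test logs.
mc_tests = [[4801, 4820, 4811], [4810, 4809, 4812]]
hm_tests = [[3, 2, 3], [3, 3, 3]]

print(f"MC {mean_moves_per_step(mc_tests):.3f}")
print(f"HM {mean_moves_per_step(hm_tests):.4f}")
```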
So this is a way to quantify how close an AI's thinking is to human thinking, i.e. how close an AI is to AGI at any given time.
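One possible way to turn these figures into a single closeness score (an assumption of this note, not something defined in the linked repositories) is to place an AI's mean moves per step on a log scale between the MC baseline and the human mean:

```python
import math

def closeness_to_human(ai_mean, hm_mean, mc_mean):
    """Hypothetical closeness score in [0, 1].

    0 ~ the AI needs as many moves per step as the random MC baseline,
    1 ~ it matches the human mean.
    """
    score = (math.log(mc_mean) - math.log(ai_mean)) / (math.log(mc_mean) - math.log(hm_mean))
    return min(max(score, 0.0), 1.0)

# 42-chip configuration from above, with a made-up AI result of 50 moves/step.
print(closeness_to_human(ai_mean=50.0, hm_mean=2.8112, mc_mean=4810.561))
```

The log scale here is only a design choice of this sketch: since the MC and human means differ by roughly three orders of magnitude, a linear scale would hide most of an AI's progress toward human-level efficiency.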