[continued from the previous post]
Certainly a blitz game (unknowingly, at the time, against a computer) was a diversion from the events of the day, but it was not the best choice. Tennis, anyone?
perrypawnpusher - guestM
5 1 blitz, FICS, 2023
26.Kh2 Qh6+ 27.Kg1 Rxg3 28.Qe4 Bh3 29.Rff2
My Jerome Gambit has been a wild ride, but not a successful one.
It is time for my opponent to finish me off.
29...Bxg2
Forcing, but not best.
30.Rxg2 Rxg2+ 31.Rxg2 d5
Those annoying pawns. (Page Dante Alighieri.)
32.Qe5 Rg8
Odd.
Stockfish 16's analysis starts off with the recommended 32...c4, which seems more to the point.
My opponent seems to have lost its way.
33.Qxd5
What else?
After the suggested improvement 33.Rg5 Rd8, the pawn is no longer available - although, when I let Stockfish analyze for a while after the game, it struggled to convert Black's advantage.
33...Qxf4 34.Qxc5 Qc1+ 35.Kh2 Qf4+ 36.Kg1 Qc1+
I know that in the past computer programs had difficulty with the endgame - often spending calculation time to assess what humans can understand at a glance (e.g. unopposed pawns advancing) - but that was then and this is now. Besides, this isn't quite an endgame, is it?
Or is it?
37.Kh2
Giving Black's Queen choices. Too many choices? Perhaps.
37...Qf4+
Again, Stockfish 16 prefers 37...Qh6+, but it still has a hard time bringing the whole point home quickly.
38.Kg1
At this point the game was drawn by repetition.
'Tis a puzzlement.
And, for me, an escape. I was happy to collect the half point.
By the way, it turns out that my opponent was Maia Chess, "a human-like neural network chess engine". From the project's description:
Maia’s goal is to play the human move — not necessarily the best move. As a result, Maia has a more human-like style than previous engines, matching moves played by human players in online games over 50% of the time.
Maia is an ongoing research project using chess as a case study for how to design better human-AI interactions. We hope Maia becomes a useful learning tool and is fun to play against. Our research goals include personalizing Maia to individual players, characterizing the kinds of mistakes that are made at each rating level, running Maia on your games and spotting repeated, predictable mistakes, and more...
Maia is particularly good at predicting human mistakes. The move-matching accuracy of any model increases with the quality of the move, since good moves are easier to predict. But even when players make horrific blunders, Maia correctly predicts the exact blunder they make around 25% of the time. This ability to understand how and when people are likely to make mistakes can make Maia a very useful learning tool.
My interest in the Jerome Gambit, as I have pointed out in the past, comes from investigating "errors in thinking", so perhaps Maia and I are not so far apart, after all.