My recent work has again got me thinking about the nature of learning and understanding. More precisely, I have become interested in the best way to use ai in go studies.
Recent research (for example, Thinking, Fast and Slow by Daniel Kahneman) suggests that expertise in a domain is not a single skill but rather a large collection of miniskills. In go, these could be knowledge of particular jōseki and shapes, or, for example, knowing when and how to check whether a particular ladder works. To get better at go, you basically need your brain to accumulate a bigger and better collection, or knowledge base, of miniskills.
A player’s knowledge can further be classified into tacit and explicit knowledge. Tacit knowledge is difficult to transfer to another person by means of verbalising it; for example, a gut feeling about why a particular move feels good but which you cannot really explain in words arises from tacit knowledge. Conversely, explicit knowledge, such as ‘the empty triangle is a bad shape’, is easy to transfer. Tacit knowledge and explicit knowledge are learned and stored in memory in different ways, with tacit knowledge requiring more time for a player to internalise.
The human memory has a ‘chunking’ feature: with practice, it becomes able to recall larger and larger units of material. This is easy to demonstrate with two whole-board go positions, one containing four jōseki patterns, and the other containing four randomly mixed groups of stones:
The miniskills and knowledge that a go player learns are closely tied to the development of their memory-chunking process, which is why young Asian go trainees in particular are taught to memorise tsumego and professional game records.
When I look back at my own go career, I realise that up until 2012, I was mostly playing and understanding the game from a tacit knowledge base. Of course, I had learned some explicit knowledge from theory books and my teachers, but at that point I still didn’t have what could be called a ‘formal go education’. When my Japanese go teacher would ask me why I played a particular move, I would usually find that I didn’t have a verbalisable reason for it – I had chosen it by gut feeling and inconclusive reading. This started changing for me around 2014, when in my insei training I gradually got used to my teacher interrogating me about the reasons for my moves. Eventually I noticed that this was useful not only in my own teaching work, where I learned to explain the game in terms of plans and intentions, but also in my post-game reviews with other professional players, where the exchange of ideas became much quicker and more precise.
Currently, it is starting to seem to me that ‘proper’ knowledge of something has a ‘generative’ quality to it: when you understand something fundamentally enough, you can use and combine your knowledge to create something novel, or else to make sense of something you have never seen before. Unfortunately, my brief search for ‘generative knowledge’ turned up no hits in the sense that I am using it here – either it is called something different, or it is not an actual phenomenon at all, but something like a result of higher-level miniskills working in tandem.
How does this all link to studying go with ais? ais are a new kind of go-teaching medium – a bit as if you hired the world’s strongest player to personally teach you go, but they could only show you variations while saying ‘good’ or ‘bad’. This is far from useless, but clearly worse than a ‘strong enough’ teacher who can also teach tactical and strategic concepts.
Stronger players, especially professionals, play go by attaching meaning to stones on the board and creating plans accordingly: ‘important’ stones are saved, ‘unimportant’ stones are discarded, ‘weak’ stones are attacked or defended, and ‘strong’ stones are steered away from. What constitutes ‘important’, ‘unimportant’, ‘strong’, and ‘weak’ is an extremely nuanced question, and when you can answer it accurately and precisely, you are well on the way to becoming a really strong player.
Probably the most distinguishing factor between human and ai players is that ais are able to view the whole-board position fluidly as a single entity, while human players break the whole-board position into smaller parts which they can evaluate precisely, and then weigh against each other for a whole-board perspective. At the time of writing, it is an open question whether a human can learn to play the whole board as well as an ai can; my own finding is that, if I try to play a peaceful whole-board game against an ai, I will lose a few tenths of a point with almost every move, even though (I believe) I am playing strategically and tactically sound moves.
Move number | Points loss | Move number | Points loss
---|---|---|---
1 | 0.0 | 19 | 0.0
3 | −0.1 | 21 | −0.1
5 | −0.3 | 23 | −0.5
7 | 0.0 | 25 | −0.4
9 | −0.1 | 27 | 0.0
11 | −0.1 | 29 | 0.0
13 | −0.3 | 31 | 0.0
15 | −0.1 | 33 | −0.3
17 | 0.0 | Total | −2.3
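Tallies like the one above are easy to compute yourself. Here is a minimal Python sketch using the table’s own numbers; the dictionary literal is illustrative – in practice you would extract the per-move score deltas from whatever review tool you use:

```python
# Per-move score losses for my (odd-numbered) moves, copied from the table above.
losses = {
    1: 0.0, 3: -0.1, 5: -0.3, 7: 0.0, 9: -0.1,
    11: -0.1, 13: -0.3, 15: -0.1, 17: 0.0, 19: 0.0,
    21: -0.1, 23: -0.5, 25: -0.4, 27: 0.0, 29: 0.0,
    31: 0.0, 33: -0.3,
}

# Total loss over the opening, rounded to avoid floating-point noise.
total = round(sum(losses.values()), 1)

# How many moves actually leaked points.
loss_moves = [move for move, delta in losses.items() if delta < 0]

print(f"total loss: {total}")                               # → total loss: -2.3
print(f"loss-incurring moves: {len(loss_moves)} of {len(losses)}")  # → 10 of 17
```

Even in a ‘sound’ peaceful game, more than half of my moves leaked a fraction of a point – which is exactly the slow bleed described above.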
There is a way for a human player to minimise the number of loss-incurring moves: learning and playing out long jōseki sequences that are approved by the ai. If you can, for example, play a 49-move jōseki that has been found to be fair, then you have got 49 moves closer to the end of the game without losing points. However, it would be a stretch to say that by just memorising such a sequence you would be a ‘strong player’ – after all, all your opponent needs to do is avoid that particular jōseki.
I believe that, to get the most out of an ai, the important thing is not to mindlessly copy its ‘optimal’ sequences, but to try to understand on a higher level what the ai is trying to accomplish and why. In other words, you should not just learn a particular sequence for a particular whole-board position, but also try to find out what kinds of whole-board positions call for a similar type of local sequence. But how can you do that when the ai’s only output beyond the moves is ‘good’ or ‘bad’?
My current method, which you may (or may not) find effective, is first to review my games without the help of an ai. I then make my own predictions about the key turning points and mistakes of the game and what the winning-percentage graph will look like. Only after this do I review the game with an ai, and – this is the important part – if the ai shows me something that I really didn’t expect, I get surprised and try to find out why my prediction was mistaken.
If you instead directly review your games with the ai without making your own predictions, I suspect you will skip over many moves and sequences that you could have been surprised about with a little preparatory work – this will simply reduce your opportunities to learn. There is also an additional risk that this way your study will devolve into merely memorising the ai’s sequences, whereas with the preparatory work you already have a verbalised ‘story’ of the game that you can start adjusting.
Very interesting article, and I'll definitely try out this reviewing method you suggest (just me first, then AI). However, it makes me worry about the time needed for it. As of now, I aim to review every game I play, and I manage most – around 80%. Often my reviews consume more time than the game itself, so for someone like me, trying to optimize limited time, do you think it's better to play more and review every second or third game, or play less but review them all?
Thank you for this article; this is a topic close to my own interests.
I also review every game I play; I just briefly go over the game without AI (most likely far too briefly), then upload to AI Sensei, where I look at all my mistakes that lose 3 points or more. If I don't understand an AI sequence, I try out variations with the local KataGo.
Often in variations given by AI Sensei (or some other automatic review), KataGo will want me to play move A, then the opponent plays tenuki, then I get a second move in that area. My usual question then is "that's nice, but what if the opponent does not tenuki?". This is where the local KataGo can help.
About "generative knowledge": I think it is necessary to have the vocabulary to talk about this knowledge. Maybe this is one of the differences between East and West; in the East, they have more precise Go vocabulary and the words are often used in everyday language as well. Only when we can verbalise the basic concepts can we build upon them to create higher-level or more abstract concepts (again with their own vocabulary).
I've seen the word "synthesis" used in that respect, as in "Bloom's taxonomy", where synthesis "refers to the ability to put parts together to form a new whole."
@Zdzieli: I think it is hard to say which use of time is more effective, so, if it were me, I would decide by what I ‘felt like doing’. If/when I feel like reviewing games, I’d spend more time there, and if/when I feel like playing, I’d skip some game reviews. The main thing I would avoid is spending time on something I don’t feel motivated about – this is sure to hurt efficiency far more than a slightly imperfect ratio between playing and reviewing would.
@Marcel: Thank you! And yes, having the language to discuss the nuances is definitely a key requirement. Thanks for the references to synthesis and Bloom’s taxonomy, I’ll have a look into both of them and think further about how the whole subject relates to go!