This is one of those topics that LLMs (Opus 4, Gemini 2.5 Pro, etc.) seem bad at explaining.

I was trying to figure out the difference between the Stockfish approach (minimax, alpha-beta pruning) and AlphaZero / Leela Chess Zero (MCTS). My very crude understanding is that Stockfish has a very light & fast neural net and goes for a very thorough search. Meanwhile, in MCTS (which I don't really understand at this point), you evaluate the neural net, sample some paths based on it (similar to minimax), and then pick the path you sampled the most. There's also the training vs. eval aspect to it. Would love a better explanation.
"pick the path you sampled most" is misleading.<p>What you actually do is model every node (a game state) as a multi armed bandit. Moves are levers and the final game results are payoffs.<p>So you basically keep a tree of multi-armed bandits and adjust it after each (semi-)random game, perhaps adding some nodes, for example the first node the game visited which is not yet in your move tree.<p>For the random game you pick the next node to maximise long term payoff (exploration/exploitation tradeoff applies here) which usually means a move which gave good win ratio on previous plays but not always (exploration).<p>And obviously this only applies to the first part of the game which is still in the memorized tree - after that it's random.<p>This alone does converge to a winning strategy but sometimes impractically slowly. Here's where the neural network comes in - in every new node assign the weights not uniformly but rather directed by the NN which seeks out promising moves and greatly speeds up the convergence.
In old-fashioned AI, it was generally believed that the best way to spend resources was to exactly evaluate as much of the search tree as possible. To that end, you should use lightweight heuristics to guide the search in promising directions and optimizations like alpha-beta pruning to eliminate useless parts of the search space. For finite games of perfect information like chess, this is hard to beat when the search is deep enough. (After all, if you could evaluate the whole game tree from the start, you could always make optimal moves.) Stockfish follows this approach and provides ample evidence of the strength of this strategy.

Perhaps a bit flippantly, you can think of MCTS as “vibe search”, though more accurately it’s a sampling-based search. The basic theory is that we can summarize the information we’ve obtained to estimate our belief in the “goodness” of every possible move and (crucially) our confidence in that belief. Then we allocate search time to prioritize the branches that we are most certain are good.

In this way MCTS iteratively constructs an explicit search tree for the game, with associated statistics that are used to guide decisions during play. The neural network does a “vibe check” on each new position in the tree to give the initial estimate of “goodness”, and then the search process refines that estimate. (Ask the NN to guess at the current position; then play a bunch of simulations to make sure it doesn’t lead to obvious blunders.)
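A toy illustration of that bookkeeping (the MoveStats class and the 1/sqrt(visits) uncertainty below are my own simplification for exposition, not any engine's actual formula):

    import math

    class MoveStats:
        """Belief and confidence for one candidate move."""
        def __init__(self, nn_value_estimate):
            self.value_sum = nn_value_estimate   # the network's initial "vibe check"
            self.visits = 1

        def record_simulation(self, result):
            self.value_sum += result             # e.g. +1 win, +0.5 draw, 0 loss
            self.visits += 1

        @property
        def goodness(self):                      # current belief
            return self.value_sum / self.visits

        @property
        def uncertainty(self):                   # crude confidence proxy
            return 1.0 / math.sqrt(self.visits)

    def next_branch_to_search(stats_by_move, c=1.0):
        # Favour moves with high estimated goodness, with a bonus for moves
        # we have not yet sampled enough to be confident about.
        return max(stats_by_move,
                   key=lambda m: stats_by_move[m].goodness
                                 + c * stats_by_move[m].uncertainty)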
AlphaGo Zero is: assume you have a neural network that, given a board position, will answer: what's the win probability, and how interesting is each move from here?

You use the follow-up moves as places to search down. Choosing which move(s) to explore down is a multi-armed bandit problem, but for simplicity in explanation you can just say: maybe just search the top few, vaguely in proportion to how interesting they are (the number the net gave you, updated if you find any surprises).

To search down further, you just play that move and then ask the network for the win rate (and follow-up moves) again. If there are any surprises, you can update upwards to say "hey, this is better than expected!" or whatever.

The key thing for training this network: spending computation on search with an existing network gives you better training data to train that same network. So you can start from scratch and use reinforcement learning to improve it without bound.
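A rough sketch of that search step, assuming a hypothetical net.evaluate(state) that returns (move priors, win probability for the side to move) and the same toy game interface as above; the scoring rule is a simplified version of the PUCT formula used by the AlphaZero family:

    import math

    class Node:
        def __init__(self, prior):
            self.prior = prior        # "how interesting is this move", from the net
            self.visits = 0
            self.value_sum = 0.0
            self.children = {}        # move -> Node

        def value(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def puct_score(parent, child, c_puct=1.5):
        # Exploit the backed-up value; explore in proportion to the net's prior.
        return (child.value()
                + c_puct * child.prior * math.sqrt(parent.visits) / (1 + child.visits))

    def simulate(node, state, net, game):
        # One search iteration: walk down by PUCT, expand a leaf by asking the
        # network, and back the value up the path (flipping sign each ply,
        # since a good position for me is a bad one for my opponent).
        if game.is_terminal(state):
            value = game.result(state)            # from the side to move's view
        elif not node.children:
            priors, value = net.evaluate(state)   # the "ask the network" step
            for move, p in priors.items():
                node.children[move] = Node(p)
        else:
            move, child = max(node.children.items(),
                              key=lambda mc: puct_score(node, mc[1]))
            value = -simulate(child, game.play(state, move), net, game)
        node.visits += 1
        node.value_sum += value                   # "update upwards" on surprises
        return value

    def choose_move(root_state, net, game, n_simulations=800):
        root = Node(prior=1.0)
        for _ in range(n_simulations):
            simulate(root, root_state, net, game)
        # Play the most-visited move. In self-play training, these visit counts
        # become the policy target and the game outcome becomes the value
        # target for training the same network.
        return max(root.children.items(), key=lambda mc: mc[1].visits)[0]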