Sunday, December 24, 2017

AlphaZero beats Stockfish 64:36. In 100 games AlphaZero scored 25 wins and 25 draws with White, and 3 wins and 47 draws with Black.

Google-owned DeepMind has now demonstrated that its AlphaZero algorithm cannot just be used to master Go but can defeat the best existing engines at chess and shogi. The algorithm took a mere 4 hours of playing games against itself to teach itself to play chess at a level better than Stockfish 8! In 100 games AlphaZero scored 25 wins and 25 draws with White, and 3 wins and 47 draws with Black. It did not lose a game, for a final score of 64:36. Here you can replay 10 example games by our new computer overlord. Read the DeepMind paper:

Mastering Chess and Shogi by Self-Play with a

General Reinforcement Learning Algorithm

David Silver,1∗ Thomas Hubert,1∗

Julian Schrittwieser,1∗

Ioannis Antonoglou,1 Matthew Lai,1 Arthur Guez,1 Marc Lanctot,1

Laurent Sifre,1 Dharshan Kumaran,1 Thore Graepel,1

Timothy Lillicrap,1 Karen Simonyan,1 Demis Hassabis1

1DeepMind, 6 Pancras Square, London N1C 4AG.

∗These authors contributed equally to this work.


The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

The study of computer chess is as old as computer science itself. Babbage, Turing, Shannon, and von Neumann devised hardware, algorithms and theory to analyse and play the game of chess. Chess subsequently became the grand challenge task for a generation of artificial intelligence researchers, culminating in high-performance computer chess programs that play at superhuman level (9, 13). However, these systems are highly tuned to their domain, and cannot be generalised to other problems without significant human effort.

A long-standing ambition of artificial intelligence has been to create programs that can instead learn for themselves from first principles (26). Recently, the AlphaGo Zero algorithm achieved superhuman performance in the game of Go, by representing Go knowledge using deep convolutional neural networks (22, 28), trained solely by reinforcement learning from games of self-play (29). In this paper, we apply a similar but fully generic algorithm, which we call AlphaZero, to the games of chess and shogi as well as Go, without any additional domain knowledge except the rules of the game, demonstrating that a general-purpose reinforcement learning algorithm can achieve, tabula rasa, superhuman performance across many challenging domains.

arXiv:1712.01815v1 [cs.AI] 5 Dec 2017


A landmark for artificial intelligence was achieved in 1997 when Deep Blue defeated the human world champion (9). Computer chess programs continued to progress steadily beyond human level in the following two decades. These programs evaluate positions using features handcrafted by human grandmasters and carefully tuned weights, combined with a high-performance alpha-beta search that expands a vast search tree using a large number of clever heuristics and domain-specific adaptations. In the Methods we describe these augmentations, focusing on the 2016 Top Chess Engine Championship (TCEC) world champion Stockfish (25); other strong chess programs, including Deep Blue, use very similar architectures (9, 21).

Shogi is a significantly harder game, in terms of computational complexity, than chess (2, 14): it is played on a larger board, and any captured opponent piece changes sides and may subsequently be dropped anywhere on the board. The strongest shogi programs, such as Computer Shogi Association (CSA) world champion Elmo, have only recently defeated human champions (5). These programs use a similar algorithm to computer chess programs, again based on a highly optimised alpha-beta search engine with many domain-specific adaptations.

Go is well suited to the neural network architecture used in AlphaGo because the rules of the game are translationally invariant (matching the weight-sharing structure of convolutional networks), are defined in terms of liberties corresponding to the adjacencies between points on the board (matching the local structure of convolutional networks), and are rotationally and reflectionally symmetric (allowing for data augmentation and ensembling). Furthermore, the action space is simple (a stone may be placed at each possible location), and the game outcomes are restricted to binary wins or losses, both of which may help neural network training.

Chess and shogi are, arguably, less innately suited to AlphaGo's neural network architectures. The rules are position-dependent (e.g. pawns may advance two squares from the second rank and promote on the eighth rank) and asymmetric (e.g. pawns only move forward, and castling is different on kingside and queenside). The rules include long-range interactions (e.g. the queen may traverse the board in one move, or checkmate the king from the far side of the board). The action space for chess includes all legal destinations for all of the players' pieces on the board; shogi also allows captured pieces to be placed back on the board. Both chess and shogi may result in draws in addition to wins and losses; indeed it is believed that the optimal solution to chess is a draw (17, 20, 30).

The AlphaZero algorithm is a more generic version of the AlphaGo Zero algorithm that was first introduced in the context of Go (29). It replaces the handcrafted knowledge and domain-specific augmentations used in traditional game-playing programs with deep neural networks and a tabula rasa reinforcement learning algorithm.

Instead of a handcrafted evaluation function and move ordering heuristics, AlphaZero uses a deep neural network (p, v) = fθ(s) with parameters θ. This neural network takes the board position s as an input and outputs a vector of move probabilities p with components pa = Pr(a|s) for each action a, and a scalar value v estimating the expected outcome z from position s, v ≈ E[z|s]. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search.

Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte-Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root sroot to leaf. Each simulation proceeds by selecting in each state s a move a with low visit count, high move probability and high value (averaged over the leaf states of simulations that selected a from s) according to the current neural network fθ. The search returns a vector π representing a probability distribution over moves, either proportionally or greedily with respect to the visit counts at the root state.
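The selection rule just described, favouring moves with a low visit count, a high prior probability and a high averaged value, can be sketched as a PUCT-style score. The constant C_PUCT, the data layout and the function names below are our own assumptions; the paper defers the exact formula to AlphaGo Zero.

```python
import math

# Assumed exploration constant; not specified in this excerpt.
C_PUCT = 1.25

def puct_score(prior, value_sum, visits, parent_visits):
    """Mean value of the move plus an exploration bonus that shrinks
    as the move accumulates visits."""
    q = value_sum / visits if visits > 0 else 0.0
    u = C_PUCT * prior * math.sqrt(parent_visits) / (1 + visits)
    return q + u

def select_move(stats):
    """stats: {move: (prior, value_sum, visits)}; returns the move to expand next."""
    parent_visits = sum(v for _, _, v in stats.values())
    return max(stats, key=lambda m: puct_score(*stats[m], parent_visits))
```

With these statistics, an unvisited move with a high prior can outrank a frequently visited one, which is exactly the exploration behaviour the text describes.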

The parameters θ of the deep neural network in AlphaZero are trained by self-play reinforcement learning, starting from randomly initialised parameters θ. Games are played by selecting moves for both players by MCTS, at ∼ πt. At the end of the game, the terminal position sT is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win. The neural network parameters θ are updated so as to minimise the error between the predicted outcome vt and the game outcome z, and to maximise the similarity of the policy vector pt to the search probabilities πt. Specifically, the parameters θ are adjusted by gradient descent on a loss function l that sums over mean-squared error and cross-entropy losses respectively,

(p, v) = fθ(s),   l = (z − v)^2 − π^⊤ log p + c‖θ‖^2

where c is a parameter controlling the level of L2 weight regularisation. The updated parameters are used in subsequent games of self-play.
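The loss above can be written out numerically. This is a minimal pure-Python sketch of the stated formula; the log-clamping constant 1e-12 and the default value of c are illustrative choices, not values from the paper.

```python
import math

def alphazero_loss(z, v, pi, p, theta, c=1e-4):
    """z: game outcome in {-1, 0, +1}; v: predicted value;
    pi: search probabilities; p: predicted move probabilities;
    theta: flattened parameter vector; c: L2 regularisation weight."""
    value_loss = (z - v) ** 2                                # mean-squared error term
    policy_loss = -sum(pi_a * math.log(max(p_a, 1e-12))      # cross-entropy term
                       for pi_a, p_a in zip(pi, p))
    l2 = c * sum(w * w for w in theta)                       # weight regularisation
    return value_loss + policy_loss + l2
```

In practice this scalar is minimised by gradient descent on θ; here it only illustrates how the three terms of the equation combine.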

The AlphaZero algorithm described in this paper differs from the original AlphaGo Zero algorithm in several respects.

AlphaGo Zero estimates and optimises the probability of winning, assuming binary win/loss outcomes. AlphaZero instead estimates and optimises the expected outcome, taking account of draws or potentially other outcomes.

The rules of Go are invariant to rotation and reflection. This fact was exploited in AlphaGo and AlphaGo Zero in two ways. First, training data was augmented by generating 8 symmetries for each position. Second, during MCTS, board positions were transformed using a randomly selected rotation or reflection before being evaluated by the neural network, so that the Monte-Carlo evaluation is averaged over different biases. The rules of chess and shogi are asymmetric, and in general symmetries cannot be assumed. AlphaZero does not augment the training data and does not transform the board position during MCTS.

In AlphaGo Zero, self-play games were generated by the best player from all previous iterations. After each iteration of training, the performance of the new player was measured against the best player; if it won by a margin of 55% then it replaced the best player, and self-play games were subsequently generated by this new player. In contrast, AlphaZero simply maintains a single neural network that is updated continually, rather than waiting for an iteration to complete.


Figure 1: Training AlphaZero for 700,000 steps. Elo ratings were computed from evaluation games between different players when given one second per move. a Performance of AlphaZero in chess, compared to the 2016 TCEC world champion program Stockfish. b Performance of AlphaZero in shogi, compared to the 2017 CSA world champion program Elmo. c Performance of AlphaZero in Go, compared to AlphaGo Lee and AlphaGo Zero (20 block / 3 day) (29).

Self-play games are generated by using the latest parameters for this neural network, omitting the evaluation step and the selection of best player.

AlphaGo Zero tuned the hyper-parameters of its search by Bayesian optimisation. In AlphaZero we reuse the same hyper-parameters for all games without game-specific tuning. The sole exception is the noise that is added to the prior policy to ensure exploration (29); this is scaled in proportion to the typical number of legal moves for that game type.

Like AlphaGo Zero, the board state is encoded by spatial planes based only on the basic rules for each game. The actions are encoded by either spatial planes or a flat vector, again based only on the basic rules for each game (see Methods).

We applied the AlphaZero algorithm to chess, shogi, and also Go. Unless otherwise specified, the same algorithm settings, network architecture, and hyper-parameters were used for all three games. We trained a separate instance of AlphaZero for each game. Training proceeded for 700,000 steps (mini-batches of size 4,096) starting from randomly initialised parameters, using 5,000 first-generation TPUs (15) to generate self-play games and 64 second-generation TPUs to train the neural networks.¹ Further details of the training procedure are given in the Methods.

Figure 1 shows the performance of AlphaZero during self-play reinforcement learning, as a function of training steps, on an Elo scale (10). In chess, AlphaZero outperformed Stockfish after just 4 hours (300k steps); in shogi, AlphaZero outperformed Elmo after less than 2 hours (110k steps); and in Go, AlphaZero outperformed AlphaGo Lee (29) after 8 hours (165k steps).²

We evaluated the fully trained instances of AlphaZero against Stockfish, Elmo and the previous version of AlphaGo Zero (trained for 3 days) in chess, shogi and Go respectively, playing 100-game matches at tournament time controls of one minute per move. AlphaZero and the previous AlphaGo Zero used a single machine with 4 TPUs. Stockfish and Elmo played at their

¹The original AlphaGo Zero paper used GPUs to train the neural networks.

²AlphaGo Master and AlphaGo Zero were ultimately trained for 100 times this length of time; we do not reproduce that effort here.


Game   White      Black      Win  Draw  Loss
Chess  AlphaZero  Stockfish   25    25     0
       Stockfish  AlphaZero    3    47     0
Shogi  AlphaZero  Elmo        43     2     5
       Elmo       AlphaZero   47     0     3
Go     AlphaZero  AG0 3-day   31     –    19
       AG0 3-day  AlphaZero   29     –    21

Table 1: Tournament evaluation of AlphaZero in chess, shogi, and Go, as games won, drawn or lost from AlphaZero's perspective, in 100-game matches against Stockfish, Elmo, and the previously published AlphaGo Zero after 3 days of training. Each program was given 1 minute of thinking time per move.

strongest skill level using 64 threads and a hash size of 1GB. AlphaZero convincingly defeated all opponents, losing zero games to Stockfish and eight games to Elmo (see Supplementary Material for several example games), as well as defeating the previous version of AlphaGo Zero (see Table 1).

We also analysed the relative performance of AlphaZero's MCTS search compared to the state-of-the-art alpha-beta search engines used by Stockfish and Elmo. AlphaZero searches just 80 thousand positions per second in chess and 40 thousand in shogi, compared to 70 million for Stockfish and 35 million for Elmo. AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations, arguably a more "human-like" approach to search, as originally proposed by Shannon (27). Figure 2 shows the scalability of each player with respect to thinking time, measured on an Elo scale, relative to Stockfish or Elmo with 40ms thinking time. AlphaZero's MCTS scaled more effectively with thinking time than either Stockfish or Elmo, calling into question the widely held belief (4, 11) that alpha-beta search is inherently superior in these domains.³

Finally, we analysed the chess knowledge discovered by AlphaZero. Table 2 analyses the most common human openings (those played more than 100,000 times in an online database of human chess games (1)). Each of these openings is independently discovered and played frequently by AlphaZero during self-play training. When starting from each human opening, AlphaZero convincingly defeated Stockfish, suggesting that it has indeed mastered a wide spectrum of chess play.

The game of chess represented the pinnacle of AI research over several decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a generic reinforcement learning algorithm, originally devised for the game of Go, that achieved superior results within a few hours, searching a thousand times fewer positions, given no domain

³The prevalence of draws in high-level chess tends to compress the Elo scale, compared to shogi or Go.


[The figure's twelve board diagrams are omitted; the match data and principal variations are reproduced below.]

A10 English Opening: w 20/30/0, b 8/40/2; PV: 1...e5 g3 d5 cxd5 Nf6 Bg2 Nxd5 Nf3
D06 Queen's Gambit: w 16/34/0, b 1/47/2; PV: 2...c6 Nc3 Nf6 Nf3 a6 g3 c4 a4
A46 Queen's Pawn Game: w 24/26/0, b 3/47/0; PV: 2...d5 c4 e6 Nc3 Be7 Bf4 O-O e3
E00 Queen's Pawn Game: w 17/33/0, b 5/44/1; PV: 3.Nf3 d5 Nc3 Bb4 Bg5 h6 Qa4 Nc6
E61 King's Indian Defence: w 16/34/0, b 0/48/2; PV: 3...d5 cxd5 Nxd5 e4 Nxc3 bxc3 Bg7 Be3
C00 French Defence: w 39/11/0, b 4/46/0; PV: 3.Nc3 Nf6 e5 Nd7 f4 c5 Nf3 Be7
B50 Sicilian Defence: w 17/32/1, b 4/43/3; PV: 3.d4 cxd4 Nxd4 Nf6 Nc3 a6 f3 e5
B30 Sicilian Defence: w 11/39/0, b 3/46/1; PV: 3.Bb5 e6 O-O Ne7 Re1 a6 Bf1 d5
B40 Sicilian Defence: w 17/31/2, b 3/40/7; PV: 3.d4 cxd4 Nxd4 Nc6 Nc3 Qc7 Be3 a6
C60 Ruy Lopez (Spanish Opening): w 27/22/1, b 6/44/0; PV: 4.Ba4 Be7 O-O Nf6 Re1 b5 Bb3 O-O
B10 Caro-Kann Defence: w 25/25/0, b 4/45/1; PV: 2.d4 d5 e5 Bf5 Nf3 e6 Be2 a6
A05 Reti Opening: w 13/36/1, b 7/43/0; PV: 2.c4 e6 d4 d5 Nc3 Be7 Bf4 O-O

Total games: w 242/353/5, b 48/533/19. Overall rate: w 40.3/58.8/0.8, b 8.0/88.8/3.2.

Table 2: Analysis of the 12 most popular human openings (played more than 100,000 times in an online database (1)). Each opening is labelled by its ECO code and common name. The plot shows the proportion of self-play training games in which AlphaZero played each opening, against training time. We also report the win/draw/loss results of 100-game AlphaZero versus Stockfish matches starting from each opening, as either white (w) or black (b), from AlphaZero's perspective. Finally, the principal variation (PV) of AlphaZero is provided from each opening.


Figure 2: Scalability of AlphaZero with thinking time, measured on an Elo scale. a Performance of AlphaZero and Stockfish in chess, plotted against thinking time per move. b Performance of AlphaZero and Elmo in shogi, plotted against thinking time per move.

knowledge except the rules of chess. Furthermore, the same algorithm was applied without modification to the more challenging game of shogi, again outperforming the state of the art within a few hours.


Anatomy of a Computer Chess Program

In this section we describe the components of a typical computer chess program, focusing specifically on Stockfish (25), an open source program that won the 2016 TCEC computer chess championship. For an overview of standard methods, see (23).

Each position s is described by a sparse vector of handcrafted features φ(s), including midgame/endgame-specific material point values, material imbalance tables, piece-square tables, mobility and trapped pieces, pawn structure, king safety, outposts, bishop pair, and other miscellaneous evaluation patterns. Each feature φi is assigned, by a combination of manual and automatic tuning, a corresponding weight wi, and the position is evaluated by a linear combination v(s, w) = φ(s)⊤w. However, this raw evaluation is only considered accurate for positions that are "quiet", with no unresolved captures or checks. A domain-specialised quiescence search is used to resolve ongoing tactical situations before the evaluation function is applied.
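The sparse linear evaluation v(s, w) = φ(s)⊤w can be sketched as a dictionary dot-product over the nonzero features. The feature names and weights below are invented for illustration; they are not Stockfish's actual evaluation terms.

```python
# Hypothetical sketch of a sparse linear evaluation: phi(s) is stored as a
# dict of its nonzero features, and the score is the weighted sum phi(s)^T w.

def evaluate(features, weights):
    """features: {name: value} for the nonzero entries of phi(s);
    weights: {name: w_i}. Returns the linear combination phi(s)^T w."""
    return sum(value * weights.get(name, 0.0) for name, value in features.items())

weights = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}  # invented weights
phi = {"material": 3.0, "mobility": 12.0}  # sparse feature vector for one position
score = evaluate(phi, weights)  # 3.0*1.0 + 12.0*0.1 = 4.2
```

The sparse representation matters in practice: only the handful of features present in a position are touched, not the full feature space.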

The final evaluation of a position s is computed by a minimax search that evaluates each leaf using a quiescence search. Alpha-beta pruning is used to safely cut any branch that is provably dominated by another variation. Additional cuts are achieved using aspiration windows and principal variation search. Other pruning strategies include null move pruning (which assumes a pass move should be worse than any variation, in positions that are unlikely to be in zugzwang, as determined by simple heuristics), futility pruning (which assumes knowledge of the maximum possible change in evaluation), and other domain-dependent pruning rules (which assume knowledge of the value of captured pieces).
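The core alpha-beta pruning described above can be illustrated on a tiny hand-built tree: a branch is abandoned as soon as it is provably dominated (beta <= alpha). Quiescence search, aspiration windows and the other pruning rules are omitted, and the tree values are invented.

```python
# Minimal alpha-beta minimax. A node is either a leaf score (number) or a
# list of child nodes; levels alternate between maximizing and minimizing.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:  # remaining siblings cannot change the result
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```

For example, on the tree [[3, 17], [2, 15]] the second subtree is cut off after seeing the leaf 2, since the first subtree already guarantees the maximizer a value of 3.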

The search is focused on promising variations both by extending the search depth of promising variations, and by reducing the search depth of unpromising variations based on heuristics like history, static-exchange evaluation (SEE), and moving piece type. Extensions are based on domain-independent rules that identify singular moves with no sensible alternative, and domain-dependent rules, such as extending check moves. Reductions, such as late move reductions, are based heavily on domain knowledge.

The efficiency of alpha-beta search depends critically upon the order in which moves are considered. Moves are therefore ordered by iterative deepening (using a shallower search to order moves for a deeper search). In addition, moves are ordered by a combination of domain-independent move ordering heuristics, such as the killer heuristic, history heuristic and counter-move heuristic, and also domain-dependent knowledge based on captures (SEE) and potential captures (MVV/LVA).

A transposition table facilitates the reuse of values and move orders when the same position is reached by multiple paths. A carefully tuned opening book is used to select moves at the start of the game. An endgame tablebase, precalculated by exhaustive retrograde analysis of endgame positions, provides the optimal move in all positions with six and sometimes seven pieces or fewer.
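A transposition table of the kind just described can be sketched as a hash map from position key to the deepest stored result. Real engines use Zobrist hashing and also store bound types and ageing information; the class below is a simplified, assumption-laden toy.

```python
# Toy transposition table: caches (depth, value, best_move) per position key
# so that work is reused when the same position is reached via different
# move orders. Position keys here are arbitrary hashables (e.g. FEN strings).

class TranspositionTable:
    def __init__(self):
        self._table = {}

    def store(self, position_key, depth, value, best_move):
        entry = self._table.get(position_key)
        # keep the deepest search result for each position
        if entry is None or depth >= entry[0]:
            self._table[position_key] = (depth, value, best_move)

    def probe(self, position_key, depth):
        """Return (value, best_move) if a result of sufficient depth is cached,
        else None (the caller must search the position itself)."""
        entry = self._table.get(position_key)
        if entry is not None and entry[0] >= depth:
            return entry[1], entry[2]
        return None
```

The cached best move is also what feeds the move ordering described above: even when the stored depth is insufficient, trying that move first tends to produce early cutoffs.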

Other strong chess programs, and also earlier programs such as Deep Blue, have used very similar architectures (9, 23) including the majority of the components described above, although important details vary considerably.

None of the techniques described in this section are used by AlphaZero. It is likely that some of these techniques could further improve the performance of AlphaZero; however, we have focused on a pure self-play reinforcement learning approach and leave these extensions for future research.

Prior Work on Computer Chess and Shogi

In this section we discuss some notable prior work on reinforcement learning in computer chess.

NeuroChess (31) evaluated positions by a neural network that used 175 handcrafted input features. It was trained by temporal-difference learning to predict the final game outcome, and also the expected features after two moves. NeuroChess won 13% of games against GnuChess using a fixed depth 2 search.

Beal and Smith applied temporal-difference learning to estimate the piece values in chess (7) and shogi (8), starting from random values and learning solely by self-play.

KnightCap (6) evaluated positions by a neural network that used an attack table based on knowledge of which squares are attacked or defended by which pieces. It was trained by a variant of temporal-difference learning, known as TD(leaf), that updates the leaf value of the principal variation of an alpha-beta search. KnightCap achieved human master level after training against a strong computer opponent with hand-initialised piece-value weights.

Meep (32) evaluated positions by a linear evaluation function based on handcrafted features. It was trained by another variant of temporal-difference learning, known as TreeStrap, that updated all nodes of an alpha-beta search. Meep defeated human international master players in 13 out of 15 games, after training by self-play with randomly initialised weights.

Kaneko and Hoki (16) trained the weights of a shogi evaluation function comprising a million features, by learning to select expert human moves during alpha-beta search. They also performed a large-scale optimisation based on minimax search regulated by expert game logs (12); this formed part of the Bonanza engine that won the 2013 World Computer Shogi Championship.

Giraffe (19) evaluated positions by a neural network that included mobility maps and attack and defend maps describing the lowest valued attacker and defender of each square. It was trained by self-play using TD(leaf), also reaching a standard of play comparable to international masters.


DeepChess (11) trained a neural network to perform pair-wise evaluations of positions. It was trained by supervised learning from a database of human expert games that was pre-filtered to avoid capture moves and drawn games. DeepChess reached a strong grandmaster level of play.


These programs combined their learned evaluation functions with an alpha-beta search enhanced by a variety of extensions.

An approach based on training dual policy and value networks using AlphaZero-like policy iteration was successfully applied to improve on the state of the art in Hex (3).


MCTS and Alpha-Beta Search

For at least four decades the strongest computer chess programs have used alpha-beta search (18, 23). AlphaZero uses a markedly different approach that averages over the position evaluations within a subtree, rather than computing the minimax evaluation of that subtree. However, chess programs using traditional MCTS were much weaker than alpha-beta search programs (4, 24), while alpha-beta programs based on neural networks have previously been unable to compete with faster, handcrafted evaluation functions.

AlphaZero evaluates positions using non-linear function approximation based on a deep neural network, rather than the linear function approximation used in typical chess programs. This provides a much more powerful representation, but may also introduce spurious approximation errors. MCTS averages over these approximation errors, which therefore tend to cancel out when evaluating a large subtree. In contrast, alpha-beta search computes an explicit minimax, which propagates the biggest approximation errors to the root of the subtree. Using MCTS may allow AlphaZero to effectively combine its neural network representations with a powerful, domain-independent search.

Domain Knowledge

1. The input features describing the position, and the output features describing the move, are structured as a set of planes; i.e. the neural network architecture is matched to the grid-structure of the board.

2. AlphaZero is provided with perfect knowledge of the game rules. These are used during MCTS, to simulate the positions resulting from a sequence of moves, to determine game termination, and to score any simulations that reach a terminal state.

3. Knowledge of the rules is also used to encode the input planes (i.e. castling, repetition, no-progress) and output planes (how pieces move, promotions, and piece drops in shogi).

4. The typical number of legal moves is used to scale the exploration noise (see below).

5. Chess and shogi games exceeding a maximum number of steps (determined by typical game length) were terminated and assigned a drawn outcome; Go games were terminated and scored with Tromp-Taylor rules, similarly to previous work (29).

AlphaZero did not use any form of domain knowledge beyond the points listed above.


In this section we describe the representation of the board inputs, and the representation of the action outputs, used by the neural network in AlphaZero. Other representations could have been used; in our experiments the training algorithm worked robustly for many reasonable choices.


Input features (number of planes) per game:

Go: P1 stone (1), P2 stone (1), Colour (1). Total: 17.

Chess: P1 piece (6), P2 piece (6), Repetitions (2), Colour (1), Total move count (1), P1 castling (2), P2 castling (2), No-progress count (1). Total: 119.

Shogi: P1 piece (14), P2 piece (14), Repetitions (3), P1 prisoner count (7), P2 prisoner count (7), Colour (1), Total move count (1). Total: 362.

Table S1: Input features used by AlphaZero in Go, Chess and Shogi respectively. The first set of features is repeated for each position in a T = 8-step history. Counts are represented by a single real-valued input; other input features are represented by a one-hot encoding using the specified number of binary input planes. The current player is denoted by P1 and the opponent by P2.

The input to the neural network is an N × N × (MT + L) image stack that represents state using a concatenation of T sets of M planes of size N × N. Each set of planes represents the board position at a time-step t − T + 1, ..., t, and is set to zero for time-steps less than 1. The board is oriented to the perspective of the current player. The M feature planes are composed of binary feature planes indicating the presence of the player's pieces, with one plane for each piece type, and a second set of planes indicating the presence of the opponent's pieces. For shogi there are additional planes indicating the number of captured prisoners of each type. There are an additional L constant-valued input planes denoting the player's colour, the total move count, and the state of special rules: the legality of castling in chess (kingside or queenside); the repetition count for that position (3 repetitions is an automatic draw in chess; 4 in shogi); and the number of moves without progress in chess (50 moves without progress is an automatic draw). Input features are summarised in Table S1.
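As a concrete illustration of the plane layout, here is a toy encoder for a single history step (T = 1) of chess: one binary plane per piece type per side, plus constant-valued planes for colour and move count. The plane ordering and the reduced set of constant planes are our own simplifications, not the paper's exact layout.

```python
# Toy version of the N x N x (MT + L) input stack for one chess history step.

N = 8
PIECE_TYPES = ["P", "N", "B", "R", "Q", "K"]  # one binary plane per type, per side

def piece_plane(squares):
    """squares: set of (rank, file) holding this piece type -> 8x8 binary plane."""
    return [[1 if (r, f) in squares else 0 for f in range(N)] for r in range(N)]

def constant_plane(value):
    """A plane whose every entry holds the same scalar (colour, move count, ...)."""
    return [[value] * N for _ in range(N)]

def encode(own_pieces, opp_pieces, colour, move_count):
    """own_pieces/opp_pieces: {piece_type: set of (rank, file)}."""
    planes = [piece_plane(own_pieces.get(t, set())) for t in PIECE_TYPES]
    planes += [piece_plane(opp_pieces.get(t, set())) for t in PIECE_TYPES]
    planes += [constant_plane(colour), constant_plane(move_count)]
    return planes  # 6 + 6 + 2 = 14 planes of 8x8
```

The full representation stacks T = 8 such history steps plus the remaining constant planes to reach the 119-plane total of Table S1.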

A move in chess may be described in two parts: selecting the piece to move, and then selecting among the legal moves for that piece. We represent the policy π(a|s) by an 8 × 8 × 73 stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8 × 8 positions identifies the square from which to "pick up" a piece. The first 56 planes encode possible 'queen moves' for any piece: a number of squares [1..7] in which the piece will be moved, along one of eight relative compass directions {N, NE, E, SE, S, SW, W, NW}. The next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible


Action features (number of planes) per game:

Chess: Queen moves (56), Knight moves (8), Underpromotions (9). Total: 73.

Shogi: Queen moves (64), Knight moves (2), Promoting queen moves (64), Promoting knight moves (2), Drop (7). Total: 139.

Table S2: Action representation used by AlphaZero in Chess and Shogi respectively. The policy is represented by a stack of planes encoding a probability distribution over legal moves; planes correspond to the entries in the table.

underpromotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen.
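The 73-plane layout above (56 queen-move planes, 8 knight planes, 9 underpromotion planes) can be sketched as an index function. The paper does not specify the ordering of planes within each group, so the ordering below is an assumption for illustration only.

```python
# Hypothetical mapping from a relative chess move to a plane index in [0, 73).
# Plane ordering within each group is an assumption, not the paper's.

DIRECTIONS = [(0, 1), (1, 1), (1, 0), (1, -1),
              (0, -1), (-1, -1), (-1, 0), (-1, 1)]  # N NE E SE S SW W NW
KNIGHT_JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def move_to_plane(dx, dy, underpromotion=None):
    """Map a relative move (dx files, dy ranks) to one of 73 planes."""
    if underpromotion is not None:        # planes 64..72: 3 directions x {N, B, R}
        piece = ["N", "B", "R"].index(underpromotion)
        return 64 + (dx + 1) * 3 + piece  # dx in {-1, 0, 1}: capture/push/capture
    if (dx, dy) in KNIGHT_JUMPS:          # planes 56..63
        return 56 + KNIGHT_JUMPS.index((dx, dy))
    distance = max(abs(dx), abs(dy))      # planes 0..55: 8 directions x 7 distances
    direction = DIRECTIONS.index((dx // distance, dy // distance))
    return direction * 7 + (distance - 1)
```

For example, a one-square move north maps to plane 0, and a knight jump (1, 2) maps to plane 56 under this ordering.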


The policy in shogi is represented by a 9 × 9 × 139 stack of planes similarly encoding a probability distribution over 11,259 possible moves. The first 64 planes encode 'queen moves' and the next 2 planes encode knight moves. An additional 64 + 2 planes encode promoting queen moves and promoting knight moves respectively. The last 7 planes encode a captured piece dropped back into the board at that location.

The policy in Go is represented identically to AlphaGo Zero (29), using a flat distribution over 19 × 19 + 1 moves representing possible stone placements and the pass move. We also tried using a flat distribution over moves for chess and shogi; the final result was almost identical, although training was slightly slower.

The action representations are summarised in Table S2. Illegal moves are masked out by setting their probabilities to zero, and re-normalising the probabilities for remaining moves.
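The masking step is straightforward; a minimal sketch, assuming the policy is held as a move-to-probability mapping (the dict representation and move labels here are illustrative):

```python
# Hypothetical masking of a raw policy down to the legal moves.

def mask_and_renormalise(policy, legal_moves):
    """policy: dict move -> prior probability; legal_moves: set of moves.
    Zero out illegal moves and rescale the rest to sum to 1."""
    masked = {m: (p if m in legal_moves else 0.0) for m, p in policy.items()}
    total = sum(masked.values())
    if total == 0.0:  # degenerate case: fall back to uniform over legal moves
        return {m: (1.0 / len(legal_moves) if m in legal_moves else 0.0)
                for m in masked}
    return {m: p / total for m, p in masked.items()}

policy = {"e2e4": 0.5, "d2d4": 0.3, "a7a6": 0.2}  # suppose "a7a6" is illegal here
result = mask_and_renormalise(policy, {"e2e4", "d2d4"})
# e2e4 -> 0.625, d2d4 -> 0.375, a7a6 -> 0.0
```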


During training, each MCTS used 800 simulations. The number of games, positions, and thinking time varied per game, due largely to different board sizes and game lengths, and are shown in Table S3. The learning rate was set to 0.2 for each game, and was dropped three times (to 0.02, 0.002 and 0.0002 respectively) during the course of training. Moves are selected in proportion to the root visit count. Dirichlet noise Dir(α) was added to the prior probabilities in the root node; this was scaled in inverse proportion to the approximate number of legal moves in a typical position, to a value of α = {0.3, 0.15, 0.03} for chess, shogi and Go respectively. Unless otherwise specified, the training and search algorithm and parameters are identical to AlphaGo Zero (29).
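The root-noise step can be sketched as follows. The mixing weight ε = 0.25 is taken from AlphaGo Zero (the text above says parameters are otherwise identical to it); the Dirichlet sample is built from gamma draws, and α is the game-specific value given above.

```python
import random

# Sketch: mix Dirichlet noise into root priors, P' = (1 - eps) * P + eps * n,
# where n ~ Dir(alpha). eps = 0.25 follows AlphaGo Zero; alpha is 0.3 for
# chess, 0.15 for shogi, 0.03 for Go, as stated in the text.

def add_dirichlet_noise(priors, alpha, eps=0.25, rng=random):
    # A Dirichlet sample is a vector of gamma(alpha, 1) draws, normalised.
    gammas = [rng.gammavariate(alpha, 1.0) for _ in priors]
    total = sum(gammas) or 1.0
    noise = [g / total for g in gammas]
    return [(1 - eps) * p + eps * n for p, n in zip(priors, noise)]
```

Because both the priors and the noise sum to 1, the mixture is still a valid probability distribution.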


                 Chess        Shogi        Go
Mini-batches     700k         700k         700k
Training Time    9h           12h          34h
Training Games   44 million   24 million   21 million
Thinking Time    800 sims     800 sims     800 sims
                 40 ms        80 ms        200 ms

Table S3: Selected statistics of AlphaZero training in Chess, Shogi and Go.

During evaluation, AlphaZero selects moves greedily with respect to the root visit count.

Each MCTS was executed on a single machine with 4 TPUs.
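Combining this with the training-time rule stated earlier (moves selected in proportion to the root visit count), move selection can be sketched in a few lines. The visit counts and move labels below are hypothetical.

```python
import random

# Sketch: during training, sample a move in proportion to root visit counts;
# during evaluation, pick the most-visited move (greedy).

def select_move(visit_counts, greedy):
    moves, counts = zip(*visit_counts.items())
    if greedy:
        return moves[counts.index(max(counts))]
    return random.choices(moves, weights=counts, k=1)[0]

visits = {"e2e4": 520, "d2d4": 210, "c2c4": 70}  # hypothetical root statistics
best = select_move(visits, greedy=True)  # "e2e4"
```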


To evaluate performance in chess, we used Stockfish version 8 (official Linux release) as a baseline program, using 64 CPU threads and a hash size of 1GB.

To evaluate performance in shogi, we used Elmo version WCSC27 in combination with YaneuraOu 2017 Early KPPT 4.73 64AVX2, with 64 CPU threads and a hash size of 1GB, with the usi option EnteringKingRule set to NoEnteringKing.

We evaluated the relative strength of AlphaZero (Figure 1) by measuring the Elo rating of each player. We estimate the probability that player a will defeat player b by a logistic function

    p(a defeats b) = 1 / (1 + exp(c_elo (e(b) − e(a)))),

and estimate the ratings e(·) by Bayesian logistic regression, computed by the BayesElo program (10) using the standard constant c_elo = 1/400. Elo ratings were computed from the results of a 1 second per move tournament between iterations of AlphaZero during training, and also a baseline player: either Stockfish, Elmo or AlphaGo Lee respectively. The Elo rating of the baseline players was anchored to publicly available values (29).
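The logistic model above translates directly into code; the ratings passed in below are illustrative, not the paper's.

```python
import math

# The logistic win-probability model from the text,
# p(a defeats b) = 1 / (1 + exp(c_elo * (e(b) - e(a)))),
# with the standard constant c_elo = 1/400.

def p_win(e_a, e_b, c_elo=1.0 / 400.0):
    """Probability that a player rated e_a beats a player rated e_b."""
    return 1.0 / (1.0 + math.exp(c_elo * (e_b - e_a)))

# Equal ratings give exactly a 50% win probability.
half = p_win(3000.0, 3000.0)  # 0.5
```

Note that p_win(a, b) + p_win(b, a) = 1 by construction, so the model assigns no draw probability; BayesElo handles draws via its own extension of this basic formula.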

We also measured the head-to-head performance of AlphaZero against each baseline player. Settings were chosen to correspond with computer chess tournament conditions: each player was allowed 1 minute per move, and resignation was enabled for all players (-900 centipawns for 10 consecutive moves for Stockfish and Elmo; 5% winrate for AlphaZero). Pondering was disabled for all players.

Example games: In this section we include 10 example games played by AlphaZero against Stockfish during the 100-game match, using 1 minute per move.

 White: Stockfish Black: AlphaZero 1. e4 e5 2. Nf3 Nc6 3. Bb5 Nf6 4. d3 Bc5 5. Bxc6 dxc6 6. 0-0 Nd7 7. Nbd2 0-0 8. Qe1 f6 9. Nc4 Rf7 10. a4 Bf8 11. Kh1 Nc5 12. a5 Ne6 13. Ncxe5 fxe5 14. Nxe5 Rf6 15. Ng4 Rf7 16. Ne5 Re7 17. a6 c5 18. f4 Qe8 19. axb7 Bxb7 20. Qa5 Nd4 21. Qc3 Re6 22. Be3 Rb6 23. Nc4 Rb4 24. b3 a5 25. Rxa5 Rxa5 26. Nxa5 Ba6 27. Bxd4 Rxd4 28. Nc4 Rd8 29. g3 h6 30. Qa5 Bc8 31. Qxc7 Bh3 32. Rg1 Rd7 33. Qe5 Qxe5 34. Nxe5 Ra7 35. Nc4 g5 36. Rc1 Bg7 37. Ne5 Ra8 38. Nf3 Bb2 39. Rb1 Bc3 40. Ng1 Bd7 41. Ne2 Bd2 42. Rd1 Be3 43. Kg2 Bg4 44. Re1 Bd2 45. Rf1 Ra2 46. h3 Bxe2 47. Rf2 Bxf4 48. Rxe2 Be5 49. Rf2 Kg7 50. g4 Bd4 51. Re2 Kf6 52. e5+ Bxe5 53. Kf3 Ra1 54. Rf2 Re1 55. Kg2+ Bf4 56. c3 Rc1 57. d4 Rxc3 58. dxc5 Rxc5 59. b4 Rc3 60. h4 Ke5 61. hxg5 hxg5 62. Re2+ Kf6 63. Kf2 Be5 64. Ra2 Rc4 65. Ra6+ Ke7 66. Ra5 Ke6 67. Ra6+ Bd6 0-1

White: Stockfish Black: AlphaZero 1. e4 e5 2. Nf3 Nc6 3. Bb5 Nf6 4. d3 Bc5 5. Bxc6 dxc6 6. 0-0 Nd7 7. c3 0-0 8. d4 Bd6 9. Bg5 Qe8 10. Re1 f6 11. Bh4 Qf7 12. Nbd2 a5 13. Bg3 Re8 14. Qc2 Nf8 15. c4 c5 16. d5 b6 17. Nh4 g6 18. Nhf3 Bd7 19. Rad1 Re7 20. h3 Qg7 21. Qc3 Rae8 22. a3 h6 23. Bh4 Rf7 24. Bg3 Rfe7 25. Bh4 Rf7 26. Bg3 a4 27. Kh1 Rfe7 28. Bh4 Rf7 29. Bg3 Rfe7 30. Bh4 g5 31. Bg3 Ng6 32. Nf1 Rf7 33. Ne3 Ne7 34. Qd3 h5 35. h4 Nc8 36. Re2 g4 37. Nd2 Qh7 38. Kg1 Bf8 39. Nb1 Nd6 40. Nc3 Bh6 41. Rf1 Ra8 42. Kh2 Kf8 43. Kg1 Qg6 44. f4 gxf3 45. Rxf3 Bxe3+ 46. Rfxe3 Ke7 47. Be1 Qh7 48. Rg3 Rg7 49. Rxg7+ Qxg7 50. Re3 Rg8 51. Rg3 Qh8 52. Nb1 Rxg3 53. Bxg3 Qh6 54. Nd2 Bg4 55. Kh2 Kd7 56. b3 axb3 57. Nxb3 Qg6 58. Nd2 Bd1 59. Nf3 Ba4 60. Nd2 Ke7 61. Bf2 Qg4 62. Qf3 Bd1 63. Qxg4 Bxg4 64. a4 Nb7 65. Nb1 Na5 66. Be3 Nxc4 67. Bc1 Bd7 68. Nc3 c6 69. Kg1 cxd5 70. exd5 Bf5 71. Kf2 Nd6 72. Be3 Ne4+ 73. Nxe4 Bxe4 74. a5 bxa5 75. Bxc5+ Kd7 76. d6 Bf5 77. Ba3 Kc6 78. Ke1 Kd5 79. Kd2 Ke4 80. Bb2 Kf4 81. Bc1 Kg3 82. Ke2 a4 83. Kf1 Kxh4 84. Kf2 Kg4 85. Ba3 Bd7 86. Bc1 Kf5 87. Ke3 Ke6 0-1

White: AlphaZero Black: Stockfish 1. Nf3 Nf6 2. c4 b6 3. d4 e6 4. g3 Ba6 5. Qc2 c5 6. d5 exd5 7. cxd5 Bb7 8. Bg2 Nxd5 9. 0-0 Nc6 10. Rd1 Be7 11. Qf5 Nf6 12. e4 g6 13. Qf4 0-0 14. e5 Nh5 15. Qg4 Re8 16. Nc3 Qb8 17. Nd5 Bf8 18. Bf4 Qc8 19. h3 Ne7 20. Ne3 Bc6 21. Rd6 Ng7 22. Rf6 Qb7 23. Bh6 Nd5 24. Nxd5 Bxd5 25. Rd1 Ne6 26. Bxf8 Rxf8 27. Qh4 Bc6 28. Qh6 Rae8 29. Rd6 Bxf3 30. Bxf3 Qa6 31. h4 Qa5 32. Rd1 c4 33. Rd5 Qe1+ 34. Kg2 c3 35. bxc3 Qxc3 36. h5 Re7 37. Bd1 Qe1 38. Bb3 Rd8 39. Rf3 Qe4 40. Qd2 Qg4 41. Bd1 Qe4 42. h6 Nc7 43. Rd6 Ne6 44. Bb3 Qxe5 45. Rd5 Qh8 46. Qb4 Nc5 47. Rxc5 bxc5 48. Qh4 Rde8 49. Rf6 Rf8 50. Qf4 a5 51. g4 d5 52. Bxd5 Rd7 53. Bc4 a4 54. g5 a3 55. Qf3 Rc7 56. Qxa3 Qxf6 57. gxf6 Rfc8 58. Qd3 Rf8 59. Qd6 Rfc8 60. a4 1-0

 White: AlphaZero Black: Stockfish 1. d4 e6 2. Nc3 Nf6 3. e4 d5 4. e5 Nfd7 5. f4 c5 6. Nf3 Nc6 7. Be3 Be7 8. Qd2 a6 9. Bd3 c4 10. Be2 b5 11. a3 Rb8 12. 0-0 0-0 13. f5 a5 14. fxe6 fxe6 15. Bd1 b4 16. axb4 axb4 17. Ne2 c3 18. bxc3 Nb6 19. Qe1 Nc4 20. Bc1 bxc3 21. Qxc3 Qb6 22. Kh1 Nb2 23. Nf4 Nxd1 24. Rxd1 Bd7 25. h4 Ra8 26. Bd2 Rfb8 27. h5 Rxa1 28. Rxa1 Qb2 29. Qxb2 Rxb2 30. c3 Rb3 31. Ra8+ Rb8 32. Ra2 Rb3 33. g4 Ra3 34. Rb2 Kf7 35. Kg2 Bc8 36. Rb6 Ra6 37. Rb1 Ke8 38. Kg3 h6 39. Ng6 Ra3 40. Rb6 Bd7 41. g5 hxg5 42. Kg4 Bd8 43. Rb2 Bc8 44. Nxg5 Ra1 45. Nf3 Ra3 46. Be1 Ba5 47. Rf2 Ra1 48. Bd2 Bd8 49. Rh2 Ne7 50. Bg5 Nf5 51. Bxd8 Kxd8 52. Rb2 Rc1 53. Ngh4 Nxh4 54. Nxh4 Bd7 55. Rb8+ Bc8 56. Ng2 Rxc3 57. Nf4 Rc1 58. Ra8 Kd7 59. Kf3 Rc3+ 60. Kf2 Ke7 61. Kg2 Kf7 62. Ng6 Ke8 63. Ra1 Rc7 64. Kh3 Rf7 65. Kg4 Kd8 66. Nf4 Bd7 67. Ra7 Kc8 68. Kg3 Re7 69. Nd3 Kb8 70. Ra6 Bc8 71. Rb6+ Kc7 72. Rd6 Kb8 73. Nc5 g6 74. h6 Rh7 75. Nxe6 Rxh6 76. Nf4 Rh1 77. Nxd5 Rh3+ 78. Kf4 Rh4+ 79. Ke3 Rh3+ 80. Kd2 Bf5 81. Ne7 Rh2+ 82. Ke3 Bh3 83. Nxg6 Rh1 84. Nf4 Bg4 85. Rf6 Kc7 86. Nd3 Bd7 87. d5 Bb5 88. Nf4 Ba4 89. Kd4 Be8 90. Rf8 Rd1+ 91. Kc5 Rc1+ 92. Kb4 Rb1+ 93. Kc3 Bb5 94. Kd4 Ba6 95. Rf7+ 1-0

White: AlphaZero Black: Stockfish 1. d4 Nf6 2. c4 e6 3. Nf3 b6 4. g3 Bb7 5. Bg2 Be7 6. 0-0 0-0 7. d5 exd5 8. Nh4 c6 9. cxd5 Nxd5 10. Nf5 Nc7 11. e4 Bf6 12. Nd6 Ba6 13. Re1 Ne8 14. e5 Nxd6 15. exf6 Qxf6 16. Nc3 Nb7 17. Ne4 Qg6 18. h4 h6 19. h5 Qh7 20. Qg4 Kh8 21. Bg5 f5 22. Qf4 Nc5 23. Be7 Nd3 24. Qd6 Nxe1 25. Rxe1 fxe4 26. Bxe4 Rf5 27. Bh4 Bc4 28. g4 Rd5 29. Bxd5 Bxd5 30. Re8+ Bg8 31. Bg3 c5 32. Qd5 d6 33. Qxa8 Nd7 34. Qe4 Nf6 35. Qxh7+ Kxh7 36. Re7 Nxg4 37. Rxa7 Nf6 38. Bxd6 Be6 39. Be5 Nd7 40. Bc3 g6 41. Bd2 gxh5 42. a3 Kg6 43. Bf4 Kf5 44. Bc7 h4 45. Ra8 h5 46. Rh8 Kg6 47. Rd8 Kf7 48. f3 Bf5 49. Bh2 h3 50. Rh8 Kg6 51. Re8 Kf7 52. Re1 Be6 53. Bc7 b5 54. Kh2 Kf6 55. Re3 Ke7 56. Re4 Kf7 57. Bd6 Kf6 58. Kg3 Kf7 59. Kf2 Bf5 60. Re1 Kg6 61. Kg1 c4 62. Kh2 h4 63. Be7 Nb6 64. Bxh4 Na4 65. Re2 Nc5 66. Re5 Nb3 67. Rd5 Be6 68. Rd6 Kf5 69. Be1 Ke5 70. Rb6 Bd7 71. Kg3 Nc1 72. Rh6 Kd5 73. Bc3 Bf5 74. Rh5 Ke6 75. Kf2 Nd3+ 76. Kg1 Nf4 77. Rh6+ Ke7 78. Kh2 Nd5 79. Kg3 Be6 80. Rh5 Ke8 81. Re5 Kf7 82. Bd2 Ne7 83. Bb4 Nd5 84. Bc3 Ke7 85. Bd2 Kf6 86. f4 Ne7 87. Rxb5 Nf5+ 88. Kh2 Ke7 89. Ra5 Nh4 90. Bb4+ Kf7 91. Rh5 Nf3+ 92. Kg3 Kg6 93. Rh8 Nd4 94. Bc3 Nf5+ 95. Kxh3 Bd7 96. Kh2 Kf7 97. Rb8 Ke6 98. Kg1 Bc6 99. Rb6 Kd5 100. Kf2 Bd7 101. Ke1 Ke4 102. Bd2 Kd5 103. Rf6 Nd6 104. Rh6 Nf5 105. Rh8 Ke4 106. Rh7 Bc8 107. Rc7 Ba6 108. Rc6 Bb5 109. Rc5 Bd7 110. Rxc4+ Kd5 111. Rc7 Kd6 112. Rc3 Ke6 113. Rc5 Nd4 114. Be3 Nf5 115. Bf2 Nd6 116. Rc3 Ne4 117. Rd3 1-0

White: AlphaZero Black: Stockfish 1. d4 Nf6 2. Nf3 e6 3. c4 b6 4. g3 Be7 5. Bg2 Bb7 6. 0-0 0-0 7. d5 exd5 8. Nh4 c6 9. cxd5 Nxd5 10. Nf5 Nc7 11. e4 Bf6 12. Nd6 Ba6 13. Re1 Ne8 14. e5 Nxd6 15. exf6 Qxf6 16. Nc3 Bc4 17. h4 h6 18. b3 Qxc3 19. Bf4 Nb7 20. bxc4 Qf6 21. Be4 Na6 22. Be5 Qe6 23. Bd3 f6 24. Bd4 Qf7 25. Qg4 Rfd8 26. Re3 Nac5 27. Bg6 Qf8 28. Rd1 Rab8 29. Kg2 Ne6 30. Bc3 Nbc5 31. Rde1 Na4 32. Bd2 Kh8 33. f4 Qd6 34. Bc1 Nd4 35. Re7 f5 36. Bxf5 Nxf5 37. Qxf5 Rf8 38. Rxd7 Rxf5 39. Rxd6 Rf7 40. g4 Kg8 41. g5 hxg5 42. hxg5 Nc5 43. Kf3 Nb7 44. Rdd1 Na5 45. Re4 c5 46. Bb2 Nc6 47. g6 Rc7 48. Kg4 Nd4 49. Rd2 Rf8 50. Bxd4 cxd4 51. Rdxd4 Rfc8 52. Kg5 Rf8 53. Rd2 Rc6 54. Rd5 Rc7 55. f5 Rb7 56. a3 Rc7 57. a4 a6 58. Red4 Rcc8 59. Re5 Rc7 60. a5 Rc5 61. Rxc5 bxc5 62. Rd6 Ra8 63. Re6 Kf8 64. Rc6 Ke7 65. Kf4 Kd7 66. Rxc5 Rh8 67. Rd5+ Ke7 68. Re5+ Kd7 69. Re6 Rh4+ 70. Kg5 1-0

 White: AlphaZero Black: Stockfish 1. d4 Nf6 2. c4 e6 3. Nf3 b6 4. g3 Bb7 5. Bg2 Bb4+ 6. Bd2 Bxd2+ 7. Qxd2 d5 8. 0-0 0-0 9. cxd5 exd5 10. Nc3 Nbd7 11. b4 c6 12. Qb2 a5 13. b5 c5 14. Rac1 Qe7 15. Na4 Rab8 16. Rfd1 c4 17. Ne5 Qe6 18. f4 Rfd8 19. Qd2 Nf8 20. Nc3 Ng6 21. Rf1 Qd6 22. a4 Rbc8 23. e3 Ne7 24. g4 Ne8 25. f5 f6 26. Nf3 Qd7 27. Qf2 Nd6 28. Nd2 Rf8 29. Qg3 Rcd8 30. Rf4 Nf7 31. Rf2 Rfe8 32. h3 Qd6 33. Nf1 Qa3 34. Rcc2 h5 35. Qc7 Qd6 36. Qxd6 Rxd6 37. Ng3 h4 38. Nh5 Ng5 39. Rf1 Kh7 40. Nf4 Rdd8 41. Kh2 Rd7 42. Bh1 Rd6 43. Ng2 g6 44. Nxh4 gxf5 45. gxf5 Rh8 46. Nf3 Kg7 47. Nxg5 fxg5 48. Rg2 Kf6 49. Rg3 Re8 50. Bf3 Rdd8 51. Be2 Rf8 52. Bg4 Nc8 53. Bf3 Rfe8 54. h4 Rh8 55. h5 Rhe8 56. Bg2 Ne7 57. h6 Rh8 58. Rh3 Rh7 59. Kg1 Ba8 60. Nd1 g4 61. Rh5 g3 62. Nc3 Ng8 63. Ne2 Rxh6 64. Nxg3 Rxh5 65. Nxh5+ Kf7 66. Kf2 Nf6 67. Nxf6 Kxf6 68. Rh1 c3 69. Rc1 Rh8 70. Rxc3 Kxf5 71. Rc7 Kf6 72. Bf3 Rg8 73. Rh7 Rg6 74. Bd1 Rg8 75. Rh6+ Ke7 76. Rxb6 Kd7 77. Rf6 Ke7 78. Rh6 Rg7 79. Rh8 Bb7 80. Rh5 Kd6 81. Rh3 Rf7+ 82. Ke1 Bc8 83. Rh6+ Kc7 84. Rc6+ Kb8 85. Rd6 Bb7 86. b6 Ba6 87. Rxd5 Rf6 88. Rxa5 Rxb6 89. Kd2 Bb7 90. Rb5 Rf6 91. Bb3 Kc7 92. Re5 Ba6 93. Kc3 Rf1 94. Bc2 Rh1 95. a5 Kd6 96. e4 Bf1 97. Rf5 Bg2 98. Rf4 Rc1 99. Kb2 Rh1 100. a6 1-0

White: AlphaZero Black: Stockfish 1. d4 Nf6 2. c4 e6 3. Nf3 b6 4. g3 Bb7 5. Bg2 Bb4+ 6. Bd2 Be7 7. Nc3 c6 8. e4 d5 9. e5 Ne4 10. 0-0 Ba6 11. b3 Nxc3 12. Bxc3 dxc4 13. b4 b5 14. Nd2 0-0 15. Ne4 Bb7 16. Qg4 Nd7 17. Nc5 Nxc5 18. dxc5 a5 19. a3 axb4 20. axb4 Rxa1 21. Rxa1 Qd3 22. Rc1 Ra8 23. h4 Qd8 24. Be4 Qc8 25. Kg2 Qc7 26. Qh5 g6 27. Qg4 Bf8 28. h5 Rd8 29. Qh4 Qe7 30. Qf6 Qe8 31. Rh1 Rd7 32. hxg6 fxg6 33. Qh4 Qe7 34. Qg4 Rd8 35. Bb2 Qf7 36. Bc1 c3 37. Be3 Be7 38. Qe2 Bf8 39. Qc2 Bg7 40. Qxc3 Qd7 41. Rc1 Qc7 42. Bg5 Rf8 43. f4 h6 44. Bf6 Bxf6 45. exf6 Qf7 46. Ra1 Qxf6 47. Qxf6 Rxf6 48. Ra7 Rf7 49. Bxg6 Rd7 50. Kf2 Kf8 51. g4 Bc8 52. Ra8 Rc7 53. Ke3 h5 54. gxh5 Kg7 55. Ra2 Re7 56. Be4 e5 57. Bxc6 exf4+ 58. Kxf4 Rf7+ 59. Ke5 Rf5+ 60. Kd6 Rxh5 61. Rg2+ Kf6 62. Kc7 Bf5 63. Kb6 Rh4 64. Ka5 Bg4 65. Bxb5 Ke7 66. Rg3 Bc8 67. Re3+ Kf7 68. Be2 1-0

White: AlphaZero, Black: Stockfish 1. d4 e6 2. e4 d5 3. Nc3 Nf6 4. e5 Nfd7 5. f4 c5 6. Nf3 cxd4 7. Nb5 Bb4+ 8. Bd2 Bc5 9. b4 Be7 10. Nbxd4 Nc6 11. c3 a5 12. b5 Nxd4 13. cxd4 Nb6 14. a4 Nc4 15. Bd3 Nxd2 16. Kxd2 Bd7 17. Ke3 b6 18. g4 h5 19. Qg1 hxg4 20. Qxg4 Bf8 21. h4 Qe7 22. Rhc1 g6 23. Rc2 Kd8 24. Rac1 Qe8 25. Rc7 Rc8 26. Rxc8+ Bxc8 27. Rc6 Bb7 28. Rc2 Kd7 29. Ng5 Be7 30. Bxg6 Bxg5 31. Qxg5 fxg6 32. f5 Rg8 33. Qh6 Qf7 34. f6 Kd8 35. Kd2 Kd7 36. Rc1 Kd8 37. Qe3 Qf8 38. Qc3 Qb4 39. Qxb4 axb4 40. Rg1 b3 41. Kc3 Bc8 42. Kxb3 Bd7 43. Kb4 Be8 44. Ra1 Kc7 45. a5 Bd7 46. axb6+ Kxb6 47. Ra6+ Kb7 48. Kc5 Rd8 49. Ra2 Rc8+ 50. Kd6 Be8 51. Ke7 g5 52. hxg5 1-0

White: AlphaZero, Black: Stockfish 1. Nf3 Nf6 2. d4 e6 3. c4 b6 4. g3 Bb7 5. Bg2 Be7 6. 0-0 0-0 7. d5 exd5 8. Nh4 c6 9. cxd5 Nxd5 10. Nf5 Nc7 11. e4 d5 12. exd5 Nxd5 13. Nc3 Nxc3 14. Qg4 g6 15. Nh6+ Kg7 16. bxc3 Bc8 17. Qf4 Qd6 18. Qa4 g5 19. Re1 Kxh6 20. h4 f6 21. Be3 Bf5 22. Rad1 Qa3 23. Qc4 b5 24. hxg5+ fxg5 25. Qh4+ Kg6 26. Qh1 Kg7 27. Be4 Bg6 28. Bxg6 hxg6 29. Qh3 Bf6 30. Kg2 Qxa2 31. Rh1 Qg8 32. c4 Re8 33. Bd4 Bxd4 34. Rxd4 Rd8 35. Rxd8 Qxd8 36. Qe6 Nd7 37. Rd1 Nc5 38. Rxd8 Nxe6 39. Rxa8 Kf6 40. cxb5 cxb5 41. Kf3 Nd4+ 42. Ke4 Nc6 43. Rc8 Ne7 44. Rb8 Nf5 45. g4 Nh6 46. f3 Nf7 47. Ra8 Nd6+ 48. Kd5 Nc4 49. Rxa7 Ne3+ 50. Ke4 Nc4 51. Ra6+ Kg7 52. Rc6 Kf7 53. Rc5 Ke6 54. Rxg5 Kf6 55. Rc5 g5 56. Kd4 1-0

Program     Chess     Shogi     Go
AlphaZero   80k       40k       16k
Stockfish   70,000k   -         -
Elmo        -         35,000k   -

Table S4: Evaluation speed (positions/second) of AlphaZero, Stockfish, and Elmo in chess, shogi and Go.

