The Effect of Hash Collisions in a Computer Chess Program

Robert M. Hyatt, PhD.

Anthony E. Cozzie

Abstract

Hash signature collisions have been feared in computer chess programs dating back to the early Greenblatt program and continuing until today. The fear has been that a signature collision, where the hashing function maps two different chess board positions to the same hash signature, will introduce errors into the tree search that lead to gross playing errors and inferior play. As a result, chess programmers have spent significant amounts of time trying to reduce these signature collisions as far as possible (since a chess position can't be uniquely stored in 64 or fewer bits, it is impossible to avoid all signature collisions). This research was done to answer the question “Is it really worth all the effort to absolutely minimize signature collisions?” The answer is surprising.

Introduction

Hash tables (also known as transposition/refutation tables) have been used in game-tree programs since their introduction in the Greenblatt chess program developed at MIT in the mid-1960s [Greenblatt, 1967]. Most computer chess programs since then have relied on this idea to help minimize the size of the search tree [Beal, 1996; Breuker, 1998; Nelson, 1985]. Without going into a lot of detail, if a chess program is supposed to search to a depth of six plies (six half-moves), the position reached by playing the three moves Nf3 Nf6 Bc4 is the same as the position reached by playing the same moves in a different order, Bc4 Nf6 Nf3. Since the program must search to a depth of six half-moves, after either of these two positions is reached there is still a significant amount of searching required to reach that depth and produce a score and best move. A chess program will reach one of the two positions first, search the tree below that position, and then store the position and score in the hash table. When it reaches the position again, it will find the score in the hash table and avoid searching that sub-tree a second time, saving significant search effort.
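As a concrete illustration, the following sketch (in C, with an invented entry layout, table size, and function names; it is not the actual Crafty or Zappa code) shows the basic pattern: probe the table before searching a sub-tree, and store the result afterward so a later transposition to the same position can reuse it. The 64-bit position signature it relies on is described in the next paragraph.

#include <stdint.h>

#define INDEX_BITS 20                       /* 2^20 entries, for illustration only */
#define TABLE_SIZE (1u << INDEX_BITS)

typedef struct {
    uint64_t signature;                     /* full 64-bit position signature      */
    int16_t  depth;                         /* remaining depth when searched       */
    int16_t  score;                         /* backed-up score                     */
} Entry;

static Entry table[TABLE_SIZE];

/* Called at the top of the recursive search: returns 1 and fills *score
   if this exact position was already searched to at least this depth. */
int probe(uint64_t sig, int depth, int *score) {
    Entry *e = &table[sig & (TABLE_SIZE - 1)];
    if (e->signature == sig && e->depth >= depth) {
        *score = e->score;
        return 1;                           /* transposition: skip the sub-tree    */
    }
    return 0;
}

/* Called just before the search of a sub-tree returns its result. */
void store(uint64_t sig, int depth, int score) {
    Entry *e = &table[sig & (TABLE_SIZE - 1)];
    e->signature = sig;
    e->depth = (int16_t)depth;
    e->score = (int16_t)score;
}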

The hash table is generally based on a 64-bit hash signature, where the full chess position is reduced to a 64-bit value by Zobrist hashing [Zobrist, 1970]. Many chess software developers have exhaustively tested this 64-bit hashing algorithm, verifying that the probability that two different chess positions produce the same 64-bit hash signature is as near zero as possible. Obviously, if a collision occurs, the search will use the score from the tree below one position as the score for the tree below a different position, producing errors in the final backed-up score and best move. A parallel search aggravates this because multiple threads can write to the hash table in various orders, leaving hash entries that are incorrect. This was explained, and a solution presented, in the paper mentioned below.
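For readers unfamiliar with the technique, the following minimal sketch (in C) shows how a Zobrist signature is built and then updated incrementally as pieces move. The 12-piece-by-64-square layout and the rand()-based generator are illustrative assumptions only, and the side-to-move, castling, and en-passant terms are omitted for brevity; real programs use carefully chosen random numbers, which is exactly the property Warnock and Wendroff examined.

#include <stdint.h>
#include <stdlib.h>

static uint64_t zobrist[12][64];   /* one fixed random number per (piece, square) */

static uint64_t rand64(void) {
    /* illustrative generator only; not what either program actually uses */
    return ((uint64_t)rand() << 42) ^ ((uint64_t)rand() << 21) ^ (uint64_t)rand();
}

void init_zobrist(void) {
    for (int piece = 0; piece < 12; piece++)
        for (int square = 0; square < 64; square++)
            zobrist[piece][square] = rand64();
}

/* board[sq] holds 0..11 for a piece, -1 for an empty square */
uint64_t signature_of(const int board[64]) {
    uint64_t sig = 0;
    for (int square = 0; square < 64; square++)
        if (board[square] >= 0)
            sig ^= zobrist[board[square]][square];   /* XOR in every piece */
    return sig;
}

/* Incremental update when a piece moves: XOR it out of the old square
   and into the new one, so the full board never has to be rehashed.    */
uint64_t update_signature(uint64_t sig, int piece, int from, int to) {
    return sig ^ zobrist[piece][from] ^ zobrist[piece][to];
}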

In 2002, one of the authors submitted a paper on a transposition/refutation table algorithm for a parallel search that required no critical section locks [Hyatt, 2002]. This paper was the result of eliminating a particular atomic lock that was causing poor performance on NUMA-type parallel machines. Eliminating the lock resulted in much better overall performance, but it produced an interesting question from one of the reviewers. He sent an email that asked "This obviously will work well, but just how important is it to avoid the issue you are describing? Will such infrequent errors actually have any measurable effect on the search?"

Any publication that discusses "hashing" or "transposition/refutation tables" in computer chess (or other zero-sum board games such as checkers or chess variants) inevitably turns to the issue of "hash collisions," the term for the circumstance where two different chess positions produce the same hash signature [Breuker, 1998; Nelson, 1985; Warnock, 1988]. The question posed by the reviewer, regarding whether or not such hashing errors are a cause for concern, was indeed interesting. Attempting to answer that question led to some interesting results.

Another aspect of this is that researchers have noticed the robustness of the alpha/beta minimax search. For example, many programmers have discovered major errors in either their search or evaluation, yet noticed that the program was playing well anyway and, in many cases, played no better after the bug was excised. Beal's research using a random evaluation further shows this characteristic of large minimax search trees, in that a purely random evaluation score still produces reasonable chess, because the search twists the uniform property of random number generators into a sort of “mobility” that guides the program to reasonable positions based on a very unreasonable evaluation [Beal, 1994]. In short, the search space is very large, occasional errors tend to get lost in it, and they go undetected over long periods of time. This turns out to be just as true for hash signature collisions. Huge numbers of errors get lost in the even larger tree search space, making the effect of these errors hardly noticeable for any reasonable error rate.

The hypothesis

For years, hash collisions have been considered bad. Indeed, much work has been done within the Zobrist hashing framework to minimize these collisions. For example, Warnock and Wendroff did research on the number of bits required to minimize collisions as well as the properties of the random numbers used to compute the Zobrist hash signature [Warnock, 1988]. It has been felt that even one error per game was unacceptable because a hash error introduces a wrong score, which might cause a change in the search score or best move.

That formed the framework for the original hypothesis of this paper, namely that hash collisions were bad and could not be tolerated. We felt that any significant number of errors would produce wrong scores and incorrect moves from the search. The only issue is “what is a significant number of errors?” Our original conjecture was only that if the number of errors exceeded some small number in a single game, we would see problems. We then set about developing an experiment that would give a real estimate for an acceptable error rate that would not change the game outcome.

The test-bed

For this research, we chose to use two computer chess programs written by the authors. Crafty (Hyatt) and Zappa (Cozzie) were used since we have access to all the source, and we know the internals of these programs well. Both programs are traditional computer chess programs that use the alpha/beta algorithm to search a minimax game tree. The search is done in three distinct phases.

Phase 1 is the normal full-width search that is done to some fixed depth as limited by the total time allowed for a move. This fixed-depth search is modified by various search extensions to force certain paths to be examined deeper for various tactical reasons, such as finding/avoiding a checkmate or winning/losing material, and so forth. At some point, the search decides that the base search has reached deeply enough (while searching all legal moves) and that it is time to stop the search and compute a score for the position. The searches also use search reductions, such as null-move, futility pruning, and other ideas that attempt to reduce the overall tree search space without overlooking critical search pathways.

Phase 2 is an add-on search done after the basic search has reached a satisfactory depth. Phase 2 only considers captures in Crafty, while Zappa also includes some checking moves. This phase is needed to eliminate certain types of tactical errors that would otherwise slip by, primarily caused by statically evaluating positions where tactical issues are present, such as overloaded pieces, hung pieces, and so forth. This is commonly called the quiescence search, although the moves that are included in this search vary significantly from program to program.

Phase 3 is the static evaluation, which takes the current position after phase 2 has completed, and computes a numerical score based on the material, the locations of the pieces, pawn structure, and many other positional considerations. The only important point here is that since this static evaluation does not know anything about captures, or hanging pieces, or overloaded pieces, or other dynamic features, it depends on the first two phases to reach positions where no tactical threats are pending, so that the positional score will be as accurate as possible.

In Crafty and Zappa, all of the hashing is done in phase 1 of the search; there is no hashing in the quiescence search. This is a reasonably important detail, since some programs hash in the q-search as well. Crafty has used both approaches in the past, but the current approach was found to be the most effective in terms of search speed, search depth, and code simplicity. Zappa has a similar history, and its author reached the same conclusion.

The experiment

The experiment was actually easy to set up. First, it is useful to think about what will happen when a signature collision occurs. Position A is searched and the score saved in the table. Position B is then reached and the hash signature matches that of position A so the result from search A is passed back through the tree search, which is clearly an error.

Since that concept seemed reasonable, we modified the programs so that we could force them to produce a false signature match every Nth probe. Crafty and Zappa use a hashing algorithm that essentially does two probes into the hash table. The first probe is to the "depth-preferred" table while the second is to the "always store" table (this is often called the “Belle” hash table approach, since it was used in the Belle chess machine built by Ken Thompson and Joe Condon [Condon, 1982]). Each time a probe is done, the error counter is decremented, and when it reaches zero, an error is produced. The error could be on either of the two probes, reasonably uniformly distributed between the two tables.

A normal hash probe uses the right-most N bits of the hash signature as the table index, and then the entire 64 bit signature must match the stored signature to call this a match. To produce an error, we chose to ignore the rightmost N bits, and use the next N bits instead, to force the probe to a different place in the table, and then consider the signature a match to simulate a collision error.
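A sketch of this forced-collision probe follows (in C, with invented names and a single table for brevity; the real programs probe two tables, depth-preferred and always-store, and the counter is decremented on each of those probes). The point is simply that every Nth probe the index is taken from the next group of signature bits, so the probe lands on an entry belonging to some other position, and the signature comparison is skipped to simulate a collision.

#include <stdint.h>

#define INDEX_BITS 20
#define TABLE_SIZE (1u << INDEX_BITS)

typedef struct {                            /* same illustrative entry layout as before */
    uint64_t signature;
    int16_t  depth;
    int16_t  score;
} Entry;

static Entry table[TABLE_SIZE];
static long error_interval = 1000;          /* N: one forced error per N probes  */
static long error_counter  = 1000;

/* Returns 1 and fills *score when a (possibly forced) match is found. */
int probe_with_forced_errors(uint64_t sig, int depth, int *score) {
    Entry *e;
    if (--error_counter <= 0) {
        error_counter = error_interval;
        /* Use the *next* INDEX_BITS bits as the index so the probe lands on an
           entry for a different position, then accept it without comparing the
           signature: a simulated collision error. */
        e = &table[(sig >> INDEX_BITS) & (TABLE_SIZE - 1)];
        *score = e->score;
        return 1;
    }
    e = &table[sig & (TABLE_SIZE - 1)];           /* normal probe: low-order bits index */
    if (e->signature == sig && e->depth >= depth) /* full 64-bit match required          */
    {
        *score = e->score;
        return 1;
    }
    return 0;
}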

It is important to notice that the error rate is in terms of probes per error, not nodes per error. For example, using N=1000 could mean an error rate of one per 1000 nodes, if the first probe at each node is a hit, terminating the hash lookup after the error counter has been decremented by one. Alternatively, it could represent an error rate of one per 500 nodes that result in hash probes, because the two internal probes each decrement the error counter by one whenever the first probe results in a signature mismatch. It should also be noted that since neither program probes in the q-search, one error per 1000 probes could well be one error per 5000 nodes (or even fewer errors per node) if the q-search is very large.

This accurately simulates the effect of a hash collision, since it produces a false match and uses a score that is wrong for the current position; when an error occurs, it is really an error. Without the trick of using bits other than the low-order N bits, in simple endgames we would always probe to the right place, which often holds the right table entry. In one of the test positions used for the results of this paper, Fine #70, the hit rate is well over 70%. This makes it hard to force a significant number of errors, since most probes are actual hash signature matches. With the change as described, real errors can be forced at any frequency desired.

To go with programs that could simulate hash errors, we needed a test set of positions that we could run with different error rates, to see when (or if) the score or move changed. We spent quite a bit of time making sure that the test set would address the kinds of positions that might be seen in a game. After initially starting with a large test set of known positions (including tactical and endgame positions), we decided to reduce this to a more manageable number, since we anticipated doing a lot of hand-analysis on the results. Since hashing is particularly effective in endgames, we chose six endgame positions from the chess literature. We were fairly certain these would be the positions that hash errors would affect the most, because these positions (using hashing) allow the programs to reach incredible search depths. One of these positions, the ubiquitous position 70 from Fine's “Basic Chess Endings”, is a well-known hash-table stress test, due to the blocked nature of the position that allows a program to search well beyond 30 plies almost instantly. We also chose six tactical positions, thinking these would also produce significant problems, since their tree paths are deep and narrow, with deep (but forcing) lines leading to mate or material win. The final six positions were chosen from games played by Crafty on the Internet Chess Club, and represent typical middle-game positions where a mistake might or might not lead to problems. It should be noted that this final set of six positions had no real “correct or best move”; they are just middle-game positions where the side on move has many options that are OK, as well as a few that would be blunders (most chess positions have some horrible alternatives, of course).

The experimental plan was to run the positions through the normal program, then introduce varying levels of errors until some sort of error rate was found that actually changed the results. We were looking for changes that would produce one of the following:

1. Change in the size of the tree (total nodes searched). Until this happens, no scores or variations are likely to change. Once the size of the tree changes, the hashing error rate begins to influence the search result. It turns out that any error rate alters the size of the tree to some extent. As a result, this particular change was deemed irrelevant and is not shown in the results, because all positions showed minor changes in the node counts. Positions are marked with a "-" if there were no other changes, meaning that the error rate had no noticeable impact on that particular position.

2. Change in the score (or in the variation produced, although the first move does not change here). This would show that the collisions were going beyond changing the size of the tree and were changing the actual score or the path of best moves. The effect on the game could possibly be to play a different move if enough searches were done. The table marks these positions with an "S" when the score changes. It should be noted that not all score changes are significant; sometimes even varying the size of the hash table is enough to change the score, as anyone who has run Fine #70 knows.

3. Change in the best move. This would mean that the collisions actually changed the best move to something that is probably inferior. It should be noted that in some positions it is possible that two root moves are equal, so if this test produced a different best move, it required further testing with the no-error versions to see if the chosen move was really worse or was just an equal or nearly-equal alternative (in some cases, the first and third moves in the PV can be interchanged without changing the final result at all). The table marks these positions with an "M" when the best move changes. It is possible that playing a different move will have no effect on the game at all.

4. Change in the game outcome. Shifting the score or changing the best move is not necessarily something that would change the game outcome. This type of change was evaluated by searching the best move chosen with the error-prone hashing, but using the versions with no errors. In addition, commercial chess programs were used to analyze the original best move and the different move produced by the error-prone program, so that we could verify whether the error-prone program produced a move that was actually bad enough to change the outcome of the game. It should also be noted that in the endgame tests the programs did not use endgame tables, but when we did the verification searches, we used the 5-6 piece endgame tables when applicable to help further refine our assessment of “game-outcome-changing.” If the error-prone code produced a move M with a score of +.5, while the no-error code said that move was really a -1.5 move, then that is an outcome-changing error, because material is being lost unnecessarily. The table marks these positions with a "G" when the game outcome apparently changes.

Results

The results turned out to be surprising. We first ran the set of 18 positions using the normal programs, with no hash collision, to establish baseline scores and variations. This represented the scores and variations the normal programs would produce with no odd hashing problems included.

Experiment 1 was run with a hash size of 384 megabytes for Crafty, and 512MB for Zappa. In Crafty, that turns into a hash table with twenty-four million entries, at 16 bytes per entry, while Zappa would have thirty-two million entries. Both programs were run on single-processor machines so that no non-deterministic parallel search behavior would influence the results. The programs searched the positions for two minutes each, a tree search space of about 120 million total nodes (for Crafty) and 90 million total nodes for Zappa, enough to overwrite parts of the hash table multiple times.

Figure 1 shows the results of this experiment. Each row represents one of the eighteen positions tested, and each column represents an error rate of one error every 10^N probes (N comes from the column heading). Each column has two characters representing the result for that particular position and error rate; the first character represents Crafty's result while the second character represents Zappa's result. In the legend for this table, “-” means no change, “S” means the score or PV changed but the best move did not, “M” means the best move changed but was not significantly different from the real best move, and finally “G” means that the best move changed and it was a change that would probably affect the game outcome. For example, "MS" means that Crafty played a different move for that position and error rate, while Zappa only produced a different score from the error-free search.

The most unexpected result is that with an error rate of one error for every one thousand hash probes done, over one-half of the positions produce results identical to the no-error versions of the programs. However, this represents such a ridiculously bad hash collision rate that one would have to use a table with just a few entries and a hash signature of just a few bits to even come close to producing such an error rate.

Looking at figure 1, for one error every one million probes, only three scores changed for Zappa, while none changed for Crafty. Dropping all the way to one error every ten thousand probes, Crafty produced three score changes while Zappa produced four. At one error per one thousand probes, Crafty and Zappa produced only six and five score changes respectively, still playing the same moves as the error-free programs.




       10^1  10^2  10^3  10^4  10^5  10^6  10^7
P1      SS    --    --    --    --    --    --
P2      SS    SS    SS    SS    -S    -S    -S
P3      SG    SS    SS    -S    -S    -S    -S
P4      SS    --    --    --    --    --    --
P5      SG    G-    S-    S-    --    --    --
P6      MS    SS    SS    SS    MS    -S    -S
P7      -S    --    --    --    --    --    --
P8      GG    -G    --    --    --    --    --
P9      SS    SS    --    --    --    --    --
P10     SM    -M    --    --    --    --    --
P11     SG    --    S-    --    --    --    --
P12     -S    --    -S    --    --    --    --
P13     MS    SS    -S    -S    --    --    --
P14     SS    S-    S-    --    S-    --    --
P15     MS    -S    --    --    --    --    --
P16     MS    S-    --    --    --    --    --
P17     MM    -M    --    --    --    --    --
P18     SG    SM    --    --    --    --    --

Figure 1

Experiment 2 was run with the same parameters, except that the size of the hash table was reduced from 384M/512M to 12M/16M (these are the Crafty/Zappa sizes). The idea behind this test was to determine what might happen if the hash table gets overwritten at a much higher rate than in experiment 1. This could be used to extrapolate how a faster processor (faster search speed) might affect the test if the size of the hash table remains constant. With a search space of around 120M nodes, and a hash size of 768K/1024K positions, the table entry replacement rate is extremely high for both programs.

Figure 2 gives the results of this experiment, and clearly shows that the effect of collisions is more pronounced when the table size is reduced, or the search space is increased due to either faster hardware or longer time limits.

The first noticeable result is that position 6 is a problem for both programs when the hash table size is very small. Both programs produced different moves and/or scores for the first six error rates; at the last (lowest) error rate, Crafty still produced a wrong score while Zappa produced the error-free score.

It still seems likely that the error rate of one per ten thousand probes is not that unsafe, since Zappa produced only one score change at that level, while Crafty produced four score changes and one non-critical move change.




       10^1  10^2  10^3  10^4  10^5  10^6  10^7
P1      SS    SS    --    --    -S    --    --
P2      SS    SS    SS    S-    SS    --    --
P3      SS    SS    SS    S-    --    --    --
P4      SS    -S    --    --    --    --    --
P5      MG    S-    S-    S-    --    --    --
P6      MM    MM    SS    SS    MM    SM    S-
P7      -S    --    --    --    --    --    --
P8      GM    SS    SS    --    --    --    --
P9      SG    S-    --    --    --    --    --
P10     GM    SS    --    --    --    --    --
P11     GG    SS    --    --    --    --    --
P12     GS    --    --    --    --    --    --
P13     MG    SS    SS    -S    --    --    --
P14     SS    SS    SS    M-    --    --    -S
P15     SM    SS    --    --    --    --    --
P16     MM    -S    --    --    --    --    --
P17     SM    SM    M-    --    --    --    --
P18     MM    -M    --    --    --    --    --

Figure 2

Discussion

For some insight into how errors affected the results, the result tables show that the first six positions are affected more (as a group) than the others. The first six positions are pure endgame positions where the search depth typically reaches 40 plies and beyond. The next six are tactical positions (appendix I), while the last six are normal middle-game positions.

We had originally hypothesized that end games would be the worst case, because they really utilize the hash table to reach extreme search depths. It turns out that this part of the original hypothesis was correct, as the data shows.

We had also originally hypothesized that the sharp tactical lines would be the next group most likely to change, but surprisingly that was not the case. In a couple of cases a game-changing different move was chosen, which seems to make sense, because these positions represent deep but narrow searches along a very precise line of play to reach the proper tactical conclusion. Change any score along that narrow path and it may well change the principal variation and the ultimate score or move chosen.

We thought that the middle-game positions would be the least affected, but this was only partially true. There were plenty of score and move changes, but none of them seemed to be a real "outcome-changer" when examined carefully. After considering this, it appears logical: in middle-game positions there is usually no clear best move, and slight changes can produce different best moves with no trouble. It should be noted that the first column represents a completely ridiculous error rate of one error per ten probes, something that is highly unlikely to happen anywhere. It is given for reference only; even at that error rate, about one-half of the positions produce the same best move for Crafty, which is remarkable.

One example of a different root move that is a game-outcome changer appeared in position 5, a king and two pawns versus king and pawn ending. Endgame tablebases were not used during this test, to place emphasis on the actual search done by the programs. The error-prone version of Crafty, at one error every 100 probes, plays g6 in this position. This is called an outcome-changing move because the no-error search correctly finds that Kf4 is winning. Testing this position with endgame tablebases shows that Kf4 is a mate in 26 moves, while g6 leads to a forced draw.

There are other examples where the error-prone version produced a different move, but examination using the no-error search proves that the score is only slightly different, which would not necessarily change the outcome of the game where that move was played.

Another important point is that while the deep endgame positions were most problematic, they are also the rarest positions encountered in real games, since many games are resolved before the endgame is reached. Every program is going to end up searching opening, middle-game, and even deep tactical positions, in most every game played. But searching endgame positions such as Fine #70 is very rare, which means that these types of positions rarely influence the outcome of a game because they occur so infrequently over the board. This suggests that the effect of hash signature collisions is even less than what was seen in these test positions.

Conclusions

Before too much is concluded from this experiment, note that the intent was hardly to prove that a certain number of signature collisions can be tolerated inside the search without any measurable effect on the search result. Rather, the goal was to answer a simpler question, namely “will occasional errors hurt the quality of the tree search?” Nearly every chess author has spent time testing the random numbers, and then testing to be sure that signature collisions are extremely rare events. We tried to show that even significant numbers of such errors have very little (if any) effect on the quality of the search result, until the number of errors increases far beyond the level even the most primitive hashing algorithm might produce.

The numbers are surprising, but they clearly show that errors do not influence the search nearly as much as has always been suspected. Even with programs searching at one million nodes per second and beyond, the errors do not wreck the search and cause substantial problems. For example, in the tactical positions, there are just a few key positions that influence the final result. In a tree search space of 120 million nodes, a signature collision on one of that small set of critical nodes is very unlikely. One point this certainly underlines is that past claims that programs search a large number of useless chess positions are clearly right on target. One can almost change scores at will and still have little adverse effect on the quality of the moves produced.

With large hash tables, the first table suggests that an error rate of one in 1,000,000 probes would be safe for this test set, and that increasing the error rate to one in 100,000 would only change two of eighteen scores for Crafty. Going to one in 10,000 adds one more score change for Crafty, which would not change the outcome in a game nor (apparently) weaken its play at all. For smaller hash tables, both programs are far more sensitive to errors as figure 2 shows.

Obviously such high error rates are not going to be trusted, because this test is based on hashing that has a distinct randomness due to the Zobrist algorithm's use of random numbers. What it does suggest, quite clearly, is that expending effort to drive the error rate below the one in four billion probes sort of range that 64 bit hash signatures produce is effort that is going to produce little return in terms of accuracy for the search.

On the practical side, we believe that everyone agrees that we want the minimum number of hash collisions possible, simply because "Murphy's law" will likely make one hash collision produce an embarrassing moment in a public event somewhere. To finish this up and come full circle, to the "nameless reviewer" who posed the original question for the lock-less hashing algorithm paper we can now offer a subjective answer: "this really isn't a problem that should worry anyone." Therefore, the lock-less hashing algorithm was really addressing a type of hashing error that occurred only a couple of times per move, at most, given the large hash table sizes we use on today's computers.

However, in the case of Crafty, there was an overwhelming reason to allow no hash collisions. Crafty uses the "hash move" in the search, and it cannot recover from a circumstance where the hash move is actually illegal, as it immediately makes the move, and once that happens the chess board can become irreversibly corrupted. For example, castling king-side after the king has already castled queen-side will produce a new king at g1, with a king already at c1. When this move is unmade, the g1 king goes back to e1, and we now have an ugly position with two kings. Years ago Crafty had a simple legality check added to prevent this kind of catastrophic failure, and when it is detected, the move is not played, but an error message is logged. This error has not been reported in over 100,000 games now, which doesn't necessarily mean that no hash collisions have occurred, but it does mean that if a collision did occur, at least the hash move was not illegal.
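A minimal sketch of that kind of sanity screen follows (in C, with an invented board representation and function names; it is not the actual Crafty check, which is more thorough). The idea is only to reject a hash move that cannot possibly be played in the current position before it is made on the board.

#include <stdio.h>

typedef struct {
    int board[64];          /* 0 = empty, >0 = white piece code, <0 = black piece code */
    int side_to_move;       /* +1 = white to move, -1 = black to move                  */
} Position;

typedef struct {
    int from, to;           /* squares 0..63 */
} Move;

/* Cheap screen against corrupted or colliding hash entries: verify that a
   piece of the side to move sits on the from-square and that the to-square
   does not hold a friendly piece.  Pins, checks, castling rights and piece
   movement rules are not examined here. */
int hash_move_ok(const Position *pos, Move m) {
    if (m.from < 0 || m.from > 63 || m.to < 0 || m.to > 63 || m.from == m.to)
        return 0;
    if (pos->board[m.from] * pos->side_to_move <= 0)
        return 0;           /* no piece of the side to move on the from-square */
    if (pos->board[m.to] * pos->side_to_move > 0)
        return 0;           /* destination occupied by a friendly piece        */
    return 1;
}

/* Usage sketch: if the screen fails, skip the move and log the event
   rather than playing an impossible move and corrupting the board. */
void try_hash_move(const Position *pos, Move hash_move, FILE *log_file) {
    if (!hash_move_ok(pos, hash_move)) {
        fprintf(log_file, "hash table produced an illegal move -- ignored\n");
        return;
    }
    /* ... otherwise make the move and continue the search ... */
}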

One other simple experiment was run, attempting to determine the minimum hash signature length necessary to avoid problems. In summary, 32-bit signatures produced wrong moves and scores, while 64-bit signatures did not. An “in-between” value of 48 bits also proved to be just as safe as 64, but 48 is not a practical size, since 32-bit and 64-bit values are the natural signature lengths on 32-bit and 64-bit processors respectively.

The results are unexpected, and human psychology says that we probably will not trust the results even if every chess program on the planet is tested in the same way and produces identical results. Human nature abhors errors that can be avoided, and we will continue to strive to eliminate these errors that really don't affect anything at all, at least so far as it can be measured.

Appendix I

# endgame positions.

/1k3ppp//3K/7P/5PP// w - -; bm Kd6; id "fine #66"

/2p/3k/1p1p1K//1P1P/2P// w - -; bm b4; id "fine #67"

/k/3p/p2P1p/P2P1P///K/ w - -; bm Kb1; id "fine #70"

/pp5p//PP2k/2P2pp/3K/6PP// w - -; bm c5; id "fine #82"

6k/6p//4K1P//7P// w - -; bm Kf4; id "bote.1"

/5p1p/8/6k//6P/5PP/7K w - -; bm Kh2; id "bote.6"


# tactical positions.

rnbq1b1r/p1pp1p1p/4k/1p1NP1p/2QP1p/5N/PP1B1KPP/n6R w - -; bm Nxg5+; id "m10"

/p3q1kp/1p2Pnp1/3pQ3/2pP4/1nP3N1/1B4PP/6K1 w - -; bm Ba3; id "Botvinnik"

5r1k/6p/1n2Q2p/4p//7P/PP4PK/R1B1q/ w - -; bm Bxh6; id "Cray Blitz - Belle 81"

2r2rk/1bqnbpp/1p1ppnp/pP/N1P1P/P2B1N1P/1B2QPP/R2R2K b - -; bm Bxe4; id "K 22"

/7p/5k/5p/p1p2P/Pr1pPK/1P1R3P/ b - -; bm Rxb2; id "WAC 2"

4r1k/p1qr1p/2pb1Bp/1p5p/3P1n1R/1B3P/PP3PK/2Q4R w - -; bm Qxf4; id "WAC 141"


# middlegame positions.

4r1k/1p2q1pp/pb/2p1Nb/2P2PP/B1P2Q1P/P/3R3K/ b - -; bm Be6;id "mid1"

5rk/1p2q1pp/pb/2p1Nb/2P2P/B1P4P/P3Q1P/4R2K/ b - -; bm Rd8; id "mid 2"

3rr1k/1ppb1qpp/pb3p1n/4P/2P2P1N/2P2NQP/PB4P/3RR2K/ b - -; bm Bc8; id "mid 3"

2rq1rk/pp1bbppp/2n1pn/4N/2BP/1P/PB1N1PPP/2RQ1RK/ w - -; bm a3; id "mid 4"

3rbrk/pp2bppp/q1n1pn/4N/2NP/1PBB1Q/P4PPP/2R2RK/ w - -; bm Rfd1; id "mid 5"

3rbrk/p3bppp/1qn1pn/1p2N/3P/1PBBNQ/P4PPP/2RR2K/ w - -; bm d5; id "mid 6"

References

Beal, D. F. and Smith, M. C., “Multiple probes of transposition tables,” ICCA Journal, Vol. 19, No. 4, pp. 227-233, 1996.

Beal, D. and Smith, M., “Random Evaluation in Chess,” ICCA Journal, Vol. 17, No. 1, pp. 3-9, 1994.

Condon, J. and Thompson, K., “Belle chess hardware,” Advances in Computer Chess 3 (ed. M. Clark), Pergamon Press, pp. 45-54, 1982.

Breuker, D. M., “Memory versus search in games,” Ph.D. Thesis, Universiteit Maastricht, 1998.

Breuker, D. M., Uiterwijk, J. W. H. M., Herik, H. J. van den, “Replacement schemes for transposition tables,” ICCA Journal, Vol. 17, No. 4, pp. 183-193, 1994.

Breuker, D. M., Uiterwijk, J. W. H. M., Herik, H. J. van den, “Replacement schemes and two-level tables,” ICCA Journal, Vol. 19, No. 3, pp. 175-180, 1996.

Breuker, D. M., Uiterwijk, J. W. H. M., Herik, H. J. van den, “Information in transposition tables,” Advances in Computer Chess 8 (eds. H.J. van den Herik and J. W. H. M. Uiterwijk), pp. 199-211. Universiteit Maastricht, Maastricht, The Netherlands, 1997.

Greenblatt, R. D., Eastlake, D. E., and Crocker, S. D., "The Greenblatt chess program," Proceedings of the Fall Joint Computer Conference, pp. 801-810, 1967.

Hyatt, R. M. and Mann, T., "A lock-less transposition table implementation for parallel search chess engines," Journal of the International Computer Games Association, Vol. 25, No. 2, pp. 63-72, 2002.

Nelson, H. L., “Hash Tables in Cray Blitz,” ICCA Journal, Vol. 8, No. 1, pp. 3-13, 1985.

Warnock, T., Wendroff, B., “Search tables in computer chess,” ICCA Journal, Vol. 11, No. 1, pp. 10-13, 1988.

Zobrist, A. L., "A new hashing method with applications for game playing," Technical report 88, Computer Science Department, University of Wisconsin, 1970.