ABDADA
Revision as of 00:27, 19 August 2018


Hannah Höch - Da Dandy [1]

ABDADA (Alpha-Bêta Distribué avec Droit d'Aînesse = Distributed Alpha-Beta Search with Eldest Son Right) is a loosely synchronized, distributed search algorithm developed by Jean-Christophe Weill. The ABDADA search algorithm is based on the Young Brothers Wait Concept (YBWC) (Rainer Feldmann et al. 1986, 1993) [2] [3] and on αβ* (Vincent David 1993) [4]. From YBWC it retains the basic concept of allowing parallel search of a node's younger sons only after the eldest son has been fully evaluated. From αβ*, as well as from Steve Otto and Ed Felten's algorithm (1988) [5], both of which rely on a Shared Hash Table and have all processors start the search simultaneously at the root, ABDADA retains a simple recursive control structure similar to that of a serial algorithm. With the help of additional transposition table information, namely the number of processors currently searching a node, it is possible to control speculative parallelism.
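The "additional transposition table information" can be pictured as an ordinary TT entry extended by a single counter. The following is a minimal sketch in C; the field and type names are illustrative and not taken from Weill's paper:

```c
#include <stdint.h>

/* Hypothetical ABDADA-style transposition table entry: an ordinary
   entry plus nproc, the number of processors searching this node. */
typedef struct {
    uint64_t key;    /* Zobrist hash of the position */
    int16_t  score;  /* stored search result */
    uint8_t  depth;  /* depth at which the score was obtained */
    uint8_t  nproc;  /* processors currently searching this node */
} AbdadaTTEntry;
```

During the speculative phase of the move loop, a son whose entry has nproc > 0 can simply be skipped and deferred to the final phase.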

Recursion

While YBWC is difficult to implement recursively, ABDADA is not. In YBWC, when a processor receives an evaluation answer from one of its slaves, it must be able to produce a jump in the search depth if this evaluation causes a pruning of the current master node. So the values of alpha and beta must at least be kept in arrays indexed by ply distance from the root, and special arrangements must be made to counteract the disruption that might occur from confusing depths. Therefore, YBWC needs a heavily modified iterative search algorithm. In his paper [6], Weill states that using ABDADA within a recursive NegaScout framework was 30% faster than a non-recursive search, which was observed independently by Mark Brockington in 1994 [7]. This can be explained by the fact that in high-level languages and their compilers for certain target platforms, the use of procedure stacks is much better optimized than the use of indexed arrays, as also observed by Niklaus Wirth concerning the Quicksort algorithm [8].

Algorithm

The ABDADA algorithm is described in five steps:

  1. A TT-Entry has a field entry.nproc containing the number of processors currently searching the associated node
  2. All processors start the search simultaneously at the root of the search tree
  3. When a processor enters a node, it increments entry.nproc
  4. When a processor leaves a node, it decrements entry.nproc
  5. The analysis of a position is done in three phases:
    1. The eldest son is analyzed, regardless of the activities of other processors
    2. Next, all other sons not currently being analyzed by other processors are analyzed
    3. In the final phase, any sons still not analyzed are searched, i.e. those whose TT entries indicate they have not yet been searched to the required depth
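Steps 3 and 4 above can be sketched as follows. Since several threads may enter the same node concurrently, the counter should be updated atomically; this sketch assumes a C11 shared-memory implementation, and the names (NodeEntry, enter_node, leave_node, is_busy) are illustrative:

```c
#include <stdatomic.h>

/* Sketch of steps 3 and 4: increment entry.nproc when entering a
   node, decrement it when leaving. Names here are illustrative. */
typedef struct {
    unsigned long long key;  /* position hash */
    atomic_int nproc;        /* processors currently searching this node */
} NodeEntry;

static void enter_node(NodeEntry *e) { atomic_fetch_add(&e->nproc, 1); }
static void leave_node(NodeEntry *e) { atomic_fetch_sub(&e->nproc, 1); }

/* The second phase of step 5 skips sons that are busy elsewhere: */
static int is_busy(NodeEntry *e) { return atomic_load(&e->nproc) > 0; }
```

In a real engine this bookkeeping would be folded into the TT probe and store routines rather than exposed as separate calls.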

Pseudo Code

The C-like pseudo code below (adapted from the Pascal-like pseudo code in Weill's paper) demonstrates the mentioned three phases, controlled by an iteration counter. The boolean parameter exclusiveP indicates whether the node should be searched exclusively and is passed to the TT-probing code via the procedure retrieveAsk. A new value outside the usual value range needs to be defined (ON_EVALUATION); retrieveAsk returns this score if the node is probed in exclusive mode while other processors are evaluating it.

int abdada(const CNode &position, int α, int β, int depth, bool exclusiveP) {
   if (depth == 0) return evaluate( position );
   int best = -oo;

   retrieveAsk(position, α, β, depth, exclusiveP);
   /* generate moves while waiting for the answer ... */
   GenerateMoves(position);
   retrieveAnswer(&α, &β, &best);
   /* The current move is not searched if causing a cutoff or */
   /* in exclusive mode and another processor is currently searching it */
   if ( α >= β || best == ON_EVALUATION )
      return best; /* entry.nproc not incremented */

   bool exclusive;
   int value;
   bool alldone = false;
   for ( int iteration = 0; iteration < 2 && α < β && !alldone; iteration++) {
      alldone = true;
      Move m = firstMove( position );
      while ( (m != 0) && (α < β) ) {
         /* On the first iteration, young sons are searched exclusively, */
         /* so only one processor analyzes a given son at a time */
         exclusive = (iteration == 0) && !isFirstMove( m );
         value = -abdada( position * m, -β, -max(α, best), depth-1, exclusive );
         if ( value == -ON_EVALUATION ) { /* exclusivity refused */
            alldone = false;
         } else if ( value > best ) {
            best = value;
            if ( best >= β ) goto endsearch;
         }
         m = nextMove( position );
      }
   }
endsearch:
   storeHash( position, α, β, depth, best );
   return best;
}
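The retrieveAsk/retrieveAnswer pair above hides the distributed transposition table traffic. A much-simplified, single-table sketch of the probing decision might look like the following; the names (Entry, probe) and the flat structure are illustrative assumptions, not Weill's distributed implementation:

```c
#include <limits.h>

/* ON_EVALUATION must lie outside the range of real scores. */
enum { ON_EVALUATION = INT_MAX - 1 };

typedef struct {
    unsigned long long key;
    int depth, score, nproc;
} Entry;

/* Simplified probe: refuse exclusive access to a busy node; otherwise
   either reuse a stored result that is deep enough, or claim the node. */
static int probe(Entry *e, int depth, int exclusiveP) {
    if (exclusiveP && e->nproc > 0)
        return ON_EVALUATION;  /* deferred: entry.nproc is not incremented */
    if (e->depth >= depth)
        return e->score;       /* stored result deep enough to reuse */
    e->nproc++;                /* this processor now searches the node */
    return -INT_MAX;           /* no usable score: caller must search */
}
```

In abdada above, a son that comes back as ON_EVALUATION during the exclusive iteration simply clears alldone, so the move is retried in the final phase.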

Implementations

Frenchess

ABDADA was implemented in Frenchess, running on a Cray T3D with 128 DEC Alpha 21064 processors. Frenchess participated at the WCCC 1995, finishing fourth, tied with the Deep Blue Prototype [9]. ABDADA was also implemented inside an Othello program called BUGS [10].

Smash

The GPL open source chess engine Smash by Maurizio Sambati demonstrates an ABDADA implementation in C++ [11] [12] .

Forum Posts

1997 ...

Re:Parallel searching by Mark Brockington, rgcc, March 22, 1997

External Links

Algorithm

Cutoff Checks from Parallel Search by Tom Kerrigan


References

  1. Hannah Höch - Da Dandy, 1919
  2. Rainer Feldmann, Peter Mysliwietz, Oliver Vornberger (1986). A Local Area Network Used as a Parallel Architecture. Technical Report 31, Paderborn University
  3. Rainer Feldmann (1993). Game Tree Search on Massively Parallel Systems. Ph.D. Thesis, pdf
  4. Vincent David (1993). Algorithmique parallèle sur les arbres de décision et raisonnement en temps contraint. Etude et application au Minimax = Parallel algorithm for heuristic tree searching and real-time reasoning. Study and application to the Minimax, Ph.D. Thesis, École nationale supérieure de l'aéronautique et de l'espace, Toulouse, France
  5. Ed Felten, Steve Otto (1988). Chess on a Hypercube. The Third Conference on Hypercube Concurrent Computers and Applications, Vol. II-Applications (ed. G. Fox), pp. 1329-1341
  6. Jean-Christophe Weill (1996). The ABDADA Distributed Minimax Search Algorithm. Proceedings of the 1996 ACM Computer Science Conference, pp. 131-138. ACM, New York, N.Y., reprinted ICCA Journal, Vol. 19, No. 1, zipped postscript
  7. Mark Brockington (1994). An Implementation of the Young Brothers Wait Concept. Internal report, University of Alberta
  8. Niklaus Wirth (1976). Algorithms + Data Structures = Programs. p. 100
  9. Marc-François Baudot, Jean-Christophe Weill, Jean-Luc Seret, Michel Gondran (1995). Frenchess: A Cray T3D at the 8th World Computer Chess Championship. zipped ps
  10. Jean-Christophe Weill (1996). The ABDADA Distributed Minimax Search Algorithm. Proceedings of the 1996 ACM Computer Science Conference, pp. 131-138. ACM, New York, N.Y., reprinted ICCA Journal, Vol. 19, No. 1, zipped postscript
  11. Re: interested in making single procesor program multi by Alessandro Scotti, CCC, December 29, 2007
  12. smash.tar.bz2 search.cpp
  13. "How To" guide to parallel-izing an engine by Tom Kerrigan, CCC, August 27, 2017
