'''[[Main Page|Home]] * [[Knowledge]] * Oracle'''

[[FILE:John William Waterhouse oracle 1884.png|border|right|thumb|[[Arts#Waterhouse|John William Waterhouse]] - ''Consulting the Oracle'' <ref>Consulting the Oracle by [[Arts#Waterhouse|John William Waterhouse]], 1884, showing eight priestesses in a temple of prophecy, [https://en.wikipedia.org/wiki/Oracle Oracle from Wikipedia]</ref> ]]

'''Oracle''',
an entity credited with perfect knowledge, that is, [[Evaluation|evaluation]] without [https://en.wikipedia.org/wiki/Statistical_noise noise] or errors, a form of [https://en.wikipedia.org/wiki/Divination divination], and a perfect supervisor in [[Learning|learning]]. For instance, [[Endgame Tablebases|endgame tablebases]] act as an oracle, providing the optimal moves.

=Pre-processing=
A so-called oracle approach is a knowledge instance applied at the [[Root|root]] or the [[Interior Node|interior]] of the [[Search Tree|tree]] with some [[Depth|depth]] left to the [[Horizon Node|horizon]], which guides [[Search|search]] and [[Evaluation|leaf evaluation]] for specific features or [[Pattern|patterns]] that are relevant in particular chess positions and not in others. This applies to first-order evaluation terms by initializing [[Piece-Square Tables]], also dubbed '''Pre Scan Heuristics''' <ref>[http://www.schach-computer.info/wiki/index.php/PSH PSH] from [http://www.schach-computer.info/wiki/index.php/Hauptseite_En Schachcomputer.info Wiki] (German)</ref> as propagated by [[David Kittinger]], as well as to second-order terms if related patterns match, as described by [[Hans Berliner]] in ''Some Innovations Introduced by [[HiTech]]'' <ref>[[Hans Berliner]] ('''1987'''). ''Some Innovations Introduced by Hitech''. [[ICGA Journal#10_3|ICCA Journal, Vol. 10, No. 3]]</ref>.
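A minimal sketch of such a root oracle may help illustrate the idea; all names, features and weights below are assumptions for the sake of the example, not taken from any particular program. A routine runs once before the search and fills piece-square tables from features of the root position, for example steering rooks towards files that are open there and minor pieces towards the enemy king.
<pre>
// Illustrative sketch of a root oracle filling piece-square tables.
#include <algorithm>
#include <array>
#include <cstdlib>

enum Piece { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING, PIECE_NB };

struct RootFeatures {
    std::array<bool, 8> openFile {};   // file holds no pawns in the root position
    int enemyKingFile = 4;             // file of the opposing king at the root
};

using PieceSquareTable = std::array<std::array<int, 64>, PIECE_NB>;

// Runs once before the search; afterwards the evaluation is little more than an
// incrementally maintained sum of pst[piece][square] terms.
void initOracleTables(const RootFeatures& root, PieceSquareTable& pst) {
    for (int sq = 0; sq < 64; ++sq) {
        int file = sq & 7;
        // Rooks are told at the root which files look attractive; if a file gets
        // blocked deep in the tree, the table does not notice - the classic
        // pre-processing drawback discussed below.
        pst[ROOK][sq] = root.openFile[file] ? 20 : 0;
        // Minor pieces are nudged towards the centre and the enemy king's wing.
        int centre   = 3 - std::abs(2 * file - 7) / 2;
        int kingPull = 3 - std::min(std::abs(file - root.enemyKingFile), 3);
        pst[KNIGHT][sq] = 4 * centre + 2 * kingPull;
        pst[BISHOP][sq] = 3 * centre + 2 * kingPull;
    }
}
</pre>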

While [[Piece-Square Tables#Preprocessing|pre-processing]] seems somewhat outdated with today's [[Depth|search depths]] in conjunction with [[Transposition Table|transposition table]] anomalies and resulting [[Search Instability|search instabilities]], oracle approaches may be used to control [[Time Management|time management]], to [[Move Ordering|order moves]], or, as proposed by [[Ronald de Man]], to [[Ronald de Man#ScoringRootMoves|score root moves]] by slightly shifting the [[Window|alpha-beta windows]] <ref>[http://groups.google.com/group/rec.games.chess.computer/msg/ccc2546e26d92f88 Re: computer chess "oracle" ideas...] by [[Ronald de Man]], [[Computer Chess Forums|rgcc]], April 7, 1997</ref>.
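One way such a window shift might be implemented is sketched below; this is not de Man's actual code, and the helpers oracleBias, alphaBeta, makeMove and unmakeMove are assumed to exist in the host engine. Each root move receives a small oracle bias, and the child search window is shifted by that bias, so positionally desirable moves win ties between tactically equivalent lines.
<pre>
// Sketch only: oracle bias as a tiebreak between root moves via shifted windows.
#include <algorithm>
#include <vector>

struct Position;                       // engine-specific, assumed
struct Move { int from, to; };
constexpr int INFINITE_SCORE = 32000;

// Assumed to be provided by the host engine:
int  oracleBias(const Position& pos, const Move& m);   // small positional bias, a few centipawns
int  alphaBeta(Position& pos, int depth, int alpha, int beta);
void makeMove(Position& pos, const Move& m);
void unmakeMove(Position& pos, const Move& m);

struct RootResult { Move best; int score; };

RootResult searchRoot(Position& pos, const std::vector<Move>& rootMoves, int depth) {
    int alpha = -INFINITE_SCORE, beta = INFINITE_SCORE;
    RootResult r { rootMoves.front(), -INFINITE_SCORE };
    for (const Move& m : rootMoves) {
        int bias = oracleBias(pos, m);
        makeMove(pos, m);
        // Shift the child window so that (search score + bias) is what gets
        // compared against alpha and beta at the root.
        int score = -alphaBeta(pos, depth - 1, -(beta - bias), -(alpha - bias)) + bias;
        unmakeMove(pos, m);
        if (score > r.score) {
            r = { m, score };
            alpha = std::max(alpha, score);
        }
    }
    return r;
}
</pre>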

=Quotes=
==Bruce Wright==
<span id="BruceWright"></span>[[Bruce Wright]] on [[Piece-Square Tables#Preprocessing|pre-processing]] in [[Duchess]] <ref>[[Bruce Wright|Bruce C. Wright]] ('''1996'''). ''Routines prior to Main Search''. [[Computer Chess Reports]], Vol. 5, Nos. 3+4, p. 102</ref>
One thing that we used to do with our chess program, [[Duchess]], was to have a special routine that ran before the [[Search|main search]] that identified strong and weak points in each side's respective position - weak squares, [[Passed Pawn|passed pawns]], weak [[King Safety|king protection]], etc. - and tried to grow trees of moves to protect our own weak squares or attack the opponent's weak squares. These would be just sequences of moves by our own or our opponent's pieces and pawns to reach the targeted squares. A table was built for the value of reaching each of these goals for each type of piece and for reaching the middle points along the paths (the value diminishing the further they were from the goal). Then during the search the program would get bonus points for reaching some of the intermediate points in the plan.

This seemed most useful in the [[Endgame|endgame]] when actually reaching the culmination of a play might be beyond the [[Depth|search depth]], and where [[Tactics|tactics]] did not dominate as much as they do in the [[Middlegame|middle game]]; for example, Duchess was quite capable of finding long King maneuvers that might take the King far away from simpleminded heuristics such as "[[King centralization|centralize the King in the endgame]]" and that were too deep to be found by a direct search. It wasn't perfect; it could not take into account the changes in strategy that might be dictated by a radically different structure encountered deep in the search, but it seemed to be better than nothing.
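A rough sketch of such a goal table, with purely hypothetical data structures (Duchess' actual implementation is not documented here): before the main search, each goal square found by the positional analysis seeds diminishing bonuses around it, and during the search the program adds the bonus for the square a piece currently occupies.
<pre>
// Hypothetical plan table: value of reaching a goal square, diminishing with distance.
#include <algorithm>
#include <array>
#include <cstdlib>

constexpr int PIECE_NB = 6;
using PlanTable = std::array<std::array<int, 64>, PIECE_NB>;

int distance(int a, int b) {                       // Chebyshev distance between squares
    return std::max(std::abs((a & 7) - (b & 7)), std::abs((a >> 3) - (b >> 3)));
}

// goalValue is the worth of actually reaching goalSq with this piece type;
// intermediate squares along the way are worth progressively less.
void seedPlan(PlanTable& plan, int piece, int goalSq, int goalValue) {
    for (int sq = 0; sq < 64; ++sq) {
        int bonus = std::max(0, goalValue - 8 * distance(sq, goalSq));
        plan[piece][sq] = std::max(plan[piece][sq], bonus);
    }
}
</pre>
The pre-search routine would call seedPlan once for every goal it identifies, and the search then adds plan[piece][square] for each piece, giving credit for intermediate progress towards plans too deep to complete within the horizon.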

==David Kittinger==
[[David Kittinger]] and [[Scott McDonald]] in 1984 on [[Piece-Square Tables#Preprocessing|Pre Scan Heuristics]] of the [[Super Constellation]] <ref>[[David Kittinger]] and [[Scott McDonald]] ('''1984'''). ''Report from the U.S. Open''. [[Computer Chess Reports|Computer Chess Digest Annual 1984]] pp. 15-33</ref>
A second departure from other commercial programs has been the simplification of the [[Evaluation function|evaluation function]] as applied to the [[Leaf Node|end nodes]] of the tree [[Search|search]]. The programs instead rely heavily on specific chess [[Knowledge|knowledge]] which is concentrated into a special [[Piece-Square Tables#Preprocessing|preprocessor]] which interfaces to the tree search primarily through the [[Score|scores]] associated with specific ply-one [[Moves|moves]]. This idea of a ply-one move preprocessor was originally implemented in the program [[Tech]] by [[James Gillogly]] in the late 1960s. Although Tech only achieved a high 1400 rating running on a large computer, the strategy has a certain appeal. First, chess tree searching has become very efficient, and second, the interaction problems associated with putting ever increasing amounts of chess knowledge in the tree become formidable. It has become apparent that this rather simple approach might contain the structure of a master level microcomputer program.

==Peter Gillgasch==
[[Peter Gillgasch]] on [[DarkThought|DarkThought's]] Oracle, February 1996 <ref>[http://groups.google.com/group/rec.games.chess.computer/msg/b8bdef757df5d5c9 Drawn games (Was Re: Transposition table)] by [[Peter Gillgasch]], [[Computer Chess Forums|rgcc]], February 06, 1996</ref>
After toying with horribly slow code that did endpoint evaluations for the pieces I got the idea (as many did before, but I swear I didn't know that and hence I was quite proud of it) to analyze the root position and to modify the [[Piece-Square Tables|piece/square table]] relative to [[Pawn Structure|pawn structure]] and king locations. The piece/square table scores were also used for [[Move Ordering|move ordering]] to cut down the tree size (this helped a lot since it is basically using the evaluation to order the moves at close to no cost).

Of course the program played '''much''' better than before but it had serious trouble when the pawn structure was highly flexible and if there was a way to lock its bishops with own pawns it was guaranteed to like exactly this line ;)

Later the three of us modified the oracle code (to use Berliner's term) and we were quite successful with it - DarkThought started to beat commercial PC programs. But it was quite hard to make further progress, so a lot of knowledge was moved to the endpoint evaluation up to the point that there was basically no oracle code left. This improved [[Playing Strength|playing strength]] but it was always my impression that we suffered in the "[[Tactics|tactical]]" aspect a bit since the search was no longer fast as hell and I was never convinced that going this way was the best decision.
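A minimal illustration of the near-free move ordering mentioned above, with placeholder types rather than DarkThought's actual code: moves are simply sorted by how much they gain on the oracle's piece/square tables.
<pre>
// Sketch: piece/square tables filled by the root oracle doubling as a move-ordering key.
#include <algorithm>
#include <array>
#include <vector>

struct Move { int piece, from, to; };
using PieceSquareTable = std::array<std::array<int, 64>, 6>;

void orderByOracle(std::vector<Move>& moves, const PieceSquareTable& pst) {
    std::stable_sort(moves.begin(), moves.end(),
        [&](const Move& a, const Move& b) {
            int gainA = pst[a.piece][a.to] - pst[a.piece][a.from];
            int gainB = pst[b.piece][b.to] - pst[b.piece][b.from];
            return gainA > gainB;                  // largest table gain searched first
        });
}
</pre>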
<span id="Donninger"></span>
==Chrilly Donninger==
Quote by [[Chrilly Donninger]] from ''CHE: A Graphical Language for Expressing Chess Knowledge'', 1996 <ref>[[Chrilly Donninger]] ('''1996'''). ''CHE: A Graphical Language for Expressing Chess Knowledge''. [[ICGA Journal#19_4|ICCA Journal, Vol. 19, No. 4]]</ref>
The main design criterion for successor [[Nimzo|Nimzo-3]] was combining the positional play of Nimzo-2 with the [[Tactics|tactical]] strength of a program like [[Fritz]]. Nimzo-2 was developed on an [[x86|486/50 MHz]] [[IBM PC|PC]], which calculated about 3,000 [[Nodes per second|nodes per second]]. Thereof 60 to 70 percent was spent/wasted in the [[Evaluation|leaf evaluation]] routines. Hence, a major improvement in speed and thus in tactical strength could only be obtained by performing most of the evaluation either in the root or the interior of the [[Search Tree|search tree]]. So, Nimzo-3 became a [[Chess Genius|Genius]]/[[Fritz]]-like program with a complex root evaluation, called Oracle, similar to [[Hans Berliner|Berliner's]] Oracle, and with a very simple, mainly first order evaluation at the [[Leaf Node|leaves]] <ref>This approach seems to have been invented by [[Kaare Danielsen]] for his program [[CXG Star Chess#Advanced|CXG Advanced Star Chess]], Quote from [[Chrilly Donninger|Donninger's]] CHE paper</ref>. Nimzo-3 spends about 10 to 20% on leaf evaluation, its node rate has increased by 400% up to 12,000 nodes/second on the same hardware.

==Don Dailey==
[[Don Dailey]] on [[Piece-Square Tables#Preprocessing|Pre-processing]], May 1998 <ref>[https://www.stmintz.com/ccc/index.php?id=18181 Re: What is "pre-processing"] by [[Don Dailey]], [[CCC]], May 07, 1998</ref> :
{| class="wikitable"
|-
| The term pre-processing is normally used in the context of an evaluation function. It's a process where you make most of your evaluation decisions BEFORE the [[Search|search]] begins. Typically you determine where each piece should go and essentially build a 64 square table for each piece type of each color on the chess board. You might decide that the c file is [[Open file|open]] and so you give rooks a big bonus if they can get to the c file. Once you make this decision it doesn't change during the search. So if the c file suddenly gets blocked during the search, the rooks won't know this and still try to get on the c file.

Pre-processing has many advantages and disadvantages. Here is a list of them:

'''Pro's'''
* Very fast, evaluation almost free.
* Trivial to implement.
* Can put huge amounts of knowledge in chess program with essentially no slowdown. This is because all the real work is done in a fraction of a second before the search begins.
* Especially good with [[Pawn Pattern and Properties|pawn patterns]] (but not [[Pawn Structure|pawn structure]]).

'''Con's'''
* The evaluation is very dependent on the [[Root|starting position]]. Deeper searches suffer more than shallow ones. Ironically, you can do deeper searches if you do lots of pre-processing.
* Very difficult to put in dynamic endings where relationships change quickly.
* Concepts like [[Mobility|mobility]] are covered very inexactly. Pawn structure terms cannot be calculated with any reliability.
* Makes it problematic whether to use [[Transposition Table|hash table]] aging because it affects the hash tables in strange ways. The problem is that 2 identical positions can have different scores if they are descendants of different starting positions.

There are probably other pro's and con's I didn't think of. Many really good programs use this method, but almost always there is a mixture of pre-processing with more dynamic stuff like pawn structure which might be hashed, or calculated quickly with incremental updating schemes. There is quite a bit of diversity and opinion on this subject. Some swear by it and others shun it completely. I don't use it any more but my older programs did. My feeling is that with ever increasing depth, it becomes more and more troublesome. Imagine a program doing 60 ply searches. The end nodes of the tree will not bear very much resemblance to the initial position where the decisions were made. But since we don't usually look 60 ply deep, pre-processing is still a workable method.

But I believe you can still write very strong programs that do a substantial amount of pre-processing. [[Fritz]] is one example. [[Chess Genius|Genius]] also does substantial pre-processing. I think probably more programs use this than not. My old program [[RexChess]] was almost all pre-process with a little dynamic pawn structure mixed in! Pre-processing does not necessarily have to be pure piece/sq tables. You can make other evaluation decisions in advance too and prepare them for fast execution during the search.
|}

==Ulrich Türke==
Reply by [[Ulrich Türke]] <ref>[https://www.stmintz.com/ccc/index.php?id=18203 Re: What is "pre-processing"] by [[Ulrich Türke]], [[CCC]], May 08, 1998</ref> :
I too think that pre-processing is still done in most of the current chess programs, more or less. I am considering the possibility of a moderated pre-processing, i.e. applying the time-consuming pre-processing evaluation parts not at the root but to nodes at a [[Depth|depth]] of, say, half of the aimed search depth. Thus, one would still have the main advantage, saving a lot of calculation time, because the number of pre-processed nodes would still be a microscopic fraction of the [[Leaf Node|leaf nodes]]. But on the other hand, the pre-processed knowledge would be applied to nodes being far closer to the leaf nodes (in your example "only" 30 ply instead of 60). Opinions?

==Amir Ban==
Note by [[Amir Ban]] <ref>[https://www.stmintz.com/ccc/index.php?id=18213 Re: What is "pre-processing"] by [[Amir Ban]], [[CCC]], May 08, 1998</ref> :
It's a fair but simplistic description of [[Junior]].

==Don Dailey==
[[Don Dailey]] again <ref>[https://www.stmintz.com/ccc/index.php?id=18209 Re: What is "pre-processing"] by [[Don Dailey]], [[CCC]], May 08, 1998</ref> :
I've been considering this type of approach for a long time. I think it definitely has merit. It would suffer from [[Transposition Table|hash table]] anomalies but there are ways to deal with this, the primary one to simply ignore them.

There are a few issues to be dealt with but my feeling is that you might do well with this scheme. Scoring might be a little fuzzy at times but that might be ok. Probably the scheme would be to pre-process at every level up to MAX_DEPTH-n, where n might be at least 3 or 4 ply. Keep in mind that pre-processing is incredibly expensive even compared to fully dynamic processing, so I suspect you cannot get very close to the leaf nodes. Of course 3 or 4 ply away might be considered pretty close if you were doing 14 ply searches.
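A sketch of that scheme, with hypothetical names: the expensive oracle is re-run at any interior node that is still at least a few plies away from the horizon, so the tables used near the leaves were built from a position much closer to them than the root.
<pre>
// Sketch only: refreshing the pre-processed tables at interior nodes far from the horizon.
#include <array>

struct Position;                                        // engine-specific, assumed
using PieceSquareTable = std::array<std::array<int, 64>, 6>;

void buildOracleTables(const Position& pos, PieceSquareTable& pst);  // the expensive pre-processor
int  evaluateWithTables(const Position& pos, const PieceSquareTable& pst);

constexpr int REFRESH_MARGIN = 4;                       // "at least 3 or 4 ply" from the horizon

int search(Position& pos, int depth, int alpha, int beta, PieceSquareTable pst) {
    if (depth >= REFRESH_MARGIN)
        buildOracleTables(pos, pst);                    // pst is passed by value, so the
                                                        // refresh only affects this subtree
    if (depth == 0)
        return evaluateWithTables(pos, pst);            // cheap, mostly table lookups
    // ... generate moves, recurse on the children with the (possibly refreshed) tables,
    //     updating alpha and beta as in any alpha-beta search ...
    return alpha;                                       // placeholder for the usual search loop
}
</pre>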

The way I got this idea (which I'm not taking credit for) actually began when I was programming the ending [[KBNK Endgame|Bishop and Knight vs King]] with a pre-processor program of mine a few years ago. This is a very good exercise for someone wanting to understand some of the difficulties of pre-processing. I tested with a 3 ply search, and tuned the thing up to do quite well. But it turned out to not work so well at other levels! It turns out that no matter what table you use, they will tend to be optimum for a specific depth only. It was almost funny because I kept figuring out which positions in the search were messing it up and changed the tables accordingly. These anticipations were wrong though at different levels! Finally though, I extracted the most general principles possible and created a table that was almost static (not based as much on the placing of the pieces) and it did ok, but not as good as the depth specific ones. Some pre-processors will suggest a move and for this ending that would work well. The idea is to give a bonus to the whole search tree under some root move. For example I might give a small bonus for playing any move that's a pin. This makes the program slightly in favor of playing a pin. I call these "ply 1 incentives" but there is probably better terminology in use. This is also another example of pre-processing. See note at end of my post.

But your idea would solve this and a host of other problems. You could make very good guesses about configurations if you knew the possible changes to a position were limited. This would enable you to be more aggressive about the knowledge you included.

Here is another idea I had which is more dynamic in nature. It occurred to me that you could do a fairly decent evaluation based on pawn position only, the idea being a separate set of tables for each pawn structure. These could be built by the pawn structure hash table mechanism most programs have. Most pieces can be judged well by the pawn structure around them.

I did not pursue this idea because I fear it may still be too slow. If my hash table hit rate was say 95 percent, then 1/20 of the positions would require a pre-processing step. While this does not seem like much, you have to consider how expensive building the table is. It's pretty expensive, especially if you want sophistication. But on top of that you must recompute the positional score each time the pawn structure changes, EVEN if the table itself is ready to go. This should be fairly fast but not trivial in terms of execution speed. You have already killed some of the speed you were hoping to gain. Is the idea workable? Maybe, but I'm skeptical. Also, the pawn structure is not the only relevant factor so other evaluation would no doubt be needed. For instance you cannot look at the pawn structure and know for sure that you are in an endgame. You could make assumptions though based on the exact position you had when you built the tables.
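A rough sketch of this pawn-structure-keyed idea, with hypothetical structures: the pawn hash entry stores, besides the usual pawn-structure score, a set of piece/square tables derived from that structure, so the expensive table building is only paid on a pawn-hash miss.
<pre>
// Sketch only: piece/square tables cached per pawn structure in the pawn hash.
#include <array>
#include <cstdint>
#include <unordered_map>

using PieceSquareTable = std::array<std::array<int, 64>, 6>;

struct PawnEntry {
    int pawnScore = 0;                 // ordinary pawn-structure evaluation
    PieceSquareTable pst {};           // tables derived from this pawn structure
};

struct Position;                                       // engine-specific, assumed
std::uint64_t pawnKey(const Position& pos);            // Zobrist key over pawns only
PawnEntry     buildPawnEntry(const Position& pos);     // the expensive table-building step

std::unordered_map<std::uint64_t, PawnEntry> pawnHash;

const PawnEntry& probePawnEntry(const Position& pos) {
    std::uint64_t key = pawnKey(pos);
    auto it = pawnHash.find(key);
    if (it == pawnHash.end())                          // the ~1-in-20 misses estimated above
        it = pawnHash.emplace(key, buildPawnEntry(pos)).first;
    return it->second;
}
</pre>
Even on a hit, the positional score still has to be recomputed from the cached tables whenever the pawn structure changes, which is exactly the cost the quote is skeptical about.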

Anyway the idea is interesting and I sometimes toy with the idea of trying it out, but it seems like a lot of work for an experiment that may not pay off. It could well be that someone dedicated to making it work might find imaginative solutions to some of the problems. But your idea may be more sound. It may very well be that some are using it. I've never heard of a specific example of it though.

I think an old program called [[Tech]] used the method of "ply 1 move incentives." There was no evaluation at all (the way I remember it) except for [[Material|material]] value and a list of moves was prepared in order of goodness from a pre-processor at the root. The program simply played the first move it came to in the move list that was at least as good tactical as any other. I'll bet [[Robert Hyatt|Bob Hyatt]] knows about this one. I had a nice discussion once with [[James Gillogly|the programmer]] when he was giving a talk on [[Transposition Table#bits|hash table collisions]] in chess programs.

==Vincent Diepeveen==
[[Vincent Diepeveen]] on [[Piece-Square Tables#Preprocessing|Pre-processing]], October 2000 <ref>[https://www.stmintz.com/ccc/index.php?id=132894 Re: What is Prescan Heuristics ??] by [[Vincent Diepeveen]], [[CCC]], October 13, 2000</ref>
{| class="wikitable"
|-
| It's pretending you know in advance what the best move is and just letting the program figure out the tactics. It's fast, and of course my first line is a crude outline of what it is, but it's definitely not comparable to a leaf evaluation.

So, the two things set against each other:

* Prescan heuristics / piece-square tables:
: fill a table BEFORE you search and apply the knowledge there, for example for each piece at each square, and in your [[Make Move|makemove]] only add the score from the tables and do nothing in your leaves (I don't see why one would need a [[Quiescence Search|qsearch]]; in fact, for such programs a 'guess' function should be plenty). No evaluation needed. Even [[Passed Pawn|passers]] you can easily do [[Incremental Updates|incrementally]], no problem.

* Leaf eval: you can of course use the makemove to build up data structures, but the evaluation is NOT dependent upon the moves made, only upon the position, and you scan the board position in each leaf for chess knowledge.

So the big difference shows for example when exchanging queens: [[Chess Tiger]] (I don't know whether it's still a pre-processor) used to be happy before the queen exchange, then after the queen exchange it was suddenly very unhappy. Old [[Fritz]] versions, except the latest 6a, 6b etcetera, were also pre-processors.

Of course many programmers figured out that it is very hard to play chess using only a pre-processor, so they usually do a few big scores in the leaf eval, like passed pawns. Many old programs simply had no king safety at all.

[[Nimzo]] is a classical case of pre-processing too. If you check out its opening book you'll see it hardly plays openings with very tough pawn structures without having all the tough lines in book, as otherwise the pre-processor makes big mistakes, since it doesn't see the position change from the root, of course.

Now where is the big advantage of pre-processors, apart from the fact that you can do it faster, and why have they been so successful in the past? I think a very important reason is that a pre-processor is also very easy to debug compared with a leaf evaluation.
|}
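A minimal sketch of the makemove-only style described above (illustrative names, not any particular engine): the piece/square and material sums are maintained incrementally per side, so a leaf "evaluation" reduces to adding a few already-known integers instead of scanning the board.
<pre>
// Sketch only: incrementally maintained piece/square and material sums.
#include <array>

using PieceSquareTable = std::array<std::array<int, 64>, 6>;

struct Move { int piece, from, to; int captured = -1; int captureSq = -1; };

struct SearchState {
    PieceSquareTable pst[2] {};      // filled once per side by the pre-processor at the root
    int positional[2] = {0, 0};      // incrementally maintained piece/square sums
    int material[2]   = {0, 0};
};

void makeMoveScore(SearchState& s, int us, const Move& m,
                   const std::array<int, 6>& pieceValue) {
    int them = us ^ 1;
    s.positional[us] += s.pst[us][m.piece][m.to] - s.pst[us][m.piece][m.from];
    if (m.captured >= 0) {
        s.positional[them] -= s.pst[them][m.captured][m.captureSq];
        s.material[them]   -= pieceValue[m.captured];
    }
}

// The leaf "evaluation" from the side to move's point of view: a few additions, no board scan.
int evaluate(const SearchState& s, int stm) {
    return (s.material[stm] + s.positional[stm])
         - (s.material[stm ^ 1] + s.positional[stm ^ 1]);
}
</pre>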

=See also=
* [[Endgame Tablebases]]
* [[Incremental Updates]]
* [[Interior Node Recognizer]]
* [[Lazy Evaluation]]
* [[Piece-Square Tables#Preprocessing|Piece-Square Tables | Pre-processing]]
* [[Ronald de Man#ScoringRootMoves|Ronald de Man | Scoring Root Moves]]

=Publications=
* [[James Gillogly]] ('''1972'''). ''The Technology Chess Program''. Artificial Intelligence, Vol. 3, pp. 145-163. ISSN 0004-3702. Reprinted ('''1988''') in [[Computer Chess Compendium]]
* [[Peter W. Frey]] ('''1986'''). ''Fuzzy Production Rules in Chess''. [[ICGA Journal#9_4|ICCA Journal, Vol. 9, No. 4]]
* [[Hans Berliner]] ('''1987'''). ''Some Innovations Introduced by Hitech''. [[ICGA Journal#10_3|ICCA Journal, Vol. 10, No. 3]]
* [[Hans Berliner]] ('''1989'''). ''Some Innovations Introduced by Hitech''. [[Advances in Computer Chess 5]]
* [[Chrilly Donninger]] ('''1996'''). ''CHE: A Graphical Language for Expressing Chess Knowledge''. [[ICGA Journal#19_4|ICCA Journal, Vol. 19, No. 4]]

=Forum Posts=
* [http://groups.google.com/group/rec.games.chess.computer/browse_frm/thread/99eec6923b0481db computer chess "oracle" ideas...] by [[Robert Hyatt]], [[Computer Chess Forums|rgcc]], April 1, 1997
: [http://groups.google.com/group/rec.games.chess.computer/msg/0df39371422a600c Re: computer chess "oracle" ideas...] by [[Ronald de Man]], [[Computer Chess Forums|rgcc]], April 3, 1997
: [http://groups.google.com/group/rec.games.chess.computer/msg/ccc2546e26d92f88 Re: computer chess "oracle" ideas...] by [[Ronald de Man]], [[Computer Chess Forums|rgcc]], April 7, 1997
* [https://www.stmintz.com/ccc/index.php?id=18179 What is "pre-processing"] by Howard Exner, [[CCC]], May 07, 1998
: [https://www.stmintz.com/ccc/index.php?id=18181 Re: What is "pre-processing"] by [[Don Dailey]], [[CCC]], May 07, 1998
* [http://groups.google.com/group/rec.games.chess.computer/browse_frm/thread/ce25565232355c61 Preprocessing vs leaf evaluating - any preprocessors left?] by [[Tom King]], [[Computer Chess Forums|rgcc]], July 05, 1998
* [https://www.stmintz.com/ccc/index.php?id=35000 Ply 1 scoring factors] by [[Will Singleton]], [[CCC]], December 07, 1998
* [https://www.stmintz.com/ccc/index.php?id=132894 Re: What is Prescan Heuristics ??] by [[Vincent Diepeveen]], [[CCC]], October 13, 2000

=External Links=
* [https://en.wikipedia.org/wiki/Oracle Oracle from Wikipedia]
* [https://en.wikipedia.org/wiki/Oracle_%28disambiguation%29 Oracle (disambiguation) from Wikipedia]
* [https://en.wikipedia.org/wiki/Oracle_%28software_testing%29 Oracle (software testing) from Wikipedia]
: [https://en.wikipedia.org/wiki/ORACLE_%28computer%29 ORACLE (computer) from Wikipedia]
* [https://en.wikipedia.org/wiki/Oracle_machine Oracle machine from Wikipedia]
* [https://en.wikipedia.org/wiki/Matroid_oracle Matroid oracle from Wikipedia]
* [https://en.wikipedia.org/wiki/Prophecy Prophecy from Wikipedia]
* [https://en.wikipedia.org/wiki/Oracle_Corporation Oracle Corporation from Wikipedia]
* [https://en.wikipedia.org/wiki/Oracle_Database Oracle Database from Wikipedia]
* [[Videos#MichaelHedges|Michael Hedges]] - [https://en.wikipedia.org/wiki/Oracle_%28Michael_Hedges_album%29 Oracle] / Fusion of the Five Elements, [https://en.wikipedia.org/wiki/YouTube YouTube] Video
: {{#evu:https://www.youtube.com/watch?v=1H4olP-5ed8|alignment=left|valignment=top}}

=References=
<references />

'''[[Knowledge|Up one Level]]'''
