Home * Knowledge

Samuel Bak - Knowledgeable [1]

Knowledge is the possession of information; its acquisition involves complex cognitive processes: perception, learning, communication, association and reasoning.

Inside a chess program, knowledge is manifested either as procedural or as declarative knowledge. The declarative a priori knowledge about the rules of chess is immanent in the move generator in conjunction with check detection. Further declarative knowledge is coded as the rules of thumb of the evaluation function, as persistent perfect knowledge from retrograde analysis, or as empirical knowledge, mostly from human experience, used to retrieve moves from a hand-crafted opening book. Procedural knowledge is applied in Learning, as well as to back up declarative knowledge by Search. However, the Search versus Knowledge trade-off in computer chess and games refers to heuristic or perfect knowledge.

Search versus Knowledge

The trade-offs of search versus Knowledge [2]

Curves A and B represent a fixed performance level, say 2000 for B and 2200 for A. The curves indicate that there are many ways to achieve that level of performance, either with little knowledge and a lot of search, or vice versa. If a program has reached point P on curve B, it may improve anywhere between search only (point S) and knowledge only (point K).

In Practice

Andreas Junghanns and Jonathan Schaeffer in Search versus knowledge in game-playing programs revisited [3]:

The difficulty lies in quantifying the knowledge axis. Perfect knowledge assumes an oracle, which for most games we do not have. However, we can approximate an oracle by using a high-quality, game playing program that performs deep searches. Although not perfect, it is the best approximation available. Using this, how can we measure the quality of knowledge in the program?
A heuristic evaluation function, as judged by an oracle, can be viewed as a combination of two things: oracle knowledge and noise. The oracle knowledge is beneficial and improves the program's play. The noise, on the other hand, represents the inaccuracies in the program's knowledge. It can be introduced by several things, including knowledge that is missing, over- or under-valued, and/or irrelevant. As the noise level increases, the beneficial contribution of the knowledge is overshadowed.
By definition, an oracle has no noise. We can measure the quality of the heuristic evaluation in a program by the amount of noise that is added into it. To measure this, we add a random number to each leaf node evaluation ... 
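
For illustration, a minimal sketch of the noise experiment described above, not the authors' actual code: a fixed-depth negamax whose leaf evaluations are perturbed by a uniform random offset of a given magnitude. The engine interface (Position, evaluate, generateMoves, makeMove, unmakeMove) is a hypothetical placeholder.

// Minimal sketch of the noise experiment: leaf value = heuristic knowledge
// plus artificial "noise" of a given magnitude.  The engine interface below
// is an assumed placeholder, not any particular program's API.
#include <algorithm>
#include <random>
#include <vector>

struct Position;                                  // assumed engine-specific board type
int  evaluate(const Position&);                   // heuristic evaluation, side to move
std::vector<int> generateMoves(const Position&);
void makeMove(Position&, int move);
void unmakeMove(Position&, int move);

static std::mt19937 rng(42);

int noisyEval(const Position& pos, int noiseLevel) {
    std::uniform_int_distribution<int> dist(-noiseLevel, noiseLevel);
    return evaluate(pos) + dist(rng);             // knowledge + noise
}

int noisyNegamax(Position& pos, int depth, int alpha, int beta, int noiseLevel) {
    if (depth == 0)
        return noisyEval(pos, noiseLevel);
    int best = -30000;
    for (int move : generateMoves(pos)) {
        makeMove(pos, move);
        best = std::max(best, -noisyNegamax(pos, depth - 1, -beta, -alpha, noiseLevel));
        unmakeMove(pos, move);
        alpha = std::max(alpha, best);
        if (alpha >= beta) break;                 // beta cutoff
    }
    return best;
}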

HiTech versus LoTech

At the Advances in Computer Chess 5 conference in 1987, Hans Berliner et al. introduced an interesting experiment: HiTech, with its sophisticated evaluation, competed against LoTech, almost the same program but with a rudimentary evaluation, at different search depths [4]. According to Peter Kouwenhoven [5], Berliner's conclusion was that more was to be won by increasing HiTech's knowledge at constant speed rather than by further increasing its speed while keeping knowledge constant. The abstract from the extended 1990 paper [6]:

Chess programs can differ in depth of search or in the evaluation function applied to leaf nodes or both. Over the past 10 years, the notion that the principal way to strengthen a chess program is to improve its depth of search has held sway. Improving depth of search undoubtedly does improve a program's strength. However, projections of potential gain have time and again been found to overestimate the actual gain.
We examine the notion that it is possible to project the playing strength of chess programs by having different versions of the same program (differing only in depth of search) play each other. Our data indicates that once a depth of “tactical sufficiency” is reached, a knowledgeable program can beat a significantly less knowledgeable one almost all of the time when both are searching to the same depth. This suggests that once a certain knowledge gap has been opened up, it cannot be overcome by small increments in searching depth. The conclusion from this work is that extending the depth of search without increasing the present level of knowledge will not in any foreseeable time lead to World Championship level chess. The approach of increasing knowledge has been taken in the HiTech chess machine. 
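
As a rough sketch of the experimental setup, and under stated assumptions rather than Berliner's actual code: the same search is run once with a knowledge-rich evaluation ("HiTech") and once with a rudimentary, essentially material-only evaluation ("LoTech"), and the two are matched against each other at equal fixed depths. playGame and the two evaluation functions are hypothetical placeholders.

// Sketch of a Berliner-style self-play experiment: identical search,
// different evaluation, same fixed depth for both sides.
#include <cstdio>
#include <functional>

struct Position;                               // assumed board type
using EvalFn = std::function<int(const Position&)>;

int evalKnowledgeable(const Position&);        // "HiTech": material plus positional terms
int evalRudimentary(const Position&);          // "LoTech": essentially material only

// Plays one game between two engines differing only in their evaluation,
// both searching to the same fixed depth; returns +1 / 0 / -1 from A's view.
int playGame(EvalFn evalA, EvalFn evalB, int depth);

int main() {
    const int games = 100;
    for (int depth = 4; depth <= 9; ++depth) { // depths past "tactical sufficiency"
        int netScore = 0;
        for (int g = 0; g < games; ++g)
            netScore += playGame(evalKnowledgeable, evalRudimentary, depth);
        std::printf("depth %d: knowledgeable side net score %+d over %d games\n",
                    depth, netScore, games);
    }
}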

Ed Schröder's Conclusion

Ed Schröder concluded that HiTech's extra knowledge was worth just one ply in computer versus computer play [7][8], and further elaborated on tactical evaluation knowledge that was helpful in early Rebel versions searching 5-7 plies, but no longer in Rebel searching 13-15 plies on 2001 hardware [9][10]:

Some specific chess knowledge has become outdated through the years due to the speed of today's computers. An example: in the early days of computer chess, say the period 1985-1989, the hardware I had was a 6502 running at 5 MHz. Rebel at that time could only search 5-7 plies at tournament time control. Such a low depth guarantees you one thing: horizon effects all over, thus losing the game.
To escape from the horizon effect all kinds of tricks were invented: chess knowledge about dangerous pins, knight forks, double attacks and overloading of pieces, rewarding those aspects in eval. Complicated and processor-time-consuming software it was (15-20% less performance), but it did the trick of escaping from the horizon effect in a reasonable way.
Today we run chess programs on 1500 MHz machines, and instead of the 5-7 plies Rebel now gets 13-15 plies in the middle game, and the horizon effect, which was a major problem at 5 MHz, has slowly faded away.
So I wondered: what if I throw that complicated "anti-horizon" code out of Rebel, is it still needed? So I tried, and found out that Rebel played as well with the "anti-horizon" code as without it. In other words, the net gain was a "free" speed gain of 15-20%, thus an improvement.
One aspect of chess programming is that your program is in a constant state of change due to the state of the art of the available hardware. I am sure that for a Rebel at 10 GHz, several parts of Rebel would need a face-lift to get the maximum out of the new speed monster. 
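
As an illustrative sketch, and not Rebel's actual code, the kind of "anti-horizon" evaluation knowledge Schröder describes can be pictured as static bonuses and penalties for tactical motifs, for instance penalising a knight fork that the evaluation can already see while a shallow search cannot. The board interface (knightAttacks, valuablePieces, knights, popcount, lsb_pop) and the penalty value are assumptions.

// Sketch of an "anti-horizon" tactical evaluation term: penalise an enemy
// knight that currently forks two or more valuable pieces, so a shallow
// search senses the threat before it crosses the horizon.
#include <cstdint>

struct Position;                                    // assumed board type
uint64_t knightAttacks(int square);                 // attack bitboard of a knight on 'square'
uint64_t valuablePieces(const Position&, int side); // queens, rooks, king of 'side'
uint64_t knights(const Position&, int side);
int popcount(uint64_t b);
int lsb_pop(uint64_t& b);                           // pop and return lowest set square

// Penalty (from White's point of view) for each black knight forking
// two or more valuable white pieces.
int knightForkTerm(const Position& pos) {
    const int FORK_PENALTY = 60;                    // hand-tuned centipawn guess
    int score = 0;
    uint64_t blackKnights = knights(pos, /*side=*/1);
    while (blackKnights) {
        int sq = lsb_pop(blackKnights);
        uint64_t hits = knightAttacks(sq) & valuablePieces(pos, /*side=*/0);
        if (popcount(hits) >= 2)                    // two targets attacked at once: a fork
            score -= FORK_PENALTY;
    }
    return score;                                   // added into the full evaluation
}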

Search versus Evaluation

Mark Uniacke in a reply to Ed Schröder, on Search or Evaluation, mentioning the 1st Computer Olympiad, Null Move Pruning, HIARCS, Woodpusher, E6P, Rebel, Fritz ... [11]:

I am not saying search is the only reason but that search is more significant in the Elo jump than other factors. Thanks for the interesting history which in general I agree with but I should add for clarification...
In 1989 I was exposed to the null move in HIARCS' first official computer tournament at the Olympiad in London. John Hamlen had written his program Woodpusher as part of his M.Sc. project investigating the null move, and although Woodpusher did not do well in that tournament I had the good fortune to discuss the null move with John and to read his project, which interested me very much. John and I had many discussions on computer chess over the following months.
That Olympiad probably had an impact on you too and not only because Rebel won Gold above Mephisto X and Fidelity X for the first time but because there was a program there running quite fast (at that time in history) called E6P (so named because it could reach 6 ply full width!) running on an Acorn ARM based machine. Jan was operating Rebel at the tournament as you know and I noticed Jan take a keen interest in this new machine. 
Frans was using null move in Fritz 2 in Madrid 1992 (I am not sure about Fritz 1). I am sure you remember that tournament fondly. I remember at Madrid having a discussion with a few programmers including Richard about what on earth Fritz 2 was doing to reach the depths it was attaining. This seed of interest made me investigate search enhancements carefully after Madrid and it was then that I began re-investigating the null move. It was not long before HIARCS began using null moves too, and by Christmas 1992 I had a working implementation which gave a big jump over Hiarcs 1, which had only been released shortly before. I had much fun playing my Hiarcs 1.n against Mephisto Risc 1 in late 1992 and early 1993, and that sort of closes the circle on the E6P story. Even so, the null move idea still evolved and my implementation today is much better than it was initially, and so it can be for other ideas. We can all profit from these techniques but some gain more than others if they find new ways to exploit them.

Rebel 7/8/9 and Hiarcs 5/6/7 battled hard in those years and my big jump to the top of the rating lists happened partly because of a search change to Hiarcs 5/6 which led at one stage to a 71 Elo lead on the SSDF. I recall Rebel and Hiarcs exchanging the lead on the SSDF list a number of times in those years. So we agree the impact search has made on computer chess progress is clear. Now we diverge on what might be the reason for new progress in the field.
Clearly we have good search enhancements on cut nodes and all nodes, but that does not mean they cannot be improved and enhanced, or that another technique might not in fact prove to be superior or complementary. Meanwhile there can be no doubt that progress is made positionally irrespective of search, but I do not believe it can be the reason for a breakthrough jump in chess strength, rather a steady climb.

But wait!
Maybe there is a middle way, a third avenue of improvement which sort of falls between the two pillars of search and evaluation and that is search intelligence. Something both of us have practiced in our programs for years but maybe not properly exploited. The ability of the eval to have a more significant impact on the search than traditionally has been the case. I think this area has been relatively unexplored and offers interesting potential. 
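
Null move pruning, the technique running through the exchange above, is in essence: give the opponent a free move, and if a reduced-depth search still fails high, cut the node. Below is a minimal sketch under the assumption of a generic alpha-beta engine interface and the classical reduction R = 2; it is not HIARCS', Fritz's or Woodpusher's implementation.

// Minimal null move pruning sketch.  In a real engine this check sits at the
// top of the recursive alpha-beta routine itself; the interface below
// (Position, inCheck, makeNullMove, search, ...) is a hypothetical placeholder.
struct Position;
bool inCheck(const Position&);
void makeNullMove(Position&);
void unmakeNullMove(Position&);
int  search(Position& pos, int depth, int alpha, int beta);   // ordinary alpha-beta

const int R = 2;   // classical depth reduction for the null move search

int searchWithNullMove(Position& pos, int depth, int alpha, int beta) {
    // Conditions kept deliberately simple: not in check, enough remaining depth.
    if (depth >= R + 1 && !inCheck(pos)) {
        makeNullMove(pos);                                          // pass the move
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1);  // zero-window
        unmakeNullMove(pos);
        if (score >= beta)
            return beta;                                            // null move cutoff
    }
    return search(pos, depth, alpha, beta);                         // normal search
}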

Knowing

Declarative Knowledge

A Priori

Heuristic Knowledge

Empirical Knowledge

Perfect Knowledge

Perfect Knowledge usually incorporates analysis or stored results from exhaustive search, and requires no further search at interior nodes other than the root. An oracle might be considered a "perfect" evaluation.

Interior Node Recognizer

Endgame Databases

as probed inside a Recognizer framework, if a certain material constellation is detected.
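
A hedged sketch of such an interior node recognizer: when the material signature of the position matches a known constellation, an exact score from perfect knowledge is returned and no further search below the node is needed. materialKey, probeKPK and the key constant are hypothetical placeholders, not a particular engine's API.

// Sketch of an interior node recognizer keyed on the material constellation.
#include <cstdint>
#include <optional>

struct Position;                              // assumed board type
uint32_t materialKey(const Position&);        // hash/index of the material constellation
std::optional<int> probeKPK(const Position&); // exact score from a KPK bitbase, if any

const uint32_t KEY_KPK = 0x0101;              // illustrative constant

// Returns an exact score if a recognizer fires, otherwise std::nullopt and
// the normal search and evaluation continue.
std::optional<int> probeRecognizers(const Position& pos) {
    switch (materialKey(pos)) {
    case KEY_KPK:
        return probeKPK(pos);                 // win/draw known perfectly
    default:
        return std::nullopt;                  // no recognizer for this material
    }
}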

Procedural Knowledge

See also

Publications

1970 ...

1975 ...

1980 ...

1985 ...

Revised as Hans Berliner, Carl Ebeling (1990). Hitech. Computers, Chess, and Cognition

1990 ...

1995 ...

2000 ...

2005 ...

2010 ...

2015 ...

Forum Posts

1996 ...

2000 ...

2005 ...

Re: Search or Evaluation? by Mark Uniacke, Hiarcs Forum, October 14, 2007

2010 ...

2015 ...

Re: Chessprogams with the most chessknowing by Alvaro Cardoso, CCC, February 18, 2017
Re: Chessprogams with the most chessknowing by Mark Lefler, CCC, February 18, 2017 » Komodo
Re: Chessprogams with the most chessknowing by Marco Costalba, CCC, February 19, 2017 » Stockfish

External Links

Knowledge

Related Topics

Types of Knowledge

Musicvideo

John McLaughlin, Billy Cobham, Rick Laird, Jan Hammer, Jerry Goodman

References

  1. Chess in the Art of Samuel Bak, Center for Holocaust & Genocide Studies, University of Minnesota
  2. Jonathan Schaeffer (1997). One Jump Ahead: Challenging Human Supremacy in Checkers, Springer, ISBN 0-387-94930-5, Prelude to Disaster, pp. 255-256 or pp. 249-250 in the Second Edition, 2009
  3. Andreas Junghanns, Jonathan Schaeffer (1997). Search versus knowledge in game-playing programs revisited. IJCAI-97, Vol 1, pdf, 3 Search Versus Knowledge: Practise
  4. Hans Berliner, Gordon Goetsch, Murray Campbell, Carl Ebeling (1989). Measuring the Performance Potential of Chess Programs, Advances in Computer Chess 5
  5. Peter Kouwenhoven (1987). Advances in Computer Chess, The 5th Triennial Conference. ICCA Journal, Vol. 10, No. 2 (report)
  6. Hans Berliner, Gordon Goetsch, Murray Campbell, Carl Ebeling (1990). Measuring the Performance Potential of Chess Programs. Artificial Intelligence, Vol. 43, No. 1
  7. Chess in 2010 by Ed Schröder, The Berliner Experiment
  8. The end of computer chess progress? by Ed Schröder, rgcc, March 06, 1999
  9. Re: Intelligent software, please by Ed Schröder, CCC, November 26, 2001
  10. Schröder, Ed (German) from Schachcomputer.info - Wiki, which also refers to the post by Ed Schröder
  11. Re: Search or Evaluation? by Mark Uniacke, Hiarcs Forum, October 14, 2007
  12. UCI Machine Learning Repository: Chess (King-Rook vs. King-Pawn) Data Set
  13. Patanjali from Wikipedia
  14. Dap Hartmann (1988). Alen D. Shapiro: Structured Induction in Expert Systems. ICCA Journal, Vol. 11, No. 4
  15. Knowledge discovery from Wikipedia
  16. Dap Hartmann (2010). How can Humans learn from Computers? Review on Matej Guid's Ph.D. thesis, ICGA Journal, Vol. 33, No. 4
  17. Original ZDF broadcast "Sonntagskonzert", September 17, 1972, recorded at Kongresshalle Deutschen Museum, Munich, "Jazz Now", August 17, 1972, curator Joachim-Ernst Berendt, see The Mahavishnu Orchestra On Film, MUENCHEN-72_KPL.pdf, Diese Woche im Fernsehen, Der Spiegel 38/1972 (German), 1972 Summer Olympics
