NUMA

Possible NUMA system [1]

NUMA (Non-Uniform Memory Access),
a multiprocessing memory design where the main memory is partitioned between processors. As opposed to SMP, where all processors compete for access to one centralized shared memory bus, which makes it difficult to scale well beyond 8 to 12 CPUs [2], NUMA splits the main memory into so-called nodes with separate memory buses for subsets of processors, and a high-speed interconnect between nodes, either directly at so-called 1-hop distance, or indirectly at 2-hop distance. Despite the high-speed interconnect, NUMA memory access times vary considerably between faster local memory and the remote memory of other nodes. Maintaining coherence of the processor caches adds significant overhead to NUMA systems, which is addressed by ccNUMA, a term mostly used synonymously with current NUMA implementations [3].
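As a hedged illustration, the topology described above can be inspected at runtime. The following minimal C sketch (assuming a Linux system with libnuma installed, linked with -lnuma; not taken from any engine) prints the node count and the inter-node distance matrix, where 10 denotes local access and larger values 1-hop or 2-hop remote access:

#include <stdio.h>
#include <numa.h>   /* libnuma; link with -lnuma */

int main(void) {
    if (numa_available() < 0) {
        printf("No NUMA support on this system\n");
        return 1;
    }
    int nodes = numa_num_configured_nodes();
    printf("%d NUMA node(s)\n", nodes);
    /* numa_distance() reports relative access cost:
       10 = local, ~20 = 1-hop remote, higher = 2-hop remote;
       0 means the distance could not be determined */
    for (int from = 0; from < nodes; from++) {
        for (int to = 0; to < nodes; to++)
            printf("%3d ", numa_distance(from, to));
        printf("\n");
    }
    return 0;
}

On a hypothetical two-node system this might print a 2x2 matrix with 10 on the diagonal and 20 elsewhere.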

x86

AMD implemented NUMA with its Opteron processor in 2003, using HyperTransport. Intel announced NUMA compatibility for its x86 servers in late 2007, with Nehalem CPUs using QuickPath Interconnect [4].

Considerations

Scheduling threads across the nodes and cores of a system is a complicated topic, since threads may access data independently or share it. ccNUMA-aware operating systems and software face several considerations, such as keeping data local by virtue of first touch [5] [6]. NUMA and processor affinity APIs help application programmers bind threads or processes to NUMA nodes, or allocate memory from a certain node, as the sketch below illustrates.
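A minimal sketch of such binding with the Linux libnuma API (the setup routine and table size are illustrative assumptions, not code from a particular engine): the thread is first restricted to the CPUs of one node, then its table is allocated and touched there, so the pages are committed to local memory:

#include <stdio.h>
#include <string.h>
#include <numa.h>   /* libnuma; link with -lnuma */

/* Hypothetical per-node worker setup, for illustration only */
static void *setup_worker(int node, size_t table_size) {
    /* restrict the calling thread to the CPUs of the given node ... */
    if (numa_run_on_node(node) != 0)
        return NULL;
    /* ... then allocate the worker's table from that node's memory */
    void *table = numa_alloc_onnode(table_size, node);
    if (table != NULL)
        memset(table, 0, table_size);   /* touch pages so they are committed locally */
    return table;
}

int main(void) {
    if (numa_available() < 0)
        return 1;
    const size_t table_size = 1u << 20;   /* 1 MiB, arbitrary example size */
    void *table = setup_worker(0, table_size);
    printf("node 0 table at %p\n", table);
    if (table != NULL)
        numa_free(table, table_size);
    return 0;
}

The same effect follows from the first-touch policy alone: with default local allocation, memory is placed on the node of the thread that first writes it, so pinning a thread before it initializes its data keeps that data local.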

See also

Selected Publications

1998 ...

2000 ...

Memory part 1
Memory part 2: CPU caches
Memory part 3: Virtual Memory
Memory part 4: NUMA support
Memory part 5: What programmers can do

2010 ...

Forum Posts

2000 ...

2010 ...

2015 ...

Re: thread affinity by Robert Hyatt, CCC, July 03, 2015
What Linux compatible Numa aware engines are available? by Dann Corbit, CCC, March 29, 2017 » Linux
Ethereal 10.88 NUMA by Norman Schmidt, CCC, August 24, 2018 » Ethereal
Some NUMA data for Stockfish-dev and Cfish-dev by Louis Zulli, CCC, June 17, 2019 » Stockfish, CFish

External Links

Linux

Windows

x86

References
