NUMA


Possible NUMA system [1]

NUMA (Non-uniform memory access),
a multiprocessing memory design where the main memory is partitioned between processors. As opposed to SMP, where all processors compete for access to a centralized shared memory bus, which makes it difficult to scale well beyond 8 to 12 CPUs [2], NUMA splits the main memory into so-called nodes with separate memory buses for subsets of processors, and high speed interconnection between nodes, reached either directly in so-called 1-hop distance, or indirectly in 2-hop distance. Despite the high speed interconnection, NUMA memory access time varies considerably between faster local memory and the remote memory of other nodes. Maintaining cache coherence of the processor caches adds significant overhead to NUMA systems, which is addressed by ccNUMA, a term mostly used synonymously with current NUMA implementations [3].
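The notion of 1-hop and 2-hop distances can be inspected at runtime. The following minimal C sketch, assuming a Linux system with libnuma installed (compile with -lnuma), merely prints the node distance matrix reported by the kernel, where local memory is reported as 10 and remote memory as larger values.

#include <stdio.h>
#include <numa.h>   /* libnuma, link with -lnuma */

int main(void) {
    if (numa_available() < 0) {
        printf("NUMA is not available on this system\n");
        return 1;
    }
    int max_node = numa_max_node();
    /* Print the node distance matrix: 10 means local memory,
       larger values mean more hops to reach the remote node. */
    for (int from = 0; from <= max_node; from++) {
        for (int to = 0; to <= max_node; to++)
            printf("%4d", numa_distance(from, to));
        printf("\n");
    }
    return 0;
}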

x86

AMD implemented NUMA with its Opteron processor in 2003, using HyperTransport. Intel announced NUMA compatibility for its x86 servers in late 2007, with Nehalem CPUs using the QuickPath Interconnect [4].

Considerations

Scheduling of threads across the nodes and cores of a system is a complicated topic, due to the way independent or shared data is accessed. There are several considerations for ccNUMA-aware operating systems and software, such as keeping data local by virtue of first touch [5] [6], where a memory page is physically placed on the node of the processor that first writes to it rather than the one that allocated it. NUMA and processor affinity APIs help application programmers to bind threads or processes to NUMA nodes, or to allocate memory from a certain node, as sketched below.
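A minimal C sketch of both ideas, assuming a Linux system with libnuma and at least two nodes; the 64 MiB table size and the node numbers are made up for illustration. It allocates one table explicitly on node 0, and places a second table on node 1 by running on that node and relying on first touch.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <numa.h>        /* libnuma, link with -lnuma */

#define TABLE_BYTES (64u * 1024u * 1024u)  /* illustrative 64 MiB per table */

int main(void) {
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "need a NUMA system with at least two nodes\n");
        return 1;
    }

    /* Explicit placement: allocate the first table on node 0. */
    void *table0 = numa_alloc_onnode(TABLE_BYTES, 0);

    /* First-touch placement: run the current thread on node 1, then
       touch the freshly malloc'ed memory so its pages are physically
       allocated on node 1. */
    numa_run_on_node(1);
    void *table1 = malloc(TABLE_BYTES);
    memset(table1, 0, TABLE_BYTES);   /* the first write decides the node */

    printf("table0 placed on node 0, table1 placed on node 1 via first touch\n");

    numa_free(table0, TABLE_BYTES);
    free(table1);
    return 0;
}

A chess engine would typically apply the same pattern per search thread: pin each thread to a node and let it first-touch its own thread-local tables, while sharing only what must be shared, such as the transposition table.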

See also

Selected Publications

1998 ...

2000 ...

Memory part 1
Memory part 2: CPU caches
Memory part 3: Virtual Memory
Memory part 5: What programmers can do

2010 ...

Forum Posts

2000 ...

2010 ...

2015 ...

Re: thread affinity by Robert Hyatt, CCC, July 03, 2015

External Links

Linux

Windows

x86

Misc

References
