# (6) Maximum NUMA Nodes (as a power of 2)

## pjp

I'm not having much luck finding an explanation with Google, I haven't found a clear statement of how many memory controllers my CPU (Athlon II X4 640) has, and so far there's nothing under /usr/src/linux/Documentation.

The link to the CPU mentions 2 controllers, but it doesn't sound like both would be used at the same time (DDR2 vs. DDR3 controllers). So would that be 2 controllers per core for 8 on a 4-core, or 1 per core? Which would then make the setting for this option either 4 or 8 instead of 6 (which seems to be the default)?

```
Maximum NUMA Nodes (as a power of 2)

CONFIG_NODES_SHIFT:

Specify the maximum number of NUMA Nodes available on the target
system.  Increases memory reserved to accommodate various tables.

Symbol: NODES_SHIFT [=6]
Prompt: Maximum NUMA Nodes (as a power of 2)
Defined at arch/x86/Kconfig:1222
Depends on: NEED_MULTIPLE_NODES [=y] && !MAXSMP [=n]
Location:
  -> Processor type and features
```
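For reference, the value is an exponent, not a node count: NODES_SHIFT=6 means the kernel reserves tables for up to 2^6 = 64 nodes. One way to see how many nodes the running kernel actually enumerated (assuming the standard Linux sysfs layout is mounted):

```shell
# Count the NUMA nodes the running kernel has enumerated via sysfs.
# A single-socket desktop normally shows exactly one (node0).
ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l
```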

----------

## roarinelk

This is only interesting if you have more than one socket (i.e. core 0 of socket A wants to access memory of socket B, and the latencies of the CPU<->CPU interconnects have to be considered).

----------

## pjp

Should all NUMA settings be turned off then with only 1 socket?

```
CONFIG_NUMA:

Enable NUMA (Non Uniform Memory Access) support.

The kernel will try to allocate memory used by a CPU on the
local memory controller of the CPU and add some more
NUMA awareness to the kernel.

For 64-bit this is recommended if the system is Intel Core i7
(or later), AMD Opteron, or EM64T NUMA.
```

EDIT:  From what I can tell, NUMA should be off completely.
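A quick sanity check is possible from a running system (assuming the standard sysfs layout): on a UMA machine a NUMA-enabled kernel still boots with a single node, so only node 0 shows up as online.

```shell
# On a UMA machine with CONFIG_NUMA=y the kernel exposes a single
# node, so this prints "0"; with CONFIG_NUMA=n the node directory
# does not exist at all.
cat /sys/devices/system/node/online 2>/dev/null \
    || echo "no node sysfs (kernel built without CONFIG_NUMA)"
```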

----------

## SteveBallmersChair

 *pjp wrote:*   

> Not having much luck with an explanation using Google, and I've not found a clear declaration of how many memory controllers my CPU has (Athlon II X4 640), and so far, nothing under /usr/src/linux/Documentation.
> 
> The link to the CPU mentions 2 controllers, but it doesn't sound like both would be used at the same time (DDR2 vs. DDR3 controllers).  So would that be 2 controllers per core for 8 on a 4 core, or 1 per core?  Which would then make the setting for this option either 4 or 8 instead of 6 (which seems to be the default)?

 

There are two memory controllers on an Athlon II, but they are both connected to the same processor die and thus do not need NUMA. The reason there are two memory controllers is that AMD wanted to be able to do memory reads and writes at the same time and thus put two 64-bit memory controllers in the Phenom/Phenom II/Athlon II CPUs instead of one 128-bit-wide one like in the A64s. The controllers can each work with DDR2 or DDR3 memory (in Socket AM3 CPUs only), so there is not one controller for DDR2 and one for DDR3. You can also link the two controllers together in 128-bit mode ("ganged") if you wish.

And as far as what needs NUMA: all Opterons and Xeon 5500/5600/6500/7500 series units in multi-socket setups as well as all Socket G34 Opteron 6100s, even if there is only one of them in a computer. G34 CPUs have two dies inside the CPU that communicate using NUMA.

----------

## Massimo B.

As of today, CONFIG_NUMA is described as  *Quote:*   

> For 64-bit this is recommended if the system is Intel Core i7 (or later), AMD Opteron, or EM64T NUMA.

Interesting, I didn't know I had a NUMA architecture with an old i7-2620M. I had always kept that disabled, but I'm going to enable it now.

What CONFIG_NODES_SHIFT is recommended for 2/4 HT cores?

----------

## Ant P.

It's been changed because newer CPUs (Intel's confusingly labelled post-2016 i7s with ≥ 11c/22t, or AMD Threadripper/Epyc) are NUMA-on-a-chip: some system memory is prohibitively expensive to access from some cores.

You don't need it at all unless you have multiple sockets, or a single CPU that costs the price of a car. That's as true now as it was 8 years ago.

----------

## Massimo B.

What are these post-2016 i7s with ≥ 11c/22t? I can't find that. Can you name the first i7 with NUMA-on-a-chip?

After enabling NUMA in the kernel, lscpu also displays the NUMA node(s):

```
# zgrep NUMA /proc/config.gz | grep -v "^#"
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_ACPI_NUMA=y

# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  2
Core(s) per socket:  2
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               42
Model name:          Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
...
```
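Beyond the node count, the kernel also reports which CPUs belong to each node (again assuming the standard sysfs layout); a NUMA-on-a-chip part would show several nodes within a single socket, each with its own CPU list:

```shell
# List each online node with the CPUs the kernel assigned to it.
for n in /sys/devices/system/node/node[0-9]*; do
    [ -d "$n" ] || continue
    printf '%s: cpus %s\n' "${n##*/}" "$(cat "$n/cpulist")"
done
```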

Does that mean any NUMA-capable architecture would show a NUMA node(s) count > 1, even a single-socket NUMA-on-chip part? And enabling NUMA on all other systems would only have disadvantages?

----------

