Yesterday we discussed how caches work, what the difference is between L1 and L2, and the various design elements that determine how fast (and how effective) a CPU's cache is. Today, we're going to take one step further and explore the difference between L2 and L3 caches.

At its simplest level, an L3 cache is just a larger, slower version of the L2 cache. Back when most chips were single-core processors, this was generally true. The first L3 caches were actually built on the motherboard itself, connected to the CPU via the backside bus. When AMD launched its K6-III processor family, many existing K6/K6-2 motherboards could accept a K6-III as well. Typically these boards had 512K-2MB of L2 cache — when a K6-III, with its integrated L2 cache, was inserted, those slower, motherboard-based caches became L3 instead.

By the turn of the century, slapping an additional L3 cache on a chip had become an easy way to improve performance — Intel's first consumer-oriented Pentium 4 "Extreme Edition" was a repurposed Gallatin Xeon with a 2MB L3 on-die. Adding that cache was enough to buy the Pentium 4 EE a 10-20 percent performance boost over the standard Northwood line.

Cache and the multi-core curveball

As multicore processors became more common, L3 cache started appearing more frequently on consumer hardware. These chips, like Intel's Nehalem and AMD's K10 (Barcelona), used L3 as more than just a larger, slower backstop for L2. In addition to this function, the L3 cache is often shared between all of the processors on a single piece of silicon. That's in contrast to the L1 and L2 caches, both of which are typically private and dedicated to the needs of each particular core. (AMD's Bulldozer design is an exception to this — Bulldozer, Piledriver, and Steamroller all share a common L1 instruction cache between the two cores in each module.)

Intel's Haswell-E, for example, has eight separate cores that all back up to a common L3 cache.

Haswell-E
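One quick way to see this split for yourself on Linux is to read the cache topology the kernel exposes under sysfs. The sketch below (Linux-specific paths, plain C, no special libraries) prints each cache level reported for cpu0 along with the list of CPUs that share it; on a chip laid out like Haswell-E, the L1 and L2 entries list only cpu0 and its Hyper-Threading sibling, while the L3 entry lists every core in the package.

```c
/* Minimal sketch (Linux-specific): print each cache level's size and the
 * set of CPUs that share it, as reported under sysfs for cpu0.
 * Typically L1/L2 list only cpu0 (plus its SMT sibling), while the
 * L3 entry lists every core in the package. */
#include <stdio.h>
#include <string.h>

static void read_field(const char *dir, const char *field, char *buf, size_t len)
{
    char path[256];
    snprintf(path, sizeof(path), "%s/%s", dir, field);
    FILE *f = fopen(path, "r");
    if (!f) { buf[0] = '\0'; return; }
    if (!fgets(buf, (int)len, f)) buf[0] = '\0';
    fclose(f);
    buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
}

int main(void)
{
    /* cpu0 usually exposes index0..index3: L1 data, L1 instruction, L2, L3 */
    for (int i = 0; i < 8; i++) {
        char dir[128], level[16], type[32], size[32], shared[256];
        snprintf(dir, sizeof(dir), "/sys/devices/system/cpu/cpu0/cache/index%d", i);
        read_field(dir, "level", level, sizeof(level));
        if (level[0] == '\0') break;  /* no more cache indices */
        read_field(dir, "type", type, sizeof(type));
        read_field(dir, "size", size, sizeof(size));
        read_field(dir, "shared_cpu_list", shared, sizeof(shared));
        printf("L%s %-12s %-8s shared by CPUs: %s\n", level, type, size, shared);
    }
    return 0;
}
```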

Private L1/L2 caches and a shared L3 is hardly the only way to design a cache hierarchy, but it's a common approach that multiple vendors have adopted. Giving each individual core a dedicated L1 and L2 cuts access latencies and reduces the chance of cache contention — meaning two different cores won't overwrite vital data the other placed there in favor of their own workload. The common L3 cache is slower but much larger, which means it can store data for all of the cores at once. Sophisticated algorithms are used to ensure that Core 0 tends to store information closest to itself, while Core 7 across the die also puts important data closer to itself.
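The latency gap between those levels is easy to observe with a pointer-chasing microbenchmark. The sketch below is illustrative rather than rigorous: it links a buffer into a randomly ordered chain so the prefetchers can't guess the next access, then times how long each hop takes as the working set grows. The reported nanoseconds per access typically step up each time the buffer outgrows the L1, the L2, and finally the shared L3; the sizes and iteration counts here are arbitrary choices, not measurements from any particular chip.

```c
/* Rough sketch of a pointer-chasing probe: walk a randomly shuffled ring of
 * pointers so hardware prefetching can't hide memory latency, and report the
 * average time per access.  As the working set outgrows L1, then L2, then the
 * shared L3, the nanoseconds-per-access figure steps up at each boundary. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(void *);
    void **ring = malloc(n * sizeof(void *));
    size_t *order = malloc(n * sizeof(size_t));

    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {            /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < n; i++)                  /* link into one big cycle */
        ring[order[i]] = &ring[order[(i + 1) % n]];

    struct timespec t0, t1;
    void **p = &ring[0];
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++) p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    double per_access = (p != NULL) ? ns / (double)iters : 0.0; /* keep p live */
    free(order);
    free(ring);
    return per_access;
}

int main(void)
{
    size_t kb_sizes[] = { 16, 128, 1024, 4096, 16384, 65536 }; /* spans L1..DRAM */
    for (size_t i = 0; i < sizeof(kb_sizes) / sizeof(kb_sizes[0]); i++)
        printf("%6zu KB working set: %.1f ns per access\n",
               kb_sizes[i], chase(kb_sizes[i] * 1024, 20 * 1000 * 1000));
    return 0;
}
```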

Unlike the L1 and L2, which are nearly always CPU-focused and private, the L3 can also be shared with other devices or functions. Intel's Sandy Bridge CPUs shared an 8MB L3 cache with the on-die graphics core (Ivy Bridge gave the GPU its own dedicated slice of L3 cache in lieu of sharing the full 8MB).

In contrast to the L1 and L2 caches, both of which are typically fixed and vary only very slightly (and mostly on budget parts), both AMD and Intel offer different chips with significantly different amounts of L3. Intel typically sells at least a few Xeons with lower core counts, higher frequencies, and a higher L3-cache-per-CPU ratio. Intel's Core i7 processors have maintained an 8MB L3 since the debut of Nehalem in 2008 (roughly 2MB of L3 for every CPU core), but the highest-end parts are often pegged at 2.5MB of cache per CPU core.
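If you want to know how much L3 your own chip reports, glibc exposes the cache sizes it detects through sysconf(). The _SC_LEVEL*_CACHE_SIZE names used in the sketch below are glibc extensions rather than standard POSIX, and some systems simply return 0 or -1 for levels they don't report, so treat this as a convenience check rather than a guarantee.

```c
/* Minimal sketch (glibc-specific): print the cache sizes the C library
 * reports for the current machine.  The _SC_LEVEL*_CACHE_SIZE constants
 * are glibc extensions and may come back as 0 or -1 when unknown. */
#include <stdio.h>
#include <unistd.h>

static void show(const char *label, long bytes)
{
    if (bytes > 0)
        printf("%-9s %ld KB\n", label, bytes / 1024);
    else
        printf("%-9s not reported\n", label);
}

int main(void)
{
    show("L1 data:", sysconf(_SC_LEVEL1_DCACHE_SIZE));
    show("L2:",      sysconf(_SC_LEVEL2_CACHE_SIZE));
    show("L3:",      sysconf(_SC_LEVEL3_CACHE_SIZE));
    return 0;
}
```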

Today, the L3 is characterized as a pool of fast memory common to all the CPUs on an SoC. It's often gated independently from the rest of the CPU core and can be dynamically partitioned to balance access speed, power consumption, and storage capacity. While not nearly as fast as L1 or L2, it's often more flexible and plays a vital role in managing inter-core communication. With Intel having already added an L4 cache to its Skylake chips, it's possible we'll see the L3 take on a more simplified role — with some of its functions and capabilities shifting over to the newer, larger pool of cache.