Sep 12, 2022
System capacities and features – Introduction to Power E1080
1.1.4 System capacities and features
With any initial order, the Power E1080 supports up to four system nodes. The maximum memory capacity that is supported in each node is 16 TB.
The maximum number of supported PCIe Gen 3 I/O expansion drawers is four per system node. Each I/O expansion drawer can be populated with two Fanout Modules. Each Fanout Module in turn is connected to a system node through one PCIe x16 to CXP Converter Card.
Memory features #EMC1 128 GB, #EMC2 256 GB, #EMC3 512 GB, and #EMC4 1024 GB are available.
The following characteristics are available:
- Maximum of 4 #EDN1 5U system node drawers
- Maximum of 16 TB of system memory per node drawer
- Maximum of 16 #EMX0 PCIe Gen 3 I/O expansion drawers
- Maximum of 32 #EMXH PCIe Gen 3 6-slot Fanout Modules for PCIe Gen 3 expansion drawers
- Maximum of 32 #EJ24 PCIe x16 to CXP Converter Cards
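The maxima in this list all follow from the per-node limits; the following short Python sketch restates the arithmetic by using only the figures given above:

```python
# Maximum-configuration arithmetic for a 4-node Power E1080
# (all values taken from the feature list above).
NODES_MAX = 4                  # #EDN1 5U system node drawers
MEM_PER_NODE_TB = 16           # TB of system memory per node drawer
DRAWERS_PER_NODE = 4           # #EMX0 PCIe Gen 3 I/O expansion drawers
FANOUTS_PER_DRAWER = 2         # #EMXH 6-slot Fanout Modules per drawer

total_memory_tb = NODES_MAX * MEM_PER_NODE_TB        # 64 TB system maximum
total_drawers = NODES_MAX * DRAWERS_PER_NODE         # 16 expansion drawers
total_fanouts = total_drawers * FANOUTS_PER_DRAWER   # 32 Fanout Modules
total_converter_cards = total_fanouts                # one #EJ24 card per module

print(total_memory_tb, total_drawers, total_fanouts, total_converter_cards)
```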
1.2 System nodes
A fully operational Power E1080 includes one system control unit (SCU) and one, two, three, or four system nodes. A system node is also referred to as a central electronic complex (CEC), or CEC drawer.
Each system node is 5U rack units high and holds four air-cooled Power10 single-chip modules (SCMs) that are optimized for performance, scalability, and AI workloads. An SCM is constructed of one Power10 processor chip and more logic, pins, and connectors that enable plugging the SCM into the related socket on the system node planar.
The Power E1080 Power10 SCMs are available in 10-core, 12-core, or 15-core capacity. Each core can run in eight-way simultaneous multithreading (SMT) mode, which delivers eight independent hardware threads of parallel execution power.
The 10-core SCMs are ordered in a set of four per system node through processor feature #EDP2. In this way, feature #EDP2 provides 40 cores of processing power to one system node and 160 cores of total system capacity in a 4-node Power E1080 server. The maximum frequency of the 10-core SCM is 3.9 GHz, which makes this SCM suitable as a building block for entry-class Power E1080 servers.
The 12-core SCMs are ordered in a set of four per system node through processor feature #EDP3. In this way, feature #EDP3 provides 48 cores of capacity per system node and a maximum of 192 cores per fully configured 4-node Power E1080 server. This SCM type offers the highest processor frequency at a maximum of 4.15 GHz, which makes it a perfect choice when the highest thread performance is one of the most important sizing goals.
6 IBM Power E1080: Technical Overview and Introduction
The 15-core SCMs are ordered in a set of four per system node through processor feature #EDP4. In this way, feature #EDP4 provides 60 cores per system node and an impressive 240 cores total system capacity for a 4-node Power E1080. The 15-core SCMs run with a maximum of 4.0 GHz and meet the needs of environments with demanding thread performance and high compute capacity density requirements.
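The per-node and per-system core counts for the three processor features follow directly from the four-SCMs-per-node packaging; a small Python sketch of that arithmetic:

```python
# Core-count arithmetic for the three Power E1080 processor features.
SCMS_PER_NODE = 4     # each system node holds four SCMs
NODES_MAX = 4         # up to four system nodes per server
FEATURES = {"#EDP2": 10, "#EDP3": 12, "#EDP4": 15}   # cores per SCM

# Map each feature to (cores per node, cores per 4-node system).
results = {
    feature: (cores * SCMS_PER_NODE, cores * SCMS_PER_NODE * NODES_MAX)
    for feature, cores in FEATURES.items()
}
for feature, (per_node, per_system) in results.items():
    print(feature, per_node, per_system)
# #EDP2: 40 per node, 160 per system
# #EDP3: 48 per node, 192 per system
# #EDP4: 60 per node, 240 per system
```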
Three PowerAXON1 18-bit wide buses per Power10 processor chip are used to span a fully connected fabric within a CEC drawer. In this way, each SCM within a system node is directly connected to every other SCM of the same drawer at 32 Gbps speed. This on-planar interconnect provides 128 GBps chip-to-chip data bandwidth, which marks an increase of 33% relative to the previous POWER9 processor-based on-planar interconnect implementation in Power E980 systems. The throughput is calculated as a 16-bit lane width × 32 Gbps = 64 GBps per direction, multiplied by 2 directions, for an aggregated rate of 128 GBps.
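The bandwidth calculation in the preceding paragraph can be verified with a few lines of Python (16 of the 18 bus bits carry data, per the calculation above):

```python
# Chip-to-chip bandwidth of the on-planar PowerAXON SMP interconnect.
LANE_WIDTH_BITS = 16   # 16 data bits per direction of the 18-bit wide bus
SIGNALING_GBPS = 32    # 32 Gbps per bit lane

per_direction_gbytes = LANE_WIDTH_BITS * SIGNALING_GBPS / 8   # bits -> bytes
aggregate_gbytes = per_direction_gbytes * 2                   # both directions
print(per_direction_gbytes, aggregate_gbytes)   # 64.0 GBps, 128.0 GBps
```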
Each of the four Power10 processor chips in a Power E1080 CEC drawer is connected directly to a Power10 processor chip at the same position in every other CEC drawer in a multi-node system. This connection is made by using a symmetric multiprocessing (SMP) PowerAXON 18-bit wide bus per connection running at 32 Gbps speed.
The Power10 SCM provides eight PowerAXON connectors directly on the module, six of which are used to route the SMP buses to the rear tailstock of the CEC chassis. This innovative implementation allows the use of passive SMP cables, which in turn reduces the data transfer latency and enhances the robustness of the drawer-to-drawer SMP interconnect. As discussed in 1.2, “System nodes” on page 6, cable features #EFCH, #EFCE, #EFCF, and #EFCG are required to connect system node drawers to the system control unit. They also are required to facilitate the SMP interconnect among each drawer in a multi-node Power E1080 configuration.
To access main memory, the Power10 processor technology introduces the new open memory interface (OMI). The 16 available high-speed OMI links are driven by 8 on-chip memory controller units (MCUs) that provide a total aggregated bandwidth of up to 409 GBps per SCM. This design represents a memory bandwidth increase of 78% compared to the POWER9 processor-based technology capability.
Every Power10 OMI link is directly connected to one memory buffer-based differential DIMM (DDIMM) slot. Therefore, the four sockets of one system node offer a total of 64 DDIMM slots with an aggregated maximum memory bandwidth of 1636 GBps. The DDIMM densities supported in Power E1080 servers are 32 GB, 64 GB, 128 GB, and 256 GB, all of which use Double Data Rate 4 (DDR4) technology.
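The per-node figures in the preceding paragraph derive from the per-SCM OMI design; a short sketch of the arithmetic, using only values stated above:

```python
# Per-node OMI memory figures derived from the per-SCM design.
OMI_LINKS_PER_SCM = 16   # 16 high-speed OMI links per SCM
SCM_BW_GBPS = 409        # up to 409 GBps aggregated bandwidth per SCM
SCMS_PER_NODE = 4        # four sockets per system node

node_bw_gbps = SCM_BW_GBPS * SCMS_PER_NODE        # 1636 GBps per node
ddimm_slots = OMI_LINKS_PER_SCM * SCMS_PER_NODE   # 64 DDIMM slots per node
print(node_bw_gbps, ddimm_slots)
```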
The Power E1080 memory options are available as 128 GB (#EMC1), 256 GB (#EMC2), 512 GB (#EMC3), and 1024 GB (#EMC4) memory features. Each memory feature provides four DDIMMs.
1 PowerAXON stands for A-bus/X-bus/OpenCAPI/Networking interfaces of the Power10 processor.
Chapter 1. Introduction to Power E1080 7
Each system node supports a maximum of 16 memory features that cover the 64 DDIMM slots. The use of 1024 GB DDIMM features yields a maximum of 16 TB per node. A 2-node system has a maximum of 32 TB capacity. A 4-node system has a maximum of 64 TB capacity. Minimum memory activations of 50% of the installed capacity are required.
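Because each memory feature supplies four DDIMMs, the feature sizes map directly onto the supported DDIMM densities, and the capacity maxima follow from the slot counts. A small Python sketch of this arithmetic:

```python
# Memory capacity arithmetic: each memory feature provides four DDIMMs.
FEATURE_GB = {"#EMC1": 128, "#EMC2": 256, "#EMC3": 512, "#EMC4": 1024}
DDIMMS_PER_FEATURE = 4
FEATURES_PER_NODE = 16   # 16 features cover all 64 DDIMM slots per node

# DDIMM density implied by each feature (GB per DDIMM).
ddimm_gb = {f: gb // DDIMMS_PER_FEATURE for f, gb in FEATURE_GB.items()}
print(ddimm_gb)   # 32, 64, 128, and 256 GB densities

max_node_tb = FEATURES_PER_NODE * FEATURE_GB["#EMC4"] // 1024   # 16 TB/node
print(max_node_tb, 2 * max_node_tb, 4 * max_node_tb)   # 16, 32, 64 TB
```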
The Power10 processor I/O subsystem is driven by 32 GHz differential Peripheral Component Interconnect Express 5.0 (PCIe Gen 5) buses that provide 32 lanes that are grouped in two sets of 16 lanes. The 32 PCIe lanes deliver an aggregate bandwidth of 576 GBps per system node and are used to support 8 half-length, low-profile (half-height) adapter slots for external connectivity and 4 Non-Volatile Memory Express (NVMe) mainstream solid-state drives (SSDs) in the U.2 form factor for internal storage.
Six of the eight external PCIe slots can be used for PCIe Gen 4 x16 or PCIe Gen 5 x8 adapters and the remaining two offer PCIe Gen 5 x8 capability. All PCIe slots support earlier generations of the PCIe standard, such as PCIe Gen 1 (PCIe 1.0), PCIe Gen 2 (PCIe 2.0), PCIe Gen 3 (PCIe 3.0), and PCIe Gen 4 (PCIe 4.0).
For extra connectivity, up to four 19-inch PCIe Gen 3 4U high I/O expansion units (#EMX0) optionally can be attached to one system node. Each expansion drawer contains one or two PCIe Fanout Modules (#EMXH) with six PCIe Gen 3 full-length, full-height slots each.
A fully configured 4-node Power E1080 server offers a total of 32 internal PCIe slots and up to 192 PCIe slots through I/O expansion units.
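The slot totals for a fully configured server follow from the per-node slot counts and the expansion-drawer layout described above:

```python
# PCIe slot totals for a fully configured 4-node Power E1080.
NODES = 4
INTERNAL_SLOTS_PER_NODE = 8    # half-length, low-profile adapter slots
DRAWERS_PER_NODE = 4           # #EMX0 I/O expansion drawers per node
FANOUTS_PER_DRAWER = 2         # #EMXH Fanout Modules per drawer
SLOTS_PER_FANOUT = 6           # PCIe Gen 3 full-height slots per module

internal_slots = NODES * INTERNAL_SLOTS_PER_NODE
expansion_slots = NODES * DRAWERS_PER_NODE * FANOUTS_PER_DRAWER * SLOTS_PER_FANOUT
print(internal_slots, expansion_slots)   # 32 internal, 192 expansion slots
```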
Figure 1-2 shows the front view of a system node. The fans and power supply units (PSUs) are redundant and concurrently maintainable. Fans are n+1 redundant; therefore, the system continues to function when any one fan fails. Because the power supplies are n+2 redundant, the system continues to function, even if any two power supplies fail.
Figure 1-2 Front view of a Power E1080 server node
Figure 1-3 shows the rear view of a system node with the locations of the external ports and features.
Figure 1-3 Rear view of a Power E1080 server node
Figure 1-4 shows the internal view of a system node and some of the major components, such as heat sinks, processor voltage regulator modules (VRMs), VRMs for other miscellaneous components, differential DIMM (DDIMM) slots, DDIMMs, system clocks, trusted platform modules (TPMs), and internal SMP cables.
Figure 1-4 Top view of a Power E1080 server node with the top cover assembly removed