
1.1.4 System capacities and features

With any initial orders, the Power E1080 supports up to four system nodes. The maximum memory capacity that is supported in each node is 4 TB.

The maximum number of supported PCIe Gen 3 I/O expansion drawers is four per system node. Each I/O expansion drawer can be populated with two Fanout Modules. Each Fanout Module in turn is connected to a system node through one PCIe x16 to CXP Converter Card.

Memory features #EMC1 128 GB, #EMC2 256 GB, #EMC3 512 GB, and #EMC4 1024 GB are available.

The following characteristics are available:

► Maximum of 4 #EDN1 5U system node drawers
► Maximum of 16 TB of system memory per node drawer
► Maximum of 16 #EMX0 PCIe Gen 3 I/O expansion drawers
► Maximum of 32 #EMXH PCIe Gen 3 6-slot Fanout Modules for PCIe Gen 3 expansion drawers
► Maximum of 32 #EJ24 PCIe x16 to CXP Converter Cards

1.2 System nodes

A fully operational Power E1080 includes one SCU and one, two, three, or four system nodes. A system node is also referred to as a central electronic complex (CEC), or CEC drawer.

Each system node is 5U high and holds four air-cooled Power10 single-chip modules (SCMs) that are optimized for performance, scalability, and AI workloads. An SCM is constructed of one Power10 processor chip and the additional logic, pins, and connectors that enable plugging the SCM into the related socket on the system node planar.

The Power E1080 Power10 SCMs are available in 10-core, 12-core, or 15-core capacity. Each core can run in eight-way simultaneous multithreading (SMT) mode, which delivers eight independent hardware threads of parallel execution power.

The 10-core SCMs are ordered in a set of four per system node through processor feature #EDP2. In this way, feature #EDP2 provides 40 cores of processing power to one system node and 160 cores of total system capacity in a 4-node Power E1080 server. The maximum frequency of the 10-core SCM is specified at 3.9 GHz, which makes this SCM suitable as a building block for entry-class Power E1080 servers.

The 12-core SCMs are ordered in a set of four per system node through processor feature #EDP3. In this way, feature #EDP3 provides 48 cores per system node and a maximum of 192 cores per fully configured 4-node Power E1080 server. This SCM type offers the highest processor frequency at a maximum of 4.15 GHz, which makes it a perfect choice if the highest thread performance is one of the most important sizing goals.


The 15-core SCMs are ordered in a set of four per system node through processor feature #EDP4. In this way, feature #EDP4 provides 60 cores per system node and an impressive 240 cores of total system capacity for a 4-node Power E1080. The 15-core SCMs run at a maximum frequency of 4.0 GHz and meet the needs of environments with demanding thread performance and high compute capacity density requirements.

Three PowerAXON1 18-bit wide buses per Power10 processor chip are used to span a fully connected fabric within a CEC drawer. In this way, each SCM within a system node is directly connected to every other SCM of the same drawer at 32 Gbps speed. This on-planar interconnect provides 128 GBps chip-to-chip data bandwidth, which marks an increase of 33% relative to the previous POWER9 processor-based on-planar interconnect implementation in Power E980 systems. The throughput can be calculated as 16 bit lanes x 32 Gbps = 64 GBps per direction, and the two directions combined yield an aggregated rate of 128 GBps.
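The following minimal Python sketch reproduces this bandwidth arithmetic. The signaling rate is taken from the text; the use of 16 data lanes of the 18-bit bus is an assumption based on the calculation above.

# Minimal sketch of the on-planar SMP bandwidth arithmetic stated above.
DATA_LANES_PER_BUS = 16     # assumed data lanes of the 18-bit PowerAXON bus
GBPS_PER_LANE = 32          # signaling rate per lane in Gbps

per_direction_GBps = DATA_LANES_PER_BUS * GBPS_PER_LANE / 8   # 512 Gbps -> 64 GBps
aggregate_GBps = per_direction_GBps * 2                       # both directions

print(f"{per_direction_GBps:.0f} GBps per direction, {aggregate_GBps:.0f} GBps aggregate")
# -> 64 GBps per direction, 128 GBps aggregate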

Each of the four Power10 processor chips in a Power E1080 CEC drawer is connected directly to a Power10 processor chip at the same position in every other CEC drawer in a multi-node system. This connection is made by using a symmetric multiprocessing (SMP) PowerAXON 18-bit wide bus per connection running at 32 Gbps speed.

The Power10 SCM provides eight PowerAXON connectors directly on the module, of which six are used to route the SMP bus to the rear tailstock of the CEC chassis. This innovative implementation allows the use of passive SMP cables, which in turn reduces the data transfer latency and enhances the robustness of the drawer-to-drawer SMP interconnect. As discussed in 1.3, “System control unit” on page 10, cable features #EFCH, #EFCE, #EFCF, and #EFCG are required to connect system node drawers to the system control unit. They also are required to facilitate the SMP interconnect among the drawers in a multi-node Power E1080 configuration.

To access main memory, the Power10 processor technology introduces the new open memory interface (OMI). The 16 available high-speed OMI links are driven by 8 on-chip memory controller units (MCUs) that provide a total aggregated bandwidth of up to 409 GBps per SCM. This design represents a memory bandwidth increase of 78% compared to the POWER9 processor-based technology capability.

Every Power10 OMI link is directly connected to one memory buffer-based differential DIMM (DDIMM) slot. Therefore, the four sockets of one system node offer a total of 64 DDIMM slots with an aggregated maximum memory bandwidth of 1636 GBps. The DDIMM densities supported in Power E1080 servers are 32 GB, 64 GB, 128 GB, and 256 GB, all of which use Double Data Rate 4 (DDR4) technology.
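The per-node DDIMM slot count and aggregated memory bandwidth follow directly from the per-SCM figures; the short Python sketch below reproduces that arithmetic with the values taken from the text.

# Minimal sketch of the per-node OMI memory figures stated above.
OMI_LINKS_PER_SCM = 16
BANDWIDTH_PER_SCM_GBPS = 409     # aggregated OMI bandwidth per SCM (GBps)
SCMS_PER_NODE = 4

ddimm_slots_per_node = OMI_LINKS_PER_SCM * SCMS_PER_NODE       # one DDIMM per OMI link
node_bandwidth_GBps = BANDWIDTH_PER_SCM_GBPS * SCMS_PER_NODE   # aggregated per node

print(ddimm_slots_per_node, "DDIMM slots,", node_bandwidth_GBps, "GBps per node")
# -> 64 DDIMM slots, 1636 GBps per node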

The Power E1080 memory options are available as 128 GB (#EMC1), 256 GB (#EMC2), 512 GB (#EMC3), and 1024 GB (#EMC4) memory features. Each memory feature provides four DDIMMs.

1 PowerAXON stands for A-bus/X-bus/OpenCAPI/Networking interfaces of the Power10 processor.


Each system node supports a maximum of 16 memory features that cover the 64 DDIMM slots. The use of 1024 GB DDIMM features yields a maximum of 16 TB per node. A 2-node system has a maximum of 32 TB capacity. A 4-node system has a maximum of 64 TB capacity. Minimum memory activations of 50% of the installed capacity are required.
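The following Python sketch reproduces this capacity and minimum activation arithmetic; it assumes every slot is populated with the largest (#EMC4, 1024 GB) memory feature.

# Minimal sketch of the maximum memory and minimum activation arithmetic above.
FEATURES_PER_NODE = 16          # memory features per node (4 DDIMMs each)
GB_PER_FEATURE = 1024           # largest memory feature (#EMC4)

def max_memory_tb(nodes):
    return nodes * FEATURES_PER_NODE * GB_PER_FEATURE / 1024

for nodes in (1, 2, 4):
    total = max_memory_tb(nodes)
    print(f"{nodes}-node: {total:.0f} TB maximum, {total / 2:.0f} TB minimum activations (50%)")
# -> 1-node: 16 TB maximum, 8 TB minimum activations (50%)
# -> 2-node: 32 TB maximum, 16 TB minimum activations (50%)
# -> 4-node: 64 TB maximum, 32 TB minimum activations (50%)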

The Power10 processor I/O subsystem is driven by 32 GHz differential Peripheral Component Interconnect Express 5.0 (PCIe Gen 5) buses that provide 32 lanes that are grouped in two sets of 16 lanes. The 32 PCIe lanes deliver an aggregate bandwidth of 576 GBps per system node and are used to support eight half-length, low-profile (half-height) adapter slots for external connectivity and four Non-Volatile Memory Express (NVMe) mainstream Solid State Drives (SSDs) in the U.2 form factor for internal storage.

Six of the eight external PCIe slots can be used for PCIe Gen 4 x16 or PCIe Gen 5 x8 adapters and the remaining two offer PCIe Gen 5 x8 capability. All PCIe slots support earlier generations of the PCIe standard, such as PCIe Gen 1 (PCIe 1.0), PCIe Gen 2 (PCIe 2.0), PCIe Gen 3 (PCIe 3.0), and PCIe Gen 4 (PCIe 4.0).

For extra connectivity, up to four 19-inch PCIe Gen 3 4U high I/O expansion units (#EMX0) optionally can be attached to one system node. Each expansion drawer contains one or two PCIe Fanout Modules (#EMXH) with six PCIe Gen 3 full-length, full-height slots each.

A fully configured 4-node Power E1080 server offers a total of 32 internal PCIe slots and up to 192 PCIe slots through I/O expansion units.
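These totals can be reproduced with the following Python sketch; the slot counts per node and per expansion drawer are taken from the text.

# Minimal sketch of the PCIe slot counts stated above.
INTERNAL_SLOTS_PER_NODE = 8
DRAWERS_PER_NODE = 4            # maximum #EMX0 I/O expansion drawers per node
SLOTS_PER_DRAWER = 12           # two 6-slot Fanout Modules per drawer

def pcie_slots(nodes):
    internal = nodes * INTERNAL_SLOTS_PER_NODE
    expansion = nodes * DRAWERS_PER_NODE * SLOTS_PER_DRAWER
    return internal, expansion

print(pcie_slots(4))   # -> (32, 192) for a fully configured 4-node server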

Figure 1-2 shows the front view of a system node. The fans and power supply units (PSUs) are redundant and concurrently maintainable. Fans are n+1 redundant; therefore, the system continues to function when any one fan fails. Because the power supplies are n+2 redundant, the system continues to function, even if any two power supplies fail.

Figure 1-2 Front view of a Power E1080 server node


Figure 1-3 shows the rear view of a system node with the locations of the external ports and features.

Figure 1-3 Rear view of a Power E1080 server node

Figure 1-4 shows the internal view of a system node and some of the major components, such as heat sinks, processor voltage regulator modules (VRMs), VRMs for other miscellaneous components, differential DIMM (DDIMM) slots, DDIMMs, system clocks, trusted platform modules (TPMs), and internal SMP cables.

Figure 1-4 Top view of a Power E1080 server node with the top cover assembly removed



1.3 System control unit

The system control unit (SCU) is implemented in a 2U high chassis and provides system hardware, firmware, and virtualization control functions through a pair of redundant Flexible Service Processor (FSP) devices. It also contains the operator panel and the electronics module that stores the system vital product data (VPD). The SCU also provides USB connectivity that can be used by the Power E1080 server.

One SCU is required and supported for each Power E1080 server (any number of system nodes) and, depending on the number of system nodes, the SCU is powered according to the following rules:

► Two universal power interconnect (UPIC) cables are used to provide redundant power to the SCU.
► In a Power E1080 single system node configuration, both UPIC cables are provided from the single system node to be connected to the SCU.
► For a two-, three-, or four-system node configuration, one UPIC cable is provided from the first system node and the second UPIC cable is provided from the second system node to be connected to the SCU.

The set of two cables facilitates a 1+1 redundant electric power supply. If one cable fails, the remaining UPIC cable is sufficient to feed the needed power to the SCU.

The two service processor cards in the SCU are ordered by using two mandatory #EDFP features. Each card provides two 1 Gb Ethernet ports for the Hardware Management Console (HMC) system management connection. One port is used as the primary connection and the second port can be used for redundancy. To enhance resiliency, it is recommended to implement a dual HMC configuration by attaching separate HMCs to each of the cards in the SCU.

Four FSP ports per FSP card provide redundant connections from the SCU to each system node. System nodes connect to the SCU by using the cable features #EFCH, #EFCE, #EFCF, and #EFCG. Feature #EFCH connects the first system node to the SCU and is included by default in every system node configuration. It provides FSP, UPIC, and USB cables, but no symmetric multiprocessing (SMP) cables. All the other cable features are added depending on the number of extra system nodes that are configured, and they include FSP and SMP cables.

The SCU implementation also includes the following highlights:

► Elimination of clock cabling since the introduction of POWER9 processor-based servers
► Front-accessible system node USB port
► Optimized UPIC power cabling
► Optional external DVD
► Concurrently maintainable time-of-day clock battery


Figure 1-5 shows the front and rear views of an SCU with the locations of the external ports and features.

Figure 1-5 Front and rear view of the system control unit


1.4 Server specifications

The Power E1080 server specifications are essential for planning your server. For a first assessment in the context of your planning effort, this section provides an overview of the following topics:

► Physical dimensions
► Electrical characteristics
► Environment requirements and noise emission

For the comprehensive Model 9080-HEX server specifications, see the product documentation at IBM Documentation.


1.4.1 Physical dimensions

The Power E1080 is a modular system that is built of a single SCU and one, two, three, or four system nodes.

Each system component must be mounted in a 19-inch industry standard rack. The SCU requires 2U of rack space and each system node requires 5U. Thus, a single-node system requires 7U, a two-node system requires 12U, a three-node system requires 17U, and a four-node system requires 22U of rack space. More rack space must be allotted, for example, for PCIe I/O expansion drawers, a Hardware Management Console, a flat panel console kit, network switches, power distribution units, and cable egress space.
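The rack-space figures follow from the 2U SCU plus 5U per system node; the Python sketch below reproduces them for the base system only, without I/O expansion drawers or other equipment.

# Minimal sketch of the base rack space arithmetic stated above.
SCU_U = 2          # system control unit height in rack units
NODE_U = 5         # each system node height in rack units

for nodes in range(1, 5):
    print(f"{nodes}-node system: {SCU_U + nodes * NODE_U}U")
# -> 1-node: 7U, 2-node: 12U, 3-node: 17U, 4-node: 22U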

Table 1-2 lists the physical dimensions of the Power E1080 system control unit and a Power E1080 server node. The component height is also given in Electronic Industries Alliance (EIA) rack units. (One EIA unit corresponds to one rack unit (U) and is defined as 1.75 inches, or 44.45 mm.)

Lift tools

It is recommended to have a lift tool available at each site where one or more Power E1080 servers are located to avoid any delays when servicing systems. An optional lift tool #EB2Z is available for order with a Power E1080 server. One #EB2Z lift tool can be shared among many servers and I/O drawers. The #EB2Z lift tool provides a hand crank to lift and position up to 159 kg (350 lb). The #EB2Z lift tool is 1.12 meters x 0.62 meters (44 in x 24.5 in).

1.4.2 Electrical characteristics

Each Power E1080 server node has four 1950 W bulk power supplies. The hardware design provides N+2 redundancy for the system power supply, and any node can continue to operate at full function in nominal mode with any two of the power supplies functioning.

Depending on the specific Power E1080 configuration, the power for the SCU is provided through two UPIC cables connected to one or two system nodes, as described in 1.2, “System nodes” on page 6.


Table 1-3 lists the electrical characteristics per Power E1080 server node. For planning purposes, use the maximum values that are provided. However, the power draw and heat load depend on the specific processor, memory, adapter, and expansion drawer configuration and the workload characteristics.


1.4.3 Environment requirements and noise emission

The environment requirements for the Power E1080 servers are classified in operating and non-operating environments. The operating environments are further segmented regarding the recommended and allowable conditions.

The recommended operating environment designates the long-term operating environment that can result in the greatest reliability and energy efficiency. The allowable operating environment represents where the equipment is tested to verify functionality. Because of the stresses that operating in the allowable envelope can place on the equipment, these envelopes must be used for short-term operation, not continuous operation.

The condition of a non-operating environment pertains to the situation when equipment is removed from the original shipping container and is installed, but is powered down. The allowable non-operating environment is provided to define the environmental range that an unpowered system can experience short term without being damaged.

Table 1-4 on page 14 lists the environment requirements for the Power E1080 server regarding temperature, humidity, dew point, and altitude. It also lists the maximum noise emission level for a fully configured Power E1080 server.


a. Declared level LWA,m is the upper-limit A-weighted sound power level measured in bel (B).

A comprehensive list of noise emission values for various Power E1080 server configurations is provided in the Power E1080 product documentation. For more information about noise emissions, search for “Model 9080-HEX server specifications” at IBM Documentation.


1.1 System overview

The Power E1080, also referred to by its 9080-HEX machine type-model designation, represents the most powerful and scalable server in the IBM Power portfolio. It comprises a combination of CEC enclosures that are called nodes (or system nodes) and additional units and drawers.

1.1.1 System nodes, processors, and memory

In this section, we provide a general overview of the system nodes, processors, and memory. For more information about the system nodes, see 1.2, “System nodes” on page 6.

A system node is an enclosure that provides the connections and supporting electronics to connect the processor with the memory, internal disk, adapters, and the interconnects that are required for expansion.

A combination of one, two, three, or four system nodes per server is supported.

Each system node provides four sockets for Power10 processor chips and 64 differential DIMM (DDIMM) slots for Double Data Rate 4 (DDR4) technology DIMMs.

Each socket holds one Power10 single-chip module (SCM). An SCM can contain 10, 12, or 15 Power10 processor cores. It also holds the extra infrastructure logic to provide electric power and data connectivity to the Power10 processor chip.

The processor configuration of a system node is defined by the selected processor feature. Each feature defines a set of four Power10 processor chips with the same core density (10, 12, or 15).

A 4-node Power E1080 server scales up to 16 processor sockets and 160, 192, or 240 cores, depending on the number of cores provided by the configured SCM type.

All system nodes within a Power E1080 server must be configured with the same processor feature.

Each system node can support up to a maximum of 16 TB of system memory by using the largest available memory DIMM density. A fully configured 4-node Power E1080 can support up to 64 TB of memory.

To support internal boot capability, each system node enables the use of up to four non-volatile memory express (NVMe) drive bays. More drive bays are provided through expansion drawers.

Each system node provides eight Peripheral Component Interconnect Express (PCIe) Gen 5 capable slots, with a maximum of 32 per Power E1080 server.

Any one-, two-, three-, or four-system node configuration requires the system control unit (SCU) to operate. The SCU provides system hardware, firmware, and virtualization control through redundant Flexible Service Processors (FSPs). Only one SCU is required and supported for every Power E1080 server. For more information about the system control unit, see 1.3, “System control unit” on page 10.

For more information about the environmental and physical aspects of the server, see 1.4, “Server specifications” on page 11.


1.5 System features

This section lists and explains the available system features on a Power E1080 server. These features describe the resources that are available on the system by default or by virtue of procurement of configurable feature codes.

An overview of the various feature codes and their essential information is also presented, which can help users design a system configuration with suitable features to fulfill the application compute requirements. This information also helps with building a highly available, scalable, reliable, and flexible system around the application.

1.5.1 Minimum configuration

A minimum configuration enables a user to order a fully qualified and tested hardware configuration of a Power system with a minimum set of offered technical features. The modular design of a Power E1080 server enables the user to start with a minimum configuration and scale up vertically when needed.

Table 1-5 lists the Power E1080 server configuration with minimal features.

Table 1-5 Minimum configuration



1.5.2 Processor features

Each system node in a Power E1080 server provides four sockets to accommodate Power10 single chip modules (SCMs). Each processor feature code represents four of these sockets, which are offered in 10-core, 12-core, and 15-core density.

Table 1-6 lists the available processor feature codes for a Power E1080 server. The system configuration requires a quantity of one, two, three, or four of the same processor feature, according to the number of system nodes.

Table 1-6 Processor features.

The system nodes connect to other system nodes and to the system control unit through cable connect features. Table 1-7 lists the set of cable features that are required for one-, two-, three-, and four-node configurations.

Table 1-7 Cable set features quantity


Every feature code that is listed in Table 1-6 on page 16 provides the processor cores, not their activation. The processor cores must be activated to be assigned as resources to a logical partition. The activations are offered through multiple permanent and temporary activation features. For more information about these options, see 2.4, “Capacity on-demand” on page 76.

Table 1-8 lists the processor feature codes and the associated permanent activation features. Any of these activation feature codes can permanently activate one core.

Table 1-8 Processor and activation features

The following types of permanent activations are available:

A minimum of 16 processor cores must always be activated with the static activation features, regardless of the Power E1080 configuration. Also, if the server is associated with a Power Enterprise Pools (PEP) 2.0 environment, a minimum of one base activation is required.

For more information about other temporary activation offerings that are available for the Power E1080 server, see 2.4, “Capacity on-demand” on page 76.


Regular and PEP 2.0 associated activations for Power E1080 are listed in Table 1-9. The Order type table column includes the following designations:

Table 1-9 Processor activation features


1.5.3 Memory features

This section describes the memory features that are available on a Power E1080 server. Careful selection of these features helps the user configure the system with the correct amount of memory to meet the demands of memory-intensive workloads. On a Power E1080 server, the memory features can be classified into the following categories:

► Physical memory
► Memory activation

These features are described next.

Physical memory features

The physical memory features that are supported on the Power E1080 are the next-generation differential dual inline memory modules, called DDIMMs (see 2.3, “Memory subsystem” on page 72). The DDIMMs that are used in the E1080 are Enterprise Class 4U DDIMMs.

The memory DDIMM features are available in 32-, 64-, 128-, and 256-GB capacity. Among these DDIMM features, 32 GB and 64 GB DDIMMs run at 3200 MHz frequency and 128 GB and 256 GB DDIMMs run at 2933 MHz frequency.

Each system node provides 64 DDIMM slots that support a maximum of 16 TB of memory, and a four-system-node E1080 can support a maximum of 64 TB of memory. DDIMMs are ordered by using memory feature codes, each of which includes a bundle of four DDIMMs with the same capacity.

Consider the following points regarding improved performance:

► Plugging DDIMMs of the same density provides the highest performance.
► Filling all the memory slots provides maximum memory performance.
► System performance improves when more quads of memory DDIMMs match.
► System performance also improves as the amount of memory is spread across more DDIMM slots.

For example, if 1 TB of memory is required, 64 x 32 GB DDIMMs can provide better performance than 32 x 64 GB DDIMMs.


Figure 1-6 shows a DDIMM memory feature.

Figure 1-6 New DDIMM feature

Table 1-10 lists the available memory DDIMM feature codes for the Power E1080.

Table 1-10 E1080 memory feature codes

Memory activation features

Software keys are required to activate part or all of the physical memory that is installed in the Power E1080 so that it can be assigned to logical partitions (LPARs). A software key is made available when a memory activation feature is ordered, and activation features can be ordered at any time during the life cycle of the server to help the user scale up memory capacity without an outage, unless an additional physical memory upgrade and activation are required.

A server administrator or user cannot control which physical memory DDIMM features are activated when memory activations are used.

The amount of memory to activate depends on the feature code ordered; for example, if an order contains two of feature code EDAB (100 GB DDR4 Mobile Memory Activation for HEX), these feature codes activate 200 GB of the installed physical memory.


The different types of memory activation features that are available for the Power E1080 server are known to the PowerVM hypervisor as a total quantity for each type. The PowerVM hypervisor determines the physical DDIMM memory to be activated and assigned to the LPARs.

Similar to processor core activation features, different types of permanent memory activation features are offered on the Power E1080 server. For more information about the available types of activations, see 1.5.2, “Processor features” on page 16.

Orders for memory activation features must consider the following rules:

► The system must have a minimum of 50% activated physical memory. It can be activated by using static or static and mobile memory activation features.
► The system must have a minimum of 25% of physical memory activated by using static memory activation features.
► When a Power E1080 is part of a PEP 2.0 environment, the server must have a minimum of 256 GB of base memory activations.

Consider the following examples (a short calculation sketch that encodes these rules follows the list):

► For a system with 4 TB of physical memory, at least 2 TB (50% of 4 TB) must be activated.
► When a Power E1080 is part of a PEP 1.0 environment, a server with 4 TB of physical memory and 3.5 TB of activated memory requires a minimum of 896 GB (25% of 3.5 TB) of physical memory activated by using static activation features.
► When a Power E1080 is part of a PEP 2.0 environment, a server with 4 TB of physical memory requires a minimum of 256 GB of memory activated with base activation features.
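The following Python sketch is a simplified, unofficial encoding of these rules for quick sanity checks. Following the PEP 1.0 example above, it interprets the static minimum as 25% of the activated memory, so treat it as illustrative only, not as configurator logic.

# Minimal sketch (unofficial) of the memory activation rules listed above.
def check_memory_activations(installed_gb, activated_gb, static_gb, base_gb=0, pep20=False):
    issues = []
    if activated_gb < 0.50 * installed_gb:
        issues.append("less than 50% of installed memory is activated")
    if static_gb < 0.25 * activated_gb:      # per the PEP 1.0 example: 25% of activated memory
        issues.append("less than 25% activated with static features")
    if pep20 and base_gb < 256:
        issues.append("PEP 2.0 requires at least 256 GB of base activations")
    return issues or ["configuration satisfies the listed rules"]

# Example from the text: 4 TB installed, 3.5 TB activated, 896 GB static (PEP 1.0 case)
print(check_memory_activations(4096, 3584, 896))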

Table 1-11 lists the available memory activation feature codes for the Power E1080. The Order type column indicates whether the feature code is available with an initial order only, with an MES upgrade on an existing server only, or both.

Table 1-11 Memory activation features.


1.5.4 System node PCIe features

Each system node provides eight PCIe Gen 5 hot-plug enabled slots; therefore, a two-system node server provides 16 slots, a three-system node server provides 24 slots, and a four-system node server provides 32 slots.

Table 1-12 lists all the supported PCIe adapter feature codes inside the Power E1080 server node drawer.

Table 1-12 PCIe adapters supported on Power E1080 server node


a. Requires SFP to provide 10 Gb, 2 Gb, or 1 Gb BaseT connectivity

1.5.5 System node disk and media features

At the time of this writing, the Power E1080 server node supports up to four 7 mm NVMe U.2 drives that are plugged into the 4-bay NVMe carrier backplane (feature code EJBC). Each system node requires one backplane, even if no NVMe U.2 drives are selected.

Each NVMe U.2 drive can be independently assigned to a different LPAR for hosting the operating system and booting from it. The drives also can be used for non-data-intensive workloads. NVMe U.2 drives are concurrently replaceable.

Table 1-13 lists the available NVMe drive feature codes for the Power E1080 and the operating system support.

Table 1-13 NVMe features

For more information, see 2.5.2, “Internal NVMe storage subsystem” on page 87.

For systems that are running IBM i, an expansion or storage drawer can meet the NVMe requirements.

1.5.6 System node USB features

The Power E1080 supports one stand-alone external USB drive that is associated with feature code EUA5. The feature code includes the cable that is used to connect the USB drive to the preferred front-accessible USB port on the SCU.

The Power E1080 server node does not offer an integrated USB port. The USB 3.0 adapter feature code EC6J is required to provide connectivity to an optional external USB DVD drive and requires one system node or I/O expansion drawer PCIe slot. The adapter connects to the USB port in the rear of the SCU with the cable that is associated with feature code EC6N. Because this cable is 1.5 m long, in a Power E1080 with more than one system node, the USB 3.0 adapter can be used in the first or the second system node only.

The USB 3.0 adapter feature code EC6J supports assignment to an LPAR and can be migrated from one operating LPAR to another, including the connected DVD drive. This design allows the DVD drive to be assigned to any LPAR according to need.

Dynamic allocation of system resources such as processor, memory, and I/O is also referred to as dynamic LPAR or DLPAR.

For more information about the USB subsystem, see 1.5.6, “System node USB features” on page 23.


1.5.7 Power supply features

Each Power E1080 server node has four 1950 W bulk power supply units that operate at 240 V. These power supply units are a default configuration on every Power E1080 system node. The four units per system node do not have an associated feature code and are always auto-selected by the IBM configurator when a new configuration task is started.

Four power cords from the power distribution units (PDUs) drive these power supplies; they connect to four C13/C14-type receptacles on the linecord conduit at the rear of the system. The power linecord conduit sources power from the rear and connects to the power supply units at the front of the system.

The system design provides N+2 redundancy for system bulk power, which allows the system to continue operation with any two of the power supply units functioning. The failed units must remain in the system until new power supply units are available for replacement.

The power supply units are hot-swappable, which allows replacement of a failed unit without system interruption. The power supply units are placed in front of the system, which makes any necessary service that much easier.

Figure 1-7 shows the power supply units and their physical locations marked as E1, E2, E3, and E4 in the system.

Figure 1-7 Power supply units


1.5.8 System node PCIe interconnect features

Each system node provides 8 PCIe Gen5 hot-plug enabled slots; therefore, a 2-node system provides 16 slots, a 3-node system provides 24 slots, and a 4-node system provides 32 slots.

Up to four I/O expansion drawer features #EMX0 can be connected per node to achieve the slot capacity that is listed in Table 1-14.

Table 1-14 PCIe slots availability for different system nodes configurations

Each I/O expansion drawer consists of two Fanout Modules (feature #EMXH), each providing six PCIe slots. Each Fanout Module connects to the system by using a pair of CXP cable features. The CXP cable features are listed in Table 1-15.

Table 1-15 Optical CXP cable feature

The RPO-only cables in this list are not available for new orders or MES upgrades; they are used only when migrating from a source system. Select a longer-length feature code for inter-rack connections between the system node and the expansion drawer.

Each pair of CXP optical cables connects to a system node by using one 2-port PCIe optical cable adapter (feature #EJ24), which is placed in the CEC.

Both the CXP optical cable pairs and the optical cable adapter features are concurrently maintainable. Therefore, careful balancing of I/O by assigning adapters across redundant #EMX0 expansion drawers and different system nodes can ensure high availability for the I/O resources that are assigned to partitions.

For more information about internal buses and the architecture of internal and external I/O subsystems, see 2.5, “Internal I/O subsystem” on page 83.


1.1.2 Expansion drawers and storage enclosures

Capacity can be added to your system by using expansion drawers and storage enclosures.

An optional 19-inch PCIe Gen 3 4U I/O expansion drawer provides 12 PCIe Gen 3 slots. The I/O expansion drawer connects to the system node with a pair of PCIe x16 to CXP converter cards that are housed in the system node. Each system node can support up to four I/O expansion drawers, for a total of 48 PCIe Gen 3 slots. A fully configured Power E1080 can support a maximum of 16 I/O expansion drawers, which provides a total of 192 PCIe Gen 3 slots.

An optional EXP24SX SAS storage enclosure provides 24 2.5-inch small form factor (SFF) serial-attached SCSI (SAS) bays. It supports up to 24 hot-swap hard disk drives (HDDs) or solid-state drives (SSDs) in only 2U rack units of space in a 19-inch rack. The EXP24SX is connected to the Power E1080 server by using SAS adapters that are plugged into system node PCIe slots or I/O expansion drawer slots.

For more information about enclosures and drawers, see 1.6, “I/O drawers” on page 26.

For more information about IBM storage products, see this web page.

1.1.3 Hardware at-a-glance

The Power E1080 server provides the following hardware components and characteristics:

► 10-, 12-, or 15-core Power10 processor chips that are packaged in a single chip module per socket
► One, two, three, or four system nodes with four Power10 processor sockets each
► Redundant clocking in each system node
► Up to 60 Power10 processor cores per system node and up to 240 per system
► Up to 16 TB of DDR4 memory per system node and up to 64 TB per system
► 8 PCIe Gen 5 slots per system node and a maximum of 32 PCIe Gen 5 slots per system
► PCIe Gen 1, Gen 2, Gen 3, Gen 4, and Gen 5 adapter cards supported in system nodes
► Up to 4 PCIe Gen 3 4U I/O expansion drawers per system node, providing a maximum of 48 additional PCIe Gen 3 slots
► Up to 192 PCIe Gen 3 slots using 16 PCIe Gen 3 I/O expansion drawers per system
► More than 4,000 directly attached SAS HDDs or SSDs through EXP24SX SFF drawers
► System control unit, which provides redundant Flexible Service Processors and support for the operations panel, the system VPD, and an externally attached DVD

The massive computational power, exceptional system capacity, and unprecedented scalability of the Power E1080 server hardware are complemented by unique enterprise-class firmware and system software capabilities and features. The IBM Power enterprise platform offers the following important characteristics and features:

► Support for IBM AIX, IBM i, and Linux operating system environments
► Innovative dense math engine that is integrated in each Power10 processor core to accelerate AI inferencing workloads
► Optimized encryption units that are implemented in each Power10 processor core
► Dedicated data compression engines that are provided by the Power10 processor technology
► Hardware- and firmware-assisted and enforced security that provides trusted boot and pervasive memory encryption support
► Up to 1,000 virtual machines (VMs) or logical partitions (LPARs) per system
► Dynamic LPAR support to modify available processor and memory resources according to workload, without interruption of the business
► Capacity on demand (CoD) processor and memory options to help respond more rapidly and seamlessly to changing business requirements and growth
► IBM Power System Private Cloud Solution with Dynamic Capacity featuring Power Enterprise Pools 2.0, which supports unsurpassed enterprise flexibility for real-time workload balancing, system maintenance, and operational expenditure cost management

Table 1-1 compares important technical characteristics of the Power E1080 server with those of the Power System E980 server, based on IBM POWER9™ processor-based technology.


a. CAPI designates the coherent accelerator processor interface technology and OpenCAPI designates the open coherent accelerator processor interface technology. For more information about architectural specifications and the surrounding system, see this web page.
b. NVMe designates the Non-Volatile Memory Express interface specification under supervision
of the NVM Express consortium: https://nvmexpress.org.
c. SMP designates the symmetric multiprocessing architecture, which is used to build monolithic servers out of multiple processor entities.
d. Time domain reflectometry (TDR) allows the server to actively detect faults in cables and locate discontinuities in a connector.

Figure 1-1 shows a 4-node Power E1080 server that is mounted in an IBM rack. Each system node is cooled by a set of five fans, which are arranged side-by-side in one row. The cooling assemblies show through the front door of the rack.

Figure 1-1 Power E1080 4-node server mounted in S42 rack with #ECRT door
