1.5.7 Power supply features

Each Power E1080 system node has four 1950 W bulk power supply units operating at 240 V. These power supply units are a default configuration on every Power E1080 system node. The four units per system node do not have an associated feature code and are always auto-selected by the IBM configurator when a new configuration task is started.

Four power cords from the power distribution units (PDUs) drive these power supplies. The cords connect to four C13/C14 type receptacles on the linecord conduit at the rear of the system. The linecord conduit sources power from the rear and connects to the power supply units at the front of the system.

The system design provides N+2 redundancy for system bulk power, which allows the system to continue operation with any two of the four power supply units functioning. A failed unit must remain physically installed until a replacement power supply unit is available.
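
As a minimal illustration of this N+2 scheme, the following Python sketch restates the rule from the text above (the constants and function are illustrative only, not part of any IBM tool):

   # Minimal sketch of the N+2 bulk power redundancy rule described above.
   # Per the text: four power supply units are installed per system node,
   # and any two functioning units are sufficient to keep the node running.
   INSTALLED_UNITS = 4
   REQUIRED_UNITS = 2  # the "N" that the two extra units protect

   def node_stays_operational(failed_units: int) -> bool:
       """Return True if enough power supply units remain functional."""
       return INSTALLED_UNITS - failed_units >= REQUIRED_UNITS

   for failed in range(INSTALLED_UNITS + 1):
       print(f"{failed} failed unit(s): operational={node_stays_operational(failed)}")
   # Zero, one, or two failures leave the node operational; a third does not.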

The power supply units are hot-swappable, so a failed unit can be replaced without system interruption. The units are located at the front of the system, which simplifies service.

Figure 1-7 shows the power supply units and their physical locations marked as E1, E2, E3, and E4 in the system.

Figure 1-7 Power supply units

1.5.8 System node PCIe interconnect features

Each system node provides 8 PCIe Gen 5 hot-plug enabled slots; therefore, a 2-node system provides 16 slots, a 3-node system provides 24 slots, and a 4-node system provides 32 slots.

Up to four I/O expansion drawers (feature #EMX0) can be connected per system node to achieve the slot capacity that is listed in Table 1-14.

Table 1-14 PCIe slot availability for different system node configurations

Each I/O expansion drawer consists of two fanout modules (feature #EMXH), each providing six PCIe slots. Each fanout module connects to the system by using a pair of CXP cable features, which are listed in Table 1-15.

Table 1-15 Optical CXP cable feature

The RPO-only cables in this list cannot be ordered new or as an MES upgrade; they are available only when migrating from a source system. Select a longer-length feature code for an inter-rack connection between the system node and the expansion drawer.

Each pair of CXP optical cables connects to the system node through one 2-port PCIe optical cable adapter (feature #EJ24), which is placed in the CEC.

Both the CXP optical cable pairs and the optical cable adapters are concurrently maintainable. Therefore, careful balancing of I/O, with adapters assigned across redundant #EMX0 expansion drawers and different system nodes, can ensure high availability for the I/O resources that are assigned to partitions.
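
The slot capacities in Table 1-14 follow directly from the figures above: eight PCIe Gen 5 slots per system node, up to four #EMX0 drawers per node, and two #EMXH fanout modules of six slots each per drawer. A minimal Python sketch of that arithmetic (the function name is illustrative, not an IBM tool):

   # Sketch of the PCIe slot arithmetic behind Table 1-14,
   # using the per-node figures stated in the text above.
   INTERNAL_SLOTS_PER_NODE = 8   # PCIe Gen 5 slots in each system node
   MAX_DRAWERS_PER_NODE = 4      # maximum #EMX0 I/O expansion drawers per node
   SLOTS_PER_DRAWER = 2 * 6      # two #EMXH fanout modules x six slots each

   def max_pcie_slots(nodes: int) -> tuple[int, int, int]:
       """Return (internal, expansion, total) PCIe slot counts for a node count."""
       internal = nodes * INTERNAL_SLOTS_PER_NODE
       expansion = nodes * MAX_DRAWERS_PER_NODE * SLOTS_PER_DRAWER
       return internal, expansion, internal + expansion

   for nodes in range(1, 5):
       internal, expansion, total = max_pcie_slots(nodes)
       print(f"{nodes} node(s): {internal} internal + {expansion} expansion = {total}")
   # A fully configured 4-node system: 32 internal + 192 expansion = 224 slots.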

For more information about internal buses and the architecture of internal and external I/O subsystems, see 2.5, “Internal I/O subsystem” on page 83.

1.1.2 Expansion drawers and storage enclosures

Capacity can be added to your system by using expansion drawers and storage enclosures.

An optional 19-inch PCIe Gen 3 4U I/O expansion drawer provides 12 PCIe Gen 3 slots. The I/O expansion drawer connects to the system node through a pair of PCIe x16-to-CXP converter cards that are housed in the system node. Each system node can support up to four I/O expansion drawers, for a total of 48 PCIe Gen 3 slots per node. A fully configured Power E1080 can support a maximum of 16 I/O expansion drawers, which provide a total of 192 PCIe Gen 3 slots.

An optional EXP24SX SAS storage enclosure provides 24 2.5-inch small form factor (SFF) serial-attached SCSI (SAS) bays. It supports up to 24 hot-swap hard disk drives (HDDs) or solid-state drives (SSDs) in only two rack units (2U) of space in a 19-inch rack. The EXP24SX is connected to the Power E1080 server by using SAS adapters that are plugged into system node PCIe slots or I/O expansion drawer slots.
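
As a rough sizing illustration, each EXP24SX contributes 24 bays in 2U of rack space, so the enclosure count for a target drive count can be estimated as in the following Python sketch (the helper is hypothetical, not an IBM sizing tool):

   import math

   # Rough EXP24SX sizing sketch based on the figures above:
   # each enclosure provides 24 SFF SAS bays in 2U of 19-inch rack space.
   BAYS_PER_ENCLOSURE = 24
   RACK_UNITS_PER_ENCLOSURE = 2

   def enclosures_needed(drives: int) -> int:
       """Smallest number of EXP24SX enclosures that holds the given drive count."""
       return math.ceil(drives / BAYS_PER_ENCLOSURE)

   target = 4000  # this chapter quotes "over 4,000" directly attached drives
   n = enclosures_needed(target)
   print(f"{target} drives -> {n} enclosures occupying {n * RACK_UNITS_PER_ENCLOSURE}U")
   # 4000 drives -> 167 enclosures occupying 334U (spread across multiple racks)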

For more information about enclosures and drawers, see 1.6, “I/O drawers” on page 26.

For more information about IBM storage products, see this web page.

1.1.3 Hardware at-a-glance

The Power E1080 server provides the following hardware components and characteristics; the per-system maxima are the per-node figures scaled by the number of system nodes (see the sketch after this list):

► 10-, 12-, or 15-core Power10 processor chips that are packaged in a single-chip module per socket
► One, two, three, or four system nodes with four Power10 processor sockets each
► Redundant clocking in each system node
► Up to 60 Power10 processor cores per system node and up to 240 per system
► Up to 16 TB of DDR4 memory per system node and up to 64 TB per system
► 8 PCIe Gen 5 slots per system node and a maximum of 32 PCIe Gen 5 slots per system
► PCIe Gen 1, Gen 2, Gen 3, Gen 4, and Gen 5 adapter cards supported in the system nodes
► Up to four PCIe Gen 3 4U I/O expansion drawers per system node, providing a maximum of 48 additional PCIe Gen 3 slots
► Up to 192 PCIe Gen 3 slots using 16 PCIe Gen 3 I/O expansion drawers per system
► Over 4,000 directly attached SAS HDDs or SSDs through EXP24SX SFF drawers
► System control unit, which provides redundant Flexible Service Processors and support for the operations panel, the system VPD, and an externally attached DVD drive
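
A quick Python sanity check of the per-system maxima, using the per-node figures restated from the list above (shown for illustration only):

   # Per-node maxima from the list above; a fully configured system has 4 nodes.
   MAX_NODES = 4
   CORES_PER_NODE = 60          # 4 sockets x 15-core Power10 single-chip modules
   MEMORY_TB_PER_NODE = 16      # DDR4
   GEN5_SLOTS_PER_NODE = 8

   print(f"cores:  {MAX_NODES * CORES_PER_NODE}")          # 240 cores
   print(f"memory: {MAX_NODES * MEMORY_TB_PER_NODE} TB")   # 64 TB
   print(f"slots:  {MAX_NODES * GEN5_SLOTS_PER_NODE}")     # 32 PCIe Gen 5 slots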

The massive computational power, exceptional system capacity, and scalability of the Power E1080 server hardware are unlocked by unique enterprise-class firmware and system software capabilities. The IBM Power enterprise platform offers the following important characteristics and features:

► Support for IBM AIX, IBM i, and Linux operating system environments
► Innovative dense math engine that is integrated into each Power10 processor core to accelerate AI inferencing workloads
► Optimized encryption units that are implemented in each Power10 processor core
► Dedicated data compression engines that are provided by the Power10 processor technology

► Hardware- and firmware-assisted and enforced security that provides trusted boot and pervasive memory encryption support
► Up to 1,000 virtual machines (VMs) or logical partitions (LPARs) per system
► Dynamic LPAR support to modify available processor and memory resources according to workload, without business interruption
► Capacity on Demand (CoD) processor and memory options to help respond more rapidly and seamlessly to changing business requirements and growth
► IBM Power System Private Cloud Solution with Dynamic Capacity featuring Power Enterprise Pools 2.0, which supports unsurpassed enterprise flexibility for real-time workload balancing, system maintenance, and operational expenditure cost management

Table 1-1 compares important technical characteristics of the Power E1080 server with those of the Power System E980 server, which is based on IBM POWER9™ processor technology.

a. CAPI designates the coherent accelerator processor interface technology, and OpenCAPI designates the open coherent accelerator processor interface technology. For more information about architectural specifications and the surrounding system, see this web page.
b. NVMe designates the Non-Volatile Memory Express interface specification under supervision of the NVM Express consortium: https://nvmexpress.org.
c. SMP designates the symmetric multiprocessing architecture, which is used to build monolithic servers out of multiple processor entities.
d. Time domain reflectometry (TDR) allows the server to actively detect faults in cables and locate discontinuities in a connector.

Figure 1-1 shows a 4-node Power E1080 server that is mounted in an IBM rack. Each system node is cooled by a set of five fans, which are arranged side-by-side in one row. The cooling assemblies are visible through the front door of the rack.

Figure 1-1 Power E1080 4-node server mounted in S42 rack with #ECRT door
