
During this temporary transfer, the CBU system's internal records of its total number of IBM i processor and user license entitlements are not updated, and you may see IBM i license noncompliance warning messages from the CBU system. If your primary or CBU machine is sold or withdrawn from use, any temporary entitlement transfers must be returned to the machine on which they were originally acquired.

For CBU registration and further information, visit http:

The minimum defined initial order configuration, if no choice is made, when AIX or Linux is the primary operating system is as follows: feature EPX0, 6-core 3. A Fibre Channel adapter must be ordered if this feature is selected. AIX is small-tier licensing. A similar minimum initial order configuration is defined when IBM i is the primary operating system. A DVD device is needed for normal system operations, but it is not required on all systems.

IBM i operating system performance: IBM i is tier 10 licensing, which has user-based licensing and does not include the features.


All processor cores must be activated. The following defines the allowed quantities of processor activation entitlements: a maximum of four processor activation code features (EPYK) is required; a maximum of six processor activation code features (EPY0) is required; a maximum of eight processor activation code features (EPY6) is required. Memory upgrades require memory pairs. If the initial order was 16 GB of memory, the original 16 GB of memory must be paired as part of the upgrade.

Plans for future memory upgrades should be taken into account when deciding which memory feature size to use at the time of the initial system order.

Power cords

Two power cords are required. The Power S supports power cord 4. Refer to the feature listing for other options.


It is designed to meet many entry client requirements with its maximum of 64 GB of memory, eight SAS drives in the system unit, and seven PCIe slots. The 4-core Power S uses a 3. Two slots are x16, full-height and full-length.

Five are x8 Gen 3 full-height, half-length slots. The x16 slots can provide up to twice the bandwidth of an x8 slot because they offer twice as many PCIe lanes. One of the x8 PCIe slots is used for this required adapter, identified as the C10 slot.
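To put the x16-versus-x8 comparison above in rough numbers, the sketch below estimates nominal PCIe Gen3 unidirectional bandwidth per slot width. The 8 GT/s per-lane rate and 128b/130b encoding are general PCIe Gen3 characteristics assumed for illustration, not figures taken from this document.

```python
# Nominal PCIe Gen3 bandwidth estimate. The 8 GT/s per-lane rate and
# 128b/130b encoding are general PCIe Gen3 characteristics (assumed here,
# not taken from this document).
GT_PER_SEC = 8.0          # giga-transfers per second per lane
ENCODING = 128 / 130      # 128b/130b line-encoding efficiency
BITS_PER_BYTE = 8

def slot_bandwidth_gbps(lanes: int) -> float:
    """Approximate unidirectional bandwidth in GB/s for a Gen3 slot."""
    return GT_PER_SEC * ENCODING * lanes / BITS_PER_BYTE

print(f"x8  slot: ~{slot_bandwidth_gbps(8):.1f} GB/s per direction")
print(f"x16 slot: ~{slot_bandwidth_gbps(16):.1f} GB/s per direction")
```

Under these assumptions an x8 slot comes out at roughly 7.9 GB/s and an x16 slot at roughly 15.8 GB/s per direction, which is where the "up to twice the bandwidth" statement comes from.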

These servers are smarter about energy efficiency for cooling the PCIe adapter environment. In contrast, POWER7 servers required the user to enter a "non-acoustic mode" command to speed up the fans. Note that faster fans increase the sound level of the server. IBM is also introducing a gzip acceleration adapter, feature EJ. This PCIe adapter incorporates the latest in FPGA technology to provide significant performance improvements for customers running workloads such as IBM WebSphere, which require frequent gzip compressions and decompressions.

This feature is particularly effective for workloads requiring the transfer of large buffers. Utilizing this adapter can reduce both storage requirements and network congestion in a customer's environment. This feature is supported only in AIX.
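As a purely software illustration of the kind of gzip work such an adapter offloads, the sketch below compresses a large in-memory buffer with the standard Python gzip module and reports the size reduction. It does not use or represent the adapter's own programming interface, which is not described here.

```python
# Software-only illustration of the kind of gzip work the adapter offloads:
# compress a large, repetitive in-memory buffer and report the reduction.
# Standard-library gzip is used; this is not the adapter's API.
import gzip
import os

data = os.urandom(1024) * 4096                     # ~4 MB of repetitive sample data
compressed = gzip.compress(data, compresslevel=6)  # CPU-intensive without offload

ratio = len(data) / len(compressed)
print(f"original: {len(data)} bytes, compressed: {len(compressed)} bytes "
      f"(~{ratio:.1f}x smaller)")
```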

SAS bays and storage backplane options

Three backplane options provide a great deal of flexibility and capability.

One of these three backplanes must be configured. The drives are designated SFF-3, and all SFF-3 bays support concurrent maintenance, or "hot plug", capability. Internally, 13 (no cache) or 16 (with cache) 6 Gb SAS ports are implemented and provide plenty of bandwidth.

By optionally adding the EJ0S Split Backplane feature, a second integrated SAS controller with no write cache is provided, and the twelve SFF-3 bays are logically divided into two sets of six bays. Each SAS controller independently runs one of the six-bay sets of drives.

The dual SAS controllers provide both performance and protection advantages. Patented Active-Active capabilities enhance performance when there is more than one array configured. Each of the dual controllers has access to all the backplane SAS bays and can back up the other controller if a problem occurs with it.

Each controller mirrors the other's write cache, providing redundancy protection. Integrated flash memory for the write cache content provides protection against electrical power loss to the server and avoids the need for write cache battery protection and battery maintenance.
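As a conceptual illustration of this mirrored-write-cache arrangement, the sketch below models two controllers that copy each cached write to their partner before acknowledging it, so the survivor can complete outstanding writes after a failure. This is a simplified Python sketch of the idea, not the controller firmware or its actual interfaces.

```python
# Conceptual sketch only (not controller firmware): each controller keeps its
# own write cache and mirrors every cached write to its partner before the
# write is acknowledged, so the partner can finish outstanding writes if the
# other controller fails.
class Controller:
    def __init__(self, name: str):
        self.name = name
        self.local_cache = {}    # this controller's pending writes
        self.mirror_cache = {}   # copy of the partner's pending writes
        self.partner = None

    def write(self, block: int, data: bytes) -> str:
        self.local_cache[block] = data
        if self.partner is not None:
            self.partner.mirror_cache[block] = data   # mirror before ack
        return "ack"

    def take_over(self) -> dict:
        """On partner failure, flushable copy of the partner's cached writes."""
        return dict(self.mirror_cache)

a, b = Controller("A"), Controller("B")
a.partner, b.partner = b, a
a.write(42, b"payload")
print(b.take_over())   # {42: b'payload'} -- B can complete A's outstanding write
```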

All three of these backplane options can offer different drive protection options: RAID 5 requires a minimum of three drives of the same capacity, RAID 6 requires a minimum of four drives of the same capacity, and RAID 10 requires a minimum of two drives. If the client needs a change after the server is already installed, the backplane option can be changed.

Using two 6-slot fan-out modules per drawer provides a maximum of 48 PCIe slots per system node.

This enables a lower-cost configuration if fewer PCIe slots are required. Thus a system node supports the following half-drawer options: Because there is a maximum of four EMX0 drawers per node, a single system node cannot have more than four half drawers.

A server with more system nodes can support more half drawers, up to four per node. PCIe Gen3 drawers can be concurrently added to the server at a later time.
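The sketch below restates the drawer limits above as simple arithmetic. The six slots per fan-out module, two modules per full drawer, and four-drawer-per-node limit come from the text; the node and drawer counts passed in are example inputs.

```python
# Simple arithmetic behind the drawer limits stated above. The six slots per
# fan-out module, two modules per full drawer, and four-drawer-per-node limit
# come from the text; the node and drawer counts passed in are example inputs.
SLOTS_PER_FANOUT_MODULE = 6
FANOUT_MODULES_PER_FULL_DRAWER = 2   # a half drawer has just one module
MAX_DRAWERS_PER_NODE = 4

def max_expansion_slots(system_nodes: int, drawers_per_node: int) -> int:
    drawers = min(drawers_per_node, MAX_DRAWERS_PER_NODE)
    return (system_nodes * drawers *
            FANOUT_MODULES_PER_FULL_DRAWER * SLOTS_PER_FANOUT_MODULE)

print(max_expansion_slots(system_nodes=1, drawers_per_node=4))  # 48 slots
print(max_expansion_slots(system_nodes=2, drawers_per_node=4))  # 96 slots
```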

The drawer being added can have either one or two fan-out modules. Note that adding a second fan-out module to a half-full drawer does require scheduling downtime. The top port of the fan-out module must be cabled to the top port of the EJ07 adapter. Likewise, the bottom two ports must be cabled together.

This can help provide cabling for higher availability configurations. When this cable is ordered with a system in a rack specifying IBM Plant integration, IBM Manufacturing will ship SAS cables longer than 3 meters in a separate box and will not attempt to place the cable in the rack. A BSC is used to house the full-high adapters that go into the fan-out module slots. A feature number for ordering additional full-high BSCs is not required and has not been announced. Slot filler panels are included for empty bays when initially shipped.

It uses only 2 EIA of space in a 19-inch rack. To maximize configuration flexibility and space utilization, the system node does not have integrated SAS bays or integrated SAS controllers.

To further reduce possible single points of failure, EXP24S configuration rules consistent with previous Power Systems are used. Protecting the drives is highly recommended, but not required for other operating systems. All Power operating system environments that are using SAS adapters with write cache require the cache to be protected by using pairs of adapters. The order also changes the feature number so that IBM configuration tools can better interpret what is required.

Clients booting from a disk or SSD that is not on a storage area network (SAN) have a specify option when ordering their server to better reflect their configuration in IBM configuration tools. This enables a client upgrading with the same serial number, or migrating to a new serial-number system, to avoid buying an additional EXP24S.

Racks

The Power EC server is designed to fit a standard 19-inch rack. Clients can choose to place the server in other racks if they are confident those racks have the strength, rigidity, depth, and hole-pattern characteristics that are needed.

Clients should work with IBM Service to determine the appropriateness of other racks. The Power EC rails can adjust their depth to fit racks of different depths. An initial system order is placed in a T42 rack.

A same-serial-number model upgrade (MES) is placed in an equivalent feature rack. This is done to ease and speed client installation, provide a more complete and higher-quality environment for IBM Manufacturing system assembly and testing, and provide a more complete shipping package. Clients who do not want this rack can remove it from the order, and IBM Manufacturing will then remove the server from the rack after testing and ship the server in separate packages without a rack.

Use the factory-deracking feature ER21 on the order to do this. Five rack front door options are supported for the 42U enterprise rack (T42). The front trim kit is also supported. The Power logo rack door is not supported. When considering an acoustic door, note that the majority of the acoustic value is provided by the front door because the servers' fans are mostly located in the front of the rack.

Not including a rear acoustic door saves some floor space, which may make it easier to use the optional 8-inch expansion feature on the rear of the rack.

Leave the bottom 2U of the rack open for cable management when below-floor cabling is used. Likewise, if overhead cabling is used, it is strongly recommended the top 2U be left open for cable management. If clients are using both overhead and below-floor cabling, leaving 2U open on both the top and bottom of the rack is a good practice.

Rack configurations placing equipment in these 2U locations can be more difficult to service if there are a lot of cables running by them in the rack. The system node and system control unit must be immediately physically adjacent to each other in a contiguous space. The cables connecting the system control unit and the system node are built to very specific lengths.

In a two-node configuration, system node 1 is on top, the system control unit is in the middle, and system node 2 is on the bottom. Use specify feature ER16 to reserve 5U of space in the rack for a future system node and avoid the work of shifting equipment in the rack in the future.

In a four-node configuration, system node 4 is on the top, then node 1 below it, then the system control unit, then node 2, and finally node 3 on the bottom. With the 2-meter T42 rack, a rear rack extension (feature ERG0) provides space to hold cables on the side of the rack and keep the center area clear for cooling and service access. Approximately 64 short-length SAS cables per side of a rack, or around 50 longer-length, thicker SAS cables per side, is a good rule of thumb.

The feature ERG0 extension can be worthwhile even with a smaller number of cables because the extra space it provides eases cable management. Multiple service personnel are required to manually remove or insert a system node drawer into a rack, given its dimensions, weight, and content. To avoid any delay in service, it is recommended that the client obtain an optional lift tool (EB2Z).

The EB2Z lift tool provides a hand crank to lift and position heavy drawers, and a single system node is heavy enough to require it. Use the 42U enterprise rack feature for this order. After the rack with Expansion Drawers is delivered to the client, the client is allowed to rearrange the PDUs from horizontal to vertical.

However, the IBM configurator tools will continue to assume that the PDUs are placed horizontally for the purpose of calculating the free space still available in the rack for additional future orders. This is done to aid cable routing. Each horizontal PDU occupies 1U. Vertically mounting the PDUs to save rack space can cause cable-routing challenges and interfere with optimal service access. When mounting the horizontal PDUs, it is a good practice to place them almost at the top or almost at the bottom of the rack, leaving 2U or more of space at the very top or very bottom of the opening for cable management.

Mounting a horizontal PDU in the middle of the rack is generally not optimal for cable management. Two possible PDU ratings are supported. Rack-integrated system orders require at least two of either feature. This AC power distribution unit provides twelve C13 power outlets. It receives power through a UTG connector. It can be used for many different countries and applications by varying the PDU-to-Wall Power Cord, which must be ordered separately.

Supported power cords include the following features: the Power Distribution Unit mounts in a 19-inch rack and provides twelve C13 power outlets. The feature has six 16A circuit breakers, with two power outlets per circuit breaker. System units and expansion units must use a power cord with a C14 plug to connect to the feature. One of the following line cords must be used to distribute power from a wall outlet to the feature. It has an attached power cord, so a separate "to-the-wall" power cord is not required or orderable.
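The sketch below illustrates the breaker layout just described (twelve C13 outlets fed by six 16A breakers, two outlets per breaker) with a simple per-breaker load check; the per-outlet current draws are hypothetical values chosen only to demonstrate the check.

```python
# Illustration of the breaker layout described above: twelve C13 outlets fed
# by six 16 A breakers, two outlets per breaker. The per-outlet current draws
# are hypothetical placeholders, not values from this document.
BREAKER_LIMIT_AMPS = 16
OUTLETS_PER_BREAKER = 2
NUM_BREAKERS = 6           # 6 breakers x 2 outlets = 12 C13 outlets

loads_amps = {0: 7.5, 1: 6.0, 2: 9.0, 3: 9.0}   # outlet index -> assumed draw

for breaker in range(NUM_BREAKERS):
    outlets = range(breaker * OUTLETS_PER_BREAKER,
                    (breaker + 1) * OUTLETS_PER_BREAKER)
    draw = sum(loads_amps.get(o, 0.0) for o in outlets)
    status = "ok" if draw <= BREAKER_LIMIT_AMPS else "OVER LIMIT"
    print(f"breaker {breaker}: outlets {list(outlets)}, {draw:.1f} A ({status})")
```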

Use the Power Cord 2. These power cords are different from the ones used on the feature and PDUs. A system node is designed to continue functioning with just two working power supplies. A failed power supply can be hot-swapped but must remain in the system until the replacement power supply is available for exchange. The chunnel carries power from the rear of the system node to the hot-swap power supplies, which are located in the front of the system node where they are more accessible for service.

An alternative to using AC power is DC power, in which case four DC power supplies are used.

Hot-plug options

The following options are hot-plug capable: system node AC power supplies (two functional power supplies must remain installed at all times while the system is operating), the system control unit Op Panel, the system control unit DVD drive, and the UPIC power cables from the system node to the system control unit.

PowerVM enables efficient resource sharing through virtualization, which enables workload consolidation and secure workload isolation, as well as the flexibility to redeploy resources dynamically. Other PowerVM technologies include the following: Migrate from older generation Power servers to the Power EC system.

Use this capability to do the following: Evacuate workloads from a system before performing scheduled maintenance. Move workloads across a pool of different physical resources as business needs shift.

Move workloads away from underutilized machines so that they can be powered off to save on energy and cooling costs. Active Memory Sharing enables memory to be dynamically moved between running partitions for optimal resource usage. PowerVP Virtualization Performance monitor provides real-time monitoring of a virtualized system showing the mapping of VMs to physical hardware.

Active Memory Expansion

Active Memory Expansion is an innovative technology supporting the AIX operating system that enables the effective maximum memory capacity to be much larger than the true physical memory maximum. This can enable a partition to do significantly more work or support more users with the same physical amount of memory.

Similarly, it can enable a server to run more partitions and do more work for the same physical amount of memory. The trade-off of memory capacity for processor cycles can be an excellent choice, but the degree of expansion varies depending on how compressible the memory content is. Tests in IBM laboratories using sample workloads showed excellent results for many workloads in terms of memory expansion per additional CPU utilized.
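As a worked example of this trade-off, the sketch below computes the effective memory a partition would see for an assumed expansion factor and CPU overhead. Both input values are hypothetical and in practice depend on how compressible the partition's memory content is.

```python
# Worked example of the memory-for-CPU trade-off. The expansion factor and
# CPU overhead are hypothetical inputs chosen only to show the arithmetic;
# real values depend on how compressible the partition's memory content is.
physical_memory_gb = 64        # true physical memory assigned to a partition
expansion_factor = 1.5         # hypothetical Active Memory Expansion factor
cpu_overhead_fraction = 0.05   # hypothetical extra CPU spent on compression

effective_memory_gb = physical_memory_gb * expansion_factor
gained_gb = effective_memory_gb - physical_memory_gb

print(f"effective memory: {effective_memory_gb:.0f} GB "
      f"({gained_gb:.0f} GB gained for roughly "
      f"{cpu_overhead_fraction:.0%} additional CPU)")
```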

You have a great deal of control over Active Memory Expansion usage. Control parameters set the amount of expansion desired in each partition to help control the amount of CPU used by the Active Memory Expansion function.


An IPL is required for the specific partition that is turning memory expansion on or off. When expansion is turned on, monitoring capabilities are available in standard AIX performance tools such as lparstat, vmstat, topas, and svmon.

A planning tool is included with AIX, enabling you to sample actual workloads and estimate both how expandable the partition's memory is and how much CPU resource is needed. Any Power Systems model can run the planning tool.
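As a rough software analogue of what such a planning estimate involves, the sketch below compresses sample data and derives a naive expansion-factor estimate from the compression ratio. It is only an illustration; it is not the AIX planning tool and does not use its interfaces.

```python
# Rough, hypothetical analogue of a planning estimate: compress a sample of
# data and derive a naive expansion-factor estimate from the compression
# ratio. This is not the AIX planning tool and does not use its interfaces.
import os
import zlib

def estimated_expansion_factor(sample: bytes) -> float:
    compressed = zlib.compress(sample, 6)
    return len(sample) / max(len(compressed), 1)

# Text-like data tends to compress well; random (or already-compressed) data
# barely compresses at all, so its achievable expansion is close to 1x.
texty = b"customer order record 00042, status=open, region=EMEA\n" * 20_000
randomish = os.urandom(len(texty))

print(f"text-like sample: ~{estimated_expansion_factor(texty):.1f}x")
print(f"random sample:    ~{estimated_expansion_factor(randomish):.1f}x")
```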

In addition, a one-time trial of Active Memory Expansion is available to enable more exact memory expansion and CPU measurements. You can request the trial at the Power Systems Capacity on Demand web page. Active Memory Expansion is enabled by chargeable hardware feature EM82, which can be ordered with the initial order of the system or as an MES order.

A software key, which is applied to the system node, is provided when the enablement feature is ordered. An IPL is not required to enable the system node. The key is specific to an individual system and is permanent; it cannot be moved to a different server. Normal licensing requirements apply.

Active Memory Mirroring

The Power EC server offers the Active Memory Mirroring for Hypervisor feature, which is designed to prevent a system outage in the event of an uncorrectable error in memory being used by the system hypervisor.

For the Power EC: Processor feature EPBA 4.