IPMI is an open standard hardware management specification that defines a set of common interfaces to hardware and firmware. System administrators can use IPMI to monitor and manage a system's health. IPMI operates independently of the operating system and thus allows management of systems, even remotely, when no operating system is loaded or the system is powered off. IPMI may also work in conjunction with systems management software to provide enhanced functionality. IPMI helps administrators monitor system sensors such as temperatures, fan speeds, voltages and memory status. Some products allow thresholds to be set for each sensor and pre-set actions to be executed in case of an event. Current IPMI implementations can send alerts to remote clients via LAN, serial or serial-over-LAN connections. Sometimes the management interface is shared with the on-board LAN port, and sometimes a dedicated LAN port is available. IPMI implementations also provide remote text console redirection via one of the supported interfaces, and some products add a remote graphical console.
IPMI implementations vary across hardware vendors, but the structure and format of the interfaces are consistent. This allows a single, uniform management interface in a mixed, multi-vendor environment. Using IPMI-compliant devices helps lower management costs by eliminating additional training, effort and software costs. Many open-source tools are available to manage IPMI-compliant devices. More information is available at the Intel website.
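As an illustration, the sketch below uses the open-source ipmitool utility to read sensor data from a BMC over the LAN interface. It is a minimal example, not a supported tool: the host address and credentials are placeholders, and it assumes ipmitool is installed and the BMC's LAN channel is configured.

```python
import subprocess

# Placeholder BMC address and credentials -- substitute your own.
BMC_HOST = "192.168.1.100"
BMC_USER = "admin"
BMC_PASS = "password"

def read_sensors():
    """Ask the BMC for its sensor table and return (name, value, units, status) tuples."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS, "sensor", "list"],
        capture_output=True, text=True, check=True).stdout
    sensors = []
    for line in out.splitlines():
        # ipmitool prints one sensor per line, with '|'-separated columns.
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 4:
            sensors.append(tuple(fields[:4]))
    return sensors

if __name__ == "__main__":
    for name, value, units, status in read_sensors():
        print(f"{name:24} {value:>12} {units:12} {status}")
```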
What models of HPC Systems servers and workstations support IPMI?
All models of HPC Systems dual CPU and quad CPU servers and workstations are available with IPMI support.
KVM over IP is similar to remote graphical console access software such as PCAnywhere or Microsoft Remote Desktop Connection. The main benefit of KVM over IP is that it allows the administrator complete control over the system from anywhere in the world with an internet connection. KVM over IP is ideal for system administrators because it is hardware based, requiring no software to be installed on the remote system. Instead, the IPMI BMC (Baseboard Management Controller) obtains video data directly from the graphics chip using the industry-standard DVI bus, compresses and converts it to IP packets, and sends it over an Ethernet link to a remote console application that unpacks and reconstitutes the dynamic graphical image. KVM over IP is available during the entire boot process, including BIOS.
What models of HPC Systems servers and workstations support KVM over IP using the IPMI card?
Currently, it is supported on the E1204-SAS and SCSI models, the E2208-SAS and SCSI models, and the E1404W. It is also available on select models by special request.
Serial Attached SCSI (SAS) is the next-generation bus technology designed for connecting hard disks and other secondary storage devices, such as optical and tape devices, to the computer system. SAS will replace the aging SCSI technology in server systems and high-end storage for corporate and enterprise markets. SAS devices use serial links that operate at higher speeds and consume less power than SCSI busses. The current SAS revision supports data transfers of up to 3 Gb/s. SAS uses the existing SCSI protocol and commands over serial links, and is backwards compatible with SATA devices. Tagged Command Queuing (TCQ) technology, featured on some SAS products, allows for high-performance SAS disks. SAS supports a higher number of devices (disks) than SCSI busses without loss of performance, which allows for JBODs and RBODs with a greater number of disks. Since SAS is based on point-to-point, full-duplex links, expanders can be used to connect multiple SAS devices to a single port. SAS devices support hot-plug capability and can be used with external JBODs. SAS devices are generally well suited for business-critical, reliable and highly available environments. Some SAS products can also be used in certain cluster configurations with a shared-disk model. More information is available at www.t10.org.
Serial ATA (SATA) is the next-generation bus technology designed for connecting hard disks, optical devices and other secondary storage devices to the computer system. SATA will replace the aging ATA technology, primarily in personal computers and other computing systems. SATA is designed only for direct attached storage (DAS) devices. SATA devices use serial links that operate at higher speeds and consume less power than ATA busses. SATA, currently in its 2.0 revision, supports up to 3 Gb/s data transfers per device. SATA 1.0 devices work with SATA 2.0 busses and vice versa with full compatibility; however, as one could infer, SATA 2.0 devices connected to a SATA 1.0 bus will perform at SATA 1.0 speeds. Some SATA devices support hot-plug capability and can be used with external JBODs, and some SATA connectors provide an upgrade path to higher-performance SAS devices. Native Command Queuing (NCQ) technology, featured on some SATA II products, allows for high-performance SATA disks. SATA devices are generally well suited for high-volume, non-critical use cases. SATA disks provide exceptional value, offering high storage volumes at low cost, with capacities of up to 1.5 TB per disk. For example:
* For a large compute cluster, SATA disks are ideal for local node storage.
* For a home computer, SATA disks are ideal, providing high volume at low cost.
More information is available at www.sata-io.org/esata.asp
PCI Express (PCIe) is the next-generation system bus technology. Built on a serial interface architecture, PCIe provides faster, cooler-running, lower-power interfaces and allows for higher throughput than the traditional PCI and PCI-X interfaces. Next-generation I/O technologies like InfiniBand, 4 Gbps Fibre Channel, 10 Gb Ethernet and high-end video cards can work at their full potential only on PCIe expansion busses. PCIe busses are made of individual data paths known as lanes. The speed of a PCIe bus is indicated by its width, or number of lanes, generally written x1, x2, x4, x8 and x16 and read as "by 1, by 2, by 4, by 8 and by 16". A card with fewer lanes can be used in any expansion slot with a higher number of lanes; for example, a x4 card can be used in a x16 slot, but not vice versa. Sometimes PCIe busses are configured for fewer lanes than the physical slot itself. For example, a slot can be x8 wide but configured for x4 width, so even if a x8 card is placed in the slot, it will function at x4 speed. This allows full flexibility in utilizing all available expansion slots on your system. PCI and PCI-X are shared-bus technologies and cause contention among PCI devices attached to the same bus. Since PCIe is point-to-point and full duplex, even after fully populating all the available slots, all slots are guaranteed to perform at their peak, unlike PCI and PCI-X busses.
| Bus system  | Bit width / lanes | Clock speed  | Bandwidth                      |
|-------------|-------------------|--------------|--------------------------------|
| PCI         | 32 bits           | 33 MHz       | 1 Gbps (uni-directional)       |
| PCI-X       | 32 / 64 bits      | 66 - 133 MHz | 2 - 8.5 Gbps (uni-directional) |
| PCI Express | 1 - 16 lanes      | 2.5 GHz      | 2.5 - 40 Gbps (bi-directional) |
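As a quick sanity check on the table, the snippet below computes first-generation PCIe bandwidth per direction from the lane count. The 8b/10b encoding overhead (8 data bits carried per 10 bits on the wire) is an assumption drawn from the PCIe 1.x specification, not from the table itself.

```python
# First-generation PCI Express: 2.5 GT/s per lane, per direction.
RAW_GBPS_PER_LANE = 2.5
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b line encoding

for lanes in (1, 2, 4, 8, 16):
    raw = lanes * RAW_GBPS_PER_LANE
    usable = raw * ENCODING_EFFICIENCY
    print(f"x{lanes:<2}: {raw:5.1f} Gbps raw, ~{usable:4.1f} Gbps usable "
          f"({usable / 8:.2f} GB/s) per direction")
```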
At a software level, PCI, PCI-X and PCI Express are compatible, so existing drivers and interfaces do not require extensive rewrites for new PCI Express cards using similar hardware. More information is available at the PCI-SIG website.
RAID is an acronym for Redundant Array of Independent (or Inexpensive) Disks. Fundamentally, RAID combines multiple hard disks into a single logical unit. This can offer fault tolerance and/or higher throughput than a single hard drive or a group of independent hard drives. RAID can provide real-time data recovery when a hard drive fails, increasing system uptime and network availability while protecting against loss of data. Multiple drives working together can also increase system performance.
JBOD or "Just a Bunch Of Disks":
JBOD, or spanning of disks, is not one of the numbered RAID levels, but it is a method for combining multiple physical disk drives into a single virtual disk. As the name implies, disks are merely concatenated together, end to beginning, so they appear to be a single large disk. Note that some RAID controllers use the term JBOD to refer to configuring drives without any RAID features, in which case each drive shows up separately in the OS; that usage is not the same as the concatenation described above.
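A minimal sketch of how concatenation works: a logical block address is walked across the member disks in order. The disk sizes here are hypothetical and expressed in blocks.

```python
# Three hypothetical disks, sizes in blocks, concatenated end to beginning.
DISK_SIZES = [1000, 2000, 1500]

def locate(lba):
    """Map a logical block address onto (disk_index, block_on_that_disk)."""
    for disk, size in enumerate(DISK_SIZES):
        if lba < size:
            return disk, lba
        lba -= size   # skip past this disk and keep walking
    raise ValueError("LBA beyond end of concatenated volume")

print(locate(500))    # (0, 500)  -- still on the first disk
print(locate(3200))   # (2, 200)  -- past disks 0 and 1, onto the third
```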
RAID 0:
Striped set of at least two disks without parity. RAID 0 provides improved performance and additional storage but no fault tolerance from disk errors or disk failure; any disk failure destroys the array, which becomes more likely with more disks in the array. A single disk failure destroys the entire array because when data is written to a RAID 0 array, the data is broken into "fragments". The number of fragments is dictated by the number of disks in the array. Each of these fragments is written to its respective disk simultaneously on the same sector, which allows the entire chunk of data to be read off the array in parallel, giving this type of arrangement huge bandwidth. When one sector on one of the disks fails, however, the corresponding sector on every other disk is rendered useless because part of the data is now corrupted. RAID 0 does not implement error checking, so any error is unrecoverable. More disks in the array means higher bandwidth, but a greater risk of data loss.
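The striping itself is simple arithmetic. The sketch below, assuming a hypothetical four-disk array and one block per stripe unit, shows how consecutive logical blocks rotate across the disks, which is why every disk holds a fragment of every large chunk of data.

```python
NUM_DISKS = 4   # hypothetical RAID 0 array width

def locate(lba):
    """Map a logical block to (disk_index, offset) under block-level striping."""
    return lba % NUM_DISKS, lba // NUM_DISKS

for lba in range(8):
    disk, offset = locate(lba)
    print(f"logical block {lba} -> disk {disk}, offset {offset}")
```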
RAID 1:
Mirrored set of at least two disks without parity. RAID 1 provides fault tolerance from disk errors and single disk failure. Increased read performance occurs when using a multi-threaded operating system that supports split seeks, with a very small performance reduction when writing. The array continues to operate so long as at least one drive is functioning.
RAID 3 and RAID 4:
Striped set of at least three disks with dedicated parity. RAID 3/4 provides improved performance and fault tolerance similar to RAID 5, but with a dedicated parity disk rather than rotated parity stripes. The dedicated parity disk is a bottleneck for writes, since every write requires updating the parity data. One minor benefit of the dedicated parity disk is that the parity drive can fail and operation will continue without parity and without a performance penalty.
RAID 5:
Striped set of at least three disks with distributed parity. Distributed parity requires all but one drive to be present to operate; a failed drive requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will suffer data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive has been rebuilt onto a replacement drive.
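The parity itself is a byte-wise XOR of the data blocks in a stripe, which is what makes single-drive reconstruction possible. The sketch below, using toy four-byte "blocks", computes the parity and then recovers a lost block from the parity plus the surviving blocks.

```python
from functools import reduce

def xor_blocks(*blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d0, d1, d2 = b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"
parity = xor_blocks(d0, d1, d2)        # written to the parity stripe

# Simulate losing d1: XOR the parity with the surviving data blocks.
recovered = xor_blocks(parity, d0, d2)
assert recovered == d1
print("recovered block:", recovered.hex())
```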
RAID 6:
Striped set of at least four disks with dual distributed parity. RAID 6 provides fault tolerance from two drive failures; the array continues to operate with up to two failed drives. This makes larger RAID groups more practical. RAID 6 is becoming a popular choice for SATA drives as they approach 1 terabyte in size, because the single-parity RAID levels are vulnerable to data loss until the failed drive is rebuilt, and the larger the drive, the longer the rebuild takes. Dual parity gives the array time to rebuild onto a large replacement drive while still being able to sustain another drive failure.
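A quick way to see the capacity trade-off across these levels is to compute the usable space for n identical disks; the figures below assume a hypothetical 8 x 1 TB array.

```python
def usable_tb(level, n, size_tb):
    """Usable capacity for n identical disks of size_tb each."""
    return {
        "RAID 0": n * size_tb,          # striping, no redundancy
        "RAID 1": size_tb,              # n-way mirror keeps one disk's worth
        "RAID 5": (n - 1) * size_tb,    # one disk's worth of parity
        "RAID 6": (n - 2) * size_tb,    # two disks' worth of parity
    }[level]

for level in ("RAID 0", "RAID 5", "RAID 6"):
    print(f"{level}: {usable_tb(level, 8, 1.0):.0f} TB usable from 8 x 1 TB disks")
```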
Nested RAID Levels:
In nested RAID levels one RAID can use another as its basic element, instead of using physical drives. It is instructive to think of these arrays as layered on top of each other, with physical drives at the bottom.
RAID 1+0 (RAID 10):
Mirrored Set + Striped Set. It requires at least four disks and always needs an even number of disks; it provides fault tolerance and improved performance but increases complexity. The array continues to operate with one or more failed drives. The key difference from RAID 0+1 is that RAID 1+0 creates a striped set from a series of mirrored drives.
RAID 0+1:
Striped Set + Mirrored Set. It requires at least four disks and always needs an even number of disks; it provides fault tolerance and improved performance but increases complexity. The array continues to operate with one failed drive. The key difference from RAID 1+0 is that RAID 0+1 creates a second striped set to mirror a primary striped set, and as a result can only guarantee tolerating a single disk loss, whereas 1+0 can sustain multiple drive losses as long as no two failed drives comprise a single mirrored pair.
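The difference in failure tolerance is easy to see by brute force. The sketch below assumes a minimal four-disk layout (two mirrored pairs for 1+0, two striped sets for 0+1) and enumerates every two-drive failure: 1+0 survives four of the six combinations, 0+1 only two.

```python
from itertools import combinations

DISKS = [0, 1, 2, 3]
MIRROR_PAIRS = [(0, 1), (2, 3)]   # RAID 1+0: stripe across mirrored pairs
STRIPE_SETS  = [(0, 1), (2, 3)]   # RAID 0+1: mirror of two striped sets

def raid10_ok(failed):
    # Alive as long as no mirrored pair has lost both members.
    return all(not set(pair) <= failed for pair in MIRROR_PAIRS)

def raid01_ok(failed):
    # Alive as long as at least one striped set is completely untouched.
    return any(not (set(s) & failed) for s in STRIPE_SETS)

for pair in combinations(DISKS, 2):
    failed = set(pair)
    print(f"fail {pair}: RAID 1+0 {'survives' if raid10_ok(failed) else 'dead':8} | "
          f"RAID 0+1 {'survives' if raid01_ok(failed) else 'dead'}")
```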
RAID 5+0: A striped set across distributed-parity RAID 5 arrays. RAID 5+1: A mirrored striped set with distributed parity. Nested RAID levels are usually signified by joining the numbers of the constituent RAID levels into a single number, sometimes with a '+' in between. For example, RAID 10 (or RAID 1+0) conceptually consists of multiple level 1 arrays stored on physical drives, with a level 0 array striped over the level 1 arrays. RAID 0+1 is most often written with the '+' (rather than RAID 01) to avoid confusion with RAID 1. However, when the top array is a RAID 0 (as in RAID 10 and RAID 50), most vendors choose to omit the '+', though RAID 5+0 is more informative.
A hardware implementation of RAID requires, at minimum, a special-purpose RAID controller. On a desktop system, this may be a PCI expansion card or a capability built into the motherboard. In industrial applications, the controller and drives are provided as a stand-alone enclosure. The drives may be IDE/ATA, SATA, SCSI, SAS, Fibre Channel, or a combination thereof. The host system can be directly attached to the controller or, more commonly, connected via a SAN. The controller hardware handles the management of the drives and performs any parity calculations required by the chosen RAID level. Most hardware implementations provide a non-volatile read/write cache which, depending on the I/O workload, will improve performance. Cached RAID controllers are most commonly used in industrial applications. Hardware implementations provide guaranteed performance, add no overhead to the local CPU, and can support many operating systems, as the controller simply presents a logical disk to the operating system.
Hybrid RAID implementations have become very popular with the introduction of inexpensive RAID controllers, implemented using a standard disk controller with the RAID logic in the controller's BIOS extension and an operating system driver. These controllers actually do all calculations in software, not hardware. Like hardware RAID, they are typically proprietary to a given RAID controller manufacturer and typically cannot span multiple controllers. Both hardware and hybrid implementations may support the use of hot spare drives: a pre-installed drive which is used to immediately (and almost always automatically) replace a drive that has failed. This reduces the mean-time-to-repair period during which a second drive failure in the same RAID redundancy group can result in loss of data. It also helps prevent data loss when multiple drives fail in a short period of time, which can happen when all drives in an array have undergone very similar use patterns and experience wear-out failures.
Zero-Channel RAID:
A type of RAID implementation in which a PCI RAID controller uses the on-board SCSI channels of a motherboard to implement a cost-effective hardware RAID solution.
Which is better, software RAID or hardware RAID?
Hardware RAID, by definition, requires a separate controller card often costing several hundred dollars and thus is more expensive than a software RAID solution. However, in its favor, hardware RAID is usually operating system independent, which allows for use in dual-booting scenarios. Furthermore, since all the RAID functions (such as calculating parity) are performed on the RAID controller, there is no noticeable drain on system resources such as CPU or memory. Software RAID does not require additional hardware and is sometimes bundled with the operating system, so it is less expensive than many hardware RAID solutions. The one area where software RAID cannot compare to certain hardware RAID solutions is data reliability and integrity across power losses. Some hardware RAID controllers feature a battery backup unit that allows any write requests sent to the controller but not yet stored to disk to be kept in the RAID controller memory in the event of a power outage. Upon reboot, the RAID controller will pick up where it left off and write this data to the array. Software RAID has no such functionality: any cached writes are immediately lost as soon as the system loses power or the operating system crashes.
Hyper-Threading Technology (HTT) is Intel's implementation of simultaneous multithreading in its microprocessor series. The technology improves processor performance under certain workloads by providing useful work for execution units that would otherwise be idle, for example during a cache miss. HTT improves processor utilization and application performance under specific workloads; there are also cases in which enabling HTT can degrade an application's performance. For further information, check the official Intel website.
x86-64 is a 64-bit microprocessor architecture and corresponding instruction set; it is a superset of the Intel x86 architecture, which it natively supports. It was designed by Advanced Micro Devices (AMD), who have since renamed it AMD64. The architecture has also been adopted by Intel under the name Intel EM64T (Extended Memory 64 Technology). This leads to the common use of the names x86-64 or x64 as more vendor-neutral terms to refer collectively to the two nearly identical implementations. Microprocessors based on x86-64 technology can natively support both 32-bit and 64-bit operating systems. Depending on the operating system loaded, the microprocessor can be switched to operate in either mode. This provides a smooth upgrade path from a 32-bit operating environment to a 64-bit environment when the user is ready, and it also provides future-proof technology and investment protection: the end user can simply load a new operating system and have a 64-bit environment on the same hardware. x86-64 microprocessors introduced a whole new set of enhancements to the proven x86 architecture. The most significant of these is support for larger addressable memory: up to 256 tebibytes of virtual address space (2^48 bytes), a limit that can be raised in future implementations to 16 exbibytes (2^64 bytes), compared to just 4 gibibytes (2^32 bytes) for 32-bit x86 (a worked comparison of these limits follows the list below). This means that very large files can be operated on by mapping the entire file into the process' address space, which is generally faster than working with file read/write calls, rather than having to map regions of the file into and out of the address space. Other enhancements include:
- 64-bit integers
- Additional, wider registers
- Additional SSE registers for multimedia and vector processing
- No-eXecute (NX) bit
- Wider pointers
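The address-space limits quoted above follow directly from the exponents; this small calculation reproduces them.

```python
def human(nbytes):
    """Render a byte count in binary units (GiB, TiB, ...)."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"):
        if nbytes < 1024:
            return f"{nbytes:g} {unit}"
        nbytes /= 1024
    return f"{nbytes:g} ZiB"

for label, bits in (("32-bit x86 virtual space", 32),
                    ("x86-64, current implementations", 48),
                    ("x86-64, architectural limit", 64)):
    print(f"{label:34} 2^{bits} = {human(2 ** bits)}")
# Prints 4 GiB, 256 TiB, and 16 EiB respectively.
```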
For more information, please check the official Intel and AMD websites, as well as the AMD whitepaper site.
Direct Connect Architecture is AMD's computing platform design in which the processors, memory and I/O are directly connected and communicate at CPU speed. This architecture helps reduce memory latency and increase memory bandwidth. It also eliminates the need for deeper pipelines and complex cache policies, leading to a cleaner, more efficient microprocessor design and better performance. For more information, check the official AMD website.
AMD processors feature hardware-assisted AMD Virtualization™ (AMD-V™). This helps streamline virtualization deployment, improves virtualization support, and helps guest x86 operating systems run unmodified at industry-leading execution speeds. For more information, check the official AMD website. Intel® Virtualization Technology (Intel® VT) is a set of hardware enhancements to Intel® server and client platforms that can improve traditional software-based virtualization solutions. This collection of premier Intel-designed and manufactured silicon technologies delivers new and improved computing benefits for home and business users and IT managers. For more information, check the official Intel website.
PowerNow is a power management technology for processors introduced by AMD. PowerNow provides performance-on-demand by dynamically adjusting performance based on CPU utilization - helping systems to run at optimum performance and power levels, reducing electricity costs while maximizing IT budget dollars. For more information, check the official AMD website.
HyperTransport technology is the interconnect between the CPU and I/O controllers such as a PCI-X controller, PCI Express controller and/or Southbridge. It is a high-speed, low-latency, point-to-point link. For more information, check the official HyperTransport website: http://www.hypertransport.org/
Multi-core technologies refer to a class of microprocessors that combine two or more processing units into a single physical package. This technology increases the processing power of a compute system without increasing the number of processor sockets, and it allows for a certain level of parallelism (thread-level parallelism, or TLP) in the system. With advances in technology and the economics of manufacturing, today almost the entire line of microprocessors is multi-core. This provides customers with never-before-seen compute densities. For example, the HPC Systems A1403 packs 4 sockets into a 1U package: with dual-core processors that is 8 processing cores, and with the upcoming quad-core processors, 16 processing cores in a 1U package. This opens up a whole new realm of possibilities for customers.
Depending on the system manufacturer and microprocessor, upgrading to multi-core processors can be anything from an in-socket replacement to a brand-new system. Multi-core processors that allow for in-socket replacement are generally compatible with their predecessors in terms of power, thermal characteristics and application compatibility. This makes it easy for a customer to upgrade to new technology with minimal investment, and it generally reflects efficiency in the design of the microprocessors.
There are certain advantages and disadvantages to using a multi-core processor. Multi-core processors are not suitable for all types of workloads, so users should profile the application of interest to determine the gains of using one. Although existing applications can run on the new multi-core processors, that does not in any way imply that they are able to utilize all of the available cores. However, most software nowadays is written to fully utilize all the available processors. For example, an older version of a financial analysis application may not experience any increase in performance, but a newer version written for multiple cores may experience an immediate increase in performance when executed on a multi-core processor.
There has been a bit of controversy surrounding recent multi-core products in the x86 space. Some microprocessors package two or more individual processor dies into a single physical package (e.g., the Intel Xeon 5000 and 5300 series). By definition this is called a multi-chip module (MCM), and technically it is still a multi-core processor. Depending on the area of application, MCMs may not provide the same level of performance as a monolithic multi-core CPU. One must also note that a dual-core processor may not always provide twice the performance of two single-core microprocessors.
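As a small illustration of thread-level parallelism, the sketch below spreads a CPU-bound task (a stand-in for a real workload) across all available cores using Python's standard library; a serial program would leave all but one core idle.

```python
import multiprocessing as mp

def busy_work(n):
    """Stand-in CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = mp.cpu_count()
    print(f"Spreading {cores} work chunks across {cores} cores")
    with mp.Pool(processes=cores) as pool:
        # Each chunk is dispatched to its own worker process / core.
        results = pool.map(busy_work, [2_000_000] * cores)
    print(f"{len(results)} chunks completed")
```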
AMD Multicore Technology: AMD processors feature a native multi-core design: two or more cores are integrated onto a single processor die, joined in the same electrical package so they connect directly with one another at full speed. In Socket F generation dual-core processors, each processor core has its own dedicated L1/L2 cache. Quad-Core AMD Opteron™ processors feature dedicated L1/L2 caches and a new shared L3 cache, delivering efficient memory handling that reduces the need for "brute force" cache sizes. You can find an interactive tour of this technology on the AMD website.
Intel Multicore Technology: Intel® multi-core architecture, on some series of microprocessors, has a single Intel processor package that contains two or more processor "execution cores," or computational engines, and delivers, with appropriate software, fully parallel execution of multiple software threads. The operating system (OS) perceives each execution core as a discrete processor, with all the associated execution resources. For more information, please check the official Intel website. Talk to your sales representative to identify the right microprocessor technology for your needs.
Copyright © 2011 HPC Systems