Fusion-io announced that its new ioDrive Octal PCIe solid-state disk is capable of achieving a bandwidth of 6.2 GB/s and over 1 million IOPS:
At Supercomputing 2010, Fusion-io announced that it has once again achieved the highest Input/Output Operations Per Second (IOPS) and bandwidth in the industry, demonstrating the company’s continued leadership in flash-based, server-attached storage-class memory. These performance figures are unmatched by any other solid-state or traditional disk-based technology on the market today.
Fusion’s ioMemory technology has enabled more than 1 million IOPS from an increasingly dense footprint:
* In 2008, from a single rack, working together with IBM on Project Quicksilver
* In 2009, from a single server, working together with HP’s ProLiant team
* Now, in 2010, from a single PCI Express card, the ioDrive Octal again redefines the standard by which all others will be compared
In addition to providing more than 1 million IOPS of performance, each ioDrive Octal provides 6.2 GB/s of bandwidth and up to 5.7 TB of linear-scaling capacity per PCI-Express slot. This allows applications to process tens of terabytes of data without the latency impact of accessing backing data stores.
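A rough back-of-envelope check relates the two headline figures. The release does not state the block sizes used, so the 4 KiB size below is an assumption (a common benchmark size for IOPS tests), chosen only for illustration:

```python
# Back-of-envelope: relate an IOPS rate to implied bandwidth.
# Block sizes are illustrative assumptions; the release does not state them.

def iops_to_bandwidth_gb_s(iops: int, block_bytes: int) -> float:
    """Bandwidth in GB/s implied by a given IOPS rate and block size."""
    return iops * block_bytes / 1e9

# 1 million IOPS at an assumed 4 KiB block size:
small_block = iops_to_bandwidth_gb_s(1_000_000, 4096)
print(f"1M IOPS @ 4 KiB -> {small_block:.1f} GB/s")  # -> 4.1 GB/s

# Conversely, the 6.2 GB/s peak bandwidth at 1M ops/s would imply
# transfers of roughly 6 KiB each -- in practice, peak IOPS and peak
# bandwidth are typically measured under different workloads.
implied_block_kib = 6.2e9 / 1_000_000 / 1024
print(f"6.2 GB/s at 1M ops/s implies ~{implied_block_kib:.1f} KiB per op")
```

The two headline numbers are therefore consistent with small-block random I/O on one hand and larger sequential transfers on the other, rather than a single workload hitting both peaks at once.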
“Scientists face an overabundance of data in areas such as climatology, cosmology, nanotechnology and defense. Accessing and visualizing these complex data models take an inordinate amount of time,” said David Flynn, CEO of Fusion-io. “Rapid data access enables researchers to quickly and reliably solve problems, and technologies such as those from Fusion-io allow them to analyze much more data faster than ever before. With today’s astounding performance benchmarks, we’re proud to demonstrate that the speed of our technology directly translates to accelerated workload and data processing. In turn, our customers are tackling previously unattainable workload challenges.”
Demonstrated last year at SC09, the ioDrive Octal extends Fusion’s ioMemory portfolio and offers customers the highest performance available on the market today. The ioDrive Octal holds eight ioMemory Modules, putting the equivalent capacity, performance and reliability of eight ioDrives into a single card. It fits any PCI Express x16 Gen2 double-wide slot, the same as those used for high-performance graphics cards.
The ioDrive Octal is built with Fusion’s ioSphere Software Platform, making it a complete server-attached, storage-class memory solution of hardware, software and services. The ioDrive Octal is now available for purchase through Fusion-io.
Organizations, including Los Alamos National Lab (LANL), have already experienced the benefit of working with Fusion’s products.
At SC10, Fusion-io is showcasing its work with LANL to design data-intensive systems for processing climate data. LANL ran the Hadoop Distributed File System over Fusion’s ioMemory technology to process time-series data 500 percent faster than with spinning disks.
“As demand for data intensive supercomputing workloads grows, Fusion-io is packaging system technology that allows us to expand our performance footprint without the burdensome space and resource requirements of traditional clusters,” said Jim Ahrens, Visualization Team Leader at LANL. “By adopting Fusion’s ioMemory technology, we and others are re-architecting systems to thrive in this highly competitive environment where each discovery advances our knowledge of the earth and its ever changing climate conditions.”
In addition, a government aeronautics organization is using ioMemory technology in its data-intensive transfer operation, sending data from a facility in Illinois to one in Maryland, and then on to New Orleans. Using one ioDrive Octal, the institution realized an unprecedented 4000 MB/s transfer rate over four 10 Gb/s Ethernet links.
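The 4000 MB/s figure can be sanity-checked against the raw line rate of the links. A minimal sketch, assuming standard 10 Gigabit Ethernet (10 Gb/s per link) and ignoring protocol overhead:

```python
# Sanity check: aggregate line rate of four 10 Gb/s Ethernet links
# versus the 4000 MB/s transfer rate reported in the release.

LINKS = 4
LINK_GBIT_S = 10  # 10 Gigabit Ethernet, per link (assumed standard 10GbE)

aggregate_gbit_s = LINKS * LINK_GBIT_S        # 40 Gb/s raw
aggregate_mb_s = aggregate_gbit_s * 1000 / 8  # gigabits -> megabytes

print(f"Raw aggregate line rate: {aggregate_mb_s:.0f} MB/s")  # 5000 MB/s

utilization = 4000 / aggregate_mb_s
print(f"Reported 4000 MB/s is {utilization:.0%} of raw line rate")  # 80%
```

Sustaining roughly 80 percent of raw line rate across four bonded links is plausible once Ethernet, IP and TCP framing overhead are accounted for, which is why the reported rate is credible rather than inflated.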
For scientific discovery, installations such as these can now keep remote researchers productive even when they are thousands of miles from headquarters. By deploying the ioDrive Octal, they can move petabytes of data at a time, a feat that is significantly more time-consuming and complex with traditional disk-based systems.