Seagate has presented the world's fastest HDD, the Savvio 15K.2. It's a 2.5" HDD designed for use in data centers.
This hard disk has platters that spin at 15,000 RPM, a world first for 2.5" HDDs. Seagate says the Savvio 15K.2 is even faster than its Cheetah 15K.5 HDDs: the Cheetahs have an average seek time of 3.5ms, while the new Savvios have a seek time of 2.9ms.
The new Savvio 15K drives are available in 36GB and 73GB capacities. Both HDDs have 16MB cache memory.
Other advantages are the high reliability rating of 1.6 million hours MTBF and the 5.8W idle power consumption, which is 31% lower than that of the Cheetah 15K.5 HDDs. The drives measure 111 x 70 x 15mm, making them 70% smaller than the Cheetah 15K.5 drives.
Re: Seagate Savvio 15K HDDs with 2.9ms seek time by Anonymous on Friday, December 05 2008 @ 07:38:21 CET
146GB not big enough. Period. End.
In fact, until 10K.3, VelociRaptor was the only 300GB 2.5" in town.
Sad to see the mighty Seagate producing boring products.
They call these "green," yet it's far greener to stripe the crap out of 1.0TB disks if you need to get to a multi-TB volume size. 7 of these little disks = 1 1TB disk.
Crap. Boring. Make a 300GB version and I'll start to look. 150GB is already obsolete.
Reply by Anonymous on Friday, December 26 2008 @ 17:18:51 CET
That's an improper comparison, as the drives are optimized for different factors. These Savvios are targeted at loads that are IOPS constrained, not bandwidth constrained, hence the focus on seek times. Small random loads, not sequential.

Delivering more IOPS than a single spindle using RAID is a difficult matter, especially when dealing with relatively equal read and write loads. Many database loads fall into this class, especially when finance is involved and transactions need to be fully committed to non-volatile storage rapidly, with the additional complication that many transactions must be serialized, so there's no hope of generating a lot of concurrent reads.

This class of drives has never been about how much you need to store or how much data you can put down per second, as RAID can give you that more cheaply. These drives are focused on applications that need to quickly store and retrieve uncacheable data sets. The Savvio 15K.2 should deliver just over 200 read IOPS, and almost 190 write IOPS. It also happens to deliver 120-160MB/s of read speed depending on the head position, but this probably doesn't matter to most target users.
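The "just over 200 read IOPS" figure can be sanity-checked with the standard back-of-the-envelope model: one random I/O costs roughly the average seek time plus half a rotation. A minimal sketch (the 2.9ms seek and 15,000 RPM come from the article; the formula is the usual textbook approximation, not a Seagate-published method):

```python
# Rough single-spindle random IOPS: 1 / (avg seek + avg rotational latency).
def estimate_iops(avg_seek_ms, rpm):
    rotational_latency_ms = 0.5 * 60000 / rpm  # half a revolution on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Savvio 15K.2: 2.9ms average read seek at 15,000 RPM
print(round(estimate_iops(2.9, 15000)))  # ~204, matching the "just over 200" claim
```

At 15,000 RPM a full revolution takes 4ms, so the average rotational latency is 2ms; 2.9ms + 2ms = 4.9ms per random I/O.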
Striping does not deliver the right kind of performance for these applications. For read-heavy and highly parallel loads, such as seen in a webserver, RAID 5 and 10 can indeed deliver at a lower cost per GB. But then again, webservers generally aren't disk constrained at all unless they're serving a very large pool of static files without an easily cacheable subset. For parallel read-heavy loads, a set of 5 7200RPM SATA drives in a RAID 5 can deliver 4TB of usable storage with around 300 read IOPS, but only about 70 write IOPS. NVRAM cache in the RAID controller may allow you to sustain much higher than 70 write IOPS under most conditions. Indeed, for heavy random writes (parallel or not), well implemented NVRAM is always a massive win, as it lets the actual writes to disk be done near sequentially.
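The gap between the ~300 read IOPS and ~70 write IOPS figures above comes from the RAID 5 write penalty: a small random write must read the old data and old parity, then write new data and new parity, four I/Os in total. A rough model (the ~60 random IOPS per 7200RPM SATA disk is an assumed illustrative figure, not from the comment):

```python
# Rough random-IOPS model for an array: reads scale with spindle count,
# small random writes pay a penalty (RAID 5: read data + read parity +
# write data + write parity = 4 back-end I/Os per front-end write).
def raid_random_iops(per_disk_iops, n_disks, write_penalty):
    reads = per_disk_iops * n_disks
    writes = reads / write_penalty
    return reads, writes

# Assuming ~60 random IOPS per 7200RPM SATA spindle, 5 disks, RAID 5:
reads, writes = raid_random_iops(60, 5, 4)
print(reads, writes)  # 300 75.0 -- in the ballpark of the comment's figures
```

The same model shows why NVRAM write-back cache helps so much: by absorbing and coalescing writes, the controller sidesteps much of that 4x penalty.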
For loads that are bandwidth constrained, 15K spindles are usually sub-optimal, and RAID 10 across a larger set of disks with high areal density and lower spindle speed is a good choice. For one thing, most applications in this class deal with very large data sets, where small fast drives are cost prohibitive. The 2.5" 300GB VelociRaptor you mentioned delivers just shy of 135 read IOPS and a sustained data rate of 120MB/s per spec (despite its lower rotational speed than the 15K Savvio, as density compensates), which is pretty impressive if you need to handle a lot of sequential access, such as seen when working with DV. It's also no slouch for random access, so general desktop application performance will also be quite good. If that's not enough sequential performance, you could stripe and mirror, but you approach a point where you have to think very carefully about how you're going to move that much data around that quickly. Bus design, controller implementation, memory bandwidth and CPU performance can all become serious bottlenecks.
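The bottleneck point at the end can be made concrete: aggregate sequential throughput of a stripe grows linearly with spindle count only until the bus or controller saturates. A sketch with the 120MB/s per-drive figure from the comment and an assumed (hypothetical) controller link cap:

```python
# Aggregate sequential bandwidth of a stripe, capped by the bus/controller link.
def stripe_bandwidth(per_disk_mb_s, n_disks, link_cap_mb_s):
    return min(per_disk_mb_s * n_disks, link_cap_mb_s)

# Hypothetical: 120 MB/s drives behind a ~1000 MB/s controller link.
print(stripe_bandwidth(120, 4, 1000))   # 480 -- still scaling with spindles
print(stripe_bandwidth(120, 12, 1000))  # 1000 -- the link is now the bottleneck
```

Past the saturation point, adding spindles buys nothing; that is the regime where bus design, memory bandwidth and CPU start to matter more than the disks themselves.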