Western Digital details 14-platter 3.5-inch HAMR HDD designs with 140 TB and beyond (www.tomshardware.com)
from veeesix@lemmy.ca to selfhosted@lemmy.world on 07 Feb 21:52
https://lemmy.ca/post/60059065

cross-posted from: beehaw.org/post/24650125

Because nothing says “fun” quite like having to restore a RAID that just saw 140TB fail.

Western Digital this week outlined its near-term and mid-term plans to increase hard drive capacities to around 60TB and beyond with optimizations that significantly increase HDD performance for the AI and cloud era. In addition, the company outlined its longer-term vision for hard disk drives’ evolution that includes a new laser technology for heat-assisted magnetic recording (HAMR), new platters with higher areal density, and HDD assemblies with up to 14 platters. As a result, WD will be able to offer drives beyond 140 TB in the 2030s.

Western Digital plans to volume-produce its inaugural commercial hard drives featuring HAMR technology next year, with capacities starting at 40TB (CMR) or 44TB (SMR) in late 2026 and production ramping in 2027. These drives will use the company’s proven 11-platter platform with high-density media, as well as HAMR heads with edge-emitting lasers that heat the iron-platinum alloy (FePt) on top of the platters to its Curie temperature — the point at which its magnetic properties change — reducing its magnetic coercivity before data is written.

#selfhosted


solrize@lemmy.ml on 07 Feb 22:03 next collapse

As a result, WD will be able to offer drives beyond 140 TB in the 2030s.

Um thanks but tell us about 2026?

lemmyng@piefed.ca on 07 Feb 23:38 collapse

Shrimp platters.

ToTheGraveMyLove@sh.itjust.works on 07 Feb 23:42 collapse

Whoops, sorry, the oceans are hostile to life now. No more shrimp platters. Try again next time.

FirmDistribution@lemmy.world on 07 Feb 22:08 next collapse

with optimizations that significantly increase HDD performance for the AI and cloud era

Can somebody do anything with a normal consumer in mind these days? 😭

dual_sport_dork@lemmy.world on 07 Feb 22:18 next collapse

Not until somebody shuts off the investor money faucet for AI. Then they’ll come crawling back — although inevitably not until after they go whining to all the world’s governments about wanting a bailout.

But hey, look at the bright side. We’ve already had the cryptocurrency mining boom and bust, and the “AI” boom and soon-to-be bust. There’s still time for some idiot to invent the next tech scam fad which will conveniently require a shitload of hardware for no recognizably useful purpose.

cecilkorik@piefed.ca on 08 Feb 04:39 next collapse

Then they’ll come crawling back — although inevitably not until after they go whining to all the world’s governments about wanting a bailout.

And don’t forget the part where, whether they get a bailout or not, they’ll still have to double the prices of everything to make up for all the money they lost on that stupid AI bubble exploding in their face (which all of us are somehow to blame for, obviously, which is why we have to pay them back for it).

AndrewZabar@lemmy.world on 08 Feb 16:44 collapse

“although inevitably not until after they go whining to all the world’s governments about wanting a bailout.”

Ahem… Whining? Wanting? Try instructing. They own the governments so they will just tell them to do it, and it will be done.

akilou@sh.itjust.works on 07 Feb 22:44 next collapse

Does data take up less room when it’s being used by AI?

mycodesucks@lemmy.world on 08 Feb 02:11 next collapse

No, and it’s by design.

You’re gonna lease a tablet and use cloud-based storage services and like it.

The dystopia is here.

RalfWausE@feddit.org on 08 Feb 07:52 collapse

Back to the 70s and early 80s…

selokichtli@lemmy.ml on 08 Feb 15:03 collapse

Yeah, adding all the surveillance technology developed in the last 40 years, so you don’t dare take your eyes off the display, for example.

myserverisdown@lemmy.world on 08 Feb 03:51 next collapse

140 TB is a whole heck of a lot of movies and TV shows

Kushan@lemmy.world on 08 Feb 06:58 collapse

It’s about the storage I have in my server right now - using 15 drives ☠️

brygphilomena@lemmy.dbzer0.com on 08 Feb 14:24 collapse

It’s about half of mine, with about 30 drives. Whatcha running?

Kushan@lemmy.world on 08 Feb 15:26 collapse

I’m running a TrueNAS build which has just grown over time. Started off with 5x8TB drives, then added 5x16TB drives, and just last week added another 5x26TB drives (that was costly ☠️). It’s all running in a very cheap case using an old Threadripper machine I had (2950X), which thankfully supports ECC (128GB purchased years ago before the silliness).
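
Rough math on that build (treating each batch of 5 as its own RAIDZ2 vdev, which is an assumption — the layout isn’t stated):

    # Raw vs. usable capacity, assuming one RAIDZ2 vdev per 5-drive batch (a guess)
    vdevs = [(5, 8), (5, 16), (5, 26)]             # (drives, TB each)
    raw = sum(n * tb for n, tb in vdevs)           # 40 + 80 + 130 = 250 TB raw
    usable = sum((n - 2) * tb for n, tb in vdevs)  # 24 + 48 + 78 = 150 TB usable
    print(f"{raw} TB raw, ~{usable} TB usable before filesystem overhead")

That lines up with the ~140TB figure above once filesystem overhead comes out.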

rumba@lemmy.zip on 08 Feb 05:36 next collapse

Normal consumers can install Jellyfin. At some point they’ll make downloading a crime; it wouldn’t hurt people to have a decent collection of stuff ready for that day.

atzanteol@sh.itjust.works on 08 Feb 14:06 next collapse

The fuck you mean? You can use these drives for any purpose you want.

selokichtli@lemmy.ml on 08 Feb 15:00 collapse

Well, that’s a target market right now. Intel GPUs are doing better than expected, I think, thanks to all the big corporations abandoning “normal consumers”.

billwashere@lemmy.world on 07 Feb 22:40 next collapse

This would be a bitch to have to rebuild in a RAID array. At some point a drive can get TOO big. And this is looking to cross that line.

irmadlad@lemmy.world on 07 Feb 22:54 next collapse

At some point a drive can get TOO big

I was thinking the same. I would hate to toast a 140 TB drive. I think I’d just sit right down and cry. I’ll stick with my 10 TB drives.

rtxn@lemmy.world on 07 Feb 23:05 next collapse

This is not meant for human beings. A creature that needs over 140 TB of storage in a single device can definitely afford to run them in some distributed redundancy scheme with hot swaps and just shred failed units. We know they’re not worried about being wasteful.

thejml@sh.itjust.works on 07 Feb 23:50 next collapse

Rebuild time is the big problem with this in a RAID Array. The interface is too slow and you risk losing more drives in the array before the rebuild completes.

rtxn@lemmy.world on 08 Feb 00:04 collapse

Realistically, is that a factor for a Microsoft-sized company, though? I’d be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between high-availability hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.

thejml@sh.itjust.works on 08 Feb 00:20 next collapse

True, but that’s going to really be pushing your network links just to recover. Realistically, something like ZFS or RAID-6 with extra hot spares would help reduce the risks, but it’s still a non-trivial amount of time. Not to mention the impact on normal usage during that time period.

frongt@lemmy.zip on 08 Feb 04:02 collapse

Network? Nah, the bottleneck is always going to be the drive itself. Storage networks might pass absurd numbers of Gbps, but ideally you’d be resilvering from a drive on the same backplane. SAS-4 tops out at 24 Gbps, and there’s no way you’re going to hit that write speed on a single drive: the fastest retail drives don’t do more than ~2 Gbps. Even the Seagate Mach.2 only does around twice that, due to having two head actuators.
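
To put numbers on it (the ~270 MB/s drive speed is an assumption for a fast 7200 RPM unit on its outer tracks):

    # How much of a SAS-4 link one fast HDD can actually use
    sas4_gbps = 24                    # SAS-4 link speed
    hdd_mb_s = 270                    # assumed top sequential speed, outer tracks
    hdd_gbps = hdd_mb_s * 8 / 1000    # bytes -> bits: ~2.2 Gbps
    print(f"~{hdd_gbps:.1f} Gbps, roughly {hdd_gbps / sas4_gbps:.0%} of the link")

So the link sits around 90% idle even in the best case.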

thejml@sh.itjust.works on 08 Feb 17:25 collapse

100%. But the post I was responding to was talking about recovering a failed array from other copies, not locally.

enumerator4829@sh.itjust.works on 08 Feb 08:27 next collapse

Fairly significant factor when building really large systems. If we do the math, there end up being some relationships between:

  • disk speed
  • targets for ”resilver” time / risk acceptance
  • disk size
  • failure domain size (how many drives do you have per server)
  • network speed

Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.

Say you want 16TB of usable space, and you want to be able to lose 2 drives from your array (fairly common requirement in small systems), then these are some options:

  • 3x16TB triple mirror
  • 4x8TB Raid6/RaidZ2
  • 6x4TB Raid6/RaidZ2

The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).
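
A quick sketch of that trade-off: all three layouts give 16TB usable and survive two failures, but lose very different fractions of raw capacity to redundancy:

    # Usable space vs. redundancy overhead for the three layouts above
    layouts = [
        ("3x16TB triple mirror", 3, 16, 2),  # (name, drives, TB each, drives lost to redundancy)
        ("4x8TB RAID6/RAIDZ2",   4,  8, 2),
        ("6x4TB RAID6/RAIDZ2",   6,  4, 2),
    ]
    for name, n, tb, parity in layouts:
        usable = (n - parity) * tb
        lost = parity / n                    # fraction of raw capacity spent on redundancy
        print(f"{name}: {usable} TB usable, {lost:.0%} of raw lost to redundancy")

That prints 67%, 50%, and 33% overhead respectively — same usable space, same failure tolerance.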

This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB), with low performance requirements (archives), but there we already have tape robots dominating.

The other interesting use case is huge systems, at large numbers of petabytes up into exabytes. More modern schemes for redundancy and caching mitigate some of the issues described above, but they are usually only relevant when building really large systems.

tl;dr: arrays of 6-8 drives at 4-12TB are probably the sweet spot for most data hoarders.

brygphilomena@lemmy.dbzer0.com on 08 Feb 14:43 collapse

I’d imagine they are using Ceph or similar.

You have disk level protection for servers. Server level protection for racks. Rack level protection for locations. Location level protection for datacenters. Probably datacenter level protections for geographic regions.

It’s fucking wild when you get to that scale.
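
For a flavor of those layers, placement in Ceph boils down to CRUSH rules — something like this sketch (illustrative only, not anyone’s production map), which puts each replica in a different rack:

    rule replicated_racks {
        id 1
        type replicated
        step take default
        step chooseleaf firstn 0 type rack   # one replica per rack
        step emit
    }

Swap `rack` for `host`, `row`, or `datacenter` and you get the other protection levels described above.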

MonkeMischief@lemmy.today on 08 Feb 06:25 collapse

This is not meant for human beings.

This is for like, Smaug but if he hoarded classic anime and the entirety of Steam or something. Lol

gravitas_deficiency@sh.itjust.works on 08 Feb 00:19 collapse

Yeah I’m running 16s and that’s pushing it imo

non_burglar@lemmy.world on 07 Feb 23:07 next collapse

It doesn’t really matter; the current limitations are not so much data density at rest as getting the data in and out at a useful speed. We breached the capacity barrier long ago with disk arrays.

SATA will no longer be improved; we now need U.2 designs for data transport that are designed for storage. This exists, but needs to filter down through industrial applications to get to us plebs.

irmadlad@lemmy.world on 07 Feb 23:34 collapse

640K ought to be enough for anybody.

pHr34kY@lemmy.world on 08 Feb 00:09 collapse

I don’t get how a single person would have that much data. I fit my whole life, from the first shot I took on a digital camera in 2001, onto a 4TB drive.

…and even then, two thirds of it is just pirated movies.

billwashere@lemmy.world on 08 Feb 00:23 next collapse

Amateur 😀

But seriously I probably have close to 100 TB of music, TV shows, movies, books, audiobooks, pictures, 3d models, magazines, etc.

panda_abyss@lemmy.ca on 08 Feb 00:49 collapse

I need a home for my orphaned podman containers /s

I think this is better targeted to small and medium businesses. 

If you run this as a NAS, you could easily have all your business’s files in one place without needing complex networking.

just_another_person@lemmy.world on 08 Feb 00:06 next collapse

This ONLY works at an insane scale. This will never hit the consumer market.

Korkki@lemmy.ml on 08 Feb 02:50 collapse

Also, what current consumer-level application could require 140TB of storage? That would be some advanced-level data hoarding or smth.

Andres4NY@social.ridetrans.it on 08 Feb 03:04 next collapse

@Korkki @just_another_person I see 4K HDR Blu-ray movie rips these days on the order of 50GB (edit: e.g., Eddington.2025.MULTi.VFF.2160p.DV.HDR.BluRay.REMUX.HEVC-[BATGirl]: 77.73G).

Which is too rich for my blood (I'm still watching on 1080p screens over here), but for someone with the right kind of home theater.. that's only ~280 movies on a 14TB drive. Lots of movie collections, even in the olden days of physical VHS and DVDs, span 1,000+ movies.

Zorque@lemmy.world on 08 Feb 05:28 collapse

14TB or 140TB? The latter is what’s being talked about, so that’s more like 2800 movies. Which more than covers that 1000+ movie criteria.

Andres4NY@social.ridetrans.it on 08 Feb 05:43 collapse

@Zorque I'm saying that 14TB will only fit 280 (or more likely, fewer) of those ultra-HQ movies, so 140TB (or, in the lead-up to that, 100TB, since they're talking about 5+ years before they even get close to 140TB) is reasonable for a 1,000-2,000 movie collection. Obviously I'm being loose with the numbers, but given that one single movie can consume almost 80GB.. well, you can start to understand consumer demand for 100+TB drives.
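
The arithmetic, for what it’s worth (using decimal TB, the way drives are marketed):

    # Movies per drive at the remux sizes mentioned above
    for drive_tb in (14, 140):
        for movie_gb in (50, 78):    # typical remux vs. the ~80GB outlier
            n = drive_tb * 1000 // movie_gb
            print(f"{drive_tb} TB at {movie_gb} GB/movie ≈ {n} movies")

That gives ~280 and ~2800 movies at 50GB each, dropping to ~180 and ~1790 at the outlier size.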

just_another_person@lemmy.world on 08 Feb 03:18 collapse

The failure rate is going to be absolutely INSANE as well.

gravitas_deficiency@sh.itjust.works on 08 Feb 00:18 next collapse

Holy fuck can you imagine how long it would take to re-stripe a failed drive in a z2 array 😭

Telorand@reddthat.com on 08 Feb 01:14 next collapse

Not a clue. Care to eli5?

SmoothLiquidation@lemmy.world on 08 Feb 01:35 collapse

When you are running a server just to store files (a NAS), you generally set it up so multiple physical hard disks are joined together into an array, so that if one fails, none of the data is lost. You can replace a failed drive by taking it out and putting in a new working drive, and then the system has to copy all of the data over from the other drives. This process can take many hours to run even with the 10-20 TB drives you get today, so doing the same thing with a 140 TB drive would take days.
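
A rough sketch of why (assuming a best-case, purely sequential resilver at ~250 MB/s, which real rebuilds rarely sustain):

    # Best-case hours to rewrite a whole drive sequentially
    def resilver_hours(capacity_tb, mb_per_s=250):
        return capacity_tb * 1_000_000 / mb_per_s / 3600

    for tb in (10, 20, 140):
        print(f"{tb} TB: ~{resilver_hours(tb):.0f} hours")
    # 10 TB: ~11 h, 20 TB: ~22 h, 140 TB: ~156 h (about six and a half days)

And the array is degraded, with no redundancy to spare, for that entire window.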

Andres4NY@social.ridetrans.it on 08 Feb 01:51 collapse

@SmoothLiquidation @Telorand They also claim up to 8x speed improvements with HAMR. Obviously that remains to be seen, but if they could roughly match capacity improvements, that would keep restriping in the same ballpark.

Dremor@lemmy.world on 08 Feb 07:37 collapse

My Z2 had a drive failure recently, with 4TB drives. Took me almost 3 days to resilver the array 😅. Fortunately I had a hot spare set up, so it started as soon as the drive failed, but now a second drive is showing signs of failing soon, so I had to pay the AI tax (168€) to get one ASAP (arriving Monday), as well as a second one, cheaper (around 120€), which won’t arrive until the end of April.

Decronym@lemmy.decronym.xyz on 08 Feb 00:30 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters  More Letters
NAS            Network-Attached Storage
RAID           Redundant Array of Independent Disks for mass storage
SATA           Serial AT Attachment interface for mass storage
ZFS            Solaris/Linux filesystem focusing on data integrity

4 acronyms in this thread; the most compressed thread commented on today has 14 acronyms.

[Thread #72 for this comm, first seen 8th Feb 2026, 00:30] [FAQ] [Full list] [Contact] [Source code]

zorflieg@lemmy.world on 08 Feb 01:15 next collapse

I wonder why current consumer HDDs don’t have NVMe connectors on them. Like, I know speeding up the bus isn’t going to make the spinning rust access any faster, but the cache RAM would probably benefit from not being capped at 550MB/s.

Shady_Shiroe@lemmy.world on 08 Feb 01:53 next collapse

I just hope smaller sized drives become cheaper. The word “hope” is doing a lot of heavy lifting here.

Supervisor194@lemmy.world on 08 Feb 02:39 collapse

Ten years from now…

Amazon search: “hard drive”

Result: 4TB $198

Zozano@aussie.zone on 08 Feb 03:22 next collapse

BARGAIN!

AndrewZabar@lemmy.world on 08 Feb 16:42 collapse

I think ten years from now you’ll be hard pressed to find anyone even wasting their time on something so small.

HeyThisIsntTheYMCA@lemmy.world on 08 Feb 17:23 collapse

so you say, but people still collect “antique” hardware.

AndrewZabar@lemmy.world on 08 Feb 21:35 collapse

Well, retro etc., but I wouldn’t consider this to be that. There’s no inherent value in a run-of-the-mill drive with merely lower storage capacity. And certainly not worth a premium.

HeyThisIsntTheYMCA@lemmy.world on 08 Feb 22:16 collapse

it’s not antique yet. i still have my 5.25" diskettes with quest for glory 2 on them and they’re almost antique. i think the usb drive that reads them still works. give them another couple years.

do HDDs work better than SSDs in space? because of the cosmic rays and shit? or something about intermittent power? no, really, this is a real problem that they could be already solving, one i know jack shit about.

iturnedintoanewt@lemmy.world on 08 Feb 02:09 next collapse

Doesn’t this sound awfully similar to MiniDisc technology? The discs were only writable when heated by a laser. They were pretty impressive for the time… but not very fast, especially when writing.

thatradomguy@lemmy.world on 08 Feb 02:12 next collapse

Probably still with only 1 year warranty…

Grapho@lemmy.ml on 08 Feb 06:22 collapse

And if it breaks at 10 months and they take another 2 to send your replacement back, well, by then they no longer need to send one that actually works, either.

MonkeMischief@lemmy.today on 08 Feb 06:28 next collapse

Okay cool, cool, so does this mean ridiculous data centers will use these things, and then can I get another 4TB RED for my NAS so I can fit my whole life on a mirrored total of 8TB without paying 8x what it’s worth, please?

Thaaaaanks…

brygphilomena@lemmy.dbzer0.com on 08 Feb 14:23 next collapse

Is there a Lemmy community for trading surplus hardware yet?

I have a pile of HDDs and servers that I no longer use. I’ve transitioned almost all of mine to 20TB+. I might have 8 or 10 4TB REDs laying around. They’re old though, probably with thousands of power-on hours in the SMART data.

yyprum@lemmy.dbzer0.com on 08 Feb 14:40 collapse

Are you in Europe by any chance? :)

brygphilomena@lemmy.dbzer0.com on 08 Feb 14:44 collapse

Ah, no. Sorry. Midwest, USA.

yyprum@lemmy.dbzer0.com on 08 Feb 17:35 collapse

No apologies needed, I’m not even OP :) it was just a long shot :D

AndrewZabar@lemmy.world on 08 Feb 16:41 collapse

8TB? That’s my ideal RAM configuration lol. ;-)

InFerNo@lemmy.ml on 08 Feb 12:58 next collapse

I’d put this in a mirror configuration tbh.

Fmstrat@lemmy.world on 08 Feb 13:52 next collapse

Question: Are failures due to issues on a specific platter? Meaning, could a RAIDZ theoretically use specific platters as a way to replicate data and not require 140TB of resilvering on a failure?

Nilz@sopuli.xyz on 08 Feb 16:44 next collapse

IIRC, HDDs have some reserved sectors in case some go bad. But in practice, once you start having faulty sectors it’s usually a sign that the drive is dying and you should replace it ASAP.

I think if you know the drive topology you can technically create partitions at the platter level, but I don’t really see a reason why you’d do it. If the drive is dying you need to resilver the entire drive’s contents to a new disk anyway.

Andres4NY@social.ridetrans.it on 08 Feb 18:26 collapse

@Fmstrat @veeesix Since there are two very different questions there.. The first, "where do the failures happen?": anywhere. It could be the controller dying (in which case the platters themselves are fine if you replace the board, but otherwise the whole thing is toast). It could be the head breaking. It could be issues with a specific platter. It could be something that affects _all_ the platters (like dust getting inside the sealed area). So basically, it very much depends.

Andres4NY@social.ridetrans.it on 08 Feb 18:28 collapse

@Fmstrat @veeesix The second, could you do raid across specific platters - yes and no. The drive firmware specifically hides the details of the underlying platter layout. But if you targeted a specific model, you could probably hack something together that would do raid across the platters. But given the answer to the first question, why would you?

Fmstrat@lemmy.world on 08 Feb 19:01 collapse

Great answers, thank you.

Alpha71@lemmy.world on 08 Feb 17:45 next collapse

Okay. I want total honesty here. How many of you could actually fill that thing up?

greedytacothief@lemmy.dbzer0.com on 08 Feb 18:33 next collapse

With useful stuff? Never. With random bullshit I think might be useful some day if only I find the time? Easy

suzune@ani.social on 08 Feb 19:27 next collapse

… or be able to back it up?

LifeInMultipleChoice@lemmy.world on 08 Feb 19:28 next collapse

I remember Mac OS X having an issue with its Mail app a while back where it would continuously generate massive log files until they filled the entire drive. You would have to boot into a recovery partition or similar, because the OS partition wouldn’t have enough free space left to boot and let you remove them and fix the issue.

Imagine having 130 terabytes of invisible log files

alekwithak@lemmy.world on 08 Feb 20:05 collapse

Archive.org, Anna’s archive, Jan 6 footage, Epstein files, there’s plenty to back up.

nuko147@lemmy.world on 08 Feb 17:58 next collapse

What’s the point when prices for 4-8TB disks have been stable for the last 5 years? (I think they’re even getting higher…)

sefra1@lemmy.zip on 08 Feb 18:10 next collapse

The point is that 8TB are too small, and not enough for my anime.

Zetta@mander.xyz on 08 Feb 19:02 collapse

The point is the need for more and more data storage is never going to stop.

pound_heap@lemmy.dbzer0.com on 08 Feb 20:58 collapse

Does the increased density mean that the speed also goes up? It would be nice if a 7200 RPM drive could finally saturate SATA3 bandwidth.