this post was submitted on 17 Dec 2024
321 points (99.1% liked)

Technology

(page 2) 50 comments
[–] GreenKnight23@lemmy.world 2 points 9 hours ago (1 children)

Cool. Never buying another Seagate ever again, though.

[–] interdimensionalmeme@lemmy.ml 1 points 9 hours ago

Same, but Western Digital: a 13GB drive that failed and lost all my data three times, and the third time was outside the warranty! I had paid $500 for it, the most expensive thing I had ever bought until that day.

[–] ANIMATEK@lemmy.world 54 points 19 hours ago (2 children)
[–] avieshek@lemmy.world 24 points 17 hours ago

sonarr goes brrrrrr…

[–] avidamoeba@lemmy.ca 21 points 19 hours ago (1 children)
[–] ExcessShiv@lemmy.dbzer0.com 10 points 18 hours ago

...dum tss!

[–] hsdkfr734r@feddit.nl 34 points 18 hours ago (1 children)

My first HDD had a capacity of 42 MB. Still a short way to go before capacities have grown by a factor of 10⁶.

[–] 4grams@lemmy.world 20 points 18 hours ago (1 children)

My first HD was a 20 MB MFM drive :). Be right back, need some “Just for Men” for my beard (kidding, I’m proud of it).

[–] I_Miss_Daniel@lemmy.world 15 points 17 hours ago (3 children)

So was mine, but the controller thought it was 10 MB, so I had to load a device driver to access the full size.

It was fine until a friend defragged it and the driver moved out of the first 10 MB. After that I had to keep a 360 KB 5¼" floppy around to boot from.

That was in an XT.

[–] Feathercrown@lemmy.world 8 points 16 hours ago

Was fine until a friend defragged it and the driver moved out of the first 10mb

Oh noooo 😭

[–] JakenVeina@lemm.ee 5 points 13 hours ago (1 children)

The two models, [...] each offer a minimum of 3TB per disk

Huh? The hell is this supposed to mean? Are they talking about the internal platters?

[–] JasonDJ@lemmy.zip 16 points 18 hours ago* (last edited 18 hours ago) (4 children)

This is for cold and archival storage right?

I couldn't imagine seek times on any disk that large. Or rebuild times....yikes.

[–] ricecake@sh.itjust.works 15 points 17 hours ago

Definitely not for either of those; you can get way better density from magnetic tape.

They say they got the increased capacity by increasing storage density, so the head shouldn't have to move much farther to read the same amount of data.

You'll get further putting a cache drive in front of the HDD regardless, so it's somewhat moot.

[–] WolfLink@sh.itjust.works 5 points 14 hours ago

Random access times are probably similar to smaller drives, but writing the whole drive is going to be slow.

[–] RedWeasel@lemmy.world 7 points 16 hours ago

For a full 32TB at the max sustained speed (275MB/s), it's about 32 hours to transfer the full capacity, or 36 if you assume 250MB/s for the whole run. That's probably optimistic, and CPU overhead could slow a rebuild down further. That said, in a RAID5 of 5 disks that works out to a transfer speed of about 1GB/s, even assuming you don't get close to the max rate. For a small business or home NAS that would be plenty unless you are running faster than 10Gbit Ethernet.
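The arithmetic in the comment above can be sanity-checked with a quick sketch (assuming the quoted 275 MB/s max and 250 MB/s conservative sustained rates, decimal units):

```python
# Back-of-envelope transfer times for a full 32 TB drive, using the
# sustained rates quoted above (275 MB/s max, 250 MB/s conservative).
capacity_mb = 32 * 1_000_000  # 32 TB expressed in MB (decimal units)

hours_at_275 = capacity_mb / 275 / 3600
hours_at_250 = capacity_mb / 250 / 3600
print(f"{hours_at_275:.1f} h at 275 MB/s")  # ~32.3 h
print(f"{hours_at_250:.1f} h at 250 MB/s")  # ~35.6 h

# RAID5 across 5 disks streams 4 data disks in parallel during a rebuild
raid5_mb_s = 4 * 250
print(f"{raid5_mb_s} MB/s aggregate")  # 1000 MB/s, i.e. about 1 GB/s
```

Which matches the "32ish hours" and "about 1GB/s" figures above.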

[–] RememberTheApollo_@lemmy.world 9 points 17 hours ago (1 children)

I thought I read somewhere that larger drives had a higher chance of failure. A quick look around suggests that's untrue relative to newer drives.

[–] frezik@midwest.social 18 points 17 hours ago* (last edited 17 hours ago) (4 children)

One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You're sitting there for days hoping that no other drive fails while the process runs. Current SATA and SAS standards are already as fast as spinning platters can go; making the interfaces even faster won't help anything.

There was some debate among storage engineers about whether they even want drives bigger than 20TB: the extra density isn't worth the added risk of data loss during a rebuild. That will probably remain true until SSDs get closer to the price per TB of spinning platters (not necessarily equal; possibly more like double the price).

[–] GamingChairModel@lemmy.world 12 points 16 hours ago

If you're writing 100 MB/s, it'll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that can be considered a long time to be exposed to risk of some other hardware failure.
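The numbers in the comment above check out; a minimal sketch of the same estimate (30 TB at a sustained 100 MB/s, decimal units):

```python
# Exposure window when rebuilding 30 TB at a sustained 100 MB/s
seconds = 30 * 1_000_000 / 100  # 30 TB in MB, divided by 100 MB/s
minutes = seconds / 60
hours = seconds / 3600
days = hours / 24
print(int(seconds), int(minutes), round(hours, 1), round(days, 1))
# 300000 5000 83.3 3.5
```

Three and a half days is the best case; real rebuilds also contend with ongoing reads and parity computation.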

[–] RememberTheApollo_@lemmy.world 7 points 17 hours ago

Yep. It’s a little nerve-wracking when I replace a RAID drive in our NAS, but I do it before there’s a problem with the drive. I can mount the old one back in, or try another new drive. I’ve only ever had one new drive arrive DOA; here’s hoping those stay few and far between.
