I know this is a joke, but in case some people are actually curious: the manufacturer gives the capacity in terabytes (1 TB = 1 trillion bytes) and the operating system probably shows it in tebibytes (1 TiB = 1024^4 bytes ≈ 1.1 trillion bytes). So 2 terabytes are two trillion bytes, which is approximately 1.82 tebibytes.
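In case anyone wants to check that arithmetic, here's the whole thing in two lines of Python:

```python
capacity = 2 * 10**12    # "2 TB" as sold: 2,000,000,000,000 bytes
print(capacity / 2**40)  # one TiB is 1024**4 bytes -> prints ~1.819
```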
They could easily use the proper units, but at some point someone decided to cheat, and now everyone does it, to the point that this has become the standard.
Eh, that goes back at least to the days of dial-up.
56k modem connections were 7k bytes per second or less.
The drive thing confused and angered many, because most OSs of the time (and even now) report binary kilobytes (KiB) as kB, which technically was incorrect, since k is the SI prefix for 1000 (10^3), not the binary unit of 1024 (2^10).
Really, they should have advertised both on the boxes.
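A toy sketch of what "both on the box" could look like (the function name here is made up for illustration):

```python
def label_both(n_bytes: int) -> str:
    """Format a capacity in decimal (SI) and binary (IEC) terabytes."""
    return f"{n_bytes / 10**12:.2f} TB ({n_bytes / 2**40:.2f} TiB)"

print(label_both(2 * 10**12))  # -> 2.00 TB (1.82 TiB)
```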
I think Mac OS switched to reporting data in kibibytes (KiB) vs kB since Mac OS X 10.6.
I remember folks at the time thinking the new update was so efficient it had grown their drive space by 10%!
While macOS did indeed primarily switch to KiB, MiB and GiB, it does at times still report storage as KB, MB, GB, etc.; however, it then uses the (correct) 1000 B = 1 KB.
And afaik, Linux also uses the same (correct) system, at least most of the time.
The only real outlier is Windows, which still uses the old system with KB = 1024 B some of the time. In certain menus, it does correctly use KiB.
Please note that kilo is a small k. n, μ, m, k, M, G, T, ...
And yes. A lot of people here get at least one of those wrong.
While you are correct, I know of no operating system that doesn't capitalize the K. At the very least not consistently.
I just checked and my Android phone does indeed make the same error. Amazing.
I guess it's for consistency. M, G and T are all capitalized, and n, p or μ aren't relevant for bits and bytes. Makes sense to also capitalize the k.
Edit: In case of kbps and Mbps, the capitalization is usually correct though…
Network/signal engineers have always operated, and still operate, in bits, not bytes. They were already doing that back when what we now call a byte was still called an octet, and when you send a byte over any network transport it's probably not going to send eight bits but those plus parity, stop bits, and whatnot; ask a network engineer.
What speed test are you running that gives its results in bytes?
His speed test consists of downloading files lol
Granted, that's probably a better way of getting the actual attainable speed
Nonsense. It's a simple continuation of something that has always been around. They would have needed to actively and purposefully change it. The first company that tried to sell "1 megabyte/s" instead of "8 megabits/s" would be shooting themselves in the foot, because the number is smaller. If it was going to change, you would need everyone to agree at once to correct the numbers the same way.
Modems were 300 baud, then 1200 baud, then 56.6k baud. ISDN took things to 128k baud, and a T1 was 1.544M baud. Except that sometime around the time things went into tens of k, we started saying "bits" instead of "baud". In any case, it simply continued with the first DSL and cable modems being around 1 to 10 Mbits. You had to be able to compare it fairly to what came before, and the easiest way to do that is to keep doing what they've been doing.
Ethernet continues to be sold in the same system of measurement, for the same reasons.
You're telling me that what I said is nonsense, and then you just paraphrase what I said.
Don't go thinking engineering has anything to do with what marketing put up on their storefront.
It has plenty to do with engineering, because it was engineering that first decided to measure things this way. Marketing merely continued it.
Which, as you mentioned, they keep because dropping it wouldn't be a good marketing move; a higher number sells better, even though it doesn't reflect the modern end user's internet experience. They don't keep it because an engineer prefers it that way. Marketing will fight tooth and nail to screw us engineers over if it sells better.
Thing is, there's no rational reason to arbitrarily use groups of 8 bits for transmission over the wire. It's not just ISPs who use bits, the whole networking industry does it that way.
To expand on this a bit more, bits are used for data transmission rates because various types of encoding, padding, and parity mean that data on the wire isn't always 8 bits per byte. Dial-up modems very frequently used 10 bits per byte (8-N-1 signalling: a start bit, 8 data bits, and a stop bit), and for something more modern, PCIe uses 8b/10b encoding, which is 10 bits on the line for each 8 bits of actual payload.
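Rough sketch of that arithmetic in Python (illustrative numbers only; real links negotiate compression and other layers on top of this):

```python
# 8-N-1 serial framing: 1 start + 8 data + 1 stop = 10 line bits per data byte
line_rate = 56_000                      # bits per second on the wire
payload = line_rate / 10                # -> 5600.0 bytes/s of payload, not 7000
print(payload)

# 8b/10b encoding (e.g. first-generation PCIe): 10 line bits per 8 payload bits
pcie_line_rate = 2_500_000_000          # 2.5 GT/s per lane
pcie_payload = pcie_line_rate * 8 / 10  # -> 2.0 Gbit/s of actual payload
print(pcie_payload)
```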
Before mebi-, gibi-, tebibytes, etc. were a thing, it was the hard drive manufacturers who were cheating a little. Everyone saw a kilobyte as 1024 bytes, but the storage manufacturers used the SI definition of kilo = 1000 to their advantage.
By now, however, kibibytes being 1024 bytes and kilobytes being 1000 bytes is pretty much the standard that most agree on. One notable exception is of course Windows…
Indeed, Windows could easily stop mislabeling TiB as TB, but it seems it's too hard for them.
The IEC changing the definition of 1 KB from 1024 bytes to 1000 bytes was a terrible idea that's given us this whole mess. Sure, it's nice and consistent with scientific prefixes now... except it's far from consistent in actual usage. So many things still treat it as a binary prefix following the JEDEC standard. Just like KiB is always 1024 bytes, I really think they should've introduced another new unambiguous unit, e.g. KoB, that's always 1000 bytes, and deprecated the poorly defined KB altogether.
M stands for mega, an SI prefix that has existed longer than the computer data it's being used to label. MB meaning 1,000,000 bytes was always the correct definition; it's just that someone decided they could somehow change it.
Consistency with proper scientific prefixes is nice to have, but consistency within the computing industry itself is really important, and now we have neither. In this industry, binary calculations were central, and powers of 2 were much more useful. They really should've picked a different prefix to begin with, yes. But as for the IEC correcting it retroactively, that has failed. It's a mess that's far from actually standardised now.
B and b have never been SI units. The closest is Bq. So if people hadn't insisted that it's confusing, no one would've been confused.
That does not mean you can misuse SI prefixes just because the unit itself is not part of the system.
I think there were some court cases in the US that the HDD manufacturers won, which allow them to keep using those stupid crap units and continue to mislead people. It's been a minor annoyance for decades, but since all the competition does it and no government is willing to do anything, everyone is stuck accepting it as is. I should start writing down the capacity in multiple units in reviews whenever I buy storage devices going forward.
And as far as my wife is concerned, I'm definitely 6 ft tall. Height ain't what it used to be.
So what you're saying is that ... we can make up whatever number and standard we want? ... In that case, would you like to buy my 2-Tyrannosaurusbyte hard drive?
Nah, the prefixes kilo-, mega-, giga-, etc. are defined precisely the way hard drive manufacturers use them, in the SI standard: https://en.wikipedia.org/wiki/International_System_of_Units#Prefixes
The 1024-based magnitudes, which the computing industry introduced, were non-standard. These days, the 1024-based prefixes are officially called kibi-, mebi-, gibi-, etc.: https://en.wikipedia.org/wiki/Binary_prefix
You're missing a huge part of the reason why the term 'tebibytes' even exists.
Back in the 90s, when USB sticks were just coming out, a megabyte was still 1024 kilobytes. Companies saw the market get saturated with drives but they were still expensive and we hadn't fully figured out how to miniaturize them.
So some CEO got the bright idea of changing the definition of a "megabyte" to mean 1000 kilobytes. That way they could say that their drive had more megabytes than their competitors'. "It's just 24 kilobytes. Who's going to notice?"
Nerds.
They stormed various boards to complain, but because the average user didn't care, sales went through the roof, and soon the entire storage industry changed. Shortly after that, they started cutting costs and actually making smaller drives while calling them by their original size, i.e. 64 MB (where 64 MB now meant 64,000,000 bytes).
The people who actually cared had to invent the term "mebibyte" purely because of some CEO wanting to make money. And today we have a standard that only serves to confuse people who actually care that their "2 TB" isn't 2048 GiB but only about 1.82 TiB.
Dude, a "1.44MB" floppy disk was 1.38MiB once formatted (1,474,560 B raw). It's been going on for eternity.
It's inconsistent across time though. 700MB on a CD-R was MiB, but a 4.7GB DVD was not.
RAM has always, without exception, been reported with 1024 B per KB. Conversely, network bandwidth has been 1000 B per KB for every application since the dial-up days (and prior).
One thing to point out: the floppy thing isn't due to formatting, the units themselves were screwed up. It's not 1.44 million bytes or 1.44 MiB regardless of formatting; they are 1440 KiB (which produces the raw size you gave), or about 1.406 MiB unformatted (see the quick check below).
The reason is that they were doubled from 720 KiB disks*, and the largest standard 5¼ inch disks ("1.2 MB") were doubled from 600 KiB*. I guess it seemed easier or less confusing for users than doubled 600k becoming 1.17M.
(* Those smaller sizes were themselves already doubled from earlier sizes. The "1.44 MB" ones are "double sided, double density".)
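Quick check in Python: the "1.44" only falls out if you mix decimal and binary units:

```python
raw = 1440 * 1024           # 1440 KiB -> 1,474,560 bytes
print(raw / 10**6)          # 1.47456  (so not 1.44 million bytes)
print(raw / 2**20)          # 1.40625  (and not 1.44 MiB either)
print(raw / (1000 * 1024))  # 1.44     (a 1000 * 1024 hybrid "megabyte")
```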
That's just wrong. "Kilo" comes from the ancient Greek for "thousand". It always meant 1000. Because bytes are grouped in powers of two, and because of the pure coincidence that 10^3 (1000) is almost the same size as 2^10 (1024), people colloquially said kilobyte when they meant 1024 bytes, but that was always wrong.
Update: to make it even clearer, try to think about what would have happened historically if, instead of binary, most computers used ternary. Nobody would even think about reusing kilo for 3^6 (= 729) or 3^7 (= 2187), because they are not even close.
Reusing well-established prefixes like kilo was always a stupid idea.
Or, you know, for consistency? In physics, kilo, mega, etc. are always 10^(3n), but then, for some bizarre reason, units of information use the same prefixes as 2^(10n).
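And the mismatch compounds with every prefix step, which is easy to see in a couple of lines of Python:

```python
# how far 2**(10n) drifts above 10**(3n) for each prefix
for n, prefix in enumerate(["kilo", "mega", "giga", "tera"], start=1):
    print(f"{prefix}: {2**(10*n) / 10**(3*n) - 1:.1%}")
# kilo: 2.4%, mega: 4.9%, giga: 7.4%, tera: 10.0%
```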
Depends on the OS. For some reason macOS uses base 10.