Up until the early 2000s, serial computation speed doubled about every 18 months. That meant virtually all software simply ran twice as fast with every 18 months of CPU advances. And since taking advantage of that was trivial, new software releases did exactly that: they traded CPU cycles for shorter development time or more functionality, and demanded current hardware to run at a reasonable clip.
In that environment, it was quite important to upgrade the CPU.
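As a rough sketch of how quickly that compounds (assuming the 18-month doubling held exactly, which it only ever did approximately):

```python
# Rough sketch: what "serial speed doubles every 18 months" implies,
# assuming the doubling rule held exactly (it was only ever approximate).
doubling_period_years = 1.5                        # 18 months

per_year = 2 ** (1 / doubling_period_years)        # ~1.587x, i.e. ~59% faster per year
over_12_years = 2 ** (12 / doubling_period_years)  # 2**8 = 256x over 12 years

print(f"per year:      {per_year:.3f}x")
print(f"over 12 years: {over_12_years:.0f}x")
```

Compare that hypothetical 256x over twelve years with the roughly 2.4x actually measured below.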
But that hasn't been happening for about twenty years now. Serial computation speed still increases, but not nearly as quickly any more.
This is about ten years old now:
https://preshing.com/20120208/a-look-back-at-single-threaded-cpu-performance/
We can also look at the roughly twelve years since then, over which the improvement has been even slower:
https://www.cpubenchmark.net/compare/2026vs6296/Intel-i7-4960X-vs-Intel-Ultra-9-285K
This uses a benchmark to compare the single-threaded performance of the i7-4960X (Intel's high-end processor at the start of 2013) to that of the Intel Ultra 9 285K, the current one. In those ~12 years, the newest processor has managed to get single-threaded performance about

(5068 / 2070) ≈ 2.448

times that of the 12-year-old processor. That's

(5068 / 2070)^(1/12) ≈ 1.0775

per year, i.e. about a 7.7% performance improvement per year (a quick sketch of that arithmetic is below). The age of a processor doesn't matter nearly as much in that environment.

We still have had significant parallel computation increases. GPUs in particular have gotten considerably more powerful. But unlike serial compute, parallel compute isn't a "free" performance improvement -- software needs to be rewritten to take advantage of it, many problems are hard to parallelize, and some can't be parallelized at all.
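A quick sketch of that arithmetic in Python (the 2070 and 5068 figures are the single-thread scores from the comparison linked above; the 12-year span is approximate):

```python
# Sketch of the annualized-improvement arithmetic above.
old_score = 2070   # Intel i7-4960X, early 2013 (single-thread score)
new_score = 5068   # Intel Ultra 9 285K (single-thread score)
years = 12

ratio = new_score / old_score          # total improvement over the span
per_year = ratio ** (1 / years)        # compound annual growth factor

print(f"total:    {ratio:.3f}x")       # ~2.448x
print(f"per year: {per_year:.4f}")     # ~1.0775, i.e. ~7.7% per year
```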
Honestly, I'd say the most noticeable shift has been from rotational drives to SSDs -- there are tasks for which SSDs vastly outperform rotational drives.
My line for computational adequacy was crossed with the Core 2 Duo. Any chip since has been fine for everyday administration or household use, and those machines are still fine running Linux.
Any Apple silicon chip, including the M1, is now adequate even for high-end production, setting a new low bar and a new watershed.
You know, that would explain a lot, because I had no idea there was an authentication PIN, and that's total bullshit.