[–] Buffalox@lemmy.world 61 points 1 month ago (1 children)

I look forward to watching a Gamers Nexus review of this. I hope it's as good as they say. 😀

[–] downhomechunk@midwest.social 6 points 4 weeks ago (1 children)

Lead us to salvation, Tech Jesus!

[–] Buffalox@lemmy.world 4 points 4 weeks ago

And he is, one review at a time.

[–] cordlesslamp6891@lemmy.world 49 points 1 month ago

Finally, now I can afford the 5800x3D.

[–] solrize@lemmy.world 20 points 1 month ago (1 children)

I'm an anti-fan of Apple, but the M4 Max is supposed to be faster than any x86 desktop CPU while using a lot less power. That's per Geekbench 6. I'd be interested in seeing other measurements.

[–] Viri4thus@feddit.org 41 points 1 month ago* (last edited 1 month ago) (3 children)

Geekbench is about as useful a metric as an umbrella on a fish. Also, the M4 Max will not consume less energy than the competition; that's a misconception arising from the lower SKUs in mobile devices. The laws of physics apply to everyone: at the same reticle size, energy consumption in nT workloads is equivalent. Apple's great advantage is that they are usually a node ahead, and eschewing legacy compatibility saves space, and thus energy, in the design, which can be leveraged to reduce power consumption at idle or in 1T workloads. Case in point: Intel's latest mobile CPUs.

[–] independantiste@sh.itjust.works 27 points 1 month ago (2 children)

Exactly, the Apple chips excel at low-power tasks and will consume basically nothing doing them. They're also good for small bursty tasks, but for long-lived intensive tasks they behave basically the same as an equivalent x86 chip. People don't seem to know that these chips can easily consume 80-90W of power when going full tilt.

[–] Buffalox@lemmy.world 5 points 4 weeks ago (2 children)

The new Intel Arrow Lake is supposed to max out at 150W, though it doesn't quite hold to that. And even that's almost 40% better than previous-gen Intel!
So hovering around 80-90W max is pretty modest by today's standards.
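
A quick back-of-the-envelope sketch of that "almost 40%" figure, assuming the 150W (Arrow Lake) and 250W (previous gen) numbers quoted in this thread:

```python
# Sanity check of the generational power-draw claim; the 250 W and 150 W
# figures are as quoted in this thread, not from an official spec sheet.
previous_gen_watts = 250
arrow_lake_watts = 150

reduction = (previous_gen_watts - arrow_lake_watts) / previous_gen_watts
print(f"Generational power reduction: {reduction:.0%}")  # -> 40%
```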

[–] Cocodapuf@lemmy.world 6 points 4 weeks ago (2 children)

That's impressive, or should I say scary? 150W is a lot of heat to dissipate... I hope those aren't laptop chips...

[–] daellat@lemmy.world 8 points 4 weeks ago

The 14900K is an absolute oven.

[–] Buffalox@lemmy.world 4 points 4 weeks ago

No, but the M4 Max is claimed to be at least as fast, and Intel did improve their chip; it's down from 250W for the previous gen! And the M4 Max is faster.

Oh of course, the Apple chips are faster, and this is likely a combination of better efficiency thanks to the newer process node and Apple being able to optimize the chips and power draw much better because they make everything. Apple can also afford to use larger chips because they make a profit on the entire computer, not just the processor itself.

[–] Viri4thus@feddit.org 4 points 4 weeks ago (1 children)

We're condemned to suffer uninformed masses on this. Zen 5 mobile is on N4P at 143 transistors/µm², while the M4 Max is on N3E at 213 transistors/µm². That's a gigantic advantage in power savings and logic per mm² of die. Granted, I don't think the chiplet design will ever reach ARM levels of power gating, but that's a price I'm willing to pay to keep legacy compatibility and expandable RAM and storage. That IO die will always be problematic unless they integrate it into the SoC, but I'd prefer they don't. (Integration also has power-saving advantages; just look at Intel's latest mobile foray.)
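
Taking those quoted density figures at face value, a rough illustration of the size of that gap:

```python
# Rough comparison of the logic-density figures quoted above (not
# independently verified): Zen 5 mobile on N4P vs the M4 Max on N3E.
n4p_density = 143  # transistors per um^2
n3e_density = 213  # transistors per um^2

ratio = n3e_density / n4p_density
print(f"N3E packs roughly {ratio:.2f}x the logic of N4P in the same die area")  # ~1.49x
```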

[–] pycorax@lemmy.world 4 points 4 weeks ago

Not to mention, Apple is able to afford the larger die size per chip since they're vertically integrated and don't have to worry about the cost of each chip in the way that Intel and AMD have to when they sell to device manufacturers.

[–] Buffalox@lemmy.world 16 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

The laws of physics apply to everyone

That is obviously true, but it's a ridiculous argument; there are plenty of examples of systems performing better while using less power than the competition.
For years Intel chips used twice the power for similar performance compared to AMD Ryzen. And in the Bulldozer days it was the same, just the other way around.

Arm had been designing chips for efficiency for a decade before the first smartphones came out, and they've kept their eye on the ball the entire time since.
It's no wonder Arm is way more energy efficient than x86, and Apple made by far the best Arm CPU when the M1 arrived.

The great advantage of Apple is that they are usually a node ahead

Yes, that is an advantage, but the same is true for the new Intel Arrow Lake compared to current Ryzen, yet Arrow Lake uses more power for similar performance, despite being designed for efficiency.

It's notable that Intel was unable to match Arm on power efficiency for an entire decade, even when Intel had the better production node. So it's not just a matter of physics, it is also very much a matter of design. And Intel has never been able to match Arm on that. Arm still has the superior design for energy efficiency over x86, and AMD has the superior design over Intel.

[–] Viri4thus@feddit.org 2 points 4 weeks ago (1 children)

Intel has had a node disadvantage regarding Zen since the 8700K... From then on the entire point is moot.

[–] Buffalox@lemmy.world 2 points 4 weeks ago (2 children)

From then on the entire point is moot.

No it's not, because the point is that design matters. When Ryzen originally came out, it was far more energy efficient than Intel's Skylake. And Intel had the node advantage.

[–] Viri4thus@feddit.org 4 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

https://www.techpowerup.com/review/intel-core-i7-8700k/16.html

https://www.techpowerup.com/cpu-specs/core-i7-6700k.c1825

Ryzen was not more efficient than Skylake. In fact, the 1500X was actually consuming more energy in nT workloads than Skylake while performing worse, which is consistent with what I wrote. What Ryzen was REALLY efficient at was being almost as fast as Skylake for a fraction of the price.

https://www.notebookcheck.net/Apple-M3-Max-16-Core-Processor-Benchmarks-and-Specs.781712.0.html

Will you look at that: in nT workloads the M3 Max is actually less efficient than competitors like the Ryzen 7000 HS parts. The first N3 products had less-than-ideal yields, so Apple went with a less dense node, thus losing the tech advantage for one generation. That can be seen in their laughable nT performance/watt. Design does matter, however, and in 1T workloads Apple's very wide design excels by performing very well while consuming less energy, which is what I've been saying since this thread started.

[–] Buffalox@lemmy.world 1 points 4 weeks ago (1 children)

Power consumption is not efficiency, PPW is.
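
That is, efficiency is the benchmark score divided by the power drawn to produce it. A minimal sketch with made-up numbers (not figures from the linked reviews):

```python
# Toy example of performance-per-watt (PPW) as an efficiency metric.
# The scores and package power values below are invented for illustration.
chips = {
    "chip_a": {"score": 30000, "watts": 140},
    "chip_b": {"score": 27000, "watts": 90},
}

for name, spec in chips.items():
    ppw = spec["score"] / spec["watts"]
    print(f"{name}: {ppw:.0f} points per watt")
# chip_b is slower in absolute terms but clearly more efficient.
```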

[–] Viri4thus@feddit.org 1 points 4 weeks ago

Tell me you didn't open the links without telling me you didn't open the links. Have a nice day friend.

[–] barsoap@lemm.ee 3 points 4 weeks ago

Not to mention ARM chips, which by and large were/are more efficient than x86 on the same node because of their design: ARM chip designers have been doing the efficiency thing since forever, owing to the mobile platform, while desktop designers only got into the game quite late. There are also some wibbles, like ARM instruction decoding being inherently simpler, but big picture that's negligible.

Intel just really, really has a talent for not seeing the writing on the wall, while AMD made a habit of spotting it out of sheer necessity to even survive. Bulldozer nearly killed them (and the idea itself wasn't even bad, it just didn't work out), while Intel is tanking hit after hit after hit.

[–] _____@lemm.ee 15 points 1 month ago (6 children)

What game can't be run by a 5800X3D? If anything, I feel like graphics cards are the biggest bottleneck right now.

[–] tee9000@lemmy.world 10 points 4 weeks ago

Simulators and games with mods can push the CPU. But yeah, mostly GPU-limited.

[–] ColeSloth@discuss.tchncs.de 8 points 1 month ago (1 children)

The GPU has been the gaming bottleneck for decades.

Yup. I have no trouble running modern games on my Ryzen 5600, which doesn't even have the massive cache of the 3D chips. I'm not spending >$1k on a GPU, so my CPU is likely more than sufficient for quite a while.

[–] szczuroarturo@programming.dev 6 points 4 weeks ago (1 children)

Almost any Paradox game, except for maybe Victoria 3.

[–] RixMixed@lemmy.ca 3 points 4 weeks ago

1900s to end date would like a word with you.

[–] KoalaUnknown@lemmy.world 5 points 4 weeks ago

Escape from Tarkov. If you want 120+ fps on Streets you pretty much need a 7800X3D.

[–] nightlily@leminal.space 1 points 4 weeks ago

Dragon's Dogma 2 is notoriously CPU-hungry.

[–] Lettuceeatlettuce@lemmy.ml 1 points 4 weeks ago

5800X3D is my CPU for the next 3-5 years probs. Maybe even longer, it's so damn good.

[–] Defaced@lemmy.world 8 points 1 month ago (2 children)

While the 9000 series looks decent, I honestly think Intel has a really interesting platform to build off of with the Core Ultra chips. It feels like Intel course-correcting after the poor decisions made for the 13th and 14th gen chips. Wendell from Level1Techs made a really good video about the good things Intel put into the chips while also highlighting some of the bad ones: things like a built-in NPU and how they're going to use that to pull in profiles for applications and games with ML, or the fact that performance variance occurs between chipset makers more often with the Core Ultra. It's basically a step forward in tech but a step backward in price/performance.

[–] Mesophar@lemm.ee 11 points 4 weeks ago (1 children)

I work at a tech store; the technicians that build PCs for customers recently tried building with the new Core Ultra 7 265K. Two processors were dead or unstable right out of the box. They tried with known-good RAM, and two different CPUs on two different motherboards. It seems Intel hasn't really fixed their stability issues, which should be their first concern.

[–] Defaced@lemmy.world -1 points 4 weeks ago

Well I didn't say they were perfect.

[–] ColeSloth@discuss.tchncs.de 3 points 1 month ago

So long as they stopped building the RAM in and losing $16,000,000,000 in a fiscal year.

[–] SynopsisTantilize@lemm.ee 3 points 4 weeks ago (1 children)

You guys are actually buying these processors? I'm still running a 4770 and a 1060.

[–] Confetti_Camouflage@pawb.social 2 points 4 weeks ago (1 children)

I'm on a 4770K and GTX 980 as well, but I'm really feeling the pain because all the newer games I want to play are CPU-bottlenecked.

[–] SynopsisTantilize@lemm.ee 1 points 4 weeks ago

Helldivers runs like shit lol

[–] dosse91@lemmy.trippy.pizza 2 points 4 weeks ago

I think I might be the only person who bought a 9950X on launch and was actually very happy with it. Not only does it perform excellently, but unlike its predecessor I can actually use it with air cooling; it's a very efficient and powerful CPU.

[–] IndustryStandard@lemmy.world 0 points 4 weeks ago

Now that is a big boost

[–] wewbull@feddit.uk -1 points 1 month ago (2 children)

Is 20% faster than Intel a step up, generation on generation?

[–] frezik@midwest.social 11 points 1 month ago (1 children)

It'll be a step up from the 7800X3D, but how much of one is the question. The 9000 series in general has been a disappointment in terms of the gains that were expected, but it does show some kind of gain. There's reason to think those issues are fixable. Linux performance does show a decent uplift, for one, which has not been the case with Intel's Arrow Lake chips.

[–] TheGrandNagus@lemmy.world 30 points 1 month ago* (last edited 1 month ago) (3 children)

I know people meme about "Zen 5%" (sidenote: genuinely a clever quip), but most of that is down to AMD massively reducing the power draw of the chips.

If you set it to the same power limits as Zen4, you can get large performance improvements.

Gamers have been saying for years that stuff is getting too power-hungry, but when steps were made to reverse this, they collectively lost their minds.

Seriously, what are they expecting, a 25% improvement in performance at half the power draw, while staying on a 5nm-family node?

AMD were dumb for thinking gamers give even the slightest fuck about power usage. Gamers would much more readily accept a CPU going from 120W to 500W if it meant an imaginary +20% perf uplift over a CPU going from 120W to 70W with a +5% perf uplift. I say imaginary because nobody with a high end CPU and a 4090 actually plays their games at 1080p low.

[–] alphabethunter@lemmy.world 9 points 1 month ago (1 children)

I couldn't quite understand why people were memeing on Zen 5. It's a 5% performance increase at a much lower TDP; what is there not to like? Efficiency is plenty important. And even if we could get a 20% performance increase while using more power, would that be worth it? What are the true benefits of a 20% faster CPU for pure gaming when we are already at the top of the spec sheet? The games where the difference would be a massive number of FPS are those like CS2, where you would go from 600 to 720 fps; does that truly matter? I like my PCs running as efficiently as possible; that way I know they'll last longer.

[–] fluckx@lemmy.world 7 points 1 month ago

To them it probably is. I've seen literal posts (or GitHub comments, I forget) where they are raging that their fps dropped from 420 to 370 with the latest patch and that the game is now completely unplayable!

They have a point complaining because the patch caused a big fps drop, but the game is unplayable? At 370 fps? Gtfo xD.

There's people playing on a lot less than that.

So the smart move here for AMD would have been to bin the chips differently according to their tested stability for power usage, like Intel's T SKUs. It's the same chip, but the "X" versions run at full power (with BIOS options to turn it down to be more efficient, or aggressively scale power delivery, or what have you), and the "E" versions just always run at lower voltages and currents.

I agree that cutting TDP nearly in half while STILL pulling out a perf gain is remarkable, but also not something most gamers are going to care much about in the context of a desktop system.

[–] Aceticon@lemmy.world 2 points 4 weeks ago* (last edited 4 weeks ago)

There are gamers and there are gamers.

Some gamers prefer not to have the noise of a jet engine taking off right next to them just to get a couple percent more frames per second in a game.

I would say there are at least two quite different markets amongst PC gamers who have different preferred balances between performance and the downsides of it (noise, heat, power costs), a bit like not all people who enjoy driving want muscle cars.

[–] Dudewitbow@lemmy.zip 4 points 4 weeks ago

The main benefit in the performance increase from Zen 4 to Zen 5 is that the reordering of the cache and chip layers allowed them to clock the cores higher. One of the biggest bottlenecks for the older X3D designs was clocks: the chip internally insulated a lot of the heat, so their clocks were stepped back from their non-X3D counterparts.

The 9800X3D's base and turbo clocks are a generous step up from the previous gen, and likely the biggest contributing factor to the performance increase when reviews drop.