this post was submitted on 13 Apr 2024
492 points (96.4% liked)

Technology

[–] sugar_in_your_tea@sh.itjust.works 4 points 7 months ago* (last edited 7 months ago) (2 children)

Do you have actual numbers to back that up?

The best I've found is benchmarks of Apple silicon vs Intel+dGPU, but that's an apples to oranges comparison. And if I'm not mistaken, Apple made other changes like a larger bus to the memory chips, which again makes comparisons difficult.

I've heard about potential benefits, but without something tangible, I'm going to have to assume it's not the main driver here. If the difference is significant, we'd see more servers and workstations running soldered RAM, but AFAIK that's just not a thing.

[–] Turun@feddit.de 1 points 7 months ago (2 children)

I understand the scepticism, but without links to what you've found, or without naming which claims in particular you consider dubious (that RAM can run at higher speeds when soldered, that higher speeds lead to better performance, etc.), it comes across as "I don't believe you, because I choose not to believe you."

LTT has made a comparison video on ram speeds: https://www.youtube.com/watch?v=b-WFetQjifc

Do you need proof that soldered ram can be made to run faster?

[–] sugar_in_your_tea@sh.itjust.works 1 points 7 months ago (1 children)

Yes, and the takeaway from that video (I assume; I skimmed it, but have watched similar videos) is that the difference is negligible (like 1-10 FPS) and you're usually better off spending that money on something else.

I look at the benchmarks between the Intel MacBook Pro and the M1 MacBook Pro, and both use soldered RAM, yet the M1 gets much better performance, even on non-GPU tasks (e.g. memory-heavy unit tests at work went from 3-5 min on the latest Intel model to 45-50 sec on the M1). Docker build times saw a similar drop. But it's hard for me to separate memory changes from CPU changes. I'd have to check, but I'm guessing there's also the DDR4-to-DDR5 switch, which increases the number of memory channels.
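To put rough numbers on the channel/bandwidth question, here's some peak-bandwidth arithmetic. These are nominal spec figures, not measurements, and the M1 line is my assumption based on its published 128-bit LPDDR4X-4266 configuration:

```python
# Peak memory bandwidth = (transfers per second) x (bus width in bytes)
# x (number of channels). All figures are nominal spec values.

def peak_bandwidth_gbs(mt_per_s: float, bus_bits: int, channels: int) -> float:
    return mt_per_s * 1e6 * (bus_bits // 8) * channels / 1e9

ddr4 = peak_bandwidth_gbs(3200, 64, 2)   # dual-channel DDR4-3200
ddr5 = peak_bandwidth_gbs(6400, 64, 2)   # dual-channel DDR5-6400
m1   = peak_bandwidth_gbs(4266, 128, 1)  # M1: 128-bit LPDDR4X-4266

print(f"DDR4-3200 dual channel: {ddr4:6.1f} GB/s")
print(f"DDR5-6400 dual channel: {ddr5:6.1f} GB/s")
print(f"M1 on-package LPDDR4X:  {m1:6.1f} GB/s")
```

So the raw DDR4-to-DDR5 jump alone can roughly double peak bandwidth, which muddies any attempt to attribute the Intel-to-M1 speedup to soldering specifically.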

The claim is that proximity to the CPU explains it, but I have trouble quantifying that. For me, a 1-10 FPS gain isn't enough to give up repairability and expandability. Maybe it is for others, but if that's the actual difference, it's a lot smaller than the claims being made.

[–] Turun@feddit.de 1 points 7 months ago* (last edited 7 months ago) (1 children)

The video has a short section on productivity (i.e. rendering or compiling). That part is probably the most relevant for most people. Check the chapter view in YouTube to jump directly to it.

I think a 2x performance improvement is plausible when comparing non-soldered RAM to the Apple silicon, which goes even further and has the memory on the die itself. If, of course, RAM is the limiting factor.

The advantages of upgradable, expandable RAM are obvious. But let's face it: most people don't need that capability, and even fewer use it.

short section on productivity

Looks about the same as the rest: big gains for Handbrake, pretty much nothing for anything else. And that makes sense, because Handbrake does lots of round trips to the GPU for encoding.

has the memory on the die itself

On the package, not the die. But perhaps that's what you meant. On die would be closer to a massive cache like on the X3D AMD chips.

The performance improvement seems to be that Apple has a massive iGPU, not anything to do with RAM next to the CPU. So in CPU-only benchmarks, I'd expect the lion's share of the difference to be CPU design and process node, not the memory.

Also, unified memory isn't particularly new, APUs have supported it for years. It's just not well utilized by devs because most users have dGPUs. So I think the main innovation here is Apple committing to it and providing tooling for devs to utilize the unified memory better, like console manufacturers have done.

So I guess that brings a few more questions:

  • what performance improvements could we see if devs use unified memory in socketed LPDDR memory in laptops?
  • how would that compare to Apple's on-package RAM (I think it's also LPDDR, so more apples to apples?)?
  • how likely are AMD and Intel to push for massive APUs on laptops?

I guess we're kind of seeing it with handheld gaming PCs like the Steam Deck, Ayaneo, et al., so maybe that'll become more mainstream.

[–] BorgDrone@lemmy.one 1 points 7 months ago

The best I've found is benchmarks of Apple silicon vs Intel+dGPU, but that's an apples to oranges comparison.

The thing with benchmarks is that they only show you the performance of the type of workload the benchmark is trying to emulate. That's not very useful in this case. Current PC software is not built with this kind of architecture in mind, so it was never designed to take advantage of it. In fact, it's the exact opposite: since transferring data to/from VRAM is a huge bottleneck, software is designed to avoid those transfers as much as possible.

For example: a GPU is extremely good at performing an identical operation on lots of data in parallel, much faster than the CPU can. However, copying the data to VRAM and back may add so much time that the whole job still finishes faster on the CPU, so a developer may choose to run it on the CPU even though the GPU was specifically designed for that kind of work. On a system with UMA you would absolutely run this on the GPU.
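That tradeoff can be sketched with a toy back-of-envelope model. All throughput figures here are illustrative assumptions, not benchmarks of any real hardware:

```python
# Toy model of the CPU-vs-dGPU decision when input data must cross
# the PCIe bus. All throughput figures are illustrative assumptions.

CPU_GFLOPS = 100        # assumed CPU compute throughput
GPU_GFLOPS = 10_000     # assumed GPU throughput (100x the CPU)
PCIE_GBPS = 16          # assumed PCIe transfer rate, GB/s

def cpu_time(flops):
    # Data is already in system RAM: compute cost only.
    return flops / (CPU_GFLOPS * 1e9)

def dgpu_time(data_bytes, flops):
    # Discrete GPU: copy input to VRAM and result back, then compute.
    transfer = 2 * data_bytes / (PCIE_GBPS * 1e9)
    return transfer + flops / (GPU_GFLOPS * 1e9)

def uma_gpu_time(flops):
    # Unified memory: the GPU reads the pages the CPU wrote; no copy.
    return flops / (GPU_GFLOPS * 1e9)

# A light kernel: 1 GB of data, 10 FLOPs per byte.
data, flops = 1e9, 1e10
print(f"CPU:     {cpu_time(flops)*1e3:6.1f} ms")       # 100 ms
print(f"dGPU:    {dgpu_time(data, flops)*1e3:6.1f} ms") # 126 ms
print(f"UMA GPU: {uma_gpu_time(flops)*1e3:6.1f} ms")   # 1 ms
```

With these made-up numbers, the dGPU loses to the CPU purely on transfer cost despite being 100x faster at the math, while the UMA GPU wins outright, which is exactly why developers keep light kernels on the CPU in a dGPU world.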

The same thing goes for something like AI accelerators. What PC software exists that takes advantage of such a thing?

A good example of what happens if you design software around this kind of architecture can be found here. This is a post by a developer who worked on Affinity Photo. When they designed this software they anticipated that hardware would move towards a unified memory architecture and designed their software based on that assumption.

When they finally got their hands on UMA hardware in the form of an M1 Max, that laptop chip beat the crap out of a $6000 W6900X.

We’re starting to see software taking advantage of these things on macOS, but the PC world still has some catching up to do. The hardware isn’t there yet, and the software always lags behind the hardware.

I've heard about potential benefits, but without something tangible, I'm going to have to assume it's not the main driver here. If the difference is significant, we'd see more servers and workstations running soldered RAM, but AFAIK that's just not a thing.

It’s coming, but Apple is ahead of the game by several years. The problem is that in the PC world no one has a good answer to this yet.

Nvidia makes big, hot, power-hungry discrete GPUs. They don't have an x86 core, and Windows on ARM is a joke at this point. I expect them to focus on the server side with custom high-end AI processors and slowly move out of the desktop space.

AMD is best positioned on the desktop: they have a decent x86 core and GPU, and they already make APUs. Intel is trying to get into the GPU game but has some catching up to do.

Apple has been quietly working towards this for years. They have their UMA architecture in place, they are starting to put some serious effort into GPU performance, and rumor has it that with M4 they will make some big steps in AI acceleration as well. The PC world is held back by a lot of legacy hardware and software, but there will be a point where it will have to catch up or be left in the dust.