this post was submitted on 16 Oct 2024
73 points (95.1% liked)

Linux

I see the rise in popularity of Linux laptops, so hardware compatibility is ready out of the box. However, I wonder how I would build a PC right now with a budget to high-end specification. For now I'm thinking:

  • Case: does not matter
  • Fans: does not matter
  • PSU: does not matter
  • RAM: does not matter I guess?
  • Disks: does not matter I guess?
  • CPU: AMD / Intel - does not matter but I would prefer AMD
  • GPU: AMD / Intel / Nvidia - for gaming and Wayland - AMD; for AI, ML, CUDA, and other technologies with first-class support - Nvidia.

And now the most confusing part for me - the motherboard... Is there even a coreboot or libreboot motherboard for a desktop PC that supports "high end" hardware?

Let me also state the purpose of this Linux PC. Choose any of these:

  1. Blender 3D Animation rendering
  2. Gaming
  3. Local LLM running

If you have some good resources on this also let me know.

[–] brucethemoose@lemmy.world 16 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Basically the only thing that matters for LLM hosting is VRAM capacity. Hence AMD GPUs can be OK for LLM running, especially if a used 3090/P40 isn't an option for you. It works fine, and the 7900/6700 are like the only sanely priced 24GB/16GB cards out there.
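To make "VRAM capacity is basically the only thing that matters" concrete, here is a rough back-of-envelope sketch (a hypothetical helper, not from any library) for how much VRAM a model's weights need at a given quantization:

```python
# Rough VRAM estimate for loading an LLM's weights at a given quantization.
# Illustrative sketch only -- real usage also needs headroom for the KV
# cache and runtime overhead on top of this.

def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate decimal GB of VRAM just for the model weights."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B model at 4-bit needs ~35 GB for weights alone, so it won't fit on
# a single 24GB card; a 13B model at 4-bit (~6.5 GB) fits easily.
print(weight_vram_gb(70, 4))  # 35.0
print(weight_vram_gb(13, 4))  # 6.5
```

This is why a 24GB card buys you a whole tier of model sizes that a 16GB card can't touch, regardless of vendor.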

I have a 3090, and it's still a giant pain with Wayland, so much so that I use my AMD iGPU for display output, and Nvidia still somehow breaks things. Hence I just do all my gaming in Windows TBH.

CPU doesn't matter for LLM running, cheap out with a 12600K, 5600, 5700X3D or whatever. And the single-CCD X3D chips are still king for gaming AFAIK.

[–] GenderNeutralBro@lemmy.sdf.org 4 points 2 weeks ago (2 children)

Basically the only thing that matters for LLM hosting is VRAM capacity

I'll also add that some frameworks and backends still require CUDA. This is improving but before you go and buy an AMD card, make sure the things you want to run will actually run on it.

For example, bitsandbytes support for non-CUDA backends is still in alpha stage. https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
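One way to think about the "check before you buy" advice: the decision usually reduces to gating logic like the sketch below. `pick_backend` is a hypothetical helper; with PyTorch installed you would feed it `torch.cuda.is_available()` and `torch.version.hip` (ROCm builds of PyTorch also report devices via the `cuda` API):

```python
# Sketch of the backend decision before committing to a CUDA-only dependency.
# The inputs are assumptions you'd probe via your framework of choice.

def pick_backend(cuda_available: bool, is_rocm_build: bool) -> str:
    if cuda_available and not is_rocm_build:
        return "cuda"  # full ecosystem, incl. CUDA-only tools like bitsandbytes
    if cuda_available and is_rocm_build:
        return "rocm"  # fine for llama.cpp / exllama / vllm style stacks
    return "cpu"       # fallback: small quantized models only

print(pick_backend(True, False))  # cuda
```

The point is simply that "rocm" lands you in the middle row: most inference stacks work, but CUDA-only tooling does not.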

[–] brucethemoose@lemmy.world 6 points 2 weeks ago

For local LLM hosting, basically you want exllama, llama.cpp (and derivatives) and vllm, and rocm support for all of them is just fine. It's absolutely worth having a 24GB AMD card over a 16GB Nvidia one, if that's the choice.

The big sticking point I'm not sure about is flash attention for exllama/vllm, but I believe the triton branch of flash attention works fine with AMD GPUs now.

[–] Psyhackological@lemmy.ml 2 points 2 weeks ago

From what I have seen, CUDA still gets first-class support.

[–] possiblylinux127@lemmy.zip 2 points 2 weeks ago (1 children)

This only matters if you are running large models. If you stick with Mistral sized models you don't need nearly as much hardware.

[–] brucethemoose@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

These days, there are amazing "middle sized" models like Qwen 14B, InternLM 20B and Mistral/Codestral 22B that are such a massive step over 7B-9B ones you can kinda run on CPU. And there are even 7Bs that support a really long context now.

IMO it's worth reaching for >6GB of VRAM if LLM running is a consideration at all.

[–] Psyhackological@lemmy.ml 1 points 2 weeks ago (2 children)

VRAM and RAM, I think. Still, AMD always seems slower than Nvidia for this purpose for some reason. Same for Blender benchmarks.

Ah I use my AMD GPU with Bazzite and it is wonderful.

CPU does not matter when the GPU does the work. Otherwise, small models will do fine on CPU, especially with more recent CPU instructions for running LLMs.
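On "more recent instructions": CPU inference backends like llama.cpp's lean heavily on SIMD extensions. A quick Linux-specific way to see what your CPU offers is to grep the flags line of `/proc/cpuinfo`; the sketch below parses it (the flag names are the standard ones, but treat this as illustrative):

```python
# Check which SIMD extensions relevant to CPU LLM inference are present.
# Parsing /proc/cpuinfo is Linux-specific; pass its text in as a string.

def simd_flags(cpuinfo_text: str) -> set:
    wanted = {"avx2", "avx512f", "f16c", "fma"}
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return wanted & set(line.split(":", 1)[1].split())
    return set()

# Real usage: simd_flags(open("/proc/cpuinfo").read())
sample = "flags\t: fpu sse sse2 avx avx2 fma f16c"
print(sorted(simd_flags(sample)))  # ['avx2', 'f16c', 'fma']
```

AVX2 + FMA is roughly the baseline where small quantized models become tolerable on CPU.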

[–] GenderNeutralBro@lemmy.sdf.org 3 points 2 weeks ago (1 children)

Yeah, AMD is lagging behind Nvidia in machine learning performance by like a full generation, maybe more. Similar with raytracing.

If you want absolute top-tier performance, then the RTX 4090 is the best consumer card out there, period. Considering the price and power consumption, this is not surprising. It's hardly fair to compare AMD's top-end to Nvidia's top-end when Nvidia's is over twice the price in the real world.

If your budget for a GPU is <$1600, the 7900 XTX is probably your best bet if you don't absolutely need CUDA. Any performance advantage Nvidia has goes right out the window if you can't fit your whole model in VRAM. I'd take a 24GB AMD card over a 16GB Nvidia card any day.

You could also look at an RTX 3090 (which also has 24GB), but then you'd take a big hit to gaming/raster performance and it'd still probably cost you more than a 7900XTX. Not really sure how a 3090 compares to a 7900XTX in Blender. Anyway, that's probably a more fair comparison if you care about VRAM and price.

[–] Psyhackological@lemmy.ml 1 points 1 week ago

Great read, thanks!

About Blender: https://opendata.blender.org/ has benchmark scores, so you can compare CPUs and GPUs directly.

[–] brucethemoose@lemmy.world 2 points 2 weeks ago (1 children)

I am not a fan of CPU offloading because I like long context, 32K+. And that absolutely chugs if you even offload a layer or two.
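Long context chugging comes down to KV-cache size growing linearly with context length. A back-of-envelope sketch (the shapes below are assumptions in the ballpark of a Llama-3-8B-class model, not exact specs):

```python
# Back-of-envelope KV-cache size -- the thing that makes 32K+ context
# expensive. 2x accounts for separate K and V tensors; fp16 by default.

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 context: int, bytes_per_elem: int = 2) -> float:
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 2**30

# 32K context on an assumed 32-layer / 8-KV-head / 128-dim model:
# ~4 GiB of cache on top of the weights themselves.
print(kv_cache_gib(32, 8, 128, 32768))  # 4.0
```

So at 32K+ the cache alone eats a meaningful slice of a 24GB card, and pushing any of it (or any layers) to system RAM is what makes generation crawl.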