this post was submitted on 27 Jun 2024
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
I had much better success using WSL, but I haven't used it or even updated it in a long while. (I have been meaning to see how AMD GPU support has evolved over the last few months. Back in January-ish, AMD support was still bad.)
Anything that is even remotely Linux-related is much easier to get working with WSL, btw. Almost all of my personal Python stuff runs under it, and it works great with VS Code.
I mean, Linux is an option, but haven't people been saying Nvidia drivers are a huge hassle to use on Linux?
Nah. There are some Nvidia issues with Wayland (that are starting to get cleared up), and Nvidia's drivers not being open source rubs some people the wrong way, but getting Nvidia and CUDA up and running on Linux is pretty easy/reliable in my experience.
WSL is a bit different, but there are steps to get that up and running too.
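For what it's worth, once the driver is installed, a quick sanity check like this (just a sketch, and it assumes you're using a CUDA-enabled PyTorch build) tells you whether the GPU is actually visible, under either native Linux or WSL:

```python
# Minimal check that PyTorch can see the Nvidia GPU.
# Works the same under native Linux or WSL once the driver
# and a CUDA-enabled torch build are installed.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```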
They can be, I suppose. However, the AI libraries I was tinkering with all seemed to be built around Ubuntu and Nvidia. With Docker, GPU passthrough is much better under Linux with Nvidia.
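If you go the Docker route, a quick way to confirm passthrough is working is to run `nvidia-smi` inside a CUDA container. Rough sketch below; it assumes the NVIDIA Container Toolkit is set up on the host, and the image tag is only an example:

```python
# Hypothetical passthrough check: run nvidia-smi inside a CUDA base image.
# Assumes Docker and the NVIDIA Container Toolkit are installed on the host;
# swap the example image tag for whatever you actually use.
import subprocess

result = subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all",
     "nvidia/cuda:12.4.1-base-ubuntu22.04", "nvidia-smi"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```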
WSL improved things a bit after I got an older GTX 1650. For my AMD GPU, ROCm support is (was?) garbage under Windows using either Docker or WSL. I don't remember having much difficulty with the Nvidia drivers, though... I think there were some strange dependency problems I was able to work through.
AMD GPU passthrough on Windows to Docker containers was a no-go. I remember that fairly clearly, though.
My apologies. It has been a few months since I messed with this stuff.