AMD
PC Gaming
For PC gaming news and discussion.
Here is my process for new cards: pick a price point, head over to Video Card Benchmark, and scroll down until you find the first (i.e. fastest) video card that meets that price point. Also double-check prices at PC Part Picker. For used cards the chart is still pretty useful, just a bit more manual (and money-saving!), since you get used prices from eBay/Mercari.
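The process above is basically "filter by budget, take the fastest". A rough sketch in Python, where the card names, prices, and benchmark scores are made-up placeholders, not real PassMark data:

```python
# Sketch of the selection process described above: given a budget,
# keep the cards you can afford and take the highest-scoring one.
# All data below is illustrative, not real PassMark numbers.

def pick_card(cards, budget):
    """Return the highest-scoring card at or under the budget, or None."""
    affordable = [c for c in cards if c["price"] <= budget]
    return max(affordable, key=lambda c: c["score"], default=None)

cards = [
    {"name": "Card A", "price": 199, "score": 12000},
    {"name": "Card B", "price": 289, "score": 19000},
    {"name": "Card C", "price": 320, "score": 21000},
]

print(pick_card(cards, 300)["name"])  # Card B
```

For used cards you'd do the same thing, just with prices you looked up by hand.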
I personally have an AMD bias though, since they have pulled way less shit than Nvidia. But that is your decision to make.
Isn't PassMark the benchmark software that's biased against AMD? I thought their benchmarks weren't really trustworthy.
You are thinking of UserBenchmark, which is a stain on the PC gaming industry.
That's the one! Thank you.
I must be in the minority, but I've been digging Intel's Arc GPUs. For their price point, and given that I don't play bleeding-edge AAA games, they've actually done pretty well. Additionally, I'm tired of Nvidia's price gouging and AMD following after; I want to support a disruptive third party. Their driver support gets better with every release, and I can't wait to see their next generation of cards.
I agree with the Arc cards.
They are good, they are cheap, and they're targeting the midrange to low-end hardware segment which is not covered by any other manufacturer.
I have a 3090 in my desktop but I have an Arc card on my server for Moonlight/Sunshine streaming, as well as Plex transcoding. It's the cheapest card to have AV1 encoding built in.
I also keep seeing them increase performance significantly with every driver update, which is pretty cool.
I'm interested in your use of the Arc card for media transcoding. What one did you get and how would you say it compares to a GTX 960? The one in my server died and I stuck a spare 2060 in there a while back and am looking to downgrade to something sensible.
Most of my media is 1080p x264 with some 4k HEVC (and growing) if that helps.
I had a 1050 Ti in the machine and I bought an A770. It's overpowered for transcoding, but I do remotely stream games at 1080p, which is a good workout for the card.
For simple transcoding I would buy the A310, since it's the cheapest card with AV1. I'm running an old 6th Gen i7-6600k and had to mess with the UEFI to enable ReBAR, but I used this tool to do it.
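For the transcoding itself, a typical route is ffmpeg with Intel Quick Sync (QSV). The sketch below only assembles the command; actually running it requires an ffmpeg build with QSV support and an Arc GPU, and the file paths and quality value are placeholder assumptions:

```python
# Sketch of an ffmpeg invocation for the Arc setup described above:
# re-encoding an H.264 source to AV1 with Intel Quick Sync (QSV).
# Paths and quality are placeholders; the command is assembled, not run.

def av1_qsv_cmd(src, dst, quality=28):
    return [
        "ffmpeg",
        "-hwaccel", "qsv",                # decode on the GPU as well
        "-i", src,
        "-c:v", "av1_qsv",                # Arc's hardware AV1 encoder
        "-global_quality", str(quality),  # lower value = higher quality
        "-c:a", "copy",                   # leave the audio untouched
        dst,
    ]

cmd = av1_qsv_cmd("movie_x264.mkv", "movie_av1.mkv")
print(" ".join(cmd))
# To actually run it on a machine with the hardware:
# import subprocess; subprocess.run(cmd, check=True)
```

For the 4K HEVC library you'd swap the input accordingly; the encoder side stays the same.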
Too many edge case issues, especially for someone who plays a lot of indie titles and uses Linux. Also, they kinda just went into the low performance market. If they'd launch something for the upper midrange I'd be more interested (assuming they improved on a lot of fronts of course).
I'd get an Arc when upgrading, but I have a 6800 XT, so it wouldn't be an upgrade for me.
Didn't even know Intel made dedicated GPUs. The integrated ones have always been positively terrible.
The new dedicated cards are actually very good. They sell them at a competitive price because they are not powerhouses, but they get the job done. If you're targeting 1080p at your top end, it's almost a no-brainer to go with an Arc card. If you're pushing a higher resolution, it's probably better to go with another manufacturer, unless you're fine with higher resolutions and lower framerates.
I use Linux, so not Nvidia. AMD is great. Good power for the money.
Supposedly Nvidia has become a lot better on Linux lately. They finally dropped their weird framebuffer API or whatever (the one that was the reason for horrible Wayland compatibility and also caused a heated Linus Torvalds moment), and I think they even made their Linux drivers open source.
They do support their driver, yes, but it will never be as good as long as it's proprietary. The open Nvidia kernel module isn't ready yet and is still backed by proprietary blobs.
Historically speaking, Nvidia was always the best for Linux. Nvidia's success with Linux traces back to around 2004, with state-of-the-art 3D capabilities (albeit for arcade machines). At that time, ATI Radeon 3D support on Linux was sub-par.
The problem with Linux+Nvidia is that it was never "the Linux way"... it was always "the Nvidia way".
The Linux way is flexibility: you can use whatever kind of Linux you want, and the drivers work straight out of the box (which basically requires open source drivers). Nvidia instead always pushed a fixed binary blob that required a specific kernel and a rigid environment.
AMD's modern Linux support is mostly "the Linux way", which is why the Linux community loves AMD more than Nvidia.
Given hardware parity between Nvidia and AMD, the Linux crowd will always prefer AMD, because AMD means you can use any Linux distro and still have an uncompromised gaming experience.
They’re all pretty good. Even the Intel cards are pretty good now. I guess, what’s most important to you? If you want maximum compatibility with games, go for Nvidia. If you want better price to performance, go with AMD or Intel. Although, if I were you, I’d wait until AMD and Intel’s next gen. Both are coming (relatively) soon (probably before the end of the year), and will probably be a lot better than what’s out now.
One caveat, if you use or plan to use Linux, Nvidia can present some difficulties, so avoid them.
Actually two caveats, if you plan to use hardware encoding, like you’ll be streaming on Twitch while you play games, avoid AMD. Their hardware encoding is pretty trash. Both Nvidia and Intel are much better.
My current lineup (I know I have a lot of machines, but my wife and I both play games, and I do AI workloads as well):
- RTX 3090 (mostly for AI)
- Radeon RX 6700 XT (great card)
- Arc A380 (for transcoding, but I’ve gamed on it, and it’s great)
- Radeon RX 6600 (my main card, just because it’s in my living room HTPC, running ChimeraOS)
The amount of self-hosted AI integrations is only going to grow as well. I have a 3090 in a closet PC and I use it for everything from image generation to VSCode/Neovim code completion and code chat. One of the things I'd really like to see in the next few years is a wide variety of local AI driven self hosted Alexa replacements.
Oh, I would love that. Self hosted voice assistant is like the panacea. Mycroft was awesome at first, but it never really panned out.
For the hardware encoding side it used to be true before OBS introduced better AMD encoder support. I have a 6800XT and it works just fine for streaming casually, though I agree that if you stream professionally then Nvidia is the better option.
How much VRAM does your AI card have? The one I have only has 6GB, and I've found that quite limiting.
The 3090 has 24GB. Yeah, 6GB is too small for a lot of things. Even 24GB is too small for some of the models I’ve tried.
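A quick way to see why those VRAM numbers bite: weight memory is roughly parameters times bytes per parameter. This is only a back-of-the-envelope lower bound (it ignores activations, KV cache, and runtime overhead), and the parameter counts are just common example sizes:

```python
# Back-of-the-envelope VRAM estimate for model weights alone:
# parameters x bytes per parameter. Real usage is higher (activations,
# KV cache, framework overhead), so treat these as lower bounds.

def weights_gb(params_billion, bytes_per_param):
    """Approximate weight size in decimal gigabytes."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for params in (7, 13, 70):
    fp16 = weights_gb(params, 2)    # 16-bit weights
    q4 = weights_gb(params, 0.5)    # ~4-bit quantized
    print(f"{params}B params: fp16 ~{fp16:g} GB, 4-bit ~{q4:g} GB")
```

By this math a 7B model at fp16 already wants ~14 GB, which is why 6GB cards feel cramped and even 24GB runs out fast with larger models.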
I've been wondering what would be the smartest choice to upgrade my 1660 Super. CPU is a Ryzen 5 3200 and I've got 16GB of RAM. Dunno if just upgrading the GPU would make a huge difference.
On Windows, Nvidia without thinking twice. On Linux it depends, on RDNA 4 and on the next release of Nvidia's drivers, but probably still Nvidia.
Unfortunately, despite how much I would rather buy from someone else, AMD's products are just inferior, especially software.
Examples of AMD being worse:
- AMD's implementation of OpenGL is a joke; the open source implementation used on Linux is several times faster and was made for free by volunteers, without internal knowledge
- AMD will never run PhysX, which gets less relevant every day, but if the AMD of the past had proposed an alternative, we would have a standardized physics extension in DirectX by now, like with DLSS
- AMD's ray accelerators are "incomplete" compared to Nvidia's RT cores, which is why ray tracing is better on Nvidia, and why they are changing how they work with RDNA 4
- GCN was terrible and very different from Nvidia's architecture, so it was hard to optimize for both. RDNA is more similar, but now AMD has a plethora of old junk to keep compatible with RDNA
- Nvidia has been constantly investing in new software technologies (nowadays mainly AI); AMD didn't, and is now always playing catch-up
AMD also has its wins, for example:
- They often make their stuff open source, mainly because it's convenient for their underdog position
- They have a pretty good software stack on Linux (much better than on Windows), partly because it's not entirely done by them
- Nvidia has been a bad-faith actor in the Linux space for many years, even if it's in its redemption arc now
- AMD's modern GPUs seem to be catching up in compute performance
- AMD is less stingy with VRAM, mainly because they are less at risk of competing with their own enterprise lineup
- Nvidia's current prices are stupid
I would still prefer Nvidia right now, but maybe it's gonna change with the next releases.
P.S. I have used a GTX 1060, an RX 480, and a Vega 56.
but if AMD from the past had proposed an alternative we would have a standardized physics extension in DirectX by now, like with dlss
Why the fuck put this on AMD when it was Nvidia who did their usual proprietary bullshit? "AMD is worse than Nvidia because they didn't provide us with a better alternative!" ???
For your points against:
The OpenGL UMD was completely re-engineered. This premiered with the 22.7.1 release, so nearly two years ago. AMD now have the most performant, highest quality OpenGL UMD in the industry, which is particularly relevant for workstation use cases (where OpenGL remains the backbone of WS graphics).
PhysX is proprietary, I don't know what can be done about that, but your point is valid here, though given the rise of other physics engines at play, I don't really know if this is a big hit? Do we really want further consolidation in game systems?
AMD's approach to ray acceleration has always favoured die-area efficiency up until now, though I can totally understand your disappointment with the performance in that area. That said, the moment I really care about RTRT in gaming is when it's no longer contingent on the raster model. Reflections, shadows, and GI are nice and all, but we're still not really there yet.
I don't know how GCN was such a terrible arch, given that it was the basis of an entire console generation. An argument could be made that its GPGPU design may have hindered it at gaming on desktops, but it matured extremely well over time with driver upgrades, despite its price + perf targets at release. Aside from that (and related to point 1), RDNA UMDs are all PAL-based. I'm not sure what you're alluding to with this? Could you please elaborate?
Your final remark is untrue (FMF, AL+, gfx feature interop, mic ANS, a plethora of GPUOpen technologies) but I will forgive you not keeping up with a vendor's tech if you don't actively use their products.
Found the Nvidia fanboy
I'm literally using a full AMD PC right now. I don't like Nvidia as much as the next person. I think they use terrible monopolistic practices, and if the competition were on par I would not buy Nvidia. But they aren't.
The guy asked what's better for gaming, and you went on a rant about Nvidia being better because of AI workloads and other software.
AMD makes the better cards for gaming. Nvidia may have better ray tracing, but most games don't even use ray tracing, so you'll spend an extra 30% to get the same gaming performance as an AMD card that actually has enough VRAM to play games at ultra settings and higher resolutions.
Well, if you are not gonna use Nvidia's extra stuff, buy an AMD, by all means.
But what you say is disingenuous. "AI and other software" is not entirely unrelated to gaming. Things like HairWorks, PhysX, and most of GameWorks in general run on CUDA. And on the AI side (which I don't care about that much) there is DLSS, and they are working on AI-enhanced rendering.
Most games don't use those technologies, but some do, and you will miss out on those.
I had a 1060 and upgraded to a 3080 a while ago. For my next upgrade I'll most likely go AMD, unless Nvidia can convince me to go with them again.
Brand-wise I've had great reliability with Zotac. They're seen as a budget brand but I've been using their GPUs for years without issue.
Nothing at the moment; I'd wait for the Nvidia RTX 5000 and AMD RX 8000 cards. They should release later this year.