this post was submitted on 28 Jan 2025
325 points (96.3% liked)

Technology

[–] Capsicones@lemmy.blahaj.zone 124 points 2 days ago (3 children)

There seems to be some confusion here about what PTX is -- it does not bypass the CUDA platform at all, nor does it diminish NVIDIA's monopoly here. CUDA is a programming environment for NVIDIA GPUs, but many people say "CUDA" to mean the C/C++ extension in CUDA (CUDA can be thought of as a C/C++ dialect here). PTX is NVIDIA-specific and sits at a level similar to LLVM's IR. If anything, DeepSeek is more dependent on NVIDIA than everyone else, since PTX is tightly coupled to their specific GPUs. Things like ZLUDA (an effort to run CUDA code on AMD GPUs) won't work. This is not a feel-good story here.
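To make the "PTX is part of CUDA" point concrete, here's a minimal sketch of mine (not from the thread): a CUDA C++ kernel with a hand-written PTX instruction embedded via `asm()`. It only builds with `nvcc` on an NVIDIA GPU, which is exactly why working at this level deepens, rather than escapes, the NVIDIA dependency.

```cuda
#include <cstdio>

// nvcc compiles this file to PTX before lowering it to machine code (SASS),
// and CUDA C++ lets you drop hand-written PTX directly into a kernel.
__global__ void add_one(int *data) {
    int v = data[threadIdx.x];
    int r;
    // Hand-written PTX for r = v + 1 (syntax per NVIDIA's inline-PTX docs);
    // this instruction has no meaning outside NVIDIA's toolchain.
    asm("add.s32 %0, %1, 1;" : "=r"(r) : "r"(v));
    data[threadIdx.x] = r;
}
```

`nvcc -ptx` on a file like this emits the PTX text itself, which is the layer DeepSeek reportedly hand-tuned.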

[–] pr06lefs@lemmy.ml 34 points 2 days ago

This specific tech is, yes, NVIDIA-dependent. The game changer is that a team was able to beat the big players with less than 10 million dollars. They did it by operating at a low level of NVIDIA's stack, practically at the assembly level. What this team has done, another could do. Building for AMD's GPU ISA would be tough, but not impossible.

[–] eager_eagle@lemmy.world 13 points 2 days ago* (last edited 2 days ago) (1 children)

I don't think anyone is saying CUDA as in the platform, but CUDA as in the C/C++ API built on top of it.

PTX is a close-to-metal ISA that exposes the GPU as a data-parallel computing device and, therefore, allows fine-grained optimizations, such as register allocation and thread/warp-level adjustments, something that CUDA C/C++ and other languages cannot enable.

[–] Capsicones@lemmy.blahaj.zone 24 points 2 days ago (1 children)

Some commenters on this post are clearly not aware of PTX being a part of the CUDA environment. If you know this, you aren't who I'm trying to inform.

[–] eager_eagle@lemmy.world 7 points 2 days ago

aah I see them now

[–] Gsus4@mander.xyz 1 points 2 days ago (1 children)

I thought CUDA was NVIDIA-specific too; for a vendor-neutral version you had to use something like OpenACC.

[–] remotelove@lemmy.ca 3 points 2 days ago (1 children)

CUDA is NVIDIA-proprietary, but NVIDIA may be open to licensing it? I think?

https://www.theregister.com/2021/11/10/nvidia_cuda_silicon/

[–] KingRandomGuy@lemmy.world 2 points 1 day ago

I think the thing that Jensen is getting at is that CUDA is merely a set of APIs. Other hardware manufacturers can re-implement the CUDA APIs if they really want to (especially since, AFAIK, Google v. Oracle ruled that re-implementing an API can be fair use). In fact, AMD's HIP implements many of the same APIs as CUDA, and they ship a tool (HIPIFY) to convert code written for CUDA to HIP instead.

Of course, this does not guarantee that code originally written for CUDA is going to perform well on other accelerators, since it likely was implemented with NVIDIA's compute model in mind.