Founder of a company that makes most of its revenue selling GPUs for machine learning says machine learning is good.
I worry for the future generations that can't debug because they don't know how to program and just use AI.
Don't worry, they'll have AI-animated stick figures telling them what to do instead...
As a developer building on top of LLMs, my advice is to learn software architecture. There's a shit ton of work that needs to be done to get this unpredictable, non-deterministic tech to work safely and accurately. This is like saying "get out of tech" right before the Internet boom.

The hardest part of programming isn't writing low-level functions; it's architecting complex systems while keeping them robust, maintainable, and expandable. By the time an AI can do that, all office jobs will be obsolete. AIs will be able to replace CEOs before they can replace system architects.

Programmers won't go away; they'll just have less busywork and will instead need to work at a higher level. But the complexity of those higher-level requirements is about to explode, and we will need LLMs to handle the simpler tasks under our oversight, to make sure everything gets integrated correctly.
I also recommend still learning the fundamentals, just maybe not as deeply as you used to need to. Knowing how things work under the hood still helps immensely with debugging and with creating better, more efficient architectures, even at a high level.
I will say, I do know developers who specialized in algorithms and are feeling pretty lost right now. They're perfectly capable of adapting their skills to the new paradigm; their issue is more the personal one of deciding what they want to do, since algorithms were what they were passionate about.
Having used ChatGPT to try to find solutions to software development challenges, I don't think programmers will be at much risk from AI for at least a decade.
Generative AI is great at many things, including assistance with basic software development tasks (like spinning up blueprints for unit tests). And it can be helpful filling in code gaps when provided with a very specific prompt... sometimes. But it is not great at figuring out the nuances of even mildly complex business logic.
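For example, the kind of "blueprint" it spins up looks something like the sketch below. The `pricing` module, `calculate_discount` function, and its business rules are all made up for illustration; the point is that the scaffolding is useful, but choosing the right thresholds and edge cases is exactly the business-logic part it struggles with.

```python
# Hypothetical example of an LLM-generated unit-test blueprint.
# The pricing module and calculate_discount() are illustrative, not real.
import pytest

from pricing import calculate_discount  # hypothetical module under test


def test_no_discount_below_threshold():
    # Orders under the minimum get no discount.
    assert calculate_discount(order_total=50.0) == 0.0


def test_standard_discount_applied():
    # Orders at or above the threshold get the standard rate.
    assert calculate_discount(order_total=200.0) == pytest.approx(20.0)


def test_negative_total_raises():
    # Invalid input should be rejected, not silently discounted.
    with pytest.raises(ValueError):
        calculate_discount(order_total=-10.0)
```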
This.
I got a GitHub Copilot subscription at work and it's useful for suggesting code in small parts, but I would never let it decide which design pattern to use to tackle the problem we're solving. Once I know the solution, I can use AI and verify its output before it goes into the code.
This overglorified snake oil salesman is scared.
Anyone who understands how these models work can see plain as day that we've reached peak LLM. It's enshittifying on itself, and we're seeing its decline in real time in the quality of generated content. Don't believe me? Go follow some senior engineers.
Any recommendations on whom to follow? On Mastodon?
There is a reason they didn't offer specific examples. LLMs can still scale by size, logical optimization, training optimization, and, more importantly, integration. The current implementation is reaching its limits, but the pace of growth is also very fast. AI reduces workload, but it's likely going to require designers and validators for a long time.
Why do you think we've reached peak LLM? There are so many areas with room for improvement
Lmao, do the opposite of whatever this guy says; he just doesn't want his $2 trillion stock-market bubble to burst.
The day programming is fully automated, other jobs will be too.
Maybe it'd make more sense if he suggested becoming a blue-collar worker instead.
Humans can probably still look forward to back-breaking careers of manual labor that consist of complex, varied movements!
Well. That's stupid.
Large language models are amazingly useful coding tools. They help developers write code more quickly.
They are nowhere near being able to actually replace developers. They can't know when their code doesn't make sense (which is frequently). They can't know where to integrate new code into an existing application. They can't debug themselves.
Try to replace developers with an MBA using a large language model AI, and once the MBA fails, you'll be hiring developers again - if your business still exists.
Every few years, something comes along that makes bean counters who are desperate to cut costs, and scammers who are desperate for a few bucks, declare that programming is over. Code will self-write! No-code editors will replace developers! LLMs can do it all!
No. No, they can't. They're just another tool in the developer toolbox.
I've been a developer for over 20 years, and when I see AutoGen generate code, decide to execute that code, and then fix errors by deciding to install dependencies, I can tell you I'm concerned. LLMs are a tool, but a tool that might evolve to replace us. I expect a lot of software roles in ten years to look more like an MBA orchestrating AI agents to complete a task. Coding skills will still matter, but not as much as soft skills will.
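For anyone who hasn't seen that loop, here's a rough sketch of the pattern I mean, assuming the classic pyautogen AssistantAgent/UserProxyAgent API; the library's interface has changed across versions, so treat the details as illustrative rather than exact.

```python
# Minimal sketch of an AutoGen write-run-fix loop, assuming the
# classic pyautogen API (the interface has changed across versions).
from autogen import AssistantAgent, UserProxyAgent

# The assistant writes code; model/config values here are placeholders.
assistant = AssistantAgent(
    name="coder",
    llm_config={"model": "gpt-4"},
)

# The user proxy executes whatever code the assistant produces and
# feeds errors back, which is how the loop ends up installing
# dependencies and retrying until the script runs.
user_proxy = UserProxyAgent(
    name="runner",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

user_proxy.initiate_chat(
    assistant,
    message="Fetch the NVDA stock price and plot the last 30 days.",
)
```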
Remember when everyone was predicting that we were a couple of years away from fully self-driving cars? We're now a full decade past those "couple of years," and I don't see any fully self-driving cars on the road taking over from human drivers.
We're now in the honeymoon phase of AI, and I can only assume there will be a huge downward correction in some AI stocks that are overvalued and overhyped, like NVIDIA. They're like crypto stocks: on the moon now, back to Earth tomorrow.
Two decades. DARPA Grand Challenge was in 2004.
Yeah, everybody always forgets the hype cycle and the peak of inflated expectations.
Waymo exists and is now moving passengers around in three major cities. It's not taking over yet, but it's here and growing. The timeframe didn't meet the hype, but the technology is there.
Yes, the technology is there, but it's not Level 5; it's Level 3.5-4 at best.
The point with a fully self-driving car is that complexity increases exponentially once you reach 98-99%. The last 1-2% is extremely difficult to crack, because there are so many corner cases and situations you can't really predict, and you need to make a car that drives more safely than humans if you really want to commercialize the service.
Same with generative AI: the leap at first was huge, but the improvement from GPT-3.5 to 4, or even 3 to 4, wasn't as great. And I can only assume that from now on, achieving progress will get exponentially harder; it will require different, yet-unknown algorithms and models, and advances will be a lot more modest.
And I don't know about you, but ChatGPT isn't 100% correct, especially on niche questions or more complex queries. It often hallucinates, and sometimes those hallucinations sound extremely plausible.
It's just as crazy as saying "We don't need math, because every problem can be described using human language".
In other words, that might be true only as long as your problem is simple enough to be fully described in human language.
You want to solve a real problem? It's way more complex, with so many moving parts that you can't just throw an LLM at it, because it requires an actual understanding of the problem.
This seems as wise as Bill Gates supposedly claiming 640K of RAM was all anyone would ever need 🙄
I don't think he's seen the absolute fucking drivel that most developers have been given as software specs before now.
Most people don't even know what they want, let alone be able to describe it. I've often been given a mountain of stuff, only to go back and forth with the customer to figure out what problem they're actually trying to solve, and then do it in like 3 lines of code in a way that doesn't break everything else, or tie a maintenance albatross around my neck for the next ten years.
I don't see how it would be possible to completely replace programmers. The reason we have programming languages instead of using natural language is that the latter has ambiguities. If you start having to describe your software's behaviour in natural language, then one of three things can happen:
- either this new natural programming language has to make assumptions about what you intend, and thus will only be capable of outputting a certain class of software (i.e. you can't actually create anything new),
- or you need to learn a new way of describing things unambiguously, and now you're back to programming but with a new language,
- or you spend forever going back and forth with the generator until it gives you the output you want, and this would take a lot longer to do than just having an experienced programmer write it.
And if you don't know how to code, how do you even know if it gave you the output you want until it fails in production?
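To make the ambiguity concrete, take the hypothetical spec "remove the duplicates from the list." Both of these made-up functions satisfy it, yet they return different results:

```python
# Hypothetical illustration: the spec "remove the duplicates from the
# list" is ambiguous in natural language. Both functions satisfy it,
# yet they produce different outputs.
from collections import Counter


def dedupe_keep_order(items):
    # Reading 1: keep the first occurrence of each value, preserving order.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result


def drop_all_duplicated(items):
    # Reading 2: drop every value that appears more than once.
    counts = Counter(items)
    return [item for item in items if counts[item] == 1]


data = [3, 1, 3, 2, 1]
print(dedupe_keep_order(data))    # [3, 1, 2]
print(drop_all_duplicated(data))  # [2]
```

A human programmer resolves that by asking the customer; a generator has to guess, which is exactly the back-and-forth loop described above.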
I think this is bullshit regarding LLMs, but making and using generative tools more and more high-level and understandable for users is a good thing.
Like various visual programming tools, where you sketch something that works by connecting blocks (like Pure Data for sound); or MATLAB, where I think you can use such block constructors to generate code for the specific controllers involved in the scheme; or LabVIEW.
Or like HyperCard.
Not that anybody should stop learning anything. There's a niche for every way to do things.
I just like that class of programs.
Jensen fucking Huang is a piece of shit and chock-full of it too.
Actually, AI can replace this dick at a fraction of the cost instead of replacing developers. Bring out the guillotine mfs
Your vulgarity and call to violence are quite convincing, sir. Mayhaps you moonlight as a bard?
Doubt
Why would he lie? Other than to pump the company's shares.
I can kind of see his point, but the things he is suggesting instead (biology, chemistry, finance) don't make sense for several reasons.
Besides the obvious question of why AI couldn't just replace those people too (even if it takes an extra few years), there's also the question of how many people can actually develop deep enough expertise to make meaningful contributions there, if we're talking about a massive increase in the number of people going into those fields.
I mean why have a CS degree when an AI subscription costs $30/month?
/s
After using Copilot and other AI coding tools, it's easy to see their limitations; programming is a lot more than just writing "OK" code.
I think the Jensen quote loosely implies we don't need to learn a programming language, but the logic was flimsy. Same goes for the author, as they backtrack a few times. Not a great article, in my opinion.
Jensen's just trying to ride the AI bubble as far as it'll go; next he'll tell you to forget about driving or studying.
It's not really about the coding; it's about the process of solving the problem, and AI is very far from being able to do that. The language you learn to code in is probably not the one you'll use for much of your life. It will just be replaced by whichever AI you use to code.
Yep. The best guy on my team isn't the best coder. He's the best at visualizing the complete solution and seeing pinch points in his head.
Don't tell me what to do. Going to spend more time learning to code from now on, thanks.