this post was submitted on 17 Feb 2025
411 points (95.2% liked)

[–] RamenJunkie@midwest.social 21 points 2 days ago (1 children)

I am not a professional coder, just a hobbyist, but I am increasingly digging into cybersecurity concepts.

And even as an "amature Cybersecurity" person, everything about what you describe, and LLM coders, terrifies me, because that shit is never going to have any proper security methodology implemented.

[–] phlegmy@sh.itjust.works 6 points 2 days ago

On the bright side, you might be able to cash in on some bug bounties.

[–] filister@lemmy.world 47 points 2 days ago (2 children)

The problem is not only the coding but the thinking. The AI revolution will give birth to a lot more people without critical thinking and problem-solving capabilities.

[–] OrekiWoof@lemmy.ml 11 points 2 days ago (1 children)

Apart from that, learning programming went from something one does out of a calling to something one does to get a job. The percentage of programmers who actually like coding is going down, so on average they're going to get worse.

[–] mr_jaaay@lemmy.ml 1 points 1 day ago

This is true for all of IT. I love IT - I've been into computers for 30+ years. I run a small homelab; it'll always be a hobby as well as a career for me. But yeah, for more and more people it's just a job.

[–] commander@lemmings.world 11 points 2 days ago

That's the point.

Along with censorship.

[–] phoenixz@lemmy.ca 16 points 2 days ago (1 children)

To be fair, most never could. I've been hiring junior devs for decades now, and all the ones straight out of university barely had any coding skills.

It's why I stopped looking at where they studied; I always check their hobbies first. If one of the hobbies is something nerdy and useless, like tinkering with a Raspberry Pi or something, that indicates to me it's someone who loves coding and is probably already reasonably good at it.

[–] socsa@piefed.social 16 points 2 days ago* (last edited 2 days ago) (1 children)

This isn't a new thing. Dilution of "programmer" and "computer" education has been going on for a long time. Everyone with an IT certificate is an engineer these days.

For millennials, a "dev" was pretty much anyone with reasonable intelligence who wanted to write code - it is actually very easy to learn the basics and fake your way into it with no formal education. Now we are even moving on from that to where a "dev" is anyone who can use an AI. "Prompt Engineering."

[–] frezik@midwest.social 14 points 2 days ago

"Prompt Engineer" makes a little vomit appear in the back of my mouth.

[–] zerofk@lemm.ee 96 points 3 days ago* (last edited 3 days ago) (4 children)

As someone who has interviewed candidates for developer jobs for over a decade: this sounds like “in my day everything was better”.

Yes, there are plenty of candidates who can’t explain the piece of code they copied from Copilot. But guess what? A few years ago there were plenty of candidates who couldn’t explain the code they copied from StackOverflow. And before that, there were those who failed at the basic programming test we gave them.

We don’t hire those people. We hire the ones who use the tools at their disposal and also show they understand what they’re doing. The tools change, the requirements do not.

[–] uranibaba@lemmy.world 18 points 2 days ago (3 children)

I think LLMs just made it easier for people who want the answer without the learning. Reading all those posts all over the internet required you to understand what you pasted together if you wanted it to work (not always, but the bar was higher). With ChatGPT, you can just throw errors at it until you have the code you want.

While the requirements never changed, the tools sure did, and they made it a lot easier to not understand.

[–] drathvedro@lemm.ee 19 points 2 days ago (1 children)

This post is literally an ad for AI tools.

No, thanks. Call me when they actually get good. As it stands, they only offer marginally better autocomplete.

I should probably start collecting dumb AI suggestions and gaslighting answers to show the next time I encounter this topic...

[–] finitebanjo@lemmy.world 7 points 2 days ago (1 children)

It's actually complaining about AI, tho.

[–] drathvedro@lemm.ee 7 points 2 days ago (3 children)

There are at least four links leading to AI tools on this page. Why would you link to something you're complaining about?

[–] froggycar360@slrpnk.net 25 points 2 days ago (1 children)

I could barely code when I landed my job, and now I'm a senior dev. It's like saying a plumber's apprentice can't plumb - you learn on the job.

[–] FlyingSquid@lemmy.world 11 points 2 days ago (2 children)

You're not learning anything if Copilot is doing it for you. That's the point.

[–] Mr_Dr_Oink@lemmy.world 5 points 2 days ago* (last edited 2 days ago)

100% agree.

I don't think there's no place for AI as an aid to help you find the solution, but I don't think it's going to help you learn if you just ask it for the answers.

For example, yesterday I was trying to find out why a policy map on a Cisco switch wasn't re-activating after my RADIUS server came back up. Instead of throwing my map at the AI and asking what's wrong, I asked it for details about how a policy map is activated, what mechanism the switch uses to determine the status of the RADIUS server, and how a policy map can leverage that to kick into gear again.

Ultimately, the AI didn't have the answer, but it put me on the right track, and I believe I solved the issue. It seems the switch didn't count me adding the RADIUS server to the running config as a server coming back alive, but if I put in a fake server and then altered its IP to the real server's, the switch saw that as the server coming back alive and authentication started again.

In fact, some of the info it gave me along the way was wrong, like when it tried to give me CLI commands that I already knew wouldn't work because I was using the newer C3PL AAA commands; it was mixing them up with the legacy commands and combining them together. Even after I told it that was a made-up command and why it wouldn't work, it still tried to give me the same command again later.

So, I don't think it's a good tool for producing actual work, but it can be a good tool to help us learn things if it's used that way: to ask "why" and "how" instead of "what."
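
For anyone chasing the same behavior, here is roughly the mechanism involved, as a minimal sketch. This assumes an IOS-XE switch using IBNS 2.0 / C3PL; the server name, IP, key, probe user, and policy-map name are placeholders, and syntax can differ by platform and version:

```
! Declare the RADIUS server and actively probe it, so the switch
! notices on its own when a dead server comes back to life.
radius server RAD-1
 address ipv4 192.0.2.10 auth-port 1812 acct-port 1813
 key PLACEHOLDER-KEY
 automate-tester username probe-user
!
! When to declare a server dead, and how long it stays marked dead.
radius-server dead-criteria time 5 tries 3
radius-server deadtime 10
!
! C3PL policy fragment: when AAA becomes reachable again,
! resume authentication on affected sessions.
policy-map type control subscriber PMAP_DOT1X
 event aaa-available match-all
  10 class always do-until-failure
   10 resume reauthentication
```

The `automate-tester` probe is the interesting bit here: without periodic test authentications, the switch only re-evaluates a server's status when real authentication traffic hits it, which would be consistent with a config edit alone not counting as the server "coming back alive."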

[–] pls@lemmy.plaureano.nohost.me 27 points 2 days ago

Of course they don't. Hiring junior devs for their hard skills is a dumb proposition. Hire for their soft skills, intellectual curiosity, and willingness to work hard and learn. There is no substitute for good training and experience.

[–] endeavor@sopuli.xyz 8 points 2 days ago

I'm in uni learning to code right now, but since I'm a boomer I only spin up oligarch bots every once in a while to check on an issue I'd otherwise have to ask the teacher about. It's far more important for me to understand the fundies than it is to get a working program. But that's only because I've gotten good at many other skills and realize that fundies are fundamental for a reason.

[–] corsicanguppy@lemmy.ca 130 points 3 days ago* (last edited 3 days ago) (7 children)

I've said it before, but this is a 20-year-old problem.

After Y2K, all those shops that over-porked on devs began shedding the most pricey ones; it was worse in 'at-will' states.

Who were those devs? Mentors. They shipped less code, closed fewer tickets, and cost more, but their value wasn't in tickets and code: it was in investing in the next generation. And they had to go, because #numbersGoUp

And they left. And the first gen of devs with no mentorship joined and started their careers. No idea about edge cases, missing middles or memory management. No lint, no warnings, build and ship and fix the bugs as they come.

And then another generation. And these were the true 'lost boys' of dev. C is dumb, C++ is dumb, perl is dumb, it's all old, supply chain exploits don't exist, I made it go so I'm done, fuck support, look at my numbers. It's all low-attention span, baling wire and trophies because #numbersGoUp.

And let's be fair: they're good at this game, the new way of working where it's a fast finish, a head-pat, and someone else's problem. That's what the companies want, and that's what they built.

They say now that relying on AI means never really exercising critical thought and problem-solving, and I see it when I'm forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won't learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember. But we're seeing people do that with actual work, with Go and Rust code, and yeah, no concept of why we want to check for completeness, let alone a concept of how.

What do we do, though?

If we're in a position to do so, FAIL some code reviews on corner cases. Fail some reviews on ISO27002 and supply chain and role sep. Fail some deployments when they're using dev tools in prod. And use them all as teachable moments. Honestly, some of them got no mentorship in college if they went, and no mentorship in their first ten years as a pro. It's going to be hard getting over themselves, but the sooner they realise they still have a bunch to learn, the better we can rebuild coders. The hardest part will be weaning them off GPT for the cheats. I don't have a solution for this.

One day these new devs will proudly install a patch in the RTOS flashed into your heart monitor and that annoying beep will go away. Sleep tight.

[–] SpicyLizards@reddthat.com 41 points 3 days ago (1 children)

I have seen this too much. My current gripe isn't fresh devs, as long as they are teachable and care.

My main pain over the last several years has been the bulk of 'give-no-shit' perms/contractors who don't want to think or try when they can avoid it.

They run a web of lies until it is no longer sustainable (or, for contractors, until the project is done), and then again it's someone else's problem.

There are plenty of 10/20-year-plus devs who don't know what they're doing and don't care whose problem it will be, as long as it isn't theirs.

I'm sick of writing coding-101 standards for 'experts' who cost 1k+ a day. Even more sick of PR feedback where it's a battle to get said 'experts' to do things in a maintainable manner.

[–] MagicShel@lemmy.zip 26 points 3 days ago* (last edited 3 days ago) (2 children)

No one wants mentors. The way to move up in IT is to switch jobs every 24 months. So when you pay mentors huge salaries to turn juniors from velocity drags into velocity boosters, you do it knowing those juniors are going to leave and take all that investment with them for a higher paycheck.

I don't say this is right, but that's the reality from the paycheck side of things, and I think there needs to be radical change on both sides. Like a trade union or something: the union takes responsibility for certifying skills and suitability, companies can be more confident in hires, juniors have mentors to learn from, mentors ensure juniors have the aptitude and intellectual curiosity necessary to do the job well, and I guess pay becomes more skill- and experience-based, so developers don't have to hop jobs to get paid what they're worth.

Fixed typos due to my iPhone hating me.

[–] Phoenicianpirate@lemm.ee 9 points 2 days ago

I could have been a junior dev who could code. I learned to do it before ChatGPT. I just never got the job.

[–] spark947@lemm.ee 105 points 3 days ago (8 children)

What are you guys working on where ChatGPT can figure it out? Honestly, I haven't been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.

[–] 0x0@programming.dev 45 points 3 days ago (1 children)

I'm forced to use Copilot at work, and as far as code completion goes, it gets it right maybe 10-15% of the time... the rest of the time it just suggests random (but credible-looking) noise or hallucinates variables and shit.

[–] expr@programming.dev 12 points 2 days ago (1 children)

Forced to use Copilot? Wtf?

I would quit, immediately.

[–] 0x0@programming.dev 4 points 2 days ago (2 children)

I would quit, immediately.

Pay my bills. Thanks.
I've been dusting off the CV for multiple other reasons.

[–] Thorry84@feddit.nl 32 points 3 days ago (3 children)

Agreed. I wanted to test a new config on my router yesterday, which is configured using scripts. So I thought it would be a good idea to have ChatGPT figure it out for me instead of spending 3 hours reading documentation and trying tutorials. It was a test scenario, so I thought it might do well.

It did not do well at all. The scripts were mostly correct but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax would be totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I'm on 7). It also makes mistakes, and when I point one out it says, "Oh, you are totally right, I made a mistake," then goes on to explain what mistake it made and outputs new code. However, more often than not, the new code contains the exact same mistake. This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.

In the end I gave up on ChatGPT and searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that, and it almost worked right away. A couple of minutes of tweaking and testing and I had it working.

I'm afraid of a future where forums and such don't exist and sources like Reddit get fucked and nuked. In an AI-driven world the incentive to create new original content is way lower. So when the AI doesn't know the answer, you're just hooped and have to re-invent the wheel yourself. In the long run this will destroy productivity, not deliver the gains people are hoping for at the moment.

[–] Hoimo@ani.social 1 points 1 day ago

This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.

The one example could be flawless, but the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it's going to degrade the output.

What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge; you don't need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.

[–] theterrasque@infosec.pub 27 points 3 days ago* (last edited 3 days ago) (2 children)

When I had to get up to speed on a new language, it was very helpful. It's also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for making Ansible tasks. That said, I've been programming for ~30 years and could have done those things myself if needed, but it would have taken some time (a lot of it looking up documentation and writing boilerplate code).

It's also nice for writing C# unit tests.

However, the times I've been stuck on my main languages, it's been utterly useless.

[–] MagicShel@lemmy.zip 29 points 3 days ago

ChatGPT is extremely useful if you already know what you're doing. It's garbage if you're relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.

It's also probably useful for looking like you kinda know what you're doing as a junior in a new project. I've seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.

[–] barsoap@lemm.ee 76 points 3 days ago (1 children)

Not in any way a new phenomenon. There's a reason FizzBuzz was invented: there's been a steady stream of CS graduates who can't code their way out of a wet paper bag ever since the profession hit the mainstream.

Actually fucking interview your candidates, especially if you're sourcing them from a country with for-profit education and/or a rote-learning culture, both of which suck when it comes to failing people who didn't learn anything. No BS coding tests; go for "explain this code to me" kind of stuff. Worst case, they can understand code but suck at producing it, and that's still prime QA material right there.
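
For anyone who hasn't run into it: FizzBuzz is the canonical trivial screening exercise, famous precisely because a surprising share of applicants can't produce it. One complete C++ version, purely for illustration:

```cpp
#include <iostream>

int main() {
    // Print 1..100, substituting Fizz / Buzz / FizzBuzz
    // for multiples of 3, 5, and both.
    for (int i = 1; i <= 100; ++i) {
        if (i % 15 == 0)     std::cout << "FizzBuzz\n";
        else if (i % 3 == 0) std::cout << "Fizz\n";
        else if (i % 5 == 0) std::cout << "Buzz\n";
        else                 std::cout << i << '\n';
    }
}
```

The point of the exercise isn't the code; it's that anyone who has actually programmed can write it in a couple of minutes, which makes it a cheap filter before the "explain this code to me" round.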

[–] sugar_in_your_tea@sh.itjust.works 31 points 3 days ago (8 children)

We do two "code challenges":

  1. Very simple; many finish it in 5 minutes. This just weeds out the incompetent applicants, and 90% of the code is already written (i.e. it simulates working in an existing codebase)
  2. Ambiguous requirements, where the point is to ask questions; we actually have different branches depending on the assumptions they make (to challenge those assumptions), i.e. it simulates building a solution with the product team

The first is in the first round, the second in the technical interview. Neither is difficult, and we provide any equations they'll need.

It's much more important that they can reason about requirements than that they can code something quickly, because life won't give you firm requirements, and we don't want a ton of back-and-forth with the product team if we can avoid it, so we need to catch most of that at the start.

In short, we're looking for actual software engineers, not code monkeys.

[–] TsarVul@lemmy.world 73 points 3 days ago (18 children)

I'm a little defeatist about it. I saw with my own 3 eyes how a junior asked ChatGPT how to insert something into an std::unordered_map. I tell them about cppreference. The little shit tells me "Sorry unc, ChatGPT is objectively more efficient". I almost blew a fucking gasket, mainly cuz I'm not that god damn old. I don't care how much you try to convince me that LLMs are efficient, there is no shot they are more efficient than opening a static page with all the info you would ever need. Not even considering energy efficiency. Utility aside, the damage we have dealt to developing minds is irreversible. We have convinced them that thought is optional. This is gonna bite us in the ass. Hard.
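
For the record, the static page in question covers this in a few lines. A quick sketch of the common insertion options (standard C++17, nothing hypothetical here):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, int> ages;

    ages.insert({"alice", 30});   // no-op if the key already exists
    ages["bob"] = 25;             // inserts a default value, then assigns
    ages.emplace("carol", 41);    // constructs in place; no-op if key exists
    ages.insert_or_assign("alice", 31); // C++17: inserts, or assigns if present

    std::cout << ages["alice"] << '\n'; // prints 31
}
```

Four ways to "insert", each answering a slightly different question, all on one page, and none of them requiring a GPU.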

[–] 7fb2adfb45bafcc01c80@lemmy.world 33 points 3 days ago (1 children)

To me, this feels like a problem perpetuated by management. I see it on the system administration side as well: they don't care whether people understand why a tool works; they just want someone who can run it. If there's no free thought, the people are interchangeable and easily replaced.

I often see it farmed out to vendors when actual thought is required, and it's maddening.

[–] Matriks404@lemmy.world 8 points 2 days ago

No wonder open source software becomes more efficient than proprietary software.

[–] nexguy@lemmy.world 42 points 3 days ago (9 children)

Stack Overflow and Google were once the "AI" of the previous generation. "These kids can't code, they just copy what others have done."

[–] FarceOfWill@infosec.pub 45 points 3 days ago* (last edited 3 days ago) (3 children)

Junior devs could never code, yes, including us.

[–] patatahooligan@lemmy.world 29 points 3 days ago (1 children)

Agreed. A few years back, the devs looking for quick fixes would go over to Stack Overflow and just copy answers without reading the explanations. This caused the same type of problems OP is talking about. That said, the ease of AI might be making things even worse.

[–] avidamoeba@lemmy.ca 54 points 3 days ago* (last edited 3 days ago) (1 children)

Unless AI dramatically improves from where LLMs are today (in ways that it so far hasn't), as a worker I'm looking forward to the drastic shortage of experienced senior devs in a few years' time.

[–] ryven@lemmy.dbzer0.com 41 points 3 days ago

Recently my friend was trying to get me to apply for a junior dev position. "I don't have the right skills," I said. "The biggest project I ever coded was a calculator for my Java final, in college, a decade and a half ago."

It did not occur to me that showing up without the skills and using an LLM to half-ass it was an option!

[–] Evotech@lemmy.world 19 points 3 days ago (1 children)
[–] invertedspear@lemm.ee 21 points 3 days ago (3 children)

Exactly. The jr dev who can write anything useful is a rare gem. Boot camps cranking out jr devs by the dozen every couple of months didn't help the issue. Talent needs cultivation, and since every tech company has been cutting back lately, they stopped cultivating and started sniping talent from each other; not hard given the amount of layoffs lately. So now we have jr devs either unable to find a place that will refine them, or getting hired by people who just want to save money and don't know that you need a senior or two to wrangle them. Then ChatGPT comes along and gives the illusion of sr dev advice, telling them how to write the wrong thing better, with no one to teach them which tool is the right one for the job.

Our industry is in kind of a fucked state and will be for a while. Get good at cleaning up the messes that will be left behind and that will keep you fed for the next decade.
