this post was submitted on 14 Feb 2025
662 points (99.0% liked)

Programmer Humor

20673 readers
1474 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code theres also Programming Horror.

Rules

founded 2 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] CandleTiger@programming.dev 9 points 4 days ago (1 children)

The problem with LLM AIs Ous that you can’t sanitize the inputs safely. There is no difference between the program (initial prompt from the developer) and the data (your form input)

[–] RandomVideos@programming.dev 2 points 4 days ago (1 children)

You can make it more resistant to overwriting instructions at least

[–] CandleTiger@programming.dev 6 points 4 days ago

You can try, but you can’t make it correct. My ideal is to write code once that is bug-free. That’s very difficult, but not fundamentally impossible. Especially in small well-scrutinized areas that are critical for security it is possible with enough care and effort to write code with no security bugs. With LLM AI tools that’s not even theoretically possible, let alone practical. You will just need to be forever updating your prompt to mitigate the free latest most fashionable prompt injections.