this post was submitted on 14 Jan 2025
65 points (97.1% liked)

Asklemmy


I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there's hype. There are ethical concerns, but we'll set ethics aside for this question.

In creative works like writing or art, it feels soulless and poor quality. In programming, at best it's a shortcut that avoids deeper learning; at worst it spits out garbage code that you spend more time debugging than if you had just written it yourself.

When I see AI ads directed towards individuals the selling point is convenience. But I would feel robbed of the human experience using AI in place of human interaction.

So what's the point of it all?

top 50 comments
[–] gravitywell@lemmy.ml 1 points 15 minutes ago

I have a friend with numerous mental issues who texts long, barely comprehensible messages to update me on how they are doing (no paragraphs, stream-of-consciousness style)... so I take those walls of text and tell ChatGPT to summarize them for me, and they go from a mess of words into an update I can actually understand and respond to.

Another use for me is getting quick access to answers that I'd previously have to spend way more time finding, reading and filtering through multiple forum and Stack Exchange posts.

Basically they are good at parsing information and reformatting it in a way that works better for me.

[–] sunzu2@thebrainbin.org 1 points 2 hours ago

Learning how to use Linux

[–] happydoors@lemm.ee 1 points 2 hours ago

I use it in a lot of tiny ways for photo editing. Adobe has a lot of AI integration, and 70% of it is junk right now, but things like increasing sharpness, cleaning up noise, and the heal brush are great with AI generation now.

[–] fmstrat@lemmy.nowsci.com 2 points 3 hours ago

Fake frames. Nvidia double benefits.

Note: 'tis a joke; personally I think DLSS frame generation is cool, as every frame is "fake" anyway.

[–] passiveaggressivesonar@lemmy.world 1 points 3 hours ago (1 children)
[–] dQw4w9WgXcQ@lemm.ee 1 points 1 hour ago

Absolutely this. I've found AI to be a great tool for nitty-gritty questions about a development framework. When googling/DuckDuckGo'ing, your query needs to match the documentation's wording pretty closely to surface anything specific. AI seems to be much better at "understanding" the content and can match a question to the documentation pretty reliably.

For example, I was reading docs up and down on Elasticsearch's website trying to find all possible values for the status field within an aggregated request. Google only led me to general documentation without the specifics. However, a quick, loosely worded question to ChatGPT handed me the correct answer as well as a link to the exact spot in the docs where this was specified.

[–] peppers_ghost@lemmy.ml 1 points 4 hours ago (1 children)

"at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself."

I've not experienced this. Debugging for me is always faster than writing something entirely from scratch.

[–] Archr@lemmy.world 1 points 3 hours ago

100% agree with this.

It is so much faster for me to give the AI the API/library documentation than it would be to figure out how that API works on my own. Is it a perfect, drop-in, finished piece of code? No. But that is not what I ask the AI for. I ask it for a simple example which I can then take, modify, and rework into my own code.

[–] octochamp@lemmy.ml 1 points 4 hours ago

AI saves time. There are few use cases for which AI is qualitatively better, perhaps none at all, but there are a great many use cases for which it is much quicker and even at times more efficient.

I'm sure the efficiency argument is one that could be debated, but it makes sense to me in this way: for production-level outputs AI is rarely good enough, but creates really useful efficiency for rapid, imperfect prototyping. If you have 8 different UX ideas for your app which you'd like to test, then you could rapidly build prototype interfaces with AI. Likely once you've picked the best one you'll rewrite it from scratch to make sure it's robust, but without AI then building the other 7 would use up too many man-hours to make it worthwhile.

I'm sure others will put forward legitimate arguments about how AI will inevitably creep into production environments etc., but logistically speaking, speed and efficiency are undeniably helpful use cases.

[–] SplashJackson@lemmy.ca 2 points 6 hours ago* (last edited 6 hours ago) (1 children)

I wish I could have an AI in my head that would do all the talking for me because socializing is so exhausting

[–] tetris11@lemmy.ml 1 points 3 hours ago* (last edited 3 hours ago) (1 children)

Other people would then have AIs in their heads to deal with the responses.

A perfect world, where nothing is actually being said, but goddamn do we sound smart saying it

[–] communism@lemmy.ml 3 points 7 hours ago

I use LLMs for search when conventional search engines aren't providing relevant results, and then I can fact-check whatever answers the LLMs give me. I especially use them for questions that are easy to verify, like mathematical questions where I can check the validity of the answers, or programming questions where I can read through the solution, check the documentation for any functions used, make sure the output is logical, and make tweaks if the LLM gives a nearly-correct answer. I always ask LLMs to cite their sources so I can check those too.

I also sometimes use LLMs for formatting, like when I copy text off a PDF and the spacing is all funky.

I don't use LLMs for this, but I imagine that they would be a better replacement for previous automated translation tools. Translation seems to be one of the most obvious applications since LLMs are just language pattern recognition at the end of the day. Obviously for anything important they need to be checked by a human, but they would e.g. allow for people to participate in online communities where they don't speak the community's language.

[–] CaptainBlagbird@lemmy.world 6 points 9 hours ago* (last edited 9 hours ago)

I generate D&D characters and NPCs with it, but that's not really a strong argument.

For programming though it's quite handy. Basically a smarter code completion that takes the already written stuff into account. From machine code through assembly up to higher languages, I think it's a logical next step to be able to tell the computer, in human language, what you actually are trying to achieve. That doesn't mean it is taking over while the programmer switches off their brain of course, but it already saved me quite some time.

[–] orcrist@lemm.ee 3 points 8 hours ago

There is no point. There are billions of points, because there are billions of people, and that's the point.

You know that there are hundreds or thousands of reasonable uses of generative AI, whether it's customer support or template generation or brainstorming or the list goes on and on. Obviously you know that. So I'm not sure that you're asking a meaningful question. People are using a tool to solve various problems, but you don't see the point in that?

If your position is that they should use other tools to solve their problems, that's certainly a legitimate view and you could argue for it. But that's not what you wrote and I don't think that's what you feel.

[–] arken@lemmy.world 3 points 8 hours ago

There are some great use cases, for instance transcribing handwritten records and making them searchable is really exciting to me personally. They can also be a great tool if you learn to work with them (perhaps most importantly, know when not to use them - which in my line of work is most of the time).

That being said, none of these cases, or any of the cases in this thread, is going to return the large amounts of money now being invested in AI.

[–] hamid@vegantheoryclub.org 2 points 8 hours ago

I use it to re-tone and clarify corporate communications that I have to send out on a regular basis to my clients and internally. It has helped a lot with the amount of time I used to spend copy-editing my own work. Because of it I've saved myself lots of hours doing something I don't really like (copy-editing) and gained more time for the stuff I do like (engineering).

[–] ekky@sopuli.xyz 3 points 10 hours ago

I think genAI would be pretty neat for bit-banging tests, i.e. throwing semi-random requests and/or signals at some device in the hope of finding obscure edge cases or security holes.
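
A rough sketch of what that loop might look like (plain randomness here, over an assumed serial connection; the idea above would have the genAI propose the payloads instead, and the port, baud rate, and payload sizes are placeholders, not anything from a real setup):

```python
# Fuzzing sketch: throw semi-random byte payloads at a device over serial and log
# anything it answers. Port, baud rate, and payload sizes are assumptions; a genAI
# model could be swapped in for os.urandom() to propose more structured payloads.
import os
import random
import serial  # pyserial

with serial.Serial('/dev/ttyUSB0', 115200, timeout=0.5) as port:
    for i in range(1000):
        payload = os.urandom(random.randint(1, 64))
        port.write(payload)
        response = port.read(256)
        if response:  # the device answered something; worth a closer look
            print(f"iteration {i}: {payload.hex()} -> {response.hex()}")
```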

[–] olafurp@lemmy.world 3 points 10 hours ago

It's pretty good at looking up readily available knowledge that doesn't have a lot of nuance to it. There's a lot of stuff you can look up but it always comes with a grain of salt.

Home remedies, a bunch of baby facts like what poop colors mean, recipes and adjustments, programming examples (requires very good prompting skills).

Rewriting stuff into business English is another very nice use case. Tell the AI your qualifications, ask it to make a cover letter for a given job description, then review. Drafting text and summarising are also pretty good.

Adding modifiers to questions, like "list 20 options for X" when brainstorming or "include how scientifically reliable the claim is on a scale of 1-10", really helps with getting a good answer and some nuance around whatever it claims.

It's touted as the be all end all but in reality the use cases are very specific in my experience.

[–] Affidavit@lemm.ee 5 points 12 hours ago

I'd say there are probably as many genuine use-cases for AI as there are people in denial that AI has genuine use-cases.

Top of my head:

  • Text editing. Write something (e.g. e-mails, websites, novels, even code) and have an LLM rewrite it to suit a specific tone and identify errors.
  • Creative art. You claim generative AI art is soulless and poor quality; to me, that indicates a lack of familiarity with what generative AI is capable of. There are tools to create entire songs from scratch, replace the voice of one artist with another, remove unwanted background noise from songs, improve the quality of old songs, separate/add vocal tracks to music, turn 2D models into 3D models, create images from text, convert simple images into complex images, fill in missing details from images, upscale and colourise images, separate foregrounds from backgrounds.
  • Note taking and summarisation (e.g. summarising meeting minutes or summarising a conversation or events that occur).
  • Video games. Imagine the replay value of a video game if every time you play there are different quests, maps, NPCs, unexpected twists, and different puzzles? The technology isn't developed enough for this at the moment, but I think this is something we will see in the coming years. Some games (Skyrim and Fallout 4 come to mind) have a mod that gives each NPC AI generated dialogue that takes into account the NPC's personality and history.
  • Real-time assistance for a variety of tasks. Consider a call centre environment as one example: a model can be optimised to evaluate calls based on language, empathy, and correctness of information. A model could also be set up with the call centre's knowledge base so that it listens to the call, locates information based on a caller's enquiry, and tells an agent where the information is (or even suggests what to say, though this is currently prone to hallucination).
[–] waka@discuss.tchncs.de 3 points 13 hours ago

Another valid point for GPTs is getting started on ideas and things: sorting out mind messes, getting useful data out of large clusterfucks of text, getting a general direction.

Current downsides: you cannot expect factual answers on topics it has no access to, as it'll hallucinate on these without telling you; many GPT providers use your data, so you cannot directly ask it about sensitive topics; and it'll forget data points if your conversation goes on too long.

As for image generation, it's still often stuck in the uncanny valley. Only animation topics benefit right now within the amateur realm; I can't say how much GPTs are used professionally at the moment.

All of these are things you could certainly do yourself, and often better/faster than an AI. But sometimes you just need a good-enough solution, and that's where GPTs shine more and more often. It's just another form of automation - if used for repetitive/stupid tasks, it's fine. Just don't expect it to build you a piece of fully working, bug-free software simply by asking. That's not how automation works. At least not to date.

[–] mindbleach@sh.itjust.works 5 points 14 hours ago (2 children)

What doesn't exist yet, but is obviously possible, is automatic tweening. Human animators spend a lot of time drawing the drawings between other drawings. If they could just sketch out what's going on, about once per second, they could probably do a minute in an hour. This bullshit makes that feasible.

We have the technology to fill in crisp motion at whatever framerate the creator wants. If they're unhappy with the machine's guesswork, they can insert another frame somewhere in-between, and the robot will reroute to include that instead.

We have the technology to let someone ink and color one sketch in a scribbly animatic, and fill that in throughout a whole shot. And then possibly do it automatically for all labeled appearances of the same character throughout the project.

We have the technology to animate any art style you could demonstrate, as easily as ink-on-celluloid outlines or Phong-shaded CGI.

Please ignore the idiot money robots who are rendering eye-contact-mouth-open crowd scenes in mundane settings in order to sell you branded commodities.

[–] Mr_Blott@feddit.uk 4 points 13 hours ago

For the 99% of us who don't know what tweening is and were scared to Google it in case it was perverted: it's short for in-betweening and means the frames of an animation drawn in between two key frames.

[–] Even_Adder@lemmy.dbzer0.com 1 points 12 hours ago (1 children)

Have you seen this? There was another paper, but I can't remember the name of it right now.

[–] mindbleach@sh.itjust.works 1 points 11 hours ago (1 children)

I had not. There's a variety of demos for guessing what comes between frames, or what fills in between lines... because those are dead easy to train from. This technology will obviously be integrated into the process of animation, so anything predictable Just Works, and anything fucky is only as hard as it used to be.

[–] Even_Adder@lemmy.dbzer0.com 1 points 5 hours ago

I think this is the other one I remember seeing.

[–] whome@discuss.tchncs.de 8 points 16 hours ago

I use it to sort days and create tables, which is really helpful. And here's the other thing that really helped me, which I would never have tried to figure out on my own:

I work with the open-source GIS software QGIS. I'm not a cartographer or a programmer but a designer. I had a world map and wanted to create GeoJSON files for each country. So I asked ChatGPT if there was a way to automate this within QGIS, and sure enough it recommended creating a Python script that could run inside the software to do just that. After a few tweaks it worked, and that saved me a lot of time and annoyance. Would it be good to know Python? Sure, but I know my brain has a really hard time with code and scripts; it never clicked and likely never will. So I'm very happy with this use case. Creative work could be supported in a drafting phase, but I'm not so sure about that.
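
A minimal sketch of that kind of script, run from the QGIS Python console, looks something like the following (this isn't the exact script; the layer name "world", the "NAME" attribute, and the output folder are placeholders):

```python
# Sketch: export each country in a "world" layer to its own GeoJSON file.
# Layer name, attribute name, and output path are assumptions for illustration.
import os
from qgis.core import QgsProject, QgsVectorFileWriter

out_dir = '/tmp/countries'
os.makedirs(out_dir, exist_ok=True)

layer = QgsProject.instance().mapLayersByName('world')[0]

for feature in layer.getFeatures():
    country = feature['NAME']            # attribute holding the country name
    layer.selectByIds([feature.id()])    # select just this one country
    QgsVectorFileWriter.writeAsVectorFormat(
        layer,
        os.path.join(out_dir, '{}.geojson'.format(country)),
        'utf-8',
        layer.crs(),
        'GeoJSON',
        onlySelected=True,               # write only the selected feature
    )
```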

[–] thepreciousboar@lemm.ee 2 points 12 hours ago

I know they are being used for, and are decently good at, extracting a single piece of information from a big document (like a datasheet). Considering you can easily confirm the information is correct, it's quite a nice use case.

[–] weeeeum@lemmy.world 3 points 14 hours ago

I think LLMs could be great if they were used for education, learning and trained on good data. The encyclopedia Britannica is building an AI exclusively trained on its data.

It also leaves room for writers to add more to the database, providing broader knowledge for the AI, so people keep their jobs.

[–] Schorsch@feddit.org 33 points 23 hours ago (3 children)

It's kinda handy if you don't want to take the time to write a boring email to your insurance or whatever.

[–] Odelay42@lemmy.world 14 points 22 hours ago (2 children)

I sorta disagree though, based on my experience with llms.

The email it generates will need to be read carefully and probably edited to make sure it conveys your point accurately. Especially if it's related to something as serious as insurance.

If you already have to specifically create the prompt, then scrutinize and edit the output, you might as well have just written the damn email yourself.

It seems only useful to write slop that doesn't matter that only gets consumed by other machines and dutifully logged away in a slop container.

[–] CrabAndBroom@lemmy.ml 26 points 21 hours ago

It does sort of solve the 'blank page problem' though IMO. It sometimes takes me ages to start something like a boring insurance letter because I open up LibreOffice and the blank page just makes me want to give up. If I have AI just fart out a letter and then I start to edit it, I'm already mid-project so it actually does save me some time in that way.

[–] scrubbles@poptalk.scrubbles.tech 10 points 22 hours ago

For those of us who are bad at writing, though, that's exactly why we use it. I'm bad with greetings, structure, the things people expect, and I've had people get offended at my emails because they come off as rude. I don't notice those things. For that, LLMs have been a godsend. Yes, I of course have to validate it, but it usually conveys the message I'm trying to get across.

[–] nafzib@feddit.online 5 points 16 hours ago

I have had some decent experiences with Copilot and coding in C#. I've asked it to help me figure out what was wrong with a LINQ query I was doing with an XDocument and it pointed me in the right direction where I figured it out. It also occasionally has some super useful auto complete blocks of code that actually match the pattern of what I'm doing.

As for art and such, sometimes people just want to see some random bizarre thing realized visually that they don't have the ability (or time/dedication) to realize themselves and it's not something serious that they would be commissioning an artist for anyway. I used Bing image creator recently to generate a little character portrait for an online DND game I'm playing in since I couldn't find quite what I was looking for with an image search (which is what I usually do for those).

I've seen managers at my job use it to generate fun, relevant imagery for slideshows that otherwise would've been random boring stock images (or just text).

It has actual helpful uses, but every major corporation that has a stake in it just added to or listened to the propaganda really hard, which has caused problems for some people; like the idiot who proudly fired all of his employees because he replaced all their jobs with automation and AI, then started hunting for actual employees to hire again a couple months later because everything was terrible and nothing worked right.

They're just tools that can potentially aid people, but they're terrible replacements for actual people. I write automated tests for a living, and companies will always need people for that. If they fired me and the other QAs tomorrow, things would be okay for a short while thanks to the automation we've built, but as more and more code changes go into our numerous and labyrinthine systems, more and more bugs would get through without someone to maintain the automation.

[–] TORFdot0@lemmy.world 5 points 16 hours ago

If you don't know what you are doing and ask LLMs for code, then you are going to waste time debugging it without understanding it. But if you are just asking for boilerplate stuff, or asking it to add comments and console printouts to existing code for debugging, it's really great for that. Sometimes it needs chastising or corrections, but so do humans.

I find it very useful but not worth the environmental cost or even the monetary cost. With how enshittified Google has become now though I find that ChatGPT has become a necessary evil to find reliable answers to simple queries.

[–] howrar@lemmy.ca 15 points 20 hours ago

In the context of programming:

  • Good for boilerplate code and variable naming, when what you want is for the model to regurgitate things it has seen before.
  • Short pieces of code where it's much faster to verify that the code is correct than to write the code yourself.
  • Sometimes, I know how to do something but I'll wait for Copilot to give me a suggestion, and if it looks like what I had in mind, it gives me extra confidence in the correctness of my solution. If it looks different, then it's a sign that I might want to rethink it.
  • It sometimes gives me suggestions for APIs that I'm not familiar with, prompting me to look them up and learn something new (assuming they exist).

There's also some very cool applications to game AI that I've seen, but this is still in the research realm and much more niche.

[–] mp3@lemmy.ca 12 points 20 hours ago

I treat it as a new-ish employee. I don't let it do important tasks without supervision, but it does help build something rough that I can work on.

[–] Vanth@reddthat.com 6 points 17 hours ago (1 children)

Idea generation.

E.g., I asked an LLM client for interactive lessons for teaching 4th graders about aerodynamics, especially related to how birds fly. It came back with 98% amazing suggestions that I had to modify only slightly.

A work colleague asked an LLM client for wedding vow ideas to break through writer's block. The vows they ended up using were 100% theirs, but the AI spit out something on paper to get them started.

[–] Mr_Blott@feddit.uk 3 points 13 hours ago (1 children)

Those are just ideas that were previously "generated" by humans though, that the LLM learned

[–] TheRealKuni@lemmy.world 1 points 2 hours ago

Those are just ideas that were previously "generated" by humans though, that the LLM learned

That’s not how modern generative AI works. It isn’t sifting through its training dataset to find something that matches your query like some kind of search engine. It’s taking your prompt and passing it through its massive statistical model to come to a result that meets your demand.

[–] simple@lemm.ee 24 points 22 hours ago* (last edited 22 hours ago)

People keep meaning different things when they say "Generative AI". Do you mean the tech in general, or the corporate AI that companies overhype and try to sell to everyone?

The tech itself is pretty cool. GenAI is already being used to subtitle and translate any form of media quickly. Image AI is really good at upscaling low-res images and making them clearer by filling in the gaps. Chatbots are fallible, but they're still really good for specific things like generating test data or quickly helping you with basic tasks that might otherwise have you searching for 5 minutes. AI is huge in video games for upscaling tech like DLSS, which can boost performance by running the game at a low resolution and then upscaling it; the result is genuinely great. It's also used to de-noise ray tracing and show cleaner reflections.

Also people are missing the point on why AI is being invested in so much. No, I don't think "AGI" is coming any time soon, but the reason they're sucking in so much money is because of what it could be in 5 years. Saying AI is a waste of effort is like saying 3D video games are a waste of time because they looked bad in 1995. It will improve.

[–] theunknownmuncher@lemmy.world 5 points 16 hours ago

It has value in natural language processing, like turning unstructured natural language data into structured data. Not suitable for all situations though, like situations that cannot tolerate hallucinations.

It's also good for reorganizing information and presenting it in a different format, and for classification of the semantic meaning of text. It's good for pretty much anything dealing with semantic meaning, really.

I often see people trying to use generative AI as a knowledge store, such as asking an AI assistant factual questions, but this is an invalid use case.

[–] solomon42069@lemmy.world 8 points 18 hours ago* (last edited 18 hours ago)

There was a legitimate use case in art for drawing on generative AI for concepts and as a stopgap for smaller tasks that don't need to be perfect. While art is art, not every designer out there is putting work out for a gallery - sometimes it's just an ad for a burger.

However, as the industry has had time to react, I think the business reality of generative AI currently puts it out of reach as a useful tool for artists. Profit-hungry people in charge will always look to cut corners and will lack the nuanced context that a worker would have when deciding whether or not to use AI in the work.

But you could make this argument about any tool, given how fucked up capitalism is. So I guess that's my 2c: generative AI is a promising tool, but capitalism prevents it from being truly useful anytime soon.

[–] neon_nova@lemmy.dbzer0.com 10 points 21 hours ago

I wrote guidelines for my small business, then uploaded the file to ChatGPT and asked it to review them.

It made legitimately good suggestions and rewrote the documents in better-sounding English.

Because of ChatGPT I will be introducing more wellness and development programs.

Additionally, I need med images for my website, so instead of using stock photos I was able to use Midjourney to generate a whole bunch of images in the same style that fit the theme of my business. It looks much better.

[–] Dagamant@lemmy.world 9 points 22 hours ago* (last edited 22 hours ago)

I use it to help with programming and writing. Not as a way to have something do it for me, but as something that can show me how to do something I'm stuck on, or give me ideas when I'm drawing a blank.

Kinda like an interactive rubber duck. Its solutions aren't always right or accurate, but it does help me get past things I struggle with.

[–] SkaveRat@discuss.tchncs.de 9 points 22 hours ago

Shitposting.

Need some weirdly specific imagery about whatever you're going on about? It's got you covered.

[–] Hyphlosion@lemm.ee 4 points 18 hours ago

I just use it for fun. Like, my own personal iPhone backgrounds and stuff. Sometimes I’ll share them with friends or on Mastodon or whatever, but that’s about it.

Gemini is fun to dink around with. When it works…

[–] Fondots@lemmy.world 3 points 17 hours ago

I was asked to officiate my friend's wedding a few months back. I'm no writer, and I wanted to do a bit better than just a generic wedding ceremony for them.

So I fired up ChatGPT, told it I needed a script for a wedding ceremony, described some of the things I wanted to mention and some of the things they requested, and it spit out a pretty damn good wedding ceremony. I gave it a little once-over and tweaked a bit of what it gave me, but 99% of it was pretty much straight ChatGPT. I got a lot of compliments on it.

I think that's sort of the use case. For those of us who aren't professional writers and public speakers, who have the general idea of what we need to say for a speech or presentation but can't quite string the words together in a polished way.

Here's pretty much what it spit out (their wedding was in a cave):

Cell Phone Reminder

Officiant: Before we begin, I’d like to kindly remind everyone to silence your phones and put them away for the ceremony. Groom and Bride want this moment to be shared in person, free from distractions, so let's focus on the love and beauty of this moment.

Giving Away the Bride

And before we move forward, we have a special moment. Tradition asks: Who gives this woman to be married to this man?

[Response from Bride's dad]

Thank you.

Greeting

Welcome, everyone. We find ourselves here in this remarkable setting—surrounded by the quiet strength of these ancient walls, a fitting place for Groom and Bride to declare their love. The cave, much like marriage, is carved out over time—through patience, care, and sometimes a little hard work. And yet, what forms is something enduring, something that stands the test of time.

Today, we’re here to witness Groom and Bride join their lives together in marriage. In this moment, we’re reminded that love is not about perfection, but about commitment—choosing one another, day after day, even when things get messy, or difficult, or dark. And through it all, we trust in love to guide us, just as God’s love guides us through life’s journey.

Declaration of Intent

[Officiant turns toward Groom and Bride]

Groom, Bride, you are about to make promises to each other that will last a lifetime. Before we continue, I’ll ask each of you to answer a very important question.

Officiant: Groom, do you take Bride to be your lawfully wedded wife, to have and to hold, for better or for worse, in sickness and in health, for as long as you both shall live?

Groom: I do.

Officiant: Bride, do you take Groom to be your lawfully wedded husband, to have and to hold, for better or for worse, in sickness and in health, for as long as you both shall live?

Bride: I do.

Exchange of Vows

Officiant: Now, as a sign of this commitment, Groom and Bride will exchange their vows—promises made not just to each other, but before all of us here and in the sight of God.  

[Groom and Bride share their vows]

Rings

Officiant: The rings you’re about to exchange are a symbol of eternity, a reminder that your love, too, is without end. May these rings be a constant reminder of the vows you have made today, and of the love that surrounds and holds you both.

[Groom and Bride exchange rings]

Officiant: And now, by the power vested in me, and with the blessing of God, I pronounce you husband and wife. Groom you may kiss your bride.

[Groom and Bride kiss]

Officiant: Friends and family, it is my great honor to introduce to you, for the first time, Mr. and Mrs. [Name].

I pretty much just tweaked the formatting, worked in a couple little friendly jabs at the groom, subbed their names in for Bride and Groom, and ad-libbed a little bit where appropriate
