this post was submitted on 13 Nov 2024
669 points (94.9% liked)

Technology

[–] CerealKiller01@lemmy.world 31 points 1 week ago (7 children)

Huh?

Smartphone improvements hit a rubber wall a few years ago (disregarding folding screens, which make up a small market share, the rate of improvement slowed drastically), and the industry is doing fine. It's not growing like it used to, but that just means people are keeping their smartphones longer, not that they've stopped using them.

Even if AI were to completely freeze right now, people would continue using it.

Why are people reacting like AI is going to get dropped?

[–] finitebanjo@lemmy.world 19 points 1 week ago* (last edited 1 week ago)

People are dumping billions of dollars into it, mostly on power, but it cannot turn a profit.

So the companies that, for example, revived a nuclear power facility to feed their machines, despite ever-diminishing returns in output quality, are going to shut everything down at massive losses, with countless hours of human work and lifespan thrown down the drain.

This will have quite a large economic impact as many newly created jobs go up in smoke, and businesses that structured themselves around the assumed continued availability of high-end AI will need to reorganize or go out of business.

Look up the dot-com bubble.

[–] theherk@lemmy.world 18 points 1 week ago

Because in some eyes, infinite rapid growth is the only measure of success.

[–] pdlorah@lemmy.ca 11 points 1 week ago (1 children)

People pay real money for smartphones.

[–] Petter1@lemm.ee 10 points 1 week ago

People pay real money for AIaaS as well.

[–] drake@lemmy.sdf.org 4 points 1 week ago (2 children)

It’s absurdly unprofitable. OpenAI has billions of dollars in debt. It absolutely burns through energy and requires a lot of expensive hardware. People aren’t willing to pay enough to make it break even, let alone profit.

Eh, if the investment dollars start drying up, they'll likely start optimizing what they have to get more value for fewer resources. There is value in AI, I just don't think it's as high as they claim.

[–] OsrsNeedsF2P@lemmy.ml 1 points 1 week ago

Training new models is expensive. Running them can be fairly cheap. So no
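The asymmetry this comment points at can be made concrete with some arithmetic. A minimal sketch, where every number is an illustrative assumption (not a real figure for any actual model): the one-time training cost dwarfs the marginal cost of a query, but gets amortized away once enough queries are served.

```python
# Back-of-the-envelope sketch. All numbers are invented assumptions for
# illustration only, not real costs of any actual model.

TRAINING_COST_USD = 100_000_000   # assumed one-time training cost
INFERENCE_COST_USD = 0.002        # assumed marginal cost per query (GPU time + power)
QUERIES_SERVED = 1_000_000_000    # assumed lifetime queries served by the model

# Spread the one-time training cost across every query served.
amortized_training = TRAINING_COST_USD / QUERIES_SERVED

cost_per_query = amortized_training + INFERENCE_COST_USD

print(f"amortized training cost/query: ${amortized_training:.4f}")
print(f"total cost/query:              ${cost_per_query:.4f}")
```

Under these made-up numbers, training still dominates the per-query cost at a billion queries; the marginal inference cost on its own is small, which is the sense in which running a trained model "can be fairly cheap".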

[–] Ultraviolet@lemmy.world 3 points 1 week ago (1 children)

Because novelty is all it has. As soon as it stops improving in a way that makes people say "oh that's neat", it has to stand on the practical merits of its capabilities, which is, well, not much.

[–] theherk@lemmy.world 7 points 1 week ago (2 children)

I’m so baffled by this take. “Create a terraform module that implements two S3 buckets with cross-region bidirectional replication. Include standard module files like linting rules and enable precommit.” Could I write that? Yes. But does this provide an outstanding stub to start from? Also yes.

And beyond programming, it is having a positive impact on science and medicine too. I mean, anybody who doesn’t see any merit has their head in the sand. That of course must be balanced with not falling for the hype, but the merits are very real.

[–] Eccitaze@yiffit.net 5 points 1 week ago (2 children)

There's a pretty big difference between chatGPT and the science/medicine AIs.

And keep in mind that for LLMs and other chatbots, it's not that they aren't useful at all but that they aren't useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot addons in Microsoft 365, and this is when AI companies are still in their "sell below cost and light VC money on fire to survive long enough to gain market share" phase. What happens when the VC money dries up and AI companies have to double their prices (or more) in order to make enough revenue to cover their costs?

[–] theherk@lemmy.world 2 points 1 week ago

Nothing to argue with there. I agree. Many companies will go out of business. Fortunately we’ll still have the Llama 3s and Mistrals lying around that I can run locally. On the other hand, cost justification is a difficult equation with many variables, so maybe in some cases it is, or will be, worth the cost. I’m just saying there is some merit.

[–] obbeel@lemmy.eco.br 1 points 1 week ago

I understand that it makes less sense to spend on model size if it isn't giving back performance, but then why would so much money be spent on larger LLMs?

[–] lightstream@lemmy.ml 1 points 1 week ago (1 children)

The merits are real. I do understand the deep mistrust people have for tech companies, but there's far too much throwing out of the baby with the bath water.

As a solo developer, LLMs are a game-changer. They've allowed me to make amazing progress on some of my own projects that I've been stuck on for ages.

But it's not just technical subjects that benefit from LLMs. ChatGPT has been a great travel guide for me. I uploaded a pic of some architecture in Berlin and it went into the history of it. I asked it about some damage to an old church in Spain; it turned out to be from the Spanish Civil War, where revolutionaries had been mowed down by Franco's firing squads.

Just today, I was getting help from an LLM for an email to a Portuguese removals company. I sent my message in English with a Portuguese translation, but the guy just replied back with a single sentence in broken English:

"Yes a can , need tho mow m3 you need delivery after e gif the price"

The first bit is pretty obviously "Yes I can" but I couldn't really be sure what he was trying to say with the rest of it. So I asked ChatGPT, which responded:

It seems he's saying he can handle the delivery but needs to know the total volume (in cubic meters) of your items before he can provide a price. Here's how I’d interpret it:

“Yes, I can [do the delivery]. I need to know the [volume] in m³ for delivery, and then I’ll give you the price.”

Thanks to LLMs, I'm able to accomplish so many things that would have previously taken multiple internet searches and way more effort.

[–] Rekorse@sh.itjust.works 0 points 1 week ago

Okay now justify the cost it took to create the tool.

[–] ClamDrinker@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (1 children)

People differentiate AI (the technology) from AI (the product being peddled by big corporations) without making that nuance clear (or they mean just LLMs, or they aren't even aware the technology has grassroots adoption outside of those big corporations). It will take time, and the bubble bursting might very well be a good thing for the technology in the long run. If something is only known for its capitalistic exploits, it'll continue to be seen unfavorably even once it has proven its value to those who care to look at it with an open mind. I read it mostly as those people rejoicing over the big corporations getting shafted for their greedy practices.

[–] sugar_in_your_tea@sh.itjust.works 3 points 1 week ago (1 children)

the bubble bursting might very well be a good thing for the technology into the future

I absolutely agree. It worked wonders for the Internet (dotcom boom in the 90s), and I imagine we'll see the same w/ AI sometime in the next 10 years or so. I do believe we're seeing a bubble here, and we're also seeing a significant shift in how we interact w/ technology, but it's neither as massive nor as useless as proponents and opponents claim.

I'm excited for the future, but not as excited for the transition period.

[–] ArchRecord@lemm.ee 2 points 1 week ago (1 children)

I’m excited for the future, but not as excited for the transition period.

I have similar feelings.

I discovered LLMs before the hype ever began (used GPT-2 well before ChatGPT even existed) and the same with image generation models barely before the hype really took off. (I was an early closed beta tester of DALL-E)

And as my initial fascination grew, along with the interest of my peers, the hype began to take off, and suddenly, instead of being an interesting technology with some novel use cases, it became yet another technology for companies to show to investors (after slapping it in a product in a way no user would ever enjoy) to increase stock prices.

Just as you mentioned with the dotcom bubble, I think this will definitely do a lot of good. LLMs have been great for asking specialized questions about things where I need a better explanation, or rewording/reformatting my notes, but I've never once felt the need to have my email client generate every email for me, as Google seems to think I'd want.

If we can just get all the over-hyped corporate garbage out, and replace it with more common-sense development, maybe we'll actually see it being used in a way that's beneficial for us.

[–] sugar_in_your_tea@sh.itjust.works 3 points 1 week ago* (last edited 1 week ago)

I initially started with natural language processing (small language models?) in school, which is a much simpler form of text generation that operates on whole words instead of the subword tokens modern LLMs use. So when modern LLMs came out, I basically registered that as, "oh, better version of NLP," with all its associated limitations and issues, and that seems to be what it is.
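The word-vs-token distinction described here can be shown in a few lines. This is a toy sketch: the greedy segmenter and the tiny "vocabulary" are invented for illustration and are much simpler than a real BPE tokenizer, but they show why subword models can handle words they never saw whole.

```python
# Toy contrast: classic NLP pipelines split text into whole words, while
# modern LLMs segment text into subword tokens. The vocabulary below is a
# made-up example, not a real tokenizer's vocab.

def word_tokenize(text: str) -> list[str]:
    """Word-level tokenization: just split on whitespace."""
    return text.lower().split()

def subword_tokenize(word: str, vocab: set[str]) -> list[str]:
    """Greedy longest-match segmentation into known subword pieces."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # fall back to a single character
            i += 1
    return pieces

vocab = {"token", "ize", "r", "un", "believ", "able"}

print(word_tokenize("Unbelievable tokenizer"))  # two opaque whole words
print(subword_tokenize("unbelievable", vocab))  # reusable known pieces
print(subword_tokenize("tokenizer", vocab))
```

A word-level model treats "unbelievable" as one opaque symbol it either knows or doesn't; the subword view decomposes it into pieces shared with many other words, which is part of why modern LLMs cope with rare words at all.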

So yeah, I think it's pretty neat, and I can certainly see some interesting use-cases, but it's really not how I want to interface with computers. I like searching with keywords and I prefer the process of creation more than the product of creation, so image and text generation aren't particularly interesting to me. I'll certainly use them if I need to, but as a software engineer, I just find LLMs in all forms (so far) annoying to use. I don't even like full text search in many cases and prefer regex searches, so I guess I'm old-school like that.
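The keyword-vs-regex preference mentioned above comes down to precision. A small sketch with invented log lines (the data is made up for illustration): a plain substring search matches any occurrence, while a regex with word boundaries matches only the exact form you meant.

```python
import re

# Made-up log lines to contrast naive keyword search with a regex search.
lines = [
    "error: connection reset by peer",
    "no terrors here, all calm",
    "ERROR 42: disk full",
]

# Keyword search: any line containing the substring "error", case-folded.
keyword_hits = [l for l in lines if "error" in l.lower()]

# Regex search: "error" only as a standalone word, thanks to \b boundaries.
pattern = re.compile(r"\berror\b", re.IGNORECASE)
regex_hits = [l for l in lines if pattern.search(l)]

print(keyword_hits)  # all three lines contain the substring ("terrors" matches too)
print(regex_hits)    # only the genuine error lines
```

The substring search pulls in "terrors" as a false positive; the word-boundary regex doesn't, which is the kind of control a full-text search often doesn't give you.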

I'll eventually give in and adopt it into my workflow and I'll probably do so before the average person does, but what I see and what the media hypes it up to be really don't match up. I'm planning to set up a llama model if only because I have the spare hardware for it and it's an interesting novelty.