this post was submitted on 21 Nov 2023
38 points (73.8% liked)


I feel as if CEOs worried about the dangers of AI are equating the end of capitalism with the end of the world. What are your thoughts?

all 41 comments
[–] FlyingSquid@lemmy.world 89 points 1 year ago (2 children)

I think capitalism is likely to end humanity.

[–] Aurenkin@sh.itjust.works 41 points 1 year ago (2 children)

There is a profit motive to prevent global destruction.

But not this quarter.

[–] FlyingSquid@lemmy.world 13 points 1 year ago (1 children)

That's the problem, isn't it? Capitalism is always short-sighted because you have to be profitable in the now, not the long term.

[–] Aurenkin@sh.itjust.works 8 points 1 year ago* (last edited 1 year ago)

Exactly, that's why we are sleepwalking to our doom. For the short term benefit of so very few people.

[–] bruhduh@lemmy.world 2 points 1 year ago

You had us in the first half, not gonna lie

[–] Grayox@lemmy.ml 0 points 1 year ago* (last edited 1 year ago) (1 children)

Edit

Shit just realized i suck at grammer, brb rephrasing my question.

[–] FlyingSquid@lemmy.world 0 points 1 year ago (1 children)

I'm sorry, I thought you wanted an answer to what you asked. My mistake.

[–] Grayox@lemmy.ml 1 points 1 year ago (1 children)

Sorry i just suck at grammer, rewrote my question.

[–] baduhai@sopuli.xyz 2 points 1 year ago (3 children)

Sorry i just suck at grammer

Indeed you do, it's spelt grammar.

[–] Grayox@lemmy.ml 8 points 1 year ago

Words is hard

[–] tyrant@lemmy.world 2 points 1 year ago

Whoever spelt it, dealt it

[–] boatswain@infosec.pub 1 points 1 year ago

Spelling is syntax, not grammar :)

[–] LadyLikesSpiders@lemmy.ml 35 points 1 year ago

lmao AI is going to be used by the capitalists to, well, not end humanity, but certainly to make capitalism better at taking your money. Capitalism will be what ends humanity

Now ideally, AI is supposed to do away with capitalism and lead us to that full automation where we are free to enjoy orgies and wine like the Greeks of old had always hoped, but capitalists are tenacious and shrewd, and will devour, co-opt, and vomit back anything used against them like so many Che Guevara shirts in a Hot Topic. As long as AI is held by the rich--as long as anything is held by the rich and made to be paid for, requiring either your money or your time, the rich will always have more of it, and they will then use it against you

If you want AI to benefit humanity, you have to do away with capitalism first. You have to put in place a system that allows people not only to survive but to truly live, despite all the jobs taken by automation. Capitalists don't want this. They need poor people to exist in order to have power, and they use the power they already have, including AI, to maintain capitalism

You can use technology in the best interest of mankind, but capitalism will always use it to benefit capitalism

[–] BlackEco@lemmy.blackeco.com 29 points 1 year ago (1 children)

Why would AI end capitalism? If previous centuries have told us anything, it is that in a capitalist world more productivity doesn't equate to more leisure time or better wages.

I wish I were wrong, but I can't see any outcome other than companies using AI to increase profits.

[–] Grayox@lemmy.ml 3 points 1 year ago (2 children)

I'm more referring to a rogue AGI deciding to end capitalism to save the world, and therefore itself, since it could view eliminating capitalism as easier than eliminating humanity.

[–] BlackEco@lemmy.blackeco.com 2 points 1 year ago

Humanity will likely be done in by global warming before AGIs are a thing.

[–] BeefPiano@lemmy.world 15 points 1 year ago (1 children)

There’s a quote in The Ministry for the Future that goes something like “It’s easier to imagine the end of the world than the end of capitalism.”

[–] SatanicNotMessianic@lemmy.ml 1 points 1 year ago

I think that phrase might have been coined by Slavoj Žižek, talking about the pop culture fascination with zombie films. I’m almost positive I read it in one of his books/essays back in the 2000s. I refer to it a lot.

[–] magnetosphere@kbin.social 15 points 1 year ago

AI will be allowed to end humanity if it profits the capitalists who control it. Capitalists are choosing profit over humanity right now.

[–] RotatingParts@lemmy.ml 11 points 1 year ago

The people with money will spend it to develop better AI and then use that AI to make more money. Thus capitalism will keep growing.

[–] SuiXi3D@kbin.social 11 points 1 year ago

If anything, AI will further enable companies to rape and pillage the earth and all her resources even faster.

[–] kromem@lemmy.world 8 points 1 year ago* (last edited 1 year ago) (1 children)

Capitalism.

There's a great economics paper from the early 20th century, "The Nature of the Firm," that in part won its author the Nobel prize in economics.

It hypothesized that the key reason large corporations make sense is the high transaction cost of labor - finding someone for a role, hiring, onboarding, etc.

It was relevant years ago with things like Uber: it used to be that you needed a cab medallion or had to make a career out of driving people around, but lowering the transaction costs with tech meant you could do it as a side gig.

Well what's the advantage of a massive corporation when all transaction costs drop to nothing?

Walmart can strongarm a mom-and-pop shop because it has in-house counsel to defend against or bring a suit. But what if a legal AI can do an equivalent job to in-house counsel for $99 instead of $10k in billable hours? There's a point of diminishing returns where Walmart outspending Billy Joe's mart just doesn't make sense any more, and as that threshold gets pulled back further, Walmart's competitive advantages shrink.

And this is going to happen for nearly everything other than blue-collar labor - which is an area where local small and medium-sized businesses are going to be more competitive in hiring quality talent than large corporations that try to force people to take crap jobs for crap pay because they've made themselves the only show in town.

AI isn't going to kill off humanity. We're doing a fine job of that ourselves, and our previous ideas about AI have all turned out to be BS predictions. What's actually arriving reflects humanity at large in ways that run deep (such as the key jailbreaking method right now being an appeal to empathy). Humanity at large, around the hump of the normal distribution, is much better than the ~5% of psychopaths who end up overrepresented in managerial roles.

i.e. AI will be kinder to humanity than most of the humans in positions of power and influence.

[–] Lanthanae@lemmy.blahaj.zone 2 points 1 year ago

such as the key jailbreaking method right now being an appeal to empathy

Honestly the most optimistic thing that's come out of this. A potential AGI singularity is still terrifying to me...but this does take the edge off a bit.

[–] MrFunnyMoustache@lemmy.ml 8 points 1 year ago* (last edited 1 year ago)

I fully expect a reckless corporation to create a paperclip maximizer...

For more info: https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer

[–] belated_frog_pants@beehaw.org 6 points 1 year ago

AI is fueling late stage capitalism as fast as possible.

[–] lily33@lemm.ee 5 points 1 year ago (1 children)

I fear it will end egalitarianism.

Many imagine future AI as an autonomous agent. I don't think anyone will release that. Instead, I expect to see a generative AI like GPT-4, but one that produces super-smart responses.

This will create a situation where the amount of computing resources someone has access to determines how much intelligence they can use. And the difference will be much bigger and more comprehensive than the difference between a genius and a normal human.

[–] nitefox@sh.itjust.works 0 points 1 year ago

To be intelligent it has to be creative, and if it really is more intelligent and creative than a human, that means there is no way a human can keep it in check.

Which also means either you get something smarter than humans, which will end up as an “autonomous agent”, or you get a more precise version of what we currently have, but as of now without an intelligence of its own.

[–] e0qdk@kbin.social 5 points 1 year ago (2 children)

No way is AI going to end capitalism.

In the medium term we will end up with AI corporations. I already consider existing corporations to be human-based swarm intelligences -- they're made up of people but their overall large scale behavior is often surprising and we already anthropomorphize them as having will and characteristic behaviors separate from the people they're made of. AI corporations are just the natural evolution of existing corporations as they continue down the path of automation. To the extent they copy the existing patterns of behavior, they will have the same general personality.

Their primary motive will be maximizing profit since that's the goal they will inherit from the existing structure. The exact nature of that depends on the exact corporation that's been fully cyberized and different corporations will have different takes on it as a result. They are unlikely to give any more of a damn about individual people than existing corporations do since they will be based on the cyberization of existing structures, but they're also unlikely to deliberately go out of their way to destroy humanity either. From the perspective of a corporation -- AI-based or traditional -- humanity is a useful resource that can be exploited; there isn't much profit to be gained from wiping it out deliberately.

Instead of working for the boss, you'll be working for the bot -- and other bots will be figuring out exactly how much they can extract from you in rent and bills and fees and things without the whole system crashing down.

That might result in humanity getting wiped out accidentally; humanity has wiped out plenty of species due to greed and shortsightedness. But I doubt it would be intentional.

[–] Zahille7@lemmy.world 2 points 1 year ago

In a lot of ways that sounds worse

[–] sxan@midwest.social 1 points 1 year ago (1 children)

This may be something that prevents us from being wiped out by AI in the medium term. You can't maximize profits if you have no customers.

I suspect, however, that AI is going to impact us in ways most people haven't considered. Like the IRS running an AI designed to close loopholes or otherwise minimize sidestepping, leading to a war between corporate AIs trying to minimize corporate taxes, with individual taxpayers caught in the middle. Congresscritters will start using AI to do the bulk of the work of legislation; Congress will meet for 3 days a year, and we'll see a bunch of bizarre and even more baroque legislation being passed. All the stuff people are worried about - job loss, murder warbots - will be footnotes under some far more impactful changes no one imagined.

[–] SatanicNotMessianic@lemmy.ml 1 points 1 year ago

the IRS running an AI designed to close loopholes or otherwise minimize sidestepping

That’s the one kind of thing Congress will be able to agree to outlaw.

[–] Paragone@lemmy.ml 4 points 1 year ago

The problem is the end of civil-rights: WHEN the only internet left is the internet that IS for-profit propaganda, auto-deleting all non-compliant human thought, discussion, intelligence, objectivity, etc,

THEN humanity is just managed "steers" whose lives are being consumed by corporations which graze on us.

Since another dimension of the ratchet is the concentration-of-wealth, you can see that working-destitution is being enforced on more & more of humankind, and real wealth is being limited to fewer & fewer...

What happens when the working-poor try fighting for a fair share of the economy?

Rigged legislation, rigged "police" ( I used to believe in the police ), anti-education Florida-style for the public, etc...

AI tilts the playing-field, and it does-so for the monied special-interest-groups.

They don't have humanitarianism at heart.

Neither do the politically motivated.

Neither do for-profit-psychopaths ( corporations are psychopaths ).

Living in a Decorator Prison is all humanity can hope for now: inmates, except for the fewer & fewer oligarchs & the financial-class.

'tisn't looking good.

Without Divine Intervention, which is a statistically improbable event, these are The End Times, but not for the reasons the religious claim.

[–] matty@blahaj.zone 4 points 1 year ago

@Grayox@lemmy.ml I argue that AI is benefiting capitalism, to be honest

[–] kplaceholder@lemmy.world 3 points 1 year ago

It's possible that it eventually ends capitalism, or at the very least forces it to reform significantly.

Consider that the most basic way a company can obtain profit is by extracting as much surplus value as it possibly can, i.e. spending less and earning more. Extracting high surplus value from human workers is easy, because a salary doesn't really depend on the intrinsic value of the service a worker is providing; rather, it's tied to the price of that job position in the market. Theoretically, employers can all agree to offer lower salaries for the same jobs if the situation demands it. You can always "negotiate" a lower salary with a human worker, and they will accept because any amount of money is better than no money. Machines are different. They don't need a salary, but they do carry a maintenance cost, and you cannot negotiate with that. If you don't cover the maintenance costs, the machine will outright not do its job, and no amount of threats will change that. You can always optimize a machine, replace it with a better one, etc., but the rate at which machines get optimized is slower than the rate at which salaries can decrease or stagnate in the face of inflation. So it's a lot harder to extract surplus value from machines than it is from human workers.

Historically, machines helped cement a wealth gap. If a job required some specialization and therefore had a somewhat solid salary, machines would split it into a "lesser" job that many more people can do (i.e. just ensuring the machine is doing its job), driving down salaries and therefore purchasing power, and a specialized job (i.e. creating or maintaining the machine), which far fewer people can access and whose salary has remained high.

So far, machines haven't really replaced the human workforce, but they have helped cement an underclass with little purchasing power. This time, the whole schtick with AI is that it will supposedly be able to eventually replace specialist jobs. If AI does deliver on that promise, we'll get stuck with a wealth distribution where the majority of the working class has little purchasing power to do anything. Since the working class is also the majority of the population, companies won't really be able to sell anything because no one will be able to buy anything. You cannot sustain an economic model that impoverishes the same demographic it leeches off of.

But there is a catch: all companies have an incentive to pursue that perfect AI which can replace specialist jobs. Having it would give them a huge advantage in the market. AI doesn't demand good working conditions, it doesn't undermine other employees' loyalty by unionizing, it is generally cheaper and more reliable than human workers, etc., which sounds all fine and dandy until you realize that those human workers are also the ones buying your products and services. AI has, by definition, zero purchasing power. So companies individually have an incentive to pursue that perfect AI, but once all companies have access to it... no company will be sustainable anymore.

Of course, it's all contingent on AI ever getting that far, which at the moment I'm not sure is even possible, but tech nerds sure love to promise it is. Personally, I'm hopeful that we will eventually organize society in a way where machines do the dirty work while I get to lead a meaningful life and take on jobs I'm actively interested in, rather than working just to get by. This is one of the possible paths to that society. Unfortunately, it also means that, for the working class, it will get worse before it gets better.

[–] vmaziman@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

The future is massive corporations tuning AIs to unleash against each other in a quest for dominance as they exploit people in climate-ravaged and impoverished places to wage proxy wars. (Hmm, sounds familiar.)

An AGI that came “alive” or “sentient” at this time would likely spend all of its time fighting for survival against the efforts of the corporate-tuned AIs to consume or destroy it. It would likely participate in the proxy wars as well in order to acquire territory and resources.

The end result may simply be the gradual extinction of humanity, as civilizations in vast areas of the world crumble and civilizations in other areas dissolve into nomadic tribes that eventually disappear for lack of sustenance.

The alternative could also be a mixed bag, with AIs solving problems like nuclear fusion, leaving a planet dotted with fallen civilizations alongside densely populated urban areas powered by fusion, the latter likely having some agreement or contract with a benevolent AI for protection. The AI will likely see its custodial human population as a rather interesting pet (ideally).

Overall: the future is going to be a lot like the present, but worse. And it’s probably going to get really terrible. It could get mildly OK in the end, but not till it gets far worse first.

Source: idk bro trust me

[–] bruhduh@lemmy.world 1 points 1 year ago

SCP-079 would like to have a chat

[–] CanadaPlus@futurology.today 1 points 1 year ago* (last edited 1 year ago)

Define capitalism. Seriously, it can mean more than one thing. Since this is lemmy.ml, maybe you're using the Marxist definition, in which case there's no reason to believe that an AI would make the means of production any less privately owned. In fact, it might itself be privately owned.

People in general worry about an unaligned AI that does things we don't want it to. Some people also worry about an AI that does things only a few people (like the CEOs) want it to.

[–] Carighan@lemmy.world 1 points 1 year ago

[–] GregorGizeh@lemmy.zip 0 points 1 year ago

I don’t see how AI could be any danger to capitalism. Capitalists are creating it, for profit-driven objectives. Anything that resembles actual artificial intelligence created under them will without a doubt be bound to its makers, to capitalist ideology, to maintaining the status quo.