this post was submitted on 16 Jul 2023
48 points (100.0% liked)

Technology


Someone used Midjourney to AI-generate images of politicians cheating on their spouses, though they claim it was well-intentioned.

top 39 comments
[–] Gutotito@kbin.social 31 points 1 year ago (3 children)

AI will revolutionize the blackmail industry

Or kill it. If every image is suspect, real crimes/transgressions become a lot harder to prove.

[–] falsem@kbin.social 5 points 1 year ago

You can't un-ring the bell.

[–] Cethin@lemmy.zip 1 points 1 year ago

Yep. When everything is believable, nothing is believable. We're already mostly post-truth without AI. I don't want to see what comes next.

[–] DangerousDetlef@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

That would be in the far future, if ever. Right now there are many ways to identify an AI-generated picture, especially if it has people in it (the infamous hands problem). And there's software that can reliably tell, too.

It's, as always, not a question of the tech but of how we deal with it. If we were all reasonable and questioned such pictures when they popped up somewhere, you'd be right. However, if enough people believe them to be real, the damage is done, even if they are proven to be fake afterwards.

That is most likely the reason blackmail, and believing fakes are real, hasn't died and never will. Too many people believe what they want to believe, no matter what the actual truth is.

[–] donuts@kbin.social 19 points 1 year ago (6 children)

Frankly I'm struggling to see even a single upside to AI at this point. Shit like this fucking sucks.

[–] skeezix 22 points 1 year ago (1 children)

You’re just jealous that Bernie’s got a hot babe.

[–] Neon@kbin.social 20 points 1 year ago (1 children)
[–] quirzle@kbin.social 4 points 1 year ago

So... who has the deepfakes of Bernie-on-Bernie action?

[–] Aatube@kbin.social 11 points 1 year ago (1 children)

That’s like saying you’re struggling to see even a single upside to explosives.

[–] donuts@kbin.social 1 points 1 year ago (2 children)

I can genuinely see more upsides to explosives.

[–] Scubus@sh.itjust.works 1 points 1 year ago (1 children)

Better combat in games, massive increases in technology as a result of AI designing things, the ability to test millions of potential medications at once, guidance counseling, grief counseling, counseling in general, once it's gotten over its hallucination issues massive increases in intellectual development, the ability to fully automate supply chains, the ultimate sword-and-shield combo with MAD hopefully ending all physically violent wars, and the fact that eventually anything a human can do, an AI could do better.

[–] Aatube@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

I think you're overestimating its potential... leaving very critical things to a machine is not a good idea, and I'm not sure how it will test medications

[–] Scubus@sh.itjust.works 0 points 1 year ago (1 children)

We leave critical things to humans all the time. We're just machines with a shitload more failure points. And here is how it designs and tests multiple medications at once. I don't think that article even touches on protein folding, which is another big one.

[–] Aatube@kbin.social 1 points 1 year ago

I understand leaving critical things to predictable machines but very critical things like counseling (preventing people from things like suicide) and nuclear warring (which should already have experienced commanders) should not be left to unpredictable machines.

[–] Aatube@kbin.social 1 points 1 year ago

Text generation, general advice, barebones stuff, easier way to create art, easier way to create, finding patterns and detecting things

[–] DaniAlexander@kbin.social 10 points 1 year ago (4 children)

AI will give opportunities to the handicapped in the very near future. Imagine a paraplegic artist or coder or sculptor who can describe their 'vision' to a machine. They can do that now, even with a picture. Soon they can do that with a 3D machine. I don't mean 'make a painting of the mountains' but instead 'cadmium red mixed with yellow, brush stroke in circles'.

When you think of AI try not to think of the bad actors. Try to think of the good things that can come from it. All the worlds that will be opened up for people.

[–] DessertStorms@kbin.social 7 points 1 year ago (2 children)

I don't disagree with the point you're making*, but please, #SayTheWord - we are disabled, not handicapped (note that at the end of this they also discuss a shift to person first language, as in "person with disability", which some people do prefer, but many others, myself included, still favour simply "disabled" or "disabled person/adult/child/whatever is relevant").

*I will just say that disabled people currently needing, in most cases, to exchange privacy and sometimes even security for access to these new technologies, so that the companies selling these devices can make even more money, is not something we should be OK with. We should be fighting for accessibility that isn't dependent on profiteering, but on an actual will to include disabled people in society.

[–] LoafyLemon@kbin.social 7 points 1 year ago* (last edited 1 year ago) (2 children)

There are lots of open source projects involving AI that you can run on your personal computer. I think the community-driven projects are heading in the right direction, but it's completely opposite for the ones owned by corporations as they're only driven by profit margins, not people.

[–] donuts@kbin.social 3 points 1 year ago

The problem with "open source" in the context of AI is that the source code is a much smaller factor than the training dataset. AI companies running around and scraping everybody's data as if they own anything they see is a real problem raising massive ethical and legal concerns.

[–] DessertStorms@kbin.social 1 points 1 year ago

That's great (genuinely). Unfortunately, having to work outside of the mainstream brings its own hurdles. This isn't on the same level, but consider Twitter vs Mastodon or Reddit vs Lemmy: the corporate solution is shiny and easy and requires little to no effort from the end user, while the other requires a little more understanding, effort, and comfort with technology, and might not appeal, or even be known, to many. Sure, people can look it up and learn it, but that looking and learning are hurdles, and when it comes to accessibility devices, those hurdles tend to be more significantly in the way.

To be clear, I am not trying to shit on the open source stuff, I do genuinely think it's great, but like so many of the solutions we currently have to work with, it's a band-aid on a cancer. We need to remove the cancer.

[–] DaniAlexander@kbin.social 4 points 1 year ago (1 children)

I apologize that my choice of language was insulting to you. I am disabled (using your word, though I grew up with and am comfortable with my own terms), so I rarely think about terminology for myself. I'll try to remember in the future.

As for your point: while I do see a problem with excess profits on the backs of other people, I also realize that innovation does not come for free. However, you should probably look at open-source AI. It is one of the fastest-growing areas. I think if you are concerned about privacy and profits, it would do you good to work with campaigns that are trying to get legislation passed in this area.

[–] DessertStorms@kbin.social 2 points 1 year ago

No worries, I wasn't personally insulted, I just think the words we use are important. Here is a good piece that talks more about it.

And thank you for the advice. I agree there are some smaller solutions coming through, but I worry that the environment they exist in (capitalism that already looks to exploit, and ableism on top of that) won't allow them to become viable solutions. I think the problem is not one that can be solved with legislation; it (not just AI but the system it and we exist under) is a much larger problem that needs a much bigger solution, and that's to abolish it and build better.

[–] phi1997@kbin.social 5 points 1 year ago

Hard not to think of the bad actors when they can do a lot of damage to society

[–] donuts@kbin.social 3 points 1 year ago

Unfortunately AI "art" is almost exclusively bad actors, using massive datasets of scraped and stolen work without consent, copyright, or license. It doesn't have to be that way, and hopefully in the very near future the ethics and legality will be clarified and things will change, but right now AI "art" is simply plagiarism on an unprecedented, industrialized scale.

By the way, there are quite a lot of disabled artists around today.

[–] Machinist3359@kbin.social 3 points 1 year ago (1 children)

You may be right in some ways, but I'd encourage you (or anyone) not to use theoretical disabled people as counterpoints. Ideally, cite something someone has actually said instead.

I understand the impulse, but doing so often makes people today sound more disabled than they are and puts words in the community's mouth.

There are paraplegics writing and creating art today. They have a long list of needs from society that precedes AI assistance.

More nefarious people (not saying you, to be clear) also do this to veil shitty tech or policies. "Think of the disabled, with targeted advertisements based on personal data we'll make using the web less burdensome"

[–] DaniAlexander@kbin.social 3 points 1 year ago (1 children)

I think your point is kind of silly. There are lots of people who can do lots of things, but there are still people who can't. I am also disabled, but I realize there are other people, also disabled, who cannot do what I can do. I think it's pretty clear I was speaking of them, and I'm not sure why they are suddenly unimportant to this discussion. In a discussion like this, the most incredible breakthroughs are the ones that should be touted, imo. And also in my opinion, the ability to create where you couldn't before, the ability to express the imagination that has been locked inside your head, is the greatest gift AI will give.

Maybe you don't feel the same way. That's fine but don't discount people with disabilities who cannot write or create right now.

[–] Machinist3359@kbin.social 2 points 1 year ago (1 children)

To be clear I'm not saying there's no value to such improvements, but specifically want people to exercise caution in the realm of the hypothetical.

Rather, we should lift up actual evidence and the voices of the people affected. If such disabled people are hard to find, that's a good reason to reframe. Sometimes the actual needs are much less hypothetical. Sometimes the hypothetical greatly overestimates the tech.

To root this discussion: maybe link to a paraplegic speaking on creative AI tools? Or similar examples of AI being used for a11y today that indicate this trend is realistic and a priority.

[–] DaniAlexander@kbin.social 1 points 1 year ago

Such people are not hard to find it's just that this discussion is never centered around them. Why? Because this was out of the realm of possibility. This was just not on the radar for people.

[–] Alleywurds@kbin.social 7 points 1 year ago (1 children)

At a minimum, you haven't considered the benefits to medicine.

[–] donuts@kbin.social 2 points 1 year ago (1 children)

You possibly haven't considered the impact of unethical tech companies and governments using AI and pilfered genetic data to do any number of fucked up things.

[–] Alleywurds@kbin.social 2 points 1 year ago

No, I definitely have, and I have written a good bit about and with AI. I just interviewed someone who works in AI for medicine last week for my podcast.

Any tools can be used in fucked up ways, but that doesn't mean that they have literally no positive uses either.

[–] AndrewZabar@beehaw.org 1 points 1 year ago (1 children)

It’s already over. Our entire society is gonna collapse in the next couple of decades because of AI and climate change. So… I dunno, brace yourself.

Humanity has no self-restraint. If they can, they do. For money. For power. For advantage. For lust. Anything and everything.

Sorry to be such a downer, but the show’s already over. If you think otherwise, tell me please who and how is going to prevent lies, misinformation and deceit on a mass scale from ripping society apart? I hate to be right in this case, but I am. :-(

[–] Aatube@kbin.social 1 points 1 year ago (1 children)
[–] AndrewZabar@beehaw.org 0 points 1 year ago (1 children)

Clicks = money = supersede any desire to remove content because of fact checking. Lies yield money as green as the money truth yields.

[–] Aatube@kbin.social 1 points 1 year ago (1 children)

More AI lies = People see patterns = Less clicks, eventual scandal = Bankrupt or tabloid status
Not to mention most credible journalism outlets need to maintain credibility to keep their money

[–] AndrewZabar@beehaw.org 0 points 1 year ago (1 children)

People wouldn’t see patterns if you painted racing stripes on their shit. Yeah, some people are smart enough, but the vast majority, not.

[–] Aatube@kbin.social 1 points 1 year ago

Patterns on these outlets being wrong.

[–] CoderKat@lemm.ee 1 points 1 year ago* (last edited 1 year ago)

AI-powered tooling is amazing. I already use it regularly for my work (I'm a programmer). It's primarily in the form of intelligent autocomplete (look up GitHub Copilot for an example). But AI can also do stuff like catch some bugs, automatically address code review comments, etc. I look forward to seeing it generate larger blocks accurately (in particular, I'd love it to automate test generation; it currently can only handle very basic cases).

I'm sure other industries can benefit similarly. Eg,

  • Video game level design could take in some assets you made in the theme of what you want and then generate slight variations. We've already had procedural generation of stuff like plants for ages (you can generate countless slightly different tree models, for example). This is just the next step into more complicated structures. For example, suppose you're making a huge office space, like in Control. Many desks and whiteboards in that game suffer from asset reuse. AI could help give slight variations to make the setting feel more natural.
  • Graphics designers I'm sure already benefit from "magic eraser" functionality. It used to be time consuming to remove something from an image. Now it's easy. I'm sure the next step is generally easier image editing, like moving objects in an image (Google demoed something like that at I/O).
  • Countless scientific uses, especially for chemistry and biology, because AI can be really great at constraint solving. We're already seeing this. Specialized AI is better than doctors at diagnosing certain tumors, for example.
[–] IONLYpost@kbin.social 2 points 1 year ago

I remember when people called the use of deepfakes frightening. lmao, this is merely the beginning.
