AI will revolutionize the blackmail industry
Or kill it. If every image is suspect, real crimes/transgressions become a lot harder to prove.
You can't un-ring the bell.
Yep. When everything is believable, nothing is believable. We're already mostly post-truth without AI. I don't want to see what comes next.
That would be in the far future, if ever. Right now there are many ways to identify an AI-generated picture, especially if it has people in it (the infamous hands problem). And there's software that can reliably tell, too.
It's, as always, not a question of the tech but of how we deal with it. If we were all reasonable and questioned such pictures when they pop up somewhere, you'd be right. However, if just enough people believe them to be real, the damage is done, even if they are proven to be fake afterwards.
That is most likely why blackmail, and people believing fakes are real, didn't die out and never will. Too many people believe what they want to believe, no matter what the actual truth is.
Frankly I'm struggling to see even a single upside to AI at this point. Shit like this fucking sucks.
You’re just jealous that Bernie’s got a hot babe.
Bernie IS the hot babe
So... who has the deepfakes of Bernie-on-Bernie action?
That’s like saying you’re struggling to see even a single upside to explosives.
I can genuinely see more upsides to explosives.
Better combat in games, massive increases in technology as a result of AI designing things, the ability to test millions of potential medications at once, guidance counseling, grief counseling, counseling in general, once it's gotten over its hallucination issues massive increases in intellectual development, the ability to fully automate supply chains, the ultimate sword-and-shield combo with MAD hopefully ending all physically violent wars, and the fact that eventually anything a human can do, an AI could do better.
I think you're overestimating its potential... leaving very critical things to a machine is not a good idea, and I'm not sure how it will test medications
We leave critical things to humans all the time. We're just machines with a shitload more failure points. And here is how it designs and tests multiple medications at once. I don't think that article even touches on protein folding, which is another big one.
I understand leaving critical things to predictable machines but very critical things like counseling (preventing people from things like suicide) and nuclear warring (which should already have experienced commanders) should not be left to unpredictable machines.
Text generation, general advice, barebones stuff, easier way to create art, easier way to create, finding patterns and detecting things
AI will give opportunities to the handicapped in the very near future. Imagine a paraplegic artist or coder or sculptor who can describe their 'vision' to a machine. They can do that now, even with a picture. Soon they'll be able to do it with a 3D machine. I don't mean 'make a painting of the mountains' but instead 'cadmium red mixed with yellow, brush stroke in circles'.
When you think of AI try not to think of the bad actors. Try to think of the good things that can come from it. All the worlds that will be opened up for people.
I don't disagree with the point you're making*, but please, #SayTheWord - we are disabled, not handicapped (note that at the end of this they also discuss a shift to person first language, as in "person with disability", which some people do prefer, but many others, myself included, still favour simply "disabled" or "disabled person/adult/child/whatever is relevant").
*I will just say that disabled people currently needing, in most cases, to exchange privacy and sometimes even security for access to these new technologies, so that the companies selling these devices can make even more money, is not something we should be ok with. We should be fighting for accessibility that isn't dependent on profiteering, but instead on the actual will to include disabled people in society.
There are lots of open source projects involving AI that you can run on your personal computer. I think the community-driven projects are heading in the right direction, but it's completely opposite for the ones owned by corporations as they're only driven by profit margins, not people.
The problem with "open source" in the context of AI is that the source code is a much smaller factor than the training dataset. AI companies running around and scraping everybody's data as if they own anything they see is a real problem raising massive ethical and legal concerns.
That's great (genuinely), but unfortunately having to work outside the mainstream brings its own hurdles. This isn't on the same level, but consider Twitter vs Mastodon or Reddit vs Lemmy: the corporate solution is shiny and easy and requires little to no effort from the end user, while the other requires a little more understanding, effort, and comfort with technology, and might not appeal, or even be known, to many. Sure, people can look it up and learn it, but that looking and learning are hurdles, and when it comes to accessibility devices, those hurdles tend to be more significantly in the way.
To be clear, I am not trying to shit on the open source stuff, I do genuinely think it's great, but like so many of the solutions we currently have to work with, it's a band-aid on a cancer. We need to remove the cancer.
I apologize that my choice of language was insulting to you. I am disabled (using your word, though I grew up with and am comfortable with my own terms), so I rarely think about terminology for myself. I'll try to remember in the future.
As for your point, while I do see a problem with excess profits on the backs of other people, I also realize that innovation does not come for free. However, you should probably look at open source AI. It is one of the fastest growing areas. If you are concerned about privacy and profits, it would probably do you good to work with campaigns that are trying to get legislation passed in this area.
No worries, I wasn't personally insulted, I just think the words we use are important. Here is a good piece that talks more about it.
And thank you for the advice. I agree there are some smaller solutions coming through, but I worry that the environment they exist in (capitalism that already looks to exploit, with ableism on top of that) won't allow them to become viable solutions. I think the problem is not one that can be solved with legislation; it (not just AI but the system it and we exist under) is a much larger problem that needs a much bigger solution, and that's to abolish it and build better.
Hard not to think of the bad actors when they can do a lot of damage to society
Unfortunately AI "art" is almost exclusively bad actors, using massive datasets of scraped and stolen work without consent, copyright, or license. It doesn't have to be that way, and hopefully in the very near future the ethics and legality will be clarified and things will change, but right now AI "art" is simply plagiarism on an unprecedented, industrialized scale.
By the way, there are quite a lot of disabled artists around today.
You may be right in some ways, but I'd encourage you (or anyone) not to use theoretical disabled people as counterpoints. Ideally, cite something someone has actually said instead.
I understand the impulse, but doing so often makes people sound more disabled than they are and puts words in the community's mouth.
There are paraplegics writing and creating art today. There is a great list of needs they have from society which precedes AI assistance.
More nefarious people (not saying you, to be clear) also do this to veil shitty tech or policies. "Think of the disabled, with targeted advertisements based on personal data we'll make using the web less burdensome"
I think your point is kind of silly. There are lots of people who can do lots of things, but still people who can't. I am also disabled. But I realize there are other people, also disabled, who cannot do what I can do. I think it's pretty clear I was speaking of them, and I'm not sure why they are suddenly unimportant to the discussion. In a discussion like this, the most incredible breakthroughs are the ones that should be touted, imo. And also in my opinion, the ability to create where you couldn't before, the ability to express the imagination that has been locked inside your head, is the greatest gift AI will give.
Maybe you don't feel the same way. That's fine but don't discount people with disabilities who cannot write or create right now.
To be clear I'm not saying there's no value to such improvements, but specifically want people to exercise caution in the realm of the hypothetical.
Rather, we should lift up actual evidence and the voices of the people affected. If such disabled people are hard to find, that's a good reason to reframe. Sometimes the actual needs are much less hypothetical. Sometimes the hypothetical greatly overestimates the tech.
To ground this discussion, maybe link to a paraplegic person speaking on creative AI tools? Or similar examples of AI being used for a11y today, which would indicate this trend is realistic and a priority.
Such people are not hard to find it's just that this discussion is never centered around them. Why? Because this was out of the realm of possibility. This was just not on the radar for people.
At a minimum, you haven't considered the benefits to medicine.
You possibly haven't considered the impact of unethical tech companies and governments using AI and pilfered genetic data to do any number of fucked up things.
No, I definitely have, and I have written a good bit about and with AI. I just interviewed someone who works in AI for medicine last week for my podcast.
Any tool can be used in fucked up ways, but that doesn't mean it has literally no positive uses.
It’s already over. Our entire society is gonna collapse in the next couple of decades because of AI and climate change. So… I dunno, brace yourself.
Humanity has no self-restraint. If they can, they do. For money. For power. For advantage. For lust. Anything and everything.
Sorry to be such a downer, but the show's already over. If you think otherwise, please tell me who is going to prevent lies, misinformation, and deceit on a mass scale from ripping society apart, and how? I hate to be right in this case, but I am. :-(
AI fact checkers?
Clicks = money = supersede any desire to remove content because of fact checking. Lies yield money as green as the money truth yields.
More AI lies = People see patterns = Less clicks, eventual scandal = Bankrupt or tabloid status
Not to mention most credible journalism outlets need to maintain credibility to keep their money
People wouldn’t see patterns if you painted racing stripes on their shit. Yeah, some people are smart enough, but the vast majority, not.
Patterns on these outlets being wrong.
AI-powered tooling is amazing. I already use it regularly for my work (I'm a programmer). It's primarily in the form of intelligent auto-complete (look up GitHub Copilot for an example). But AI can also do stuff like catch some bugs, automatically address code review comments, etc. I look forward to seeing it generate larger blocks accurately (in particular, I'd love it to automate test generation -- it currently can only handle very basic cases).
I'm sure other industries can benefit similarly. Eg,
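To make the "very basic cases" point concrete, here's a hypothetical sketch (the function and test names are invented for illustration, not from any real tool's output) of the kind of small, self-contained routine plus happy-path unit test that current AI assistants can already draft reliably:

```python
import re

# Hypothetical example function -- the sort of small, self-contained
# routine AI code assistants handle well today.
def slugify(title: str) -> str:
    """Lowercase a title and collapse runs of non-alphanumerics into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The kind of basic test an assistant might auto-generate: fine for the
# happy path, but edge cases (Unicode titles, huge inputs, odd encodings)
# still need a human to think of them.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  AI & Art  ") == "ai-art"
    assert slugify("") == ""

test_slugify()
```

The generated test above covers only the obvious inputs, which is exactly the current limitation: the assistant mirrors the cases it can infer from the signature and docstring, not the ones that actually break in production.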
I remember when people called deepfakes frightening, lmao. This is merely the beginning.