this post was submitted on 03 Sep 2023
210 points (100.0% liked)

Technology

[–] dsemy@lemm.ee 36 points 1 year ago (3 children)

You don’t understand why people on Lemmy, an alternative platform not controlled by corporations, might not want to get in a car literally controlled by a corporation?

I can easily see a future where your car locks you in and drives you to a police station if you do something “bad”.

As to their safety, I don’t think there are enough AVs to really judge this yet; of course Cruise’s website will claim Cruise AVs cause less accidents.

[–] IWantToFuckSpez@kbin.social 21 points 1 year ago (2 children)

I can imagine a future where there's gridlock in front of the police station, full of AVs carrying black people, whenever the cops send out an APB with the description of a black suspect.

We’ve seen plenty of racist AI programs in the past because the programmers, intentionally or not, baked their own biases into the training data.

[–] lol3droflxp@kbin.social 8 points 1 year ago

Any dataset sourced from human activity (e.g. internet text, as with ChatGPT) will always carry the current societal bias.
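As a toy illustration (my own sketch, not from the thread; the groups and sampling rates are made up): if historical records over-sample one group, a model that just learns base rates from those records reproduces the skew, even when the underlying population is perfectly balanced.

```python
import random

random.seed(0)

# A perfectly balanced population: each person belongs to group A or B.
population = ["A" if random.random() < 0.5 else "B" for _ in range(10_000)]

# Biased data collection: group A members get recorded 9x as often
# (e.g. because of where enforcement historically happened).
records = [g for g in population if g == "A" for _ in range(9)]
records += [g for g in population if g == "B"]

# A naive model that learns base rates from the records now "believes"
# group A dominates, even though the population is 50/50.
p_a_population = population.count("A") / len(population)
p_a_records = records.count("A") / len(records)
print(p_a_population, p_a_records)  # ~0.5 in reality vs ~0.9 in the records
```

The bias here is entirely in how the data was gathered; no line of the "model" mentions group membership at all.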

[–] jarfil@beehaw.org 1 points 1 year ago

The AIs are not racist themselves; the bias is a side effect of the full technology stack. Cameras have less dynamic range in darker tones, and images get encoded with a gamma curve that leaves fewer code values for darker areas. So an AI that works fine on images of light-skinned faces simply gets less information from images of dark-skinned faces, leading to higher uncertainty and more false positives.

The bias starts with the cameras themselves: security cameras in particular should have an even higher dynamic range than the human eye, but instead they're often a cheap afterthought, and then it's anyone's guess what they've actually recorded.
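To make the "fewer code values for darker areas" point concrete, here's a rough sketch (my own illustration; the reflectance ranges are hypothetical, the transfer function is the standard sRGB one) counting how many distinct 8-bit codes cover a darker vs. a lighter range of linear light:

```python
def srgb_encode(linear):
    """Standard sRGB transfer function: linear light in [0,1] -> encoded [0,1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def code_span(lo, hi):
    """Number of distinct 8-bit code values covering a linear-light range."""
    return round(srgb_encode(hi) * 255) - round(srgb_encode(lo) * 255)

# Hypothetical reflectance ranges for facial detail under the same lighting:
dark_span = code_span(0.02, 0.06)   # darker skin tones
light_span = code_span(0.20, 0.60)  # lighter skin tones
print(dark_span, light_span)  # the darker range maps to far fewer codes
```

Fewer distinct codes means less signal for a detector to work with, before any sensor noise or compression is even considered.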

[–] Thorny_Thicket@sopuli.xyz 5 points 1 year ago (1 children)

You're putting words in my mouth. I wasn't talking about people on Lemmy not wanting to get into one of these vehicles.

The people here don't seem to want anyone getting into these vehicles. Many are advocating an all-out ban on self-driving cars and demanding they be polished to near perfection on closed roads before being allowed on public ones, even though what little data we already have mostly indicates they're at worst as good as human drivers.

If it's about Teslas, the complaint is often the lack of LiDAR and radar; when it's about Cruise, which has both, it's then apparently about corruption. In both cases the reaction tends to be mostly emotional, which is why, every time someone provides statistics to back up the safety claims, they just get dismissed as marketing bullshit.

[–] dsemy@lemm.ee 8 points 1 year ago (2 children)

Honestly? I don’t want anyone to use AVs because I fear they will become popular enough that eventually I’ll be required to use one.

I honestly haven’t done enough research on AV safety to feel comfortable claiming anything concrete about it. I personally don’t feel comfortable with it yet since the technology is very new and I essentially need to trust it with my life. Maybe in a few years I’ll be more convinced.

[–] Thorny_Thicket@sopuli.xyz 5 points 1 year ago

I hear you. I love driving and I have zero interest in buying a self-driving vehicle. However, I can still step outside my own preferences and look at it objectively enough to see that it's just a matter of time until AI gets so good at driving that letting a human do it could be considered irresponsible. I don't like it, but that's progress.

[–] teawrecks@sopuli.xyz 1 points 1 year ago* (last edited 1 year ago) (1 children)

Travelling in a community whose public roads require 100% AVs will probably be the safest implementation of driving, period. But if you don't trust the tech, then just don't live in or travel through such a community.

I suspect we'll see AV-only lanes on the highway soon, and people will realize how much faster you can get through traffic without tailgaters and lane weavers constantly causing micro-inefficiencies at best, and wrecks at worst.

[–] jarfil@beehaw.org 1 points 1 year ago

Once vehicle-to-vehicle communication improves and gets standardized, it will be interesting to see "AV road trains" running almost bumper to bumper, speeding up and slowing down in unison.
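The appeal of V2V platooning can be sketched in a few lines (a toy simulation of my own, not any real protocol): if followers react to the leader's broadcast speed instead of to the shrinking gap ahead of them, spacing stays constant no matter how the leader accelerates or brakes.

```python
def simulate(lead_speeds, n_followers, gap=8.0):
    """Toy 'road train': every car applies the leader's broadcast speed each tick."""
    positions = [-gap * i for i in range(n_followers + 1)]  # leader first
    history = []
    for v in lead_speeds:
        # With V2V, all cars learn the leader's speed at once and match it,
        # instead of each reacting (with lag) to the car directly ahead.
        positions = [p + v for p in positions]
        history.append(list(positions))
    return history

# Leader speeds up, then brakes hard; followers mirror it with no lag.
hist = simulate([30, 32, 28, 25], n_followers=3)
gaps = [hist[-1][i] - hist[-1][i + 1] for i in range(3)]
print(gaps)  # spacing is unchanged after every maneuver
```

Without the broadcast, each follower would only see the gap change after the car ahead had already moved, which is exactly how human stop-and-go shockwaves form.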

[–] teawrecks@sopuli.xyz 2 points 1 year ago (1 children)

Autonomous driving isn't necessarily controlled by a corporation any more than your PC is. Sure, the earliest computers were all built and run by corporations and governments, but today we all enjoy (the choice of) computing autonomy because of those innovations.

I can be pro AV and EV without being pro corporate control over the industries. It's a fallacy to conflate the two.

The fact is, letting humans drive in a world with AVs is like letting humans manually manage database entries in a world with MySQL. And the biggest difficulty is that we're trying to live in a world where humans and computers are "working out of the same database at the same time". That's a much harder problem to solve than just having robots do it all.

I still have a gas powered manual that I love driving, but I welcome the advancement in EV/AV technology, and am ready to adopt it as soon as sufficient open standards and repairability can be offered for them.

[–] Kornblumenratte@feddit.de 10 points 1 year ago (1 children)

Autonomous driving isn't necessarily controlled by a corporation any more than your PC is.

That's just outright wrong.

Modern cars communicate with their manufacturer, and we don't have any means to control this communication.

I can disconnect my PC from the internet, I cannot disconnect my car. I can install whatever OS and apps pleases me on my PC, I cannot do anything about the software on my car's computer.

So, while I can take full control of my PC if it pleases me, I cannot take any control of my car.

[–] teawrecks@sopuli.xyz 5 points 1 year ago (1 children)

With all due respect, you're still not understanding what I'm saying.

If you traveled back 50+ years, to when computers took up several hundred square feet, one might have made the same argument: "don't rent time on IBM's mainframe, they can see everything you're computing and could sell it to your competitor! Computers are always owned by the corporate elite, therefore computers are bad and the technology should be avoided!" But fast forward to today, and you can own your own PC and do everything you want with it, without anyone else involved. The tech progressed. It wasn't wrong to distrust corporate-owned computing, but the future of a technology is completely independent of the corporations who first develop it.

For a more recent example: nearly a year ago, ChatGPT was released to the world. It was the first time most people had any experience with an LLM, and everything you sent the bot was fed to a proprietary, for-profit algorithm to further corporate interests. One might have been tempted to say that LLMs and AI would always serve the corporate elite, and that we should avoid the technology like the plague. But fast forward to now, not even one year later, and people have replicated the tech in open source projects which you can run locally on your own hardware. Even Meta (the epitome of corporate control) has open-sourced LLaMA to run for your own purposes without them seeing any of it (granted, the license restricts what you can do commercially).

The story is the same for virtually any new technology, so my point is that denouncing all of AVs because corporations own them today is demonstrably shortsighted. Again, I'm not interested in the proprietary solutions available right now, but once the tech develops and we start seeing some open standards and repairability enter the picture, I'll be all for it.

[–] abhibeckert@beehaw.org 1 points 1 year ago* (last edited 1 year ago) (1 children)

nearly 1 year ago, ChatGPT was released to the world. It was the first time most people had any experience with a LLM. And everything you sent to the bot was given to a proprietary, for profit algorithm to further their corporate interests

You might want to pick another example, because OpenAI was originally founded as a non-profit organisation, and in order to avoid going bankrupt it became a "capped-profit" organisation, which allowed it to raise funding from more sources but doesn't really allow it to ever become a big greedy tech company. All it can do is offer some potential return to the people who are giving it billions of dollars with no guarantee they'll ever get it back.

[–] teawrecks@sopuli.xyz 2 points 1 year ago

Maybe reread my post. I specifically picked ChatGPT as an example of proprietary corporate control over LLM tech.