ForgottenFlux

joined 9 months ago
 

Signups opened this week for Loops, a short-form looping video app from the creator of Instagram alternative Pixelfed, reports TechCrunch.

 

Microsoft has fired two employees who organized an unauthorized vigil at the company’s headquarters for Palestinians killed in Gaza during Israel’s war with Hamas.

The two employees told The Associated Press they were fired by phone call late Thursday, several hours after a lunchtime event they organized at Microsoft’s campus in Redmond, Washington.

Both workers were members of a coalition of employees called “No Azure for Apartheid” that has opposed Microsoft’s sale of its cloud-computing technology to the Israeli government. But they contended that Thursday’s event was similar to other Microsoft-sanctioned employee giving campaigns for people in need.

“We have so many community members within Microsoft who have lost family, lost friends or loved ones,” said Abdo Mohamed, a researcher and data scientist. “But Microsoft really failed to have the space for us where we can come together and share our grief and honor the memories of people who can no longer speak for themselves.”

Microsoft said Friday it has “ended the employment of some individuals in accordance with internal policy” but declined to provide details.

Google earlier this year fired more than 50 workers in the aftermath of protests over technology the company is supplying the Israeli government amid the Gaza war. The firings stemmed from internal turmoil and sit-in protests at Google offices centered on “Project Nimbus,” a $1.2 billion contract signed in 2021 for Google and Amazon to provide the Israeli government with cloud computing and artificial intelligence services.

 

Powered wheelchairs – a sector dominated by a cartel of private-equity backed giants that have gobbled up all their competing firms – have a serious DRM problem.

Powered wheelchair users who need even basic repairs are corralled by DRM into using the manufacturer’s authorized depots, often enduring long waits during which they are unable to leave their homes or even their beds. Even small routine adjustments, like changing the wheel torque after adjusting your tire pressure, can require an official service call.

People with disabilities don’t just rely on devices that their bodies go into; gadgets that go into our bodies are increasingly common, and there, too, we have a DRM problem. DRM is common in implants like continuous glucose monitors and insulin pumps, where it is used to lock people with diabetes into a single vendor’s products, as a prelude to gouging them (and their insurers) for parts, service, software updates and medicine.

Even when a manufacturer walks away from its products, DRM creates insurmountable legal risks for third-party technologists who want to continue to support and maintain them. That’s bad enough when it’s your smart speaker that’s been orphaned, but imagine what it’s like to have an orphaned neural implant that no one can support without risking prison time under DRM laws.

Imagine what it’s like to have the bionic eye that is literally wired into your head go dark after the company that made it folds up shop – survived only by the 95-year legal restrictions that DRM law provides for, restrictions that guarantee that no one will provide you with software that will restore your vision.

Every technology user deserves the final say over how the systems they depend on work. In an ideal world, every assistive technology would be designed with this in mind: free software, open-source hardware, and designed for easy repair.

But we’re living in the Bizarro world of assistive tech, where not only is it normal for tools for people with disabilities to be designed without any consideration for the user’s ability to modify the systems they rely on, but companies actually dedicate extra engineering effort to creating legal liability for anyone who dares to adapt their technology to suit their own needs.

Even if you’re able-bodied today, you will likely need assistive technology or will benefit from accessibility adaptations. The curb-cuts that accommodate wheelchairs make life easier for kids on scooters, parents with strollers, and shoppers and travelers with rolling bags. The subtitles that make TV accessible to Deaf users allow hearing people to follow along when they can’t hear the speaker (or when the director deliberately chooses to muddle the dialog). Alt tags in online images make life easier when you’re on a slow data connection.

Fighting for the right of disabled people to adapt their technology is fighting for everyone’s rights.

 

A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search’s reliance on Reddit and Google's AI Overview.

Apparently, some London residents are getting fed up with social media influencers whose reviews draw long lines of tourists to their favorite restaurants, sometimes just for the likes. Christian Calgie, a reporter for London-based news publication Daily Express, pointed out this trend on X yesterday, noting the boom of Redditors referring people to Angus Steakhouse, a chain restaurant, to combat it.

At this point, the Angus Steakhouse hype doesn’t appear to have made it into AI Overview, but it is appearing in Search results. And while this is far from a dangerous attempt to manipulate search results or AI algorithms, it does highlight the pitfalls of Google results becoming dependent on content generated by users who could very easily have intentions other than providing helpful information. This is also far from the first time that online users, including on platforms outside of Reddit, have publicly declared plans to make inaccurate or misleading posts in an effort to thwart AI scrapers.

 

2024 has seen two mass layoffs at Microsoft, with 1,900 staff laid off in January, before a further 650 Xbox employees were shown the door in September.

Regardless, Microsoft's shares are up and the company's market value is now higher than $3tn, as it works to capitalise on the rise of AI.

 

Cable companies, advertising firms, and newspapers are asking courts to block a federal "click-to-cancel" rule that would force businesses to make it easier for consumers to cancel services. Lawsuits were filed yesterday, about a week after the Federal Trade Commission approved a rule that "requires sellers to provide consumers with simple cancellation mechanisms to immediately halt all recurring charges."

The 5th Circuit is generally regarded as the nation's most conservative, but the 6th Circuit also has a majority of judges appointed by Republican presidents. When identical lawsuits are filed in multiple circuits, the Judicial Panel on Multidistrict Litigation randomly selects a court to handle the case.

The NCTA cable lobby group, which represents companies like Comcast and Charter, has complained about the rule's impact on their ability to talk customers out of canceling. NCTA CEO Michael Powell claimed during a January 2024 hearing that "a consumer may easily misunderstand the consequences of canceling and it may be imperative that they learn about better options" and that the rule's disclosure and consent requirements raise "First Amendment issues."

"Too often, businesses make people jump through endless hoops just to cancel a subscription," FTC Chair Lina Khan said. "The FTC's rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want."

 

The administrative penalties, which are worth around $335 million at current exchange rates, have been issued by Ireland’s Data Protection Commission (DPC) under the European Union’s General Data Protection Regulation (GDPR). The regulator found a raft of breaches, including breaches of the lawfulness, fairness and transparency of its data processing in this area.

The GDPR requires that uses of people’s information have a proper legal basis. In this case, the justifications LinkedIn had relied upon to run its tracking ads business were found to be invalid. It also did not properly inform users about its uses of their information, per the DPC’s decision.

LinkedIn had variously claimed “consent,” “legitimate interests” and “contractual necessity” as legal bases for processing people’s information (whether obtained directly or from third parties) to track and profile its users for behavioral advertising. However, the DPC found that none were valid. LinkedIn also failed to comply with the GDPR principles of transparency and fairness.

 

While it’s certainly smart to finally start tracking the sale of sensitive U.S. consumer data to foreign countries in more detail (and blocking direct sales to some of the more problematic adversaries), it’s kind of like building barn doors four years after all the animals have already escaped.

We’ve noted for most of the last two decades how a huge variety of apps, telecoms, hardware vendors, and other services and companies track pretty much your every click, physical movement, and behavior, then sell access to that data to a broad array of super dodgy and barely regulated data brokers.

These data brokers then turn around and sell access to this data to a wide assortment of random nitwits, quite often without any sort of privacy and security standards. That’s resulted in a flood of scandals from stalkers tracking women to anti-abortion zealots buying clinic visitor data in order to target vulnerable women with health care misinformation.

This continues to happen for two reasons: at every last step, U.S. leaders put making money above public safety and consumer protection, and the U.S. government has discovered that buying this data is a fantastic way to avoid having to get pesky warrants. This all occurs against the backdrop of a relentless effort to turn all U.S. consumer protection regulators into decorative cardboard cutouts.

 

Elon Musk may have personally used AI to rip off a Blade Runner 2049 image for a Tesla cybercab event after producers rejected any association between their iconic sci-fi movie and Musk or any of his companies.

In a lawsuit filed Tuesday, lawyers for Alcon Entertainment—exclusive rightsholder of the 2017 Blade Runner 2049 movie—accused Warner Bros. Discovery (WBD) of conspiring with Musk and Tesla to steal the image and infringe Alcon's copyright to benefit financially off the brand association.

Alcon said it would never allow Tesla to exploit its Blade Runner film, so "although the information given was sparse, Alcon learned enough information for Alcon’s co-CEOs to consider the proposal and firmly reject it, which they did." Specifically, Alcon denied any affiliation—express or implied—between Tesla's cybercab and Blade Runner 2049.

"Musk has become an increasingly vocal, overtly political, highly polarizing figure globally, and especially in Hollywood," Alcon's complaint said. If Hollywood perceived an affiliation with Musk and Tesla, the complaint said, the company risked not only alienating other car brands currently weighing partnerships on the Blade Runner 2099 TV series Alcon has in the works, but also losing access to top Hollywood talent for its films.

The "Hollywood talent pool market generally is less likely to deal with Alcon, or parts of the market may be, if they believe or are confused as to whether, Alcon has an affiliation with Tesla or Musk," the complaint said.

Musk, the lawsuit said, is "problematic," and "any prudent brand considering any Tesla partnership has to take Musk’s massively amplified, highly politicized, capricious and arbitrary behavior, which sometimes veers into hate speech, into account."

If Tesla and WBD are found to have violated copyright and false representation laws, that potentially puts both companies on the hook for damages that cover not just copyright fines but also Alcon's lost profits and reputation damage after the alleged "massive economic theft."

 
  • PayPal to Share Shopping Details
  • LinkedIn Opts You In for AI Data Sharing
  • 23andMe May Sell Your DNA Data
 
[–] ForgottenFlux@lemmy.world 13 points 3 months ago

Summary:

  • Colorado passes first-in-nation law to protect the privacy of biological and brain data, which, like fingerprints, can be used to identify people.
  • Advances in artificial intelligence have led to medical breakthroughs, including devices that can read minds and alter brains.
  • Neurotechnology devices, such as Emotiv and Somnee, are used in health care and can let users control computers with their thoughts, improve brain function, or identify impairments.
  • Most of these devices are not regulated by the FDA and are marketed for wellness.
  • With benefits come risks, such as insurance companies discriminating, law enforcement interrogating, and advertisers manipulating brain data.
  • Medical research facilities are subject to privacy laws, but private companies amassing large caches of brain data are not.
  • The Neurorights Foundation found that two-thirds of these companies are already sharing data with or selling it to third parties.
  • The new law takes effect on Aug. 8, but it is unclear which companies are subject to it and how it will be enforced.
  • Pauzauskie and the Neurorights Foundation are pushing for a federal law and even a global accord to prevent brain data from being used without consent.