TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Don't mind me I'm just here to silently scream into the void

Edit: I'm no good at linking to HN apparently, made link more stable.

submitted 1 year ago* (last edited 1 year ago) by gerikson@awful.systems to c/techtakes@awful.systems

Title quote stolen from JWZ: https://www.jwz.org/blog/2023/10/the-best-way-to-profit-from-ai/

Yet again, the best way to profit from a gold rush is to sell shovels.


Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as a co-founder of Vox and of its EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI-doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.


One reason that, three and a half years later, Andreessen is reiterating that “it’s time to build” instead of writing posts called “Here’s What I Built During the Building Time I Previously Announced Was Commencing” is that Marc Andreessen has not really built much of anything.


I don’t really have much to say… it kind of speaks for itself. I do appreciate the table of contents so you don’t get lost in the short paragraphs though


archive.org | archive.is

this is almost NSFW? some choice snippets:

more than 1.5 million people have used it and it is helping build nearly half of Copilot users’ code

Individuals pay $10 a month for the AI assistant. In the first few months of this year, the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.

good thing it's so good that everyone will use it amirite

starting around $13 for the basic Microsoft 365 office-software suite for business customers—the company will charge an additional $30 a month for the AI-infused version.

Google, ..., will also be charging $30 a month on top of the regular subscription fee, which starts at $6 a month

I wonder how long they'll try that before they just force it on everyone (and raise all prices by some n%)
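
A quick back-of-the-envelope on the figures quoted above (my arithmetic, not the article's):

```python
# Rough implied costs from the figures quoted above (illustrative only).
price = 10       # $/month charged per Copilot seat
avg_loss = 20    # $/month average loss per user, per the report
worst_cost = 80  # $/month that some users reportedly cost to serve

avg_cost = price + avg_loss        # losing ~$20 on a $10 seat => ~$30/month to serve
worst_loss = worst_cost - price    # => ~$70/month lost on the heaviest users
print(f"implied average cost to serve a user: ${avg_cost}/month")
print(f"implied loss on the heaviest users:   ${worst_loss}/month")
```

That ~$30/month implied average cost is, notably, about what Microsoft and Google are now charging on top for their AI-infused tiers.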


Carole Piovesan (formerly of McCarthy Tétrault, now at INQ Law) describes this as a "step in the process to introducing some more sort of enforceable measures".

In this case the code of conduct contains some fairly innocuous things: managing risk, curating data to avoid biases, safeguarding against malicious use. It's your basic government industrial-safety boilerplate as applied to AI. Here, read it for yourself:

https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems

Now of course our country's captains of industry have certain reservations. One CEO of a prominent Canadian firm writes that "We don’t need more referees in Canada. We need more builders."

https://twitter.com/tobi/status/1707017494844547161

Another, whom you will recognize from my prior post (https://awful.systems/post/298283), is noted in the CBC article as concerned about "the ability to put a stifling growth in the industry". I am of course puzzled by this concern. Surely companies building these products are trivially capable of complying with such a basic code of conduct?

For my part I have difficulty seeing exactly how "testing methods and measures to assess and mitigate risk of biased output" and "creating safeguards against malicious use" would stifle industry and reduce building. My lack of foresight in this regard could be why I am a scrub behind a desk instead of a CEO.

Oh, and for bonus Canadian content, the name Desmarais from the photo (next to the Minister of Industry) jogged my memory. Oh right, those Desmarais. Canada will keep on Canada'ing to the end.

https://dailynews.mcmaster.ca/articles/helene-and-paul-desmarais-change-agents-and-business-titans/

https://en.wikipedia.org/wiki/Power_Corporation_of_Canada#Politics


Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".


Source: nitter, twitter

Transcribed:

Max Tegmark (@tegmark):
No, LLM's aren't mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally! We even discover a "longitude neuron"

Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2! [image with colorful dots on a map]


With this dastardly deliberate simplification of what it means to have a world model, we've been struck a mortal blow in our skepticism towards LLMs; we have no choice but to convert surely!

(*) Asterisk:
Not an actual literal map; what they really mean is that they've trained "linear probes" (its own little mini-model) on the activation layers for a bunch of inputs, minimizing loss against latitude and longitude (and/or time, blah blah).

And yes, from the activations you can get a fuzzy distribution of lat,long on a map, and yes, they've been able to isolate individual "neurons" that seem to correlate in activation with latitude and longitude. (Frankly, not being able to find one would have been surprising to me. This doesn't mean LLMs aren't just big statistical machines; in this case they were trained with data containing literal lat,long tuples for cities in particular.)

It's a neat visualization and result, but it is sort of comically missing the point.
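
For anyone wondering what "training a linear probe" actually amounts to here, a minimal sketch (random stand-in data and hypothetical names throughout, not the paper's code): collect hidden-layer activations for a pile of place-name prompts, then fit a single linear map from those vectors to known latitude/longitude.

```python
# Minimal linear-probe sketch (illustrative; stand-in data, not the paper's code).
# Assume that for N place names you already have:
#   acts:   (N, d) hidden-layer activations pulled from the LLM
#   coords: (N, 2) known (latitude, longitude) labels
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N, d = 1000, 4096                        # d ~ a Llama-2-7B hidden size
acts = rng.normal(size=(N, d))           # stand-in for real activations
coords = np.column_stack([rng.uniform(-90, 90, N), rng.uniform(-180, 180, N)])

X_tr, X_te, y_tr, y_te = train_test_split(acts, coords, random_state=0)

probe = Ridge(alpha=1.0).fit(X_tr, y_tr)   # the "mini-model": one linear map
print("held-out R^2:", probe.score(X_te, y_te))

# A "longitude neuron" is just a single activation dimension that happens to
# correlate strongly with longitude on its own.
lon = coords[:, 1]
corrs = np.array([np.corrcoef(acts[:, j], lon)[0, 1] for j in range(d)])
print("most longitude-correlated unit:", int(np.abs(corrs).argmax()))
```

Geography being linearly decodable from the activations is a much more modest claim than "a literal map of the world".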


Bonus sneers from @emilymbender:

  • You know what's most striking about this graphic? It's not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It's just how sparse the data from the Global South are. -- Also, no, that's not what "world model" means if you're talking about the relevance of world models to language understanding. (source)
  • "We can overlay it on a map" != "world model" (source)

Direct link to the video

B-b-but he didn't cite his sources!!


After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.


The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. We’re looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

“Whoops, it’s done now, oh well, guess we’ll have to do it later”

Go fucking directly to jail


These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

"Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto", per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

"(Jeff) Macpherson is a director and co-founder at Xagency.AI", a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s "over 7 years in the tech sector" which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

"Illustrator Martin Deschatelets" whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

"Ottawa economist Armine Yalnizyan", per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the "we" who have to adapt here?

AI is apparently "something that can tell you how many cows are in the world" (J.M.). Detecting a lack of results validation here again.

"At the end of the day that's what it's all for. The efficiency, the productivity, to put profit in all of our pockets", from J.M.

"You now have the opportunity to become a Prompt Engineer", from J.M. to the author and illustrator. (It's worth watching the video to listen to this person.)

Me about the article:

I'm feeling that same underwhelming "is this it" bewilderment again.

Me about the video:

Critical thinking and ethics and "how software products work in practice" classes for everybody in this industry please.


I found this searching for information on how to program for the old Commodore Amiga’s HAM (Hold And Modify) video mode and you gotta touch and feel this one to sneer at it, cause I haven’t seen a website this aggressively shitty since Flash died. the content isn’t even worth quoting as it’s just LLM-generated bullshit meant to SEO this shit site into the top result for an existing term (which worked), but just clicking around and scrolling on this site will expose you to an incredible density of laggy, broken full screen animations that take way too long to complete and block reading content until they’re done, alongside a long list of other good design sense violations (find your favorites!)

bonus sneer: arguably I’m finally taking up Amiga programming as an escape from all this AI bullshit. well fuck me I guess cause here’s one of the vultures in the retrocomputing space selling an enshittified (and very ugly) version of AmigaOS with a ChatGPT app and an AI art generator, cause not even operating on a 30 year old computer will spare me this bullshit:

like fuck man, all I want to do is trick a video chipset from 1985 into making pretty colors. am I seriously gonna have to barge screaming into another German demoscene IRC channel?


I think I giggled all the way through this one.

Pebble, a Twitter-style service formerly known as T2, today launched a new approach: Users can skip past its “What’s happening?” nudge and click on a tab labeled Ideas with a lightbulb icon, to view a list of AI-generated posts or replies inspired by their past activity. Publishing one of those suggestions after reviewing it takes a single click.

Gabor Cselle, Pebble’s CEO, says this and generative AI features to come will enable a kinder, safer, and more fun experience. “We want to make sure that you see great content, that you're posting great content, and that you're interacting with the community,” he says.

How is it "kinder, safer, and more fun"?

Cselle says he recognizes the perils of offering AI-generated text to users, and that users are free to edit or ignore the suggestions. “We don’t want a situation where bots masquerade as humans and the entire platform is just them talking to each other,” he says.

To protect the integrity of the community as it throws open the door to over 300 million people, Pebble will also be using generative AI to vet new signups. The system will use OpenAI’s GPT-3.5 model to compare the X bio and recent posts of people against Pebble’s community guidelines, which in contrast to Musk’s service ban all nudity and violent content.

Pebble CTO Mike Greer says the aim is to determine “whether someone is fundamentally toxic and treats other people poorly.” Those who are or do will be blocked and manually reviewed. Pebble intends to vet would-be users against “other sources of truth” online once it opens signups further, he says, to include people without an X account.


There are too many quotable passages, so I'll stop there.

My favourite thing about these products is how they want to take on giants with these differentiating features that would be trivial plug-ins for the giants if they were to pose any threat. It's common in the enterprise blockchain world as well. It'll take SAP much less time to figure out blockchain than it will for your shitty blockchain startup to work out whatever SAP is.


the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @dgerard@awful.systems):

Naw, I figured it out; they absolutely don't care if AI doesn't work.

They really don't. They're pot-committed; these dudes aren't tech pioneers, they're money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it's literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it's not real and they don't care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don't know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it's total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it's all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism
