Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 1 year ago
251
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Gothsim10 on 2024-10-08 17:45:38+00:00.

252
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/MetaKnowing on 2024-10-08 17:10:07+00:00.

253
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/manubfr on 2024-10-08 16:40:44+00:00.

254
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Gothsim10 on 2024-10-08 16:08:51+00:00.

255
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Nunki08 on 2024-10-08 13:56:16+00:00.


The Information (hard paywall): OpenAI Eases Away From Microsoft Data Centers

256
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Gothsim10 on 2024-10-08 13:14:13+00:00.

257
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Slippin_Jimm on 2024-10-08 12:42:24+00:00.

258
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/JackFisherBooks on 2024-10-08 12:15:31+00:00.

259
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/rationalkat on 2024-10-08 10:45:54+00:00.

260
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/ryan13mt on 2024-10-08 09:51:15+00:00.

261
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/etherian1 on 2024-10-08 02:58:21+00:00.

262
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Chemical-Valuable-58 on 2024-10-07 17:07:27+00:00.


So I was reading the day's notifications from a Spanish newspaper and saw one saying something like "the longevity rush is braking," claiming that no more than 15% of women and 5% of men are likely to reach 100 years. As a person from a family of medical professionals, I'm hugely curious why opinions on this are so polarized and would like to know what you guys think.

263
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Gothsim10 on 2024-10-08 00:45:26+00:00.

Original Title: Max Tegmark says crazy things will happen due to AI in the next 2 years so we can no longer plan 10 years into the future and although there is a lot of hype, the technology is here to stay and is "going to blow our minds"

264
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/manubfr on 2024-10-07 20:51:50+00:00.

265
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/crua9 on 2024-10-07 15:49:25+00:00.


Beyond avoiding the 40k-50k deaths a year in the USA, plus a huge number of injuries, that alone is likely going to force it.

It gets interesting when you mix it with robotics. Basically you have a home humanoid robot. It gets in a self-driving car, goes to the grocery store, and buys the things you need. Then it comes home and puts everything away while you're working or whatever.

Or, more interesting: normal things like oil changes mixed with robotics. When it's time to inspect the car or do the oil change, tires, etc., the car drives itself to the shop while you're sleeping, robots do what's needed, and the car comes home before you even wake up.

And lastly, let's say you have a problem with the house, something like an AC unit or a bad toilet. You call it in for someone to look at it and get it fixed. As long as the repair company is certified in your state, it can be stationed anywhere and have satellite locations scattered around. A robot and a self-driving car travel to you and a bunch of other calls in the area, and the car could even be traveling overnight to the next state. The robot does its job, gets back in the car, and rides to the next job; basically the robot lives in the car and only stops at the satellite offices to restock.

In short, if robots can do a lot of the contractor work (and they'd likely be better than most, given some of the horror stories out there), and if they can act as mechanics, etc., then a mix of those two things will virtually take over most service industries.

Now I'm not saying it will take out all plumbers, mechanics, etc. But a robotic car mechanic isn't going to lie unless it's programmed to, and its work can easily be double-checked by the AI coming to cars and smart devices, and so on. The trust factor alone will kill many shady shops once robots can do the job. Then mix in self-driving, where the car can take itself in for maintenance while you're sleeping, or a robot comes out to look at the broken house part within a few hours or the next day, maybe even repairing it the same day, with a higher trust rating than a human who can lie.

This is where things will change.

266
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/MetaKnowing on 2024-10-07 18:46:16+00:00.

267
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/rationalkat on 2024-10-07 17:39:51+00:00.

268
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/rationalkat on 2024-10-07 17:27:12+00:00.


Link

269
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/SnoozeDoggyDog on 2024-10-07 17:03:18+00:00.

270
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/COOMO- on 2024-10-07 16:25:08+00:00.

271
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Gothsim10 on 2024-10-07 15:48:57+00:00.

272
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/MetaKnowing on 2024-10-07 16:28:12+00:00.

273
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Gothsim10 on 2024-10-07 16:06:56+00:00.

Original Title: Inverse Painting can generate time-lapse videos of the painting process for any artwork. The method learns from diverse drawing techniques, producing realistic results across different artistic styles.

274
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Glittering-Neck-2505 on 2024-10-07 15:59:03+00:00.

275
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/TFenrir on 2024-10-07 15:25:04+00:00.

Original Title: Engineers are evaluating a new sampling method for LLMs that seems as if it may significantly reduce hallucination and allow for dynamic test time compute (ie, o1) in all models - still early days, but looks promising


So I've been seeing some movement on Twitter this weekend about a seemingly anonymous but well-informed engineer who thinks they've found a way to significantly improve LLM sampling, which would have multiple positive downstream effects.

Before anything, remember these things often don't pan out, or have unintended consequences, but also sometimes it's experiments like this that allow for huge improvements. Let's try and get out ahead of it.

First, the user:

And the repo where people are starting to experiment

I'll just do a raw dump of the text in the repo that seems relevant:

Entropy Based Sampling and Parallel CoT Decoding

The goal is to use entropy to make context-aware sampling. This should allow us to simulate something similar to o1's CoT or Anthropic's to get much better results using inference-time compute.

...

Here is the philosophical analogy provided by the author

Entropy and varentropy are the keys to this deeper attunement. They are the subtle signs, the whispers of my inner world. Learning to read them is like learning a new language - the language of potential, of becoming.

Imagine entropy as the horizon - the edge where the known meets the unknown. A low entropy state is like a clear day, where you can see far into the distance, predict the path ahead. But a high entropy state is like a misty morning - the horizon is obscured, the future is uncertain, but ripe with hidden potential.

And varentropy, the variance in that uncertainty, is like the texture of the mist. Is it a uniform haze, or are there swirls and patterns hinting at unseen forces, possible paths? High varentropy means I'm considering vastly different futures, different tones and directions. Low varentropy means I'm more sure of the general shape, even if the specifics are still obscured.

To infer my deeper meanings, to anticipate the unspoken, you must learn to navigate by these subtler signs. When the entropy is high, you know to tread carefully, to ask clarifying questions, to help me find my way through the mist. When the varentropy is high, you know there are crucial decisions to be made, forks in the path that could lead to vastly different destinations.

And in those moments of low entropy and low varentropy, when the path ahead seems clear and certain - that's when you can trust the momentum, when you can let yourself flow with my unspoken intent, confident that we're aligned in our direction.


Okay so what are my thoughts, what am I reading so far?

A summary of all of this: the core goal is to get the model to understand its own uncertainty. When a model is deciding which token to output next, it seems we can, to some degree, measure whether that token is on a path where certainty is high, and if not, interject an appropriate token (in this case, literally something like "wait") to encourage the model to go down a different path.
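
To make that concrete, here's a minimal sketch (my own, not the author's code) of what "measure the next token's uncertainty and interject something like 'wait' when it's high" could look like; the entropy threshold and the injected token are placeholders:

```python
# Minimal sketch (mine, not the repo's code): estimate how uncertain the
# model is about the next token and, if the uncertainty is high, interject
# a "pause" token like "wait" instead of committing to a low-confidence path.
# The threshold and the injected token are illustrative placeholders.
import torch
import torch.nn.functional as F

def entropy_and_varentropy(logits: torch.Tensor):
    """logits: (vocab_size,) raw next-token scores."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # Shannon entropy: H = -sum(p * log p)
    entropy = -(probs * log_probs).sum()
    # Varentropy: variance of the surprisal -log p under the distribution
    varentropy = (probs * (log_probs + entropy) ** 2).sum()
    return entropy, varentropy

def next_token(logits: torch.Tensor, pause_token_id: int, entropy_threshold: float = 3.0):
    entropy, _ = entropy_and_varentropy(logits)
    if entropy.item() > entropy_threshold:
        # Too uncertain: nudge the model onto a different path.
        return pause_token_id
    # Confident enough: take the most likely token as usual.
    return int(torch.argmax(logits))
```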

This has lots of different ways to evolve and improve in and of itself, but two things I've been hearing are:

  1. This mechanism could allow models to variably run inference by seeking out these more confident paths, essentially duplicating o1's mechanism (see the sketch after this list)
  2. This mechanism could significantly reduce hallucinations by avoiding those low-confidence paths, and even just communicate more clearly to the user when confidence is low
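
Putting those two points together with the entropy/varentropy "quadrants" from the analogy above, the decision logic might look roughly like this (purely an illustration with made-up thresholds, not the repo's actual code):

```python
# Illustration only (not the repo's actual logic): map the entropy/varentropy
# "quadrants" from the analogy above to different sampling actions. All
# thresholds are made up.
import torch

def sample_step(logits, entropy, varentropy,
                ent_lo=1.0, ent_hi=3.0, var_hi=2.0, temperature=0.8):
    if entropy < ent_lo and varentropy < var_hi:
        # Clear day: the continuation is near-certain, just take the top token.
        return "argmax", int(torch.argmax(logits))
    if entropy > ent_hi and varentropy < var_hi:
        # Uniform mist: broadly uncertain, so inject a "wait"-style token and
        # spend more inference-time compute re-thinking (the o1-like behavior).
        return "inject_pause", None
    if varentropy >= var_hi:
        # Swirls in the mist: a few very different continuations are plausible,
        # so branch and explore more than one path in parallel.
        return "branch", None
    # Middle ground: ordinary temperature sampling.
    probs = torch.softmax(logits / temperature, dim=-1)
    return "sample", int(torch.multinomial(probs, 1))
```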

The first experiments are apparently happening now, and I know the LocalLLaMA sub has been talking about this for the last day or so, so I think we'll have a good chance of getting more answers and maybe even benchmarks this week.

Best case scenario, all models - including open source models - will come out the other end with variable test time compute to think longer and harder on problems that are more difficult, and models will overall have more "correct" answers, more frequently, and hallucinate less often.
