Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 1 year ago
601
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Anen-o-me on 2024-09-24 04:36:26+00:00.

Original Title: New partial reprogramming result from Altos Labs: the Belmonte group reports a ~12% lifespan increase (equivalent to a ~38% increase in remaining lifespan after the start of therapy at 18 months) in normal mice via a Cdkn2a-OSK gene therapy:
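The two percentages in the title are consistent under a simple assumption about baseline lifespan. A quick sanity check, assuming a median lifespan of about 26 months for untreated mice (an assumed figure, not stated in the post):

```python
# Sanity check: a ~12% increase in total lifespan translates to a
# ~38% increase in *remaining* lifespan when therapy starts at 18 months.
baseline_lifespan = 26.0   # months; assumed median for untreated mice
therapy_start = 18.0       # months; from the post
total_gain = 0.12 * baseline_lifespan          # extra months of life (~3.1)
remaining = baseline_lifespan - therapy_start  # months left at therapy start (8.0)
remaining_gain = total_gain / remaining        # fractional increase in remaining lifespan
print(f"{remaining_gain:.0%}")                 # ~39%, close to the reported ~38%
```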

602
 
 

The original was posted on /r/singularity by /u/williamtkelley on 2024-09-24 00:04:05+00:00.


What changes do you expect AI and robotics will bring to your average day in 5 years?

Describe your average working/life day today.

Describe what you think your average working/life day will be like in 5 years.

603
 
 

The original was posted on /r/singularity by /u/AdorableBackground83 on 2024-09-24 02:55:54+00:00.


An over 99% decrease in 18 months.

If we go another 18 months we could get 1 cent for every 1 million tokens.
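The extrapolation is just compounding the same decay rate. A rough sketch, assuming a current price of $1 per million tokens (the post's implied starting point, not a quoted figure):

```python
# If prices fall ~99% every 18 months, each period multiplies the price by 0.01.
price_now = 1.00                 # $/1M tokens; assumed current price
decay_per_period = 1 - 0.99      # a 99% decrease leaves 1% of the price
price_in_18_months = price_now * decay_per_period
print(f"${price_in_18_months:.2f} per 1M tokens")  # $0.01, i.e. one cent
```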

604
 
 

The original was posted on /r/singularity by /u/Nleblanc1225 on 2024-09-24 01:27:30+00:00.

605
 
 

The original was posted on /r/singularity by /u/Glittering-Neck-2505 on 2024-09-23 23:34:34+00:00.

606
 
 

The original was posted on /r/singularity by /u/Glittering-Neck-2505 on 2024-09-23 22:53:26+00:00.


I was just solving an extremely algebra-heavy integral, getting an answer slightly different from both o1-mini's and my integral calculator's, and it was literally driving me up a wall.

All I did was tell it the approach I used, which was different from its own, and two sets of intermediate terms before I arrived at my final answer. I asked it to use this to find which component I had done incorrectly, and after 19 seconds of thinking it had found a mistake in my calculations that I couldn't find after tracing through my work several times. The terms of the evaluation are extremely ugly fractions, and previous models would just hallucinate the answer to begin with, and couldn't even come close to identifying a minute error.

For some tasks you don’t feel an improvement over 4o, but for the ones that you do, it can feel like using actual magic.

607
 
 

The original was posted on /r/singularity by /u/Lammahamma on 2024-09-23 22:38:30+00:00.

608
 
 

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-23 21:14:52+00:00.

Original Title: OpenAI rival Anthropic has started talking to investors about raising capital in a deal that could value the startup at $30 billion to $40 billion, roughly doubling its valuation from a funding that closed early this year.

609
 
 

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-23 20:46:40+00:00.

Original Title: Yann LeCun says we will soon have AI that matches or surpasses human intelligence and we will have a team of AI assistants in smart glasses within a year or two that can translate hundreds of languages

610
 
 

The original was posted on /r/singularity by /u/SharpCartographer831 on 2024-09-23 20:39:38+00:00.

611
 
 

The original was posted on /r/singularity by /u/SomberOvercast on 2024-09-23 20:11:18+00:00.


612
 
 

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-23 20:14:05+00:00.

613
 
 

The original was posted on /r/singularity by /u/SnooPuppers3957 on 2024-09-23 19:36:00+00:00.

614
 
 

The original was posted on /r/singularity by /u/insufficientmind on 2024-09-23 16:56:28+00:00.

615
 
 

The original was posted on /r/singularity by /u/UpstairsAssumption6 on 2024-09-23 16:52:57+00:00.

616
 
 

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-23 16:48:23+00:00.

617
 
 

The original was posted on /r/singularity by /u/Pro_RazE on 2024-09-23 17:10:08+00:00.

618
 
 

The original was posted on /r/singularity by /u/Gab1024 on 2024-09-23 17:03:26+00:00.

619
 
 

The original was posted on /r/singularity by /u/IlustriousTea on 2024-09-23 17:01:38+00:00.

620
 
 

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-23 16:46:00+00:00.

621
 
 

The original was posted on /r/singularity by /u/UpstairsAssumption6 on 2024-09-23 16:44:32+00:00.

622
 
 

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-23 16:32:45+00:00.

Original Title: SPARK can create high-quality 3D face avatars from regular videos and track expressions and poses in real time. It improves the accuracy of 3D face reconstructions for tasks like aging, face swapping, and digital makeup.

623
 
 

The original was posted on /r/singularity by /u/YouMissedNVDA on 2024-09-23 14:26:58+00:00.


I'll keep it short:

Iteration 0: Pretrained-only GPTs - before RLHF.

Some of you might not be familiar with this version, hence calling it iteration 0. This article covers it well. Essentially, it could continue text well, but it could not be asked questions with the expectation that it would answer. Instead, it was more likely to continue the text with several similar questions, since those next tokens are the probable continuation.

From witnessing these deficiencies, using RLHF to steer the model into an "instruct" version that would respond rather than continue was found to be invaluable, and was thus added to the training regime.

Iteration 1: ChatGPT.

We all know this one. It was pretty amazing initially, but then we started to recognize the hallucination problem. No matter how hard the question, it would respond immediately with plausible text. Easy questions would often be answered correctly, but harder ones would often result in a web of lies. If you imagine taking a test on which you are graded by your stream-of-consciousness output, you might do the same thing. But, strangely, asking it to think step by step first, and to think out loud in tokens before answering, would help it a lot.

From witnessing these deficiencies, using CoT to force multi-step reasoning, resulting in higher-quality answers, was found to be invaluable, and was thus added to the training regime.
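The difference between the two prompting styles can be sketched as plain prompt templates (illustrative strings only, not any particular model's API):

```python
question = "A train travels 60 km in 45 minutes. What is its speed in km/h?"

# Iteration 1 style: ask directly and grade the stream-of-consciousness answer.
direct_prompt = f"Q: {question}\nA:"

# CoT style: force intermediate reasoning tokens before the final answer.
cot_prompt = (
    f"Q: {question}\n"
    "Think step by step, showing each intermediate calculation, "
    "then give the final answer on its own line.\nA:"
)

print(direct_prompt)
print(cot_prompt)
```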

Iteration 2: o1 (preview)

Most here are well aware. The model can now think dynamically, taking its time to continuously iterate on the answer before submitting a final one. It's not foolproof and still makes mistakes, but it achieves a new level of performance that was previously unavailable, just as in the jump from iteration 0 to 1.

But, what new weaknesses and deficiencies are we squarely facing that we couldn't quite see in the previous iteration? What new training paradigm might be possible/considered that was not before?

What's next?

Technological progress is an inherently compounding venture, with the next advances generally depending upon the previous. We should expect this space to be no different (and we can already see the pattern emerging).

In my opinion, what o1 allows that was impossible before is world-model building. We all talk about whether the model has an internal world model or not, since an accurate world model is a prerequisite for accurate predictions and understanding of the world.

Before o1, if the model answered wrong, it was very difficult to understand why. What internal mechanisms failed and caused the inaccuracy? All you had were the output and the activations to investigate, which is not very useful.

But now we can see the reasoning steps, in plain language, and we can pinpoint exactly where the logic goes off the rails. Which means we can pinpoint exactly where to train the inaccuracy out of the model. And maybe that means we can train the next model to have an explicit world-model portion, just as o1 has specific reasoning portions, where at the end of training we can inspect not only the output and reasoning chain, but also the underlying truths/lemmas the model holds.
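As a toy illustration of pinpointing where the logic goes off the rails: if each reasoning step can be checked independently, finding the first failing step is a simple scan. A minimal sketch (hypothetical step format, not how any lab actually does this):

```python
# Each step is (description, claimed_value, check_fn); check_fn recomputes
# the value independently, so the first mismatch localizes the faulty step.
def first_wrong_step(steps):
    for i, (desc, claimed, check) in enumerate(steps):
        if check() != claimed:
            return i, desc
    return None  # every step checks out

# A reasoning chain for 3 * (4 + 5), with a deliberate slip in step 1.
chain = [
    ("add 4 and 5",        9,  lambda: 4 + 5),
    ("multiply sum by 3",  28, lambda: 3 * 9),   # claims 28; actually 27
]
print(first_wrong_step(chain))  # (1, 'multiply sum by 3')
```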

What do you think? Am I a cuckoo, or is this a reasonable attempt at guessing what comes next? What do you think happens from here?

I still hold that AGI will be obvious when a model can become the best comedian in the world, selling out stadiums worldwide and leaving only happy attendees. I believe so much of our humanity is encoded into a successful routine that succeeding in that space as thoroughly as AlphaZero did in its own would mean it understands us better than we understand ourselves.

624
 
 

The original was posted on /r/singularity by /u/MetaKnowing on 2024-09-23 14:18:55+00:00.

625
 
 

The original was posted on /r/singularity by /u/MetaKnowing on 2024-09-23 13:32:32+00:00.

Original Title: The founder of an AI social agent startup used his AI social agents to replace himself and automatically argue in favor of AI social agents with people who are sick of his AI social agents arguing with them
