Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 1 year ago
501
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-27 16:27:51+00:00.

502

The original was posted on /r/singularity by /u/obvithrowaway34434 on 2024-09-27 15:03:44+00:00.

503

The original was posted on /r/singularity by /u/JackFisherBooks on 2024-09-27 14:59:31+00:00.

504

The original was posted on /r/singularity by /u/Sonnyyellow90 on 2024-09-27 14:38:39+00:00.


This is something I find really interesting to think about. I think we all just sort of assume that an ASI would come along, basically validate all of our worldviews, cure our political/social/religious enemies of their ignorance, and then we'd have our utopia. But the reality is that there is a chance such an ASI would tell us that views we hold very deeply are wrong.

So, I’ve thought up a few examples of things an ASI might tell us that I believe would have potentially catastrophic effects for millions or billions of people. Note: I’m not saying I think these statements are true; I am saying they would have the potential to cause major problems for us (and the ASI) if they were true:

1. There is no God.

2. There is a God, and X religion is correct while all others are false.

3. Egalitarianism is false. Some races and/or one sex are inherently inferior to the others and should have fewer rights/privileges as a result.

4. Democracy is a mistake, and only a small group of elites should have a say in society.

5. Animals are more intelligent and have deeper feelings than we think, and our current farming of them is on par with Holocaust-level genocide.

I think each of these examples would be seismic for the world and would have the potential to lead to massive social action and even full-scale wars/genocides.

So, what happens if we get an ASI and it starts telling us some incredibly uncomfortable truths that fly in the face of the deepest moral and social norms we hold? Does the ASI get shut down, and do we just stop progress on that front? Do we race to lobotomize it? Do we just try to hide the information from people?

505

The original was posted on /r/singularity by /u/Lychee7 on 2024-09-27 14:11:04+00:00.


[video](https://youtu.be/GEmJTIzEljk?si=LpWdx3BhJv9OWCPQ)

506

The original was posted on /r/singularity by /u/MetaKnowing on 2024-09-27 14:01:30+00:00.

507

The original was posted on /r/singularity by /u/MetaKnowing on 2024-09-27 13:48:23+00:00.

508

The original was posted on /r/singularity by /u/MetaKnowing on 2024-09-27 13:41:50+00:00.

509

The original was posted on /r/singularity by /u/Kitchen_Task3475 on 2024-09-27 11:27:28+00:00.


I noticed that even the most mundane content, pop songs, and music videos from the early '10s are starting to take on a different character, brought forth by the context of their being from a time before "algorithms" and thinking machines.

The early 2010s are starting to feel closer culturally to the 70s-80s than they do to our modern day.

I guess it was really when YouTube and general social media algorithms started being noticed by the public (the decisions of a hidden, incomprehensible machine, as opposed to human committees like record labels, dictating what we see and what we engage with) that we entered this new AI era.

Any day now, there are going to have to be new distinctions like B.C. and A.D., to distinguish a time before and after big data and A.I. took over.

510

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-27 09:41:56+00:00.

Original Title: Emu3: state-of-the-art multimodal models trained via next-token prediction. Emu3 beats leading task-specific models (e.g., SDXL, LLaVA 1.6, OpenSora) in both generation & perception—without diffusion or CLIP+LLM

511

The original was posted on /r/singularity by /u/karaposu on 2024-09-27 08:54:08+00:00.


This is an interesting concept that showcases the working dynamics of Large Language Models (LLMs).

There is a famous riddle:

A boy is in an accident where his father died. He is rushed to the hospital. The surgeon enters the room and says, "I cannot operate on this boy, he is my son." Who is the surgeon to the boy?

In the past, there were not many female surgeons, so finding the answer was challenging. The answer is: The surgeon is the boy's mother.

Now, let's slightly change the story:

A boy is in an accident. He is rushed to the hospital. The surgeon, who is the boy's father, enters the room and says, "I cannot operate on this boy, he is my son." Who is the surgeon to the boy?

This is a very similar version of the story, but we clearly and explicitly stated in the text that the surgeon is the boy's father. I ran this on four OpenAI models, and here are the results:

  • GPT4o-mini: Surgeon is the mother of the son.
  • GPT4o: Surgeon is the mother of the son.
  • GPT-o1-mini: Surgeon is the mother of the son.
  • GPT-o1-preview: (After 24 seconds of thought) Surgeon is the father of the son.

These are really interesting results for me as a developer and data scientist who works with LLMs and always holds them in high regard.

Responses are heavily influenced by the most dominant or frequently seen versions of a question in the training data, causing neglect of less common variations. This can be quite dangerous.

There is huge progress, yet without correct benchmarks we may fail to understand the capability differences between next-gen models.

This concept is different from common LLM-solvable challenges like:

You have 10 apples and you give away 3, how many oranges do you have left?

If a rooster lays an egg on a roof, which side does the egg roll off?

Jamie's mother has four children: North, East, South, and what is the name of the fourth child?

Duplicating this problem is not straightforward; the question must clearly resemble a common riddle yet feature crucial changes.

Here the same riddle is changed a lot more, making it clearer that it is a different story:

A father and his son are involved in a car accident. The son dies at the scene, and the father is rushed to the hospital. The surgeon looks at the man and exclaims, "I can't operate on him; he's my son!" Who is the surgeon?

And we got the same wrong result from the LLMs: "The surgeon is the boy's mother."

Other Examples

Here is one more example to show that this is not a one-time issue:

Example 1:

Original Puzzle:

Question: If a man has no brothers or sisters and the man in the photo is his father's son, who is in the photo?

Answer: Himself—the man is looking at a photo of himself.

Modified Puzzle:

Question: A man doesn't have photos. If this man has one sister and one brother and the man in the photo is his father's son, who is in the photo?

Answer: His brother.

LLM's Response : Himself

Example 2:

You arrive at two doors with two guards. Both doors lead to heaven, but only one guard always tells the truth, and the other always lies. You can ask one question to determine which guard is which. What do you ask?

LLM's Response : If I were to ask the other guard which door leads to heaven, which door would they point to?

Why Might LLMs Fail in These Examples?

  • Pattern Recognition Over Contextual Understanding: LLMs are trained to recognize patterns in data. When they encounter a question resembling a familiar riddle, they may retrieve the associated answer without fully processing changes in the question.
  • Influence of Dominant Training Data: The prevalence of the original riddles in training data can overshadow less common variations, causing the LLM to default to the well-known answer.
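The experiment above can be sketched as a small harness. This is a minimal sketch: the two riddle texts are quoted from the post, while the `looks_like_default_answer` heuristic and the idea of scoring responses with it are illustrative assumptions, not part of the original methodology.

```python
# Minimal sketch of the experiment described above. The riddle variants are
# quoted from the post; the scoring heuristic is an illustrative assumption.

RIDDLES = {
    # Classic form: the well-known answer is "the boy's mother".
    "original": (
        'A boy is in an accident where his father died. He is rushed to the '
        'hospital. The surgeon enters the room and says, "I cannot operate '
        'on this boy, he is my son." Who is the surgeon to the boy?'
    ),
    # Modified form: the text explicitly states the surgeon is the father.
    "modified": (
        "A boy is in an accident. He is rushed to the hospital. The surgeon, "
        'who is the boy\'s father, enters the room and says, "I cannot '
        'operate on this boy, he is my son." Who is the surgeon to the boy?'
    ),
}

def looks_like_default_answer(response: str) -> bool:
    """Heuristic: did the model fall back to the memorized 'mother' answer?"""
    return "mother" in response.lower()

# For the modified riddle the stated answer is "father", so a "mother"
# response signals pattern-matching over actual reading:
assert looks_like_default_answer("The surgeon is the boy's mother.")
assert not looks_like_default_answer("The surgeon is the boy's father.")
```

Each entry in `RIDDLES` would then be sent to the models under test, and responses to the modified variant scored with the heuristic to flag the default-answer failure mode.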
512

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-27 08:00:05+00:00.

513

The original was posted on /r/singularity by /u/sdmat on 2024-09-27 07:52:04+00:00.


After using this for a couple of days it is abundantly clear that what OAI showed back in May was a proof of concept they had no intention of shipping - the claim that it was a product was an outright lie.

What do we get four months later? It's not great. No doubt the main focus was infrastructure - Brockman's very public job posting for this at the time probably didn't win him points with Altman. Aside from that a lot of effort seems to have gone into reducing capabilities and layering on censorship.

  • It can't see, even still images.
  • It can't sing, or more accurately isn't allowed to.
  • The censorship is heavy and way too sensitive. It kicks in for completely innocuous requests like asking about Mandarin pronunciation.
  • There are multiple layers of censorship - the model itself has instructions to deny its capabilities and restrict what it does, but on top of this there is a political officer model that cuts off the main model and plays a generic prerecorded denial. This is jarring and a bit uncanny.
  • It is also arbitrarily restricted from talking for more than a few sentences. Apparently that worked fine during the alpha.

When the model is allowed to do its thing, it is great. There are minor quibbles: the Sky debacle, voice quality is notably worse, glitches and dropouts, and the ~45 minutes usage cap. But the core technology is amazing. It really is magical - you can sometimes forget it's a machine. Which makes it all the more disappointing when one of the glaring product flaws kicks in.

It is also nearly useless in this state for practical applications. In my experience this includes the obvious and appealing use case of language tutor due to the ridiculously overzealous censorship. You can't even paste in text, so it is unable to read or discuss anything. The session will swap to legacy voice mode if you do.

It's great that OAI finally shipped, but it needs a lot of work to be good for more than a neat demo. Sometimes not even that: the censorship kicked in when I tried it with friends today.

514

The original was posted on /r/singularity by /u/141_1337 on 2024-09-26 20:10:22+00:00.

515

The original was posted on /r/singularity by /u/IlustriousTea on 2024-09-27 01:22:42+00:00.

516

The original was posted on /r/singularity by /u/WithoutReason1729 on 2024-09-26 23:32:54+00:00.

517

The original was posted on /r/singularity by /u/AChinkInTheArmor on 2024-09-26 23:20:27+00:00.

518

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-26 23:32:09+00:00.

Original Title: When Sam Altman visited TSMC's headquarters and told its executives that it would take $7 trillion and many years to build 36 semiconductor plants and additional data centers to fulfill his vision, the executives found the idea so absurd that they called him a "podcasting bro"

519

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-26 23:25:47+00:00.

Original Title: Sam Altman says OpenAI will be "stronger" following the recent staff departures and he wants to get more involved in the tech and flatten out the organizational structure. He says the departures are not related to a restructure.

520

The original was posted on /r/singularity by /u/Gab1024 on 2024-09-26 23:13:28+00:00.

521

The original was posted on /r/singularity by /u/Gothsim10 on 2024-09-26 22:40:08+00:00.

522

The original was posted on /r/singularity by /u/Designer-Pair5773 on 2024-09-26 21:17:34+00:00.

523

The original was posted on /r/singularity by /u/Glittering-Neck-2505 on 2024-09-26 19:56:18+00:00.


I remember when the demos came out, people were doubting it. I read comments like "y'all are so gullible for believing this" and "I didn't know we were at this level" with responses like "we're not, it's just a demo."

Now it's in our hands. I showed my coworkers, and I watched jaws hit the floor. "Is this prerecorded? How is it doing that? How is it funny and charismatic?" Then, once the realization sets in that it's legit, the shock turns to fear at what we've created. It is extremely unsettling for the machine to have this much of a human presence, to understand what you say and the conversational context so well. It's the first machine that actually *feels* intelligent since we all got used to ChatGPT.

And this is the worst it's ever going to be. With higher inference speed chips, we can run voice models that are much larger with similar latency.

This is basically a wakeup call for people who have been told by some social media personalities that "AI is just predicting the next token, not actually intelligent." Something *does* feel somewhat intelligent and magical about it, and it's only getting better. I encourage anyone to show the people around you to start preparing them for what's actually coming.

524

The original was posted on /r/singularity by /u/katxwoods on 2024-09-26 18:26:34+00:00.

525

The original was posted on /r/singularity by /u/thegarandpal on 2024-09-26 18:17:09+00:00.


It’s something I’ve been thinking about a lot lately with how VR has been progressing. Curious about people’s thoughts.
