this post was submitted on 25 Sep 2023
521 points (96.3% liked)

Technology


A partnership with OpenAI will let podcasters replicate their voices to automatically create foreign-language versions of their shows.

[–] FireWire400@lemmy.world 47 points 1 year ago (3 children)

That's just weird... Part of the reason I listen to podcasts is that I just enjoy people talking about things, and AI voices still have an uncanny quality to me

[–] danielbln@lemmy.world 30 points 1 year ago (3 children)
[–] nehal3m@sh.itjust.works 14 points 1 year ago

Point taken, well done.

[–] Hoimo@ani.social 2 points 1 year ago (1 children)

That's obviously way better than any TTS before it, but I still wouldn't want to listen to it for more than a few minutes. In these two sentences I can already hear some of the "AI quirks", and the longer you listen, the more you start to notice them.
I listen to a lot of AI celeb impersonations, and they all sound like the same machine with different voice synthesizers. There's something about the prosody that gives it away: every sentence has the same generic pattern.
Humans are generally more creative, or more monotonous, but AI sits in a weird in-between space where it's never interested and never bored, always soulless.

[–] bamboo@lemm.ee 2 points 1 year ago

Having listened to it, I could not identify any sort of “AI quirk”. It sounded perfectly fine.

This is beautiful

[–] sudoshakes@reddthat.com 16 points 1 year ago (3 children)

A large language model took a 3-second snippet of a voice and extrapolated from it the whole spoken English lexicon in that voice, in a way that was indistinguishable from the real person to banking voice-verification algorithms.

We are so far beyond what you think of when we say the word AI, because we replaced the underlying thing it refers to without most people realizing it. The current speed of progress in large language models is mind-boggling.

These models, when shown fMRI data for a patient, can figure out what image the patient is looking at, and then render it. The patient looks at a picture of a giraffe in a jungle, and the model renders it having never before seen a giraffe… from brain-scan data, in real time.

Not good enough? The same fMRI data was examined in real time by a large language model while a patient watched a short movie and was asked to think about what they saw in words. The sentences the person thought were rendered as English sentences by the model, in real time, from the fMRI data.

That's one step from reading dreams, and that too will happen inside 20 months.

We are very much there.
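For what it's worth, the language-decoding result above doesn't map brain activity to text directly. In the preprint linked further down this thread, an encoding model predicts fMRI activity from language-model features of candidate sentences, and the candidate whose predicted brain response best matches the recording wins. Here's a toy numpy sketch of that scoring loop; the shapes, the feature function, and all the data are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions (made up; real experiments use tens of thousands of voxels).
N_VOXELS, N_FEAT = 300, 16

# Stand-in for a fitted encoding model: text features -> predicted voxel activity.
W_enc = rng.standard_normal((N_FEAT, N_VOXELS))

def text_features(sentence):
    """Deterministic bag-of-words stand-in for language-model features."""
    vec = np.zeros(N_FEAT)
    for word in sentence.split():
        vec[sum(map(ord, word)) % N_FEAT] += 1.0
    return vec

def predict_brain(sentence):
    return text_features(sentence) @ W_enc

# Simulated recording: the response to the "true" thought, plus scanner noise.
true_sentence = "the giraffe walked through the jungle"
recorded = predict_brain(true_sentence) + 0.1 * rng.standard_normal(N_VOXELS)

# Candidate sentences (proposed by a language model in the real system).
candidates = [
    "the giraffe walked through the jungle",
    "a plane flew over the ocean",
    "she poured coffee into the cup",
]

# Keep the candidate whose predicted brain response best matches the recording.
scores = [np.linalg.norm(predict_brain(c) - recorded) for c in candidates]
best = candidates[int(np.argmin(scores))]
print(best)
```

The point of the toy: the "mind reading" is really a search over guesses, scored by how well each guess predicts the measured brain data.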

[–] Pete90@feddit.de 9 points 1 year ago (2 children)

I don't think what you're saying is possible. Voxels used in fMRI measure in millimeters (down to one millimeter, if I recall) and don't allow for such granular analysis. It is possible to 'see' what a person sees, but the image doesn't resemble the original too closely.

At least that's what I have learned a few years ago. I'm happy to look at new sources, if you have some though.

[–] sudoshakes@reddthat.com 0 points 1 year ago

Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

High-resolution image reconstruction with latent diffusion models from human brain activity: https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3

Semantic reconstruction of continuous language from non-invasive brain recordings: https://www.biorxiv.org/content/10.1101/2022.09.29.509744v1

[–] sudoshakes@reddthat.com -1 points 1 year ago (1 children)

I like how I said the problem is that progress is moving so fast that, as a layman, you don't even realize what you don't know about the subject… and then this comment appears saying things are not possible.

Lol.

How timely.

The speed at which things are changing and redefining what is possible in this space exceeds any other area of research. It's insane to the point that if you are not actively reading white papers every day, you miss major advances.

The layman has this idea of what "AI" means, but we have no good way to keep the word aligned with its meaning and capabilities when what's underneath it changes so fast.

[–] Pete90@feddit.de 3 points 1 year ago* (last edited 1 year ago) (1 children)

I looked at your sources, or at least one of them. The problem is that, as you said, I am a layman, at least when it comes to AI. I do know how fMRI works, though.

And I stand corrected. Some of those pictures do closely resemble the original. Impressive, although not all subjects seem to produce the same level of detail and accuracy. Unfortunately, I have no way to verify the AI side of the paper. It is mind-boggling that such images can be constructed from voxels of that size: 1.8 mm contains close to 100k neurons and even more synapses. And the fMRI signal itself is only a blood-oxygen-level overshoot in these areas, not a direct measurement of neural activity. It makes me wonder what constraints and tricks had to be used to generate these images. I guess combining the semantic meaning of the image with the broader image structure helped: meaning inferring pixel color (e.g. mostly blue with some gray in the middle), then adding the semantic meaning (plane), and combining the two.

Truly amazing, but I do remain somewhat sceptical.

[–] sudoshakes@reddthat.com 1 points 1 year ago

The model inferred meaning much the same way it infers meaning from text. Short phrases can generate intricate images, accurate to author intent, using stable diffusion.

The models in those studies leveraged stable diffusion as the mechanism of image generation, but instead of being conditioned on text prompts, they were trained to condition on fMRI data.
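Concretely, the core trick in those papers is mostly linear: regression maps fMRI voxel patterns into the embedding spaces Stable Diffusion already consumes, and the frozen diffusion model does the rest. Here's a toy numpy sketch of just that mapping step, with made-up shapes and simulated data standing in for real scans and real image embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: voxels per scan, embedding width, number of training trials.
N_VOXELS, N_EMBED, N_TRIALS = 500, 64, 200

# Simulated training data: voxel responses X, and the embeddings Y of the
# images the subject viewed (in reality, from the diffusion model's encoders).
X = rng.standard_normal((N_TRIALS, N_VOXELS))
W_true = rng.standard_normal((N_VOXELS, N_EMBED)) / np.sqrt(N_VOXELS)
Y = X @ W_true + 0.1 * rng.standard_normal((N_TRIALS, N_EMBED))

# Ridge regression: learn the voxels -> embedding mapping.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(N_VOXELS), X.T @ Y)

# At test time, a new brain scan is projected into embedding space and handed
# to the (unchanged) diffusion model in place of a text-prompt embedding.
x_new = rng.standard_normal(N_VOXELS)
cond_embedding = x_new @ W
print(cond_embedding.shape)  # (64,)
```

No part of the diffusion model is retrained; the learned matrix `W` just translates brain data into the conditioning vectors the generator already understands.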

[–] hobovision@lemm.ee 7 points 1 year ago (2 children)
[–] sudoshakes@reddthat.com 2 points 1 year ago

Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

High-resolution image reconstruction with latent diffusion models from human brain activity: https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3

Semantic reconstruction of continuous language from non-invasive brain recordings: https://www.biorxiv.org/content/10.1101/2022.09.29.509744v1

[–] Pantoffel@feddit.de 0 points 1 year ago (1 children)

For the last example: Here

Rendering dreams from fMRI is also already a reality; please google it yourself if you'd like to see the sources. The image quality is not yet very good, but it is nevertheless possible. It is just a question of when the quality will get better.

Now think about smart glasses, or whatever display you like, controlled with your mind. You'd need Jedi concentration :D But I do think I will live long enough to see this technology.

[–] Not_mikey@lemmy.world 4 points 1 year ago (1 children)

Interesting and scary to think that AI understands the black box of human neurology better than we understand the black box of AI.

[–] rigatti@lemmy.world 7 points 1 year ago (2 children)

It won't take long until that uncanny quality is worked out.

[–] danielbln@lemmy.world 6 points 1 year ago

Imho it has already been worked out. There is probably selection bias at play, as you don't even recognize the AI voices that are already out there.

[–] Pantoffel@feddit.de 1 points 1 year ago

Following up on the other comment.

The issue is that widely available speech models do not yet offer the quality that is technically possible. That is probably why you think we're not there yet. But we are.

Oh, I'm looking forward to translating a whole audiobook into my native language, in any speaking style I like.

Okay, we might still have difficulties with made-up fantasy words, or with words from foreign languages that have little training data.

Mind, this is already possible; it's just that I don't have access to this technology. I sincerely hope there will be no gatekeeping of the training data, so that we can train such models ourselves.