As long as the models are open source, I have no complaints.
And the data stays local.
This might be one of the few times I’ve seen AI being useful and not just slapped on something for marketing purposes.
And not to do evil shit
But the toppings contain potassium benzoate.
The nice thing is, now at least this can be used with live TV from other countries and in other languages.
Say you want to watch Japanese TV or Korean channels without bothering with downloading, searching for, and syncing subtitles.
I prefer watching Mexican football announcers, and it would be nice to know what they're saying. Though that might actually detract from the experience.
GOOOOOOAAAAAAAAALLLLLLLLLL
Just fill up the whole screen with this.
The opposing team has scored.
This sounds like a great thing for deaf people and just in general, but I don't think AI will ever replace anime fansub makers who have no problem throwing a wall of text on screen for a split second just to explain an obscure untranslatable pun.
Bless those subbers. I love those walls of text.
They're like the * footnotes in any Terry Pratchett (GNU) novel: sometimes a funny joke can have a little more spice added to make it even funnier.
Translator's note: keikaku means plan
Finally, some good fucking AI
I was just thinking, this is exactly what AI should be used for. Pattern recognition, full stop.
Yup, and if it isn't perfect, that's OK as long as it's close enough.
Like, getting name spellings wrong or mixing up homophones is fine because it isn't trying to be factually accurate.
What's important is that this is running on your machine locally, offline, without any cloud services. It runs directly inside the executable.
YES, thank you JB
Amazing. I can finally find out exactly what that nurse is yelling about while she gets railed by the local basketball team.
Something about a full-court press?
Will it be possible to export these AI subs?
Imagine the possibilities!
Now I want some AR glasses that display subtitles above someone's head when they talk à la Cyberpunk that also auto-translates. Of course, it has to be done entirely locally.
As VLC is open source, can we expect this technology to also be available for, say, Jellyfin, so that I can once and for all have subtitles done right?
Edit: I think it's great that VLC has this, but it sounds like something many other apps could benefit from.
It's already available for anyone to use. https://github.com/openai/whisper
They're using OpenAI's Whisper model for this: https://code.videolan.org/videolan/vlc/-/merge_requests/5155
Note that OpenAI's original Whisper models are pretty slow; in my experience the distil-whisper project (via a tool like WhisperX) is more than 10x faster.
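For anyone who wants to try it outside VLC, here's a minimal sketch using the openai-whisper package that writes a basic .srt file. The model choice, file names, and the timestamp helper are my own illustrative picks, not anything VLC ships:

```python
# Minimal sketch using OpenAI's open-source whisper package
# (pip install -U openai-whisper; needs ffmpeg on PATH).
# Model name, file names, and the .srt writer are illustrative.
import whisper

model = whisper.load_model("small")        # "large-v3" is more accurate but slower
result = model.transcribe("episode.mkv")   # whisper pulls the audio out via ffmpeg

def srt_time(t: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:23,456."""
    h, rem = divmod(int(t), 3600)
    m, s = divmod(rem, 60)
    ms = int((t - int(t)) * 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

with open("episode.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```

As I understand it, tools like WhisperX wrap the same models but batch the audio and improve timestamp alignment, which is where most of the speedup comes from.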
The technology is nowhere near good enough yet, though. On synthetic tests, on the data it was trained and tweaked on, maybe; I don't know.
I co-run an event where we invite speakers from all over the world, and we've tried every way of generating subtitles; all of them perform at the level of YouTube's autogenerated ones. It's better than nothing, but you can't really rely on it.
Really? This is the opposite of my experience with (distil-)whisper - I use it to generate subtitles for stuff like podcasts and was stunned at first by how high-quality the results are. I typically use distil-whisper/distil-large-v3, locally. Was it among the models you tried?
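In case it helps anyone reproduce this: running distil-whisper/distil-large-v3 locally through the Hugging Face transformers pipeline looks roughly like this. The audio file name is made up, and the chunk length is just the value suggested for chunked long-form audio:

```python
# Sketch: local transcription with distil-whisper/distil-large-v3 via
# transformers (pip install transformers torch); the input file is hypothetical.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
    chunk_length_s=25,        # long-form audio is split into overlapping chunks
    return_timestamps=True,   # needed for subtitle timing, not just raw text
)

out = asr("podcast_episode.mp3")
for chunk in out["chunks"]:
    start, end = chunk["timestamp"]
    print(f"[{start:7.1f}s -> {end:7.1f}s] {chunk['text'].strip()}")
```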
Unfortunately I don't know the specific names of the models; I'll comment again if I remember to ask the people who spun up the models themselves.
The difference might be live vs. recorded content, I don't know.
Is your goal to rely on it, or to have it as a backup?
For my purpose of having a backup, nearly anything is better than nothing.
When you're live streaming, there's no time for a backup: it either works or it doesn't. Better than nothing, that's for sure, but also maybe only marginally better than what we had 10 years ago.
And yet they turned down having thumbnails for seeking because it would be too resource intensive. 😐
I mean, it would be. For example, Jellyfin implements it, but it does so by extracting the pictures ahead of time and saving them. It takes days to do this for my library.
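To give a sense of the cost: pre-extracting seek thumbnails means decoding essentially the whole file. A rough sketch of the idea (not Jellyfin's actual code; the file names and the 10-second interval are made up):

```python
# Rough sketch of pre-extracting seek thumbnails with ffmpeg (must be on PATH).
# Not Jellyfin's implementation; paths and the interval are illustrative.
import pathlib
import subprocess

pathlib.Path("thumbs").mkdir(exist_ok=True)
subprocess.run([
    "ffmpeg", "-i", "movie.mkv",
    "-vf", "fps=1/10,scale=320:-1",  # one 320px-wide frame every 10 seconds
    "thumbs/%05d.jpg",
], check=True)
```

That per-file decode pass is why it can take days on a large library.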