this post was submitted on 12 Jan 2025
658 points (98.0% liked)
Technology
It's already available for anyone to use. https://github.com/openai/whisper
They're using OpenAI's Whisper model for this: https://code.videolan.org/videolan/vlc/-/merge_requests/5155
Note that OpenAI's original Whisper models are pretty slow; in my experience the distil-whisper project (via a tool like WhisperX) is more than 10x faster.
Have any estimated minimum system requirements been published for this yet, since it runs locally?
It's actually using whisper.cpp
From the README:

Memory usage

| Model  | Disk    | Mem      |
|--------|---------|----------|
| tiny   | 75 MiB  | ~273 MB  |
| base   | 142 MiB | ~388 MB  |
| small  | 466 MiB | ~852 MB  |
| medium | 1.5 GiB | ~2.1 GB  |
| large  | 2.9 GiB | ~3.9 GiB |
Those are the model sizes
Oh wow, those are pretty tiny memory requirements for a decent modern system! That's actually very impressive! :D
Many people could probably even run this on older media servers, or even just a plain NAS! That's awesome! :D