this post was submitted on 25 Sep 2024
151 points (89.9% liked)
Technology
you are viewing a single comment's thread
If you really care, run the LLM locally... Not sure if this add-on does that, though...
I don't want to install and maintain 10 GB of CUDA libraries on my PC. Besides, my mum wouldn't know how to do that, and her laptop is a potato. This add-on makes all of this way easier.
You don't need CUDA; it's actually pretty easy. You can run the Mistral 7B model this add-on is based on using GPT4All. It doesn't require much, if any, technical knowledge.
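For anyone curious what that looks like in code, here's a minimal sketch using GPT4All's Python bindings. The model filename is illustrative (an assumption, check GPT4All's model list for current names), and this assumes the `gpt4all` pip package is installed:

```python
def ask(prompt: str, max_tokens: int = 200) -> str:
    # Import inside the function so merely loading this file
    # doesn't require gpt4all to be installed.
    from gpt4all import GPT4All

    # The first call downloads the quantized .gguf file (a few GB),
    # then runs entirely on CPU -- no CUDA needed.
    # Filename is an assumption; check GPT4All's model list.
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")
    with model.chat_session():
        return model.generate(prompt, max_tokens=max_tokens)

if __name__ == "__main__":
    print(ask("Summarize why local LLMs matter, in one sentence."))
```

The GUI app does the same thing without any code at all; the bindings are just handy if you want to script it.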
HOLY HELL THAT'S COOL. It can do so much too!!!
I locally installed a small LLM more than a year ago. It took up around 25 GB including all the CUDA libraries and such. It was alright, but I figured cloud-based solutions were best for my use case, since they performed better and were free.
I had no idea that open-source AI had progressed so much in the last year. Amazing stuff!
It depends on how you run it. You may not have been using a quantized model.
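For a rough sense of why quantization matters, here's a back-of-the-envelope estimate of weight memory for a 7B-parameter model at different precisions (a sketch only; real memory use adds KV-cache and runtime overhead on top):

```python
# Back-of-the-envelope weight-memory estimate for an LLM.
# Real usage adds KV-cache and runtime overhead on top of this.
def weight_gb(n_params: float, bits_per_weight: float) -> float:
    # params * bits -> bytes -> decimal gigabytes
    return n_params * bits_per_weight / 8 / 1e9

n = 7e9  # e.g. Mistral 7B
print(f"fp16: {weight_gb(n, 16):.1f} GB")  # 14.0 GB
print(f"int8: {weight_gb(n, 8):.1f} GB")   # 7.0 GB
print(f"q4:   {weight_gb(n, 4):.1f} GB")   # 3.5 GB
```

Which is why a 4-bit-quantized 7B model fits comfortably in 16 GB of RAM while the fp16 original does not.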
I was using the quantized version :(
But again, remember that this was when the first open-source AI models had just begun to come out; stuff from Open Assistant, for example. I don't even remember the name of the model I was running (it was just too weird and funny, lol). I just remember it being HUGE, quite dumb, and making my device sweat, lol.
What are the minimum system requirements for a good experience?
A midrange graphics card and 16 GB of RAM should suffice. Check their site for specifics.
You're not training models at this point, just running inference. You don't need that kind of hardware for that.
Well, that comes with a ton of privacy risk. If y'all are comfortable with that, it's your choice.