this post was submitted on 04 Sep 2024
65 points (90.1% liked)

Ask Lemmy


Obviously there's not a lot of love here for OpenAI and other corporate, API-only generative AI, but how does the community feel about self-hosted models? Especially efforts like the Linux Foundation's Open Model Initiative?

I feel like a lot of people just don't know there are Apache- or CC-BY-NC-licensed "AI" models they can run on sane desktops, right now, and that they're incredible. I'm thinking of the most recent Command-R specifically: I can run it on one GPU, it blows expensive API models away, and it's mine to use.

And there are efforts to kill the power cost of inference and training with things like matrix-multiplication-free models, open source and legally licensed datasets, cheap training... and OpenAI and company want to shut all of this down because it breaks their monopoly, where they can just outspend everyone on scaling, stealing data, and destroying the planet. And it's actually a threat to them.

Again, I feel like corporate social media vs. the fediverse is a good analogy, where one is kind of destroying the planet and the other, while still niche, problematic, and a work in progress, kills a lot of the downsides.

[–] tkw8@lemm.ee 6 points 2 months ago

I’m running Nvidia on Ubuntu. I’ll give exllama a shot.

[–] brucethemoose@lemmy.world 7 points 2 months ago

I'd recommend TabbyAPI with your favorite frontend: anything that speaks the OpenAI API will work (there's a minimal example below).

Or exui (which is what I tend to use), though it's a bit more manual. text-gen-web-ui has better samplers, but it's IMO more clunky and crufty, and really slow at long context.
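For anyone unfamiliar, here's a minimal sketch of what "speaks the OpenAI API" means in practice, using the stock openai Python client pointed at a local TabbyAPI instance. The port, key, and model name are assumptions; check your TabbyAPI config.yml for the real values:

```python
# Minimal sketch: a local TabbyAPI server exposes an OpenAI-compatible
# endpoint, so the standard openai client works against it unchanged.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # assumed default TabbyAPI port; check config.yml
    api_key="your-tabby-key",             # TabbyAPI's configured key, not an OpenAI key
)

response = client.chat.completions.create(
    model="command-r-exl2",  # hypothetical name; use whichever model you loaded
    messages=[{"role": "user", "content": "Hello from a self-hosted model!"}],
)
print(response.choices[0].message.content)
```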

Also, uh, you'll have to be careful about picking a model: you have to fit it to your GPU yourself instead of letting ollama do it for you. I view this as a positive, as it forces you to search for a more optimal fit.
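As a rough rule of thumb for that fitting exercise: quantized weights take about parameters × bits-per-weight / 8 bytes, and you need headroom on top for the KV cache and activations. A back-of-the-envelope sketch (the figures are illustrative, not measurements):

```python
def weights_vram_gb(params_billion: float, bpw: float) -> float:
    """Approximate VRAM consumed by the quantized weights alone:
    params * bpw bits, converted to gigabytes."""
    return params_billion * bpw / 8

# Illustrative fits for a 24 GB card; leave several GB free
# for the KV cache and activations.
for name, params, bpw in [
    ("35B @ 4.00 bpw", 35, 4.00),
    ("35B @ 3.75 bpw", 35, 3.75),
    ("27B @ 5.00 bpw", 27, 5.00),
]:
    print(f"{name}: ~{weights_vram_gb(params, bpw):.1f} GB of weights")
```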

[–] tkw8@lemm.ee 5 points 2 months ago

I manually specify which models to pull. I'm not running anything too crazy. My largest model is Gemma 27B, but I've worked with dolphin-mistral, which was fun.

[–] brucethemoose@lemmy.world 6 points 2 months ago

If you have a 24GB card, just go straight to the most recent Command R at a 3.75-4bpw quantization. It's incredible, and you can fit the full 131K context on a 24GB GPU easily.
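The other half of that 131K budget is the KV cache. Here's a hedged sketch of the arithmetic; the Command R-ish hyperparameters below (40 layers, 8 KV heads, head dim 128 for the GQA refresh) are my assumptions from memory, so verify them against the model's config.json:

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx: int, bytes_per_elem: float) -> float:
    """KV cache size: 2 tensors (K and V) per layer, each of shape
    [kv_heads, ctx, head_dim], at bytes_per_elem (0.5 for a Q4 cache)."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

# Assumed Command R 08-2024-like config; check config.json yourself.
cache = kv_cache_gb(layers=40, kv_heads=8, head_dim=128,
                    ctx=131072, bytes_per_elem=0.5)  # quantized Q4 cache
print(f"~{cache:.1f} GB of Q4 KV cache at 131K context")
# ~5.4 GB of cache plus ~16.4 GB of 3.75bpw weights squeezes into 24 GB.
```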

Gemma 27B is actually quite good, but "narrow." It's super low context and seems to be hyper-optimized for short, chatbot-arena-style questions.

[–] tkw8@lemm.ee 4 points 2 months ago

Gemma 27B is actually quite good, but "narrow." It's super low context and seems to be hyper-optimized for short, chatbot-arena-style questions.

This is the stuff I love to know, so thanks for sharing. I'll be pulling Command R tomorrow.

[–] brucethemoose@lemmy.world 3 points 2 months ago

Good! So Command-R excels at "RAG"-style tasks: asking questions about a huge document, continuing a long story, and so on. You should also read up on its super intricate system prompt format, which can steer it quite well.
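For the curious, that turn-based prompt format looks roughly like the following. This is reconstructed from memory of Cohere's published chat template, so treat it as a sketch and check the model card for the exact tokens:

```python
# Rough sketch of Command R's special-token prompt format (verify
# against Cohere's model card before relying on it).
system = "Answer only from the provided documents."
user = "What does the report say about Q3 revenue?"

prompt = (
    "<BOS_TOKEN>"
    f"<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system}<|END_OF_TURN_TOKEN|>"
    f"<|START_OF_TURN_TOKEN|><|USER_TOKEN|>{user}<|END_OF_TURN_TOKEN|>"
    "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"  # generation starts here
)
print(prompt)
```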

I dunno about code; I tend to use Codestral 22B (or the DeepSeek V2 API) for that.

I am happy to ramble on about this stuff, just ask.