I don't use Perplexity, but AI is generally 60-80% effective when you run a larger-than-average open-weights model offline on your own hardware.
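As a rough idea of what running one of these offline looks like, here is a minimal sketch using llama-cpp-python; the GGUF path, context size, and prompt are placeholder assumptions, not a specific recommendation:

```python
# Minimal local-inference sketch with llama-cpp-python
# (assumed installed via `pip install llama-cpp-python`).
# The model path is a placeholder; any locally downloaded
# quantized GGUF model will do.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm(
    "Q: Briefly, what is an open-weights model? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```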
DDG offers the ability to use some of these. I still use a modified Mistral model, even though its base model(s) are Llama 2. Llama 3 can be better in some respects, but it has terrible alignment bias: the primary entity in the underlying model structure is tuned so aggressively for alignment that it is incapable of reasoning through edge cases like creative writing for SciFi futurism, and that alignment bleeds over into everything else. If you get on DDG and use Mistral AI's Mixtral 8×7B, it is pretty good.

The thing with models is to not talk to them like humans. Everything must be explicitly described. Humans rely heavily on implied context; we assume the other person understands what we are talking about. Talking to an AI is like appearing in court before a judge: every word matters. The LLM is also basically a reflection of all human language, so if the majority of humans are wrong about something, so is the AI.
If you ask only a simple, bare question, you're not going to get very far into what the model knows. Models have a very limited scope of focus. If you do not build prompt momentum by describing a lot of detail, the scope of focus stays broad but the depth stays shallow. The more momentum you build by describing what you are asking in detail, the more the scope narrows and the deeper the connections the model can make; the sketch below illustrates the contrast.
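Both prompts here are invented purely for illustration, and the endpoint assumes a local OpenAI-compatible server (e.g. `llama-server` from llama.cpp listening on port 8080):

```python
# Contrast between a bare question and a context-building,
# "momentum" prompt sent to the same local model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

shallow = "What will cities be like in 100 years?"

detailed = (
    "You are a hard-SciFi worldbuilding assistant.\n"
    "Setting: a coastal megacity in 2124, after fossil fuels, with cheap "
    "grid storage and mature autonomous transit.\n"
    "Constraints: physics-plausible only, no faster-than-light tech.\n"
    "Task: describe street-level daily life in three short paragraphs, "
    "referring back to the constraints wherever they shape the answer."
)

for prompt in (shallow, detailed):
    reply = client.chat.completions.create(
        model="local",  # most local servers ignore the model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n---")
```

The detailed prompt narrows the scope up front, so the model spends its depth inside the constraints instead of skimming the whole topic.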
It is hard to tell what a model really knows unless you can observe the perplexity output. This is more advanced, but the perplexity score for each generated token is how you infer that the model does not know something: tokens the model is unsure about carry low probability and therefore high perplexity.
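As a concrete sketch of what that looks like (assuming a HuggingFace transformers setup, with gpt2 purely as a stand-in model), this prints each token's log probability and per-token perplexity so you can see which tokens the model found surprising:

```python
# Per-token log-probability / perplexity inspection with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The capital of Australia is Canberra."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab)

# Each position's logits predict the *next* token, so shift by one.
shift_logits = logits[0, :-1, :]
shift_labels = inputs["input_ids"][0, 1:]

log_probs = torch.log_softmax(shift_logits, dim=-1)
token_log_probs = log_probs[torch.arange(shift_labels.size(0)), shift_labels]

for tok_id, lp in zip(shift_labels, token_log_probs):
    # Per-token perplexity is exp(-log p); a high value means the model
    # found this token surprising, a hint that it may be guessing.
    token = tokenizer.decode([int(tok_id)])
    print(f"{token!r:>12}  logprob={lp.item():7.3f}  "
          f"ppl={torch.exp(-lp).item():8.2f}")
```

Hosted chat interfaces like DDG's do not expose these numbers, which is exactly why it is hard to judge from the outside what a model actually knows.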
Search sucks because it is a monopoly. There are only two relevant web crawlers, m$ and the goo, and all search queries go through these either directly or indirectly. No search provider is deterministic any more; your results are uniquely packaged to manipulate you. They are also obfuscated to block others from using them to train better or competing models. Then there is the US government antitrust case, which makes obfuscating their market position by temporarily pushing people onto other platforms their best path forward. Criminal manipulators are going to manipulate.