For programming it's Sonnet 3.5; there is no remotely close second place that I have tried or heard of, and I am always looking. I personally don't really have any interest in measuring them in other ways, but for coding, Sonnet 3.5 is in a distant lead. Abacus.ai is a nice way to try various models for cheap. Really, some sort of agent setup like mixture-of-agents that uses Claude and GPT and maybe some others may do better than Claude alone. Matthew Berman shows mixture-of-agents with local models beating GPT-4o, so doing it with Sonnet 3.5 and others of the best closed models would probably be pretty great.
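A minimal sketch of what that kind of mixture-of-agents setup could look like, assuming the official `anthropic` and `openai` Python SDKs and placeholder model names; the aggregation step here is just a second Claude call that synthesizes the candidate answers, which is one common approach, not necessarily how Matthew Berman's setup works.

```python
# Rough mixture-of-agents sketch: gather candidate answers from several
# models, then have one "aggregator" model synthesize a final answer.
# Assumes ANTHROPIC_API_KEY and OPENAI_API_KEY are set in the environment;
# model names are placeholders and may need updating.
import anthropic
import openai

claude = anthropic.Anthropic()
gpt = openai.OpenAI()

def ask_claude(prompt: str) -> str:
    msg = claude.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_gpt(prompt: str) -> str:
    resp = gpt.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def mixture_of_agents(task: str) -> str:
    # Layer 1: independent drafts from each model.
    drafts = [ask_claude(task), ask_gpt(task)]
    # Layer 2: an aggregator sees all drafts and writes the final answer.
    combined = "\n\n---\n\n".join(drafts)
    return ask_claude(
        f"Task:\n{task}\n\nCandidate answers:\n{combined}\n\n"
        "Write the best possible final answer, fixing any mistakes."
    )

if __name__ == "__main__":
    print(mixture_of_agents("Write a Python function that merges two sorted lists."))
```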
What language are you programming in? In Swift I have found all models (including Sonnet) next to useless. They tell me something wrong on almost every question I ask, have made up macros and APIs, etc.
For English I have found Claude models slightly better than the GPT-4 subscription I used to have. For anything in multiple (human, not programming) languages, GPT has seemed best for me.
I was mainly doing Python with GPT-4, but now I'm working on an Android project, so Kotlin. GPT-4 wasn't much use for Kotlin, especially for questions involving more than a couple of files. Sonnet is crushing it though, even when I give it 2k+ LoC. I'd say I've done about two months of pre-LLM work in the last week, granted I am no professional, just a hobbyist.
I'm learning Kotlin and Android Studio, and for that I'm developing a very simple CRUD app. I used Sonnet 3.5 and was impressed when it developed the XML layout and MainActivity, added the internet access permission, and wrote the RESTful API in PHP for XAMPP. It compiled on the first try, but for the life of me I can't find why the RESTful API keeps returning a 405 error. And I'm a seasoned programmer in C, C++, Python and XAMPP! It was, at the same time, impressive and extremely frustrating.
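For what it's worth, a 405 usually means the HTTP method doesn't match what the endpoint accepts (e.g. the Android app sends POST while the PHP script or the Apache config only allows GET). A quick, hedged way to narrow it down is to probe the endpoint from any machine and look at the Allow header; the URL and payload below are placeholders, not the actual project.

```python
# Hypothetical probe for a 405 (Method Not Allowed): hit the endpoint with
# the method the app uses and inspect the Allow header the server returns.
# The URL and payload are placeholders for the XAMPP-hosted PHP API.
import requests

url = "http://192.168.1.10/crud_api/items.php"  # placeholder address

resp = requests.post(url, json={"name": "test"})
print(resp.status_code)             # 405 means this method is rejected
print(resp.headers.get("Allow"))    # methods the server says it accepts

# Compare with a plain GET to confirm the endpoint itself is reachable.
print(requests.get(url).status_code)
```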
I don't know about your specific issue, but I have found that it helps quite a bit to start new conversations often. Also, I have a couple of paragraphs explaining the whole idea of my project that I always paste in at the beginning of each conversation. I've not been doing anything terribly complicated or cutting-edge, but I haven't come across anything yet that Sonnet hasn't been able to figure out, although sometimes it does take me being very clear and wordy about what I'm doing and starting from a fresh slate. I've also found it helps a lot if I specifically tell it to debug with lots of logs. Then I just go back and forth, giving it the outputs and changing the code for it.
I've been using Sonnet 3.5 a lot recently. It does seem better and more creative than others for a lot of tasks. I also think its training set is up to April 2024, which is nice.
I've also found that GPT-4o is worse than GPT-4 in my experience. It seems to hallucinate more.
GPT-4 is apparently the model to beat. I haven't seen all that much difference in practice between GPT-4 and 4o. I've heard various claims about various other models outperforming it (notably including Claude) but I haven't seen the claims materialize over the long haul as yet.
I have however heard that Mistral can get quite close to GPT-4, and run for free locally with the right hardware, if you build up a hand-curated set of around 100 query/response pairs from GPT-4 that are what you want it to do, and then fine-tune Mistral against that training set. I haven't tried it but that's what I've heard.
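A hedged sketch of the first step of that approach: collecting the ~100 curated query/response pairs into the JSONL chat format that most open-weight fine-tuning tools accept. The file name and the example pairs are made up; the actual fine-tuning run would come after this.

```python
# Sketch: turn hand-curated GPT-4 query/response pairs into a JSONL file
# in the common "messages" chat format used by most fine-tuning tooling.
# The pairs below are placeholders; a real set would have ~100 of them.
import json

pairs = [
    ("Explain what a Python decorator is.",
     "A decorator is a function that wraps another function..."),
    ("Write a regex that matches an ISO 8601 date.",
     r"\d{4}-\d{2}-\d{2}"),
]

with open("mistral_finetune.jsonl", "w", encoding="utf-8") as f:
    for query, response in pairs:
        record = {
            "messages": [
                {"role": "user", "content": query},
                {"role": "assistant", "content": response},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

print("Wrote", len(pairs), "training examples to mistral_finetune.jsonl")
```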
I'm a total layman when it comes to setting up a language model locally. Any step by step guide on how to do it? And I mostly use AIs on my Android phone, not PC. Is it possible to synchronize it between two devices?
GPT4all can do it pretty easily on a desktop with a good GPU. I think it's unlikely that anything can run locally on your phone (LLMs are notorious resource hogs, even for a pretty capable desktop PC; there's just not a cheap way to do them). You could use Colab or something via your phone, and there is probably a little how-to guide somewhere that shows how to do a Mistral setup on Colab. It'll take some technical skill though.
You might just bite the bullet and do $20/mo for the GPT-4 subscription also. It can also do web searches, I think, although in practice it's pretty clunky the times it's tried to do things like that for me. I'm not aware of one that does the "search the web for answers and get back to me" thing really all that perfectly or smoothly I'm sad to say.
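On the local route mentioned a paragraph up, the gpt4all Python bindings can do roughly what the desktop app does if you'd rather script it. A minimal, hedged sketch; the model file name is just an example and gets downloaded on first use.

```python
# Minimal local-generation sketch with the gpt4all Python bindings
# (pip install gpt4all). The model file name is an example; GPT4All
# downloads it on first use if it isn't already on disk.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Write a haiku about local LLMs.", max_tokens=100)
    print(reply)
```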
Why do the $20 subscription when the API pricing is much cheaper, especially if you are trying different models out? I'm currently playing about with Gemini and that's free (albeit rate limited).
100% right; unless you are using it a ton, the API pricing is likely to be cheaper.
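As a hedged illustration of how pay-per-use adds up, here's a minimal call that prints its own token usage and a rough cost estimate; the per-token prices and model name are placeholders to check against the provider's current pricing page.

```python
# Rough cost illustration for pay-per-use API access (pip install openai).
# The per-1K-token prices below are placeholders; check the provider's
# pricing page for current numbers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize what an HTTP 405 error means."}],
)

usage = resp.usage
INPUT_PRICE_PER_1K = 0.005   # placeholder USD per 1K input tokens
OUTPUT_PRICE_PER_1K = 0.015  # placeholder USD per 1K output tokens

cost = (usage.prompt_tokens / 1000) * INPUT_PRICE_PER_1K \
     + (usage.completion_tokens / 1000) * OUTPUT_PRICE_PER_1K
print(resp.choices[0].message.content)
print(f"This call used {usage.total_tokens} tokens, roughly ${cost:.4f}")
```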
And also, any recommendations on a specific GPT4 addon or is the base model pretty much perfect as is?
GPT-4 generally doesn't need fine-tuning or anything, no.
Ollama (plus a web UI if you want one, but `ollama serve` and `ollama run <model>` are all you need), then compare and contrast the various models.
I've had luck with Mistral for example
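A quick, hedged way to compare models once Ollama is running: hit its local HTTP API (default port 11434) with the same prompt for each model you've pulled. The model names below are just examples.

```python
# Compare answers from a few locally pulled Ollama models via its HTTP API.
# Assumes `ollama serve` is running and the models have been pulled
# (e.g. `ollama pull mistral`). Model names below are examples.
import requests

MODELS = ["mistral", "llama3"]
PROMPT = "In two sentences, explain what a coroutine is."

for model in MODELS:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    print(f"--- {model} ---")
    print(resp.json()["response"].strip())
```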
Most models that I've played with are only about as good as what you put into it. If you ask it the right questions in the right way, you can get pretty good results.
GPT-3.5 has worked well for me. I've also run AI on my PC locally using Ollama and lots of different models. Most do well with simple questions or requests.
Llama 3 instruct is what I've liked the most so far.
Hence the job title "prompt engineer" I guess. If you know about Soylent Green, AI is people!
Lol, prompts are important for sure. Me and my boss often talk about what you can do with ChatGPT when we use it at work and what kind of prompts we use.
ChatGPT 4o is the top dog right now, by a lot.
It's GPT-4 to tell the truth.
Not sure it'll do the tasks you list at the start but it's the front runner.
No AI is at this level; they are a massive security risk, and none of them are "smart".
pay if it's worth it
It isn't.
Not sure about paid models, but Claude Sonnet 3.5 is so good it's not even funny. I've had arguments with it where it was right in the end, and it never even conceded that I was right (because I wasn't; I ended up looking it up afterwards). I've never seen that with any other model.
Humans. For the best experience, get some third world contractor. Costs more tho.
Reducing people from third world countries to "language models" as an attempt to critique AI ain't it.