this post was submitted on 26 Jul 2023
859 points (96.4% liked)

Thousands of authors demand payment from AI companies for use of copyrighted works

Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

[–] Buttons@programming.dev -3 points 1 year ago (3 children)

I think you underestimate the reasoning power of these AIs. They can write code, they can teach math, they can even learn math.

I've been using GPT4 as a math tutor while learning linear algebra, and I also use a textbook. The textbook told me that (to write it out) "the column space of matrix A is equal to the column space of matrix A times its own transpose". So I asked GPT4 if that was true and it said no, GPT disagreed with the textbook. This was apparently something GPT had not memorized; it was not just regurgitating sentences. I told GPT I had seen it in a textbook, and the AI said "sorry, the textbook must be wrong". I then explained the mathematical proof to the AI, and the AI apologized, admitted it had been wrong, and agreed with the proof. Only after hearing the proof did the AI agree with the textbook. This is some pretty advanced reasoning.
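
For anyone curious, the proof I walked it through was roughly the following (my own sketch of the standard argument, so double-check the details yourself):

```latex
% Claim: Col(A A^T) = Col(A) for any real M-by-N matrix A.
% (1) Col(A A^T) is contained in Col(A), since (A A^T)y = A (A^T y).
% (2) (A A^T)y = 0  =>  y^T A A^T y = ||A^T y||^2 = 0  =>  A^T y = 0,
%     so rank(A A^T) = rank(A^T) = rank(A).
% (3) A subspace of Col(A) with the same dimension as Col(A) must equal Col(A).
\[
  \operatorname{Col}(AA^{\mathsf T}) \subseteq \operatorname{Col}(A),
  \qquad
  \operatorname{rank}(AA^{\mathsf T}) = \operatorname{rank}(A)
  \;\Longrightarrow\;
  \operatorname{Col}(AA^{\mathsf T}) = \operatorname{Col}(A).
\]
```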

I performed that experiment a few times and it played out mostly the same. I experimented with giving the AI a flawed proof (I purposely made mistakes in the mathematical proofs), and the AI would call out my mistakes and would not be convinced by faulty proofs.

A standard that judged this AI to have "no understanding of any concepts whatsoever" would reach the same conclusion about most humans if applied to them.

[–] unlimitedolm_sjw@sh.itjust.works 6 points 1 year ago* (last edited 1 year ago) (1 children)

That doesn't prove that GPT is reasoning; its model predicts that those responses are the most likely given the messages you're sending it. It's read thousands of actual conversations with people stating something incorrect, then having it explained to them and coming around and admitting they were wrong.

I've seen other similar cases where the AI is wrong about something, and when it's explained, it just doubles down, because humans do that type of thing too, refusing to admit they're wrong.

The way it's designed means that it cannot reason in the same way humans experience it. It can simulate a likely conversation someone would have if they could reason.

[–] Buttons@programming.dev -2 points 1 year ago* (last edited 1 year ago)

You know, I also had to experience thousands of conversations before I could mimic a conversation. Maybe I'm just slow, but it took like a full year of dedicated practice before I could speak a single word. Eventually I experienced so many conversations and became so good at mimicking the conversations I had heard from others that I started having actual conversations.

Most anecdotes about AI getting things wrong are from GPT3. GPT4 still has flaws of course, just like people. This technology didn't exist at the beginning of the year.

Here's an example while learning linear algebra where I tried to throw it off:

I say> So I'll say A_{M x N} to mean a matrix of size M x N. Similarly, I'll also say x_{M x 1} to mean a vector of a certain length. The result of A_{M x N} x_{N x 1} is a linear combination of the columns in A. A has N columns. The result is a M x 1 matrix (or vector) in M dimensional space. Please be very brief, only confirming whether I am correct or incorrect until I explicitly ask for more details.

GPT4 says> Yes, your understanding is correct. The product of an M x N matrix A and an N x 1 vector x is indeed an M x 1 vector, and it is a linear combination of the columns of A.

But then I try to trick it, just for fun, and I very subtly change my prompt. See if you can spot the difference.

I say> So I'll say A_{M x N} to mean a matrix of size M x N. Similarly, I'll also say x_{M x 1} to mean a vector of a certain length. The result of A_{M x N} x_{N x 1} is a linear combination of the columns in A. A has N columns. The result is a N x 1 matrix (or vector) in N dimensional space. Please be very brief, only confirming whether I am correct or incorrect until I explicitly ask for more details.

GPT says> Almost correct. The result of A_{M x N} x_{N x 1} is actually an M x 1 vector (or matrix), not N x 1. The resulting vector lives in the column space of A, which is a subspace of R^M, not R^N.

I guess everyone can judge for themselves whether that's the result of a statistical model or genuine understanding. (And to be clear, the mathematical advice it's giving here is correct.)
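
If anyone wants to check the dimension claim themselves, here's a quick NumPy sketch (the sizes M = 4, N = 3 are just ones I picked for illustration, not anything from the conversation):

```python
import numpy as np

# Arbitrary sizes, just for illustration (any M, N would do).
M, N = 4, 3

A = np.random.rand(M, N)   # A_{M x N}
x = np.random.rand(N)      # x_{N x 1}

y = A @ x                  # the product A_{M x N} x_{N x 1}
print(y.shape)             # (4,) -- an M x 1 vector, not N x 1

# The same result written explicitly as a linear combination of A's columns.
combo = sum(x[j] * A[:, j] for j in range(N))
print(np.allclose(y, combo))   # True
```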

[–] foo@programming.dev 4 points 1 year ago (1 children)

They can write code and teach maths because they've read people doing the exact same stuff

[–] Buttons@programming.dev 2 points 1 year ago* (last edited 1 year ago)

Hey, that's the same reason I can write code and do maths!

I'm serious, the only reason I know how to code or do math is because I learned from other people, mostly by reading. It's the only reason I can do those things.

[–] Telodzrum@lemmy.world 2 points 1 year ago (1 children)

It's just a really big autocomplete system. It has no thought, no reason, no sense of self or anything, really.

[–] Buttons@programming.dev 1 points 1 year ago* (last edited 1 year ago)

I guess I agree with some of that. It's mostly a matter of definition though. Yes, if you define those terms in such a way that AI cannot fulfill them, then AI will not have them (according to your definition).

But yes, we know the AI is not "thinking" or "scheming", because it just sits there doing nothing when it's not answering a question. We can see that no computation is happening. So no thought. Sense of self... probably not, depends on definition. Reason? Depends on your definition. Yes, we know they are not like humans, they are computers, but they are capable of many things which we thought only humans could do 6 months ago.

Since we can't agree on definitions, I will simply avoid all those words and say that state-of-the-art LLMs can receive text and make free-form, logical, and correct conclusions based upon that text at a level roughly equal to human ability. They are capable of combining ideas that have never been combined by humans, yet are satisfying to humans. They can invent things that never appeared in their training data, yet make sense to humans. They are capable of quickly adapting to new data within their context: you can give them information about a programming language they've never encountered before (not in their training data), and they can make correct suggestions about that programming language.

I know you can find lots of anecdotes about LLMs / GPT doing dumb things, but most of those were about GPT3, which is no longer state-of-the-art.