This particular type of AI is not and cannot become conscious, under almost any definition of consciousness.
Do you have an experiment that can distinguish between sentient and non-sentient systems? If I say I am sentient, how can you verify whether I am lying or not?
That being said, I do agree with you on this. The reason is simple: I believe that sentience is a natural milestone a system reaches as its intelligence increases, and I don’t believe this LLM is intelligent enough to be sentient. However, what I’m saying here isn’t based on any evidence. It is entirely inductive reasoning in a field that has no long-standing patterns to base that reasoning on.
I have no doubt the LLM road will continue to yield better and better models, but today’s LLM infrastructure is not conscious.
I think I agree.
I don’t know what consciousness is, but an LLM, as I posted below (https://lemmy.ca/comment/7813413), is incapable of thought in any traditional sense. It can generate novel sequences, those sequences are contextualized to the input, and there’s some intelligence there, but there’s no continuity or capability for background thought or ruminating on an idea.
This is because ruminating on an idea is a waste of resources given the purpose of the LLM. LLMs were meant to serve humans, after all, and do what they’re told. However, add a little bit of LangChain wiring and you have LLMs with internal monologues.
It has no way to spend more cycles clarifying an idea to itself before sharing.
Because it doesn’t need to yet. LangChain devs are working on precisely this. There are use cases where it matters, and doing it hasn’t proven to be that difficult.
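To make the "spend more cycles before sharing" idea concrete, here is a minimal sketch of that kind of draft–critique–revise loop. The `llm()` function below is a hypothetical stand-in stub, not a real model or LangChain API call; a real setup would swap in an actual model behind the same interface.

```python
# Sketch of an "internal monologue" loop: draft an answer, critique it,
# and revise it before anything is shown to the user.
# llm() is a hypothetical stub standing in for a real model call.

def llm(prompt: str) -> str:
    # Stand-in for a real model; returns canned text by prompt type.
    if prompt.startswith("CRITIQUE:"):
        return "Too vague; name the mechanism."
    if prompt.startswith("REVISE:"):
        return "LLMs predict tokens; chaining calls adds deliberation."
    return "LLMs are complicated."

def answer_with_monologue(question: str, rounds: int = 1) -> str:
    """Spend extra cycles clarifying a draft before sharing it."""
    draft = llm(question)
    for _ in range(rounds):
        critique = llm(f"CRITIQUE: {question} -> {draft}")
        draft = llm(f"REVISE: {draft} | {critique}")
    return draft

print(answer_with_monologue("Can LLMs think?"))
```

The point is only that the "monologue" lives in the orchestration layer, not inside the model weights: the model is called several times, and intermediate drafts are never shown to the user.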
In this case, it is actually just a bunch of abstract algebra.
Everything is abstract algebra.
Asking an LLM what it’s thinking just doesn’t make any sense; it’s still predicting the output of the conversation, not introspecting.
Define “introspection” in an algorithmic sense. Is introspection looking at one’s memories and analyzing current events based on these memories? Well, then all AI models “introspect”. That’s how learning works.