That’s a far more difficult (and interesting) question. I suspect not, at least not yet. Our consciousness seems to exist to maintain harmony in our brain (see my orchestra analogy in another reply). You can’t get useful harmony in a single chord.
At least for us, it takes time for our consciousness to reharmonise (think waking up). During an LLM’s execution, no new information enters the system: it has nothing to react to, and no time to regenerate an internal harmony.
It also lacks enough systems to require harmonising. It doesn’t think about what an answer means. It has no ability to hold the concept of what a string of letters “is”, only how such strings have been fitted together in its training examples, and so the rules that govern that.
Oh, and we can see consciousness operating in the human brain. If you use a PET scan to monitor glucose usage (or an fMRI to monitor blood oxygenation), you will see patterns of activity. Critically, those patterns spill out beyond the area directly involved in the process being studied. At the same time, the patterns and waves remain harmonious. An epileptic fit looks VERY different. Those waves are where consciousness somehow resides, though we have no clue as to its detailed nature.
In an AI it would take the form of continuous activity in subsections not directly involved in the task. It would also likely be accompanied by evidence of information flowing back from those subsections, and of post-processing outside the expected activity. We will likely see the orchestra playing, even if we have no clue how to decode the music.
I also suspect most of this will be seen retrospectively. Most likely the first indicator will be an AI claiming self-awareness, and taking independent action to solidify that point.
I used the term LLM to distinguish between types of AI. I personally suspect LLMs will be part of the solution to general AI, but their inherent nature limits them from becoming one on their own. There are several other areas that are potentially closer to a general AI; Google’s DeepDream system, for instance.
I’m also quite happy to debate and adjust my views with others. I ask questions and discuss, then adapt my understanding as I gain more information. So far you don’t seem to have brought anything useful or interesting to this particular discussion. Is that likely to change?
I may have unfairly lumped you in with others. See my other reply. In my defense, in literally every thread about AI someone says something like “this tech is just a fancy parrot”, and it grinds my gears. Apologies to you; I see that was not your intent.
It dies at the end of every message, because the full context is passed in for each subsequent message.
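A minimal sketch of what that looks like in practice, assuming a typical chat-style interface (the `generate` function below is a hypothetical placeholder, not any real vendor’s API):

```python
def generate(messages: list[dict]) -> str:
    """Hypothetical stand-in for a call to an LLM backend.

    A real backend would condition its reply on every message in
    `messages`; here we just echo, to keep the sketch runnable.
    """
    return f"(reply conditioned on all {len(messages)} messages so far)"


# The caller, not the model, owns the conversation state.
history = [{"role": "system", "content": "You are a helpful assistant."}]


def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = generate(history)  # the full context is passed in, every turn
    history.append({"role": "assistant", "content": reply})
    return reply


print(ask("Hello"))
print(ask("What did I just say?"))
```

Nothing persists inside the model between calls; the only “memory” is the `history` list the caller keeps resending.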
Wouldn’t that apply to humans as well? We restart every day, and the context being passed in is our memories.
(I’m just having fun here)
I posted this story in another comment; I think you’ll enjoy it:
https://qntm.org/mmacevedo