When you say “aware of the delineation between self and not self”, what do you mean by “aware”? I’ve found that it’s often a circular definition, maybe with a few extra words thrown in to obscure the chain, like “know”, “comprehend”, “perceive”, etc.
Also, is a computer program that knows which process it is self-aware? If not, why not? It’s such a simple case, and yet without a concrete definition it’s hard to actually reject it.
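For concreteness, I mean something as trivial as this (a toy Python sketch, nothing more):

```python
import os

# A program that "knows which process it is" in the most trivial sense:
# it can read its own process ID and act on that fact.
my_pid = os.getpid()
print(f"I am process {my_pid}")
```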
On the other extreme, are we truly self-aware? As you point out, our bodies just kind of do stuff without our knowledge. Would an alien species laugh at the idea of us being self-aware, seeing in us just faint glimmers of self-awareness compared to them, much as the computer program seems to us?
Anything dealing with perception is going to be somewhat circular and vague. Qualia are the elements of perception and by their nature it seems they are incommunicable by any means.
Awareness, in my mind, deals with the lowest level of abstract thinking: can you recognize a thing, compare and contrast it with other things, and learn how it relates to them on a basic level?
You could hardcode a computer program to recognize its own process. But it’s not comparing itself to other processes and experiencing similarities and dissimilarities. Furthermore, unless it has some way to affect at least the processes that are not itself, it can’t really learn its own features and abilities.
A cat can tell its paws are its own, likely in part because it can move them. If you gave a cat shoes, do you think the cat would think the shoes are part of itself? No. And yet the cat can learn that, in certain ways, it can act as though the shoes are part of itself, the same way we can recognize that tools are not us but are within our control.
We notice that there is a self, which is unlike our environment in that it does not control the environment directly; then there are the actions of the self, which can influence or be influenced by the environment directly; and there are things which we do not directly control at all.
That is the delineation I’m talking about. It’s more a delineation of control than just “this is me and that isn’t”, because the term “self” is arbitrary.
We as social beings correlate self with identity, with the way we think we act compared to others. But to be conscious of one’s own existence only requires that you can sense your own actions and learn to delineate between this thing that appears to be within your control and those things that are not. Your definition of self depends on where you’ve learned to think the lines are.
If you created a computer program capable of learning patterns in the behavior of its own process(es), and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process ID is simply like adding another built-in sense; it doesn’t create conscious self-awareness.
Furthermore, on the note of aliens, I think a better question to ask is “what do you think ‘self’ is?”, because that will determine your answer. If you think a system must be consciously aware of all the processes that make it up, I doubt you’ll ever find a life form like that. Those systems are subconscious because that’s the most efficient way to be. Besides, those processes are mostly useful only to the self internally, and not so much to the rest of reality.
To be aware of the self is to be aware of how the self relates to that which is not part of it. Knowing more about your own processes could help with this if you experienced those same processes outside of the self (like noticing how other members of your society behave similarly to you), but fundamentally, you’re not necessarily creating a more accurate sense of self-awareness just by having more senses for your automatic bodily processes.
It is equally important, if not more so, to experience more of what is not the self than to experience more of what would be described as self, because it’s what’s outside that you use to measure and understand what’s inside.
I made another comment pointing this out for a similar definition, but OK: so awareness is being able to “recognize”, and “recognize” in turn means “to realize or discover the nature of something” (using Wiktionary, but pick your favorite dictionary), and “realize” means “to become aware of or understand”, completing the loop. I point that out because IMO the circularity means the whole thing is useless from an empirical perspective and should be discarded. I also think qualia are just philosophical navel-gazing, for what it’s worth, much like common definitions of “awareness”. I think it’s perfectly possible in theory to read someone’s brain to see how something is represented, then twiddle someone else’s brain in the same way to cause the same experience, or compare the two to see if they’re equivalent.
As far as a computer process recognizing itself: it certainly can compare itself to other processes. It can, e.g., iterate through the list of processes and kill everything that isn’t itself. It can look at other processes and say “this other process consumes more memory than I do”. It’s super primitive and hardcoded, but why doesn’t that count? I also think learning is separate but related. If we take the definition of “consciousness” as a world model or representation, learning is simply how you expand that world model based on input. Something can have a world model without any ability to learn, such as a chess engine: it models chess very well, better than humans, but it is incapable of learning anything else, i.e. expanding its world model beyond chess.
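Concretely, I mean something like this rough Python sketch (it assumes the third-party psutil library is available, and I’ve left the actual killing commented out for obvious reasons):

```python
import os
import psutil  # third-party process-inspection library; purely illustrative

me = psutil.Process(os.getpid())
my_mem = me.memory_info().rss

for proc in psutil.process_iter():
    if proc.pid == me.pid:
        continue  # "this one is me" -- spare it
    try:
        other_mem = proc.memory_info().rss
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue  # some processes vanish or refuse inspection; skip them
    if other_mem > my_mem:
        print(f"process {proc.pid} uses more memory than I do")
    # proc.kill()  # the "kill everything that isn't me" version of the loop
```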
If you created a computer program capable of learning patterns in the behavior of its own process(es), and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process ID is simply like adding another built-in sense; it doesn’t create conscious self-awareness.
I think we largely agree then, other than my quibble about learning not being necessary. A lot of people want to reject the idea of machines being conscious, but I’ve reached the “Sure, why not?” stage. For it to be a useful definition, though, we need to go beyond that and start asking questions like “Conscious of what?”
I think you’re getting hung up on the words rather than the content. While our definitions of terms may be rather vague, the properties I described are not circularly defined.
To be aware of the difference between self and not-self means to be able to sense stimuli originating from the self, sense stimuli not originating from the self, and learn relationships between them.
As long as aspects of the self (like current and past thoughts) can be sensed (encoded into a representation the mind can work with directly; in our case, neural spike trains), there are senses which compare those sensations with other or past sensations, and finally the mind can learn patterns in those encodings (as spiking neural nets can), then it should be possible for conscious awareness to arise. (If you’re curious about the kind of learning that needs to happen, look into Tolman-Eichenbaum Machines, though non-spiking ones aren’t really capable of self-learning.)
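If it helps, here is a deliberately toy Python sketch of the bare-bones version of that definition: an agent that senses its own actions, senses external signals, and learns which signals track its actions. It is nothing like a spiking net or a TEM, just an illustration of “learn the relationship between self-originated and not-self stimuli”:

```python
import random

# Toy "agent": it acts, senses two signals, and learns which one tracks its
# own actions -- i.e. which stimulus originates from the self.
corr_self = corr_world = 0.0
for _ in range(1000):
    action = random.choice([-1.0, 1.0])          # stimulus originating from the self
    signal_a = action + random.gauss(0.0, 0.3)   # follows the agent's action
    signal_b = random.gauss(0.0, 1.0)            # independent of the agent
    # crude online learning: running estimate of how each signal co-varies with the action
    corr_self += 0.01 * (action * signal_a - corr_self)
    corr_world += 0.01 * (action * signal_b - corr_world)

print(f"signal A tracks my actions: {corr_self:.2f}, signal B: {corr_world:.2f}")
# signal A ends up near 1.0, signal B near 0.0 -- a minimal delineation of
# "within my control" versus "not mine".
```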
I hope that’s a clear enough “empirical” explanation for you.
As for qualia, you are entirely wrong. What you describe would not prove that my raw experience of green is the same as your green, only that we both have qualia which can arise from the color green. You can say that it’s not pragmatic to think about that which cannot be known, and I’ll agree that qualia must be represented in a physical way and thus be recreatable in that person’s brain, but the complexity of human brains precludes the ability to define what actually is the qualia and what are other thoughts. The differences between individuals likely preclude the ability to say “oh, when these neurons are active it means this”, because other people have different neural structures. Similar? Absolutely. Similar enough that for any experience you could find exactly the same neurons firing in exactly the same way as in someone else? Absolutely not.
Your last statements make it seem like you don’t understand the difference between learning and knowledge. LLMs don’t learn when you use them. Neither do most modern chess models. They don’t learn at all unless they are being trained by an outside source, which gives them an input, expects an output, and then computes the weight changes needed to get closer to the answer via gradient descent.
A typical ANN trained this way does not learn from new experiences; furthermore, it is not capable of referencing its own thoughts, because it doesn’t have any.
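To make the distinction concrete, here is a toy sketch (plain Python, a single weight, nothing like a real network) of what “trained by an outside source via gradient descent” means, and why nothing changes at inference time:

```python
# Training: an external loop supplies (input, expected output) pairs and
# adjusts the weight via gradient descent on the squared error.
w = 0.0
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # the "outside source" wants y = 2x
for _ in range(100):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x    # d/dw of (w*x - y)^2
        w -= 0.01 * grad             # the weight change is computed outside the model

# Inference: the weight is frozen. "Using" the model changes nothing about it,
# no matter how many times it is called.
print(w, w * 5.0)   # w ends up close to 2.0 and stays there
```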
The self is that which acts. Did you know LLMs aren’t capable of being aware that they took any action? Are you aware chess engines can’t do that either? There is no comparison mechanism between what was, what is, and what made that change. They cannot be self-aware, in the same way that a program hardcoded to kill processes other than itself is unaware. They literally lack any direct sense of their own actions. Once again, you not only need to be able to sense that information, but the program then needs a sense which compares that sensation to other sensations and learns the differences, changing the way it responds to those stimuli. You need learning.
I don’t reject the idea of machines being conscious; in fact, I’m literally trying to make a conscious machine just to see if I can (which, yeah, sounds insane to most people). But I do not think we agree on much else, because learning is absolutely essential for anything to be capable of a conscious action.