It all depends on what you mean by “conscious”, which IMO doesn’t fall under “Maybe everything is conscious” because that’s wrongly assuming that “conscious” is a binary property instead of a spectrum that humans and plants are both on while clearly being at vastly different levels. Maybe I just have a much looser definition of “conscious” than most people, but why don’t tropisms count as a very primitive form of consciousness?
Personally, I’m more a fan of the binary/discrete idea. I tend to go with the following definitions:
Animate: capable of responding to stimuli
Sentient: capable of recognizing experiences and debating the next best action to take
Conscious: aware of the delineation between self and not self
Sapient: capable of using abstract thinking and logic to solve problems without relying solely on memory or hardcoded actions (being able to apply knowledge abstractly to different but related problems)
If you could prove that plants have the ability to choose to scream rather than it being a reflexive response, then they would be sentient. Like a tree “screaming” only when other trees are around to hear.
If I cut myself, my body will move away reflexively and it will scab over the wound. My immune system might “remember” some of the bacteria or viruses that get in and respond accordingly. But I don’t experience it as an action under my control. I’m not aware of all the work my body does in the background. I’m not sentient because my body can live on its own and respond to stimuli; I’m sentient because I am aware that stimuli exist and can choose how to react to some of them.
If you could prove that the tree as a whole, or some part of a centralized control system in the tree, could recognize the difference between itself and another plant or some mycorrhiza, and choose to respond to those encounters, then it would be conscious. But it seems more likely that the sharing of nutrients with others and the networking of the forest are not controlled by the tree but by the natural reflexive responses built into its genome.
Also, if something is conscious, then it will exhibit individuality. You should be able to identify changes in behavior due to the self-referential systems required for the recognition of self. Plants and fungi grown in different circumstances should respond differently to the same stimuli.
If you taught a conscious fungus to play chess and then put it in a typical environment, you would expect to see it respond very differently than another member of its species who was not cursed with the knowledge of chess.
If a plant is conscious, you should be able to teach it to collaborate in ways that it normally would not, and again, after placing it in a natural environment you should see it attempt those collaborations while its untrained peers would not.
Damn, now I want to do some biology experiments…
When you say “aware of the delineation between self and not self”, what do you mean by “aware”? I’ve found that it’s often a circular definition, maybe with a few extra words thrown in to obscure the chain, like “know”, “comprehend”, “perceive”, etc.
Also, is a computer program that knows which process it is self aware? If not, why? It’s so simple, and yet without a concrete definition it’s hard to really reject that.
On the other extreme, are we truly self aware? As you point out, our bodies just kind of do stuff without our knowledge. Would an alien species laugh at the idea of us being self-aware, having just faint glimmers of self awareness compared to them, much like the computer program seems to us?
Anything dealing with perception is going to be somewhat circular and vague. Qualia are the elements of perception and by their nature it seems they are incommunicable by any means.
Awareness in my mind deals with the lowest level of abstract thinking. Can you recognize this thing and both compare and contrast it with other things, learning about its relation to other things on a basic level?
You could hardcode a computer to recognize its own process. But it’s not comparing itself to other processes, experiencing similarities and dissimilarities. Furthermore, unless it has some way to change at least the other processes that are not itself, it can’t really learn its own features/abilities.
A cat can tell its paws are its own, likely in part because it can move them. If you gave a cat shoes, do you think the cat would think the shoes are part of itself? No. And yet the cat can learn that in certain ways it can act as though the shoes are part of itself, the same way we can recognize that tools are not us but are within our control.
We notice that there is a self that is unlike our environment in that it does not control the environment directly, and then there are the actions of the self that can influence or be influenced directly by the environment. And that there are things which we do not control at all directly.
That is the delineation I’m talking about. It’s more the delineation of control than just “this is me and that isn’t” because the term “self” is arbitrary.
We as social beings correlate self with identity, with the way we think we act compared to others, but to be conscious of one’s own existence only requires that you can sense your own actions and learn to delineate between this thing that appears within your control and those things that are not. Your definition of self depends on where you’ve learned to think the lines are.
If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process id is simply like adding another built in sense; it doesn’t create conscious self awareness.
Furthermore, on the note of aliens, I think a better question to ask is “what do you think ‘self’ is?” Because that will determine your answer. If you think a system must be consciously aware of all the processes that make it up, I doubt you’ll ever find a life form like that. The reason those systems are subconscious is because that’s the most efficient way to be. Furthermore, those processes are mostly useful only to the self internally, and not so much the rest of reality.
To be aware of self is to be aware of how the self relates to that which is not part of it. Knowing more about your own processes could help with this if you experienced those same processes outside of the self (like noticing how other members of your society behave similarly to you), but fundamentally, you’re not necessarily creating a more accurate idea of self awareness just by having more senses of your automatic bodily processes.
It is equally important, if not more so, to experience more that is not the self rather than to experience more of what would be described as self, because it’s what’s outside that you use to measure and understand what’s inside.
I made another comment pointing this out for a similar definition, but OK: so awareness is being able to “recognize”, and “recognize” in turn means “To realize or discover the nature of something” (using Wiktionary, but pick your favorite dictionary), and “realize” means “To become aware of or understand”, completing the loop. I point that out because IMO the circularity means the whole thing is useless from an empirical perspective and should be discarded. I also think qualia are just philosophical navel-gazing, for what it’s worth, much like common definitions of “awareness”. I think it’s perfectly possible in theory to read someone’s brain to see how something is represented and then twiddle someone else’s brain in the same way to cause the same experience, or compare the two to see if they’re equivalent.
As far as a computer process recognizing itself, it certainly can compare itself to other processes. It can e.g. iterate through the list of processes and kill everything that isn’t itself. It can look at processes and say “this other process consumes more memory than I do”. It’s super primitive and hardcoded, but why doesn’t that count? I also think learning is separate but related. If we take the definition of “consciousness” as a world model or representation, learning is simply how you expand that world model based on input. Something can have a world model without any ability to learn, such as a chess engine. It models chess very well, better than humans, but is incapable of learning anything else, i.e. expanding its world model beyond chess.
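To make that concrete, here’s a minimal sketch of the hardcoded version (Python, using the third-party psutil library; the “kill everything that isn’t me” part is left as a comment so it’s safe to run):

    import os
    import psutil  # third-party: pip install psutil

    me = os.getpid()
    my_rss = psutil.Process(me).memory_info().rss

    for proc in psutil.process_iter(["pid", "name", "memory_info"]):
        if proc.info["pid"] == me:
            continue  # the single hardcoded "self" check
        mem = proc.info["memory_info"]
        if mem is None:  # some processes won't let us look
            continue
        relation = "more" if mem.rss > my_rss else "less"
        print(f"{proc.info['name']} uses {relation} memory than I do")
        # proc.kill() here would give the "kill everything that isn't me" version

One if-statement’s worth of self-recognition, which is exactly why I ask why it doesn’t count.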
If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness.

I think we largely agree then, other than my quibble about learning not being necessary. A lot of people want to reject the idea of machines being conscious, but I’ve reached the “Sure, why not?” stage. To be a useful definition though, we need to go beyond that and start asking questions like “Conscious of what?”
I think you’re getting hung up on the words rather than the content. While our definitions of terms may be rather vague, the properties I described are not cyclically defined.
To be aware of the difference between self and not self means to be able to sense stimuli originating from the self, sense stimuli not from the self, and learn relationships between them.
As long as aspects of the self (like current and past thoughts) can be sensed (encoded into a representation the mind can work with directly; in our case, neural spike chains), there are senses which compare those sensations with other or past sensations, and the mind can learn patterns in those encodings (as spiking neural nets do), then it should be possible for conscious awareness to arise. (If you’re curious about the kind of learning that needs to happen, look into Tolman-Eichenbaum machines, though non-spiking ones aren’t really capable of self-learning.)
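If it helps, here’s a toy illustration of that loop, with every stimulus name and probability invented for the example: an agent that senses a stream of events, knows when it acted, and learns statistically which stimuli are self-caused:

    import random
    from collections import defaultdict

    # stimulus -> [times seen right after my own action, times seen otherwise]
    counts = defaultdict(lambda: [0, 0])

    for step in range(10_000):
        acted = random.random() < 0.5  # did "I" act this step?
        if acted:
            stimulus = "buzz" if random.random() < 0.9 else "hum"  # my actions mostly cause "buzz"
        else:
            stimulus = "hum" if random.random() < 0.8 else "buzz"  # the world mostly produces "hum"
        counts[stimulus][0 if acted else 1] += 1

    for stimulus, (with_action, without_action) in counts.items():
        p_self = with_action / (with_action + without_action)
        print(f"{stimulus}: P(self-caused) ~ {p_self:.2f}")

That bare statistical delineation of self from not-self is the minimum I mean; a real mind runs the same kind of comparison over its own thoughts too.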
I hope that’s a clear enough “empirical” explanation for you.
As for qualia, you are entirely wrong. What you describe would not prove that my raw experience of green is the same as your green, only that we both have qualia which can arise from the color green. You can say that it’s not pragmatic to think about that which cannot be known, and I’ll agree that qualia must be represented in a physical way and thus be recreatable in that person’s brain, but the complexity of human brains actually precludes the ability to define what actually is the qualia and what are other thoughts. The differences between individuals likely preclude the ability to say “oh, when these neurons are active it means this,” because other people have different neural structures. Similar? Absolutely. Similar enough that for any experience you could find exactly the same neurons that would fire the same way as in someone else? Absolutely not.
Your last statements make it seem like you don’t understand the difference between learning and knowledge. LLMs don’t learn when you use them. Neither do most modern chess models. They actually don’t learn at all unless they are being trained by an outside source that gives them an input, expects an output, and then computes the weight changes needed to get closer to the answer via gradient descent.
A typical ANN trained this way does not learn from new experiences; furthermore, it is not capable of referencing its own thoughts, because it doesn’t have any.
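If the split isn’t clear, here it is in a dozen lines, with a one-weight model and made-up numbers:

    # Made-up data: inputs x and targets y that are roughly 3*x.
    data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2)]
    w, lr = 0.0, 0.01

    # Training: an OUTSIDE loop nudges w downhill on the squared error.
    for epoch in range(500):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)**2
            w -= lr * grad

    # Use: the model answers, but w never changes. No learning happens here.
    print(f"learned w = {w:.2f}")          # ~3
    print(f"prediction for x=4: {w*4:.2f}")

Once the outside trainer stops, the weight is frozen; using the model a million times afterwards teaches it nothing.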
The self is that which acts. Did you know LLMs aren’t capable of being aware they took any action? Are you aware chess engines can’t do that either? There is no comparison mechanism between what was, what is, and what made that change. They cannot be self-aware, the same way a program hardcoded to kill processes other than itself is unaware. They literally lack any direct sense of their own actions. Once again, you not only need to be able to sense that information; the program then needs a sense which compares that sensation to other sensations and learns the differences, changing the way it responds to those stimuli. You need learning.
I don’t reject the idea of machines being conscious; in fact I’m literally trying to make a conscious machine just to see if I can (which, yeah, to most people sounds insane). But I do not think we agree on much else, because learning is absolutely essential for anything to be capable of a conscious action.
Conscious: aware of the delineation between self and not self

I don’t know whether this applies to plants and fungi, but it applies to just about every animal. There’s a minimum basic sense of self required in distinguishing one’s own movements from the approach of an attacker. Even earthworms react differently when they touch something vs when something touches them.
Yes most definitely, I’d imagine most animals are conscious.
In fact, my definition of sapience means several animals, like crows, parrots, and rats, are capable of it.
It could be several spectra, and what we call consciousness could be a series of multiple phenomena.
I’m inclined to believe every dynamic interconnected system is “conscious” to some degree. Not 1:1 with human consciousness obviously, but the same base phenomenon.
The main problem is that there aren’t very good metrics to distinguish how primitive a consciousness is. Where do you draw the line between consciousness and reflex? Is each of your cells conscious in its own impossibly tiny way?
The line is it being a conscious effort.
Reflexes aren’t a conscious effort.
Sure, how do you detect conscious effort from outside?
The decision would be made by measurable neurological means not chemical ones.
What measurable neurological means?
Yeah, reflexes could be considered a conscious effort of a part of your body. Or your immune system might be considered “conscious” of a virus that it’s fighting off. What’s a testable definition of “conscious” that excludes those?
I think that “conscious” is also a relative term, i.e. “Conscious of what?” A cell in your body could be said to be conscious of a few things, like its immediate environment. It’s clearly not conscious of J-pop though. But to be fair to it, none of us are “really” conscious of say Sagittarius B2 or an organism living at the bottom of the ocean.
The best way I’ve found to think about it is that consciousness can be thought of as a world model. The bigger the world model, the more consciousness it could be said to have. Some world models might be smaller, but contain things that bigger ones don’t though. Worms don’t understand what an airplane is, but humans also don’t really understand the experience of wriggling through soil.
I think the big dividing line between what many animals do and what cells or plants do is the ability to react in different ways by considering stimuli in conjunction with memory, and then the next big divide is metacognition. I feel like there should be concrete words for these categories. “Sentient” and “conscious” have pretty much lost meaning at this point, as demonstrated by this discussion’s existence.
I will call them reactive awareness, decisive awareness, and reflective awareness in the absence of a better idea.
Cells are very diverse, though. Some can get over your first divide.
That’s not a problem. The idea is to define practical categories along the spectrum of consciousness so that they can be discussed without having to re-define terms prior to every discussion. There’s no reason any given organism should or shouldn’t fall into a particular category except for its properties directly regarding that category.
“Conscious” means being aware of oneself, one’s surroundings, thoughts, or feelings, being awake, or acting with deliberate intention, like a “conscious effort”. It refers to subjective experience and internal knowledge, differentiating from unconsciousness (sleep, coma).
It’s a spectrum, sure. But the spectrum is between ants and humans; not animals and plants.
What does “aware” mean, or “knowledge”? I think those are going to be circular definitions, maybe filtered through a few other words like “comprehend” or “perceive”.
Does a plant act with deliberate intention when it starts growing from a seed?
To be clear, my beef is more with the definition of “conscious” being useless and/or circular in most cases. I’m not saying “woah, what if plants have thoughts dude” as in the meme, but whatever definition you come up with, you have to evaluate why it does or doesn’t include plants, simple animals, or AI.
Aware means it has a sense of self. They are circular because we use these words to define how that is perceived.
Plants do not act deliberately when they do anything, because they do not have a sense of self and are not conscious.
If you don’t think plants have thoughts then you agree they are not conscious.
What do “sense” and “perceived” mean? I think they both loop back to “aware”, and the reason I point that out is that circular definitions are useless. How can you say that plants lack a sense of self and consciousness, if you can’t even define those terms properly? What about crown shyness? Trees seem to be able to tell the difference between themselves and others.
As an example of the circularity, “sense” means (using Wiktionary, but pick your favorite if you don’t like it) “Any of the manners by which living beings perceive the physical world”. What does “perceive” mean? “To become aware of, through the physical senses”. So in your definition, “aware” loops back to “aware” (Wiktionary also has a definition of “sense” that just defines it as “awareness”, for a more direct route, too).
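You can even mechanize the complaint. Treating each entry as a pointer to the word it leans on (just the links quoted in this thread), following the chain loops straight back on itself:

    # Each word mapped to the word its (Wiktionary) definition leans on.
    defines = {
        "aware": "recognize",
        "recognize": "realize",   # "to realize or discover the nature of something"
        "realize": "aware",       # "to become aware of or understand"
        "sense": "perceive",      # "manners by which living beings perceive the physical world"
        "perceive": "aware",      # "to become aware of, through the physical senses"
    }

    def chain(word, seen=()):
        # Follow definitions until a word repeats (circular) or dead-ends (grounded).
        if word in seen or word not in defines:
            return list(seen) + [word]
        return chain(defines[word], seen + (word,))

    print(" -> ".join(chain("aware")))  # aware -> recognize -> realize -> aware
    print(" -> ".join(chain("sense")))  # sense -> perceive -> aware -> ... -> aware

Nothing in the chain ever bottoms out in something measurable, which is the whole problem.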
I meant that plants don’t have thoughts more in the sense of “woah, dude”, pushing back on something without any explanatory power. But really, how do you define “thought”? I actually think Wiktionary is slightly more helpful here, in that it defines “thought” as “A representation created in the mind without the use of one’s faculties of vision, sound, smell, touch, or taste”. That’s kind of getting at what I’ve commented elsewhere, trying to come up with a more objective definition based around “world model”. Basing all of these definitions on “representation” or “world model” seems to be the closest we can get to an objective definition.
Of course, that brings up the question of “What is a model?” / “What does represent mean?”. Is that just pushing the circularity elsewhere? I think not, if you accept a looser definition. If anything has an internal state that appears to correlate to external state, then it has a world model, and is at some level “conscious”. You have to accept things that many people don’t want to, such as that AI is conscious of much of the universe (albeit experienced through the narrow peephole of human-written text). I just kind of embraced that though and said “sure, why not?”. Maybe it’s not satisfying philosophically, but it’s pragmatically useful.
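And that looser definition is actually testable, at least in a crude way. A sketch with the “organism” reduced to a thermostat-style estimator, and all the noise levels invented:

    import random

    external = 20.0  # the world: some drifting quantity
    internal = 0.0   # the system's internal state (its "world model")

    history = []
    for t in range(1000):
        external += random.gauss(0, 0.5)           # world drifts
        reading = external + random.gauss(0, 2.0)  # noisy sense organ
        internal += 0.1 * (reading - internal)     # state updates toward the reading
        history.append((external, internal))

    # Pearson correlation between world and internal state, by hand.
    n = len(history)
    mean_e = sum(e for e, _ in history) / n
    mean_i = sum(i for _, i in history) / n
    cov = sum((e - mean_e) * (i - mean_i) for e, i in history)
    var_e = sum((e - mean_e) ** 2 for e, _ in history)
    var_i = sum((i - mean_i) ** 2 for _, i in history)
    print(f"correlation ~ {cov / (var_e * var_i) ** 0.5:.2f}")  # close to 1

Internal state that tracks external state: a tiny world model. Gravel scores roughly zero on a test like this, a brain scores high, and where any given plant or program lands becomes an empirical question.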
The foundational idea behind what the user is talking about is called panpsychism: the idea that consciousness or awareness is actually a fundamental quality of the universe, like fields or forces, in that it’s in everything, but only complex systems have actual thoughts.
The theory(?) states that even a single electron or proton has a state of awareness, but without any functional way to remember information or to think, it’s just some kind of flash of experience, like if you suddenly developed perpetual amnesia about literally everything… while hurtling through the universe at high speed. You would still have a conscious experience; it would just be radically limited in what that “means.”
I get the concept, but I don’t get the usefulness of it. It feels too close to people wishing The Force was real.
Guys. You are not getting your light sabers this way.
Sounds like anthropomorphism to me.
I’m not advocating for consciousness as a fundamental quality of the universe. I think that lacks explanatory power and isn’t really in the realm of science. I’m kind of coming at it the opposite way and pushing for a more concrete and empirical definition of consciousness.
So basically everything is tripping and only a few things can be legit sober?
Not sure if you know, but what you’re describing has a name: it’s called panpsychism. It is gaining some popularity, but that’s not because there’s any reason to believe in it or any evidence; it’s a fanciful idea about the universe that doesn’t really help or connect anything. I.e., panpsychism doesn’t make for a better explanation of anything than the idea that you are just a singular consciousness living in its most probable state to be able to observe or experience anything.
I’m not shooting it down; it’s one of those things we just will never know. But that’s a pretty huge list of things and possibilities, so I just don’t know if it’s more or less useful than any other philosophical view.
I don’t think I’m talking about panpsychism. To me, that’s just giving up and hand wavey. I’m much more interested in trying to come up with a more concrete, empirical definition. I think questions like “Well, why aren’t plants conscious” or “Why isn’t an LLM conscious” are good ways to explore the limits of any particular definition and find things it fails to explain properly.
I don’t think a rock or electron could be considered conscious, for example. Neither has an internal model of the world in any way.
Panpsychism seems logically more possible than the alternative. If consciousness is an emergent property of complex systems, the universe is probably conscious because it’s the most complex system there is.
It depends on whether you think consciousness is something that emerges from information exchange systems or some higher-level “thing” we don’t understand yet, and I lean towards the idea that consciousness emerges from information exchange systems. If that’s the case, then the universe, while containing massive areas of complexity, isn’t entirely exchanging information, only in isolated areas that are borrowing energy even as entropy broadly increases. I would be more open to the idea of some possibility of consciousness occurring in the hyper-low-entropy state of the very early universe, when everything was much closer together and there was enough energy to connect a whole universe’s worth of information in localized states.
Who knows what energetic structures exist within galactic super clusters? Energy is constantly exchanged in the universe.
There are massive problems with ideas like individual galaxies making up parts of some kind of galactic neural network where the information being exchanged takes place over billions of years to form some “super thought.”
Not the least of which is that it doesn’t say anything; it’s a stoner thought that for all I know is totally true, but it’s so impossible to either prove, model, or test, and it doesn’t provide a more plausible explanation for anything.
We can just as easily say that a handful of gravel I pick up may form some kind of neural network because technically they are sharing information in subtle ways. A panpsychist might say “exactly!” and I just say “Why? What does this idea explain better than the models we have now?”
The other huge problem is the expansion of the universe and the way light behaves. I am not sure the cosmic super entity is doing so hot considering how rapidly it’s tearing itself apart. Since light doesn’t really experience time either way, those “thoughts” trapped in patterns of light rays basically had one moment of thought and then dispersed so far and wide that it was essentially instant. I could easily come up with workarounds that keep the idea alive, but then at that point aren’t we just desperately trying to find God?
it’s a stoner thought that for all I know is totally true, but it’s so impossible to either prove, model, or test

That’s basically true for every hypothesis about consciousness, though. That’s why it’s called a hard problem. Like, yeah, we can map neuron activity and record what the subject says they were thinking about. But that doesn’t tell us what consciousness itself is.
And those “stoner thoughts” are how we conceptually narrow down the possibilities via internal consistency, and maybe get to something we can test. Just because we haven’t developed a test for a hypothesis doesn’t mean it’s impossible to do so. And even if a test is impossible, that doesn’t mean the hypothesis isn’t true. It just means we can’t know whether or not it’s true.
We don’t really have models to compare to. We have hypotheses, but how do you test them? Is consciousness an electromagnetic phenomenon? Is it purely mathematical? Can it exist in gravitational systems?
We know precious little about the universe. We have snippets of data about our immediate locale, and ever-changing theories about our not-so-immediate locale. We are specks on a rocky speck orbiting a fiery speck on the outer spiral arm of a bigger speck.
Maybe consciousness is a fundamental force. Maybe it is emergent and the universe thinks a billion times slower and bigger than we do. We just don’t know, and we don’t really have any way to measure one way or the other. That’s the tricky bit about subjective experience.
I don’t think it’s any more “desperate” than any other theory. The only default position is solipsism: mine is the only real consciousness, and all the rest of you could be inventions of my mind or clever automatons. Once you start generalizing more than that, any line is kinda arbitrary. You either wind up at the universe, or you have to come up with a good reason to stop; and I don’t think we have the physics to confidently place that line.
I think the definition of consciousness as “internal state that observably correlates to external state” would clarify here. Gravel wouldn’t be conscious, because it has no internal state that we can point to and say it correlates to external state. Galaxies and the universe don’t either, as far as we can tell. Galaxies don’t have internal state that represents e.g. other galaxies, unless you count the humans inside them, and it would be more proper IMO to limit the definition to the minimum amount of state possible. You don’t count the galaxy as having internal state that represents external state if you can limit that definition to one tiny, self-contained part of the galaxy, i.e. a human brain.
Since you so clearly elucidated it, you may know this is actually a thing called panprotopsychism. I’m fully on board with it but of course, the Internet knows with absolute certainty it’s complete and utter bullshit, so ¯\_(ツ)_/¯.