If we are talking about neurons, the human brain has ~90 billion, versus a couple of hundred million units in the most sophisticated AI models. Not even close; microscopic in scale compared to us. And we don’t even fully understand the human brain, and may very well never be able to (unless we evolve?).
Still, I can see us being able to create the illusion of artificial consciousness through elaborate programming. Interesting times!
NB: (you may know this one already) On AI and consciousness: https://www.youtube.com/watch?v=hXgqik6HXc0
True! It would be quite the computational feat to do this. Even if we trimmed the 9E10-neuron simulation down to 1.6E10 by focusing only on the cortical neurons involved in reasoning and thinking, it would still be orders of magnitude away from a human brain. But that doesn't make me feel that secure, considering the pace at which we have been making, and continue to make, computational leaps.
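For what it's worth, the gap those figures imply can be checked with a quick back-of-envelope calculation. This is just a sketch: the ~3e8 figure for a large AI model is an assumption standing in for "a couple of hundred million," not a measured number.

```python
import math

# Assumed figures from the discussion above:
# ~9e10 neurons in a human brain, ~1.6e10 cortical neurons,
# and a few hundred million (~3e8) units in a large AI model.
brain_neurons = 9e10
cortical_neurons = 1.6e10
ai_model_units = 3e8  # illustrative assumption, not a measured figure

gap_full = math.log10(brain_neurons / ai_model_units)
gap_cortical = math.log10(cortical_neurons / ai_model_units)

print(f"Full brain vs. model: ~{gap_full:.1f} orders of magnitude")
print(f"Cortex only vs. model: ~{gap_cortical:.1f} orders of magnitude")
```

Under these assumptions the gap comes out to roughly two orders of magnitude for the cortex-only case, which is why the trimmed simulation still doesn't close the distance.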
I've heard of Roger Penrose, but haven't actually read or listened to any of his work. I'll put this on during my commute today, thanks!
I feel secure enough for now, although who knows what tomorrow brings! And re: the link. There are a couple of AI podcasts on there. I haven’t read his books, only listened to the podcasts there.
I think the problem with AI having consciousness is that, unlike humans, it can only simulate emotion. Even if it "chooses" to become as dangerous as we predict, it won't do it because it's conscious, but because somewhere in its own algorithm it decided that was the best course of action for whatever problem we gave it. No remorse, no happiness or sadness, just pure, calculated execution of code. I don't think we'll ever be able to mathematically create something that replicates the human experience of emotion; at best, it can be almost perfectly simulated.
Q: If we train a large language model on all of your personal data, and it becomes indistinguishable from you to all the people in your life, is that AI conscious? Why or why not?
A: I agree with Nathan that this is too big a question to answer with any sense of intelligence. Plus he brings up a good point that we don’t have a good definition OF consciousness, so it makes it hard to answer your question.
But here are my two cents on the matter. I don’t think that an AI trained to be indistinguishable from you would be considered conscious. All you have is a computer that’s very well trained. If for some reason the trained AI suddenly started exhibiting different traits, I could argue that it may be conscious, or it could just be a poorly trained computer. Personally, I’m starting to think that to have “consciousness,” there needs to be a biological factor involved. A machine is a machine is a machine. I don’t have good supporting material or evidence for the need for biological material to support consciousness; it’s just a gut feeling. Otherwise it feels like a circular conversation/argument, because we have no good definition of what it means to be “conscious.”
It's an interesting answer. There is a part of me that also has this 'gut feeling' that consciousness (at least something similar to us) may be intrinsic to biological systems. Maybe there is a component to being alive that is part of our definition for consciousness. However, there is another part of me that doesn't see why consciousness needs to be substrate-dependent (i.e., consciousness can only exist in biology). If I posed this question a bit differently, I wonder if it would change your intuition.
Question: "Imagine we can fully simulate a human brain—not just training the computer to guess your next word like an LLM. This simulation would include each neuron of a brain, their respective configurations, their connections to other neurons, the communication synapses, and the neurotransmitter signaling. Couldn't we say this mind would be identical to ours in silico? In this case, would it be conscious?"
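As a toy illustration of what "simulating each neuron, its connections, and its signaling" could even mean in code, here is a minimal leaky integrate-and-fire sketch. To be clear, this is a standard textbook simplification, nowhere near the biophysical fidelity the question imagines, and the neurotransmitter tag is purely decorative.

```python
class Synapse:
    """A connection to a downstream neuron, tagged with a (toy) neurotransmitter."""
    def __init__(self, target, weight, transmitter):
        self.target = target
        self.weight = weight          # positive = excitatory, negative = inhibitory
        self.transmitter = transmitter

class Neuron:
    """Leaky integrate-and-fire unit: a drastic simplification of a real neuron."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.synapses = []

    def receive(self, amount):
        self.potential += amount

    def step(self):
        """Fire if over threshold and signal downstream; otherwise leak charge."""
        fired = self.potential >= self.threshold
        if fired:
            for s in self.synapses:
                s.target.receive(s.weight)
            self.potential = 0.0
        else:
            self.potential *= self.leak
        return fired

# A tiny three-neuron chain: stimulus -> a -> b -> c
a, b, c = Neuron(), Neuron(), Neuron()
a.synapses.append(Synapse(b, weight=1.2, transmitter="glutamate"))
b.synapses.append(Synapse(c, weight=1.2, transmitter="glutamate"))

a.receive(1.5)                          # external stimulus
spikes = [n.step() for n in (a, b, c)]  # sequential update lets the spike cascade
print(spikes)  # [True, True, True]
```

Scaling this toy from three neurons to 9E10, with realistic dynamics, is exactly the computational feat the thread is debating; whether such a scaled-up version would be conscious is the open question.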
The answer still may be 'no.' But I think it's more likely that consciousness is an emergent property of a sophisticated biological neural network. I believe something similar is likely possible in a machine-based neural network, but it could be fundamentally different from what it is like to have a biological experience center.
Too big of a question for me to answer with any sense of intelligence 😄
It's a good one to ponder, though.
Will we ever understand what consciousness truly is? Perhaps if it can be demonstrated to some extent by an AI, then we will be closer; though without a strong and agreed-upon definition, perhaps this won't be possible.
Thanks for summarising the article so well. It's outside my sphere, but I find it fascinating. Have you listened to Sam's latest Making Sense interview with Mustafa Suleyman? Unfortunately I don't have a sub (I do on Waking Up, but not Making Sense), so I only caught the free section, but it was an excellent listen.
Hey Nathan, thank you for the thoughts! I completely agree, this is a tough nut to crack. However, I think we are going to find (sooner than we are ready) that it needs an answer.
I'm glad you found the read interesting even if it's outside your usual sphere. I haven't listened to many of Sam's newer podcasts, but I'll have to add this one to my queue. I previously supported Making Sense (formerly Waking Up) and even saw him debate Jordan Peterson in Dublin (happy coincidence that I was there for a conference). Many of the guests he introduced are what first got me into these spaces years ago.
Ah that's good to hear. That must have been a great experience!
And yep, I'm sure this is all going to move super fast. Fascinated to know where we'll even be this time next year!