They don’t really go into the size of the organoid, but it’s extremely doubtful that it’s large and complex enough to get anywhere close to consciousness.
There’s also no guarantee that a lump of brain tissue could ever achieve consciousness, especially if the architecture is drastically different from an actual brain.
Well, we haven’t solved the hard problem of consciousness, so we don’t know whether brain size or similarity to a human brain is a factor in developing consciousness. But perhaps a more important question is: if it did develop consciousness, how much pain would it experience?
Physical pain? Zero.
Now emotional pain? I’m not sure it could even manage emotional pain. So much of our emotion is intertwined with chemical balances and releases. If a brain achieved consciousness but had none of those chemicals at all… I don’t know that that’d even work.
Did you think about this before you wrote it?
While we haven’t confirmed this experimentally (ominous voice: yet), computationally there’s no reason even a simple synthetic brain couldn’t experience emotions. Chemical neurotransmitters are just an added layer of structural complexity, so the Church–Turing thesis still holds. Human brains are only powerful because they have an absurdly high parallel network throughput (a computational bus might be a better analogy); the actual neuron part is dead simple. Network computation is fascinating, but much like linear algebra the actual mechanisms are so simple they’re dead boring - cram roughly 86,000,000,000 of those mechanisms into a salty water balloon, though, and it can produce some really pompous lemmy comments.
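To make the “dead simple” part concrete, here’s a minimal sketch in Python (all of it illustrative, not from the article or any real neuroscience library): one artificial neuron is just a weighted sum plus a threshold, and the only interesting behavior comes from wiring a few of them together - here, computing XOR, which no single neuron of this kind can do on its own.

```python
# Illustrative sketch only: a McCulloch-Pitts style neuron.
# The single unit is trivial; everything interesting is in the wiring.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, thresholded at zero. That's it."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def xor_net(x1, x2):
    """A tiny two-layer network computing XOR - beyond any single neuron."""
    h_or  = neuron([x1, x2], [1, 1], -0.5)   # fires if x1 OR x2
    h_and = neuron([x1, x2], [1, 1], -1.5)   # fires if x1 AND x2
    return neuron([h_or, h_and], [1, -1], -0.5)  # OR but not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Each unit on its own is boring; XOR only falls out of how the three are networked, which is the point about throughput mattering more than the neuron itself.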
Emotions are holographic anyways so the question is kinda meaningless. It’s like asking if an artificial brain will perceive the color green as the same color we ‘see’ as green. It sounds deep until you realize it’s all fake, man. It’s all fake.