Forums › Miscellaneous Sections › Tech Head - The Technology Section › Einstein's Alcove › Can a lightbulb be conscious?
<blockquote data-quote="2old4this" data-source="post: 18084" data-attributes="member: 174998"><p>CH - my premise is that consciousness arises in the "pattern" of neuronal firing, not in the mechanisms (neuronal connections) that facilitate that firing. So yes, I have reduced the complexity of the lattice connections (in fact I've removed it altogether) but not of the pattern of activity - and hence the consciousness should remain.</p><p>You may disagree with my premise. You may propose, for example, that it is precisely in the channels/connections between neurons that consciousness arises, the neurons being the mere facilitators of that. In that case, I would apply the same reductionist argument but replace the connections with lightbulbs (or maybe fluorescent tubes) rather than the neurons. It would amount to the same thing.</p><p></p><p>On your second point, the fact that each neuron has multiple connections to others, and that the firing of any given neuron may depend on multiple inputs from multiple other neurons, does not change the result. The result is that the neuron fires at a particular instant in time. My replacement lightbulb simply needs to fire at that same instant. The complexity of interactions that led to that event is not relevant, only the event itself.</p><p></p><p>Your third point is more a practical consideration than a theoretical one. I agree that it would be (with current knowledge and technology) impossible to map the activity of a brain so precisely, and non-invasively. But for the purposes of this thought experiment we can imagine that some future technology would permit this. And bear in mind that the outside entity doing the swapping of neurons for lightbulbs does not need to know what thought process the neuron was engaged in, in order to do the swap. It simply needs to know at precisely what instant in time the neuron fired - not why.</p><p></p><p>2old</p></blockquote><p></p>
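2old's swapping scheme amounts to pure event replay: record only the instants at which each neuron fired, then reproduce those instants with a replacement device, discarding all information about why each firing happened. A minimal sketch of that idea, with entirely hypothetical `Neuron` and `Lightbulb` classes (illustrative names, not from any real library or from the post itself):

```python
# Sketch of the post's replacement scheme: the swapping entity records
# WHEN each neuron fired, never WHY, and replays exactly those instants.
from dataclasses import dataclass, field

@dataclass
class Neuron:
    fired_at: list = field(default_factory=list)  # firing instants (seconds)

    def fire(self, t: float) -> None:
        self.fired_at.append(t)

@dataclass
class Lightbulb:
    flashes: list = field(default_factory=list)   # flash instants (seconds)

    def flash(self, t: float) -> None:
        self.flashes.append(t)

def swap(neuron: Neuron) -> Lightbulb:
    # Only the event times cross the boundary; the causal tangle of
    # inputs that produced each firing is deliberately ignored.
    bulb = Lightbulb()
    for t in neuron.fired_at:
        bulb.flash(t)
    return bulb

n = Neuron()
for t in (0.01, 0.35, 0.36, 0.90):  # arbitrary example firing instants
    n.fire(t)

b = swap(n)
assert b.flashes == n.fired_at      # identical pattern of events
```

The sketch makes the post's claim concrete: the bulb's flash train is indistinguishable, as a pattern of timed events, from the neuron's spike train, which is exactly the premise the critic may still reject.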