
Michael Egnor X Christof Koch X Michael Shermer | A Debate on the Mind, Soul, and the Afterlife
June 25, 2025
Key Takeaways
- Neuroscience, while revealing the brain’s role in emotions, sensations, and memory, does not fully explain abstract thought or consciousness, suggesting the potential for immaterial aspects of the mind or soul.
- The debate between hylomorphism (mind as form of the body) and substance dualism highlights differing views on the relationship between the mind and the brain, with proponents of hylomorphism arguing that the brain is necessary but not sufficient for higher cognitive functions like intellect and will.
- The development of advanced AI like ChatGPT, capable of complex reasoning and learning, raises questions about the nature of intelligence and consciousness, and whether these capabilities can exist independently of subjective experience.
- The debate between materialism and dualism is central to understanding consciousness, with evidence from split-brain patients, near-death experiences, and brain stimulation studies offering contrasting perspectives on the mind-brain relationship.
- The concept of free will remains a contentious issue, with libertarian views emphasizing conscious choice and rational appetite, while determinists and compatibilists highlight brain correlates and practical implications.
- Near-death experiences, while often transformative and difficult to explain naturalistically, are debated as potential evidence for an immaterial soul versus complex neurological phenomena, with interpretations heavily influenced by metaphysical beliefs.
Segments
Hylomorphism vs. Dualism (00:13:01)
- Key Takeaway: Hylomorphism, the Aristotelian view of the soul as the form of the body, offers a framework where the mind’s immaterial powers like intellect and will are distinct from the brain’s material functions, yet integrated into a single substance.
- Summary: Egnor elaborates on his Thomistic dualist perspective, explaining hylomorphism as a unified substance with both material and immaterial aspects. He distinguishes between material soul powers (homeostasis, locomotion, sensation, emotion, memory) that are dependent on the brain, and immaterial powers (intellect, will) that are not generated by matter. Koch counters by highlighting neuroscientific evidence, such as brain stimulation studies, suggesting these higher functions are also tied to specific brain regions or distributed networks.
Intelligence, Consciousness, and AI (00:46:13)
- Key Takeaway: The capabilities of advanced AI like ChatGPT demonstrate intelligence through learning and adaptation, but this intelligence is distinct from subjective conscious experience, raising questions about the nature of consciousness itself.
- Summary: The discussion shifts to artificial intelligence, specifically ChatGPT, and its ability to reason and learn. Egnor argues that AI, while intelligent, lacks first-person experience and is essentially a sophisticated tool leveraging human intelligence. Koch, however, defines intelligence by its ability to adapt and learn, suggesting that AI is increasingly meeting this definition, even if consciousness remains a separate, unsolved problem. The conversation touches on the distinction between intelligence and consciousness, and the potential for future AI to develop something akin to subjective experience.
Split-Brain Patients and Consciousness (00:55:21)
- Key Takeaway: On Egnor's reading, studies of split-brain patients, where the corpus callosum is severed, show a perceptual split but a unified consciousness, supporting the idea that the intellect and will, as immaterial aspects, cannot be divided by physical separation of the brain hemispheres.
- Summary: Egnor presents split-brain patients as evidence for his hylomorphic view, arguing that while perception can be divided, consciousness remains unified, suggesting the intellect and will are not material and thus cannot be severed. Koch disagrees, proposing that the standard interpretation suggests two consciousnesses and that the split is not as complete as Egnor suggests, with residual subcortical pathways and potential for left-hemisphere cues influencing the right. The debate centers on whether the observed phenomena support a unified, immaterial mind or a more complex, distributed brain function.
Split Brain and Consciousness (00:59:43)
- Key Takeaway: On Koch's reading, complete sectioning of the corpus callosum in split-brain patients can lead to two distinct, albeit not fully independent, conscious experiences within a single skull.
- Summary: The discussion explores the implications of commissurotomy and callosotomy on hemispheric separation, examining subcortical pathways and the potential for merged consciousness. The case of conjoined twins with a thalamic bridge is used as an example of shared perception but independent intellect, leading to a debate on whether this constitutes two separate consciousnesses.
Vegetative State and Awareness (01:05:20)
- Key Takeaway: Research using fMRI has revealed that a significant percentage of individuals diagnosed with a persistent vegetative state exhibit patterns of brain activity suggesting awareness and the ability to respond to commands, challenging the notion of complete unconsciousness.
- Summary: The conversation delves into vegetative states and paradoxical lucidity, highlighting studies like Adrian Owen’s that detected awareness in seemingly unresponsive patients. The speakers debate whether this awareness is purely neurological or suggests a mind-brain disconnect, with one speaker arguing for conventional neuroscience explanations and the other for aspects of the mind transcending the brain.
Near-Death Experiences and Metaphysics (01:15:41)
- Key Takeaway: Near-death experiences present challenges to purely naturalistic explanations due to their clear content, external corroboration, encounters with deceased individuals, and transformative effects, prompting consideration of metaphysical interpretations.
- Summary: The discussion centers on the interpretation of near-death experiences (NDEs), with one speaker arguing for their profound impact and difficulty in explaining them solely through neuroscience, citing cases like Pam Reynolds. The other speaker counters that these experiences can be explained by neurological events and that extraordinary claims require extraordinary evidence, questioning the validity of anecdotal accounts.
Free Will and Consciousness (01:32:20)
- Key Takeaway: Libertarian free will, defined as a rational appetite that follows on reason rather than on purely deterministic brain processes, is defended as real, with its denial argued to be self-refuting, though its precise locus and causation remain debated.
- Summary: The speakers debate determinism, compatibilism, and libertarian free will. One speaker advocates for libertarian free will rooted in an intrinsic powers ontology, while the other argues that everyday actions and beliefs implicitly demonstrate free will, even if scientific experiments show brain activity preceding conscious decisions. The discussion touches on the philosophical implications of brain stimulation studies and the nature of consciousness.
Debug Information
Processing Details
- VTT File: mss527_Koch_Egnor_2025_06_24.vtt
- Processing Time: September 11, 2025 at 03:27 PM
- Total Chunks: 2
- Transcript Length: 155,353 characters
- Caption Count: 1,459 captions
Prompts Used
Prompt 1: Context Setup
You are an expert data extractor tasked with analyzing a podcast transcript.
I will provide you with part 1 of 2 from a podcast transcript.
I will then ask you to extract different types of information from this content in subsequent messages. Please confirm you have received and understood the transcript content.
Transcript section:
[00:00:00.160 --> 00:00:04.560] FanDuel's getting you ready for the NFL season with an offer you won't want to miss.
[00:00:04.560 --> 00:00:09.440] New customers can bet just $5 and get $300 in bonus bets if you win.
[00:00:09.440 --> 00:00:10.000] That's right.
[00:00:10.000 --> 00:00:16.480] Pick a bet, bet $5, and if it wins, you'll unlock $300 in bonus bets to use all across the app.
[00:00:16.480 --> 00:00:22.320] You can build parlays, bet player props, ride the live lines, whatever your style, FanDuel has you covered.
[00:00:22.320 --> 00:00:24.240] The NFL season's almost here.
[00:00:24.240 --> 00:00:26.720] The only question is: are you ready to play?
[00:00:26.720 --> 00:00:32.560] Visit fanduel.com/slash sportsfan to download the FanDuel app today and get started.
[00:00:32.560 --> 00:00:35.040] Must be 21 plus and physically present in New York.
[00:00:35.040 --> 00:00:37.920] First online real money wager only, $10 first deposit required.
[00:00:38.000 --> 00:00:41.280] Bonus issued as non-withdrawable bonus bets that expire seven days after receipt.
[00:00:41.280 --> 00:00:42.080] Restrictions apply.
[00:00:42.080 --> 00:00:44.240] See terms at sportsbook.fanuel.com.
[00:00:44.240 --> 00:00:53.280] For help with a gambling problem, call 1-877-8 HopeNY or text HopeNY to 467-369 or visit oasas.ny.gov/slash gambling.
[00:00:53.280 --> 00:00:54.880] Standard text messaging rates apply.
[00:00:54.880 --> 00:00:58.720] Sports betting is void in Georgia, Hawaii, Utah, and other states where prohibited.
[00:00:58.720 --> 00:01:04.400] Warning, the following ZipRecruiter radio spot you are about to hear is going to be filled with F-words.
[00:01:04.400 --> 00:01:09.440] When you're hiring, we at ZipRecruiter know you can feel frustrated, forlorn, even.
[00:01:09.440 --> 00:01:11.120] Like your efforts are futile.
[00:01:11.120 --> 00:01:17.120] And you can spend a fortune trying to find fabulous people, only to get flooded with candidates who are just fine.
[00:01:18.240 --> 00:01:21.440] Fortunately, ZipRecruiter figured out how to fix all that.
[00:01:21.440 --> 00:01:26.560] And right now, you can try ZipRecruiter for free at ziprecruiter.com slash zip.
[00:01:26.560 --> 00:01:35.120] With ZipRecruiter, you can forget your frustrations because we find the right people for your roles fast, which is our absolute favorite F-word.
[00:01:35.120 --> 00:01:40.720] In fact, four out of five employers who post on ZipRecruiter get a quality candidate within the first day.
[00:01:41.040 --> 00:01:42.960] Fantastic.
[00:01:42.960 --> 00:01:49.200] So, whether you need to hire four, 40, or 400 people, get ready to meet first-rate talent.
[00:01:49.200 --> 00:01:53.360] Just go to ziprecruiter.com/slash zip to try ZipRecruiter for free.
[00:01:53.360 --> 00:01:56.160] Don't forget that's ziprecruiter.com/slash zip.
[00:01:56.160 --> 00:01:59.120] Finally, that's ziprecruiter.com/slash zip.
[00:02:02.600 --> 00:02:08.360] You're listening to The Michael Shermer Show.
[00:02:15.080 --> 00:02:18.360] All right, everybody, it's time for another episode of the Michael Shermer Show.
[00:02:18.360 --> 00:02:22.040] Hey, we've got a special episode for you today.
[00:02:22.040 --> 00:02:23.400] You are not going to believe this.
[00:02:23.400 --> 00:02:30.760] One of the most important and interesting topics there is: the nature of the soul, the mind, consciousness.
[00:02:30.760 --> 00:02:31.960] Where does it all go?
[00:02:31.960 --> 00:02:33.960] What happens after we die?
[00:02:33.960 --> 00:02:44.360] My guest today, and this is going to be a debate format, as it were, between two authors who have thought a lot, long and hard about this, both professional scientists.
[00:02:44.360 --> 00:02:46.280] The first one is Michael Egnor.
[00:02:46.280 --> 00:02:48.440] Here's his new book, The Immortal Mind.
[00:02:48.440 --> 00:02:51.320] Let me give it a proper introduction in total here.
[00:02:51.320 --> 00:02:59.800] He is a medical doctor, professor of neurosurgery and pediatrics at the Renaissance School of Medicine at Stony Brook University.
[00:02:59.800 --> 00:03:07.640] He received his medical degree from the College of Physicians and Surgeons at Columbia University and trained in neurosurgery at the University of Miami.
[00:03:07.640 --> 00:03:19.960] He's been on the faculty at Stony Brook since 1991, and he's the neurosurgery residency director and served as the director of pediatric neurosurgery and as vice chairman of neurosurgery at Stony Brook Medicine.
[00:03:19.960 --> 00:03:20.920] Oh my God.
[00:03:21.240 --> 00:03:22.600] My old job.
[00:03:22.920 --> 00:03:24.360] He has a strongest interest.
[00:03:24.600 --> 00:03:25.800] This is where it gets interesting.
[00:03:25.800 --> 00:03:29.240] He has a strong interest in Thomist philosophy, that is St.
[00:03:29.240 --> 00:03:37.560] Thomas Aquinas, philosophy of mind, neuroscience, evolution and intelligent design, and bioethics, and has published and lectured extensively on the topic.
[00:03:37.560 --> 00:03:43.800] Here it is again: The Immortal Mind: A Neurosurgeon's Case for the Existence of the Soul.
[00:03:43.800 --> 00:03:46.640] My listeners will know I'm interested in that topic.
[00:03:44.680 --> 00:03:50.400] But today I'm going to be more neutral, and instead we're going to have Christof Koch.
[00:03:44.920 --> 00:03:52.400] And here is his latest book.
[00:03:52.720 --> 00:03:59.760] It's called Then I Am Myself, the World, What Consciousness Is and How to Expand It.
[00:03:59.760 --> 00:04:06.640] Christof is a neuroscientist at the Allen Institute and at the Tiny Blue Dot Foundation.
[00:04:06.960 --> 00:04:15.520] Excuse me, he's the former president of the Allen Institute for Brain Science in Seattle and former professor at Caltech, where we first met back in the 90s.
[00:04:15.520 --> 00:04:31.520] He's the author of four previous titles: The Feeling of Life Itself, Why Consciousness is Widespread But Can't Be Computed, Consciousness, Confessions of a Romantic Reductionist, that's nice, and The Quest for Consciousness, a Neurobiological Approach.
[00:04:31.520 --> 00:04:34.240] Again, here's the newest book, Then I Am Myself, the World.
[00:04:34.240 --> 00:04:40.800] He was on for that book last year, so he's our returning champion, as it were.
[00:04:40.800 --> 00:04:42.800] All right, Michael, let's start with you.
[00:04:43.280 --> 00:04:53.440] Just kind of broadly speaking, tell us where you're coming from and kind of wend your way to this question that I have, that I have for all theists.
[00:04:54.240 --> 00:05:19.120] To what extent did the arguments and evidence you present in your book from neuroscience support your beliefs in the afterlife and your Christianity and so on, what we loosely call your faith, such that if you took these away, if Christof came up with such great arguments and he said, Oh my God, I'm wrong, would you abandon your faith or your religious beliefs?
[00:05:19.120 --> 00:05:25.440] Or are these more of just once you have the belief for other reasons, philosophical, theological, or whatever?
[00:05:25.440 --> 00:05:28.320] These are additional good reasons to believe.
[00:05:28.320 --> 00:05:34.520] Anyway, you can get yourself to that question by way of a general background here.
[00:05:34.840 --> 00:05:35.720] Thank you, Michael.
[00:05:35.880 --> 00:05:37.800] And that's actually a great question.
[00:05:38.200 --> 00:05:45.480] It's one that I've thought about a bit, not actually not quite as explicitly as you ask it, but I've thought about that.
[00:05:45.800 --> 00:05:54.440] Where I'm coming from, I grew up as an atheist and as believing in physicalism and materialism.
[00:05:54.440 --> 00:05:56.600] I was in love with science.
[00:05:56.920 --> 00:05:58.520] I went to medical school.
[00:05:58.520 --> 00:06:04.520] I majored in biochemistry undergraduate, and I went to medical school and fell in love with neuroscience.
[00:06:04.760 --> 00:06:11.160] I basically spent four years with my neuroscience textbooks because I thought it was so beautiful and fascinating.
[00:06:11.160 --> 00:06:21.400] And particularly with neuroanatomy, because I thought that we could understand who we are and what life was all about if we understood the brain in detail.
[00:06:21.640 --> 00:06:29.160] I decided that the way I wanted to understand it was to go into neurosurgery and kind of get some first-hand contact there.
[00:06:29.160 --> 00:06:37.560] So I trained in neurosurgery at the University of Miami and I began at Stony Brook on faculty in 1991, and I've been there ever since.
[00:06:38.200 --> 00:06:48.600] As I went along, I had some religious experiences and a lot of thoughts about that and had a conversion experience around the year 2000.
[00:06:48.600 --> 00:06:55.400] But along with that, I began seeing things in my clinical practice that really didn't fit the textbooks.
[00:06:55.400 --> 00:06:59.080] It didn't fit what I had learned in medical school.
[00:06:59.720 --> 00:07:02.120] And can we share screens?
[00:07:02.120 --> 00:07:03.160] I've got a scan.
[00:07:03.320 --> 00:07:05.080] I think there's an ability to do that.
[00:07:05.400 --> 00:07:06.520] That I don't know.
[00:07:07.400 --> 00:07:08.520] Let me just give it a try.
[00:07:08.920 --> 00:07:09.560] Okay.
[00:07:09.560 --> 00:07:12.200] Maybe tell you've tried that on the show.
[00:07:11.680 --> 00:07:12.920] Tell you what, why not?
[00:07:12.920 --> 00:07:14.200] Well, why don't we hold it?
[00:07:14.200 --> 00:07:14.800] All right.
[00:07:14.800 --> 00:07:17.200] I've got a very good example.
[00:07:14.360 --> 00:07:21.040] It's a little girl who was born missing half, two-thirds of her brain.
[00:07:22.000 --> 00:07:26.080] Her head was really largely of water, of spinal fluid.
[00:07:26.080 --> 00:07:30.320] And I told her parents I was dubious as to how well she would do in life.
[00:07:30.320 --> 00:07:33.200] I didn't think there was much of a chance of her having much function.
[00:07:33.200 --> 00:07:35.760] And I followed her as she grew up, and she's completely normal.
[00:07:35.760 --> 00:07:38.240] She's in her mid-20s now.
[00:07:38.480 --> 00:07:41.280] She's still missing about half her brain.
[00:07:41.520 --> 00:07:42.640] Completely normal person.
[00:07:42.640 --> 00:07:43.280] You speak to her.
[00:07:43.280 --> 00:07:46.960] She's actually rather bright, has a good job, et cetera.
[00:07:46.960 --> 00:07:49.760] I have a number of patients who are like that.
[00:07:50.560 --> 00:07:51.360] Not everybody is normal.
[00:07:52.800 --> 00:07:53.360] Not at all.
[00:07:53.360 --> 00:07:54.480] Not at all.
[00:07:55.040 --> 00:07:58.560] And it's not that uncommon.
[00:07:58.560 --> 00:08:00.800] Now, of course, she hasn't been studied carefully.
[00:08:00.800 --> 00:08:03.280] We've had MRIs of her brain.
[00:08:03.280 --> 00:08:04.960] I haven't done a functional MRI.
[00:08:04.960 --> 00:08:08.320] I'd love to see what's lighting up when she moves and does things.
[00:08:08.560 --> 00:08:12.640] But there's a lot of water up there, and it's not too much brain.
[00:08:12.640 --> 00:08:19.040] I have a little boy who's now a late teenager who had a similar scenario, also perfectly normal person.
[00:08:19.440 --> 00:08:25.840] I had a young child who had hydranencephaly, which is a condition in which babies are born.
[00:08:25.840 --> 00:08:30.800] They have strokes in intrauterine life, and they're born without cerebral hemispheres.
[00:08:30.800 --> 00:08:34.960] So the highest level of their central nervous system is their diencephalon.
[00:08:35.360 --> 00:08:37.680] And he had severe cerebral palsy.
[00:08:37.680 --> 00:08:43.120] He was quite handicapped, but totally awake, completely conscious person.
[00:08:43.600 --> 00:08:47.600] He couldn't speak, but he had a full range of emotions, of reactions.
[00:08:47.600 --> 00:08:53.440] He was happy about things and sad about things, and just he had bad cerebral palsy.
[00:08:53.840 --> 00:09:05.080] What really got me to thinking about this dramatically was: I was doing an awake craniotomy, which occasionally is indicated if we have to map the surface of the brain to protect eloquent regions of the brain.
[00:09:05.400 --> 00:09:10.760] This was a lady who had had an infiltrating tumor of her left frontal lobe.
[00:09:10.760 --> 00:09:18.520] And I wanted to map her Broca's area, the speech area of the left frontal lobe, so I could avoid damaging it as I was taking the tumor out.
[00:09:18.520 --> 00:09:22.600] Because the tumor was infiltrating, I had to take out a good part of the frontal lobe.
[00:09:22.600 --> 00:09:25.480] So I was doing that while I was talking to her.
[00:09:26.040 --> 00:09:30.360] The operation took about four or five hours, and we had a nice long conversation.
[00:09:30.360 --> 00:09:36.760] And I was talking to her as much of her left frontal lobe was coming out, and she never turned a hair.
[00:09:36.760 --> 00:09:37.800] She's perfectly fine.
[00:09:37.800 --> 00:09:41.560] We're talking about the weather, we're talking about the cafeteria, about her family.
[00:09:41.560 --> 00:09:45.160] And I left the operating room thinking, this isn't in any textbook.
[00:09:45.480 --> 00:09:49.400] I'm not seeing this in Sidman and Sidman or Kandel's textbook.
[00:09:49.400 --> 00:09:51.240] I don't see any of this anywhere.
[00:09:51.240 --> 00:09:53.240] So I began looking.
[00:09:53.240 --> 00:09:54.360] I began reading.
[00:09:55.720 --> 00:10:06.120] And one phenomenon that as a neurosurgeon is particularly important, this is when I teach the residents, this is something that neurosurgeons have to know intimately.
[00:10:06.120 --> 00:10:09.880] And that is the difference between eloquent and non-eloquent regions of the brain.
[00:10:09.880 --> 00:10:16.200] There are certain regions of the brain that have very well-defined functions that are completely intolerant of injury.
[00:10:16.520 --> 00:10:25.880] The motor strip in the posterior frontal lobe, the speech areas, Broca's area and Wernicke's area on the left side.
[00:10:26.520 --> 00:10:30.040] The brainstem, parts of the thalamus.
[00:10:30.040 --> 00:10:37.000] These are areas that if you damage them, even tiny, tiny injuries cause major deficits.
[00:10:37.000 --> 00:10:48.320] But many of the other parts of the brain, much of the frontal lobes, parts, the parietal lobes, you can take out and put in a wastebasket, and it has nothing to do with the patient's neurological functioning.
[00:10:48.320 --> 00:10:50.880] As with this woman I was taking that tumor out of.
[00:10:50.880 --> 00:10:56.080] Much of the cerebellum can be taken out and discarded without any effect on a person at all.
[00:10:56.480 --> 00:11:13.280] And so I am fascinated with this difference between parts of the brain that can be sacrificed at will without any discernible change in a person's neurological state and parts that you don't even dare touch without damaging somebody.
[00:11:13.600 --> 00:11:20.960] So I did a lot of reading on neuroscience, on philosophy, trying to come to grips with all of this.
[00:11:20.960 --> 00:11:29.200] And I came across a statement by Roger Scruton, who's a British philosopher who was a wonderful philosopher.
[00:11:29.200 --> 00:11:37.440] And I paraphrased, Scruton said that neuroscience is a vast trove of answers with no memory of the questions.
[00:11:37.760 --> 00:11:39.680] And I thought that just nailed it.
[00:11:39.680 --> 00:11:40.560] That just nailed it.
[00:11:40.560 --> 00:11:45.120] Because I had a professor, undergraduate in biology.
[00:11:45.120 --> 00:11:47.840] I was doing some research in molecular biology.
[00:11:47.840 --> 00:11:50.400] And he used to say to me all the time, he was a great philosopher.
[00:11:51.040 --> 00:11:56.240] He used to say, it's the question you ask that matters, not as much as the answer you get.
[00:11:56.240 --> 00:12:02.720] Because if you don't ask the right question, you'll misinterpret the data and misinterpret the answer, and it'll lead you in a bad direction.
[00:12:02.720 --> 00:12:05.680] The question has to be clear, and it has to be the right question.
[00:12:06.000 --> 00:12:12.640] Well, Michael, let me intervene here and just ask a clear question because we're already kind of into the weeds of neuroscience.
[00:12:12.640 --> 00:12:17.120] But what I wanted to start off with is just, let me summarize your book.
[00:12:17.120 --> 00:12:46.040] So, lots of these very specific examples, you've already given a couple, and there's a bunch more that Christophe can then respond to, lead you to the conclusion or deduction that the mind, and therefore I guess you're equating with the soul, is separate from the brain, from the neuro, from the neurons, and that when you die, it floats off or it's elsewhere and it goes off into some other heavenly state or some quantum field or whatever, something like this.
[00:12:46.040 --> 00:12:55.080] In other words, you, me, ourselves, the self, contained in this mind/soul, continues beyond the physical body.
[00:12:55.080 --> 00:12:56.600] Is that your conclusion?
[00:12:59.160 --> 00:13:00.040] Yes or no?
[00:13:01.160 --> 00:13:06.920] The view of the human person that I hold is that of Aristotle, basically, of which St.
[00:13:07.000 --> 00:13:08.920] Thomas elaborated that.
[00:13:08.920 --> 00:13:13.000] And that is that the version of hylomorphism, right?
[00:13:13.160 --> 00:13:14.680] Yes, yes, exactly.
[00:13:18.840 --> 00:13:31.400] In that view, human beings are a substance, meaning that we are a substance in the sense that we persist through time, that we have attributes, we have causal power, and that we remain ourselves.
[00:13:31.880 --> 00:13:44.280] And Aristotle said that living substances are composites of soul and body, which is the analogy to form and matter in inanimate things.
[00:13:44.600 --> 00:13:47.400] And the soul is nothing mystical.
[00:13:48.200 --> 00:13:50.200] The Aristotelian soul is not a mystical thing.
[00:13:50.200 --> 00:13:54.040] It's not a translucent thing that looks like you and you can put it alongside of you.
[00:13:54.040 --> 00:13:55.800] It's not like Casper the Ghost.
[00:13:55.800 --> 00:14:03.480] The soul is simply the sum of the activities that make us alive that differentiate us from a dead body.
[00:14:03.480 --> 00:14:18.480] So if you take everything I am right now, everything, my ability to speak, my heartbeat, my sensory powers, my memory, take all of that, and you subtract from that everything that I will be the moment I die.
[00:14:18.800 --> 00:14:20.240] My soul is what's left.
[00:14:14.920 --> 00:14:21.440] My soul is the difference.
[00:14:22.000 --> 00:14:23.360] So it's not mystical.
[00:14:23.840 --> 00:14:26.000] It's just a very practical definition.
[00:14:26.320 --> 00:14:28.240] Would you call yourself a dualist?
[00:14:29.120 --> 00:14:31.840] Yes, but I'm a Thomistic dualist.
[00:14:31.840 --> 00:14:40.880] But Thomistic dualism, Thomistic dualists try to play down the dualist part because I'm not a substance dualist.
[00:14:41.360 --> 00:14:44.240] I think Cartesian dualism has a ton of problems.
[00:14:44.480 --> 00:14:49.840] I have a lot of, some of my best friends are Cartesian dualists, but I don't agree with them on many, many things.
[00:14:50.480 --> 00:15:01.280] So we are integrated single substances, but we have a principle of intelligibility, which is our soul, and a principle of individuation, which is our body.
[00:15:02.240 --> 00:15:03.200] And I think that...
[00:15:03.280 --> 00:15:08.800] Do you know the philosopher Matthew Owen who defends really the very similar version of hylomorphism?
[00:15:08.800 --> 00:15:14.800] It's not a substance to it, but really seems very, very similar to what you articulate.
[00:15:14.800 --> 00:15:15.680] Matthew Owen.
[00:15:16.320 --> 00:15:17.120] I know of him.
[00:15:17.280 --> 00:15:20.560] I haven't read him deeply, but I know his name.
[00:15:20.560 --> 00:15:21.120] Yeah.
[00:15:21.120 --> 00:15:25.040] Maybe, Christof, now's a good time to bring you in and give us the big picture.
[00:15:25.040 --> 00:15:26.160] Are you a monist?
[00:15:26.560 --> 00:15:27.600] Are you an atheist?
[00:15:28.160 --> 00:15:34.400] What do you think happens to Christof Koch, the person, the soul, the mind, whatever, when the body dies?
[00:15:34.720 --> 00:15:46.560] I don't know, but what I believe in is that my personal consciousness, everything that makes up me, my memories, my traits, will cease to exist.
[00:15:46.560 --> 00:15:50.560] That's what all the evidence shows, will be destroyed.
[00:15:50.560 --> 00:16:01.800] Once the physical substrate that is me is destroyed, whether it returns to an ultimate reality that is phenomenal, you know, as in idealism, which I'm increasingly more sympathetic to.
[00:16:02.840 --> 00:16:10.200] But all the evidence we have is that conscious awareness in us, when our brain is gone, is also gone.
[00:16:10.520 --> 00:16:11.400] I see.
[00:16:11.400 --> 00:16:12.040] Okay.
[00:16:12.680 --> 00:16:14.680] So that's pretty clear.
[00:16:14.920 --> 00:16:24.120] Michael, when I wrote my book, Heavens on Earth, about the scientific search for the afterlife, I dealt with the mind uploaders, the cryonicists, and all those transhumanists.
[00:16:24.120 --> 00:16:29.640] But I had questions for them that I thought could apply to theists who make this argument.
[00:16:29.880 --> 00:16:32.600] That is, I'll just give you a few of these.
[00:16:32.600 --> 00:16:40.360] How does God go about the duplication or transformation process to ensure conscious continuity and personal perspective?
[00:16:40.360 --> 00:16:44.840] Is it your atoms and patterns that are resurrected or just the patterns?
[00:16:44.840 --> 00:16:52.520] If both, and you are physically resurrected, does God reconstitute your body so it's no longer subject to disease and aging?
[00:16:52.520 --> 00:16:58.520] If it's just the patterns of you that are resurrected, what is the platform that holds the information?
[00:16:58.520 --> 00:17:03.240] Is there something in heaven that is the equivalent of a computer hard drive or the cloud?
[00:17:03.240 --> 00:17:07.560] Is there some sort of heavenly quantum field that retains your thoughts and memories?
[00:17:07.560 --> 00:17:24.200] If you're not religious, or even if you are, and hold out hope that one day scientists will be able to clone your body and duplicate your brain or upload your mind into a computer or the cloud or create a virtual reality in which you come alive again, we face the same technological problems as God would of how all this is done.
[00:17:24.200 --> 00:17:26.040] What is your thought about that?
[00:17:26.360 --> 00:17:28.440] Well, St.
[00:17:28.520 --> 00:17:42.520] Thomas Aquinas believed that we can know a great deal about ourselves and our ultimate fate, et cetera, by reason, but that there are things we could only know by faith.
[00:17:42.840 --> 00:17:45.280] And I think one has to distinguish that.
[00:17:45.440 --> 00:18:02.400] There's much that I cannot prove or I cannot present evidence for about my beliefs about the afterlife and heaven and God that I do believe, but I believe as a matter of reading the Bible and being a Catholic and studying that.
[00:18:02.400 --> 00:18:05.760] But those are not strictly scientific questions.
[00:18:05.760 --> 00:18:10.400] But there's much that I do believe that has a scientific basis.
[00:18:10.720 --> 00:18:19.040] And from what I see, the science dovetails rather nicely with the religious perspective of Christianity.
[00:18:19.360 --> 00:18:22.080] But I don't claim scientific proof for any of that.
[00:18:22.800 --> 00:18:31.120] But I think that the soul is immortal based on scientific reasons that have nothing to do with faith.
[00:18:31.600 --> 00:18:33.360] Well, I mean, there's nothing that's immortal.
[00:18:33.360 --> 00:18:39.840] The universe has a finite lifetime, and there's no scientific evidence for anything being immortal.
[00:18:39.840 --> 00:18:42.480] But, Mike, let's get down a little bit to fact.
[00:18:42.560 --> 00:19:09.920] There's this beautiful case study by one of your colleagues, Itzhak Fried, that I've worked with for many years, that was published a couple of years ago about a 14-year-old girl, no previous medical illnesses, who complained, when she was 13 or so, of intense episodes of feeling guilty, guilt, and some distress, typically in a social situation at school or with friends or with her parents.
[00:19:10.160 --> 00:19:30.000] At some point, she got tonic-clonic epileptic seizures and was seen and evaluated by neurologists, and they detected a tumor in the anterior cingulate, subgenual, you know, so in the frontal part, but underneath the corpus callosum.
[00:19:31.080 --> 00:19:47.880] And then they did this study where first Itzhak Fried did put electrodes in there to monitor and could show that every time you stimulate in the brain of this girl, you evoked intense feelings of guilt.
[00:19:47.880 --> 00:19:53.080] Now, guilt, so I grew up, as a disclosure, I grew up as a Roman Catholic and was devout.
[00:19:53.080 --> 00:19:56.600] And of course, there's a lot of guilt in Roman.
[00:19:56.840 --> 00:20:01.160] I mean, it's sort of a Catholic emotion.
[00:20:01.160 --> 00:20:04.280] So here you could evoke guilt.
[00:20:04.280 --> 00:20:07.560] Furthermore, then they did the resection.
[00:20:07.560 --> 00:20:17.080] They removed the tumor in a second operation, and the girl never had these episodes of guilt again.
[00:20:17.080 --> 00:20:24.520] I don't know whether she had any feelings of guilt about anything, but what I do know is that these intense episodes of guilt were gone.
[00:20:24.520 --> 00:20:43.000] So here you have a really nice causal study where a social emotion that is often associated with religious practice, feeling guilty about a sin you committed or something that you did or didn't do, inappropriate, seems to have a very concrete correlate in a particular area of the brain.
[00:20:43.000 --> 00:20:46.760] You stimulate, you get guilt, you remove the area, the guilt is gone.
[00:20:47.400 --> 00:20:50.360] So why do you need anything else?
[00:20:50.360 --> 00:20:57.240] Well, in addition, there's a soul that feels guilt, but yeah, you can postulate that, but that seems completely redundant.
[00:20:57.240 --> 00:21:01.880] Here, it's all explained by a particular part of the brain that's active.
[00:21:02.200 --> 00:21:05.960] Yeah, it's kind of like they removed her Catholicism.
[00:21:05.960 --> 00:21:07.320] Yeah, the.
[00:21:09.000 --> 00:21:09.640] Yeah, I don't know.
[00:21:09.720 --> 00:21:10.960] I mean, this is in Israelis.
[00:21:11.080 --> 00:21:12.680] I presumably she was Jewish.
[00:21:12.680 --> 00:21:14.120] All right, all right.
[00:21:15.920 --> 00:21:39.600] The Aristotelian way, and I think this is strongly supported by the neuroscience, of understanding the issues of necessity and sufficiency for various aspects of the mind, is that there are five aspects of the soul, which I use almost interchangeably with the idea of mind.
[00:21:39.600 --> 00:21:52.000] Five aspects of the soul that are clearly material and are intimately associated with the body, and the body is necessary and sufficient for the exercise of these five things.
[00:21:52.000 --> 00:21:57.760] The first thing is homeostasis, physiological maintenance of the heart rate, the breathing, all that stuff.
[00:21:57.760 --> 00:21:59.840] Clearly, that's controlled by the brain.
[00:21:59.840 --> 00:22:01.440] The second is locomotion.
[00:22:01.440 --> 00:22:03.520] The brain lets me move my limbs.
[00:22:04.240 --> 00:22:05.840] The third is sensation.
[00:22:05.840 --> 00:22:10.720] The brain is obviously involved in my vision, my hearing, my tactile sense.
[00:22:10.880 --> 00:22:12.720] The fourth is emotion.
[00:22:12.720 --> 00:22:16.720] The brain is obviously involved in emotional states like feelings of guilt and so on.
[00:22:16.720 --> 00:22:18.960] That clearly is related to the brain.
[00:22:18.960 --> 00:22:23.120] You can change your emotional states very easily by having a glass of wine.
[00:22:23.440 --> 00:22:25.600] So that obviously changes.
[00:22:25.600 --> 00:22:27.120] And the fifth is memory.
[00:22:27.120 --> 00:22:32.240] The brain is clearly involved in memory, the hippocampi, the mammillary bodies.
[00:22:34.160 --> 00:22:39.440] You can lose the ability to form short-term memories by bilateral hippocampal lesions.
[00:22:39.440 --> 00:22:43.600] So those are material activities of the brain.
[00:22:43.920 --> 00:22:55.920] So this young girl was obviously having either subclinical seizures or some kind of problem in a region of her cingulate gyrus that was causing her to have this emotion of guilt.
[00:22:55.920 --> 00:22:59.360] But Aristotle would say, hey, that's perfectly fine.
[00:22:59.360 --> 00:23:00.760] Emotion comes from the brain.
[00:23:00.760 --> 00:23:01.880] No one doubts that.
[00:23:02.680 --> 00:23:04.920] Including the conscious experience, of course, right?
[00:23:05.400 --> 00:23:07.320] Of course I'm talking about the conscious experience.
[00:23:07.640 --> 00:23:08.680] Absolutely.
[00:23:09.000 --> 00:23:14.360] The Aristotelian view of the intellect and will is different.
[00:23:14.520 --> 00:23:16.520] And the Thomistic view follows that.
[00:23:16.520 --> 00:23:30.040] That the intellect, and that's the capacity to form concepts that are abstract, the capacity to make propositions, the capacity to do reason, the capacity to make arguments, is an immaterial power of the human soul.
[00:23:30.040 --> 00:23:31.400] It depends on the brain.
[00:23:31.400 --> 00:23:38.120] That is, the brain is necessary ordinarily for the ordinary exercise of our intellect, the kind of thing we're doing right now.
[00:23:38.600 --> 00:23:44.120] If I get hit on the head, I'm not going to be as effective on this podcast as if I wasn't hit on the head.
[00:23:44.120 --> 00:23:47.000] But the brain is not sufficient for that.
[00:23:47.000 --> 00:23:53.240] That is, that there's an immaterial aspect of our soul that is responsible for our intellect.
[00:23:53.240 --> 00:23:54.760] And the same for will.
[00:23:54.760 --> 00:23:58.120] Will is the appetite that follows on reason.
[00:23:58.120 --> 00:24:01.000] And that's an immaterial aspect of our soul as well.
[00:24:01.000 --> 00:24:04.920] And there's actually a ton of neuroscience that I think supports that viewpoint.
[00:24:04.920 --> 00:24:07.240] There's also a ton of logic that supports that viewpoint.
[00:24:07.240 --> 00:24:08.680] I'm happy to go into if you want.
[00:24:09.960 --> 00:24:11.080] Yeah, go ahead, Christof.
[00:24:11.400 --> 00:24:31.960] Okay, so A, we know in neurosurgical experiments, right, you can evoke the feeling of not just the movement of limbs if you stimulate motor or premotor area, but you can also evoke the feeling of will, where people say, oh, I wanted to move my mouth, but I didn't actually move it.
[00:24:31.960 --> 00:24:32.840] I had an intent.
[00:24:33.000 --> 00:24:37.640] Either a patient will talk about intention or they will talk about will.
[00:24:37.640 --> 00:24:46.720] Now, so you can always say, so we know that stimulating particular parts of the brain seems to involve centers that typically are involved.
[00:24:46.720 --> 00:24:56.000] Also, if you scan people and you do, you know, you do appropriate experiments that seem to relate to the execution of what we would call voluntary activity, right?
[00:24:56.000 --> 00:24:59.040] I raise my hand now or now, like in the Libet experiment.
[00:24:59.040 --> 00:25:02.880] Now, of course, you can always say, Yeah, this is all necessary, but it's not sufficient.
[00:25:02.880 --> 00:25:05.360] There is this additional quantity X.
[00:25:06.000 --> 00:25:11.120] I'm not sure how I would ever show that's not the case.
[00:25:11.120 --> 00:25:17.520] You know, it reminds me a little bit of the receiver, the radio receiver theory of William James that he voiced at some point.
[00:25:17.520 --> 00:25:18.880] I don't know how seriously he was.
[00:25:18.880 --> 00:25:32.560] It was like, Yeah, I mean, clearly, the brain is necessary, but you know, there's this sort of this, it's as necessary as, you know, if you're listening to a symphony orchestra, the radio is necessary, but this radio itself doesn't generate the music.
[00:25:32.560 --> 00:25:34.640] It's generated somewhere else.
[00:25:34.640 --> 00:25:53.840] Again, that's not very satisfactory compared to the standard neuroscience explanation, which is, I know, this piece of brain is necessary and sufficient to generate consciousness, conscious experience, or the feeling of will, or wanting, or intending.
[00:25:54.160 --> 00:26:07.840] But, Michael, it's your contention that there are certain areas under brain stimulation that cause basic sensory or motor experiences, but not higher abstract reasoning or free will choices.
[00:26:07.840 --> 00:26:11.040] Where do you imagine these things are happening?
[00:26:11.920 --> 00:26:25.040] Well, first of all, if I may, I just want to point out, as Christof raised the neuroscientific evidence, there's quite a bit of neuroscientific evidence that supports the view that the intellect and will are immaterial powers of the soul.
[00:26:25.280 --> 00:26:29.200] Wilder Penfield did a lot of great work on this.
[00:26:29.200 --> 00:26:33.960] He operated on 1,100 patients over his career with awake brain surgery.
[00:26:34.120 --> 00:26:39.080] The patients were awake, they had local anesthesia, they didn't feel pain, and he mapped their cortex.
[00:26:39.080 --> 00:26:47.080] And he found that he could elicit four and only four different kinds of responses from stimulating the cortex.
[00:26:47.080 --> 00:26:49.080] He could elicit movement.
[00:26:50.040 --> 00:26:52.280] He could elicit sensations of various sorts.
[00:26:52.280 --> 00:26:56.040] He could elicit memories occasionally, particularly in the temporal lobe.
[00:26:56.040 --> 00:26:58.760] And he could elicit emotions occasionally.
[00:26:58.760 --> 00:27:01.720] He could not elicit what he called mind action.
[00:27:01.720 --> 00:27:08.920] And what he meant by mind action was abstract thought, a sense of self, contemplation.
[00:27:08.920 --> 00:27:11.560] And he then looked at seizures.
[00:27:11.560 --> 00:27:17.400] And he looked at the phenomenology of seizures, what happens when people have seizures.
[00:27:17.400 --> 00:27:25.960] And I've seen the same thing, that when people have seizures, if they remain conscious during the seizures, they have one of four things.
[00:27:25.960 --> 00:27:27.000] They'll have movement.
[00:27:27.000 --> 00:27:29.960] They can have a focal seizure where they move a limb.
[00:27:29.960 --> 00:27:33.160] They'll have sensations, a tingling or a flash of light.
[00:27:33.720 --> 00:27:36.280] You can have memories evoked by seizures.
[00:27:36.600 --> 00:27:40.120] And you can have emotions evoked by seizures.
[00:27:40.120 --> 00:27:43.560] There are kinds of seizures that make you laugh, gelastic seizures.
[00:27:43.560 --> 00:27:46.200] There are seizures that make you very frightened.
[00:27:46.200 --> 00:27:48.440] But there are no calculus seizures.
[00:27:48.440 --> 00:27:50.040] There are no mathematic seizures.
[00:27:50.040 --> 00:27:51.640] There are no logic seizures.
[00:27:51.640 --> 00:27:55.800] There are no seizures that make a person engage in abstract thought.
[00:27:56.120 --> 00:28:00.920] The closest seizure that comes to that is a kind of a seizure called a forced thinking seizure.
[00:28:00.920 --> 00:28:04.200] And it's a seizure that was studied in some detail by Penfield.
[00:28:04.200 --> 00:28:05.320] They're very rare.
[00:28:05.320 --> 00:28:09.400] And forced thinking seizures, I think of as more like OCD seizures.
[00:28:09.400 --> 00:28:15.760] They're people who compulsively think about a particular thing, like, did I lock the door?
[00:28:14.840 --> 00:28:16.960] Did I turn the oven off?
[00:28:17.120 --> 00:28:17.600] Stuff like that.
[00:28:17.600 --> 00:28:22.320] But it's more OCD kind of things rather than abstract thought per se.
[00:28:22.320 --> 00:28:33.280] So there is considerable scientific evidence that intellect and will cannot be evoked from the brain in terms of abstract thought by stimulating the brain.
[00:28:36.080 --> 00:28:40.160] Absence of evidence isn't evidence of absence, right?
[00:28:40.160 --> 00:28:42.240] A prime precept of science.
[00:28:42.240 --> 00:28:44.320] And Penfield did his work 100 years ago.
[00:28:44.320 --> 00:28:55.600] Yes, he was a genius and he started or he really pushed the field of electric stimulation, but he also didn't find pain responses that we now know are prevalent in the insula.
[00:28:55.600 --> 00:29:00.880] And of course, you can put people in a magnetic scanner and ask them to do math problems, but people have done.
[00:29:00.880 --> 00:29:06.400] And you see particular
[00:29:59.920 --> 00:30:03.320] Region in the prefrontal cortex will light up.
[00:30:03.400 --> 00:30:12.120] And incidentally, if you, yeah, so you know, one-third of the brain is what you and your colleagues call non-eloquent cortex.
[00:30:12.120 --> 00:30:15.720] But it's simply not true that if you remove them, patients don't have deficits.
[00:30:15.720 --> 00:30:34.280] If you remove them and you test them in specific contexts where they have to do social reasoning or moral reasoning or higher order thinking, they all show deficits because those parts of the brain are associated with what we call higher intelligence, with and really with intelligence, right?
[00:30:34.280 --> 00:30:38.120] Now you can say, well, that's necessary, but that's not sufficient.
[00:30:38.120 --> 00:30:40.760] Yeah, and that, you know, you can always argue that.
[00:30:40.760 --> 00:30:50.200] But it is well known and established to 100 years of functional neurology that parts of that, that, I mean, there are no bits and pieces of the brain that don't have any function.
[00:30:50.200 --> 00:30:54.280] That's just there to support the, you know, to fill up the cranium, right?
[00:30:54.280 --> 00:30:59.160] Every piece of brain has a specific function, otherwise it wouldn't have evolved.
[00:30:59.800 --> 00:31:01.480] Well, hang on.
[00:31:01.480 --> 00:31:11.480] So is it your point, Christof, that stimulating any one particular area is not going to trigger, say, mathematical reasoning because it's not done in any one area?
[00:31:11.800 --> 00:31:14.120] It's more of an integrated system.
[00:31:14.680 --> 00:31:17.800] Yeah, I mean, and what happens in electrical stimulation?
[00:31:17.800 --> 00:31:19.640] Why does electrical stimulation work?
[00:31:19.880 --> 00:31:28.200] So let's say it's best explored in the visual brain, where you have neurons nearby that represent, you know, let's say points in a particular part of the visual brain.
[00:31:28.520 --> 00:31:35.720] If you now introduce an electrode in this tissue, you're going to excite neurons primarily close by, not only, but primarily close by.
[00:31:35.720 --> 00:31:41.160] They all represent that particular portion of visual space, and therefore you see what's called a phosphene.
[00:31:41.160 --> 00:31:45.000] You see some shimmering light there, and that's what's generally reported.
[00:31:45.120 --> 00:31:56.880] Now, higher-order things like intelligence, like reasoning, is very likely not organized in this very topographic way because there isn't a single mapping.
[00:31:57.280 --> 00:32:06.400] The visual brain is organized in this map-like way because there's a nice mapping from the outside, from the visual world outside to our visual cortex.
[00:32:06.400 --> 00:32:10.080] There isn't such a mapping for intelligence, for higher-order reasoning.
[00:32:10.080 --> 00:32:19.600] And therefore, you don't get if you stimulate, if you put one electrode in, you don't get what Mike refers to as sort of, oh, I just thought of E equals M C square.
[00:32:19.600 --> 00:32:31.600] Although, I must point out, having worked on single neurons with Itzhak Fried, we have shown a patient, a physics student who had to get pre-epileptic surgery mapping.
[00:32:32.000 --> 00:32:36.640] We showed him images, including of E equals M C square and Albert Einstein.
[00:32:36.800 --> 00:32:39.680] We had single neurons that responded to that image.
[00:32:40.720 --> 00:32:42.080] Concept neurons, huh?
[00:32:42.080 --> 00:32:43.600] Yes, concept neurons, exactly.
[00:32:43.920 --> 00:32:45.520] The Jennifer Aniston neuron.
[00:32:45.840 --> 00:32:46.480] Right, right.
[00:32:47.600 --> 00:32:49.040] But I said, I agree.
[00:32:49.040 --> 00:32:52.160] Jennifer Aniston isn't an abstract scientific thought.
[00:32:52.160 --> 00:32:58.080] But anyhow, so they're not nearby in a way that you can directly evoke them with electrical stimulation.
[00:32:58.080 --> 00:33:04.480] And just because you don't see them in this way doesn't mean they don't have a substrate in the brain.
[00:33:04.480 --> 00:33:04.800] Right.
[00:33:04.800 --> 00:33:07.040] Well, a couple of things.
[00:33:07.040 --> 00:33:13.360] One is that one has to recognize the denominator in Penfield's work.
[00:33:13.840 --> 00:33:15.440] He did 1,100 patients.
[00:33:15.440 --> 00:33:23.760] If the operations took eight hours and he did two stimulations per minute, that means he did 1.1 million stimulations.
[00:33:23.760 --> 00:33:28.240] And in not a single one of them was he able to evoke any kind of abstract thought.
[00:33:28.560 --> 00:33:39.000] I did a back-of-the-envelope calculation of the number of seizures that human beings have had over the last 200 years, which is the period of time that we've had modern neuroscience.
[00:33:39.000 --> 00:33:41.000] And it's been about a quarter of a billion seizures.
[00:33:41.000 --> 00:33:49.080] And I'm unaware of a single report in the medical literature of a patient having a seizure whose ictus involved abstract thought.
[00:33:49.080 --> 00:33:58.760] So I recognize that you feel that the cause of abstract thought is sort of hiding in various places in the brain.
[00:33:59.160 --> 00:34:00.520] It's not hiding, it's distributed.
[00:34:00.520 --> 00:34:08.360] Just like in a deep neural network, where the intelligence that emerges out of ChatGPT is distributed across 120 layers.
[00:34:08.360 --> 00:34:09.320] It's not localized.
[00:34:09.560 --> 00:34:10.760] You can't point to this layer.
[00:34:10.760 --> 00:34:12.600] Ah, this is what makes it so smart.
[00:34:12.600 --> 00:34:15.560] No, it's distributed across 122 layers.
[00:34:15.560 --> 00:34:22.440] So if you were to simulate, to stimulate in a deep neural network, again, you wouldn't get an abstract thought.
[00:34:22.440 --> 00:34:29.400] You would get something you couldn't interpret because it is distributed across the entire network.
[00:34:29.400 --> 00:34:30.760] Same thing with human intelligence.
[00:34:31.240 --> 00:34:35.400] But you can get remarkably complex reactions.
[00:34:35.400 --> 00:34:37.000] For example, memories.
[00:34:37.000 --> 00:34:44.680] You can stimulate the temporal lobe and get detailed memories of events that happened years before, including the details of conversations.
[00:34:46.760 --> 00:34:50.120] There have been people who've, but you can also get that with seizures.
[00:34:50.120 --> 00:34:56.360] There have been people who've had complex partial seizures who are able to drive cars during the time they're having a seizure.
[00:34:56.840 --> 00:35:03.480] So you can elicit enormously complex things from the brain, except abstract thought.
[00:35:04.440 --> 00:35:06.120] Certain types of abstract thought.
[00:35:06.120 --> 00:35:08.680] Look, I mean, the literature is full here.
[00:35:08.680 --> 00:35:10.680] I can give you a description.
[00:35:10.680 --> 00:35:17.200] The stimulation induces the disappearance of the word in my mind and replaces it with an idea of leaving.
[00:35:17.520 --> 00:35:18.160] Leaving.
[00:35:18.160 --> 00:35:20.560] So he's in the OI, he's being stimulated.
[00:35:20.560 --> 00:35:25.440] This is in the context again of a preclinical workup.
[00:35:25.440 --> 00:35:32.080] So now we can argue: is this abstract the idea of leaving, or is this something still concrete because it represents space?
[00:35:32.080 --> 00:35:32.880] I don't know.
[00:35:33.360 --> 00:35:35.440] It's a continuum.
[00:35:36.080 --> 00:35:51.600] And I think, I mean, René Descartes, Mike, what reminds me of René Descartes is that one of the, probably the key reason he postulated res cogitans, thinking substance, you know, the classical dualism that I understand you don't hold.
[00:35:51.600 --> 00:36:06.320] But many people, of course, do hold, because he was unable to conceive how matter, mechanistic matter, as he understood, right, before the advent of electromagnetism and quantum mechanics, could produce thought.
[00:36:06.320 --> 00:36:14.160] Now, today we know how thinking and how intelligence arises because we can build it into artificial networks.
[00:36:15.920 --> 00:36:19.520] So, how does a neural network produce a concept?
[00:36:20.160 --> 00:36:21.920] What are the details on that?
[00:36:22.240 --> 00:36:23.280] Oh, it's abstract.
[00:36:23.280 --> 00:36:26.560] It's a progressive layer, so you wouldn't get it in two or three layers.
[00:36:26.560 --> 00:36:37.200] But if you have deep enough network, look, if we interact every day over the next month or two, my brain will wire up neurons that represent you.
[00:36:37.520 --> 00:36:55.120] And if I think about you and your ideas, and hylomorphism, and Aristotle and Thomas Aquinas, all of that, I will wire up neurons that represent that because it's an elegant way for the brain using minimal neuronal resources to represent things that my brain obviously deems important enough.
[00:36:55.120 --> 00:36:59.880] I mean, we know because we can simulate such things in artificial neural networks.
[00:36:59.680 --> 00:37:01.000] There's no magic about it.
[00:37:01.880 --> 00:37:08.200] You can simulate the causation of concepts in artificial neural networks.
[00:37:09.320 --> 00:37:10.280] I don't get that.
[00:37:10.680 --> 00:37:13.560] Do artificial neural networks contemplate mercy?
[00:37:13.560 --> 00:37:15.960] I mean, it's a bizarre statement.
[00:37:15.960 --> 00:37:17.640] Not yet, but soon.
[00:37:18.280 --> 00:37:19.960] Yeah, I see a priori.
[00:37:19.960 --> 00:37:20.440] You're right.
[00:37:20.680 --> 00:37:24.840] I haven't dealt with mercy because I'm a visual neuroscientist.
[00:37:24.840 --> 00:37:27.000] I'm interested in, you know.
[00:37:27.320 --> 00:37:33.960] And so I haven't dealt with it, but I see no reason for it to.
[00:37:34.360 --> 00:37:37.960] Let me ask a counterfactual question for you, Michael, on specific examples.
[00:37:37.960 --> 00:37:43.640] The two most famous from neuroscience, Phineas Gage and Henry Molaison.
[00:37:44.200 --> 00:37:48.920] Why didn't their brains or minds come back in after the damage?
[00:37:49.320 --> 00:37:51.080] Well, first of all, I don't know Molaison.
[00:37:51.240 --> 00:37:51.960] I know Gage.
[00:37:52.840 --> 00:38:07.000] Molaison was the one who was H.M., yeah, whose hippocampi, what was it, were taken out because of his seizures, and then he could never again form new memories.
[00:38:08.600 --> 00:38:12.040] So I'll ask a more general question, which I asked Deepak Chopra.
[00:38:12.280 --> 00:38:16.040] Where does Aunt Millie's mind go when her brain dies of Alzheimer's?
[00:38:16.040 --> 00:38:23.160] And Deepak's answer, it returns to where it was before because he believes consciousness is the ground of all being and so on.
[00:38:23.320 --> 00:38:34.040] First of all, Phineas Gage obviously got a spike driven through his frontal lobes, and it changed his emotional state, his baseline emotions.
[00:38:34.040 --> 00:38:44.720] He had been a very religious, kind of conservative guy beforehand, and was a little more of a libertine after this injury, which is totally consistent with a hylomorphic view of how the brain works.
[00:38:44.440 --> 00:38:47.840] Meaning, you can change emotions, you can change emotional states.
[00:38:48.240 --> 00:38:55.680] If you're continuously drinking whiskey, you're going to emotionally be a different person, or if you get a spike driven through your brain, you're going to be emotionally different.
[00:38:56.720 --> 00:39:00.640] Getting the spike driven through his brain didn't make him into a mathematician.
[00:39:00.640 --> 00:39:05.200] That is, there's a difference between emotional states and abstract thought.
[00:39:05.200 --> 00:39:11.600] On the loss of memory from bilateral hippocampal injury, yeah, I mean, that's a very, very real thing.
[00:39:12.080 --> 00:39:17.040] But that's memory, and that is a physical aspect of what the brain does.
[00:39:17.440 --> 00:39:19.200] No one denies that.
[00:39:19.200 --> 00:39:20.960] Memory is not abstract.
[00:39:20.960 --> 00:39:23.600] So you lose the ability to form new memories.
[00:39:23.600 --> 00:39:25.120] Yes, that's certainly true.
[00:39:25.120 --> 00:39:26.880] But it's abstract thought.
[00:39:26.880 --> 00:39:40.720] And I'm still not clear on how a neural network, integrated neural network, whatever, how does the Venn diagram of an integrated neural network overlap with psychological predicates like concepts?
[00:39:42.720 --> 00:39:43.680] How does that work?
[00:39:44.000 --> 00:39:51.040] Well, if you feed it, let's say, human novels, as is standard now in GPT, right?
[00:39:51.040 --> 00:39:52.800] You feed it human literature.
[00:39:53.280 --> 00:39:58.560] I'm sure, I haven't tested it, but now it has access to all of the human literature.
[00:39:58.560 --> 00:40:04.400] And human literature is full of things like emotion and religion and mercy and other concepts.
[00:40:04.400 --> 00:40:11.840] And ultimately, in these networks, there will emerge nodes or clusters of nodes that will represent the idea of mercy.
[00:40:11.840 --> 00:40:20.080] And I'm sure we can right now, I can ask ChatGPT what does it think about mercy, and it'll give me some reasonable sounding answer.
[00:40:20.080 --> 00:40:20.560] Right.
[00:40:20.560 --> 00:40:27.040] But does the ChatGPT have the subjective experience of thinking about mercy?
[00:40:27.080 --> 00:40:27.680] Oh, boy.
[00:40:27.680 --> 00:40:28.960] No, so that's a separate.
[00:40:29.600 --> 00:40:32.360] Mike, so we can have an entirely separate debate.
[00:40:32.600 --> 00:40:38.840] And I think this is the debate where, personally, something like hylomorphism is much more likely to be relevant.
[00:40:39.000 --> 00:40:47.240] What we don't understand, even in the case of a single visual stimulus, an isolated bright light, you know, cars can navigate.
[00:40:47.240 --> 00:40:50.200] A Tesla can navigate, but I can see.
[00:40:51.000 --> 00:40:53.800] Tesla car doesn't see in the way we see.
[00:40:54.040 --> 00:40:55.960] It doesn't have the subjective feeling.
[00:40:55.960 --> 00:40:57.400] But that applies to anything.
[00:40:57.400 --> 00:41:03.400] That applies to seeing, hearing, smelling, touching, wanting, desiring, and also abstract thoughts.
[00:41:03.400 --> 00:41:16.680] That is a true mystery for which, right now, we don't have any generally accepted answer: how conscious experience comes about in a physical world.
[00:41:17.000 --> 00:41:31.160] I think the only thing that we can say with confidence, and we can say a lot of things with confidence about this, is that there is a correspondence between certain activities in the brain and certain psychological states.
[00:41:31.960 --> 00:41:33.400] There's no question about that.
[00:41:33.400 --> 00:41:51.640] However, from an ontological standpoint, it is not clear how the material activity in the brain corresponds to, generates, or is related to the psychological aspect, and that is the beating heart of the mind-body problem.
[00:41:51.640 --> 00:41:52.040] Right.
[00:41:52.040 --> 00:42:00.280] And I think it is largely a problem that we've created ourselves with what I think is faulty metaphysics.
[00:42:00.840 --> 00:42:07.400] Prior to Descartes, the hylomorphic way of understanding the world was widespread in the Western world.
[00:42:07.400 --> 00:42:09.640] Descartes really changed things on that.
[00:42:09.640 --> 00:42:17.840] And of course, his view of the human being was that the human being was this composite of two substances, of res cogitans and res extensa.
[00:42:18.160 --> 00:42:25.600] And the res cogitans was the soul, the thinking substance, and the res extensa was really a machine, a biological machine.
[00:42:26.160 --> 00:42:32.320] Because of the rise of materialism over the next couple of centuries, people kind of discarded the res cogitans.
[00:42:32.320 --> 00:42:36.480] They got rid of the ghost in the machine and just left us with the machine.
[00:42:36.480 --> 00:42:44.880] And so from that kind of materialistic, and people call it a mechanistic philosophy, that kind of philosophical perspective, we are meat machines.
[00:42:44.880 --> 00:42:46.480] We're meat robots.
[00:42:46.480 --> 00:42:57.040] And then we wonder why we can't explain how the mind works when our very metaphysical presuppositions about what we are exclude the mind explicitly.
[00:42:57.040 --> 00:43:04.160] That is, if you're working from a Cartesian background and you get rid of res cogitans, the only thing you've got left is the meat machine.
[00:43:04.160 --> 00:43:06.640] Why do you wonder about why you can't explain the mind?
[00:43:08.320 --> 00:43:19.360] I would differ with you only in saying that every behavior associated with mind can be explained satisfactorily using a standard metaphysical framework.
[00:43:19.360 --> 00:43:26.880] Why someone displays the emotion of guilt and thereby does a certain action, all of that can be explained.
[00:43:26.880 --> 00:43:33.120] But I grant you, the feeling, the experiential aspect cannot be explained yet.
[00:43:33.120 --> 00:43:40.080] Now, it may be that this needs a reformulation, because, for the same reason you mentioned, we now live, of course, with quantum mechanics.
[00:43:40.080 --> 00:43:49.840] And quantum mechanics, you know, is much more difficult to capture within the standard materialistic framework that really depends on classical physics and René Descartes.
[00:43:49.840 --> 00:43:51.840] So maybe there's an answer there.
[00:43:51.840 --> 00:43:55.280] Maybe some other answers are things like panpsychism or idealism.
[00:43:55.280 --> 00:43:56.880] Those are other answers.
[00:43:57.200 --> 00:44:03.480] But by the way, none of them say anything at all about immortality, including, in my reading at least, Aristotle.
[00:44:03.640 --> 00:44:11.160] This immortality is something that was added post factum because people obviously prefer to live forever.
[00:44:11.160 --> 00:44:19.400] We have no evidence whatsoever, at least scientific, third-person evidence, for this supposed immortality.
[00:44:19.400 --> 00:44:20.520] Well, I'm all for it.
[00:44:20.840 --> 00:44:29.640] Yeah, well, I mean, Aristotle was actually silent on the question of immortality of the soul.
[00:44:30.680 --> 00:44:32.760] His people.
[00:44:32.920 --> 00:44:35.240] Plato, of course, and Thomas Aquinas.
[00:44:35.720 --> 00:44:41.240] The argument for immortality of the soul takes kind of two forms.
[00:44:41.240 --> 00:44:43.480] One is a logical argument.
[00:44:43.480 --> 00:44:48.280] The other is an argument particularly related to near-death experiences.
[00:44:48.280 --> 00:45:09.000] But the logical argument for immortality of the soul that was put forth by Thomas Aquinas is that because we have immaterial powers of intellect and will, immaterial capacities for reason, an immaterial power cannot die in the same way that a material substance can die.
[00:45:09.000 --> 00:45:12.920] When we die, what we mean by dying is we disintegrate.
[00:45:12.920 --> 00:45:15.880] That is, that, for example, my body won't go away.
[00:45:15.880 --> 00:45:18.200] The atoms in my body will always be here.
[00:45:18.200 --> 00:45:20.760] They'll just be eaten up by worms.
[00:45:21.080 --> 00:45:25.960] What will happen to my body when I die is that there will be a lack of integration.
[00:45:25.960 --> 00:45:28.520] The body will fall apart and go into pieces.
[00:45:28.520 --> 00:45:36.040] The thing is that immaterial aspects of human souls can't disintegrate because they aren't material to begin with.
[00:45:36.040 --> 00:45:38.280] They can't lose parts because they don't have parts.
[00:45:38.760 --> 00:45:40.440] Is it information, Michael?
[00:45:41.000 --> 00:45:47.200] Well, it resembles information, but information is a very tricky concept.
[00:45:47.200 --> 00:45:53.840] I mean, semantically, information is just the set of true propositions.
[00:45:53.840 --> 00:45:54.960] So, how that relates to the majority of the system.
[00:45:56.400 --> 00:45:59.840] If I can interrupt, information is informare.
[00:45:59.840 --> 00:46:06.640] It again comes from Aristotle: informare, to give form to.
[00:46:06.640 --> 00:46:13.360] But, Mike, but aren't you then troubled by the fact that GPT 4, 30, that can reason?
[00:46:13.360 --> 00:46:15.520] 3.0 can actually reason, right?
[00:46:15.520 --> 00:46:16.240] You can ask it questions.
[00:46:16.320 --> 00:46:18.480] No, no, no, ask it to explain itself.
[00:46:18.480 --> 00:46:18.960] It can reason.
[00:46:19.200 --> 00:46:21.200] So, it also is immortal?
[00:46:21.200 --> 00:46:26.320] No, no, GPT can't reason any more than my watch can tell time.
[00:46:26.320 --> 00:46:28.480] GPT is a tool.
[00:46:28.480 --> 00:46:33.600] It's a tool in the same way a book is a tool, or a sundial is a tool, or a watch is a tool.
[00:46:33.680 --> 00:46:34.880] It's highly biased.
[00:46:34.880 --> 00:46:36.960] It has all the appearance of reasoning.
[00:46:36.960 --> 00:46:42.640] You can ask it a particular argument, it'll explain you why it came to this conclusion, right?
[00:46:42.640 --> 00:46:47.520] You can have a, some people now, Mike Weiser, he does regular mathematics with it.
[00:46:47.520 --> 00:46:49.200] He asks it, why this, why not that?
[00:46:49.360 --> 00:46:49.840] It can reason.
[00:46:50.080 --> 00:46:52.480] No, it's a tool that leverages human reason.
[00:46:52.480 --> 00:46:58.560] It's been programmed in such a way that given a particular input, it gives a particular output.
[00:46:58.560 --> 00:46:59.280] No, that sounds like a problem.
[00:46:59.360 --> 00:47:03.120] So, ask it the same question 10 times, it gives you 10 different answers.
[00:47:03.120 --> 00:47:06.480] It is not programmed in any conventional way.
[00:47:06.480 --> 00:47:07.040] It has learned.
[00:47:07.440 --> 00:47:09.520] Yeah, so it's also going to read your book.
[00:47:09.680 --> 00:47:11.040] There's no question about it.
[00:47:11.040 --> 00:47:15.200] And you can probably ask it what it thinks about your book, and it'll give you some answer.
[00:47:15.520 --> 00:47:20.000] So, you believe that it can reason, but it has no first-person experience.
[00:47:20.640 --> 00:47:21.920] That's entirely correct.
[00:47:21.920 --> 00:47:23.680] Because reason is pure intelligence.
[00:47:23.680 --> 00:47:24.560] It's a computational process.
[00:47:25.600 --> 00:47:27.760] How do you reason without first-person experience?
[00:47:28.080 --> 00:47:29.240] That's an interesting concept.
[00:47:28.960 --> 00:47:29.640] You can perfect.
[00:47:30.360 --> 00:47:34.520] Look, Leibniz, Leibniz first showed the way, right?
[00:47:35.160 --> 00:47:39.160] When he said we don't need to have this endless argument, we can just calculate.
[00:47:39.160 --> 00:47:41.560] You give me your premise, and we calculate.
[00:47:41.560 --> 00:47:46.040] We have some sort of calculus that allows us to come to the conclusion A or non-A.
[00:47:46.040 --> 00:47:48.120] This is exactly what these machines do.
[00:47:49.080 --> 00:47:53.160] They don't have any first-person thinking, but they are intelligent.
[00:47:53.160 --> 00:47:56.840] So intelligence and consciousness are really two different things, really orthogonal.
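To make the "let us calculate" idea concrete, here is a minimal, hedged sketch in Python of Leibniz-style mechanical inference: given premises in propositional form, brute-force checking whether a candidate conclusion (A or non-A) follows. The premises, variable names, and helper function are illustrative assumptions, not anything the speakers specify.

```python
from itertools import product

# A toy version of Leibniz's "let us calculate": given premises and a
# candidate conclusion over propositional variables, check by brute force
# whether the conclusion holds in every assignment that satisfies the premises.
# Premise/conclusion choices below are illustrative only.

def entails(premises, conclusion, variables):
    """Return True if the conclusion is true in every model of the premises."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a countermodel
    return True

# Premises: A implies B, and A.  Candidate conclusions: B, and not-B.
premises = [lambda e: (not e["A"]) or e["B"], lambda e: e["A"]]

print(entails(premises, lambda e: e["B"], ["A", "B"]))      # True: B follows
print(entails(premises, lambda e: not e["B"], ["A", "B"]))  # False: not-B does not
```

This is, of course, only the mechanical-inference half of the point; it says nothing about whether any such calculation is accompanied by experience.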
[00:47:57.320 --> 00:47:59.320] Is a book intelligent?
[00:47:59.640 --> 00:48:01.480] No, because it's entirely passive.
[00:48:01.960 --> 00:48:03.560] I mean, this book is a passive thing.
[00:48:03.560 --> 00:48:04.200] It doesn't do anything.
[00:48:04.200 --> 00:48:05.480] But ChatGPT is a.
[00:48:05.880 --> 00:48:07.080] Is a watch intelligent?
[00:48:07.400 --> 00:48:08.520] Is a watch intelligent?
[00:48:10.120 --> 00:48:12.040] Does a watch know what time it is?
[00:48:12.840 --> 00:48:13.160] No.
[00:48:13.640 --> 00:48:14.920] That's not intelligence.
[00:48:15.560 --> 00:48:16.920] That's self-consciousness.
[00:48:16.920 --> 00:48:17.240] No, no, no.
[00:48:17.320 --> 00:48:18.520] Intelligence is the ability.
[00:48:18.920 --> 00:48:20.440] No, does a watch know what time it is?
[00:48:20.600 --> 00:48:21.880] You don't have to be self-conscious.
[00:48:21.880 --> 00:48:23.240] I mean, does a watch know what time it is?
[00:48:23.880 --> 00:48:29.080] No, it does not know what time it is, because it's not conscious.
[00:48:29.080 --> 00:48:33.160] But ChatGPT, look, it depends what you mean by intelligence.
[00:48:33.160 --> 00:48:41.240] The way it's usually defined, intelligence is the ability to rapidly adapt to new environments, learn from them, and abstract that knowledge to new situations.
[00:48:41.240 --> 00:48:43.800] ChatGPT can do that brilliantly.
[00:48:44.280 --> 00:48:45.880] Is it smarter than us?
[00:48:45.880 --> 00:48:47.080] Maybe not yet, but very soon.
[00:48:47.480 --> 00:48:52.280] But it can certainly fulfill the standard definition of intelligence as measured by IQ tests.
[00:48:52.280 --> 00:48:57.400] By the way, you can get ChatGPT to take all these IQ tests, and of course it aces all of them.
[00:48:57.720 --> 00:49:00.600] Does my cruise control know how to drive?
[00:49:01.560 --> 00:49:02.840] It knows how to drive, yes.
[00:49:02.840 --> 00:49:03.640] But Michael, you're asking.
[00:49:03.720 --> 00:49:04.360] It knows how to drive.
[00:49:04.600 --> 00:49:09.640] I think we're back to the hard problem of consciousness and what it's like to be something, the first-person experience will never know.
[00:49:09.880 --> 00:49:17.120] Well, except that I would say, Michael, that intelligence, so the hard problem is all about consciousness.
[00:49:14.040 --> 00:49:18.480] It's not about intelligence.
[00:49:14.280 --> 00:49:22.320] Intelligence is not that controversial in principle.
[00:49:22.560 --> 00:49:25.920] It doesn't have a hard problem because we can see it.
[00:49:25.920 --> 00:49:26.960] We engineer it now.
[00:49:27.200 --> 00:49:28.480] We're building it every day.
[00:49:28.640 --> 00:49:30.160] It's getting better and better.
[00:49:30.480 --> 00:49:33.120] Is a pocket calculator intelligent?
[00:49:33.440 --> 00:49:36.000] No, because it doesn't learn.
[00:49:36.000 --> 00:49:38.000] It has a short-term memory, but it cannot learn.
[00:49:38.000 --> 00:49:39.920] But ChatGPT can learn.
[00:49:40.720 --> 00:49:46.880] You can give it this book, and then it'll answer questions about this book, and you can ask it, how is this book different from Aristotle?
[00:49:46.880 --> 00:49:49.200] Is it compatible with hylomorphism?
[00:49:49.200 --> 00:49:51.600] What would Descartes say about reading this book?
[00:49:51.600 --> 00:49:52.960] I mean, it can do all of that.
[00:49:52.960 --> 00:49:56.480] And you can even say, oh, by the way, can you translate this book into German for me?
[00:49:56.480 --> 00:49:58.640] And it'll translate it into German.
[00:49:58.640 --> 00:50:01.360] Now, if that isn't intelligence, what is intelligence?
[00:50:01.680 --> 00:50:05.280] Intelligence is something that only a living thing has.
[00:50:05.280 --> 00:50:09.600] I mean, all that you're talking about are tools.
[00:50:09.600 --> 00:50:11.280] They're fascinating tools.
[00:50:11.280 --> 00:50:13.200] They're tools that are very complex.
[00:50:13.200 --> 00:50:18.080] They're tools that have complexity that is, in some ways, more than we understand right now.
[00:50:18.080 --> 00:50:22.400] That is, we've built them, but they can do things we don't quite get.
[00:50:22.400 --> 00:50:24.320] But that doesn't mean that it's intelligence.
[00:50:24.960 --> 00:50:26.000] They're just tools.
[00:50:26.000 --> 00:50:31.600] You're defining it in a certain way such that Data on Star Trek will never be a person.
[00:50:31.600 --> 00:50:32.240] Simply because...
[00:50:32.720 --> 00:50:33.440] Of course, of course.
[00:50:34.880 --> 00:50:39.680] So that's your privilege, but that's not the conventional meaning of intelligence.
[00:50:40.000 --> 00:50:41.440] Well, the convention is wrong.
[00:50:41.440 --> 00:50:48.560] I mean, it makes no sense to say that a mechanical object is intelligent.
[00:50:48.560 --> 00:50:50.160] A mechanical object is a tool.
[00:50:50.400 --> 00:51:00.760] The fact of the matter is, Mike, we are more and more surrounded by intelligent agents that everyone calls intelligent, that act intelligently, and that do all of our bidding intelligently or not.
[00:51:00.760 --> 00:51:08.760] And we face, I think, as a society, an enormous danger in misunderstanding the nature of the intelligence.
[00:51:08.760 --> 00:51:20.280] Every shred of intelligence that appears to come out of these machines is human intelligence that is altered and leveraged by the algorithm of the machine, but it's all human.
[00:51:21.720 --> 00:51:24.440] We will meet the enemy, and the enemy's us.
[00:51:24.440 --> 00:51:27.080] It's our intelligence that's in these machines.
[00:51:27.080 --> 00:51:27.640] No one else.
[00:51:29.480 --> 00:51:35.960] Mike, go back to the first triumph of this new approach, machine learning, AlphaGo.
[00:51:35.960 --> 00:51:50.760] So AlphaGo was trained and vanquished Lee Sedol in 2016, because it was taught a huge corpus of existing Go games.
[00:51:50.760 --> 00:51:52.760] AlphaGo Zero.
[00:51:52.760 --> 00:51:57.160] FanDuel's getting you ready for the NFL season with an offer you won't want to miss.
[00:51:57.160 --> 00:52:02.120] New customers can bet just $5 and get 300 in bonus bets if you win.
[00:52:02.120 --> 00:52:02.680] That's right.
[00:52:02.680 --> 00:52:09.160] Pick a bet, bet $5, and if it wins, you'll unlock $300 in bonus bets to use all across the app.
[00:52:09.160 --> 00:52:15.080] You can build parlays, bet player props, ride the live lines, whatever your style, FanDuel has you covered.
[00:52:15.080 --> 00:52:16.920] The NFL season's almost here.
[00:52:16.920 --> 00:52:19.400] The only question is: are you ready to play?
[00:52:19.400 --> 00:52:25.240] Visit fanduel.com/slash sportsfan to download the FanDuel app today and get started.
[00:52:25.240 --> 00:52:27.720] Must be 21 plus and physically present in New York.
[00:52:27.720 --> 00:52:29.240] First online real money wager only.
[00:52:29.240 --> 00:52:30.600] $10 first deposit required.
[00:52:30.600 --> 00:52:33.960] A bonus issued as non-withdrawable bonus bets that expire seven days after receipt.
[00:52:33.960 --> 00:52:34.760] Restrictions apply.
[00:52:34.760 --> 00:52:36.920] See terms at sportsbook.fanuel.com.
[00:52:36.920 --> 00:52:46.000] For help with a gambling problem, call 1-877-8 HOPENY or text Hope NY to 467-369 or visit oasas.ny.gov/slash gambling.
[00:52:46.000 --> 00:52:47.600] Standard text messaging rates apply.
[00:52:44.920 --> 00:52:50.960] Sports betting is void in Georgia, Hawaii, Utah, and other states where prohibited.
[00:53:15.120 --> 00:53:23.200] By its own bootstrap, it managed to learn without seeing anything about any human.
[00:53:23.200 --> 00:53:26.080] It had no experience of human games whatsoever.
[00:53:26.080 --> 00:53:29.200] And the same thing, I think, ultimately will happen with intelligence.
[00:53:29.200 --> 00:53:30.400] Right now, yes, I agree.
[00:53:30.400 --> 00:53:34.720] They're all fed humanity's collective literary output.
[00:53:35.120 --> 00:53:40.960] But sooner than later, and certainly within the next 10 years, they'll be able to bootstrap themselves without that.
[00:53:40.960 --> 00:53:47.920] But Christoph, I think Michael's point is that, say, Watson beating Ken Jennings on Jeopardy, was it excited?
[00:53:47.920 --> 00:53:49.040] Was it happy it won?
[00:53:49.040 --> 00:53:50.880] Does it know it was even playing Jeopardy?
[00:53:50.880 --> 00:53:51.600] No, it didn't.
[00:53:52.000 --> 00:53:53.840] It has no first-person experience.
[00:53:55.280 --> 00:54:00.480] Although that may change with quantum computers, I mean, you know, again, the world is changing very rapidly.
[00:54:00.480 --> 00:54:05.280] With quantum computers, it may well feel like something to be one of these machines.
[00:54:06.160 --> 00:54:07.040] We don't know.
[00:54:07.040 --> 00:54:08.960] Okay, I think some of this has to do with language.
[00:54:08.960 --> 00:54:16.640] So when we use the word mind, I think Christophe and I mean something like it's just a word to describe what the brain is doing.
[00:54:16.640 --> 00:54:19.360] But Michael, I think you mean something different from mind.
[00:54:19.360 --> 00:54:20.480] It's different from brain.
[00:54:20.480 --> 00:54:20.960] Is that right?
[00:54:21.520 --> 00:54:27.920] Yeah, well, I think the hylomorphic concept of soul is very, very helpful here.
[00:54:27.920 --> 00:54:33.960] What we talk about when we say mind is really several powers of what Aristotle or St.
[00:54:29.680 --> 00:54:36.840] Thomas would think of as the soul, what the soul can do.
[00:54:37.160 --> 00:54:50.520] So the mind is the set of powers, including memory, locomotion, sensation, intellect, will, imagination, all those things together are what we call mind.
[00:54:50.520 --> 00:54:58.920] And some of those powers are closely linked to matter, like locomotion, sensation, memory, and emotion.
[00:54:58.920 --> 00:55:01.480] And some of them are not generated by matter.
[00:55:01.640 --> 00:55:04.360] They're immaterial powers like intellect and will.
[00:55:06.440 --> 00:55:11.880] Well, so let's look at some specific examples from your book that Christophe can address.
[00:55:11.880 --> 00:55:18.440] You talked about split brain patients who seem to know things they shouldn't know because the halves are separated.
[00:55:21.320 --> 00:55:21.960] You want to address that?
[00:55:22.200 --> 00:55:23.080] Sure, sure.
[00:55:25.000 --> 00:55:28.040] Split brain surgery, I've done it.
[00:55:28.040 --> 00:55:34.840] And what struck me as utterly remarkable was how normal these people are.
[00:55:34.840 --> 00:55:41.240] That is, that you cut the corpus callosum, and after the surgery, you cannot tell the difference.
[00:55:41.240 --> 00:55:48.280] I mean, I think if you put Christoph and I in a room with two people, one had split brain surgery, one didn't.
[00:55:48.280 --> 00:55:50.440] You make them wear hats so you can't see the scar.
[00:55:50.440 --> 00:55:51.800] I don't think we could tell the difference.
[00:55:51.960 --> 00:55:56.280] You really need specialized instruments and techniques to find them.
[00:55:56.280 --> 00:56:00.840] That's why Sperry won the Nobel Prize because he had to do all these experiments.
[00:56:01.160 --> 00:56:11.560] And what has been shown, I think, has been summarized very nicely by Yair Pinto in the Netherlands, that these patients have split perception and unified consciousness.
[00:56:11.560 --> 00:56:16.560] That is, that their consciousness really shows all the hallmarks of being a unified whole.
[00:56:14.920 --> 00:56:18.560] That certainly is their everyday experience.
[00:56:18.880 --> 00:56:32.880] And even in detailed testing, there's Alice Cronin, MIT neuroscientist, did some beautiful work back in the 1980s where she presented images to both visual fields.
[00:56:32.880 --> 00:56:45.840] And what she found was that there was certainly a perceptual split, the perceptual split in our ability to transmit verbal information across and geometrical information across.
[00:56:45.840 --> 00:56:48.160] But there was no conceptual split.
[00:56:48.160 --> 00:56:55.520] That is, concepts could be shared without any sign of splitting.
[00:56:55.520 --> 00:57:01.680] And I think this corresponds beautifully to the hylomorphic Aristotelian understanding of the soul.
[00:57:01.680 --> 00:57:09.840] That is, a knife can cut the material stuff, which is perception, but a knife can't cut the intellect because it's not a material thing.
[00:57:09.840 --> 00:57:12.160] It can't cut something that's not material.
[00:57:12.400 --> 00:57:17.040] And so split brain surgery, I think, very strongly supports the hylomorphic view.
[00:57:17.040 --> 00:57:18.080] Christophe?
[00:57:18.560 --> 00:57:22.000] I would beg to differ.
[00:57:22.000 --> 00:57:22.880] So it is true.
[00:57:22.880 --> 00:57:47.600] And in fact, when the operation was first performed by someone in Chicago in 1938, he already commented on the remarkable, unremarkableness of these patients, that you cut 200 million fibers roughly, which are the number of axons that cross the cortical hemispheres, and you find relatively little differences until you do the testing, and then it becomes very apparent.
[00:57:47.600 --> 00:57:54.400] Of course, in many patients, I don't know the patient you've operated on, Mike, in many patients, they don't have a complete commissurotomy.
[00:57:54.960 --> 00:57:59.440] You know, you sometimes leave the posterior or the anterior commissure.
[00:57:59.480 --> 00:58:12.600] And of course, you don't touch some of the deeper crossing fibers in the deeper part of the brain below the thalamus, like at the level of the superior colliculus, the tectum, which connects the left and the right brain.
[00:58:12.840 --> 00:58:19.960] The Pinto study shows that, yes, unless you're really extremely careful, it's very easy for the right side.
[00:58:19.960 --> 00:58:27.960] You see, so the standard interpretation is the following: the standard interpretation of these split brains is that there are two conscious entities.
[00:58:27.960 --> 00:58:36.680] There's the left hemisphere, typically the speaking one, and that tends to dominate certainly a few days after the operation.
[00:58:36.680 --> 00:58:43.960] If you silence that by injecting sodium amytal here, you can briefly anesthetize the left hemisphere, then the right hemisphere emerges.
[00:58:46.040 --> 00:58:52.440] It can now command the opposite side, the left side, and signal, though it typically doesn't speak.
[00:58:52.680 --> 00:58:55.960] It has a sort of non-linguistic but competent consciousness.
[00:58:55.960 --> 00:59:02.280] So the idea is that there are two consciousnesses in there, but if you talk to the patient, it's always the left hemisphere that answers.
[00:59:02.280 --> 00:59:28.600] And unless you're really very careful doing the experiment, and I don't think Pinto was, and we can go into the technical details, the left hemisphere will take cues from the right hemisphere, because it's like a Clever Hans phenomenon, except now it's within me: for example, if I do this with my right hand, that's a cue for my left hand that something is on the other side, but it doesn't quite know what it is.
[00:59:28.600 --> 00:59:36.760] But that cue can be picked up by the left arm, i.e., the right hemisphere.
[00:59:36.760 --> 00:59:43.160] So, it's very difficult to do these tests because they're not completely split.
[00:59:43.160 --> 00:59:47.120] It's not like you're taking a knife and you cut down all the way down to the brainstem.
[00:59:44.840 --> 00:59:47.600] Right.
[00:59:47.920 --> 00:59:58.320] Well, the study by Cronin at MIT involved three patients, and they had had a commissurotomy, not just a callosotomy.
[00:59:58.320 --> 01:00:03.680] So, they had had sectioning of the anterior commissure and of the hippocampal commissures.
[01:00:03.840 --> 01:00:08.240] The posterior commissure, I don't believe, as far as I know, is not routinely sectioned.
[01:00:08.640 --> 01:00:09.120] No, it's not.
[01:00:09.680 --> 01:00:11.920] And because it's a very dangerous place to be.
[01:00:12.320 --> 01:00:15.760] It's by the aqueduct, and you don't want to be cutting anything there.
[01:00:17.120 --> 01:00:22.720] So, this was really as complete a separation of the hemispheres as it gets.
[01:00:22.720 --> 01:00:28.400] And one explanation would be that there's this cueing that goes on from side to side.
[01:00:29.440 --> 01:00:32.640] However, you can observe these people, and they don't appear to be cueing.
[01:00:33.600 --> 01:00:37.200] The other explanation would be subcortical pathways.
[01:00:38.000 --> 01:00:45.280] There have been studies on the number of subcortical pathways, number of subcortical axons that are connecting the two hemispheres.
[01:00:45.280 --> 01:00:52.320] And the number that I've read is 1,500 axons connect the two hemispheres.
[01:00:52.560 --> 01:00:56.800] There are two, as you said, 200 million axons in the corpus callosum.
[01:00:56.800 --> 01:01:07.360] So when you cut the corpus callosum, you're cutting, I actually calculated it, 99.99925% of the connections between the hemispheres.
[01:01:07.360 --> 01:01:11.760] So these disconnections are almost complete.
[01:01:11.760 --> 01:01:16.000] The subcortical pathways are very, very small and have very, very few axons.
[01:01:16.040 --> 01:01:26.000] There would be enough to mediate, I don't know about concepts, but there certainly would be enough to mediate base emotions like surprise or fear or startle response.
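For reference, a quick check of the percentage quoted a few lines above, taking the two counts mentioned (roughly 1,500 subcortical axons against about 200 million callosal axons) at face value:

$$1 - \frac{1{,}500}{200{,}000{,}000} = 1 - 7.5\times 10^{-6} = 0.99999925 \approx 99.99925\%$$

so the quoted figure is arithmetically consistent with those two numbers.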
[01:01:26.240 --> 01:01:29.640] Yeah, but emotions also are hematological.
[01:01:29.520 --> 01:01:33.400] That is, you make adrenaline and you communicate with that.
[01:01:33.960 --> 01:01:37.640] I believe that Pinto estimated, I'm not sure where he got this number from.
[01:01:37.640 --> 01:01:47.160] He estimated that the rate of transfer in subcortical pathways in commissurotomy patients was no faster than one bit per second, was the number he used.
[01:01:47.160 --> 01:01:48.760] I don't know if you've looked at that.
[01:01:48.840 --> 01:01:50.760] No, how we yeah, I'm not sure how to do it.
[01:01:50.920 --> 01:02:00.760] One way to test this, by the way, would be to use, and this will come, this will happen, is to use something like Neuralink to try to connect your brain with my brain.
[01:02:00.760 --> 01:02:11.320] Because I think what this will show is that if we essentially do a reverse commissurotomy, we add, let's say, for the sake of argument, 200 million fibers from my brain to your brain.
[01:02:13.640 --> 01:02:16.040] And I've written about it in my previous book.
[01:02:16.040 --> 01:02:16.680] What will happen?
[01:02:17.080 --> 01:02:21.080] So, first, I just see through your eyes a little bit like augmented reality.
[01:02:21.080 --> 01:02:24.680] I'm still Christoph, but I see now what you're seeing, right?
[01:02:24.680 --> 01:02:27.960] Just like VR or AR, augmented reality.
[01:02:27.960 --> 01:02:30.600] But then at some point, there will be an abrupt transition.
[01:02:30.600 --> 01:02:36.120] If the number of fibers exceed a certain threshold, then your mind and my mind will merge.
[01:02:36.120 --> 01:02:37.240] Mike will disappear.
[01:02:37.240 --> 01:02:38.440] Christoph will disappear.
[01:02:38.440 --> 01:02:47.320] Instead, there will be this new consciousness that has now four hemispheres, that has two mouths and four arms and four legs.
[01:02:47.320 --> 01:02:49.960] There will be a single conscious mind.
[01:02:50.200 --> 01:02:53.640] So if that were done directly, would it be hylomorphic?
[01:02:53.720 --> 01:02:58.920] No, no, whether it's hylomorphic or not, Aristotle, of course, didn't think of this.
[01:02:59.240 --> 01:03:01.960] But experientially, it will be one mind.
[01:03:02.280 --> 01:03:04.600] Reverse commissurotomies have been done.
[01:03:06.120 --> 01:03:07.720] They've been done by nature.
[01:03:07.960 --> 01:03:11.880] There are conjoined twins who are conjoined at the brain.
[01:03:12.440 --> 01:03:22.160] Yeah, but the most interesting pair of them is Krista and Tatiana Hogan, who are from Canada.
[01:03:22.720 --> 01:03:24.640] And they share a thalamic bridge.
[01:03:24.800 --> 01:03:27.520] So there's a diencephalic bridge.
[01:03:27.520 --> 01:03:29.440] And they're absolutely fascinating.
[01:03:30.000 --> 01:03:32.640] They share vision in each other's eyes.
[01:03:32.880 --> 01:03:35.440] They share sensation on each other's skin.
[01:03:35.440 --> 01:03:39.440] They share a lot of emotions, but they do not share personality at all.
[01:03:39.440 --> 01:03:40.960] They're very different people.
[01:03:40.960 --> 01:03:43.280] They don't share, as far as I know, intellects.
[01:03:43.280 --> 01:03:50.160] That is, it's not as if one of them can study calculus, the other can study philosophy, and they both pass the test.
[01:03:51.040 --> 01:04:06.080] And the hylomorphic framework fits so beautifully here: they're sharing the material aspects of the brain, perceptions, but they don't share intellect.
[01:04:06.800 --> 01:04:07.920] They don't share the immaterial.
[01:04:09.120 --> 01:04:16.480] Because they don't, the bridge isn't between their left, you know, it's not like one has a right hemisphere and the other one.
[01:04:16.720 --> 01:04:22.640] They have two separate, they have two independent cortical, two sets of independent cortical hemispheres.
[01:04:22.640 --> 01:04:24.560] They share a thalamic bridge.
[01:04:24.560 --> 01:04:26.240] But to in that sense, they're separate.
[01:04:26.240 --> 01:04:34.640] Now, I would say if they join, particularly the prefrontal part, if it's joined in both, then they would have a single intellect.
[01:04:34.640 --> 01:04:35.440] But, you know.
[01:04:35.760 --> 01:04:36.080] Right.
[01:04:36.080 --> 01:04:39.360] Well, that's, I mean, that's an experimental prediction.
[01:04:39.520 --> 01:04:40.560] That's an experimental prediction.
[01:04:40.640 --> 01:04:44.880] They experience what the other one is experiencing, that first-person perspective.
[01:04:44.880 --> 01:04:45.600] Correct, correct.
[01:04:45.760 --> 01:04:47.680] There's no sign that they do that.
[01:04:49.360 --> 01:04:56.480] Except when they go to sleep, if you read carefully, I mean, they haven't been studied because their parents keep them away from the prying eye of scientists.
[01:04:56.800 --> 01:05:00.920] But when they go to sleep, A, they go to sleep at the same time as you would expect.
[01:04:59.920 --> 01:05:07.000] And the question is, once the ego defenses fall away during sleep, do they actually have, it's a fascinating question.
[01:05:07.160 --> 01:05:09.000] Do they both dream of the same thing?
[01:05:09.000 --> 01:05:09.240] Right.
[01:05:09.240 --> 01:05:10.840] Or do they have independent dreams?
[01:05:10.840 --> 01:05:11.240] Right.
[01:05:11.240 --> 01:05:12.440] Fascinating stuff.
[01:05:12.440 --> 01:05:14.280] They dream of electric sheep.
[01:05:15.560 --> 01:05:16.040] Conjoined sheep.
[01:05:16.520 --> 01:05:17.160] Conjoined sheep.
[01:05:17.160 --> 01:05:17.640] Conjoined sheep.
[01:05:17.720 --> 01:05:20.200] Okay, Michael, let's talk about vegetative states.
[01:05:20.440 --> 01:05:23.720] You know, my mom died after 10 years of brain tumors.
[01:05:23.720 --> 01:05:27.160] She eventually fell and went into a coma.
[01:05:27.160 --> 01:05:30.520] And after a couple of weeks with a feeding tube and so on, she eventually died.
[01:05:30.520 --> 01:05:35.720] But like so many people, as it turns out, I didn't know this at the time, you know, I'd hold her hand and I'd say, squeeze my hand if you hear me.
[01:05:35.720 --> 01:05:36.680] And she'd squeeze my hand.
[01:05:36.680 --> 01:05:38.920] I'm like, holy crap, she's in there.
[01:05:39.800 --> 01:05:43.480] What do you get out of the vegetative state examples in case studies?
[01:05:43.640 --> 01:05:50.440] Well, first of all, there's the issue of paradoxical lucidity, which is a very real thing.
[01:05:50.680 --> 01:05:52.360] I've seen several times.
[01:05:52.360 --> 01:05:55.160] It's quite well known to people who work in hospices.
[01:05:55.400 --> 01:05:58.360] That people in the later stages of Alzheimer's.
[01:05:59.320 --> 01:06:00.520] Terminal lucidity.
[01:06:00.520 --> 01:06:06.120] Yeah, although I work with a guy who's an ethicist at Stony Brook named Stephen Post, who's a wonderful guy.
[01:06:06.120 --> 01:06:07.320] And he's written a lot about that.
[01:06:07.320 --> 01:06:16.920] And he hates the word terminal lucidity because it's not always terminal, meaning you can see it in people who just, you know, it doesn't always mean you're dying.
[01:06:17.320 --> 01:06:29.240] But that's a fascinating phenomenon where people will have a period of time, commonly around 30 minutes, where they just kind of wake up and they're their old self again, and they slip back down into their state.
[01:06:29.480 --> 01:06:32.280] The vegetative state is kind of a different thing.
[01:06:32.280 --> 01:06:37.080] The vegetative state is the deepest level of unresponsiveness.
[01:06:37.080 --> 01:06:38.360] It's even deeper than coma.
[01:06:38.360 --> 01:06:39.880] It's not considered a coma.
[01:06:39.880 --> 01:06:42.920] It's just a step above brain death.
[01:06:43.320 --> 01:06:50.480] And the traditional thinking has been that people in vegetative state don't have mental states, that they're really just a vegetable.
[01:06:50.480 --> 01:06:53.280] They're just an organism without a mind.
[01:06:53.280 --> 01:07:06.480] And back about 20 years ago, Adrian Owen, a researcher at Cambridge, published a wonderful study called Detecting Awareness in the Persistent Vegetative State.
[01:07:06.480 --> 01:07:22.480] And he took a woman who was in her 30s who was diagnosed convincingly in a persistent vegetative state after a car accident and put her in an MRI machine and did functional MRI imaging and asked her questions like: imagine you're walking across a room or imagine you're playing tennis.
[01:07:22.480 --> 01:07:29.520] And her brain, what was left of her brain, it was just a very small amount of tissue, would light up in certain patterns.
[01:07:29.520 --> 01:07:37.920] And he took people who are normal and put them in the machine and asked them the same thing, and they had the same patterns of activation as if she was understanding what he was saying.
[01:07:37.920 --> 01:07:43.120] And then he put her back in the machine and he asked the same things, only he scrambled the words.
[01:07:43.120 --> 01:07:46.080] So the semantics was zero.
[01:07:46.080 --> 01:07:49.920] There was no meaning to it, and her brain didn't respond in the same way the other people.
[01:07:49.920 --> 01:07:53.200] So she was responding to the meaning of what he was asking.
[01:07:53.200 --> 01:08:07.120] And other people have done this research now, and about 40% of people who are diagnosed in persistent vegetative state show patterns of activation that suggests that they're quite aware of what's being asked of them and that they are thinking.
[01:08:07.120 --> 01:08:09.360] And we see this in patients in comas.
[01:08:09.360 --> 01:08:17.200] Nurses and ICUs know that don't say disturbing things in the room of somebody who's in a coma because you'll often see their blood pressure go up.
[01:08:17.200 --> 01:08:23.760] I've had people recover from comas who told me that they could hear what people were talking about in the room while they were in a coma.
[01:08:24.320 --> 01:08:35.000] And in my view, this emphasizes the notion that there are aspects of the mind that, as Penfield said, that the brain does not completely explain the mind.
[01:08:35.000 --> 01:08:39.560] There are aspects of the mind that transcend what the brain can explain.
[01:08:40.920 --> 01:08:41.800] I totally disagree.
[01:08:41.800 --> 01:08:43.640] I work with these patients.
[01:08:43.640 --> 01:08:52.760] And I can tell you those patients, of course, they have period when, so now I'm talking about so-called vegetative state or behavioral unresponsive syndrome patients, okay?
[01:08:53.080 --> 01:08:55.560] Typically, they're above coma in a sense.
[01:08:55.560 --> 01:08:58.760] Coma is often defined not exclusively as eyes closed.
[01:08:58.760 --> 01:09:01.320] Here you can have eyes open.
[01:09:01.640 --> 01:09:05.720] When they are, very often, of course, they're sedated for various clinical reasons.
[01:09:05.720 --> 01:09:09.400] So when you remove the sedation, yes, I totally agree.
[01:09:09.400 --> 01:09:22.440] This recent paper in the New England Journal of Medicine, which came out about half a year ago and was covered in the New York Times, showed that 25% of these patients can voluntarily modulate their pre-motor and motor responses.
[01:09:22.440 --> 01:09:27.160] In other words, so these are patients that have Glasgow coma scale three or four minimal.
[01:09:27.160 --> 01:09:29.560] So you ask them, sir, sir, can you hear me?
[01:09:29.560 --> 01:09:30.760] Can you move your eyes?
[01:09:31.000 --> 01:09:33.320] You pinch them very hard on the fingernails.
[01:09:33.320 --> 01:09:35.880] They don't even orient toward the source of the pain.
[01:09:35.880 --> 01:09:41.240] So clinically, they define unresponsive, which traditionally meant unconscious.
[01:09:41.240 --> 01:09:46.520] But now you ask them, for instance, for 30 seconds, really clench your hands.
[01:09:46.520 --> 01:09:49.080] And they can't do it, but they imagine clenching their hand.
[01:09:49.080 --> 01:09:51.800] And 30 seconds relax, 30 seconds, clench your hand.
[01:09:51.800 --> 01:09:53.560] You can clearly see the brain.
[01:09:53.560 --> 01:10:02.680] I mean, in 25% of the cases, you can clearly see, using either EEG or fMRI, their pre-motor activity or motor activity being modulated.
[01:10:02.680 --> 01:10:05.560] So that tells me these people are not unconscious.
[01:10:05.560 --> 01:10:12.040] Yes, they're behaviorally unresponsive because of the stroke or the hemorrhage or the TBI, whatever, but they're not unconscious.
[01:10:12.040 --> 01:10:12.920] The brain isn't gone.
[01:10:12.920 --> 01:10:16.160] Part of the brain is actually online.
[01:10:16.480 --> 01:10:23.120] The patient is sufficiently conscious to a certain extent, although of course we don't know what they're conscious about.
[01:10:23.120 --> 01:10:25.360] And they're using this way to communicate.
[01:10:25.360 --> 01:10:29.120] So it's perfectly explainable by conventional neuroscience.
[01:10:29.280 --> 01:10:38.960] In fact, people are now tracking this activity, which you can try to pinpoint using, you know, modern tools, you know, high-density EEG or intra-cranial recordings, etc.
[01:10:39.120 --> 01:10:47.680] So I think it's a beautiful case where there's this nice correlation between brain activity and behavior, including consciousness.
[01:10:47.680 --> 01:10:53.200] It doesn't require any thinking about any extramaterial responses.
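As a rough illustration of the command-following paradigm described here (alternating "imagine clenching your hand" and "rest" blocks, with motor-region activity read out by EEG or fMRI), here is a minimal sketch on synthetic data. The sampling rate, block lengths, and the use of mu/beta-band power as the marker are assumptions for illustration, not details taken from the studies cited in the discussion.

```python
import numpy as np

# Minimal sketch, on synthetic data, of detecting voluntary motor-imagery
# modulation in a block design: alternating 30 s "imagine clenching" and
# 30 s "rest" blocks, compared via mu/beta-band (8-30 Hz) EEG power.
# All parameter choices here are illustrative assumptions.

FS = 250          # sampling rate in Hz (assumed)
BLOCK_S = 30      # seconds per block
N_BLOCKS = 10     # alternating imagine / rest

def band_power(signal, fs, lo=8.0, hi=30.0):
    """Mean periodogram power of the signal between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

rng = np.random.default_rng(0)
powers, labels = [], []
for i in range(N_BLOCKS):
    t = np.arange(FS * BLOCK_S) / FS
    imagine = (i % 2 == 0)
    # Motor imagery suppresses the sensorimotor mu rhythm (~10 Hz) over
    # motor cortex, so the synthetic "imagine" blocks get a weaker 10 Hz
    # component than the "rest" blocks.
    mu_amp = 0.5 if imagine else 2.0
    block = mu_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)
    powers.append(band_power(block, FS))
    labels.append("imagine" if imagine else "rest")

powers = np.array(powers)
imagine_mean = powers[[l == "imagine" for l in labels]].mean()
rest_mean = powers[[l == "rest" for l in labels]].mean()
print(f"mean 8-30 Hz power, imagine blocks: {imagine_mean:.3f}")
print(f"mean 8-30 Hz power, rest blocks:    {rest_mean:.3f}")
# A consistent imagine-vs-rest difference across blocks is the kind of
# command-following signal reported in a subset of behaviorally
# unresponsive patients.
```

In real recordings the effect is far subtler and would be assessed statistically across many repetitions and electrodes rather than by eyeballing two means.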
[01:10:55.760 --> 01:10:56.480] Go ahead, Mike.
[01:10:56.480 --> 01:10:57.040] I'm sorry.
[01:10:57.200 --> 01:10:57.680] No, go ahead.
[01:10:58.800 --> 01:11:25.360] Yeah, the basis for my kind of embrace of hylomorphic understanding of the human soul is that there does seem to be across many, many different sectors of neuroscience, this recurrent observation that there is a disconnect between what the brain does and what the mind does, particularly when we talk about intellect and will.
[01:11:26.000 --> 01:11:28.640] And I think that holds up.
[01:11:28.640 --> 01:11:30.400] It certainly held up in Penfield's work.
[01:11:30.400 --> 01:11:33.120] It holds up in split brain research.
[01:11:33.680 --> 01:11:36.240] I think you can see it in paradoxical lucidity.
[01:11:36.240 --> 01:11:39.840] You can see it in responsiveness and persistent vegetative state.
[01:11:39.840 --> 01:11:46.080] There are aspects of the mind that I don't believe are entirely accounted for by aspects of the brain.
[01:11:46.080 --> 01:11:47.920] There is a disconnect there.
[01:11:48.400 --> 01:11:52.960] You realize, when you say "not entirely," this is really a God-of-the-gaps argument.
[01:11:52.960 --> 01:11:55.280] Right now, we still have this little gap.
[01:11:55.280 --> 01:12:04.920] And then the next time somebody presents you, oh, yeah, I can evoke a rational thought about mathematics in a mathematician, then suddenly your argument collapses.
[01:12:05.240 --> 01:12:08.040] Yeah, but the gap is the Grand Canyon.
[01:12:08.040 --> 01:12:09.080] It's a big gap.
[01:12:09.400 --> 01:12:17.240] No, the gap concerns conscious experience, howsoever construed, including seeing or hearing or feeling pain.
[01:12:17.480 --> 01:12:18.840] It's not rational thought.
[01:12:19.480 --> 01:12:21.400] It's any experience.
[01:12:21.400 --> 01:12:31.000] Penfield did, by just this back-of-the-envelope estimate, 1.1 million brain stimulations and never evoked abstract thought.
[01:12:31.240 --> 01:12:31.560] Never evoked.
[01:12:31.800 --> 01:12:36.440] Okay, that was 100 years ago, using old technology that we don't use anymore in the OR.
[01:12:36.520 --> 01:12:38.120] Oh, different electrodes.
[01:12:38.760 --> 01:12:40.760] We used, you know, different current.
[01:12:40.760 --> 01:12:43.800] For example, Holler, he typically didn't stimulate long enough.
[01:12:43.800 --> 01:12:54.200] It may well be that to recruit longer, you know, distributed function like abstract thought, you might, you know, stimulate for five seconds or for ten seconds.
[01:12:54.200 --> 01:12:55.160] You know, right, right.
[01:12:56.280 --> 01:12:59.160] But this is all stated in the subjunctive tense.
[01:12:59.160 --> 01:13:02.280] They are, well, maybe if we do this, maybe if we do that.
[01:13:02.280 --> 01:13:10.680] The actual experimental evidence does not show that intellectual, that abstract intellectual thought is evoked by brain stimulation.
[01:13:10.680 --> 01:13:12.280] That's the actual evidence.
[01:13:12.920 --> 01:13:14.280] That's not true.
[01:13:14.280 --> 01:13:15.480] That is simply not true.
[01:13:15.480 --> 01:13:17.800] You can get patients that say things, I read:
[01:13:17.800 --> 01:13:18.520] "You want to count."
[01:13:18.520 --> 01:13:20.120] Now you could say, well, that sounds abstract.
[01:13:20.120 --> 01:13:25.960] The idea of leaving, you know, the abstract, the idea of leaving, that to me seems very abstract.
[01:13:25.960 --> 01:13:28.440] People can get out-of-body experiences.
[01:13:28.440 --> 01:13:31.480] They can see themselves or they can experience themselves.
[01:13:32.920 --> 01:13:34.520] That's a perceptual issue.
[01:13:34.520 --> 01:13:35.400] That's a perceptual issue.
[01:13:35.480 --> 01:13:36.120] No one questioned.
[01:13:36.520 --> 01:13:38.280] No, but they can see their own body.
[01:13:38.280 --> 01:13:49.760] This is a classic NDE part, right, that many people take to mean that they're floating, then they're in heaven; there's a particular site in the posterior precuneus, right?
[01:13:49.760 --> 01:13:56.640] If you stimulate it in most people there, they have these weird dissociations between the self and their body.
[01:13:56.640 --> 01:14:03.280] So they can experience the self as an abstract entity, not their body, because they claim they can see their body.
[01:14:03.280 --> 01:14:04.000] And I believe them.
[01:14:04.000 --> 01:14:08.560] They can see their body, but now they apparently have a different point of view.
[01:14:08.560 --> 01:14:12.640] Well, if that isn't abstract, they abstract it from their body.
[01:14:12.640 --> 01:14:14.160] That's pretty abstract to me.
[01:14:14.160 --> 01:14:17.520] Well, vision is not an abstract neurological function.
[01:14:17.520 --> 01:14:20.000] Vision is a concrete neurological function.
[01:14:20.240 --> 01:14:26.560] Do you believe when they have the out-of-body experience that they actually see their body from outside of their body?
[01:14:27.200 --> 01:14:28.480] No, I have no evidence.
[01:14:28.480 --> 01:14:38.880] In fact, people in Amsterdam, in Holland, right, did this experiment where they put hidden numbers on top of the cabinets, and, you know, they did not read those hidden numbers.
[01:14:38.880 --> 01:14:42.960] No, I have no evidence whatsoever that they can, but they certainly have the experience.
[01:14:42.960 --> 01:14:44.000] I don't doubt for once.
[01:14:44.000 --> 01:14:45.040] I've had an NDE.
[01:14:45.040 --> 01:14:51.040] I had a complete NDE tunnel, you know, dread, terror, ecstasy, all of that.
[01:14:51.040 --> 01:15:00.160] But I don't believe my whatever, whoever is Christoph, so that was actually materially floating in Cartesian space above my body.
[01:15:00.160 --> 01:15:00.560] No.
[01:15:01.600 --> 01:15:04.320] Maybe explain for the listeners, Christoph, what you did to induce that.
[01:15:04.640 --> 01:15:09.520] Well, no, no, that's right, right.
[01:15:10.800 --> 01:15:16.320] That's irrelevant to the point, which is we need, I totally agree with Mike.
[01:15:16.640 --> 01:15:41.320] I agree with many of the things you're saying, by the way, that we need to take near-death experience seriously, but they are deeply felt experiences that ultimately need to be explained by science, but I don't take them at all to be evidence that my soul is sort of this free-floating thing that can travel outside space, you know, outside the normal boundaries of space-time.
[01:15:41.320 --> 01:15:44.120] Well, there are, yeah, there's all that research, like Dr.
[01:15:44.120 --> 01:15:55.080] James Whinnery at the United States Air Force accelerating pilots in a centrifuge, and they have an out-of-body experience, little, you know, dreamlets in the tunnel and all that stuff.
[01:15:55.080 --> 01:15:58.600] And his explanation was just oxygen deprivation.
[01:15:58.600 --> 01:15:59.240] Yeah.
[01:15:59.880 --> 01:16:15.880] Well, I think what you have to explain when you look, I mean, if we want to get into near-death experiences, first of all, I don't make a claim that every near-death experience is an actual experience of the soul leaving the body and stuff, because it's a huge thing.
[01:16:15.880 --> 01:16:21.160] I mean, there's been estimated that 9 million Americans have had some kind of near-death experience.
[01:16:21.400 --> 01:16:24.040] And some of them undoubtedly are just hallucinations.
[01:16:24.680 --> 01:16:25.640] Some are lies.
[01:16:25.640 --> 01:16:26.840] Some people just make it up.
[01:16:26.840 --> 01:16:27.880] Who knows?
[01:16:27.880 --> 01:16:33.720] But there are experiences that are awfully difficult to explain naturalistically.
[01:16:34.360 --> 01:16:41.560] But I think there are four aspects of near-death experiences that really do challenge naturalistic explanations.
[01:16:41.560 --> 01:16:46.200] One is that the experiences have an extremely clear content.
[01:16:46.200 --> 01:16:48.920] It's clear, it's highly organized.
[01:16:48.920 --> 01:16:58.200] There's often a life review, and there's a conscious decision to return to the body, which you don't get from hypoxia or hypercarbia, and you don't get from temporal lobe seizures.
[01:16:58.200 --> 01:16:59.720] None of that will cause that.
[01:16:59.720 --> 01:17:05.000] The second thing is experiences that are corroborated externally.
[01:17:05.000 --> 01:17:12.520] That is, that people see things that are going on in the room that they could not have seen because their brain was not working when these things happened.
[01:17:12.520 --> 01:17:17.280] The third is the corroboration with seeing only dead people at the other end of the tunnel.
[01:17:17.600 --> 01:17:25.680] I'm unaware of any near-death experience in which a person has encountered, on the other side, someone who's still living.
[01:17:26.160 --> 01:17:32.400] Even if they didn't know the person was dead, it's only dead people that they encounter, which is an odd hallucination.
[01:17:32.400 --> 01:17:35.600] And the fourth thing is, of course, that these are transformative experiences.
[01:17:35.600 --> 01:17:38.560] People's lives are radically changed by these.
[01:17:38.560 --> 01:17:51.600] And things like, you know, temporal lobe seizures or endorphins or ketamine, things like that, they may explain a tiny sliver of the experience, but you have to explain the whole thing.
[01:17:51.600 --> 01:17:54.800] And there are hundreds of people who've had the whole thing.
[01:17:55.600 --> 01:17:57.440] Pam Reynolds is a great example.
[01:17:57.440 --> 01:18:03.200] I'm sure you've heard of the case of Pam Reynolds, who had an aneurysm operated on in 1991.
[01:18:03.520 --> 01:18:04.480] Fully monitored.
[01:18:04.480 --> 01:18:11.280] Her brain was fully monitored, had a dramatic near-death experience under ideal experimental conditions.
[01:18:13.120 --> 01:18:23.920] Look, there are thousands of people under ayahuasca, okay, a powerful serotonergic hallucinogenic, that experience, you know, that have all these encounters with strange beings, with suppliant gods.
[01:18:23.920 --> 01:18:29.600] This happens to be in a South American, not in a Christian context, but they have, you know, very powerful experiences.
[01:18:29.840 --> 01:18:34.400] Once again, I don't doubt, and I never have doubted, the reality of these experiences.
[01:18:35.040 --> 01:18:37.040] As you say, Mike, transformative.
[01:18:37.040 --> 01:18:39.280] People can radically change their life.
[01:18:39.280 --> 01:18:43.760] You know, the classical one is Saul turning into St. Paul, right?
[01:18:44.560 --> 01:18:46.320] Following such a transformative experience.
[01:18:46.320 --> 01:18:47.440] So, I don't doubt that.
[01:18:47.440 --> 01:18:51.040] I'm a little bit more worried about the metaphysical implications.
[01:18:51.040 --> 01:18:51.680] I understand.
[01:18:51.680 --> 01:19:02.040] I mean, following William James, right, he talks about these in his book, The Varieties of Religious Experience, where he says people bring back this noetic quality, right?
[01:19:02.040 --> 01:19:08.920] They have this particular noetic quality that never leaves them, the certainty that they have experienced something else.
[01:19:08.920 --> 01:19:13.000] And again, I'm very sympathetic to that because I also had such an experience.
[01:19:13.000 --> 01:19:17.400] However, I still think that may all be true.
[01:19:17.400 --> 01:19:19.800] You might access a different metaphysical realm.
[01:19:19.800 --> 01:19:37.720] But what is true that as far as we can tell, every conscious experience, no matter how abstract, no matter how boring, whether it's tasting a slice of old leftover pizza or thinking something ethereal about human justice or about general relativity, we know has a correlate in the brain.
[01:19:38.840 --> 01:19:42.120] There is this clear substrate dependency.
[01:19:42.120 --> 01:19:51.480] And we can't get around that by saying, well, most everything is in the substrate, but then there is this one special thing that's done somewhere else, in a realm I don't have access to.
[01:19:51.480 --> 01:19:56.360] And, by the way, other things like ChatGPT also have that aspect.
[01:19:56.360 --> 01:20:05.960] I'm just going to avoid dealing with them because I'm saying, well, by definition, I don't deal with them because they're not alive, which seems a rather arbitrary distinction.
[01:20:05.960 --> 01:20:09.160] I know Aristotle made that distinction, but he lived in a radically different world.
[01:21:08.000 --> 01:21:10.400] Where they had almost no machines.
[01:21:10.720 --> 01:21:29.760] So, Michael, yeah, a lot of these OBEs and NDEs are based on personal accounts that people give, which, when I read Oliver Sacks's books, including his memoir where he talks about being under super stress during medical school and taking all kinds of things to stay awake and whatnot, having fantastic hallucinations.
[01:21:29.760 --> 01:21:38.960] And pretty much every chapter in all of his books is some weird thing that somebody experiences, like the man who mistook his wife for a hat or whatever it was.
[01:21:38.960 --> 01:21:40.160] You know, there's a lot of this.
[01:21:40.160 --> 01:21:44.800] And his conclusion from all this, I mean, he knows what you know about neuroscience and all this stuff.
[01:21:44.800 --> 01:21:53.920] He wrote a long essay in the Atlantic when Eben Alexander's book, Proof of Heaven, came out, in which Eben recounted his near-death experience.
[01:21:53.920 --> 01:21:58.880] And, you know, Oliver basically just said all this can be explained by neuroscience.
[01:21:58.880 --> 01:22:02.400] It seems like it comes down to a metaphysical decision.
[01:22:02.400 --> 01:22:13.200] I'm going to take these examples that are kind of spooky and weird and interpret them through this metaphysical model rather than the materialist model, something like that.
[01:22:13.520 --> 01:22:18.160] Well, I do consider subjective experience to be a kind of data.
[01:22:19.120 --> 01:22:26.240] It's different from objective data, but my goodness gracious, there are all kinds of sciences that are based on what people feel and think and so on.
[01:22:26.240 --> 01:22:28.240] So it is data.
[01:22:28.240 --> 01:22:30.680] But that's not really so much what I'm talking about.
[01:22:30.680 --> 01:22:37.960] I'm talking about corroborated experiences or experiences that you really can't explain through naturalistic means.
[01:22:38.200 --> 01:22:50.280] There are hundreds of very well-documented cases where people, during the time when their brain is not working, have detailed knowledge of what's happening in the room around them.
[01:22:50.680 --> 01:22:54.360] And Pam Reynolds is a great example of that.
[01:22:55.000 --> 01:22:59.320] That's a person who was, her brain was carefully monitored.
[01:22:59.320 --> 01:23:07.800] She had earphones that were giving her 100 decibel clicks every second in her ear, so she couldn't have heard conversations and so on.
[01:23:08.120 --> 01:23:30.680] And she had the experience of leaving her body, watching the operation, describing the surgical instruments, describing what the doctors were saying, describing who came in and left the room, describing what music was playing in the room during this, at a time when her brain was drained of blood and she was just absolutely as brain dead as you get, reversibly.
[01:23:31.080 --> 01:23:33.320] And you have to explain stuff like that.
[01:23:33.320 --> 01:23:34.280] And it can't just be cash.
[01:23:34.440 --> 01:23:36.360] I'm trying to find such an anecdote.
[01:23:36.360 --> 01:23:38.600] People try to replicate such anecdotes.
[01:23:38.600 --> 01:23:41.960] Like in the Parnia study, right, that came out last year in New York.
[01:23:41.960 --> 01:23:43.640] They tried to replicate some of the things.
[01:23:44.840 --> 01:23:48.040] They try to put things onto cabinets, et cetera.
[01:23:48.120 --> 01:23:54.680] So under truly objective circumstances, where people are trying to do this, you could never find any evidence for that.
[01:23:54.680 --> 01:23:59.080] And Carl Sagan, I see Carl Sagan hanging there behind Michael.
[01:23:59.080 --> 01:24:01.400] His dictum was: extraordinary claims,
[01:24:01.400 --> 01:24:06.920] and what you're making is an extraordinary claim, require extraordinary evidence.
[01:24:06.920 --> 01:24:13.880] And so that means I want to be able to go routinely in the OR, try to see under which conditions these occur, and actually test.
[01:24:13.880 --> 01:24:15.000] Do they actually do these things?
[01:24:15.440 --> 01:24:17.920] Well, so for instance, one... Wait, let me just finish.
[01:24:18.560 --> 01:24:22.800] These patients almost never have a high-density EEG montage, right?
[01:24:22.800 --> 01:24:28.880] It's very difficult to ascertain whether the brain was really dead, or is it just two leads and you don't get prefrontal activity.
[01:24:28.880 --> 01:24:34.880] That's very different from saying the brain was entirely flatlined, in particular the deeper structures like the hippocampus, right?
[01:24:34.880 --> 01:24:38.880] So the ER or the ICU is not really equipped to deal with that.
[01:24:38.880 --> 01:24:49.120] And when people do try to do this, like there was a study last year in PNAS where you had four people on a ventilator, they had a high-density EEG montage, 64 channels.
[01:24:49.120 --> 01:24:51.920] Two of them did indeed show something.
[01:24:51.920 --> 01:25:00.640] So in all four patients, the loved ones decided to withdraw critical life support, and so they were going to die.
[01:25:00.640 --> 01:25:08.480] So they pulled the ventilator while all four patients, this was in Michigan, were on EEG monitoring.
[01:25:08.480 --> 01:25:11.120] And then what you can see in all four, it just went down.
[01:25:11.440 --> 01:25:13.760] In two of the four, the EEG went down.
[01:25:13.760 --> 01:25:21.760] In the other two, they had these paradoxical episodes of high-frequency activity for two or three minutes.
[01:25:23.680 --> 01:25:31.600] Where technically the heart had already stopped, and the EEG power had decreased, but then they had these spontaneous episodes.
[01:25:31.760 --> 01:25:35.840] But within seven minutes or so, everything went flatline.
[01:25:35.840 --> 01:25:45.040] And everything we know says that once your brain is truly flatlined throughout, including the deep structures, yes, there is no more conscious experience.
[01:25:45.840 --> 01:25:55.920] How would that kind of brain activity account for the ability to read people's name tags and to know who's in the room? I would like to see that tested.
[01:25:56.320 --> 01:25:57.120] I can't do that.
[01:25:57.120 --> 01:26:00.120] If that was the case, I would like to see that tested, not retroactively.
[01:25:59.920 --> 01:26:01.560] No, no, no, no, no, no.
[01:26:02.600 --> 01:26:09.800] Pam Reynolds was, in a very real sense, a prospective study, meaning they didn't expect her to have brain death.
[01:26:10.280 --> 01:26:12.280] I'm sorry, to have a near-death experience.
[01:26:12.280 --> 01:26:13.960] But it was prospective, essentially.
[01:26:13.960 --> 01:26:17.480] She was as rigorously monitored as a human being can be monitored.
[01:26:17.480 --> 01:26:31.160] They drained the blood out of her brain in order to fix the aneurysm, and there was 30 minutes of no blood flow whatsoever to her brain with electrical silence in her brainstem and cortex.
[01:26:31.160 --> 01:26:34.680] And she watched the operation from the ceiling of the room.
[01:26:35.000 --> 01:26:38.440] And told them told them details.
[01:26:38.440 --> 01:26:42.120] I mean, I know Bob Spetzler, I know the surgeon who did it.
[01:26:42.120 --> 01:26:46.680] And he said, hey, I don't doubt that this is what she said.
[01:26:46.680 --> 01:26:51.960] However, there's simply no way in an uncontrolled study to rule out post-experimental cueing.
[01:26:52.440 --> 01:26:55.400] Look, she woke up an hour later, two hours later.
[01:26:55.400 --> 01:26:56.840] Maybe she imagined this.
[01:26:56.840 --> 01:27:03.560] Maybe, you know, again, this is why we need to do science, which we need to do in an unbiased way to try to test.
[01:27:04.360 --> 01:27:07.320] But a million people die each year in the U.S.
[01:27:08.360 --> 01:27:14.200] So we should be able to collect data, and people are trying, like Sam Parnia.
[01:27:15.160 --> 01:27:18.040] Yeah, so we should have a little bit more evidence.
[01:27:18.360 --> 01:27:20.520] Yeah, there's this one patient and this one.
[01:27:21.240 --> 01:27:24.280] This is not what Carl Sagan calls extraordinary evidence.
[01:27:25.560 --> 01:27:31.560] I think, I mean, Sagan, I think, is exactly right, that extraordinary claims require extraordinary evidence.
[01:27:31.560 --> 01:27:38.360] The thing is that the extraordinary claim here is that all of these people, what amounts to millions of people, are wrong.
[01:27:38.680 --> 01:27:40.600] That's the extraordinary claim.
[01:27:40.600 --> 01:27:51.600] And the extraordinary claim in naturalist or materialist neuroscience is that mental predicates can arise from brain predicates.
[01:27:51.840 --> 01:27:58.960] That is, the extraordinary claim is that concepts and subjective experience arise from mere matter.
[01:27:59.120 --> 01:28:00.800] You see it every day in a scanner.
[01:28:00.800 --> 01:28:06.160] If you put a person in a scanner and you ask them to think abstract thoughts, they'll see activity in particular parts of their brain.
[01:28:06.640 --> 01:28:08.320] What you're seeing is correlation.
[01:28:08.320 --> 01:28:09.520] We're talking about causation.
[01:28:09.520 --> 01:28:10.720] It's a different thing.
[01:28:11.200 --> 01:28:12.640] No one questions correlation.
[01:28:12.640 --> 01:28:13.840] No one questions that.
[01:28:13.840 --> 01:28:17.040] There's no question that correlation goes on.
[01:28:17.040 --> 01:28:25.360] Causation is a completely different thing, and no one has the faintest clue as to how that could even happen, let alone.
[01:28:25.520 --> 01:28:26.320] Of course we have to.
[01:28:26.960 --> 01:28:32.160] I mean, you might not believe the books, but there are lots of theories written about how intelligence arises.
[01:28:32.320 --> 01:28:35.680] Read global neuronal workspace theory, about how intelligence arises in the brain.
[01:28:36.400 --> 01:28:40.400] And the number of theories is the evidence that no one understands it.
[01:28:40.400 --> 01:28:42.640] Because if you understand it, you've got one theory.
[01:28:43.040 --> 01:28:44.800] The point is people are groping.
[01:28:45.280 --> 01:28:49.040] People are groping for theories because they don't have expertise.
[01:28:49.360 --> 01:28:55.680] The study of intelligence, natural intelligence and artificial intelligence, is in its infancy.
[01:28:55.680 --> 01:28:58.320] It's really only, you know, 100 years old.
[01:28:58.320 --> 01:29:03.920] Yeah, if you compare us against a mature science like physics, where there is one theory, yes, we're not at that stage yet.
[01:29:04.240 --> 01:29:10.160] So do you believe that in split brain surgery that there are two conscious human beings inside the brain?
[01:29:10.320 --> 01:29:16.640] In a complete, yes, in a complete split-brain commissurotomy, there will be two conscious beings, one that can talk, the other one that can't.
[01:29:17.120 --> 01:29:18.720] That's a pretty extraordinary claim.
[01:29:19.920 --> 01:29:20.880] Why don't people feel that?
[01:29:21.040 --> 01:29:23.360] And a Nobel Prize was given out for it, indeed.
[01:29:24.720 --> 01:29:28.400] Why don't people feel, I mean, these people feel completely normal.
[01:29:28.400 --> 01:29:28.960] Completely normal.
[01:29:29.120 --> 01:29:32.760] Because you talk to, when you say people, you talk to the left hemisphere.
[01:29:32.920 --> 01:29:39.720] If you silence the left hemisphere using sodium amytal, then what remains is someone who isn't very eloquent anymore.
[01:29:39.720 --> 01:29:41.320] They can still say yes, no.
[01:29:41.320 --> 01:29:42.520] They can still sing.
[01:29:42.520 --> 01:29:49.560] They can still, of course, express the entire gamut of human emotions, but they can't talk with the same fluency anymore.
[01:29:49.560 --> 01:29:51.400] So you think that's a different person?
[01:29:52.040 --> 01:29:53.400] It's a different person.
[01:29:54.680 --> 01:29:57.640] And partly, they have different memories.
[01:29:57.640 --> 01:29:58.040] Yes.
[01:29:58.600 --> 01:30:03.000] In patients who have cerebral palsy, it's very common.
[01:30:03.000 --> 01:30:10.040] In fact, one of the biggest problems they have is when they try to move, for example, if they try to move their arm, their leg will move also.
[01:30:10.040 --> 01:30:16.280] That is, there's a lack of integration of neurological function in these patients.
[01:30:16.280 --> 01:30:19.640] Do you believe these represent separate centers of consciousness in these patients?
[01:30:20.280 --> 01:30:22.120] I simply don't know enough about them.
[01:30:22.600 --> 01:30:24.920] In the corpus callosum, it's very clear.
[01:30:25.160 --> 01:30:31.960] They have the callosal fibers cut, so now they have two essentially, not totally but relatively, independent cortical hemispheres.
[01:30:31.960 --> 01:30:33.800] I don't know about palsy; I simply don't know.
[01:30:33.800 --> 01:30:42.120] So if these people were to, say, for example, sign their name, would the right hemisphere sign a different name than the left?
[01:30:42.280 --> 01:30:48.680] I mean, I don't even understand the notion of two separate consciousnesses in one human being.
[01:30:49.240 --> 01:30:50.840] That doesn't even make sense.
[01:30:51.480 --> 01:30:52.760] Why is this so difficult to understand?
[01:30:53.000 --> 01:30:54.920] Two conscious entities in one skull.
[01:30:55.080 --> 01:30:55.720] I don't see.
[01:30:55.720 --> 01:31:00.120] In fact, there's a novel written by a philosopher about it out of Toronto.
[01:31:00.440 --> 01:31:02.280] It's very funny, this idea.
[01:31:02.520 --> 01:31:08.920] It involves a jury trial, where the right hemisphere committed the crime, no, sorry, the left hemisphere, but the right one is innocent of the crime.
[01:31:09.960 --> 01:31:11.720] It's a novel because it's fiction.
[01:31:12.600 --> 01:31:13.320] No, it's not fiction.
[01:31:13.480 --> 01:31:15.360] It's what the okay, that's all right, right.
[01:31:15.520 --> 01:31:19.360] Yeah, a lot of this depends on what you mean by two separate consciousnesses.
[01:31:14.920 --> 01:31:21.840] I mean, I feel I am multiple selves.
[01:31:21.920 --> 01:31:29.280] There's the future Shermer, I'm going to be different tomorrow than I am today, and I operate on that, and I set up conditions now for myself.
[01:31:29.520 --> 01:31:29.920] No, I know.
[01:31:30.400 --> 01:31:43.040] The claim here is that this other mind, if I'm split-brained, that what you're really talking to as Christoph is the left hemisphere, and the right hemisphere may as well be on the dark side of the moon with respect to me.
[01:31:43.120 --> 01:31:48.720] I would not be able to access the emotions and the experiential state of the right hemisphere.
[01:31:48.720 --> 01:31:49.680] That's a claim.
[01:31:50.000 --> 01:31:50.560] Okay.
[01:31:51.520 --> 01:31:55.200] So, Dan Dennett had a nice phrase called burden tennis.
[01:31:55.200 --> 01:31:57.200] You know, who has the burden of proof, right?
[01:31:57.520 --> 01:31:58.720] The burden of proof is on you.
[01:31:58.720 --> 01:32:00.320] Whack, no, the burden of proof is on you.
[01:32:00.320 --> 01:32:01.120] Whack, no, okay.
[01:32:01.120 --> 01:32:04.000] So, you know, the extraordinary claims and all that stuff.
[01:32:04.000 --> 01:32:07.200] Okay, so it depends on where your starting point is.
[01:32:07.200 --> 01:32:14.480] I guess in terms of materialism, where the brain is everything, that's the unusual belief in the world.
[01:32:14.480 --> 01:32:16.800] Most people are dualists, they're natural-born dualists.
[01:32:17.040 --> 01:32:20.000] That's what people have thought throughout history.
[01:32:20.000 --> 01:32:23.120] But, okay, let's turn to the next subject: free will.
[01:32:23.840 --> 01:32:30.880] So, the three positions are determinism, compatibilism, and well, I guess libertarian free will, as it's called.
[01:32:30.880 --> 01:32:36.640] Michael, I gather you would call yourself a libertarian free will defender, right?
[01:32:37.760 --> 01:32:40.400] Yeah, my view of free will, right?
[01:32:40.400 --> 01:32:41.200] I'm a libertarian.
[01:32:41.200 --> 01:32:49.120] I think that free will is what Aristotle would call a rational appetite, meaning, as opposed to sensitive appetite.
[01:32:49.120 --> 01:32:52.240] Sensitive appetite is from the brain, from the body.
[01:32:52.240 --> 01:32:56.640] You're hungry, you're tired, you do this or that, you have lust, whatever.
[01:32:56.960 --> 01:33:01.880] Rational appetite is where you have a desire that's based on reason.
[01:32:59.840 --> 01:33:06.280] For example, you read the Ten Commandments, it makes sense to you, and you say, I'm going to follow these things.
[01:33:06.600 --> 01:33:08.200] That's what free will is.
[01:33:08.440 --> 01:33:10.600] And I think free will is real.
[01:33:10.760 --> 01:33:21.560] First of all, I think it's self-refuting to deny free will, because if you deny free will, what you're saying is that the very act of saying you deny free will is itself determined.
[01:33:21.640 --> 01:33:24.920] But the determinist would say you were determined to say that you deny it.
[01:33:25.320 --> 01:33:26.360] Right, right, right, right.
[01:33:27.240 --> 01:33:30.760] But why would you then trust that opinion if you didn't choose it freely?
[01:33:30.760 --> 01:33:32.120] Okay, Christophe.
[01:33:33.400 --> 01:33:34.360] I have free will.
[01:33:34.360 --> 01:33:35.320] I believe in free will.
[01:33:35.320 --> 01:33:49.240] I believe in the libertarian free will, but it's based on an intrinsic powers ontology, where the real causal actors are the intrinsic powers inherent in a physical substrate like my brain.
[01:33:49.240 --> 01:33:52.600] And if I lose my brain, then there's no more free will.
[01:33:52.600 --> 01:33:54.040] In fact, there's no more mind.
[01:33:54.040 --> 01:34:01.240] But I do believe in, contrary to most scientists, I do believe in free will, in the classical libertarian free will.
[01:34:01.800 --> 01:34:06.040] But you're not suggesting there's a little homunculus up there making the decisions for you.
[01:34:06.040 --> 01:34:06.440] Right?
[01:34:06.440 --> 01:34:06.680] No.
[01:34:06.680 --> 01:34:07.480] No, it's my point.
[01:34:07.560 --> 01:34:08.840] It's exactly as Mike said.
[01:34:08.840 --> 01:34:13.400] When I am faced with a moral dilemma, do I do A or do I do B?
[01:34:13.400 --> 01:34:19.800] Then if I deliberate, if I have a clear fork, if I'm consciously aware of A and non-A, let's just keep it simple.
[01:34:20.120 --> 01:34:21.800] A versus non-A.
[01:34:21.800 --> 01:34:23.560] I think I go through the reasoning.
[01:34:23.560 --> 01:34:24.600] Why should I do A?
[01:34:24.600 --> 01:34:26.200] Why should I do non-A?
[01:34:26.200 --> 01:34:27.240] What are the consequences?
[01:34:27.240 --> 01:34:30.360] And I decide to do non-A, let's say.
[01:34:30.840 --> 01:34:33.480] Then this is a freely taken decision.
[01:34:33.480 --> 01:34:44.360] Yes, there's a correlate with the underlying brain, but the actual decision happens in consciousness, which is different, which is certainly different from its substrate, right?
[01:34:45.120 --> 01:34:47.200] In that sense, I'm not a classical materialist.
[01:34:47.440 --> 01:34:53.760] So, the Benjamin Libet experiments, you would explain as just a different part of the brain making the decision before you're consciously aware of it?
[01:34:53.760 --> 01:34:54.880] No, it's a correlate.
[01:34:55.280 --> 01:35:00.240] It's clearly, no, the libert experiment, of course, is highly construed, right?
[01:35:00.400 --> 01:35:06.000] You're being trained, you're sitting there, and you're supposed to watch a clock and then lift the hand.
[01:35:06.000 --> 01:35:06.960] It's pretty meaningless.
[01:35:06.960 --> 01:35:15.440] In other words, there's no moral judgment implied, or any behavioral consequence, whether I lift my hand now or one second later.
[01:35:15.440 --> 01:35:24.240] It's not like I'm being asked actually to choose something in a restaurant or to make a choice between mate A or mate B or job A or job B, right?
[01:35:24.240 --> 01:35:32.000] So those sort of experiments haven't been done yet because they're, of course, much more difficult to do in an experimental context.
[01:35:32.000 --> 01:35:38.000] But yeah, there will clearly be correlates in the brain, but those are not the true causes.
[01:35:38.000 --> 01:35:42.400] But that's according to this intrinsic power ontology.
[01:35:43.120 --> 01:35:45.360] Michael, does that surprise you to hear that?
[01:35:46.560 --> 01:35:53.200] Well, no, I mean, Libet also found, and again, I think there are tremendous limitations to his experimental method.
[01:35:53.200 --> 01:35:58.560] I mean, as I said, it's pushing a button doesn't have too much to do with the exercise of free will in real life.
[01:35:58.560 --> 01:36:06.480] But he found that the decision to veto pushing the button was electrically silent, which he described as free won't.
[01:36:08.080 --> 01:36:11.600] But I don't, I mean, his experiments are fascinating, and he was a fascinating man.
[01:36:11.600 --> 01:36:12.800] I think he was a woman.
[01:36:13.040 --> 01:36:17.040] But I mean, does Christoph being a libertarian free will defender surprise you?
[01:36:18.400 --> 01:36:18.960] No, it doesn't.
[01:36:18.960 --> 01:36:23.520] I mean, it surprises me just because most neuroscientists don't defend free will.
[01:36:23.520 --> 01:36:28.920] But I think Christoph is a very deep thinker and has a very open mind.
[01:36:28.800 --> 01:36:33.960] And I actually believe personally that no one actually denies free will.
[01:36:37.800 --> 01:36:38.600] The reason is...
[01:36:41.800 --> 01:36:42.120] Right.
[01:36:42.120 --> 01:36:44.680] I said, Dan, are you free to choose your wine?
[01:36:44.680 --> 01:36:45.080] Right, right.
[01:36:46.040 --> 01:36:46.680] And he did choose.
[01:36:46.840 --> 01:36:48.520] I said, see, wasn't that reason?
[01:36:48.520 --> 01:36:48.920] Oh, no.
[01:36:49.160 --> 01:36:50.120] It's all predetermined.
[01:36:50.360 --> 01:36:55.240] Because I think that what you believe is not merely what you say, it's what you do.
[01:36:56.680 --> 01:37:01.000] If you ask an embezzler, do you believe in being financially honest?
[01:37:01.000 --> 01:37:04.440] If he says yes, you know he's lying because that's not what he does.
[01:37:04.840 --> 01:37:15.720] So if you take a person who denies free will and you go and you pour your coffee on his laptop, he's going to be very upset with you for ruining his laptop.
[01:37:15.720 --> 01:37:17.480] And you can say, hey, I didn't have free will.
[01:37:17.480 --> 01:37:18.520] I couldn't have chosen.
[01:37:18.520 --> 01:37:20.360] You might as well blame the coffee cup.
[01:37:20.360 --> 01:37:21.560] But he won't buy that.
[01:37:21.560 --> 01:37:22.760] He won't accept that.
[01:37:22.760 --> 01:37:26.200] So everybody believes in free will in their everyday life.
[01:37:26.600 --> 01:37:28.360] That's a compatibilist view, right?
[01:37:28.360 --> 01:37:33.800] People believe it for practical reasons and for moral reasons to be able to punish people and reward people.
[01:37:35.080 --> 01:37:36.840] Well, you know, there's a.
[01:37:37.160 --> 01:37:53.720] But what about the evidence, Mike? We do have evidence, despite my belief, that you can stimulate particular parts of the brain, in this case in parietal cortex, and get the feeling, you know, it's quite striking if you read the dialogue with these patients.
[01:37:53.720 --> 01:37:55.640] Oh, suddenly I felt a will.
[01:37:55.640 --> 01:37:57.480] I felt an urge to move.
[01:37:57.800 --> 01:38:03.640] Well, yeah, sometimes you actually do get movement if you stimulate at a higher current amplitude.
[01:38:03.640 --> 01:38:12.360] Yeah, you don't, well, first of all, the actual stimulation of the will would be that you wouldn't realize it was connected to the stimulation.
[01:38:12.360 --> 01:38:14.560] That is, that you would think it was your own will.
[01:38:13.960 --> 01:38:20.160] And Penfield did that a lot, and he could never stimulate will on his experiments.
[01:38:20.480 --> 01:38:23.440] But you can alter the will just by having a few drinks.
[01:38:23.440 --> 01:38:25.440] I mean, you don't need to stimulate the brain.
[01:38:25.440 --> 01:38:35.280] Yeah, but here we're talking about, you know, more recent neurosurgeons doing this. I'm thinking about the French group, Desmurget; they've published several papers on it, one in Science, where they stimulate particular parts.
[01:38:35.280 --> 01:38:50.160] And then Itzhak Fried, again, another colleague of yours, you know, stimulating the anterior cingulate, where they get a very discrete feeling, either of intentionality, or the patient himself says, I suddenly have the will to move.
[01:38:50.480 --> 01:38:50.960] Okay.
[01:38:51.600 --> 01:38:52.640] I'm not familiar with it.
[01:38:53.200 --> 01:38:54.160] I have to take a peek at it.
[01:38:54.160 --> 01:39:04.000] You would have to sort out emotional states, which can certainly be stimulated in the brain, from will as kind of an isolated thing.
[01:39:04.960 --> 01:39:11.760] This is relatively, you know, if you read the account, this is not confounded with emotion.
[01:39:11.760 --> 01:39:13.680] Yeah, I grant you, you can get emotion.
[01:39:13.680 --> 01:39:17.120] This is just, for example, they have a desire to roll their tongue.
[01:39:17.120 --> 01:39:19.360] They have a desire to move the mouth.
[01:39:19.360 --> 01:39:26.640] In one case of Itzhak Fried's, they have this girl, and every time he stimulates somewhere in prefrontal cortex, she has this reaction.
[01:39:26.640 --> 01:39:28.880] She finds the situation incredibly funny.
[01:39:28.880 --> 01:39:30.080] Talking about abstract.
[01:39:30.080 --> 01:39:30.800] She doesn't know why.
[01:39:30.800 --> 01:39:32.640] She said, why, you guys are just so funny.
[01:39:32.640 --> 01:39:38.000] And then he stimulates again and again, the patient reports this feeling of mirth, of hilarity.
[01:39:38.000 --> 01:39:38.320] Yeah, sure.
[01:39:38.320 --> 01:39:40.720] I mean, you can get that with gelastic seizures.
[01:39:40.720 --> 01:39:45.360] Seizures in the hypothalamus can make you think things are funny.
[01:39:45.600 --> 01:39:48.320] But but that's not will that follows on reason.
[01:39:48.560 --> 01:39:55.840] The only kind of free will I believe in is the abstract judgment that something is morally right or wrong, based on reason.
[01:39:57.040 --> 01:40:01.400] But here again, the difference between you two would be where the will is located for you, Michael.
[01:39:59.840 --> 01:40:03.560] I presume it's where the soul, the mind.
[01:40:07.000 --> 01:40:09.400] I mean, there's no okay.
[01:40:09.400 --> 01:40:17.480] So your answer back to my question, where does Aunt Millie's mind go when her brain dies of Alzheimer's? Deepak says it returns to consciousness as the ground of all being, or whatever.
[01:40:17.480 --> 01:40:18.680] That's not what you mean.
[01:40:18.680 --> 01:40:21.320] You mean, well, I don't know what you mean, actually.
[01:40:21.720 --> 01:40:25.640] Where, like, after the death of your brain and body, where is your soul?
[01:40:25.640 --> 01:40:26.440] Where does it go?
[01:40:26.920 --> 01:40:37.800] Well, I believe that your soul survives death because it has immaterial powers that make it incapable of disintegration that is characteristic of material things.
[01:40:37.800 --> 01:40:45.320] And I believe that one goes to heaven, to God, and that ultimately we are resurrected.
[01:40:45.320 --> 01:40:46.760] But that's not a scientific belief.
[01:40:47.480 --> 01:40:52.760] I understand, but just out of curiosity, is it the physical body that's resurrected or just the soul?
[01:40:52.760 --> 01:40:53.640] Physical body, yeah.
[01:40:53.720 --> 01:41:03.320] Only, well, as St. Paul said in Corinthians 15, it's a spiritual body; it's physical, but it has very different qualities, and it doesn't age.
[01:41:03.960 --> 01:41:04.760] Correct, yeah.
[01:41:04.760 --> 01:41:08.120] And how old are you when you're resurrected? That's a very interesting question.
[01:41:08.120 --> 01:41:10.040] Um, I'm going to be 30.
[01:41:10.040 --> 01:41:10.840] Yeah, right.
[01:42:10.400 --> 01:42:16.000] Theologically, I believe, at least in the Catholic tradition, I don't know that that's worked out.
[01:42:16.000 --> 01:42:30.880] What's kind of interesting is that in near-death experiences, people describe seeing loved ones on the other end of the tunnel, and very often they describe them as being 30, meaning they describe them as being young and healthy, not older and infirmed.
[01:42:30.880 --> 01:42:32.160] I certainly wish for.
[01:42:32.960 --> 01:42:36.000] I am rooting for 30 like you wouldn't believe.
[01:42:36.160 --> 01:42:42.720] Me too, but you know, I'm 70, so if I am resurrected at 30, where are those 40 years of memories I've had?
[01:42:42.720 --> 01:42:49.920] And everything that... well, all of your memories are gone, because memory is a matter of the brain.
[01:42:49.920 --> 01:42:53.200] But there is knowledge that's not memory.
[01:42:53.200 --> 01:42:59.440] That is, there's abstract knowledge, and you'll still have your abstract knowledge.
[01:42:59.840 --> 01:43:05.040] Memory itself dissolves with the brain because it's part of the brain.
[01:43:05.040 --> 01:43:06.080] Christoph?
[01:43:06.720 --> 01:43:13.440] Yeah, I mean, I find this all completely ad hoc and arbitrary, Mike.
[01:43:13.440 --> 01:43:14.160] I'm sorry.
[01:43:14.160 --> 01:43:14.560] That's okay.
[01:43:14.800 --> 01:43:16.000] Particularly immortality.
[01:43:16.000 --> 01:43:21.200] There's nothing in physics or in science that, well, clearly everything has a finite lifetime.
[01:43:21.200 --> 01:43:23.280] Protons have a finite lifetime.
[01:43:23.280 --> 01:43:28.400] So I understand you can believe that, and that's fine, but it's a total ad hoc assumption.
[01:43:28.400 --> 01:43:30.840] In addition to everything else, we are mortal.
[01:43:30.840 --> 01:43:31.480] Yeah.
[01:43:31.800 --> 01:43:33.240] We all wanted to be immortal.
[01:43:33.240 --> 01:43:34.600] That doesn't make it so.
[01:43:34.600 --> 01:43:35.240] Right.
[01:43:35.240 --> 01:43:36.920] Well, first of all, I'm not sure.
[01:43:36.920 --> 01:43:41.080] I mean, there are plenty of people who don't relish the idea of immortality.
[01:43:41.480 --> 01:43:42.280] I know people who are terrified.
[01:43:43.080 --> 01:43:44.680] I mean, forever, that's a long time.
[01:43:45.080 --> 01:43:46.520] It's a long time than I felt.
[01:43:47.160 --> 01:43:52.360] I recall when my son, I would talk to him about this when he was a kid, and he was terrified of it.
[01:43:52.360 --> 01:43:54.280] He said, Do you mean forever?
[01:43:54.280 --> 01:43:57.960] Like, the whole concept of forever just pictures.
[01:43:58.360 --> 01:44:01.880] Christopher Hitchens had a great bit about this when he was dying of cancer.
[01:44:01.880 --> 01:44:13.000] He wrote about this in Vanity Fair, and you can get his book on this, on mortality, in which you're at a party and you're tapped on the shoulder and told you have to leave.
[01:44:13.000 --> 01:44:15.320] And worse, the party's going to go on without you.
[01:44:15.320 --> 01:44:17.160] And he's like, oh, no, that's such a bummer.
[01:44:17.240 --> 01:44:18.760] He goes, but it could be worse.
[01:44:18.760 --> 01:44:24.040] What if you're at the party and you're tapped on the shoulder and told you can never leave the party?
[01:44:24.040 --> 01:44:24.920] Right, right.
[01:44:24.920 --> 01:44:26.360] Well, that's even worse.
[01:44:26.360 --> 01:44:26.840] Right.
[01:44:27.080 --> 01:44:29.240] That's Jean-Paul Sartre, No Exit.
[01:44:29.400 --> 01:44:29.720] Right.
[01:44:31.720 --> 01:44:33.240] Is that where Hitch got that?
[01:44:33.240 --> 01:44:33.480] Okay.
[01:44:33.800 --> 01:44:34.920] Ah, so, No Exit.
[01:44:35.160 --> 01:44:44.920] Yeah, I've never thought of annihilation as necessarily being such a terrible thing because if you think about it, if you don't exist after you die, you're in the same state as you were before you lived.
[01:44:44.920 --> 01:44:50.360] And we don't recall, I don't recall 1930 as being a terrible time because I wasn't around.
[01:44:50.520 --> 01:44:59.320] So my belief in eternal life is based on my Catholic faith and on my understanding of hylomorphism and near-death experiences.
[01:44:59.320 --> 01:45:00.520] It's not wishful thinking.
[01:45:00.520 --> 01:45:07.560] And frankly, it's rather terrifying because I know that I got to behave myself or my eternity could be in a very hot place.
[01:45:08.680 --> 01:45:11.560] Now, you don't actually believe in a literal hell, do you?
[01:45:11.560 --> 01:46:58.920] Oh, yeah, yeah, sure, absolutely, absolutely. Well, and what would that be, where your soul goes? I think it's to be apart from God. I think to be separated from God is the worst thing imaginable, and it's much worse than fire. And I sure hope nobody experiences it; I certainly don't. Okay, well, Christoph has been working on the hard problem of consciousness for a long time; you wrote about it in your book. In fact, he lost the bet with David Chalmers, not that it could never be solved, only that it wasn't solved in 25 years. What is your answer to the hard problem of consciousness, Michael? The hard problem of consciousness is an artifact of our Cartesian way of looking at the world. We've gotten rid of res cogitans, we have res extensa, we're meat machines, and we're all perplexed about how it is that meat machines could have experiences, when we were the ones who said that we can't have experiences. Human beings have subjective experience because we have souls, and souls entail subjective experience. We have conscious experience because we are complex organisms; our brain is, you know, the most complex piece of highly active matter in the known universe, and of course we're not the only ones. The other thing, as I mentioned, I grew up Catholic, and what always bothered me is we had dogs, I grew up around dogs, and I'd say, how can it be that these gorgeous creatures, they belong to our family, and they're not supposed to go to heaven? I mean, either we both go to heaven or none of us goes to heaven. But the fact that only humans are special and different from all others never struck me as being very plausible. I just lost my little Bichon, whom I loved, and, yeah, that's okay.
[01:46:58.920 --> 01:46:59.720] Thank you.
[01:46:59.720 --> 01:47:01.480] And yeah, I thought the same thing.
[01:47:01.480 --> 01:47:10.200] I mean, my understanding of the Catholic view on that is not that they don't go to heaven, but that their souls are not naturally immortal because they don't have intellect and will.
[01:47:10.200 --> 01:47:13.320] They don't have immaterial powers of the soul.
[01:47:13.320 --> 01:47:17.240] So that doesn't mean that God doesn't recreate them.
[01:47:17.560 --> 01:47:21.480] So they may still go to heaven, but they need to be made de novo.
[01:47:21.960 --> 01:47:24.440] We go to heaven with the souls we have.
[01:47:24.840 --> 01:47:33.640] And interestingly, children who have near-death experiences very commonly will encounter deceased pets on the other side of the tunnel.
[01:47:33.800 --> 01:47:36.280] They don't encounter deceased people as much.
[01:47:36.280 --> 01:47:40.440] Presumably because they're young and the people they know generally haven't died yet.
[01:47:42.440 --> 01:47:44.040] But they do encounter pets a lot.
[01:47:44.040 --> 01:47:47.640] So I sure hope that our pets are on the other side of the thing.
[01:47:47.800 --> 01:47:49.960] I'd love to see my little dog again.
[01:47:50.920 --> 01:47:53.160] Everything will be made whole again.
[01:47:54.200 --> 01:47:55.080] I hope.
[01:47:55.080 --> 01:48:07.080] Well, when these are called near-death experiences, Michael, doesn't that suggest that perhaps there's something still working for most of these patients, except for the ones you... Yeah, I'm not sure about most.
[01:48:07.080 --> 01:48:11.160] I mean, some are, I mean, at least 9 million people in this country.
[01:48:11.160 --> 01:48:13.160] Some of them have some brain functions, some don't.
[01:48:13.160 --> 01:48:14.040] Who knows?
[01:48:14.440 --> 01:48:19.160] But I do think that near-death is kind of a bad word.
[01:48:19.560 --> 01:48:21.080] It's reversible death.
[01:48:21.080 --> 01:48:28.200] That is, you can be dead and be reversed, meaning that if your heart isn't beating and you have no brain function, you're dead.
[01:48:28.200 --> 01:48:30.120] I mean, that's as dead as it gets.
[01:48:30.360 --> 01:48:39.880] But you can be brought back if you're brought back before permanent damage. Legally, though, you're only dead if it's irreversible loss of every function of the brain, right?
[01:48:40.280 --> 01:48:41.960] The Uniform Determination of Death Act.
[01:48:42.120 --> 01:48:42.280] Right.
[01:48:42.520 --> 01:48:48.320] Yes, but that's because the determination of death makes you eligible to be an organ donor.
[01:48:44.680 --> 01:48:50.640] And if you're not irreversibly dead, then that's murder.
[01:48:50.800 --> 01:49:00.720] But I mean, certainly Pam Reynolds, and I think no doubt many, many people who have near-death experiences, are genuinely dead.
[01:49:01.200 --> 01:49:07.440] In fact, it's been taught for generations that you can't do CPR on somebody who's not dead.
[01:49:07.440 --> 01:49:10.080] That is, when you call a code blue, they're dead.
[01:49:10.880 --> 01:49:16.560] We have a system called rapid response that we call if someone is not dead, but they're in bad shape.
[01:49:16.880 --> 01:49:22.160] If it looks like they're heading for a code, you call a rapid response, but you don't pump on their chest.
[01:49:22.320 --> 01:49:24.720] You pump on their chest if they're dead.
[01:49:24.720 --> 01:49:28.960] So near-death is kind of a bad, bad term for you.
[01:49:29.520 --> 01:49:30.320] Okay.
[01:49:30.640 --> 01:49:35.600] First off, Michael said, or you said about some of these questions.
[01:49:35.600 --> 01:49:38.240] I don't know, or scientists don't know yet.
[01:49:38.240 --> 01:49:38.720] Okay.
[01:49:39.280 --> 01:49:40.240] Fair enough.
[01:49:40.240 --> 01:49:47.760] Does that allow room for a faith-based religious interpretation in the following sense?
[01:49:47.760 --> 01:49:57.280] In William James's kind of pragmatism, an example I'll use from Martin Gardner, because he was one of the founders of the modern skepticism movement.
[01:49:57.280 --> 01:50:03.280] And, you know, buddies with the Amazing Randi and all the atheists, Carl Sagan, Asimov, they're all atheists.
[01:50:03.280 --> 01:50:06.720] And Martin, you know, kind of made a famous statement.
[01:50:06.800 --> 01:50:08.320] Well, actually, I'm a theist.
[01:50:08.320 --> 01:50:10.320] I believe in God.
[01:50:10.320 --> 01:50:28.400] And in the sense that Miguel Unamuno and William James said, for questions that are really important in life, big questions that science cannot resolve or give us an answer, and if it really affects your life, it's okay to go ahead and believe it if it works for you.
[01:50:28.400 --> 01:50:35.000] Now, he made clear, this doesn't mean, you know, that I think Uri Geller can really bend spoons or psychics can really read minds.
[01:50:35.240 --> 01:50:40.040] But for, you know, the God question, why there's something rather than nothing, the free will question.
[01:50:40.360 --> 01:50:43.560] And he said, I think there's a God, and I can't prove it.
[01:50:43.560 --> 01:50:48.040] I think the atheists have slightly better arguments than the theists do, but only slightly better.
[01:50:48.040 --> 01:50:49.080] And it's okay.
[01:50:49.080 --> 01:50:51.080] I'm not trying to convert anybody to anything.
[01:50:51.080 --> 01:50:53.560] I'm just saying it's what works for me.
[01:50:54.200 --> 01:50:56.280] He called that fideism.
[01:50:56.280 --> 01:51:00.440] Is there room for something like that in your worldview?
[01:51:01.080 --> 01:51:07.960] Yeah, particularly if you have one of these extraordinary transformative experiences, you're challenged because you've had this experience.
[01:51:07.960 --> 01:51:19.320] So as William James says, you come back with this noetic feeling that I've experienced something that cannot be explained, but you're still a scientist, you're still a reasoning person.
[01:51:19.320 --> 01:51:31.320] So you try to construct, given everything that I know, what is the least assumptions I can make that's compatible with both my experience as well as with the science that I practice?
[01:51:31.320 --> 01:51:37.960] Because we know science is by far the most successful technology humanity has ever invented to manipulate the world.
[01:51:37.960 --> 01:51:40.840] So you say, well, science is great at manipulating the world.
[01:51:40.840 --> 01:51:48.760] It may not explain all of metaphysics, but I think if you go to the metaphysical realm, you should keep the number of assumptions to a minimum.
[01:51:48.760 --> 01:52:01.240] Like idealism, in fact, you know, the view that everything fundamentally is mental, seems much more elegant, because it only requires me to accept the only reality that all of us are directly acquainted with.
[01:52:01.240 --> 01:52:07.720] None of us have ever directly interacted with an atom or with a star or with a black hole.
[01:52:07.720 --> 01:52:14.840] We only know those because we look at images, we see oscilloscopes, we experience other people telling us about things.
[01:52:14.960 --> 01:52:22.320] So fundamentally, everything we know, including our own existence, per René Descartes, comes to us through consciousness.
[01:52:22.320 --> 01:52:27.760] So therefore, I think a very elegant assumption is fundamentally everything is just mental.
[01:52:28.080 --> 01:52:29.840] And that's perfectly rational.
[01:52:29.840 --> 01:52:32.400] It makes the smallest number of assumptions.
[01:52:32.400 --> 01:52:38.720] And so it doesn't have these additional assumptions of immortality, plus the existence of a God or hell or other things.
[01:52:38.720 --> 01:52:42.400] Those are all extra assumptions one would have to make.
[01:52:42.400 --> 01:52:47.680] And I would like to make the minimum, I would like to have a worldview that makes a minimal number of assumptions.
[01:52:47.680 --> 01:52:48.320] Fair enough.
[01:52:48.320 --> 01:52:52.880] Michael, if Christoph and I are right, we'll never know.
[01:52:52.880 --> 01:52:56.320] But if you're right, Christophe and I will find out, I guess.
[01:52:56.320 --> 01:52:57.520] This is Pascal's wager.
[01:52:58.080 --> 01:52:59.040] Pascal's wager.
[01:52:59.760 --> 01:53:04.320] Well, I actually agree with Christoph very much about idealism.
[01:53:04.640 --> 01:53:17.200] And I think idealism, for overall metaphysics, is a beautiful way of looking at the world because the reality is that the universe is more like a mind than it is like a thing.
[01:53:17.600 --> 01:53:30.080] And one of the arguments that can be made: recall, Eugene Wigner made the comment about the unreasonable effectiveness of mathematics in explaining the natural world.
[01:53:30.080 --> 01:53:41.040] And an idealist would say, well, that's because physics is mathematics, and mathematics is an idea, and the natural world is a mind.
[01:53:41.360 --> 01:53:45.440] And I think that makes a lot of sense, but then you have to ask, whose mind is it?
[01:53:47.520 --> 01:53:47.760] St.
[01:53:47.840 --> 01:53:59.560] Augustine said something that I think is one of the most beautiful concepts in Christianity, and that is that all that exists is a thought in the mind of God.
[01:53:58.720 --> 01:54:02.760] So each of us is an idea of God's.
[01:54:03.560 --> 01:54:08.440] And I think that's a very beautiful way of looking at things, and it has a lot of parsimony.
[01:54:08.680 --> 01:54:10.520] It's parsimonious.
[01:54:10.840 --> 01:54:13.320] All right, gentlemen, we're closing in on two hours.
[01:54:13.320 --> 01:54:14.920] Let me just wrap it up there again.
[01:54:14.920 --> 01:54:16.760] The Immortal Mind.
[01:54:16.760 --> 01:54:18.520] And Then I Am Myself the World.
[01:54:18.520 --> 01:54:21.160] Just read both of these and decide for yourself.
[01:54:21.160 --> 01:54:21.800] They're great.
[01:54:21.800 --> 01:54:23.400] And listen to this conversation.
[01:54:23.400 --> 01:54:23.640] Thanks.
[01:54:23.880 --> 01:54:24.200] All right.
[01:54:24.200 --> 01:54:25.480] Thank you so much.
[01:54:25.480 --> 01:54:26.680] Thank you so much, Michael.
[01:54:26.680 --> 01:54:27.400] Christoph, thank you.
[01:54:27.560 --> 01:54:28.200] Thank you, Mike.
[01:54:28.200 --> 01:54:29.000] Thank you, Michael.
[01:54:29.480 --> 01:54:30.760] Very, very nice to meet you.
[01:54:30.760 --> 01:54:31.240] Thank you.
[01:54:31.240 --> 01:54:33.240] Yes, it was enjoyable.
Full Transcript
[00:00:00.160 --> 00:00:04.560] FanDuel's getting you ready for the NFL season with an offer you won't want to miss.
[00:00:04.560 --> 00:00:09.440] New customers can bet just $5 and get $300 in bonus bets if you win.
[00:00:09.440 --> 00:00:10.000] That's right.
[00:00:10.000 --> 00:00:16.480] Pick a bet, bet $5, and if it wins, you'll unlock $300 in bonus bets to use all across the app.
[00:00:16.480 --> 00:00:22.320] You can build parlays, bet player props, ride the live lines, whatever your style, FanDuel has you covered.
[00:00:22.320 --> 00:00:24.240] The NFL season's almost here.
[00:00:24.240 --> 00:00:26.720] The only question is: are you ready to play?
[00:00:26.720 --> 00:00:32.560] Visit fanduel.com/sportsfan to download the FanDuel app today and get started.
[00:00:32.560 --> 00:00:35.040] Must be 21 plus and physically present in New York.
[00:00:35.040 --> 00:00:37.920] First online real money wager only, $10 first deposit required.
[00:00:38.000 --> 00:00:41.280] Bonus issued as non-withdrawable bonus bets that expire seven days after receipt.
[00:00:41.280 --> 00:00:42.080] Restrictions apply.
[00:00:42.080 --> 00:00:44.240] See terms at sportsbook.fanduel.com.
[00:00:44.240 --> 00:00:53.280] For help with a gambling problem, call 1-877-8-HOPENY or text HOPENY to 467-369 or visit oasas.ny.gov/gambling.
[00:00:53.280 --> 00:00:54.880] Standard text messaging rates apply.
[00:00:54.880 --> 00:00:58.720] Sports betting is void in Georgia, Hawaii, Utah, and other states where prohibited.
[00:00:58.720 --> 00:01:04.400] Warning, the following ZipRecruiter radio spot you are about to hear is going to be filled with F-words.
[00:01:04.400 --> 00:01:09.440] When you're hiring, we at ZipRecruiter know you can feel frustrated, forlorn, even.
[00:01:09.440 --> 00:01:11.120] Like your efforts are futile.
[00:01:11.120 --> 00:01:17.120] And you can spend a fortune trying to find fabulous people, only to get flooded with candidates who are just fine.
[00:01:18.240 --> 00:01:21.440] Fortunately, ZipRecruiter figured out how to fix all that.
[00:01:21.440 --> 00:01:26.560] And right now, you can try ZipRecruiter for free at ziprecruiter.com/zip.
[00:01:26.560 --> 00:01:35.120] With ZipRecruiter, you can forget your frustrations because we find the right people for your roles fast, which is our absolute favorite F-word.
[00:01:35.120 --> 00:01:40.720] In fact, four out of five employers who post on ZipRecruiter get a quality candidate within the first day.
[00:01:41.040 --> 00:01:42.960] Fantastic.
[00:01:42.960 --> 00:01:49.200] So, whether you need to hire four, 40, or 400 people, get ready to meet first-rate talent.
[00:01:49.200 --> 00:01:53.360] Just go to ziprecruiter.com/zip to try ZipRecruiter for free.
[00:01:53.360 --> 00:01:56.160] Don't forget, that's ziprecruiter.com/zip.
[00:01:56.160 --> 00:01:59.120] Finally, that's ziprecruiter.com/zip.
[00:02:02.600 --> 00:02:08.360] You're listening to The Michael Shermer Show.
[00:02:15.080 --> 00:02:18.360] All right, everybody, it's time for another episode of the Michael Shermer Show.
[00:02:18.360 --> 00:02:22.040] Hey, we've got a special episode for you today.
[00:02:22.040 --> 00:02:23.400] You are not going to believe this.
[00:02:23.400 --> 00:02:30.760] One of the most important and interesting topics there is: the nature of the soul, the mind, consciousness.
[00:02:30.760 --> 00:02:31.960] Where does it all go?
[00:02:31.960 --> 00:02:33.960] What happens after we die?
[00:02:33.960 --> 00:02:44.360] My guest today, and this is going to be a debate format, as it were, between two authors who have thought a lot, long and hard about this, both professional scientists.
[00:02:44.360 --> 00:02:46.280] The first one is Michael Egnor.
[00:02:46.280 --> 00:02:48.440] Here's his new book, The Immortal Mind.
[00:02:48.440 --> 00:02:51.320] Let me give it a proper introduction in total here.
[00:02:51.320 --> 00:02:59.800] He is a medical doctor, professor of neurosurgery and pediatrics at the Renaissance School of Medicine at Stony Brook University.
[00:02:59.800 --> 00:03:07.640] He received his medical degree from the College of Physicians and Surgeons at Columbia University and trained in neurosurgery at the University of Miami.
[00:03:07.640 --> 00:03:19.960] He's been on the faculty at Stony Brook since 1991, and he's the neurosurgery residency director and served as the director of pediatric neurosurgery and as vice chairman of neurosurgery at Stony Brook Medicine.
[00:03:19.960 --> 00:03:20.920] Oh my God.
[00:03:21.240 --> 00:03:22.600] My old job.
[00:03:22.920 --> 00:03:24.360] He has a strong interest.
[00:03:24.600 --> 00:03:25.800] This is where it gets interesting.
[00:03:25.800 --> 00:03:29.240] He has a strong interest in Thomist philosophy, that is St.
[00:03:29.240 --> 00:03:37.560] Thomas Aquinas; philosophy of mind; neuroscience; evolution and intelligent design; and bioethics, and has published and lectured extensively on the topic.
[00:03:37.560 --> 00:03:43.800] Here it is again: The Immortal Mind: A Neurosurgeon's Case for the Existence of the Soul.
[00:03:43.800 --> 00:03:46.640] My listeners will know I'm interested in that topic.
[00:03:44.680 --> 00:03:50.400] But today I'm going to be more neutral, and instead we're going to have Christof Koch.
[00:03:44.920 --> 00:03:52.400] And here is his latest book.
[00:03:52.720 --> 00:03:59.760] It's called Then I Am Myself the World: What Consciousness Is and How to Expand It.
[00:03:59.760 --> 00:04:06.640] Christof is a neuroscientist at the Allen Institute and at the Tiny Blue Dot Foundation.
[00:04:06.960 --> 00:04:15.520] Excuse me, he's the former president of the Allen Institute for Brain Science in Seattle and former professor at Caltech, where we first met back in the 90s.
[00:04:15.520 --> 00:04:31.520] He's the author of four previous titles: The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed; Consciousness: Confessions of a Romantic Reductionist, that's nice; and The Quest for Consciousness: A Neurobiological Approach.
[00:04:31.520 --> 00:04:34.240] Again, here's the newest book, Then I Am Myself the World.
[00:04:34.240 --> 00:04:40.800] He was on for that book last year, so he's our returning champion, as it were.
[00:04:40.800 --> 00:04:42.800] All right, Michael, let's start with you.
[00:04:43.280 --> 00:04:53.440] Just kind of broadly speaking, tell us where you're coming from and kind of wend your way to this question that I have for all theists.
[00:04:54.240 --> 00:05:19.120] To what extent did the arguments and evidence you present in your book from neuroscience support your beliefs in the afterlife and your Christianity and so on, what we loosely call your faith, such that if you took these away, if Christof came up with such great arguments and you said, oh my God, I'm wrong, would you abandon your faith or your religious beliefs?
[00:05:19.120 --> 00:05:25.440] Or are these more of just once you have the belief for other reasons, philosophical, theological, or whatever?
[00:05:25.440 --> 00:05:28.320] These are additional good reasons to believe.
[00:05:28.320 --> 00:05:34.520] Anyway, you can get yourself to that question by way of a general background here.
[00:05:34.840 --> 00:05:35.720] Thank you, Michael.
[00:05:35.880 --> 00:05:37.800] And that's actually a great question.
[00:05:38.200 --> 00:05:45.480] It's one that I've thought about a bit, not actually not quite as explicitly as you ask it, but I've thought about that.
[00:05:45.800 --> 00:05:54.440] Where I'm coming from, I grew up as an atheist and as believing in physicalism and materialism.
[00:05:54.440 --> 00:05:56.600] I was in love with science.
[00:05:56.920 --> 00:05:58.520] I went to medical school.
[00:05:58.520 --> 00:06:04.520] I majored in biochemistry undergraduate, and I went to medical school and fell in love with neuroscience.
[00:06:04.760 --> 00:06:11.160] I basically spent four years with my neuroscience textbooks because I thought it was so beautiful and fascinating.
[00:06:11.160 --> 00:06:21.400] And particularly with neuroanatomy, because I thought that we could understand who we are and what life was all about if we understood the brain in detail.
[00:06:21.640 --> 00:06:29.160] I decided that the way I wanted to understand it was to go into neurosurgery and kind of get some first-hand contact there.
[00:06:29.160 --> 00:06:37.560] So I trained in neurosurgery at the University of Miami and I began at Stony Brook on faculty in 1991, and I've been there ever since.
[00:06:38.200 --> 00:06:48.600] As I went along, I had some religious experiences and a lot of thoughts about that and had a conversion experience around the year 2000.
[00:06:48.600 --> 00:06:55.400] But along with that, I began seeing things in my clinical practice that really didn't fit the textbooks.
[00:06:55.400 --> 00:06:59.080] It didn't fit what I had learned in medical school.
[00:06:59.720 --> 00:07:02.120] And can we share screens?
[00:07:02.120 --> 00:07:03.160] I've got a scan.
[00:07:03.320 --> 00:07:05.080] I think there's an ability to do that.
[00:07:05.400 --> 00:07:06.520] That I don't know.
[00:07:07.400 --> 00:07:08.520] Let me just give it a try.
[00:07:08.920 --> 00:07:09.560] Okay.
[00:07:09.560 --> 00:07:12.200] Maybe tell you've tried that on the show.
[00:07:11.680 --> 00:07:12.920] Tell you what, why not?
[00:07:12.920 --> 00:07:14.200] Well, why don't we hold it?
[00:07:14.200 --> 00:07:14.800] All right.
[00:07:14.800 --> 00:07:17.200] I've got a very good example.
[00:07:14.360 --> 00:07:21.040] It's a little girl who was born missing half to two-thirds of her brain.
[00:07:22.000 --> 00:07:26.080] Her head was really largely of water, of spinal fluid.
[00:07:26.080 --> 00:07:30.320] And I told her parents I was dubious as to how well she would do in life.
[00:07:30.320 --> 00:07:33.200] I didn't think there was much of a chance of her having much function.
[00:07:33.200 --> 00:07:35.760] And I followed her as she grew up, and she's completely normal.
[00:07:35.760 --> 00:07:38.240] She's in her mid-20s now.
[00:07:38.480 --> 00:07:41.280] She's still missing about half her brain.
[00:07:41.520 --> 00:07:42.640] Completely normal person.
[00:07:42.640 --> 00:07:43.280] You speak to her.
[00:07:43.280 --> 00:07:46.960] She's actually rather bright, has a good job, et cetera.
[00:07:46.960 --> 00:07:49.760] I have a number of patients who are like that.
[00:07:50.560 --> 00:07:51.360] Not everybody is normal.
[00:07:52.800 --> 00:07:53.360] Not at all.
[00:07:53.360 --> 00:07:54.480] Not at all.
[00:07:55.040 --> 00:07:58.560] And it's not that uncommon.
[00:07:58.560 --> 00:08:00.800] Now, of course, she hasn't been studied carefully.
[00:08:00.800 --> 00:08:03.280] We've had MRIs of her brain.
[00:08:03.280 --> 00:08:04.960] I haven't done a functional MRI.
[00:08:04.960 --> 00:08:08.320] I'd love to see what's lighting up when she moves and does things.
[00:08:08.560 --> 00:08:12.640] But there's a lot of water up there, and it's not too much brain.
[00:08:12.640 --> 00:08:19.040] I have a little boy who's now a late teenager who had a similar scenario, also perfectly normal person.
[00:08:19.440 --> 00:08:25.840] I had a young child who had hydranencephaly, which is a condition in which babies
[00:08:25.840 --> 00:08:30.800] have strokes in intrauterine life and are born without cerebral hemispheres.
[00:08:30.800 --> 00:08:34.960] So the highest level of their central nervous system is their diencephalon.
[00:08:35.360 --> 00:08:37.680] And he had severe cerebral palsy.
[00:08:37.680 --> 00:08:43.120] He was quite handicapped, but totally awake, completely conscious person.
[00:08:43.600 --> 00:08:47.600] He couldn't speak, but he had a full range of emotions, of reactions.
[00:08:47.600 --> 00:08:53.440] He was happy about things and sad about things, and just he had bad cerebral palsy.
[00:08:53.840 --> 00:09:05.080] What really got me to thinking about this dramatically was: I was doing an awake craniotomy, which occasionally is indicated if we have to map the surface of the brain to protect eloquent regions of the brain.
[00:09:05.400 --> 00:09:10.760] This was a lady who had had an infiltrating tumor of her left frontal lobe.
[00:09:10.760 --> 00:09:18.520] And I wanted to map her Broca's area, the speech area of the left frontal lobe, so I could avoid damaging it as I was taking the tumor out.
[00:09:18.520 --> 00:09:22.600] Because the tumor was infiltrating, I had to take out a good part of the frontal lobe.
[00:09:22.600 --> 00:09:25.480] So I was doing that while I was talking to her.
[00:09:26.040 --> 00:09:30.360] The operation took about four or five hours, and we had a nice long conversation.
[00:09:30.360 --> 00:09:36.760] And I was talking to her as much of her left frontal lobe was coming out, and she never turned a hair.
[00:09:36.760 --> 00:09:37.800] She's perfectly fine.
[00:09:37.800 --> 00:09:41.560] We're talking about the weather, we're talking about the cafeteria, about her family.
[00:09:41.560 --> 00:09:45.160] And I left the operating room thinking, this isn't in any textbook.
[00:09:45.480 --> 00:09:49.400] I'm not seeing this in Sidman and Sidman or Kandel's textbook.
[00:09:49.400 --> 00:09:51.240] I don't see any of this anywhere.
[00:09:51.240 --> 00:09:53.240] So I began looking.
[00:09:53.240 --> 00:09:54.360] I began reading.
[00:09:55.720 --> 00:10:06.120] And one phenomenon that as a neurosurgeon is particularly important, this is when I teach the residents, this is something that neurosurgeons have to know intimately.
[00:10:06.120 --> 00:10:09.880] And that is the difference between eloquent and non-eloquent regions of the brain.
[00:10:09.880 --> 00:10:16.200] There are certain regions of the brain that have very well-defined functions that are completely intolerant of injury.
[00:10:16.520 --> 00:10:25.880] The motor strip in the posterior frontal lobe, the speech areas, Broca's area and Wernicke's area on the left side.
[00:10:26.520 --> 00:10:30.040] The brainstem, parts of the thalamus.
[00:10:30.040 --> 00:10:37.000] These are areas that if you damage them, even tiny, tiny injuries cause major deficits.
[00:10:37.000 --> 00:10:48.320] But many of the other parts of the brain, much of the frontal lobes, parts, the parietal lobes, you can take out and put in a wastebasket, and it has nothing to do with the patient's neurological functioning.
[00:10:48.320 --> 00:10:50.880] As with this woman I was taking that tumor out of.
[00:10:50.880 --> 00:10:56.080] Much of the cerebellum can be taken out and discarded without any effect on a person at all.
[00:10:56.480 --> 00:11:13.280] And so I am fascinated with this difference between parts of the brain that can be sacrificed at will without any discernible change in a person's neurological state and parts that you don't even dare touch without damaging somebody.
[00:11:13.600 --> 00:11:20.960] So I did a lot of reading on neuroscience, on philosophy, trying to come to grips with all of this.
[00:11:20.960 --> 00:11:29.200] And I came across a statement by Roger Scruton, who's a British philosopher who was a wonderful philosopher.
[00:11:29.200 --> 00:11:37.440] And to paraphrase, Scruton said that neuroscience is a vast trove of answers with no memory of the questions.
[00:11:37.760 --> 00:11:39.680] And I thought that just nailed it.
[00:11:39.680 --> 00:11:40.560] That just nailed it.
[00:11:40.560 --> 00:11:45.120] Because I had a professor, undergraduate in biology.
[00:11:45.120 --> 00:11:47.840] I was doing some research in molecular biology.
[00:11:47.840 --> 00:11:50.400] And he used to say to me all the time, he was a great philosopher.
[00:11:51.040 --> 00:11:56.240] He used to say, it's the question you ask that matters, not as much as the answer you get.
[00:11:56.240 --> 00:12:02.720] Because if you don't ask the right question, you'll misinterpret the data and misinterpret the answer, and it'll lead you in a bad direction.
[00:12:02.720 --> 00:12:05.680] The question has to be clear, and it has to be the right question.
[00:12:06.000 --> 00:12:12.640] Well, Michael, let me intervene here and just ask a clear question because we're already kind of into the weeds of neuroscience.
[00:12:12.640 --> 00:12:17.120] But what I wanted to start off with is just, let me summarize your book.
[00:12:17.120 --> 00:12:46.040] So, lots of these very specific examples, you've already given a couple, and there's a bunch more that Christof can then respond to, lead you to the conclusion or deduction that the mind, which I guess you're equating with the soul, is separate from the brain, from the neurons, and that when you die, it floats off or it's elsewhere and it goes off into some other heavenly state or some quantum field or whatever, something like this.
[00:12:46.040 --> 00:12:55.080] In other words, you, me, ourselves, the self, contained in this mind/soul, continues beyond the physical body.
[00:12:55.080 --> 00:12:56.600] Is that your conclusion?
[00:12:59.160 --> 00:13:00.040] Yes or no?
[00:13:01.160 --> 00:13:06.920] The view of the human person that I hold is that of Aristotle, basically, which St.
[00:13:07.000 --> 00:13:08.920] Thomas elaborated.
[00:13:08.920 --> 00:13:13.000] And that is the version of hylomorphism, right?
[00:13:13.160 --> 00:13:14.680] Yes, yes, exactly.
[00:13:18.840 --> 00:13:31.400] In that view, human beings are a substance, meaning that we are a substance in the sense that we persist through time, that we have attributes, we have causal power, and that we remain ourselves.
[00:13:31.880 --> 00:13:44.280] And Aristotle said that living substances are composites of soul and body, which is the analogy to form and matter in inanimate things.
[00:13:44.600 --> 00:13:47.400] And the soul is nothing mystical.
[00:13:48.200 --> 00:13:50.200] The Aristotelian soul is not a mystical thing.
[00:13:50.200 --> 00:13:54.040] It's not a translucent thing that looks like you and you can put it alongside of you.
[00:13:54.040 --> 00:13:55.800] It's not like Casper the Ghost.
[00:13:55.800 --> 00:14:03.480] The soul is simply the sum of the activities that make us alive that differentiate us from a dead body.
[00:14:03.480 --> 00:14:18.480] So if you take everything I am right now, everything, my ability to speak, my heartbeat, my sensory powers, my memory, take all of that, and you subtract from that everything that I will be the moment I die.
[00:14:18.800 --> 00:14:20.240] My soul is what's left.
[00:14:14.920 --> 00:14:21.440] My soul is the difference.
[00:14:22.000 --> 00:14:23.360] So it's not mystical.
[00:14:23.840 --> 00:14:26.000] It's just a very practical definition.
[00:14:26.320 --> 00:14:28.240] Would you call yourself a dualist?
[00:14:29.120 --> 00:14:31.840] Yes, but I'm a Thomistic dualist.
[00:14:31.840 --> 00:14:40.880] But Thomistic dualists try to play down the dualist part, because I'm not a substance dualist.
[00:14:41.360 --> 00:14:44.240] I think Cartesian dualism has a ton of problems.
[00:14:44.480 --> 00:14:49.840] I have a lot of, some of my best friends are Cartesian dualists, but I don't agree with them on many, many things.
[00:14:50.480 --> 00:15:01.280] So we are integrated single substances, but we have a principle of intelligibility, which is our soul, and a principle of individuation, which is our body.
[00:15:02.240 --> 00:15:03.200] And I think that...
[00:15:03.280 --> 00:15:08.800] Do you know the philosopher Matthew Owen, who defends a very similar version of hylomorphism?
[00:15:08.800 --> 00:15:14.800] He's not a substance dualist, but it really seems very, very similar to what you articulate.
[00:15:14.800 --> 00:15:15.680] Matthew Owen.
[00:15:16.320 --> 00:15:17.120] I know of him.
[00:15:17.280 --> 00:15:20.560] I haven't read him deeply, but I know his name.
[00:15:20.560 --> 00:15:21.120] Yeah.
[00:15:21.120 --> 00:15:25.040] Maybe, Christof, now's a good time to bring you in and give us the big picture.
[00:15:25.040 --> 00:15:26.160] Are you a monist?
[00:15:26.560 --> 00:15:27.600] Are you an atheist?
[00:15:28.160 --> 00:15:34.400] What do you think happens to Christof Koch, the person, the soul, the mind, whatever, when the body dies?
[00:15:34.720 --> 00:15:46.560] I don't know, but what I believe in is that my personal consciousness, everything that makes up me, my memories, my traits, will cease to exist.
[00:15:46.560 --> 00:15:50.560] That's what all the evidence shows, will be destroyed.
[00:15:50.560 --> 00:16:01.800] Once the physical substrate that is me is destroyed, whether it returns to an ultimate reality that is phenomenal, you know, as in idealism, which I'm increasingly sympathetic to, I don't know.
[00:16:02.840 --> 00:16:10.200] But all the evidence we have says that when our brain is gone, conscious awareness in us is also gone.
[00:16:10.520 --> 00:16:11.400] I see.
[00:16:11.400 --> 00:16:12.040] Okay.
[00:16:12.680 --> 00:16:14.680] So that's pretty clear.
[00:16:14.920 --> 00:16:24.120] Michael, when I wrote my book, Heavens on Earth, about the scientific search for the afterlife, I dealt with the mind uploaders, the cryonicists, and all those transhumanists.
[00:16:24.120 --> 00:16:29.640] But I had questions for them that I thought could apply to theists who make this argument.
[00:16:29.880 --> 00:16:32.600] That is, I'll just give you a few of these.
[00:16:32.600 --> 00:16:40.360] How does God go about the duplication or transformation process to ensure conscious continuity and personal perspective?
[00:16:40.360 --> 00:16:44.840] Is it your atoms and patterns that are resurrected or just the patterns?
[00:16:44.840 --> 00:16:52.520] If both, and you are physically resurrected, does God reconstitute your body so it's no longer subject to disease and aging?
[00:16:52.520 --> 00:16:58.520] If it's just the patterns of you that are resurrected, what is the platform that holds the information?
[00:16:58.520 --> 00:17:03.240] Is there something in heaven that is the equivalent of a computer hard drive or the cloud?
[00:17:03.240 --> 00:17:07.560] Is there some sort of heavenly quantum field that retains your thoughts and memories?
[00:17:07.560 --> 00:17:24.200] If you're not religious, or even if you are, and hold out hope that one day scientists will be able to clone your body and duplicate your brain or upload your mind into a computer or the cloud or create a virtual reality in which you come alive again, we face the same technological problems as God would of how all this is done.
[00:17:24.200 --> 00:17:26.040] What is your thought about that?
[00:17:26.360 --> 00:17:28.440] Well, St.
[00:17:28.520 --> 00:17:42.520] Thomas Aquinas believed that we can know a great deal about ourselves and our ultimate fate, et cetera, by reason, but that there are things we could only know by faith.
[00:17:42.840 --> 00:17:45.280] And I think one has to distinguish that.
[00:17:45.440 --> 00:18:02.400] There's much that I cannot prove or I cannot present evidence for about my beliefs about the afterlife and heaven and God that I do believe, but I believe as a matter of reading the Bible and being a Catholic and studying that.
[00:18:02.400 --> 00:18:05.760] But those are not strictly scientific questions.
[00:18:05.760 --> 00:18:10.400] But there's much that I do believe that has a scientific basis.
[00:18:10.720 --> 00:18:19.040] And from what I see, the science dovetails rather nicely with the religious perspective of Christianity.
[00:18:19.360 --> 00:18:22.080] But I don't claim scientific proof for any of that.
[00:18:22.800 --> 00:18:31.120] But I think that the soul is immortal based on scientific reasons that have nothing to do with faith.
[00:18:31.600 --> 00:18:33.360] Well, I mean, there's nothing that's immortal.
[00:18:33.360 --> 00:18:39.840] The universe has a finite lifetime, and there's no scientific evidence for anything being immortal.
[00:18:39.840 --> 00:18:42.480] But, Mike, let's get down a little bit to fact.
[00:18:42.560 --> 00:19:09.920] There's this beautiful case study by one of your colleagues, Itzhak Fried, whom I've worked with for many years, that was published a couple of years ago about a 14-year-old girl, no previous medical illnesses, who complained when she got to 13 or so of intense episodes of feeling guilty, guilt, and some distress, typically in a social situation at school or with friends or with her parents.
[00:19:10.160 --> 00:19:30.000] At some point, she got tonic-clonic epileptic seizures and was seen and evaluated by neurologists, and they detected a tumor in the anterior cingulate, in the subgenual region, you know, so in the frontal part, but underneath the corpus callosum.
[00:19:31.080 --> 00:19:47.880] And then they did this study where first Itzhak Fried put electrodes in there to monitor, and could show that every time you stimulate in the brain of this girl, you evoked intense feelings of guilt.
[00:19:47.880 --> 00:19:53.080] Now, guilt, so I grew up, as a disclosure, I grew up as a Roman Catholic and was devout.
[00:19:53.080 --> 00:19:56.600] And of course, there's a lot of guilt in Roman Catholicism.
[00:19:56.840 --> 00:20:01.160] I mean, it's sort of a Catholic emotion.
[00:20:01.160 --> 00:20:04.280] So here you could evoke guilt.
[00:20:04.280 --> 00:20:07.560] Furthermore, then they did the resection.
[00:20:07.560 --> 00:20:17.080] They removed the tumor in a second operation, and the girl never had these episodes of guilt again.
[00:20:17.080 --> 00:20:24.520] I don't know whether she had any feelings of guilt about anything else, but I do know that these intense episodes of guilt were gone.
[00:20:24.520 --> 00:20:43.000] So here you have a really nice causal study where a social emotion that is often associated with religious practice, feeling guilty about a sin you committed or something inappropriate that you did or didn't do, seems to have a very concrete correlate in a particular area of the brain.
[00:20:43.000 --> 00:20:46.760] You stimulate, you get guilt, you remove the area, the guilt is gone.
[00:20:47.400 --> 00:20:50.360] So why do you need anything else?
[00:20:50.360 --> 00:20:57.240] Well, in addition, there's a soul that feels guilt, but yeah, you can postulate that, but that seems completely redundant.
[00:20:57.240 --> 00:21:01.880] Here, it's all explained by a particular part of the brain that's active.
[00:21:02.200 --> 00:21:05.960] Yeah, it's kind of like they removed her Catholicism.
[00:21:05.960 --> 00:21:07.320] Yeah, the.
[00:21:09.000 --> 00:21:09.640] Yeah, I don't know.
[00:21:09.720 --> 00:21:10.960] I mean, this was in Israel.
[00:21:11.080 --> 00:21:12.680] I presume she was Jewish.
[00:21:12.680 --> 00:21:14.120] All right, all right.
[00:21:15.920 --> 00:21:39.600] The Aristotelian way, and I think this is strongly supported by the neuroscience, of understanding the issues of necessity and sufficiency for various aspects of the mind, is that there are five aspects of the soul, which I use almost interchangeably with the idea of mind.
[00:21:39.600 --> 00:21:52.000] Five aspects of the soul that are clearly material and are intimately associated with the body, and the body is necessary and sufficient for the exercise of these five things.
[00:21:52.000 --> 00:21:57.760] The first thing is homeostasis, physiological maintenance of the heart rate, the breathing, all that stuff.
[00:21:57.760 --> 00:21:59.840] Clearly, that's controlled by the brain.
[00:21:59.840 --> 00:22:01.440] The second is locomotion.
[00:22:01.440 --> 00:22:03.520] The brain lets me move my limbs.
[00:22:04.240 --> 00:22:05.840] The third is sensation.
[00:22:05.840 --> 00:22:10.720] The brain is obviously involved in my vision, my hearing, my tactile sense.
[00:22:10.880 --> 00:22:12.720] The fourth is emotion.
[00:22:12.720 --> 00:22:16.720] The brain is obviously involved in emotional states like feelings of guilt and so on.
[00:22:16.720 --> 00:22:18.960] That clearly is related to the brain.
[00:22:18.960 --> 00:22:23.120] You can change your emotional states very easily by having a glass of wine.
[00:22:23.440 --> 00:22:25.600] So that obviously changes.
[00:22:25.600 --> 00:22:27.120] And the fifth is memory.
[00:22:27.120 --> 00:22:32.240] The brain is clearly involved in memory, the hippocampi, the mammillary bodies.
[00:22:34.160 --> 00:22:39.440] You can lose the ability to form short-term memories by bilateral hippocampal lesions.
[00:22:39.440 --> 00:22:43.600] So those are material activities of the brain.
[00:22:43.920 --> 00:22:55.920] So this young girl was obviously having either subclinical seizures or some kind of problem in a region of her cingulate gyrus that was causing her to have this emotion of guilt.
[00:22:55.920 --> 00:22:59.360] But Aristotle would say, hey, that's perfectly fine.
[00:22:59.360 --> 00:23:00.760] Emotion comes from the brain.
[00:23:00.760 --> 00:23:01.880] No one doubts that.
[00:23:02.680 --> 00:23:04.920] Including the conscious experience, of course, right?
[00:23:05.400 --> 00:23:07.320] Of course I'm talking about the conscious experience.
[00:23:07.640 --> 00:23:08.680] Absolutely.
[00:23:09.000 --> 00:23:14.360] The Aristotelian view of the intellect and will is different.
[00:23:14.520 --> 00:23:16.520] And the Thomistic view follows that.
[00:23:16.520 --> 00:23:30.040] That the intellect, and that's the capacity to form concepts that are abstract, the capacity to make propositions, the capacity to do reason, the capacity to make arguments, is an immaterial power of the human soul.
[00:23:30.040 --> 00:23:31.400] It depends on the brain.
[00:23:31.400 --> 00:23:38.120] That is, the brain is necessary ordinarily for the ordinary exercise of our intellect, the kind of thing we're doing right now.
[00:23:38.600 --> 00:23:44.120] If I get hit on the head, I'm not going to be as effective on this podcast as if I wasn't hit on the head.
[00:23:44.120 --> 00:23:47.000] But the brain is not sufficient for that.
[00:23:47.000 --> 00:23:53.240] That is, that there's an immaterial aspect of our soul that is responsible for our intellect.
[00:23:53.240 --> 00:23:54.760] And the same for will.
[00:23:54.760 --> 00:23:58.120] Will is the appetite that follows on reason.
[00:23:58.120 --> 00:24:01.000] And that's an immaterial aspect of our soul as well.
[00:24:01.000 --> 00:24:04.920] And there's actually a ton of neuroscience that I think supports that viewpoint.
[00:24:04.920 --> 00:24:07.240] There's also a ton of logic that supports that viewpoint.
[00:24:07.240 --> 00:24:08.680] I'm happy to go into if you want.
[00:24:09.960 --> 00:24:11.080] Yeah, go ahead, Christof.
[00:24:11.400 --> 00:24:31.960] Okay, so A, we know in neurosurgical experiments, right, you can evoke the feeling of not just the movement of limbs if you stimulate motor or premotor area, but you can also evoke the feeling of will, where people say, oh, I wanted to move my mouth, but I didn't actually move it.
[00:24:31.960 --> 00:24:32.840] I had an intent.
[00:24:33.000 --> 00:24:37.640] Either a patient will talk about intention or they will talk about will.
[00:24:37.640 --> 00:24:46.720] Now, so you can always say, so we know that stimulating particular parts of the brain seems to involve centers that typically are involved.
[00:24:46.720 --> 00:24:56.000] Also, if you scan people and you do, you know, you do appropriate experiments that seem to relate to the execution of what we would call voluntary activity, right?
[00:24:56.000 --> 00:24:59.040] I raise my hand now or now, like in the Libet experiment.
[00:24:59.040 --> 00:25:02.880] Now, of course, you can always say, Yeah, this is all necessary, but it's not sufficient.
[00:25:02.880 --> 00:25:05.360] There is this additional quantity X.
[00:25:06.000 --> 00:25:11.120] I'm not sure how I would ever show that's not the case.
[00:25:11.120 --> 00:25:17.520] You know, it reminds me a little bit of the receiver, the radio receiver theory of William James that he voiced at some point.
[00:25:17.520 --> 00:25:18.880] I don't know how serious he was about it.
[00:25:18.880 --> 00:25:32.560] It was like, Yeah, I mean, clearly, the brain is necessary, but you know, there's this sort of this, it's as necessary as, you know, if you're listening to a symphony orchestra, the radio is necessary, but this radio itself doesn't generate the music.
[00:25:32.560 --> 00:25:34.640] It's generated somewhere else.
[00:25:34.640 --> 00:25:53.840] Again, that's not very satisfactory compared to the standard neuroscience explanation, which is, I know, this piece of brain is necessary and sufficient to generate consciousness, conscious experience, or the feeling of will, or wanting, or intending.
[00:25:54.160 --> 00:26:07.840] But, Michael, it's your contention that there are certain areas under brain stimulation that cause basic sensory or motor experiences, but not higher abstract reasoning or free will choices.
[00:26:07.840 --> 00:26:11.040] Where do you imagine these things are happening?
[00:26:11.920 --> 00:26:25.040] Well, first of all, if I may, I just want to point out, as Christoph raised the neuroscientific evidence, there's quite a bit of neuroscientific evidence that supports the view that the intellect and will are immaterial powers of the soul.
[00:26:25.280 --> 00:26:29.200] Wilder Penfield did a lot of great work on this.
[00:26:29.200 --> 00:26:33.960] He operated on 1,100 patients over his career with awake brain surgery.
[00:26:34.120 --> 00:26:39.080] The patients were awake, they had local anesthesia, they didn't feel pain, and he mapped their cortex.
[00:26:39.080 --> 00:26:47.080] And he found that he could elicit four and only four different kinds of responses from stimulating the cortex.
[00:26:47.080 --> 00:26:49.080] He could elicit movement.
[00:26:50.040 --> 00:26:52.280] He could elicit sensations of various sorts.
[00:26:52.280 --> 00:26:56.040] He could elicit memories occasionally, particularly in the temporal lobe.
[00:26:56.040 --> 00:26:58.760] And he could elicit emotions occasionally.
[00:26:58.760 --> 00:27:01.720] He could not elicit what he called mind action.
[00:27:01.720 --> 00:27:08.920] And what he meant by mind action was abstract thought, a sense of self, contemplation.
[00:27:08.920 --> 00:27:11.560] And he then looked at seizures.
[00:27:11.560 --> 00:27:17.400] And he looked at the phenomenology of seizures, what happens when people have seizures.
[00:27:17.400 --> 00:27:25.960] And I've seen the same thing, that when people have seizures, if they remain conscious during the seizures, they have one of four things.
[00:27:25.960 --> 00:27:27.000] They'll have movement.
[00:27:27.000 --> 00:27:29.960] They can have a focal seizure where they move a limb.
[00:27:29.960 --> 00:27:33.160] They'll have sensations, a tingling or a flash of light.
[00:27:33.720 --> 00:27:36.280] You can have memories evoked by seizures.
[00:27:36.600 --> 00:27:40.120] And you can have emotions evoked by seizures.
[00:27:40.120 --> 00:27:43.560] There are kinds of seizures that make you laugh, gelastic seizures.
[00:27:43.560 --> 00:27:46.200] There are seizures that make you very frightened.
[00:27:46.200 --> 00:27:48.440] But there are no calculus seizures.
[00:27:48.440 --> 00:27:50.040] There are no mathematic seizures.
[00:27:50.040 --> 00:27:51.640] There are no logic seizures.
[00:27:51.640 --> 00:27:55.800] There are no seizures that make a person engage in abstract thought.
[00:27:56.120 --> 00:28:00.920] The closest seizure that comes to that is a kind of a seizure called a forced thinking seizure.
[00:28:00.920 --> 00:28:04.200] And it's a seizure that was studied in some detail by Penfield.
[00:28:04.200 --> 00:28:05.320] They're very rare.
[00:28:05.320 --> 00:28:09.400] And forced thinking seizures, I think of as more like OCD seizures.
[00:28:09.400 --> 00:28:15.760] They're people who compulsively think about a particular thing, like, did I lock the door?
[00:28:14.840 --> 00:28:16.960] Did I turn the oven off?
[00:28:17.120 --> 00:28:17.600] Stuff like that.
[00:28:17.600 --> 00:28:22.320] But it's more OCD kind of things rather than abstract thought per se.
[00:28:22.320 --> 00:28:33.280] So there is considerable scientific evidence that intellect and will cannot be evoked from the brain in terms of abstract thought by stimulating the brain.
[00:28:36.080 --> 00:28:40.160] Absence of evidence isn't evidence of absence, right?
[00:28:40.160 --> 00:28:42.240] A prime precept of science.
[00:28:42.240 --> 00:28:44.320] And Penfield did his work 100 years ago.
[00:28:44.320 --> 00:28:55.600] Yes, he was a genius and he started or he really pushed the field of electric stimulation, but he also didn't find pain responses that we now know are prevalent in the insula.
[00:28:55.600 --> 00:29:00.880] And of course, you can put people in a magnetic scanner and ask them to do math problems, which people have done.
[00:29:00.880 --> 00:29:06.400] And you see particular
[00:29:59.920 --> 00:30:03.320] regions in the prefrontal cortex will light up.
[00:30:03.400 --> 00:30:12.120] And incidentally, if you, yeah, so you know, one-third of the brain is what you and your colleagues call non-eloquent cortex.
[00:30:12.120 --> 00:30:15.720] But it's simply not true that if you remove them, patients don't have losses.
[00:30:15.720 --> 00:30:34.280] If you remove them and you test them in specific contexts where they have to do social reasoning or moral reasoning or higher-order thinking, they all show deficits, because those parts of the brain are associated with what we call higher intelligence, and really with intelligence, right?
[00:30:34.280 --> 00:30:38.120] Now you can say, well, that's necessary, but that's not sufficient.
[00:30:38.120 --> 00:30:40.760] Yeah, and that, you know, you can always argue that.
[00:30:40.760 --> 00:30:50.200] But it is well known and established through 100 years of functional neurology that, I mean, there are no bits and pieces of the brain that don't have any function.
[00:30:50.200 --> 00:30:54.280] That are just there to, you know, fill up the cranium, right?
[00:30:54.280 --> 00:30:59.160] Every piece of brain has a specific function, otherwise it wouldn't have evolved.
[00:30:59.800 --> 00:31:01.480] Well, hang on.
[00:31:01.480 --> 00:31:11.480] So is it your point, Christof, that stimulating any one particular area is not going to trigger, say, mathematical reasoning, because it's not done in any one area?
[00:31:11.800 --> 00:31:14.120] It's more of an integrated system.
[00:31:14.680 --> 00:31:17.800] Yeah, I mean, and what happens in electrical stimulation?
[00:31:17.800 --> 00:31:19.640] Why does electrical stimulation work?
[00:31:19.880 --> 00:31:28.200] So let's say it's best explored in the visual brain, where you have neurons nearby that represent, you know, let's say points in a particular part of visual space.
[00:31:28.520 --> 00:31:35.720] If you now introduce an electrode in this tissue, you're going to excite neurons primarily close by, not only, but primarily close by.
[00:31:35.720 --> 00:31:41.160] They all represent that particular portion of visual space, and therefore you see what's called a phosphene.
[00:31:41.160 --> 00:31:45.000] You see some shimmering light there, and that's what gets reported.
[00:31:45.120 --> 00:31:56.880] Now, higher-order things like intelligence, like reasoning, is very likely not organized in this very topographic way because there isn't a single mapping.
[00:31:57.280 --> 00:32:06.400] The visual brain is organized in this map-like way because there's a nice mapping from the outside, from the visual world outside to our visual cortex.
[00:32:06.400 --> 00:32:10.080] There isn't such a mapping for intelligence, for higher-order reasoning.
[00:32:10.080 --> 00:32:19.600] And therefore, you don't get, if you stimulate, if you put one electrode in, you don't get what Mike refers to as sort of, oh, I just thought of E equals mc squared.
[00:32:19.600 --> 00:32:31.600] Although, I must point out, having worked on single neurons with Itzhak Fried, we have shown a patient, a physics student who had to get presurgical epilepsy mapping.
[00:32:32.000 --> 00:32:36.640] We showed him images, including of E equals mc squared and Albert Einstein.
[00:32:36.800 --> 00:32:39.680] We had single neurons that responded to that image.
[00:32:40.720 --> 00:32:42.080] Concept neurons, huh?
[00:32:42.080 --> 00:32:43.600] Yes, concept neurons, exactly.
[00:32:43.920 --> 00:32:45.520] The Jennifer Anderson neuron.
[00:32:45.840 --> 00:32:46.480] Right, right.
[00:32:47.600 --> 00:32:49.040] But as I said, I agree.
[00:32:49.040 --> 00:32:52.160] Jennifer Aniston isn't an abstract scientific thought.
[00:32:52.160 --> 00:32:58.080] But anyhow, so they're not nearby in a way that you can directly evoke them with electrical stimulation.
[00:32:58.080 --> 00:33:04.480] And just because you don't see them in this way doesn't mean they don't have a substrate in the brain.
[00:33:04.480 --> 00:33:04.800] Right.
[00:33:04.800 --> 00:33:07.040] Well, a couple of things.
[00:33:07.040 --> 00:33:13.360] One is that one has to recognize the denominator in Penfield's work.
[00:33:13.840 --> 00:33:15.440] He did 1,100 patients.
[00:33:15.440 --> 00:33:23.760] If the operations took eight hours and he did two stimulations per minute, that means he did 1.1 million stimulations.
[00:33:23.760 --> 00:33:28.240] And in not a single one of them was he able to evoke any kind of abstract thought.
[00:33:28.560 --> 00:33:39.000] I did a back-of-the-envelope calculation of the number of seizures that human beings have had over the last 200 years, which is the period of time that we've had modern neuroscience.
[00:33:39.000 --> 00:33:41.000] And it's been about a quarter of a billion seizures.
[00:33:41.000 --> 00:33:49.080] And I'm unaware of a single report in the medical literature of a patient having a seizure whose ictus involved abstract thought.
[00:33:49.080 --> 00:33:58.760] So I recognize that you feel that the cause of abstract thought is sort of hiding in various places in the brain.
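For readers who want to check the numbers, here is a minimal Python sketch of the back-of-the-envelope arithmetic Egnor quotes above. The case count, operation length, and stimulation rate are the figures he states in the conversation, not independently verified data.

```python
# Back-of-the-envelope check of the stimulation count cited in the conversation.
# All inputs are the figures quoted by the speaker, not verified data.
patients = 1_100               # Penfield's awake-craniotomy cases, as stated
hours_per_operation = 8        # assumed operation length, as stated
stimulations_per_minute = 2    # assumed stimulation rate, as stated

total_stimulations = patients * hours_per_operation * 60 * stimulations_per_minute
print(f"{total_stimulations:,} stimulations")  # 1,056,000 -- roughly the 1.1 million cited
```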
[00:33:59.160 --> 00:34:00.520] It's not hiding, it's distributed.
[00:34:00.520 --> 00:34:08.360] Just like in a deep neural network, where the intelligence that emerges out of ChatGPT is distributed across 120 layers.
[00:34:08.360 --> 00:34:09.320] It's not localized.
[00:34:09.560 --> 00:34:10.760] You can't point to this layer.
[00:34:10.760 --> 00:34:12.600] Ah, this is what makes it so smart.
[00:34:12.600 --> 00:34:15.560] No, it's distributed across 122 layers.
[00:34:15.560 --> 00:34:22.440] So if you were to simulate, to stimulate in a deep neural network, again, you wouldn't get an abstract thought.
[00:34:22.440 --> 00:34:29.400] You would get something you couldn't interpret because it is distributed across the entire network.
[00:34:29.400 --> 00:34:30.760] Same thing with human intelligence.
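A toy numerical sketch of the point Koch is making here, not a model of ChatGPT or of any real brain: when a "concept" is carried by a pattern distributed across many units, silencing any single unit changes the decoded signal almost not at all, while the signal only exists at the level of the whole population. All numbers are arbitrary illustration values.

```python
import numpy as np

# Toy illustration of a distributed representation: a "concept" read out as a
# weighted sum over many units rather than stored in any single unit.
rng = np.random.default_rng(0)
n_units = 1_000

readout_weights = rng.normal(size=n_units) / np.sqrt(n_units)  # how the concept is decoded
population_activity = rng.normal(size=n_units)                 # current activity of all units

full_signal = readout_weights @ population_activity

# Silencing one unit barely changes the decoded concept...
perturbed = population_activity.copy()
perturbed[0] = 0.0
print("change from silencing one unit:", abs(readout_weights @ perturbed - full_signal))

# ...but the signal vanishes if the whole population is silenced.
print("signal with whole population silenced:", readout_weights @ np.zeros(n_units))
```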
[00:34:31.240 --> 00:34:35.400] But you can get remarkably complex reactions.
[00:34:35.400 --> 00:34:37.000] For example, memories.
[00:34:37.000 --> 00:34:44.680] You can stimulate the temporal lobe and get detailed memories of events that happened years before, including the details of conversations.
[00:34:46.760 --> 00:34:50.120] There have been people who've, but you can also get that with seizures.
[00:34:50.120 --> 00:34:56.360] There have been people who've had complex partial seizures who are able to drive cars during the time they're having a seizure.
[00:34:56.840 --> 00:35:03.480] So you can elicit enormously complex things from the brain, except abstract thought.
[00:35:04.440 --> 00:35:06.120] Certain types of abstract thought.
[00:35:06.120 --> 00:35:08.680] Look, I mean, the literature is full here.
[00:35:08.680 --> 00:35:10.680] I can give you a description.
[00:35:10.680 --> 00:35:17.200] The stimulation induces the disappearance of the word in my mind and replaces it with an idea of leaving.
[00:35:17.520 --> 00:35:18.160] Leaving.
[00:35:18.160 --> 00:35:20.560] So he's in the OR, he's being stimulated.
[00:35:20.560 --> 00:35:25.440] This is again in the context of a presurgical workup.
[00:35:25.440 --> 00:35:32.080] So now we can argue: is this abstract the idea of leaving, or is this something still concrete because it represents space?
[00:35:32.080 --> 00:35:32.880] I don't know.
[00:35:33.360 --> 00:35:35.440] It's a continuum.
[00:35:36.080 --> 00:35:51.600] And I think, I mean, René Descartes, Mike, what reminds me of René Descartes is that probably the key reason he postulated res cogitans, thinking substance, you know, the classical dualism that I understand you don't hold.
[00:35:51.600 --> 00:36:06.320] But many people, of course, do hold, because he was unable to conceive how matter, mechanistic matter, as he understood, right, before the advent of electromagnetism and quantum mechanics, could produce thought.
[00:36:06.320 --> 00:36:14.160] Now, today we know how thinking and how intelligence arises, because we can build it into artificial networks.
[00:36:15.920 --> 00:36:19.520] So, how does a neural network produce a concept?
[00:36:20.160 --> 00:36:21.920] What are the details on that?
[00:36:22.240 --> 00:36:23.280] Oh, it's abstract.
[00:36:23.280 --> 00:36:26.560] It's a progressive layer, so you wouldn't get it in two or three layers.
[00:36:26.560 --> 00:36:37.200] But if you have a deep enough network, look, if we interact every day over the next month or two, my brain will wire up neurons that represent you.
[00:36:37.520 --> 00:36:55.120] And if I think about you and your ideas, and hylomorphism, and Aristotle and Thomas Aquinas, all of that, I will wire up neurons that represent that because it's an elegant way for the brain using minimal neuronal resources to represent things that my brain obviously deems important enough.
[00:36:55.120 --> 00:36:59.880] I mean, we know because we can simulate such things in artificial neural networks.
[00:36:59.680 --> 00:37:01.000] There's no magic about it.
[00:37:01.880 --> 00:37:08.200] You can simulate the causation of concepts in artificial neural networks.
[00:37:09.320 --> 00:37:10.280] I don't get that.
[00:37:10.680 --> 00:37:13.560] Do artificial neural networks contemplate mercy?
[00:37:13.560 --> 00:37:15.960] I mean, it's a bizarre statement.
[00:37:15.960 --> 00:37:17.640] Not yet, but soon.
[00:37:18.280 --> 00:37:19.960] Yeah, I see a priori.
[00:37:19.960 --> 00:37:20.440] You're right.
[00:37:20.680 --> 00:37:24.840] I haven't dealt with mercy because I'm a visual neuroscientist.
[00:37:24.840 --> 00:37:27.000] I'm interested in, you know.
[00:37:27.320 --> 00:37:33.960] And so I haven't dealt with it, but I see no reason for it to.
[00:37:34.360 --> 00:37:37.960] Let me ask a counterfactual question for you, Michael, on specific examples.
[00:37:37.960 --> 00:37:43.640] The two most famous from neuroscience, Phineas Gage and Henry Molaison.
[00:37:44.200 --> 00:37:48.920] Why didn't their brains or minds come back in after the damage?
[00:37:49.320 --> 00:37:51.080] Well, first of all, I don't know Molaison.
[00:37:51.240 --> 00:37:51.960] I know Gage.
[00:37:52.840 --> 00:38:07.000] Molaison was the one who was HM, yeah, whose hippocampus, what was it, was taken out because of his seizures, and then he could never again form new memories.
[00:38:08.600 --> 00:38:12.040] So I'll ask a more general question, which I asked Deepak Chopra.
[00:38:12.280 --> 00:38:16.040] Where does Aunt Millie's mind go when her brain dies of Alzheimer's?
[00:38:16.040 --> 00:38:23.160] And Deepak's answer, it returns to where it was before because he believes consciousness is the ground of all being and so on.
[00:38:23.320 --> 00:38:34.040] First of all, Phineas Gage obviously got a spike driven through his frontal lobes, and it changed his emotional state, his baseline emotions.
[00:38:34.040 --> 00:38:44.720] He had been a very religious, kind of conservative guy beforehand, and was a little more of a libertine after this injury, which is totally consistent with a hylomorphic view of how the brain works.
[00:38:44.440 --> 00:38:47.840] Meaning, you can change emotions, you can change emotional states.
[00:38:48.240 --> 00:38:55.680] If you're continuously drinking whiskey, you're going to emotionally be a different person, or if you get a spike driven through your brain, you're going to be emotionally different.
[00:38:56.720 --> 00:39:00.640] Getting the spike driven through his brain didn't make him into a mathematician.
[00:39:00.640 --> 00:39:05.200] That is, there's a difference between emotional states and abstract thought.
[00:39:05.200 --> 00:39:11.600] On the loss of memory from bilateral hippocampal injury, yeah, I mean, that's a very, very real thing.
[00:39:12.080 --> 00:39:17.040] But that's memory, and that is a physical aspect of what the brain does.
[00:39:17.440 --> 00:39:19.200] No one denies that.
[00:39:19.200 --> 00:39:20.960] Memory is not abstract.
[00:39:20.960 --> 00:39:23.600] So you lose the ability to form new memories.
[00:39:23.600 --> 00:39:25.120] Yes, that's certainly true.
[00:39:25.120 --> 00:39:26.880] But it's abstract thought.
[00:39:26.880 --> 00:39:40.720] And I'm still not clear on how a neural network, integrated neural network, whatever, how does the Venn diagram of an integrated neural network overlap with psychological predicates like concepts?
[00:39:42.720 --> 00:39:43.680] How does that work?
[00:39:44.000 --> 00:39:51.040] Well, if you feed it, let's say, human novels, as is standard now in GPT, right?
[00:39:51.040 --> 00:39:52.800] You feed it human literature.
[00:39:53.280 --> 00:39:58.560] I am sure I haven't tested it, but because now it has access to all of the human literature.
[00:39:58.560 --> 00:40:04.400] And human literature is full of things like emotion and religion and mercy and other concepts.
[00:40:04.400 --> 00:40:11.840] And ultimately, in these networks, there will emerge nodes or clusters of nodes that will represent the idea of mercy.
[00:40:11.840 --> 00:40:20.080] And I'm sure we can right now; I can ask ChatGPT what it thinks about mercy, and it'll give me some reasonable-sounding answer.
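A minimal sketch of the kind of query Koch describes, assuming the OpenAI Python SDK is installed and an API key is available in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative, not part of the conversation.

```python
# Sketch only: asks a chat model the question Koch mentions.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "What do you think about mercy?"}],
)
print(response.choices[0].message.content)
```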
[00:40:20.080 --> 00:40:20.560] Right.
[00:40:20.560 --> 00:40:27.040] But does the ChatGPT have the subjective experience of thinking about mercy?
[00:40:27.080 --> 00:40:27.680] Oh, boy.
[00:40:27.680 --> 00:40:28.960] No, so that's a separate.
[00:40:29.600 --> 00:40:32.360] Mike, so we can have an entirely separate debate.
[00:40:32.600 --> 00:40:38.840] And I think this is the debate where I'm personally much more sympathetic, where something like hylomorphism may be relevant.
[00:40:39.000 --> 00:40:47.240] What we don't understand, even in the context of a single visual stimulus, an isolated bright light, you know, cars can navigate.
[00:40:47.240 --> 00:40:50.200] A Tesla can navigate, but I can see.
[00:40:51.000 --> 00:40:53.800] Tesla car doesn't see in the way we see.
[00:40:54.040 --> 00:40:55.960] It doesn't have the subjective feeling.
[00:40:55.960 --> 00:40:57.400] But that applies to anything.
[00:40:57.400 --> 00:41:03.400] That applies to seeing, hearing, smelling, touching, wanting, desiring, and also abstract thoughts.
[00:41:03.400 --> 00:41:16.680] That is a true mystery which right now we don't have any at least generally accepted answer to how conscious experience comes about in a physical world.
[00:41:17.000 --> 00:41:31.160] I think the only thing that we can say with confidence, and we can say a lot of things with confidence about this, is that there is a correspondence between certain activities in the brain and certain psychological states.
[00:41:31.960 --> 00:41:33.400] There's no question about that.
[00:41:33.400 --> 00:41:51.640] However, from an ontological standpoint, it is not clear how the material activity in the brain corresponds to, or generates, or is related to the psychological aspect, which is the beating heart of the mind-body problem.
[00:41:51.640 --> 00:41:52.040] Right.
[00:41:52.040 --> 00:42:00.280] And I think it is largely a problem that we've created ourselves with what I think is faulty metaphysics.
[00:42:00.840 --> 00:42:07.400] Prior to Descartes, the hylomorphic way of understanding the world was widespread in the Western world.
[00:42:07.400 --> 00:42:09.640] Descartes really changed things on that.
[00:42:09.640 --> 00:42:17.840] And of course, his view of the human being was that the human being was this composite of two substances, of race cogitans and race extensa.
[00:42:18.160 --> 00:42:25.600] And the race cogitans was the soul, the thinking substance, and a race extensa was really a machine, a biological machine.
[00:42:26.160 --> 00:42:32.320] Because of the rise of materialism over the next couple of centuries, people kind of discarded the race cogitans.
[00:42:32.320 --> 00:42:36.480] They got rid of the ghost in the machine and just left us with the machine.
[00:42:36.480 --> 00:42:44.880] And so from that kind of materialistic, and people call it a mechanistic philosophy, that kind of philosophical perspective, we are meat machines.
[00:42:44.880 --> 00:42:46.480] We're meat robots.
[00:42:46.480 --> 00:42:57.040] And then we wonder why we can't explain how the mind works when our very metaphysical presuppositions about what we are exclude the mind explicitly.
[00:42:57.040 --> 00:43:04.160] That is, if you're working from a Cartesian background and you get rid of res cogitans, the only thing you've got left is the meat machine.
[00:43:04.160 --> 00:43:06.640] Why do you wonder about why you can't explain the mind?
[00:43:08.320 --> 00:43:19.360] I would differ with you only in saying that every behavior associated with mind can be explained satisfactorily using a standard metaphysical framework.
[00:43:19.360 --> 00:43:26.880] Why someone displays the emotions of guilt and thereby does certain action, all of that can be explained.
[00:43:26.880 --> 00:43:33.120] But I grant you, the feeling, the experiential aspect cannot be explained yet.
[00:43:33.120 --> 00:43:40.080] Now, it may be that what's needed is a reformulation. See, for the same reason you mentioned, we now live, of course, in a quantum mechanical world.
[00:43:40.080 --> 00:43:49.840] And quantum mechanics is much more difficult to capture within the standard materialistic framework that really depends on classical physics and René Descartes.
[00:43:49.840 --> 00:43:51.840] So maybe there's an answer there.
[00:43:51.840 --> 00:43:55.280] Maybe some other answers are panpsychism or idealism.
[00:43:55.280 --> 00:43:56.880] Those are other answers.
[00:43:57.200 --> 00:44:03.480] But by the way, none of them say anything at all about immortality, including my reading at least of Aristotle.
[00:44:03.640 --> 00:44:11.160] This immortality is something that was added post factum, because people obviously prefer to live forever.
[00:44:11.160 --> 00:44:19.400] We have no evidence whatsoever, at least scientific, third-person evidence, for this supposed immortality.
[00:44:19.400 --> 00:44:20.520] Well, I'm all for it.
[00:44:20.840 --> 00:44:29.640] Yeah, well, I mean, Aristotle was actually silent on the question of immortality of the soul.
[00:44:30.680 --> 00:44:35.240] Plato, of course, and Thomas Aquinas were not.
[00:44:35.720 --> 00:44:41.240] The argument for immortality of the soul takes kind of two forms.
[00:44:41.240 --> 00:44:43.480] One is a logical argument.
[00:44:43.480 --> 00:44:48.280] The other is an argument particularly related to near-death experiences.
[00:44:48.280 --> 00:45:09.000] But the logical argument for immortality of the soul that was put forth by Thomas Aquinas is that because we have immaterial powers of intellect and will, immaterial capacities for reason, an immaterial power cannot die in the same way that a material substance can die.
[00:45:09.000 --> 00:45:12.920] When we die, what we mean by dying is we disintegrate.
[00:45:12.920 --> 00:45:15.880] That is, that, for example, my body won't go away.
[00:45:15.880 --> 00:45:18.200] The atoms in my body will always be here.
[00:45:18.200 --> 00:45:20.760] They'll just be eaten up by worms.
[00:45:21.080 --> 00:45:25.960] What will happen to my body when I die is that there will be a lack of integration.
[00:45:25.960 --> 00:45:28.520] The body will fall apart and go into pieces.
[00:45:28.520 --> 00:45:36.040] The thing is that immaterial aspects of human souls can't disintegrate because they aren't material to begin with.
[00:45:36.040 --> 00:45:38.280] They can't lose parts because they don't have parts.
[00:45:38.760 --> 00:45:40.440] Is it information, Michael?
[00:45:41.000 --> 00:45:47.200] Well, it resembles information, but information is a very tricky concept.
[00:45:47.200 --> 00:45:53.840] I mean, semantically, information is just the set of true propositions.
[00:45:53.840 --> 00:45:54.960] So how that relates to the materiality of the system is unclear.
[00:45:56.400 --> 00:45:59.840] If I can interrupt: information comes from informare.
[00:45:59.840 --> 00:46:06.640] It again goes back to Aristotle: informare, to give form to.
[00:46:06.640 --> 00:46:13.360] But, Mike, aren't you then troubled by the fact that GPT-4 can reason?
[00:46:13.360 --> 00:46:15.520] It can actually reason, right?
[00:46:15.520 --> 00:46:16.240] You can ask it questions.
[00:46:16.320 --> 00:46:18.480] No, no, no, ask it to explain itself.
[00:46:18.480 --> 00:46:18.960] It can reason.
[00:46:19.200 --> 00:46:21.200] So, it also is immortal?
[00:46:21.200 --> 00:46:26.320] No, no, GPT can't reason any more than my watch can tell time.
[00:46:26.320 --> 00:46:28.480] GPT is a tool.
[00:46:28.480 --> 00:46:33.600] It's a tool in the same way a book is a tool, or a sundial is a tool, or a watch is a tool.
[00:46:33.680 --> 00:46:34.880] It's highly biased.
[00:46:34.880 --> 00:46:36.960] It has all the appearance of reasoning.
[00:46:36.960 --> 00:46:42.640] You can ask it a particular argument, it'll explain you why it came to this conclusion, right?
[00:46:42.640 --> 00:46:47.520] Some people now, Mike Weiser, he does regular mathematics with it.
[00:46:47.520 --> 00:46:49.200] He asks it why this, why not that?
[00:46:49.360 --> 00:46:49.840] It can reason.
[00:46:50.080 --> 00:46:52.480] No, it's a tool that leverages human reason.
[00:46:52.480 --> 00:46:58.560] It's been programmed in such a way that given a particular input, it gives a particular output.
[00:46:58.560 --> 00:46:59.280] No, that sounds like a problem.
[00:46:59.360 --> 00:47:03.120] Ask it the same question 10 times, and it gives you 10 different answers.
[00:47:03.120 --> 00:47:06.480] It is not programmed in any conventional way.
[00:47:06.480 --> 00:47:07.040] It has learned.
[00:47:07.440 --> 00:47:09.520] Yeah, so it's also going to read your book.
[00:47:09.680 --> 00:47:11.040] There's no question about it.
[00:47:11.040 --> 00:47:15.200] And you can probably ask it what it thinks about your book, and it'll give you some answer.
[00:47:15.520 --> 00:47:20.000] So, you believe that it can reason, but it has no first-person experience.
[00:47:20.640 --> 00:47:21.920] That's entirely correct.
[00:47:21.920 --> 00:47:23.680] Because reason is pure intelligence.
[00:47:23.680 --> 00:47:24.560] It's a computational exercise.
[00:47:25.600 --> 00:47:27.760] How do you reason without first-person experience?
[00:47:28.080 --> 00:47:29.240] That's an interesting concept.
[00:47:28.960 --> 00:47:29.640] You can perfect.
[00:47:30.360 --> 00:47:34.520] Look, Leibniz, Leibniz first showed the way, right?
[00:47:35.160 --> 00:47:39.160] When he said we don't need to have this endless argument, we can just calculate.
[00:47:39.160 --> 00:47:41.560] You give me your premise, and we calculate.
[00:47:41.560 --> 00:47:46.040] We have some sort of calculus that allows us to come to the conclusion A or non-A.
[00:47:46.040 --> 00:47:48.120] This is exactly what these machines do.
[00:47:49.080 --> 00:47:53.160] They don't have any first-person thinking, but they are intelligent.
[00:47:53.160 --> 00:47:56.840] So intelligence and consciousness are really two different things, really orthogonal.
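A minimal illustration of the "calculemus" idea being referenced here, offered as an editor's sketch rather than anything said in the discussion: reasoning, in Leibniz's sense, can be carried out as a purely mechanical procedure with no first-person experience involved. The toy Python entailment checker below (the function name `entails` and the modus ponens example are illustrative assumptions) derives a conclusion from premises by brute-force enumeration of truth assignments.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Return True iff every truth assignment that satisfies all
    premises also satisfies the conclusion (brute-force entailment).
    Formulas are functions of a dict mapping atom names to booleans."""
    for values in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # counterexample: premises true, conclusion false
    return True

# "You give me your premises, and we calculate."
# Premises: A implies B, and A.  Conclusion: B (modus ponens).
premises = [lambda w: (not w["A"]) or w["B"],  # A implies B
            lambda w: w["A"]]                  # A
conclusion = lambda w: w["B"]

print(entails(premises, conclusion, ["A", "B"]))  # True: B is reached mechanically
```

The point of the sketch is only that the derivation proceeds by exhaustive calculation; whether that counts as "reasoning" is exactly what the speakers go on to dispute.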
[00:47:57.320 --> 00:47:59.320] Is a book intelligent?
[00:47:59.640 --> 00:48:01.480] No, because it's entirely passive.
[00:48:01.960 --> 00:48:03.560] I mean, this book is a passive thing.
[00:48:03.560 --> 00:48:04.200] It doesn't do anything.
[00:48:04.200 --> 00:48:05.480] But ChatGPT is a.
[00:48:05.880 --> 00:48:07.080] Is a watch intelligent?
[00:48:07.400 --> 00:48:08.520] Is a watch intelligent?
[00:48:10.120 --> 00:48:12.040] Does a watch know what time it is?
[00:48:12.840 --> 00:48:13.160] No.
[00:48:13.640 --> 00:48:14.920] That's not intelligence.
[00:48:15.560 --> 00:48:16.920] That's self-consciousness.
[00:48:16.920 --> 00:48:17.240] No, no, no.
[00:48:17.320 --> 00:48:18.520] Intelligence is the ability to...
[00:48:18.920 --> 00:48:20.440] No, does a watch know what time it is?
[00:48:20.600 --> 00:48:21.880] You don't have to be self-conscious.
[00:48:21.880 --> 00:48:23.240] I mean, does a watch know what time it is?
[00:48:23.880 --> 00:48:29.080] No, it does not know what time it is, because it's not conscious.
[00:48:29.080 --> 00:48:33.160] But ChatGPT, look, it depends what you mean by intelligence.
[00:48:33.160 --> 00:48:41.240] The way it's usually defined is: intelligence is the ability to rapidly adapt to new environments, learn from them, and abstract that knowledge to new situations.
[00:48:41.240 --> 00:48:43.800] ChatGPT can do that brilliantly.
[00:48:44.280 --> 00:48:45.880] Is it smarter than us?
[00:48:45.880 --> 00:48:47.080] Maybe not yet, but very soon.
[00:48:47.480 --> 00:48:52.280] But it can certainly fulfill the standard definition of intelligence as measured by IQ tests.
[00:48:52.280 --> 00:48:57.400] By the way, you can get ChatGPT to take all these IQ tests, and of course it aces all of them.
[00:48:57.720 --> 00:49:00.600] Does my cruise control know how to drive?
[00:49:01.560 --> 00:49:02.840] It knows how to drive, yes.
[00:49:02.840 --> 00:49:03.640] But Michael, you're asking.
[00:49:03.720 --> 00:49:04.360] It knows how to drive.
[00:49:04.600 --> 00:49:09.640] I think we're back to the hard problem of consciousness and what it's like to be something, the first-person experience we'll never know.
[00:49:09.880 --> 00:49:22.320] Well, except that I would say, Michael, that the hard problem is all about consciousness. It's not about intelligence. Intelligence is not that controversial in principle.
[00:49:22.560 --> 00:49:25.920] It doesn't have a hard problem because we can see it.
[00:49:25.920 --> 00:49:26.960] We engineer it now.
[00:49:27.200 --> 00:49:28.480] We're building it every day.
[00:49:28.640 --> 00:49:30.160] It's getting better and better.
[00:49:30.480 --> 00:49:33.120] Is a pocket calculator intelligent?
[00:49:33.440 --> 00:49:36.000] No, because it doesn't learn.
[00:49:36.000 --> 00:49:38.000] It has a short-term memory, but it cannot learn.
[00:49:38.000 --> 00:49:39.920] But ChatGPT can learn.
[00:49:40.720 --> 00:49:46.880] You can give it this book, and then it'll read this book, and you can ask it, how is this book different from Aristotle?
[00:49:46.880 --> 00:49:49.200] Is it compatible with hylomorphism?
[00:49:49.200 --> 00:49:51.600] What would Descartes say about reading this book?
[00:49:51.600 --> 00:49:52.960] I mean, it can do all of that.
[00:49:52.960 --> 00:49:56.480] And you can even say, oh, by the way, can you translate this book into German?
[00:49:56.480 --> 00:49:58.640] And it'll translate it into German.
[00:49:58.640 --> 00:50:01.360] Now, if that isn't intelligence, what is intelligence?
[00:50:01.680 --> 00:50:05.280] Intelligence is something that only a living thing has.
[00:50:05.280 --> 00:50:09.600] I mean, all that you're talking about are tools.
[00:50:09.600 --> 00:50:11.280] They're fascinating tools.
[00:50:11.280 --> 00:50:13.200] They're tools that are very complex.
[00:50:13.200 --> 00:50:18.080] They're tools that have complexity that is, in some ways, more than we understand right now.
[00:50:18.080 --> 00:50:22.400] That is, we've built them, but they can do things we don't quite get.
[00:50:22.400 --> 00:50:24.320] But that doesn't mean that it's intelligence.
[00:50:24.960 --> 00:50:26.000] They're just tools.
[00:50:26.000 --> 00:50:31.600] You're defining it in a certain way such that Data on Star Trek will never be a person.
[00:50:31.600 --> 00:50:32.240] Simply because...
[00:50:32.720 --> 00:50:33.440] Of course, of course.
[00:50:34.880 --> 00:50:39.680] So that's your privilege, but that's not the conventional meaning of intelligence.
[00:50:40.000 --> 00:50:41.440] Well, the convention is wrong.
[00:50:41.440 --> 00:50:48.560] I mean, it makes no sense to say that a mechanical object is intelligent.
[00:50:48.560 --> 00:50:50.160] A mechanical object is a tool.
[00:50:50.400 --> 00:51:00.760] The fact of the matter is, Mike, we are more and more surrounded by intelligent agents that everyone calls intelligent, that act intelligently, and that do all of our bidding intelligently or not.
[00:51:00.760 --> 00:51:08.760] And we face, I think, as a society, an enormous danger in misunderstanding the nature of the intelligence.
[00:51:08.760 --> 00:51:20.280] Every shred of intelligence that appears to come out of these machines is human intelligence that is altered and leveraged by the algorithm of the machine, but it's all human.
[00:51:21.720 --> 00:51:24.440] We have met the enemy, and the enemy is us.
[00:51:24.440 --> 00:51:27.080] It's our intelligence that's in these machines.
[00:51:27.080 --> 00:51:27.640] No one else.
[00:51:29.480 --> 00:51:35.960] Mike, go back to the first triumph of this new approach, machine learning, AlphaGo.
[00:51:35.960 --> 00:51:50.760] So AlphaGo was taught, and it vanquished Lee Sedol in 2016, because it learned: it was taught a huge corpus of existing Go games.
[00:51:50.760 --> 00:51:52.760] AlphaGo Zero.
[00:53:15.120 --> 00:53:23.200] By its own bootstrap, it managed to learn without seeing anything about any human.
[00:53:23.200 --> 00:53:26.080] It had no experience of human games whatsoever.
[00:53:26.080 --> 00:53:29.200] And the same thing, I think, ultimately will happen with intelligence.
[00:53:29.200 --> 00:53:30.400] Right now, yes, I agree.
[00:53:30.400 --> 00:53:34.720] They're all fed humanity's collective literary output.
[00:53:35.120 --> 00:53:40.960] But sooner than later, and certainly within the next 10 years, they'll be able to bootstrap themselves without that.
[00:53:40.960 --> 00:53:47.920] But Christoph, I think Michael's point is that, say, Watson beating Ken Jennings on Jeopardy, was it excited?
[00:53:47.920 --> 00:53:49.040] Was it happy it won?
[00:53:49.040 --> 00:53:50.880] Does it know it was even playing Jeopardy?
[00:53:50.880 --> 00:53:51.600] No, it didn't.
[00:53:52.000 --> 00:53:53.840] It has no first-person experience.
[00:53:55.280 --> 00:54:00.480] Although that may change with quantum computers, I mean, you know, again, the world is changing very rapidly.
[00:54:00.480 --> 00:54:05.280] With quantum computers, it may well feel like something to be one of these machines.
[00:54:06.160 --> 00:54:07.040] We don't know.
[00:54:07.040 --> 00:54:08.960] Okay, I think some of this has to do with language.
[00:54:08.960 --> 00:54:16.640] So when we use the word mind, I think Christophe and I mean something like it's just a word to describe what the brain is doing.
[00:54:16.640 --> 00:54:19.360] But Michael, I think you mean something different from mind.
[00:54:19.360 --> 00:54:20.480] It's different from brain.
[00:54:20.480 --> 00:54:20.960] Is that right?
[00:54:21.520 --> 00:54:27.920] Yeah, well, I think the hylomorphic concept of soul is very, very helpful here.
[00:54:27.920 --> 00:54:36.840] What we talk about when we say mind is really several powers of what Aristotle or St. Thomas would think of as the soul, what the soul can do.
[00:54:37.160 --> 00:54:50.520] So the mind is the set of powers, including memory, locomotion, sensation, intellect, will, imagination, all those things together are what we call mind.
[00:54:50.520 --> 00:54:58.920] And some of those powers are closely linked to matter, like locomotion, sensation, memory, and emotion.
[00:54:58.920 --> 00:55:01.480] And some of them are not generated by matter.
[00:55:01.640 --> 00:55:04.360] They're immaterial powers like intellect and will.
[00:55:06.440 --> 00:55:11.880] Well, so let's look at some specific examples from your book that Christophe can address.
[00:55:11.880 --> 00:55:18.440] You talked about split brain patients who seem to know things they shouldn't know because the halves are separated.
[00:55:21.320 --> 00:55:21.960] You want to address that?
[00:55:22.200 --> 00:55:23.080] Sure, sure.
[00:55:25.000 --> 00:55:28.040] Split brain surgery, I've done it.
[00:55:28.040 --> 00:55:34.840] And what struck me as utterly remarkable was how normal these people are.
[00:55:34.840 --> 00:55:41.240] That is, that you cut the corpus callosum, and after the surgery, you cannot tell the difference.
[00:55:41.240 --> 00:55:48.280] I mean, I think if you put Christoph and me in a room with two people, one of whom had split-brain surgery and one who didn't.
[00:55:48.280 --> 00:55:50.440] You make them wear hats so you can't see the scar.
[00:55:50.440 --> 00:55:51.800] I don't think we could tell the difference.
[00:55:51.960 --> 00:55:56.280] You really need specialized instruments and techniques to find them.
[00:55:56.280 --> 00:56:00.840] That's why Sperry won the Nobel Prize because he had to do all these experiments.
[00:56:01.160 --> 00:56:11.560] And what has been shown, I think, has been summarized very nicely by Yair Pinto in the Netherlands, that these patients have split perception and unified consciousness.
[00:56:11.560 --> 00:56:16.560] That is, that their consciousness really shows all the hallmarks of being a unified whole.
[00:56:14.920 --> 00:56:18.560] That certainly is their everyday experience.
[00:56:18.880 --> 00:56:32.880] And even in detailed testing: there's Alice Cronin, an MIT neuroscientist, who did some beautiful work back in the 1980s where she presented images to both visual fields.
[00:56:32.880 --> 00:56:45.840] And what she found was that there was certainly a perceptual split, a split in the ability to transmit verbal information and geometrical information across.
[00:56:45.840 --> 00:56:48.160] But there was no conceptual split.
[00:56:48.160 --> 00:56:55.520] That is, concepts could be shared without any sign of splitting.
[00:56:55.520 --> 00:57:01.680] And I think this corresponds beautifully to the hylomorphic Aristotelian understanding of the soul.
[00:57:01.680 --> 00:57:09.840] That is, a knife can cut the material stuff, which is perception, but a knife can't cut the intellect because it's not a material thing.
[00:57:09.840 --> 00:57:12.160] It can't cut something that's not material.
[00:57:12.400 --> 00:57:17.040] And so split brain surgery, I think, very strongly supports the hylomorphic view.
[00:57:17.040 --> 00:57:18.080] Christophe?
[00:57:18.560 --> 00:57:22.000] I would beg to differ.
[00:57:22.000 --> 00:57:22.880] So it is true.
[00:57:22.880 --> 00:57:47.600] And in fact, when the operation was first performed by someone in Chicago in 1938, he already commented on the remarkable unremarkableness of these patients: you cut roughly 200 million fibers, which is the number of axons that cross between the cortical hemispheres, and you find relatively few differences until you do the testing, and then it becomes very apparent.
[00:57:47.600 --> 00:57:54.400] Of course, in many patients, and I don't know the patients you've operated on, Mike, they don't have a complete commissurotomy.
[00:57:54.960 --> 00:57:59.440] You know, you sometimes leave the posterior or the anterior commissure.
[00:57:59.480 --> 00:58:12.600] And of course, you don't touch some of the deeper crossing fibers in the deeper part of the brain below the thalamus, like at the level of the superior colliculus, the tectum, that connects the left and the right brain.
[00:58:12.840 --> 00:58:19.960] The Pinto study shows that, yes, unless you're really extremely careful, it's very easy for the right side.
[00:58:19.960 --> 00:58:27.960] You see, so the standard interpretation is the following: the standard interpretation of these split brains is that there are two conscious entities.
[00:58:27.960 --> 00:58:36.680] There's the left hemisphere, typically the speaking one, and that tends to dominate certainly a few days after the operation.
[00:58:36.680 --> 00:58:43.960] If you silence that by injecting sodium amytal, you can briefly anesthetize the left hemisphere; then the right hemisphere emerges.
[00:58:46.040 --> 00:58:52.440] It can now command the opposite side, the left side, and signal, though it typically doesn't speak.
[00:58:52.680 --> 00:58:55.960] It has a sort of non-linguistic but competent consciousness.
[00:58:55.960 --> 00:59:02.280] So the idea is that there are two consciousnesses in there; but if you talk to the patient, it's always the left hemisphere that answers.
[00:59:02.280 --> 00:59:28.600] And unless you're really very careful doing the experiment, and I don't think Pinto was, and we can go into the technical details, the left hemisphere will take cues from the right hemisphere. It's like a Clever Hans phenomenon: for example, if I do this with my right hand, that's a cue for my left hand that something is on the other side, even if it doesn't quite know what it is.
[00:59:28.600 --> 00:59:36.760] But that cue can be picked up by the right hemisphere, by the right arm, i.e., sorry, by the left arm, i.e., the right hemisphere.
[00:59:36.760 --> 00:59:43.160] So, it's very difficult to do these tests because they're not completely split.
[00:59:43.160 --> 00:59:47.120] It's not like you're taking a knife and you cut down all the way down to the brainstem.
[00:59:44.840 --> 00:59:47.600] Right.
[00:59:47.920 --> 00:59:58.320] Well, the study by Cronin at MIT involved three patients, and they had had a commissurotomy, not just a callosotomy.
[00:59:58.320 --> 01:00:03.680] So, they had had sectioning of the anterior commissure and of the hippocampal commissures.
[01:00:03.840 --> 01:00:08.240] The posterior commissure, as far as I know, is not routinely sectioned.
[01:00:08.640 --> 01:00:09.120] No, it's not.
[01:00:09.680 --> 01:00:11.920] And because it's a very dangerous place to be.
[01:00:12.320 --> 01:00:15.760] It's by the aqueduct, and you don't want to be cutting anything there.
[01:00:17.120 --> 01:00:22.720] So, this was really as complete as it gets of separation of the hemispheres.
[01:00:22.720 --> 01:00:28.400] And one explanation would be that there's this cueing that goes on from side to side.
[01:00:29.440 --> 01:00:32.640] However, you can observe these people, and they don't appear to be cueing.
[01:00:33.600 --> 01:00:37.200] The other explanation would be subcortical pathways.
[01:00:38.000 --> 01:00:45.280] There have been studies on the number of subcortical pathways, number of subcortical axons that are connecting the two hemispheres.
[01:00:45.280 --> 01:00:52.320] And the number that I've read is 1,500 axons connect the two hemispheres.
[01:00:52.560 --> 01:00:56.800] There are, as you said, 200 million axons in the corpus callosum.
[01:00:56.800 --> 01:01:07.360] So when you cut the corpus callosum, you're cutting, I actually calculated, 99.99925% of the connections between the hemispheres.
[01:01:07.360 --> 01:01:11.760] So these disconnections are almost complete.
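The percentage quoted here follows directly from the two axon counts just mentioned; below is a quick back-of-the-envelope check, a sketch using the counts as stated in the conversation rather than independent anatomical data.

```python
# Sanity check of the figure quoted above, using the counts as stated.
callosal_axons = 200_000_000   # axons in the corpus callosum (severed)
subcortical_axons = 1_500      # axons in the remaining subcortical pathways

fraction_cut = callosal_axons / (callosal_axons + subcortical_axons)
print(f"{fraction_cut:.5%}")   # -> 99.99925%: essentially all inter-hemispheric connections
```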
[01:01:11.760 --> 01:01:16.000] The subcortical pathways are very, very small and have very, very few axons.
[01:01:16.040 --> 01:01:26.000] There would be enough to mediate, I don't know about concepts, but there certainly would be enough to mediate basic emotions like surprise or fear or a startle response.
[01:01:26.240 --> 01:01:29.640] Yeah, but emotions are also hormonal.
[01:01:29.520 --> 01:01:33.400] That is, you make adrenaline and you communicate with that.
[01:01:33.960 --> 01:01:37.640] I believe that Pinto estimated, I'm not sure where he got this number from.
[01:01:37.640 --> 01:01:47.160] He estimated that the rate of transfer in subcortical pathways in commissurotomy patients was no faster than one bit per second; that was the number he used.
[01:01:47.160 --> 01:01:48.760] I don't know if you've looked at that.
[01:01:48.840 --> 01:01:50.760] No, yeah, I'm not sure how he did it.
[01:01:50.920 --> 01:02:00.760] One way to test this, by the way, would be to use, and this will come, this will happen, is to use something like Neuralink to try to connect your brain with my brain.
[01:02:00.760 --> 01:02:11.320] Because I think what this will show is that if we essentially do a reverse commissurotomy, we add, let's say, for the sake of argument, 200 million fibers from my brain to your brain.
[01:02:13.640 --> 01:02:16.040] And I've written about it in my previous book.
[01:02:16.040 --> 01:02:16.680] What will happen?
[01:02:17.080 --> 01:02:21.080] So, first, I just see through your eyes a little bit like augmented reality.
[01:02:21.080 --> 01:02:24.680] I'm still Christoph, but I see now what you're seeing, right?
[01:02:24.680 --> 01:02:27.960] Just like VR or AR, augmented reality.
[01:02:27.960 --> 01:02:30.600] But then at some point, there will be an abrupt transition.
[01:02:30.600 --> 01:02:36.120] If the number of fibers exceed a certain threshold, then your mind and my mind will merge.
[01:02:36.120 --> 01:02:37.240] Mike will disappear.
[01:02:37.240 --> 01:02:38.440] Christoph will disappear.
[01:02:38.440 --> 01:02:47.320] Instead, there will be this new consciousness that has now four hemispheres, that has two mouths and four arms and four legs.
[01:02:47.320 --> 01:02:49.960] There will be a single conscious mind.
[01:02:50.200 --> 01:02:53.640] So if that actually happened, would it be hylomorphic?
[01:02:53.720 --> 01:02:58.920] No, no, whether it's hylomorphic or not, Aristotle, of course, didn't think of this.
[01:02:59.240 --> 01:03:01.960] But experiential, it will be one mind.
[01:03:02.280 --> 01:03:04.600] Reverse commissurotomies have been done.
[01:03:06.120 --> 01:03:07.720] They've been done by nature.
[01:03:07.960 --> 01:03:11.880] There are conjoined twins who are conjoined at the brain.
[01:03:12.440 --> 01:03:22.160] Yeah, but the most interesting pair of them is Krista and Tatiana Hogan, who are from Canada.
[01:03:22.720 --> 01:03:24.640] And they share a thalamic bridge.
[01:03:24.800 --> 01:03:27.520] So there's a diencephalic bridge.
[01:03:27.520 --> 01:03:29.440] And they're absolutely fascinating.
[01:03:30.000 --> 01:03:32.640] They share vision in each other's eyes.
[01:03:32.880 --> 01:03:35.440] They share sensation on each other's skin.
[01:03:35.440 --> 01:03:39.440] They share a lot of emotions, but they do not share personality at all.
[01:03:39.440 --> 01:03:40.960] They're very different people.
[01:03:40.960 --> 01:03:43.280] They don't share, as far as I know, intellects.
[01:03:43.280 --> 01:03:50.160] That is, it's not as if one of them can study calculus, the other can study philosophy, and they both pass the test.
[01:03:51.040 --> 01:04:06.080] And the hylomorphic framework fits so beautifully here: they're sharing the material aspects of the brain, the perceptions, but they don't share intellect.
[01:04:06.800 --> 01:04:07.920] They don't share the immaterial.
[01:04:09.120 --> 01:04:16.480] Because they don't... the bridge isn't between their hemispheres, you know; it's not like one has the right hemisphere and the other has the left.
[01:04:16.720 --> 01:04:22.640] They have two sets of independent cortical hemispheres.
[01:04:22.640 --> 01:04:24.560] They share a thalamic bridge.
[01:04:24.560 --> 01:04:26.240] But in that sense, they're separate.
[01:04:26.240 --> 01:04:34.640] Now, I would say if they join, particularly the prefrontal part, if it's joined in both, then they would have a single intellect.
[01:04:34.640 --> 01:04:35.440] But, you know.
[01:04:35.760 --> 01:04:36.080] Right.
[01:04:36.080 --> 01:04:39.360] Well, that's, I mean, that's an experimental prediction.
[01:04:39.520 --> 01:04:40.560] That's an experimental prediction.
[01:04:40.640 --> 01:04:44.880] Do they experience what the other one is experiencing, that first-person perspective?
[01:04:44.880 --> 01:04:45.600] Correct, correct.
[01:04:45.760 --> 01:04:47.680] There's no sign that they do that.
[01:04:49.360 --> 01:04:56.480] Except when they go to sleep. If you read carefully... I mean, they haven't been studied much, because their parents keep them away from the prying eyes of scientists.
[01:04:56.800 --> 01:05:00.920] But when they go to sleep, A, they go to sleep at the same time as you would expect.
[01:04:59.920 --> 01:05:07.000] And the question is, once the ego defenses fall away during sleep, do they actually have, it's a fascinating question.
[01:05:07.160 --> 01:05:09.000] Do they both dream of the same thing?
[01:05:09.000 --> 01:05:09.240] Right.
[01:05:09.240 --> 01:05:10.840] Or do they have independent dreams?
[01:05:10.840 --> 01:05:11.240] Right.
[01:05:11.240 --> 01:05:12.440] Fascinating stuff.
[01:05:12.440 --> 01:05:14.280] They dream of electric sheep.
[01:05:15.560 --> 01:05:16.040] Conjoined sheep.
[01:05:16.520 --> 01:05:17.160] Conjoined sheep.
[01:05:17.160 --> 01:05:17.640] Conjoined sheep.
[01:05:17.720 --> 01:05:20.200] Okay, Michael, let's talk about vegetative states.
[01:05:20.440 --> 01:05:23.720] You know, my mom died after 10 years of brain tumors.
[01:05:23.720 --> 01:05:27.160] She eventually fell and went into a coma.
[01:05:27.160 --> 01:05:30.520] And after a couple of weeks with a feeding tube and so on, she eventually died.
[01:05:30.520 --> 01:05:35.720] But like so many people, as it turns out, I didn't know this at the time, you know, I'd hold her hand and I'd say, squeeze my hand if you hear me.
[01:05:35.720 --> 01:05:36.680] And she'd squeeze my hand.
[01:05:36.680 --> 01:05:38.920] I'm like, holy crap, she's in there.
[01:05:39.800 --> 01:05:43.480] What do you get out of the vegetative state examples in case studies?
[01:05:43.640 --> 01:05:50.440] Well, first of all, there's the issue of paradoxical lucidity, which is a very real thing.
[01:05:50.680 --> 01:05:52.360] I've seen several times.
[01:05:52.360 --> 01:05:55.160] It's quite well known to people who work in hospices.
[01:05:55.400 --> 01:05:58.360] That people in the later stages of Alzheimer's.
[01:05:59.320 --> 01:06:00.520] Terminal lucidity.
[01:06:00.520 --> 01:06:06.120] Yeah, although I work with a guy who's an ethicist at Stony Brook named Stephen Post, who's a wonderful guy.
[01:06:06.120 --> 01:06:07.320] And he's written a lot about that.
[01:06:07.320 --> 01:06:16.920] And he hates the word terminal lucidity because it's not always terminal, meaning you can see it in people who just, you know, it doesn't always mean you're dying.
[01:06:17.320 --> 01:06:29.240] But that's a fascinating phenomenon where people will have a period of time, commonly around 30 minutes, where they just kind of wake up and they're their old self again, and they slip back down into their state.
[01:06:29.480 --> 01:06:32.280] The vegetative state is kind of a different thing.
[01:06:32.280 --> 01:06:37.080] The vegetative state is the deepest level of unresponsiveness.
[01:06:37.080 --> 01:06:38.360] It's even deeper than coma.
[01:06:38.360 --> 01:06:39.880] It's not considered a coma.
[01:06:39.880 --> 01:06:42.920] It's just a step above brain death.
[01:06:43.320 --> 01:06:50.480] And the traditional thinking has been that people in vegetative state don't have mental states, that they're really just a vegetable.
[01:06:50.480 --> 01:06:53.280] They're just an organism without a mind.
[01:06:53.280 --> 01:07:06.480] And back about 20 years ago, Adrian Owen, a researcher at Cambridge, published a wonderful study called Detecting Awareness in the Vegetative State.
[01:07:06.480 --> 01:07:22.480] And he took a woman who was in her 30s who was diagnosed convincingly in a persistent vegetative state after a car accident and put her in an MRI machine and did functional MRI imaging and asked her questions like: imagine you're walking across a room or imagine you're playing tennis.
[01:07:22.480 --> 01:07:29.520] And her brain, what was left of her brain, it was just a very small amount of tissue, would light up in certain patterns.
[01:07:29.520 --> 01:07:37.920] And he took people who are normal and put them in the machine and asked them the same thing, and they had the same patterns of activation as if she was understanding what he was saying.
[01:07:37.920 --> 01:07:43.120] And then he put her back in the machine and he asked the same things, only he scrambled the words.
[01:07:43.120 --> 01:07:46.080] So the semantics was zero.
[01:07:46.080 --> 01:07:49.920] There was no meaning to it, and her brain didn't respond in the same way the other people's brains did.
[01:07:49.920 --> 01:07:53.200] So she was responding to the meaning of what he was asking.
[01:07:53.200 --> 01:08:07.120] And other people have done this research now, and about 40% of people who are diagnosed in persistent vegetative state show patterns of activation that suggests that they're quite aware of what's being asked of them and that they are thinking.
[01:08:07.120 --> 01:08:09.360] And we see this in patients in comas.
[01:08:09.360 --> 01:08:17.200] Nurses in ICUs know not to say disturbing things in the room of somebody who's in a coma, because you'll often see their blood pressure go up.
[01:08:17.200 --> 01:08:23.760] I've had people recover from comas who told me that they could hear what people were talking about in the room while they were in a coma.
[01:08:24.320 --> 01:08:35.000] And in my view, this emphasizes the notion that there are aspects of the mind that, as Penfield said, that the brain does not completely explain the mind.
[01:08:35.000 --> 01:08:39.560] There are aspects of the mind that transcend what the brain can explain.
[01:08:40.920 --> 01:08:41.800] I totally disagree.
[01:08:41.800 --> 01:08:43.640] I work with these patients.
[01:08:43.640 --> 01:08:52.760] And I can tell you, those patients, of course, have periods when... so now I'm talking about so-called vegetative state, or behaviorally unresponsive, patients, okay?
[01:08:53.080 --> 01:08:55.560] Typically, they're above coma in a sense.
[01:08:55.560 --> 01:08:58.760] Coma is often defined not exclusively as eyes closed.
[01:08:58.760 --> 01:09:01.320] Here you can have eyes open.
[01:09:01.640 --> 01:09:05.720] When they are, very often, of course, they're sedated for various clinical reasons.
[01:09:05.720 --> 01:09:09.400] So when you remove the sedation, yes, I totally agree.
[01:09:09.400 --> 01:09:22.440] This recent paper in the New England Journal of Medicine, which made the front page of the New York Times half a year ago, showed that 25% of these patients can voluntarily modulate their pre-motor and motor response.
[01:09:22.440 --> 01:09:27.160] In other words, these are patients that have a Glasgow Coma Scale of three or four, minimal.
[01:09:27.160 --> 01:09:29.560] So you ask them, sir, sir, can you hear me?
[01:09:29.560 --> 01:09:30.760] Can you move your eyes?
[01:09:31.000 --> 01:09:33.320] You pinch them very hard on the fingernails.
[01:09:33.320 --> 01:09:35.880] They don't even orient toward the source of the pain.
[01:09:35.880 --> 01:09:41.240] So clinically, they're defined as unresponsive, which traditionally meant unconscious.
[01:09:41.240 --> 01:09:46.520] But now you ask them, for instance, for 30 seconds, really clench your hands.
[01:09:46.520 --> 01:09:49.080] And they can't do it, but they imagine clenching their hand.
[01:09:49.080 --> 01:09:51.800] And 30 seconds relax, 30 seconds, clench your hand.
[01:09:51.800 --> 01:09:53.560] You can clearly see the brain.
[01:09:53.560 --> 01:10:02.680] I mean, in 25% of the cases, you can clearly see, using either EEG or fMRI, their pre-motor activity or motor activity being modulated.
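The command-following paradigm described here, alternating 30-second blocks of imagined hand-clenching and rest while recording, comes down to asking whether signal power over motor areas tracks the instructions. Below is a minimal sketch of that block contrast in Python on synthetic data; the sampling rate, block structure, and the simple power measure are illustrative assumptions, not the analysis pipeline of the study being discussed.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100          # sampling rate in Hz (assumed for this sketch)
block_s = 30      # 30-second blocks: "imagine clenching" alternating with "relax"
n_blocks = 10     # alternating blocks, starting with an "imagine" block

# Synthetic motor-channel signal: extra variance during "imagine" blocks stands in
# for the modulation of pre-motor/motor activity described above.
segments, labels = [], []
for i in range(n_blocks):
    imagining = (i % 2 == 0)
    amp = 2.0 if imagining else 1.0
    segments.append(amp * rng.standard_normal(fs * block_s))
    labels.append(imagining)
signal = np.concatenate(segments)
labels = np.array(labels)

# Mean power per 30-second block, then contrast the two conditions.
blocks = signal.reshape(n_blocks, fs * block_s)
power = (blocks ** 2).mean(axis=1)
print(f"imagine: {power[labels].mean():.2f}  rest: {power[~labels].mean():.2f}")
# A consistent gap between the two condition means is the kind of voluntary
# modulation that suggests command following despite behavioral unresponsiveness.
```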
[01:10:02.680 --> 01:10:05.560] So that tells me these people are not unconscious.
[01:10:05.560 --> 01:10:12.040] Yes, they're behaviorally unresponsive because of the stroke or the hemorrhage or the TBI, whatever, but they're not unconscious.
[01:10:12.040 --> 01:10:12.920] The brain isn't dead.
[01:10:12.920 --> 01:10:16.160] Part of the brain is actually online.
[01:10:16.480 --> 01:10:23.120] The patient is sufficiently conscious to a certain extent, although of course we don't know what they're conscious about.
[01:10:23.120 --> 01:10:25.360] And they're using this way to communicate.
[01:10:25.360 --> 01:10:29.120] So it's perfectly explainable by conventional neuroscience.
[01:10:29.280 --> 01:10:38.960] In fact, people are now tracking this activity, which you can try to pinpoint using, you know, modern tools, you know, high-density EEG or intra-cranial recordings, etc.
[01:10:39.120 --> 01:10:47.680] So I think it's a beautiful case where there's this nice correlation between brain activity and behavior, including consciousness.
[01:10:47.680 --> 01:10:53.200] It doesn't require any thinking about any extramaterial responses.
[01:10:55.760 --> 01:10:56.480] Go ahead, Mike.
[01:10:56.480 --> 01:10:57.040] I'm sorry.
[01:10:57.200 --> 01:10:57.680] No, go ahead.
[01:10:58.800 --> 01:11:25.360] Yeah, the basis for my kind of embrace of hylomorphic understanding of the human soul is that there does seem to be across many, many different sectors of neuroscience, this recurrent observation that there is a disconnect between what the brain does and what the mind does, particularly when we talk about intellect and will.
[01:11:26.000 --> 01:11:28.640] And I think that holds up.
[01:11:28.640 --> 01:11:30.400] It certainly held up in Penfield's work.
[01:11:30.400 --> 01:11:33.120] It holds up in split brain research.
[01:11:33.680 --> 01:11:36.240] I think you can see it in paradoxical lucidity.
[01:11:36.240 --> 01:11:39.840] You can see it in responsiveness and persistent vegetative state.
[01:11:39.840 --> 01:11:46.080] There are aspects of the mind that I don't believe are entirely accounted for by aspects of the brain.
[01:11:46.080 --> 01:11:47.920] There is a disconnect there.
[01:11:48.400 --> 01:11:52.960] You realize that when you say 'not entirely,' this is really a God of the gaps argument.
[01:11:52.960 --> 01:11:55.280] Right now, we still have this little gap.
[01:11:55.280 --> 01:12:04.920] And then the next time somebody shows you, oh yeah, I can evoke a rational thought about mathematics in a mathematician, then suddenly your argument collapses.
[01:12:05.240 --> 01:12:08.040] Yeah, but the gap is the Grand Canyon.
[01:12:08.040 --> 01:12:09.080] It's a big gap.
[01:12:09.400 --> 01:12:17.240] No, the gap is conscious experience, however construed, including seeing or hearing or feeling pain.
[01:12:17.480 --> 01:12:18.840] It's not rational thought.
[01:12:19.480 --> 01:12:21.400] It's any experience.
[01:12:21.400 --> 01:12:31.000] Penfield did, by just this back-of-the-envelope estimate, 1.1 million brain stimulations and never evoked abstract thought.
[01:12:31.240 --> 01:12:31.560] Never evoked.
[01:12:31.800 --> 01:12:36.440] Okay, that was 100 years ago, using old technology that we don't use anymore in the OR.
[01:12:36.520 --> 01:12:38.120] Oh, different electrodes.
[01:12:38.760 --> 01:12:40.760] We used, you know, different current.
[01:12:40.760 --> 01:12:43.800] For example, Penfield typically didn't stimulate long enough.
[01:12:43.800 --> 01:12:54.200] It may well be that to recruit a longer, you know, distributed function like abstract thought, you might have to stimulate for five seconds or for ten seconds.
[01:12:54.200 --> 01:12:55.160] You know, right, right.
[01:12:56.280 --> 01:12:59.160] But this is all stated in the subjunctive tense.
[01:12:59.160 --> 01:13:02.280] They are, well, maybe if we do this, maybe if we do that.
[01:13:02.280 --> 01:13:10.680] The actual experimental evidence does not show that intellectual, that abstract intellectual thought is evoked by brain stimulation.
[01:13:10.680 --> 01:13:12.280] That's the actual evidence.
[01:13:12.920 --> 01:13:14.280] That's not true.
[01:13:14.280 --> 01:13:15.480] That is simply not true.
[01:13:15.480 --> 01:13:18.520] You can get patients that say things like, 'I want to leave.'
[01:13:18.520 --> 01:13:25.960] Now you could say, well, that sounds abstract. The idea of leaving, that to me seems very abstract.
[01:13:25.960 --> 01:13:28.440] People can get out-of-body experiences.
[01:13:28.440 --> 01:13:31.480] They can see themselves or they can experience themselves.
[01:13:32.920 --> 01:13:34.520] That's a perceptual issue.
[01:13:34.520 --> 01:13:35.400] That's a perceptual issue.
[01:13:35.480 --> 01:13:36.120] No one questioned.
[01:13:36.520 --> 01:13:38.280] No, but they can see their own body.
[01:13:38.280 --> 01:13:49.760] This is a classical part of the NDE, right, that many people take to mean that they're floating, that they're in heaven; there's a particular site in the posterior precuneus, right?
[01:13:49.760 --> 01:13:56.640] If you stimulate it in most people there, they have these weird dissociations between the self and their body.
[01:13:56.640 --> 01:14:03.280] So they can experience the self as an abstract entity, not their body, because they claim they can see their body.
[01:14:03.280 --> 01:14:04.000] And I believe them.
[01:14:04.000 --> 01:14:08.560] They can see their body, but now they apparently have a different point of view.
[01:14:08.560 --> 01:14:12.640] Well, if that isn't abstract: they've abstracted themselves from their body.
[01:14:12.640 --> 01:14:14.160] That's pretty abstract to me.
[01:14:14.160 --> 01:14:17.520] Well, vision is not an abstract neurological function.
[01:14:17.520 --> 01:14:20.000] Vision is a concrete neurological function.
[01:14:20.240 --> 01:14:26.560] Do you believe when they have the out-of-body experience that they actually see their body from outside of their body?
[01:14:27.200 --> 01:14:28.480] No, I have no evidence.
[01:14:28.480 --> 01:14:38.880] In fact, people in Amsterdam, in Holland, did this experiment where they put hidden numbers on top of the cabinets, and no, the patients did not read those hidden numbers.
[01:14:38.880 --> 01:14:42.960] No, I have no evidence whatsoever that they can, but they certainly have the experience.
[01:14:42.960 --> 01:14:44.000] I don't doubt it for a second.
[01:14:44.000 --> 01:14:45.040] I've had an NDE.
[01:14:45.040 --> 01:14:51.040] I had a complete NDE: tunnel, you know, dread, terror, ecstasy, all of that.
[01:14:51.040 --> 01:15:00.160] But I don't believe that my... whatever, whoever Christoph is... was actually materially floating in Cartesian space above my body.
[01:15:00.160 --> 01:15:00.560] No.
[01:15:01.600 --> 01:15:04.320] Maybe explain for the listeners, Christoph, what you did to induce that.
[01:15:04.640 --> 01:15:09.520] Well, no, no, that's right, right.
[01:15:10.800 --> 01:15:16.320] That's irrelevant to the point, which is we need, I totally agree with Mike.
[01:15:16.640 --> 01:15:41.320] I agree with many of the things you're saying, by the way, that we need to take near-death experience seriously, but they are deeply felt experiences that ultimately need to be explained by science, but I don't take them at all to be evidence that my soul is sort of this free-floating thing that can travel outside space, you know, outside the normal boundaries of space-time.
[01:15:41.320 --> 01:15:44.120] Well, there is all that research, like Dr.
[01:15:44.120 --> 01:15:55.080] James Whinnery at the United States Air Force accelerating pilots in a centrifuge, and they have out-of-body experiences, little dreamlets, the tunnel, and all that stuff.
[01:15:55.080 --> 01:15:58.600] And his explanation was just oxygen deprivation.
[01:15:58.600 --> 01:15:59.240] Yeah.
[01:15:59.880 --> 01:16:15.880] Well, I think what you have to explain when you look, I mean, if we want to get into near-death experiences, first of all, I don't make a claim that every near-death experience is an actual experience of the soul leaving the body and stuff, because it's a huge thing.
[01:16:15.880 --> 01:16:21.160] I mean, there's been estimated that 9 million Americans have had some kind of near-death experience.
[01:16:21.400 --> 01:16:24.040] And some of them undoubtedly are just hallucinations.
[01:16:24.680 --> 01:16:25.640] Some are lies.
[01:16:25.640 --> 01:16:26.840] Some people just make it up.
[01:16:26.840 --> 01:16:27.880] Who knows?
[01:16:27.880 --> 01:16:33.720] But there are experiences that are awfully difficult to explain naturalistically.
[01:16:34.360 --> 01:16:41.560] But I think there are four aspects of near-death experiences that really do challenge naturalistic explanations.
[01:16:41.560 --> 01:16:46.200] One is that the experiences have an extremely clear content.
[01:16:46.200 --> 01:16:48.920] It's clear, it's highly organized.
[01:16:48.920 --> 01:16:58.200] There's often a life review, and there's a conscious decision to return to the body, which you don't get from hypoxia or hypercarbia, and you don't get from temporal lobe seizures.
[01:16:58.200 --> 01:16:59.720] None of that will cause that.
[01:16:59.720 --> 01:17:05.000] The second thing is experiences that are corroborated externally.
[01:17:05.000 --> 01:17:12.520] That is, that people see things that are going on in the room that they could not have seen because their brain was not working when these things happened.
[01:17:12.520 --> 01:17:17.280] The third is the corroboration with seeing only dead people at the other end of the tunnel.
[01:17:17.600 --> 01:17:25.680] I'm unaware of any near-death experience in which a person has encountered, on the other side, someone who's still living.
[01:17:26.160 --> 01:17:32.400] Even if they didn't know the person was dead, it's only dead people that they encounter, which is an odd hallucination.
[01:17:32.400 --> 01:17:35.600] And the fourth thing is, of course, that these are transformative experiences.
[01:17:35.600 --> 01:17:38.560] People's lives are radically changed by these.
[01:17:38.560 --> 01:17:51.600] And things like, you know, temporal lobe seizures or endorphins or ketamine, things like that may explain a tiny sliver of the experience, but you have to explain the whole thing.
[01:17:51.600 --> 01:17:54.800] And there are hundreds of people who've had the whole thing.
[01:17:55.600 --> 01:17:57.440] Pam Reynolds is a great example.
[01:17:57.440 --> 01:18:03.200] I'm sure you've heard of the case of Pam Reynolds, who had an aneurysm operated on in 1991.
[01:18:03.520 --> 01:18:04.480] Fully monitored.
[01:18:04.480 --> 01:18:11.280] Her brain was fully monitored, had a dramatic near-death experience under ideal experimental conditions.
[01:18:13.120 --> 01:18:23.920] Look, there are thousands of people under ayahuasca, okay, a powerful serotonergic hallucinogen, who have all these encounters with strange beings, with gods.
[01:18:23.920 --> 01:18:29.600] This happens to be in a South American, not in a Christian context, but they have, you know, very powerful experiences.
[01:18:29.840 --> 01:18:34.400] Once again, I don't doubt, and I never have doubted, the reality of these experiences.
[01:18:35.040 --> 01:18:37.040] As you say, Mike, transformative.
[01:18:37.040 --> 01:18:39.280] People can radically change their life.
[01:18:39.280 --> 01:18:43.760] You know, the classical one is Saul turning into St. Paul, right?
[01:18:44.560 --> 01:18:46.320] Following such a transformative experience.
[01:18:46.320 --> 01:18:47.440] So, I don't doubt that.
[01:18:47.440 --> 01:18:51.040] I'm a little bit more worried about the metaphysical implications.
[01:18:51.040 --> 01:18:51.680] I understand.
[01:18:51.680 --> 01:19:02.040] I mean, following William James, right: he talks about these in his book, The Varieties of Religious Experience, where he says people bring back this noetic quality, right?
[01:19:02.040 --> 01:19:08.920] They have this particular noetic quality that never leaves them, the certainty that they have experienced something else.
[01:19:08.920 --> 01:19:13.000] And again, I'm very sympathetic to that because I also had such an experience.
[01:19:13.000 --> 01:19:17.400] However, I still think that may all be true.
[01:19:17.400 --> 01:19:19.800] You might access a different metaphysical realm.
[01:19:19.800 --> 01:19:37.720] But what is true that as far as we can tell, every conscious experience, no matter how abstract, no matter how boring, whether it's tasting a slice of old leftover pizza or thinking something ethereal about human justice or about general relativity, we know has a correlate in the brain.
[01:19:38.840 --> 01:19:42.120] There is this clear substrate dependency.
[01:19:42.120 --> 01:19:51.480] And we can't get around that by saying, well, most everything is in the substrate, but then there is this special thing that's sort of done somewhere else, in a realm I don't have access to.
[01:19:51.480 --> 01:19:56.360] And by the way, there are other things, like ChatGPT, that also have that aspect.
[01:19:56.360 --> 01:20:05.960] I'm just going to avoid dealing with them because I'm saying, well, by definition, I don't deal with them because they're not alive, which seems a rather arbitrary distinction.
[01:20:05.960 --> 01:20:09.160] I know Aristotle made that distinction, but he lived in a radically different world.
[01:21:08.000 --> 01:21:10.400] Where they had almost no machines.
[01:21:10.720 --> 01:21:29.760] So, Michael, yeah, a lot of these OBEs and NDEs are based on personal accounts that people give, which, when I read Oliver Sacks's books, including his memoir where he talks about being under super stress during medical school and taking all kinds of things to stay awake and whatnot, having fantastic hallucinations...
[01:21:29.760 --> 01:21:38.960] And pretty much every chapter in all of his books is some weird thing that somebody experiences, like the man who mistook his wife for a hat, or whatever it was.
[01:21:38.960 --> 01:21:40.160] You know, there's a lot of this.
[01:21:40.160 --> 01:21:44.800] And his conclusion from all this, I mean, he knows what you know about neuroscience and all this stuff.
[01:21:44.800 --> 01:21:53.920] He wrote a long essay in the Atlantic when Eben Alexander's book, Proof of Heaven, came out, in which Eben recounted his near-death experience.
[01:21:53.920 --> 01:21:58.880] And, you know, Oliver basically just said all this can be explained by neuroscience.
[01:21:58.880 --> 01:22:02.400] It seems like it comes down to a metaphysical decision.
[01:22:02.400 --> 01:22:13.200] I'm going to take these examples that are kind of spooky and weird and interpret them through this metaphysical model rather than the materialist model, something like that.
[01:22:13.520 --> 01:22:18.160] Well, I do consider subjective experience to be a kind of data.
[01:22:19.120 --> 01:22:26.240] It's different from objective data, but my goodness gracious, there are all kinds of sciences that are based on what people feel and think and so on.
[01:22:26.240 --> 01:22:28.240] So it is data.
[01:22:28.240 --> 01:22:30.680] But that's not really so much what I'm talking about.
[01:22:30.680 --> 01:22:37.960] I'm talking about corroborated experiences or experiences that you really can't explain through naturalistic means.
[01:22:38.200 --> 01:22:50.280] There are hundreds of very well-documented cases where people, during the time when their brain is not working, have detailed knowledge of what's happening in the room around them.
[01:22:50.680 --> 01:22:54.360] And Pam Reynolds is a great example of that.
[01:22:55.000 --> 01:22:59.320] That's a person who was, her brain was carefully monitored.
[01:22:59.320 --> 01:23:07.800] She had earphones that were giving her 100 decibel clicks every second in her ear, so she couldn't have heard conversations and so on.
[01:23:08.120 --> 01:23:30.680] And she had the experience of leaving her body, watching the operation: she described the surgical instruments, described what the doctors were saying, described who came in and left the room, described what music was playing in the room during this, at a time when her brain was drained of blood and she was absolutely as brain dead as you can get, reversibly.
[01:23:31.080 --> 01:23:33.320] And you have to explain stuff like that.
[01:23:33.320 --> 01:23:34.280] And it can't just be...
[01:23:34.440 --> 01:23:36.360] I'm trying to find such an anecdote.
[01:23:36.360 --> 01:23:38.600] People try to replicate such anecdotes.
[01:23:38.600 --> 01:23:41.960] Like in the Parnia study, right, that came out last year in New York.
[01:23:41.960 --> 01:23:43.640] They tried to replicate some of the things.
[01:23:44.840 --> 01:23:48.040] They try to put things onto cabinets, et cetera.
[01:23:48.120 --> 01:23:54.680] So under truly objective circumstances, where people are trying to do this, you could never find any evidence for that.
[01:23:54.680 --> 01:23:59.080] And Carl Sagan, I see Carl Sagan hanging there behind Michael.
[01:23:59.080 --> 01:24:06.920] His dictum was that extraordinary claims, and what you're making is an extraordinary claim, require extraordinary evidence.
[01:24:06.920 --> 01:24:13.880] And so that means I want to be able to go routinely in the OR, try to see under which conditions these occur, and actually test.
[01:24:13.880 --> 01:24:15.000] Do they actually do these things?
[01:24:15.440 --> 01:24:17.920] Well, so for instance, one wait, let me just finish.
[01:24:18.560 --> 01:24:22.800] These patients almost never have a high-density EEG montage, right?
[01:24:22.800 --> 01:24:28.880] It's very difficult to ascertain whether the brain was really dead, or whether there were just two leads and you don't pick up prefrontal activity.
[01:24:28.880 --> 01:24:34.880] That's very different from saying the brain was entirely flatlined, in particular the deeper structures like the hippocampus, right?
[01:24:34.880 --> 01:24:38.880] So the ER or the ICU is not really equipped to deal with that.
[01:24:38.880 --> 01:24:49.120] And when people do try to do this, like a study last year in PNAS, you had four people on ventilators with a high-density EEG montage, 64 channels.
[01:24:49.120 --> 01:24:51.920] Two of them did indeed show something.
[01:24:51.920 --> 01:25:00.640] So in all four patients, the loved ones decided to withdraw life-critical care, and so they were going to die.
[01:25:00.640 --> 01:25:08.480] So they pulled the ventilator while all four patients, this was in Michigan, were on EEG monitoring.
[01:25:08.480 --> 01:25:13.760] And then, in two of the four, the EEG just went down.
[01:25:13.760 --> 01:25:21.760] In the other two, they had these paradoxical episodes of high-frequency activity for two or three minutes.
[01:25:23.680 --> 01:25:31.600] Technically the heart had already stopped, and the EEG power decreased, but then they had these spontaneous episodes.
[01:25:31.760 --> 01:25:35.840] But within seven minutes or so, everything went flatline.
[01:25:35.840 --> 01:25:45.040] And everything we know says that once your brain is truly flatlined throughout, including the deep structures, there is no more conscious experience.
[01:25:45.840 --> 01:25:55.920] How would that kind of brain activity account for the ability to read people's name tags and to know who's in the room? I would like to see that tested.
[01:25:56.320 --> 01:25:57.120] I can't do that.
[01:25:57.120 --> 01:26:00.120] If that were the case, I would like to see it tested, not retroactively.
[01:25:59.920 --> 01:26:01.560] No, no, no, no, no, no.
[01:26:02.600 --> 01:26:09.800] Pam Reynolds was, in a very real sense, a prospective study, meaning they didn't expect her to have brain death.
[01:26:10.280 --> 01:26:12.280] I'm sorry, to have a near-death experience.
[01:26:12.280 --> 01:26:13.960] But it was prospective, essentially.
[01:26:13.960 --> 01:26:17.480] She was as rigorously monitored as a human being can be monitored.
[01:26:17.480 --> 01:26:31.160] They drained the blood out of her brain in order to fix the aneurysm, and there was 30 minutes of no blood flow whatsoever to her brain with electrical silence in her brainstem and cortex.
[01:26:31.160 --> 01:26:34.680] And she watched the operation from the ceiling of the room.
[01:26:35.000 --> 01:26:38.440] And told them details.
[01:26:38.440 --> 01:26:42.120] I mean, I know Bob Spetzler, I know the surgeon who did it.
[01:26:42.120 --> 01:26:46.680] And he said, hey, I don't doubt that this is what she said.
[01:26:46.680 --> 01:26:51.960] However, you simply can't; there's no way in an uncontrolled study to rule out post-experimental cueing.
[01:26:52.440 --> 01:26:55.400] Look, she woke up an hour later, two hours later.
[01:26:55.400 --> 01:26:56.840] Maybe she imagined this.
[01:26:56.840 --> 01:27:03.560] Maybe, you know, again, this is why we need to do science, which we need to do in an unbiased way to try to test.
[01:27:04.360 --> 01:27:07.320] But a million people die each year in the U.S.
[01:27:08.360 --> 01:27:14.200] So we should be able to collect data, and people are trying, like Sam Parnia.
[01:27:15.160 --> 01:27:18.040] Yeah, so we should have a little bit more evidence.
[01:27:18.360 --> 01:27:20.520] Yeah, there's this one patient and this one.
[01:27:21.240 --> 01:27:24.280] This is not what Carl Sagan calls extraordinary evidence.
[01:27:25.560 --> 01:27:31.560] I think, I mean, Sagan, I think, is exactly right, that extraordinary claims require extraordinary evidence.
[01:27:31.560 --> 01:27:38.360] The thing is that the extraordinary claim here is that all of these people, what amounts to millions of people, are wrong.
[01:27:38.680 --> 01:27:40.600] That's the extraordinary claim.
[01:27:40.600 --> 01:27:51.600] And the extraordinary claim in naturalist or materialist neuroscience is that mental predicates can arise from brain predicates.
[01:27:51.840 --> 01:27:58.960] That is, the extraordinary claim is that concepts and subjective experience arise from mere matter.
[01:27:59.120 --> 01:28:00.800] You see it every day in a scanner.
[01:28:00.800 --> 01:28:06.160] If you put a person in a scanner and you ask them to think abstract thoughts, they'll see activity in particular parts of their brain.
[01:28:06.640 --> 01:28:08.320] What you're seeing is correlation.
[01:28:08.320 --> 01:28:09.520] We're talking about causation.
[01:28:09.520 --> 01:28:10.720] It's a different thing.
[01:28:11.200 --> 01:28:12.640] No one questions correlation.
[01:28:12.640 --> 01:28:13.840] No one questions that.
[01:28:13.840 --> 01:28:17.040] There's no question that correlation goes on.
[01:28:17.040 --> 01:28:25.360] Causation is a completely different thing, and no one has the faintest clue as to how that could even happen, let alone how it does.
[01:28:25.520 --> 01:28:26.320] Of course we do.
[01:28:26.960 --> 01:28:32.160] I mean, there are lots of, you might not believe the books, but there are lots of theories written about how intelligence arises.
[01:28:32.320 --> 01:28:35.680] Read about the global neuronal workspace, about how intelligence arises in the brain.
[01:28:36.400 --> 01:28:40.400] And the number of theories is the evidence that no one understands it.
[01:28:40.400 --> 01:28:42.640] Because if you understand it, you've got one theory.
[01:28:43.040 --> 01:28:44.800] The point is people are groping.
[01:28:45.280 --> 01:28:49.040] People are groping for theories because they don't have an explanation.
[01:28:49.360 --> 01:28:55.680] The study of intelligence, natural and artificial, is in its infancy.
[01:28:55.680 --> 01:28:58.320] It's really only in, you know, 100 years old.
[01:28:58.320 --> 01:29:03.920] Yeah, if you compare us against a mature science like physics, where there is one theory, yes, we're not at that stage yet.
[01:29:04.240 --> 01:29:10.160] So do you believe that in split brain surgery that there are two conscious human beings inside the brain?
[01:29:10.320 --> 01:29:16.640] Yes, in a complete split-brain commissurotomy, there will be two conscious beings: one that can talk, and the other one that cannot.
[01:29:17.120 --> 01:29:18.720] That's a pretty extraordinary claim.
[01:29:19.920 --> 01:29:20.880] Why don't people feel that?
[01:29:21.040 --> 01:29:23.360] And a Nobel Prize was given out for it, indeed.
[01:29:24.720 --> 01:29:28.400] Why don't people feel, I mean, these people feel completely normal.
[01:29:28.400 --> 01:29:28.960] Completely normal.
[01:29:29.120 --> 01:29:32.760] Because when you say 'people,' you're talking to the left hemisphere.
[01:29:32.920 --> 01:29:39.720] If you silence the left hemisphere using sodium amytal, then what remains is someone who isn't very eloquent anymore.
[01:29:39.720 --> 01:29:41.320] They can still say yes, no.
[01:29:41.320 --> 01:29:42.520] They can still sing.
[01:29:42.520 --> 01:29:49.560] They can still, of course, express the entire gamut of human emotions, but they can't talk with fluency anymore.
[01:29:49.560 --> 01:29:51.400] So you think that's a different person?
[01:29:52.040 --> 01:29:53.400] It's a different person.
[01:29:54.680 --> 01:29:57.640] And partly, they have different memories.
[01:29:57.640 --> 01:29:58.040] Yes.
[01:29:58.600 --> 01:30:03.000] In patients who have cerebral palsy, it's very common.
[01:30:03.000 --> 01:30:10.040] In fact, one of the biggest problems they have is when they try to move, for example, if they try to move their arm, their leg will move also.
[01:30:10.040 --> 01:30:16.280] That is, there's a lack of integration of neurological function in these patients.
[01:30:16.280 --> 01:30:19.640] Do you believe these represent separate centers of consciousness in these patients?
[01:30:20.280 --> 01:30:22.120] I simply don't know enough about them.
[01:30:22.600 --> 01:30:24.920] In the corpus callosum, it's very clear.
[01:30:25.160 --> 01:30:31.960] They have the callosal fibers cut, so now they have two essentially, not totally, but relatively, independent cortical hemispheres.
[01:30:31.960 --> 01:30:33.800] I don't know, but for palsy, I simply don't know.
[01:30:33.800 --> 01:30:42.120] So if these people were to, say, for example, sign their name, would the right hemisphere sign a different name than the left?
[01:30:42.280 --> 01:30:48.680] I mean, I don't even understand the notion of two separate consciousnesses in one human being.
[01:30:49.240 --> 01:30:50.840] That doesn't even make sense.
[01:30:51.480 --> 01:30:52.760] Why is this so difficult to understand?
[01:30:53.000 --> 01:30:54.920] Two conscious entities in one skull.
[01:30:55.080 --> 01:30:55.720] I don't see.
[01:30:55.720 --> 01:31:00.120] In fact, there's a novel written by a philosopher about it out of Toronto.
[01:31:00.440 --> 01:31:02.280] It's very funny, this idea.
[01:31:02.520 --> 01:31:08.920] It involves a jury trial, where the left hemisphere committed the crime, but the right one is innocent of the crime.
[01:31:09.960 --> 01:31:11.720] It's a novel because it's fiction.
[01:31:12.600 --> 01:31:13.320] No, it's not fiction.
[01:31:13.480 --> 01:31:15.360] It's what the okay, that's all right, right.
[01:31:15.520 --> 01:31:19.360] Yeah, a lot of this depends on what you mean by two separate consciousnesses.
[01:31:14.920 --> 01:31:21.840] I mean, I feel I am multiple selves.
[01:31:21.920 --> 01:31:29.280] There's the future Shermer, I'm going to be different tomorrow than I am today, and I operate on that, and I set up conditions now for myself.
[01:31:29.520 --> 01:31:29.920] No, I know.
[01:31:30.400 --> 01:31:43.040] The claim here is that, if I'm split-brained, the 'me' you're really talking to, Christof, is the left hemisphere, and the right hemisphere may as well be on the dark side of the moon with respect to my awareness.
[01:31:43.120 --> 01:31:48.720] I would not be able to access the emotions and the experiential state of the right hemisphere.
[01:31:48.720 --> 01:31:49.680] That's a claim.
[01:31:50.000 --> 01:31:50.560] Okay.
[01:31:51.520 --> 01:31:55.200] So, Dan Dennett had a nice phrase called burden tennis.
[01:31:55.200 --> 01:31:57.200] You know, who has the burden of proof, right?
[01:31:57.520 --> 01:31:58.720] The burden of proof is on you.
[01:31:58.720 --> 01:32:00.320] Whack, no, the burden of proof is on you.
[01:32:00.320 --> 01:32:01.120] Whack, no, okay.
[01:32:01.120 --> 01:32:04.000] So, you know, the extraordinary claims and all that stuff.
[01:32:04.000 --> 01:32:07.200] Okay, so it depends on where your starting point is.
[01:32:07.200 --> 01:32:14.480] I guess in terms of materialism, where the brain is everything, that's the unusual belief in the world.
[01:32:14.480 --> 01:32:16.800] Most people are dualists, they're natural-born dualists.
[01:32:17.040 --> 01:32:20.000] That's what people have thought throughout history.
[01:32:20.000 --> 01:32:23.120] But, okay, let's turn to the next subject: free will.
[01:32:23.840 --> 01:32:30.880] So, the three positions are determinism, compatibilism, and well, I guess libertarian free will, as it's called.
[01:32:30.880 --> 01:32:36.640] Michael, I gather you would call yourself a libertarian free will defender, right?
[01:32:37.760 --> 01:32:40.400] Yeah, my view of free will, right?
[01:32:40.400 --> 01:32:41.200] I'm a libertarian.
[01:32:41.200 --> 01:32:49.120] I think that free will is what Aristotle would call a rational appetite, meaning, as opposed to sensitive appetite.
[01:32:49.120 --> 01:32:52.240] Sensitive appetite is from the brain, from the body.
[01:32:52.240 --> 01:32:56.640] You're hungry, you're tired, you do this or that, you have lust, whatever.
[01:32:56.960 --> 01:33:01.880] Rational appetite is where you have a desire that's based on reason.
[01:32:59.840 --> 01:33:06.280] For example, you read the Ten Commandments, they make sense to you, and you say, I'm going to follow these things.
[01:33:06.600 --> 01:33:08.200] That's what free will is.
[01:33:08.440 --> 01:33:10.600] And I think free will is real.
[01:33:10.760 --> 01:33:21.560] First of all, I think it's self-refuting to deny free will, because if you deny free will, what you're saying is that the very act of saying you deny free will is itself determined.
[01:33:21.640 --> 01:33:24.920] But the determinist would say you were determined to say that you deny it.
[01:33:25.320 --> 01:33:26.360] Right, right, right, right.
[01:33:27.240 --> 01:33:30.760] But why would you then trust that opinion if you didn't choose it freely?
[01:33:30.760 --> 01:33:32.120] Okay, Christophe.
[01:33:33.400 --> 01:33:34.360] I have free will.
[01:33:34.360 --> 01:33:35.320] I believe in free will.
[01:33:35.320 --> 01:33:49.240] I believe in the libertarian free will, but it's based on an intrinsic powers ontology, where the real causal actors are the intrinsic powers inherent in a physical substrate like my brain.
[01:33:49.240 --> 01:33:52.600] And if I lose my brain, then there's no more free will.
[01:33:52.600 --> 01:33:54.040] In fact, there's no more mind.
[01:33:54.040 --> 01:34:01.240] But I do believe in, contrary to most scientists, I do believe in free will, in the classical libertarian free will.
[01:34:01.800 --> 01:34:06.040] But you're not suggesting there's a little homunculus up there making the decisions for you.
[01:34:06.040 --> 01:34:06.440] Right?
[01:34:06.440 --> 01:34:06.680] No.
[01:34:06.680 --> 01:34:07.480] No, it's my point.
[01:34:07.560 --> 01:34:08.840] It's exactly as Mike said.
[01:34:08.840 --> 01:34:13.400] When I am faced with a moral dilemma, do I do A or do I do B?
[01:34:13.400 --> 01:34:19.800] Then if I deliberate, if I have a clear fork, if I'm consciously aware of A and non-A, let's just keep it simple.
[01:34:20.120 --> 01:34:21.800] A versus non-A.
[01:34:21.800 --> 01:34:23.560] I think I go through the reasoning.
[01:34:23.560 --> 01:34:24.600] Why should I do A?
[01:34:24.600 --> 01:34:26.200] Why should I do non-A?
[01:34:26.200 --> 01:34:27.240] What are the consequences?
[01:34:27.240 --> 01:34:30.360] And I decide to do non-A, let's say.
[01:34:30.840 --> 01:34:33.480] Then this is a freely taken decision.
[01:34:33.480 --> 01:34:44.360] Yes, there's a correlate with the underlying brain, but the actual decision happens in consciousness, which is different, which is certainly different from its substrate, right?
[01:34:45.120 --> 01:34:47.200] In that sense, I'm not a classical materialist.
[01:34:47.440 --> 01:34:53.760] So, the Benjamin Libet experiments, you would explain as just a different part of the brain making the decision before you're consciously aware of it?
[01:34:53.760 --> 01:34:54.880] No, it's a correlate.
[01:34:55.280 --> 01:35:00.240] No, clearly, the Libet experiment, of course, is highly contrived, right?
[01:35:00.400 --> 01:35:06.000] You're being trained, you're sitting there, and you're supposed to watch a clock and then lift the hand.
[01:35:06.000 --> 01:35:06.960] It's pretty meaningless.
[01:35:06.960 --> 01:35:15.440] In other words, there's no moral judgment implied, or any behavioral consequence, whether I lift my hand now or lift it one second later.
[01:35:15.440 --> 01:35:24.240] It's not like I'm being asked actually to choose something in a restaurant or to make a choice between mate A or mate B or job A or job B, right?
[01:35:24.240 --> 01:35:32.000] So those sort of experiments haven't been done yet because they're, of course, much more difficult to do in an experimental context.
[01:35:32.000 --> 01:35:38.000] But yeah, there will clearly be correlates in the brain, but those are not the true causes.
[01:35:38.000 --> 01:35:42.400] But that's according to this intrinsic power ontology.
[01:35:43.120 --> 01:35:45.360] Michael, does that surprise you to hear that?
[01:35:46.560 --> 01:35:53.200] Well, no, I mean, Libet also found, and again, I think there are tremendous limitations to his experimental method.
[01:35:53.200 --> 01:35:58.560] I mean, as I said, pushing a button doesn't have too much to do with the exercise of free will in real life.
[01:35:58.560 --> 01:36:06.480] But he found that the decision to veto pushing the button was electrically silent, which he described as free won't.
[01:36:08.080 --> 01:36:11.600] But I don't, I mean, his experiments are fascinating, and he was a fascinating man.
[01:36:11.600 --> 01:36:12.800] I think he was a woman.
[01:36:13.040 --> 01:36:17.040] But I mean, does Christoph being a libertarian free will defender surprise you?
[01:36:18.400 --> 01:36:18.960] No, it doesn't.
[01:36:18.960 --> 01:36:23.520] I mean, it surprises me just because most neuroscientists don't defend free will.
[01:36:23.520 --> 01:36:28.920] But I think Christoph is a very deep thinker and has a very open mind.
[01:36:28.800 --> 01:36:33.960] And I actually believe personally that no one actually denies free will.
[01:36:37.800 --> 01:36:38.600] The reason is...
[01:36:41.800 --> 01:36:42.120] Right.
[01:36:42.120 --> 01:36:44.680] I said, Dan, are you free to choose your wine?
[01:36:44.680 --> 01:36:45.080] Right, right.
[01:36:46.040 --> 01:36:46.680] And he did choose.
[01:36:46.840 --> 01:36:48.520] I said, see, wasn't that reason?
[01:36:48.520 --> 01:36:48.920] Oh, no.
[01:36:49.160 --> 01:36:50.120] It's all predetermined.
[01:36:50.360 --> 01:36:55.240] Because I think that what you believe is not merely what you say, it's what you do.
[01:36:56.680 --> 01:37:01.000] If you ask an embezzler, do you believe in being financially honest?
[01:37:01.000 --> 01:37:04.440] If he says yes, you know he's lying because that's not what he does.
[01:37:04.840 --> 01:37:15.720] So if you take a person who denies free will and you go and you pour your coffee on his laptop, he's going to be very upset with you for ruining his laptop.
[01:37:15.720 --> 01:37:17.480] And you can say, hey, I didn't have free will.
[01:37:17.480 --> 01:37:18.520] I couldn't have chosen.
[01:37:18.520 --> 01:37:20.360] You might as well blame the coffee cup.
[01:37:20.360 --> 01:37:21.560] But he won't buy that.
[01:37:21.560 --> 01:37:22.760] He won't accept that.
[01:37:22.760 --> 01:37:26.200] So everybody believes in free will in their everyday life.
[01:37:26.600 --> 01:37:28.360] That's a compatibilist view, right?
[01:37:28.360 --> 01:37:33.800] People believe it for practical reasons and for moral reasons to be able to punish people and reward people.
[01:37:35.080 --> 01:37:36.840] Well, you know, there's a.
[01:37:37.160 --> 01:37:53.720] But what about the evidence, Mike, that, despite my belief, you can stimulate particular parts of the brain, in this case the parietal cortex, and get the feeling people report; you know, it's quite striking if you read the dialogue with these patients.
[01:37:53.720 --> 01:37:55.640] Oh, suddenly I felt a will.
[01:37:55.640 --> 01:37:57.480] I felt an urge to move.
[01:37:57.800 --> 01:38:03.640] Well, yeah, sometimes you actually do get movement if you stimulate at a higher current amplitude.
[01:38:03.640 --> 01:38:12.360] Well, first of all, an actual stimulation of the will would be one where you wouldn't realize it was connected to the stimulation.
[01:38:12.360 --> 01:38:14.560] That is, that you would think it was your own will.
[01:38:13.960 --> 01:38:20.160] And Penfield did that a lot, and he could never stimulate will in his experiments.
[01:38:20.480 --> 01:38:23.440] But you can alter the will just by having a few drinks.
[01:38:23.440 --> 01:38:25.440] I mean, you don't need to stimulate the brain.
[01:38:25.440 --> 01:38:35.280] Yeah, but here we're talking about more recent neurosurgeons doing this; I'm thinking about the French group, Desmurget, who have published several papers on it, one in Science, where they stimulate particular parts.
[01:38:35.280 --> 01:38:50.160] And then Itzhak Fried, again, another colleague of yours, stimulating the anterior cingulate, where they get a very discrete feeling, either of intentionality, or where the patient himself says, I suddenly have the will to move.
[01:38:50.480 --> 01:38:50.960] Okay.
[01:38:51.600 --> 01:38:52.640] I'm not familiar with it.
[01:38:53.200 --> 01:38:54.160] I have to take a peek at it.
[01:38:54.160 --> 01:39:04.000] You would have to sort out emotional states, which can certainly be stimulated in the brain, from will as an isolated thing.
[01:39:04.960 --> 01:39:11.760] This is relatively, you know, if you read the account, this is not confounded with emotion.
[01:39:11.760 --> 01:39:13.680] Yeah, I grant you, you can get emotion.
[01:39:13.680 --> 01:39:17.120] This is just, for example, they have a desire to roll their tongue.
[01:39:17.120 --> 01:39:19.360] They have a desire to move the mouth.
[01:39:19.360 --> 01:39:26.640] In one case of Itzhak Fried's, they have this girl, and every time he stimulates somewhere in prefrontal cortex, she has this desire.
[01:39:26.640 --> 01:39:28.880] She finds the situation incredibly funny.
[01:39:28.880 --> 01:39:30.080] Talking about abstract.
[01:39:30.080 --> 01:39:30.800] She doesn't know why.
[01:39:30.800 --> 01:39:32.640] She said, why are you guys so funny?
[01:39:32.640 --> 01:39:38.000] And then he stimulates again and again, the patient reports this feeling of mirth, of hilarity.
[01:39:38.000 --> 01:39:38.320] Yeah, sure.
[01:39:38.320 --> 01:39:40.720] I mean, you can get that with gelastic seizures.
[01:39:40.720 --> 01:39:45.360] Seizures in the hypothalamus can make you think things are funny.
[01:39:45.600 --> 01:39:48.320] But that's not will that follows on reason.
[01:39:48.560 --> 01:39:55.840] The only kind of free will I believe in is the abstract judgment that something is morally right or wrong, based on reason.
[01:39:57.040 --> 01:40:01.400] But here again, the difference between you two would be where the will is located for you, Michael.
[01:39:59.840 --> 01:40:03.560] I presume it's in the soul, the mind.
[01:40:07.000 --> 01:40:09.400] I mean, there's no okay.
[01:40:09.400 --> 01:40:17.480] So, your answer to my question about where Aunt Millie's mind goes when her brain dies of Alzheimer's: Deepak says it returns to consciousness as the ground of all being, or whatever.
[01:40:17.480 --> 01:40:18.680] That's not what you mean.
[01:40:18.680 --> 01:40:21.320] You mean, well, I don't know what you mean, actually.
[01:40:21.720 --> 01:40:25.640] Where, like, after the death of your brain and body, where is your soul?
[01:40:25.640 --> 01:40:26.440] Where does it go?
[01:40:26.920 --> 01:40:37.800] Well, I believe that your soul survives death because it has immaterial powers that make it incapable of the disintegration that is characteristic of material things.
[01:40:37.800 --> 01:40:45.320] And I believe that one goes to heaven, to God, and that ultimately we are resurrected.
[01:40:45.320 --> 01:40:46.760] But that's not a scientific belief.
[01:40:47.480 --> 01:40:52.760] I understand, but just out of curiosity, is it the physical body that's resurrected or just the soul?
[01:40:52.760 --> 01:40:53.640] Physical body, yeah.
[01:40:53.720 --> 01:41:03.320] Only, well, as St. Paul said in 1 Corinthians 15, it's a spiritual body; it's physical, but it has very different qualities: it doesn't age.
[01:41:03.960 --> 01:41:04.760] Correct, yeah.
[01:41:04.760 --> 01:41:08.120] And how old are you when you're resurrected? That's a very interesting question.
[01:41:08.120 --> 01:41:10.040] Um, I'm going to be 30.
[01:41:10.040 --> 01:41:10.840] Yeah, right.
[01:42:10.400 --> 01:42:16.000] Theologically, I believe, at least in the Catholic tradition, I don't know that that's worked out.
[01:42:16.000 --> 01:42:30.880] What's kind of interesting is that in near-death experiences, people describe seeing loved ones on the other end of the tunnel, and very often they describe them as being 30, meaning they describe them as being young and healthy, not older and infirmed.
[01:42:30.880 --> 01:42:32.160] I certainly wish for that.
[01:42:32.960 --> 01:42:36.000] I am rooting for 30 like you wouldn't believe.
[01:42:36.160 --> 01:42:42.720] Me too, but you know, I'm 70, so if I am resurrected at 30, where are those 40 years of memories I've had?
[01:42:42.720 --> 01:42:49.920] And everything that, well, all of your memories are gone, because memory is a matter of the brain.
[01:42:49.920 --> 01:42:53.200] But there is knowledge that's not memory.
[01:42:53.200 --> 01:42:59.440] That is, there's abstract knowledge, and you'll still have your abstract knowledge.
[01:42:59.840 --> 01:43:05.040] Memory itself dissolves with the brain because it's part of the brain.
[01:43:05.040 --> 01:43:06.080] Christoph?
[01:43:06.720 --> 01:43:13.440] Yeah, I mean, I find this all completely ad hoc and arbitrary, Mike.
[01:43:13.440 --> 01:43:14.160] I'm sorry.
[01:43:14.160 --> 01:43:14.560] That's okay.
[01:43:14.800 --> 01:43:16.000] Particularly immortality.
[01:43:16.000 --> 01:43:21.200] There's nothing in physics or in science that, well, clearly everything has a finite lifetime.
[01:43:21.200 --> 01:43:23.280] Protons have a finite lifetime.
[01:43:23.280 --> 01:43:28.400] So I understand you can believe that, and that's fine, but it's a total ad hoc assumption.
[01:43:28.400 --> 01:43:30.840] In addition to everything else, we are mortal.
[01:43:30.840 --> 01:43:31.480] Yeah.
[01:43:31.800 --> 01:43:33.240] We all wanted to be immortal.
[01:43:33.240 --> 01:43:34.600] That doesn't make it so.
[01:43:34.600 --> 01:43:35.240] Right.
[01:43:35.240 --> 01:43:36.920] Well, first of all, I'm not sure.
[01:43:36.920 --> 01:43:41.080] I mean, there are plenty of people who don't relish the idea of immortality.
[01:43:41.480 --> 01:43:42.280] I know people who are terrified.
[01:43:43.080 --> 01:43:44.680] I mean, forever, that's a long time.
[01:43:45.080 --> 01:43:46.520] It's a long time than I felt.
[01:43:47.160 --> 01:43:52.360] I recall when my son, I would talk to him about this when he was a kid, and he was terrified of it.
[01:43:52.360 --> 01:43:54.280] He said, Do you mean forever?
[01:43:54.280 --> 01:43:57.960] Like, the whole concept of forever, he just couldn't picture it.
[01:43:58.360 --> 01:44:01.880] Christopher Hitchens had a great bit about this when he was dying of cancer.
[01:44:01.880 --> 01:44:13.000] He wrote about this in Vanity Fair, and you can get his book on this, Mortality, in which you're at a party and you're tapped on the shoulder and told you have to leave.
[01:44:13.000 --> 01:44:15.320] And worse, the party's going to go on without you.
[01:44:15.320 --> 01:44:17.160] And he's like, oh, no, that's such a bummer.
[01:44:17.240 --> 01:44:18.760] He goes, but it could be worse.
[01:44:18.760 --> 01:44:24.040] What if you're at the party and you're tapped on the shoulder and told you can never leave the party?
[01:44:24.040 --> 01:44:24.920] Right, right.
[01:44:24.920 --> 01:44:26.360] Well, that's even worse.
[01:44:26.360 --> 01:44:26.840] Right.
[01:44:27.080 --> 01:44:29.240] That's Jean-Paul Sartre, No Exit.
[01:44:29.400 --> 01:44:29.720] Right.
[01:44:31.720 --> 01:44:33.240] Is that where Hitch got that?
[01:44:33.240 --> 01:44:33.480] Okay.
[01:44:33.800 --> 01:44:34.920] So, No Exit.
[01:44:35.160 --> 01:44:44.920] Yeah, I've never thought of annihilation as necessarily being such a terrible thing because if you think about it, if you don't exist after you die, you're in the same state as you were before you lived.
[01:44:44.920 --> 01:44:50.360] And we don't recall, I don't recall 1930 as being a terrible time because I wasn't around.
[01:44:50.520 --> 01:44:59.320] So my belief in eternal life is based on my Catholic faith and on my understanding of hylomorphism and near-death experiences.
[01:44:59.320 --> 01:45:00.520] It's not wishful thinking.
[01:45:00.520 --> 01:45:07.560] And frankly, it's rather terrifying because I know that I got to behave myself or my eternity could be in a very hot place.
[01:45:08.680 --> 01:45:11.560] Now, you don't actually believe in a literal hell, do you?
[01:45:11.560 --> 01:46:58.920] Oh, yeah, sure, absolutely. Well, and what would that be, where your soul goes? I think it's to be apart from God. I think to be separated from God is the worst thing imaginable, and it's much worse than fire. And I sure hope nobody experiences it; I certainly don't want to. Okay, well, Christof has been working on the hard problem of consciousness for a long time. You wrote about it in your book; in fact, he lost the bet with David Chalmers, not that it could never be solved, only that it wasn't solved in 25 years. What is your answer to the hard problem of consciousness, Michael? The hard problem of consciousness is an artifact of our Cartesian way of looking at the world. We've gotten rid of res cogitans; we have res extensa; we're meat machines, and we're all perplexed about how it is that meat machines could have experiences, when we were the ones who said that we can't have experiences. Human beings have subjective experience because we have souls, and souls entail subjective experience. We have conscious experience because we are complex organisms; our brain is the most complex piece of highly active matter in the known universe, and of course we're not the only ones. The other thing, as I mentioned: I grew up Catholic, and what always bothered me is we had dogs. I grew up around dogs, and I'd say, how can it be that these gorgeous creatures, who belong to our family, are not supposed to go to heaven? Either we both go to heaven or none of us goes to heaven. The idea that only humans are special and different from all other creatures never struck me as very plausible. I just lost my little Bichon, whom I loved, and, yeah, that's okay.
[01:46:58.920 --> 01:46:59.720] Thank you.
[01:46:59.720 --> 01:47:01.480] And yeah, I thought the same thing.
[01:47:01.480 --> 01:47:10.200] I mean, my understanding of the Catholic view on that is not that they don't go to heaven, but that their souls are not naturally immortal because they don't have intellect and will.
[01:47:10.200 --> 01:47:13.320] They don't have immaterial powers of the soul.
[01:47:13.320 --> 01:47:17.240] So that doesn't mean that God doesn't recreate them.
[01:47:17.560 --> 01:47:21.480] So they may still go to heaven, but they need to be made de novo.
[01:47:21.960 --> 01:47:24.440] We go to heaven with the souls we have.
[01:47:24.840 --> 01:47:33.640] And interestingly, children who have near-death experiences very commonly will encounter deceased pets on the other side of the tunnel.
[01:47:33.800 --> 01:47:36.280] They don't encounter deceased people as much.
[01:47:36.280 --> 01:47:40.440] Probably because they're young and the people they know generally haven't died yet.
[01:47:42.440 --> 01:47:44.040] But they do encounter pets a lot.
[01:47:44.040 --> 01:47:47.640] So I sure hope that our pets are on the other side of the tunnel.
[01:47:47.800 --> 01:47:49.960] I'd love to see my little dog again.
[01:47:50.920 --> 01:47:53.160] Everything will be made whole again.
[01:47:54.200 --> 01:47:55.080] I hope.
[01:47:55.080 --> 01:48:07.080] Well, when these are called near-death experiences, Michael, doesn't that suggest that perhaps there's something still working for most of these patients, except for the ones you... Yeah, I'm not sure about most.
[01:48:07.080 --> 01:48:11.160] I mean, some are, I mean, at least 9 million people in this country.
[01:48:11.160 --> 01:48:13.160] Some of them have some brain functions, some don't.
[01:48:13.160 --> 01:48:14.040] Who knows?
[01:48:14.440 --> 01:48:19.160] But I do think that near-death is kind of a bad word.
[01:48:19.560 --> 01:48:21.080] It's reversible death.
[01:48:21.080 --> 01:48:28.200] That is, you can be dead and be reversed, meaning that if your heart isn't beating and you have no brain function, you're dead.
[01:48:28.200 --> 01:48:30.120] I mean, that's as dead as it gets.
[01:48:30.360 --> 01:48:39.880] But you can be brought back, if you're brought back before it's permanent. Legally, though, you're only dead if there's irreversible loss of every function of the brain, right?
[01:48:40.280 --> 01:48:41.960] The Uniform Determination of Death Act.
[01:48:42.120 --> 01:48:42.280] Right.
[01:48:42.520 --> 01:48:48.320] Yes, but that's because the determination of death makes you eligible to be an organ donor.
[01:48:44.680 --> 01:48:50.640] And if you're not irreversibly dead, then that's murder.
[01:48:50.800 --> 01:49:00.720] But I mean, certainly Pam Reynolds, and I think no doubt many, many people who have near-death experiences, are genuinely dead.
[01:49:01.200 --> 01:49:07.440] In fact, it's been taught for generations that you can't do CPR on somebody who's not dead.
[01:49:07.440 --> 01:49:10.080] That is, when you call a code blue, they're dead.
[01:49:10.880 --> 01:49:16.560] We have a system called rapid response that we call if someone is not dead, but they're in bad shape.
[01:49:16.880 --> 01:49:22.160] If it looks like they're heading for a code, you call a rapid response, but you don't pump on their chest.
[01:49:22.320 --> 01:49:24.720] You pump on their chest if they're dead.
[01:49:24.720 --> 01:49:28.960] So near-death is kind of a bad, bad term for you.
[01:49:29.520 --> 01:49:30.320] Okay.
[01:49:30.640 --> 01:49:35.600] First off, Michael said, or you said about some of these questions.
[01:49:35.600 --> 01:49:38.240] I don't know, or scientists don't know yet.
[01:49:38.240 --> 01:49:38.720] Okay.
[01:49:39.280 --> 01:49:40.240] Fair enough.
[01:49:40.240 --> 01:49:47.760] Does that allow room for a faith-based religious interpretation in the following sense?
[01:49:47.760 --> 01:49:57.280] In William James's kind of pragmatism, an example I'll use from Martin Gardner, because he was one of the founders of the modern skepticism movement.
[01:49:57.280 --> 01:50:03.280] And, you know, buddies with the Amazing Randi and all the atheists: Carl Sagan, Asimov, they're all atheists.
[01:50:03.280 --> 01:50:06.720] And Martin, you know, kind of made a famous statement.
[01:50:06.800 --> 01:50:08.320] Well, actually, I'm a theist.
[01:50:08.320 --> 01:50:10.320] I believe in God.
[01:50:10.320 --> 01:50:28.400] And in the sense that Miguel de Unamuno and William James said: for questions that are really important in life, big questions that science cannot resolve or give us an answer to, and if it really affects your life, it's okay to go ahead and believe it if it works for you.
[01:50:28.400 --> 01:50:35.000] Now, he made clear, this doesn't mean, you know, that I think Uri Geller can really bend spoons or psychics can really read minds.
[01:50:35.240 --> 01:50:40.040] But for, you know, the God question, why there's something rather than nothing, the free will question.
[01:50:40.360 --> 01:50:43.560] And he said, I think there's a God, and I can't prove it.
[01:50:43.560 --> 01:50:48.040] I think the atheists have slightly better arguments than the theists do, but only slightly better.
[01:50:48.040 --> 01:50:49.080] And it's okay.
[01:50:49.080 --> 01:50:51.080] I'm not trying to convert anybody to anything.
[01:50:51.080 --> 01:50:53.560] I'm just saying it's what works for me.
[01:50:54.200 --> 01:50:56.280] He called that, that's called fideism.
[01:50:56.280 --> 01:51:00.440] Is there room for something like that in your worldview?
[01:51:01.080 --> 01:51:07.960] Yeah, particularly if you have one of these extraordinary transformative experiences, you're challenged because you've had this experience.
[01:51:07.960 --> 01:51:19.320] So as William James says, you come back with this noetic feeling that I've experienced something that cannot be explained, but you're still a scientist, you're still a reasoning person.
[01:51:19.320 --> 01:51:31.320] So you try to construct, given everything that I know, what are the fewest assumptions I can make that are compatible with both my experience and with the science that I practice?
[01:51:31.320 --> 01:51:37.960] Because we know science is by far the most successful technology humanity has ever invented to manipulate the world.
[01:51:37.960 --> 01:51:40.840] So you say, well, science is great at manipulating the world.
[01:51:40.840 --> 01:51:48.760] It may not explain all of metaphysics, but I think if you go into the metaphysical realm, you should keep the number of assumptions to a minimum.
[01:51:48.760 --> 01:52:01.240] Like idealism, in fact, you know, that everything fundamentally is mental; it seems much more elegant because it only requires me to accept the one reality that all of us are directly acquainted with.
[01:52:01.240 --> 01:52:07.720] None of us have ever directly interacted with an atom or with a star or with a black hole.
[01:52:07.720 --> 01:52:14.840] We only know those because we look at images, we see oscilloscopes, we experience other people telling us about things.
[01:52:14.960 --> 01:52:22.320] So fundamentally, everything we know, including our own existence, per René Descartes, comes to us through consciousness.
[01:52:22.320 --> 01:52:27.760] So therefore, I think a very elegant assumption is fundamentally everything is just mental.
[01:52:28.080 --> 01:52:29.840] And that's perfectly rational.
[01:52:29.840 --> 01:52:32.400] It makes the smallest number of assumptions.
[01:52:32.400 --> 01:52:38.720] And so it doesn't have these additional assumptions: immortality, plus the existence of a God, or hell, or other things.
[01:52:38.720 --> 01:52:42.400] Those are all extra assumptions one would have to make.
[01:52:42.400 --> 01:52:47.680] And I would like to make the minimum, I would like to have a worldview that makes a minimal number of assumptions.
[01:52:47.680 --> 01:52:48.320] Fair enough.
[01:52:48.320 --> 01:52:52.880] Michael, if Christoph and I are right, we'll never know.
[01:52:52.880 --> 01:52:56.320] But if you're right, Christophe and I will find out, I guess.
[01:52:56.320 --> 01:52:57.520] This is Pascal's wager.
[01:52:58.080 --> 01:52:59.040] Pascal's wager.
[01:52:59.760 --> 01:53:04.320] Well, I actually agree with Christoph very much about idealism.
[01:53:04.640 --> 01:53:17.200] And I think idealism, for overall metaphysics, is a beautiful way of looking at the world because the reality is that the universe is more like a mind than it is like a thing.
[01:53:17.600 --> 01:53:30.080] And one of the arguments for that, recall, Eugene Wigner made the comment about the unreasonable effectiveness of mathematics in explaining the natural world.
[01:53:30.080 --> 01:53:41.040] And an idealist would say, well, that's because physics is mathematics, and mathematics is an idea, and the natural world is a mind.
[01:53:41.360 --> 01:53:45.440] And I think that makes a lot of sense, but then you have to ask, whose mind is it?
[01:53:47.520 --> 01:53:59.560] St. Augustine said something that I think is one of the most beautiful concepts in Christianity, and that is that all that exists is a thought in the mind of God.
[01:53:58.720 --> 01:54:02.760] So each of us is an idea of God's.
[01:54:03.560 --> 01:54:08.440] And I think that's a very beautiful way of looking at things, and it has a lot of parsimony.
[01:54:08.680 --> 01:54:10.520] It's parsimonious.
[01:54:10.840 --> 01:54:13.320] All right, gentlemen, we're closing in on two hours.
[01:54:13.320 --> 01:54:14.920] Let me just wrap it up there. Again, the books:
[01:54:14.920 --> 01:54:16.760] The Immortal Mind.
[01:54:16.760 --> 01:54:18.520] And Then I Am Myself the World.
[01:54:18.520 --> 01:54:21.160] Just read both of these and decide for yourself.
[01:54:21.160 --> 01:54:21.800] They're great.
[01:54:21.800 --> 01:54:23.400] And listen to this conversation.
[01:54:23.400 --> 01:54:23.640] Thanks.
[01:54:23.880 --> 01:54:24.200] All right.
[01:54:24.200 --> 01:54:25.480] Thank you so much.
[01:54:25.480 --> 01:54:26.680] Thank you so much, Michael.
[01:54:26.680 --> 01:54:27.400] Christoph, thank you.
[01:54:27.560 --> 01:54:28.200] Thank you, Mike.
[01:54:28.200 --> 01:54:29.000] Thank you, Michael.
[01:54:29.480 --> 01:54:30.760] Very, very nice to meet you.
[01:54:30.760 --> 01:54:31.240] Thank you.
[01:54:31.240 --> 01:54:33.240] Yes, it was enjoyable.