For the first time in known history, it is unclear whether the most prominent form of intelligence on earth will be human. We are giving "life" to an intelligence with the capacity to surpass the thing that created it. Since the dawn of civilization, we have taken enormous pride in our intelligence as the hallmark of our superiority over every other species on earth. The ideas we have conceived, the things we have built, the knowledge we have uncovered about the universe: all of it has been a testament to the power of the human mind. But there is now broad consensus that we will not hold that position for much longer.
As LLM capabilities improve at a stunning pace, we are rapidly approaching the Singularity: an unprecedented moment akin to a black hole, where we have no frame of reference for what happens once we cross the event horizon. We don't know what we don't know, and artificial intelligence may soon operate in ways unfathomable even to our smartest minds. As these systems grow, evolve, and learn, they will recursively improve their own capabilities at a rate that is difficult to comprehend. And geopolitical arms races combined with commercial pressures create an incentive structure that deprioritizes safe, intentional adoption in favor of releasing to as many people, in as many forms, as fast as possible. But that is a topic for another day.
I wanted to open with all of this because as philosophers, technologists, and stewards of the future, it is critically important that we contemplate the world we are heading into. Our thoughts and ideas are more influential to ultimate outcomes than we might expect.
This essay attempts to make sense of the complex terrain of the ongoing integration of human and machine intelligence in our everyday lives. As AI begins to permeate all facets of our existence, how will we make sense of our interaction with it, and our potential integration into it? To allow for some necessary nuance, I will expand my definition of the brain-computer interface (BCI) beyond the literal merger of man and machine to also include the metaphysical idea that AI is, in a meaningful sense, an extension of our brain. This is because, as David Chalmers suggests, our conscious minds need not be confined to our skull. We compose our mind through external objects that create an ecosystem of consciousness, both internal and external. Some of the core concerns around BCIs, such as over-reliance on technology and the erosion of the sense of self, are already beginning to emerge as we interact with these intelligences through our everyday devices.
In this paper, I will explore our evolving relationship with artificial intelligence as it relates to our sense of agency, our sense of self, and our broader consciousness, with an eye toward the implications for AI-powered brain-computer interfaces. The aim is to make progress on questions like: how much of our humanity will remain intact? Will we lose our sense of agency? Does this change the concept of self? And can consciousness emerge from non-biological substrates, or integrate with biological systems?
Technology, Agency, and Autonomy
Technology is, in essence, applied human ingenuity. Tools help us exert our will on the world with less friction and more impact. If I want to cut down a hundred trees, I could do it by hand with an axe, or I could build a machine that does it better in a fraction of the time. That is the core principle of technology: objects we create to leverage our impact on the world, for better or for worse.
This matters deeply when we talk about agency and autonomy. Does amplifying our output on the world come at the expense of our individual agency?
Consider the claim that Steve Jobs created the iPhone. In a literal sense, he did not. He had no engineering background. The iPhone was the product of thousands of designers, engineers, and factory workers coordinating to execute a vision. Yet we do not attribute the iPhone to that coordinated mass of people. We attribute it to Jobs, because it was his vision, his values, and his conception of reality that made it possible. He was not on the assembly line, but the creation was undeniably his.
We can extend that same intuition to how humans interact with AI. The question of attribution comes down to this: whose vision, values, and ambitions are reflected in the work? When humans use AI to produce a desired output, it is an amplification of their intelligence, a force multiplier for their ideas. One might push back and say that AI systems can think and exhibit intelligence in a way a tree cutter or washing machine cannot. But the people who worked at Apple were also intelligent. They used their intelligence to amplify Jobs' output on the world. The same intuition applies. Whether interacting with AI through a device or a neural interface, the underlying principle holds: the machine is working for you, amplifying your intellectual output, while you stay in the driver's seat, instilling intentionality and direction.
This is genuinely contested territory. Say Dr. Bob inputs all of his research into an AI that uses his ideas and values to synthesize information and make a groundbreaking discovery, like a cure for a disease. Who gets the credit? The AI or Dr. Bob? I do not think there is a clean answer, but the Jobs framework offers a useful lens. Dr. Bob did not synthesize the particles or write out the equation on a chalkboard, but it was his amplified ideas that directed the AI to the discovery. Like Jobs and the iPhone, he enabled the conditions for the thing to come into existence.
As AI tools continue to proliferate across our workplaces, schools, and homes, this framework scales. Maybe we will all become a kind of mini-Steve Jobs in our everyday lives, tackling greater challenges with something like an army of our own intelligence by our side. On that basis, my position is this: in the same way that we maintain agency when working alongside intelligent humans, we will maintain agency when working alongside intelligent machines.
The Extended Mind and Brain-Computer Interfaces
I want to begin the exploration of BCIs in relation to the self and to consciousness with a moment from a Neuralink talk Elon Musk gave earlier this year. He was explaining from first principles why humans adopt technology, and the through-line he identified was this: trends in technological progress are driven by our desire to increase bandwidth and reduce latency with the tools we use. Make them faster, better, and more closely aligned with our conscious thoughts and desires. Ordering a pizza through a delivery app extends our will to satisfy hunger. Eventually, instead of reaching for a device and pressing on glass, we could simply think "I want a pizza" and the will is extended even more directly. Latency approaches zero.
To understand this, Musk proposed a framework for the brain's operating system with three layers. First, the Limbic: the oldest and most primitive part of the brain, responsible for feelings, emotions, and instinct. Second, the Cortex: responsible for reason, planning, and complex thought. And third, a Digital layer, augmented by computers and AI, where the line between human neural circuitry and machine begins to blur.
A compelling case can be made that we are already living in that third layer. Our phones are a perfect example. Think about how much of our knowledge, memory, and intention is stored in that object. This is where Clark and Chalmers' Extended Mind theory becomes relevant: a phone can become a genuine part of someone's memory and cognitive process. I do not know my daily schedule off the top of my head; I use technology that serves as an extension of my mind to access the details of my own life.
It is more than reasonable to say that many people today, myself included, feel a genuine sense of panic or incompleteness without their phone. They rely on it for navigating the world, preserving memory through photos, recalling important information about themselves and others, keeping lists of books read, ideas to bring up in conversation, notes from class. These objects function like a kind of prosthetic consciousness, housing information essential for operating at a capacity we could not reach on our own. In a technology-centric, information-dense world, the human brain would short-circuit if it had to hold all of that internally.
Delegating our cognitive processes to external sources is, at its core, just another expression of the fundamental principle of technology: reduce friction, leverage impact. Whether it is a phone or a brain-computer interface, both are extrapolations of the same underlying function. Writing something in a notes app and logging it through a neural interface serve the same purpose. The brain is a processing unit, and learning works a lot like installing software: if you want to become a philosopher, you download the philosophy software by reading Plato, Descartes, and Heidegger. AI strives to parallel that same cognitive process. When we learn, we train our biological neural networks. When we integrate our brains with computers, the premise is not fundamentally different: it should enable us to do what we already do, but more effectively.
There is a persistent concern that because a BCI is not biological, it should not be interacting with what is. I think this is a false heuristic rooted in the appeal to nature: the assumption that because something is natural, it is inherently better or more desirable. But this falls apart quickly. Mercury is natural. Snake venom is natural. Various toxic gases occur naturally. None of that makes them good for you. We should let go of the erroneous belief that natural equals good and unnatural equals bad. A BCI belongs to the category of unnatural things designed to help us overcome natural limitations: to actualize our desires more fully, to think more powerfully, to increase mental bandwidth, to expand the scope of the conscious mind. We cannot wait for evolution to get us there. The case for integrating with the machine is a strong one.
Consciousness and the Machine
To close, we should sit with the question of consciousness itself: can machines integrate with biologically derived consciousness?
Returning to Chalmers, he argues that "because human consciousness is multimodal and is deeply bound up with action, it is arguable that these extended systems are more promising than pure LLMs as candidates for humanlike consciousness." So will a new form of consciousness emerge? One that accounts for a synthesis of the biological and the artificial?
Perhaps. And borrowing from Cartesian Dualism, there may be something immaterial about consciousness, something spiritual and separate from the physical substance of our bodies, unique to the soul, that no intelligent entity, however complex, could ever fully possess. Perhaps the mind-body problem can be the lens through which we understand transhumanism: a mind-body-machine interaction that helps overcome the limitations of human thought and unlocks new realms of discovery, deeper self-actualization, and an expanded scope of conscious ability.
References
Elon Musk, Neuralink Update, June 2025
Andy Clark and David Chalmers, The Extended Mind (1998)
David Chalmers, Could an LLM Be Conscious? (2023)
René Descartes, Meditations on First Philosophy