Reclaiming agency before becoming semi-conscious humans

Before we debate whether AI can be conscious, we must confront a closer question: are we awake to our own agency before we hand it over to machines?

By Sreedhar Potarazu and Carin-Isabel Knoop

In our previous two pieces, Filling in the Blanks and Knowing Pain and Knowing Gain, we explored our faculties of perception and decision as components of our overarching framework of agency—defined here as acting with intent. We argued for the indispensability of judgment in domains in which algorithms cannot capture the whole reality of human experience.

Agency before consciousness

Today, we delve into an even more pressing question: what happens when we stop using machines to inform our judgment and instead allow them to replace it? Can machines replicate human beings, in whom agency is the acting and consciousness the being? Both are required: without agency, consciousness is ineffectual. Consciousness is awareness; agency is acting with intention. What use is electricity if it cannot express its energy through an agent? Electricity is raw potential; without a switch or a bulb, it remains unused. That is the tango between agency and consciousness.

Mustafa Suleyman, CEO of Microsoft AI, recently warned that we are on the brink of creating “Seemingly Conscious AI”: systems designed to simulate awareness. While not truly conscious, these systems mimic human-like behaviors and responses.

Whether AI will ever be truly conscious remains unsettled. But even before we debate that question, we should ask something closer to home. Are we sufficiently conscious of our own agency? Are we awake to what it means to act, to choose, and to bear responsibility before we delegate those functions to machines? In other words, before we appoint another agent to think for us, we should take stock of our own agency.

From suggestion to decision

The stakes for our agency are not trivial. We have already transitioned from suggestive AI systems that assist by offering predictions, such as autocomplete, to decisive AI, in which those predictions silently solidify into decisions. When we type “I have been meaning to tell you…” and autocomplete offers “I love you” or “I miss you,” the machine does not just finish our thought; it has narrowed it. And with generative AI, large language models are not merely finishing our sentences; they are writing them entirely.

In medical triage, algorithmic scoring systems can determine who receives urgent care. In hiring, automated screening tools exclude candidates before a human eye ever looks at a résumé. In law, AI-powered research and drafting increasingly shape which arguments even get tested in court. And in everyday life, autocomplete finishes our sentences, sometimes before we have fully formed the thought ourselves.

A tool that offers input preserves human agency, while a tool that decides for us begins to erode it. Over time, delegation without deliberation becomes abdication. As Carin-Isabel Knoop and colleagues have shown, our psychological vulnerabilities, such as the need for recognition, perfectionism, and loneliness, make us especially prone to over-dependence on systems that simulate empathy. When signals of affirmation from a machine become a substitute for human connection, we are outsourcing not only decisions but part of our very identity and agency. That potential loss should concern us all.

The asymmetry of training

What makes this moment even more unsettling is the growing divergence between how machines are being trained and how we, as humans, are allowing our own faculties of agency to atrophy. Large language models absorb billions of words, developing a statistical sense of syntax and meaning that enables them to predict with accuracy what comes next in a sentence or an argument. Vision models digest vast repositories of images, learning to recognize faces, tumors, and traffic patterns. These machines are, in effect, becoming masters of the very crafts that make us human: language, observation, and prediction.

Meanwhile, our own practices of language and observation are diminishing. We text in fragments, often stripped of nuance, replacing the subtlety of grammar with strings of emojis. We skim headlines rather than read deeply. We substitute the quick “like” for conversation. In visual culture, we scroll endlessly through images but rarely pause to observe with care. We capture experiences on our phones instead of living them, and we outsource our memory to the cloud. The result is that our machine agents are learning to speak and see more fluently, even as our own capacity to use words carefully and to notice the world with patience diminishes.

Who is the better agent here?

This asymmetry raises a haunting question: who is the better agent here? A machine learning to perceive patterns across troves of data, or the distracted reader, the hurried doctor, the fatigued parent, all of us skimming instead of seeing, no longer reading attentively or observing deeply? When machines begin to finish our sentences before we start them, they are not only predicting but pre-empting us. When they label and categorize the world for us, they subtly determine what we see and what we overlook. Agency, in this sense, is not only about who makes the final decision but about who notices most and who has the stamina to learn. Increasingly, the answer is not us.

If human agency requires the ability to perceive, resist, endure, and decide, then our trajectory should worry us. Machines are becoming better at perceiving patterns than we are. They never tire, grow impatient, or skim because they are distracted. We, by contrast, surrender endurance for convenience, resistance for comfort, and decision-making for ease. This divergence does not mean machines are necessarily conscious, but it does mean they are practicing, at scale, the habits that once distinguished human agency. We must reclaim those habits, or we risk becoming spectators in our own story.

The four disciplines of agency

Agency begins with perception. Attention is not passive input; it is selective, contextual, and shaped by values. A physician who skims an algorithm-generated alert may see the same vital signs but miss the nuance of a patient’s pain story. A recruiter who relies on a ranking score may miss true grit in a résumé. AI filters what it sees, and in doing so, it changes what we notice. The first casualty of delegation is often the breadth of attention.

Agency also requires resistance. Companies design interfaces to nudge. Algorithms steer us and comfort us with familiarity. An effective agent must resist such nudges when they conflict with broader goals. Maintaining skepticism, interrogating incentives, and recognizing manipulation are critical, whether that means resisting autoplay on Netflix or an algorithmic prompt to keep scrolling.

Endurance is another quality of agency. Decisions often require patience, tolerance for uncertainty, and the willingness to accept delayed or costly outcomes. Machines optimize for immediate results, but they do not suffer the reputational or ethical costs of a bad decision as we do. Endurance means, for instance, waiting for our doctor to discuss our lab results rather than rushing to self-diagnose online.

Finally, agency culminates in decision: the hard responsibility of aligning values with action. Machines can present options and rank them, but they cannot own the moral consequences. When thinking is delegated, moral responsibility can evaporate. Who bears the burden when a triage bot denies care? Who is accountable when a hiring model excludes candidates based on proxies for race or class? If we surrender decision-making to systems we do not understand or supervise, we erode the possibility of moral agency.

Whom do you serve?

The importance of agency is not only a modern concern. Scripture insists on it: “Choose you this day whom ye will serve” (Joshua 24:15) reminds us that the act of choosing is central to human dignity. Philosophy, too, has long emphasized it. As Jean-Paul Sartre put it, “Man is condemned to be free,” meaning that even in the face of uncertainty and constraint, we cannot escape the burden of decision. Vedanta, the ancient Indian school of thought, goes further, asking us to inquire into the very nature of the one who chooses. Its central question, “Who am I?”, frames agency not only as decision but as self-realization. To act wisely, Eastern philosophy tells us, requires knowing the agent behind the action, the witness consciousness that observes, discerns, and decides. In this way, scripture, philosophy, and Vedanta converge on a single truth: agency is the essence of what it means to be human.

We are psychologically primed to accept delegation. Less thinking feels easier, which is why AI is so appealing. The remedy is not to reject AI but to design it in ways that preserve and strengthen human agency. Systems should include friction that forces reflection rather than invisibly nudging users into default acceptance. They should make their limitations and uncertainty visible, so users understand the implications of the recommendations they receive. They should leave consequential choices to humans who remain accountable, rather than diffusing responsibility into opaque processes.
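For readers who build these systems, here is one minimal sketch, in Python, of what “friction by design” might look like. Everything in it, the Recommendation structure, the present_with_friction function, and the triage-style example, is our hypothetical illustration, not an existing tool or API; the structural point is simply that the system surfaces its own uncertainty and refuses to treat silence as consent.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    option: str        # what the system suggests
    confidence: float  # the system's self-reported confidence, 0.0-1.0
    rationale: str     # why it made the suggestion

def present_with_friction(rec: Recommendation) -> str:
    """Show a recommendation with its uncertainty made visible,
    and require an explicit human decision instead of a silent default."""
    print(f"Suggested option: {rec.option}")
    print(f"Stated confidence: {rec.confidence:.0%} (this estimate may itself be wrong)")
    print(f"Rationale offered: {rec.rationale}")

    # Friction: there is no pre-filled default. Accepting the suggestion
    # requires deliberately retyping it, not just pressing Enter.
    choice = input("Type the option you choose (or 'reject'): ").strip()
    if choice.lower() == "reject":
        return "human_override"
    return choice  # whatever is recorded was typed, and owned, by the human

# Hypothetical usage: a triage-style suggestion a clinician must actively confirm.
decision = present_with_friction(
    Recommendation(option="defer to specialist review",
                   confidence=0.62,
                   rationale="borderline score; limited training data for this case type")
)
print(f"Decision recorded, attributed to the human operator: {decision}")
```

The design choice worth noticing is that acceptance costs a deliberate act: the human must restate the choice, see the system’s uncertainty first, and remain the named author of the outcome.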

The last words belong to us

The psychosis risk Suleyman warns of, the idea that people will confuse performance with personhood, is a symptom, not the root. The deeper question is whether we, as individual and collective agents, are awake to our responsibilities. To perceive is to be present. To resist is to guard the self. To endure is to remain committed. To decide is to accept consequences.

If we sharpen our capabilities now through better design, policy, training, culture, and responsible use, we can have AI that augments human agency rather than substituting for it. That is how we get the best of technology without ceding the core of what it means to be responsible beings: the capacity to act, to care, and to answer for our choices.

Ask yourself, tonight, before you trust another auto-suggested answer, before you accept a model’s ranking, before you outsource yet another micro-decision: am I using this tool to amplify my agency or to abdicate it? Machines can predict and autocomplete our futures, but we are, and should remain, the only ones who choose them.

(Sreedhar Potarazu, MD, MBA, is an ophthalmologist, entrepreneur, and author who writes frequently on the intersection of medicine, technology, and business. Carin-Isabel Knoop leads the Case Research & Writing Group at Harvard Business School and is co-author of several works on human behavior, leadership, and organizational life in the digital age.)