Do Algorithms Serve Us, or Do We Serve Them?

This is the fourth article in the "Freedom in Crisis: Navigating Systems from Global to Personal" series. Having explored economic systems, democratic institutions, and social structures, we now examine the technological layer that increasingly mediates our experience.
I woke up this morning and reached for my phone before I was fully conscious. Within minutes, I had consumed information, responded to notifications, and made decisions—all within environments designed by algorithms I neither see nor understand.
By the time I had my coffee, invisible systems had already influenced what news I read, whose messages I prioritized, and what products I considered purchasing.
What happens to freedom when algorithmic systems don't just execute our commands but actively shape our choices, perception, and behavior, often without our awareness, let alone our consent?
Having explored how economic, political, and social systems impact our freedom, I'm drawn to perhaps the most elusive layer of all: the technological systems that increasingly mediate our relationship with reality itself.
Who do we become when algorithms increasingly shape our decisions?
The Illusion of Algorithmic Neutrality
I've been thinking about how we talk about algorithms in our daily lives. Have you noticed how people often deflect responsibility by blaming algorithms?
When a social media platform promotes harmful content: "That's just what the algorithm chose to highlight."
When a hiring system rejects qualified candidates from certain backgrounds: "The algorithm determined they weren't a good fit."
When a pricing system charges some people more than others: "The system optimizes for profit automatically."
What strikes me about these explanations is how we treat algorithms as if they're neutral forces of nature rather than human-created tools with specific goals and biases built into them. It's as if saying "the algorithm did it" somehow makes the decision beyond questioning or criticism.
I wonder if we're creating a new kind of authority figure in "the algorithm": something we can point to when we don't want to take responsibility for decisions that affect people's lives. It reminds me of how people used to say "the market decided" as if markets weren't human creations with rules and power dynamics baked into them.
I'm increasingly skeptical of algorithmic neutrality. Could it be that algorithms inevitably embody the values, assumptions, and biases of their creators? That they optimize for specific outcomes—typically engagement, consumption, and profit—at the expense of others? That they encode particular worldviews while presenting themselves as mere mirrors of reality?
Consider how this might work in practice: A content recommendation algorithm doesn't just "show what people want to see"—might it actively shape what people come to want? A hiring algorithm doesn't simply "identify the best candidates"—could it replicate and amplify existing patterns of who has been considered "best" in the past? A criminal risk assessment algorithm doesn't merely "predict who might commit crimes again"—is it possible it perpetuates the biases inherent in the criminal justice data it was trained on?
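To make the first of these questions concrete, here's a deliberately simplified simulation of a recommender that promotes whatever was clicked most in the past. Everything in it is hypothetical: the topic names, the "appeal" rates, and the update rule are invented for illustration, not taken from any real platform. The point is the feedback loop itself: exposure generates clicks, and clicks generate more exposure.

```python
# Toy feedback-loop sketch: exposure -> clicks -> more exposure.
# Topic names and per-exposure "appeal" rates are invented for illustration.
topics = ["news", "sports", "science", "outrage"]
appeal = {"news": 0.3, "sports": 0.3, "science": 0.3, "outrage": 0.7}

clicks = {t: 1.0 for t in topics}  # start every topic with one click

for _ in range(200):
    total = sum(clicks.values())
    for t in topics:
        exposure = clicks[t] / total        # shown in proportion to past clicks
        clicks[t] += exposure * appeal[t]   # clicks accrue at the topic's appeal rate

total = sum(clicks.values())
shares = {t: clicks[t] / total for t in topics}
```

In this run the more provocative topic ends up with well over half of all exposure: the system isn't passively "showing what people want," it's compounding an initial tendency into a dominant one.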
These questions lead me to others that feel even more fundamental: What does freedom mean when the choices presented to us have been pre-filtered by systems designed to drive specific behaviors? When our "personalized" experiences are actually optimized for someone else's goals? When the illusion of abundant choice masks increasingly narrow paths?
I suspect the myth of algorithmic neutrality serves those who benefit from hiding the human decisions embedded in these systems—the values prioritized, the assumptions encoded, the goals optimized for. What happens when it transforms political and ethical questions into technical ones, removing them from public debate and democratic oversight?
From External Control to Internal Influence
I've been thinking about how earlier systems of control operated primarily through external constraint—laws, physical barriers, explicit rules. But algorithmic governance seems different. It doesn't just restrict choices; it shapes desires. It doesn't prohibit actions; it makes alternatives invisible or inconvenient. It doesn't command obedience; it engineers compliance through what's called "choice architecture."
This isn't entirely new. Traditional marketing has long attempted to shape desires rather than just meet them. What's different now is the scale, precision, and invisibility of this influence.
Today's algorithmic systems operate with unprecedented granularity—targeting individuals rather than mass audiences, adapting in real-time to responses, and working largely beneath conscious awareness. While a radio broadcast or television commercial once announced itself as an attempt at persuasion, algorithmic influence often masquerades as neutral information or organic discovery.
This shift from external constraint to internal influence makes me question how power operates now. When a social media platform designs its interface to maximize attention capture, it doesn't force you to keep scrolling—but it makes stopping require more willpower than continuing. When a shopping algorithm shows you increasingly expensive versions of products you've viewed, it doesn't force you to buy—but it gradually recalibrates your sense of what's normal or reasonable to spend.
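The shopping example above can be sketched in a few lines. This is purely illustrative; the starting price, markup, and adaptation rate are made-up parameters, not a description of any real system. The recommender always surfaces an option slightly above the shopper's current reference point, and the reference point drifts toward whatever was just shown:

```python
# Hypothetical anchor-drift sketch: each recommendation sits a bit
# above the shopper's sense of "normal," and that sense adapts.
reference_price = 40.0   # what the shopper initially considers reasonable
markup = 1.10            # each shown option is 10% above the current anchor
adaptation = 0.3         # how far the anchor moves toward the shown price

anchors = [reference_price]
for _ in range(12):
    shown = reference_price * markup
    reference_price += adaptation * (shown - reference_price)  # anchor drifts upward
    anchors.append(round(reference_price, 2))
```

Nothing is ever forced: each shown price is only 10% above the current anchor, yet after a dozen nudges the shopper's sense of a "reasonable" price has quietly climbed by more than 40%.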
What does autonomy mean when the very infrastructure of decision-making is designed to produce particular outcomes? When the options we perceive as available—what information we encounter, what possibilities we imagine, what choices we consider—are themselves the product of systems optimized for goals that aren't our own?
Hannah Arendt, the political philosopher who studied totalitarianism and freedom, observed that "freedom can be contingent on material factors, but freedom is not identical with them." Today's algorithmic systems complicate this insight. They don't just alter material conditions; they reshape the cognitive and emotional landscape within which freedom is exercised. They don't just change what we can do; they change what we want to do.
The Attention Economy and Freedom
The attention economy—that system of algorithmic capture transforming human attention into the primary commodity of the digital age—reveals something crucial about freedom in our time.
There's a familiar saying, often repeated by critics like former Google design ethicist Tristan Harris: "If you're not paying for the product, you are the product." But this saying misses something important. We aren't just the product being sold; we're also the workers producing the valuable resource—our attention—that platforms harvest and sell to advertisers.
This extraction economy operates through increasingly sophisticated methods of capturing and maintaining engagement: endless scrolls, intermittent variable rewards, strategic notification timing, and emotionally manipulative content prioritization, most of which we barely notice, let alone understand. And these aren't bugs; they're features: deliberately designed patterns that override our intentions in service of platform metrics.
As Cal Newport explores in his book "Digital Minimalism," these systems aren't just engaging; they're explicitly designed to create dependency, engineered to bypass conscious decision-making and to cultivate compulsive usage patterns that serve business models rather than user wellbeing.
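The "intermittent variable rewards" mentioned above borrow directly from the slot machine: the payoff for refreshing is unpredictable, and it's the unpredictability, not the average payoff, that sustains the behavior. Here's a minimal sketch of the mechanic; the reward probability is an arbitrary stand-in, not a measured value from any platform:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def refresh_feed(reward_probability=0.3):
    """Each refresh *might* surface something novel, like a slot-machine pull."""
    return random.random() < reward_probability

pulls = [refresh_feed() for _ in range(1000)]
hit_rate = sum(pulls) / len(pulls)
```

Roughly a third of refreshes "pay off" on average, but because no single refresh is predictable, the tempting move is always one more pull. Behavioral psychology calls this a variable-ratio schedule, the reinforcement pattern most resistant to extinction.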
What happens to freedom when our attention is systematically fragmented and captured? When our cognitive resources are depleted through constant interruption and stimulation? When the environments where we make decisions are engineered to maximize impulsivity rather than intentionality?
The attention economy raises fundamental questions about what freedom requires in the algorithmic age. If freedom depends on some capacity for reflection, intention, and choice—for determining our own ends rather than serving others—then the systematic capture of attention represents a profound threat to liberty itself.
This isn't about weak willpower or poor personal choices. It's about power asymmetry: the gap between individuals making decisions within environments and the teams of engineers, data scientists, and behavioral psychologists designing those environments to override conscious intention.
Algorithmic Governance and Collective Freedom
The challenge extends beyond individual autonomy to collective self-governance.
Democratic systems depend on certain shared conditions: access to reliable information, public spaces for deliberation, and common reference points for debate. Algorithmic systems are rapidly transforming these conditions, often in ways that undermine democratic functioning.
Authoritarian regimes and aspiring autocrats have embraced algorithmic control enthusiastically. From Russia's apparent manipulation of social media to influence elections abroad, to China's social credit system that algorithmically scores citizens, to surveillance technologies that track dissidents, algorithms offer unprecedented tools for maintaining power. Even in democratic contexts, algorithmic manipulation of information environments can tilt electoral outcomes and policy debates in ways invisible to citizens.
This is not entirely new; media has long been used to disseminate propaganda, most extremely when Hitler used radio, the cutting-edge technology of his time, to shape mass emotion and behavior. But when algorithmic curation creates entirely different information environments for different citizens, the very notion of an informed public becomes tenuous.
When recommendation systems reward inflammatory content that drives engagement, public discourse suffers. When complex policy questions are reduced to whatever can capture attention in algorithmically defined feeds, nuanced collective decision-making becomes nearly impossible.
The result is a fracturing of shared reality that makes cooperative self-governance increasingly difficult. We aren't just divided by disagreement; we're operating with fundamentally different information, different perceptions of what issues matter, and different understandings of basic facts. We're effectively "living in different worlds."
Who benefits from this fragmentation? Primarily existing power structures—economic systems that benefit from consumers too distracted to organize, political systems that benefit from citizens too divided to mobilize, social systems that benefit from communities too fractured to build solidarity.
Does algorithmic governance represent not just a new method of control but a profound shift in how power operates? It works less through prohibition than through predictive manipulation—not preventing actions but making them increasingly predictable and manageable through systematic influence on the decision environment.
Reclaiming Agency in the Algorithmic Age
Given these realities, what might freedom require in the age of algorithmic control? How might we reclaim agency within systems designed to override it?
The typical response focuses on individual solutions—digital detoxes, attention management apps, personal discipline. But these approaches often inadvertently reinforce the very systems they aim to resist. They place responsibility on individuals to navigate environments explicitly designed to override individual intention. They treat collective problems as matters of personal choice.
More promising are collective responses that address the structural nature of algorithmic control:
Could legal and regulatory frameworks recognize attention as a protected resource rather than a commodity to be extracted? Might community-owned platforms optimize for human flourishing rather than engagement metrics? What would educational approaches that develop algorithmic literacy and critical awareness look like? How might we create design practices that enhance rather than undermine intentional choice and reflection?
Yet a fundamental challenge looms: How do we overcome the "small" fact that much of the tech we currently have (and that everyone uses) exists as monopolies run by billionaires who seem unconcerned with human flourishing beyond their own? How do we build alternatives when promising new technologies are routinely acquired and absorbed by these same power structures?
This pattern of consolidation isn't accidental. It reflects how capital flows in our current system—toward centralization, extraction, and control rather than distribution, regeneration, and freedom. The algorithmic attention economy is simply the latest manifestation of this deeper logic.
These aren't just technical questions but political ones—efforts to reclaim the power to determine how technological systems shape our individual and collective futures.
Perhaps the most radical form of resistance is creating spaces and relationships that operate according to different values entirely—connections that prioritize presence over productivity, depth over optimization, human flourishing over extraction. Not as escapes from technology, but as laboratories for different relationships with it.
Freedom Beyond the Algorithm
As I've moved through the layers of systems shaping our freedom—from economic structures to political institutions to social relationships and now to technological infrastructures—a pattern seems to emerge. I'm beginning to think freedom isn't just about removing external constraints but about examining and transforming the environments within which we make choices.
The algorithmic systems increasingly mediating our lives don't feel like just tools we use but architectures we inhabit—environments that shape perception, attention, desire, and behavior in ways that serve particular interests. Could freedom require not just navigating these environments but questioning and rewiring them?
This brings me to the next layer in our exploration: the body itself. If algorithms shape our digital environments, how might various systems of control extend to our physical embodiment? What happens to freedom when control reaches the most intimate level—our physical autonomy, our reproductive capacity, our very DNA?
That's where we'll turn next.
In the meantime, I'm curious: How do you experience algorithmic influence in your daily life? What practices help you maintain autonomy within technological systems? What would technology designed to enhance rather than extract freedom look like? I don't have definitive answers to these questions, but I believe exploring them together matters deeply for our shared future.
Next up: "Can Our Bodies Remain Our Own in an Age of Control?" where we'll examine how freedom relates to bodily autonomy in a time of intensifying regulation and surveillance of physical existence.
Support This Work
Questioning how algorithmic systems shape our freedom requires independence from the very attention economy being critiqued. If these explorations resonate—if you value analysis that examines the hidden influences on our autonomy—your support makes this work sustainable.