Math Meets Critical Theory: Jameson, Bhabha, and Spivak in Motion

There’s a moment, reading Homi Bhabha’s takedown of Fredric Jameson in The Location of Culture, when something clicks—not just in the argument, but in the style. Jameson is trying to solve culture like an equation: assign values, isolate variables, and arrive at a coherent output. Bhabha, meanwhile, is operating in a different mathematical universe entirely. He’s not solving; he’s deriving. His meanings shift, flow, contradict themselves, curve around colonial histories and linguistic fractures.

That’s when it hit me:
Jameson is doing algebra. Bhabha is doing calculus.

Jameson’s work, especially in Postmodernism, or, The Cultural Logic of Late Capitalism, constructs a kind of conceptual grid—a structured model for understanding cultural production under late capitalism. At one point, he tries to extract the entire cultural logic of Chinese revolutionary art from a few translated lines of a poem. Bhabha doesn’t just critique him—he highlights how great a leap it is to derive sweeping cultural meaning from such limited textual fragments, especially when filtered through translation and Western assumptions. It’s a perfect example of algebraic thinking: plug in limited data, assume fixed meaning, and solve for culture.

Bhabha won’t have it. The Location of Culture demands a theory in motion: layered, recursive, nonlinear. He’s writing from the margins, where meaning keeps shifting.

Algebra asks: what is the value? Calculus asks: how is it changing? Bhabha is telling us that culture can’t be understood without motion, history, and the messiness of change. Culture, for Bhabha, isn’t inherited wholesale—it’s negotiated in what he calls the Third Space: that unstable zone where meanings are constantly rewritten, identities shift by degrees, and history doesn’t sit still. It’s culture in motion—the kind you trace, not solve.

I found this perspective reinforced, unexpectedly, in Ruha Benjamin’s work. A sociologist of science and technology, Benjamin exposes how supposedly neutral systems—algorithms, predictive policing, medical technologies—inherit the biases of their creators. In Race After Technology, she warns against the “New Jim Code,” where racism is reproduced through technical design under the guise of objectivity.

Like Bhabha, Benjamin insists that to understand a system, you have to track not just what it does, but how it got that way. Present-day outcomes, she argues, are derivatives of historical forces. You can’t read a data point—or a person—without understanding its slope: the embedded assumptions, the legacy code, the inertia of injustice. She, too, resists the grid. Her critique unfolds across vectors—of race, design, history, and power—all moving.

As I played with this thought experiment, I wondered: if Jameson is algebra and Bhabha is calculus, does that make Gayatri Spivak’s Can the Subaltern Speak? the equivalent of imaginary numbers? Not quite real, not quite unreal, but necessary to complete the system. Spivak, a literary theorist and postcolonial scholar, famously questions whether the subaltern—the colonized subject denied access to institutional power—can ever be fully represented within Western discourse.

Her answer is purposefully paradoxical: the moment the subaltern is heard, she is no longer subaltern. This epistemic instability mirrors the strangeness of imaginary numbers—unintuitive, elusive, but essential for solving real problems. Spivak isn’t offering closure. You can’t solve her. But she makes sure we don’t mistake solvability for truth.

Thinking in motion—historically, culturally, structurally—is hard. It’s messy. It rarely resolves into tidy answers. But it’s the only way to read honestly.

If Jameson gives us a model, Bhabha gives us a method. Benjamin updates that method for the digital age, showing how inequality is encoded, inherited, and disguised. And Spivak reminds us that some voices remain outside the solvable system entirely.

Culture doesn’t sit still long enough to be solved. And maybe that’s the point.

Rewriting Meaning: AI, Authority, and the Politics of Simplification

Four thinkers – postcolonial theorists Gayatri Spivak and Homi Bhabha, and Science and Technology Studies scholars Ruha Benjamin and Paul Dourish – have each challenged my default belief that clarity is always generous. What follows isn’t an exhaustive treatment of their work, but a tracing of how each helped me see simplification not as neutral, but as deeply political: a site where meaning, power, and identity are continuously remade.

Whether simplifying a research paper or translating my brother’s confusing emoji usage, I used to believe that making something easier to understand was inherently helpful. But my use of AI tools like ChatGPT has made me question that instinct. Every simplification is also a decision – one shaped by systems trained on opaque histories and silent assumptions. What feels seamless is never neutral. Simplification often appears helpful, but it can obscure power, flatten meaning, and replicate dominance. Clarity, then, is not always benevolent.

AI has made simplification easier – but also more invisible. With the click of a button, a sentence becomes friendlier, a paragraph simpler, and meaning subtly rewritten. In a world of automatic summaries and polite rephrasings, interpretation happens constantly, often without our awareness. Interpretation has always been political – but now it happens faster, at scale, and in the background. That makes it even more urgent to ask: who decides what gets preserved, and what gets erased?

Gayatri Spivak names one of the dangers directly. In Can the Subaltern Speak?, she critiques how even well-intentioned efforts to represent the voices of the oppressed can enact what she calls epistemic violence – not by misrepresenting them, but by foreclosing the possibility of the subaltern (those outside dominant power structures) speaking on their own terms. Her central example involves colonial and nationalist accounts of sati, the historical practice of widow immolation in India. British officials framed themselves as saving brown women from brown men; Indian nationalists defended sati as cultural tradition. In both narratives, the woman at the center disappears. Her experience becomes legible only through other people’s agendas. This resonates with AI: predictive systems cannot model what’s missing. They reproduce the visible archive, not the invisible silence. Meaning is generated from what we already trust – and trust is unevenly distributed. As Spivak reminds us, legibility is not freedom. In an AI-mediated world, we must ask not just what is being said, but what information it draws on.

Homi Bhabha reveals how attempts to fix meaning through translation can backfire. In The Location of Culture, he writes about how colonial missionaries translated Christian texts into Indian languages in an effort to preserve sacred meaning. Instead, they produced hybrid forms that blended biblical ideas with local idioms – reshaping both in the process. This wasn’t mere error, but what Bhabha calls mimicry: a form of rearticulation that doesn’t replicate dominant meaning, but reworks it through partial resemblance. Mimicry introduces ambivalence – a doubleness that both echoes and alters authority. Like the colonial mimic, AI-generated outputs appear obedient – but in their distortions, they may quietly resist or reframe authority. Generative systems remix their inputs into outputs that neither purely reproduce their sources nor preserve the original intent. Their interpretations aren’t clean – they carry contradiction. And that ambivalence, far from a flaw, may be the most revealing thing about them.

Ruha Benjamin extends these concerns into the technological domain. In Race After Technology, she argues that coded systems don’t just reflect bias – they actively produce it under the guise of neutrality. What looks like efficiency or personalization often masks what she terms the “New Jim Code”: design choices that encode discrimination into seemingly objective infrastructures. Her work insists we examine not just what systems do, but what they assume. For instance, she analyzes how the COMPAS algorithm, widely used in U.S. courts to estimate a defendant’s risk of reoffending, perpetuated racial bias in the name of ‘objective risk scoring.’ Black defendants were more likely to be flagged as high risk – even when they did not go on to reoffend – because the algorithm mirrored patterns in biased policing data. White defendants, by contrast, were more often mislabeled as low risk. This was not a coding error, but a reflection of whose lives were seen as risky in the first place. A system does not need racist intent to produce racist outcomes. In this example, Benjamin shows that simplification – especially when automated – carries its own politics. Systems that appear seamless may actually smooth over friction that deserves to be felt.

In computer systems, we often think of context as a setting: something external and pre-defined, like location, time, or device state – an input to be detected and used, rather than something shaped through activity. But as Paul Dourish argues in What We Talk About When We Talk About Context, this view is limited. He gives the example of a meeting room: its physical properties don’t define its context – the purpose of the meeting, the relationships between participants, and the activity in the room all contribute to what that space means. Context, then, isn’t just where we are – it’s how we make sense of where we are, with whom, and toward what end. It’s not data; it’s practice.

Dourish tells us that context isn’t a container – it’s something we enact through interpretation and activity. That reframing matters, as it implies systems that automate interpretation – like generative AI – aren’t just neutral observers. They’re constructing context in their own way: deciding what meaning to carry forward, what to collapse, and what to ignore. Even a simple rephrasing isn’t just a linguistic shift – it’s a contextual one.

This unsettles a long-held belief – shaped in part by years of engineering work and personal writing – that clarity is always a virtue. I’ve seen simplifying complexity as generous: an attempt to bridge knowledge gaps. But the four authors’ reframing complicates that instinct. When simplifying, what assumptions am I importing? What power dynamics am I reproducing? Who am I privileging in the name of clarity?

The same questions arise in generative systems. When we ask AI to summarize a legal document or rephrase a tweet, we’re deciding what meaning to carry forward and what to lose. Only now, those decisions are faster, harder to see, and often made on our behalf.

To better trace how meaning gets reshaped through AI systems and human interaction, I sketch three overlapping “contextual layers” where simplification occurs: training, prompt, and interaction. Rather than a linear flow, these layers operate like overlapping lenses—each shaping and reshaping meaning in relation to the others. The goal isn’t to locate where distortion happens, but to see how interpretation accrues across layers of design, use, and reception. I name them not to fix boundaries, but to make visible how meaning gets revised – layer by layer, decision by decision – often without our noticing.

Meaning isn’t shaped in one place – it’s shaped at every step. In the training context, systems learn from historical patterns encoded in data: dominant voices are amplified, omissions calcified. Spivak reminds us that even trying to “give voice” can end up speaking over. The subaltern isn’t just ignored – they’re made legible to power in ways that serve it. What shows up as statistical fact might just be historical bias, polished and reissued. History is written by the victor, so to speak. The context in which humans choose the training data is mostly invisible but has huge repercussions.

In the prompt context, the end user decides how questions are framed. Phrasing shapes not only what answers are given, but what assumptions are smuggled in. Ask “What causes poverty?” and you frame a structural issue; ask “Why are poor people poor?” and you risk moralizing it. Prompts are worldviews in miniature – and AI, like any simplifying system, adapts to its reader.

In the interaction context, the same output carries different weight depending on where and how it appears. A summary in a search engine, a chatbot, or a courtroom carries different authority and impact. As Dourish notes, context isn’t just surroundings—it’s an active construction. And as Benjamin warns, the most seamless systems are often the most dangerous: hiding their assumptions under the banner of usability.

In tools like ChatGPT, simplification doesn’t just make things easier – it changes how we understand them. Rephrasings, summaries, autofills… they reshape what we notice, what we trust, what we think something means. Dourish points out that context comes from what people do, not just the metadata around it. But when systems automate those decisions, it gets harder to tell interpretation from clarity. It starts to feel like meaning just is – not something someone chose.

That’s what makes it tricky. Context has always been political. But now it’s easier to forget that. Every time I ask ChatGPT to summarize a paper or help decode one of my brother’s emoji-filled texts, I’m not just getting a shortcut – I’m getting layers of decisions made earlier, by someone else. The system carries a perspective, even if it doesn’t announce one.

Generative AI gives us answers that seem neutral, but behind the scenes, they’ve been shaped – by what was in the training data, by how the prompt was written, by how the interface nudges you to interact. It hides the human fingerprints. That’s why thinkers like Ruha Benjamin and Spivak matter here: they remind us that what’s missing is often the most important part. That simplification is never just about clarity – it’s also about selection.

This essay has mostly focused on the structural stuff: the layers of training, prompting, and interaction that shape how meaning shows up. But that’s just one side of it. The other is what people do with those outputs – how we question them, bend them, or just go along.

Dourish says context is something we do. We build it together – through what we say, how we say it, and who’s in the room. But with tools like ChatGPT, that messy, human work gets smoothed out. The choices are still there, just harder to spot. They’re baked into the defaults, the phrasing, the tone. What feels natural might not be neutral – it might just be someone else’s version of sense-making.

That’s what makes simplification feel so sneaky. It presents itself as helpful, even generous – but it’s still full of quiet choices: what’s emphasized, what’s softened, what’s cut. The politics of simplification don’t just live in system design; they show up in how the answer feels. So in a world full of seamless tools and layered interactions, I keep coming back to the same question: whose meaning gets preserved, and whose disappears?
