
Rewriting Meaning: AI, Authority, and the Politics of Simplification

Four thinkers – postcolonial theorists Gayatri Spivak and Homi Bhabha, and Science and Technology Studies scholars Ruha Benjamin and Paul Dourish – have each challenged my default belief that clarity is always generous. What follows isn’t an exhaustive treatment of their work, but a tracing of how each helped me see simplification not as neutral, but as deeply political: a site where meaning, power, and identity are continuously remade.

Whether simplifying a research paper or translating my brother’s confusing emoji usage, I used to believe that making something easier to understand was inherently helpful. But my use of AI tools like ChatGPT has made me question that instinct. Every simplification is also a decision – one shaped by systems trained on opaque histories and silent assumptions. What feels seamless is never neutral. Simplification often appears helpful, but it can obscure power, flatten meaning, and replicate dominance. Clarity, then, is not always benevolent.

AI has made simplification easier – but also more invisible. With the click of a button, a sentence becomes friendlier, a paragraph simpler, and meaning subtly rewritten. In a world of automatic summaries and polite rephrasings, interpretation happens constantly, often without our awareness. Interpretation has always been political – but now it happens faster, at scale, and in the background. That makes it even more urgent to ask: who decides what gets preserved, and what gets erased?

Gayatri Spivak names one of the dangers directly. In Can the Subaltern Speak?, she critiques how even well-intentioned efforts to represent the voices of the oppressed can enact what she calls epistemic violence – not by misrepresenting them, but by foreclosing the possibility of the subaltern (those outside dominant power structures) speaking on their own terms. Her central example involves colonial and nationalist accounts of sati, the historical practice of widow immolation in India. British officials framed themselves as saving brown women from brown men; Indian nationalists defended sati as cultural tradition. In both narratives, the woman at the center disappears. Her experience becomes legible only through other people’s agendas. This resonates with AI: predictive systems cannot model what’s missing. They reproduce the visible archive, not the invisible silence. Meaning is generated from what we already trust – and trust is unevenly distributed. As Spivak reminds us, legibility is not freedom. In an AI-mediated world, we must ask not just what is being said, but what information it draws on.

Homi Bhabha reveals how attempts to fix meaning through translation can backfire. In The Location of Culture, he writes about how colonial missionaries translated Christian texts into Indian languages in an effort to preserve sacred meaning. Instead, they produced hybrid forms that blended biblical ideas with local idioms – reshaping both in the process. This wasn’t mere error, but what Bhabha calls mimicry: a form of rearticulation that doesn’t replicate dominant meaning, but reworks it through partial resemblance. Mimicry introduces ambivalence – a doubleness that both echoes and alters authority. As in colonial mimicry, AI-generated outputs appear obedient – but in their distortions, they may quietly resist or reframe authority. Generative systems remix inputs into outputs that resist both pure reproduction and original intent. Their interpretations aren’t clean – they carry contradiction. And that ambivalence, far from a flaw, may be the most revealing thing about them.

Ruha Benjamin extends these concerns into the technological domain. In Race After Technology, she argues that coded systems don’t just reflect bias – they actively produce it under the guise of neutrality. What looks like efficiency or personalization often masks what she terms the “New Jim Code”: design choices that encode discrimination into seemingly objective infrastructures. Her work insists we examine not just what systems do, but what they assume. For instance, she analyzes how the COMPAS algorithm, widely used in U.S. courts to predict the likelihood of reoffending, perpetuated racial bias under the guise of ‘objective risk scoring.’ Black defendants were more likely to be flagged as high risk – even when they hadn’t reoffended – because the algorithm mirrored patterns from biased policing data. White defendants, meanwhile, were more often rated low risk even when they went on to reoffend. This was not a coding error, but a reflection of whose lives were seen as risky in the first place. Racism does not need to be intentional to do harm. In this example, Benjamin shows that simplification – especially when automatic – can carry its own politics. Systems that appear seamless may actually smooth over friction that deserves to be felt.

In computer systems, we often think of context as a setting: something external and pre-defined, like location, time, or device state – an input to be detected and used, rather than something shaped through activity. But as Paul Dourish argues in What We Talk About When We Talk About Context, this view is limited. He gives the example of a meeting room: its physical properties don’t define its context – the purpose of the meeting, the relationships between participants, and the activity in the room all contribute to what that space means. Context, then, isn’t just where we are – it’s how we make sense of where we are, with whom, and toward what end. It’s not data; it’s practice.

Dourish tells us that context isn’t a container – it’s something we enact through interpretation and activity. That reframing matters, as it implies systems that automate interpretation – like generative AI – aren’t just neutral observers. They’re constructing context in their own way: deciding what meaning to carry forward, what to collapse, and what to ignore. Even a simple rephrasing isn’t just a linguistic shift – it’s a contextual one.

This reorients my long-held belief – shaped in part by years of engineering work and personal writing – that clarity is always a virtue. I’ve long seen simplifying complexity as generous: an attempt to bridge knowledge gaps. But the four authors’ reframing complicates that instinct. When simplifying, what assumptions am I importing? What power dynamics am I reproducing? Who am I privileging in the name of clarity?

The same questions arise in generative systems. When we ask AI to summarize a legal document or rephrase a tweet, we’re deciding what meaning to carry forward and what to lose. Only now, those decisions are faster, harder to see, and often made on our behalf.

To better trace how meaning gets reshaped through AI systems and human interaction, I sketch three overlapping “contextual layers” where simplification occurs: training, prompt, and interaction. Rather than a linear flow, these layers operate like overlapping lenses—each shaping and reshaping meaning in relation to the others. The goal isn’t to locate where distortion happens, but to see how interpretation accrues across layers of design, use, and reception. I name them not to fix boundaries, but to make visible how meaning gets revised – layer by layer, decision by decision – often without our noticing.

Meaning isn’t shaped in one place – it’s shaped at every step. In the training context, systems learn from historical patterns encoded in data: dominant voices are amplified, omissions calcified. Spivak reminds us that even trying to “give voice” can end up speaking over. The subaltern isn’t just ignored – they’re made legible to power in ways that serve it. What shows up as statistical fact might just be historical bias, polished and reissued. History is written by the victor, so to speak. The context in which humans choose the training data is mostly invisible but has huge repercussions.

In the prompt context, the end user decides how questions are framed. Phrasing shapes not only what answers are given, but what assumptions are smuggled in. Ask “What causes poverty?” and you frame a structural issue; ask “Why are poor people poor?” and you risk moralizing it. Prompts are worldviews in miniature – and AI, like any simplifying system, adapts to its reader.
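
To make that concrete, here is a minimal sketch – not part of the original argument, purely illustrative – of how those two framings of the same question might be sent to a language model via the OpenAI Python SDK. The model name and the exact client calls are assumptions chosen for the example; the only point is that the prompt, not the system, sets the frame of the answer.

    # Illustrative sketch only: send two framings of the same question to a model
    # and compare the answers. Assumes the OpenAI Python SDK is installed and an
    # API key is available in the environment; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompts = [
        "What causes poverty?",        # structural framing
        "Why are poor people poor?",   # individualizing, moralizing framing
    ]

    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt)
        print(response.choices[0].message.content)
        print("---")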

In the interaction context, the same output carries different weight depending on where and how it appears. A summary in a search engine, a chatbot, or a courtroom carries different authority and impact. As Dourish notes, context isn’t just surroundings—it’s an active construction. And as Benjamin warns, the most seamless systems are often the most dangerous: hiding their assumptions under the banner of usability.

In tools like ChatGPT, simplification doesn’t just make things easier – it changes how we understand them. Rephrasings, summaries, autofills… they reshape what we notice, what we trust, what we think something means. Dourish points out that context comes from what people do, not just the metadata around it. But when systems automate those decisions, it gets harder to tell interpretation from clarity. It starts to feel like meaning just is – not something someone chose.

That’s what makes it tricky. Context has always been political. But now it’s easier to forget that. Every time I ask ChatGPT to summarize a paper or help decode one of my brother’s emoji-filled texts, I’m not just getting a shortcut – I’m getting layers of decisions made earlier, by someone else. The system carries a perspective, even if it doesn’t announce one.

Generative AI gives us answers that seem neutral, but behind the scenes, they’ve been shaped – by what was in the training data, by how the prompt was written, by how the interface nudges you to interact. It hides the human fingerprints. That’s why thinkers like Ruha Benjamin and Spivak matter here: they remind us that what’s missing is often the most important part. That simplification is never just about clarity – it’s also about selection.

This essay has mostly focused on the structural stuff: the layers of training, prompting, and interaction that shape how meaning shows up. But that’s just one side of it. The other is what people do with those outputs – how we question them, bend them, or just go along.

Dourish says context is something we do. We build it together – through what we say, how we say it, and who’s in the room. But with tools like ChatGPT, that messy, human work gets smoothed out. The choices are still there, just harder to spot. They’re baked into the defaults, the phrasing, the tone. What feels natural might not be neutral – it might just be someone else’s version of sense-making.

That’s what makes simplification feel so sneaky. It presents itself as helpful, even generous – but it’s still full of quiet choices: what’s emphasized, what’s softened, what’s cut. The politics of simplification don’t just live in system design; they show up in how the answer feels. So in a world full of seamless tools and layered interactions, I keep coming back to the same question: whose meaning gets preserved, and whose disappears?

Modernist Writing: After AI

What photography did to painting, AI is now doing to writing. Once the domain of technical skill and representational fidelity, writing may soon be redefined not by what it can explain, but by what it can evoke. The word processor has long made writing efficient, but generative AI has made it prolific—frictionless, competent, and eerily coherent. This forces a new question, one that echoes the anxieties and awakenings of 19th-century painters confronting the camera: If machines can write, what is writing for?

Photography liberated painting from realism. Painters were no longer bound to perfectly mimic the world—the camera could do that better and faster. So they turned inward. The result was Impressionism, then Cubism, then Abstraction. Form fractured. Color pulsed. Meaning came not from accuracy but from feeling. Painting was reimagined as perception, not reproduction.

AI may offer writing the same release. If machines can deliver summaries, reports, student essays, product blurbs, and even halfway decent fiction, then human writers may no longer be rewarded for producing merely coherent text. Instead, they might be freed to explore what only they can offer: the glitch, the affective pulse, the voice that wavers. Writing can become more like painting in the post-camera era: expressive, fragmented, deeply subjective.

We may be entering an era of Impressionist Writing: prose that prioritizes mood over message, fragments over form. A novel that reads like vapor. An essay punctuated with silence. Emojis as brushstroke. Grammar undone to make space for emotional truth. If AI can mimic syntax, then syntax becomes less interesting. The writer’s power returns to what the machine cannot simulate convincingly: rhythm, contradiction, associative leaps, the raw.

In this light, we might imagine a Clement Greenberg of the AI-writing age. His massively influential essay “Modernist Painting” (1960) argued that painting, under threat from photography and mass reproduction, began to preserve itself by turning inward and focusing on what was unique to the medium. For Greenberg, modernism meant self-criticism: art shedding what it borrowed from other forms and intensifying what was most inherent to itself. In the case of painting, that meant flatness, the integrity of the picture plane, and the visibility of the brushstroke. Painting, in his view, became most itself when it stopped pretending to be something else. Think Mondrian’s Composition II in Red, Blue and Yellow—a painting that embraces geometry, flatness, and the surface itself as its subject.

[Image: Piet Mondrian, Composition II in Red, Blue and Yellow, 1930]

A modernist writing theory might ask: What is irreducibly human about writing? What aspects of the medium resist automation? Voice? Rhythm? Intention? The refusal to explain?

Of course, this doesn’t mean abandoning structure or craft. Just as painters still learn perspective and portraiture, and musicians still study scales, writers must still be taught how to form an argument, how to hold a reader, how to revise. The process of learning to write—the discipline of shaping thought into form—remains essential. But the output may now need to be read differently than before. We are not teaching writing merely for replication; we are teaching it for nuance, for voice, for expression.

We are not at the end of writing. We are at the end of writing as reproduction. The future lies in writing as presence. As texture. As residue.

AI has taken over the job of being legible. We get to be alive.

AI Is Making Everyone an Editor

There are way more books than editors.
That’s not a complaint—it’s a model I keep coming back to.

AI is starting to feel like the self-publishing revolution for knowledge work. The tools now generate a lot of output. Code, docs, analysis, mockups—you name it.

But if everyone has a book, the value shifts to the person who knows how to edit. As engineers, how do we stay relevant alongside AI?


The New Shape of Work

I don’t think AI is taking jobs away wholesale. It’s just quietly changing what the job is.

Suddenly, “doing the work” looks more like:

  • prompting a first draft

  • editing what comes back

  • deciding what’s good enough to ship

The bottleneck isn’t output. It’s judgment.


Not All Work Is Equal Anymore

I’ve been thinking of AI-era work as a kind of funnel:

Layer         Role                                       Future Value
Upstream      Framing the problem, defining direction    🚀 High
Middle        Generating raw output                      📉 Shrinking
Downstream    Reviewing, refining, validating            🚀 High

In the middle tier, AI’s getting faster. But upstream and downstream still need humans with context and taste.

And that’s where things get interesting… and honestly, a little uncomfortable.


What If Even That Isn’t Enough?

I’ve always leaned toward the upstream side—mapping patterns, breaking down problems, making architecture legible. It’s work I like. But lately I’ve been asking:

In a world where more people are doing this kind of work… is it still enough to stand out?

There are a lot of sharp engineers who can define systems and edit AI output. And as the middle collapses, more of them will be aiming for the same higher-value work.

It’s not just about being good anymore.
It’s about staying relevant in a world where the definition of “good” is shifting.


What Actually Sets People Apart?

What I’m starting to see is that it’s not the code or even the architecture—it’s:

  • The ability to navigate ambiguity

  • Taste, not just correctness

  • Thoughtfulness about how AI fits into human workflows

  • Influence—helping others make better decisions, not just better code

Those are the things I’m trying to sharpen, and frankly, trying to name. Because the more output we automate, the more we need people who can say, “Here’s what matters.”


This Feels Different from Other Tech Disruptions

I’m not a seasoned software engineer yet – I’ve been working for less than ten years. So sometimes I wonder—is this how engineers felt when containerization came out?

Probably not. That was a shift in tooling, in how we built and deployed software. It was frustrating at times, but it didn’t question the core of what we did. If anything, it made good engineers more powerful.

This feels bigger.
It’s not just changing how we work—it’s changing what counts as work, and who gets to do it.

There’s something heavier about that.
Less technical, more existential.


What I’m Trying to Do Differently

This is where I’ve landed, for now:

  • Keep focusing on judgment-heavy work, not throughput

  • Write more—because writing forces clarity and scales influence

  • Be thoughtful about how I use AI, not just whether I use it

  • Look for problems that don’t have clean owners—especially across team boundaries

  • Stay honest about where I’m coasting and where I need to grow


Still Figuring It Out

I don’t think I have the answers yet. But the question that keeps me moving is:

If AI can do most of what I used to do—what’s left that only I can bring?

That’s the edge I want to build on.
