Math Meets Critical Theory: Jameson, Bhabha, and Spivak in Motion

There’s a moment, reading Homi Bhabha’s takedown of Fredric Jameson in The Location of Culture, when something clicks—not just in the argument, but in the style. Jameson is trying to solve culture like an equation: assign values, isolate variables, and arrive at a coherent output. Bhabha, meanwhile, is operating in a different mathematical universe entirely. He’s not solving; he’s deriving. His meanings shift, flow, contradict themselves, curve around colonial histories and linguistic fractures.

That’s when it hit me:
Jameson is doing algebra. Bhabha is doing calculus.

Jameson’s work, especially in Postmodernism, or, the Cultural Logic of Late Capitalism, constructs a kind of conceptual grid—a structured model for understanding cultural production under late capitalism. At one point, he tries to extract the entire cultural logic of Chinese revolutionary art from a few translated lines of a poem. Bhabha doesn’t just critique him—he highlights the large leap it takes to derive sweeping cultural meaning from such limited textual fragments, especially when filtered through translation and Western assumptions. It’s a perfect example of algebraic thinking: plug in limited data, assume fixed meaning, and solve for culture.

Bhabha won’t have it. The Location of Culture demands a theory in motion: layered, recursive, nonlinear. He’s writing from the margins, where meaning keeps shifting.

If algebra asks: what is the value? Calculus asks: how is it changing? Bhabha is telling us that culture can’t be understood without motion, history, and the messiness of change. Culture, for Bhabha, isn’t inherited wholesale—it’s negotiated in what he calls the Third Space: that unstable zone where meanings are constantly rewritten, identities shift by degrees, and history doesn’t sit still. It’s culture in motion—the kind you trace, not solve.

I found this perspective reinforced, unexpectedly, in Ruha Benjamin’s work. A sociologist of science and technology, Benjamin exposes how supposedly neutral systems—algorithms, predictive policing, medical technologies—inherit the biases of their creators. In Race After Technology, she warns against the “New Jim Code,” where racism is reproduced through technical design under the guise of objectivity.

Like Bhabha, Benjamin insists that to understand a system, you have to track not just what it does, but how it got that way. Present-day outcomes, she argues, are derivatives of historical forces. You can’t read a data point—or a person—without understanding its slope: the embedded assumptions, the legacy code, the inertia of injustice. She, too, resists the grid. Her critique unfolds across vectors—of race, design, history, and power—all moving.

As I thought through this fun thought experiment, I wondered: if Jameson is algebra and Bhabha is calculus, does that make Gayatri Spivak’s Can the Subaltern Speak? the equivalent of imaginary numbers? Not quite real, not quite unreal, but necessary to complete the system. Spivak, a literary theorist and postcolonial scholar, famously questions whether the subaltern—the colonized subject denied access to institutional power—can ever be fully represented within Western discourse.

Her answer is purposefully paradoxical: the moment the subaltern is heard, she is no longer subaltern. This epistemic instability mirrors the strangeness of imaginary numbers—unintuitive, elusive, but essential for solving real problems. Spivak isn’t offering closure. You can’t solve her. But she makes sure we don’t mistake solvability for truth.

Thinking in motion—historically, culturally, structurally—is hard. It’s messy. It rarely resolves into tidy answers. But it’s the only way to read honestly.

If Jameson gives us a model, Bhabha gives us a method. Benjamin updates that method for the digital age, showing how inequality is encoded, inherited, and disguised. And Spivak reminds us that some voices remain outside the solvable system entirely.

Culture doesn’t sit still long enough to be solved. And maybe that’s the point.

Rewriting Meaning: AI, Authority, and the Politics of Simplification

Four thinkers – postcolonial theorists Gayatri Spivak and Homi Bhabha, and Science and Technology Studies scholars Ruha Benjamin and Paul Dourish – have each challenged my default belief that clarity is always generous. What follows isn’t an exhaustive treatment of their work, but a tracing of how each helped me see simplification not as neutral, but as deeply political: a site where meaning, power, and identity are continuously remade.

Whether simplifying a research paper or translating my brother’s confusing emoji usage, I used to believe that making something easier to understand was inherently helpful. But my use of AI tools like ChatGPT has made me question that instinct. Every simplification is also a decision – one shaped by systems trained on opaque histories and silent assumptions. What feels seamless is never neutral. Simplification often appears helpful, but it can obscure power, flatten meaning, and replicate dominance. Clarity, then, is not always benevolent.

AI has made simplification easier – but also more invisible. With the click of a button, a sentence becomes friendlier, a paragraph simpler, and meaning subtly rewritten. In a world of automatic summaries and polite rephrasings, interpretation happens constantly, often without our awareness. Interpretation has always been political – but now it happens faster, at scale, and in the background. That makes it even more urgent to ask: who decides what gets preserved, and what gets erased?

Gayatri Spivak names one of the dangers directly. In Can the Subaltern Speak?, she critiques how even well-intentioned efforts to represent the voices of the oppressed can enact what she calls epistemic violence – not by misrepresenting them, but by foreclosing the possibility of the subaltern (those outside dominant power structures) speaking on their own terms. Her central example involves colonial and nationalist accounts of sati, the historical practice of widow immolation in India. British officials framed themselves as saving brown women from brown men; Indian nationalists defended sati as cultural tradition. In both narratives, the woman at the center disappears. Her experience becomes legible only through other people’s agendas. This resonates with AI: predictive systems cannot model what’s missing. They reproduce the visible archive, not the invisible silence. Meaning is generated from what we already trust – and trust is unevenly distributed. As Spivak reminds us, legibility is not freedom. In an AI-mediated world, we must ask not just what is being said, but what information it draws on.

Homi Bhabha reveals how attempts to fix meaning through translation can backfire. In The Location of Culture, he writes about how colonial missionaries translated Christian texts into Indian languages in an effort to preserve sacred meaning. Instead, they produced hybrid forms that blended biblical ideas with local idioms – reshaping both in the process. This wasn’t mere error, but what Bhabha calls mimicry: a form of rearticulation that doesn’t replicate dominant meaning, but reworks it through partial resemblance. Mimicry introduces ambivalence – a doubleness that both echoes and alters authority. Like colonial mimicry, AI-generated outputs appear obedient – but in their distortions, they may quietly resist or reframe authority. Generative systems remix inputs into outputs that resist pure reproduction or original intent. Their interpretations aren’t clean – they carry contradiction. And that ambivalence, far from a flaw, may be the most revealing thing about them.

Ruha Benjamin extends these concerns into the technological domain. In Race After Technology, she argues that coded systems don’t just reflect bias – they actively produce it under the guise of neutrality. What looks like efficiency or personalization often masks what she terms the “New Jim Code”: design choices that encode discrimination into seemingly objective infrastructures. Her work insists we examine not just what systems do, but what they assume. For instance, she analyzes how the COMPAS algorithm, widely used in U.S. courts to predict future crime, perpetuated racial bias under the guise of ‘objective risk scoring.’ Black defendants were more likely to be flagged as high risk – even when they hadn’t reoffended – because the algorithm mirrored patterns from biased policing data. White defendants, on the other hand, were often under-risked. This was not a coding error, but a reflection of whose lives were seen as risky in the first place. Racism does not need to be intentional to be racist. In this example, Benjamin shows that simplification – especially when automatic – can carry its own politics. Systems that appear seamless may actually smooth over friction that deserves to be felt.

In computer systems, we often think of context as a setting: something external and pre-defined, like location, time, or device state – an input to be detected and used, rather than something shaped through activity. But as Paul Dourish argues in What We Talk About When We Talk About Context, this view is limited. He gives the example of a meeting room: its physical properties don’t define its context – the purpose of the meeting, the relationships between participants, and the activity in the room all contribute to what that space means. Context, then, isn’t just where we are – it’s how we make sense of where we are, with whom, and toward what end. It’s not data; it’s practice.

Dourish tells us that context isn’t a container – it’s something we enact through interpretation and activity. That reframing matters, as it implies systems that automate interpretation – like generative AI – aren’t just neutral observers. They’re constructing context in their own way: deciding what meaning to carry forward, what to collapse, and what to ignore. Even a simple rephrasing isn’t just a linguistic shift – it’s a contextual one.

This reorients my long-held belief – shaped in part by years of engineering work and personal writing – that clarity is always a virtue. I’ve long seen simplifying complexity as generous: an attempt to bridge knowledge gaps. But the four authors’ reframing complicates that instinct. When simplifying, what assumptions am I importing? What power dynamics am I reproducing? Who am I privileging in the name of clarity?

The same questions arise in generative systems. When we ask AI to summarize a legal document or rephrase a tweet, we’re deciding what meaning to carry forward and what to lose. Only now, those decisions are faster, harder to see, and often made on our behalf.

To better trace how meaning gets reshaped through AI systems and human interaction, I sketch three overlapping “contextual layers” where simplification occurs: training, prompt, and interaction. Rather than a linear flow, these layers operate like overlapping lenses—each shaping and reshaping meaning in relation to the others. The goal isn’t to locate where distortion happens, but to see how interpretation accrues across layers of design, use, and reception. I name them not to fix boundaries, but to make visible how meaning gets revised – layer by layer, decision by decision – often without our noticing.

Meaning isn’t shaped in one place – it’s shaped at every step. In the training context, systems learn from historical patterns encoded in data: dominant voices are amplified, omissions calcified. Spivak reminds us that even trying to “give voice” can end up speaking over. The subaltern isn’t just ignored – they’re made legible to power in ways that serve it. What shows up as statistical fact might just be historical bias, polished and reissued. History is written by the victor, so to speak. The context in which humans choose the training data is mostly invisible but has huge repercussions.

In the prompt context, the end user decides how questions are framed. Phrasing shapes not only what answers are given, but what assumptions are smuggled in. Ask “What causes poverty?” and you frame a structural issue; ask “Why are poor people poor?” and you risk moralizing it. Prompts are worldviews in miniature – and AI, like any simplifying system, adapts to its reader.

In the interaction context, the same output carries different weight depending on where and how it appears. A summary in a search engine, a chatbot, or a courtroom carries different authority and impact. As Dourish notes, context isn’t just surroundings—it’s an active construction. And as Benjamin warns, the most seamless systems are often the most dangerous: hiding their assumptions under the banner of usability.

In tools like ChatGPT, simplification doesn’t just make things easier – it changes how we understand them. Rephrasings, summaries, autofills… they reshape what we notice, what we trust, what we think something means. Dourish points out that context comes from what people do, not just the metadata around it. But when systems automate those decisions, it gets harder to tell interpretation from clarity. It starts to feel like meaning just is – not something someone chose.

That’s what makes it tricky. Context has always been political. But now it’s easier to forget that. Every time I ask ChatGPT to summarize a paper or help decode one of my brother’s emoji-filled texts, I’m not just getting a shortcut – I’m getting layers of decisions made earlier, by someone else. The system carries a perspective, even if it doesn’t announce one.

Generative AI gives us answers that seem neutral, but behind the scenes, they’ve been shaped – by what was in the training data, by how the prompt was written, by how the interface nudges you to interact. It hides the human fingerprints. That’s why thinkers like Ruha Benjamin and Spivak matter here: they remind us that what’s missing is often the most important part. That simplification is never just about clarity – it’s also about selection.

This essay has mostly focused on the structural stuff: the layers of training, prompting, and interaction that shape how meaning shows up. But that’s just one side of it. The other is what people do with those outputs – how we question them, bend them, or just go along.

Dourish says context is something we do. We build it together – through what we say, how we say it, and who’s in the room. But with tools like ChatGPT, that messy, human work gets smoothed out. The choices are still there, just harder to spot. They’re baked into the defaults, the phrasing, the tone. What feels natural might not be neutral – it might just be someone else’s version of sense-making.

That’s what makes simplification feel so sneaky. It presents itself as helpful, even generous – but it’s still full of quiet choices: what’s emphasized, what’s softened, what’s cut. The politics of simplification don’t just live in system design; they show up in how the answer feels. So in a world full of seamless tools and layered interactions, I keep coming back to the same question: whose meaning gets preserved, and whose disappears?

When Pain Disrupts the Self: My Two Month Migraine

Note: I don’t necessarily have tidy takeaways here. I wrote this to name the season I’ve been in—and to remind myself (and maybe someone else) that being stuck doesn’t mean being lost. The invisible illness of migraines has made me grapple with my sense of self.

The Shame Beneath the Symptoms

I do my best to be someone who shows up. I take pride in being reliable—at work, in friendships, in the small rituals of discipline that tether me to my goals. I don’t chase perfection, but I’ve long held myself to a quiet standard: that my word is good, that I deliver, that I don’t flake.

And then came the migraines.

They didn’t just knock me off my feet—they disrupted my entire sense of self. It has been two months of negotiating pain and obligation. First came the dizziness, the visual sensitivity, the nausea. But underneath that, a slower—in some ways more painful—ache bloomed: shame.

I began missing meetings. Rescheduling plans with friends. Canceling guitar lessons I looked forward to all week. The version of myself I’d carefully built—competent, consistent, dependable—started to feel like a memory I was failing to live up to.

I didn’t expect the guilt to be the worst part.

There’s a cruel math that starts to take root in moments like these:  

If I miss one guitar lesson, it’s understandable.  

If I miss three, I’m flaky.  

If I miss a work deadline due to a migraine, it’s health.  

But if I start altering how I work, what I commit to, or who can count on me—am I still the same person?

I didn’t want to be treated like someone who couldn’t be trusted.  

And worse—I didn’t want to believe it about myself.

When something I’d committed to at work moved forward without me, it felt like confirmation of what I feared most: that I was a failure.

When my guitar lessons moved me to the cancellation list after too many missed sessions, undisciplined echoed in my chest.  

When I couldn’t make it to a friend’s birthday because of a migraine, my conscience hissed: flaky.

All of it became evidence for a story I was afraid might be true:  

That I’m failing.  

That I’m not enough.  

That even though the cause is out of my control, the consequences are mine to carry—and that they define me.

I’ve poured so much time and energy into trying to fix this invisible illness. Even now, when people ask if I’m feeling better, I hedge. Yes, I’m “better”—but only in the sense that I’ve learned how to mask better. The migraines still come. The dizziness still hits. I’ve learned to work around them—until I can’t.

Trying to get proper care has felt like shouting into a void. Booking a neurologist appointment was hard enough. But the real gauntlet came after: follow-up questions about prescriptions that didn’t help, routed through a receptionist like a cruel game of telephone. I’ve been made to feel like I’m doing something wrong just by advocating for myself.

I wish I could snap my fingers and erase the last two months. I wish this wasn’t part of my story. But it is. And I’m still here—sorting through the guilt, the fear, the shame, the pain—and trying to find a version of myself I can still believe in.

Right now, I’m just trying to keep going—even if I don’t know what “better” will look like. Even if all I can do is name the ache and let it be seen.

What Audiobooks Gave Back to Me

Even when I was trapped in my apartment—blinds drawn, head pounding, the world tilting beneath me—I could still be transported.

The migraines knocked me out of rhythm, and out of recognition. I couldn’t trust my energy. I canceled plans. Missed things I cared about. I watched that version of myself I had always depended on—the reliable one, the capable one—start to fade.

So I did what I’ve always done when I feel unmoored: I reached for a book – or rather, an audiobook. I’ve been a proud listener of Audible since 2015.

Now, when screens and pages could cause searing pain down the sides of my head, audiobooks were my lifeline. I started listening to Les Misérables, Femina, Race After Technology, How to Behave Badly in Elizabethan England, and Character, letting them play while I lay motionless in bed or shuffled to the kitchen to reheat chicken congee I made in bulk. I even listened to HCI papers through Speechify, letting their clunky syntax become oddly comforting.

Sometimes the topics were way too heavy for my sensitive mind. That’s when I turned to the books every woman reads but doesn’t always name publicly—the quiet comforts passed friend to friend without pretense. I could distract myself from the dizziness and nausea for a few hours at a time with some Regency romance or LitRPG.

And so listening became a lifeline. Not a hobby. Not a virtue. Not some aesthetic return to self-care. When I couldn’t move my body, these stories moved something in my mind. They reminded me that I still had curiosity. That I could still lose myself in ideas. That I could still feel joy, or fury, or awe—even when I couldn’t do much else.

This wasn’t the first time reading saved me. Two years ago, during concussion recovery, I got through an impressive (for me) number of classics while I was similarly trapped: Don Quixote, The Brothers Karamazov, Dracula, The Strange Case of Dr. Jekyll and Mr. Hyde, North and South, and Moby Dick. Of course, I balanced those with an equal (or greater) number of “embarrassing” reads. Each book reminded me that my brain was still mine, even if it wasn’t working quite right. That the light was still on, even if flickering.

Reading certainly didn’t fix my migraines. But it is giving me a way back to myself. Whether through Hugo’s moral universe, feminist theory, speculative critiques of tech, or soft romance tropes I’ll never publicly admit to loving, every book cracked open something in me.

They gave me back pieces of myself.
I am grateful to reading for that.

Modernist Writing: After AI

What photography did to painting, AI is now doing to writing. Once the domain of technical skill and representational fidelity, writing may soon be redefined not by what it can explain, but by what it can evoke. The word processor has long made writing efficient, but generative AI has made it prolific—frictionless, competent, and eerily coherent. This forces a new question, one that echoes the anxieties and awakenings of 19th-century painters confronting the camera: If machines can write, what is writing for?

Photography liberated painting from realism. Painters were no longer bound to perfectly mimic the world—the camera could do that better and faster. So they turned inward. The result was Impressionism, then Cubism, then Abstraction. Form fractured. Color pulsed. Meaning came not from accuracy but from feeling. Painting was reimagined as perception, not reproduction.

AI may offer writing the same release. If machines can deliver summaries, reports, student essays, product blurbs, and even halfway decent fiction, then human writers may no longer be rewarded for producing merely coherent text. Instead, they might be freed to explore what only they can offer: the glitch, the affective pulse, the voice that wavers. Writing can become more like painting in the post-camera era: expressive, fragmented, deeply subjective.

We may be entering an era of Impressionist Writing: prose that prioritizes mood over message, fragments over form. A novel that reads like vapor. An essay punctuated with silence. Emojis as brushstroke. Grammar undone to make space for emotional truth. If AI can mimic syntax, then syntax becomes less interesting. The writer’s power returns to what the machine cannot simulate convincingly: rhythm, contradiction, associative leaps, the raw.

In this light, we might imagine a Clement Greenberg of the AI-writing age. His massively influential essay “Modernist Painting” (1960) argued that painting, under threat from photography and mass reproduction, began to preserve itself by turning inward and focusing on what was unique to the medium. For Greenberg, modernism meant self-criticism: art shedding what it borrowed from other forms and intensifying what was most inherent to itself. In the case of painting, that meant flatness, the integrity of the picture plane, and the visibility of the brushstroke. Painting, in his view, became most itself when it stopped pretending to be something else. Think Mondrian’s Composition II in Red, Blue and Yellow—a painting that embraces geometry, flatness, and the surface itself as its subject.

[Image: Piet Mondrian, Composition II in Red, Blue and Yellow, 1930]

A modernist writing theory might ask: What is irreducibly human about writing? What aspects of the medium resist automation? Voice? Rhythm? Intention? The refusal to explain?

Of course, this doesn’t mean abandoning structure or the art of craft. Just as painters still learn perspective and portraiture, and musicians still study scales, writers must still be taught how to form an argument, how to hold a reader, how to revise. The process of learning to write—the discipline of shaping thought into form—remains essential. But the output may now need to be read differently than before. We are not teaching writing merely for replication; we are teaching it for nuance, for voice, for expression.

We are not at the end of writing. We are at the end of writing as reproduction. The future lies in writing as presence. As texture. As residue.

AI has taken over the job of being legible. We get to be alive.

Character Is What Context Does

A light thought experiment after reading Paul Dourish and Robert McKee on the same day.

This isn’t a rigorous theoretical argument—just something that clicked into place for me after spending the morning with Paul Dourish’s HCI research paper What We Talk About When We Talk About Context and the afternoon with Robert McKee’s narrative theory book Character. I wasn’t planning to compare them, but the overlap was too clean to ignore — and I was too proud of my Notes app witticism while reading the latter: What We Talk About When We Talk About Character.

Both works challenge flattened, overly rigid definitions—Dourish in HCI, McKee in narrative—and both push toward something more dynamic, relational, and enacted.

McKee argues that character isn’t a list of traits—it’s revealed through action under pressure. What a person does when stakes are high is who they are. Dourish similarly critiques the way “context” is treated in ubiquitous computing: as a static container that explains behavior. But context, he argues, isn’t something that sits outside of action—it’s produced through it. It’s emergent, interpretive, socially negotiated. In short, context is what people do together.

The mirroring is almost uncanny. If McKee says “character is action,” Dourish might say “context is interaction.” Both reject essentialism. Both see behavior as a process, not a product. Both are frustrated with frameworks that claim to explain human experience while abstracting away the very conditions that make it human.

They’re also both obsessed with pressure. For McKee, pressure reveals character. For Dourish, moments of ambiguity or breakdown—when coordination falters or shared understanding is strained—reveal how context is not fixed, but fluid and co-constructed through interaction. Both writers seem to argue that to really understand anything about people, you have to study them in motion, under strain, in practice.

What Dourish does for computing, McKee does for storytelling: both move us from static models to lived complexity. And now that I’ve seen the overlap, I can’t unsee it.

We Haven’t Come That Far: Reading Femina by Janina Ramirez

We like to tell ourselves that the world is getting better for women. That every generation is a little more equal, a little more just. That visibility equals power, and institutional access equals permanence. But the more history I read, the less I believe that.

Progress isn’t linear. It’s circular. And women—especially women of color—have been here before: powerful, visible, contributing centrally to culture, science, politics, art. And then forgotten. Again and again.

Femina by Janina Ramirez forced me to reckon with this reality. It’s not a book about exceptional women in the Middle Ages. It’s a book about how common women’s power once was—until it was deliberately erased. Women weren’t marginal figures—they were landowners, spiritual leaders, intellectuals, authors of their own lives. Then the archive was rewritten—often under the guise of reactionary secularism that reframed women’s authority as deviant or irrelevant. Their stories weren’t lost; they were buried.

Reading The Swans of Harlem by Karen Valby made that pattern even more tangible. It tells the story of five Black ballerinas who danced with the Dance Theatre of Harlem during the 1970s and 1980s—artists who broke barriers, toured internationally, and helped reshape the face of American ballet. And yet, their contributions were largely wiped from public memory.

As a white reader, I came to this story without knowing that history—and that gap is part of the point. Misty Copeland, the first Black woman promoted to principal dancer at the American Ballet Theatre in its 75-year history, was widely celebrated in 2015 as a singular trailblazer. And yet, she herself had to learn as an adult that Black women had already carved space in ballet decades earlier. It’s something she mourns discovering so late, as she shares in her book The Wind At My Back, written in honor of her mentor Raven Wilkinson.

This wasn’t obscure or ancient history—Lydia Abarca Mitchell appeared on the cover of Dance Magazine in 1975, a publication still widely read today. Forty years later, the women of The Swans of Harlem watched with everyone else as Copeland was praised for ‘opening the door’—a door they themselves had walked through, unrecognized. Their legacy had been forgotten in real time.

Invisible Women by Caroline Criado-Perez reminds us that this isn’t just about memory—it’s structural. The systems we navigate daily, from crash test dummies to medical trials to city planning, were built on male defaults. Our needs aren’t included. Our safety isn’t assumed. Our time isn’t valued. And books like Sheryl Sandberg’s Lean In and Lois Frankel’s Nice Girls Don’t Get the Corner Office tell us how to survive in those systems by adapting to them. Smile more. Don’t cry. Don’t take up space. Don’t ask for too much. Together, these books reveal how the cycle sustains itself: systems erase women, and we’re taught to become more compatible with our own erasure. The loop tightens, disguised as empowerment.

Much of mainstream corporate feminism still clings to the idea that progress is linear and permanent—that once a woman makes it through the door, it stays open behind her. But history doesn’t bear that out. We say history repeats itself—but for women, it’s more than a saying. We’ve been here before. The real question isn’t just how to make more space at the table—it’s how to make sure the table doesn’t get dismantled the moment we look away. I don’t just want progress—I want to prevent regression. I want the kind of presence that can’t be erased.

Power without permanence isn’t power. Visibility without memory isn’t progress. When we fail to remember who came before us—when we treat each victory as a first—we play directly into the cycle of erasure.

So no, I don’t find comfort in the progress story anymore. I find resolve. We don’t need to be the first. We need to be the ones who remember. Because that’s how we stop playing the system’s game. That’s how we break the loop.

AI Is Making Everyone an Editor

There are way more books than editors.
That’s not a complaint—it’s a model I keep coming back to.

AI is starting to feel like the self-publishing revolution for knowledge work. The tools now generate a lot of output. Code, docs, analysis, mockups—you name it.

But if everyone has a book, the value shifts to the person who knows how to edit. As engineers, how can we stay relevant with AI?


The New Shape of Work

I don’t think AI is taking jobs away wholesale. It’s just quietly changing what the job is.

Suddenly, “doing the work” looks more like:

  • prompting a first draft

  • editing what comes back

  • deciding what’s good enough to ship

The bottleneck isn’t output. It’s judgment.


Not All Work Is Equal Anymore

I’ve been thinking of AI-era work as a kind of funnel:

Layer      | Role                                     | Future Value
-----------|------------------------------------------|--------------
Upstream   | Framing the problem, defining direction  | 🚀 High
Middle     | Generating raw output                    | 📉 Shrinking
Downstream | Reviewing, refining, validating          | 🚀 High

In the middle tier, AI’s getting faster. But upstream and downstream still need humans with context and taste.

And that’s where things get interesting… and honestly, a little uncomfortable.


What If Even That Isn’t Enough?

I’ve always leaned toward the upstream side—mapping patterns, breaking down problems, making architecture legible. It’s work I like. But lately I’ve been asking:

In a world where more people are doing this kind of work… is it still enough to stand out?

There are a lot of sharp engineers who can define systems and edit AI output. And as the middle collapses, more of them will be aiming for the same higher-value work.

It’s not just about being good anymore.
It’s about staying relevant in a world where the definition of “good” is shifting.


What Actually Sets People Apart?

What I’m starting to see is that it’s not the code or even the architecture—it’s:

  • The ability to navigate ambiguity

  • Taste, not just correctness

  • Thoughtfulness about how AI fits into human workflows

  • Influence—helping others make better decisions, not just better code

Those are the things I’m trying to sharpen, and frankly, trying to name. Because the more output we automate, the more we need people who can say, “Here’s what matters.”


This Feels Different From Other Tech-Work Disruptions

I’m not a seasoned veteran software engineer yet – I’ve worked for less than 10 years. So, sometimes I wonder—is this how engineers felt when containerization came out?

Probably not. That was a shift in tooling, in how we built and deployed software. It was frustrating at times, but it didn’t question the core of what we did. If anything, it made good engineers more powerful.

This feels bigger.
It’s not just changing how we work—it’s changing what counts as work, and who gets to do it.

There’s something heavier about that.
Less technical, more existential.


What I’m Trying to Do Differently

This is where I’ve landed, for now:

  • Keep focusing on judgment-heavy work, not throughput

  • Write more—because writing forces clarity and scales influence

  • Be thoughtful about how I use AI, not just whether I use it

  • Look for problems that don’t have clean owners—especially across team boundaries

  • Stay honest about where I’m coasting and where I need to grow


Still Figuring It Out

I don’t think I have the answers yet. But the question that keeps me moving is:

If AI can do most of what I used to do—what’s left that only I can bring?

That’s the edge I want to build on.

Stop Telling Women To Go Into Management When They Bring Up Diversity

My friend recently told me about a conversation she had with another engineer at her job. She was describing how her team seems to have one archetype she doesn’t fit into. Everyone on the team is more senior than her, all male, and they tend to focus on the purely technical rather than the glue work[1] that makes up a large chunk of professional software engineering. There have even been recent instances where she’s seen communication fall apart completely. As a mid-level engineer interested in IC growth and promotion, it’s difficult for her to see a path forward when the team only rewards and supports one archetype – one she doesn’t want to fit into.

She told this to a staff-level male engineer on another team, and while he was well intentioned, he asked her if she had thought about management, since she seemed to keep bringing up a lot of issues that management works on.

I’ve been used to being the only woman in the room since I was 14. It still affects me the same way it did when I was a teenager. There’s even science backing up how I feel.[2] When I’m the only woman in the room I immediately feel self-conscious, like the entire weight of my sex rests on my shoulders. I wonder if I’m coming off as too nice or too much of a nag. When I make a technical suggestion I wonder if people are even hearing what I’m saying or dismissing me because I’m a woman. I can also feel myself internalizing that since there are no women around me, I don’t belong. There’s a lot going on in my subconscious brain.

So eventually when things make their way to my conscious brain – and I’m an engineer, I like solving problems – I bring it up and try to figure out a solution. I have a real vested interest in making sure my work environment feels inclusive to me. It impacts whether or not I get a promotion, how much I’m paid, or if I am just plain happy in my day to day.

However, more often than not, it feels like when women bring up a company’s lack of representation or another cultural issue, the problem then falls on them to fix it. And this often subtly leads the woman into management. “We need more people like you to help us with our diversity,” is something I’ve heard a lot as a female engineer, and when I spend even a portion of my time on people related tasks, I naturally get better at it.

It might be counterintuitive, but promoting women into management may actually hurt gender diversity, too.[3] It subtly reinforces the notion that women aren’t technical but are instead managerial. A lot of women, myself included, are drawn to work that isn’t purely technical. And a lot of women are also good employees and will excel at a job they are tasked with, but that doesn’t mean they should go into management.

I don’t want to be a manager right now – I like helping people grow and I like improving culture, but I have a lot more fun building things. At the moment, I know management won’t fulfill me. I know this because I’ve spent a lot of time introspecting about my goals, which is something all women should do; otherwise, I’m willing to bet, well-meaning advice is likely to lead women directly into management.[4]

So what are a couple things allies can say to a woman when she brings up “diversity”?

  1. Validate what they’re feeling. There’s a lot of nuance and we’re fighting social psychology right now. Chances are women are right when something feels “off”. A lot of women I know in engineering have asked themselves the question “Am I crazy?” when thinking about their experiences. This puts a lot of us in a vulnerable position when we share our stories, so when someone feels comfortable enough to share her story with you, this should not be treated lightly.[5]
  2. Promise your friend or colleague you will bring up diversity and champion culture more – allyship is key. It also normalizes the idea that all people and culture problems are everyone’s problems, not just problems for those who are affected. Just make sure that if you’re repeating ideas from others, the credit goes to whoever the idea came from.


Go Slice Debugging

I finally learned how Go slices work.

A few days ago I was struggling to understand why my submission for a LeetCode question was failing. As far as I could tell, the logic was there, but I was somehow using the same underlying slice memory for my answer, resulting in unintentional repetition.

After much frustration, I noticed a small difference that eventually got me what I wanted. I was baffled though, so I decided now was the time to learn all about slices.

Spot the difference

The premise of the original LeetCode question is to generate all permutations of an integer array.

I ended up getting so frustrated while trying to solve it, I eventually tried to emulate an already submitted answer. However, doing that still didn’t fix my issue! I was at my wit’s end until I noticed a very subtle difference in solutions. So let’s play “Spot the difference” between two submissions:

My initial solution

func permute(nums []int) [][]int {
    ans := make([][]int, 0, len(nums))
    backtrack(make([]int, 0, len(nums)), nums, &ans)
    return ans
}

func backtrack(left []int, rem []int, output *[][]int) {
    if len(rem) == 0 {
        *output = append(*output, left)
    }
    for i, l := range rem {
        backtrack(append(left, l),
            append(append([]int{}, rem[:i]...), rem[i+1:]...), output)
    }
}

Returns:

[[3,2,1],[3,2,1],[3,2,1],[3,2,1],[3,2,1],[3,2,1]]

The correct solution

func permute(nums []int) [][]int {
    ans := make([][]int, 0, len(nums))
    backtrack(make([]int, 0), nums, &ans)
    return ans
}

func backtrack(left []int, rem []int, output *[][]int) {
    if len(rem) == 0 {
        *output = append(*output, left)
    }
    for i, l := range rem {
        backtrack(append(left, l),
            append(append([]int{}, rem[:i]...), rem[i+1:]...), output)
    }
}

Returns:

[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]

For the savvy eye – you’ll see that line 3 differs between the two. In one solution I allocated the expected final capacity for the left slice up front, and in the other I did not. Why did that have such a tremendous impact?

How a slice works in Go

For a while I’d been getting away with simply knowing that slices don’t behave regularly when you pass them into a function: at a certain level they’re passed by value, and at others they act as if passed by reference. Where that distinction lies, I’d never been terribly sure. This maddening LeetCode problem led me to finally invest in learning.

The official Go blog does a terrific job of explaining slices, and here I will try to summarize some of the main points.

We can think of a slice as a struct that contains 3 pieces of information:

  1. Capacity
  2. Length
  3. A Pointer to the first value in the slice

All three of those components are super important, and give slices their tremendous versatility. You can almost view a slice as a small header that points at the first element of a backing array, with the added benefit of carrying a length and a capacity.

With the above structure in mind, we can think of an integer slice as something similar to the following:

type intSlice struct {
    Length int
    Capacity int
    ZerothElement *int
}

Passing by reference vs passing by value

With the above struct model, we can see how we could be passing slices as both reference and value. Take the following two examples (stolen from the blog linked above):

When a slice acts as something passed by value:

func SubtractOneFromLength(slice []byte) []byte {
    slice = slice[0 : len(slice)-1]
    return slice
}
func main() {
    slice := make([]byte, 50) // the example assumes a 50-byte slice
    fmt.Println("Before: len(slice) =", len(slice))
    newSlice := SubtractOneFromLength(slice)
    fmt.Println("After: len(slice) =", len(slice))
    fmt.Println("After: len(newSlice) =", len(newSlice))
}
Before: len(slice) = 50
After: len(slice) = 50
After: len(newSlice) = 49

When a slice acts as something passed by reference:

var buffer [256]byte // backing array the slice below points into (any size ≥ 20 works)

func AddOneToEachElement(slice []byte) {
    for i := range slice {
        slice[i]++
    }
}
func main() {
    slice := buffer[10:20]
    for i := 0; i < len(slice); i++ {
        slice[i] = byte(i)
    }
    fmt.Println("before", slice)
    AddOneToEachElement(slice)
    fmt.Println("after", slice)
}
before [0 1 2 3 4 5 6 7 8 9]
after [1 2 3 4 5 6 7 8 9 10]

Go passes slices by value unless otherwise specified. However, the copy still contains the same pointer, so it points at the same underlying array. Whenever we change what is stored in that array, we’re modifying the original data that was passed in – so even a function that received its slice “by value” can end up changing the caller’s data!

Notice, however, that when we update the length of a passed-in slice, we’re not changing the original, since the length is stored as a plain value, not behind a pointer. If slices stored a *int for their length, then we would see the change.

Capacity and make

Slices also store a capacity. This is what really separates arrays and slices in Go. Imagine for a second that we didn’t have a capacity field. Every time we ran

s = append(s, "new field")

we would need to allocate new memory for another slice. Instead, Go uses capacity to set aside a certain amount of memory for each slice, making the majority of appends an O(1) operation.

Quite often, though, we do end up appending something that goes beyond the allowed capacity. In this case, Go will create a new underlying array with roughly double the capacity and copy the original array over to the new memory.
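Here’s a minimal sketch of that growth in action (the exact growth factor is a runtime implementation detail, so treat the doubling as approximate):

package main

import "fmt"

func main() {
    s := make([]int, 0, 2) // length 0, capacity 2
    for i := 0; i < 6; i++ {
        s = append(s, i)
        // Once len would exceed the current cap, append allocates a
        // larger backing array and copies the existing elements over,
        // so cap jumps while len only grows by one each time.
        fmt.Printf("len=%d cap=%d\n", len(s), cap(s))
    }
}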

Copying things over isn’t a cheap operation: because the slice header only points at its backing array, Go has to iterate through that entire array and copy the values over to the new one. One easy and quick performance optimization many Go programmers use is make.

make allows the programmer to specify the length and capacity of a slice. The first argument to make is the type of structure you wish to make, for example []int or []string.

The next argument is an integer specifying the length, and it is required in all slice make calls. If you specify a length n, the slice is created with n zero values of that type.

The final, optional argument specifies the capacity of the slice. This can make appends more performant if we have even a rough idea of how long the slice will grow.

Examples of make usage:

make([]int, 0, 3)

Slice: []
Slice struct: {
    Length: 0
    Capacity: 3
    ZerothElement: <Pointer to a backing array with room for 3 ints>
}

make([]string, 2)

Slice: ["",""]
Slice struct: {
    Length: 2
    Capacity: 2
    ZerothElement: <Pointer to zeroth index>
}

Why didn’t my original solution work?

When I was writing the input to the backtrack function, I knew the left slice would eventually reach a certain size, so I figured I’d create it with that capacity up front to avoid having to reallocate memory.

However, the algorithm relies on distinct slices being passed down the recursion. Because left already had spare capacity, append(left, l) never allocated a new underlying array, so every level of the recursion was writing into the same memory. The only thing that changed in each call was the length of the slice.

In the accepted solution I gave the slice an initial capacity of 0, so Go allocated a new memory block every time append(left, l) ran – the existing capacity was always less than what was needed. Because each call got its own memory, the recursion could operate on different pointers, leading to distinct slices.
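To see that aliasing outside the LeetCode problem, here’s a minimal sketch with made-up values, contrasting a slice that has spare capacity with one that doesn’t:

package main

import "fmt"

func main() {
    // Spare capacity: both appends write into the same backing array,
    // so the second append overwrites what the first one stored there.
    base := make([]int, 1, 3)
    a := append(base, 1)
    b := append(base, 2)
    fmt.Println(a, b) // [0 2] [0 2] – a and b alias the same memory

    // No spare capacity: each append allocates a fresh backing array,
    // so the results stay distinct.
    tight := make([]int, 1)
    c := append(tight, 1)
    d := append(tight, 2)
    fmt.Println(c, d) // [0 1] [0 2]
}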

Another way to get around this bug is to use copy and keep the initial capacity. I’ll leave that as an exercise for the reader though 🙂
