Embracing Failure as a Woman in Tech

At the age of 24, I was part of a DevOps team when I received a last-minute invitation to a Google workshop on Kubernetes. A group of engineers from my company was also attending, but they were seated together while I found myself at a different table with unfamiliar faces. There were approximately 80 men at the workshop, and I counted only two women, including myself. The realization that I was one of the few women present immediately put me on edge. I felt the weight of representing my entire gender and questioned my right to be in that room. When I missed a step in the workshop, the fear of being labeled an outsider prevented me from asking clarifying questions. Eventually, the pressure to be perfect became too overwhelming, leading me to leave the workshop early. This experience made me reflect on the benefits I could have gained if I had been willing to admit and learn from my mistakes.

In that workshop, I faced a pivotal moment—a single misstep that triggered a wave of self-doubt. As a woman in engineering, the fear of being labeled a token engineer only added to the pressure of proving myself. Seeking help or asking questions felt like a sign of weakness. It became evident that embracing failure was not just about personal growth but also about challenging the biases and stereotypes that hinder women’s progress in the field.

Reflecting on my journey, I have come to realize that some of my greatest lessons leading to success have emerged from failures. At one point, I took on the role of project manager for a migration from Google’s Search API to Elasticsearch. It was my first time working on such a significant project, and I encountered uncertainties and made numerous mistakes along the way, including several production incidents. Despite achieving moderate success with the project, the most valuable lessons I learned were rooted in my failures. The experience provided me with profound insights into databases, rollouts, migrations, and project management. Although the mention of Elasticsearch still makes me cringe, I know that I have grown immensely as a result.

In academia and other spheres, a movement has emerged to celebrate failures as crucial learning experiences. Academics like Melanie Stefan and Johannes Haushofer have advocated for documenting failures alongside successes in what is known as a “failure CV.” This practice nurtures introspection, resilience, and innovation. By embracing our failures, we not only learn from our own experiences but also inspire others to persevere and challenge the status quo.

Failure holds a profound significance, particularly for women in engineering. It presents an opportunity to overcome biases, break barriers, and redefine success. By sharing our stories and embracing failures as stepping stones to personal growth, we empower ourselves and others in the field. Let us foster a culture that celebrates the resilience and determination of women while challenging systemic biases. Together, we can navigate failure, shatter stereotypes, and forge a future where women thrive in engineering and beyond.

Software Engineering Is A Team Sport

Recently, I asked my friend, who is halfway through her second year of residency, if she ever feels nervous about making the wrong medical decision, especially in life-or-death situations. She said that generally, she doesn’t feel nervous because, in her hospital and department, patient care is a team sport, and that this culture also improves patient outcomes.

I realized the same lesson applies to engineering – that the more a team embraces this methodology, the less stressed the engineers are and the better the product.

Remembering that engineering is a team sport is key to promoting a blameless culture. At times, engineers may become anxious about failing to deliver on a significant project, but it’s essential to remember that a project’s success does not depend on one individual alone. Similar to the aviation industry, where a blameless culture was also adopted, if something fails, it’s more productive for the team to ask how they can prevent it from happening again. Companies work hard to put people and systems in place to help individual contributors do their best work, and if an engineer is working hard but a delivery misses, that can’t be one person’s fault.


There are a lot of things supporting you at work

Besides blameless culture, you also benefit from investing in your team. The book The Effective Engineer by Edmond Lau contains lots of great wisdom, but the piece that has stuck with me the most is when Edmond Lau quoted Yishan Wong with the following: “Imagine that you have a magic wand, and by waving this magic wand, you can make every single person in your company succeed at their job [by] 120%. What would happen? [If] everyone knocked their job out of the park, the company would probably be a huge success and even if you did nothing else, you’d be swept along in the tide of success of everyone around you.” He goes on to say that Wong believes the secret to your own career success is to “focus primarily on making everyone around you succeed.”

Lau also quotes Andy Rachleff’s statement that “You get more credit than you deserve for being part of a successful company, and less credit than you deserve for being part of an unsuccessful company.”

Our teams hold a huge opportunity for boosting our career success, so investing in my team’s growth is always one of my top priorities. How can I make sure what I know is passed on to people, and that they feel happy and motivated to do their best work?


Some things are more important than others

Even when asking for a promotion or working on my own growth, I like to make it a team sport, something my manager and I can work on together. Your mileage may vary, but a good manager is one who loves to work with their reports to help them level up. Asking your manager what they’re looking for is a great way of making sure you stay aligned, and it can greatly improve your relationship with your manager.

Which would you rather?

The media likes to portray engineering as a job for those who don’t like working with people, but I’ve found that working with people is often the best part, and I do my best work in a group that embraces this team mindset. Together we can use each other’s strengths and points of view to build the best product.

Stop Telling Women To Go Into Management When They Bring Up Diversity

My friend recently told me about a conversation she had with another engineer at her job. She was describing how her team seems to have one archetype she doesn’t fit into. The entire team is more senior than her, all male, and tends to focus on the purely technical rather than the glue work1 that makes up a large chunk of professional software engineering. There have even been recent instances where she’s seen communication fall apart completely. As a mid-level engineer interested in IC growth and promotion, it’s difficult for her to see how to do that when the team only rewards and supports one archetype and type of person that she doesn’t want to fit into.

She told this to a staff-level male engineer on another team, and while he was well intentioned, he asked her if she had thought about management, since she seemed to keep bringing up a lot of issues that management works on.

I’ve been used to being the only woman in the room since I was 14. It still affects me the same way it did when I was a teenager. There’s even science backing up how I feel.2 When I’m the only woman in the room I immediately feel self conscious and like the entire weight of my sex rests on my shoulders. I wonder if I’m coming off as too nice or too much of a nag. When I make a technical suggestion I wonder if people are even hearing what I’m saying or dismissing me because I’m a woman. I can also feel myself internalizing that since there are no women around me, I don’t belong. There’s a lot going on in my subconscious brain.

So eventually when things make their way to my conscious brain – and I’m an engineer, I like solving problems – I bring it up and try to figure out a solution. I have a real vested interest in making sure my work environment feels inclusive to me. It impacts whether or not I get a promotion, how much I’m paid, or if I am just plain happy in my day to day.

However, more often than not, it feels like when women bring up a company’s lack of representation or another cultural issue, the problem then falls on them to fix it. And this often subtly leads the woman into management. “We need more people like you to help us with our diversity,” is something I’ve heard a lot as a female engineer, and when I spend even a portion of my time on people related tasks, I naturally get better at it.

It might be counterintuitive, but promoting women into management may actually hurt gender diversity, too.3 It subtly reinforces the notion that women aren’t technical but are instead managerial. A lot of women, myself included, are drawn to the not purely technical. And a lot of women are also good employees and will excel at a job they are tasked with, but that doesn’t mean they should go into management.

I don’t want to be a manager right now – I like helping people grow and I like improving culture, but I have a lot more fun building things. At the moment, I know management won’t fulfill me. I know this because I’ve spent a lot of time introspecting about my goals, which is something all women should do; otherwise, I’m willing to bet, well-meant advice is likely to lead women directly into management.4

So what are a couple things allies can say to a woman when she brings up “diversity”?

  1. Validate what they’re feeling. There’s a lot of nuance and we’re fighting social psychology right now. Chances are women are right when something feels “off”. A lot of women I know in engineering have asked themselves the question “Am I crazy?” when thinking about their experiences. This puts a lot of us in a vulnerable position when we share our stories, so when someone feels comfortable enough to share her story with you, this should not be treated lightly.5
  2. Promise your friend or colleague you will bring up diversity and champion culture more – allyship is key. It also normalizes the idea that all people and culture problems are everyone’s problems, not just those who are affected. Just make sure that if you’re repeating any ideas from others that any credit goes to where the idea came from.


Simple Go Concurrency Notes

I recently wrote a couple of programs that relied on a separate process to continuously do some work. Whenever I’ve written concurrent programs in Go I usually go through the same process of reminding myself how channels work, what gets blocked, and when we need to rely on a WaitGroup.

This blog post is written with future me in mind, for when I need to remind myself how Go handles concurrency (and by writing it out like this, I’m more likely to remember). It does not cover concurrent data access (i.e. mutexes and semaphores).

Goroutines

Goroutines are Go’s method of concurrency. They operate simultaneously alongside other routines. Every Go program has at least one goroutine: the one that runs main.

To start another goroutine you can use the keyword go in front of a method or function call.

Example:

package main

import (
    "fmt"
    "time"
)

func hello() {
    fmt.Print("hello ")
}

func main() {
    go hello()
    go func() {
        fmt.Println("world")
    }()
    // main is a goroutine too; sleep briefly so the others get a chance to run before it exits
    time.Sleep(100 * time.Millisecond)
}

Channels

A channel is one of the main ways goroutines talk to each other. A huge advantage is that you don’t have to manage any locks yourself.

Channels come in two forms: Buffered and unbuffered.

Since Go is a typed language, you need to specify what type of object the channel will pass around (interface{} works here too if your program needs to be type agnostic).

Unbuffered Channels

This is the default channel creation. You pass values from one goroutine to the other one at a time. The sending goroutine will block until the channel is read from.

unbufferedChannel := make(chan int)
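As a quick illustration of that blocking behavior: sending on an unbuffered channel when nothing is ready to receive will deadlock. A minimal sketch:

package main

func main() {
    c := make(chan int)
    c <- 1 // blocks forever: no other goroutine is ready to receive
    // fatal error: all goroutines are asleep - deadlock!
}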

Buffered Channels

Buffered channels are particularly useful when you know how many values you’ll be sending or want to impose better limits on your program. You create a channel with a specified capacity, and until the buffer is full, goroutines won’t block when they write to it.

// creates a channel with a buffer size of 10
bufferedChannel := make(chan int, 10)
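For contrast, here’s a minimal sketch showing that a buffered channel accepts sends up to its capacity even though nothing is receiving yet:

package main

import "fmt"

func main() {
    c := make(chan int, 2) // buffered: holds up to 2 values

    c <- 1 // doesn't block, the buffer has room
    c <- 2 // doesn't block, the buffer is now full
    // a third send here would block until something receives

    fmt.Println(<-c) // 1
    fmt.Println(<-c) // 2
}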


Writing to a channel

Writing to a channel is incredibly easy; all you have to do is the following:

c := make(chan int, 10)
c <- 42 // send a value on the channel

Note that some channels can’t be written to because they are declared as “read only” (receive-only) channels. For example, the following would NOT compile:

func main() {
   c := make(chan int, 1)
   cantWriteToReadChannel(c)
}

func cantWriteToReadChannel(c <-chan int){
   c <- 2
}

<-chan int specifies a read-only (receive-only) channel. To specify a write-only (send-only) channel, move the arrow to the other side: chan<- int. Thus the following would compile:


func main() {
   c := make(chan int, 1)
   writeToSendOnlyChannel(c)
}

func writeToSendOnlyChannel(c chan<- int) {
   c <- 2
}

Reading from a channel

The simplest way to read from a channel is the following:

package main

import "fmt"

func main() {
   c := make(chan string)
   go sendString(c)

   msg := <-c
   fmt.Println(msg)
}

func sendString(c chan<- string) {
   c <- "Read from single channel"
}

Output:

Read from single channel

It’s rare, however, that we only want to send one message between goroutines. To continuously read, we can set up a for loop and range over the channel.

for i := range c {
    fmt.Println(i)
}

Example usage:

package main

import "fmt"

func main() {
   c := make(chan int)
   go sendInt(c)

   for i := range c {
       fmt.Println(i)
       if i == 2 {
           return
       }
   }
}

func sendInt(c chan<- int) {
   for x := 0; x < 5; x++ {
       c <- x
   }
}

Outputs:

0
1
2

Closing a channel

In the above example of reading from a channel, we break out of the for loop when we reach a certain input. That obviously isn’t what we want in real life. What if we want to read everything put on a channel and only exit when we’re done? We can close the channel, and the for loop will then iterate over what’s left in the channel and exit.

package main

import "fmt"

func main() {
   c := make(chan int)
   go sendInt(c)

   for i := range c {
       fmt.Println(i)
   }
}

func sendInt(c chan<- int) {
   for x := 0; x < 5; x++ {
       c <- x
   }
   close(c)
}

Outputs:


0
1
2
3
4

Reading from multiple channels

Oftentimes we want to read from multiple channels; for that we can use a select statement inside a for loop.

package main

import "fmt"

func main() {
	cInt := make(chan int)
	cStr := make(chan string)
	go sendInt(cInt)
	go sendStr(cStr)

	for {
		select {
		case i := <-cInt:
			fmt.Print(i, " ")
		case m := <-cStr:
			fmt.Print(m, " ")
		}
	}
}

func sendInt(c chan<- int) {
	for x := 0; x < 5; x++ {
		c <- x
	}
}

func sendStr(c chan<- string) {
	msg := []string{"Hello", "World", "!"}
	for _, m := range msg {
		c <- m
	}
}

Outputs:

Hello World 0 ! 1 2 3 4
fatal error: all goroutines are asleep - deadlock!

goroutine 1 [select]:
main.main()
	//threadfun/main.go:12 +0x168
exit status 2

Notice that we end in deadlock once both senders finish: the select keeps waiting, but no goroutines are left to send. If we were to close the channels like we did before, we’d get the following output (and the program never exits), because receiving from a closed channel immediately returns the zero value:

Hello World ! 0  1 2 3  4 0  0   0 0  0 0   ……. (continuously and never ends)

Thus we need something a little different. Luckily, Go’s receive operation has a two-value form that lets us know when a channel is closed. We can change the above to the following:

package main

import "fmt"

func main() {
   cInt := make(chan int)
   cStr := make(chan string)
   go sendInt(cInt)
   go sendStr(cStr)

   c1, c2 := true, true
   var i int
   var m string
   for c1 || c2 {
       select {
       case i, c1 = <-cInt:
           if c1 {
               fmt.Print(i, " ")
           }

       case m, c2 = <-cStr:
           if c2 {
               fmt.Print(m, " ")
           }
       }
   }
}

func sendInt(c chan<- int) {
   for x := 0; x < 5; x++ {
       c <- x
   }
   close(c)
}

func sendStr(c chan<- string) {
   msg := []string{"Hello", "World", "!"}
   for _, m := range msg {
       c <- m
   }
   close(c)
}

Outputs:

Hello 0 World ! 1 2 3 4
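Once a channel is closed, its case in the loop above keeps firing immediately (with the flag set to false) until the other channel closes too. Another idiom worth knowing is to set the channel variable to nil once it’s closed: receiving from a nil channel blocks forever, so that case simply stops being selected. A sketch of the same program rewritten that way (sendInt and sendStr are the same as above):

package main

import "fmt"

func main() {
    cInt := make(chan int)
    cStr := make(chan string)
    go sendInt(cInt)
    go sendStr(cStr)

    for cInt != nil || cStr != nil {
        select {
        case i, ok := <-cInt:
            if !ok {
                cInt = nil // a nil channel blocks forever, so this case stops firing
                continue
            }
            fmt.Print(i, " ")
        case m, ok := <-cStr:
            if !ok {
                cStr = nil
                continue
            }
            fmt.Print(m, " ")
        }
    }
}

func sendInt(c chan<- int) {
    for x := 0; x < 5; x++ {
        c <- x
    }
    close(c)
}

func sendStr(c chan<- string) {
    msg := []string{"Hello", "World", "!"}
    for _, m := range msg {
        c <- m
    }
    close(c)
}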

Waiting on Done

Another common way to stop a reading goroutine is to give it an explicit “done” channel and return as soon as a value arrives on it.

package main

import "fmt"

func main() {
   c := make(chan int)
   done := make(chan bool)
   go readOnChannel(c, done)

   for x := 0; x < 5; x++ {
       c <- x
   }
   done <- true

}

func readOnChannel(c <-chan int, done <-chan bool) {
   for {
       select {
       case i := <-c:
           fmt.Print(i, " ")

       case <-done:
           return
       }
   }
}

Context with cancel

A common alternative to a hand-rolled done channel is a context with a cancel function. Contexts are useful because they can also carry request-scoped values and deadlines, so using a context to let the program know when to finish is a useful added abstraction.

package main

import (
   "context"
   "fmt"
)

func main() {
   c := make(chan int)
   ctx, cancel := context.WithCancel(context.Background())

   go readOnChannel(ctx, c)
   for x := 0; x < 5; x++ {
       c <- x
   }
   cancel()

}

func readOnChannel(ctx context.Context, c <-chan int) {
   for {
       select {
       case i := <-c:
           fmt.Print(i, " ")
       case <-ctx.Done():
           return
       }
   }
}

Ticker

We can also use a select statement to run something every X amount of time using a ticker. 

package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.NewTicker(time.Second * 1)
    defer t.Stop() // release the ticker's resources when main returns
    for {
        select {
        case <-t.C:
            fmt.Println("Tock")
        }
    }
}

Outputs:

Tock
Tock
Tock
...etc...
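If the ticker loop needs to stop cleanly, it combines nicely with the context cancellation shown earlier. A minimal sketch, where the three-second timeout is an arbitrary choice for illustration:

package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    // cancel automatically after three seconds
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()

    t := time.NewTicker(time.Second)
    defer t.Stop() // release the ticker's resources on the way out

    for {
        select {
        case <-t.C:
            fmt.Println("Tock")
        case <-ctx.Done():
            return
        }
    }
}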

WaitGroups

WaitGroups make it easier to wait for different goroutines to finish.

Imagine you work in a grocery store and are in charge of overseeing three employees taking inventory. You give each one a clipboard to take inventory, and when they’re done they give the clipboard back. Only when you’ve received the very last clipboard are you allowed to say that inventory is done. This is essentially what a WaitGroup is in Go.

Typically the WaitGroup counter is incremented with Add just before a goroutine is launched, and the goroutine calls Done once it finishes. We can then Wait on the WaitGroup until every goroutine has called Done.

package main

import (
   "fmt"
   "sync"
   "time"
)

func worker(id int, wg *sync.WaitGroup) {
   defer wg.Done()
   fmt.Printf("Worker %d starting\n", id)

   time.Sleep(time.Second)
   fmt.Printf("Worker %d done\n", id)
}

func main() {
   var wg sync.WaitGroup

   for i := 1; i <= 5; i++ {
       wg.Add(1)
       go worker(i, &wg)
   }

   wg.Wait()
}

Outputs:

Worker 2 starting
Worker 4 starting
Worker 5 starting
Worker 1 starting
Worker 3 starting
Worker 1 done
Worker 3 done
Worker 4 done
Worker 5 done
Worker 2 done
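One pattern that ties channels and WaitGroups together: several producer goroutines write to a single channel, and a separate goroutine waits for all of them and then closes the channel so a range loop over it ends on its own. A minimal sketch (the producer logic here is made up for illustration):

package main

import (
    "fmt"
    "sync"
)

func main() {
    c := make(chan int)
    var wg sync.WaitGroup

    // start three producers that each send a few values
    for id := 1; id <= 3; id++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for x := 0; x < 3; x++ {
                c <- id*10 + x
            }
        }(id)
    }

    // close the channel only after every producer has finished
    go func() {
        wg.Wait()
        close(c)
    }()

    for v := range c {
        fmt.Println(v)
    }
}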

Go Slice Debugging

I finally learned how Go slices work.

A few days ago I was struggling to understand why my submission for a LeetCode question was failing. As far as I could tell, the logic was there, but I was somehow using the same underlying slice memory for my answer, resulting in unintentional repetition.

After much frustration, I noticed a small difference that eventually got me what I wanted. I was baffled though, so I decided now was the time to learn all about slices.

Spot the difference

The premise of the original LeetCode question is to generate all permutations of an integer array.

I ended up getting so frustrated while trying to solve it that I eventually tried to emulate an already-submitted answer. However, doing that still didn’t fix my issue! I was at my wit’s end until I noticed a very subtle difference between the solutions. So let’s play “Spot the difference” between two submissions:

My initial solution

func permute(nums []int) [][]int {
   ans := make([][]int, 0, len(nums))
   backtrack(make([]int, 0, len(nums)), nums, &ans)
   return ans
}

func backtrack(left []int, rem []int, output *[][]int) {
   if len(rem) == 0{
      *output = append(*output, left)
   }
   for i, l := range rem {
      backtrack(append(left, l),
      append(append([]int{}, rem[:i]...), rem[i+1:]...), output)
   } 
}

Returns:

[[3,2,1],[3,2,1],[3,2,1],[3,2,1],[3,2,1],[3,2,1]]

The correct solution

func permute(nums []int) [][]int {
    ans := make([][]int, 0, len(nums))
    backtrack(make([]int, 0), nums, &ans)
    return ans
}

func backtrack(left []int, rem []int, output *[][]int) {
    if len(rem) == 0{
        *output = append(*output, left)
    }
    for i, l := range rem {
        backtrack(append(left, l),
        append(append([]int{}, rem[:i]...), rem[i+1:]...), output)
    }
}

Returns:

[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]

For the savvy eye – you’ll see that line 3 differs between the two. In one solution I allocated the slice with its expected final capacity, and in the other I did not. Why did that have such a tremendous impact?

How a slice works in Go

For a while I’d been getting away with simply knowing that slices don’t behave the way you might expect when you pass them into a function. At a certain level they’re passed by value, and at others they behave as if passed by reference. Where those distinctions lie, I’d never been terribly sure. This maddening LeetCode problem led me to finally invest in learning.

The official Go blog does a terrific job of explaining slices, and here I will try to summarize some of the main points.

We can think of a slice as a struct that contains 3 pieces of information:

  1. Capacity
  2. Length
  3. A Pointer to the first value in the slice

All three of those components are super important and give slices their tremendous versatility. You can almost view a slice as a small header that points into a backing array, with the added benefit of carrying the length and capacity along with it.

With the above structure in mind, we can think of an integer slice as something similar to the following:

type intSlice struct {
    Length int
    Capacity int
    ZerothElement *int
}
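For comparison, the standard library’s reflect package defines a similar header type (it has since been deprecated in favor of unsafe-based alternatives, but it shows the same shape):

type SliceHeader struct {
    Data uintptr
    Len  int
    Cap  int
}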

Passing by reference vs passing by value.

With the above struct model, we can see how we could be passing slices as both reference and value. Take the following two examples (stolen from the blog linked above):

When a slice acts as something passed by value:

package main

import "fmt"

func SubtractOneFromLength(slice []byte) []byte {
    slice = slice[0 : len(slice)-1]
    return slice
}

func main() {
    slice := make([]byte, 50) // not in the original snippet, but needed for it to compile
    fmt.Println("Before: len(slice) =", len(slice))
    newSlice := SubtractOneFromLength(slice)
    fmt.Println("After: len(slice) =", len(slice))
    fmt.Println("After: len(newSlice) =", len(newSlice))
}

Before: len(slice) = 50
After: len(slice) = 50
After: len(newSlice) = 49

When a slice acts as something passed by reference:

package main

import "fmt"

func AddOneToEachElement(slice []byte) {
    for i := range slice {
        slice[i]++
    }
}

func main() {
    buffer := make([]byte, 256) // not in the original snippet, but needed for it to compile
    slice := buffer[10:20]
    for i := 0; i < len(slice); i++ {
        slice[i] = byte(i)
    }
    fmt.Println("before", slice)
    AddOneToEachElement(slice)
    fmt.Println("after", slice)
}

before [0 1 2 3 4 5 6 7 8 9]
after [1 2 3 4 5 6 7 8 9 10]

Go passes slices by value. However, a copy of a pointer still points at the same memory. So whenever we change what’s stored in the backing array itself, we’re modifying the original data that was passed in – even after we exit a function that was supposedly passed a copy, the original data may have changed!

Notice, however, that when we update the length of a passed-in slice, we’re not changing the original, since the length isn’t stored behind a pointer. If for some reason slices stored a *int for length, then we would see a change.
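Here’s a small sketch of my own showing both behaviors at once: an append inside a function writes into the caller’s backing array (because there’s spare capacity), but the caller’s length never changes:

package main

import "fmt"

func appendOne(s []int) {
    s = append(s, 99) // writes into the shared backing array, but only the local copy's length grows
}

func main() {
    s := make([]int, 2, 4) // [0 0] with spare capacity
    appendOne(s)

    fmt.Println(s, len(s), cap(s)) // [0 0] 2 4 -- the caller's length is unchanged
    fmt.Println(s[:3])             // [0 0 99] -- but the value landed in the backing array
}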

Capacity and make

Slices also store capacity. This is what really separates arrays and slices in Go. Imagine for a second that we didn’t have a capacity field. Every time we ran

s = append(s, "new field")

we would need to allocate new memory for another slice. Instead, Go uses capacity to set aside a certain amount of memory for each slice, making the majority of appends an O(1) operation.

Quite often, though, we do end up appending something that goes beyond the allowed capacity. In this case, Go will create a new underlying array with roughly double the capacity and copy the original array over to the new memory.
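You can watch this growth happen by printing len and cap while appending (the exact growth factor is an implementation detail and may vary between Go versions):

package main

import "fmt"

func main() {
    var s []int
    for i := 0; i < 10; i++ {
        s = append(s, i)
        fmt.Println("len:", len(s), "cap:", cap(s))
    }
}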

Copying things over isn’t a cheap operation: Go has to walk the entire backing array and copy every value into the new one. One easy and quick performance optimization many Go programmers use is make.

make allows the programmer to specify the length and capacity of a slice. The first argument to make is the type of structure you wish to make, for example []int or []string.

The next argument is an integer specifying the length, and it is required in all slice make calls. If you specify a length n, the slice will be created with n zero values of that type.

The final, optional argument specifies the capacity of the slice. This can make appends more performant if we have even a rough idea of how long the slice will grow.

Examples of make usage:

make([]int, 0, 3)

Slice: []
Slice struct: {
    Length: 0
    Capacity: 3
    ZerothElement: <pointer to a backing array with room for 3 ints>
}

make([]string, 2)

Slice: ["",""]
Slice struct: {
    Length: 2
    Capacity: 2
    ZerothElement: <Pointer to zeroth index>
}

Why didn’t my original solution work?

When I was writing the input to the backtrack function, I knew the left array would eventually reach a certain size, so I figured I’d create a slice with that initial capacity to avoid having to reallocate memory.

However, since the function relies on distinct slices being passed through, this backfired: because there was spare capacity, every append(left, l) wrote into the same backing array instead of allocating a new one, so each level of the recursion was operating on the same memory. The only thing that changed in each call was the length of the slice.

In the accepted solution I gave the slice an initial capacity of 0, so Go allocated a new backing array each time append(left, l) was run, because the existing capacity was less than what was needed. With different memory allocated each time, the recursion stack could operate on different pointers – leading to distinct slices.
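To see the aliasing in isolation, here’s a minimal sketch of my own, separate from the LeetCode solutions above. Two appends to the same slice share one backing array whenever there’s spare capacity:

package main

import "fmt"

func main() {
    base := make([]int, 0, 2) // length 0, capacity 2: appends can reuse the backing array

    a := append(base, 1) // writes 1 into index 0 of base's backing array
    b := append(base, 2) // also writes into index 0 of the SAME backing array

    fmt.Println(a, b) // [2] [2] -- the 1 we appended was overwritten

    base = make([]int, 0) // capacity 0: every append must allocate
    c := append(base, 1)
    d := append(base, 2)
    fmt.Println(c, d) // [1] [2] -- distinct backing arrays
}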

Another way to get by this bug is to use copy and leave the initial capacity. I’ll leave that as an exercise for the reader though 🙂