The Top 3 Points You Should Have Paid Attention to in the Spotify Engineering Culture Videos
That Aren't Squads, Chapters, Tribes, or Guilds
When people say “Spotify Model” they’re almost always thinking about org structure (Squads, Chapters, Guilds, Tribes). Structure is the last thing you should worry about. Before structure, what you should have been paying attention to was aligned autonomy; building trust (cross-pollination, community); decoupling (including limiting blast radius). I’ll expand on these concepts from the old Spotify Engineering Culture videos as well as newer concepts that have come up in the intervening 8 years.
Jason is a Staff Agile Coach at Spotify, based in NYC since early 2015. Previously he was a Principal Consultant at ThoughtWorks, primarily based in Sydney, since early 2001. He first encountered Extreme Programming in 1999 via XProgramming.com, comp.object, comp.software-eng, and the Portland Pattern Repository (aka Ward's wiki, c2.com). These days he mostly posts to Twitter (@jchyip) and his Medium blog (https://jchyip.medium.com).
Business Agility Conference 2022
I have already been talking to some people here, because everyone wants to talk about the Spotify model. And that's what my talk is going to be about: what you should have paid attention to, instead of what you're asking me about. If I ask, "What do you think of when I say Spotify model?", the answer is essentially the wrong thing. So that's what I'm going to try to address here.
When most people say Spotify model, they only think about squads, chapters, tribes, guilds. In other words, org structure. When it comes to thinking about product development culture, I would say org structure is the last thing you should be considering, not the first. Generally, I would say the first thing is around business and product strategy, but I'm not going to get too much into that. Instead, I'm just going to point out a few interesting things that I think were in the videos that, for some reason, nobody pays attention to. I don't know why. Actually, I'm very skeptical that all of you have watched the videos, but we won't get into that.
Okay, so... We can't read that, so that's cool, because I'll just say it. Top three points that are the most important. The first one I'll get into is this concept called aligned autonomy. Does everyone recognize that picture there? You've seen this before? Probably; some people have used it in internal presentations, which is great. This concept of aligned autonomy actually comes from a book called The Art of Action. I will just read through here.
There's a guy named von Moltke, if you're interested in military history. If you're not, you might not like this book, but generally it's a nice book. His insight is that there is no choice to make between alignment and autonomy. Far from it: you want high autonomy and high alignment at the same time. There is no compromise. The more alignment you have, the more autonomy you can grant; one enables the other. You don't see them as endpoints on a single line, but as a two-dimensional thing. This is the summary of it: alignment and autonomy are not two ends of a scale, but two dimensions on a two-by-two.
This is the first consultant joke; the consultants here got it. A two-by-two matrix. Henrik Kniberg created this, which is a nice summary of it. If you have high alignment but low autonomy, this is your traditional, top-down, authoritarian setup: "This is what we need to do, and this is exactly how you need to do it." People think that's old-school military, but the truly bad quadrant is low alignment, low autonomy: you don't know what you're doing and you have no ability to control anything, which is the absolute worst. It's important to distinguish the two, because high alignment, low autonomy at least gives you clarity of orders. Better is knowing what you're supposed to be doing while being told how to do it. At least you're clear on why you're being ordered around. You don't like it, but you know, "Okay, we're doing something." At least it's still better. And then there's high autonomy, low alignment: you get to do whatever you feel like, and there's no particular purpose to it. Sometimes it's called do-whatever-you-feel-like autonomy. Unfortunately, a lot of people think that's what you should be doing, and that usually creates a mess.
The alternative, the one we want, is the high alignment, high autonomy quadrant, also known as aligned autonomy: it's very clear what we're trying to accomplish, but the way in which we accomplish it is left up to the group, the team. The insight is that alignment needs to be achieved around intent, and autonomy should be granted over actions. Intent is expressed in terms of what to achieve and why, while actions are taken in order to realize the intent. So the team or the person decides what to do and how. That's the idea with aligned autonomy.
Now, within a product development context at Spotify, there are really two parts to how I've seen it expressed. I'm not going to say this is how it's perfectly expressed; it's just generally what we're trying to push towards. One, a clearly expressed product strategy. Two, what's called an empowered product team, if you use the general phrase in the industry, but it's what we call squads.

Clearly expressed product strategy. There's a book called Good Strategy Bad Strategy. How many people have read this book? I'm just curious. Not enough. I first picked this up because some internal managers at Spotify were taking a leadership program we have, and this is one of the books everyone goes through. It's a really nice book on strategy. The key part of it is this thing called the kernel of good strategy, which I'll translate to product strategy. If you have a good strategy, you're going to have three things. The first is a diagnosis. When we talk about a diagnosis, typically we mean a diagnosis based on data and insights, not just gut feel. Well, there's a little bit of gut feel too, but it's quantitative and qualitative data and insights. The second is a guiding policy, which essentially means a set of beliefs about what you think will work. Sometimes it's a little hard to explain, but let's say: "Here's what I think the problem is and why it's happening. Here's what I believe will be a good response to it." Those beliefs are the guiding policy. Then finally, a set of coherent actions. Within a product development context, we call those bets: actions that reflect our beliefs and deal with the diagnosis of the situation. Product strategy, when it's good, reflects all of those things. That's just coming up with it; the other part is communicating it to everyone so that they can align to it.
Again, that's how we get to create that aligned autonomy effect.
An example of what this actually looks like: we have a framework internally we call DIBBs, for Data, Insight, Belief, Bet. It's effectively the things I just mentioned. Data and insight are the diagnosis part, belief is the guiding policy part, and bet is the coherent action part. When we do what we call company-level bets, any really large project-type thing, you have this template. Someone will fill it out. Some of them are good, some of them are not, but at least there's something you can critique. You can look at it and say, "That's the strategy." When this is done well, it means you can say, "Hey, I'm working on this thing. What is the DIBB for it?" You can look at that and say, "Okay, that's the strategy. I know what we're trying to do here." Or complain, "We don't know what we're trying to do here, because it doesn't make any sense."

Here's another expression of that. This comes from Spotify Design. They came up with something called the Thoughtful Execution Framework. It uses different language, but it still roughly reflects the same idea. There's a diagnosis. They fleshed it out more, so you have a goal, which is not a strategy; the goal is the thing the strategy is trying to achieve. But you still have data and insights. You're still identifying opportunities, which is essentially a diagnosis of the problem: "Here are the things we need to address." They talk about hypotheses, which are effectively the guiding policies: the things we believe will deal with the problem. And then coherent actions. Instead of bets, they talk about different potential solutions and what we learn from them. Because it's design, they're thinking more in an exploratory mode as opposed to bet-taking, but it's still the same idea. It reflects that strategy structure. Again, you do something like this to allow alignment, because now we have clarity about why we're doing things.
There's a structure to it.
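Spotify's actual internal DIBB template isn't shown here, so purely as a hedged illustration, here is one way such a template might be captured as a simple record. The field names, the `summary` wording, and the example content are my assumptions, not the real template:

```python
from dataclasses import dataclass

@dataclass
class DIBB:
    """One entry in a hypothetical DIBB template: Data, Insight, Belief, Bet.

    Structure and field names are illustrative assumptions only, mapping the
    kernel of good strategy onto fields: data + insight = diagnosis,
    belief = guiding policy, bet = coherent action.
    """
    data: list[str]  # observations: quantitative/qualitative facts
    insight: str     # diagnosis: what the data means
    belief: str      # guiding policy: what we think will work
    bet: str         # coherent action: the large project we commit to

    def summary(self) -> str:
        # A one-line statement of the strategy that others can critique.
        return (f"From {len(self.data)} data point(s) we learned that "
                f"{self.insight}; we believe {self.belief}, "
                f"so we bet on: {self.bet}.")

# Hypothetical example entry, for illustration only.
dibb = DIBB(
    data=["retention dips in week one", "new users report getting lost"],
    insight="new users churn before their first success moment",
    belief="a guided first-run flow will improve early retention",
    bet="ship a guided onboarding experiment",
)
print(dibb.summary())
```

The point of writing it down in a fixed shape, as the talk notes, is that anyone can pick up a filled-in DIBB and either align to it or argue with it.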
This last example is what my area does. I work in advertising technology and R&D, so we tend to have a longer-term view of where the industry is going and how we line up to it. Here's an expression of strategy, which is cross-disciplinary, from a technology, product, design, and insights perspective. Here are guiding principles, which are effectively what we believe is necessary to achieve success in response to that. And then we do something like top five priorities, which are like the coherent actions: the top things we think we need to do to accomplish it. This is a presentation, so it all looks clean. In reality, it's messy. Which is fine. We're not dealing with a situation that is simple or straightforward, and none of these things are intended to make it simple or straightforward. They're just trying to create some degree of coherence so that we can work in a relatively coherent way. That's clarity of product strategy.

The next thing is empowered product teams, otherwise known as squads. The translation didn't quite work here, but I'll just say it: squad is not a synonym for team. It's a synonym for what Marty Cagan calls an empowered product team. For anyone who's virtual, I'll update the slides so you can read them, but I already remember this stuff. When we say squad, we don't mean a generic team. We're referring to an empowered product team.
An empowered product team refers to three things. One, it's multidisciplinary. Within Spotify, there are typically four disciplines: tech, product, design, and insights. Insights refers to essentially data science-type stuff. I'm assuming everyone knows what tech, product, and design mean. Yes? Good? Okay, cool. In general, though, because I know some groups here are not doing product development: multidisciplinary means having all the skills necessary to progress your mission. Whatever you need, that's in the team. Two, you have a mission: there is a scope, a purpose for that team to exist. That has to be there; otherwise, it's not a clear team. Three, you have the expectation and authority to figure out how to accomplish the mission. I explicitly put expectation here because it is not do-what-you-feel-like autonomy. You have a goal, and you are expected to work out how to get there. I think that's actually quite important. The authority part I throw in because it might be harder for other organizations, but it's tied in with the expectation: if I expect you to work it out, then you have the autonomy to work it out. This also includes coordination with other squads as necessary. Some of our stuff is quite complicated; you're working across teams, and those teams have their own missions. You are expected to work out how to do that coordination too. It's all part of it. This slide I don't expect you to read. It's really to say that an empowered product team is a relatively straightforward concept. Sometimes difficult to execute, but relatively straightforward. There's more to it; these days I generally recommend looking at Team Topologies. The thing I put up here is really just a one-page summary we created to communicate that.
Because you do have different dynamics depending on the type of product capability you're working on, your teams might look a little different. You have platform-type stuff, et cetera. They all still follow the empowered product team idea, but there are variants of it, so you deal with that too. Again, from the perspective of what enables aligned autonomy, it still holds.
You have a clearly expressed product strategy, and you have this idea of an empowered team, which may be expressed differently depending on what you're actually dealing with. Those two things are what allow you to achieve aligned autonomy. Again, that's what you should be thinking of, primarily, much more so than "we have squads."

The second thing is trust at scale. By trust at scale, I guess this is straightforward: it means that across your entire organization, people trust each other. I'm not going to say more than that; whatever that feels like to you, that's what I'm talking about. You need to be able to say, "We're a giant group of people, and I trust you, so we can operate more effectively." And we have to somehow systematize that effect, which sounds pretty straightforward and is crazy complicated to do at scale. Back in the day, it didn't matter, because everyone was sitting in the same room and everyone knew each other. But at scale, it gets quite a bit more difficult. There are two things I think are in play to enable it. One is cross-pollination; the other is what I call a culture of mutual respect.
In the videos, they say people are greater than everything. That's effectively what this is referring to. Cross-pollination: the effect is that it humanizes across boundaries. Think of the person as the trust carrier; they're the mechanism by which trust moves across boundaries. I move the person around, treating them as a trust resource, not a skill resource. Three ways this happens.
The first is embedding: someone temporarily transfers to another team. They move over, get to know each other, and then come back. Now we have an in; we have a relationship. There's a funny effect I read about in Team of Teams: when people don't know you or your team, they assume whoever you send over reflects everyone else on your team. So you make sure you send the best person over, because they become the example. They just assume everyone's like that person. Maybe you're a horrible person, but you send the best person, and they assume you're good too, so it's a nice little trick.

The second is a liaison, which is not an embed but, let's say, a point of contact. Especially with larger stuff, we'll say, "Hey, here's the primary point of contact. They'll be your representative, and you talk to them." We deliberately select that person for particular characteristics: one, they need to be well respected; two, they're good at building relationships, very explicitly. Because there are some people I know, really smart people, where you go, "I don't want you talking to anyone. We'll work on your skills over time, but right now, let's not." Again, because of that effect: everyone will assume everyone on your team is like that person, and if they're not really good at it, everyone will hate your team. And it's not your team's fault; you just sent the wrong person.

The final thing is internal movement, a permanent transfer. Someone says, "Hey, I got tired of this team. I'm moving to another team," and maybe there's a reflex where you think that's a bad thing. But it's a good thing, because now you have someone over there who knows you, and they help create connections: "Hey, I used to work with them," and then things connect.
This has actually played out quite a bit; it helps work things out, so it's a good thing. Again, you do these things deliberately to try to create systemic trust across boundaries.
A very quick example here. I'm not going to go into these, but Cara Lemon, another coach, actually went into more detail. It's not just those three things; those are broader categories. There are a lot of different combinations of team interactions that can occur to build trust up. Mainly, think of this from the perspective of: how do I humanize across boundaries?
A culture of mutual respect, AKA people are greater than everything. Mutual respect encourages trust, and there are three things I think of that achieve it. Role models: influential people consistently modeling respectful behavior. I'm not going to talk specifically about what respectful behavior looks like, because there's a bit of cultural context there too, but anything you'd think of as "that seems very respectful," all of it, which is a very long list. Systems: incentivize respectful behavior and disincentivize disrespectful behavior. That pretty much means anyone who's disrespectful should not be rewarded, and you have to make sure that happens: they don't get promoted, and that kind of thing. Stories: stories about what is good emphasize respect, and stories about what is bad deemphasize disrespect, so when you're celebrating things in all-hands or whatever, that kind of stuff matters. I'm watching the clock, so I'm going to go faster.
The great Fred Rogers said, "Anything that is human is mentionable, and anything that is mentionable is manageable." There is something in the videos where they say trust requires no politics, which I actually disagree with. I am a student of Fred Rogers, and I believe politics is a very human thing. We're not clones; we have different interests, and that's something we need to be able to talk about. You don't want to pretend there are no politics. You don't want to pretend there's no fear. You do want to encourage dialogue. If I'm concerned about something, if I have a need that doesn't match your need, we should talk about it. We shouldn't fight about it; we should talk about it. That's important. I do training on a skill set called Crucial Conversations. The key concept there is the pool of shared meaning, where everyone brings their meaning in, and that's how you want to think about politics and disagreements.

A quick aside: this setup doesn't get any back and forth. All of you sitting there, a mass group of people, there's no back and forth. It's all one way, because it's very uncomfortable for you to engage with me, despite the fact that I don't have any time, so I can't answer you anyway. But if it were a small group sitting at a table, there would be back and forth. That setup is designed to get engagement. It's also designed to show respect: I value your input, so I create a small group to allow you to talk.
The last thing, and I'm going quickly because I'm running out of time, is decoupling. The key point is that your architecture should be coupled where product capabilities should be coupled and decoupled where product capabilities should be decoupled. And this is reflected in your product lifecycle. Things that are earlier in a product lifecycle are volatile, and things later are stable. Things that are stable should not be coupled to things that are volatile, because that makes them unstable. So you have this one-way rule: everything can depend on mature stuff, but new, market-development stuff should only sit at the end of the dependency chain. You don't want anything depending on it, because it's dangerous. I've done a quick sketch here mapping out different capabilities to show this, so people start thinking, "What are the things? Where are they situated? How do we modify our architecture to decouple things, which will then allow us to be more flexible when we need to be, and less flexible when we shouldn't be?" That's very difficult, which is maybe why no one talks about it.

To summarize, the top three points that are more important than squads, chapters, tribes, and guilds: one, aligned autonomy; two, trust at scale, including a culture of mutual respect; and three, decoupling.
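One way to make the one-way dependency rule from the decoupling point concrete: tag each capability with a lifecycle maturity level and flag any dependency where something more stable depends on something more volatile. This is a minimal sketch of the idea only; the component names, the level names, and the checking function are my own illustration, not anything Spotify actually uses:

```python
# Maturity levels, from most volatile to most stable. The one-way rule:
# a component may only depend on components at the same or a more stable
# level, so volatile, early-lifecycle work never destabilizes mature stuff.
MATURITY = {"market_development": 0, "growth": 1, "mature": 2}

def unstable_dependencies(components, dependencies):
    """Return (dependent, dependency) edges that violate the one-way rule.

    components:   dict mapping component name -> maturity level name
    dependencies: iterable of (dependent, dependency) edges
    """
    violations = []
    for dependent, dependency in dependencies:
        # Flag edges where the dependency is more volatile than the dependent.
        if MATURITY[components[dependency]] < MATURITY[components[dependent]]:
            violations.append((dependent, dependency))
    return violations

# Hypothetical capability map for illustration.
components = {
    "playback": "mature",
    "personalized_ads": "market_development",
    "ad_serving": "growth",
}
edges = [
    ("personalized_ads", "ad_serving"),  # new depends on stable: fine
    ("playback", "personalized_ads"),    # stable depends on volatile: flagged
]
print(unstable_dependencies(components, edges))  # [('playback', 'personalized_ads')]
```

A check like this could run over a real dependency graph to surface exactly the couplings the talk warns about: mature capabilities leaning on volatile ones.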