This was originally published on Medium and is reproduced here.

**Your product has solid retention, and now it’s time to spin up a Growth team. You ask yourself, how big should my team be?**

This is one of the most common questions I get when consulting with companies, and the practical answer is that size does matter. I would love to hear your opinions and experiences in the comments. Here is the short summary.

**TL;DR version:**

For a medium-scale company, the formula for annual growth attributable to A/B tests is

Annual growth % ≈ 100 × (e^(0.04 × N × p) − 1)

where N is the number of growth engineers (each running one A/B test per week) and p is the average probability that an experiment succeeds. The constant 0.04 comes from 40 experiment weeks per year times an average +0.1% lift per successful test.

For example, if N = 20 and p = 0.3, expect an annual growth of about 27%.

Therefore, a well-oiled growth team needs to have 20 engineers, plus supporting product managers, designers, statisticians, and marketers, to achieve roughly 27% annual growth driven purely by experimentation. Read on to figure out how I arrived at this formula.

**Extended version:**

In this post, I will provide you with a mental model and a Fermi calculator to quantify growth team size. Before jumping into details, we will assume the following.

- Your product has hit product-market fit and you are measuring a meaningful north star metric. And not some vanity metric like open rate for a clickbait email 😊
- You have buy-in from the CEO and other execs, in principle and more importantly in action to build a growth team.
- You have an A/B testing infrastructure in place.

Here are some numbers to start with.

- A well-oiled growth team should strive to achieve an experimentation rate of *one A/B test per engineer per week*.
- A new growth team typically starts with about a 10% success rate. In other words, *nine out of ten ideas fail*!
- As the team matures, the success rate goes up. I have come across teams where the success rate is around 30%, i.e., *three out of ten ideas succeed*.

These numbers are already good enough to estimate the team size. The only other number we need is how many experimentation-focused weeks we have in a year.

- Let’s assume the team spends two weeks per quarter on planning, feedback reviews, etc. That means we have 52 − 8 = 44 weeks to work on growth experiments.
- Let’s discount another four weeks to take into account vacations, sick days, removing technical debt, resolving bugs, etc.

This leaves us with 40 weeks in a year to run A/B tests.

Next, assume that each successful A/B test moves the north star metric by an average of +0.1%. We now get the following table.

| Weekly A/B tests | 10% success rate (successes, %-impact) | 20% success rate (successes, %-impact) | 30% success rate (successes, %-impact) |
| --- | --- | --- | --- |
| 1 | (4, 0.4%) | (8, 0.8%) | (12, 1.2%) |
| 3 | (12, 1.2%) | (24, 2.4%) | (36, 3.7%) |
| 6 | (24, 2.4%) | (48, 4.9%) | (72, 7.5%) |
| 10 | (40, 4.1%) | (80, 8.3%) | (120, 12.7%) |
| 20 | (80, 8.3%) | (160, 17.3%) | (240, 27.1%) |

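The table can be reproduced in a few lines of Python. This is a sketch under the assumptions stated above: 40 experiment weeks per year and a +0.1% lift per successful test, compounded.

```python
# Reproduce the (successes, %-impact) table: each successful A/B test
# compounds a +0.1% lift over 40 experiment-focused weeks per year.
WEEKS = 40    # experiment-focused weeks per year
LIFT = 0.001  # assumed average lift per successful test (+0.1%)

def annual_impact(tests_per_week, success_rate):
    """Return (number of successful tests, compounded annual %-impact)."""
    successes = round(WEEKS * tests_per_week * success_rate)
    growth = (1 + LIFT) ** successes - 1  # compounded annual growth
    return successes, round(100 * growth, 1)

for n in (1, 3, 6, 10, 20):
    print(n, [annual_impact(n, p) for p in (0.1, 0.2, 0.3)])
```

For instance, `annual_impact(20, 0.3)` gives 240 successes and a 27.1% impact, the bottom-right cell of the table.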

What the table above suggests is that a YoY growth rate of about 27% attributable to A/B tests needs 20 engineers delivering one experiment per week! You can now work backwards and estimate the total team size based on the PM-to-engineer ratio and other dynamics within your company.

**An approximate formula to compute YoY growth from A/B tests.**

If N is the number of experiments per week and p is the average probability of success, then for a *typical growth team*, the year-over-year metric growth attributable to A/B tests as a percentage is

YoY growth % ≈ 100 × (e^(0.04 × N × p) − 1)

For example, if N = 20 and p = 0.3, this formula gives about 27%, which is approximately the last row in the 30% column in the table above!
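A quick way to see why the exponential approximation works: with 40 weeks and a +0.1% lift per success (the assumptions used throughout), exact compounding and the formula agree to within a rounding error.

```python
from math import exp

# Exponential approximation vs. exact compounding, assuming 40 weeks
# and a +0.1% lift per successful test (as in the post).
def approx_growth_pct(n, p):
    # 0.04 = 40 weeks x 0.001 lift per success
    return 100 * (exp(0.04 * n * p) - 1)

def exact_growth_pct(n, p):
    successes = round(40 * n * p)
    return 100 * ((1.001) ** successes - 1)

print(approx_growth_pct(20, 0.3))  # ~27.12
print(exact_growth_pct(20, 0.3))   # ~27.11
```

The two agree because ln(1.001) ≈ 0.001, so compounding many small +0.1% wins behaves like an exponential in the number of successes.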

**Can this go on forever?**

Note that if you aim to run 200 A/B tests per week at a 30% success rate, you get 2,400 successful A/B tests in a year and a YoY growth of over 1000%. So, is this achievable? The simple answer is no. Things don’t always go up and to the right. You will hit diminishing returns as the team scales up. Some underlying causes include:

- More managerial challenges, more meetings, and more communication overhead as the team scales up
- Market saturation
- Concerns over the user’s product experience
- Worrying about multivariate tests rather than simple A/B tests

Diminishing returns are just a fact of life. The mathematical way to model them is with the logistic differential equation; check out the Wikipedia article if you are curious about it.
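For reference, here is a sketch of the logistic model: the metric M grows at rate r while it is small, then flattens out as it approaches a ceiling (carrying capacity) K.

```latex
\frac{dM}{dt} = r\,M\left(1 - \frac{M}{K}\right),
\qquad
M(t) = \frac{K}{1 + \frac{K - M_0}{M_0}\,e^{-rt}}
```

Early on, growth looks exponential; as M nears K, the (1 − M/K) factor shrinks the gains, which is exactly the diminishing-returns behavior described above.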

So, the answer to the original question is the following: Aim to get to a stage where you can run one A/B test per engineer per week. Figure out the average movement in the north star metric per successful experiment. And then plug it into the calculator!

**Fun fact: can you estimate the number of engineers in Facebook’s growth team?**

Yes, here is one approach. I was able to get the following numbers from various public sources for Facebook MAU over the years.

| Reporting Date | MAU (millions) | YoY MAU Growth |
| --- | --- | --- |
| 3/31/2009 | 242 | |
| 3/31/2010 | 482 | 99% |
| 3/31/2011 | 739 | 53% |
| 3/31/2012 | 955 | 29% |
| 3/31/2013 | 1155 | 21% |
| 3/31/2014 | 1317 | 14% |
| 3/31/2015 | 1490 | 13% |
| 3/31/2016 | 1712 | 15% |
| 3/31/2017 | 2006 | 17% |

At the scale at which FB operates, I would assume an impact of +0.01% in MAU per successful test and a success rate of 10%. The choice of these numbers reflects diminishing returns, not the quality of Facebook’s growth team. Let’s be generous and assume 50% of the YoY growth is coming from A/B tests and the remaining 50% from the organic linear trend. That gives us that FB needs to run about 200 A/B tests per week to get to 8.3% YoY growth from experiments. Remember, Fermi calculations like these only give you an order-of-magnitude estimate. So the number of growth engineers at FB is likely in the low-to-mid hundreds!

Thanks for reading so far. Let me know your thoughts in the comments!

Acknowledgement: Thanks to Julia Gitis for giving this a first pass and helping out with the presentation.