Tech Maturity Model: Avito case study

Mikhail Sukhov
10 min read · Sep 1, 2020


Avito is the most popular classifieds service in Russia and the second biggest classified ads website in the world after Craigslist. It was founded in 2007 by Swedish co-founders Jonas Nordlander and Filip Engelbert.

Internally at Avito, a “maturity model” means a list of engineering and product development practices that acts as a baseline against which teams are measured. It shows whether all teams are on the same page, for example striving for a test pyramid or regularly holding retrospectives.

A well-defined maturity model helps synchronize teams quickly and reveals areas of growth. Its objective is to level out the depth and completeness of technology and process adoption across a large number of engineering teams.

On the way to creating the document, we went through about ten attempts: the first ones were not very successful, but the last one took root and proved its usefulness. If you don't want to read about how it was created, you can skip straight to the section with the model and a description of how to use it.

The premise

The maturity model at Avito was primarily meant to provide a clear picture of the engineering teams' maturity and to highlight areas where the culture needs strengthening or extra attention. It would also help in situations where teams have to be systematically leveled up, which has to be done against a set of transparent and comprehensible rules.

In other words, a tool was required that described both the process basics necessary for work and the practices that would be good to adopt. This tool was also supposed to make it possible to measure, quantitatively, how close each team is to the expected level of maturity. The model was meant to help avoid several problems down the road:

1. As teams evolve, each one develops a different understanding of the baseline.

2. Some teams know what to improve; all they need is a direction. Other teams have their own beliefs about which areas need improving, but those aren't necessarily the ones the company needs improved. And there are teams that do not want to improve at all: no time to sharpen the axe when it's always felling time.

3. In the absence of a common standard, some teams can run far ahead while others hardly move an inch. Over time, it becomes increasingly difficult for them to find common ground and achieve results in a coordinated manner.

4. If nothing changes, the company's quality control costs will grow over time, and the technology or product brand will suffer as a result.

5. Few people are interested in working in an underdeveloped team. People want to use the most up-to-date technologies and best practices.

Pitfalls and solutions

I used Spotify's Squad Health Check as the inspiration for the maturity model, which then evolved further through day-to-day use.

At its core, the maturity model is a description of what is expected from teams along certain parameters, together with an evaluation for each of those parameters. Expectations are defined by experts in the relevant fields. At Avito, creating the tool involved a number of internal expert centers: Information Security, Quality Assurance, Performance, Frontend, Backend and Delivery. Draft descriptions and pilot runs were initially done for two areas; the rest were added to the model at a later stage.

In total, about ten iterations were made. Initially, the teams assessed their own maturity on a scale from 0 to 10. In later versions the scale became much narrower, with the levels reviewed by experts. Here are the pitfalls I encountered and the lessons I learned on the way to a working tool:

1. If the levels in the model aren't clearly defined, the teams won't know what the baseline is or how to reach it. Lesson: describe in detail what each level entails; don't leave it at a basic “1 — partially meets the base level, 2 — fully meets it.”

2. Team self-assessment did not live up to expectations because of its low accuracy. Some teams rated themselves too high, others too low. Lesson: self-assessment only makes sense with peer validation.

3. Not all teams were able to self-assess effectively. Lesson: initially, someone has to guide everyone and monitor the process.

4. Meetings with experts should be capped at one hour. Before the meeting you need plenty of groundwork: comments collected in Google Docs and a minute-by-minute facilitation plan. Experts may find it hard to run multiple diagnostics per quarter, so plan everything around spending the minimum amount of time on each team. It helps to move all general questions into group chats before the meeting.

5. The time experts spend helping teams assess and plan their areas does not scale. After the initial reviews, expert involvement drops significantly. At the very beginning you need to sit down with every team and explain everything, but from then on existing teams move forward with the help of the model itself, only occasionally turning to experts. New teams also require the attention of experts, but the time investment in such cases is fairly comparable.

6. Area descriptions are never 100% clear to everyone. There will always be someone who misunderstands the content, so at the very beginning the experts will have to explain the wording again and again and refine the descriptions based on feedback from the teams.

7. It is not obvious what to do with all the collected data. You need a system that helps track results: objectives in OKRs, tasks in JIRA, indicators in Google Docs, all bound together by a single process.

8. One of the areas may turn out to have inflated expectations. The areas should be calibrated against each other so that reaching the baseline is equally difficult (or equally easy) in each of them.

9. Some questions in the areas are not applicable to individual teams. Teams should be able to skip self-assessment on such questions so that they don't skew the assessment results.

The resulting process of working with the maturity model is as follows: the team evaluates itself on a scale from 0 to 3, then a meeting is held where experts adjust and confirm the team's scores. The experts explain Avito's current direction, what teams need to be doing and why things work the way they do. Their main objective is to coach the teams and raise them to the level of maturity the company requires.
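To make the review step concrete, here is a minimal sketch of what a single scoring cell and its expert confirmation could look like in code. It is purely illustrative: the names (AreaAssessment, confirm) and the structure are my assumptions, not Avito's actual tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AreaAssessment:
    """One scoring cell: a team's self-assessment for a single item."""
    team: str
    item: str                               # hypothetical item, e.g. "test pyramid"
    self_score: Optional[int]               # 0-3, or None while the team is unsure
    confirmed_score: Optional[int] = None   # filled in during the expert review

def confirm(cell: AreaAssessment, expert_score: int) -> AreaAssessment:
    """Expert review: adjust the self-assessment and lock in the final level."""
    if not 0 <= expert_score <= 3:
        raise ValueError("levels run from 0 to 3")
    cell.confirmed_score = expert_score
    return cell

# A team rates itself at 3; during the review the expert adjusts it to 2.
cell = AreaAssessment(team="messenger", item="test pyramid", self_score=3)
confirm(cell, expert_score=2)
```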

Herein lies the value of the final maturity model: the levels are not an end in themselves. What we need is for the team to have a meaningful dialogue with experts in the relevant fields and to form a clear action plan, with deadlines, aimed at achieving results.

The task of creating a maturity model appeared before Avito went remote, but that is exactly when it became critical. With distributed work, a lot of things migrate to text, for example the onboarding of a new engineer or even of an entire engineering team. This is precisely the situation where such a tool is indispensable: new people and teams can immediately see what engineering should ideally look like and what the company expects of them.

How it works

A maturity model, to put it simply, is a set of terms of reference for a perfect engineering culture within the company: a beacon and benchmark to strive towards, a collection of best practices.

The engineering teams' maturity was evaluated along six main areas:

  1. Information security.
  2. Quality assurance.
  3. Performance.
  4. Frontend.
  5. Backend.
  6. Product delivery.

Delivery here is the process that starts at the product backlog and ends with a task shipped to production.

When a new team is formed, its members study the maturity model and quickly see which areas need improvement. For existing teams, the level descriptions show how they can become better and faster.

Each scoring cell can take one of six values (a small code sketch follows the list). Here's what they mean:

  • Level 0 means “CRAWL”.
  • Level 1 means “WALK”.
  • Level 2 means “RUN”.
  • Level 3 means “FLY”.
  • ? means that the team does not know what level to choose. The level should be determined with the assistance of an expert.
  • N/A means the item is not applicable to a specific team. For example, the search infrastructure team does not grade frontend areas, as it isn't involved with them.
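As a rough illustration of these six values, here is one way they could be represented; the sketch is mine and assumes a plain Python enum rather than whatever lives in Avito's actual spreadsheet.

```python
from enum import Enum

class CellValue(Enum):
    """The six values a scoring cell can take."""
    CRAWL = 0               # level 0
    WALK = 1                # level 1
    RUN = 2                 # level 2
    FLY = 3                 # level 3
    UNKNOWN = "?"           # the team can't decide; an expert helps choose
    NOT_APPLICABLE = "N/A"  # the item doesn't apply to this team

def counts_towards_score(value: CellValue) -> bool:
    """Only the numeric levels enter a team's consolidated picture."""
    return isinstance(value.value, int)

assert counts_towards_score(CellValue.RUN)
assert not counts_towards_score(CellValue.NOT_APPLICABLE)
```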
The quarterly cycle looks like this:

1. The team meets once a quarter, reviews its current maturity model indicators and evaluates the changes that have happened within the team.

2. If changes are observed, a meeting with the relevant area expert is held. The expert comments on the updated evaluations and on the future tasks the team has set for itself. In practice, such issues can also be resolved with experts asynchronously, via chat. There is no need to involve an expert at all if the team is 100% sure of its new assessment, which is possible once the team has a clear understanding of the model and grows at a steady pace in the right direction.

Every quarter, a new evaluation tab is added to the table. Teams gather data, enter their levels in the current tab and set new growth goals. The process repeats cyclically. As a result, we get a consolidated picture and quarter-over-quarter maturity dynamics for all teams in a single table.
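Here is a minimal sketch of how such a consolidated picture could be computed, assuming each quarterly tab is a mapping from team to per-item levels. The structure, team name and items are hypothetical; only the 0-3 scale and the exclusion of N/A come from the model itself.

```python
from statistics import mean

# One "tab" per quarter: team -> item -> level (0-3, or "N/A").
tabs = {
    "2020-Q1": {"messenger": {"test pyramid": 1, "canary releases": 0, "CSP headers": "N/A"}},
    "2020-Q2": {"messenger": {"test pyramid": 2, "canary releases": 1, "CSP headers": "N/A"}},
}

def team_score(items: dict) -> float:
    """Average of the numeric levels; "N/A" (and unresolved "?") entries
    are ignored so that inapplicable items don't skew the result."""
    numeric = [level for level in items.values() if isinstance(level, int)]
    return mean(numeric) if numeric else 0.0

# Quarter-over-quarter dynamics for every team in the table.
for quarter, teams in sorted(tabs.items()):
    for team, items in teams.items():
        print(quarter, team, round(team_score(items), 2))
# 2020-Q1 messenger 0.5
# 2020-Q2 messenger 1.5
```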

When there is a variety of teams with different goals and needs, as is the case at Avito, all of them still benefit from the model, since it does not require every single item to be ranked. At first the intention was to build separate models for product teams and the technical platform, but in the end I found that any contradiction could be resolved via the N/A value. Say that within one area half of the items apply to a specific team and the other half are irrelevant. Since the assessment is confirmed by experts, this is a non-issue: once confirmed, the team no longer has to take those items into account.

Some teams might not want to follow the standards for reasons of their own. In complex situations like this, experts have to work more surgically to determine exactly why the team believes what it does. Perhaps the standards really are unsuitable for the team in question; in that case, the model is simply no longer applied to it. However, it is vitally important that the experts confirm this: in the end, they are the ones responsible for the technologies and processes in place.

I see the maturity model as a benchmark that helps teams grow. The teams themselves determine their priorities for growth and add those objectives to their OKRs. In this, the maturity model is akin to a holocron: the information contained within is valuable in the right hands, and can be used for good… or for evil. I hope, of course, that it is used for good, and to your advantage.

Misusing the model

If you apply the maturity model to tell good workers from bad, handing bonuses to the former and punishing the latter, you are misusing the model. Seeing a team below the baseline is no reason to devalue or penalize its members.

If you stuff the model with practices that the company has yet to adopt, you are misusing the model. If you use it as a tool for scaling out processes and practices… you guessed it, you are misusing it. It is first and foremost a tool for leveling and improving what already exists; it is unsuitable for introducing new things.

Business benefits

So, to sum up: why does a business need this? A technical director's main tasks are ensuring that (a) delivery of features to customers works like clockwork, and (b) the team performs at a certain speed while providing a certain level of quality.

What helps with these objectives is an established set of terms of reference describing what the engineering culture should be like. A maturity model is a solution to this, providing insight into whether the teams actually follow those terms of reference. It is a digitized benchmark showing whether all teams perform at the baseline; once a team reaches the baseline, the bar should be raised, giving the team further motivation to grow.

For the teams, the model is a tool for continuous growth: a quantifiable view of the directions of growth and of the mastery level required to reach a given destination. Previously, Avito lacked distinct criteria; now it is easy to grasp what strong engineering means. Moreover, any team may contribute additional criteria to an area as it sees fit, adding to overall growth.

The ultimate goal of implementing the tool is a strong engineering culture. The model is simply a means of systematically leveling up the teams.

The bonuses offered by a maturity model include, but are not limited to: improved communication between teams, since every team follows the same set of standards; no more chaos in technologies and processes; and general rules that make for a more comfortable, coordinated working environment for engineers.

Lessons learned

If you want the maturity model to work and be beneficial for the company, what you need is:

  1. Experts who are involved in the process. This is crucial for growth and momentum.
  2. Soft commitments from the teams: let them decide what they want to improve and set their own deadlines for it.
  3. Process control and transparency. Everything should be visualized in JIRA, Google Docs or any other appropriate tools.
  4. Occasional reassessment of the areas and raising of the baseline.
  5. Someone responsible for the model itself: a person who makes sure the gears keep spinning at the company level.

Implementing the model let us take stock of the actual state of things. It showed where the baseline was lagging and what the baseline actually was for different teams. Initially, no team was at the baseline. Within a year, 20% of Avito's engineering teams had reached it.
