Monday, April 6, 2015

Creating a Data-Driven Organization: Two Years On

This is the third post in a series documenting the process of creating a more data-driven organization at Warby Parker. The first post covered my initial thoughts as I joined the company as Director of Data Science in 2013. The second post documented progress made after one year. This installment details progress made in my second year.

What a year it has been! I believe that we have made huge strides.

THE POWER OF THE DATA DICTIONARY
After trialing Looker, a business intelligence tool, last spring and receiving great feedback from our analysts, we decided to drop Tableau (at least for now, although we may return to it for executive dashboards later this year) and focus on Looker.

We rolled out the tool fairly conservatively over a six-month period. This was not because Looker integration was difficult, but because we had a large number of data sources, we wanted to maintain the highest level of trust in our data, and we needed to work with the business owners to agree upon and lock down the data dictionary. That is, to set out the business logic that defined our business terms: precisely what constitutes a customer, how we define our sales-channel logic, and so on.

This data dictionary, and the consequent alignment of teams across the company, may be the most significant activity to date that has contributed to an enhanced data-driven culture. Thus, I want to go through this in some detail.

Creating and Validating the Data Dictionary
Our plan was to focus on one data source at a time and partner with the department(s) who "owned" the business logic, i.e. how the terms were defined, and who had datasets that we would validate against. They possessed Excel spreadsheets that contained raw data exported from our other systems but, importantly, onto which they had also layered derived metrics: additional logic that specified, say, how to handle gift cards, exchanges, and giveaways when calculating what constituted a "sale." The idea was that they would provide us with the business logic, we would implement it in Looker, and we would then generate a dataset from Looker and compare it with the spreadsheet data, row by row, to validate. What happened was an interesting and revealing process, and it is the reason this work has been so impactful for the organization.

There were a number of interesting lessons. First, unbeknownst to those teams, the business logic that they provided to us didn't always precisely match what was actually in their spreadsheets. The reason is that these spreadsheets had grown organically over years, had multiple contributors, and contained all sorts of edge cases. Thus, it was hard to keep track of the complete current logic. Therefore, the act of asking those teams to list out the logic very concretely, as an actual series of IF THEN statements, turned out to be a really valuable exercise in itself.
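
To give a flavor of what that looks like, here is a minimal sketch of such logic once translated into SQL. The table, column, and category names (orders, order_type, quantity, and so on) are hypothetical and purely illustrative, not our actual schema or our actual rules:

    -- Hypothetical sketch: spelling out "what counts as a sale" as explicit rules.
    -- Table, column, and category names are illustrative only.
    SELECT
        order_id,
        CASE
            WHEN order_type = 'giveaway'    THEN 0  -- giveaways do not count as sales
            WHEN order_type = 'exchange'    THEN 0  -- exchanges do not add net units
            WHEN product_type = 'gift_card' THEN 0  -- gift card purchases handled separately
            ELSE quantity
        END AS sale_units
    FROM orders;

Writing the rules down in this form makes every edge case visible and reviewable, which is exactly what the teams had lost track of in their spreadsheets.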

Second, there were occasional mismatches among teams for the same metric. While the spreadsheets among different teams had originally been the same for those common metrics, they had unknowingly drifted out of sync. This set off very useful conversations about what the logic should be, where it should be the same, and where and why those terms should differ. The output was a commonly agreed upon set of business logic and greater clarity and visibility about any terms that differed. For instance, our finance and product strategy teams have different but equally valid perspectives on the term "bookings units," i.e. how many items we have sold. Now we are in a position to have two unambiguous, clearly documented terms, "bookings units" and "product bookings units," and can speak a more precise language across the company. Conversely, there were also several cases where definitions differed, the teams agreed that they should in fact be the same, and they settled on a common definition.

Third, because we were using SQL during the validation process, we could easily drill down to understand the root causes of any rows that did not match. We found unusual edge cases that no one had ever considered before, such as how some split orders are processed. When we explained these edge cases to the business owners, their reaction was often "That can't possibly happen!" but with the evidence staring them in the face, we were able to apply those learnings to our internal processes and fix and improve our order handling and other scripts. Thus, everyone won from that activity.
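
For illustration, the row-by-row comparison boiled down to queries along these lines. The table names (looker_sales, spreadsheet_sales) are hypothetical; the idea is simply to line the two sources up on a common key and keep only the rows that disagree:

    -- Hypothetical sketch of the validation diff: keep only rows where the
    -- Looker-derived numbers and the spreadsheet export disagree.
    SELECT
        COALESCE(l.order_id, s.order_id) AS order_id,
        l.sale_units AS looker_units,
        s.sale_units AS spreadsheet_units
    FROM looker_sales l
    FULL OUTER JOIN spreadsheet_sales s
        ON l.order_id = s.order_id
    WHERE l.order_id IS NULL
       OR s.order_id IS NULL
       OR l.sale_units <> s.sale_units;

Each surviving row is a concrete discrepancy, such as a split order, that can be chased down to its root cause.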

Finally, some of the business logic we encountered in those Excel files was a workaround based on the limitations of the enterprise resource planning software that generated the raw data. In other words, it was suboptimally defined business logic. Thus, we were able to change the conversation and instead ask the business owners to specify their preferred business logic: in an ideal world, what would you like this logic to be? We were then able to implement that logic, freeing the teams to work with simpler, cleaner, and more rational business logic that everyone could understand.

As you can imagine, this was a slow, painful process as we went through each of our many data sources, working with those stakeholders to bring the data into Looker, validate it (which was the most time-consuming step), and have those teams sign off on it. Those initial teams, however, saw the huge benefit of this process. They understood their own metrics better and had a centralized system that they could trust and that was automated and locked down. Based on the benefits and the great feedback that they were hearing, our CEOs made this a company priority: to get all the data into Looker, fully validated, and for analysts to use that as the primary source for all reporting and analysis. They helped us create a schedule for all the additional data sources to be included and got the necessary stakeholder buy-in to do the work to define, validate, and sign off on the implemented logic.

I can’t stress enough the impact of this process on us being more data-driven. Even if we were to drop Looker today (which we don’t intend to), we would still have that data dictionary and that new alignment among all the stakeholders. It literally changed the conversation around data in the company.

To put the icing on the cake, we documented that logic in what we called the Warby Parker Data Book, an internal website with a book-like interface (using gitbook.io) that lists out all our data sources, all our privacy and other data policies, and the data dictionary itself. Everyone at Warby Parker can easily use the book to understand those terms. (This Data Book is the subject of a post on Warby Parker's tech blog.)

Data Democracy
We now have a suite of datasets in Looker. They can be sliced and diced with each other, the data are trusted, and they form the central source of truth for the organization. Many reports are now auto-generated and emailed directly to stakeholders. For other reports, Looker is used to aggregate the data, which are then exported for additional analysis or manual annotation to explain insights in the data. With Looker taking on the mechanics of crunching the numbers, the analysts have more time to spend on data discovery and analysis. Consequently, we are seeing more, deeper, and richer analyses. In addition, we are able to democratize data more than ever. For instance, Warby Parker sends out customer surveys to gather feedback about our brand, products, and experience, including within our retail stores. We now use Looker to aggregate the responses that originated from each store and email them directly to the individual store leaders so that they can see and respond to what our customers are saying about their particular store. As you can imagine, those store leaders love these data and this new level of visibility.
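
As a sketch of the kind of rollup behind those emailed store reports (the real logic lives in Looker's modeling layer; the survey_responses and stores tables here are hypothetical), it amounts to something like:

    -- Hypothetical sketch of a per-store rollup of customer survey responses.
    SELECT
        st.store_name,
        COUNT(*)               AS responses,
        AVG(sr.overall_rating) AS avg_rating
    FROM survey_responses sr
    JOIN stores st
        ON sr.store_id = st.store_id
    WHERE sr.response_date >= DATE '2015-03-01'  -- e.g. the most recent month
    GROUP BY st.store_name
    ORDER BY avg_rating DESC;

The value is less in the query itself than in the delivery: each store leader receives only their own slice, automatically and on a schedule.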

ANALYST GUILD AND OFFICE HOURS
Switching gears and focusing on the analytics org itself, we decided that the analyst guild meetings, mentioned in the previous posts, were not as effective as they could be, and we decided to shelve them for a while. They had reached a critical size at which a form of the bystander effect manifested itself: the larger the group got, the less willing individuals were to help out, such as by volunteering to present or by starting or contributing to conversations. The size of the group became intimidating, especially for junior analysts. The breadth of interests and skill levels in the large group also made it hard to keep finding topics that were relevant and interesting to all. We decided that smaller, more focused discussions, centered around a more precise topic and involving the most relevant stakeholder analysts and business owners, would be a better approach. We haven't found the right balance and process yet, but it is something that we are working on.

To provide additional support, I offer weekly analytics office hours, one session in each of our two office buildings in New York. These are a chance for analysts to ask for help with statistics and experimental design, and for me to act as a general sounding board for their analyses, interpretations, and ideas. They also help me understand what people are working on, what their pain points are, and how the data team can help.

Next on Deck
So what is coming up in terms of the analytics org? Lots of training, for one. We've just had Sebastian Gutierrez of https://www.dashingd3js.com/ run an in-house data visualization training session attended by a dozen of our analysts.

I am also planning to do some statistics training, not for the analysts but for middle management at Warby Parker. You will recall from my last post that statistics training with the analysts did not work out well. Thus, my plan here is that by educating the managers and making them more demanding about the quality of the analyses they receive and the use of statistical inference (in short, making them more data literate), we will create more of a pull model on analysts. With me pushing from the bottom and managers pulling from the top, analysts will have no choice but to level up.

Finally, I am working on an analyst competency matrix, a document that sets out the required skills for different levels of analysts. It specifies, for example, the level of data munging, data analysis, and data visualization skills required to move from analyst to senior analyst. By providing a very clear career path, and the support to develop the skills needed to get promoted, we hope to make for happier, more engaged, and more productive analysts.

More generally, I want to promote more forward-looking analyses in the next year: many more predictive models and hopefully even some stochastic simulation models for our supply chain.

A BOOK
As an aside, one exciting thing that happened over this last year, at least for me, is that I decided to write a book. Based on the discussion and feedback on the previous two posts in this series, I approached O'Reilly Media with a proposal for a book (imaginatively) entitled "Creating a Data-Driven Organization," which was soon accepted. Since August I've been researching more intensely what it means to be data-driven, interviewing others about their experiences, and writing up a long-form synthesis. I've learned a huge amount, it has been a lot of fun, and I'm in the final stages, with just revisions and corrections to do. In fact, although not quite complete, it is now available for purchase as part of O'Reilly's early release program.

As with these posts, I would love to continue the discussion and get your feedback and learn about your experiences. I shall be presenting on this topic at http://www.next.ml/ and at http://datadayseattle.com/.

AN INCREASING THIRST FOR DATA
Bringing the conversation back from the analytics org to the company level, I'm definitely seeing a thirst for data now. Analysts want more and more data, which is a great problem to have. For instance, in my first year, analysts were doing crude, high-level geo-analyses. These had some value, but the analysts wanted more detailed insight into the business. Thus, we provided them with a dataset containing ZIP codes, CBSAs (metropolitan statistical areas), and DMAs (TV market areas) and folded those into our customer and sales data. This set off a flurry of deeper, more nuanced reporting, which was fantastic. Last week, however, that same team approached us again and asked how they could get neighborhood-level detail. With Warby Parker opening more retail stores, they want a finer view of the local impact of those stores.
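
Folding that geographic reference data in is conceptually simple. As a rough sketch, with a hypothetical zip_geo lookup table mapping ZIP codes to CBSA and DMA:

    -- Hypothetical sketch: enriching customer records with CBSA and DMA via ZIP code.
    SELECT
        c.customer_id,
        c.zip,
        g.cbsa,  -- metropolitan statistical area
        g.dma    -- TV market area
    FROM customers c
    LEFT JOIN zip_geo g
        ON c.zip = g.zip;

Once the lookup is in place, every sales and customer query can be grouped at whichever geographic grain the analysts need.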

In addition, a couple of days ago, I attended a Warby Parker management retreat, a quarterly review and planning session. One of the themes that popped up in a number of conversations was the desire for more data and more visibility, and even the term "data-driven" was mentioned many times. Good things are happening, and I really sense a cultural change.

As before, check back in a year’s time to monitor our progress.
