A switch from "I think" to "The data show"
A year ago, I wrote a well-received post here entitled "How do you create a data-driven organization?". I had just
joined Warby Parker and set out my various thoughts on the subject at the time,
covering topics such as understanding the business and customer, skills and
training, infrastructure, dashboards and metrics. One year on, I decided to write an update. So, how did we do?
We've achieved a lot in the last year, making great strides in some areas and less progress in others.
Initiative metrics
One of the greatest achievements and impacts, because it cuts across
the whole organization and affects all managers, concerns our initiative
teams and how they are evaluated. Evaluation is now tied very strongly to
metrics, to evidence backing underlying assumptions, and to return on investment.
What’s an initiative team?
Much of the work and many of the improvements that individual teams (such as Customer
Experience, Retail, and Consumer Insights) want to pursue require software development work
from our technology team. For instance, Retail might want a custom point-of-sale application, or Supply Chain might
want better integration and tracking with vendors and optical labs. The problem
is that the number of developers, here organized into agile teams, is limited.
Thus, different departments essentially have to compete for software
development time. If they win, they get use of a team --- a 3-month joint
"initiative" between an agile team and business owner --- and can implement
their vision. With such limited, vital resources, it is imperative that the
diverse initiative proposals are evaluated carefully and are comparable (that
we can compare apples to apples), and that we track costs, progress and success
objectively.
These proposals are expected to set out the metrics that the
initiative is trying to drive (revenue, cost, customer satisfaction etc.) and
upon which the initiative will be evaluated --- for example, reducing
website bounce rate (the proximate metric) should lead to increased revenue
(the ultimate metric). They are also expected to set out the initiative’s
assumptions. If you claim that this shiny new feature will drive $1 million in
increased revenue, you need to back up your claim. Because these proposals are
reviewed, discussed and voted upon by all managers in the company, and because they are
in competition with one another, there is increased pressure to make a bulletproof argument
with sound assumptions and evidence, and to focus on work that will really make
a difference to the company. The process has an additional benefit: it keeps all
the managers up to speed with what teams are thinking about and what they would
like to accomplish, even if their initiative does not get "funded"
this time around.
It took a few rounds of this process to get where we are now, and there
are still improvements to be made. For instance, in this last round we still
saw "hours saved" as a proposed initiative impact, even though hourly rates vary
considerably among employees. Hours saved should be converted into actual
dollars, or at least into a tiered system of hourly rates, so that savings on one
team can be compared against savings on a more expensive team (see the sketch
below). This visibility of work, metrics,
assumptions and the process by which resources are allocated has really pushed
us towards data-driven decisions about priorities and resource allocation.
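To make that concrete, here is a minimal sketch of the kind of standardization I have in mind; the tiers and hourly rates are invented for illustration, not our actual figures.

```python
# Hypothetical tiered hourly rates, for illustration only (not actual Warby Parker figures).
TIER_RATES = {
    "junior": 25.0,   # dollars per hour
    "senior": 50.0,
    "manager": 90.0,
}

def dollars_saved(hours_by_tier):
    """Convert 'hours saved' per tier into a single dollar figure.

    hours_by_tier: dict mapping tier name -> estimated hours saved.
    """
    return sum(TIER_RATES[tier] * hours for tier, hours in hours_by_tier.items())

# Two proposals that both claim "500 hours saved" are no longer equivalent:
proposal_a = {"junior": 500}                   # $12,500
proposal_b = {"senior": 300, "manager": 200}   # $33,000
print(dollars_saved(proposal_a), dollars_saved(proposal_b))
```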
ROI
While the initiative process covers the overall strategy and just
touches on tactics at a very high level, what happens within a funded
initiative is all about low-level tactics. Teams have different options for
achieving their goals and driving their metrics: which features should they work on
specifically, and when? This, too, is a very data-driven process, all about
return on investment (ROI). Again, there is a good process in place in which our
business analysts estimate costs, returns, assumptions and impacts on metrics
(this is the “return” component). While development time is mostly a
fixed cost (the agile teams are stable), costs can vary because a team may choose
to pay for a third-party vendor or service rather than build the same
functionality (this is the “investment” component). These ROI discussions are
really negotiations between the lead on the agile team and the business owner
(such as the head of Supply Chain): what makes the most sense for us to work on this
sprint? This ROI process covers my team, the Data Science team, too; we sit
outside the initiative process but hold similar negotiations with
department heads who request work, and it allows us to say no to requests whose
ROI is too low. Asking department heads to spell out precisely the business
impact and ROI of their requests also gets them to think more carefully about
their strategy and tactics.
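To make the "return" and "investment" components concrete, here is a minimal sketch of the underlying arithmetic; the build-versus-buy numbers are made up for illustration.

```python
def roi(estimated_return, investment):
    """Simple return on investment: (return - investment) / investment."""
    return (estimated_return - investment) / investment

# Illustrative numbers only: a feature expected to drive $60k in incremental
# revenue, either built in-house (developer time) or bought from a vendor.
build_cost = 40_000   # e.g. a couple of sprints of an agile team's time
buy_cost = 25_000     # e.g. an annual license for equivalent functionality

print(f"Build ROI: {roi(60_000, build_cost):.0%}")   # 50%
print(f"Buy ROI:   {roi(60_000, buy_cost):.0%}")     # 140%
```

Of course, the hard part is not the division; it is justifying the $60,000 return estimate and the assumptions behind it.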
Our ROI process is very new but is clearly a step in the right
direction. Estimating return on investment and justifying the underlying assumptions
is not at all easy, but it is the right thing to do. In essence, we are
switching from "I think" to "The data show"...
"Guild meetings are held to improve. They are there to ‘sharpen your saw’. Every minute you use for ‘sawing’ decreases the amount of time for sharpening" from a post by Rini van Solingen
Analyst guild
Warby Parker employs a decentralized analyst model. That is, analysts
are embedded in individual teams such as Digital Marketing, Customer Experience,
and Consumer Insights. Those analysts report to their respective team leads and
share those teams' goals. The advantage, of course, is that analysts are very
close to what their team is thinking about, what they are trying to measure,
and what questions they are asking. The downside, however, is that metrics,
processes and tools can get out of sync with analysts on other teams. This can --- and in our case did --- result in redundancy of effort, divergent metric
definitions, and a proliferation of tools and approaches.
To compensate for these inefficiencies, we instituted a
"guild," a group that cuts across the organization (rather like a
matrix-style organization). The guild is an email list and, more importantly, an
hour-long meeting every two weeks, a place for all the analysts to come
together to discuss analytics, share their experiences and detail new data
sources that might be useful to other teams. In recent weeks, the guild has
switched to a more show-and-tell format in which they showcase their work, ask
for honest feedback and stimulate discussion. This is working really well. Now,
we all have a better sense of who to ask about metrics and issues, what our
KPIs mean, where
collaborations may lie, and what new data sources and data vendors we are
testing or are in discussion with. When the analysts are aligned, you stand a
far greater chance of aligning the organization, too.
SQL Warehouse
Supporting the analysts, my team has built a MySQL data warehouse that
pulls all the data from our enterprise resource
planning software (hereafter ERP; we use Netsuite) with 30-minute
latency and exposes those data in a simpler, cleaner SQL interface. Combined
with SQL training, this has had a significant impact on the analysts’ ability
to conduct analysis and compile reports on large datasets.
Prior to that, all analysts were exporting data from the ERP as CSV
files and doing analysis in Excel; that came with its own problems. The ERP software
has limits, so exports can time out. Excel has its limits, and analysts would
sometimes run out of rows or, more frequently, memory. Finally, the ERP software
did not allow (easy) custom joins in the data; you exported what the view
showed. This meant that analysts had to export multiple sets of data in
separate CSV files and then run huge VLOOKUPs in the Excel file. Those lookups
might run for 6 hours or more and would frequently crash. (There is a reason
that the financial analysts have the machines with the most RAM in the whole
company.)
To combat this insanity, we built a data warehouse. We flattened some
of those tables to make them easier to use. We then ran a number of SQL
trainings, combining material from w3schools with interactive sessions and
tutorials using simplified Warby Parker data. After analysts got their feet wet,
we supplemented these with more one-on-one tutorials and help sessions, and also
hosted group sessions in the analyst guild, where individuals could show off
their queries and share how much more quickly and easily they ran compared to the old
ERP/Excel approach. We now have a reasonable number of analysts running queries
regularly and getting answers to their questions far more easily and quickly.
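To give a flavor of the difference, the kind of question that used to require several CSV exports and a multi-hour VLOOKUP becomes a single query against a flattened warehouse table. The connection string, table and column names below are hypothetical, not our actual schema.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection details, for illustration only.
engine = create_engine("mysql+pymysql://analyst:password@warehouse-host/warehouse")

# One join in the warehouse instead of exporting orders and customers as
# separate CSVs and stitching them together with VLOOKUPs in Excel.
query = """
    SELECT c.acquisition_channel,
           COUNT(DISTINCT o.order_id) AS orders,
           SUM(o.order_total)         AS revenue
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    WHERE  o.order_date >= '2014-01-01'
    GROUP  BY c.acquisition_channel
"""

df = pd.read_sql(query, engine)
print(df.sort_values("revenue", ascending=False))
```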
In addition, this centralization of not just the raw data but also the
derived measures, such as sales channel or net promoter score, gives us a central
source of truth with a single definition for each. This has helped move us away from
decentralized definitions in Excel formulae sitting on people’s laptops to standard
definitions baked into a SQL database field. In short, people now (mostly)
speak the same language when they reference these metrics and measures. While
there is still work to be done, we are in a better place than a year ago.
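To illustrate what I mean by a single definition baked in centrally rather than living in a spreadsheet formula, take net promoter score: computed once in the warehouse, it always means the same thing. A sketch of that standard definition (the scores here are made up):

```python
def net_promoter_score(scores):
    """Standard NPS definition: percent promoters (9-10) minus percent
    detractors (0-6) on a 0-10 'how likely are you to recommend us' scale."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / n

print(net_promoter_score([10, 9, 8, 7, 3, 10]))  # 3 promoters, 1 detractor -> 33.3
```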
"People want to move from a culture of reporting to a culture of analytics" - Steffin Harris
BI tooling
While writing SQL is one approach to getting answers, it does not suit
all levels of skill and experience. Once you have a set of core queries, you
likely want to run them frequently and automatically and share the results (think
canned reports and dashboards). This is where business intelligence tools come
into play. While we did a good job at automating a number of core queries and
reports using Pentaho Data Integration, we did not make
sufficient progress (and I am solely to blame for this) in rolling out a more
self-service set of business intelligence tools, a place where analysts can
spend more time exploring and visualizing data without writing SQL. While we
trialed Tableau and, more recently, Looker, my team did not push analysts hard
enough to use these tools, to try switching away from Excel charting and dashboards,
and to report feedback. Thus, while we are currently rolling these tools out to
production this quarter, we could have done so up to six months ago.
Getting analysts to switch earlier would have created more high-quality dashboards
that could easily be shared and made visible around the company. It would have gotten
more people seeing data on monitors or in their inboxes. It would also have freed
up more time for analysts to conduct analysis rather than reporting,
an important distinction.
Statistics
Another area where I made less progress than I expected was the level
of statistical expertise across the company. Having a Ph.D. from a probability
and statistics department, I am clearly biased, but I think having statistical
training is hugely valuable not just for the analysts performing the analysis
and designing the experiments, but for their managers too. Statistical training
imparts a degree of rigor: thinking in terms of hypotheses and experimental
design, thinking about populations and samples, as well as the analysis per se. In many cases, analysts would
ask for my advice about how to analyze some dataset. I would ask
"precisely what are you trying to answer?", but they wouldn’t be able
to express it clearly and unambiguously. When I pushed them to set out a null
and alternative hypothesis, this crystallized the questions in their mind and
made the associated metrics and analytical approach far more obvious.
I announced that I would run some statistical training and 40 people
(which represented a large proportion of the company at the time) immediately
signed up. There was a lot of interest and excitement. I vetted a number of
online courses and chose Udacity's The science of decisions course. This has a great
interactive interface (the student is asked to answer a number of questions
inline in the video itself during each lesson) and a good curriculum for an
introductory course. It also has course notes, another feature I liked. I
decided to send about 20 employees through the first trial.
It was a complete disaster.
The number of people who completed the course: zero. The number of
people who completed half the course: zero. The problem was completely
unrelated to Udacity; it was our fault. Students (i.e., staff) weren't fully committed
to spending several hours per week of their own time learning what should be a
valuable, transferable skill. To truly embrace statistical thinking you have to
practice, to do the exercises, and to attempt to translate concepts you are
learning to personal examples such as specific datasets that you use in your
job as an analyst. There was insufficient buy-in to this degree of effort. There
was also insufficient reinforcement and monitoring of progress from managers;
that is, expecting participation and following up with their direct reports. I
am also to blame for not holding in-house check-in sessions, a chance to go
through material and cover problematic concepts.
I haven't yet solved this. What I have noticed is that a concrete need
drives a response to "level up" and meet expectations. Over the last
few months, our A/B tests have picked up. The business owners, the agile teams
building features (including their business analysts and the project managers),
as well as the particular manager who runs our A/B tests and analysis, are all
simultaneously expecting and being expected to provide rigor and objectivity.
That is, to run a sample size and power analysis in advance of the test, to
define clear metrics and null and alternative hypotheses, to be able to explain
why they are using a Chi-squared test rather than another test. Having to
defend their choices against probing questions from other senior managers, ones who
do have some experience in this area, and having to ask the right questions
themselves, is forcing people to learn, and to learn quickly. This is not ideal, and there is
a long way to go, but I do feel that this represents a significant shift.
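As a sketch of what that rigor looks like in practice, here is the shape of the pre-test power analysis and the post-test Chi-squared test; the baseline conversion rate, minimum detectable lift and observed counts are all invented for illustration.

```python
from scipy.stats import chi2_contingency
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# H0: the new feature does not change the conversion rate.
# H1: it does (two-sided). All numbers below are illustrative, not real results.

# 1. Before the test: how many visitors per arm do we need to detect a lift
#    from 5.0% to 5.5% conversion at alpha = 0.05 with 80% power?
effect = proportion_effectsize(0.055, 0.050)
n_per_arm = NormalIndPower().solve_power(effect, alpha=0.05, power=0.8, ratio=1.0)
print(f"Need roughly {n_per_arm:,.0f} visitors per arm")

# 2. After the test: a Chi-squared test on the observed conversion counts.
#                 converted  not converted
observed = [[1_500, 28_500],   # control
            [1_650, 28_350]]   # variant
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```

The point is less the particular test than being able to explain, before launch, what you expect to see and why.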
In conclusion, the business is in a far better place than a year ago.
People are starting to ask the right questions and have the right expectations
from peers. More decisions are based on data-backed assumptions and results
than before. More people are starting to talk in more precise language that
involves phrases such as "test statistic" and "p-value." More
people are harnessing the power of databases to leverage their strength:
crunching through large amounts of data in seconds. We are not there yet. My
dream for the coming year is:
- more canned reports
- more sharing of results and insights
- more dashboards on monitors
- more time spent on analysis and deep dives rather than reporting
- more accountability and retrospectives, such as for prediction errors and misplaced assumptions
- more A/B testing
- more clickstream analysis
- a more holistic view of the business
How will we do? I don't know. Check back next year!