About
Tiger Algebra, a pioneering EdTech startup, provides a free algebra search engine that converts math problems into step-by-step solutions. Operating solely on ad revenue, it has been a successful business since before the term "EdTech" even existed.
The Problem
The team consisted of four developers, one copywriter, and me; I was originally brought on by the CEO to troubleshoot a few UX and UI issues. When I started working with them, the team’s approach was to improve whatever aspect of the product they arbitrarily deemed insufficient. The CEO, himself a developer, kept quarterly goals at arm’s length and scoffed at the notion of soliciting user feedback. Recognizing the pitfalls of building aimlessly, without a goal in mind, I devoted the next several years with Tiger Algebra to instilling in the team a more human-centered approach to product development.
I used Teresa Torres' Opportunity/Solution Tree model to map out the problem space.
Setting targets
I aimed to elevate the team's design maturity from 0/5 to at least 2/5 on InVision’s Design Maturity Model via a shift from appearance-focused design to user-driven, iterative product design. To do this, I focused on setting a clear goal for us all to work toward and coming up with a plan for how to reach it: increase site usage by at least 1,000 users per month by the end of Q3 2022.
With something to aim at, I began mapping out the problem space using Teresa Torres' Opportunity/Solution Tree, populating it with insights gathered from the team and from Google Analytics. It became clear that there were two ways to increase site usage: 1. Ramp up marketing efforts to reach additional users; 2. Increase user retention, which was extraordinarily low.
As marketing was a non-existent function when I started at Tiger Algebra, and well outside of my expertise, I decided to focus on increasing user retention. 
Conducting remote usability testing on UserTesting.com to validate or invalidate hypotheses about user retention.
Identifying the problems
Tiger Algebra's primary competitive advantage was that it was completely free to use, whereas most of its competitors (at least pre-ChatGPT) charged visitors for full access. So what was happening on Tiger Algebra that was preventing its users from returning?
In my experience, when faced with a murky, poorly understood problem, coming up with and quickly testing some hypotheses can really get the ball rolling. So we generated a number of hypotheses as a team and set out to test them. The first hypothesis we decided to test was based on my own experience using the product: users have trouble knowing how to format their inputs on Tiger Algebra.
To do this, I turned to UserTesting.com to conduct remote usability testing using inputs that the CEO had identified as particularly problematic for students, asking each study participant to use Tiger Algebra to solve a word problem that did not explicitly show the input. For example: "Use Tiger Algebra to find the midpoint of (1,5) and (5,11)." Such a problem is easy for Tiger Algebra to solve, but formatting the input so that the site generates the right type of solution proved much harder.
The results of these sessions were alarming. None of the five participants was able to input the problem in a way that yielded the results they were after, and one spent more than ten minutes trying to bypass a paywall the team was not even aware of before abandoning the session altogether. It was clear that something needed to change to make it easier for site visitors to find the information they were after, but what that change should be was another question entirely.
Watching usability sessions as a team drastically increased stakeholder buy-in
Leading an ideation session to generate solutions for issues identified in usability testing.
Getting on the same page
If convincing company leadership to let me conduct this type of user research was, in and of itself, an uphill battle, then the prospect of convincing them to divert resources toward fixing the uncovered problems seemed like paddling a kayak up a waterfall. Instead of relying on my own persuasiveness (of which I do not have much), I drew on the insight from Leah Buley's The User Experience Team of One that viewing recorded testing sessions together as a team can really galvanize support. The result of these excruciating viewings was a palpable shift in the collective mindset of the team toward a more goal-oriented, human-centered approach to product. This was a profound moment in my design education and had a significant impact on the way I communicate professionally and personally.
The next step was to figure out how to make it easier for visitors to find the information they were looking for. To do this, I hosted an ideation session with all stakeholders to come up with a few viable options and to increase the sense of ownership among members of the team. Because of the way the code was structured, changing the site to accept more input formats would have been a significant effort, so we ultimately decided to move forward with a formatting guide for different problem types.
Testing our “leap of faith” assumptions via a classic smoke-screen test.
Assumption testing
As an extra precaution and to ensure correct implementation, we listed the assumptions our solution relied on and tested the most important, “leap of faith” assumptions on which it hinged. In an ideal setting, we would have worked through every assumption, from the most important to the least, until we had substantial evidence that our proposed solution was the right one. But settings are rarely ideal, and due to a number of constraints we only tested the most important one: users will go out of their way to find, click on, and read a guide about how to format inputs on the site.

To test this, we performed a classic smoke-screen test: the main developer on the team implemented a link labeled “Formatting help” that took anyone who clicked it to a Google Form with a few simple questions, including the problem they were struggling to format and whether they might be interested in a formatting guide of some sort. This yielded thousands of responses, which painted an extremely clear picture of which types of problems users were having the most trouble with. We also measured the number of users who clicked the “Formatting help” link against unique sessions on the website, which allowed us to validate, with a fair degree of certainty, that users were interested in a formatting guide and could find the link to it.
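For illustration, here is a minimal sketch of how a smoke-screen link like this could be wired up; the element ID, logging endpoint, and form URL are placeholders I have assumed for the example, not the team’s actual implementation.

```typescript
// Minimal sketch of a smoke-screen test link. The element ID, logging
// endpoint, and form URL below are assumed placeholders, not Tiger
// Algebra's actual implementation.
const FORM_URL =
  "https://docs.google.com/forms/d/e/EXAMPLE_FORM_ID/viewform"; // hypothetical

function onFormattingHelpClick(event: MouseEvent): void {
  event.preventDefault();

  // Record the click so it can later be compared against unique sessions
  // to estimate a click-through rate.
  navigator.sendBeacon(
    "/log/formatting-help-click", // hypothetical endpoint
    JSON.stringify({ page: window.location.pathname, timestamp: Date.now() })
  );

  // Send the visitor on to the Google Form with the follow-up questions.
  window.location.href = FORM_URL;
}

document
  .querySelector<HTMLAnchorElement>("#formatting-help-link") // hypothetical ID
  ?.addEventListener("click", onFormattingHelpClick);
```

Comparing the logged clicks against unique sessions then yields the kind of click-through figure described above.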
A Google Form where users could submit their intended inputs. This helped the team know where to focus their efforts 
Implementation
Production, including content, took less than a week, but as new features always need to be tested, we put our new formatting guide back through the wringer via remote usability testing. We uncovered a few bugs and usability issues, which we addressed according to severity, but for the most part the guide seemed ready for launch. What remained was to continuously track its use via Hotjar, a usability widget I particularly enjoy that, among other things, gathers user ratings and comments to produce a general likeability score not unlike a Net Promoter Score. This, of course, helps us learn what we are doing well and what we are doing poorly, but it also sometimes reveals new opportunities worth pursuing.
Additionally, as creating and maintaining such a guide requires significant effort and the team was quite small, I suggested we start with the problem types users struggled with most often and add to the guide over time. Because gauging which problem types fell into this category required a monitoring system of its own, I came up with the idea of adding a link to the bottom of each solution page labeled “Not what you’re looking for? Let us know” that would take users to a Google Form requesting their actual input, the type of solution they were after, and any (optional) additional feedback they might have. By the time I left Tiger Algebra, the form had over 40k responses, providing a permanent North Star for all future decisions about the formatting guide.
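As a rough sketch of how those responses can drive prioritization, the snippet below tallies an exported copy of the form’s answers by requested solution type; the file name and column layout are assumptions for illustration, not the team’s actual export.

```typescript
// Sketch: count exported Google Form responses by requested solution type
// to decide which problem types the formatting guide should cover first.
// "responses.csv" and its column order are assumptions for this example.
import { readFileSync } from "node:fs";

const rows = readFileSync("responses.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1); // drop the header row

const countsByType = new Map<string, number>();

for (const row of rows) {
  // Naive comma split; a real script should use a proper CSV parser to
  // handle quoted commas inside free-text answers.
  const cols = row.split(",");
  const solutionType = (cols[2] ?? "unknown").trim(); // assumed column
  countsByType.set(solutionType, (countsByType.get(solutionType) ?? 0) + 1);
}

// Print problem types from most to least requested.
[...countsByType.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([type, count]) => console.log(`${type}: ${count}`));
```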
Using Hotjar to continuously measure user satisfaction and improve the new solution
Conclusion
TigerMilk and I went our separate ways shortly thereafter, but in the weeks after implementation we saw an average increase of 0.3 (out of 5.0) in our Hotjar score. User retention is a bit of a lagging indicator, meaning we may never have a clear picture of the formatting guide’s effect on it; any number of factors can cause a random spike or dip. But reducing unintended friction, especially in high-use areas of a product, can only benefit the overall site experience. The key to maintaining forward progress on a metric like user retention, which is so heavily dependent on the subjective experience of the site, is finding ways to gather the right kinds of feedback and responding to it in ways that make users happy.

Ultimately, this experience was a very productive one for everyone involved, and I feel strongly that we moved the company toward a 2/5 on design maturity, evidenced by the team’s enthusiastic embrace of—and leadership’s strategic shift toward—a more user-guided approach to product and business development. 