Humans have a long, rich history of deluding themselves when faced with a challenge. David and Goliath, the Trojan Horse, Napoleon’s disastrous invasion of Russia: all stories of one side underestimating the other.
“There is no greater danger than underestimating your opponent,” Chinese philosopher Lao Tzu is often credited with saying. The exact origin of the quote aside, its wisdom remains.
Unfortunately, this is the exact task designers are faced with when assessing our competition. How are we to trust our ability to evaluate the competition when our track record as a species is so terrible? Perhaps Nobel Prize-winning psychologist Daniel Kahneman summed it up best in *Thinking, Fast and Slow* when he said, “We can be blind to the obvious, and we are also blind to our blindness.”
To combat this well-documented weakness, we need to gain a fresh perspective, and that is best done by escaping from our own head and borrowing the eyes of another. Specifically, we need the perspective that can only be provided by watching people use our competitor’s product.
Generally speaking, the biggest hurdle you will face when embarking on competitor research is finding qualified participants for the tests you need to run. Merely blasting your social feeds or even placing paid ads won’t find you the right people.
If you’re working for a client, they might already have a research panel. Now would be the time to tap directly into that group, and you can skip over the next few paragraphs.
For those still here, I suspect that you've read the chapter on Surveys and you’re building a research panel of your own.
A survey isn’t going to determine the direction any project should take, but surveys will help you find the right people for the activities that will ultimately direct your path forward.
One of those activities is competitor testing (ta-da).
Seeking ‘what’ before ‘who’
In the chapter on Observational Research, we touched on the distinctions between demographics (who people are) and psychographics (what people do). The best chance to gain actionable insight from competitive testing is to reach out to people who actually use the type of products you’re testing.
This initial sorting of the research panel should produce a smaller set of potential participants to work from. Now you can begin applying the demographic data to the group to help ensure you have a diverse group of participants. Perspective isn’t worth much without expertise, but expertise without varied perspectives is likely to paint a very narrow view for you to draw insight from.
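The two-pass sort described above can be sketched in a few lines of code. This is a minimal, hypothetical example: the panel data, field names such as `uses_competitor`, and the demographic attributes are all made up for illustration, not taken from any real survey tool.

```python
# Two-pass panel sort: psychographics first, then demographic variety.
from random import sample, seed

# Hypothetical survey respondents (made-up names and fields).
panel = [
    {"name": "Ana",   "uses_competitor": True,  "age_group": "25-34", "region": "EU"},
    {"name": "Ben",   "uses_competitor": False, "age_group": "35-44", "region": "US"},
    {"name": "Chloe", "uses_competitor": True,  "age_group": "45-54", "region": "US"},
    {"name": "Dev",   "uses_competitor": True,  "age_group": "25-34", "region": "APAC"},
    {"name": "Ema",   "uses_competitor": True,  "age_group": "18-24", "region": "EU"},
]

# Pass 1 (psychographics): keep only people who actually use this type of product.
users = [p for p in panel if p["uses_competitor"]]

# Pass 2 (demographics): favor variety by keeping at most one person
# per (age_group, region) combination before drawing the final pool.
diverse = list({(p["age_group"], p["region"]): p for p in users}.values())

seed(1)  # deterministic draw for the example
pool = sample(diverse, k=min(5, len(diverse)))
print(f"{len(pool)} candidates selected from {len(panel)} respondents")
```

In practice this would run against an export from your survey tool rather than a hard-coded list, but the order of operations is the point: filter for expertise first, then balance for perspective.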
Schedule sessions quickly
Once you have created a pool of potential participants (five to seven is ideal), it’s essential that you reach out as soon as possible, because there will always be a gap between when you begin recruiting and when the testing sessions actually take place. A timing hiccup might seem like a minor issue, but these issues have a tendency to compound, especially if you are tight on time.
To alleviate this issue, your email to all participants should be brief and include both a deadline and a link to schedule their session. If your testing can be conducted via video conference, set a deadline for your sessions within the next 48-72 hours. While that’s a short window, it still amounts to half a week when you are working against a client’s timeline.
A scheduling service like Calendly will help you contain this testing window. You can dictate the times when you are available from within the platform, and Calendly will display only those times as options when testers go to schedule their session.
Once you finish setting up Calendly, insert the link it provides into your invitation email and instruct participants to select the time that works best for them.
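If you are sending more than a handful of invitations, templating the email keeps the deadline and scheduling link consistent. The sketch below assumes a hypothetical Calendly URL and a 48-hour response deadline; swap in your own link and copy.

```python
# Generate a brief recruitment email with a response deadline and a
# scheduling link. The URL below is a placeholder, not a real account.
from datetime import datetime, timedelta

CALENDLY_URL = "https://calendly.com/your-team/competitor-test"  # hypothetical

def build_invite(first_name: str, hours_until_deadline: int = 48) -> str:
    deadline = datetime.now() + timedelta(hours=hours_until_deadline)
    return (
        f"Hi {first_name},\n\n"
        "Thanks for joining our research panel! We'd love to watch you use a "
        "product you already know in a short 30-minute video call.\n\n"
        f"Please pick a time that suits you by {deadline:%A, %B %d}:\n"
        f"{CALENDLY_URL}\n\n"
        "Thanks again,\nThe research team"
    )

print(build_invite("Ana"))
```

Keeping the deadline computed rather than hard-coded means each batch of invitations automatically reflects the 48-72 hour window discussed above.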
Now that we’ve addressed the audience component, we can finally turn our attention to the test itself. Unlike our approach to surveys, you want the tests you conduct to be in-depth as you’ll be analyzing the results of your observations to make critical decisions. Aside from a plan to guide you through the test, you’ll also need to determine how the session will be conducted.
While you have the opportunity to unearth a goldmine of insights, without the proper framework, testing can be an enormous waste of time for all involved. Follow the steps below to ensure you make the most of these moments.
A script with room to roam
Any time you have an opportunity to sit down with a user, you need to arrive with a plan. Your testing sessions should be a task-driven affair. In this manner, you’ll be leveraging what you learned from performing heuristic testing on your competitor’s product.
In this session, you’ll want to guide the user to perform several tasks, but unlike the product tests you’ll run on your own products, competitor testing is focused on what the user does rather than whether the user can do it. The tasks you’ll be pursuing in these tests are meant to illuminate which aspects of the product matter to the user and which frustrate them.
Rather than “can you attempt to log in,” ask, “can you show me how you typically use the product.” This will reveal far more about what the user values. It will also showcase any usability issues that might be present along the way. You might not see the user fail as they work through the platform, but you will often hear them talk about aspects that bother them. You are mining for nuggets of information, and this is when it is vitally important that you are listening carefully and ready with follow-up questions.
Do not attempt to fill the entire session with tasks to be completed. By asking open-ended questions that encourage the user to display how they use the competitor’s product, you are likely to uncover details explaining how the product fits into their daily lives. If they talk about these aspects naturally, listen, and if they don’t, ask directly. You’ll be surprised how much useful information they will tell you by just talking about themselves.
The mechanics of the test
How you opt to conduct your session will hinge primarily on the product being tested. In-person testing has some advantages because it allows for closer observation of the participant, but remote sessions remove location constraints, thus enabling you to dig deeper into your pool of potential participants. In either case, you can use a suite of online tools that will allow you to record your sessions without distracting the participant.
If the product you’re testing is browser-based, Zoom becomes a logical option. Zoom is teleconferencing software that enables any participant to share their screen with others. This is helpful for you because you need to see your user working through the competitor’s website while also being able to see and hear their reactions.
Zoom also provides a robust mobile application that allows your test participants to share their screens while testing native applications. You will lose the ability to see the user while testing mobile apps, but this is a known trade-off for the ability to broadcast from their device.
Not to make this a commercial for Zoom, but you can even record the session to your device on their free tier — something that few other platforms offer. If you upgrade your account, you can record directly to Zoom and host sessions longer than 40 minutes.
Another service for testing is Lookback. Unlike Zoom, Lookback is purpose-built for product testing and provides an excellent framework of tools to help organize your sessions. Lookback comes with a free 14-day trial before pushing users to upgrade to a paid tier.
Because we’re only running tests on existing competitor products, I’ll hold off going into further detail regarding prototype-related testing tools.
Acknowledge the assistance
Humans place an excessive amount of focus on the ending of experiences. As Daniel Pink points out in [When: The Scientific Secrets of Perfect Timing](https://amzn.to/2QbIZ5y), whether it’s a meal or a vacation, how an experience ends leaves a lasting impression. While we’ll circle back to this point when discussing our products, this concept also applies to our interaction with our research panel. Following up with a short email to say thank you will go a long way to fostering a healthy relationship with potential customers.
| Resource | Author |
| --- | --- |
| Participant Scheduling with Calendly | New Pragmatic |
| The competitor research you may not be doing… | Bree Chapin |
| Remote usability testing tools | Jess Lewes |
Length: two to four hours to complete.
As outlined in the preceding chapter, a successful testing session doesn’t happen by accident. Each takes a level of preparation and planning that goes beyond merely ‘talking to people.’
Being that you will be expected to host multiple testing sessions throughout the projects you work on, now is a great time to circle back to the survey data collected in the last chapter. Carefully review your data and use it to complete the following tasks.
- Parse your survey into people who have used a competitor’s product and those who have not.
- Of those who are active users, select a group of five, making sure they are not all from the same demographic background.
- Set up a Zoom or Lookback account and test the service.
- Set up your Calendly account and Calendly event that you will use to schedule your testing session. Include your Zoom/Lookback meeting URL in the auto-responder message.
- Craft a short email to send to each of your participants with a deadline to respond within 48 hours. Be sure to include your Calendly link.
- Write a short script to use with test participants. Have three to five questions ready to ask.
- Conduct at least three competitor tests.
- Document your observations in a Google Doc.
Once complete, update your Program Journal with links to any assets produced in this exercise. Post your Journal in the #Feedback-Loop channel for review.
Up next: Competitive Analysis