Andre Powers is a two-time start-up founder and product leader in the financial technology (FinTech) sector. Over the course of launching multiple products from zero and leading product teams, he's developed a passion for lean, customer-centric product discovery.
Painted doors, or “fake” doors, have long been a popular way to get feedback from users and customers and to test assumptions about new features in software products. Classic examples of painted doors include buttons that activate features that don’t yet exist, or letting users add items to a checkout cart that are not yet for sale. Like a door painted on the wall of a building, the idea is to make users think they can access something that isn’t actually available, to see whether they would use it if it were.
Proponents like that these tests seemingly provide a more honest reflection of a user’s response to a new feature or product than a survey answer, since the user took an action instead of just saying they would take one. But this “honesty” comes at a cost. When you don’t ask users directly for their feedback, you never signal that you value their input, and you miss the chance to build a bond that could yield more, and longer-term, insight into how to create value for your customers. Plus, when properly designed, in-context surveys can be not only more insightful but also faster and cheaper to launch.
Let’s explore why you should retire painted door tests from your product manager toolbox.
What are in-product surveys?
In-product surveys are short surveys (often just a single question) with the goal of quickly getting feedback from users and customers. While they are most known for measuring things like net promoter score (NPS) or customer satisfaction (CSAT), surveys are great at rapidly answering questions of viability, desirability, or other assumptions about new feature ideas.
When you launch an in-product survey to validate an idea, you build conviction and reduce leaps of faith about the appetite or ability of users and customers to adopt a new feature or product.
They can be especially effective when used in-product, in-context. For example, when a user reaches a two-factor authentication (2FA) screen to provide an SMS code while logging into your application, you might trigger an in-product survey asking whether there are other 2FA methods they would prefer. The user can submit a response with a single click, taking only a second, and you’ll get valuable feedback on which 2FA methods your users prefer.
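The trigger-and-record flow described above can be sketched in a few lines. This is a minimal illustration, not any particular survey tool's API; the event name, question text, and function names are all hypothetical.

```typescript
// Minimal sketch of an event-triggered in-product survey.
// All names (event, question, functions) are illustrative assumptions.
type SurveyResponse = { userId: string; question: string; answer: string };

const responses: SurveyResponse[] = [];

// When an app event fires, decide whether to show the one-click survey.
// Skip users who have already answered, so nobody is asked twice.
function onAppEvent(event: string, userId: string): string | null {
  const alreadyAnswered = responses.some((r) => r.userId === userId);
  if (event === "2fa_sms_prompt_shown" && !alreadyAnswered) {
    return "Would you prefer another 2FA method? (SMS / Authenticator app / Passkey)";
  }
  return null; // no survey for other events or repeat viewers
}

// Record the user's single-click answer.
function recordAnswer(userId: string, question: string, answer: string): void {
  responses.push({ userId, question, answer });
}
```

In a real product, `onAppEvent` would be wired to your analytics or survey tool's event stream, and the question would render as a small in-app prompt rather than a string return value.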
The problem with painted doors
You could achieve a similar insight in the 2FA example using a painted door, for example by having a sidebar with buttons that provide other 2FA options to the user. When the user selects one, you record their click and provide some form of “coming soon” message to the user. You’ll get the same information with this method, but there is one obvious downside: you tricked your user!
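For contrast, the painted-door version of the same experiment might look like the sketch below: the click is logged, and the user gets a placeholder instead of the feature. Again, all names here are hypothetical, not a real implementation.

```typescript
// Minimal sketch of a painted-door click recorder. Names are illustrative.
type DoorClick = { userId: string; option: string; clickedAt: number };

const doorClicks: DoorClick[] = [];

// The user clicks a 2FA option that doesn't exist yet: record the click,
// then show the "coming soon" message -- the wall behind the painted door.
function onPaintedDoorClick(userId: string, option: string): string {
  doorClicks.push({ userId, option, clickedAt: Date.now() });
  return `${option} is coming soon!`;
}
```

Note that the code captures *what* was clicked, but nothing about *why*, which is exactly the limitation discussed below.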
The experience of a painted door leads to an element of disappointment. You make the user believe they can do something they can’t, and they only find out after attempting it. The disappointment is then compounded by the realization that they were surreptitiously fooled into providing information to you. It’s hard to see how these feelings could constitute a positive experience. Imagine encountering an actual painted door in real life: you try to use the door, only to walk into a wall (ouch) and realize someone has fooled you. How many of us would describe that as a good experience?
The in-product survey by contrast is a much more transparent approach that shows your users that you are interested in their opinion, and view them as a partner in your product. It builds trust by engaging the user.
Building that trust pays dividends down the road. Not just in referrals, but a willingness to share their time in more intensive feedback channels like customer interviews and user testing sessions that generate more insights about valuable opportunities.
In addition, you can give users an open-ended option in the survey to provide context. This kind of context is rare in painted door tests, where you’re more likely to see only what users did or chose, but not why they did it.
Faster, cheaper, and easier to launch
Painted doors are often promoted as quick and easy ways to get feedback, but they will almost always require more time and effort than an in-product survey. Assuming you have integrated a best-in-class tool like Sprig, precisely targeted survey experiences can be launched almost immediately, without any development work required. You can use any custom event in your application as a trigger to target the survey at precise moments, and you can filter who sees the survey using any user attributes you track in your application. Again, no dev work needed.
Even the smallest painted door experience usually requires code, and if you are using sprint release cycles that means it’s unlikely the test is launched right away. You’ll also have to remember to deactivate the painted door experience. Getting the results of the test in front of stakeholders will probably also take longer since the data has to be gathered from the database manually, rather than immediately available in an easy-to-access dashboard.
When painted doors are a good choice
There are certain questions where painted doors can be more effective than in-product surveys. There is a well-known disconnect between indicating willingness to buy in a survey question and actual willingness to buy. Here, painted door tests can be a useful way to capture candid willingness to buy that might be obscured in a survey question, especially if the test includes providing payment information. Similarly, testing price points is tricky with surveys, since the user has a strong incentive to indicate a preference for the lowest option even if they would pay more. A painted door test with different price points gives a much more reliable indication of what users are willing to pay.
Even so, it’s still possible to reduce the need for painted doors even on questions of price elasticity or willingness to buy. Staying close to your customers with regular in-depth interviews to understand their needs thoroughly will give you the insight to make really good decisions on questions like these without the risks that come with painted door tests.
A resolution for 2023
Painted door tests have been a clever way to get users to reveal information about their preferences for many years. But the most successful product teams now focus on building close relationships with their customers to understand them using interviews, surveys, and user testing.
The deeper understanding built with in-context survey methods pays ongoing dividends to product teams by giving much more information about how to create value for users. That means they are better for the user and for the product team. In 2023, I hope to bump into fewer painted doors.