Have you ever noticed how often people state theories as though they were facts?
It’s definitely going to rain today. Your favorite sports team will easily make it to the playoffs this year, or alternatively, they’re so irredeemable they don’t stand a chance. Your diet will work this time.
…but would you like to test that prediction?
In both the marketing and product worlds, we often invoke testing as a sort of moral good, and there’s a logical reason for that: When you need to justify your value to your company, it really helps to have data behind you. “We tested this theory I had and it made you a million dollars.” Does anything feel sweeter?
As I’ve been navigating the transition from marketing-focused writing to product-focused writing, I’ve been thinking a lot about how “testing” works in those two separate worlds. In marketing, you typically run an A/B or multivariate test: you randomly split your audience into segments, each segment gets a different version of your creative, and you see which version has the highest response rate.
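If it helps to picture the mechanics, here’s a minimal Python sketch of that random split and response-rate comparison. Everything in it is a made-up stand-in: the audience list, the two variants, and the responded() check are purely illustrative, not anyone’s actual campaign setup.

```python
import random

# Hypothetical audience: a list of subscriber IDs (made-up stand-in data).
audience = [f"user_{i}" for i in range(10_000)]

# Randomly assign each person to one of two creative variants.
variants = ["A", "B"]
assignment = {user: random.choice(variants) for user in audience}

def responded(user: str) -> bool:
    """Stand-in for whatever counts as a response in your campaign
    (a click, a reply, a purchase). Simulated here purely for illustration."""
    base_rate = 0.05 if assignment[user] == "A" else 0.06
    return random.random() < base_rate

# Tally response rates per variant.
sent = {v: 0 for v in variants}
responses = {v: 0 for v in variants}
for user, variant in assignment.items():
    sent[variant] += 1
    responses[variant] += responded(user)

for v in variants:
    print(f"Variant {v}: {responses[v] / sent[v]:.2%} response rate")
```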
Product folks do occasionally run this sort of test; think about how social media platforms sometimes release an app update to only part of their user base first to make sure it performs well.
However, more often in this space we’re thinking about user interviews, which provide more qualitative data. Ask a user to get from Point A to Point B in your prototype and see where they get stuck, or maybe have them group different words together to find out how your target audience would navigate your content hierarchy.
Both versions of testing, quantitative and qualitative, are valuable. But while it’s incredibly important to know if the content you’ve put together works, that doesn’t mean testing is as simple as just creating two versions of your design and running them head-to-head.
If you’re going to run an experiment, you need a hypothesis.
What does it mean if Version A outperforms Version B? What insights can we take from that win moving forward? Where are the opportunities for further optimization?
The smaller the difference between the versions, the easier the results are to understand. Small changes are also less likely to produce statistically significant results, though, so sometimes your test can’t be as straightforward as “change the button text” (although if you’re going to run a clean test, that’s often the most impactful one).
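To put a rough number on that, here’s a sketch of the kind of significance check involved, a standard pooled two-proportion z-test with invented counts. The conversion numbers are hypothetical; the point is only that the same small lift can be noise on a small sample and a real signal on a larger one.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates
    (pooled two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return erfc(abs(z) / sqrt(2))  # same as 2 * (1 - Phi(|z|))

# A roughly one-point lift on a modest sample: p is about 0.38, nowhere near significant.
print(two_proportion_z_test(conv_a=52, n_a=1_000, conv_b=61, n_b=1_000))

# The same lift on ten times the traffic: p is about 0.006, comfortably significant.
print(two_proportion_z_test(conv_a=520, n_a=10_000, conv_b=610, n_b=10_000))
```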
The funny thing about having worked for a copy testing agency for five years is that it gave me something of a sixth sense for people who state opinions or hypotheses as though they were facts—and then often prove resistant to testing them.
Typical theories our clients shared without ever having put them to a test included:
“Always lead with urgency.” Note: While sharing an explicit deadline can improve response rates, an unspecific “Hurry!” rarely does.
“Users will only respond to conversational language and will be turned off by formal verbiage.” In reality, some of our best-performing language was extremely formal.
And of course, “the shorter, the better.” More copy often does convert users more effectively—it depends on what the copy is.
Since leaving that company, I’ve run across opinion-facts like:
“People don’t know what a ‘clinician’ is, so we shouldn’t use that word.” This is a struggle for healthcare writers across the US since you’re not allowed to refer to your medical team as “doctors” if you employ nurse practitioners as well.
“Our chatbot should speak in the first person so users feel more comfortable texting with it.” Maybe? Or maybe it feels weird and off-putting when an algorithm pretends to be sentient?
“When people are confused with an app, they want to be able to call someone for help rather than using a text chat.” This argument obviously illuminated a boomer vs. millennial divide in the office.
How do you learn to catch these pretend facts in the act? It can be hard to see your own blind spots, of course, so these moments of “…do you actually know that or is that just what you think?” often come up in conversation with others.
As a writer, I find that articulating why I made the choices I did is extremely helpful for identifying testing opportunities. Whether you’re presenting your reasoning to someone else or not, knowing it yourself allows you to think through to the next stage: Why do I think this will work? What would it mean if this didn’t work? What is another copy option here that might work for a different reason?
You may or may not have the opportunity to run an experiment, but keeping your mind open to these possibilities can help you notice places where testing could make the largest impact so you can advocate for them with your team.
You can also catch these opinions-presented-as-facts quite frequently in the news, especially if you’re wading into topics popular in online discourse, like “Gen Z is disengaged from society because they’re on their phones all the time,” “New York City is a crime-infested hellhole,” or really any problem of correlation vs. causation. Does weight gain really cause so many medical issues, or do all these medical issues cause weight gain?
Unfortunately, in real life it’s not as easy to A/B test things. You can’t double-blind a nutrition study, for instance; people can see what they’re eating. But as writers, we should always be looking for opportunities to test, and we should be careful with our language when we’re presenting a theory rather than a fact.
As readers, we should look out for opinions pretending to be facts as a form of media literacy. Why does the writer of this news article want me to think this is true without evidence?
In my experience, people who reject testing things are often people who can’t accept the possibility that they might be wrong—or else they just don’t want to put in the work to run a test. Then again, I haven’t tested that, so let’s call it a working theory for now.
Testing. Science. It’s how we learn, grow, and get better at doing what we want to do.