Improve Your Mailing Results By Testing Less But Smarter

“Obviously, computers have made differences. They have fostered the development of spaceships – as well as a great increase in junk mail.” – Tracy Kidder

When it comes to testing, one of the biggest mistakes direct marketers make is that they test insignificant choices. In an effort to “perfect” or “tweak” a mailing package, they test all kinds of things – from color to type style to body copy to paper stock – that have no real chance of producing a meaningful change in response rate. What they get is usually in the plus-or-minus 10% to 25% range.

That range is enough for many. For me – based on what I’ve read and (more importantly) seen – it’s inadequate.

I am not a statistician, so I can’t make the statistical argument – but I can tell you this: Getting a 10% or even a 25% lift on a test cell of 5,000 or even 25,000 names is not always reliable. It may sometimes be enough to validate going forward with something you want to do anyway (say, a package that is less expensive), but it may not give you the level of confidence you are probably looking for.
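
For readers who want a feel for the arithmetic behind that hunch, here is a rough sketch – not the author's method – using an assumed 1% baseline response rate (typical for direct mail) and a standard two-proportion z-test to ask whether a 10% or 25% lift at those cell sizes would even clear the conventional 95% confidence bar:

```python
# Rough significance check for direct-mail lift tests.
# Assumptions (illustrative, not from the article): 1% baseline response
# rate, equal-sized control and test cells, normal-approximation z-test.
from math import sqrt, erf

def z_for_lift(baseline, lift, n_per_cell):
    """Two-proportion z-score for an observed lift over the baseline."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    pooled = (p1 + p2) / 2                      # equal cell sizes
    se = sqrt(pooled * (1 - pooled) * (2 / n_per_cell))
    return (p2 - p1) / se

def two_sided_p(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

for n in (5_000, 25_000):
    for lift in (0.10, 0.25):
        z = z_for_lift(0.01, lift, n)
        verdict = "significant at 95%" if z >= 1.96 else "NOT significant at 95%"
        print(f"n={n:>6} per cell, {lift:.0%} lift: z={z:.2f}, "
              f"p={two_sided_p(z):.3f} -> {verdict}")
```

On these assumptions, only the 25% lift on 25,000 names clears the 95% bar, and even that says nothing about whether the lift will hold up on a back-test.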

Before I ever opened a book on statistics, I had 20 years of experience in planning, running, and reacting to more than a billion dollars’ worth of tests. I’ve seen – and have been responsible for – just about every kind of test you can imagine. During that time, I came up with all kinds of interesting theories that I proved in testing, satisfying myself with 25% lifts, only to see those results negated on back-tests. This has happened not just once but hundreds of times. In fact, I think it’s fair to say that back-testing nullified prior tests more often than not.

It’s Amazing How Many Foolish Things People Will Spend Money Testing

A young copywriter has the idea that several paragraphs in a sales letter are so bad that they must be rewritten and retested. The results come in – and, sure enough, the new package outpulls the old by 28%. A month later, however, the product manager screws up and mails the “inferior” copy, and guess what? It beats the new version by 22%.

An art director wants to demonstrate that his trendy new typeface will make a difference. He gets a 15% lift and is satisfied.

A product manager wants to prove to his client that he is right about something – almost anything – and creates a test to vindicate his point of view.

There are many reasons for testing insignificant options. Yet each test costs money. Each test reduces profits.

I remember when I was younger and more foolish. JSN and I noticed that all the really successful packages at that time had one thing in common: the color scheme. They all used black and burgundy ink on cream-colored stock. We rushed to test it and got a 30% lift. It was a “Eureka!” moment.

We immediately converted all our packages to this “ultimate” color scheme. Luckily, we also did a little back-testing.

It’s a little like looking at an oil stain on an old window. If you really want to see the visage of the Virgin Mary, you will.

A Simple Rule For Making Your Testing Process Simpler, Cheaper, And More Effective

After thinking about this problem for a long time and reading some serious stuff recently (that I only half understood but that validated my experience), I’ve come up with a new rule of thumb. I offer it to you for consideration:

TEST NOTHING EXCEPT THAT WHICH CAN POTENTIALLY GIVE YOU A 100% LIFT.

It’s radical thinking, but it does two things that are very good for your mailing results:

1. It dramatically reduces useless testing and thus increases your profits (see the sample-size sketch after this list).

2. It forces you to do the hard thinking necessary to come up with truly breakthrough testing ideas.
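
One hedged way to see the economics of the rule: a standard sample-size formula for comparing two response rates (again assuming a 1% baseline, 95% confidence, and 80% power – illustrative numbers, not the author's) shows how quickly the required cell size shrinks as the lift you are hunting for grows:

```python
# Approximate cell size needed to detect a given lift with 95% confidence
# and 80% power (two-proportion comparison, normal approximation).
# Assumes a 1% baseline response rate; illustrative only.
from math import sqrt, ceil

Z_ALPHA = 1.96   # two-sided 95% confidence
Z_BETA = 0.84    # 80% power

def names_per_cell(baseline, lift):
    p1 = baseline
    p2 = baseline * (1 + lift)
    p_bar = (p1 + p2) / 2
    a = Z_ALPHA * sqrt(2 * p_bar * (1 - p_bar))
    b = Z_BETA * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(((a + b) / (p2 - p1)) ** 2)

for lift in (0.10, 0.25, 1.00):
    print(f"To detect a {lift:.0%} lift: ~{names_per_cell(0.01, lift):,} names per cell")
```

On those assumptions, reliably detecting a 10% lift takes on the order of 160,000 names per cell and a 25% lift about 28,000, while a 100% lift can be read on a couple of thousand. That is roughly the arithmetic behind testing only the big swings.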

The Question Is This: What Can Possibly Double Response Rates?

Well, first I’ll tell you what can’t: Changing your color scheme, paper stock, or type size. Making your body copy smoother or cleaner. (Forget Strunk & White.) Correcting your data. Substituting a Johnson Box for a regular headline. (You’ve seen a Johnson Box: an outline of a box on top of the salutation inside of which the headline and/or lead goes.) Changing the person who signs the letter. Adding marginalia. Using a live stamp vs. an indicia vs. a meter. Changing the response method. Using testimonials. Using a product photo vs. an illustration. Your selection of premiums. Most forms of personalization. Individual involvement devices. Most lift notes. A BRE vs. a self-mailer. And (usually) using an 800 number.

I’m not saying these things don’t matter. They do. When you design your package, you need to keep all of them in mind and use your judgment to make the most appropriate choices. But though they matter as a group, they are insignificant individually. So don’t test them unless you have a very good reason to . . . unless you think they can make a very big difference.

What CAN double your response rate?

* your headline (See Message #192, “How to Be a Master Headline Writer.”)

* using teaser copy on your envelope

* the subject line in your e-mail solicitation

* your lead paragraph

These are the most important copy tests, because they incorporate – in the most visible way – the fundamental promise of your promotion. As I explained in Message #104, the most important job of the copywriter is to choose the “Big Promise” of the promotion. If you get that right, you are 80% home.

What else can make a difference?

* your offer (an often-overlooked element)

* the length of your letter (longer is usually better)

* the size of your package

* the number of premiums offered

There may be a few others – but the point I’m trying to make here is that the number of things you should test is (1) probably smaller than you thought and (2) certainly smaller than you’d surmise if you listened to the experts. (Target Marketing recently published a piece on 27 ideas you “should be testing.” Most of them made it to my “insignificant” list.) Remember, most direct-marketing experts are copywriters who see, relatively speaking, very few test results and aren’t responsible for the bottom line.