"Control" is the term for the current best-performing prospect acquisition package.
When I explain the concept of prospect controls to new clients, they are sometimes puzzled about what determines a control. They ask whether I, the agency, decide, or whether they, the client, decide. Or they want to start mailing "the control" right away, before the first letter has even been sent.
In my experience, the only thing that determines a control is response data. And to accurately call a prospect package a control, it must be tested head-to-head against another appeal.
Here is the method we use at LDMI:
Two packages must be mailed, at the same time, to random 50% splits of the net output of the same lists. After a merge/purge, the net output of each list should be split randomly in half and labeled according to which package test each half will receive.
We like to test at least five lists, with a minimum net of 2,500 per test segment. So we usually order 7,500-12,500 names per outside prospect list. After merge/purge we end up with a net output between 5,000 and 10,000 per list. Then, after randomly splitting each list into two test segments, you arrive at test cells of 2,500-5,000 names.
This brings each package you are testing to a total of 12,500-25,000 records. If your test quantities are on the low end and results are not convincing one way or the other, you should re-test. This is especially true if the results might lead you to replace a long-running control.
There are different ways to determine a prospect winner, but if you HAD to choose one metric, pick percent response. Once you have converted a prospect you have time to work on other metrics, like average gift or average sale, and you can always test package versions that reduce package costs.
So percent response is the first metric to look for. One thing to keep in mind, however: if the package with the highest percent response also has a much lower average gift or average sale, you should then look at ROI.
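That decision rule can be sketched as follows. The 20% average-gift gap threshold and the field names are assumptions for illustration; the text does not specify how much lower an average gift must be before ROI takes over.

```python
# Hedged sketch of the rule above: take the package with the higher
# percent response, but if its average gift trails the other package's
# by a wide margin, let ROI decide instead.

def pick_winner(a: dict, b: dict, gift_gap: float = 0.20) -> str:
    """Each package dict needs: name, mailed, responses, revenue, cost."""
    def pct_response(p): return p["responses"] / p["mailed"]
    def avg_gift(p):     return p["revenue"] / p["responses"]
    def roi(p):          return (p["revenue"] - p["cost"]) / p["cost"]

    leader, trailer = (a, b) if pct_response(a) >= pct_response(b) else (b, a)
    # Response leader's average gift is much lower: fall back to ROI.
    if avg_gift(leader) < avg_gift(trailer) * (1 - gift_gap):
        return max(a, b, key=roi)["name"]
    return leader["name"]
```

For example, a package pulling 1.2% response at a $20 average gift can lose to one pulling 1.0% at $40 if the ROI math favors the latter.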
This brings up an important point regarding the up-front work necessary to structure a test to determine a winning prospect control.
When evaluating package results, make sure you are comparing cost per piece at equal quantities. If one package was produced at a higher quantity, that would reduce its cost per piece. This happens a lot when we have a long-standing control and test a new package at smaller quantities. Make sure you recalculate head-to-head results using the same-quantity rollout cost per piece for both packages.
The way Lornezo Cowgill, LDMI Production Manager, manages this is by getting rollout costs for any new test packages at the same time as the initial test-quantity bids.
Another thing to consider when testing a new package against an existing control is the creative cost of the new test.
Our agency allocates creative costs, such as writing and design, to the first mailing of a package. So if a longstanding control goes up against a new test, the test package will carry those creative costs while the control will not.
In that scenario, recalculate the head-to-head results as if BOTH packages had no creative costs.
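The two adjustments above can be combined into one recalculation: reprice both packages at the same-quantity rollout cost per piece, which also strips out the one-time creative costs. The function and the sample numbers below are illustrative assumptions, not figures from the text.

```python
# Sketch of the head-to-head recalculation: ROI as if each package
# mailed at rollout pricing, with no creative costs on either side.

def adjusted_roi(mailed: int, revenue: float, rollout_cpp: float) -> float:
    """ROI using same-quantity rollout cost per piece for both packages."""
    cost = mailed * rollout_cpp
    return (revenue - cost) / cost

# Hypothetical head-to-head at the same rollout cost per piece
control_roi = adjusted_roi(mailed=27_900, revenue=22_000, rollout_cpp=0.55)
test_roi    = adjusted_roi(mailed=27_900, revenue=19_000, rollout_cpp=0.55)
print(control_roi > test_roi)
```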
The real-life test below helped us establish a control for a new client, one that remained the prospect control for the next year.
This test compared two completely different prospect donor acquisition packages, HT and PC:
- HT was an Emergency Humanitarian appeal with: a 2-color 6×9 OE, a 4-page 8.5 × 11 two-sheet letter, a perf-off reply card with a line for the recipient to sign pre-existing prayer intentions, and a #9 BRE.
- PC was a Prayer Card appeal with: a 2-color #10 OE, a 4-page 8.5 × 11 two-sheet letter, a full-page reply that included space for the recipient to write in their own prayer intentions, a separate prayer card, and a #9 BRE.
We mailed five outside prospect lists on the same mail date. After merge/purge we split the net output in half, so that each package was mailed to approximately 27,900 records.
PC had a slightly higher percent response, a much higher average gift, and a better ROI. Even though PC cost more to mail than HT, the increase in average gift was more than enough to compensate for the higher costs.
PC offered BOTH a prayer card the recipient could keep AND an area on the reply for the recipient to write their own prayer intentions. These added involvement techniques allowed, and encouraged, the recipient to handle and get involved with the package, which most likely explains its success.
Many of the best and longest running prospect controls I have been involved with have used similar involvement techniques.
“Involvement devices have been tested in so many variations and under so many circumstances that their effectiveness is a generally accepted fact. They’re especially powerful in acquisition.”
If you have a control now and it doesn’t have an involvement device, plan a test version with one and see if it beats your existing control. And if you’re struggling to establish a control make sure your acquisition letters have an involvement device, or two.
But don’t assume anything about beating a control. The package that finally beat PC and became the new control was one that none of our team thought would win. We tested it anyway and it convincingly beat PC. And yes, it also had an involvement device.
Remember, our opinions don’t matter in determining a control.
Only the response data matters.
If you have constructed your test properly, have read the results accurately and have priced out the rollout correctly, you should be confident in your results. And when you find your winning prospect control package keep mailing it…and keep testing against it to try to beat it.