New Year; Good Old Basics: A 5-Minute Testing Refresher


No matter what your organization’s fiscal year is, the beginning of a new calendar year is like returning to “Go” in your donor communications. It marks a reset of your Annual Fund or membership program and the beginning of a new dialogue with your donors that will build over the coming months and culminate in your year-end campaigns.

What better time, then, to revisit the fundamentals of how you communicate with your donors and measure the efficacy of those communications – a.k.a. testing? Here’s a 5-minute refresher on what you need to know (or remember) about direct response testing in the year ahead.

  1. You need to test. The great thing about direct response fundraising and constituency building is that you can measure your results and test strategies to learn what works best. You won’t always be able to say why something works or doesn’t, but you will be able to say what works. And you need to test new ideas and validate existing ones on an ongoing basis to be sure that what you’re doing still works.
  2. Testing usually costs more, but it’s worth it. It’s tempting to cut a test to meet a tight budget. But when you amortize the lift or savings that a test delivers over time, it’s easy to see the greater cost of not testing. For example, if a direct mail test that cost you $500 to implement showed that you could save $100 by switching from a business reply envelope to a courtesy reply envelope, you lost money up front on the test. But apply that $100 savings to every future mailing, and it adds up quickly (there’s a quick break-even sketch after this list).
  3. Testing is more work, but also worth it. You have to think harder to come up with meaningful tests. You have to deal with more complicated project specifications, write more copy and analyze more results. But your extra efforts will translate to better results and maximize donor contributions for your organization’s important work.
  4. Only test things that you really think will make a difference. Just because you know you’re supposed to test (see #1) doesn’t mean you should test any old thing you can think of. Only test changes you genuinely believe could have a significant impact on your results. Developing meaningful tests takes a good deal of creative and strategic thinking on your part (see #3).
  5. Likewise, focus the majority of your tests on the best performing segments of your donor file. Your best donors are always going to be most responsive to your communications, so perfecting your solicitation approaches to your best donors should be your priority. That’s not to say that you shouldn’t test approaches to reengage lapsed donors and improve results with marginal donor groups. (In fact, it’s always easier to reinstate a lapsed donor than to find a brand new one, and reinstated donors tend to have a greater lifetime value than new ones.) But balance is important. For starters, focus 75% of your tests on your best donors, and tweak your formula as you go along based on your testing results.
  6. Test one variable per segment. If you want to accurately measure test results, make sure you test only one variable per test cell. Why? If you change multiple variables, you won’t know which one was responsible for lifting (or suppressing) your results. Elements you might test per cell include the subject line, premium, carrier envelope, landing page, insert, letter length or microsite, to name just a few (a simple illustration follows this list).
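
To make the math in point 2 concrete, here’s a quick back-of-the-envelope sketch. The $500 test cost and $100-per-mailing savings come straight from the example above; the six-mailings-a-year schedule is just an assumption for illustration.

```python
# Back-of-the-envelope math for the envelope test in point 2.
# The $500 test cost and $100-per-mailing savings are from the example above;
# the six-mailings-a-year schedule is an assumption for illustration.
test_cost = 500            # one-time cost to run the test
savings_per_mailing = 100  # savings from switching BRE -> CRE, per mailing

mailings_to_break_even = test_cost / savings_per_mailing
print(f"Break even after {mailings_to_break_even:.0f} mailings")  # 5 mailings

annual_mailings = 6  # assumed mailing schedule
first_year_net = annual_mailings * savings_per_mailing - test_cost
print(f"Net savings in year one: ${first_year_net}")  # $100 in year one, $600 every year after
```

In other words, under those assumptions the test pays for itself by the fifth mailing, and every mailing after that is pure savings.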
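
And to illustrate point 6, here’s a minimal sketch of single-variable test cells lined up against a control package. The package elements and values are invented for illustration; the only point is that each test cell changes exactly one thing.

```python
# Minimal sketch of single-variable test cells (package elements are made up
# for illustration). Every test cell changes exactly one element of the control.
control = {
    "carrier_envelope": "#10 window",
    "letter_length": "2 pages",
    "premium": "none",
    "reply_device": "business reply envelope",
}

def test_cell(element, new_value):
    """Copy the control package and change exactly one element."""
    cell = dict(control)
    cell[element] = new_value
    return cell

cells = {
    "A (control)": control,
    "B (premium test)": test_cell("premium", "address labels"),
    "C (reply device test)": test_cell("reply_device", "courtesy reply envelope"),
}

for name, package in cells.items():
    print(name, package)
```

If cell C lifts (or depresses) response, you know the courtesy reply envelope is the reason, because nothing else changed.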

Statistical validity is a big consideration that I’ll address in my next post (so check back next week). Also coming soon: real-life test results that may surprise you.

To be sure you don’t miss a thing, subscribe by email (in the box in the upper right column), RSS or Kindle. And as always, I want to hear your thoughts – so post them here, or email us directly at topics@nthfactor.com.

Looking forward to being connected in 2010!
