Give your business a boost: Chris Robinson of Boost Evaluation on the importance of well-designed surveys and evaluation strategies

By Jim James, Founder EASTWEST PR and Host of The UnNoticed Podcast.

 

Chris Robinson, the founder of Boost Marketing Ltd. (a.k.a. Boost Awards), has been helping companies with their awards entries. But another part of his business centres on evaluating what their clients’ customers really think, a service offered through their subsidiary, Boost Evaluation.

When Boost Awards began in 2006, they found that for a company to win awards, it had to back its claims with research and well-designed surveys. You can’t simply win an award with lovely purple prose; you have to show evidence of satisfied customers, staff, suppliers, and community.

As they started before the recession, they found that many people in the industry didn’t want to be seen hiring a marketing agency. Hence, the birth of Boost Evaluation: a separate legal entity that marries up the two things their people specialise in, working on award entries and conducting surveys and research.

 

The Importance of Listening

Traditionally, for companies to prove their worth, they do a survey and cherry-pick and focus on the best news and testimonials. They package it up without distorting the truth and then compile it into a narrative, which can be a case study or an award entry.

However, what people often forget to do is listen. Listening is an important aspect of evaluation, and one that is especially relevant to marketing and PR: it helps improve strategies and informs important decisions. Yet companies get so busy reflecting on how awesome their past was that they forget to improve their future.

The entire awards industry has been doing things the conventional way, guessing and shooting from the hip. So when Boost Evaluation ran a survey, asking 330 people who enter awards what they really want from the awards industry and then relaying those answers to several award organisers on a call, it was an epiphany for the organisers and the whole industry.

Boost Evaluation then worked with a recruitment company that had stopped entering awards. Instead, the company focused its energy on a customer survey: it got out, worked with training departments, and assessed what people wanted to do (e.g., digital learning). In the process, Chris found that companies sometimes have to reinvent themselves, come up with entirely new propositions, and learn as they go, rather than merely returning to the status quo.

 

Image from Boost Evaluation

 

How Evaluation is Done

People often default to the telephone, an evaluation mechanism that is actually quite intrusive.

An easier way to begin evaluation is with a digital survey platform such as Laser, Qualtrics, or SurveyMonkey. With these platforms you can be less intrusive: start with a survey, then ask whether the respondent is willing to be contacted by phone.

To get as many responses as possible, incentivising a survey is key. When you incentivise, you won’t just get a higher response rate but also a more balanced one. Without incentives, you run into the Marmite effect: only people with a strong view respond (they either love you or hate you), leaving you with a skewed sample.

Forum panel conversations are great as well, and Zoom meetings have proved a helpful way to host what would otherwise be individual telephone calls.

However, it’s important to always bear in mind that a survey has two aspects: qualitative, which is the stories and anecdotes that add colour to your research; and quantitative, which is the numbers and figures.

When evaluating, you must also be familiar with two more terms: objective measurement and subjective measurement.

Subjective measurements are questions that can only be answered through a conversation or survey; they cover feelings, emotions, and the respondents’ sense of the future. Objective measurements, meanwhile, are pieces of data that need no interpretation, such as Google Analytics figures or Trustpilot scores.

If you’re coming up with an evaluation strategy, start with a discussion of your hypotheses: what are you trying to prove, and what don’t you know that you need your respondents to weigh in on? Then divide your list into qualitative and quantitative, subjective and objective. These four terms should shape the evaluation process. If you simply dive in and write a survey, you may end up wishing you had asked a different question, phrased it differently, or used a vote or a scale instead.

 

Countering Survey Dropouts

In conducting surveys, dropout is a major problem, and people often ask whether there is a magic number of questions or an ideal survey length.

Bain & Company once published a book called “The Ultimate Question”, whose philosophy is that a survey should come down to a single question, for instance: “On a scale of 0 to 10, how likely are you to recommend us to a friend?” This is the Net Promoter Score.
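As a concrete illustration, here is a minimal Python sketch of how a Net Promoter Score is calculated from 0-to-10 ratings (the sample responses below are made up):

```python
def net_promoter_score(ratings):
    """Compute a Net Promoter Score from a list of 0-10 ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) are ignored.
    NPS = % promoters minus % detractors, so it ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ten responses: 4 promoters, 4 passives, 2 detractors
print(net_promoter_score([10, 9, 9, 9, 8, 8, 7, 7, 5, 3]))  # 20.0
```

Note that passives only affect the denominator, which is why a sea of 7s and 8s yields a score near zero.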

However, stripping a survey down to a single question can be overdoing it a bit.

For example, in a large survey on thought leadership in facilities management, respondents are first asked whether they have done it or not. Those who haven’t answer questions about barriers, excitement, and ROI; those who have reflect on what worked and what didn’t. It’s a multi-path survey that takes about 10 minutes on average.
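The multi-path idea can be sketched as a simple branching structure. This is a hypothetical illustration assuming a yes/no screener; the question wording and the survey_path helper are invented, not Boost Evaluation’s actual questionnaire:

```python
# One screener question decides which follow-up questions a respondent sees.
SCREENER = "Have you produced thought leadership content?"

FOLLOW_UPS = {
    "yes": [  # reflective branch
        "What worked well in your thought leadership?",
        "What didn't work, and why?",
    ],
    "no": [  # barriers / excitement / ROI branch
        "What barriers stop you from producing thought leadership?",
        "How excited would you be to start?",
        "What ROI would you expect from it?",
    ],
}

def survey_path(did_it: bool) -> list[str]:
    """Return the full question list for one respondent's branch."""
    branch = "yes" if did_it else "no"
    return [SCREENER] + FOLLOW_UPS[branch]

print(survey_path(True))
print(survey_path(False))
```

Each respondent sees only their own branch, which is how a survey can cover a lot of ground while staying around ten minutes per person.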

 

Image from Boost Evaluation

 


Still, anonymity remains a matter of debate. If you make a survey anonymous, you lose the ability to send out targeted reminders or attribute problems correctly. There is also a grey area: in a staff survey, for instance, you can know which department your respondents are from but not their names.

 

The Cost of Surveys and Evaluations

The pricing for surveys is flexible. On SurveyMonkey, you can get a free licence when you sign up, but it will be peppered with advertisements and will limit the number of people you can send your surveys to.

 

Meanwhile, there is a whole sliding scale above that. If you’re happy to pay 200 or 300 pounds, you can get a decent licence that removes the SurveyMonkey branding and lets you ask plenty of questions. As it is an American system, you also have to adhere to the American CAN-SPAM laws and avoid spamming thousands of people. There are also other, more economical alternatives.

In terms of hiring a consultancy, Boost Evaluation charges day rates: you can knock out a survey for 500 pounds or commission a two-year evaluation strategy for a five-figure fee. For more details, visit their website boost-evaluation.co.uk.

If you’re in business, it’s important to listen to and evaluate your customers. Equally, sit down with your team and your partners to get some guidance on what to do next as you come out of lockdown.

This article is based on a transcript from my podcast, The UnNoticed; you can listen here.

 

Guest: Chris Robinson, Managing Director