Training Evaluation Series Part 1: Survey Design Basics
This fall, RU Training will offer a series of articles addressing advanced topics in training evaluation. I hope these posts will benefit both our current students and our alumni. This series assumes a basic knowledge of both Kirkpatrick’s Model and Phillips’ ROI Methodology.
The first article addresses survey design basics. Surveys are an essential tool throughout the training process. Before training design begins, surveys allow us to collect information about our participants and the organization where they work to guide the design process. During training, survey data helps us gauge how training is going, allowing for necessary changes to ensure that we are meeting the needs of our learners. After training, we use surveys and questionnaires to evaluate training both immediately when training ends and later as a follow-up to measure transfer.
The adage “if you can’t do it right, don’t do it at all” applies directly to survey data. When surveys are well designed, measure what they are intended to measure, and produce stable results over time, you get quality data. When they are not, the results can do more harm than good: an inaccurate survey yields inaccurate data that can take you and your organization down the wrong path, toward decisions that are both incorrect and costly. In short, no data is better than bad data.
In this article, I will review the basics of survey design and guide you to resources that will help you master the craft of survey design.
The Basics of Survey Design
- First, consider the type of information you want to collect. The three basic kinds of information captured by surveys are knowledge, attitudes, and behavior. To gather knowledge, ask factual questions; to measure attitude, ask questions that capture opinion; and to measure behavior, ask questions that address both knowledge and attitude, collecting data over time (before training, when training ends, and when the trainee is back on the job) to identify the effects of training.
- There are two types of questions you might ask: open-ended and closed-ended. Open-ended questions allow participants to express themselves freely but are time-consuming to “grade,” while closed-ended questions can be limiting but lend themselves better to objective measurement. A simple rule of thumb for choosing between the two: if you know the specific information needed to answer a question, closed-ended responses are preferred (Converse and Presser, 1986, p. 33). If, however, you are not sure what the range of possible responses to a question is and need to conduct a preliminary exploration of a topic, open-ended questions will work better. Note also that previous research shows respondents are more willing to offer sensitive information on a survey through an open-ended response.
- Choose possible responses carefully when using closed-ended questions. Most closed-ended questions give respondents a choice or a range of choices, and you want to capture all possible options clearly and without bias. For example, Bradburn (2004) cautions against vague frequency categories such as never, rarely, occasionally, and often, because there is no way to know what “often” means to each participant. A common choice for survey data collection is the Likert-type scale, where participants rate each item. In general, a five-point Likert scale is an appropriate choice, with response options such as “strongly agree,” “agree,” “neutral,” “disagree,” and “strongly disagree.” Because each Likert response option maps to a number, the results are easy to tabulate and report back.
- When it comes to questionnaires, appearance counts. Make sure your questionnaire looks professional, is grammatically correct, is easy to read, and contains clear instructions. It’s always a good idea to have others proof your questionnaire and provide feedback. Also, ask only the questions that are necessary; participants find overly long questionnaires daunting.
Remember: garbage in, garbage out. The quality of your data depends on the clarity of each question. A thoughtful design captures both the experience and the change of program participants.
- Confidentiality is key to getting honest answers. Most surveys are collected anonymously using an online tool such as SurveyMonkey or Qualtrics (available to students through the RU Library). Communicate clearly to participants that all data will be collected and reported anonymously.
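As a minimal sketch of the tabulation step mentioned above, here is how five-point Likert responses might be summarized in Python. The responses and labels are invented for illustration; any spreadsheet or survey tool will do the same job.

```python
from collections import Counter

# Hypothetical responses to one survey item, coded 1-5
# (1 = strongly disagree ... 5 = strongly agree)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

labels = {1: "strongly disagree", 2: "disagree", 3: "neutral",
          4: "agree", 5: "strongly agree"}

# Count how many respondents chose each option
counts = Counter(responses)

# Because each option maps to a number, a mean rating is easy to compute
mean_score = sum(responses) / len(responses)

for code in sorted(labels):
    print(f"{labels[code]:>18}: {counts.get(code, 0)}")
print(f"mean rating: {mean_score:.2f}")
```

The numeric coding is exactly why closed-ended Likert items are easy to report back: frequencies and means fall out directly, with no hand-grading of free text.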
To revisit our three types of information: when testing knowledge, use closed-ended factual questions that directly measure course objectives; when measuring attitude, use a five-point Likert scale; and when measuring behavior, use a combination of both question types and administer the same questionnaire before training, right after training, and at one-month intervals following training.
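To make the repeated-administration idea concrete, the sketch below compares mean scores for the same item across three administrations of an identical questionnaire. All scores here are hypothetical; comparing the before, after, and follow-up means is what lets you see whether a change transferred back to the job.

```python
# Hypothetical Likert scores (1-5) for one item, collected from the
# same questionnaire given at three points in time
administrations = {
    "before training": [2, 3, 2, 3, 2],
    "end of training": [4, 4, 5, 3, 4],
    "one month later": [4, 3, 4, 4, 3],
}

# Mean score for each administration
means = {when: sum(scores) / len(scores)
         for when, scores in administrations.items()}

for when, m in means.items():
    print(f"{when}: mean = {m:.1f}")

# A gain from "before training" that holds up "one month later"
# suggests the new behavior transferred to the job; a gain that
# fades by the follow-up points to a transfer problem.
```

Keeping the questionnaire identical across administrations is what makes the comparison valid: any shift in the means reflects the participants, not the instrument.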
Add a comment to this post to tell us about your experience with survey design—particularly what you think makes a survey work or not work effectively.
References and Resources
The books that follow are useful resources that will help you learn more about survey design. The first is more advanced; the second, by Converse and Presser, is a quick read.
Bradburn, N., Sudman, S., & Wansink, B. (2004). Asking Questions. San Francisco, CA: Jossey-Bass.
Converse, J., & Presser, S. (1986). Survey Questions: Handcrafting the Standardized Questionnaire. Newbury Park, CA: Sage Publications.