Training Evaluation Series Part 1: Survey Design Basics

This fall RU Training will offer a series of articles addressing advanced topics in training evaluation. I hope that these posts will benefit our current students and our alumni. This series assumes a basic knowledge of both Kirkpatrick’s Model and the Phillips ROI Methodology.

The first article addresses survey design basics. Surveys are an essential tool throughout the training process. Before training design begins, surveys allow us to collect information about our participants and the organization where they work to guide the design process. During training, survey data helps us gauge how training is going, allowing for necessary changes to ensure that we are meeting the needs of our learners. After training, we use surveys and questionnaires to evaluate training both immediately when training ends and later as a follow-up to measure transfer.

The adage “if you can’t do it right, don’t do it at all” applies directly to survey data. When surveys are well designed, measure what they are supposed to measure, and are stable over time, you have quality data; when they are not, the results can do more harm than good. If your survey is flawed, the data will be inaccurate and can take you and your organization down the wrong path, leading to decisions that are both incorrect and costly. In short, no data is better than bad data.

In this article, I will review the basics of survey design and guide you to resources that will help you master the craft of survey design.

The Basics of Survey Design

  • First, consider the type of information you want to collect. The three basic kinds of information captured by surveys are knowledge, attitudes, and behavior. To gather knowledge, ask factual questions. To measure attitude, ask questions that capture opinion. To measure behavior, ask questions that address both knowledge and attitude, and collect data over time (before training, when training ends, and when the trainee is back on the job) to identify the effects of training.
  • There are two types of questions you might ask: open-ended questions and closed-ended questions. Open-ended questions allow participants to express themselves freely but are time-consuming to “grade,” while closed-ended questions can be limiting but lend themselves better to objective measurement. A simple rule of thumb for choosing between the two: if you know the specific information needed to answer a question, closed-ended responses are preferred (Converse and Presser, p. 33). If, however, you are not sure what the range of possible responses to a question is, and need to conduct a preliminary exploration of a topic, open-ended questions will work better. Note also that previous research shows that respondents are more willing to offer sensitive information in an open-ended response.
  • Choose possible responses carefully when using closed-ended questions. Most closed-ended questions give respondents a choice or a range of choices. You want to capture all possible options clearly and without bias. For example, Bradburn (2004) cautions against vague categories such as never, rarely, occasionally, and often because there is no way to know what “often” means to each participant. A common choice for survey data collection is the Likert-type scale, where participants are asked to rate each item. In general, a 5-point Likert scale is an appropriate choice and would include response options such as “strongly agree,” “agree,” “neutral,” “disagree,” and “strongly disagree.” Because Likert scales attach numbers to these options, they are easy to tabulate and report back.
  • When it comes to questionnaires, appearance counts. Make sure your questionnaire looks professional, is grammatically correct, is easy to read, and contains clear instructions. It’s always a good idea to have others proof your questionnaire and provide feedback. Also, ask only the questions that are necessary; participants find overly long questionnaires daunting.
    Remember: garbage in, garbage out. The quality of your data depends on the clarity of each question. A thoughtful design captures participants’ experience and measures their change.
  • Confidentiality is key to getting honest answers. Most surveys are collected anonymously using an online tool such as SurveyMonkey or Qualtrics (available to students through the RU Library). It is important to communicate to participants that all data will be collected and reported anonymously.
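The point above about Likert scales being easy to tabulate can be illustrated with a short script. This is a minimal sketch, assuming responses arrive as the five text labels listed earlier; the item text, the `tabulate` helper, and the data are all hypothetical.

```python
from collections import Counter

# Assumed numeric coding for a 5-point Likert scale (adjust to your survey tool's export).
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

def tabulate(responses):
    """Return the frequency of each score and the mean score for one item."""
    scores = [SCALE[r.lower()] for r in responses]
    return Counter(scores), sum(scores) / len(scores)

# Hypothetical responses to one item, e.g. "The training met my needs."
responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
counts, mean = tabulate(responses)
print(dict(counts))   # frequency of each numeric score
print(mean)           # 3.6
```

A real report would do this per item across the whole questionnaire, but the idea is the same: once options carry numbers, summarizing is just counting and averaging.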

To revisit our three types of information: when testing knowledge, use closed-ended factual questions that directly measure course objectives. When measuring attitude, use a five-point Likert scale. When measuring behavior, use a combination of both question types and administer the same questionnaire before training, right after training, and at one-month intervals following training.
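The pre/post/follow-up design for measuring behavior boils down to comparing mean item scores across administrations. A minimal sketch, using hypothetical 5-point scores for one item from the same group of trainees:

```python
def mean_score(scores):
    """Average score for one survey item across respondents."""
    return sum(scores) / len(scores)

# Hypothetical 5-point scores for one behavior item from the same five trainees.
before = [2, 3, 2, 3, 2]   # administered before training
after  = [4, 4, 3, 4, 5]   # administered right after training
follow = [4, 3, 3, 4, 4]   # administered one month later, back on the job

gain      = mean_score(after) - mean_score(before)   # immediate change
retention = mean_score(follow) - mean_score(before)  # change that transferred
print(round(gain, 2), round(retention, 2))           # 1.6 1.2
```

If the follow-up mean stays well above the pre-training mean, that is evidence of transfer; if it slides back toward the baseline, the training effect did not stick.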

Add a comment to this post to tell us about your experience with survey design—particularly what you think makes a survey work or not work effectively.

References and Resources

The books that follow are useful resources and will help you learn more about survey design. The first is more advanced, but the second book, by Converse and Presser, is a quick read.

Bradburn, N., Sudman, S., and Wansink, B. (2004). Asking Questions. San Francisco, CA: Jossey-Bass.

Converse, J. and Presser, S. (1986). Survey Questions: Handcrafting the Standardized Questionnaire. Newbury Park, CA: Sage Publications.


  • Hi Kathleen – great post. I always try to include two specific questions in any survey I design (especially level 1). The first is a predictive question to try to gauge what a level 3 evaluation will show. For example, “How likely is it that your work behaviors will change as a result of this program?”

    The second is a reverse question to make sure the participant is paying attention to the survey and not just clicking through: “How dissatisfied are you with the program?”

    • Mike, your suggestion to add a reverse question is genius! Thanks for sharing.

      I’d never thought to avoid vague options like “often” and “rarely” in favor of “strongly agree” and “agree,” but it makes complete sense. These terms are not one and the same. Particularly when communicating the data, the use of clear language will help stakeholders take the results seriously.

  • Thanks for sharing, Mike. I think both are excellent ways to make the most of a level 1 assessment.


  • I have quite a bit of experience with survey design. I have worked with open-ended questions, which give me a wide range of insight into what is actually going on and let respondents express their opinions.

  • One thing I like to see on surveys is space for comments. If the survey is really short something simple at the end might be enough, but I think when a survey has multiple sections, it’s nice to see a comment option after each section. I also like it when there are some slightly more guided questions, prompting people to comment about a specific issue or topic. In my limited experience, I’ve found the qualitative comments to be just as useful, if not more so, than the quantitative data.

  • Survey design is something I don’t have a ton of experience with, so this article was a huge help in understanding the basics of developing one successfully. Being on the receiving end of surveys, I do see how useful they can be and how harmful they can be. This is, I think, from the reaction stage, of course, as I don’t typically have a hand in analyzing the results. I’m personally more drawn to surveys that follow a Likert format. Having clear quantitative and qualitative options is essential to well-rounded data. If open-ended questions are implemented, I would always push for a maximum of two so that we don’t fall too far into the narrative; at that point, interviews might make more sense. Most importantly, I think a survey should be concise and, if at all possible, no more than one page, even if you have to administer the second half down the line.

  • Factors that contribute to survey designs that work are:
    1. Timely. The survey should be administered shortly after the training program ends so the participants’ experiences can be easily recalled.
    2. Objective and participant-focused, not content- or facilitator-focused. Questions should address the learning objectives outlined in the course curriculum, the classroom environment, and content delivery.
    3. Convenient for the participant. Online surveys allow participants to respond at a convenient time, in a private location (their office or home), without interfering with job duties or responsibilities that could otherwise be a distraction.
    4. Simple, uncomplicated, and not rushed. Less is more when it comes to the number of survey questions; limit them to between 5 and 15 in total. Indicate the average time it will take to complete the survey before the participant begins it.
    5. Say what you mean, mean what you say. Keep the language and sentence structure simple, use words that will not confuse, avoid sentences that can be misinterpreted, and make sure the participant understands how to answer each question.
    6. Categorized to limit bias. Classroom environment, content, and facilitation style can each influence how the participant responds to the survey questions. Asking specific questions and grouping them in their own categories may lessen the influence of biased personal opinion.
    7. Provide space for additional comments, especially if the participant responds unfavorably. Detailed comments help explain how the participant feels and why they responded the way they did, and capture positive and negative feedback that otherwise might not be shared confidentially.

    Factors that contribute to ineffective survey designs are:
    1. Excluding the instructional design team (if different from the survey design team) or facilitation team at the onset of drafting the survey questions. Oftentimes the questions developed miss the mark of the actual content being delivered, which can lead to a breakdown or inaccuracy in survey results.
    2. Doing the opposite of all the factors that contribute to survey designs that work.

  • Great article! It can definitely serve as a “best practice” for collecting feedback after a training program. I believe the best surveys have a combination of open- and closed-ended questions so that participants have a chance to add detail regarding certain areas of a program. I think anonymous surveys capture more honest feedback than those that force self-identification. When administering or completing questionnaires, I prefer the Likert format because it’s quicker while still allowing for a pretty accurate gauge of the experience.


  • Great article! It seems a very simple point but I truly appreciate the thinking about how appearance counts for effective surveys. I certainly find that when great tools are poorly designed it negatively impacts the success of the tool! I recently was at a training and the post workshop evaluation was pretty clearly cut and pasted from another event; it not only had the wrong date but also asked for feedback and responses for content that was not covered. That impacted my willingness to spend time filling in the survey as it was clear it was incorrect. I also second the encouragement for a reasonable length! We often try to do way too much! Brief, well written, logically designed tools are so wonderful!

  • I used to be a program director in nonprofit and as the overseer of several programs it was my job to monitor their effectiveness and expansion. The best way to do this on a cost-effective measure was the use of surveys. They were heavily utilized throughout the organization and I often found myself drafting and tweaking them over time as I did trainings. The data collected became indispensable to constantly improve our programming and training effectiveness. There was certainly a learning curve to it at first. I was by no means a master at drafting surveys and often didn’t even know what I wanted to measure or what I was even looking for. It took time and practice to hone these skills to make more effective data collection and ultimately be able to apply that data to make quality changes to the programs. I am still learning and building on this foundational experience today.

  • This article is great and super helpful. While I lack survey design skills, I have taken plenty of surveys following a training. My favorite arrangement is a survey broken into sections, each with a short block of closed-ended questions followed by an open-ended question or comment field. It helps me focus and be a little more objective rather than rushing through just to complete the survey.

  • Great post! This makes me rethink all the surveys I’ve taken. It also reminds me of metrics: if you don’t do anything with the data points you collect, they are not worth collecting. I see the value of employee satisfaction surveys, and this has me considering their use within my department. Thanks!

  • Great article that outlines the factors necessary when designing a survey. Some of these items almost seem like no-brainers, such as making sure that the questions make sense. However, the emphasis on how the questions are designed is crucial, because questions worded in a leading way will persuade the reader to answer in a particular direction. I have used SurveyMonkey a couple of times, and it would flag me for creating questions that might swing the reader one way or another. Has anyone else used SurveyMonkey, and if so, what were your thoughts on it?

  • Great article about how to properly design surveys. A key component that I always felt was absolutely necessary is confidentiality, when a survey calls for it. You will not get honest comments if participants fear retribution. I know that goes without saying, but too often, even when responses are submitted through sites like SurveyMonkey, people think their responses can be tracked by their I.T. departments. Clear communication about what is being asked, why, and how results will be shared is a great way to build trust, which will impact future surveys.

  • This article will be very helpful for me when I am designing survey questions at my workplace in the future. Measuring attitudes with the five-point scale and using the same questions before and after training will give me useful data to measure improvement.

  • Thanks for this helpful post; it helped with my final project. I knew that I wanted to use a survey as one of my evaluation tools, but I was unsure where to begin. I now know that I will measure attitude, using closed-ended questions with the scale. I also have a question: I believe you stated to use the same questionnaire before, during, and after training. Am I to use a different one for the level 2 and level 3 evaluations?

  • Overall, this is a great blog. The information provided here helps me see the value both as a participant in surveys and as a designer of them. However, I must say that I favor the Likert scale. As mentioned in the blog, tallying the numbers is much easier for the survey designer. And thinking about the end user, making the survey simple but impactful definitely encourages participants to complete it.


  • At my job the surveys are not anonymous, and I know for a fact that it prevents employees from really expressing their concerns with the job. I think concealing identities is one of the most important factors in a truthful survey; if not, then it’s basically all “bad data.” Also, although it takes more time, I feel the most useful surveys ask closed-ended questions while still giving the individual the opportunity to answer an open-ended one. For example, a question might state “I feel appreciated at work,” give the option to check “agree,” “neutral,” or “disagree,” and then also offer the option to elaborate. This allows employees to give more detailed feedback on their most important concerns.

  • In my experience, it has worked. It shows me which students were paying attention to the materials that were presented. This is very important when they hit the call center floor, because now they have to log on to the computer and take calls. If an employee didn’t pay attention in class, they can’t perform on the call center floor.

    They start asking questions that you may have gone over three or four times in training, even though on the survey they indicated that they understood the materials covered in training and had no questions when asked, “Is there something you don’t understand that the trainer can revisit before leaving training?”

  • A brief yet powerful post explaining the importance of surveys. I like how it pinpointed the downside of closed-ended questions and highlighted the possibilities of open-ended questions. I found it especially helpful when time was mentioned and how it could affect how we gather information. I think time is one of the most important resources we have and should always be considered.

  • I really appreciate this article on surveys. My boss currently sends out training surveys after providing a live webinar; however, I personally never felt we developed any useful information from them. I see now that she uses many closed-ended questions that never give us real feedback on how to better the training. I also appreciate the explanation of the basic information usually captured by surveys and the types of questions to ask to obtain that information. I will be using these tips to reassess our current surveys in hopes that we can gather more useful information from our trainees. Thank you!

  • Within the last couple of years, since taking on a role in the OD space, I have designed several surveys as follow-ups to training programs to understand the effectiveness of the program and how the expected behaviors have been applied. I have also designed surveys as part of a stakeholder management approach to understand the opinions of stakeholders on a specific initiative. The challenge I have personally faced is participation: only on one or two of the surveys administered have I received full participation from the group. The company I work for views surveying employees as a poor use of time that pulls them away from their sales goals. Of course, I see that time to provide feedback as an investment in ensuring that the development programs in place support the behaviors that will lead employees to reach their sales goals. With that experience, I’d say what makes surveys work is keeping them short, not cumbersome, and easily understood. I think leaders following up with their employees about completing the survey supports commitment. I also think a feedback loop that shows employees where their feedback is going and how it is being used encourages participation. Conversely, what detracts from survey completion is a leadership culture that frames it as a poor use of time, or surveys that are in fact too long and cumbersome to complete in a productive environment.

  • Eman Abdellatif

    Great topic! Surveys are an essential tool for measuring how well training went. They also create room for improvement, because they give participants a chance to leave feedback and suggest ways to improve.

  • Great article. I agree that “When surveys are well designed, measure what they are supposed to measure, and are stable over time, you’ve got quality data.” During my undergrad, I had to construct many surveys for classes in order to collect data for projects. One of the first things to consider is the type of data and how it should be collected, such as whether it is qualitative or quantitative. For any type of data, I think closed-ended questions tend to be easier to code and understand for research purposes. Evaluating open-ended data can be subjective and lead to discrepancies between the individuals reviewing the results. I do agree that evaluating closed-ended questions on the Likert scale gives a reasonable range of options.

  • Ginger Ulloa-Enright

    This article is extremely helpful and is now a resource for designing surveys/questionnaires. It breaks down the process and provides valuable information on measuring attitude, knowledge, and the combination of both. I found it interesting that research shows participants are more likely to expand on sensitive information when given open-ended questions. Looking back on training surveys I have filled out, I am more honest with feedback when I am presented with an opportunity to answer those types of questions.

    • Ginger Ulloa-Enright

      I am adding some additional thoughts to my previous comment. Since taking TRDV 434 Training Evaluation, I have further insight into developing questionnaires to assess knowledge, attitude, and behavior. Our textbook, “Evaluating Training Programs: The Four Levels” by Donald and James Kirkpatrick, does a great job of offering an abundance of useful examples for creating assessments that are adaptable to other trainings. In Chapter 15, “Evaluating a Leadership Program” (pages 144-199), the authors provide a robust training evaluation that uses extensive questionnaires with closed- and open-ended questions to measure short- and long-term behavior change for managers. The example also contains evaluations that measure behavior change from the managers’ direct reports, as well as interval assessments for up to 6-9 months after the conclusion of the training. This allows for learner accountability and gives a comprehensive look at job transfer and behavior change. If you have not taken TRDV 434 yet and would like to expand your knowledge on the topic, grab the book if you get a chance!

  • Priscila Membreno

    The idea of surveys sounds so simple, yet they can be so powerful. Surveys are the key to determining whether something worked or not. I love surveys because they are so beneficial to the other party. The only time I get frustrated with surveys, or feel they are not effective at all, is when the answer a person has is not among the options: the person has a good answer to the survey question, but the option is not there. Or when you read a question and a word, or the question itself, can be interpreted in two ways. That’s the only time I frown upon surveys. I understand the designers are trying to get good data, and the questions may be great, but you need to ensure your participant has some way to really convey their opinion. As the article mentions, a survey has to be written clearly, with no grammatical errors and no double meanings. The questions have to be as clear as they can be so everyone understands what is being asked.

  • Surveys are beneficial to the success of organizations. They allow individuals to express themselves, articulate what they already know, and say what they feel is needed. Surveys can describe an organization’s needs, and if they are cost-effective, the shareholders will love them even more.

  • This article was very insightful and genuinely addresses the importance and power surveys hold when implemented within an organization. My employer uses Peakon, which is distributed frequently to capture employee feedback regularly, and this tool has been effective since our first day of launching. The employees can express their concerns about anything and everything within the organization. The gathered responses get displayed in personalized dashboards to represent the average score of all who have answered the surveys. This article benefits those seeking to implement an engagement strategy and gather important feedback, especially after a training program.
