Booklet 3: Collecting and Analyzing Evaluation Data


Preface

This booklet is part of the Planning and Evaluating Health Information Outreach Projects series designed to supplement Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach [1]. This series also supports evaluation workshops offered through the Outreach Evaluation Resource Center of the Network of the National Library of Medicine. The goal of the series is to present step-by-step planning and evaluation methods. 

The series is aimed at librarians, particularly those from the health sciences sphere, and representatives from community organizations who are interested in conducting health information outreach projects. We consider "health information outreach" to be promotional and educational activities designed to enhance community members' abilities to find and use health information. A goal of these activities often is to equip members of a specific group or community to better address questions about their own health or the health of family, peers, patients, or clients. Such outreach often focuses on online health information resources such as the websites produced by the National Library of Medicine. Projects may also include other sources and formats of health information. 

We strongly endorse partnerships among organizations from a variety of environments, including health sciences libraries, hospital libraries, community-based organizations and public libraries. We also encourage broad participation of members of target outreach populations in the design, implementation, and evaluation of the outreach project. We try to describe planning and evaluation methods that accommodate this participatory approach to community-based outreach. Still, we may sound like we are talking to project leaders. In writing these booklets we have made the assumption that one person or a small group of people will be in charge of initiating an outreach project, writing a clear project plan, and managing the evaluation process. 

Booklet 1 in the series, Getting Started with Community Assessment, is designed to help you collect community information to assess the need for health information outreach and the feasibility of conducting an outreach project. Community assessment also yields contextual information about a community that will help you set realistic program goals and design effective strategies. The booklet describes three phases of community assessment: 

  1. Get organized,

  2. Collect data about the community, and

  3. Interpret findings and make project decisions.

The second booklet, Planning Outcomes-Based Outreach Projects, is intended for those who need guidance in designing a good evaluation plan. By addressing evaluation in the planning stage, you are committing to doing it and you are more likely to make it integral to the overall project. The booklet describes how to do the following: 

  1. Plan your program with a logic model,

  2. Use your logic model for process assessment, and

  3. Use your logic model to develop an outcomes assessment plan.

The third booklet, Collecting and Analyzing Evaluation Data, presents steps for quantitative methods (methods for collecting and summarizing numerical data) and qualitative methods (specifically, methods for summarizing text-based data). For both types of data, we present the following steps: 

  1. Design your data collection methods,

  2. Collect your data,

  3. Summarize and analyze your data, and

  4. Assess the validity or trustworthiness of your findings.

Finally, we believe evaluation is meant to be useful to those implementing a project. Our booklets adhere to the Program Evaluation Standards developed by the Joint Committee on Standards for Educational Evaluation [2]. Utility standards, listed first because they are considered the most important, specify that evaluation findings should serve the information needs of the intended users, primarily those implementing a project and those invested in the project's success. Feasibility standards direct evaluation to be cost-effective, credible to the different groups who will use evaluation information, and minimally disruptive to the project. Propriety standards uphold evaluation that is conducted ethically, legally, and with regard to the welfare of those involved in or affected by the evaluation. Accuracy standards indicate that evaluation should provide technically adequate information for evaluating a project. Finally, the accountability standards encourage adequate documentation of program purposes, procedures, and results. 

We sincerely hope that you find these booklets useful. We welcome your comments, which you can email to nec@northwestern.edu.

Acknowledgements

We deeply appreciate Cathy Burroughs' groundbreaking work, Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach, and thank her for her guidance in developing the Planning and Evaluating Health Information Outreach Projects series as a supplement to her publication. We also are grateful to our colleagues who provided feedback for the first edition of the series. 

To update the series, we were fortunate to work with four reviewers who brought valuable different viewpoints to their critiques of the booklets. We want to thank our reviewers for their insightful suggestions: 

  • Molly Engle, PhD, Professor and Extension Evaluation Specialist, Oregon State University, Corvallis, OR
  • Sabrina Kurtz-Rossi, MA, Health Literacy Consultant and Principal, Kurtz-Rossi & Associates, Boston, MA
  • Annabelle Nunez, MA, Assistant Librarian, Arizona Health Sciences Center, Tucson, AZ
  • Ann Webb Price, PhD, President, Community Evaluation Solutions, Alpharetta, GA

This project has been funded in whole with federal funds from the Department of Health and Human Services, National Institutes of Health, National Library of Medicine, under Contract No. HHS-N-276-2011-00008-C with the University of Washington. 

Introduction

This booklet provides tips and techniques for collecting and analyzing information about your outreach projects so that you can make decisions like these: 

  • When planning a program, you need to understand the needs of program participants so you can choose outreach strategies that are motivational and supportive.
  • As you monitor project activities, you will need to decide whether to make changes to your plans.
  • As the project nears its end, you will decide how to report the results. You and others invested in the project, referred to as stakeholders, will have to decide how your project made a difference and if your outreach project should be continued.

If you are going to make good decisions about your outreach project − or any other project − you need information or data. In this booklet we use the word "data" to include numerical and text-based information (such as interview transcriptions or written comments) gathered through surveying, observing, interviewing, or other methods of investigation. 

During community assessment, data can help you identify groups that are in particular need of health information outreach [3]. Data also can be used to assess the resources and challenges facing your project. While you are implementing your activities and strategies, data can provide you with feedback for project improvement − this is called process assessment, which we described in Booklet 2, Planning Outcomes-Based Outreach Projects [4]. During outcomes assessment, data can provide the basis for you and other stakeholders to identify and understand results and to determine if your project has accomplished its goals. 

Therefore, much care must go into the design of your data collection methods to ensure accurate, credible, and useful information. To really understand and assess an outreach project, we recommend using multiple and mixed methods when possible: 

  • "Multiple methods" means collecting data from more than one source and not relying on one survey or test or focus group to provide an adequate program assessment.
  • "Mixed methods" means that various types of information sources are used to assess your project.

We provide an example of how to use mixed methods in the Toolkit that starts on page 33. 

When possible, project evaluation should combine both quantitative and qualitative methods. Quantitative methods gather numerical data that can be summarized through statistical procedures. Qualitative methods collect nonnumerical data, often textual, that can provide rich details about your project. Each approach has particular strengths, and when used together they can provide a thorough picture of your project. (Note: When we talk about data collection methods, we are referring to procedures or tools designed to gather information. Surveys and interviews are data collection methods. When we compile, summarize, and analyze the data, we use the term "analytic methods.") 

Quantitative Methods

This booklet is organized into two sections: one for quantitative methods and one for qualitative methods. After a brief overview, each section focuses on a specific method that is common and applicable to a variety of evaluation projects. In the quantitative section, the survey method has been chosen. For the qualitative section, interviewing is the method addressed. 

However, we should note that neither surveys nor interviews are limited to collecting one type of data. Either method can be designed to produce qualitative or quantitative data. Often, they are designed to collect a combination of both. 

You choose the type of method based on the evaluation question you want to answer. Evaluation questions describe what you want to learn (as differentiated from survey and interview questions, which are carefully worded, sequenced, and formatted to elicit responses from participants). Figure 1 provides an approach to selecting the type of method. 

This section will take you through the steps of using quantitative methods for evaluation, as shown above in Figure 2. Any piece of information that can be counted is considered quantitative data, including: 

  • Attendance at classes or events
  • Participation or drop-out rates
  • Test scores
  • Satisfaction ratings

Quantitative methods show the degree to which certain characteristics are present, such as frequency of activities, opinions, beliefs, or behaviors within a group. They can also provide an "average" look at a group or population. 

The advantage of quantitative methods is the amount of information you can quickly gather and analyze. The questions listed below are best answered using quantitative methods: 

  • What is the average number of times per week that workshop participants search online for health information?
  • How many clinics in our outreach project have bookmarked National Library of Medicine resources on at least one of their computers?
  • On average, how much did trainees' confidence in using online health information resources improve after training?
  • What percentage of participants in a PubMed training session said their skills in using the resource improved as a result of taking the course?
  • How many people visited the resource website during the grant period?
  • What percentage of visitors to a booth at a health fair showed interest in finding prescription drug information online?
  • How likely are participants on average to recommend MedlinePlus to others?
  • What percentage of users improved their ability to find good consumer health information as a result of our sessions?

Appendix 1 describes some typical data collection methods for quantitative data. 

Step One - Design Your Data Collection Methods

This section will focus on one of the most popular quantitative methods: surveys. This method has been chosen because of its usefulness at all stages of evaluation. Surveys use a standard set of questions to get a broad overview of a group’s opinions, attitudes, self-reported behaviors, and demographic and background information. In this booklet, our discussion is limited to written surveys such as those sent electronically or through surface mail. 

Write your evaluation questions

The first task in developing the questionnaire is to write the general evaluation questions you want to answer with the survey. Evaluation questions describe what you want to learn by conducting a survey. They are different from your survey questions, which are specific, carefully formatted questions designed to collect data from respondents related to the evaluation questions. (We will use the term "survey items" when referring to survey questions to distinguish them from evaluation questions.) 

Listed below are examples of evaluation questions associated with different phases of evaluation: 

  • Community assessment [3]. During the planning stages of an outreach project, you can use surveys to assess your outreach community members' beliefs, attitudes, and comfort levels in areas that will affect your outreach strategies. Evaluation questions may be:
    −"What health information resources do people in this community use most often?
    −"How many community members are experienced Internet users?"

If you have a logic model [4], you should review the input and activities sections to help you focus the community assessment questions. You also should consider the information you might want to gather to check assumptions listed in the assumptions section of the logic model.

  • Process assessment. Surveys are often used mid-project to get participants' feedback about the quality of the activities and products of your outreach project. So your evaluation questions might be:
    −"How do participants rate the effectiveness of our teaching methods?"
    −"How do participants rate the usefulness of the online resources we are providing?"
    −"How many people are likely to use the health resources after the training session?"
    −"How do participants rate the quality of the training session?
    "

You should look at the activities and inputs column of your logic model to determine the questions you might want to ask. You also can check the outcomes columns to determine if your survey can help you collect baseline information that will allow you to assess change. 

  • Outcomes assessment. At this stage, you use surveys to help assess the results of your outreach project. So questions might include:
    −"Do participants use the online resources we taught after they have completed training?"
    −"Have participants talked with their physicians about something they found at MedlinePlus?"
    −"How many health care professionals trained in our study said they retrieved information from MedlinePlus to give to a patient?"

    When designing a survey for outcomes assessment, review the outcomes columns of your logic model.

Develop the data collection tool (i.e., questionnaire)

Your next task is to write survey items to help you answer your evaluation questions. One approach is to use a table like that shown in Figure 3, above, to align survey items with evaluation questions. 

Writing surveys can be tricky, so you should consider using questions from other projects that already have been tested for clarity and comprehension. (Although adopting items from existing questionnaires does not mean that you should forgo your own pilot test of your questionnaire.) Journal articles about health information outreach projects sometimes include complete copies of questionnaires. You can also contact the authors to request copies of their surveys. You also could try contacting colleagues with similar projects to see if they are willing to share their surveys. However, if you do copy verbatim from other surveys, always be sure to secure permission from the original author or copyright holder. It also is a collegial gesture to offer to share your findings with the original authors. Figure 4 gives you six examples of commonly used item formats. 

The visual layout of your survey is also important. Commercial websites that offer online survey software give examples of how to use layout, color, and borders to make surveys more appealing to respondents and easier for them to complete. There are several popular commercial products to create web-based surveys, such as SurveyMonkey (http://surveymonkey.com) and Zoomerang (http://www.zoomerang.com).

In most cases, you will want to design online surveys that are accessible to respondents with disabilities. This means that your survey should be usable by respondents who use screen reader software, who need high contrast, or who have limited use of a keyboard. SurveyMonkey.com states that its application meets all current U.S. Federal Section 508 certification guidelines and that, if you use one of its standard questionnaire templates, your survey will be 508 compliant. If you are not using SurveyMonkey or one of its templates, you should read tips from SurveyMonkey.com about how to make your questionnaires accessible by visiting its Section 508 Compliancy tutorial page [5].

Pilot test the questionnaire

Always pilot test your questionnaire before you send it to the target audience. Even if you think your wording is simple and direct, it may be confusing to others. A pilot test will reveal areas that need to be clarified. First, ask one or two colleagues to take the survey while you are present and request that they ask questions as they respond to each item. Make sure they actually respond to the survey because they may not pick up confusing questions or response options just by reading it. 

Once you have made adjustments to the survey, give it to a small portion of your target audience and look at the data. Does anything seem strange about the responses? For instance, if a large percentage of people are picking "other" on a multiple-option question, you may have missed a common option. 

The design stage also entails seeking approval from appropriate committees or boards that are responsible for the safety and well-being of your respondents. If you are working with a university, most evaluation research must be reviewed by an Institutional Review Board (IRB). Evaluation methods used in public schools often must be approved by the school board, and community-based organizations may have their own review processes that you must follow. Because many evaluation methods pose little to no threat to participants, your project may not require a full review. Therefore, you should consider meeting with a representative from the IRB or other committee to find out the best way to proceed with submitting your evaluation methods for approval. Most importantly, it is best to identify all these review requirements while you are designing your methods. Otherwise, your evaluation may be significantly delayed.

Once you have pilot-tested the survey and obtained the required approvals, you are ready to administer it to your entire sample.

Step Two - Collect Your Data

Decide who will receive the questionnaire

As part of planning your survey, you will decide whether to collect data from a sample (that is, a subgroup) of your target population and generalize the responses to the larger population or to collect data from all participants targeted by the survey. Sampling is used when it is too expensive or time consuming to send a survey to all members of a group, so you send the survey to a portion of the group instead. 

Random sampling means everyone in the population has an equal chance of being included in the sample. For example, if you want to know how many licensed social workers in your state have access to online medical journals, you probably do not have to survey all social workers. If you use random sampling procedures, you can assume (with some margin of error) that the percentage of all social workers in your state with access is fairly similar to the sample percentage. In that case, your sample provides adequate information at a lower cost compared with a census of the whole population. For details about random sampling, see Appendix C of Measuring the Difference [1].
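If your contact list is in electronic form, drawing a simple random sample takes only a few lines of code. The Python sketch below is a minimal illustration, not a prescribed procedure: the sampling frame, the sample size of 400, and the 55% access figure are all hypothetical, and the final line uses the standard 95% confidence approximation for the margin of error of a proportion from a simple random sample.

    import math
    import random

    # Hypothetical sampling frame: one record per licensed social worker
    # in the state (the names and the total of 12,000 are illustrative only).
    population = [f"social_worker_{i}" for i in range(12000)]

    random.seed(42)                            # makes the draw reproducible
    sample = random.sample(population, 400)    # simple random sample, n = 400

    # Suppose 55% of the 400 sampled workers report having access to online
    # medical journals. A common 95% margin-of-error approximation for a
    # proportion from a simple random sample is 1.96 * sqrt(p * (1 - p) / n).
    p, n = 0.55, len(sample)
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"Estimated access: {p:.0%} +/- {margin:.1%}")   # about 55% +/- 4.9%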

With smaller groups, it is possible to send the survey to everyone. In this case, any information you summarize is a description of the group of respondents only. For instance, if you survey all seniors who were trained in your outreach project to use MedlinePlus and 80% of them said they used it at home one month after the session, you can describe how many of your trainees used MedlinePlus after training. This percentage provides important information about a result of your outreach project. However, you cannot make the generalization that 80% of all trained seniors use MedlinePlus at home after they are trained because you have not randomly sampled from among all seniors who received training on the resource.

Maximize response rate

The quality of your survey data depends heavily on how many people complete and return your questionnaire. Response rate refers to the percentage of people who return a survey. When a high percentage of people respond to your survey, you have an adequate picture of the group. But when you have a high percentage of non-responders (members of your sample who did not complete your questionnaire), you cannot be sure whether the non-responders share characteristics that might affect the accuracy of your interpretation of the findings. For example, the non-responders may have been less enthusiastic than responders and therefore not motivated to complete the questionnaire. If they had responded, you might have found lower levels of satisfaction in the total group. If the survey was administered electronically, the responders may be more computer literate than non-responders. Without the participation of these non-responders, your findings may be favorably biased toward the online resources that you are promoting. The problem with a low response rate is that, while you may suspect bias, it is difficult to confirm your suspicion or determine the degree of bias that exists.

Statisticians vary in their opinions of what constitutes a good response rate. Sue and Ritter reviewed survey literature and reported that the median response rate was 57% for mailed surveys and 49% for online surveys [6]. In our experience talking with researchers and evaluators, 50% seems to be the minimal response rate acceptable in the field of evaluation.
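The response rate itself is simple arithmetic: the number of completed questionnaires divided by the number distributed. A minimal sketch, with hypothetical numbers:

    # Hypothetical figures: 220 completed questionnaires out of 400 distributed.
    surveys_sent = 400
    surveys_returned = 220

    response_rate = surveys_returned / surveys_sent * 100
    print(f"Response rate: {response_rate:.0f}%")   # 55%, above the rough 50% benchmark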

Figure 5 defines a typical protocol for administering mailed surveys. Studies show that these procedures are effective for surveys sent either through regular mail or email [7, 8, 9]. Because electronic surveys are becoming increasingly popular, we have provided additional tips for increasing their response rate:

  • Keep your survey as simple as possible so it will load quickly.
  • Avoid requiring a response to a question before participants can move on to the next one. If respondents cannot or do not know how to answer a required question, they are likely to abandon the questionnaire at that point.
  • Give participants an estimated time required to complete a survey. Use a "progress" bar that tells them how much of the questionnaire they have completed and how much is remaining.
  • Scrolling seems to provide better completion rates than forcing respondents to go through pages with one or two questions.
  • Be sure to start your survey with questions that are easy and interesting to answer. Do not start with open-ended questions because they may make the survey seem overwhelming to respondents. Demographic questions should be at the end of the survey because respondents find them boring or, in some cases, offensive.
  • Incentives may help your response rate. For both online and mailed surveys, token incentives (as small as $5) usually are more effective if they are sent along with your first request for respondents' participation (the pre-notification letter). Typically, promises to send incentives after respondents complete a survey have little effect on response rate [8,9,10]. There are exceptions to this finding. One study of incentives with web-based surveys showed that promises to enter respondents into lotteries for $25-$50 gift cards were more effective than pre-paid or promised incentives [11].
  • While incentives seem to boost response rate, most survey researchers think that making multiple contacts has an equal or greater positive effect on response rates than incentives do. So if you have to choose between incentives and postage for follow-up postcards and replacement surveys, choose the latter.

Check for non-response bias

Getting a high response rate can be difficult even when you implement procedures for maximizing it. Because survey researchers have faced declining response rates in recent years, the field of public opinion research has produced a number of studies on the relationship between low response rates and bias (called non-response bias). Some survey researchers have concluded that the impact of non-response is smaller than originally thought [12]. If you fail to get a return rate of 50% or more, you should try to discern where the bias might be:

  • If resources allow, attempt to contact non-responders with a short version of the survey to assess the level of bias in the sample.
  • You can compare your findings against information you collected through focus groups, interviews, and other qualitative methods to see if the numbers are consistent with survey findings.
  • Check to see how closely your respondents match the demographics of your sample. For instance, if you find that a higher proportion of hospital librarians in your sample responded compared with public librarians, you can speculate about how responses to your questions may not be generalizable to public librarians.
  • You also can compare responses of subgroups to further explore bias associated with low representation of a particular subgroup. Using the example from the bullet above, you can compare your public librarians' and hospital librarians' responses to see if your concerns are warranted.

The bottom line is that you should explore your sample for non-response bias. You may decide, in fact, that you should not analyze and report your findings. However, if you believe your data are still valid, you can report your response rate, potential biases, and the results of your exploration of non-response bias to your stakeholders. They can then judge for themselves the credibility of your data. 
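If you keep even basic demographic information about your sampling frame, the comparison suggested above takes only a few lines of code. The sketch below uses hypothetical counts of hospital and public librarians to compare the make-up of the full sample with the make-up of the respondents; the groups and numbers are invented for illustration.

    # Hypothetical counts: who was sent the survey versus who returned it.
    sample_counts = {"hospital librarian": 240, "public librarian": 160}
    respondent_counts = {"hospital librarian": 150, "public librarian": 70}

    sample_total = sum(sample_counts.values())
    respondent_total = sum(respondent_counts.values())

    for group in sample_counts:
        sample_pct = sample_counts[group] / sample_total * 100
        resp_pct = respondent_counts[group] / respondent_total * 100
        print(f"{group}: {sample_pct:.0f}% of sample vs. {resp_pct:.0f}% of respondents")

    # hospital librarian: 60% of sample vs. 68% of respondents
    # public librarian:   40% of sample vs. 32% of respondents
    # Public librarians are under-represented among respondents, so findings
    # may not generalize well to that group.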

Provide motivation and information about risks and participants' rights

The correspondence around surveys is an important part of the overall survey design. The pre-notification letter is your first contact with respondents, and the impression you create will determine the likelihood that they will ultimately complete the questionnaire. It needs to be succinct and motivational.

The cover letter also is a motivational tool for encouraging respondents to participate, and it should inform them, before they begin the survey, of their rights and the potential risks of participation.

This is called informed consent, and it is part of the Program Evaluation Standards described in the booklet preface (specifically, the "propriety standard") [2]. If you must have your project reviewed through an institutional review board (IRB) or some other type of review group, you should get specific details of what should be in the letter. (The IRB will want to see your correspondence as well as your questionnaire.) 

Your reminder notices are your last attempt to motivate participation. They generally are kept short and to the point. Figure 6 provides a checklist for creating the letters and emails used in survey distribution. 

Once you have received the last of your surveys, you have accumulated raw data. To see the patterns and trends that can inform project decisions, you must summarize the raw data so you can then interpret it. 

Step Three - Summarize and Analyze Your Data

Compile descriptive data

The first step in analyzing quantitative data is to summarize the responses using descriptive statistics that help identify the main features of data and discern any patterns. When you have a group of responses for one question on your survey, that group of responses is called a "response distribution" or a "distribution of scores." Each question on your survey, except open-ended ones, creates a distribution of scores. Descriptive statistics describe the characteristics of that distribution. 

For some survey question distributions, you want to see how many respondents chose each possible response. This will tell you which options were more or less popular. You start by putting together a table that shows how many people chose each of the possible responses to that question. You then should show the percentage of people who chose each option. Percentages show what proportion of your respondents answered each question. They convert everything to one scale so you can compare across groups of varying sizes, such as when you compare the same survey question administered to training groups in 2011 and 2012. Figure 7 shows you how to construct a frequency table.
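If your responses are already in a spreadsheet or a simple list, a short script can produce the counts and percentages for a frequency table. The Python sketch below uses a made-up set of 100 responses to a hypothetical item about where respondents usually look for health information.

    from collections import Counter

    # Hypothetical responses to one survey item; the categories and counts
    # are invented for illustration.
    responses = (["internet"] * 46 + ["doctor's office"] * 30 +
                 ["public library"] * 12 + ["other"] * 12)

    counts = Counter(responses)
    total = len(responses)

    print(f"{'Response':<18}{'Count':>6}{'Percent':>10}")
    for option, count in counts.most_common():
        print(f"{option:<18}{count:>6}{count / total:>10.0%}")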

Calculate measures of central tendency and dispersion

You also should determine the "most representative score" of a distribution. The most representative score is called the "measure of central tendency." Depending on the nature of your score distribution, you will choose among three measures:

  • Mode. The most frequent response.
  • Median. The score that is in the middle of the distribution, with half of the scores above and half below. To find it, sort your distribution from highest to lowest ratings, then find the number that equally divides the distribution in half. (If you have an even number of scores, add the two most central scores and divide by two to calculate the median.)
  • Mean. This is known as the "average" response in your distribution. It is computed by adding all responses and dividing by the number of respondents who answered the question.

You also need to calculate the spread of your scores so you know how typical your measure of central tendency is. We call these "measures of dispersion." The most frequently reported measures are the range (the lowest and highest scores reported) and the standard deviation (an index of spread, with a higher standard deviation meaning the scores are more spread out). 
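If you prefer to compute these statistics with a script rather than a spreadsheet, Python's standard statistics module reports each measure directly. The sketch below uses a small, made-up distribution of ratings on a 1-5 agreement scale.

    import statistics

    # Hypothetical ratings from one survey item
    # (1 = strongly disagree ... 5 = strongly agree).
    scores = [5, 4, 4, 4, 3, 3, 2, 5, 4, 2]

    print("mode:", statistics.mode(scores))                  # most frequent response: 4
    print("median:", statistics.median(scores))              # middle score: 4.0
    print("mean:", round(statistics.mean(scores), 1))        # average: 3.6
    print("range:", (min(scores), max(scores)))              # lowest and highest: (2, 5)
    print("std dev:", round(statistics.stdev(scores), 1))    # sample standard deviation: 1.1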

You do not always report all central tendency and dispersion measures. The ones you use depend on the type of data collected by a given item. Figure 8 shows how you would represent these measures for three levels of data. The first level is called nominal-level data. It is considered "first level" because the only information you get from a respondent's answer is whether he or she does or does not belong in a given category. Table A shows a very typical nominal-level question: "Where do you live?" For this categorical data, the measure of central tendency (the "most representative response") is the mode. Percentages tell you how responses disperse among the options.

Table B describes responses to the same question used in Figure 7, on the previous page. The responses to Table B's question give you information about intensity. As you read the response options on Table B from left to right, each option indicates a higher level of agreement with the statement. So if someone marked "strongly agree," that respondent indicated a higher degree of agreement with the statement than a respondent who marked "somewhat agree." Because we can rank responses by their degree of agreement with the statement, they are considered "ordinal-level" data. For ordinal-level data, the "most representative score" is the median. In the example in Table B, the median score is 4. That score indicates that 50% of responses were "4" (somewhat agree) or above and 50% were "4" or below. The range of responses shows how widespread the ratings were in a distribution. For this question, all responses were between 2 ("somewhat disagree") and 5 ("strongly agree"). 

In Table C, the interval/ratio-level data provide even more information than the question in Table B. The question asked respondents how many times they visited their public library in the past 30 days. As with our question in Table B, a higher number means "more." A respondent who visited 4 times visited more often than a person who visited 2 times.

But notice that you also can describe "how much more," because each visit counts an equal amount. So you know that the respondent who went 4 times to the public library visited twice as often as the person who went 2 times. (There is a difference between interval-level and ratio-level data that we will not discuss here because both are described with the same statistics. If you are interested, this difference is described in any basic statistics textbook.) 

For this level of data, the most representative score usually is the mean (also known as the average), and the standard deviation is an index of how far the scores scatter from the mean. If you have a relatively normal distribution (something you probably know as a "bell-shaped" distribution), then approximately 68% of scores will fall between one standard deviation below and one standard deviation above the mean. The standard deviation is really more useful in understanding inferential statistical tests, such as t-tests and correlations. It may not be particularly meaningful to you, but if you report a mean, it is good practice to report the standard deviation. For one thing, it tells readers how similar the people in your sample were in their responses to the question. It also gives you another way to compare samples that responded to the same question. 

Notice that we qualified our statement about the mean being the most representative central tendency measure for interval/ratio-level data. You also will notice that, in Table C, the median and range are reported as well as the mean and standard deviation. Sometimes, you may get a couple of extremely high or low scores in a distribution that can have too much effect on the mean. In these situations, the mean is not the "most representative score" and the standard deviation is not an appropriate measure of dispersion. 

For the data presented in Table C, let's say the highest number of visits was 30 rather than 7. For the fictional data of this example, changing that one score from 7 to 30 visits would alter the mean from 1.8 to 2.7. However, the median (which separates the top 50% of the distribution from the bottom 50%) would not change. Even though we increased that highest score, a median of 1 would continue to divide the score distribution in half. 
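The small sketch below uses made-up numbers chosen to reproduce the pattern described here (they are not the actual Table C data): a single extreme score pulls the mean from 1.8 to 2.7, while the median stays at 1.

    import statistics

    # Hypothetical data for 26 respondents reporting how many times they
    # visited the public library in the past 30 days (illustrative only).
    visits = [0] * 8 + [1] * 9 + [2, 2, 3, 3, 4, 5, 5, 6, 7]

    print(round(statistics.mean(visits), 1), statistics.median(visits))   # 1.8  1.0

    # Change the single highest score (7 visits) to an extreme 30 visits.
    visits[-1] = 30
    print(round(statistics.mean(visits), 1), statistics.median(visits))   # 2.7  1.0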

In fact, there are other reasons you might want to report the median and range rather than the mean and standard deviation for interval/ratio-level data. There are times when the "average" does not make as much sense as the median when you report your findings to others. In Table C, it may make more sense to talk about 2 visits rather than 1.8 visits. (No one really made 1.8 visits, right?) The range is also easier to grasp than the standard deviation. 

One other note: There are times that you might see people report means and standard deviations for data similar to what you see in Table B. Discussions among measurement experts and statisticians about the appropriateness of this practice have been long and heated. Our opinion is that our discussion here reflects what makes sense most of the time for health information outreach. However, we also can see the other point of view and would not discredit this practice out of hand. 

There are other ways to use tables to help you understand your data. Figure 9, Figure 10, Figure 11, and Figure 12 show formats that will help you analyze your descriptive data. After you compile a table, write a few notes to explain what your numbers mean. 

Simplify data to explore trends

You can simplify your data to make the positive and negative trends more obvious. For instance, the two tables in Figure 9 show two ways to present the same data. In Table A, frequencies and percentages are shown for each response category. In Table B, the "Strongly Agree" and "Agree" responses were combined into a "Positive" category and the "Disagree" and "Strongly Disagree" responses were put into a "Negative" category. 
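Collapsing categories like this is easy to script. The sketch below uses hypothetical counts for a single agreement item; the category labels follow Figure 9, but the numbers are invented.

    # Hypothetical counts for one agreement item.
    counts = {"Strongly Agree": 18, "Agree": 22, "Disagree": 7, "Strongly Disagree": 3}

    collapsed = {
        "Positive": counts["Strongly Agree"] + counts["Agree"],
        "Negative": counts["Disagree"] + counts["Strongly Disagree"],
    }

    total = sum(collapsed.values())
    for label, n in collapsed.items():
        print(f"{label}: {n} ({n / total:.0%})")   # Positive: 40 (80%), Negative: 10 (20%)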

Provide comparisons

Sometimes, you may want to see how participants' attitudes, feelings, or behaviors have changed over the course of the project. Figure 10 shows you how to organize pre-project and post-project data into a chart that will help you assess change. Figure 10 also presents means rather than percentages because numbers of websites represent interval-level data. Responses to open-ended questions in which participants may give a wide range of values, such as the number of continuing education credits completed, are easier to describe using averages rather than percentages. 
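When you have matched pre- and post-project responses, computing the averages and the change between them is straightforward. The sketch below uses invented counts of health-related websites reported by ten hypothetical trainees before and after training.

    import statistics

    # Hypothetical matched data: number of health-related websites each
    # trainee reports using regularly, before and after training.
    pre  = [1, 0, 2, 1, 1, 3, 0, 2, 1, 1]
    post = [3, 2, 4, 2, 3, 5, 1, 3, 2, 3]

    pre_mean = statistics.mean(pre)
    post_mean = statistics.mean(post)

    print("pre-training mean:", round(pre_mean, 1))            # 1.2
    print("post-training mean:", round(post_mean, 1))          # 2.8
    print("average change:", round(post_mean - pre_mean, 1))   # 1.6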

You may wonder if the findings vary for the different groups you surveyed. For instance, you may wonder whether nurses, social workers, or members of the general public found your resources as useful as the health librarians who attended your training did. To explore this question, you would create tables that compare statistics for subgroups in your distribution, as in Figure 11.

Finally, you also may want to compare your findings against the criteria you identified in your objectives. Figure 12 gives an example of how to present a comparison of objectives with actual results.

Step Four - Assess the Validity of Your Findings

Validity refers to the accuracy of the data collected through your survey: Did the survey collect the information it was designed to collect? It is the responsibility of the evaluator to assess the factors that may affect the accuracy of the data and present those factors along with results. 

You cannot prove validity. You must build your case for the credibility of your survey by showing that you used good design principles and administered the survey appropriately. After data collection, you assess the shortcomings of your survey and candidly report how they may impact interpretation of the data. Techniques for investigating threats to validity of surveys include the following: 

  • Calculate response rate. As mentioned above, when small percentages of respondents return surveys, the potential for bias must be acknowledged. Present the limitation of your sample size, along with how you investigated potential bias and your conclusions based on your investigation.
  • Look for low completion rate of specific sections of surveys. If many respondents do not complete certain sections of the survey, you will have to question the findings of that part of the survey. For instance, respondents may not finish the survey, leaving final sections or pages blank. As mentioned earlier, progress bars and short surveys minimize the problem of low completion rates.
  • Look for low completion rate of questions. Even if you have a respectable response rate, you may have questions that are left blank by a number of respondents. There are several reasons why respondents do not answer particular questions: they may not find a response that applies to them, the question format may be confusing, or they may not understand the question. The best strategy for avoiding this problem is to pilot-test your questions carefully. If your survey asks questions that are sensitive or threatening, your best strategy for getting responses is to conduct an anonymous survey.
  • Investigate socially desirable responding. Sometimes respondents are embarrassed to answer questions truthfully. If possible, avoid using questions that ask people to disclose information that may be embarrassing or threatening. This challenge may occur if your survey asks respondents to report health behaviors such as drinking, drug use, or even dietary habits. If you must ask such questions, providing anonymity may enhance the accuracy of responses. You may be able to find published studies that estimate the extent to which people in general overestimate or underestimate certain health behaviors (such as daily calorie consumption).

Surveys allow you to collect a large amount of quantitative data, which then can be summarized quickly using descriptive statistics. This approach can give you a sense of the experience of participants in your project and can allow you to assess how closely you have come to attaining your goals. However, based on the analysis given for each table in Figure 9, Figure 10, Figure 11, and Figure 12, you may notice that the conclusions are tentative. This is because the numbers may describe what the respondents believe or feel about the questions you asked, but they do not explain why participants believe or feel that way. Even if you include open-ended questions on your survey, only a small percentage of people are likely to take the time to comment. 

For evaluation, the explanations behind the numbers usually are very important, especially if you are going to make changes to your outreach projects or make decisions about canceling or continuing your efforts. That is why most outreach evaluation plans include qualitative methods.

Qualitative Methods

This section will take you through the steps of using qualitative methods for evaluation, as shown above in Figure 13. Qualitative methods produce non-numerical data. Most typically these are textual data, such as written responses to open-ended questions on surveys; interview or focus group transcripts; journal entries; documents; or field notes. However, qualitative researchers also make use of visual data such as photographs, maps, and videos. 

The advantage of qualitative methods is that they can give insight into your outreach project that you could never obtain through statistics alone. For example, you might find qualitative methods to be particularly useful for answering the following types of questions: 

  • Why were certain activities more effective than others?
  • How did clients change as a result of their training?
  • How did clients use the resources outside of training?
  • Why did some clients continue to use the resources while some did not?
  • What barriers did our project team encounter when implementing the project? Which ones were dealt with effectively and which ones continued to be a problem?
  • What unexpected outcomes (positive or negative) occurred as a result of your project?
  • How was your project valuable to clients and different stakeholder groups?

Qualitative evaluation methods are recommended when you want detailed information about some aspect of your outreach project. Listed here are some examples of the type of information best collected through qualitative methods: 

  • Community assessment. Qualitative methods are useful for identifying factors in the community that may impact the implementation of your project. These may include determining the readiness of different groups in the outreach community to use the technological resources you want to introduce, identifying community resources that can help your outreach effort, or assessing the level of support among community leaders for your project. The descriptive information that you get from qualitative methods such as interviews and observations is particularly helpful for planning outreach projects.
  • Process assessment. As you monitor the progress of your project, qualitative methods are useful for getting specific feedback about outreach activities from those involved in the project and answering "why" questions: Why are morning training sessions more popular than evening ones? Why do we have more women signing up for training sessions than men? Who in the community is not signing up for training sessions and why?
  • Outcomes assessment. Qualitative methods can provide compelling examples of your results in a way that numbers will never capture. While numbers may tell you how many people use MedlinePlus after a training session, you will get descriptive examples of how they used it through qualitative methods. Because of the exploratory nature of most qualitative methods, you also are more likely to find out about unexpected outcomes (positive and negative) when you interview those involved in the project.

Appendix 2 describes some typical qualitative methods used in evaluation. Interviewing individual participants will be the focus of the remainder of this booklet because it is a qualitative method that has broad application to all stages of evaluation. We specifically talk about one-to-one interviewing here. There is overlap between one-to-one and focus group interviewing, but we do not go into details about aspects of focus groups that are important to understand, such as group composition or facilitation techniques. An excellent resource for conducting focus groups is Focus Groups by Krueger and Casey [13].

Step One: Design Your Data Collection Methods

Write your evaluation questions

As with quantitative methods, you design your qualitative data collection process around your evaluation questions. You may, in fact, decide to use both quantitative and qualitative methods to answer the same evaluation questions. For instance, if the evaluation question is "Do participants use the online resources we taught after they have completed training?" you may decide to include a quantitative "yes/no" question on a survey that is sent to all participants, but you also may decide to interview 10-12 participants to learn how they used the resources. 

Develop the data collection tool (i.e., interview guide)

Once you have your list of evaluation questions, the next step is to design an interview guide which lists questions that you plan to ask each interviewee. Interviewing may seem less structured than surveys, but preparing a good interview guide is essential to gathering good information. An interview guide includes all the questions you plan to ask and ensures that you collect the information you need.

Patton [14] discusses different types of interview questions such as those presented in Figure 14 and provides these tips for writing questions:

  • Be sure to ask open-ended questions that cannot be answered with a single word or phrase. If you ask an interviewee "Did you learn anything from the training session about how to judge the quality of online health information?" the interviewee can answer with a simple "yes" or "no." Instead, say to the interviewee "Describe some of the techniques you learned in class about judging the quality of online health information."
  • Also, ask about only one idea per question. You might introduce a line of inquiry with multiple ideas in a statement such as "Now I want to ask about what you like and dislike about PubMed." But then provide focus by saying "First, tell me what you like."
  • Be sure to use language that the interviewee understands and avoid jargon. It is sometimes difficult to recognize jargon or acronyms, so you might want to pilot test your questions with someone outside of your field to make sure the language is understandable.
  • Check "why" questions for vagueness. Questions that start with "why" tend to be unfocused and may be difficult for the interviewee to answer. The question "Why did you decide to become a hospital volunteer?" is more ambiguous than asking "What made you decide to become a hospital volunteer?" or "When you decided to become a volunteer, what made you choose to work in a hospital?"

You also need to pay attention to how you sequence your questions. Here are some tips, also adapted from Patton [14], to help you with the order of your questions: 

  • Start with noncontroversial experience or behavioral questions that are easy to answer, straightforward, and do not rely on much recall. These help you develop rapport with your interviewee before you venture into more personal territory.
  • Demographic questions may be good icebreakers, but they also can be either tedious or highly personal. You do want to start with less-personal questions early on in the interview to establish rapport and to get background information you will need to follow interviewees' answers to subsequent questions.
  • Questions about the present are easier to answer than questions about the past and future. If you plan to ask about the future or past, ask a "baseline" present question such as "Where do you usually go when you need to find health care information?" Then you can ask "Have you gotten health information anywhere else?" followed by "Are there other sources of health information you know about that you might use in the future?"
  • Knowledge and skill questions may be threatening when posed out of context. Try embedding them with experience questions. For instance, you might first ask "What training sessions have you taken that taught you about online consumer health resources?" followed by "What did you learn in those sessions that you now use?"

Pilot test the interview guide

As with a survey, it is a good idea to pilot-test your interview questions. You might pilot-test your guide with someone you are working with who is familiar with your interviewees. (This step is particularly important if your interviewees are from a culture that is different from your own.) Sometimes evaluators consider the first interview a pilot interview. Any information they gather on the first interview is still used, but they revisit the question guide and make modifications if necessary. Unlike surveys, question guides do not have to remain completely consistent from interview to interview. While you probably want a core set of questions that you ask each interviewee, it is not unusual to expand your question guide to confirm information you learned in earlier interviews. 

Finally, be sure your interview project is reviewed by the appropriate entities (such as your IRB). Because interviews are so personal and conversational, they may not seem like research, and you may forget that they are subject to the same review procedures as surveys. Do not make that mistake, or you may face a delay in collecting your data. 

Step Two: Collect Your Data

Decide who will be interviewed

Like quantitative methods, interviewing requires a sampling plan. However, random sampling usually is not recommended for interviewing projects because the total number of interviewees in a given project is quite small. Instead, most evaluators use purposeful sampling, in which you choose participants who you are sure can answer your questions thoroughly and accurately. 

There are a number of approaches to purposeful sampling, and use of more than one approach is highly recommended. The following are just a few approaches you can take to sampling [14]:

  • You may want to interview "typical" users or participants, such as the typical health information consumer or health care provider in your community.
  • To illuminate the potential of your project, you may decide to interview people who have made the most out of the training you have offered.
  • To explore challenges to your strategies and activities, you might choose to interview those who did not seem to get as much from the project or chose not to participate in outreach activities.
  • You may decide to sample for diversity such as interviewing representatives from all of the different groups involved in or affected by the project. For example, you may want to talk to different types of librarians who use your services.
  • You might set criteria for choosing interviewees, such as participants who completed 3 of 4 training sessions.
  • You can ask your project partners, team members, participants or stakeholders to recommend interviewees. (You can even ask other interviewees.) This is known as a snowball or chain approach where you ask knowledgeable people to recommend other potential interviewees.
  • There are occasions where random sampling of interviewees is warranted. In some cases, you will increase credibility of your results if you can demonstrate that you chose participants without knowing in advance how they would respond to your questions. In some circumstances, this is an important consideration. However, you must realize that a random sample generated for qualitative evaluation projects is too small to generalize to a larger group. It only shows that you used a sampling approach that ruled out your biases in choosing interviewees.

Convenience samples, in which participants are chosen simply because they are readily accessible, should be avoided except when piloting survey methods or conducting preliminary research. The typical "person-on-the-street" interview you sometimes see on the evening news is an example of a convenience sample. This approach is fast and low-cost, but the people who agree to participate may not represent those who can provide the most or best information about the outreach project. 

A common question asked by outreach teams is "How many interviews do we need to conduct?" That question can be answered in advance for quantitative procedures but not for qualitative methods. The usual suggestion is that you continue to interview until you stop hearing new information. However, resource limitations usually require that you have some boundaries for conducting interviews. Therefore, your sampling design should meet the following criteria: 

  • You should be able to articulate for yourself and stakeholders the rationale for why you have selected the interviewees in your sample.
  • Your list of interviewees should be adequate in number and diversity to provide a substantial amount of useful information about your evaluation questions.
  • The number and diversity of your interviewees should be credible to the project's stakeholders.

Provide informed consent information

Interviewing is a much more intimate experience than completing surveys, and the process is not anonymous. The ethics of interviewing require that you provide introductory information to help the interviewee decide whether to participate. You can provide this information in writing, but you must be sure the person reads and understands it before you begin the interview. If your project is to be reviewed by an IRB, the board's guidelines will help you develop an informed consent process. However, with or without institutional review, you should provide the following information to your interviewees:

  • The purpose of the interview and why their participation is important
  • How their responses will be reported and to whom
  • How you plan to protect the interviewees' confidentiality
  • The risks and benefits of participation
  • The voluntary nature of their participation and their right to refuse to answer questions or withdraw from the interview at any time.

If you want to record the interview, explain what will happen to the recording (e.g., who else will hear it, how it will be discarded). Then gain permission from the interviewee to proceed with the recording. 

Record the interviews

It is usually a good idea to record your interviews, unless your interviewee objects or becomes visibly uncomfortable. You may not transcribe the interview verbatim, but you will want to review the interview to get thorough notes. Here are some relatively inexpensive tools that will help you record and transcribe interviews:

  • Digital recorders unobtrusively record interviews that can be loaded onto a hard drive.
  • To record telephone interviews, you can purchase telephone "pick-up" microphones that plug into the digital recorder and sit in your ear where you place the phone receiver.
  • Online web meeting systems often have recording options for online interviews and focus groups.
  • Smart pens combine digital recording functionality with written note-taking. You can record the conversation while you write notes in notebooks designed specifically to use with these pens. Later, you can touch the pen to your notes and hear the recorded discussion that took place as you wrote that note. The recordings can be loaded onto a hard drive for transcription.
  • You can purchase transcription software packages that will facilitate transcribing interviews. Software that reliably transcribes recordings with two or more speakers directly into text is not yet available, so transcribers still have to listen to recordings and type them out. However, transcription software such as HyperTRANSCRIBE (http://www.researchware.com) allows you to pause, rewind, and fast-forward recordings with keyboard strokes.

Build trust and rapport through conversation

How you conduct yourself in an interview and your ability to build trust and rapport with an interviewee will affect the quality of the data you collect. Patton wrote, "It is the responsibility of the interviewer to provide a framework within which people can respond comfortably, accurately, and honestly to open-ended questions" [14]. To provide this framework, you have to be a good listener. Sound consultant Julian Treasure uses the acronym RASA (which means "essence" in Sanskrit) to describe four steps to effective listening [15]:

  1. Receive: Pay attention to your interviewee.
  2. Appreciate: Show your appreciation verbally by saying "hmm" and "okay."
  3. Summarize: Repeat what you heard.
  4. Ask: Further your understanding by asking follow-up questions.

(You can hone your skills by practicing with family, friends and colleagues. They probably will be happy to accommodate you.) 

An interview is a social exchange, and your question order should reflect the social norms of conversation between strangers. Consider how you talk with a stranger you meet at a party. You usually start with easy, safe questions and then, if you build rapport in the first stages of conversation, you move on to the stranger's opinions, feelings, and more personal topics. To protect the comfort of your interviewee, you might incorporate some of the following tips [14]:

  • Frame questions so people feel that they are like others (or "normal"). For example, if you want to know about an interviewee's experience discussing alternative medicine with a doctor, you might say: "Some people find it easy to talk to their doctors about alternative medical treatments while others feel intimidated about bringing up the topic. What has your experience been with talking to your doctor about alternative medicine?"
  • It is okay to use "presupposition questions" in interviews. A question such as "What problems do you have finding online health information?" presupposes that your interviewee has had trouble finding information online. Presuppositions are poor form in survey questions, but they serve a social facilitation function in interviews: the wording signals to the interviewee that it is perfectly natural for people to have difficulty finding online information. If you instead ask "Do you ever have problems finding information about your health condition?" the interviewee may fear looking inept by saying "Yes." And, unlike with a poorly phrased self-administered survey question, the interviewee can always tell you, "I've never actually had any problems finding information online." If you want to avoid presuppositions, you can modify the question to "What problems, if any, do you have finding health information online?"
  • You may need to encourage people to answer complicated or emotionally difficult questions. You may preface questions by saying things such as "I know this question seems vague, but interpret it in the way you think appropriate..." or "This question may seem a little controversial, but your perspective is really valuable..."
  • People sometimes don't like to admit to behaving poorly or doing something "wrong." So rather than asking "What are some reasons you do not do the things you know you should to control your blood sugar?" use a more abstract phrasing such as "What are some reasons people with your condition do not always do the things they should do?"

Start the analysis during data collection

Step Three describes how to summarize and analyze your interview data, but you should start doing some interpretation during the data collection stage. In preparation for that step, take reflective notes about what you heard soon after each interview (preferably within 24 hours). Reflective notes differ from the notes you take during the interview to capture what the participant is saying: they should include your own commentary on the interaction. Miles and Huberman [16] suggest these memos should take from a few minutes to a half hour to write and could address some of the following:

  • What do you think were the most important points made by the interviewee? Why do you consider these important? (For example, note whether the respondent returned to the topic several times or whether other interviewees mentioned the same points.)
  • How did the information you got in this interview corroborate other interviews?
  • What new things did you learn? Were there any contradictions between this interview and others?
  • Are you starting to see some themes emerging that are common to the interviews?
  • Was there any underlying "meaning" in what the informant was saying to you?
  • What are your personal reactions to things said by this informant?
  • Do you have any doubts about what the interviewee said (e.g., was he or she not sure how open he or she could be with you)?
  • Do you have any doubts about the quality of the information from other interviewees after talking with this person?
  • Do you think you should reconsider how you are asking your interview questions?
  • Are there other issues you should pursue in future interviews?
  • Did something in this interview elaborate or explain a question you had about the information you are collecting?
  • Can you see connections or contradictions between what you heard in this interview and findings from other data (such as surveys, interviews with people at other levels of the organization, etc.)?

Be sure to add descriptive information about the encounter: time, date, place, and interviewee. You also can start to generate a list of codes with each reflective note and write the codes somewhere in the margins or in the corner of your memo. 

By starting to process your notes during the data collection process, you may start to find themes or ideas that you can confirm in subsequent interviews. This reflective practice also will make Step Three a little less overwhelming.

Step Three: Summarize and Analyze Your Data

Those who are "number phobic" may assume that analyzing non-numerical data is easier than analyzing quantitative data. However, the sheer amount of text that accumulates in even the simplest evaluation project makes the data analysis task daunting. It may help to remember the goals of qualitative data analysis in the context of program evaluation [17]:

  • Distill raw textual data into a brief summary.
  • Link the findings to your evaluation questions in a way that is transparent to your stakeholders and is defensible given how you collected and interpreted your data.
  • Provide a coherent framework for your findings that describes themes and how they are connected to other themes and to your quantitative findings.

There are various approaches to data analysis used by qualitative researchers. We have adapted an approach developed specifically for evaluation by Thomas [17]. We suggest you approach the data analysis process in phases. 

Prepare the text

Interviews may be transcribed verbatim, or you may produce summaries based on reviews of recordings or notes. If you are fortunate enough to pay a transcriptionist, you should still review your recordings and check the transcript for accuracy. Interviewers with more limited resources will produce detailed summaries from their notes and then fill in details by reviewing the recordings. In some instances, interviewers may have to simply rely on notes for their summaries. If you are not using verbatim transcripts, it is a good idea to have your interviewees review your summary for accuracy. Your transcripts, regardless of detail level, are your raw data. Each summary should be contained in its own document and identified by interviewee, date, location, and length of interview. It is also helpful for future analysis to turn on the "line numbering" function (use the continuous setting) so you can identify the location of examples and quotes. 
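If your software does not offer a continuous line-numbering function, or if you keep transcripts as plain-text files, a few lines of scripting can add the numbers for you. The short Python sketch below is only an optional illustration; the file names are hypothetical.

    # Optional sketch: add continuous line numbers to a plain-text transcript so
    # that examples and quotes can be cited by line. The file names are
    # hypothetical; adjust them to match your own transcripts.

    def number_transcript(in_path: str, out_path: str) -> None:
        """Write a copy of the transcript with a line number in front of each line."""
        with open(in_path, encoding="utf-8") as src, \
             open(out_path, "w", encoding="utf-8") as dst:
            for number, line in enumerate(src, start=1):
                dst.write(f"{number:4d}  {line}")

    number_transcript("interview_03_summary.txt", "interview_03_numbered.txt")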

Note themes (or categories)

Once you have transcribed or summarized the information, read through all the qualitative data, noting themes or "categories." Create a code book to keep track of your categories, listing a category label (a short phrase that can be written easily in margins or entered into a qualitative software package) and a description (a longer definition of what the category label covers). You probably will have two tiers of categories. Upper-level categories are broader and may be associated with your evaluation questions. For instance, if you conducted interviews to learn how participants in a training session are using the training and whether they have recommendations for improving future sessions, you might read through the notes looking for examples that fit themes such as "results," "unexpected outcomes," "barriers to project implementation," and "suggestions for improvement." Lower-level categories emerge from phrases in the text; they may or may not be subthemes of your upper-level categories.
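A code book does not require special software; it can live in a spreadsheet, a table in your word processor, or even a short script. The minimal Python sketch below shows one possible structure for a two-tier code book. All labels and descriptions here are hypothetical examples, not prescribed categories.

    # Minimal sketch of a two-tier code book: upper-level categories tied to
    # evaluation questions, each with a description and optional lower-level codes.
    # All labels and descriptions are hypothetical examples.
    code_book = {
        "results": {
            "description": "Ways participants report using the training on the job",
            "lower_level": ["client questions answered", "increased confidence"],
        },
        "barriers to project implementation": {
            "description": "Anything participants say kept them from using the resources",
            "lower_level": ["limited computer access", "lack of time"],
        },
        "suggestions for improvement": {
            "description": "Recommendations for future training sessions",
            "lower_level": [],
        },
    }

    # Print the code book as a quick reference sheet for coders.
    for label, entry in code_book.items():
        print(f"{label}: {entry['description']}")
        for code in entry["lower_level"]:
            print(f"  - {code}")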

Code the text

Systematically code your material. You do this by identifying "units" of information and categorizing them under one of your themes. A unit is a collection of words related to one main theme or idea and may be a phrase, sentence, paragraph or several paragraphs. Note that not all of your information may be relevant to your evaluation questions. You do not need to code all of your data. 

Try to organize your categories into major themes and subthemes. Combine categories that seem redundant. Thomas [17] suggests refining your categories until you have 3-8 major themes. To describe themes, identify common viewpoints along with contradictory opinions or special insights. Highlight quotes that seem to present the essence of a category. 

One simple approach to coding is to highlight each unit of information using a different color for each upper-level category. Then pull the upper-level categories into one table or document and apply subthemes. (See Figure 15 and Figure 16 for an example of how to do this.) For simpler projects, this process is manageable with tables and spreadsheets. If you have more complicated data, you may want to invest in a qualitative software package. There are various popular packages, including ATLAS.ti (http://www.atlasti.com/) and NVivo 9 (https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/buy-now). We have experience with HyperRESEARCH, which is produced by Researchware (http://www.researchware.com), the same company that offers HyperTRANSCRIBE. HyperRESEARCH includes helpful tutorials for how to use the software.
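If a full qualitative software package is more than your project needs, a spreadsheet or a short script can serve the same bookkeeping purpose. The minimal sketch below, which reuses the hypothetical categories from the code book sketch above, stores coded units and groups them by upper-level category; it is an illustration, not a recommended tool, and all names and quotes are invented.

    # Minimal sketch: store coded units (interviewee, transcript line, upper-level
    # category, lower-level category, text) and group them by upper-level category.
    # All category names, interviewee labels, and quotes are hypothetical examples.
    from collections import defaultdict

    coded_units = [
        {"interviewee": "Staff-01", "line": 42, "upper": "results",
         "lower": "client questions answered",
         "text": "I looked up a drug interaction for a client."},
        {"interviewee": "Volunteer-02", "line": 17, "upper": "barriers to project implementation",
         "lower": "limited computer access",
         "text": "We only have one computer at the front desk."},
        {"interviewee": "Staff-03", "line": 88, "upper": "results",
         "lower": "increased confidence",
         "text": "I feel more sure about which sites to trust."},
    ]

    # Group the coded units under their upper-level categories.
    by_theme = defaultdict(list)
    for unit in coded_units:
        by_theme[unit["upper"]].append(unit)

    # Print each theme with its units, citing interviewee and transcript line.
    for theme, units in by_theme.items():
        print(f"\n{theme} ({len(units)} units)")
        for u in units:
            print(f'  {u["interviewee"]}, line {u["line"]}: [{u["lower"]}] "{u["text"]}"')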

Interpret results

Produce written summaries of the categories. Each summary should include the broader theme, the subthemes, a written definition of the category, and examples or quotes. See Figure 17 for how to produce these category write-ups.

Eventually you want to go beyond just summarizing the categories in your data. You should interpret the findings to answer questions such as: 

  • What worked well?
  • What were the challenges?
  • What can be improved?
  • What stories and quotes demonstrate the positive outcomes of our project?
  • What unexpected findings were reported?

You also might describe classifications of answers, such as categories of how people used MedlinePlus after training. 

The analysis might even involve some counting. For instance, you might count how many users talked about looking up health information to research their own health issues and how many used it to look up information for others. This will help you assess which uses were more typical and which ones were unusual. However, remember these numbers are only describing the group of people that you interviewed; they cannot be generalized to the whole population. 
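For example, if your coded units are stored in a table like the one sketched earlier, a few lines of code can tally how many different interviewees mentioned each category. The sketch below uses hypothetical data, and the resulting counts describe only the people you interviewed.

    # Minimal sketch: count how many different interviewees mentioned each category.
    # The (interviewee, category) pairs are hypothetical examples pulled from coded
    # units; the counts describe only the interviewees, not the whole population.
    from collections import defaultdict

    mentions = [
        ("Staff-01", "looked up own health issue"),
        ("Staff-01", "looked up information for others"),
        ("Volunteer-02", "looked up information for others"),
        ("Staff-03", "looked up information for others"),
    ]

    interviewees_per_category = defaultdict(set)
    for person, category in mentions:
        interviewees_per_category[category].add(person)

    for category, people in sorted(interviewees_per_category.items()):
        print(f"{category}: mentioned by {len(people)} interviewee(s)")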

It is a good idea to describe both the typical and the unusual cases in each category. You want to look for contradictory findings or findings that differ across groups. For example, you may find that doctors preferred different online health resources than did health educators or nurse practitioners. 

There are numerous approaches to analyzing qualitative data. Two excellent resources for beginners are "Analyzing Qualitative Data" on the University of Wisconsin-Extension website [18] and Glesne's Becoming Qualitative Researchers [19]. Qualitative Data Analysis by Miles and Huberman [16] also provides methods for analysis, although it is a little more advanced.

Step Four: Assess the Trustworthiness of Your Findings

With quantitative data, you assess your findings for validity, which is roughly synonymous with accuracy. With qualitative analysis, you are exploring varying viewpoints, so qualitative researchers favor the term "trustworthiness" over "validity." A trustworthy report will focus on presenting findings that are fair and represent multiple perspectives [20].

Use procedures that check the fairness of your interpretation

Listed below are some approaches that you can choose from to assess the trustworthiness of your findings [14, 17, 19]:

  • It is helpful to have two coders who can review a portion of the qualitative data and independently generate categories. The coders compare their lists to check for overlap. The two coders can then define a set of codes that merges both lists. The remainder of the data is then coded by one or both coders.
  • One coder codes a portion of the data and then gives the coded data to a second coder to review; the second coder then codes a different portion of the data. The two coders compare notes and refine the category definitions before continuing with the analysis.
  • Check for consistency in findings with data collected through other methods. This is called "triangulation."

When you interview, you should use at least one other source of data to see whether the findings corroborate one another. For instance, you may compare interview data to focus group data or to written comments on training evaluation forms. You do not have to triangulate with other qualitative data; in evaluation, it is not unusual to compare interview findings with survey data.

Present findings to reflect multiple points of view

  • As you draw conclusions about your qualitative data, see if you can find information in the rest of the data that contradicts your interpretation or provides a different perspective.
  • Ask interviewees to read your report and provide feedback on your representations of their views.
  • Provide draft copies of your report to stakeholders and get their feedback. They will weigh your conclusions against their own experiences and ask you questions that may help you clarify your interpretations.
  • Check your interpretation against studies you find in the literature. For example, do published reports of health information outreach projects with lay health advisers present findings similar to the ones you identified in your coding project?

Take-Home Messages

  1. Be prepared to mix qualitative and quantitative data. Mixed approaches often tell the whole story better than either approach alone.
  2. Quantitative methods are excellent for exploring questions of "quantity:" how many people were reached; how much learning occurred; how much opinion changed; or how much confidence was gained.
  3. The two key elements of a successful survey are a questionnaire that yields accurate data and a high response rate.
  4. With surveys, descriptive statistics usually are adequate to analyze the information you need about your project. Using tables to make comparisons also can help you analyze your findings.
  5. Qualitative methods are excellent for exploring questions of "why," such as why your project worked; why some people used the online resources after training and others did not; or why some strategies were more effective than others.
  6. A good interview study uses a purposeful approach to sampling interviewees.
  7. Well-constructed and sequenced questions, along with good listening skills, facilitate the interview conversation.
  8. Analysis of interview data entails systematic coding and interpretation of the text produced from the interviews. Multiple readings of the data and revised coding schemes are typical.
  9. In interpreting and reporting findings from qualitative data analysis, make sure your interpretations are thorough, accurate, and inclusive of all viewpoints.

References

  1. Burroughs C. Measuring the difference: Guide to planning and evaluating health information outreach [Internet]. Seattle, WA: Network of the National Library of Medicine, Pacific Northwest Region; 2000 [cited 28 Feb 2012]. http://nnlm.gov/evaluation/guides.html#A1
  2. Yarbrough DB, Shulha LM, Hopson RK, Caruthers FA. The Program Evaluation Standards: A guide for evaluators and evaluation users. 3rd ed. Thousand Oaks, CA: Sage; 2011.
  3. Olney CA, Barnes SJ. Planning and evaluating health information outreach projects. Booklet 1: Getting started with community assessment, 2nd edition. Seattle, WA: Network of the National Library of Medicine Outreach Evaluation Resource Center; 2013.
  4. Olney CA, Barnes SJ. Planning and evaluating health information outreach projects. Booklet 2: Planning outcomes-based outreach projects, 2nd edition. Seattle, WA: Network of the National Library of Medicine Outreach Evaluation Resource Center; 2013.
  5. Survey Monkey. Tutorial: Section 508 compliancy [Internet] [cited 8 May 2012]. Note: this tutorial no longer exists. For more information about SurveyMonkey and section 508 compliancy, see their Creating Accessible Surveys webpage.
  6. Sue VM, Ritter LS. Conducting online surveys. Thousand Oaks, CA: Sage; 2007.
  7. Cui WW. Reducing error in mail surveys [Internet]. Practical assessment, research & evaluation. 2003; 8(18) [cited 17 March 2012]. http://pareonline.net/getvn.asp?v=8&n=18
  8. Dillman DA, Smyth JD, Christian LM. Internet, mail, and mixed-mode surveys: The tailored design method. 3rd ed. Hoboken, NJ: Wiley; 2009.
  9. Millar MM, Dillman DA. Improving response to web and mixed-mode surveys [Internet]. Public opinion quarterly. 2011 Summer; 75(2): 249–269 [cited 8 May 2012].
  10. Birnholtz JF, Horn DB, Finholt TA, Bae SJ. The effects of cash, electronic, and paper gift certificates as respondent incentives for a web-based survey of technologically sophisticated respondents. Social science computer review. 2004 Fall; 22 (3): 355-362 [cited 8 May 2012]. http://www-personal.umich.edu/~danhorn/reprints/Horn_2004_Web_Survey_Incentives_SSCORE.pdf
  11. Bosniak M, Tuten TL. Prepaid and promised incentives in web surveys [Internet]. Paper presented to the 57th American Association of Public Opinion Research Annual Conference, St. Pete Beach, FL; 2002 [cited 8 May 2012]. http://www.psyconsult.de/bosnjak/publications/AAPOR2002_Bosnjak_Tuten.pdf
  12. Langer G. About response rate [Internet]. Public Perspectives. 2003 May/June; 16-18 [cited 8 May 2012]. http://www.aapor.org/AAPOR_Main/media/MainSiteFiles/Response_Rates_-_Langer.pdf [new link - 9 December 2016]
  13. Krueger RA, Casey MA. Focus groups: a practical guide for applied research. 4th ed. Los Angeles, CA: Sage; 2009.
  14. Patton MQ. Qualitative Research & Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage; 2002.
  15. Treasure J. 5 ways to listen better [Internet]. TEDtalks. 2011-July [cited 8 May 2012]. https://www.ted.com/talks/julian_treasure_5_ways_to_listen_better [new link 9 December 2016] [Transcript available by clicking on "interactive video" under the video.]
  16. Miles MB, Huberman M. Qualitative data analysis. 2nd ed. Thousand Oaks, CA: Sage; 1994.
  17. Thomas DR. A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation. 2006; 27: 237-246.
  18. Taylor-Powell ET, Renner M. Analyzing qualitative data [Internet]. Madison, WI: University of Wisconsin-Extension; 2003 [cited 8 May 2012]. http://learningstore.uwex.edu/Assets/pdfs/G3658-12.pdf
  19. Glesne, C. Becoming qualitative researchers. 2nd ed. New York: Longman; 1999.
  20. Lincoln YS, Guba EG. But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions in Program Evaluation. 1986 Summer; 30:73-84.

Appendix 1 - Examples of Commonly Used Quantitative Methods

For each method, examples of data sources and of the information collected are listed below.

End-of-session evaluations
  Examples of sources:
  • Trainees
  • Service recipients
  Examples of information collected:
  • Satisfaction with training
  • Intentions of using the resources in the future
  • Beliefs about the usefulness of the resources for various health concerns
  • Confidence in skills to find information

Tests (best if conducted before and after training)
  Examples of sources:
  • Trainees
  Examples of information collected:
  • Ability to locate relevant, valid health information
  • Ability to identify poor-quality health information

Surveys (e.g., follow-up surveys conducted some time period after training; attitude or opinion scales such as strongly agree, agree, etc.; dichotomous yes/no scales)
  Examples of sources:
  • Trainees
  • Collaborative partners
  Examples of information collected:
  • Usefulness of resources for health concerns (becoming more informed about treatments, learning more about a family member's illness)
  • Use of resources as part of one's job
  • Level of confidence in using the resource
  • Sharing the resource with other co-workers, family members, etc.
  • Use and usefulness of certain supplemental products (listservs and special websites)

Records (summarized with frequency counts, percentages, or averages)
  Examples of sources:
  • Website traffic information
  • Attendance records
  • Distribution of materials
  Examples of information collected:
  • Hits to website
  • Amount of participation on listservs
  • Training participation levels
  • Retention levels (for training that lasts more than one session)
  • Numbers of people trained by "trainers"
  • Number of pamphlets picked up at health fairs

Observations (absence/presence of some behavior or property; quality rating of behavior from Excellent to Poor)
  Examples of sources:
  • Trainee behavior
  • Site characteristics
  Examples of information collected:
  • Level of participation of trainees in the sessions
  • Ability of trainee to find health information for the observer upon request
  • Number of computers bookmarked to resource website
  • Number of items promoting the resources made available at the outreach site (handouts, links on home pages)

Appendix 2 - Examples of Commonly Used Qualitative Methods

Interviews
  Description: People with knowledge of the community or the outreach project are interviewed to get their perspectives and feedback.
  Examples:
  • Interviews with people who have special knowledge of the community or the outreach project
  • Focus group interviews with 6-10 people
  • Large group or “town hall” meeting discussions with a large number of participants

Field observation
  Description: An evaluator either participates in or observes locations or activities and writes detailed notes (called field notes) about what was observed.
  Examples:
  • Watching activities and taking notes while a user tries to retrieve information from an online database
  • Participating in a health fair and taking notes after the event
  • Examining documents and organizational records (meeting minutes, annual reports)
  • Looking at artifacts (photographs, maps, artwork) for information about a community or organization

Written documents
  Description: Participants are asked to express responses to the outreach project in written form.
  Examples:
  • Journals from outreach workers about the ways they helped consumers at events
  • Reflection papers from participants in the project about what they learned
  • Electronic documents (chats, listservs, or bulletin boards) related to the project
  • Open-ended survey questions to add explanation to survey responses

Toolkit: Using Mixed Methods

Part 1: Planning a Survey

A health science library is partnering with a local agency that provides services, support, and education to low-income mothers and fathers who are either expectant parents or have children up to age 2. The project will train agency staff and volunteers on search strategies for MedlinePlus and the Household Products Database, with the goal of improving their ability to find consumer health information for their clients. 

The objectives of the project are the following: 

  • Objective 1: At the end of the training session, at least 50% of trained staff and volunteers will say that their ability to access consumer health information for their clients has improved because of the training they received.
  • Objective 2: Three months after the training session, 75% of trained staff and volunteers will report finding health information for a client using MedlinePlus or Household Products.
  • Objective 3: Three months after receiving training on MedlinePlus or Household Products, 50% of staff and volunteers will say they are giving clients more online health information because of the training they received.

All staff and volunteers will be required to undergo MedlinePlus training conducted by a health science librarian. Training will emphasize searches for information on maternal and pediatric health care. The trainers will teach users to find information in MedlinePlus's Health Topics, Drugs and Supplements, and Videos and Cool Tools. The training will also include use of the Household Products Database. 

To evaluate the project outcomes, staff and volunteers will be administered a survey one month after training. Worksheet 1 demonstrates how to write evaluation questions from objectives, then how to generate survey questions related to the evaluation questions. (This worksheet can be adapted for use with pre-program and process assessment by leaving the objectives row blank.)

Part 2: Planning an Interview

After six months of the training project, the team considered applying for a second grant to expand training to clients. The team decided to do a series of interviews with key informants to explore the feasibility of this idea. Worksheet 2 demonstrates how to plan an interview project. The worksheet includes a description of the sampling approach, the evaluation questions to answer, and some interview questions that could be included on your interview guide. 

Blank versions of the worksheets used in the case example are provided below for your use.

Worksheet 1 – Planning a Survey

Objective 1: At the end of the training session, at least 50% of trained staff and volunteers will say that their ability to access consumer health information for their clients has improved because of the training they received.
Evaluation Questions
  • Do staff and volunteers think the training session improved their ability to find good consumer health information?
  • Did the training session help them feel more confident about finding health information for their clients?
Survey Items
  • The training session on MedlinePlus improved my ability to find good consumer health information. (strongly disagree/disagree/neutral/agree/strongly agree)
  • The training session on MedlinePlus made me more confident that I could find health information for the agency's clients. (strongly disagree/disagree/neutral/agree/strongly agree)
Objective 2: Three months after the training session, 75% of trained staff and volunteers will report finding health information for a client using MedlinePlus or the Household Products Database.
Evaluation Questions
  • Did the staff and volunteers use MedlinePlus or Household Products to get information for clients?
  • What type of information did they search for most often?
Survey Items
  • Have you used MedlinePlus or Household Products to get information for a client or to answer a client's question? (yes/no)
  • If you answered yes, which of the following types of information did you retrieve? (check all that apply)
    • A disease or health condition
    • Prescription drugs
    • Contact information for an area health care provider or social service agency
    • Patient tutorials
    • Information about household products
    • Other (please describe)

 

Objective 3: Three months after receiving training on MedlinePlus or Household Products, 50% of staff and volunteers will say they are giving clients more online health information because of the training they received.
Evaluation Questions
  • Is staff helping more clients get online health information now that they have had training on MedlinePlus or Household Products?
  • What are some examples of how they used MedlinePlus or Household Products to help clients?
Survey Items
  • The training I have received on MedlinePlus or Household Products has made me more likely to look online for health information for clients. (strongly disagree/disagree/neutral/agree/strongly agree)
  • Since receiving training on MedlinePlus or Household Products, I have increased the amount of online health information I give to clients. (strongly disagree/disagree/neutral/agree/strongly agree)
  • Give at least two examples of clients’ health questions that you have answered using MedlinePlus or Household Products. (open-ended)

Worksheet 2 - Planning an Interview

Interview Group: Staff
Sampling strategy
  • Agency director
  • Volunteer coordinator
  • 2 staff members
  • 2 volunteers
  • 2 health science librarian trainers
Evaluation questions
  • How ready are the clients to receive this training?
  • What are some good strategies for recruiting and training clients?
  • How prepared is the agency to offer this training to its clients?
  • Do the health science librarians have the skill and time to expand this project?
Sample questions for the interview guide
  • What are some good reasons that you can think of to offer online consumer health training to clients?
  • What are some reasons not to offer training?
  • If we open the training we have been offering to staff and volunteers to clients, how likely are the clients to take advantage of it?
  • What do you think it will take to make this project work? (Probe: recommendations for recruitment; recommendations for training.)
  • Do you have any concerns about training clients?
Interview Group: Clients
Sampling strategy

Six clients recommended by case managers:

  • All interviewees must have several months of experience with the agency and must have attended 80% of sessions in the educational plan written by their case manager.
  • At least one client must be male
  • At least one client should not have access to the Internet from home or work
Evaluation questions
  • How prepared and interested are clients to receive training on online consumer health resources?
  • What are the best ways to recruit agency clients to training sessions?
  • What are the best ways to train clients?
Sample questions for the interview guide
  • When you have questions about your health, how do you get that information?
  • How satisfied are you with the health information you receive?
  • If this agency were to offer training to you on how to access health information online, would you be interested in taking it?
  • What aspects of a training session would make you want to come?
  • What would prevent you from taking advantage of the training?

 

Blank Worksheet 1 - Planning a Survey

Book 3, Worksheet 1: Planning a Survey

Blank Worksheet 2 - Planning an Interview

Book 3, Worksheet 2: Planning an Interview

Checklist for Booklet 3: Collecting and Analyzing Evaluation Data

Book 3 Checklist 

Quantitative Methods

Step One - Design Your Data Collection Methods

  • Write evaluation questions that identify the information you need to gather.
  • Write survey questions that are directly linked to the evaluation questions.
  • Pilot test the questionnaire with a small percentage of your target group.
  • Have your methods reviewed by appropriate individuals or boards.

Step Two - Collect Your Data

  • Decide whether to administer the survey to a sample or to everyone in your target group.
  • Follow procedures known to increase response rates.
  • Write a cover letter to motivate and inform respondents.

Step Three - Summarize and Analyze Your Data

  • Summarize your survey data using descriptive statistics.
  • Organize your data into tables to help answer your evaluation questions.
  • Write a brief description of the results.

Step Four - Assess the Validity of Your Findings

  • Calculate response rate.
  • Identify low completion rate of specific sections of surveys.
  • Identify low completion rate of any questions.
  • Look for socially desirable responding.

Qualitative Methods

Step One - Design Your Data Collection Methods

  • Write evaluation questions that identify the information you need to gather.
  • Write an interview guide using open-ended questions.
  • Pilot test the interview guide with one or two people from your target group.

Step Two - Collect Your Data

  • Decide who will be interviewed.
  • Provide informed consent information.
  • Conduct the interviews.
  • After each interview, spend a few minutes writing reflective notes.

Step Three - Summarize and Analyze Your Data

  • Prepare the text.
  • Note themes (or categories).
  • Code the interview data systematically.
  • Interpret the findings.

Step Four - Assess the Trustworthiness of Your Findings

  • Conduct procedures that check the fairness of your interpretations.
  • Present findings that represent multiple perspectives and varying points of view.