In this article, tips for developing a basic research survey are identified.
Authored by: Joshua M. DeMott, PharmD, MSc, BCPS, BCCCP
Last updated: 16 September 2020
In this day and age it feels like I am surveyed on how often I like being surveyed, which, to be completely honest, is not often. The internet has revolutionized access to surveys and the ease of sending them out. However, because anyone can do them now, it seems as though less thought goes into the planning, question wording, desired response rate, and length. So in the era of a thousand surveys, how do you create a survey that: (1) people will actually take, and (2) will provide accurate, generalizable data?
The purpose of this article is to provide a few tips on how you can level-up your survey projects.
Disclaimer #1: If you are mainly surveying coworkers about when happy hour should be, the scope of this blog post might be unnecessarily in-depth. Conversely, if you are doing intense grant-funded research studies, these tips are likely not in-depth enough. This article is intended for pharmacists or medical professionals creating surveys that are somewhere in-between and looking to elevate the quality of their survey projects. Like the title says, these are tips for developing a basic research survey!
Disclaimer #2: If you’re hoping for a “how-to” guide for specific programs, this isn’t your jam. There are a lot of programs out there. Refer to the user guides or ask a friend with experience using the specific program of interest.
If you are still reading after the disclaimers, you are probably getting geeked for survey tips! Here we go, let’s do this!!!
1. Limit the number of questions
Asking too many questions reduces response rate.1 It will always be tempting to ask loads of questions in the hope of getting as much data as possible; however, this approach will be detrimental. Medical professionals already have a lower response rate than the rest of the population, so don't worsen the rate even more by asking too many questions.2 Keep your surveys short, concise, and targeted! There is no magic number, but be sure to think about how long it takes respondents to complete the survey. Also, it is usually a good idea to mention the time commitment in the initial survey request so respondents know what they're getting into.
- 5 min or less is ideal
- 5-10 min is OK
- >10 min is associated with decreased response rates
2. Set a response rate goal: minimize non-response error3,4
Response rate is the percentage of people who complete your survey relative to how many you sent it to. Setting a response rate goal is sort of like powering a study. It essentially answers two questions: (1) what response rate do I need to ensure my results are generalizable, and (2) if I do not get the response rate I am hoping for, how do those non-responders affect my data variability (and thereby validity)?
Just like power, this step is a little tricky, but it is very important because it determines how you can interpret your data. If you are anything like me (or Las Vegas), you have probably always assumed more is better (i.e., a 35 oz. tomahawk steak). However, unlike sample size, where a higher N helps increase power, surveying more people is not always better, because the people who do not respond can affect how you interpret your data. Therefore, your focus should be on getting good response rates from 'large enough' samples. Kind of confusing, but I promise it is really not too bad.
Example 1:
- Surveying 1,000 people with a 35% response rate generates 350 responses.
- Surveying 500 people with a 70% response rate also generates 350 responses.
- However, the possibility of non-respondents differing from respondents is likely greater in the 1,000-person survey because there are 650 people who may have introduced variability into the results, versus only 150 in the 500-person survey.
Important takeaway: As the number of non-respondents increases, generalizability decreases.
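If it helps to see the arithmetic, here is a minimal sketch (in Python, purely for illustration; the variable names are mine) of the two scenarios from Example 1:

```python
# Example 1, worked out: same number of responses, very different numbers of non-respondents
scenarios = [(1000, 0.35), (500, 0.70)]  # (people invited, response rate)

for invited, rate in scenarios:
    responses = round(invited * rate)
    non_respondents = invited - responses
    print(f"{invited} invited at {rate:.0%}: {responses} responses, {non_respondents} non-respondents")

# 1000 invited at 35%: 350 responses, 650 non-respondents
# 500 invited at 70%: 350 responses, 150 non-respondents
```

Same amount of data in hand, but the first scenario leaves far more people who could have answered differently.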
Going further, you can use goal response rates to 'power' each question. Take a dichotomous variable and assume the conservative 50/50 split (in other words, an equal chance of one response versus the other). For a population of 100, we would need a sample of 80 to ensure a sampling error of no more than +/- 5% at the conventional 95% confidence level.
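If you want to double-check that number yourself, here is a quick back-of-the-envelope sketch (in Python; the function name and defaults are mine, not from any survey package) using the standard sample-size formula for a proportion with a finite population correction:

```python
import math

def required_sample(population, margin=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion within +/- margin at ~95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size (~384)
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(required_sample(100))   # -> 80, matching the example above
```

Notice how unforgiving small populations are: for 100 people you need 80 of them, whereas the same formula applied to 1,000 people asks for roughly 278.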
Example 2:
- You are surveying 100 hospitals and 50% responded (n=50) to a survey item with a simple yes/no answer (e.g., “Does your institution have an antimicrobial stewardship pharmacist?”) and responses were evenly split (50% yes and 50% no).
- It would not be prudent to extrapolate those findings to the full group of 100 hospitals because the range of possible true percentages would be 25%-75%.
- That is, all, some, or none of the 50 non-respondents could have an antimicrobial stewardship pharmacist at their hospital, which means the true percentage could be anywhere from 25% to 75%.
- That, of course, makes it difficult to generalize whether few (25%), half (50%), or most (75%) of hospitals have antimicrobial stewardship pharmacists.
Also unlike power, your goal response rate may be affected post hoc by how people responded. In the example above, if 100% of the 50 responding institutions reported having an antimicrobial stewardship pharmacist, your results may be more generalizable despite only a 50% response rate. If you give the other 50 non-respondents an equal chance of a yes or a no (i.e., n=25 do have a stewardship pharmacist and n=25 do not), the overall figure would be 75%; if every non-respondent had one, it would be 100%. Therefore, you may care less about your non-responders, and your generalizability increases because your plausible range jumps to 75%-100%. You could say with a higher degree of certainty that most hospitals have a stewardship pharmacist, despite only half of the hospitals responding.
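To make the non-response arithmetic concrete, here is a small sketch (again in Python, with a made-up helper) that bounds the true percentage by assuming every non-respondent answered 'no' and then 'yes':

```python
def possible_range(yes_among_respondents, respondents, population):
    """Lowest and highest possible true 'yes' percentage once non-respondents are counted."""
    non_respondents = population - respondents
    low = yes_among_respondents / population * 100                        # every non-respondent is a 'no'
    high = (yes_among_respondents + non_respondents) / population * 100   # every non-respondent is a 'yes'
    return low, high

print(possible_range(25, 50, 100))   # 50/50 split among respondents -> (25.0, 75.0)
print(possible_range(50, 50, 100))   # every respondent said yes     -> (50.0, 100.0)
```

The second case shows why an all-yes result is reassuring: even the strict worst case cannot fall below 50%, and the even-split assumption for the non-respondents puts the estimate at 75%.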
As you can imagine, not all questions will have a truly "equal" chance at each answer. You may have prior knowledge or data suggesting the split is more like 70/30 than 50/50. For example, you may have reason to believe that the remaining hospitals do not have a 50/50 shot of having stewardship pharmacists (e.g., only the university medical centers responded, and the smaller community hospitals, which you know from experience are less likely to have stewardship pharmacists, did not).
This bias is commonly referred to as non-response error, which occurs when data are not collected from every member of the sample. If you do not want to go through actually calculating a goal response rate, aim for a minimum of 60-75%. However, to make things even trickier, there is no 'perfect' response rate, and you really should sit down and think about how non-responders could have affected your results, maybe even for each question. If you are doing more in-depth survey research, I recommend you read more on the topic of non-response.
Because I tend to read more survey research than I actually conduct, when I read a survey study I always try to ask: how did the non-response rate in this study affect variability and generalizability? You would be surprised how often just asking that question has changed my opinion of a survey project.
3. Good questions result in good answers5,6
Do not underestimate the importance of a good question. It is like asking my girlfriend "what do you want for dinner?" versus "are you cool if I bring home pizza?" The first question could mean anything from going out to dinner to eating ice cream pints in our sweats (#perfectSunday). The vagueness of it could allow for hours of debate and deep Google review research… The second question provides a suggestion and signals to her that I feel like eating in tonight. It then shifts her thoughts to what she might really want me to bring home if she is not in a pizza mood (I suppose some people do not share my perpetual state of craving pizza).
Minor changes in question wording, question format, question order, or question context can result in major changes in the obtained results. It is beyond the scope of this blog, but it is well established that how you word a question matters. Concepts like priming, framing, and anchoring can all affect how we process a question and therefore how we answer it. I have seen many surveys sent out whose questions were not given the thought and consideration necessary to provide solid data. So take the time! (If you would like more reading on how people can be unconsciously biased just by how information is presented, I would recommend starting with Thinking, Fast and Slow by Daniel Kahneman.)
Ask yourself these questions to build a solid question:
- How do respondents make sense of the questions asked of them?
- Will respondents interpret your question the way you intend?
- What is the role of memory in reporting past behaviors, and how can we increase the accuracy of these reports?
- Did I bias the respondent toward an answer?
- What techniques can we use to determine if a question “works” as intended?
Since we have to limit the number of questions to get better response rates, the ones you keep have to be awesome!
4. The internet is easy, but is it best?2,7
Almost all of us tend toward electronic survey tools because… do people even still get mail? However, response rates can actually be increased by incorporating different methods.7 My recommendation: an online survey is usually good for most projects. However, if you get a low response rate that you feel affects your results, consider ways to expand your outreach (e.g., mail, telephone, text message with a link, etc.).
5. Time it right, send reminders, and consider piloting!
Because we are overloaded with surveys, I would recommend that you try to perfect the little details of the send-out.
Don’ts:
✕ Don't send at the end of the day; the survey request gets lost among evening emails
✕ Don't send in the early AM; practitioners are catching up on email, starting clinical activities, planning the day, etc.
Do’s:
✓ 10:30-11:00 AM might be the sweet spot. It's right before lunch for most people and a few hours before the end of the day. Most people have caught up on (or at least seen…) their morning emails by this time, so your request may not be overlooked as quickly.
✓ I would recommend 1-2 reminders, spaced about 2 days after the initial survey request. Consider Tuesday through Thursday as your target days (avoid Mondays, Fridays, and weekends). Some programs, like REDCap, allow reminders to be scheduled at specific time intervals.
✓ Consider piloting with a small sample to help uncover problems with questions, responses, and response rates
Quick mention of program selection
There are so many survey programs available. A Google search could easily give you 50 choices, but to name a few: Feedier, LimeSurvey, Qualtrics, Research Core, SoGoSurvey, Survey Anyplace, SurveyGizmo, Survey Legend, Survey Monkey, REDCap, etc.
So, how do you choose? To be honest with you, most are probably A-okay to use, especially if you do not mind paying. It really comes down to identifying the options you want before choosing.
- Does it cost money for advanced options?
- Can I send out anonymously?
- Do I need more complex data analysis? How easily does the program create data files for use in Excel, SPSS, or SAS?
- Does it have features to send out reminders?
- If the project is IRB approved, does the IRB consider the program to be safe and secure?
My typical recommendation is REDCap. But this is mainly because my institution has full access, I am very familiar with it, our IRB considers it to be secure, I use it for my other research, and it has all the options I want like reminders and data analysis. But, choose your own adventure!
Closing thoughts
I see a lot of clinicians sending out surveys for clinical issues, attitudes, training, etc. Some are good and some are definitely bad. Some big and some small. These results are used to set up departmental processes and even serve as research projects.
Although you still have a long road and many more tips ahead on the way to creating the perfect survey, I hope you walk away from this blog thinking more about the number of questions, the effects of non-response on generalizability, and question wording. Focus on these three major things and you will quickly see your survey set apart from the ocean of others!
If you have some cool tips you'd like to add, please share them with the world! Discuss them on Twitter with @IDstewardship and tag @jdemo_pharmD.
REFERENCES
2. Jones TL, Baxter MA, Khanduja V. A quick guide to survey research. Ann R Coll Surg Engl 2013;95:5-7.
3. Fincham JE, Draugalis JR. The Importance of Survey Research Standards. Am J Pharm Educ 2013;77.
6. Krosnick JA. Survey research. Annu Rev Psychol 1999;50:537-67.
ABOUT THE AUTHOR
Joshua M. DeMott, PharmD, MSc, BCPS, BCCCP is an Emergency Medicine Clinical Pharmacy Specialist and Assistant Professor in the Department of Emergency Medicine at Rush University Medical Center in Chicago. He is a clinician, researcher, educator, mentor, author, and number 4 in a family of 5 boys.
Dr. DeMott graduated from pharmacy school at Ferris State University (Big Rapids, MI). He then completed a PGY-1 at Saint Mary's Health Care (Grand Rapids, MI). Following residency, Dr. DeMott has had an eclectic mix of experiences. After the 2008 housing market crash, he left his home state of Michigan and ventured to the west coast, where he started in internal medicine/transplant at Oregon Health & Science University (Portland, OR). Soon missing the Midwest and family, Dr. DeMott moved back to Chicago and joined Loyola University Medical Center as a float critical care pharmacist. Wishing for a shorter commute, he ended up at Rush. Nearing a decade (~9 years) at Rush, he spent the first 5 years practicing in critical care (mainly the MICU) before finding his 'true home' and passion in Emergency Medicine. He also decided to go back and earn his Master's in Clinical Research at Rush University, finishing in 2017. Since finishing his master's degree and moving to the emergency department, he has gotten more involved in the critical care/emergency medicine research PRNs of SCCM and ACCP.
It took 8 years after graduating from pharmacy school for Dr. DeMott to get his first publication and become truly interested in research, but since 2017 he has published 16 articles (Google Scholar) and been cited 70 times. However, his true passion is teaching the research process and helping residents realize their research goals. He chairs the Research Task Force at Rush University, coordinates the Research Lecture Series, and provides five research-focused lectures annually. He has helped lead 6 residents to publication in the last 3 years and hopes he can continue to play a role in these successes.
He's open to collaborating, although he always tries to maintain a healthy work-life balance. His life is an open book, so feel free to contact him with any questions or comments (big or small): joshua_demott@rush.edu.