Surveys are a useful tool for understanding how others perceive a problem or perform a task. However, the answers are only as good as the questions. While it is tempting to capture things the survey initiator did not think of by adding an "other" box to each question, this can produce answers that cannot be categorized and therefore cannot be analyzed alongside the rest of the data. The researcher therefore needs to craft questions and answer choices that minimize the use of "other" responses. The best way to do this is to develop the survey in collaboration with others working in the same field.
Once the person in charge of developing the survey has assembled a group, they can begin to ask what the survey is trying to do. Is it evaluating how practitioners received their training, where sedation is performed, or how they are credentialed? Or is it looking for quality improvement measures such as how patients are triaged, failed sedations, inadequate sedation, or rescheduling? In either case, the assembled team of experts should have diverse backgrounds to ensure all potential parameters are explored. The group then votes on the top 10 or 20 parameters to explore, and the resulting list of questions is reviewed and voted on for up to three rounds. This is an example of the modified Delphi method, which has been shown to produce the most robust surveys.
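The voting step of a Delphi round is straightforward to tally by hand, but a minimal sketch can make the mechanics concrete. The function name and the ballot format below are illustrative assumptions, not part of any survey platform: each panelist submits a list of the candidate parameters they want to keep, and the round keeps whichever parameters receive the most votes.

```python
from collections import Counter

def tally_delphi_round(ballots, keep_top):
    """Tally one Delphi voting round.

    ballots  -- one list per panelist, naming the parameters to keep
    keep_top -- how many of the highest-voted parameters survive the round
    """
    votes = Counter()
    for ballot in ballots:
        votes.update(ballot)
    # most_common breaks ties by first appearance; in practice the
    # group would discuss tied parameters in the next round.
    return [param for param, _ in votes.most_common(keep_top)]

# Hypothetical example: three panelists vote on candidate parameters.
ballots = [
    ["training", "credentialing", "failed sedations"],
    ["training", "triage", "failed sedations"],
    ["training", "credentialing", "triage"],
]
shortlist = tally_delphi_round(ballots, keep_top=2)
```

Running the surviving shortlist through further rounds of voting, for up to three rounds, mirrors the modified Delphi process described above.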
Once the survey is developed, it should be piloted to ensure that the survey link works and that there are no typographical errors. The survey can then be distributed using platforms such as SurveyMonkey, REDCap, or Qualtrics. The advantage of these platforms is that tracking and analytical software is built into them; the drawback is that the institution or researcher must pay for the versions that allow more complete analysis.
The final survey is sent to the desired audience, and the researcher waits for replies. Here the tracking software pays off, since it allows the researcher to keep track of who has responded. This matters because many journals require a response rate of at least 50% for a survey study to be considered for publication. Once reminders are no longer generating additional responses, the survey should be closed and the results tallied.
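The bookkeeping the tracking software performs can be sketched in a few lines. This is a simplified illustration under assumed inputs (lists of recipient identifiers), not the API of any of the platforms named above: the response rate is simply responders divided by invitees, and the reminder list is whoever has not yet replied.

```python
def response_rate(responded_ids, invited_ids):
    """Fraction of invited recipients who have responded."""
    responded = set(responded_ids) & set(invited_ids)
    return len(responded) / len(invited_ids)

def pending_reminders(responded_ids, invited_ids):
    """Recipients who have not yet responded and need a reminder."""
    return sorted(set(invited_ids) - set(responded_ids))

# Hypothetical example: four invitees, two responses so far.
invited = ["pa01", "pa02", "pa03", "pa04"]
responded = ["pa01", "pa03"]
rate = response_rate(responded, invited)          # 0.5, the common publication threshold
to_remind = pending_reminders(responded, invited)  # ["pa02", "pa04"]
```

When successive reminder batches stop shrinking `to_remind`, that is the signal described above to close the survey and tally the results.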