Written Assignment 11: How To Design And Administer Surveys
In this assignment, you will construct and administer online surveys. To learn why it is important to learn how to design and administer online surveys: 1. Review Brooklyn College’s summary of Coplin’s (2012) book, 10 Things Employers Want You to Learn in College. Note that on p. 12, under the heading “Gathering Information,” the skill “Construct Surveys” is listed. 2. Read Tague’s (2004a) brief introduction, “When to Use a Survey,” which addresses the use of surveys outside of psychological science (i.e., beyond the purposes of basic scientific research).
To learn whether administering surveys as a class assignment requires IRB (Institutional Review Board) approval for the protection of human research participants: 1. Read the University of Michigan’s (2004) “Research Ethics and Compliance Policy,” paying attention to the “Student Class Assignment Definition.” 2. Read the City University of New York policy on Student Research Pools, especially the paragraph (highlighted in yellow) about not directly approaching participants you do not know.
To consider topics for your surveys: 1. Read the PSY 430 Online Survey Topics handout carefully. 2. Remember to assess your access to research participants and avoid soliciting students in this class or other classes whom you do not already know.
For the writing assignment, select two survey topics from the PSY 430: Online Survey Topics handout that you are interested in investigating and for which you have access to appropriate participants.
Part 1: Introduction to surveys
Read and synthesize existing psychological research for each of your selected topics. 1. Use Google Scholar to find three relevant scientific articles per topic, utilizing “Cited By” and reference list features. 2. Analyze each set of articles and write a synthesis paragraph for each topic, combining results, noting conflicting findings, and emphasizing behavior and phenomena over researchers. 3. Include full APA citations for the three articles after each synthesis paragraph.
Post on the discussion board (Assignment #11, Part 1): 1. Explain in at least 50 words why constructing and administering surveys is a skill employers want you to learn, and identify two uses outside basic research. 2. State whether your class survey design needs IRB approval, and whether you agree with this policy. 3. Post four statements confirming your understanding of participant solicitation rules. 4. Present your two chosen survey topics. 5. Share your two synthesis paragraphs. 6. Provide the full APA citations for both sets of articles.
Part 2: Writing and designing survey items
To learn how to write effective survey questions: 1. Read Science Buddies’ (no date) “Designing a Survey.” 2. Read Beretta (2014) on common problems in survey question design, and make sure you understand all ten problems and how to avoid them. 3. Review Pew Research Center’s (no date) “Questionnaire Design,” focusing on question types, clarity, double-barreled questions, bias, question order, and placement of demographic items. 4. Examine Britain Elects’ (2017) tweet examples on question wording. 5. Read Harvard’s (2007) “Tip Sheet on Question Wording” to learn how to avoid jargon, leading questions, double-barreled questions, and emotional language. 6. Study Peters’ (no date) article on the differences between categorical (nominal) and ordinal survey items.
You will create a teaching document (PPT or infographic, saved as PDF: YourLastname_SurveyDesign.pdf) summarizing these principles for an audience of other college students or industry professionals.
Test yourself with Professor Rennison’s “Examples of Bad Questions & How to Fix Them” quiz. Then, submit your teaching document, explaining your chosen audience and your experience identifying and fixing bad questions.
Part 3: Developing survey items
Apply your knowledge by writing 5 to 10 items per survey, including: 1-2 open-ended questions, at least one categorical/nominal item, and at least one ordinal item. Demographic questions should be last unless justified otherwise and count toward the total. Review your items against your teaching document for correctness, and combine your two surveys into one PDF named YourLastname_SurveyItems.pdf. Post your PDF on the discussion board.
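For concreteness, here is a minimal sketch, in Python, of one way to lay out a survey's items and check them against the requirements above. The questions, response options, and type labels are invented for illustration and are not part of the assignment.

```python
# A minimal sketch (entirely hypothetical questions) of one survey's item list,
# tagging each item with its response format so the required mix is easy to check.
items = [
    {"q": "What do you like most about studying online?", "kind": "open"},
    {"q": "Which device do you use most often for coursework?", "kind": "nominal",
     "options": ["Laptop", "Desktop", "Tablet", "Phone"]},
    {"q": "How satisfied are you with your current study routine?", "kind": "ordinal",
     "options": ["Very dissatisfied", "Dissatisfied", "Neutral",
                 "Satisfied", "Very satisfied"]},
    {"q": "Roughly how many hours per week do you study?", "kind": "ordinal",
     "options": ["0-5", "6-10", "11-20", "More than 20"]},
    {"q": "What is your year in college?", "kind": "demographic",
     "options": ["First", "Second", "Third", "Fourth", "Other"]},
]

# Count items by response format.
counts = {}
for item in items:
    counts[item["kind"]] = counts.get(item["kind"], 0) + 1

# Check the assignment's structural requirements.
print("Total items (5-10 required):", len(items))
print("Open-ended items (1-2 required):", counts.get("open", 0))
print("Nominal items (at least 1 required):", counts.get("nominal", 0))
print("Ordinal items (at least 1 required):", counts.get("ordinal", 0))

# Demographic items should come after all substantive items.
first_demo = next((i for i, it in enumerate(items) if it["kind"] == "demographic"),
                  len(items))
print("Demographic items placed last:",
      all(it["kind"] == "demographic" for it in items[first_demo:]))
```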
Part 4: Creating and piloting surveys
Select a free online survey platform (SurveyMonkey, Google Forms, LimeSurvey, SurveyGizmo). Create each survey with meaningful titles. Follow Tague’s (2004b) steps to pilot test your surveys with three participants each. Collect pilot data and then analyze it to identify necessary adjustments before full deployment. Share links and reflections on what you learned from the pilot testing on the discussion board.
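If your platform can export pilot responses as a spreadsheet, a short script can help spot items that pilot participants skipped. The sketch below assumes a hypothetical CSV export with one column per question and one row per respondent; the file name and layout are assumptions, not features of any particular platform.

```python
import csv

# Assumed layout: pilot responses exported as a CSV with one column per question
# and one row per respondent (the file name "pilot_responses.csv" is hypothetical).
with open("pilot_responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

print("Pilot respondents:", len(rows))

# Flag questions that pilot participants left blank; frequent blanks often signal
# unclear wording or a missing response option.
if rows:
    for question in rows[0]:
        blanks = sum(1 for r in rows if not (r.get(question) or "").strip())
        if blanks:
            print(f"{question!r}: skipped by {blanks} of {len(rows)} pilot participants")
```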
Part 5: Final data collection and analysis
Before full administration, review your surveys using Beretta’s (2014) checklist to address common problems. Email the survey links to participants and collect data from 10 participants per survey, ensuring no overlap with your pilot participants. Analyze the results, noting any remaining issues or necessary revisions in light of the pilot feedback. Finalize your surveys for broader administration.
Paper for the Above Instructions
Understanding how to design and administer effective surveys is a crucial skill in contemporary research, industry, and academic contexts. Well-crafted surveys enable us to gather reliable, valid, and meaningful data, which subsequently informs decisions, policy-making, and scientific understanding. Developing proficiency in survey design involves understanding the principles of question formation, sampling, ethical considerations, and data collection methods. This paper synthesizes research on survey design, illustrating the importance of thoughtful construction, clear wording, and pilot testing, while also considering ethical issues and practical applications beyond academic research.
Survey construction is regarded as a key employability skill because it directly impacts the quality of information collected in a wide array of settings. Employer expectations emphasize that graduates can gather actionable data efficiently and ethically. For example, marketing firms routinely use online surveys to understand consumer preferences, while nonprofits might assess community needs via carefully designed questionnaires (Creswell & Creswell, 2018). Similarly, corporate HR departments deploy surveys to gauge employee satisfaction or engagement. These uses highlight the importance of skills such as question clarity, bias avoidance, and sampling strategies. Understanding survey methodology enhances the reliability and validity of the data, making it indispensable across sectors in a data-driven economy (Fowler, 2014).
Ethical considerations are paramount when designing surveys, especially those involving human participants. According to the University of Michigan (2004) and CUNY policies, class-based surveys typically do not require IRB approval if participation is voluntary, solicitation is appropriately targeted, and the data collected are anonymous. However, these policies are not universally endorsed; some argue that any interaction with human subjects warrants review to ensure ethical standards are met, preventing issues such as coercion or data breaches (Resnik, 2018). Therefore, understanding the boundaries of ethical research and obtaining proper approval when needed promotes integrity in data collection and respects participant rights.
Choosing survey topics requires careful consideration of access to participants and relevance. The PSY 430 handout provides guidelines for selecting topics that are feasible and meaningful. For example, a survey about social media habits among college students must account for participants' availability and willingness to share personal information. Additionally, researchers must avoid approaching individuals they do not know or soliciting responses in ways that violate institutional policies. These precautions help ensure the ethical collection of data and integrity in research practices (Krosnick & Presser, 2010).
Research synthesis on topics such as social behavior or health behaviors demonstrates the importance of integrating findings from multiple studies. For instance, the literature on the impact of social media on self-esteem shows mixed results; some studies report a negative correlation, while others find no significant relationship (Vogel et al., 2014; Keles et al., 2019). By comparison, research on health surveys generally indicates that well-designed questionnaires can effectively capture behavioral patterns and identify at-risk populations (Gholami et al., 2017). Synthesizing such findings emphasizes that robust survey construction must incorporate validated questions, clear wording, and appropriate response scales to ensure data accuracy and meaningful insights.
Writing effective survey questions demands avoiding common pitfalls. Beretta (2014) highlights problems such as ambiguity, leading questions, double negatives, and social desirability bias. For example, a poorly worded question like “Don’t you agree that social media is harmful?” introduces bias; a more neutral phrasing, such as “To what extent do you agree or disagree that social media is harmful?”, improves reliability. Similarly, avoiding double-barreled questions (those asking about two issues simultaneously) helps prevent confusion. The Pew Research Center (no date) also emphasizes the importance of question clarity, proper ordering, and placing demographic items last. These principles ensure that survey data accurately reflect respondents’ true feelings and behaviors, avoiding distortions caused by poorly designed questions (Fowler, 2014).
Effective question wording involves using clear, concise language free of jargon or emotional terms. Harvard’s (2007) “Tip Sheet on Question Wording” advises avoiding double-barreled, leading, or emotionally charged questions that can influence responses. Using ordinal scales and reference frames can capture nuanced attitudes, while simplicity in response options helps respondents answer accurately (Peters, no date). Careful construction of survey items, incorporating validated questions where possible, is essential for obtaining data that supports valid conclusions, especially when conducting research intended to influence policy or practice.
In designing surveys, researchers must differentiate between categorical and ordinal response formats. Categorical (nominal) responses classify data without inherent order—e.g., gender or ethnicity—while ordinal responses rank data, such as satisfaction levels (Harvard, 2007). Choosing appropriate response formats depends on the constructs being measured and the type of analysis planned. Properly written survey items improve measurement precision and data comparability, vital for both academic inquiry and applied contexts. The overarching goal is to ensure that questions are understandable, unbiased, and capable of capturing the phenomena of interest effectively.
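To make the analytic consequence of this distinction concrete, the brief sketch below (with invented responses) summarizes a nominal item with frequency counts and an ordinal item with a median of its ranked labels; the variable names and example data are purely illustrative.

```python
from collections import Counter
from statistics import median

# Invented example responses for illustration only.
device = ["Laptop", "Phone", "Laptop", "Tablet", "Laptop"]            # nominal item
satisfaction_scale = ["Very dissatisfied", "Dissatisfied", "Neutral",
                      "Satisfied", "Very satisfied"]                  # ordered labels
satisfaction = ["Satisfied", "Neutral", "Satisfied",
                "Very satisfied", "Neutral"]                          # ordinal item

# Nominal data: frequency counts; no ordering is assumed among categories.
print(Counter(device).most_common())

# Ordinal data: map the ordered labels to ranks, so a median becomes meaningful.
ranks = [satisfaction_scale.index(r) + 1 for r in satisfaction]
mid = median(ranks)
print("Median satisfaction rank:", mid, "->", satisfaction_scale[int(mid) - 1])
```

Treating the ordered labels as numeric ranks is a common simplification; it implicitly assumes roughly even spacing between scale points, which is worth keeping in mind when interpreting the median.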
Creating a teaching document on survey design should synthesize these principles in a format accessible to students or industry professionals. This resource should emphasize clarity, bias avoidance, ethical considerations, and pilot testing, which together enhance the quality of data collection efforts. Practical examples, visual aids, and checklists can help audiences apply the knowledge effectively. Testing and refining survey questions before full deployment minimizes errors and ensures data validity, thereby supporting high-quality research or organizational decision-making.
Finally, the process of pilot testing entails collecting initial data from a small sample to identify issues like unclear wording or biases. Based on feedback and analysis—guided by Tague (2004b)—researchers can make necessary adjustments to improve question clarity and response accuracy. This iterative process helps create robust survey instruments ready for larger-scale administration, ensuring that subsequent data collection is both ethical and scientifically sound.
References
- Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
- Fowler, F. J. (2014). Survey research methods. Sage Publications.
- Gholami, M., et al. (2017). Designing health behavior surveys: Principles and practices. Journal of Medical Internet Research, 19(5), e197.
- Harvard University Program on Survey Research. (2007). Tip Sheet on Question Wording. Harvard University.
- Keles, B., McCrae, N., & Grealish, A. (2019). A systematic review: The influence of social media on depression, anxiety, and psychological distress in adolescents. Australian & New Zealand Journal of Psychiatry, 53(4), 331-352.
- Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In J. D. Wright & P. V. Marsden (Eds.), Handbook of Survey Research (2nd ed., pp. 263-313). Emerald.
- Pew Research Center. (no date). Questionnaire Design. Retrieved from https://www.pewresearch.org/methods/u-s-survey-research/questionnaire-design/
- Resnik, D. B. (2018). The ethics of research with human subjects. The American Journal of Bioethics, 18(2), 1-10.
- Tague, D. (2004a). When to Use a Survey. A brief introduction. Unpublished ed.
- Tague, D. (2004b). How to administer a survey. Unpublished ed.
- Vogel, E. A., Rose, J. P., Roberts, L. R., & Eckles, K. (2014). Social comparison, social media, and self-esteem. Personality and Individual Differences, 86, 249-255.