Social Work Research: Planning a Program Evaluation
Joan is a social worker who is currently enrolled in a social work PhD program. She is planning to conduct her dissertation research project with a large nonprofit child welfare organization where she has worked as a site coordinator for many years. She has already approached the agency director with her interest, and the leadership team of the agency stated that they would like to collaborate on the research project. The child welfare organization at the center of the planned study has seven regional centers that operate fairly independently. The primary focus of work is on foster care; that is, recruiting and training foster parents and running a regular foster care program with an emphasis on family foster care.
The agency has a residential program as well, but it will not participate in the study. Each of the regional centers serves about 45–50 foster parents and approximately 100 foster children. On average, five to six new foster families are recruited at each center every quarter, a figure that has been consistent over the past 2 years. The organization recently decided to adopt a new training program for incoming foster parents.
The primary goals of this new training program include reducing foster placement disruptions, improving the quality of services delivered, and increasing child well-being through better trained and skilled foster families. Each of the regional centers will participate and implement the new training program. Three of the sites will start the program immediately, while the other four centers will not start until 12 months from now. The new training program consists of six separate 3-hour training sessions that are typically conducted in a biweekly format. It is a fairly proceduralized training program; that is, a very detailed set of manuals and training materials exists.
All trainings will be conducted by the same two instructors. The current training program that it will replace differs considerably in its focus, but it also uses a 6-week, 3-hour format. It will be used by those sites not immediately participating until the new program is implemented. Joan has done a thorough review of the foster care literature and has found that there has been no research on the training program to date, even though it is being used by a growing number of agencies. She also found that there are some standardized instruments that she could use for her study.
In addition, she would need to create a set of Likert-type scales for the study. She will be able to use a group design because all seven regional centers are interested in participating and they are starting the training at different times.
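As a rough feasibility check on that group design, the recruitment figures in the case can be converted into an estimate of how many newly recruited foster families each arm of the staggered rollout would contribute over the first 12 months. The short sketch below is only a back-of-envelope illustration, assuming the stated rate of five to six new families per center per quarter holds steady; the arm labels and variable names are illustrative assumptions, not figures taken verbatim from the case.

```python
# Back-of-envelope estimate of newly recruited foster families available to
# each arm of a staggered-start (group) design over the first 12 months.
# Assumes the case's stated rate of 5-6 new families per center per quarter;
# the arm labels and variable names are illustrative only.

QUARTERS = 4                        # 12-month window before the delayed sites begin
NEW_FAMILIES_PER_QUARTER = (5, 6)   # low and high estimates per center

arms = {
    "immediate-start sites": 3,     # centers adopting the new training now
    "delayed-start sites": 4,       # centers keeping the old training for 12 months
}

for label, sites in arms.items():
    low = sites * NEW_FAMILIES_PER_QUARTER[0] * QUARTERS
    high = sites * NEW_FAMILIES_PER_QUARTER[1] * QUARTERS
    print(f"{label}: roughly {low}-{high} newly recruited families in year one")
```

Even on these optimistic assumptions, each arm contributes a fairly modest cohort of incoming families, which is one reason the sampling and measurement issues discussed in the paper below matter for Joan's design.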
Paper for the Above Case Study
The case of Baker and his statistical sampling for the audit of Mill Company illustrates several incorrect assumptions and misapplications of attributes sampling. Understanding the errors in Baker's approach offers insight not only into the auditing process but also into research design issues that arise in social work program evaluations such as Joan's.
Incorrect Assumptions and Inappropriate Applications in Baker's Procedures
First, Baker's assumption that a tolerable deviation rate of 20% is appropriate for assessing control risk is questionable. In auditing, the tolerable deviation rate is the maximum rate of deviation from a prescribed control that the auditor is willing to accept while still relying on that control; setting it too high invites complacency about the effectiveness of the controls. In social work research, a similar oversight could jeopardize the validity of a program evaluation, compromising outcomes that affect service delivery and child welfare.
Second, Baker's decision to estimate the expected population deviation rate at 3% without current empirical support raises methodological concerns. Relying solely on prior audits while ignoring recent operational changes can introduce significant bias. In program evaluations, researchers likewise need the most recent and relevant data available to inform their sampling designs.
Additionally, the choice to use discovery sampling reflects a misunderstanding of that method's purpose. Discovery sampling is designed to detect at least one instance of a rare or critical deviation when the expected rate is close to zero; it does not estimate a population deviation rate and therefore cannot, on its own, support an assessment of control risk. Misapplying the method skews results, much as a poorly framed evaluation question in social work PhD research can produce misleading findings.
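To make that distinction concrete, discovery sampling answers a deliberately narrow question: how many items must be examined to have a given probability of finding at least one deviation if the true rate is at or above some critical level. The minimal sketch below shows that calculation; the 1% critical rate and 95% detection probability are illustrative assumptions rather than figures from the case.

```python
import math

def discovery_sample_size(critical_rate: float, detection_prob: float) -> int:
    """Smallest n such that P(at least one deviation in n items) >= detection_prob,
    assuming the true deviation rate is critical_rate.
    Solves 1 - (1 - p)**n >= detection_prob for n."""
    return math.ceil(math.log(1 - detection_prob) / math.log(1 - critical_rate))

# Illustrative values only: detect a critical deviation occurring at a 1% rate
# with 95% probability.
print(discovery_sample_size(critical_rate=0.01, detection_prob=0.95))  # 299
```

A plan like this says nothing about what the overall deviation rate actually is, which is why it cannot substitute for an attributes sampling plan when the goal is to estimate that rate and assess control risk.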
Sample Size Calculation Issues
Baker initially set the sample size at 80 based on assumptions about the expected deviation rate and the size of the population; when the population grew during the audit, he reacted by enlarging the sample to 100. This reflects a reactive rather than proactive approach, one that can lead to an inadequate risk assessment and faulty conclusions about control risk in the company's shipping and billing processes. Joan, similarly, must settle her sample sizes and any adjustment procedures in advance to ensure that her findings about the foster care training program are reliable.
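For contrast, a conventional attributes sampling plan fixes the sample size in advance from three planning inputs: the acceptable risk of overreliance (the complement of the reliability level), the tolerable deviation rate, and the expected population deviation rate. The sketch below approximates the logic behind the published sample-size tables using the binomial distribution; rounding the expected number of deviations up to a whole number, and the use of scipy, are simplifying choices rather than anything prescribed in the case.

```python
import math
from scipy.stats import binom

def attributes_sample_size(tolerable_rate: float,
                           expected_rate: float,
                           risk_of_overreliance: float,
                           max_n: int = 2000) -> int:
    """Smallest n such that, if the true deviation rate equaled the tolerable
    rate, observing no more than the expected number of deviations would be
    at least as unlikely as the acceptable risk of overreliance."""
    for n in range(1, max_n + 1):
        expected_deviations = math.ceil(expected_rate * n)
        if binom.cdf(expected_deviations, n, tolerable_rate) <= risk_of_overreliance:
            return n
    raise ValueError("expected rate is too close to the tolerable rate")

# Baker's planning figures: 20% tolerable rate, 3% expected rate, 95% reliability.
print(attributes_sample_size(tolerable_rate=0.20,
                             expected_rate=0.03,
                             risk_of_overreliance=0.05))
```

The point is that the planning parameters alone determine the figure before any items are examined; mid-audit adjustments like Baker's jump from 80 to 100 items signal that the plan was not specified carefully enough at the outset. The same discipline applies to Joan's choice of sample sizes for her comparison groups.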
Reliability Levels and Their Implications
Baker's stated reliability level of 95% introduces another misstep. At that level of confidence, the observed sample deviation rate must be increased by an allowance for sampling risk before it is compared with the tolerable rate; relying on the raw sample rate alone overstates the assurance the sample provides about control risk. The parallel for Joan is the need to set realistic reliability (confidence) levels in her own analyses, since her findings could carry real-world policy implications for child welfare practice.
Dealing with Deviations
Baker identified eight deviations in his sample of 100 but excluded a $9 billing error as immaterial. In attributes sampling, a deviation counts regardless of its dollar amount, so dismissing one on materiality grounds understates the sample deviation rate and can mislead the final conclusion. Joan should likewise establish clear, pre-specified criteria for what counts as a meaningful finding in her study; an unexamined deviation in her data could mirror an overlooked problem in the training's implementation and lead to misguided changes to the foster parent training program.
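The evaluation step can be made concrete by computing the achieved upper deviation limit implied by the full count of eight deviations and comparing it with the tolerable rate. The sketch below does this with a simple bisection on the binomial distribution; the use of scipy and the convergence tolerance are implementation choices, not anything prescribed in the case.

```python
from scipy.stats import binom

def upper_deviation_limit(n: int, deviations: int, risk: float) -> float:
    """Smallest deviation rate p such that observing `deviations` or fewer
    items in a sample of n would occur with probability <= risk.
    Found by bisection on the binomial CDF, which decreases in p."""
    lo, hi = deviations / n, 1.0
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if binom.cdf(deviations, n, mid) <= risk:
            hi = mid
        else:
            lo = mid
    return hi

TOLERABLE_RATE = 0.20          # Baker's (questionably high) planning figure
limit = upper_deviation_limit(n=100, deviations=8, risk=0.05)
print(f"sample deviation rate: {8 / 100:.1%}")
print(f"achieved upper limit at 95% reliability: {limit:.1%}")
print("upper limit is below the tolerable rate"
      if limit <= TOLERABLE_RATE else
      "upper limit exceeds the tolerable rate")
```

A comparison like this only means as much as the planning parameters behind it; with a tolerable rate as generous as 20%, an upper limit that falls below it does not by itself justify reliance on the controls, which is precisely the earlier concern about Baker's planning assumptions.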
Conclusions and Future Directions
For Joan and her dissertation project, the key takeaway from Baker's auditing practices is the importance of disciplined sampling and rigorous evaluation design. As she selects instruments and sampling strategies, she must articulate clearly how success and error will be defined and measured. Ultimately, both Baker's statistical sampling in auditing and Joan's investigation of the foster care training program's effectiveness rest on sound methodological principles, so that the conclusions can withstand scrutiny and have a positive impact on vulnerable populations.