Optimizing Evaluations in Health Education Programs
Central to every health education program is the desire to improve the health of the priority population. Through a robust planning process, health education and promotion specialists can identify priority health issues and create unique health programs tailored to the specific needs and capacities of the targeted community. When a comprehensive evaluation process is embedded in the program and aligned with its goals and objectives, program effects can be documented across individuals, organizations, cultures, and systems (Centers for Disease Control and Prevention [CDC], 2012). This process must be established in the initial planning phases, with significant stakeholder input, to determine the purpose of the program, the intended outcomes, the evaluation design, and the data collection methods (McKenzie et al., 2013). This paper examines the benefits of developing a sound evaluation process, utilizing experimental design when feasible, for optimizing health program evaluation and furthering the art and science of health education and promotion.
Effective Experimental Design
Experimental designs, which measure both a randomized experimental group and a control group, provide the strongest empirically based evidence of program effectiveness (McKenzie et al., 2013). While not every factor that affects program outcomes can be controlled, experimental designs are the most appropriate choice when feasible, as they yield a robust data set that typically includes pretest and posttest information from at least two groups (Issel, 2014). This comparative data allows analysts to isolate variables and potentially draw credible causal conclusions, beyond what a quasi-experimental or nonexperimental design can support (McKenzie et al., 2013). Health evaluation designs should be proportionate to the scope of program resources, including funding, staffing, and operational capacity (CDC, 2012). When feasible, an experimental design built on a sound planning model will provide stakeholders with the necessary formative and summative evaluation information, which may serve multiple purposes within the community and contribute beyond it to the larger public health body of knowledge. Stakeholders, including staff members, those receiving services, sponsors, funding agencies, and the primary users of the evaluation data, must be engaged in the process to ensure their perspectives are addressed, which may create multiple evaluation considerations (CDC, 2012). As the program is developed, a clear and concise mission, coupled with specific goals and objectives, should further support evaluation design decisions based on the program's primary intent.
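To make the comparative logic concrete, the following sketch illustrates the pretest-posttest comparison described above. It is a minimal, hypothetical example in Python; the scores, group sizes, and variable names are illustrative placeholders, not data from any cited program.

```python
# Minimal sketch of an experimental-design comparison: the estimated program
# effect is the difference in mean pretest-to-posttest change between a
# randomized experimental group and a control group. All scores are hypothetical.
from statistics import mean, stdev

# Illustrative health-knowledge scores (0-100) at pretest and posttest
experimental = {"pre": [52, 48, 61, 55, 58], "post": [71, 66, 78, 70, 74]}
control = {"pre": [50, 53, 59, 47, 56], "post": [54, 55, 60, 49, 58]}

def change_scores(group):
    """Per-participant change from pretest to posttest."""
    return [post - pre for pre, post in zip(group["pre"], group["post"])]

exp_change = change_scores(experimental)
ctl_change = change_scores(control)

# Randomization is what supports attributing this difference to the intervention.
effect = mean(exp_change) - mean(ctl_change)

print(f"Experimental mean change: {mean(exp_change):.1f} (SD {stdev(exp_change):.1f})")
print(f"Control mean change:      {mean(ctl_change):.1f} (SD {stdev(ctl_change):.1f})")
print(f"Estimated program effect: {effect:.1f} points")
```

In practice, analysts would pair this comparison with an appropriate statistical test and report uncertainty; the sketch only shows why the control group's change is subtracted out before any effect is claimed.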
Purpose and Value of Experimental Design
Program evaluation is a critical component of health programming. Understanding the purpose or purposes of the process allows stakeholders to justify the associated resource allocation and support the effort. When considering experimental design, program planners often seek to determine program effectiveness as measured by the stated goals and objectives: did the program have the capacity to effect change as designed, and can the change be attributed to the program's delivery (CDC, 2012)? With the comprehensive data available from the randomized experimental process, health planners can evaluate the program based on the outcomes central to many health education and promotion programs.
If the primary purpose of evaluation is to determine whether objectives related to improved health status have been achieved, a secondary purpose when using an experimental design is the creation of empirically sound data that contributes to the scientific base for community public health interventions (McKenzie et al., 2013). A primary benefit of using the more complex and costly experimental design is the value of the information collected on the target population. One of the key principles of public health is that programs should be grounded in science (Turnock, 2011). By designing programs that follow the principles and procedures for empirically sound program evaluation, health planners can use the data for the primary purpose of determining program value while contributing to the body of knowledge that moves health education and promotion forward (McKenzie et al., 2013). This cyclical nature of health program planning allows for mutually beneficial information sharing, with purposeful evaluation embedded in the process providing quality data for program stakeholders and the community at large.
Evaluation design decisions have a significant effect on the program participants, the organizations that are involved and that benefit from the evaluation data, and the systems that influence health from a community, societal, or global standpoint (CDC, 2012). Establishing best practices with the use of appropriate designs and methods is key to contributing credible data to the body of knowledge that is used to create efficient and effective programs. These critical health programs and the supporting stakeholders successfully influence health behaviors that lead to reduced morbidity and mortality (CDC, 2012). Experimental design offers the greatest potential to accomplish these overarching health education and promotion goals.
Assessing Implementation Recommendations
When stakeholders are engaged in the planning process, multiple perspectives emerge that must be weighed for feasibility within the scope of the projected health program capacity. Planners may create a set of criteria for assessing recommendations, based on principles of successful program planning. Of primary importance is the need to objectively assess the merit, worth, and significance of each recommendation to determine its alignment with the program goals (Koplan et al., 1999). Assessing merit involves identifying the quality of the activity or program element, based on the value it brings to the program: does it fit with the standards and principles established at the onset of planning? A second consideration is worth, or the efficiency the recommendation adds to the program: is it a cost-effective element that contributes to the evaluation of the goals and objectives? A final consideration is the significance of the recommendation as it relates to the size and scope of the program: is it an important element that was overlooked in the early planning stages, or perhaps identified in a formative evaluation?
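One way to operationalize this screening is a simple weighted rubric applied to each recommendation. The sketch below is illustrative only; the weights, 1-5 ratings, acceptance threshold, and example recommendations are hypothetical placeholders that a planning team would define with its stakeholders.

```python
# Minimal sketch of a merit / worth / significance screening rubric.
# All weights, ratings, and the threshold are hypothetical examples.

WEIGHTS = {"merit": 0.4, "worth": 0.35, "significance": 0.25}
ACCEPT_THRESHOLD = 3.5  # weighted score (1-5 scale) needed to move forward

def score_recommendation(name, ratings):
    """Return the weighted score and a screening decision for one recommendation."""
    total = sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)
    decision = "consider for implementation" if total >= ACCEPT_THRESHOLD else "defer"
    return name, round(total, 2), decision

# Hypothetical recommendations rated by the planning team
recommendations = {
    "Add peer-educator training module": {"merit": 5, "worth": 4, "significance": 4},
    "Expand program to a second county": {"merit": 4, "worth": 2, "significance": 3},
}

for name, ratings in recommendations.items():
    print(score_recommendation(name, ratings))
```

A rubric of this kind does not replace stakeholder deliberation; it simply makes the merit, worth, and significance judgments explicit and comparable across recommendations.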
These considerations depend on an understanding of the resources and expertise identified in the assessment phase of the program. Stakeholders who are engaged in the process may identify beneficial recommendations that align with the program goals and objectives, but the scope of the project, the available resources, or the timeline may not allow for additional program elements. Keeping the lines of communication open throughout the duration of the program is key to providing a positive experience for program partners who may offer recommendations after the program has been established. Modifications based on formative evaluations may address some of the additional concerns, with planners providing feedback at regular intervals to avoid conflict (Issel, 2014).
Rationale for Evaluation Focus
Empirically sound health programs have evaluation integrated into the planning process to monitor program performance and quality, and to ensure the program remains aligned with the goals and objectives identified by the stakeholders involved in planning (McKenzie et al., 2013). Outputs and outcomes must be evaluated against appropriate standards of acceptability, following best practices and ethical guidelines. Without an instrument to measure the process, planners are unable to provide empirical data that is grounded in science and beneficial to health professionals. When planners value the process and follow the evaluation procedures, evaluation becomes an integrated program element, supporting the rationale that it is critical for measuring outcomes effectively.
Identifying and Overcoming Barriers
Health planners face multiple barriers when incorporating effective evaluations into the planning process. In some cases, planners create the very barriers that prevent effective evaluation when they fail to include evaluation in the planning process or fail to allocate appropriate resources (McKenzie et al., 2013). Outside factors may also prevent effective evaluations from occurring, such as organizational restrictions, policy constraints, or limits on the time allotted. Stakeholders may influence the evaluation, with increased risk of bias, through intentional or unintentional involvement outside their scope of expertise. Additionally, planners or other stakeholders may choose to avoid the process if the intended outcomes are not considered successful, which can discredit health professionals who perceive the lack of success as a threat. When the evaluation process is determined to be too complicated for the program staff, owing to the complexity of the intervention strategies, it may be eliminated or reduced to the point of lacking empirical value (McKenzie et al., 2013). These conditions and others related to operational program factors require strategic solutions that identify the causes early in the planning stages and prevent a recurrence in future health programs.
When planners engage stakeholders from the program’s inception, they can greatly affect the perceived value of evaluation as an integral program component (CDC, 2011). A focused program design that clearly establishes a set of standards, with stakeholder input, can overcome barriers created by program partners and participants. From a professional standpoint, health education and promotion specialists can build greater credibility when they demonstrate the ability to appropriately design programs with evaluation as a key component (CDC, 2011). To prevent the possibility of evaluation failure due to insufficient data collection, “it is necessary to adhere to the highest level of rigor possible given the programmatic realities” (Issel, 2014, p. 423). This may include outsourcing the process to an organization with the necessary expertise.
Conclusion
Health education specialists have a professional obligation to develop the skills necessary to conduct effective evaluations and design evidence-based programs (National Commission for Health Education Credentialing, Inc. [NCHEC], 2010). The roles and responsibilities may vary from program to program, but the core set of skills must be present to lead stakeholders through the process of identifying priority health issues and developing appropriate health education or promotion programs with valid goals and measurable objectives. Multiple skill sets are needed for effective communication, program and resource management, operational oversight, and validation of evaluation processes that affect program outcomes (McKenzie et al., 2013).
Health education specialists must also lead evaluation design and methodology, providing stakeholders with an understanding of the significance and purpose. In essence, they must be artful communicators, establishing relationships with program partners, while their actions and decisions are grounded in science to ensure credibility and program quality. Planners must be committed to resource optimization, creating cost-effective strategies to deliver the most robust program, while remaining aware of the greater good that results from empirically sound program development, implementation, and evaluation.
References
Centers for Disease Control and Prevention (CDC). (2011). Evaluation steps. Retrieved from http://www.cdc.gov/eval/steps/index.htm
Centers for Disease Control and Prevention (CDC). (2012). Improving the use of program evaluation for maximum health impact: Guidelines and recommendations [PDF]. Retrieved from http://www.cdc.gov/eval/materials/finalcdcevaluationrecommendations_formatted_12041.pdf
Issel, L. M. (2014). Health program planning and evaluation: A practical, systematic approach for community health (3rd ed.). Burlington, MA: Jones and Bartlett Learning.
Koplan, J. P., Milstein, R., & Wetterhall, S. (1999). Framework for program evaluation in public health. MMWR: Recommendations and Reports, 48, 1-40.
McKenzie, J., Neiger, B., & Thackeray, R. (2013). Planning, implementing, and evaluating health promotion programs: A primer. Glenview, IL: Pearson Education.
National Commission for Health Education Credentialing, Inc. (NCHEC). (2010). Areas of responsibilities, competencies, and sub-competencies for the health education specialists 2010. Retrieved from http://www.nchec.org/_files/_items/nch-mr-tab3-110/docs/areas%20of%20responsibilities%20competencies%20and%20sub-competencies%20for%20the%20health%20education%20specialist%202010.pdf
Turnock, B. (2011). Essentials of public health. Sudbury, MA: Jones & Bartlett Publishers.