Evaluation Design. USD 489 will utilize the e4E Evaluation Tool designed by Southwest Plains Regional Service Center (SWPRSC) for teacher evaluation. *See forms in Appendix.
1. Purpose: The four (4) Elements of an Effective Educator’s Evaluation tool will
a. Serve as a means to improve the effectiveness of the Educator by identifying strengths and areas for improvement
b. Provide continued evaluation of an Educator’s teaching practice
c. Assist educators in self-reflection to improve teaching practices
d. Provide Administrators with information to aid in personnel decisions
e. Serve as a process for Educator development and Administrator professional growth; and
f. Align with local and state goals for improved Educator performance.
2. Design: The e4E tool comprises four elements:
a. The Learner
b. The Knowledge Base
c. The Instruction
d. The Professional
3. All four elements are intertwined, and all are critical for effective Educator performance leading to increased student learning. Standards aligned with each element help implement the criteria listed under each. The Interstate Teacher Assessment and Support Consortium (InTASC) provided guidance on the areas to be measured by the Educator evaluation process.
4. The four elements represent the four main areas considered for evaluation. Within each element are standards specific to that element. Each standard has a set of rubrics identifying the criteria of that standard. The criteria in each rubric are separated into four levels of performance:
a. Novice Educator
b. Developing Educator
c. Proficient Educator
d. Distinguished Educator
5. The ‘Distinguished Educator’ level represents the peak performance of an Educator in a classroom, and it is our goal that all Educators aspire to reach this highest level of professional accomplishment.
Evaluation Design. The evaluation design will utilize a post-only assessment with a comparison group. The post-only period will begin when the current demonstration period begins and end when the current demonstration period ends.
Evaluation Design. The Evaluation Design shall include the following core components to be approved by CMS:
Evaluation Design. UN Women evaluations are gender-responsive, meaning that both the process and the analysis apply the key principles of a human rights-based approach: they are inclusive, participatory, and transparent; they ensure fair power relations; and they analyze the underlying structural barriers and sociocultural norms that impede the realization of women’s rights. Gender-responsive evaluation applies mixed methods (quantitative and qualitative data collection methods and analytical approaches) to account for the complexity of gender relations and to ensure participatory and inclusive processes that are culturally appropriate. UN Women evaluations are also utilization-focused, meaning that the evaluation will be tailored to the needs of the organization through a participatory approach from inception through the development of recommendations, which will facilitate the production of a useful evaluation. The evaluation will be carried out in accordance with the UNEG Norms and Standards, the UNEG Ethical Code of Conduct, the UN Women Evaluation Policy and guidelines, and the UNEG Guidance: Integrating Human Rights and Gender Equality in Evaluation. The evaluation should employ a non-experimental, theory-based approach; a re-constructed Theory of Change will be used as the basis for contribution analysis. The evaluation methodology will enable achievement of the evaluation purpose, be aligned with the evaluation approach, and be designed to address the evaluation criteria and answer the key questions through credible techniques for data collection and analysis.
Evaluation Design. The evaluation design will be based on a mixed methods approach. Both qualitative and quantitative methods will be used to address evaluation questions. The CDC’s updated guidelines for evaluating surveillance systems will be used to determine the efficacy of the surveillance system. The following five evaluation questions will be addressed:
Evaluation Design. The proposed methodology, strategies, or approach clearly supports attainment of the project purpose and objectives. Key activities and procedures to complete the project are clearly articulated and reasonable.
a) Review process: Describes the process for reviewing current AEOP evaluation outcomes, processes, and tools, including the alumni evaluation, and making modifications as needed.
b) Data collection and analysis: Describes the process for collecting both quantitative and qualitative data and how this data will be analyzed. Includes a description of how the applicant will collaborate with Consortium members to ensure maximum participation.
c) Data dissemination: Describes how data will be shared with Consortium members, including, but not limited to, evaluation reports and briefs, a data dashboard, and presentations.
Evaluation Design. Provide information on how the evaluation will be designed. For example, will the evaluation utilize a pre/post comparison? A post-only assessment? Will a comparison group be included?
Evaluation Design. The draft design must discuss the outcome measures that shall be used in evaluating the impact of the Demonstration during the period of approval. It shall discuss the data sources, including the use of Medicaid encounter data, and sampling methodology for assessing these outcomes. The draft evaluation design must include a detailed analysis plan that describes how the effects of the Demonstration shall be isolated from other initiatives occurring in the State. The evaluation designs proposed for each question may include analysis at the beneficiary, provider, and aggregate program level, as appropriate, and include population stratifications to the extent feasible, for further depth and to glean potential non-equivalent effects on different sub-groups. The draft design shall identify whether the State will conduct the evaluation, or select an outside contractor for the evaluation.
Evaluation Design. This evaluation employs post-only analyses. Because the FPW program was initiated over 20 years ago, a pre/post approach is not ideal. Because the majority of women eligible for the FPW program do not enroll in a given year, a relevant comparison group is available for several of the evaluation questions. Thus, this will be a post-only analysis with a comparison group, in which outcomes for FPW enrollees will be compared to outcomes for women who are eligible for FPW but do not enroll in the program.
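For illustration only, the following is a minimal sketch of how such a post-only comparison-group analysis might be implemented, assuming a hypothetical dataset with one row per eligible woman. The file name and column names (enrolled, outcome, age, county) are invented for this sketch and are not drawn from the FPW evaluation plan; an actual analysis would follow the approved analysis plan and data sources.

    # Illustrative sketch: post-only comparison of a binary outcome between
    # FPW enrollees and eligible non-enrollees. All names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("fpw_eligible_women.csv")  # hypothetical file

    # Logistic regression with enrollment as the exposure and basic
    # covariates to adjust for observable differences between enrollees
    # and the eligible-but-not-enrolled comparison group.
    model = smf.logit("outcome ~ enrolled + age + C(county)", data=df).fit()
    print(model.summary())

    # The coefficient on `enrolled` estimates the association between FPW
    # enrollment and the outcome. Because there is no pre-period, unobserved
    # selection into the program cannot be ruled out by this design alone.

Covariate adjustment (or matching) is a common way to strengthen a post-only design, but it addresses only observable differences between enrollees and non-enrollees, which is why the clause frames the non-enrollees as a comparison group rather than a randomized control.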