
Council for the Accreditation of Educator Preparation (CAEP)


CAEP QUALITY ASSURANCE
Jackson State University maintains a quality assurance system composed of valid data from multiple measures. We support continuous improvement that is sustained and evidence-based and that evaluates the effectiveness of our completers. We use the results of inquiry and data collection to establish priorities, enhance program elements and capacity, and test innovations to improve completers’ impact on P-12 student learning and development.
QUALITY AND STRATEGIC EVALUATION OF ASSESSMENTS GUIDELINES

The EPP uses the following strategies to ensure fairness, accuracy, consistency, and elimination of bias throughout its assessment system:

1. Assessment Development/Adoption: Assessment development/adoption may begin with a single faculty member designing a new assessment or reviewing existing assessments for inclusion as a key assessment at JSU. A newly designed or selected existing assessment must be aligned to InTASC, TGR, and SPA standards and reviewed by the Quality Assurance & Assessment Committee (QAAC) to ensure that it meets CAEP’s criteria for EPP-created assessments. Once this has occurred, the originating faculty member and/or the Coordinator of Assessment & Accreditation may present the assessment to the Teacher Preparation Taskforce (TPTF) for review and to the Professional Education Council (PEC) for approval.

2. Establish Content Validity of Key Assessments: Content validation is a process intended to provide assurance that an instrument measures the content area it is expected to measure. One way to achieve content validity is to have a panel of subject matter experts judge the importance of individual items within an instrument. We chose Lawshe’s method largely for its ease of use and because it has been widely used to establish and quantify content validity in diverse fields. A panel of subject matter experts rates each item in one of three categories: “essential,” “useful, but not essential,” or “not necessary.” Items deemed “essential” by a critical number of panel members are included in the final instrument, and items rated “not necessary” are discarded. A content validity ratio (CVR) is calculated for each indicator in the assessment rubric, and the average across all indicators gives the overall CVR of the instrument. The panel may keep a limited number of items rated “useful, but not essential” as long as the CVR of the entire instrument does not fall below the established threshold of .65. (A worked sketch of this calculation appears after this list.)

3. Establish Consistency of Key Assessments: Assessments are consistent when they produce dependable results, that is, results that would remain constant on repeated trials. JSU achieves this by providing training for faculty that promotes similar scoring patterns and a shared understanding of what the data mean. Faculty are divided into small teams and score samples of candidate work using key assessment rubrics. We document the percentage of absolute and adjacent agreement in each group and discuss scorers’ rationales in cases of adjacent agreement on an element of the rubric. 100% adjacent agreement is expected; where it is not reached, we retrain faculty on expectations for that element. This training is offered annually for new assessors. (A sketch of the agreement calculation appears after this list.)

4. Piloting of a New Assessment: Newly developed key assessments may be piloted in one or more courses by a professor prior to review by the QAAC and entry into TK20. An assessment, however, cannot become an EPP or programmatic key assessment without being formally adopted (i.e., reviewed by the QAAC and approved by the PEC).
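The sketch below illustrates the CVR calculation described in item 2. The formula CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item “essential” and N is the panel size, comes from Lawshe’s published method; the panel size, indicator count, and vote tallies here are hypothetical, and only the .65 instrument threshold is taken from the guidelines above.

    # Lawshe's content validity ratio for a single rubric indicator:
    #   CVR = (n_e - N/2) / (N/2)
    # where n_e = panelists rating the item "essential" and N = panel size.
    def cvr(essential_votes: int, panel_size: int) -> float:
        half = panel_size / 2
        return (essential_votes - half) / half

    panel_size = 10                      # hypothetical expert panel
    essential_votes = [9, 10, 8, 9, 7]   # hypothetical "essential" tallies per indicator

    indicator_cvrs = [cvr(n, panel_size) for n in essential_votes]
    instrument_cvr = sum(indicator_cvrs) / len(indicator_cvrs)

    print("Indicator CVRs:", [round(v, 2) for v in indicator_cvrs])
    print(f"Instrument CVR: {instrument_cvr:.2f}")  # must stay at or above .65

With these hypothetical tallies the instrument CVR is .72, so the instrument clears the .65 threshold even though one indicator (CVR = .40) might be flagged for revision.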
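Similarly, the following is a minimal sketch of the consistency check in item 3, assuming two scorers and a rubric where “absolute” agreement means identical scores and “adjacent” means scores one level apart; the raters and scores are hypothetical.

    # Absolute and adjacent agreement between two scorers across the
    # elements of a key assessment rubric.
    rater_a = [3, 4, 2, 4, 3, 1]   # hypothetical scores, one per rubric element
    rater_b = [3, 3, 2, 4, 2, 3]

    pairs = list(zip(rater_a, rater_b))
    absolute = 100 * sum(a == b for a, b in pairs) / len(pairs)
    adjacent = 100 * sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)

    print(f"Absolute agreement: {absolute:.0f}%")  # 50% for these scores
    print(f"Adjacent agreement: {adjacent:.0f}%")  # 83%: below 100%, so retrain

Here the last element differs by two rubric levels, so adjacent agreement falls below the expected 100% and would trigger retraining on that element.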


Tony Latiker, Ed.D.
Coordinator of Assessment and Accreditation
tony.t.latiker@jsums.edu
601.979.0300 (Work)