“What Do I Do with the Data Now?”: Analyzing Assessment Information for Accountability and Improvement

Suzanne L. Pieper, Keston H. Fulcher, Donna L. Sundre, and T. Dary Erwin   |   Volume Three

Abstract

Most colleges and universities have implemented an assessment program of some kind in an effort to respond to calls for accountability from stakeholders as well as to continuously improve student learning on their campuses. While institutions administer assessment instruments to students and receive reports, many campuses do not reap the full benefits of their assessment efforts. Often this is because the data have not been analyzed in ways that answer the questions that matter most to the institution and other stakeholders. This paper describes four useful analytical strategies, each focused on a key educational research question: (a) Differences: Do students who participate in a course or program learn or develop more than students who do not? (b) Relationships: What is the relationship between student assessment outcomes and relevant program indicators (e.g., course grades, peer ratings)? (c) Change: Do students change over time? (d) Competency: Do students meet our expectations? Each strategy is described, followed by a discussion of its advantages and disadvantages. These strategies can be effectively adapted to the needs of most institutions. Examples from the general education assessment program at James Madison University are provided.
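To make the four questions concrete, the following is a minimal sketch, assuming hypothetical score data and standard significance tests from SciPy; it illustrates one common way each question might be analyzed and is not the analysis reported in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical assessment scores, for illustration only.
    participants = rng.normal(75, 10, 100)        # students who completed the program
    nonparticipants = rng.normal(70, 10, 100)     # students who did not
    pretest = rng.normal(65, 10, 100)             # same cohort at entry
    posttest = pretest + rng.normal(5, 5, 100)    # same cohort after instruction
    course_grades = 2.0 + 0.02 * participants + rng.normal(0, 0.3, 100)
    cut_score = 70                                # faculty-set competency expectation

    # (a) Differences: independent-samples t-test comparing participants to non-participants.
    t_diff, p_diff = stats.ttest_ind(participants, nonparticipants)

    # (b) Relationships: Pearson correlation between assessment outcomes and course grades.
    r_rel, p_rel = stats.pearsonr(participants, course_grades)

    # (c) Change: paired t-test on the same students' pretest and posttest scores.
    t_change, p_change = stats.ttest_rel(posttest, pretest)

    # (d) Competency: one-sample t-test of the group mean against the cut score.
    t_comp, p_comp = stats.ttest_1samp(participants, cut_score)

    print(f"Differences:  t = {t_diff:.2f}, p = {p_diff:.3f}")
    print(f"Relationship: r = {r_rel:.2f}, p = {p_rel:.3f}")
    print(f"Change:       t = {t_change:.2f}, p = {p_change:.3f}")
    print(f"Competency:   t = {t_comp:.2f}, p = {p_comp:.3f}")

In practice, such significance tests would typically be accompanied by effect sizes and checks of test assumptions before drawing conclusions about a program.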


