From the Editor – Begins with the End

Conceptually, this issue of Research & Practice in Assessment begins with its end. The concluding Ruminate section highlights an intercultural fable, “The Blind Men and the Elephant.” Nineteenth-century poet John Godfrey Saxe penned the rendition that is familiar to most Western cultures:

It was six men of Indostan to learning much inclined,
Who went to see the Elephant (though all of them were blind),
That each by observation might satisfy his mind.

In the stanzas that follow, Saxe offers readers the observations and corresponding assessments made by each blind man. Two men assert that the matter is “mighty plain,” one “bawls” his deduction aloud, and three others conclude they are able to “see.” One subject. Six individuals. Six perspectives. Similarly, the current issue puts forth intentionally diverse views on higher education assessment. Recent structural changes to the Editorial and Review Boards are designed to sustain diversity of thought and promote rich assessment discourse among colleagues.

This issue opens with a provocative special feature penned by Wake Forest University sociologist Joseph Soares, who argues for the further development and use of predictively powerful tests that lack social prejudice. The piece is adapted from his latest book, SAT Wars, an edited volume that examines the social effects of high-stakes standardized testing. Three peer-reviewed articles follow. Zilberberg, Anderson, Swerdzewski, Finney, and Marsh examine college students’ understanding of federal accountability testing and its impact on their testing behavior. From there, Hoffman and Bresciani examine knowledge, skill, and dispositional competencies among student affairs professionals. Finally, Erwin employs a longitudinal design to link alumni self-ratings of personal growth with intellectual development.

In the latter half of the issue, I encourage readers to peruse the Review and Notes In Brief sections. Lagotte offers commentary on Good Education in an Age of Measurement, a penetrating work for assessment professionals. Mahiri’s review of Science Learning and Instruction revisits the topic of knowledge integration and, in the first RPA software review, Gotzmann and Bahry focus on the free item analysis application jMetrik. Within the Notes In Brief section, practitioners and scholars alike may appreciate Zelna and Dunstan’s annotated list of selected assessment conferences. Ruminate closes this diverse issue (as it began) with a symbiotic display of image and prose by Basbagill and Saxe.

In the past six months, RPA has taken yet another qualitative leap through publication improvements and website development. I am indebted to various leaders of the Virginia Assessment Group, past and present, who provided the necessary resources and assistance to accomplish these changes: Kathryne Drezek McConnell, Keston Fulcher, Robin Anderson, and the current board members. The comprehensive website redesign could not have been accomplished without the exceptional talents of Katelynn Stein and Patrice Brown.
A final note of commendation is in order for the RPA Editorial and Review Board members; your level of involvement and mentoring during the peer review process has been admirable. As you engage the pieces contained herein, I hope you will consider penning your own scholarly piece for submission to Research & Practice in Assessment.

Regards,

Joshua Brown

Liberty University

For Tests that are Predictively Powerful and Without Social Prejudice

Joseph Soares is Professor of Sociology at Wake Forest University. This article is adapted from his latest book, SAT Wars: The Case for Test-Optional Admissions (Teachers College Press, 2011), an edited volume that examines the social effects of high-stakes standardized testing. Additional contributors include Richard Atkinson, Thomas Espenshade, Daniel Golden, Charles Murray, and Robert Sternberg, among others.

Growing Up with No Child Left Behind: An Initial Assessment of the Understanding of College Students’ Knowledge of Accountability Testing

Despite extensive testing under federal accountability mandates, college students’ understanding of federal accountability testing (e.g., No Child Left Behind, Race to the Top, Spellings) has not been examined, resulting in a lack of knowledge regarding how such understanding (or lack thereof) impacts college students’ behavior on accountability tests in higher education contexts. This study explores college students’ understanding and misconceptions of federal accountability testing in K-12. To this end, we crafted nine multiple-choice items, each with four distracters, and piloted these items with two college student samples. The results indicated that college students tend to be moderately confident in their responses regardless of the accuracy of the response. These findings imply that educating students on the purpose and process of accountability testing will require not only imparting correct information, but also debunking misconceptions.

Identifying What Student Affairs Professionals Value: A Mixed Methods Analysis of Professional Competencies Listed in Job Descriptions

This mixed methods study explored the professional competencies that administrators expect from entry-, mid-, and senior-level professionals as reflected in 1,759 job openings posted in 2008. Knowledge, skill, and dispositional competencies were identified during the qualitative phase of the study. Statistical analysis of the prevalence of competencies revealed significant differences between major functional areas and requirements for educational and work experience. Implications for institutional leaders, graduate faculty, and professional development planning, as well as for mixed methods research, are discussed.

Intellectual College Development Related to Alumni Perceptions of Personal Growth

Alumni self-ratings of their personal growth were linked to their intellectual development during college four to seven years earlier. Graduates who were satisfied with their personal growth in the arts, creative thinking, making logical inferences, learning independently, exercising initiative, and tolerating other points of view had higher intellectual scores in Commitment and Empathy as undergraduates years earlier. These findings support a relationship between college student intellectual development and alumni perceptions of their personal growth. The implications of this study support continuing the already widespread custom of querying graduates about their earlier education, and they add to the validity of the Scale of Intellectual Development as a measure of college impact on personal dispositions.

Review of “Good Education in an Age of Measurement”

In Good Education in an Age of Measurement, Gert J.J. Biesta argues that analysis of what constitutes a “good” education demands more than the evidence-based, “best practice” paradigm currently offers. Furthermore, the narrow perspective of assessing learning outcomes may prove detrimental to education toward a deeply democratic society. Although not exactly the type of insight assessment researchers might welcome, Biesta’s thoughtful critique can ultimately enhance the ways scholars evaluate the quality of education. Biesta reinvigorates discussions about what constitutes a good education, specifically the purpose of education. Concerned about a lack of attention to purposes in the research literature, Biesta puts this issue front and center. His inquiry includes a normative perspective rather than only a managerial focus on education as a technique. That is, he produces a conceptual framework for why we ought to focus on particular educational goals. To this end, Biesta provides a three-pronged framework for education, which should highlight a distinct outcome: producing a deliberative democratic order of increasing equity.

Review of “Science Learning and Instruction”

In Science Learning and Instruction: Taking Advantage of Technology to Promote Knowledge Integration, Linn and Eylon make a critical shift in the focus of assessment. In the quest to assess what students learn, they show why we must also assess how they learn. These researchers argue that this approach to assessment can substantively increase the quality of student knowledge when embedded in a process called “knowledge integration” (KI). They also demonstrate how KI can improve the quality of science learning overall when instruction, assessment, professional development, and school leadership are systematically aligned. These ameliorative possibilities begin with a simple premise: “Everyone can learn science” (p. ix).

Review of “jMetrik”

Technology, particularly software for computing measurement statistics, is currently a major emphasis for users. Measurement statistics, used in classical test theory (CTT) and item response theory (IRT), have been elusive for some users, as the measurement concepts are complex and the time required to understand them is substantial (Lord, 1980; Lord & Novick, 1968). However, users across many content disciplines are developing their understanding and applying these methodologies to new areas (e.g., medical education, psychology). As a result, the needs of researchers and applied practitioners have changed and now require accessible tools for applying psychometrics. Reliance on specialized or esoteric software has been the norm; however, according to Drasgow, Luecht, and Bennett (2006), “Technology offers solutions to many of the challenges faced by testing programs” (p. 471). That is, technology may make many psychometric analyses accessible to broader audiences, so that users of all levels of expertise can take advantage of advances in educational measurement.
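To make the kinds of classical test theory statistics mentioned above concrete, the sketch below computes two of the most common item analysis indices that tools such as jMetrik report: item difficulty (the proportion of examinees answering correctly) and a point-biserial discrimination index. This is an illustrative example only, not jMetrik’s code or interface, and the response data are invented for demonstration.

```python
# Illustrative CTT item analysis (not jMetrik itself); data are fabricated.

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (CTT p-value)."""
    return sum(responses) / len(responses)

def point_biserial(item, totals):
    """Correlation between 0/1 item scores and total test scores,
    a common CTT discrimination index (higher = more discriminating)."""
    n = len(item)
    mean_i = sum(item) / n
    mean_t = sum(totals) / n
    cov = sum((i - mean_i) * (t - mean_t) for i, t in zip(item, totals)) / n
    var_i = sum((i - mean_i) ** 2 for i in item) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    return cov / (var_i ** 0.5 * var_t ** 0.5)

# Five examinees: scored responses to one item (0 = wrong, 1 = right)
# and their total test scores.
item = [1, 0, 1, 1, 0]
totals = [27, 15, 30, 22, 18]

print(round(item_difficulty(item), 2))        # 0.6
print(round(point_biserial(item, totals), 2))
```

A difficulty near 0.6 indicates a moderately easy item, and a strongly positive point-biserial indicates that examinees who answer the item correctly also tend to score higher overall, which is the pattern an item analyst hopes to see.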

Finding the Right Fit: Choosing an Assessment Conference

There was a time when gaining the training and skills needed to conduct quality learning assessment could be a challenge. Fortunately, this is no longer the case: formal educational programs are now available through some graduate schools, and there are many outstanding assessment conferences. Faculty, staff, and administrators sometimes struggle when researching options for continuing education. The following pages outline the factors to consider when selecting an event, as well as an abbreviated list of regional and national conferences.