Monday, October 13, 2014

Measuring the Unmeasurable, or, How Assessment Substantively Constrains Missions

Measurableness is next to godliness, or at least so it seems in higher education. UVU's accreditor, the Northwest Commission on Colleges and Universities (NWCCU), demands that we assess mission fulfillment based on "analysis of meaningful, assessable, and verifiable data" and conduct "evidence-based assessment of its accomplishments."

In principle, some measurement can be obtained for any goal. In this sense, accreditation and assessment in higher education are hyperpositivist. The logical positivist philosophers at the turn of the 20th century argued that the only things that existed were those that could be observed, directly or indirectly; all else was metaphysics. That view informed the logic of social scientific research after World War II, particularly in education.

Accreditation and assessment as we currently practice them go beyond that claim: that which is real in the management of higher education institutions is—and is only—that which is measurable, whether quantitatively or qualitatively. With apologies to Mr. Justice Stewart, seeing it isn't enough. One must be able to assign some sort of value to it.

Pragmatically there may be no difference between the two. To say that something was or was not observed is itself a binary measurement. But the rhetoric is important: measurement is more rigorous than mere observation, and it is the rigor of the process, not its ontological foundation, that legitimizes this approach to managing educational institutions. To propose that some things may not be measurable is thus to challenge the fundamental legitimacy of educational management.

(You don't think I am going to pass up an opportunity for that, do you?)


Let me suggest two ways in which an objective related to an institution's mission fulfillment might be unmeasurable. The first is that the objective itself may be, in fact, unobservable. Consider a rather simple example that arose in a conversation I had with information ethicist Karla Carter about whether a sense of humor can be measured:
My sense (now; at the time I was arguing to the contrary) is that one cannot measure the presence or absence of a sense of humor, only whether people find a particular thing funny. To generalize from that requires us to assume that the joke was good and that it discriminates between those with and without senses of humor. That seems dubious.

Another way in which something might be unmeasurable is when an apparatus needed to measure it is absent. In the absence of a thermometer I can make only a less precise measurement of temperature: cold, cool, warm, hot, or profanity-inducing. The redshift of faraway galaxies can be measured only if we have some way to measure their distance and the spectra of the light coming from them.

Both of these cases have something in common: measurement requires that the measurer bring something to that which is measured, be it the assumption that the Dead Parrot Sketch is funny or the Mount Wilson Observatory. Without these, measurement is impossible. We might call these things conditions that are measurable only with auxiliaries.

Auxiliaries are not themselves problems; if we think hard about it, there are probably precious few things that can be observed in the absence of something that might be called an auxiliary, especially in higher education assessment. But what do we bring to assessment when we bring auxiliaries? This can be exceptionally problematic.

In the case of assumptions, we very quickly start measuring much more than we claim to be measuring. At the Rocky Mountain Association of Institutional Research conference in September, I took part in a panel discussion of measuring outcomes in secular and religious institutions. One thing the panel took up was the fact that many institutions of both types have, as part of their missions, the development of students as ethical people.

I was joined by, among others, Roy Atwood, president of New Saint Andrews College in Idaho. His institution's mission is, foremost, "to graduate leaders who shape culture through wise and victorious Christian living." UVU offers a good example of an ethical mission in the public sector: one of the objectives that defines student success for us is that students will become "leaders, people of integrity, and stewards of their communities."

Those conditions are clearly not measurable in themselves. But the two institutions solve this problem in very different ways. New Saint Andrews clarifies what "wise and victorious Christian living" means with a Statement of Faith rooted in the Christian Reformed theological tradition. The consequence for them, then, is that they have changed what they are assessing: not Christian living, but living in accordance with the theological principles of a specific Christian denomination.

UVU's alternative is perhaps more ecumenical, but it is certainly worse as a matter of assessment: we ignore that part of the objective. Our assessment of the objective does measure professional and academic success. But we punt on leadership. We don't study whether our students have integrity. We aren't interested in what they do for their communities. In short, when it comes to assessment, the objective ends before we get to the ethical dimensions of success.

This is almost certainly true of any effort to create assessable ethical outcomes. But many other outcomes are problematic in this way. Critical thinking, a remarkably common objective, is a very good example. There is no effective test of critical thinking processes that does not also exclude some substantive answers. The controversy over a school's use of Holocaust denial on a critical thinking test is a good example: no answer that reflects critical thinking could lead to the conclusion that the Holocaust did not happen. But if some answers are proscribed from the outset so that we can make a reliable measurement of critical thinking skills, the measurement process isn't measuring critical thinking but rather substantive knowledge. The student who parrots the unquestionably sound arguments against Holocaust deniers appears no different from the student who has reached the same conclusion by thinking critically about those arguments.

In these kinds of missions, there will be no outcome specific enough to measure reliably that does not constrain students to a specific range of substantive viewpoints. The choice is to impose a viewpoint or to let the objective go unmeasured. The auxiliary is necessary for the objective to exist in the world of assessment; of course, if it isn't measured, it isn't there. But when we impose these auxiliaries, we defeat the purpose of the objective: we can't distinguish between students who are ethical or good critical thinkers and ones who simply know how to give the acceptable answers.

The structural auxiliaries are equally problematic. An institution's mission often includes efforts that are in principle measurable, but the structures (most commonly technologies or organizational arrangements) needed to measure them are lacking. UVU's effort to measure one of its objectives related to engagement is a case in point. We want to "foster partnerships and outreach opportunities that enhance the regional, national, and global communities." The question, then, is what it takes to measure that commitment.

Here we find a fundamental contradiction: the more central an activity is to how an institution practices its mission—that is, the more the institution actually fulfills the mission organically, throughout the organization—the more difficult it is to reliably measure that activity. At UVU, engagement takes place throughout the institution. While engaged courses can carry a designation in the curriculum that makes them easily counted, extra-curricular activities are exceptionally decentralized. The Honors Program takes students to UVU's Capitol Reef Field Station to work in collaboration with the National Park Service. The Center for Constitutional Studies sponsors Constitution Day, bringing in speakers from the community. The Equity in Education Center sponsors "Empowering Your Tomorrow," a conference to encourage interest in STEM fields among middle and high school...boys. (Yup. No further comment; don't shoot the messenger.)

Clearly, UVU is fostering quite a lot of partnerships and outreach opportunities that enhance those communities. But "we do a lot of that" isn't measuring. We need a number, or at least a list. And getting that when a function is decentralized is next to impossible. Many low-profile opportunities, for example small projects that involve only a professor and a few students or those initiated and operated independently by students, will go unnoticed by those responsible for measuring them. The pool of potential actors is too wide, their incentive to let those responsible for assessment know about their activities too negligible.

If all of these functions had to go through the Center for Engaged Learning for funding or approval, the Center could simply generate an annual report on engagement. We could, then, meet our assessment needs by centralizing these functions. But that raises questions about carts and horses, and those questions are not simply procedural.

A Centralized Center for Measuring Previously Decentralized Functions (led, of course, by an Associate Vice President for Central Measurement of PDFs) would change these functions in two ways. The first doesn't seem like a bad idea in principle: it imposes a centralized vision of that aspect of the institutional mission where a decentralized organization permits a pluralistic view. That is a tradeoff between creativity and diversity on the one hand and coordination and collaboration on the other. I don't see a principled reason to prefer one or the other, but centralizing a function without being aware of this tradeoff opens a world of unintended consequences, and we need to understand them before we decide to centralize for the sake of measuring.

The second consequence, however, I do see as unambiguously bad. A central office created to promote a new aspect of a mission can be a powerful tool for change. But one that centralizes an organic and well-established aspect of an institution's mission can transform institutional identity into something more pro forma, done because the administration wants it or because some sort of self-interest is involved. We are no longer engaged; we check the box that says we did something that meets the engaged rubric and move on to the next bit of paperwork. Centralizing these kinds of functions in order to demonstrate mission fulfillment, paradoxically, quite likely undermines true mission fulfillment.

By changing organizational structures to accommodate measurable assessment, the demand that we assess becomes a substantive constraint on institutional missions. Some parts of our mission can be assessed only if we are willing to impose auxiliary assumptions that defeat the purpose of that mission. Others can be assessed only if we create organizational structures that substitute a superficial demonstration of mission fulfillment for a mission that is an organic part of the institution. In either case, we are faced with the challenge of measuring the unmeasurable, and with the choice of abandoning some missions or transforming them beyond what we really mean by them. Worst of all, we are doing so not just in spite of their centrality to the mission of the institution, but because of it.
