Another feature any federal college rating system should include is metric transparency. By this I mean that the calculation of every rating should be shared with IR practitioners, the organizations that employ them, and the general public. For instance, the rules for which students are included in retention figures should be clear, specific, free from ambiguity, and applied consistently across all rated entities. This may not seem like a pressing problem at the moment, as current specifications for such figures are fairly transparent. However, I have noticed that different reports ask for measures in slightly different ways, leading to ambiguity about which figure is most appropriate and descriptive. The most salient example for me at the moment is calculating student/teacher ratios for the different reports I complete. Similarly, I cannot say for certain that all schools select retention cohorts in quite the same manner, and I would be surprised if variations did not emerge among institutions with different governing boards. This suspected variation reduces the utility of the reported figures.
As for transparency, this is usually only an issue with proprietary rating/ranking systems such as those employed by major media outlets. Such opacity, while understandable given the competitive nature of the for-profit media industry, is not appropriate for a federal rating system. If institutions of higher learning are to be held accountable for their rating results, then they should also be able to determine which of their efforts will lead to the greatest improvement in subsequent ratings. Furthermore, understanding the impact of each variable on observed ratings would allow cost/benefit calculations to identify the most cost-effective interventions.
Related to this transparency issue is an understanding of how any overall rating is calculated. It is not sufficient to know which figures are included in a particular rating; for the best possible results, higher education policy makers (and those of us who advise them) must also know how much each variable contributes to the final score. For instance, assume a rating system framed as an A, B, C, D, and F scale for each college and university, and assume further that the letter grade corresponds to a single value calculated from several other values such as affordability, accreditation status, student/teacher ratio, and retention. Each value in this formula must carry a certain weight, or relative importance in calculating the final rating, and that weight should be made public to facilitate intervention efforts.
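To make the point concrete, a transparent weighted rating of this kind could be published as something like the following sketch. Everything here is an illustrative assumption on my part: the metric names, the weights, and the grade cutoffs are hypothetical, not any actual federal formula.

```python
# Hypothetical sketch of a fully transparent weighted rating.
# All weights, metric names, and grade cutoffs below are illustrative
# assumptions, not an actual federal formula.

# Published weights: each metric's relative importance (sums to 1.0).
WEIGHTS = {
    "affordability": 0.30,
    "retention": 0.30,
    "student_teacher_ratio": 0.20,
    "accreditation": 0.20,
}

# Published cutoffs mapping the composite score to a letter grade.
GRADE_CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]

def composite_score(metrics: dict) -> float:
    """Weighted sum of metric values, each normalized to a 0-100 scale."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

def letter_grade(score: float) -> str:
    """Translate the composite score into the A-F rating."""
    for cutoff, grade in GRADE_CUTOFFS:
        if score >= cutoff:
            return grade
    return "F"

# A hypothetical institution's normalized metric values.
metrics = {
    "affordability": 85.0,
    "retention": 78.0,
    "student_teacher_ratio": 90.0,
    "accreditation": 100.0,
}

score = composite_score(metrics)
print(f"{score:.1f} -> {letter_grade(score)}")
```

With the weights published, an institution can see directly that (under these assumed numbers) a point of retention moves the composite as much as a point of affordability, and half again as much as a point of student/teacher ratio, which is exactly the information needed for the cost/benefit calculations described above.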
The final rating value should also incorporate a number of covariates, or statistical controls, so that scores are exclusively (or nearly so) a function of the college's attributes and activities rather than of externalities beyond its control. For example, it is well known that colleges vary in the extent to which they admit students with lower ACT composite scores: some schools are simply more selective than others where academic credentials are concerned, while others employ an open-admissions model. Given such variation, it is not unreasonable to expect that schools with very different selectivity will yield different academic outcomes. In this case much of the difference in student performance, and quite possibly in post-baccalaureate outcomes, is external to the university. Quite simply, outcomes for these graduates are not directly attributable to the actions of the university itself, but rather to some pre-existing variable or collection of variables. To see what portion of post-baccalaureate outcomes is actually attributable to the university, we must include statistical controls for as many extrinsic factors as is feasible. Other critical factors that should be controlled include (but are not limited to) student socio-economic context, regional labor supply, and macroeconomic variables that might affect post-baccalaureate employment and compensation patterns. Certainly, including covariates complicates data collection and interpretation for everyone, but these obstacles can and should be overcome if we are to subject colleges to such high-stakes assessments.
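One simple way to picture this kind of statistical control is regression adjustment: fit outcomes against the extrinsic factor, then compare each college's residual (actual minus predicted). The sketch below uses a single covariate and entirely fabricated, hypothetical institutions and numbers; it is meant only to show the shape of the idea, not any real analysis.

```python
# Illustrative sketch of covariate adjustment via a one-variable
# least-squares regression. The colleges, ACT scores, and earnings
# figures are fabricated for illustration only.

def ols_fit(x, y):
    """Least-squares fit of y ~ a + b*x for a single covariate."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical colleges: (name, mean entering ACT, median graduate earnings).
colleges = [
    ("Open-admission U",  19.0, 41_000),
    ("Regional State",    23.0, 47_000),
    ("Selective College", 30.0, 58_000),
    ("Flagship U",        27.0, 56_000),
]

act = [c[1] for c in colleges]
earnings = [c[2] for c in colleges]
a, b = ols_fit(act, earnings)

# The residual is the part of earnings NOT explained by selectivity --
# a fairer basis for comparing what the colleges themselves contribute.
for name, x, y in colleges:
    residual = y - (a + b * x)
    print(f"{name:18s} residual: {residual:+9.0f}")
```

In this toy example the most selective school can end up with a negative residual while a less selective one comes out ahead, which is precisely the distinction a raw outcomes comparison would miss. A real rating system would of course need many covariates and far more careful modeling, but the principle is the same.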