Today I review the NSSE 2013 results for the Learning With Peers subscale. This section of the survey asks students to rate the extent to which their university offers opportunities to work with their classmates as part of the learning experience. The subscale is divided into two dimensions: Collaborative Learning and Discussions With Diverse Others.
Figure 9 displays the average Collaborative Learning score for USAO and each of our comparison groups (for more information about these groups, visit NSSE 2013 Results: Academic Challenge). Analysis failed to uncover any statistically significant differences between these groups. The same can be said for senior scores on this dimension (see Figure 10). Across classifications and comparison groups, students report having opportunities for collaborative learning at about the same rate. Placing these scores in the context of the full scale, we can see that all of them cluster around the scale midpoint, halfway between Sometimes and Often.
Discussions With Diverse Others
This NSSE subscale asks students to rate how often they believe their institution provides them opportunities to engage in discussions with other individuals whose ethnic, racial, or cultural backgrounds differ from their own. Figure 11 displays the average score for first-year students across the different comparison groups. Overall, it appears that students report having this opportunity to a greater extent than collaborative learning (this is not a statement of statistical significance). All scores are clustered around 41, suggesting that students reported having this opportunity Often. Though it appears that USAO scores higher than its comparison groups, this difference does not meet standards for statistical significance.
Statistical Note: NSSE standard reporting does not specify significance levels unless they meet the p < .05, p < .01, or p < .001 thresholds, so I cannot comment on how close these differences were to those thresholds. It is possible to calculate these figures from some of the other data included in these reports. For instance, I could use the reported effect size to estimate variability and use that, in turn, to calculate significance levels, but anyone with the statistical sophistication to realize this also has the tools to make such calculations. I chose not to present that information here, as it would clutter the narrative for my intended audience (not unlike this note). More importantly, performing these unplanned calculations can unnecessarily inflate alpha (Type I) error.
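For the statistically inclined, the back-calculation described above can be sketched in a few lines. The numbers below are hypothetical, chosen purely for illustration (they are not NSSE data): given a reported Cohen's d and the two group sizes, the independent-samples t statistic follows directly.

```python
import math

def t_from_effect_size(d, n1, n2):
    """Recover an independent-samples t statistic from Cohen's d.

    Uses the standard identity t = d * sqrt(n1 * n2 / (n1 + n2)),
    which follows from d = (mean1 - mean2) / s_pooled and the
    usual two-sample t formula with pooled variance.
    """
    return d * math.sqrt(n1 * n2 / (n1 + n2))

# Hypothetical values for illustration only -- not actual NSSE figures.
d = 0.10             # a small reported effect size
n1, n2 = 200, 1000   # institution vs. comparison-group sample sizes

t = t_from_effect_size(d, n1, n2)
print(round(t, 3))   # about 1.291

# With large samples, |t| must exceed roughly 1.96 to reach p < .05
# (two-tailed), so this hypothetical difference would fall short.
print(abs(t) > 1.96)
```

Note that this recovers only the test statistic; converting it to an exact p-value requires the t distribution, which is exactly the kind of unplanned follow-up calculation the note above cautions against.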
Examining the pattern of results for seniors in Figure 12 again reveals that scores cluster around 41. As before, score differences across groups do not meet standards for statistical significance, so they are considered functionally equivalent.