
The Journal of Multimodal Rhetorics

ISSN: 2472-7318


To download a free dyslexia-friendly font, please visit OpenDyslexic (not associated with JOMR).

To download a free ADD/ADHD-friendly font, please visit BeeLine Reader (not associated with JOMR).

Universal Design in Apocalypse Time: A Short History of Accessible Teaching Exnovation

sarah madoka currie, University of Waterloo



[Editor’s note: The author has generously provided an expanded version of this article that contains financial figures related to institutional gains and losses, and thorough analyses of how these translate into increased access expenditures (hint: they don’t). For the expanded version, please click this link.]


I want to tell you a story about what words we use when.

In March 2020, Canadian universities entered apocalypse time[1] at the onset of the coronavirus pandemic. As cases rose, streets cleared and classrooms emptied, and it was the dis/ability community that first asked whether we[2] were safe, isolated at home.

The pandemic conjured its own distinct way of learning and it took many names: education in emergency (Pokhrel & Chhetri, 2021), emergency remote teaching (Rapanta et al., 2021; Hodges, 2020; EDUCAUSE, 2020), online learning panacea (Dhawan, 2020) and more simply, pandemic pedagogy (Rippé et al., 2021). The online teaching methodologies of the before-times were rearranged into unrecognizable forms on learning management systems (LMS) that were unprepared to serve in more-than-subsidiary roles in student education. Emergency remote teaching was an overnight revolution brought on by exceptional suffering and anxiety we couldn’t readily recognize or interpret in entirely digital space. While mid-millennials automatically associate the university experience with hundreds of one-seater desks and PowerPoint slides read by monotonous instructors, the pandemic-laden Zoomer generation is rapidly associating learning not with auditoriums of anxious together-ness but Zoom rooms with featureless black tiles indicating users/classmates, a mosaic of gravity wells that once signified student support and empathetic connection space. Amidst the array of loneliness, we asked if we were enough.

Big data experts predicted that the eduTech industry would enjoy a market valuation of over $2 billion (USD) in a year, based on the rapid uptake of iconic pandemic technologies like Zoom, Blackboard, Coursera, Khan Academy and Kahoot (Dhawan, 2020). The universities themselves, meanwhile, were facing the opposite: already weathering a 47% federal funding decline since 2011, Ontario institutions carefully stacked matchstick towers of lingering funding, public aid injections and private investments on top of new expenditures taking the form of massive eduTech licenses, online infrastructural upgrades and increased technical support staff (Ansari, 2020). We asked, with more emphasis, if we had enough.

It wasn’t until December 2020 that the whispers from the disability community hit mainstream Canadian news sources in the form of a short news exposé. It named the central harm of the pandemic paradigm shift, reporting that disability accommodations were treated as “nice-to-haves rather than absolute requirements” (Loeppky, 2020), mere peripheral obligations that couldn’t answer questions already asked before. It is from within this dynamic that the university started weaving a new story around the word access, leveraging the silence of bodyminds amidst the mass anxiousness and trauma to capitalize on the emergent paradigm in exorbitantly profitable ways.

Yet, while the university assured us that we were safe, they excluded those who were precariously unsafe and could not answer. When the university assured us that we had enough, they didn’t answer for those who could not be in the room to answer. And when they assured us they were revolutionizing accessibility, those who didn’t have enough and weren’t safe were not allowed to speak a more accurate truth.

commodifying “access” for expansionism

Educational developer and ethical pedagogue Ann Gagne thoughtfully reflected that “the words we use have power, and educational environments are spaces in particular where power dynamics and discourse are analyzed” (2021). When teaching moved rapidly toward normalized digital disembodiment, the words we used were no longer couched with nonverbal signals to convey meaning, assisting the transmission of our intended message with body language, visual aids and facilitation[3] presence. In emergency pandemic teaching, the power dynamics (and the words we use toward that end) are exacerbated by this loss of embodiment and the warmth that comes from within our bodies and external signals. The deep anxiety that students and instructors (rightly) felt in apocalypse time was magnified by the literal black-box affectation of Zoom, our subtle inability to authentically humanize the other people in our educational spaces and LMS systems. And when thousands of teachers found themselves struggling to inject that humanity across webcams, recorded lectures and messaging boards (as stated in Harvard’s Usable Knowledge pedagogy column, Rethinking Learning, and the Journal of Teacher Education), we lost something we had largely taken for granted: a collective being-in-space that implicitly renders students and instructors as human-first.

To be sure, some students and instructors were never afforded this privilege. Those who engaged in distance education pre-pandemic, those to whom classrooms were inaccessible or unauthorized spaces, and those whose bodies pre-empted normative “human connection” experienced a strange echo pandemic when more privileged folk started to endure some of the access conditions they had already endured for decades. This double-marginalization dynamic was taken up artfully by disability historian Hannah Facknitz:

The pandemic in higher education was an historic moment that revealed with astonishing clarity the violent, explicit, intentional ableism of academia, and for folks like us — people who couldn’t often muster enough denial or privilege to move through violent institutions — the pandemic was too much. (Facknitz & Lorenz, 2021)

This is a complicated dynamic, an interplay of disability theory and the late-stage capitalist economic trajectory of the higher education institutions we trust with delivering a humanized yet marketable education that acknowledges basic equality (albeit through problematic measurements) while conferring desirable degrees to a post-millennial job market that overwhelmingly requires them for even the most basic employment. The echo pandemic worked overtime to prophetically highlight the commodification of access for profiteering, enacted in three steps: digitizing and reformulating “access,” marketing “access” as a utilitarian imperative, and then repackaging it to educators via “learning analytics.”


redefining “access” as digital divide

The institution is clever. In disruptive situations — like a worldwide deadly virus — people pay less attention to details, particularly details that were hard to discern while we enjoyed the extra energies, stamina and other critical thinking bonuses of pre-apocalyptic education. Education access theorist Emily Brier compares the prescribed instructor role in higher education to “low-level managers” who primarily work to “enforc[e] norms, repor[t] behaviours, and keep students in line” (Brier, 2021). This is a direct comparison to some performances of the tripartite professorial job description of “teaching, research and service.” Teaching can enforce educational norms through rubric grading, curricula and learning objectives. Research can report (abnormal) behaviours through our own qualitative metrics or interactions with other campus services. Service can recommit to promoting instructor visibility in easily recognizable, surveillance-based ways while also ensuring the non-visibility of student or collegial radicalism against this neoliberalized management system. In the entirely too-easy manipulation of tripartite standards in these ways within academic communities, professors are in a powerful position to reify codes and behaviours that intentionally perpetuate the normativity they are used to and feel comfortable in, even if they know this status quo isn’t fair to everyone inside (or kept out of) the room. Emily’s rendition of this ostensible compliance machine is much more overt: “faculty often act as the first line of control against student activism and dissent and enforce adherence to university policy and norms to create a properly cowed and compliant future workforce” (Brier, 2021). Much of university policy focuses quite explicitly on the ways in which students are permitted and disallowed from producing content in academic communities, often coded as “academic integrity” rules. The instructor’s job as a low-level manager is to teach these rules in ways that create an illusory agency of output, but within a container that tightly controls output parameters.

What may be less obvious is the dual responsibility of holding faculty colleagues within these containers as well, (re)producing and enforcing autocratic normativity as part of their service role to the institution. When we enforce this “cop shit” (as Jeffrey Moro [2020] so beautifully termed it) amongst not only our students (via “academic integrity” and dataveillance, which we will get to later), but overwhelmingly amongst each other, we reify and affirm a paradigm of self-policed productivity, an architecture of peer pressure and peer-to-peer measurement that can result in no winners other than the institution itself. Enforced adherence to institutional policymaking is predicated on individual resilience (itself measured ostensibly by productivity) and this peer-pressure container-making that creates a secondary “measurement” system by which instructors can assess themselves and each other’s success at low-level management. By achieving a communally agreed-upon definition of “normal” output, we create in-groups of resilient instructors and out-groups of instructors needing additional policing.

We could also derive harm from institutional usage of the word “resilient” as a descriptor for pandemic learning instructors, a relatively meaningless placeholder that reorients the duty of care (and basic safety) from the university to the staff themselves, rewarding those willing to take higher risks for the same reward value. Adversarial surveillance rarely starts out as a method of punitive managerialism, but it always starts out as a means of implying (and thus enforcing) what’s normal: online surveillance tools, standard room control processes, and instructor performance reviews began as means of rewarding those “resilient” students and educators who were appropriately and normatively productive. Emily’s managerial metaphor works well here: by “enforcing norms” (Brier, 2021) amongst each other, we create pseudo-integrity systems that are based on a group understanding of what productivity looks like in a normative sense. By acting strangely during a proctored exam or producing less than your peers, you are not only non-resilient: you are non-compliant and thus in need of additional policing. What happens, then, to instructors who are abnormal by definition? What happens to the instructors existing in a peer architecture that uses resilience and normalcy as proof of use-value, as proof of productive managerial influence?

This conception of neoliberal universities as sites of enforced normativity through productivity is echoed in neurodiverse theorist Ruth Osorio’s “I Am a Writer, Even On Days I Can’t Write: On Rejecting Productivity Advice”:

But productivity measures of writing are steeped in capitalist, and thus ableist, logic. Capitalism tells us that our worth as humans is based on what we produce and how much capital our contributions to society create. And as critical disability studies scholars tell us, when productivity is framed as a moral good, disabled people are further shunned from society, deemed unworthy because of their supposed lack of contributions to society. (2020, emphasis in original)

Just as Ruth presciently remarks, measuring resilience and ability based on a normative, group-agreed-upon conception of resilience and productivity is fundamentally ableist, policing that will always create an out-group of instructors who do not meet managerial expectations set by the institution and policed by peer groups. By failing to perform well within the compliance machine, instructors will experience additional policing, additional punitive architecture and additional curated normalcy. Ruth, Jeffrey, and Hannah all speak to the productivity architecture that valorizes normal performativity while violently identifying and removing vectors that fail to comply, as this top-down architecture rewards them for making these identifications. So if instructors are trapped in adversarial managerial ableism, what parameters are they forced to create for students who don’t easily comply with the siloed normativity enforcement (“resilience”) of consistent output-creation and control? What do we do with the students who are not so easily controlled, and how do they progress toward degrees that unlock market options for future employment? Ruth’s answer to this is anti-ableist conceptions of productivity, in which lives a version of the word “access” that did not match the pandemic self-policing container built in apocalypse time. Instead, the university and its compliant agents seek to reinforce normative productivity models by massaging the word “access” in pandemic literature to become synonymous with the paradigm of the digital divide.[4]

Through an informal review process, at the time of writing I discovered and perused about 60 peer-reviewed critical pedagogy works in the Canadian-American higher education context produced in 2020 and early 2021, as well as three major literature reviews attempting to summarize this deluge of ironically didactic crisis content at different points of the year. In their 2021 review, Pokhrel & Chhetri congratulated institutions on “offering their tools and solutions for free to help and support teaching and learning in a more interactive and engaging environment.” This is an understatement: the overnight popularity of online recipe-swapping, strategy-sharing and feverish blogging about emergency teaching in higher ed was itself the hilarious focus of multiple meta-research breakdowns (see Nature Index, 2020; Bell & Bell, 2020; Terada & Merrill, 2020; Lockee, 2021). And while much of this content production sprang from an authentic empathy for teaching communities or an altruistic view of community-derived pedagogy as survivalism, it also suffered from near-immediate obsolescence or fell victim to mass-confirmation biases amongst educators. Much of the early-2020 published advice was considered “bad practice” within months, the most iconic example being “cameras-on” requirements in classrooms — but also included problematic conflations between “emergency” and “remote” teaching,[5] overreliance on plagiarism detectors and attendance checking, and decontextualized embodied practices without critical depth (all of which I revisit as part of a larger dataveillance issue). While Pokhrel & Chhetri reward this altruism in their literature review as a measurement of group “support,” Rapanta et al.’s review is more critical of the obvious obsolescence problem: “teachers have been offered hundreds of ‘tips and tricks’, mostly without the contextualizing knowledge needed to judge which teaching tactic is likely to work where [...] and this tools-based approach does not give many pedagogical hints on how, when and why to use each of the tools” (2021, emphasis in original). While I appreciate the irony that I’m using reviews produced in 2020 to talk about the obsolescence problem of 2020 literature, I also find that symbolic of the impossible timespace universities created between the depressive slowing of apocalypse time and the understated cost of ceaseless production models enforced even while pre-pandemic traditions were becoming similarly obsolescent. Along the same pathway, while educators endured the recursive trial-and-error process of crisis instruction, their institutions advocated for a “return to normal” that itself became recursively deferred. This meant an instructor’s “output” had to continuously and closely mirror pre-apocalyptic productivity standards while their emotional “input” (read: “well-being”) was bombarded with unceasing anxiety based on government policies and institutional protocols mistakenly foreshadowing “normality” over and over and over again:


HEADLINE: “Dr Fauci says U.S. could return to normal by mid-fall if most people get COVID vaccine.” Published Wednesday December 16 2020 by Noah Higgins-Dunn and Berkeley Lovelace Jr for CNBC News.

Figure 1: CNBC headline.


HEADLINE: “As a return to ‘normal’ seems achievable, adjustment disorders are the new elephant in the room.” Published July 2 2021 by Adrienne Matei for The Globe and Mail.

Figure 2: The Globe and Mail headline.


HEADLINE: “Ontario’s universities and colleges told to prepare for normal fall – with backup plans.” Published July 19 2021 by The Canadian Press for CBC News.

Figure 3: CBC headline.


HEADLINE: “Fauci Says U.S. Could Return To Normal By Spring 2022 - If Vaccinations Go Up”. Published August 3 2021 by Joe Walsh for Forbes Magazine.

Figure 4: Forbes headline.


HEADLINE: “University of Waterloo expecting to return to pre-pandemic levels of classes in 2022.” Published September 28 2021 by Chris Thomson for CTV News Kitchener.

Figure 5: CTV headline.


HEADLINE: “Ontario university students face anxious winter as classes resume online, for now.” Published January 10 2022 by Nadine Yousif for the Toronto Star.

Figure 6: Toronto Star headline.


This micro-instantiation of doomscrolling highlights the breath-catching anxiety that accompanied a steady stream of official deferrals: between these headlines are the classroom plans scrapped and remade, the confused faculty meetings, the last-minute “shifts” in campus strategy, the home offices hastily (re)made, the unsung instructor efforts that required debilitating affective hyperextension to observe a spectre of the “normality” our reality could not return to, nor should it. Crisis instruction circumstances reached beyond our normal resources. This in itself is not so much a problem when the stress is controlled and highly time-sensitive: exam season, midterm grading, and/or start of term can all act as anxiety-laden focal points for students and instructors. The problem is that every moment of apocalypse time was a focal point, demonstrating the potential to bring about stability while emotionally requiring a constant, contorted overextension. For over two years (at the time of writing), the dual act of mitigating the regular anxieties of instruction and the apocalypse anxieties of safety, geography and resource allocation blended in our minds like oil and water. Would “returning to normal” (itself a problematic metric) after such a long, hyperextended stretch really have no impact on our bodies and minds, as “returning to normal” so casually implies?

Within the aporetic context of all this rapidly redundant advice meant to mirror a “return to normal” in an all-but-critical way, a significant pattern emerged: the words “access” and “digital divide” occurring co-dependently. Pokhrel & Chhetri (2021) cite “e-learning [as] accessibility, affordability, flexibility” as one of the most “broadly identified challenges from 2020 literature,” significant not so much for its mention but for its list placement: beside economic imperative words “affordability” and “flexibility.” While Pokhrel & Chhetri’s review never explicitly mentions the digital divide, its shadowy presence is implied contextually: “research highlights certain dearth such as the weakness of online teaching infrastructure, [...] the information gap, non-conducive environment for learning at home, equity and academic excellence.” Dhawan’s September 2020 literature review for the Journal of Educational Technology Systems does explicitly draw the connective line, describing how students “may lose out because of the heavy costs associated with digital devices and internet data plans. This digital divide may widen the gaps of inequality.” He concedes that edtech literature generally stipulates blanket solution statements like “ensuring digital equity is crucial in this tough time” or that “steps must be taken to reduce the digital divide” (Pokhrel & Chhetri). While Rapanta et al.’s review (2021) is the most nuanced mega-summary of learner-centric dialectics in online environment pedagogy, even this review doesn’t escape the unstated conflation of “access” and “digital divide,” advising readers that “...opportunities to interact through rich media and high frequency interaction may reduce flexibility and require greater bandwidth — both of which may create accessibility challenges for some learners.”

So while major literature reviews tout the radical shift online as “more student-centered, more innovative, and even more flexible” (Dhawan, 2020), the “access” argument has been remixed and re-adapted to consider literal access to classrooms through technological means, conflating that literal access with more nuanced accommodations. I’m not attempting to advocate that the digital divide is a non-issue or to posit value judgements that place physical access as “less important” than disability access spacemaking, but I am advocating that this careful word choice (and when we execute that word choice) is a core element of the “academy violence” Hannah mentioned earlier. By silencing or trivializing every other usage of “access” in emergency pandemic teaching, we create circumstances where echo pandemics can manifest themselves forcefully in the disability community, now sitting twice-removed from “access.” By making the access problem reductive — and surmising its ease of resolution with bigger edtech budgets — the institution placed itself in prime position to weaponize this version of “access” as a desirable, highly propagandized selling point for vulnerable students considering higher education in apocalypse time. When repackaged as a feature of innovative educational futurity, wordsmith propheteering gives way to the profiteering potentiality of “access” in the pandemic university space.


marketing international “access” imperatives

Concomitant with the waterfall of accidentally ableist teaching content produced at breakneck speed throughout 2020 (as demonstrated by the above-mentioned massive literature reviews, punctuated by outdated models of accessible education paradigms), a disgruntled wave of disability and access education theorists was producing counterstories for emergency remote teaching. While much of the literature explored affective labour exploitation and the continual deferral of getting “back to normal,” the impact of emergency environments on mad and disabled students in the subsequent re-marginalizing echo pandemic was couched in a silent container, highly visible to those already in these discourse communities but erased by the steady stream of content echoing misrepresentational advice about equitable teaching (see: “cameras on” didacticism). That is, if equity was raised as a concern at all. This half-hearted reckoning with accessibility is the central concern of feminist disability scholar Aimi Hamraie, who argued in March 2020:

Disabled people have been using online spaces to teach, organize, and disseminate knowledge since the internet was invented. Disabled people are leading survival praxis in apocalyptic times. Please recognize that the very types of remote access that universities now mandate for classrooms and conferences have been denied to disabled people. Please also recognize that disabled people have long engaged in refining methods for remote access to protests, classrooms, doctor’s offices, public meetings, and other events. Mention this in your classes so that students know they are benefitting from crip technology and praxis. Commit to accessible teaching because it is crip technoscience and disabled ingenuity that has made remote participation possible. (Hamraie, 2020)

Without committing to a lengthy retelling of online learning and disability history, Hamraie both validates and emancipates instructors and students navigating the critical irony of “new normal” discourse, as they describe how many of these online modalities were “normal” for thousands of people prior to the pandemic even if mainstream audiences failed to notice. Accessing video lectures, closed-captioned content or transcription proceedings has indeed become more commonplace in LMS environments, but 2020 is just three years removed from a cultural moment where UC Berkeley felt it more reasonable to delete two decades’ worth of “Course Capture” teaching content (over 20,000 videos) than to add accommodation overlays to their existing material in order to make their library ADA-compliant, citing concerns about “protect[ing] instructor intellectual property from pirates” even while their library was licensed as BY-NC-ND content (read: essentially open access exclusive of profit) (Berkeley News, 2017). Benefitting from our closed-captioning-proficient retrospective position, we can look back on this anecdote with a shared mass-market derisiveness; however, it’s simultaneously apparent that higher education instructors are not in a place to congratulate their collective enlightenment quite yet. The “new normal” is shockingly reminiscent of the old normal.

Canadian prestige machine Maclean’s University Rankings drew an explicit connection between pandemic austerity-budgeting and “access” infrastructure, consistent with the 2020 literature reviews. Facing the “lesser degree” stigmatization of all-online courses and provincial budgets slashed nearly in half since 2011 (StatsCan via Ansari, 2020), Maclean’s delivers some harsh realities about the “commodification of education as institutions insulate themselves against the damaging effects [of online enrolments]” (Ansari, 2020), which comes as a dual reckoning not only in terms of what these hyper-expensive campus complexes have really been delivering, but the extent to which these campuses have invested in pedagogy that students can’t get elsewhere. Canada has no federal governing ministry for higher education, but a council of ministers distributes university grants at the national level, which are then controlled and responsibilized through provincial education councils as part of “federation” governance (the Council of Ministers of Education Canada [CMEC]; the Canadian Information Centre for International Credentials [CICIC]; and the Higher Education Quality Council of Ontario [HEQCO]). Federation governance allows virtually unilateral power to educational ministries at the provincial level, which reifies itself through complicated degree program pathways, special mandates and performance goals, and standardized assessment and curriculum systems unique to each province and territory. From within that framework, individual universities are given powerful self-governance abilities to control their endowments, provincial funds and other grant sources as long as they can align them with their institutional mission statement. While Maclean’s reported on Ontario’s operational budget evaporation over the past several years, this pattern repeated itself in the other provinces hosting the University of Waterloo’s peer institutions.

Meanwhile, non-operational budgets virtually exploded in size in the same timeframe. We can learn interesting things from what is essentially a higher-ed namesake equity earnings report:

University of Waterloo endowment
    2018-19 (pre-pandemic): 390 773 000
    2019-20 (pandemic):     402 806 000
    2020-21 (pandemic):     478 116 662

University of Alberta endowment
    2018-19 (pre-pandemic): 1 432 000 000
    2019-20 (pandemic):     1 284 000 000
    2020-21 (pandemic):     1 456 000 000

University of British Columbia endowment
    2018-19 (pre-pandemic): 1 720 000 000
    2019-20 (pandemic):     1 799 000 000
    2020-21 (pandemic):     2 008 000 000

Above are the endowment amounts for the University of Waterloo (Ontario), the University of Alberta (Alberta) and the University of British Columbia (British Columbia) for one year pre-pandemic and the following pandemic years. These universities are of roughly similar size and host student enrolments in a roughly similar range, normalizing for provincial population: UBC—66,000; UA—37,000; and UW—41,000 cumulatively (undergraduate and graduate). All three institutions are members of the U15 Group of Canadian Research Universities, who collectively control 79% of all “competitively allocated research funding in Canada” (U15, 2021) and thus have excelled at internally standardizing recruitment strategies that generate the most provincial kickbacks within this invitation-only privatized guild.
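Taking the endowment figures above at face value, the net pandemic-era change can be sanity-checked with a few lines of arithmetic (a minimal sketch: the dollar amounts are the ones listed above, while the percentage calculation and function name are my own illustration, not the universities’ accounting):

```python
# Endowment values (CAD) for 2018-19, 2019-20 and 2020-21, as listed above.
endowments = {
    "Waterloo": [390_773_000, 402_806_000, 478_116_662],
    "Alberta":  [1_432_000_000, 1_284_000_000, 1_456_000_000],
    "UBC":      [1_720_000_000, 1_799_000_000, 2_008_000_000],
}

def pandemic_growth(series):
    """Percent change from the pre-pandemic year (first entry) to 2020-21 (last)."""
    return round(100 * (series[-1] - series[0]) / series[0], 1)

for school, series in endowments.items():
    print(f"{school}: {pandemic_growth(series):+.1f}%")
```

Run as-is, this shows Waterloo’s endowment finishing roughly 22% above its pre-pandemic baseline and UBC’s roughly 17% above, while even Alberta’s (which dipped in 2019-20) ends the period ahead of where it started.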

The endowment amounts frame the university as an internationally indexed private corporation rather than a public, government-assisted structure. The University of Waterloo’s endowment aims to build equity, not unlike the purchase of a subdomain or a private holdings firm. According to UW’s Support landing page, the 2020-2021 portfolio has a market value (or liquidity value) of 478 million dollars, managed carefully by a conglomerate of advisory groups, private asset managers and Toronto-Dominion Bank. While the webpage claims the equity fund “further[s] our efforts to build a sustainable and habitable world,” in reality these portfolio assets are engaged in land claims, architectural development and wealth hoarding (which we might consider unsustainable, in broad terms). These holdings cannot be used to fund staff salaries, library acquisitions or academic support services; even infrastructural utilities are “operational expenses” and thereby governed by provincial grants and ancillary revenue generation, not the amassed endowment (see UW, 2021; HEQCO). Many of the fundraising campaigns are tied to the endowment report, not operational expenses. In generously contributing to your school as an alumnus, you are adding not to the textbook budget of your home department but to the massive fund generating namesake equity for the corporation (read: university). The pandemic has been exponentially profitable across the board for universities in multiple Canadian provinces: they were able to invest and acquire market assets at incredible speed for pure profit equity. All of which poses an access issue.

“Access” measures developed in response to the pandemic could be used by schools to market themselves abroad, not only to retain current student cohorts now alienated from campus, but to recruit thousands more international students faster than ever before. Complementing this, Ansari (2020) speculates that “universities across the board may become more accessible if online class sizes expand” using base-value metrics: “if you can ramp it up at a global level, or even a national level, the marginal cost starts to become much more attractive [...], if you start doing it for classes of 5,000, it starts to become financially much more viable.” Faced with this heightened institutional redundancy problem, universities may have shifted to the “access” potential of online learning not for its inherent connectivity benefits but for the recruitment possibilities that non-geographically tied course modules offered departments. “Access” offered the university a new moralistic reason to engender this practice in the shifting, interconnected tributaries of the digital LMS. Universities and market speculators claimed across all major Canadian media platforms that the pandemic brought “devastating” financial consequences, particularly with regard to “significant drop[s] in admission[s], both domestic and international,” with losses projected in the “billions of dollars,” estimating “0.8% to 7.5% [loss] of total revenues” in 2020 and 2021 (as reported by Global News; The Globe and Mail; Toronto Star). But were the devastating losses occurring at a purely institutional level, tucked within “overall” massive profits? Below is a quick breakdown of how “austerity budgeting” worked at the University of Waterloo during apocalypse time, with emphasis on international recruitment versus accessibility allocations.

This is a cleaned-up version of three different Senate budget documents, which map the profit potential of central income points and central expenses year-over-year as discussed and approved by the Senate Finance Committee. The first three columns denote the biggest three income sources: the provincial grant (as discussed earlier), domestic tuition and international tuition. Meanwhile, the accessibility budget is awarded its own line as standardized and enforced by the SMA3 agreement in Ontario: this money can only be used on accessibility services, campus upgrades and equity tech, as controlled and distributed by Waterloo’s AccessAbility Services Office. Because this allotment is provincially standardized, the year-over-year increase holds at 0% until 2021-22, when Ontario’s Action Plan (2021-2025) infused an extra 23% in financing earmarked exclusively for improving accessibility and equity initiatives at Ontario universities (Ontario’s Universities 2021; MAE, 2021). It should be further clarified that this infusion had nothing to do with the SMA3 agreement ratified with the University of Waterloo: while the SMA2 agreement explicitly names accessibility as a performance target for the years 2015-2020, the SMA3 agreement (covering 2020-2025) removes any mention of equity or accessibility from performance targets. In fact, the only two times the word “access” appears in the new SMA3 are in reference to physically “access”-ing the Student Success Office, and the figurative “access” potential of UW’s business program credits when enrolling from other declared majors (MAE, 2021; MAE, 2018; SMA Landing Page, 2021). If we take this expense line as a “grand total” of student accessibility accommodations (as this is the only federated budget item that can be expensed toward these initiatives in any way) across the entire university, apocalypse time saw about 1.2 million dollars invested into student supports, accommodations and materials.
This contrasts strongly with the income reports generated during the pandemic, which enjoy steep increases in international student enrolment (and equally steep tuition payments). Using this, the university clears over 809 million dollars in pure profit even while maintaining a steady decline in domestic tuition payout year-over-year, while never changing their “access” investment metrics (and in the case of UW, even removing accessibility as a performance indicator). This discrepancy in earned-by versus spent-on students – particularly disabled students – occurs independently of all revenue accrued from “other” sources. While major news outlets reported repeatedly on massive prospective losses, universities quietly amassed millions — in the case of UBC and the University of Toronto, billions — and re-invested virtually none of it into their new “accessible” education paradigm.


Plain-text data: valuations generated via Senate Budget publications (2018, 2019, 2020, 2021).

Chart 1: University of Waterloo's Operating Budget, 2019-2020 (UW, 2021).


These universities had more than enough to invest in increased access potential, but the digital divide is essentially problem-oriented: there are “haves” and “have nots” and we can currently only teach (and extract tuition from) those who “have.” In order to sell “access” based on this ostensibly derogatory understanding of our magic word, we needed a solution-oriented “strengths-based” way to retell the story. Universal design for learning (UDL) was co-opted toward this goal, and it appears strategically in the 2020 deluge in almost propagandistic ways as a kind of key benefit of online emergency learning. This thinking led Dhawan’s literature review to tout online learning as a “panacea” for educators. It led the Higher Education Quality Council of Ontario to recommend “the implementation of UDL principles in all courses” (Loeppky, 2020) as a nonsense blanket-directive that falsely promises a kind of de facto accessibility. And it sparked dozens of ad-hoc conferences and digital webinars promising instructors the three-letter answer to equity, diversity and inclusion in pandemic classrooms. For example, Pokhrel & Chhetri’s review (2021) offers a number of UDL classroom strategies removed from contextual frameworks or sensible application guidelines, offering a kind of potpourri of recipe cards with no labels or ingredient measurements. The propagandist claims in Rapanta’s review (2021) are a little more covert, presenting the UDL mindset as a kind of education-plus (education+) initiative, shielding universities from accusations that their online degree is “lesser” than the campus version while simultaneously asserting that these instructional methods magically enrich spaces wherever they appear, for everybody.
The “for everybody” problem is sinister in its high potential for marketability — who wouldn’t want an “education-plus” classroom, or prestigious/R1 institutions that promise students UDL to explain their incredible tuition expenses? Making “access” as utilitarian as possible necessitates that it apply to as many students as possible, especially the vague international student imperative whose recruitment is worth double or even triple the tuition money of a domestic student.

This kind of UDL promises increased results-based education (building toward capitalistic hire-ability): use these listed “best practice” inclusive methodologies, and your students will improve as learners by objective measurements of testing and assessment of “learning outcomes.” Some resources go as far as explicitly — and ironically — linking these derivative results: American non-profit Understood For All defines UDL as a “framework for how to develop lesson plans and assessments” (Understood, 2020), while CAST represents Rose’s “three principles” via brain scans highlighting specific neurochemical interactions (CAST 2021). Nelson’s process (as endorsed by UDL On Campus) “starts and ends with reflecting (...) on the desired outcomes of your students” (2020). The University of Kentucky advises “start[ing] with tight learning goals for your students and then provid[ing] multiple ways for them to access content materials” (2021), and Novak Education defines it as “an education framework based on decades of research in neuroscience and endorsed by the Every Student Succeeds Act” (2021). Clearly, the primary concern for educators and developers is the relationship between UDL and measurable results: making students super-students and making educators super-educators by following CAST-endorsed checklist items. But how can a teaching strategy based on accessibility and inclusivity promise to deliver such wide-ranging positive assessment results, optimal brain stimulation and rigorous achievement bars, faultlessly, for every student in the classroom, all the time? It can’t do that. The language is obtuse, blanketed, impossibly high-performing and dizzyingly all-encompassing because in promising everything it hopes you won’t notice it will only reliably deliver to those already set up to succeed.
Proponents failed to speculate on why this tool(box) won’t work the way it’s meant to — creating a kind of false-utopian vision of educational practice that can deliver predictable results among students already well-equipped to succeed, much the same way that student wellness centers cater primarily to the “worried well” population. Students exhibiting severe deficit, medical, or otherwise complex accommodation needs are not served but instead removed from the waitlist or treated as exceptions, rather than comprising the core target audience of “wellness” services. Applying simplistic multiplicities (engagement, representation, expression) to lesson plans and assessments designed to cater only to “worried well” students will work only insofar as these students have the tools in their toolbox to address a wide array of expectations and can effectively navigate checklist-style attempts at wide-cast nets of differentiation.

UDL can also be leveraged as a clever way to remove facilitation responsibility from the instructor and burden non-performativity metrics back onto the student: “I implemented UDL in my classroom, so [the problem student] just couldn’t cope with university” is a familiar refrain. Trying to make the argument that UDL should valence more heavily toward individual accommodation also steps on a number of toes: accessibility offices argue this is actually “differentiated instruction” (Durham College, 2020) or “universal instructional design” (University of Guelph, 2021), different methodologies with entirely different sets of principles and checklists that cater to the “problem student” from the refrain. These subsidiary methodologies claim to be more specifically focused on “inclusive and accessible learning environments” and more “explicitly presented and readily perceived” (OpenEd, 2020; University of Guelph, 2021; Durham College, 2020). These auxiliary methodologies are actually trying to accomplish what UDL was originally intended to do: provide explicit lenses focused on the accessibility potential of material and classroom content. UDL, in trying to simultaneously take advantage of “for everybody” discourse and creative responsibility-shifting, necessitated the creation of sub-modules of universal design which seek to more exclusively cater toward disability and access — which emerges as both a senseless research differentiation and a demonstrative exercise in revealing the “worried well” ambitions of UDL proper, as it’s conceived by educators today. This, of course, is not how the disability community sees universal design.

If we choose to treat UDL as a conglomerate initiative to design inclusive of and starting with disability, what emerges is not a checklist, set of principles, charted frameworks, brain scans, Venn diagrams of difference, complex infographics or student measurement (“achievement”) charts, but a crip facilitation style based in holding intentional, accessible space through trained implementation of restorative practices and mutual aid. This facilitation style requires disrupting two core assumptions that UDL makes about accessible classrooms:

  • there is someone “in charge” in an explicit way (the instructor of record)
  • “education+” for everyone in the room is always achievable in coherent, reproducible ways (students as objectively measurable)

By removing the “instructor” role from such prominent visibility, we interrupt the prerogative teachers have to use UDL as a means of recusing themselves from failure responsibility: adhering to increasingly complicated checklists or “best practices” will not solve access for everyone, in any classroom. What it accomplishes instead is a verifiable means of saying “I tried” and declining more uncomfortable discourses of inclusivity in space-holding.

This ties in with the bricolaged manner of what I’ve been calling “education+”, the notion that UDL is a remedy for poor performance in students that can be readily applied in classrooms without specific contextual or interpersonal understanding. By acknowledging that the classroom is a place of dynamism, of always-already-changed space, we cannot simultaneously hold that brain scans, Venn diagrams and objective performance metrics can measure the “ability” potential of the space itself (or the learning-in-space or teaching-in-space). These two variables taken alone are not entirely new thoughts either, as disability rhetorician Jay Dolmage already productively discussed the space-as-verb potential almost 6 years ago:

UD should be registered as action – a patterning of engagement and effort. The push towards “the Universal” is a push towards seeing space as multiple and in-process. The emphasis on “design” allows us to recognize that we are all involved in the continued production of space (and that students should be agents in this negotiation). (Dolmage, 2015)

Leveraging UDL as a verb is a clever inquiry into the space-making potential that UDL offers, but I would argue he doesn’t push this potentiality far enough. In revisiting this idea in 2017’s Academic Ableism, he further clarifies the input negotiation problem as a subsidiary of redundancy (or “tolerance for error” as a means of generating more meaningful recursive feedback from participants) and overall “[student] agency” as a central “way to move” through classroom space. However, both these methods of un-erasure serve to halt the notion of space as always-already transforming (its verb potential): in simultaneously acknowledging the tradition of flexibility in university time, we create a rhetorical end-point to that time (in order to traverse it in coherent ways). As soon as we halt this potential for facilitated space to continuously re-construct itself and disrupt its essential incoherence, we’ve created a rubric-ed way of interacting with that space insofar as all we have left to do is measure the means by which inputs can be measured and scored, thus diverting to active checklist methods.

Instead of basing itself in “objective” results-based deliverables, UDL should move to valence itself toward investigation-based methods of assessing the dynamism of the whole student as they manifest themselves within equally bricolaged rooms, on parity terms with the facilitator holding space safely while pushing inquiry in productive directions. This emergent strategy (to borrow from adrienne maree brown) capitalizes not on market potential but change potential, and centers disability in ways that UDL has not been able to achieve under ironically rubriced methodologies and checklist-style implementation plans. Utilitarian “for everybody” approaches to UDL not only exclude the disabled audience universal design was originally built for, but also market and project dangerous assumptions about checklist-style “greater inclusion” metrics as a means of further individualizing and discretizing failure as a student problem rather than a system problem.


marketing utilitarian “access” imperatives

Meanwhile, many wise advocates from mad and disability circles faced this for-all languaging with renewed skepticism. Ann took up “for-all discourse” explicitly in a guest editorial for Online Lecture Toolkit in October 2021:

Though I can very much appreciate that the desire to use “for all” framing comes from a place of wanting to be more inclusive, and support any student who may be part of our educational spaces, it could in fact do the opposite. This “for all” framing is part of the same discussion had about the “universal” in UDL that can cause interest convergence. (Ethical and accessible UDL)

Ann’s respectful critique is perhaps generous in the context of our conversation, but rings true for many educators who are eager to highlight EDI in their classrooms but execute it overbearingly, in excited overbroad strokes rather than a detailed highlight. She explains that instructors who adhere to best-practice lists can become more hostile to additional accommodations requests, and/or a quiet pressure can mount for disabled students who “feel uncomfortable asking for something that will support their [individual] learning because ‘for all’ has been so embedded in the discourse around [course] strategies.” This ties in poignantly to a more complicated concept she mentions at the end, hyperlinking back to Jay in her reference to interest convergence, the implicit idea borrowed from critical race theorist Derrick Bell Jr. (1980) that “conditions change for minorities only when the changes can be seen (and promoted) as positive for the majority group as well” (qtd. in Dolmage, 2005). This is exemplified through the neuro-normativity rhetorics that some UDL conferences have become infamous for, the notion that “brain activity” can be traced onto easily-identified biological processes that serve as objective proof that universal design “works” in some identifiable way for all brain-types. This genetic story works to “prove” the for-all-ness of UDL while spectacularly missing the point of why these methods were developed in the first place, and by whom. Similarly, the “promotional” aspect of the definition is especially important: by making “access” and universal design non-reliant on context, the power of the goodwill supplied by the rhetoricity of “access” and accessibility creates market capital in an environment apparently drowned by debt due to extensive campus closures (which we know better than to believe at this point).
In order to revitalize the neoliberal postindustrial institution, it’s not enough to accommodate à la carte — the accommodations have to present as easily-achieved features and interventions that as many students as possible would want, a buffet of faux-inclusivity branded as revolutionary accessible coursewares.

These undercurrent sentiments are echoed boldly by radical disability scholar Ada Hubrig, who shares that “access isn’t a project that can be completed: it’s not a checklist or a bulleted list, but ongoing conversations and actions that address the systematic inequalities and institutional barriers that exclude disabled and other marginalized bodies” (Hubrig, 2021). In their gentle calling out that “access” in emergency remote teaching was reductive and utilitarian, they also point to the continuous unrewarded labour that advocates perform when this calling out occurs, articulating that “disabled students are doing us a favour in pointing out how our pedagogy, our curriculum, our institutions are ableist and how we can do better,” reifying the disabled praxis credit-giving Hamraie enacted a year prior. If so many of these UDL solutions are built in disabled community, and so many of our remote access strategies are “crip technoscience” (Hamraie, 2020), then to what extent do we owe individually situated (or à la carte) accommodations in our for-all buffet classrooms built using the bodymind knowledges we cast out so trivially?

Is it true that in our utilitarian shift, we dismissed the technologists and architects that supplied much of the literature and accommodations know-how to build the emergency pandemic infrastructures we desperately want to banish again in the “return to normal”? Advocates throughout 2020 and 2021 have emphatically argued that yes, this “access” wordsmithing acted as a middling bridge between increasing institutional “austerity” and digitization profiteering. In a 2021 lecture at the University of Utah, feminist mad disability theorist Margaret Price described an interview with a Deaf graduate student whose department advisor shared that “[their department] wasn’t going to take another Deaf student, because having Deaf students was too expensive, you know” (Price, 2021). The most distracting trait of that recollection — and she was quoting the student verbatim — was the quiet “you know” that came at the end, as if she had already considered that her audience was likely to think that the accommodations she required, despite being part of the ADA legal minimum, were obstructive and cost-inefficient. Margaret recounts three similar “survival” stories from her qualitative research, forming an easily discernible pattern of “reasonable” and “unreasonable” request parameters that exist far outside the statute that is meant to police exactly that. And when institutions choose to self-police the extent of accommodations, I imagine you can gather where this conversation is headed based on Ann, Jay and Ada’s still-shots of interest convergence at work: the uneven application of “checklist”-style methods, the neurotypical brain scans as proof of concept, the unsung recurring advocacy work of disabled bridge-makers.
When we allow institutions the ability to invent and self-police “access” in practice, this is antithetical to the work Ada describes as “the move from accessibility as gift to accessibility improving the field [of writing studies] as a whole” (Hubrig 2021), which they believe can be accomplished with more considered attention to disabled equity accommodations rather than designing and marketing for-all initiatives that use the brilliance of disabled engineering, co-opt it, and resell it as a utilitarian classroom ethic that serves primarily the idealized, abled, preferably international bodies they recruit en masse to “balance” the budget.

We can push this conversation further still. Propelled by decreasing provincial funding and increasing capitalist notions of university-as-profitable-industry, institutions were able to use apocalypse time to warp the narrative behind the word “access” to benefit our highest-paying, most abled tuition-paying students. In order to maximize profit potential from a new model of fully-digitized learning that does not require the expenses of geolocation, and cast that population as “everybody,” they also needed to storywork the UDL catch-all to be most “accessible” to the desired group, the “worried well” mid-performers, while manipulating marginalized groups into discontinuing pandemic learning and reducing à la carte cost negotiations. This feat of misengineering was not directed solely at the student population this time, but sought to conspire with the educators themselves.


selling “access” to educators: algorithmic analytics

Currently (at the time of writing), universities are propagandizing learning analytics as ironic “access” points for higher education pedagogues. Earlier, I touched on an echo pandemic unfolding for mad and disabled institutional populations, a simultaneous crisis where waterfalls of feverish content about universal design, EDI and accessibility in emergency remote learning were being penned in the Canadian-American teaching scene while the students and instructors these methodologies were originally designed for (and by) were being pushed out of the academy in unprecedented numbers. If the accessibility revolution is executed as imagined, why are disabled instructors and students experiencing a dual-marginalization?

The answer to this was the commodification of the word “access” to create new profit avenues in a rapidly evaporating higher education budget. Partially tongue-in-cheek, Maclean’s describes this transition as a shift from “being ‘publicly funded’ to being ‘publicly aided’ as the gap between their expenditures and provincial grants steadily grew” (Ansari, 2020). The strategy devised for rejigging potentially massive pandemic budgetary deficits — which never transpired — was ostensibly to retell the narrative of “accessibility” as a keystone to the new digital pandemic university: a method that does not make student degrees “lesser,” but instead unlocks more customization options, accommodation infrastructures, and opportunities for international networking than ever before. Doubling down on the precedent method of relying on international students for triple tuition rates, universities sought to re-market themselves using a three-step system: digitizing and redefining “access,” marketing that “access” as a for-all buffet, and repackaging these notions for educators with expensive LMS and third-party programs that produce complex “learning analytics” that do not work as advertised. I have shown above how the wordsmithing of “access” unfolded over 2020, and disability scholars worked hard to shed light on the difference between UDL and pandemic pedagogy’s version of universal design (which was predicated on the “new” version of “access”). But an important cog in the education industry are the educators themselves, and they, too, needed to be sold on why fully-digital course environments were not going to make them redundant. Appealing to the ego has virtually never failed in the ivory tower, particularly among staff trained to believe in the objectivity of data and the trustworthiness of a “more is always better” research ethic. What better way to do that than with copious datasets mined from performance algorithms?

You might be aware that 2012 was the “Year of the MOOC” according to the New York Times (Pappano, 2012). This stands for “massive open online courses,” and may seem a little anachronistic to read now from a place where most university education has been essentially MOOCed. Stanford University developed courses that later became MOOC megaplayer Udacity, which led to a startup windfall of MOOC incubators that included Coursera, MITx and EdX (Shah, 2020a). Edutech reporter Dhawal Shah wrote follow-up articles to the New York Times investigation twice in 2020 as interest grew in online learning LMS and online program management (OPM) software protocols, including “MOOCWatch 22: The MOOC Hype Revisited” (Shah, 2020b). While original investments were in relatively conservative millions in Silicon Valley dollars, the profitability of this market has more than quadrupled in 9 years: Shah reports $14mil seed investment in XuetangX, $14.1mil to Jolt, and over $200mil in TigerGlobal, the Indian answer to America’s Coursera MOOC. You do not need to be a university provost to detect profitability potentiality in the MOOC market, a boon for pandemic learning architecture and an optimistic message for universities pivoting fully-online in early 2020. What separated megacorporations like Coursera or EdX from a university-dedicated course was eugenicist exclusivity and the LMS itself, as most of these MOOCs had to be developed with OPM mirrors instead of university-connected dashboards, a market dominated by Brightspace/D2L, Blackboard and Canvas (Fenton, 2021). Ergo, to remain competitive with the cheaper MOOC, the university needed to design courses that could deliver “access” Coursera couldn’t, while also offering educators data points that EdX couldn’t provide for open-enrolment instruction. Within this antinomy, “learning analytics” emerged.


Screen capture displays webpage with Desire2Learn’s banner and logographic (D2L, orange bold type) across the top of the page. A Macbook, left-center, displays a false student profile and a number of infographics pertaining to her course progression in a false D2L course module. Right-side text reads, “Better outcomes for at-risk students” (title text), followed by the following body text: “Predictive analytics and visual diagnostics can help instructors identify at-risk students in a course and take action to help them improve. / (>) Intuitive dashboards transform complex learner data into easily identified patterns of student engagement and academic risk. / (>) Integrated workflows help instructors quickly take prescriptive action for at-risk and disengaged students. / (>) With a better understanding of what’s working and what isn’t, instructors can improve and optimize their course content and delivery over time.”

Figure 7: “D2L Performance+ At-Risk” screencap.


Above is a screenshot of D2L’s product page for their “Brightspace Performance+ for Higher Education” LMS package. This screenshot specifically has a lot to say in less than 100 words, most of which is implicit. The Performance+ package claims to provide educators with “better outcomes for at-risk students” (D2L, 2021) without rushing to define what “at-risk” means or which students it may flag. Instead, it shares that their predictive algorithm relies on “easily identified patterns of student engagement and academic risk.” A reasonable guess would be that this LMS would mirror current versions of D2L platforms that deliver student data about time spent on content pages, time spent typing responses, discussion engagement and number/duration of login attempts to compile this dataset. The extent to which these practices have potential to produce reliable student narratives is heavily informed by ableist conceptions of productivity, which you’ll remember Ruth explained earlier as “being taught to measure (...) based on output” (Osorio, 2020). She goes on to explain that in this system of measuring progression, “when productivity is framed as a moral good, disabled people are further shunned from society, deemed unworthy because of their supposed lack of contributions to society.” None of these performance metrics cohere to Hamraie’s conception of UDL, where every instantiation of being-in-space is mediated by and expressly agreed to by the students themselves through recurring conversations about “approaches to discussion (...), preferences of video discussion vs. boards (...), and hacking and tinkering with the educational process” (Hamraie, 2020). These Brightspace metrics encode, prioritize, and ultimately reward a very specific type of student, learning in a very specific way — a neuronormative way, based on calculations by neuronormative programmers and LMS designers. 
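To make concrete just how reductive these “easily identified patterns” can be, consider a deliberately crude sketch of the kind of scoring heuristic such dashboards plausibly compute. Everything here is hypothetical: the field names, weights, and thresholds are invented for illustration, not drawn from D2L’s proprietary algorithm. The point is precisely how arbitrarily “risk” gets encoded once engagement is reduced to a handful of platform-visible numbers.

```python
# Hypothetical sketch of an engagement-based "at-risk" heuristic.
# All field names, weights, and thresholds are invented; D2L's actual
# algorithm is proprietary. Note what this metric rewards: time with
# pages open, frequent logins, visible posting (one normative style of
# engagement) and what it cannot see: connectivity, caregiving,
# disability, or offline reading.

from dataclasses import dataclass

@dataclass
class LearnerActivity:
    minutes_on_content: float   # time with content pages open this week
    login_count: int            # logins this week
    discussion_posts: int       # visible forum posts this week

def risk_score(a: LearnerActivity) -> float:
    """Weighted engagement score in [0, 1]; higher reads as 'safer'."""
    # Arbitrary caps and weights -- the crux of the critique.
    engagement = (
        min(a.minutes_on_content / 120, 1.0) * 0.5
        + min(a.login_count / 5, 1.0) * 0.3
        + min(a.discussion_posts / 3, 1.0) * 0.2
    )
    return engagement

def flag_at_risk(a: LearnerActivity, threshold: float = 0.4) -> bool:
    # A quiet student who downloads readings and works offline scores
    # identically to a student who never engaged at all.
    return risk_score(a) < threshold
```

On these invented weights, a student who reads everything offline registers zero “engagement” minutes and is flagged, while a student who simply leaves a content tab open is not: the heuristic measures platform visibility, not learning.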
Naturally, you can’t try any of this beta content out yourself without your institution buying the software license, but luckily there are a number of theorists who already have access to the Performance+ version of Brightspace and the generosity to share how Orwellian this software gets.

Education technologist Brenna Clarke Gray has been outspoken about the perils of Brightspace on her Digital Detox blogroll, aptly summarizing that “some LMS analytics are wild, tracking how long students watched videos for or had a PDF open, and faculty rarely receive training on how to read this data. I think normalizing this kind of view of students — as being in need of surveillance — changes classroom dynamics for the worse and dissolves trust between students and faculty” (Gray, 2020). Access analyst Morgan Banville uncovers yet more avenues of surveillance-building in her “Dataveillance” conference presentation, including “institutional tracking of IP addresses, instructor monitoring of LMS mouse clicks, implementation of lockdown browsers, camera [tracking], login times,” which are compiled into instructor interfaces to “enact the school’s disciplinary purpose” through rhetorics of making “invisibilized” students “visible” through habitual tracking and footprint-logging across all components of the LMS platform (Banville, 2021). This is not unlike the University of Toronto’s sousveillance (a term coined by wearable tech researcher Steve Mann), in which “wearable” embodied technologies could actively track and monitor outcomes, conditions and/or vitals — technology that has seen mass uptake in popular applications like the FitBit and GoPro. While sousveillance can be translated as “opt-in” (provided you, the wearer, are only recording your own stats), much of the dataveillance being enacted upon students in mass quantities is compiled unencumbered by consent dialogues or privacy statements; instructors are not required to inform students of the points of collection nor the amount of information they are collecting through interaction with the LMS platform — and interaction with this platform is often a rigid course requirement.
If students were given no choice to “opt-in” or “opt-out” to data collection at this scale, can it be given the same positivistic spin awarded to sousveillance technologies like heart rate trackers?


After you add the Students at Risk widget to a course home page, you can begin using the widget to monitor learner success. By default, learner names display as Anonymous in the widget. This allows instructors to protect learner names in situations when the instructor displays the course home page to other learners, for example, in a classroom or during a web cast. Next to each learner, a predicted grade for the week displays. To view learner information for a single learner, point to the widget or move the input focus (for accessibility) to the anonymous name. To view the photos and names of all learners listed in the widget, click the Show names link. To display more detailed information about that learner’s predicted outcome, click on a learner’s name to load the dashboard page for that learner. To display all learners in the course and the success index, click View all predicted grades to load the standard D2L Student Success System page.

Figure 8: "Using the Students at Risk widget."


Brenna identifies the trust erosion implicit in the inevitable disclosure of data collection done in such a backdoor manner — students on the platform are not told their habits are being tracked and graphed, and if they are, certainly not to the extent of eye tracking and page access time logs. Here, instructors become party to the villainization undertaken by the LMS platform through their compliance with it: by choosing to leverage this data and accept its assessment of “at risk” (or the measures by which this is calculated), they ensure the platform is not solely at fault for massive privacy violations and trust erosion. While the LMS definitely simplifies and streamlines content delivery and discussion, it infinitely complicates the “visibility” of the student (to use Morgan’s word). When Performance+ renders a “student profile” based on the metrics discussed, the student’s “visibility” is mired in privileged beliefs about content uptake and workflow style, not unlike Ruth’s discussion of ableist productivity modeling.

Above is a screencap of D2L’s Help guide, describing “How to use the Students at Risk widget.” The Students at Risk widget is Brightspace’s method of compiling the data Morgan and Brenna describe into a centralized module, which can re-organize that data to create “detailed information about that learner’s predicted outcome” (D2L, 2021). This is compiled into a “success index,” a framework whose name is so uncomfortably on-the-nose you can’t help but wonder if the programmers felt a sense of pseudo-ironic epistemic dread when compiling this software for Performance+ distribution to higher ed institutions. In the screencap, instructors are given directions about how to de-anonymize data (with photos attached), generate grade predictions on a weekly basis, home in on specific “learner dashboards,” or view all learners in a comparative “success index” within their “Student Success System” analytic. Notwithstanding that this is the only training instructors receive on the Students at Risk widget, the system engenders an Enlightenment-era rhetoric that surreptitiously equates the generation of “objective” data with reputable “truth” or predictable outcomes in its promise to predict weekly grading outcomes from algorithmic input. For all the “visibilizing” work these metrics accomplish, an equal (arguably much greater) amount of de-visibilizing work is done based on where that visibility is localized. To produce hard-number outputs, the algorithm cannot use variable factors like access to high-speed internet, stable connectivity time, stability of home, income management, or other critical success indicators that lack measurability.

We also can’t account for other common student circumstances like medical leave, family bereavement, learner accommodations or alternate assignment checkpoints, mental health issues, or employment. Under the Students at Risk framework, we can only measure what we can count, and only what we can count can be used toward a student’s success index score. And while it’s easy to point out all the negative access circumstances that prohibit a truly accurate assessment of “risk,” we could also posit that this framework works hard to obfuscate positive access circumstances that de-level the playing field when taking the class as a whole-population success index. Brightspace can collect the number of times a student logged in (and for what duration), but cannot report the infrastructure they’re logging in on — information that speaks directly to how difficult logging in is for them, or what environmental factors may make it prohibitive. It also shields instructors from assessing students’ resources to buy materials needed to interact with modules, the “product keys” many textbooks require as follow-up material (which is portable to Brightspace gradebooks), and the number of jobs or extraneous obligations (such as children) students are juggling alongside their coursework; but also factors like program year, previous familiarity, family background, and generational trajectory. If we locate these factors as positive accommodations — circumstances that can make the course easier for some students than others — the Students at Risk widget has no ready algorithm to incorporate those positive modifiers. By failing to account for these positive accommodations and only accounting for negative ones when compiling an algorithmic success index score, what we’ve ostensibly created is a problematically accurate disability detector, legitimized with “objective” rhetoric.
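To make the asymmetry concrete, consider a deliberately toy sketch in Python (entirely hypothetical, and not D2L’s actual algorithm) of a “success index” that can only weigh what the LMS can count. The function name and weights here are invented for illustration; the structural point is that contextual factors like connectivity, caregiving load, or accommodations simply have no parameter, and so can never move the score in either direction.

```python
# Hypothetical sketch (not D2L's actual algorithm): a toy "success index"
# that can only weigh what the LMS can count. Contextual factors such as
# connectivity, caregiving load, or accommodations have no field here,
# so they cannot move the score up or down.

def toy_success_index(logins: int, minutes_on_content: float,
                      quiz_average: float) -> float:
    """Combine countable LMS metrics into a 0-100 'success index'."""
    # Arbitrary illustrative weights; every input is a countable metric.
    engagement = min(logins / 20, 1.0) * 30            # capped login credit
    time_spent = min(minutes_on_content / 300, 1.0) * 30
    grades = (quiz_average / 100) * 40
    return round(engagement + time_spent + grades, 1)

# Two students with identical countable behaviour necessarily receive
# identical scores, however uneven their circumstances actually are.
a = toy_success_index(logins=5, minutes_on_content=90, quiz_average=62)
b = toy_success_index(logins=5, minutes_on_content=90, quiz_average=62)
assert a == b  # context never enters the calculation
```

The sketch is only meant to show where the argument bites: whatever the real weights are, a score built exclusively from countable inputs is structurally blind to everything the chapter lists as unmeasurable.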

It is worth noting that there are credible arguments in favour of learning analytics as a means of locating invisible struggle or student floundering. In her discussion of inclusive learning design, Kate Lister (2021) demonstrates positive usage of analytics toward attentive instruction:

[Nguyen’s study] demonstrated how learning design “provides educators with pedagogical context for interpreting and translating Learning Analytics findings to direct interventions” (Nguyen et al., 2017, p. 1). This link to learning analytics demonstrates a developing use of technology to support distance learning as it enables educators to track student engagement, link it to specific activities and to target interventions and support when students are perceived to be disengaged. Thus, the distance educator is able to respond to student choices about their learning in a more immediate way, assuming of course that the student is choosing to study online. A meta-analysis of student behaviour over a whole course then informs educators’ decisions about adaptations to future activities, creating a virtuous circle of educator and learner choices.

In discussing the difference between protecting and policing students, metric debates capitalize on the ways instructors can use early intervention to keep students “on track” or in passing range for their online pandemic courses. There is inarguable benefit to the ability to detect legitimate “risk” in pandemic education, which relies heavily on noncorporeal environments and asynchronous interaction. For micro-assessing which students are engaged, and when, inside individualized contexts, some of the LMS’s recorded information could be useful in flagging early disengagement or extra-educational struggle. There are real advantages to sifting through gradebook scores and early activities to get a sense of the classroom climate and anticipate which students might benefit from extra encouragement or attention. Some educators swear by LMS metrics to determine “invisible” student difficulties much earlier than their in-class equivalent measures: Kate borrows from Stone’s 2016 distance learning analysis to highlight the obvious benefits of extra information about “less visible students,” offering that these analytics can help dissuade student stereotyping and deconstruct the problematic demography-based research so often held as the gold standard in pedagogy research in the American context.

However, the goodwill of this argument was manipulated by pandemic circumstances to create ease-of-use at the expense of privacy, trust and collegiality. If we take as a given that a high-scoring student has a high success index score and instructors are only receiving flags about “at risk” students who comparatively score much lower, we have created a rhetorical problem: while the Students at Risk widget is able to locate “invisibility” (points of risk) with incredible competence, it is simultaneously able to de-visibilize unfair advantage and merit certain presentations of ability (that is, recognizable western white productivity models). Though it is true that the university structure is equally guilty of the same de-visibilizing duality, learning analytics are able to detect and mark students with objective measurements that are much more difficult to interrogate than macro-scale entrance admissions policies and institutional retention rates. Radical access tech theorists like Brenna and Morgan assert that these analytics are akin to panoptical strategies, “entrenching distinctly unequal distributions of power and rendering students visible through a range of coercive methods” (Banville, 2021) while also “sand[ing] off difference and forc[ing] a uniform experience (...), the most simplified experience we could provide, and [we] needed it to be the same for everyone” (Gray, 2020). By over-engineering ways in which we can more quickly identify when students are falling behind, we have also facilitated and reified privilege in whom we don’t identify with the Students at Risk algorithm. The students who will reliably appear on the success index are very likely the same students who face far fewer barriers to access, and the more difference we “sand off” in the LMS, the more extreme the points of disparity between abled and disabled students become as a hidden consequence of these early warning frameworks.
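The one-way character of this flagging can be sketched just as simply, again hypothetically in Python with invented names and an invented threshold: the alert only ever fires downward, so high scores, which may encode advantage as much as merit, are never surfaced for scrutiny at all.

```python
# Hypothetical sketch (invented names and threshold, not D2L's code):
# an "early warning" that only ever fires downward. Scores well above
# the cutoff, which may reflect unearned advantage as much as merit,
# are never surfaced for review.

def flag_at_risk(success_index: dict[str, float],
                 threshold: float = 50.0) -> list[str]:
    """Return (sorted) students whose success index falls below the threshold."""
    return sorted(s for s, score in success_index.items() if score < threshold)

cohort = {"student_a": 92.0, "student_b": 41.3, "student_c": 88.5}
print(flag_at_risk(cohort))  # prints ['student_b']: only the low scorer is made "visible"
```

However the real comparison is implemented, any flag of this shape renders low scorers hyper-visible while leaving the sources of high scores entirely uninterrogated.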
Instead of protecting students (and building in dynamics that interrogate objective measurements of productivity), universities invested in policing frameworks that detect and further marginalize students who do not engage privilege and/or ability in readily apprehendable ways. Instructors are not measuring positive accommodations or the extent to which the playing field is desperately uneven; they are measuring where students have failed and exporting those instantiations of failure (e.g., poor scores, low login durations, low engagement) as objective measurements of their success potential. Algorithms designed to invisibilize privilege and de-visibilize advantage aren’t objective risk assessments but deservingness conjecture.

Kate’s cross-section for Open Praxis noted that students with mental health problems were more likely to fail courses or drop out of university altogether, and by and large the institutional response to this has been akin to victim-blaming: “In recent years studies around how to better support student wellbeing have largely focused on pathologising students; for example, enhancing student resilience (Galante et al., 2018; Whiteside et al., 2017), enabling mental health-related support mechanisms (Brown, 2018; Byrom, 2018) and promoting self care (Ayala et al., 2017; White et al., 2019) rather than adapting university practices or systems to create learning environments that engender wellbeing” (Lister, 2021). Disempowering students, while not a great retention approach, manufactures some level of collective responsibility for a situation that should be construed as a massive infrastructural failure — by spreading this systemic failure across the whole university population, students become implicated in the new blame-framework of institutional maladaptation. Health humanities scholar Shane Neilson is vocal about the relationship between students and the “neoliberalization of care,” particularly the “discourse of burnout,” from his vantage point as a professor of biomedical epistemology. Based on data collected from the Canadian Medical Association, Shane identifies a bold, causative assumption Ontario universities make between “low-resiliency [student physicians] and high likelihood of burnout” (Neilson, 2020), a relationship the empirical health survey bolstered as a key welfare claim for Canada’s future medical doctors, entirely devoid of any close rhetorical inspection.
In attempting to identify data points that could accurately predict student failure in advance, what the survey created instead was the same blame metric Brightspace’s Students at Risk widget uses to reframe university-wide issues as individual student failures, which Shane ultimately attributes to the university’s harsh reliance on “neoliberal monetary policy” and the exploitation inherent in encouraging “resiliency as something to be cultivated at the level of the individual” instead of building in systemic coping strategies and more industrial care frameworks.

Echoing many of the same conclusions, pedagogy scholar Emily Brier located pandemic instructors as “enforcers of the neoliberal university’s interests” through what she names the “violent surveillance university model” that manipulates dataveillance technologies (she takes particular aim at exam proctoring tech, like Respondus LockDown Browser) to destabilize the interconnectedness between students and instructors as a means of reliably establishing more industry-reminiscent control practices in the radically disembodied emergency teaching format (Brier, 2021). Neither Shane nor Emily attempts to center instructors themselves as bad-faith, willing participants in the “hyper-surveilled university” (Brier, 2021), but both argue in favour of an intentional refusal of these practices in their classrooms, else they be named complicit in creating a “burnout as destiny” (Neilson, 2020) structure for the most marginalized students in their classes — students most likely to be identified as “at risk” or struggling with “resilience” in their relationship to emergency online learning. As Hannah echoes, this “risk” status has been extremely overrepresented in the disabled student community: “access was imperfect and uneven, however, instituted ad hoc, and only when faculty or administrative interest materialized. Much of the access, too, did nothing to address the structural inequities that explicitly and intentionally exclude disabled people from academia” (Lorenz & Facknitz, 2021). And just as universities excused themselves from accountability in the pandemic macro-environment, learning analytics become a powerful method for teachers to similarly recuse themselves from student performativity blame in classroom micro-environments — eerily reminiscent of the UDL co-optation argument from earlier on in this chapter.
Instead of protecting students with greater investment in accommodations or accessibility programs, universities chose to invest instead in the dataveillance frameworks that include the Students at Risk widget (as part of D2L’s Performance+ LMS), exam proctoring surveillance software, video conferencing licenses, automatic plagiarism detectors, and so on. While these programs were ironically flagged for institutions as “access” systems insofar as they enabled at-home education to continue as if facilitated on campus, these programs in practice are hyperfunctional disability detectors, collecting massive amounts of interactivity data and reporting non-normative interactions as “risk” behaviours, mining student metrics for ableist patterns of material uptake. There are plenty of instructors who didn’t respond to the tempting offers of recusing responsibility with UDL or D2L — how did we convince them of the value of dataveillance?

“Access,” yet again. Creating enthusiasm for panoptical police work by instructors who may disagree ethically with hyper-surveillance requires manipulating the belief that we are still able to translate apocalypse time in very normative ways, particularly when interfacing with programs like Respondus LockDown Browser or Zoom video-conferencing. When we fail to subvert systems that require students to disclose potentially unsafe work environments, work habits or coping mechanisms in their own work-from-home space using LMS analytics and digital proctoring, we consequently make the argument that their work-from-home space is “on campus” by policing it as though we are engaging with them in public space. But that’s not public space, and the bodies interacting in that private space are private bodies. Surveillance programs like Respondus and Zoom allow some informal trickery to occur, promising a comparable classroom experience by re-imagining it onto environments that have never been designed as public classroom spaces under the guise of “academic integrity” — an argument disability theorists will recognize instantly as direct transference of the lying-until-proven-truthful accommodations strategy. In particular, Respondus required students to submit to increasingly panoptical control settings by asking them to take video and audio recordings of testing workspaces including a full 360-degree sweep of their surroundings, a photo ID verification check and continual video monitoring during testing time, all while “locking down” their computer functionality to the essential elements required for testing (normally the D2L window and the video feed, with all other responsive elements locked until the test duration expires). Respondus describes this approach as the “gold standard for securing online exams in classrooms or proctored environments” and boasts over 2,000 institutional recurring subscriptions with 100 million exams digitally proctored each year (Respondus, 2021).
The digital accessibility infrastructures D2L capitalizes upon for their advertising campaigns are virtually nullified in their entirety by the incredibly restrictive feature-silencing of Respondus exam mode.


header “How LockDown Browser Works”. Subheader checklist function denoting the following checked items: “Assessments are displayed full-screen and cannot be minimized / Browser menu and toolbar options are removed, except for Back, Forward, Refresh and Stop / Prevents access to other applications including messaging, screen-sharing, virtual machines, and remote desktops / Printing and screen capture functions are disabled / Copying and pasting anything to or from an assessment is prevented / Right-click menu options, function keys, keyboard shortcuts and task switching are disabled / An assessment cannot be exited until the student submits it for grading / Assessments that are set up for use with LockDown Browser cannot be accessed with other browsers”.

Figure 9: LockDown function silencing graphic (Respondus, 2021).


Though many of these features may read to instructors as anti-cheat build-ins, there is a compelling argument that these build-ins are anti-accessibility. Disabling the browser menu removes functionality for alt-format programs such as screen readers, audio software, TTY interfaces and image enhancement applications, making the test single-format and untranslatable to other popular digital media (such as large-format, read-aloud or inverted colourization). Similarly, it explicitly “prevents access” to any other application running (and will prompt you to close any simultaneously running Windows application) while interacting with the test, removing any option for accommodated students to interact with LockDown testing formats with the accommodations they’ve previously fought for. Further, heavily locking keyboard functionality significantly restricts neurodivergent students who interface differently with operating system build-ins like responsive Sticky Keys, search-and-replace and ADHD desktop settings, which all make significant use of keyboard shortcuts and alternative interfacing to create more manageable task orientation in relatively “stable” environments like the desktop browser. As well, the locked time duration functionality and inability to walk off-screen or take breaks plays into popular argument frameworks already productively advanced by other disability theorists under the general discourse of intentional ableism built into time-based assessment structures (Dolmage, 2017; Gernsbacher, 2020; Brown, 2020; MIT Technology Review, 2020; Hamraie via Allen, 2021; Lau, 2020).
When adding disclosure of private space masquerading briefly as public space to the mix, Respondus has created a uniquely multifaceted, domineering and disingenuous way of visibilizing disability through environmental deficit (transforming private space to public space), accommodation deficit (denying “legitimate” software use), spatial deficit (non-acknowledgement of safe spaces for testing) and digital deficit (non-acknowledgement of function locking as unfairly disabling) all occurring concomitantly in a chaotic slush of privilege-baiting.

This virtually weaponized usage of ability-measuring-as-proctoring was heavily criticized when implemented as the primary pandemic exam methodology at Ontario’s Wilfrid Laurier University, culminating in a virtual campaign of e-mails and social media posting from undergraduate students cognizant of the obviously disabling architecture these “academic integrity” programs base themselves in. The coordinated e-mail outcry was so effective that Laurier pulled back many core advertised Respondus features, relaying in an official press release that “over the last few weeks, students have been expressing frustration over the detailed test/exam requirements that have been causing additional stress and anxiety. Concerns have been shared through social media, by email to, and through the Wilfrid Laurier University Students’ Union” (Wilfrid Laurier University, 2020). A series of digital proctoring accommodations were offered to Laurier’s rioting students, including the stipulations that “[s]econd ‘side-view’ cameras can no longer be required for proctored exams,” “[s]tudents can only be asked to use a mirror to show their workstation in the environmental scan, not for the whole exam,” and “[Laurier will] clarify the process for students who need to use the washroom” (Wilfrid Laurier University, 2020). Wilfrid Laurier University presents an interesting case study where student campaigning resulted in a lessening of hyper-surveillance that is touted as “essential” to integrity-based online examinations at other institutions — proving that some Respondus features are not necessary to the administration of an academically compliant exam. But if these features were meant to mirror or otherwise recreate the surveillance available to instructors in classroom environments, Laurier may have accidentally proven that many of the institution’s methods of lie-detecting and rigor-enforcement are ultimately unnecessary in the enforcement of academic standards.

Digital proctoring clawbacks at Wilfrid Laurier University revealed that by adapting non-normative apocalypse time in ways that intentionally mirror normative time, we can create unintended relationships between the actions we choose to remove in apocalypse time and translational actions that can apply to normative pre- or post-pandemic university time. By intervening into some of the more panoptical features of the Respondus LockDown Browser and counting examination grades as legitimate with reduced oversight, the university revealed that we never needed such intense supervision of exam environments and the inherent ableism required to perpetuate strict surveillance as the norm, both digitally and in proctored auditoriums. Similarly, Laurier offered simplified credit/no credit [CR/NCR] grading in Spring 2020, Sprummer 2020, Fall 2020, Winter 2021 and continuing “until the academic disruption caused by the COVID-19 pandemic is declared over” with a simple form (Wilfrid Laurier University, 2020), creating a conversation around performance metrics and the presumed impossibility of CR/NCR mass-implementation post-apocalypse time. Why does that evaluation structure end when the pandemic ends? In what way is the GPA metric fundamentally different from the LMS data mining and LockDown restricted functions we use to measure student performance and integrity now? Echoing this, dataveillance features as rendered by D2L can face that same concentrated scrutiny: what is the true worth of knowing what time (and for how long) students engage with module content? Why do instructors benefit from reports detailing digital LMS interaction points that perversely overrepresent neurodiverse and disabled students as “at risk” or in need of additional surveillance?
These widgets were designed to mirror the control (read: comfortable predictability) instructors felt they had in the classroom space, pointing out student “habits” that they were previously able to “catch and correct” themselves in physical classrooms. “Students at Risk” is designed to algorithmically generate the same “warning factors” instructors have been relying on for decades as a means of regaining stability and control within physical classroom space. In translating those metrics to online environments, it may have become more apparent to some instructors that this performance-based modelling of student engagement is problematic and recursively disabling by design. In admitting that the hybrid, hyflex, or fully-online pandemic modalities were problematically ableist, we cannot avoid also admitting that these practices were developed to closely mirror the settings we already embodied and enthusiastically enforced as “rigor” or “integrity” checks.



By creating a greater focus on “access” potential and “online accessibility,” we’ve created a language fulcrum which extracts massive profits from apocalypse time enrolments based on these promises while reifying the same means of pre-pandemic risk assessment through more robust, algorithmic disability detectors (promise-breaking). By choosing not to remove these “risk factors” or disabling barriers in apocalypse time, we therefore choose to further legitimize “access” structures in ways that warped the original intention of the term. Even when faced with the big pandemic “reveal” that many of our means of student surveillance and assessment were arbitrary and unnecessary, the university — and instructors themselves — re-invested in the familiar, blame-deflecting territories of (data/sur)veillance and software policing to push the burden of performance squarely back on student shoulders. When we talk about what words we use when, a word we shouldn’t have used so religiously in a space as insistently inaccessible, as undeniably profit-based as apocalypse time, is “access” itself.



Allen, M. (2021, February 10). Designing for disability justice: On the need to take a variety of human bodies into account. Retrieved from

Ansari, D. (2020, March 27). Teaching the social determinants of health during the COVID-19 pandemic. Retrieved from

Ayala, E. M., et al. (2017). What do medical students do for self-care? A student-centered approach to well-being. Teaching and Learning in Medicine, 29(3), 237-246. DOI:10.1080/10401334.2016.1271334

Banville, M., & Sugg, J. (2021, October). Dataveillance in the classroom: Advocating for transparency and accountability in college classrooms. SIGDOC’21: The 39th ACM International Conference on Design of Communication online proceedings.

Bell, D., Jr. (1980, January). Brown v. Board of Education and the interest-convergence dilemma. Harvard Law Review, 93(3), 518-533.

Bell, R., & Bell, H. (2020). Applying educational theory to develop a framework to support the delivery of experiential entrepreneurship education. Journal of Small Business and Enterprise Development.

Berkeley News. (2017, March). Campus message on Course Capture video, podcast changes. Retrieved from

Brennan, J., et al. (2021, June). Investing in a better future: Higher education and post-COVID Canada. FACETS Journal, 6(1).

Brier, E. (2021). Pandemic pedagogy: Practical and empathetic teaching practices. Spectra, 8(2), 31–37. DOI:

Brown, J. (2018). Student mental health: Some answers and more questions. Journal of Mental Health, 27(3), 193–196.

Byrom, N. (2018). An evaluation of a peer support intervention for student mental health. Journal of Mental Health, 27(3), 240–246.

CAST. UDL in higher ed. Retrieved from

Desire2Learn [D2L]. (2021). Students at risk: How LMS can help. Retrieved from

Dhawan, S. (2020). Online learning: A panacea in the time of COVID-19 crisis. Journal of Educational Technology Systems, 49(1), 5–22.

Dolmage, J. (2015). Universal design: Places to start. Disability Studies Quarterly, 35(2).

Dolmage, J. (2017). Academic ableism (open-access version). Fulcrum Downloads via University of Michigan Press.

Durham College Center for Teaching and Learning. (2021). UDL checklists. Retrieved from

Facknitz, H., & Lorenz, D. (2021, October 21). Reflections on disability and (dis)rupture in pandemic learning. Active History, 2(1).

Fenton, W. (2018, January 12). The best (LMS) learning management systems. PC Magazine.

Financial Knowledge and Information Portal [FKip]. (2021, November). Market cap of S&P 500 index constituents 2021. Retrieved from

Gagne, A. (2021, October 28). Accessibility framing and “for-all” discourse. OLT Faculty Development. Retrieved from

Ethical and accessible UDL with Ann Gagne. (2021, October). Retrieved from

Galante, J., et al. (2018). A mindfulness-based intervention to increase resilience to stress in university students (the Mindful Student Study): A pragmatic randomised controlled trial. Lancet Public Health, 3(2), 72-81.

Gernsbacher, M. A., et al. (2020). Four empirically-based reasons not to administer time-limited tests. Translational Issues in Psychological Science, 6(2), 175-190.

Gorski, P. (2002). Dismantling the digital divide: A multicultural education framework. Multicultural Education, 10(1), 28-30.

Gorski, P. (2013). Building a pedagogy of engagement for students in poverty. Phi Delta Kappan. 95(1).

Government of Alberta. (2021). Tuition regulations. Retrieved from

Gray, B. C. (2021, January 22). Digital detox 2: The LMS, tech-driven pedagogy, and making bad choices too easy. Digital Detox 2022 DEV.

Haller, B. (2001, July 22). Ignored aspect of digital divide. Retrieved from

Hamraie, A. (2020, March 10). Accessible teaching in the time of COVID-19. Critical Design Lab.

Hodges, C., et al. (2020, March 27). The difference between emergency remote teaching and online learning. Retrieved from

Hubrig, A. (2021). On “crip doulas,” invisible labor, and surviving academia while disabled. The Journal of Multimodal Rhetorics, 5(1), 33-36.

Lau, T. C. W. (2020). Access from afar: Cultivating inclusive, flexible classrooms after COVID-19. Nineteenth Century Gender Studies, 17(1).

Lister, K., & MacFarlane, R. (2021). Designing for wellbeing: An inclusive learning design approach with student mental health vignettes. Open Praxis, 13(2).

Lockee, B.B. (2021). Online education in the post-COVID era. Nature Electronics, 4, 5–6.

Loeppky, J. (2020, December 1). With the shift to online learning, students with disabilities face new barriers. Maclean’s.

Ministry of Advanced Education [MAE]. (2018). Strategic mandate agreement, 2nd edition [SMA2]: University of Waterloo. Retrieved from

Ministry of Advanced Education [MAE]. (2021). Strategic mandate agreement, 3rd edition [SMA3]: University of Waterloo. Retrieved from

Morin, A. (2021). What is Universal Design for Learning (UDL)? Understood.

Moro, J. (2020, February 13). Against cop shit. Retrieved from

Nature Index. (2020, August 28). COVID-19 research update: How many pandemic papers have been published? Retrieved from

Neilson, S. (2020). The problem with “burnout.” In P. Crawford, et al. (Eds.), The Routledge Companion to Health Humanities. Routledge.

Nelson, L. L. (2021). Design and deliver 2e: Planning and teaching using Universal Design for Learning (open-access ed.). Brookes.

Novak, K. (2021). What is UDL? infographic. Retrieved from

New York Stock Exchange [NYSE]. (2021, November). Traded products listings directory. Retrieved from

Osorio, R. (2020, September 24). I am a writer, even on days I can’t write: On rejecting productivity advice. BREVITY’s Nonfiction Blog.

Pappano, L. (2012, November). The year of the MOOC. The New York Times, online ed.

Precision Drilling (PDS). (2021). Precision Drilling [PDS] market cap. Retrieved from

Price, M. (2021). Everyday survival and collective accountability. Webinar for University of Utah’s accessibility series.

Province of British Columbia. (2021). The Province of British Columbia’s strategic plan, 2018-2022. Retrieved from

Pokhrel, S., & Chhetri, R. (2021). A literature review on impact of COVID-19 pandemic on teaching and learning. Higher Education for the Future, 8(1), 133–141.

Rapanta, C., et al. (2021). Balancing technology, pedagogy and the new normal: Post-pandemic challenges for higher education. Postdigital Science and Education, 3, 715–742.

Respondus LockDown Browser. (2021). Respondus monitor resources.

Rippé, C. B., et al. (2021). Pandemic pedagogy for the new normal: Fostering perceived control during COVID-19. Journal of Marketing Education, 43(2), 260–276.

S&P 500 companies listed by weight. (2021, November).

Shah, D. (2020a). Capturing the hype: Year of the MOOC timeline explained. The Report.

Shah, D. (2020b). MOOCWatch 22: The MOOC hype revisited. The Report.

Terada, Y., & Merrill, S. (2020). The 10 most significant education studies of 2020. Edutopia.

Top Hat. (2021). Universal Design for Learning. Top Hat Glossary Series.

U15 Group [U15]. (2021). Our impact. U15 Group of Canadian Research Universities/Regroupement des universités de recherche du Canada.

University of Guelph. (n.d.). Universal Instructional Design.

University of Kentucky. (2021). Definitions: Universal Design for Learning (UDL).

University of Waterloo. (2021). UW operating income budget, 2021–2022.

Watters, A. (2013, May 24). The myth and the millennialism of “disruptive innovation.” Hack Education.

White, M. A., et al. (2019). Evaluation of a self-care intervention to improve student mental health administered through a distance-learning course. American Journal of Health Education, 50(4), 213–224. doi:10.1080/19325037.2019.1616012

Whiteside, M., et al. (2017). Promoting twenty-first-century student competencies: A wellbeing approach. Australian Social Work, 70(3), 324–336. doi:10.1080/0312407X.2016.1263351

Wilfrid Laurier University. (2020). Laurier implements new student supports and resources for exams.


[1] I use this phrase to signal pandemic-circumstantial time while intentionally giving credence to the ways that other concomitant events greatly heightened the pain, tension, suffering and grief that transpired during this crisis temporality, which at present stretches from March 2020 to April 2022. “Apocalypse time” validates the myriad crushing events and byproduct trauma that the COVID-19 pandemic facilitated or otherwise deeply complicated.

[2] Mad and dis/abled students, staff and faculty.

[3] A word with many loaded disciplinary meanings, but I’ll be using it here akin to “intentional harm-reduction methods of room control,” drawing deliberate parallels to how this word is employed in advanced social work practice. For more on facilitation skills, the University of Kansas created a great open-source introductory chapter as part of their Community Toolbox initiative. See also (paid text) Mental Health Social Work Practice in Canada, 2nd ed. by Regehr & Glancy (2014) and/or (paid text) Theoretical Perspectives for Direct Social Work Practice: A Generalist-Eclectic Approach by Coady & Lehmann (2021).

[4] This argument is an essentially physical one: this dichotomy (pitting those that “have” against those that “have not”) platforms the literal access to machinery and infrastructure that allow individuals to use and maneuver digital content, including online learning interfaces. For more on the key problems with digital divide discourse, see Watters, 2019; Haller, 2001; and Gorski, 2002, 2013.

[5] I lack time and space to discuss this relationship here, but see the Educause article by Hodges et al. on the epistemic difference and non-interplay between “emergency/pandemic teaching” and “online learning” (April 2020 issue).

[6] Ableist teaching advice was in no short supply even beyond these massive review highlights. 2020 also featured hits such as EdTech Energy’s “31 Ways to Encourage Students to Turn Their Cameras On” (ETE, 2020), EduTopia’s “Strategies to Encourage Students to Turn Cameras On” (Loya, 2020), USA Today’s “Online Education is Making Students Fail” (Wong, 2020), and Educause’s “Bichronous Online Learning,” an article that steadfastly advocated that enforced synchronous discussions would improve learning outcomes for students able to make ample use of synchronous situations. This list is incredibly non-exhaustive and primarily illustrative (Martin & Polly, 2020).

[7] They are using “apocalyptic time” here as a direct reference (they even hyperlink to it) to Leah Lakshmi Piepzna-Samarasinha’s use in an op-ed for Truthout (2018) about Trumpism and its devastating effects on the disability community.

[8] I realize this is about the fiftieth time we’ve used “ironic” as a descriptor for many of the apocalyptic practices around access, but there was such a deluge of truly ironic “accessibility” protocol developments that I really do not feel, at the end of the day, that we’re abusing this word.

[9] There’s an interesting conversation that could be had here about “for-profit” racketeering as rendered by privately owned, market-capped MOOCs versus publicly-funded-when-convenient hypercapitalist university courses (and how both are essentially for-profit models with creative language).

[10] This is a five-second-version of the central argument in Peter Fleming’s Dark Academia: How Universities Die (2021), an incisive deconstruction predicating the mass-academy despair of 2020-2021 on the extreme over-commitment to neoliberalist free market ethics (and the marketization of “globalized” education). Fleming didn’t invent this argument: Sophia Leonard, Merlyne Cruz, Jay Dolmage, Melanie Yergeau and Shane Neilson have written convincing versions of this argument under the broad umbrella of critical university studies.