Patrick Devey, April 16, 2026

Ontario’s Colleges and Universities Have an AI Problem. It’s Not the One You Think.

Dr. Patrick Devey
President & CEO, Panoralex Group

Ontario’s colleges and universities do not have an AI cheating problem. They have an AI attention problem.

Nearly every AI conversation in Ontario’s postsecondary sector still begins and ends with academic integrity. Within minutes, the discussion turns to plagiarism detection, policy gaps, and whether students are using generative AI to complete assignments. That concern is valid. However, it has become the sector’s default frame, and it is quietly costing institutions time, capacity, and credibility.

This framing matters because Ontario’s colleges and universities are operating under extraordinary pressure. Ontario’s public colleges have already cut approximately $1.8 billion, suspended over 600 programmes, and eliminated more than 8,000 positions in response to recent funding and enrolment shocks (CBC News, 2025). Universities are projecting a system‑wide funding gap of $1.3 billion by 2028–29 (Council of Ontario Universities [COU], 2026). Tuition levels remain well below what the province’s own Blue‑Ribbon Panel identified as necessary for long‑term financial sustainability (Ministry of Colleges and Universities [MCU], 2023). These structural pressures were building for years. Federal changes to study permit allocation and post‑graduation work permit eligibility further accelerated an already fragile trajectory (ICEF Monitor, 2024).

And so here we are, with arguably the most powerful set of tools ever available to enhance productivity, accelerate decision‑making, and improve operational efficiency across the institution, yet the sector remains fixated on plagiarism.

The evidence is not subtle. The 2025 EDUCAUSE AI Landscape Study found that 74 percent of respondents identified academic integrity as the teaching and learning area most impacted by AI, ahead of coursework, assessment practices, and curriculum design (EDUCAUSE, 2025). Governance readiness remains uneven. Only 39 percent of institutions reported having AI‑related acceptable use policies in place, and just 9 percent believed their privacy and cybersecurity policies were adequate to address AI risks (EDUCAUSE, 2025).

This focus contrasts sharply with how AI is actually being used. EDUCAUSE’s research on AI and work found that 94 percent of higher‑education employees had used AI tools in the previous six months, and 67 percent identified five or more operational opportunities as “most promising,” including automating repetitive processes, offloading administrative burdens, and analysing large datasets (EDUCAUSE, 2026). In short, institutional attention remains on policing, while staff are already experimenting with AI in their daily work, whether they have “permission” to do so or not.

Where, then, has the sector made its most deliberate, stand‑alone AI investment? Plagiarism detection. Turnitin alone licenses its software to more than 16,000 institutions worldwide, and the global anti‑plagiarism market is projected to reach approximately $2 billion by 2027 (Fortune Business Insights, 2020). Many institutions have also adopted AI‑enabled remote proctoring tools. These are not trivial expenditures, and they are far from perfect instruments.

The result is a striking imbalance. The sector’s most visible AI spending is directed toward enforcement, while far more consequential AI capabilities have arrived quietly, already embedded in enterprise software institutions were paying for anyway.

This is where the sector’s real AI problem emerges. We are treating AI primarily as a governance and policing risk, rather than as an operational capability that could help institutions survive and adapt.

The AI Capacity Ontario Institutions Already Have

Most Ontario institutions do not need to “buy AI.” In many cases, AI capabilities are already embedded in the systems they license and rely on every day.

The question for senior administrators is no longer whether AI exists in the institutional technology stack. It is whether these capabilities are enabled, governed sensibly, and applied to priority workflows, or left dormant.

The Governance Hurdle, and Why It Does Not Require Perfection

This is where many institutions are currently stalling. AI adoption gets stuck behind a familiar refrain: “We cannot turn this on until we have a policy.”

Governance does matter. Privacy, academic integrity, and cybersecurity all deserve serious attention. However, there is a sequencing problem embedded in the “policy‑first” approach. Policy should not be driving the business. Business needs, operational pressures, service gaps, and workforce constraints should drive the path forward, with governance enabling that movement responsibly.

Institutions that get this right identify where AI can deliver operational value, learn from real use, and then build governance around what they discover. Waiting for a comprehensive policy framework before doing anything at all reverses that order. In a sector under the level of pressure Ontario is experiencing, the cost of that reversal compounds every month.

This does not mean operating without structure. It means recognising a practical middle ground between a fully developed AI policy and doing nothing. That middle ground is guardrails: lightweight, operational controls that allow institutions to experiment safely while broader policy work continues in parallel. Guardrails are not a substitute for policy. They are the bridge that allows institutions to move forward responsibly while governance catches up.

The distinction matters, because policy and guardrails serve different purposes.

What Safe AI Experimentation Actually Looks Like

For institutional leaders, enabling AI does not mean opening the floodgates. It means being deliberate about where and how experimentation occurs. A practical guardrail‑based approach typically includes an approved set of tools the institution already licenses, clear boundaries around what data may and may not be entered into them, designated areas for early experimentation, and a simple mechanism for capturing and sharing what is learned.

This is not governance theatre. It is about giving staff permission to use the tools institutions already own, within clear and sensible boundaries, and learning how to use them to be more productive, do more with less, and identify opportunities for meaningful impact.

The Bigger Unlock

For many institutions, thinking strategically can feel like a luxury when facing multi‑million‑dollar deficits. Survival mode is real. But survival mode is also precisely when institutions cannot afford to ignore tools that might provide relief.

If AI can reduce operational drag, improve decision quality, and help institutions do more with fewer resources, then deferring that investigation is not cautious; it is costly.

Five areas stand out where AI can deliver practical, near‑term value, using capabilities that already exist in the institutional technology stack: scheduling and timetabling, student retention analytics, student‑facing services and communications, enrolment and financial forecasting, and workforce and programme planning.

Each of these areas can deliver value independently. An AI‑assisted scheduling tool can save the registrar’s office time by optimising class schedules, timetables, and teaching assignments. Predictive retention analytics can flag at‑risk students earlier, when intervention still matters. Chatbots can reduce call, email, and ticketing volume without diminishing service quality. These are worthwhile gains, and they represent exactly the kind of controlled experimentation guardrails are designed to enable.

The real transformation, however, occurs when these efforts are connected. When enrolment data feeds financial modelling, which informs workforce planning, which shapes programme development, and in turn influences recruitment and marketing, leadership gains a coherent, institution‑wide view rather than a collection of departmental reports and committee briefings. Decisions become more deliberate, better informed, and grounded in a shared understanding of how different parts of the institution interact.

This is where AI’s deepest value emerges. Its power lies not in optimising individual functions in isolation, but in serving as connective tissue across systems and portfolios, particularly in institutions where data and processes are fragmented across departments and owned by different vice‑presidential portfolios. When AI helps bridge those silos, the payoff extends beyond operational efficiency to more consistent institutional behaviour: faster response times, more personalised communication, and services that anticipate needs rather than responding after problems surface.

There are already signs that parts of the sector recognise this shift. Grassroots momentum is visible in academic programming, where many faculty have moved beyond the integrity debate and are actively embedding AI into their curricula, recognising that graduates will be expected to use these tools in the workplace. This bottom‑up innovation is worth encouraging. Without institutional coordination, however, it develops unevenly across departments and disciplines, and the opportunity to translate that momentum into a durable strategic advantage is diminished.

The Bottom Line

The attention problem is also an opportunity problem. Every month that the conversation remains fixated on plagiarism and AI policy is a month of operational capacity left unused, efficiencies unrealised, and decisions made without the insight those tools could provide.

The financial pressure is real, and the window for strategic action is narrowing. The institutions that emerge in better shape will not be those that used AI primarily to catch cheaters, but those that used it to work smarter. They will be the ones that put guardrails in place early, enabled controlled experimentation, and learned quickly from real workflows, accumulating small, practical wins that compound into meaningful institutional capacity.

Once those practices are understood and proven, policy can codify them. At that point, AI stops being an abstract debate about risk and compliance and becomes a practical, governed capability embedded in how the institution operates.

If you see AI as more than an academic integrity response and want to explore what it can do for the business side of your institution, I would welcome that conversation.

Dr. Patrick Devey is President and CEO of Panoralex Group, a consulting firm specialising in AI‑enabled process improvement, strategic enrolment management, and workforce development for postsecondary institutions.

References

CBC News. (2025, November 6). Colleges could struggle further with latest lowered cap on international students. https://www.cbc.ca/news/canada/ottawa/colleges-could-struggle-further-with-latest-lowered-cap-on-international-students-9.6968440

Council of Ontario Universities. (2026). Ontario universities call for urgent action as funding gap grows to $1.3 billion. https://ontariosuniversities.ca/news/ontario-universities-call-for-urgent-action-in-the-2026-ontario-budget-as-funding-gap-grows-to-1-3-billion/

EDUCAUSE. (2025). 2025 EDUCAUSE AI landscape study: Into the digital AI divide. https://library.educause.edu/resources/2025/2/2025-educause-ai-landscape-study

EDUCAUSE. (2026). The impact of AI on work in higher education. https://library.educause.edu/resources/2026/1/the-impact-of-ai-on-work-in-higher-education

Fortune Business Insights. (2020). Anti‑plagiarism for the education sector market size, share & COVID‑19 impact analysis, by type, by application, and regional forecast, 2020–2027. https://www.fortunebusinessinsights.com/anti-plagiarism-for-the-education-sector-market-104552

ICEF Monitor. (2024). Canada announces updates for foreign enrolment cap and post‑study work permit rules. https://monitor.icef.com/2024/09/canada-announces-updates-for-foreign-enrolment-cap-and-post-study-work-rules/

Ministry of Colleges and Universities. (2023). Ensuring financial sustainability for Ontario’s postsecondary sector (Blue‑Ribbon Panel report). Government of Ontario. https://www.ontario.ca/page/ensuring-financial-sustainability-ontarios-postsecondary-sector
