Category: Learning and Teaching
I've recently become aware that GCU is looking at procuring an AI tool for students and staff, with OpenAI's GPT and Meta's Llama currently on the table. I'm all for bringing AI into the university (honestly, it's overdue), but I worry we're moving too fast without asking the right questions.
Full disclosure: I'm a third-year Data Science and AI student at GCU, so I do have skin in this game. My main point, though, is about process. We need to make sure whatever tool gets chosen actually works for students, not just whoever pitched it first. This isn't a pro- or anti-AI stance; it's a request that GCU evaluate properly before committing thousands of us to one option.
No AI tool is perfect. They all hallucinate at times, and they all raise data-protection questions that need answering in an academic setting. The UK Government's own guidance on AI procurement says public bodies should use multidisciplinary teams, publish their evaluation criteria, independently test systems, and build privacy checks in from day one. A university should easily meet, and arguably exceed, that standard.
I've also talked this through with some committee members from my societies who have strong concerns about AI in education, and their points deserve to be heard. Recent work on AI in schools and universities has shown that over‑reliance on AI can undermine critical thinking and content knowledge, with some students admitting that using these tools makes it "easy, you don't need to use your brain". There are concerns that AI could widen gaps between students if some have access to more capable systems than others. Reports in 2026 even argue that, especially for younger learners, the risks currently outweigh the benefits and that assessment has to be redesigned rather than just dropped into an AI world. That is exactly why we need a careful process, not a rushed decision.
OpenAI's track record raises particular questions for a UK university. In 2023, a GDPR complaint accused the company of processing personal data without proper transparency or consent, describing its behaviour as "untrustworthy, dishonest, and perhaps unconscientious" when handling data subject requests. Italy's data protection authority later imposed a €15 million fine for GDPR violations, including failures around data minimisation and privacy information, and Poland's regulator has opened an investigation into OpenAI over similar concerns. On the academic side, research has found that ChatGPT's recall rate for relevant papers can be as low as 11.9%, and hallucinated references are a well-documented problem. For students submitting assessed work, such failures are not harmless; they create real legal and academic-integrity risks.
Llama's openly published weights do offer transparency advantages, but that doesn't automatically make it ready for university-wide deployment. Whichever Llama variant is proposed should be independently validated against realistic academic tasks, with proper testing of hallucination rates, reasoning quality, and bias, in line with the due diligence the UK Government recommends for AI procurement. At the moment, I don't see evidence that this has been done in a way students can scrutinise.
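To make "independent validation" concrete: even a small script can measure one slice of it, such as whether the references a model cites actually exist. The sketch below is purely my own illustration, not an existing GCU pipeline; it assumes Python's requests library, the public Crossref API, and a list of DOIs already extracted from a model's answers. The example DOIs are placeholders.

    # Minimal sketch of a reference-accuracy check for model output.
    # Assumes DOIs have already been extracted from a model's answers
    # to realistic academic prompts; the example DOIs are placeholders.
    import requests

    def doi_resolves(doi: str) -> bool:
        # Crossref returns HTTP 200 for indexed works, 404 otherwise.
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    def reference_accuracy(dois: list[str]) -> float:
        # Fraction of model-cited DOIs that point at real, indexed works.
        return sum(doi_resolves(d) for d in dois) / len(dois) if dois else 0.0

    cited = ["10.1000/example.fabricated", "10.1038/s41586-020-2649-2"]
    print(f"Reference accuracy: {reference_accuracy(cited):.0%}")

A real evaluation would obviously need a far larger, subject-representative task set and human review of reasoning quality, but this kind of basic, repeatable testing is what independent validation should mean in practice.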
Instead of locking ourselves into one or two tools straight away, GCU should define clear evaluation criteria first. Those criteria should include ethical alignment and governance (for example, whether the provider has published safety frameworks and has a track record with public-sector deployments), academic suitability (independent testing of hallucinations, reference accuracy, and reasoning), UK GDPR compliance, transparency about training data and known limitations, and how well the tool integrates with existing GCU systems and policies. Tools already used by governments and public bodies, such as Claude in the GOV.UK assistant, or systems built around cited sources, like Perplexity, deserve to be assessed under the same standards as GPT and Llama, neither automatically preferred nor dismissed. All of them have limitations, including hallucinations and occasional citation errors, which is precisely why they all need to go through the same rigorous process.
Access to any AI tool should be granted only after proper training has been completed. That training has to cover academic integrity under GCU policies and how to spot poor-quality outputs. In a university context, hallucinations aren't minor annoyances; they can turn into incorrect claims and fabricated references that expose students and staff to misconduct accusations, and potentially to the loss of a degree or professional consequences. People need clear boundaries between appropriate assistance and misuse.
Ethical and environmental issues need to be part of that conversation as well. Large AI models consume significant energy to run, and casual, unnecessary queries add to a growing carbon footprint in cases where human thinking or existing tools would have done the job. Responsible use is not just about what the models can do, but about when it is right to use them and when it is better to rely on your own judgement, existing library resources, or traditional teaching.
The training should also cover which types of data should never be entered into third‑party systems (for example, sensitive personal data, confidential research information, or anything subject to professional confidentiality), along with subject‑specific guidance on where AI is appropriate and where it is not. This needs to be a real, mandatory programme that everyone completes before getting credentials, not an optional slide deck that most people skip.
There also has to be proper governance so decisions about access and usage policies reflect the wider student body, not just a few individuals. Oversight should sit with a cross-functional committee that includes academic staff, IT, the university's data protection officer, and elected student representatives, with no single group holding a majority. The committee's decisions on access rules, approved use cases, and any changes to the tool's scope should be published openly for all students and staff to read. Anyone involved in procurement or governance should be required to declare any personal, financial, or professional interests in the tools under consideration, and, where a conflict exists, they should not have a deciding vote. There should also be an annual public review of how the tool is affecting teaching, learning, equity, and wellbeing, with student feedback at its core.
So what am I actually asking for?
First, GCU should pause any final procurement decision until a full ethical, academic, and UK-GDPR review of all candidate tools has been carried out by a multidisciplinary group, following the steps set out in national AI procurement guidance. Second, the university should publish its evaluation criteria in advance, including how it will assess bias, data provenance, hallucination rates, and compliance, and then apply those criteria consistently to all tools, not just the ones already in discussion. Third, meaningful, mandatory AI literacy training should be a prerequisite for access for both staff and students. Fourth, student representatives (not just FTOs) should have a guaranteed seat at the table in the evaluation and governance process. Fifth, transparent governance structures and conflict-of-interest declarations should be put in place so that no individual or small group can shape AI policy primarily in their own interests.
GCU has a real opportunity to set a standard for responsible AI in higher education. I'm not asking us to reject AI outright. I'm asking us to choose carefully, with our eyes open, and to build the structures around it that protect students, staff, academic integrity, and the environment.
For anyone who wants to check the claims made here, I've included some references:
TechCrunch (2023), "ChatGPT-maker OpenAI accused of string of data protection breaches in GDPR complaint".
ComplyDog (2025), "OpenAI's €15 Million GDPR Fine: What It Means for AI".
Reuters (2023), "Poland investigates OpenAI over privacy concerns".
PMC/NCBI (2024), "Hallucination Rates and Reference Accuracy of ChatGPT".
GOV.UK, "Guidelines for AI procurement".
Anthropic (2026), "Anthropic partners with the UK Government to bring AI assistance to GOV.UK".
Jenni.ai (2026), "Can Perplexity Be Used for Academic Research?".
NPR and related education reporting on the risks of AI in schools and cognitive offloading.
ISC Research, "Balancing risk and opportunity with AI".
I want this idea to be properly debated in Student Voice so that a wide range of student opinions, including those who are more sceptical about AI, can shape what GCU does next, rather than this being just my view.
Comments
Current students, prospective students, and staff deserve a formal framework for addressing this novel and experimental technology. All affected parties need to share an accurate and honest understanding of these tools, their capabilities, and the consequences of their use before they are implemented. The University must not neglect its duty to act in the best interest of those who have invested their time and money.