Educators weigh in on ensuring ethical, responsible AI use in schools
Dubai: AI tools like ChatGPT and Microsoft Copilot have made their way into school assignments – a trend that leading educational groups in the UAE term a “natural” development in a world of emerging technology. Once viewed with curiosity, generative AI is now a reality that educators are learning to navigate while balancing innovation, ethics, and student development.
“Yes, students are indeed experimenting with tools like ChatGPT and Copilot,” confirmed Baz Nijjar, Vice President – Education Technology and Digital Innovation, GEMS Education. “This is natural, as they are always curious about new and emerging technology and are often first users.”
At GEMS, digital fluency is being viewed both as an opportunity and a challenge. Working groups have been set up to create policies and guidelines, integrate technology into curricula, and train teachers to identify AI-generated work. The aim is to steer students toward responsible use, while looking into ways to adapt assessments, he said.
James Efford, Elementary Principal at Dubai Schools Al Khawaneej (DSK), part of the Taaleem family, sees a similar trend.
“Yes, especially among our older students, tools like ChatGPT are increasingly being used to support their research and thinking.”
Interestingly, he notes, some multilingual learners are using “AI as a language bridge” to better articulate complex ideas in English or Arabic. “This has empowered more students to engage in rich classroom discussions and deepen their learning.”
At Bloom World Academy, founding principal John Bell has adopted a progressive approach. “AI is here to stay, and our role isn’t to ban it but to guide its use responsibly.”
The school blends traditional assessments like pen-and-paper work with digital tools to preserve academic integrity.
Schools have faced a few cases of overdependence on AI tools. However, rather than penalising students, these institutions have turned such incidents into learning opportunities.
At DSK, Efford noted that instances of students relying too heavily on AI for writing tasks are treated as “learning moments.”
“Teachers now incorporate checkpoints and draft reviews that focus on original thinking and voice,” said Efford.
GEMS’ approach is “educational and restorative,” Nijjar underlined.
“We have prompted task redesigns to require critical thinking, not just AI outputs – it’s a matter of reskilling and understanding that prompt engineering is a new technique students need to master,” he said, noting that from the next academic year, parental engagement and awareness will be increased to ensure there is home support for responsible AI use.
Bell pointed out novel ways of dealing with AI-generated plagiarism.
“AI-generated plagiarism is subtle and evolving, which is why we maintain strong analogue components – verbal presentations and written assignments – so students can’t outsource comprehension.”
For their part, teachers are using AI tools to enhance lesson planning, generate personalised content, and create real-time assessments.
At GEMS, custom AI agents assist in crafting differentiated activities and quizzes. DSK educators are using AI to simulate student misconceptions or create tiered comprehension questions. At Bloom, AI supports both administrative and academic functions – supporting lesson planning, automating student tracking, and assisting with content creation.
“This use of AI doesn’t replace teachers but rather frees them to focus more deeply on pedagogy, creativity, and meaningful interactions with students,” Efford noted.
Integration of AI isn’t limited to classroom use. GEMS is embedding AI literacy across its curricula. Its GEMS School of Research and Innovation, opening next month, will feature AI/XR labs and specialised subjects like AI, robotics, game design and esports.
Bloom World Academy offers an AI-focused accredited course as part of its core curriculum for students aged 14+. This year, even KG2 students will begin AI literacy through a spiral curriculum.
Taaleem schools are embedding AI across subjects – from English classes critiquing AI-generated content to moral education exploring its ethical implications.
Every school leader emphasised the need for a strong ethical foundation. Nijjar noted their strategy focuses on anonymised data governance and asks not just ‘Can we?’ but ‘Should we?’ and ‘How do we do it safely and with positive use cases?’.
Bell’s team teaches students about fairness and bias through mock trials and critical thinking exercises. Efford’s school promotes a human-centred approach to AI, building a generation that knows how to question, verify, and use technology responsibly.
Educators agree AI offers unparalleled opportunities – personalised learning, reduced workload, and deeper engagement – but concerns persist with fast-paced development, potential for misuse, and disparities in access. Looking ahead, they envision AI becoming a foundational part of education and reshaping it, while schools strive to ensure it’s for the better.