
Guidelines for using or developing AI

13 Feb 2025

The University AI Strategy Group, which comprises colleagues from across the University, has developed guiding principles to advise our University community on best practice around using and developing AI.

Chris Taylor, Associate Vice-President and Chair of the University AI Strategy Group, said: “As AI technology rapidly advances and becomes embedded in everyday life, it is imperative for us to consider the potential of AI to enhance teaching and learning, research and processes, whilst maintaining a commitment to responsible and ethical use. 

These guidelines seek to guide our University community, both staff and students, in embracing generative AI and using it effectively at work, whilst also bringing to the surface risks that should be considered. 

They have been produced by the University AI Strategy Group and are a real cross-University effort.” 

The guidelines focus on: 

Principles for Appropriate Use 

This section sets out five core principles that all colleagues and students should familiarise themselves with: 

  • Transparency – Always clearly indicate when and how you have used AI in your work, for example by using a footnote to explain that Microsoft Copilot was used in the preparation of a document, as shown in our guidelines. 
  • Accountability – We are all responsible for the outputs generated by AI. It is important to verify that generated information is accurate and to acknowledge where it has come from. 
  • Competence – Regularly update your AI knowledge and skills through continuous professional development, for example by attending events that introduce AI or taking advantage of courses available on LinkedIn Learning. 
  • Responsibility – Ensure your use of AI tools is ethical, legal and fair by avoiding malicious uses, mitigating biases, protecting personal information, and respecting copyright and intellectual property rights. 
  • Respect – Ensure your use of AI tools respects individuals' privacy, mitigates negative societal impacts and minimises environmental harm. For example, chatbots can be used in qualitative research to interview subjects, but it is important to avoid asking for personal information. It is also important to be aware of environmental costs: a single Copilot query can result in the emission of up to a gram of CO2. 

Applying the Principles  

This section focuses on applying the five core principles in specific contexts, such as: 

  • Establishing which AI tool to use – A variety of tools are available; the University offers a licensed version of Microsoft Copilot that ensures data privacy, and this should be used to prevent inappropriate disclosure of University data. 
  • Teaching and learning – AI tools can enhance teaching, learning, inclusivity and accessibility, but their output must be treated like work from another person: used critically, licensed, cited and acknowledged. The guidelines consider the use of AI in course unit variation, plagiarism, proofreading, detecting malpractice, and access and choice. 
  • Research – AI can power research and innovation when used alongside our guiding principles. The guidelines consider the use of AI in data, publication, reviewing, chatbots, and by students undertaking research. 

What next? 

We will review these guidelines regularly and will inform colleagues and students when a revised version is ready. 

We are keen to hear your views on our institutional approach and on the findings of the AI Report published last July. If you have any questions or feedback, please get in touch via ai.review@manchester.ac.uk. 
