Why does this matter to me / higher education?
“Organisations that are trying to block people, or who say they are not ready are going to find their staff are doing it anyway – but without the governance, ethics, security and intellectual property controls they might have had if they’d ‘allowed’ them”
(Jordan 2023)
the challenge of feedback and marking meets the opportunities and challenges of generative AI
[Academics] talk … very positively about the actual work, even just in the critique of the work, ‘it was okay, but this needs … ’ and it’s full of passion and drive. That all disappears when it comes to sitting at the computer and writing up the report. And then it becomes, you know … for want of a better phrase, ‘these bloody reports’!
(Senior Leader Interview, Arts)
Winstone, N. E. and Boud, D. (2020) ‘The need to disentangle assessment and feedback in higher education’, Studies in Higher Education. DOI: 10.1080/03075079.2020.1779687
my personal starting point
As I reached for another coffee before marking assignment number 15 of 20, my partner’s parting comment as he left the house for a dog walk was “Why don’t you get ChatGPT to do the feedback for you?”
“Yeah right… I don’t think it works like that,” I eye-rolled into the empty space.
“Does it?”
Maybe it does – I copied and pasted a block of text and asked a question (which I now know is called a prompt, and there’s a whole emerging career path in being a ChatGPT ‘prompt engineer’).
ME to AI: can you mark this assignment……???
MY three starting points.
Something to do with AI
Something to do with assessment / marking
Something that connects to social justice
WHY?
A collision of the launch of ChatGPT, the marking strikes by university staff, and the impact of the Inclusive Practices unit on my own thinking led me to reflect on how AI might change the way we assess and mark student work in the future, and whether it has the potential to bring a more equitable approach to marking.
Could AI bring rigour, fairness, and a more efficient way of providing feedback on student work, ironing out the inconsistencies sometimes brought about by workload pressures or the personal biases of staff?
I also felt the predominant narrative from universities was about the dangers of AI, something to be ‘managed’ to avoid problems. This was sometimes balanced by acknowledging the opportunities it brings, but with an undercurrent of slight panic from institutions at the opportunity for ‘cheating’ by students. There appeared to be less discussion of how staff may or may not engage with it, and here I thought there might be an opportunity to learn something to inform our discussions around the use of AI.
At the same time as I was pondering all of this, I coincidentally (or probably not, based on my internet search history) received an invitation to a symposium on the future of AI in Higher Education, which seemed somewhat serendipitous… or was it AI in action?
Social Justice context
I was slightly unclear how the social justice aspects of the work we have been doing on the course connected to my topic, but a conversation in one of our tutorials, combined with some reading around consistency, bias, and fairness in assessment and marking, helped to clarify this.
I had to be cautious not to disappear down a rabbit hole on the ethics of AI, as there is so much to be learned and understood; however, that was not within the scope of this ARP. Nevertheless, I felt it was important to understand some of the arguments, and the following resources were incredibly helpful in giving me direction and inspiration:
Ethics of AI
An article by M. Healy was fascinating to me: he discusses the fundamental issues around the original design approach for Generative AI and argues that the use of labour from developing countries such as India and Kenya amounts to a form of ‘digital colonialism’.
“Writing in The Guardian, Niamh Rowe (2023) notes that through a subcontractor, OpenAI paid Kenyan moderators $1.46 to $3.74 an hour to review texts and images “depicting graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest” leaving them with “serious trauma.” The colonial implications and comparisons of a San Francisco tech company offshoring damaging work to low-paid African moderators are striking”
This created some conflict for me in exploring this topic. Reflecting on the Inclusive Practices unit, I questioned whether we should be utilising ChatGPT at all: how will Higher Education take a position not only on the inherent challenges of this technology, but also address the issues around its original design and development?
The Student Context
In the article ‘A Pedagogy of Social Justice Education: social identity, theory and intersectionality’, Tapper argues that ‘students’ identities need to be considered in all educational settings’; this led me to consider what role Generative AI may have in bringing about fairer assessment processes.
The Times Educational Supplement article discusses the potential AI has to improve the relationship students have with assessment feedback.
“This is especially pertinent for students from widening participation backgrounds, or mature students entering higher education for the first time. Many non-traditional students have not just had complicated relationships with education; they may also have had complicated relationships with educators”
It also discusses whether the workload issues around feedback generation are ones of tutor capacity and capability, which might be ‘fixed’ by training or recruitment, or whether they are issues of pedagogy, necessitating a move away from summative assessments that demand heavy, time-consuming marking processes.
more context
AI, ChatGPT, Midjourney – we can’t go anywhere at the moment without encountering the topic of AI and its role in our world, our media, our decision-making, and crucially the world of education. Some institutions are tackling it; some have their metaphorical fingers in their ears and are keeping it in the ‘too hard’ box.
The recent industrial action and marking boycott, the issues of fairness and inclusivity from the Inclusive Practices unit, plus hours of marking text-based assignments over the last few months of the academic year, led me to muse on the role of AI in teaching design and delivery, including the topic of assessment.
To manage the scope of this ARP I’ve decided to focus on marking, and specifically on the use of ChatGPT to generate feedback and inform marks for text-based assignments.
I’m interested in the functional use case, but also in the more emotional aspects of engaging with AI.
Research Questions
Starting point
How might ChatGPT be used for feedback on text-based assignments on the MADM and MAAD courses?
Revised question
How might ChatGPT (an LLM) work as a tool to support the generation of feedback for text-based assignments for Master’s-level students at UAL?
Taking Stock
Topic is chosen
The scope is a little unclear… needs more refinement

Participants are half-identified: 5–6 teaching staff across two MA courses. I’m working to keep the scope manageable in the context of my new teaching role, and of marking final major projects in December and over the break.
Students are the primary stakeholders in terms of the impact on their studies; however, I feel that focusing on the feelings and attitudes of teaching staff would be an appropriate starting point, as marking has been a significant issue in the recent industrial action.
I’m going to do an observation and interview, comparing previously marked work with ChatGPT’s output: participants will put their own prompt in first, then be given a prompt to use, with a semi-structured discussion during and after.
Radar chart to identify associated words – cheating, easier, relief, time ??
What do I need to do next?
Write a brief for participants
Confirm participants
Identify work / prompts
Ethics consideration: remarking of work for both students and teaching staff
AI Conversations: A series of discussions surrounding the use and future of AI in Higher Education
As part of my exploratory work, I attended a series of discussions organised by UAL to debate various issues around the topic of AI. I created mind-map notes on each discussion and highlighted how each might either (1) add to my knowledge of AI or (2) inform the design of my research.
Conversation 1
How are university students using AI?
Sue Atwell from JISC
Summary: an outline of research that explored five questions with students around their use of AI
Application to my ARP: highlighted in yellow
Could the questions they used be helpful for me and transferable if they are doing peer-reviewed work?
The outcomes indicated a need to shift attention to teaching staff: their use of AI, their capability with the tools, and their understanding of how the tools are used.