Anglia Ruskin Research Online (ARRO)

The Use Of Large Language Models In Assessments: A Study On The Use Of The ‘AI As Mentor’ Approach In Undergraduate Law Assessments

Conference contribution
Posted on 2025-05-07, 10:14, authored by Pauline Hall, Dan Burdge, Sohini Alg

This paper presents a study on the use of Artificial Intelligence (AI) in legal education. Through the study, the paper examines the impact of the ‘AI as Mentor’ approach on students’ ability to critically analyse, and on their confidence in this skill. The paper reviews the principles for the use of AI set out by Dr Ethan Mollick and Dr Lilach Mollick in their paper ‘Assigning AI: Seven Approaches for Students with Prompts’, tailoring them to a law-specific educational setting.


The paper presents the results of a study carried out with undergraduate law students at level 5 (second year of a three-year degree) in their Equity Trusts & Succession module, following a trial with students at level 6 in two of their modules. As part of their formative and summative assessment, students have been asked to complete an ‘essay writing log’, one element of which involves the use of Large Language Models such as ChatGPT or Gemini (‘the LLM’). In this study, students use the LLM in a limited and controlled way: they are given an initial instructive prompt to start their interaction with the LLM, asked to request feedback on their initial essay plan from the LLM, and then asked to consider that feedback critically. The students are then asked to explain which elements of the feedback they have decided to act on, and what steps this has prompted them to take. They are also encouraged to explain which elements of the feedback they have decided not to act on, and why. It is hoped that this will help students to develop the planning stage of their essay writing, as well as encourage them to reflect critically on their own work and defend their position. The aim is that this will foster active learning and self-reflection as students compare the LLM’s feedback with their own understanding.


Data will be collected from students by way of an evaluative survey, carried out after the end-of-module assessment. The module is still running at the time of writing this abstract, so results are currently unavailable.


Through the study, this paper explores the potential of LLMs to enhance legal education. It acknowledges the ever-changing legal landscape and the need for law students to develop strong research and analytical skills, and it identifies a prevalent struggle among students with the critical analysis and evaluation of legal topics.


The use of AI in Higher Education is a current and exciting topic for debate, with each institution taking a different stance on its use. This paper explores and examines how AI might add value to students studying Law in Higher Education. It hypothesises that the approach will develop and improve student performance and understanding of critical analysis, as well as foster an understanding of the art of prompt engineering.


The paper also highlights the ethical considerations of using LLMs and the importance of students developing a critical lens through which to assess AI outputs. It is hoped that, moving forward, LLMs can be effectively incorporated into education so that students fully understand both the benefits of the tools available and the limitations of the software, preparing them for a world of work in which these tools are implemented.

History

Refereed

  • Yes

Volume

1

Page range

5990-5996

ISSN

2340-1117

Publisher

IATED

Conference proceeding

EDULEARN Proceedings

Name of event

16th International Conference on Education and New Learning Technologies

Event start date

2024-07-01

Event finish date

2024-07-02

Affiliated with

  • School of Economics, Finance and Law Outputs
