Beyond AI Disclosure in Academic Assessment
- Joseph Nockels
- Oct 1
- 5 min read
It’s late at night in a college dormitory. A pale-blue light is cast across the room by a laptop screen. Behind it sits a stressed student drowning in biochemistry notes. If only they had a MacBook Pro, the advert implies, they could condense everything through the power of AI and the click of a button.
I see this advertisement often on YouTube, before watching videos on football tactics or vlogs about how the British high street died. Presumably, the AI application works using a sophisticated Text Simplification (TS) algorithm, but that’s beside the point for this blog post.
Instead, I want to explore how such convenience is complicating humanities educators’ assessment of student assignments, with these tools increasingly marketed to students and seen to bring workplace efficiencies (Dell’Acqua et al., 2023). Freeman’s recent report (2025, https://www.hepi.ac.uk/reports/student-generative-ai-survey-2025) on students’ Generative AI use, collecting responses from over 1,000 individuals, suggested that 92% had used such technologies in their studies, with 18% directly inputting generated text into written assignments. This is a known problem area, and one that news media often tout as jeopardising the whole arts and humanities project at universities and as requiring a quick, strong response (https://theconversation.com/our-new-study-found-ai-is-wreaking-havoc-on-uni-assessments-heres-how-we-should-respond-264787).
More locally, the way such technologies are marketed plays into students’ fears of being left behind their peers. Tools for generating notes, proof-reading and ideation appear as a cudgel against the flow of complex, dense and varied information sometimes taught to students week on week. Simultaneously, there are very real opportunities for universities to use AI as a way of reconsidering traditional pedagogical approaches, with top-down methods for syllabus design, tutorials and practice-based learning coming under strain (Guo, 2024: 4). At the Digital Humanities Institute, we are increasingly exploring such newfound approaches: lab-based experience of using AI on cultural heritage data, Wikipedia modules that lend themselves to semi-automatic approaches, and phraseology around ‘doing more with less’ in our recruitment. This all relies on interpreting the University’s policy on AI in a positive manner, as the institution would have us do. However, one pedagogical area continues to appear a bad fit for this approach and wider stance. Dreaded assessment.
AI and assessment, a natural clash?
Guo’s (2024) sense that Generative AI will remove traditional top-down education is far from being implemented across entire courses, especially within the humanities. Methods for assessment, besides peer-to-peer models, appear more obviously intractable. Inherent to the task of marking student work against a set of criteria is that somebody, the marker, has a certain level of expertise and can offer insight on more foundational work. At Sheffield, we mark on a positive basis by rewarding inference and solid argumentation, instead of approaching work with a red pen and looking for pitfalls. Of course, marking is also subjective, and positive marking is sometimes easier to preach than practise, especially when decaffeinated, marking between tasks or, indeed, uncovering AI misuse.

I have seen departments advising students not to use GenAI tools at all, as these continue to produce overly descriptive outputs that are noticeable to assessors, resulting in a worse grade against marking criteria that reward the synthesis of ideas, nuance and cultural sensitivity when dealing with complex topics. This hard stance runs up against LinkedIn posts from salespeople peddling AI solutions for students, as well as broad AI strategies within universities. Though individual educators may be digitally minded and optimistic about AI’s affordances for education, when it comes to assessment there remains a lack of direction around how to cite models and training data, as well as how to spot and treat AI interventions against criteria established for more traditional work. The result is a positioning of AI disclosure as central to assessment.
AI Disclosure v Exposure
This is not a blog post suggesting that students are in breach of academic conduct or using AI in malicious ways. Personally, I find it interesting to understand how my students develop their own AI workflows and why. Anecdotally, I’ve noticed AI use stemming from a lack of language proficiency, with students using GenAI to polish an initial draft which they then translate into English. This results in references being dropped and key terminology being misapprehended. In this case, the affordances of AI are clear, especially for students who are less confident in the language of assessment. In other cases, GenAI tools are used to generate wireframes and 3D model images for practical illustrations, where we encourage the prompts used to be included as part of the assessment. In Digital Humanities journal articles, this is increasingly standard practice, with prompt engineering seen as a reasonable method, especially in Human-in-the-Loop approaches that require fine-tuning in an interpretive manner.
So, what happens if students disclose their AI use? And is disclosure enough in our current moment of AI ubiquity?
Schilke and Reimann (2025) highlight, through experimental scenarios and social evaluations, that the consequences of AI disclosure are not simple. Disclosing AI usage, whether in business settings or as a student, compromises trust in almost all cases. However, this trust is further damaged if a student’s AI use is exposed by a different source. In our case, this happens when a marker reads overly descriptive prose, uncovers prompts accidentally left in work (yes, I have seen this), or spots the utm tracking extension that ChatGPT appends to hyperlinks in bibliographical references, alongside the interface font being carried over into local documents. Again, in most student cases this AI use is not without an underlying reason; however, when spotted, students lose control over any legitimacy narrative. Of course, they fear this, although Freeman (2025) suggests that the fear is less pronounced among male and wealthier students. AI disclosure is therefore seen as a moral requirement, especially with high-profile cases of manipulation appearing in headlines (Zetwerk, 2024). Including AI disclosure statements in assessments mirrors this attitude and forms a foundation for ensuring trust remains central. Trust is, after all, essential for academic assessment, for student and educator relationships, and for the broader functioning of university institutions.
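As an aside, the hyperlink giveaway is easy to illustrate. The sketch below is a hypothetical Python example (with invented reference strings) that simply scans a bibliography for the utm_source=chatgpt.com parameter that ChatGPT typically appends to links it supplies; the point is less about policing students than about how visible these traces are to markers.

```python
import re

# Minimal, hypothetical sketch: flag references whose URLs carry the
# tracking parameter that ChatGPT typically appends to links it provides.
CHATGPT_UTM = re.compile(r"utm_source=chatgpt\.com", re.IGNORECASE)

# Invented example references, for illustration only.
references = [
    "Smith, J. (2023). Digital archives today. https://example.org/archives",
    "Jones, A. (2024). AI and heritage. https://example.org/ai?utm_source=chatgpt.com",
]

for ref in references:
    if CHATGPT_UTM.search(ref):
        print("Possible ChatGPT-sourced link:", ref)
```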
Moving Beyond Disclosure
Assessment criteria need to move beyond simple disclosure and account for AI limitations and affordances on an assessment-by-assessment basis. The same study from Schilke and Reimann (2025) suggested that the moral penalty given to those who disclose AI use is softened, but not eliminated, when evaluators have a knowledge of AI systems, including what they can and cannot do. Following this, the issue of assessing AI in student work appears to be one of interpretation and of developing critical workplace attitudes toward technology. This sits alongside looking to industry partners, through situated learning, as a way of clarifying how the sectors students may eventually work in are approaching such issues. In my own digital cultural heritage research, the Library of Congress’s AI Planning Framework (https://libraryofcongress.github.io/labs-ai-framework/) offers an assessment of whether using such technology is legitimate, responsible and aligned to sectoral values. It encourages staff to understand the broad capability of tools, experiment with them in controlled settings and, finally, implement them for particular time-bound tasks. This could prove useful for establishing more interpretive student assessments, potentially even framing written assignments around such themes. This will depend on the subject, course and module, but may enable assessment criteria to step beyond disclosure and, in the process, avoid increased student exposure.
No AI use was undertaken in writing this blog post. I swear!
Bibliography
Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Management Unit Working Paper (24-013).
Guo, Q. (2024). Prompting Change: ChatGPT's Impact on Digital Humanities Pedagogy - A Case Study in Art History. International Journal of Arts and Humanities Computing. 18(1): 58-78. 10.3366/ijhac.2024.0321
Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes. 188: 1-16. 10.1016/j.obhdp.2025.104405
Zetwerk. (2024). Should businesses disclose their AI usage? https://www.zetwerk.com/ai-disclosure-in-business/.