by Robert Cole, Program Director, Reinert Center
AI slop is a less-than-endearing term for the low-quality, derivative, hastily generated output that generative AI produces from the prompts we provide. If you have dabbled in what generative AI can produce, you may well agree that its output, left unedited, is rarely of good quality. These features often raise the suspicion that a student did not complete their own work: the text is generic, occasionally incorrect or incomplete on the topic at hand, and often lacks the voice of the presumed author. Even so, some instructors have decided that generative AI has no place in their courses, while others believe there may be a way to use it well, and that in some majors and professions its use may even be expected. If one were inclined to use generative AI, how might one approach it?
One way may be to become more fluent in using these tools as collaborators. As with word processors, calculators, and the internet, it will take time for students and instructors alike to gain proficiency, or fluency, in the use of generative AI. One model promoting generative AI fluency that I have seen recently, the 4D Fluency Framework, resonated with me, primarily because it makes the human mind an integral part of the process. The framework was developed as part of a course by Anthropic, the company that created Claude. Claude, like ChatGPT, is a generative AI built on a Large Language Model (LLM). The 4D Fluency Framework comprises four elements: Delegating, Describing, Discerning, and being Diligent. Each of these four D's has a human component.
In Delegating, the user thinks critically about which parts of a task could and should be performed by generative AI and which parts remain the human's responsibility. What are your goals? Based on those goals, would one LLM generate better results than another? Which one?
Once you are satisfied with what you want the LLM to supply, the next step is Describing. You will need to describe to the LLM what you want generated. This is akin to prompt writing, but you might also specify the order in which you want things addressed, the process you want applied, the context you are working in, and the form the finished product should take.
Perhaps the most important human element in the framework comes in the Discerning step, in which the user critically evaluates the LLM's output for both accuracy and quality. Is the generated material useful in the sense it was intended to be? If facts are presented, are they correct? What does the user need to add to give the output his or her own voice? If writing an argument, do the facts presented support the argument you are trying to make? Do additional facts need to be added, or are there extraneous facts that should be deleted? Are data summarized and presented correctly? Only the author can answer these and many other such questions, because only the author knows the intention of the text and how it should be communicated.
Finally, being Diligent is the ethical check. Is this an appropriate use of this tool? Are you using it to further your thinking or to offload your work? Are you being responsible in using it? Are you comfortable taking ownership of this work while being transparent that you used an LLM to help accomplish it? Again, these questions can only be answered by the human author.
Practicing these steps can help us internalize the process of thinking through our use of LLMs in our work. Applying all four fluency elements may help us use them more responsibly. On the other hand, it may raise another question altogether, one that is particularly important for our students: Might this use be detrimental to student learning? Will the act of using this tool incur cognitive debt? We may be learning more about that.
In a recent preliminary study (a preprint, not yet peer reviewed), “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” (Kosmyna et al., 2025), participants exhibited more brain (neural) activity when using generative AI as a partner, in the collaborative manner described above, than when simply offloading the work or not using an LLM at all. It should be said that this study is far from perfect: the n is quite small, and in my opinion there are some design issues as well. Still, the ideas presented helped me see the connection between using generative AI intentionally and the importance of human involvement to the quality of the output LLMs generate. Participants’ brains were more active, and the output was better. Much more study will be needed to determine how helpful, if at all, or how undermining generative AI may be for student learning. But perhaps if our students become more fluent in the 4D sense, human ideation, creativity, and innovation will remain at the heart of their work.
If you would like to talk with someone about how generative AI may be incorporated for an assignment, you can request a teaching consultation using this link.
Anthropic. (2025). AI fluency: Framework and foundations. https://www.anthropic.com/learn/claude-for-you
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., … & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.