by Robert Cole, Instructional Developer, Reinert Center
As we all grow busier, generative AI may come to mind for some instructors as a way to ease the workload, while others are unsure whether they want to outsource their work in this way. There is an ethical balance between using these tools to make our workday more efficient and offloading the work we are meant to do ourselves. In the absence of formal policies governing our use of large language models, it is up to us to govern ourselves thoughtfully and intentionally.
The following ideas may help you think through what the human component of using generative AI in your work might look like. We would suggest:
- Intentionally determine whether generative AI would be beneficial in your discipline, and how. Being deliberate in how you use generative AI, and communicating that rationale to your students, not only gives you an important work tool but also offers students a model of your decision-making about its role in your work, and potentially in theirs as well.
- Be transparent about any generative AI use in your work. Students should be privy to any AI use, and colleagues and others should be informed as well whenever that use extends beyond your own private work. Communicating how you use the tool and attributing its output is not only transparent but also demonstrates good ethical practice.
- Review, revise, and enhance generative AI output. Hallucinations aside, there are many reasons to work over any output an AI tool provides. While that output has gotten better over time, and will likely continue to do so, it will still lack the voice of the author (you). Reviewing gives you the chance to verify accuracy, and revising and enhancing let you make the output your own, shaping it in a way only you could.
On the other hand, there are things you may want to avoid. We would not encourage:
- Allowing a generative AI tool to usurp your own thoughts and thinking processes. While some practices may streamline your workflow and let you tackle certain tasks more quickly, asking AI to produce work in your place mirrors exactly what concerns us when students let AI do their work for them. Adherence to academic and personal integrity is a lifetime commitment.
- Using generative AI to grade student work. No one knows your students better than you do. Those of us who grade individual student work (not all of us do) become acquainted with each student's writing and notice when it is particularly strong or uncharacteristic. Generative AI cannot do that within a teaching relationship, which is, after all, what we are asked to sustain. Moreover, there are ethical questions about contributing a student's intellectual property to the training corpus of a generative AI model without their knowledge or permission.
- Using AI detectors to police students. Perhaps the first reason we do not suggest these applications is that they do not, and cannot, detect. They simply compute a probability based on various factors, which differ by application. So if a student uses a particular vocabulary or writes in a specific way, they may be falsely flagged as having used generative AI. A good way to test this is to paste five or six paragraphs of your own published work into a detector and see what comes back. I did this with my dissertation, written well before ChatGPT could do what it does now, and the detector reported a 60% probability that most of it was generated with AI.
We are all still navigating this novel way to work. New tools aren't really new: word processors took the place of typewriters, calculators took the place of slide rules, and technology will continue to advance. How we choose to employ these tools and integrate them into our work matters for our relationship with our work, our colleagues, and our students.