To use or not to use? That is the question about gen AI
In June, I attended the Consortium on Graduate Communication Summer Institute, where I took part in several sessions on the implications and possibilities of generative AI for graduate writing. My colleagues and I agree that we’re still in the middle of a massive experiment with this technology. But thanks to the hard work of many graduate writing professionals, some guidelines for gen AI use in graduate writing are starting to come together. That gives me some hope. On the other hand, our current ideas about how to mitigate the potential harms of gen AI in graduate education seem to revolve around students refraining from use in certain situations, and I fear that may be a losing battle. Read on for more about these good and not-so-good takeaways from the current conversation.
Maybe gen AI is just going to fizzle out
One of our keynote speakers, Nigel Caplan, was adamant that gen AI is NOT revolutionizing writing. He argued persuasively that companies have hyped the technology way beyond what it can actually do. And I have to say, as I’ve experimented with it and talked with many, many other people about their experiences with it, I sense he’s right. Of course I’ll offer the caveat that the technology is evolving, so it could be astonishingly more capable tomorrow (but maybe that idea is hype, too?). If, ultimately, people do not find this technology useful, it will disappear like other over-promising and under-delivering technologies before it.
How to use gen AI responsibly and ethically
On the other hand, everybody’s talking about gen AI and trying it out to support a variety of tasks in research, writing, and teaching. These tasks are hard, and it makes sense that people want to try something that might make them easier. And as they experiment with gen AI, they are looking for guidance on how to do it without getting into trouble. Here’s my synthesis of the emerging guidelines for graduate students and other researchers who want to use gen AI responsibly and ethically in the research and writing process. In this context, the term “gen AI” usually means the chatbots built on large language models (LLMs), such as ChatGPT, Gemini, and Claude.
Know your policies
Universities, professional organizations, and journals are still in the midst of developing policies around gen AI use in scholarly writing. It’s up to you as the author to research those policies, and to keep checking them as they evolve, so you know whether and how gen AI use is permitted and what the consequences are for unauthorized use.
Keep records on everything you use AI for
Given that those policies are evolving and might not even exist when you start your project, document all of your AI use. I always encourage my clients to keep a research journal where they record their decisions throughout the research process so they can describe them later when they detail their methods. Treat AI use the same way: note, for instance, which tool and version you used, the date, what you asked it to do, and what you did with the output. Depending on the writing situation, you may or may not have to write this up, but you can’t do that if you didn’t keep records.
Guard sensitive, valuable, or private info
Assume the companies behind these chatbots are taking whatever you put into them. That seems to be the default setting for the chatbots, and even if you can change the settings so that the chatbot does not retain your data or use it to train the model, you still have to trust the company on that. Carefully consider what you input, and don’t input any ideas or information you need to maintain control over. If your research involves other people’s information, keep in mind what you promised them during the informed consent process.
Respect intellectual property
In addition to guarding your own ideas and information (or that of your research participants), you also need to safeguard other people’s intellectual property. If you input a journal article you accessed through your university’s library database, you may have just given that chatbot company access to intellectual property they did not previously have because it was behind a paywall. The authors didn’t get any say in whether or not you input their work.
Fact-check and bias-check outputs
These models generate text by predicting, one word at a time, the statistically most likely continuation of a string of words, based on patterns in their training data. Nothing in that process checks whether the output is true, and the training data carries the biases of the people who produced it, so it is up to you as the author to check and correct what the model gives you.
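If a concrete picture helps, here is a toy sketch of that next-word prediction in Python. Everything in it is invented for illustration: a real LLM weighs tens of thousands of candidate tokens at every step, with probabilities learned from enormous training corpora.

    import random

    # Toy next-word predictor. These probabilities are made up for
    # illustration; a real model learns them from its training data.
    NEXT_WORD_PROBS = {
        ("the", "results"): {"show": 0.4, "suggest": 0.3, "indicate": 0.2, "prove": 0.1},
    }

    def sample_next_word(context):
        """Pick the next word at random, weighted by its probability."""
        probs = NEXT_WORD_PROBS[context]
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights)[0]

    # Usually "the results show ...", but sometimes "prove" -- the model
    # tracks which words tend to follow which, not whether anything is true.
    print("the results", sample_next_word(("the", "results")))

Run it a few times and the continuation changes. Notice that “prove” can come out whether or not anything was proved; that, in miniature, is why outputs need fact-checking.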
Practice discernment and restraint in what you use it for
Given everything laid out in the guidelines above, there are times when it is best not to use generative AI. For example, if you do not have the expertise to fact-check and bias-check the outputs, you risk acting on inaccurate or biased information. Furthermore, in some instances, using gen AI may short-circuit your opportunity to learn the very skills you came to graduate school to develop. Becoming a more efficient and nuanced reader of the literature in your discipline is a key part of the transformation from student to scholar. How will that happen if you rely on gen AI to summarize articles rather than reading them yourself?
Discernment and restraint are hard
We’re still debating whether gen AI offers benefits that are worth the risks, and it’s not yet clear how to mitigate the potential for deskilling other than through selective use. As a strategy, selective use puts a lot of pressure on a graduate writer. In particular, it demands the most from those graduate writers who are most vulnerable to the purported promise of gen AI to “level the playing field”:
writers who lack confidence in their abilities because their language has been marginalized in academia, such as multilingual writers, speakers of marginalized Englishes, and first-generation students;
writers who process written language in ways that make academic reading and writing especially challenging, such as people with dyslexia or ADHD;
and writers who are overworked, whether because of exploitative labor conditions or caregiving responsibilities.
As I wrap up this post, I just got off a Zoom coffee chat with five fellow academic editors from the USA, Canada, and South Africa. Our conversation was about gen AI in academic writing and editing. Everyone was wary of the technology’s risk to intellectual property. No one had yet found a way to use it to make their work significantly easier or better.
I’ll continue following this technology and sharing what I learn with you here. I’d love to hear your thoughts and experiences with using gen AI in academic research, writing, editing, or teaching.