LLMs such as ChatGPT and Gemini don't need to be the boogeymen of academia if they are used properly.
Martin Bekker, a lecturer at the University of the Witwatersrand, has proposed a tiered approach to using LLMs in academic writing.
When presented with large bodies of text, an LLM can summarise that text and highlight what it believes are the key points. However, ChatGPT and Gemini can also be used to generate text: once ChatGPT launched, it was used to write essays, and suddenly there was a market for AI-generated academic work.

Tiers 1 and 5 are the most extreme ends of the scale, with Tier 1 (Ban) meaning no use of LLMs at all. The obvious risk here is that people will flout these rules, as cheaters have for centuries. At Tier 5, the most obvious risk is that it becomes unclear who authored the paper, and while research could be published faster, this may devalue that research given the tendency of LLMs to hallucinate and fabricate information.
In addition to these tiers, Bekker also says that the use of LLMs in academia should hinge on two principles. The first is ownership: if a human author uses an LLM, they assume responsibility for any errors or hallucinations that may appear in their work.