Now that Belgian universities allow their students to use LLMs, we urgently need to reform how we evaluate the quality of a student's work, since we can no longer rely on traditional assignments. I learned a lot by struggling through my university classes' programming projects, and I'm afraid I would have learned far less if I had had easy access to software that completed my work for me.
Next, it is easy to misuse these models (not always out of malice, but also out of misunderstanding how the technology works), applying them to use cases that range from wasteful but innocent to downright unethical and harmful. Consequences of this misuse include the pollution of the internet with slop and the spread of misinformation, as some users do not realise that tools like ChatGPT are not a replacement for search engines and should not be trusted to return reliable information. Sadly, because of these consequences, the whole field of AI is getting backlash, even though the field is so much more than just generative AI.
As an example of unethical and harmful misuse, we have ZetaForge, a no-code tool to "rapidly build and deploy advanced AI pipelines". The video linked above demonstrates how they use their tool to generate textual descriptions of war scenario images, which they then feed to a large language model that suggests a course of action to military personnel. In their own words, their demonstration pipeline "can be deployed as is, without any modification." I'm not looking forward to a future where we rely on statistical models outputting "As a large language model, I recommend opening fire. 🔫🤠" as a military strategy to ensure civilian safety.
Finally, there are the resources necessary to keep these models operational: a summary of a scientific paper, which I can perfectly well write myself, albeit slower, is not worth the computing time, energy, and bottle of water required to generate it. This, combined with the fact that these models are trained on data taken without permission from artists and writers, made me question my own use of these models in their current form.
I have decided to stop using LLMs as a writing tool, as — personally — their downsides outweigh the benefits. Summarising text was my initial use case for LLMs, but I summarise text to internalise and learn from the material I am working with. In my case, summarising should take time. LLMs promise speed, but speed isn’t always desired.
I also previously used an LLM to help write a blog post for a small half-day project. I still believe I used the tool responsibly in this case, as I only used it to rephrase or expand certain lines of text, treating the output as an initial draft rather than copying generated text verbatim. I also clearly indicated which text was written with assistance, even though most was still my own. However, I plan never to use LLMs for writing blog posts again: I strongly believe that if I don't take the time to write a text, no one should waste their time reading it either.
I plan to put my money where my mouth is by taking the amount I usually pay in API costs (which is only pennies, so I'm throwing in the price of a ChatGPT Plus subscription) and donating it to a charity standing up for nature and biodiversity in my region. I know I am not making a dent by donating a small amount of money to a charity and using LLMs mindfully, but I am doing my best to align my actions with my values.