I believe large language models can bring value in use cases where they assist with tasks like summarising, translating, and reformulating text. I have previously used these models for quick translation work, and I started using the OpenAI API to summarise the contents of my clipboard through an Espanso shortcut.
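For the curious, that setup boils down to a small script that reads the clipboard, sends it to the API, and prints a summary, which Espanso can then paste in place of a typed trigger via its shell extension. Below is a minimal sketch in Python; the script name, model choice, prompt, and the pyperclip dependency are illustrative assumptions rather than my exact configuration.

```python
# summarise_clipboard.py - a minimal sketch of a clipboard-summarising helper.
# Assumptions: pyperclip for clipboard access, OPENAI_API_KEY set in the
# environment, and "gpt-4o-mini" as a placeholder model name.
import pyperclip
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def summarise_clipboard() -> str:
    text = pyperclip.paste()  # grab whatever is currently on the clipboard
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": "Summarise the user's text in a few sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise_clipboard())
```

An Espanso match can then run this script as a shell command and expand a short trigger into the printed summary, which is all the "shortcut" amounts to.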
Outside of summarising and translating texts, there are advantages for software developers as well. GitHub Copilot, for example, drastically reduces the time needed to figure out popular APIs and helps with writing proper documentation. These tools could improve development speed, codebase quality, and consistency, as long as developers understand, reason about, and ultimately agree with every generated line of code and comment before committing their changes.
I am also surprised by the quick adoption among a non-technical audience. LLMs would have seemed like magic to me only a couple of years ago, yet they are now commonplace, with friends and family members asking me to teach them “how to use ChatGPT” or enthusiastically telling me they already use it regularly. During a lightning talk on the risks of these models, for example, I asked the audience to raise their hands if they had already used ChatGPT in a professional setting. Almost everyone in the audience had done so recently.
Flip side of the coin
Assistance with writing and coding aside, I believe this technology also has a serious flip side. Overreliance on and excessive use of it could degrade writing proficiency and programming skills. Now that Belgian universities allow their students to use LLMs, we urgently need to reform how we evaluate the quality of a student’s work, since we can no longer rely on traditional assignments. I’ve learned a lot by struggling through the programming projects in my university classes, and I’m afraid I would have learned far less if I had easy access to software that completes my work for me.
Next, these models are easy to misuse (not always out of malice, but often out of a misunderstanding of how the technology works), applying them to use cases that range from wasteful but innocent to downright unethical and harmful. The consequences of this misuse include polluting the internet with slop and spreading misinformation, as some users do not realise that tools like ChatGPT are not a replacement for search engines and should not be trusted to return reliable information. Sadly, because of these consequences, the whole field of AI is getting backlash, even though the field is so much more than just generative AI.
As an example of unethical and harmful misuse, take ZetaForge, a no-code tool to “rapidly build and deploy advanced AI pipelines”. The video linked above demonstrates how they use their tool to generate textual descriptions of war scenario images, which a large language model then uses to suggest a course of action to military personnel. In their own words, their demonstration pipeline “can be deployed as is, without any modification.” I’m not looking forward to a future where we rely on statistical models outputting "As a large language model, I recommend opening fire. 🔫🤠" as a military strategy to ensure civilian safety.
Finally, there are the resources necessary to keep these models operational: a summary of a scientific paper, which I can perfectly well write myself, albeit more slowly, is not worth the computing time, energy, and bottle of water needed to generate it. Combined with the fact that these models are trained on data taken from artists and writers without their permission, this made me question my own use of these models in their current form.
Personal ethics
I have decided to stop using LLMs as a writing tool, as, for me personally, their downsides outweigh the benefits. Summarising text was my initial use case for LLMs, but I summarise text to internalise and learn from the material I am working with. In my case, summarising should take time. LLMs promise speed, but speed isn’t always what I want.
I also previously used an LLM to help write a blog post for a small half-day project. I still believe I used the tool responsibly in this case, as I only used it to rephrase or expand certain lines of text, treating the output as an initial draft rather than copying generated text verbatim. I also clearly indicated which text was written with assistance, even though most of it was still my own. However, I plan never to use LLMs for writing blog posts again: I strongly believe that if I don’t take the time to write a text, no one should waste their time reading it either.
I plan to put my money where my mouth is by taking the amount I usually pay in API costs (which is only pennies, so I’m throwing in the price of a ChatGPT Plus subscription) and donating it to a charity standing up for nature and biodiversity in my region. I know that a small donation and more mindful use of LLMs will not make a dent, but I am doing my best to align my actions with my values.