
How not to steal content


My stance on writing with AI

8/2/2024, 9:38 PM

Reading time: 4 minutes

Of course, every website needs an article on where AI is heading. But stating that it will take more and more jobs is kinda 2023 at this point. Also, I'm not talking about using GitHub Copilot to code. Instead, this is about using AI tools in creative writing, such as these blog posts.


How to use AI in writing

Over time, I've essentially used two ways of writing with AI. The first is to write notes manually and then have an LLM like ChatGPT generate a full article from them. That way, you're still the author of the content, but you don't have to bother with the actual writing process and choosing the right words.
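To make that concrete, here is a minimal sketch of what that first workflow can look like as a script, assuming the OpenAI Python SDK. The model name, prompts, and notes are illustrative placeholders, not my actual setup.

```python
# Sketch: turn bullet-point notes into a draft article with an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

notes = """
- two ways of writing with AI
- way 1: manual notes, LLM writes the article
- way 2: Copilot autocompletes while I write
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Turn these bullet-point notes into a blog article."},
        {"role": "user", "content": notes},
    ],
)

print(response.choices[0].message.content)
```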


The second way is to use GitHub Copilot to autocomplete sentences while writing manually. In my experience, this helps when you're stuck trying to find the right words, or when it magically suggests exactly what you were thinking, just one button press away.


The problem with this second way is that it might influence you and maybe even keep you from thinking about the wording yourself, as a friend of mine pointed out. He prefers asking an LLM to rephrase a sentence over having it intervene on its own.


What about AI detection?

Specifically in school or uni work, using an AI model in any of the ways described above could be seen as risky, since some call that plagiarism. In my humble opinion, that's complete bullshit. Let's first recap how LLMs actually work.


I assume everyone knows those challenges where you start a sentence on your mobile keyboard and then "press the middle suggestion button" until you have a full text. All this does is guess the next word, and it's pretty bad at it. But it turns out that if you use the same concept with A LOT of data, run it on GPUs that are more expensive than your house, and train it for some time, you get a model that predicts words far more accurately.
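To show how little magic the keyboard version involves, here is a toy next-word predictor in Python, built from bigram counts over a handful of words. This is only an illustrative sketch, not how an actual LLM is implemented, but the core idea of predicting the next word from what came before is the same.

```python
# A toy "middle suggestion button": predict the next word from bigram
# counts. An LLM does conceptually the same job, just with a neural
# network and vastly more data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word`, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Keep "pressing the middle button" five times:
word = "the"
for _ in range(5):
    print(word, end=" ")
    word = suggest(word)
```

Run it and it dutifully babbles "the cat sat on the", because that's the most common continuation in its tiny "training data". Roughly speaking, scaling the data and the model up by many orders of magnitude is what gets you something like ChatGPT.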


Now let's say you are writing a text. Essentially, you have an idea, start writing a sentence, and try to figure out what word comes ____.


See what I did there? Your brain works the same way: you have heard similar sentences before, so you know what word comes next. So the fact that an AI model generates the same output as you must mean that your brain is plagiarizing, right?


Now, of course, you cannot really call using one word plagiarism. But there is a problem. As these AI models get better, they can generate a lot more content in a "human" way. At the same time, our brains get used to reading AI-generated content, and we start choosing our words accordingly.


What's original content?

If you think about it, virtually every sentence you write was either written by someone else before, or is a combination of smaller pieces that you stole. Try to think of a real word that has never been used before; the idea doesn't even make sense. The only difference is whether you try to be creative or deliberately copy someone else's work. And you cannot really prove that you didn't copy on purpose, or that you didn't use an LLM to generate it.


Conclusion

AI detection tools are inherently flawed. In my opinion, the content and the personal experience you try to convey are what matters, not the specific wording, which might have been chosen by AI. Just don't generate entire articles; they come out dry and soulless. The only time you write a sentence that was definitely not written by AI is when you make spelling and grammar mistakes, so just remember:

You're only being original when you make mistakes, and that's unprofessional.
