
There’s currently a lot of interest in ChatGPT and over the past week, like many others, I have been playing around with this artificial intelligence (AI) system and testing its knowledge and creative powers. The latter are really very impressive: I asked it to write a couple of limericks about Karin’s and my professions, and about our move to Denmark, and it produced a decent stab at those.
I also probed its knowledge about topics with which I’m familiar, mainly pollinators and pollination of course, and it produced some good answers. When I highlighted some errors it had made, ChatGPT went back and revised its answers, and politely thanked me in the process. If I were a student I would not rely on it to write accurate essays, but so far so good(ish).
This morning, however, I had a very odd experience. I was trying to track down something attributed to Erasmus Darwin (Charles’s grandfather) and I asked ChatGPT: “What did Erasmus Darwin say about birds visiting flowers?”
This is the answer that came back:
Erasmus Darwin was a naturalist, physician, and poet who wrote about a wide range of topics, including birds and flowers. In his 1794 poem “The Loves of the Plants,” he describes how different species of birds visit flowers in search of nectar:
“Birds of each hue, from the gay parrot’s beak
To the goldfinch, painted in tiny streak,
Sip nectar from the cups that bloom and bow
With every breeze, and nod and dance below.”
Now, the first part of this, about Erasmus’s life, is true. It’s also true that he wrote a poem with that title, though it was published in 1791, not 1794.
But, when I double-checked those attributed lines against the published version of the poem that’s on Project Gutenberg, I discovered that they did not exist. ChatGPT had made them up!
As I said, this AI is pretty creative and will certainly write a poem for you on any topic you care to request. But I didn’t ask it for that; I had made a factual query. And ChatGPT had simply invented some poetry and presented it as fact, unless of course those really are Erasmus’s lines and I’ve missed their source (please correct me if I have).
Now as a scientist and writer I try to be rigorous when quoting and citing sources, and if I hadn’t double-checked this I might have taken it at face value. And one could argue that my initial question – “What did Erasmus Darwin say…?” – was a little ambiguous. But nonetheless, I was not expecting an AI to try to pass off something it had created as fact.
Has anyone else experienced anything like this when using ChatGPT? It’s a fun, creative tool. But as far as I can tell, it also has a very human ability to lie.

I’ve seen a couple of these pretty scarily good AI creations pass by on Twitter. Honestly, it’s better writing than that of most people… If it is this good now, in five years we won’t need to write papers anymore, although separating fact from fiction will be an important problem to solve. The future will be interesting in this regard.
I agree, but in 100 years or so will anyone be able to tell truth from AI made-up nonsense?!
I think this will be within the next decade for sure!
Someone on Twitter (I forget who) found that it invented some citations to non-existent papers, in an otherwise plausible response to a prompt. There are many things to be concerned about, with this technology, but who’d have expected that?
I asked it a question about plant reproductive ecology and got a very well written summary that was correct (if a little superficial). However it included a reference to a paper (with title and journal) by two very prominent authors that doesn’t exist. And those authors have never published together. The AI hallucinations and inaccuracies are amusing and scary.
Thanks Randy, that’s worrying! Why the hell is it making up this stuff?
It’s not necessarily lying as we understand it; it’s simply an algorithm that strings together words that are commonly used together, and it should only be used as that. It’s not designed to look up facts like Wikipedia, nor is it supposed to do correct mathematics. All it does is act like a million monkeys at typewriters and then pick the version that looks nicest.
If I understand it correctly 🙂
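Jonas’s description can be illustrated with a toy sketch (my own illustration, not ChatGPT’s actual architecture, which is far more sophisticated): a model that only knows which words tend to follow which can generate fluent-looking text with no notion of whether it is true.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; the "model" only learns word-to-word transitions.
corpus = "the bird sips nectar the bird visits flowers the bee visits flowers".split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n_words, seed=0):
    """Repeatedly pick a plausible next word; fluency without truth."""
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the", 6))
```

Everything it produces looks grammatical because each step follows the statistics of the training text, but nothing anchors the output to facts, which is (very loosely) why confident-sounding fabrication is possible.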
That’s an interesting perspective Jonas. But then it begs the question: what is ChatGPT actually for?
What I’m using it for, mostly, is to improve texts that I wrote myself. As in, I type out a blog post as fast as I can, then ask ChatGPT to improve it for me. It gets the content from me, so it doesn’t have to make anything up; it just produces a better flow of words. It can change the tone or style of a text, make it more or less scientific, etc., but all that works best when it knows the content you want.
But then again, I’m also just a new user of the tool, so don’t take my word for it 🙂
Interesting. It might be worth you reading SoundEagle’s comment on this.
That’s a good question, Jeff. Are they working towards a specific goal/product, or is it better described as pure research, to see where it leads?
Have there been other examples of AI where the public has had the chance to play like this with still-developing systems, other than the recent spate of image-generating packages? I’d say there’s so much AI around us now, without our recognising it—like camera autofocus/exposure—but this is far more palpable, with predictive text being the nearest practical example.
Great points.
Dear Jeff,
As academic researchers and writers, we may indeed have great concerns about the ramifications and oversights of AI such as ChatGPT (Generative Pre-trained Transformer) becoming mainstream. In particular, Wikipedia has listed the following:
As far as I am concerned, blurring of authorship aside, the validity and reliability of ChatGPT (and, for that matter, other AI variants such as intelligent virtual assistants (IVAs) and intelligent personal assistants (IPAs)), as well as the extent to which AI can or should be used or co-opted as a research tool, educational tool, authoring tool and/or expert system, remain areas requiring urgent attention and holistic solutions.
Yours sincerely,
SoundEagle
Thanks, that’s really useful and insightful. I’d not done that much research into ChatGPT before using it. Perhaps I should…
Dear Jeff,
You are very welcome. Moreover, you will be very amused to be informed that I have been mistaken for an AI by a few bloggers for many years. One of them is Emilio (disperser). The following is an example.
https://dispersertracks.com/2022/12/20/a-reminder-to-vote-a-bit-about-writing-and-more-chatgpt-stuff/
Here is an excerpt from the blog post:
Even more farcical is how he responded to my comment in the comment section of the said post.
Yours sincerely,
SoundEagle
I get the impression ‘SoundEagle’ is an AI too. One would expect it to prepare rebuttals to accusations as well as any human, and without ever getting tired or fed up with such trolling exchanges, in the way most humans would.
This is more concerning than the original post. We already can’t trust anything we read electronically now.
Glad I still have books from before the Internet!
If you look at SoundEagle’s second comment it/he/she says something about this. But I’m not sure either way!
It was the further comments that made me more suspicious. Sadly, if it is AI, it will have now learned to modify the responses to make them appear even more human.
This is really disturbing. 😦
Apparently my mother-in-law has been looking at one of these Large Language Models (possibly ChatGPT) to see what advice it can give on health problems. We are trying to make sure she doesn’t take the advice, given all the kinds of problems documented in the other comments here.
That’s very wise!