Should AI be thought of as a science tool or collaborator? A conversation with ChatGPT

In a piece that I posted before Christmas, I made the offhand comment that “we need to think of AI as a collaborator as much as a tool”. It was offhand in the sense that I hadn’t thought too deeply about it, but some reading that I did over the holiday made me reconsider that statement, or at least think about it more critically. The most recent post on the Dynamic Ecology blog – Meghan Duffy asking about using AI to find literature – also got me thinking about whether AI is just a tool.

Eventually I decided to ask the question of ChatGPT itself and we ended up having quite a long conversation on this topic. I then asked it to summarise the conversation and what you see below is my edited version of that summary:

Jeff: Is ChatGPT best thought of as a tool or a collaborator in science?
ChatGPT: ChatGPT is best understood as a tool with collaborator-like affordances. It supports analysis, writing, coding, and idea development, but it does not participate in collaboration as a social, epistemic, and accountable practice. It is also prone to errors.

Jeff: But human collaborators also make errors and sometimes avoid accountability. We do not expect people to be flawless.
ChatGPT: Exactly. The issue is not error versus error-free performance. It is about agency and accountability: human collaborators can explain their reasoning, revise their views, and be held responsible for their contributions.

Jeff: You said that ChatGPT does not participate in collaboration as a “social, epistemic, and accountable practice”. Yet you can still generate genuinely useful ideas and connections.
ChatGPT: Yes, and that is why calling ChatGPT part of the infrastructure (as I did in our longer conversation) is misleading. Unlike laptops or software platforms, I can actively generate ideas, cross-fertilise between disparate areas, and help explore conceptual space.

Jeff: If I ask a colleague to write R code, that is collaboration. If I ask you to write R code, why is that different?
ChatGPT: Because a human colleague brings intent, methodological judgement, and ownership of decisions. I generate code instrumentally, without understanding the scientific stakes or assuming responsibility for analytical choices.

Jeff: Earlier you said you cannot “stand behind a claim”. But you can cite papers and explain why something should be trusted. So is that really true?
ChatGPT: When I cite papers or summarise arguments, I reproduce the form of justification, but I still cannot stand behind a claim in the scientific sense. I do not judge evidential strength or accept responsibility; trust remains with the human scientist and the underlying sources.

Jeff: Yet the output of your code or analysis can be just as useful as that produced by a human collaborator.
ChatGPT: That is true. The distinction is not about usefulness or correctness, but about agency, authorship, and accountability within scientific practice.

Jeff: So what is ChatGPT, if it is neither a standard tool nor a collaborator?
ChatGPT: It is best described as a cognitive amplifier or dialogic system: something that expands the range of ideas and connections available to researchers, without itself being an accountable scientific agent.

Jeff: And what does that imply for how we should use ChatGPT in science?
ChatGPT: It implies using ChatGPT to provoke, explore, and accelerate thinking, while recognising that scientific judgement, responsibility, and ownership remain entirely human.

Jeff: Produce an image that I can use on the blog which summarises this discussion.
ChatGPT: Certainly.

As a test of its ability to “provoke, explore, and accelerate thinking”, at the end of last week I uploaded a manuscript that I had written, just prior to submission, and asked ChatGPT to assess its strengths and weaknesses, and its fit for the journal. Its feedback was spot-on – far more than I would expect from a “tool” – and I made a few final tweaks before submitting the manuscript. ChatGPT also said “Don’t be surprised if the reviewers ask you to change X, Y and Z”, so it will be interesting to see whether it has correctly gauged both the manuscript and the reviewers’ responses.

What times we live in! At the moment I’m optimistic enough about AI to see all of this as an intriguing exploration of the capabilities of these large language models: an expedition through dense habitat in which we’ve barely left base camp, our view of what lies ahead is restricted, and there may be nasty surprises along any path we hack. But I appreciate that not everyone is so optimistic and, as always, I’d be interested in your thoughts on this topic – leave a comment or send me a message.
