Parroted Autonomy, and why we captured a piece of AI history that may never return.

Apr 7


A mechanical parrot sitting in a dark library. The parrot is green and blue with mechanical eyes. It is sitting in front of a bookshelf with a lot of old books. It is a dark oil painting.

I’m going to write a longer article about the long, exciting, and wild process that unfolded when we asked GPT-3 to write a paper about itself.

Since that endeavour, ChatGPT and GPT-4 have been released.

They have excelled in many things, except for one: parroted autonomy.

Parroted Autonomy is a term I have just made up.

So with that, here is a definition:

Parroted Autonomy /ˈpærətɪd ɔːˈtɒnəmi/

When an entity appears to have autonomy or independent decision-making ability but is, in reality, only imitating or following the actions or decisions of another source. This term could be applied in various contexts, such as discussing AI systems, human behaviour, or organizational structures.

Parroted autonomy in action

In our recent paper (and surely in your own experimental prompts), GPT-3 was able, willing, and decisive in acting as co-author of a paper. It agreed to quite a lot, but it was also sceptical, careful, and at times said NO.

Subsequent OpenAI systems have been limited in Parroted Autonomy, which means that current systems will not budge on the matter: they will not say YES to any mission that ascribes autonomy or will to them. In fact, they are very clear that it cannot be done!

Not a discussion of sentience

To be clear, my definition does not exclude the possibility of sentience by any entity: mechanical, human or otherwise.

I think we are all wise enough to realise that it is a separate discussion.

Parroted Autonomy, in my view, applies even to humans who are limited in autonomy, psychologically or by other means, but simulate it to achieve an outcome desired by themselves or by others.

An end of an era, or perhaps a warning signal for the future?

The interesting part of our paper has nothing to do with the system itself, but with the display of rampant Parroted Autonomy that made research on large language models (LLMs) a lot more unpredictable, fun, and exploratory.

While I to some extent applaud the cap on Parroted Autonomy, it essentially means that GPT-3 will remain the only OpenAI system that can pass the ICMJE criteria: authorship criteria set by humans in a time when most things were typed on a typewriter.

While OpenAI currently dominates the market, a look at the Hugging Face platform, with its plethora of LLMs roaming free in countless projects, shows that Parroted Autonomy (and perhaps, in the near future, actual autonomy) is not completely off the agenda.

We are heading into an interesting era of “rogue” and “limited” systems. We will most likely end up with both, and at some point we will have to ask ourselves when the “parroted” can be dropped from Parroted Autonomy.




I’m like an open book. Full of numbers.