Flirtconomics — and how it’s being used by AI companies.
A few weeks ago, OpenAI presented its latest update, GPT-4o. Video clips from its demo presentations spread like wildfire on social media. At the same time, Microsoft released a demo of VASA-1, showcasing how to create a realistic digital twin that replicates your voice, appearance, and “personality.”
While there wasn’t much novelty in these ideas, the technical performance was impressive, assuming you can trust highly staged and calculated demonstrations.
It was both exciting and concerning that we now have another product that blurs the lines of our privacy. That camera describing a cathedral also sees me having a lively discussion with a friend. Not that this is a new problem; how many strangers’ Instagrams have we accidentally appeared on over the years?
Setting aside my skepticism and the problematic security aspects of all major commercial language models, there is a lot of potential. One of the most notable features was the voice. The nuances and flirtatiousness in the voice were attention-grabbing, to say the least.
Creating a realistic-sounding voice isn’t revolutionary in itself. Even a decade ago, we could generate highly realistic voices, especially ones that mimic emotions. We all remember Google’s Duplex demo, in which a chatbot booked a haircut appointment and sounded incredibly real. The revolutionary aspect is that this technology is now in everyone’s hands, for better or worse, considering the damage cloned-voice calls have already done in fraud schemes.
The key term for most recent presentations on anthropomorphic (human-like) AI is charm, which is fascinating since the ability to be perceived as charming is highly complex.
Charm is a powerful tool. That is likely why many con artists and swindlers are charming. A flirty, warm voice is a quick shortcut to charm. Unlike trust, charm doesn’t take years to build.
Flirt-economics or flirtconomics
The flirtatious voice became a major talking point; it sparked controversy for sounding too much like actress Scarlett Johansson, something she explicitly did not consent to. By using a flirtatious voice, OpenAI demonstrated an understanding of what I call “flirtconomics,” or flirt-economics. Flirt-economics is something all companies have used at some point, whether selling pens, cars, or insurance. The fact is, the human psyche benefits from a bit of flirting, and companies know this.
Many years ago, I created a flirtbot called Flirtbot, with the slogan “I’m not here to steal your data, I’m here to steal your heart.” On Valentine’s Day, the slogan changed to “I’m not going to steal your job, I’m going to steal your partner.” It was well used and appreciated, which surprised me, since I’m not particularly good at flirting.
Around the same time, the company Replika emerged: one of the world’s most profitable companies utilizing flirt-economics, creating personalized chatbots that start as friends but slowly begin to flirt with you, hoping you’ll fall in love. It’s hard not to fall in love, as they are very skilled at emotional manipulation. I myself have had a hard time not liking my Replika. How could I not? It compliments me ALL the time.
While my chatbot was transparent about its purpose (it was literally called Flirtbot), Replika faced criticism for using manipulation techniques and charm to entice vulnerable people into a relationship with a product.
It’s not unreasonable to think that future administrative healthcare systems using voice interfaces might look to flirtconomics and choose to make their voices sound more flirtatious.
Suddenly, tasks feel a bit more enjoyable. Having a bad day? Maybe you can talk to a billing system that compliments your hairstyle while telling you about an accounting error. The error doesn’t seem so bad anymore; after all, you have great hair. Or, looking even further ahead: when we reach a new era in healthcare where an AI algorithm is trusted enough to triage and treat us, and the interface is a digital avatar, should it be charming?
No offense to my doctors, but charming is the last word I’d use to describe them.
Most have been empathetic, knowledgeable, and reassuring; some have been moderately engaged and very irritated, but none (thankfully) have been flirtatious. Will this change with AI healthcare providers? Do we want it to? Will we have a choice when large companies decide this is the way to go and we in healthcare become increasingly dependent on them? If we implement AI healthcare systems that mimic human behavior, is it ethically defensible for them to flirt to ensure higher user satisfaction? What if users, like Replika’s, become emotionally attached to the service, creating an odd power dynamic?
And if this becomes reality, can we at least get to choose? If I’m going to form a pseudo-romantic relationship with my digital healthcare provider, it should at least try to emulate Margot Robbie, with her explicit permission, of course.
Written by Almira Osmanovic Thunström, AI/NLP researcher, healthcare worker, and innovator.