The high-profile argument between Hollywood star Scarlett Johansson and OpenAI over whether her voice was used without her permission for its latest flirty chatbot reveals some deeply concerning fault lines around probity in the AI industry.
So, another day, another row about AI coming for the creative industries, but this one feels slightly different for several reasons. First, it involves an MCU alumna and one of the most famous actors on the planet, Scarlett Johansson. Second, it sheds light on what we can probably best call slightly dubious practices emerging at one of the leading companies shaping the AI rollout.
So, let’s rewind to last week, or the Jurassic in AI terms.
OpenAI demonstrated new chatbots powered by its latest GPT-4o model that were far more human-like, suspiciously flirty, and frankly slightly creepy as a result. Daily Show host Desi Lydic nailed it, we think.
But we digress. The point is that one of the voices, named Sky, sounded remarkably like Scarlett Johansson, a fact that was compounded by OpenAI CEO Sam Altman simply posting the word ‘Her’ on Twitter/X.
He has talked about Her in the past: the 2013 Spike Jonze film in which a man falls in love with an AI assistant voiced by, no surprises here, Scarlett Johansson. It does not end well. Nor, it seems, will this.
Since then, OpenAI has paused the Sky voice and insisted it was never intended to imitate Johansson, while the actor has responded with a statement, issued through her lawyers, demanding to know exactly how the voice was created.
It’s all a bit of a mess, and it’s an important one for several reasons, not the least of which is that OpenAI’s ChatGPT is finding itself at the heart of a lot of what we all do every day. It’s in Windows 11 and, if rumours are true, will be driving a lot of the AI interaction in the newly AI-focused OS releases from Apple later this year. If we cannot trust the developer of the technology to be transparent about such matters, and let’s face it, so far it hasn’t been, that counts as a definite red flag. Let’s not forget that one of the reasons behind Sam Altman’s initial firing (and subsequent rehiring) last year was that “he was not consistently candid in his communications with the board.”
The world really does not need another tech bro following the ‘move fast and break things’ doctrine.
Trust is a vital component of the new breed of AI systems, and especially of the AI assistants being developed at breakneck pace at the moment. And with high-profile departures from OpenAI in recent days (co-founder, chief scientist, and boardroom coup participant Ilya Sutskever has quit), the reported disbanding of the superalignment team that was designed to mitigate some of the riskier dangers of AI, and now Skygate, trust in OpenAI is a commodity in increasingly short supply.
Last word, though, to Ms Johansson, who has tapped into a deep wellspring of unease amongst creatives everywhere. Her statement says that two days before the ChatGPT demo was released, Altman contacted her agent asking her to reconsider her earlier refusal to supply the voice. But before they could connect, the system was out there. Frankly, she wants answers.
“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”