Ensuring the integrity of user-generated video is crucial in combating fake news, with companies such as Newsflare developing rigorous authentication algorithms to ensure content can be trusted.
Generative AI is now so powerful that it’s possible to make a convincing-looking video within minutes from a simple text prompt. Today’s state-of-the-art in AI images and video is frequently breathtaking, yet the stark, hard-to-assimilate fact is that this is the worst it will ever be, because the technology will surely improve.
If fake or inauthentic video is a problem now, every technology trend points towards it growing as a threat to our ability to know what is real and what is intended to mislead us.
At the same time, GenAI can bring wonderful things. Used creatively - by creators - it opens up a universe of expanding opportunities that it would be a shame to turn down. It is reasonable to hope that creative talent, while working at a higher level, will remain “in charge” of content creation. No one in the content business wants content creators to become irrelevant.
For content businesses, fake video (or fake elements in video) poses a serious, possibly existential threat. Without reliable authentication, content’s value will likely trend towards zero: a massive issue. Luckily, companies like Newsflare take this very seriously, as we will see shortly.
Authenticity and trust
Footage captured from Dr Guan Zhu's dashboard cam in April 2014. Dr Zhu only received minor injuries. Video: Newsflare
Authenticity is at the core of any scheme or process to keep content real. It’s simple to define: content is authentic if it is what the author or the wider context claims it to be. Of course, some content, like special effects, is designed to mislead - but that is a special case. Given that AI can now generate anything in your imagination and beyond, authenticity is now de facto an essential part of any content’s specification.
Another concept closely associated with authenticity is trust. While trust isn’t infallible, it goes a long way towards ensuring that content is what it claims to be. A simple example might be a trusted news journalist or camera operator, known for high-quality work, who has earned that trust by being open and honest about the provenance of their material. At the other end of the pipeline, content libraries gain the trust of their customers if, over time, they never supply material that is fake or misleading.
Trust can be enforced as long as the process is end-to-end. It starts with the camera, it might use cryptographic techniques, and - a bit like the basic rules of hygiene - it ensures that the content is not “contaminated” with inauthenticity at any stage of its progression from capture to consumption.
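To make the hygiene analogy concrete, here is a minimal sketch of what an end-to-end provenance chain might look like: each stage of the pipeline hashes the content, chains its record to the previous stage’s signature, and signs the result, so that tampering anywhere breaks verification. This is an illustration only, not a description of any real system (production schemes such as C2PA use public-key certificates rather than the shared key assumed here); the key, stage names, and record layout are all hypothetical.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"camera-device-key"  # hypothetical signing key, for illustration only


def sign_stage(content: bytes, stage: str, prev_sig: str = "") -> dict:
    """Record one pipeline stage: hash the content, chain to the
    previous stage's signature, and sign the whole record."""
    record = {
        "stage": stage,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev": prev_sig,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_chain(content: bytes, chain: list) -> bool:
    """Re-derive every hash and signature; any edit to the content
    or to any record breaks the chain."""
    prev_sig = ""
    content_hash = hashlib.sha256(content).hexdigest()
    for record in chain:
        if record["content_hash"] != content_hash or record["prev"] != prev_sig:
            return False
        payload = json.dumps(
            {k: record[k] for k in ("stage", "content_hash", "prev")},
            sort_keys=True,
        ).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(record["sig"], expected):
            return False
        prev_sig = record["sig"]
    return True


# Capture -> upload -> publish, each stage chained to the last.
video = b"raw dashcam footage bytes"
chain = [sign_stage(video, "capture")]
chain.append(sign_stage(video, "upload", chain[-1]["sig"]))
chain.append(sign_stage(video, "publish", chain[-1]["sig"]))

print(verify_chain(video, chain))              # True
print(verify_chain(b"edited footage", chain))  # False: content was altered
```

The point of the chaining is that no intermediary can quietly swap the content or erase a step: every record commits to both the footage and the full history before it.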
Content authenticity is an issue in wider areas than you might imagine. While news is the obvious genre of concern, user-generated content of every kind is a likely vehicle for misleading material.
It’s probably fair to say that social media viewers are already partly normalised to receiving misleading content. To some extent, their cognitive immune system will help them to filter out inauthentic material, but the better generative AI gets, and the more skilful the ill-intentioned creators become, the less even the most sceptical viewers will be able to differentiate between the real and the fake.
If and when viewers are fooled by fake content, the consequences can be severe. For news organisations, misleading content can change the world. Examples of content containing misleading claims have already influenced the outcome of elections. In advertising, broken trust could destroy a brand or a company.
Weeding out misinformation
Tourists filmed fleeing from the edge of a lagoon as a glacier calved in Iceland on March 31, 2019. Video: Newsflare
Authenticity is on its way to being one of the most critical aspects of content.
Newsflare is at the forefront of verifying authenticity in news and for-profit content. It is a licensing platform for user-generated video, servicing news publishers, factual TV and film production, and brands and marketers.
Authenticity and trust are at the core of Newsflare’s business, and as such, it has invested significant resources into building a reliable pipeline of vetted content for its licensees. Newsflare achieves this with its proprietary “Trust Algorithm”, which draws on years of gathered training data and licensing history to assign a “trust score” to each piece of content based on a range of data points. This score is used to filter out content of dubious provenance, protecting customers who receive feeds of highly relevant, vetted User-Generated Content (UGC) for their projects.
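The details of Newsflare’s Trust Algorithm are proprietary, but the general shape of such a filter is easy to picture: score each submission from historical and metadata signals, then gate the feed on a threshold. The sketch below is purely illustrative; the data points, weights, and threshold are assumptions, not Newsflare’s actual model.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    """Hypothetical per-clip data points a vetting pipeline might track."""
    filmer_account_age_days: int
    prior_licensed_clips: int     # clips previously accepted and licensed
    prior_rejections: int         # clips previously rejected as dubious
    metadata_consistent: bool     # e.g. GPS/timestamp agree with the claimed event


def trust_score(s: Submission) -> float:
    """Toy weighted score in [0, 1]; the weights are illustrative only."""
    score = 0.0
    score += min(s.filmer_account_age_days / 365, 1.0) * 0.3   # track record length
    score += min(s.prior_licensed_clips / 20, 1.0) * 0.4       # proven history
    score -= min(s.prior_rejections / 5, 1.0) * 0.3            # penalise rejections
    score += 0.3 if s.metadata_consistent else 0.0             # provenance check
    return max(0.0, min(score, 1.0))


def vetted_feed(submissions, threshold=0.6):
    """Filter out clips of dubious provenance before they reach licensees."""
    return [s for s in submissions if trust_score(s) >= threshold]


veteran = Submission(800, 30, 0, True)    # long history, consistent metadata
newcomer = Submission(3, 0, 2, False)     # new account, rejections, bad metadata

print(trust_score(veteran))   # 1.0 (capped)
print(trust_score(newcomer))  # 0.0 (floored)
```

A real system would learn such weights from years of licensing outcomes rather than hand-tune them, but the gate-on-a-score structure is the same.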
It’s important to say that while Newsflare is a pioneer in sourcing and licensing authentic content (i.e. captured by people witnessing an event), the company is not AI-phobic. For some time now, it has used AI to inform the curation of content for customers, via an in-house data solution called Content Brain. It would be wrong to turn down the opportunities and new creative horizons offered by generative AI, but the company insists that AI-generated output must be distinguishable from conventional material depicting the real world. Newsflare is also a pioneer in supplying its growing content library to AI customers for model training, for uses including generative models (text-to-video, for example) and non-generative applications such as autonomous driving.
Jon Cornwell, Newsflare’s CEO, said, “Maintaining the integrity of content and rewarding content creators fairly has been in Newsflare’s DNA since the inception of the business. Ensuring the integrity of content is a challenge that we address in a variety of ways: legally, technologically and culturally. However, perhaps the strongest lever is commercial – if the world needs video that can be trusted, then trustworthy filmers should get paid.”
Ensuring consent
Crucially, all Newsflare content licensed for AI training is licensed with the consent of the rights holder, with whom video licensing revenues are shared 50/50. Nurturing an ethical approach to training AI models by compensating rights owners is perhaps the only safe pathway through the moral issues raised by the professional and commercial use of generative AI models.
AI will grow faster than we intuitively expect. Even though widespread general adoption may seem slow, the true pace of advancement is off the scale. Developments like Sora, which made us all gasp less than a year ago, will keep coming, surprising even the experts.
All of which makes the quest for verifiable authenticity the number one focus for content libraries and agencies. Newsflare is leading the way with its focus on upholding the integrity of content, underpinned by its proprietary Trust Algorithm, which will preserve the value of authentic content in a confusing sea of AI-generated material.
tl;dr
- Ensuring the integrity of user-generated video is essential for combating fake news, with companies like Newsflare implementing rigorous authentication and trust algorithms to verify content authenticity.
- As generative AI technology rapidly evolves, the ability to create convincing fake videos increases, making it increasingly difficult for viewers to differentiate between real and misleading content.
- Trust in content is vital, and organizations that build a strong reputation for authentic material can protect their credibility while combating the spread of misinformation.
- Newsflare combines its focus on authenticity with innovative AI solutions, utilizing a proprietary "Trust Algorithm" to vet content while also incorporating AI to enhance content curation, ensuring that generated materials remain distinguishable from real-world footage.