With little fanfare, mankind crossed an important frontier last year: the uncanny valley. The impact of this will be huge and long-lasting.
The uncanny valley refers to the unsettling feeling people get when they see a robot or a CGI representation of a human that is similar to, but not exactly like, a real person.
This milestone is the result of AI’s newly developed ability to create convincingly realistic synthetic representations of people and objects. That ability is already being used to digitally forge video and audio of events that never took place – forgeries known as deepfakes.
AI makes sense
This development in machine learning is all part of a wider trend. While we’re a long way from AI becoming self-aware, machines’ ability to sense the world around them is advancing rapidly.
Increasingly, machines are able to see, hear and touch their environment, allowing them to process ever greater amounts of data – helping them to make sense of the world in much the same way as a newborn baby does.
Consider the sense of sight – over the last few years there have been dramatic leaps forward in machine vision capabilities.
Machines have been trained to decipher images and recognise faces using huge datasets. Having done its homework, AI can now alter and generate images and video to an astounding degree of realism.
Google’s DeepMind division recently released the source code of its latest Generative Adversarial Network, BigGAN. This image-generation engine uses Google’s enormous cloud computing power to generate images that are remarkably hard to distinguish from real photographs.
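BigGAN itself is a huge model trained on Google-scale compute, but the adversarial idea behind it can be sketched in a few lines. The toy example below is an illustrative assumption, not BigGAN’s actual code: a one-line generator tries to mimic samples drawn from a 1-D Gaussian, while a logistic-regression discriminator tries to tell real samples from generated ones, each learning from the other’s mistakes.

```python
import numpy as np

# Toy GAN sketch (illustrative only, nothing like BigGAN's scale):
# the generator learns to mimic samples from N(real_mu, 1) while the
# discriminator learns to separate real samples from generated ones.
rng = np.random.default_rng(0)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

w, c = 1.0, 0.0    # generator:     G(z) = w*z + c
a, b = 0.1, 0.0    # discriminator: D(x) = sigmoid(a*x + b)
lr, real_mu = 0.05, 4.0

for step in range(2000):
    x_real = real_mu + rng.standard_normal(32)   # batch of real samples
    z = rng.standard_normal(32)                  # batch of noise
    x_fake = w * z + c

    # Discriminator step: minimise -log D(real) - log(1 - D(fake)),
    # i.e. push D(real) towards 1 and D(fake) towards 0.
    d_real, d_fake = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
    a -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    b -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: minimise -log D(fake), i.e. try to fool D.
    d_fake = sigmoid(a * (w * z + c) + b)
    g = -(1 - d_fake) * a            # gradient of the loss w.r.t. x_fake
    w -= lr * np.mean(g * z)
    c -= lr * np.mean(g)

print(f"generator now samples around {c:.2f} (real data centred at {real_mu})")
```

In a real GAN such as BigGAN, both players are deep convolutional networks and the samples are images rather than scalars, but the alternating train-to-fool loop is the same.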
Photo Credit – NVIDIA
Digital technology is a huge part of how we perceive the world and manage our lives. AI’s little leap across the uncanny valley means that the ability to manipulate our perception of reality is now open source and online – available through easy-to-use tools.
This ability to create undetectable counterfeit images and video is keeping the intelligence community up at night. Some see it as the end of truth and a threat to democracy, with the power to inflame culture wars by taking fake news to the next level. Undoubtedly, such forgeries will further undermine our trust in what we see and hear online.
The implications for the media and marketing industries are huge. In industries where image is everything, the protection of intellectual property, rights management and brand reputation suddenly just got a lot more complicated.
Instead of the familiar social media crisis, brands, media entities and governments will struggle to cope with the latest deepfake attack, as trolls and criminals deploy highly convincing misinformation against those they dislike or wish to exploit.
Infinite creative possibilities
It’s not all bad news though. The creative potential of this technology is limited only by our (and AI’s) imagination.
Expect to see it used in bold new ways across art and culture as a whole. Filmmaking and video production could be transformed by the ability to quickly and easily generate highly convincing synthetic environments or actors.
How, for example, will Netflix incorporate its creative power into its data-driven, formulaic and industrialised approach to content creation? And what will it mean for actors?
Just like Pinocchio, it will breathe new life into computer-generated influencers such as Lil Miquela, helping such avatars to evolve and make their own journey across the uncanny valley – something they’ve always struggled to do.
It’s a safe bet that the gaming and virtual reality industries will welcome this technology as a way of taking their immersive experiences to the next level.
Lil Miquela. Photo Credit – TechCrunch
It will find its way into advertising and content too, and we won’t have to wait long before a brand campaign featuring AI-generated video imagery of this kind hits our screens.
We may even see our first AI-generated brand ambassador; after all, China’s Xinhua News Agency has already launched its first AI-created newsreaders. It should make managing such talent considerably easier.
Photo Credit – BBC
Touching – really
Sight is not the only area where things are moving forward. Google’s Project Soli shows how machines can now use radar to understand the space around them. Although the hardware is still fairly bleeding edge, it means that machines can, in a sense, touch the objects around them, much as bats use sonar. It could open up entirely new ways for people to interact with products and services.
I hear you
Having considered these other senses, we shouldn’t forget sound. Voice recognition continues to make great strides. Companies like Amazon and Google are making big bets on voice, embedding assistants like Alexa into everything from microwaves to the modern office.
Although many of the hard problems around speech and language have been addressed, access to data from the billions of smartphones and personal assistants is helping to address the rest. All this is enabling a new conversation between man and machine.
The invisible hand of AI
While the examples above are tangible ways we can observe the role of AI in our lives, we should also consider its invisible reach. Our lives are already being shaped by algorithms in numerous ways without our even knowing.
In her fascinating TED talk, Zeynep Tufekci describes how AI and data are being used to build persuasion architectures that shape our choices and our lives in ways we cannot fathom. They are deployed at the scale of billions. She talks of companies using algorithms that no one can understand anymore – “giant matrices…millions of rows and columns, and not the programmers and not anybody who looks at it…understands anymore how exactly it’s operating”.
This frantic dance of ever-increasing sophistication and complexity is leading to unforeseen consequences, not just in the realm of our online buying habits, but across the world’s financial and political systems, and society as a whole.
Increasingly, as awareness grows, people will want to feel they can trust the invisible and abstract entities that are shaping their lives. Perhaps we’ll see the emergence of branded algorithms. While the likes of Alexa and Siri are associated directly with the services they represent, will we see an “Intel Inside” approach to branding individual algorithms, so people can choose which AI they invite into their lives based on its values and characteristics?
Ethical source code
The truth is, AI is only just getting started and as it matures, its effect in the world will pose an increasing number of moral conundrums.
What’s needed is an ethical source code that all future AI should be built on. It needs a version of Asimov’s laws of robotics – not because of any techno-utopian dream but because it’s good business.
After all, in the digital post-mortems of the future, companies, governments and individuals will be held to account on how they used AI’s awesome power.
Making sense of it all
There’s no question that AI is an unstoppable technology that presents opportunities for brands and agencies alike, in both the short and longer term. While much has been said about automation, we should also start to consider AI’s creative potential.
Project Soli should give the UX community plenty to think about. How can we harness it as an interface to create new products or brand experiences – in retail or at events, for example?
Brands will inevitably need to adapt their video strategy in the light of AI’s new ability to generate realistic environments and actors. Will companies be able to take their content marketing to the next level as production costs are slashed and new creative avenues are opened up?
While marketers need to focus on the day job of supporting sales and building the brand, it will be interesting to see how these tools advance and become embedded in our lives.