Didimo is featured in PlugAndPlay.
As the world becomes increasingly digital, brands are looking for innovative ways to communicate visual information. Synthetic Media first gained popularity in movies and gaming, but progressively more industries are finding use cases as the technology matures. The Synthetic Media market is still nascent, with arguably two types of companies: high-end creators with a proud heritage of producing high-quality computer graphics, and start-ups that have identified an opportunity to leverage new artificial intelligence tools for content creation. In the near future, applications for Synthetic Media could entirely transform how brands create content and how consumers consume it.
What is Synthetic Media?
Over the past decade, there has been a shift in how the world creates and consumes content. We stand on the brink of a new era in which our devices will be both the tools and the creators. The increase in streaming services, the demand for content and the rise of artificial intelligence have created a new media category called Synthetic Media. Synthetic content is digital content that is synthesized by devices and can take the form of articles, books, imagery, movies, scripts, and more. Synthetic Media will democratize content creation, making it easier and more cost-effective to produce large amounts of audio, video or image content without the need for costly, physical processes. To support generative content, artificial intelligence will need to understand context, flow, human emotions, natural language, perspectives, scripts, tempo, tone and much more. Examples of generative content are emerging across multiple modalities; OpenAI, for example, used text generators to create the world's first synthetically generated text-based games.
The three main components of Synthetic Media include:
Digital Humans (The User): This requires technologies and capabilities to create, animate and interact with digital humans. These capabilities include capture, modeling, features, encoding / decoding, rendering and animation.
Digital Realities (The Environment): This layer requires technologies to capture, model, personalize and synthesize different worlds and stories.
Digital Experiences (The Applications): This requires an understanding of user interactions and/or experiences for different types of applications, such as displays or emotion detection.
Emerging Applications Are Driving the Need for a Standard Format
The first commercial applications for Synthetic Media will likely be in the form of Digital Humans that can support corporate communications, call centers, or e-commerce. As Digital Humans evolve and become a more central part of our lives and virtual presence, they will re-create human interaction and reimagine our relationship with technology. In the future, Digital Humans may serve as our interface with an increasingly virtual world, and as such, it is vital to create trust and authenticity with the Digital Humans for the emerging e-society. To facilitate this important sense of trust, the industry must coordinate and define a shared format and interfaces for all the components of digital humans, including geometry, facial animation, emotion, motion / posture, speech, personality, haptics, and interactions. These facets combine to offer a digital representation of our likeness and our actions, and we will need a standardized format to digitize ourselves while also maintaining our privacy.
InterDigital, a global market leader in advanced wireless, video and artificial intelligence technology research, has a vision that every human will have a digital double as their avatar for the e-Society, which will become the interface for communication with the emerging world. InterDigital's teams have developed solutions that demonstrate how quickly a person's head can be captured volumetrically and rendered in real time, taking about 20 minutes to achieve a life-like animated avatar that would otherwise take artists and graphic designers hours or even days to complete. Leveraging AI capabilities, the team will be able to equip the avatars with more authentic expressions and eventually voices. As these capabilities advance, we may see the emergence of digital avatars that are indistinguishable from their human models. This new paradigm brings many opportunities and new concerns, and InterDigital anticipates that formats for digital humans will become more standardized in the future.
As humans, we are incredibly perceptive and adept at identifying fake 3D-rendered humans. This well-known barrier, the uncanny valley, is only reinforced if the digital human is someone we know well, such as a family member, so the rendering and the modeling are both key.
One company tackling this barrier is Didimo: their technology can create high-fidelity 3D characters from a single selfie or scan. Didimo has compressed the entire traditional process into a fast and easy way to create a lifelike, digital version of yourself. With a single selfie uploaded to the cloud, Didimo is able to generate an accurate representation of the human face and map the abundance of human facial expressions onto a personalized avatar.
Didimo can enable users to experience a new level of ownership of, and responsibility for, their online identity. They can go beyond physical realities, expressing their true selves and experiencing things they wouldn't otherwise be able to. Apart from the obvious use cases in the online world, easily accessible 3D characters could transform every industry, from education and healthcare to retail shopping experiences. Ultimately, at scale, this will change how people connect in the future.
The applications for this technology are incredibly widespread, especially as our digital identities increasingly come to represent us in an ever more digitized world.
How Synthetic Media can advance in 2021 and beyond
Today, synthetic media is being used to support a wide spectrum of applications, ranging from mass corporate communications presentations to individual interactions with AI-enabled virtual assistants. While the target audiences for these applications sit on opposite ends of the spectrum, the strengths of synthetic media support both, thanks to the characteristics of the underlying technology.
Digital Humans:
Basic content production assistance (multilingual content generation that eliminates the typical artifacts of traditional voice-over).
Digital humans today are limited to upper-body representations with a limited range of interaction, motion, and speech. These slightly mechanical representations are interactive talking heads used for customer service, corporate communications, and sales. They will evolve to incorporate more upper-body movement.
Digital Realities and Experiences:
Researchers will work on creating worlds and interactions that have:
Digitally synthesized stories/worlds
Infinite gameplay or content
Digital twins (a Mirror World)
To cross the uncanny valley, researchers will work on:
Improving facial expressions and emotional affect
Adding 3D movements
Creating fluid motion
Improving speech synthesis
Adding full-body animation
Improving content creation using Artificial Intelligence (specifically with Generative Adversarial Networks)
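To make that last point concrete: a generative adversarial network pits a generator, which synthesizes content, against a discriminator, which tries to tell real data from synthetic data. The toy NumPy script below is a didactic sketch rather than any production system: a two-parameter linear generator learns to imitate a simple 1-D Gaussian "dataset".

```python
import numpy as np

# Toy GAN on 1-D data: the generator learns to map noise z ~ N(0, 1)
# to samples resembling the "real" distribution N(4, 1.25).
#   Generator:     G(z) = a*z + b            (parameters a, b)
#   Discriminator: D(x) = sigmoid(w*x + c)   (logistic regression)
# Didactic sketch only; real GANs use deep networks on images or audio.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(5000):
    # Sample a minibatch of real data and generated (fake) data.
    real = rng.normal(4.0, 1.25, 64)
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b

    # Discriminator step: ascend on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    gw = np.mean((1 - d_real) * real) + np.mean(-d_fake * fake)
    gc = np.mean(1 - d_real) + np.mean(-d_fake)
    w += lr * gw
    c += lr * gc

    # Generator step: ascend on log D(G(z)) (non-saturating loss).
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    ga = np.mean((1 - d_fake) * w * z)   # chain rule: dfake/da = z
    gb = np.mean((1 - d_fake) * w)       # chain rule: dfake/db = 1
    a += lr * ga
    b += lr * gb

# After training, generated samples should cluster near the real mean of 4.
print(f"generated mean after training: {np.mean(a * rng.normal(size=10000) + b):.2f}")
```

Real synthetic-media GANs replace these two linear models with deep networks and operate on images, video, or audio, but the adversarial training loop has the same shape.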
Ultimately, this work will support custom-tailored movies or news programs for each individual viewer, digital twins that augment our ability to interact with an increasingly digitized and socially networked world, and interactive virtual assistants that are almost indistinguishable from our human friends. All will be built on a foundation of synthetic media.