TL;DR
- The latest Future Today Institute Tech Trends report finds that the entertainment industry is at a tipping point, where new technologies are allowing exploration of completely new forms of expression.
- Haptics will allow us to engage all of the senses when we watch our favorite characters on screen. Until consumer devices offer such features at scale, this enriched experience can be found at select venues such as the Las Vegas Sphere.
- AI will allow for the design of new production processes for deconstructed storytelling.
- Virtual reality applications are evolving to a new experience category: free roaming, interactive adventures that can be experienced with others.
READ MORE: 2023 Tech Trends Launch (Future Today Institute)
The imaginary worlds in Disney and Universal’s theme parks will become increasingly immersive, predicts the latest Future Today Institute Tech Trends report.
Storytelling experiences themselves will move to a collaborative model, where the audience has varying degrees of impact on how the narrative unfolds. This in turn opens up the opportunity for repeated engagement with entertainment franchises.
“The entertainment industry is at a tipping point, where new technologies are allowing exploration of completely new forms of expression,” says the FTI’s chief executive, Amy Webb.
The report itself spans multiple industries and the section on entertainment alone runs 70 pages. NAB Amplify has edited the highlights that the FTI has culled from hundreds of sources, including securities filings, patents, academic research, market research firms, white papers, and the press.
Synthetic Influencers
The influencer economy, estimated to be worth $16 billion last year, is giving creators control over their businesses, with the consequence that power is shifting away from social media platforms.
FTI thinks the influencer economy is poised to eclipse traditional marketing and advertising channels, but that virtual or synthetic influencers are about to muddy the waters.
Some of these computer-generated characters have already amassed social followings in the millions, agency representation, and partnerships with brands, but most importantly they are “unencumbered by the demands and limitations of human influencers.”
Remote Revolution
The content production process is being upended in a number of ways, among them the build out of remote and decentralized workflows away from the traditional production hubs.
Instead, talent in regions like New Mexico, Turkey, Australia, and Southeast Asia benefit from being able to connect to productions based in other parts of the world. At the same time, the report sees a virtuous circle: productions are made faster and more cheaply, and, paired with global streaming channels, they get the chance to “showcase a greater variety of voices from different cultural and demographic backgrounds.”
Participating in the Story
Spatial audio, volumetric video capture, and haptics will increasingly allow us to hear, feel, and see the action, “transforming us into participants rather than spectators of the events happening on our screens.”
What’s more, as the capabilities of our devices expand, consumers don’t just watch their favorite content; they experience the narratives with all, or most, of their senses.
“However, as consumers become accustomed to multisensory engagements, and enabling hardware becomes more accessible, expectations might shift in other areas of entertainment. This provides additional layers for storytelling: What does a location smell like? Where is the sound coming from? Is it windy or hot? Creatives may need to design olfactory, sense, and spatial elements, just as sound and production is designed now,” the report proposes.
Incorporating these aspects in storytelling will also potentially help bring viewers back into the cinema, where the sensory experience can be better controlled and the necessary hardware can be made available.
Customized Content
Stories are evolving from finite products into flexible formats consisting of a variety of modules that can be combined in a near-infinite number of ways. AI-assisted writing can adjust plotlines automatically to fit the viewer’s taste profile, based on data such as a person’s past viewing choices, browsing history, and favorite online publications.
Producing these “modular narratives” in practice requires shooting exponentially more material than linear storytelling does.
Naturally, this inflates costs and production time. It also changes the kind of control that directors, producers and writers can exercise over their product.
“Their work becomes an environment and narrative setup in which a variety of actions can take place — similar to what a game designer provides,” the report suggests.
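The report stays at the concept level, but the module-and-taste-profile idea can be pictured as a small data structure plus a selection step: a library of pre-shot story modules tagged with themes, scored against a viewer’s taste profile. The Python sketch below is a hypothetical illustration; the fields, theme tags, and greedy scoring rule are assumptions made for clarity, not anything the report specifies.

```python
from dataclasses import dataclass, field


@dataclass
class StoryModule:
    """One pre-shot narrative unit, tagged so it can be assembled later."""
    module_id: str
    follows: str | None                     # module that must precede this one, if any
    themes: set[str] = field(default_factory=set)


def assemble_narrative(modules: list[StoryModule],
                       taste: dict[str, float],
                       length: int) -> list[str]:
    """Greedy sketch: at each step, pick the valid next module whose themes
    best match the viewer's taste profile (theme -> affinity weight)."""
    chosen: list[str] = []
    last: str | None = None
    for _ in range(length):
        candidates = [m for m in modules
                      if m.follows == last and m.module_id not in chosen]
        if not candidates:
            break
        best = max(candidates,
                   key=lambda m: sum(taste.get(t, 0.0) for t in m.themes))
        chosen.append(best.module_id)
        last = best.module_id
    return chosen


# Example: a viewer whose history skews toward mystery gets the slower reveal.
modules = [
    StoryModule("opening", None, {"mystery"}),
    StoryModule("car_chase", "opening", {"action"}),
    StoryModule("slow_reveal", "opening", {"mystery", "drama"}),
]
print(assemble_narrative(modules, {"mystery": 0.9, "action": 0.2}, length=2))
# -> ['opening', 'slow_reveal']
```

In a real production the ordering constraints and scoring would be far richer, but the shape of the problem, a graph of modules plus a per-viewer weighting, stays the same.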
It also questions whether personalized content-on-demand will touch people in the same way as today’s movies. If everyone consumes different versions of a narrative ecosystem, the foundation for a broader societal discussion shrinks or changes, possibly hindering the exploration of important, controversial topics.
Two-Way Storytelling
We will see more massive interactive live events, or MILEs: hybrids of TV shows and video games with a storyline that unfolds continuously over several weeks, where viewers can interact with the livestream to influence the action.
Different stories will lend themselves to different degrees of relinquishing control and different forms of consumption, opening up doors for endless experimentation. This new hybrid will also cross-pollinate audiences between gaming and streaming and create new business opportunities for existing titles on both sides.
Another advantage of participatory narratives: What happens will be novel and different each time an experience is launched, keeping the fan community continuously engaged.
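The report does not describe how a MILE’s interaction layer would work, but the basic loop it implies, where viewers vote during a window and the tally steers the next story beat, can be sketched roughly as follows. Everything here (the vote source, the choices, the window length) is a hypothetical stand-in for a real livestream backend.

```python
import random
import time
from collections import Counter


def run_vote_window(get_vote, choices, window_s=30.0):
    """Collect viewer votes for `window_s` seconds and return the winning
    story beat. `get_vote` is any callable that returns the next incoming
    vote (one of `choices`) or None when no vote is waiting."""
    tally = Counter()
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        vote = get_vote()
        if vote in choices:
            tally[vote] += 1
    # Fall back to the first choice if nobody voted during the window.
    return tally.most_common(1)[0][0] if tally else choices[0]


# Simulated vote stream, for illustration only.
beats = ["rescue the informant", "follow the money"]
winner = run_vote_window(lambda: random.choice(beats + [None]),
                         beats, window_s=0.1)
print("Next beat:", winner)
```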
AI Voice Dubbing
AI systems can now take a movie’s dialogue and dub it into multiple languages, re-creating actors’ original voices (Val Kilmer’s on-screen reunion with Tom Cruise in Top Gun: Maverick relied on a vocal clone of Kilmer’s voice). With synthetic media applications adjusting lip movements to fit the spoken words, authentic localization of content can now be achieved quickly and cost-efficiently.
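The report describes the outcome rather than the pipeline, but the workflow it implies has a recognizable shape: transcribe the original dialogue, translate it, synthesize the translation in a clone of the actor’s voice, and re-time the on-screen lip movements to the new track. The Python sketch below uses placeholder stubs for each stage; none of the function names refer to a real product or API.

```python
from dataclasses import dataclass


# Placeholder stages. In a real pipeline each would call a speech-to-text,
# machine-translation, voice-cloning TTS, and visual re-sync model or service.
def transcribe(audio_path: str) -> str:
    return "original dialogue"

def translate(text: str, lang: str) -> str:
    return f"[{lang}] {text}"

def synthesize(text: str, voice_reference: str) -> str:
    return "cloned_dub.wav"

def resync_lips(video_path: str, dubbed_audio: str) -> str:
    return "localized_cut.mp4"


@dataclass
class DubbingJob:
    source_audio: str      # original dialogue track
    source_video: str      # picture to re-sync
    target_language: str   # e.g. "de", "ja"


def dub(job: DubbingJob) -> str:
    """Illustrative end-to-end flow for AI voice dubbing."""
    transcript = transcribe(job.source_audio)
    translated = translate(transcript, job.target_language)
    cloned_track = synthesize(translated, voice_reference=job.source_audio)
    return resync_lips(job.source_video, cloned_track)


print(dub(DubbingJob("dialogue.wav", "reel.mp4", "de")))
```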
The technology can also amplify the impact of such content: Viewers are able to recall dubbed material much better than content with subtitles.
Push Button Video
Text-to-video solutions enable companies to scale their corporate communication and marketing messages.
While long-form narrative content is far from being produced with a single push of a button, the growing number of end-to-end solutions bundling algorithmic voice and image technologies will be accessible to, and increasingly used by, budget-conscious companies, members of the creator economy, and regular consumers. The ease of use and rapidly improving quality of these tools will further heat up competition for viewers’ attention.
Virtual Concerts Take Off
Virtual reality concerts first gained popularity during the pandemic to make up for canceled shows. Now they are evolving into their own category of entertainment, providing more intimacy with performers and new opportunities for smaller acts.
Megan Thee Stallion’s “Enter Thee Hottieverse” tour is just one example of a recent VR experience from a popular artist who can make more money virtually than on a physical tour.
Monetization opportunities include merchandise and experiences. And the gaming environment presents natural crossover potential. As companies explore opportunities to make VR available to smaller bands, those artists will potentially be able to connect with and monetize their audiences without having to go on tour.
Live acts are also freeing themselves from location-specific constraints. Volumetric capture and ubiquitous high-speed connectivity promise to replicate performances in real time to any venue.
Personalized Theme Park Experiences
Existing theme park customer platforms, mobile apps, and wearables provide an ever more optimized and personalized experience to park visitors, thanks to AI and sensor technologies.
For example, in Hamburg, the “Yullbe Wunderland” experience allows participants to “shrink” to miniature size so they can dive into the world of the largest model railroad ever created. Up to six people wander through a 250-square-meter space, each wearing a backpack computer, a VR headset, a helmet with infrared sensors, a microphone, headphones, and hand and foot trackers. Data from this gear, as well as from 150 cameras in the room, is combined with data from the other users to enable collaborative sensory experiences.
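The report doesn’t say how the venue merges its data streams, but conceptually both a guest’s on-body trackers and the room cameras estimate where that person is, and the system blends those estimates into one shared coordinate frame so every participant appears in the same place for everyone else. A toy version of that blending step, with an arbitrary weighting chosen purely for illustration:

```python
from statistics import fmean


def fuse_position(tracker_xyz, camera_xyz_estimates, camera_weight=0.5):
    """Blend one guest's on-body tracker estimate with the room cameras'
    estimates of the same guest's position (all as (x, y, z) tuples)."""
    cam = tuple(fmean(axis) for axis in zip(*camera_xyz_estimates))
    return tuple((1 - camera_weight) * t + camera_weight * c
                 for t, c in zip(tracker_xyz, cam))


# Example: the backpack tracker says (1.0, 0.0, 2.0); two cameras roughly agree.
print(fuse_position((1.0, 0.0, 2.0), [(1.1, 0.0, 2.1), (0.9, 0.1, 1.9)]))
# -> (1.0, 0.025, 2.0)
```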
VR entertainment experiences use the technology for localized social activities that stimulate all the senses, enabling customers to fully immerse themselves in artificial worlds.
Merging Physical and Virtual Theme Parks
The next frontier is connecting these platforms to data outside the park ecosystem for even greater personalization and user friendliness.
For clues look to Disney+, which announced last October that it would morph into an experiential lifestyle platform that enables data exchange between its park and streaming services, while providing a more personalized experience in both.
Both Universal Studios and Disney have filed patents for systems that transmit data about personal preferences from guest wearables to park entities; staff, for example, could then communicate accordingly or trigger customized experiences. The two companies also have plans to bring their parks into virtual realms.
“If theme parks fully embrace a presence in the metaverse, it could lay the foundation for an entirely new form of experiencing theme parks, one that’s not bound by real-life limits such as lines, hours of operation, or weather.”