Traditional aerial broadcast and cable distribution systems, which still account for most legacy video distribution deployments worldwide, are built on on-premises applications or dedicated data centers and serve the narrow purpose of licensing and distribution.
A major challenge with this approach, especially for personalization, is that the data flow is a one-way street. These systems give the provider almost no mechanism to learn about the viewer, and the viewer has no way to express interests and preferences, certainly not through the TV or back up the cable. Because of closed technology and processing limitations, the systems also cannot scale to new capabilities or keep pace with customer expectations.
Increasingly, we're seeing capital investment in, and an infrastructure shift toward, a subscription-based model, which lets providers start using the benefits of the cloud. Cloud technology delivers the scalability and processing power needed, along with the capacity to store enormous amounts of content and information; in practice, scalability limits largely disappear. With cloud-based video processing, time-to-market for new features and capabilities is greatly reduced, and providers can learn a great deal about their customers and move toward delivering a truly personalized experience.
With the stage set for a cloud- and subscription-based infrastructure — especially in an open platform environment — many business model barriers within the industry are being torn down, creating a very nimble and responsive market. The level of business model creativity is becoming supercharged and very exciting.
The Next Level of Personalization
There are two significant aspects of personalization within a cloud-based ecosystem. The first is capturing all the relevant actions a viewer takes within the medium, so providers can personalize the viewing experience. We're just scratching the surface of personalization capabilities now, but more is on the horizon.
The second part of the equation is being able to understand the viewer deeply enough to offer content that is of direct interest to them. We're beginning to see this, but again not at the level consumers are expecting. As the cloud and analytics become more robust, we'll see powerful new tools for customer interaction as well as significant steps in content metadata processing — bringing all of the points together for the viewer.
Being able to utilize data to make business decisions regarding individual subscribers is hugely important for all sectors of the market. Major apps like Netflix and Hulu exemplify progress in this direction. Their content curation and recommendation capabilities are based on viewer-indicated preferences and viewing history.
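As a rough sketch of the idea behind such recommendation systems (the titles, genre tags, and scoring rule below are invented for illustration and are not how Netflix or Hulu actually work), a minimal content-based recommender scores unwatched titles by how well their attributes overlap with the viewer's history:

```python
from collections import Counter

# Hypothetical catalog: each title tagged with simple genre labels.
CATALOG = {
    "Stranger Things": {"sci-fi", "thriller"},
    "The Crown": {"drama", "history"},
    "Black Mirror": {"sci-fi", "drama"},
    "Chernobyl": {"drama", "history"},
}

def recommend(watch_history, catalog, top_n=2):
    """Score unwatched titles by overlap with genres the viewer has watched."""
    # Count how often each genre appears in the viewer's history.
    genre_weights = Counter(
        g for title in watch_history for g in catalog.get(title, set())
    )
    scores = {
        title: sum(genre_weights[g] for g in genres)
        for title, genres in catalog.items()
        if title not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["Stranger Things", "Black Mirror"], CATALOG))
```

Production systems replace this hand-counting with learned models over far richer signals, but the core loop is the same: observe what the viewer engages with, then rank the catalog against that profile.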
There's a lot of churn in the industry now, with subscribers becoming increasingly transient and long-term contracts going away. Cloud-enabled personalization is still in its infancy, but it is already the new over-the-top (OTT) reality facing cable operators and telcos. To attract and keep customers, they too need to develop a deeper, more personalized understanding of the viewer.
Scaling for AI
Artificial Intelligence (AI) has become a hot topic. There are many ways in which AI will enhance and advance the industry — with capabilities not yet even imagined — but we do know the use of AI in personalization will become very significant in the creation of richer metadata.
In a cloud-based ecosystem with nearly unlimited processing power, we'll be able to take a TV show or movie and use AI and contextual awareness to break the content down into very deep metadata. Current content metadata lists only the fundamentals: title, release year, actor and director information and a simple plot description. That's it.
We're now seeing systems that can run video through an AI engine and draw out a far richer set of augmented metadata. The engine not only matches and supplies all the standard data points but also mines a deep layer of detail about the content itself.
Imagine a detailed metadata set that lists all the specific shot locations within a film, all the models of cars driven by the hero, the clothes brands worn, and the sub-narratives or topics discussed. This opens up the opportunity to find content related to those subjects, either driven by viewer search or AI-driven curation. This, in turn, will significantly enrich the viewer experience and help providers keep customers interested and engaged.
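A minimal sketch of what such an augmented record might look like, alongside a naive keyword search over it. The film, field names, timestamps, and extracted values here are all invented for illustration; a real AI enrichment pipeline would define its own schema:

```python
# Hypothetical record: standard metadata plus AI-extracted "deep" fields.
movie = {
    # Standard metadata (what catalogs typically carry today)
    "title": "Heist at Dawn",
    "year": 2021,
    "director": "A. Example",
    "plot": "A retired driver takes one last job.",
    # AI-augmented metadata, keyed by timestamp in seconds
    "scenes": [
        {"t": 312, "location": "Lisbon, Portugal",
         "objects": ["1968 Ford Mustang"]},
        {"t": 947, "location": "Marrakesh, Morocco",
         "objects": ["leather jacket"],
         "topics": ["forgery", "art auctions"]},
    ],
}

def find_scenes(record, keyword):
    """Return timestamps of scenes whose extracted fields mention the keyword."""
    keyword = keyword.lower()
    return [
        s["t"] for s in record["scenes"]
        if any(keyword in str(v).lower() for v in s.values())
    ]

print(find_scenes(movie, "mustang"))  # timestamps of scenes featuring the car
```

The point is that once locations, objects, and topics exist as structured data per scene, both viewer-driven search and AI-driven curation become simple lookups rather than guesswork.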
With AI-derived metadata and a cloud-based platform, the viewer can look up information of interest without ever leaving the platform. Providers can present that information within the content the viewer is watching, reducing the risk of the viewer drifting away to another platform. From the viewer-experience perspective, this is a massive benefit of AI.
The other side of this capability is that the provider can now enable an e-commerce transaction. A viewer could notice a jacket in a movie, click to see detailed information about it and order one for themselves, all within the video platform. Maybe also book a trip to one of the locations or look up local car dealers. Why not?
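One way such an in-player flow could be wired is by keying product annotations to the same scene timestamps as the enriched metadata. Everything below, including the store, product details, film title, and matching window, is a hypothetical sketch, not a description of any shipping platform:

```python
# Hypothetical product annotations: (title, scene timestamp) -> purchasable items.
PRODUCT_ANNOTATIONS = {
    ("Heist at Dawn", 947): [
        {"item": "leather jacket", "brand": "ExampleCo",
         "sku": "EC-1042", "url": "https://shop.example.com/EC-1042"},
    ],
}

def products_at(title, playhead, window=30):
    """Items annotated within `window` seconds of the current playhead."""
    hits = []
    for (ann_title, t), items in PRODUCT_ANNOTATIONS.items():
        if ann_title == title and abs(t - playhead) <= window:
            hits.extend(items)
    return hits

# Viewer pauses near 15:40 (940 s) and taps the jacket on screen.
print(products_at("Heist at Dawn", 940))
```

A tolerance window around the playhead matters because viewers rarely pause on the exact frame the annotation references; the lookup just needs to surface the nearby item.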
Redefining the Industry
Content in the very near future will not be served up based on overly simplistic “like” and “dislike” associations, but on why content is liked or not: on the viewer's real interests and engagement. And on multiple levels, this is just the beginning.
Now imagine all the above content delivery capabilities integrated with other systems such as customer web browsing, online shopping and social media involvement. These next steps will be incredibly engaging and rewarding for the consumer and will enable providers to create an exceptional personalized experience. This integration will move far beyond the point-in-time interaction to allow a viewer's background, interests and ambitions to shape the content and advertising presented to them.
Features and functions are now taking a backseat to the total viewer experience. We're moving from a feature-driven conversation to a value-driven conversation. Over the next three to five years, we'll begin to see these capabilities emerge within the industry and they will change everything. Open, cloud-based video platforms will enable the true promise of “personal video everywhere” and will take individual business models, the user experience and indeed the entire market to a whole new level.
Shankar Nagarajan is vice president, product management at SeaChange.