Last week I was asked by Digiday to comment on Discovery's announcement that they're going to be aggregating data from linear broadcast, digital (owned) video platforms, and social media engagement, for their upcoming Winter Olympics coverage - all with the aim of building a content valuation model.
They used 28 of my words in their piece, but since it's a really interesting topic, and because 7L provide relevant content valuation services, I've written another 1,200.
Discovery, who will broadcast to most of Europe through their Eurosport channel, plan to blend the following areas of measurement:
Video: The number of videos viewed and the volume of viewing expressed in hours across Discovery’s owned and operated platforms, including free-to-air, pay TV, streaming services, digital, apps and social media, in addition to traditional audience data from its broadcast partners throughout Europe.
Users: The sum of total users across Discovery’s owned and operated platforms, including free-to-air, pay TV, streaming services, digital, apps and social media, in addition to traditional audience data from its broadcast partners throughout Europe.
Engagement: The number of likes, shares and comments across all of Discovery’s digital and social media properties, covering all platforms.
Discovery's measurement model
First of all, this is a laudable attempt to bring together a whole bunch of data that could tell a much more rounded story of their content consumption than the linear TV figures alone.
It's a largely sensible set of data points that have reasonable equivalence across platforms.
It feels very likely that another objective for Discovery will be to come out the other side of the Winter Olympics with 📈 A Big Number that they can sell against (while the data won't be used to underpin PyeongChang 2018 commercial deals, it will inform content valuation models for future Discovery events).
So, if I were a future advertising partner, these are the things I'd want to understand in quite a lot of detail about the model, starting with video...
The number of videos viewed
😐 There are definitely issues with the equivalence of what counts as 'a view' across different web providers and, in particular, social networks. And that's not just about the length of play required for '1 view', but how visible that view is on screen. So... if this were used as the main video metric, I'd be concerned.
the volume of viewing expressed in hours
😀 Much better for cross-platform equivalence. Measuring consumption by time should always be your #1 metric for video. We'd always use that plus dwell time for web properties as the primary driver for valuation (rather than impressions).
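To make that concrete, here's a minimal sketch of what a time-driven valuation might look like. This isn't Discovery's model or our own; the hourly rate and the dwell-time multiplier are entirely made-up placeholders. It's the shape of the calculation that matters: hours consumed drive the number, not impressions.

```python
# Minimal sketch: value content by hours consumed rather than raw impressions.
# All figures below are invented placeholders for illustration only.

HOURLY_RATE = 0.02  # hypothetical value per hour of video consumed, in £

def content_value(hours_viewed: float, avg_dwell_minutes: float = 0.0) -> float:
    """Estimate a value for a piece of content from time-based consumption.

    hours_viewed      -- total hours of video watched across platforms
    avg_dwell_minutes -- average dwell time on web properties, used here
                         as a simple quality multiplier (an assumption)
    """
    # Reward longer dwell times modestly; the 10-minute divisor is arbitrary.
    dwell_multiplier = 1.0 + min(avg_dwell_minutes / 10.0, 1.0)
    return hours_viewed * HOURLY_RATE * dwell_multiplier

# Example: 250,000 hours viewed, average 4-minute dwell on article pages.
print(round(content_value(250_000, avg_dwell_minutes=4), 2))
```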
including free-to-air, pay TV, streaming services, digital, apps and social media, in addition to traditional audience data from its broadcast partners throughout Europe
🤔 Ironically, it's probably the traditional audience data that's the most finger-in-the-air of all of these data points, despite having been around the longest. The breakdown of all of the above will be much more interesting than any munged-together total number.
And then on Users...
The sum of total users across Discovery’s owned and operated platforms, including free-to-air, pay TV, streaming services, digital, apps
🤞 This is good in theory: owned platforms should give you the best available data on user figures. De-duping users who consume the content through multiple channels will be a real challenge though, and is very unlikely to be accurately achieved through tracking code; some element of manual adjustment will have to be applied to make the sum figure credible (which is fine: layering human expertise on top of a data model is something we'd always do ourselves, and encourage).
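As a rough illustration of that de-duplication problem, the calculation tends to have three parts: a naive sum, an estimated overlap, and a manual adjustment applied by a human analyst. The user counts, overlap estimate and adjustment factor below are all invented for the sketch.

```python
# Sketch of de-duping users across owned platforms.
# Every number here is a placeholder; the shape of the calculation is the point.

platform_users = {
    "free_to_air": 4_200_000,
    "pay_tv": 1_100_000,
    "streaming": 650_000,
    "apps": 480_000,
}

naive_total = sum(platform_users.values())

# Estimated share of users who appear on more than one platform
# (a judgement call, not something tracking code will give you reliably).
estimated_overlap = 0.18

# Manual adjustment applied after sense-checking against panel data, etc.
manual_adjustment = 0.95

deduped_users = naive_total * (1 - estimated_overlap) * manual_adjustment
print(f"Naive total: {naive_total:,}  ->  de-duped estimate: {deduped_users:,.0f}")
```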
and social media
🤥 Er, ok. So, Facebook give proper 'user' data and in much better detail than any other social platform. If we park any concerns about how much we trust Facebook data in general, then this is fine.
Twitter has no credible measure of reach for posts or video, though. Impressions are a wildly inaccurate representation of the number of actual people who've seen the content. My estimate would be that real, de-duped people make up maybe 5% of the impressions figure.
And then Snapchat. Well, Snapchat are an official partner of Discovery for the Winter Olympics, so you'd hope Discovery will get some decent data from them. But Snap don't exactly have a great track record for making useful analytics available around brand content.
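One way to handle that variation is to apply a per-platform 'real reach' discount to the reported figures before they go anywhere near a valuation. In the sketch below, only the 5% Twitter figure comes from my estimate above; the reach numbers and the other discounts are placeholders.

```python
# Sketch: convert reported reach figures into estimated real, de-duped people
# using per-platform discounts. Only the 5% Twitter discount reflects the
# estimate in the text; everything else is a placeholder assumption.

reported_reach = {
    "facebook": 2_000_000,   # Facebook's user-level reach figures
    "twitter": 5_000_000,    # Twitter impressions, not people
    "snapchat": 800_000,     # whatever Snap makes available
}

reach_discount = {
    "facebook": 0.9,    # assumed: broadly trustworthy user data
    "twitter": 0.05,    # estimate from the text: ~5% of impressions are real people
    "snapchat": 0.5,    # placeholder: limited analytics, treat cautiously
}

estimated_people = {
    platform: int(reported_reach[platform] * reach_discount[platform])
    for platform in reported_reach
}
print(estimated_people)
```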
Incidentally, you'll see there's no Instagram logo on the 'engagement' section of that infographic; perhaps the Snap deal precludes Discovery from working on that channel for the event.
Which brings us to the engagement piece.
The number of likes, shares and comments across all of Discovery’s digital and social media properties
👍 Yes. By which I mean: correct, and well done. This is the best, and almost the only, way to credibly measure social engagement for content valuation.
We have broad equivalence across almost all social channels, which means it's reasonable to aggregate valuation data by content theme.
For example, almost all football clubs produce matchday line-up graphics and full-time score graphics. These are regular formats, posted a predictable number of times across the season. They're ideal inventory for branded content, and it makes sense to sell the content to a single partner across multiple social channels.
It's key that the valuation process is flexible enough to value these engagements differently across each channel, though.
It's very likely that your overall engagement on Facebook comes from a more globally spread audience than on Twitter. The valuation rates have to reflect the fact that there's huge variation in ad rates by country; you could reach many, many more Indonesians on Facebook than British Twitter users for the same spend, for example.
Additionally, while a 'Like' is basically the same user action on Facebook, Twitter and Instagram, it represents a different amount of effort from the user.
It's really common to see automated commercial models value Instagram higher than any other channel, which is one of the best arguments for adding a layer of human sense-checking on top of the model.
A Like on Instagram comes from the user tapping a massive piece of screen real estate (anywhere on the image). The equivalent action on Twitter or Facebook requires a much more precise and deliberate tap/click from the user. For that reason alone it's important to scale down the Instagram figures that a data model will output.
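Pulling those two adjustments together, a per-channel engagement valuation might look something like the sketch below. Every rate and weight here is a placeholder; a real model would calibrate them against actual ad rates, audience geography and the human sense-checking described above.

```python
# Illustrative sketch of valuing likes/shares/comments differently per channel,
# combining a country-mix weighting with an "effort" weighting (e.g. scaling
# Instagram likes down). All numbers are made-up placeholders.

BASE_VALUE = 0.01  # hypothetical base value per engagement, in £

# Weight reflecting the channel's typical audience geography and local ad rates.
geo_weight = {"facebook": 0.6, "twitter": 1.0, "instagram": 0.8}

# Weight reflecting how much effort the engagement represents for the user.
effort_weight = {"facebook": 1.0, "twitter": 1.0, "instagram": 0.5}

def engagement_value(channel: str, likes: int, shares: int, comments: int) -> float:
    # Shares and comments weighted above likes; the multipliers are arbitrary.
    weighted_actions = likes + 3 * shares + 5 * comments
    return weighted_actions * BASE_VALUE * geo_weight[channel] * effort_weight[channel]

total = sum([
    engagement_value("facebook", likes=40_000, shares=5_000, comments=2_000),
    engagement_value("twitter", likes=12_000, shares=3_000, comments=800),
    engagement_value("instagram", likes=90_000, shares=0, comments=4_000),
])
print(f"Estimated engagement value: £{total:,.2f}")
```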
It's complicated stuff: having run this type of valuation for a number of clients, we know that the data is always a little imperfect, takes a long time to wade through, and requires a seriously good spreadsheet to model and similarly strong expertise to interpret.
One of the biggest overall challenges with digital engagement valuations is that there's no standardisation across the industry, and there probably won't be for some time yet.
In that context, I'm dogmatically purist about our approach; if a potential partner wants to ask difficult questions about how and why we arrive at our valuations, it's essential for us and our clients to have credible answers.
And of course, you can get in touch if you're interested in us valuing your digital and social content inventory...
Infographic first seen by us in this Deadspin article.