Emily Christner is Chief Product Officer at Trusted Media Brands. A 20-year veteran of the media industry, Emily held marketing and general manager roles before taking over product development following a merger that added streaming and licensing platforms to TMB’s portfolio. Today, she oversees multiple lines of business, a responsibility that has driven her to master the delicate art of prioritization in an industry that, even by AI-age standards, is particularly prone to tectonic paradigm shifts.
Speaking of AI, Emily’s position affords her an opportunity to observe the rise of large language models more closely than most. In our conversation, she talks about the challenge of distinguishing between AI- and human-generated content and the global need for legal and ethical governance, particularly in industries like healthcare, criminal justice, and news. She also calls on media companies to band together and develop a legal framework to protect publisher content and maintain trust in news content as generative AI plays an increasing role in content creation.
It requires a very deep understanding of each line’s business as well as the potential cascading effects that doing one project before or instead of another can have on the company at large. In a publishing business, resources are in short supply, so well-informed prioritization often makes or breaks our business.
I try to stay close to and work with the business owners or the P&L owners for each of those business lines. We’ve got someone who runs the streaming business and understands, first, how that business fits into the overall priorities for the company. It’s common for smaller business lines to get fewer resources than the biggest business lines. But if there’s a project that could triple revenue for a smaller business line versus one that could grow a bigger business line by 10 percent, you have to look at the absolute numbers and determine what’s going to grow the bottom line of the company in the biggest way.
It comes down to understanding the goals, the financials, the market opportunity, and what consumers are looking for with regard to each business line. Particularly in media companies, you often have business owners coming to you and asking for things they’re certain will help them reach consumers. Nothing can get onto the product roadmap without a thorough understanding of what the business impact is going to be, and that burden falls to the business owner.
In some cases, the business owner might be the person running a particular business line. In other cases, the business owner is a product owner as well, or a product manager. This is probably very specific to media companies because we don’t have a lot of different product roles; we’ve got product managers and that’s really it. So that product manager might, based on what they see in the market, come up with an idea themselves and drive the business case for it. Or, they might look to other parts of the business.
I think that’s a long way of saying we all need to be aware of market conditions, how our consumers are consuming our content on various platforms, and what the trends are within those platforms — social, search, and the like are changing constantly. The key is making sure we’re using our resources in the best way possible at any given time, given that they’re very limited in some cases.
It depends on the product line. Websites are very well-measured, but cross-platform measurement is extremely difficult. Streaming numbers, whether that’s advertising or content consumption, are also somewhat difficult to measure; it’s getting a bit better, but standardization there is very far behind.
I think digital has always been pretty easy, but other platforms — television ratings, for example — are a different story. We also have FAST channel consumption, content consumed within our apps, and things like that. There are just so many challenges. Every platform’s different, and every platform has different rules. So for anyone who’s used to a digital business where they can get easy answers — versus a streaming business where the answers aren’t easy, or podcasts, as you mentioned, or other platforms where it’s just sort of the wild west — it can be really frustrating.
We try to combat that by not only understanding those numbers as best we can, but also taking part in conversations with those companies when it comes to the challenges we’re having when we’re trying to make our business decisions. It can be really hard to make business decisions on limited information. Sometimes you just have to look at the relative trends within each of those platforms, even if you don’t have the full picture.
I do think there are people at these companies who want to understand the challenges their partners are having and, to the extent that they can, influence change. But that change is often very slow-moving. Again, it comes back to using the information you have to make the best informed decisions, but realizing that it’s imperfect information. That can be tough, especially for people who are used to having all the information at their disposal.
If you look back to television ratings, it’s kind of amazing how far we’ve come in the measurement space because, in the early days, there were so many challenges with panel measurements from Comscore and Nielsen. When it came to television rating points and things like that, people were making millions of dollars of investments based on imperfect information. When we got to digital, we saw how great it could be. And then some of these “newer” platforms have kind of brought us back to the earlier days of not having all the information.
There have been a couple of recent examples of media companies getting burned with some of the experimentation that’s been going on in the AI space. I hate to mention any companies by name because I think, unfortunately, in this day and age, it could happen to almost anyone quite easily.
Generally speaking, a lot of companies outsource content creation, whether that’s to freelancers or to companies that create affiliate content and things like that. You have to be really, really careful about who you’re working with and have well-established internal guidelines that apply to anyone you’re working with. Their content becomes your content, and the end user doesn’t care whether you created it, a freelancer created it, or a third-party company created it. At the end of the day, it’s coming from your brand, so you can’t be careful enough.
There was a volcano in Iceland that erupted recently, and I was there for a conference when it started becoming an issue. All of us sitting at this conference were getting messages from family and friends all over the world saying, “Are you OK? We’ve seen these crazy pictures of the volcano erupting.” And it wasn’t erupting; there were a lot of deepfakes out there. There was a lot of misinformation and certainly a lack of transparency.
I think, in that case, there were probably people setting out to misinform, but I don’t think any news organization sets out to misinform its viewers. Certainly, lots of media companies are using or experimenting with AI to generate imagery to accompany stories, but it’s the responsibility of any company to make sure those images aren’t building on or putting out false information because it can create unnecessary panic around the world, like in the case of the Icelandic volcano erupting.
Transparency is critical because the ethics around AI-generated content are not yet clear or widely agreed upon, and it can be hard for the average person to distinguish genuine, human-generated content from AI. For instance, it might be fine to create an AI-generated image of a ham recipe, but I think you need to mark it as such.
That’s a pretty simple example; an AI-generated ham is unlikely to harm anybody, but when you apply this to other areas of publishing news, it gets dicey. Health publishing is a good example: transparency is critical because fearmongering will always prevail. Society will have a lot of trouble adjusting to this technology if, early on, we’re not being really careful about how it’s being used. It can get really scary really quickly.
I think that’s where this lack of governance or standards is dangerous because everyone’s kind of making things up as they go along. That’s why it’s really critical for organizations to have better internal governance and to work with larger organizations — and the government, really — to develop broader regulation and governance for AI.
There’s a big debate there. Whose responsibility is it to make sure the underlying data used in AI is not biased? And whose responsibility is it to establish governance?
My personal answer is that everyone has a role in this. You can’t rely on somebody else, whether that’s the government, another group within your organization, or whatever the case may be. Everyone needs to be vigilant about it.
I mean, misinformation in the news is one thing. When it comes to healthcare or criminal justice, it can get really scary. There are a lot of possible negative outcomes: misdiagnoses, wrongful convictions, etc. I think the same would be true of content creation in those industries. If someone gives bad financial advice or health advice, that could certainly have very negative consequences for consumers.
Again, I don’t think their intention is necessarily to mislead consumers, but if they’re not following guidelines or the information they’re using to create that content is just wrong because the inputs are wrong, then it sort of creates this downward spiral: what you started with was wrong and so the outcome is wrong.
I don’t want to be all negative here: there are obviously a lot of positive uses for AI in content creation and delivery, but you can start to see how things could get out of control really quickly if we don’t take a step back and put regulations and processes in place to ensure we use these tools properly.
For product development, you can use it in so many different ways. You can use it to do simple things like transcribe and automatically share meeting notes so somebody doesn’t have to sit there and write. I’m guessing the software you’re using right now to record and eventually transcribe this interview has similar underlying technology. It can perform some of the more mundane or easier tasks so you can focus on the bigger picture. An example would be using AI to help jumpstart the writing of product requirements; if you look at that from a content creation perspective, research is faster with AI tools.
Again, though, if you’re in the news business, you shouldn’t be looking to ChatGPT to give you an answer because you don’t know whether or not, especially at this point in time, the information you’re getting is real. So you could use it as a starting point and then do your own research as to whether or not something is true. I think it can get you from point A to point B faster as long as you’re really careful.
TMB will look to AI to help source content. We have a lot of user-generated content that we collect and, with the user’s permission, repackage and license out to other media companies, working with people across the world. That process can be done a lot faster because of AI.
As another example, somebody might create something that has usage rights issues — with regard to music or something else — that needs to be removed. There’s AI-backed software that can do that really quickly. No matter what your function, there are AI tools cropping up every day to help you improve your workflow.
This is moving fast and it isn’t going away. Everyone should educate themselves, and everyone should embrace it. People, particularly in the content creation fields, tend to worry, “Is this going to replace my job?” It’s not going to replace your job, but if you don’t embrace this technology and learn how to use it, your job might be replaced by someone who does know how to use it. That’s how we talk about it internally, and I’ve heard a lot of people say that in different industries.
At our company, we have an internal task force that works on everything from tool vetting through legal. We have an approved tools list for people to experiment with. We’re holding show-and-tell sessions, we’ve brought in external companies to do trainings, and we have a product roadmap specific to AI. We’ve also done a company-wide hackathon to get people involved in thinking about it. I think that’s the real key to getting people to adopt it versus just sitting there and watching a training. It needs to be demystified, and there’s not one approach to doing that.
We see the potential benefits as two-pronged: we want to encourage experimentation and participation, and we want to think about different products that can help our consumers and engage them more deeply, as well as things that can improve processes and workflows.
AI is only as good as the underlying data. It’s the responsibility of everyone, not just the data scientists and engineers, to make sure the data they’re building on has been vetted and the outputs aren’t unintentionally perpetuating stereotypes.
I think step one is employing a more diverse workforce. Be willing to have open conversations internally about biases and things that might come up in your processes. When your business is deploying AI, establish responsible processes to mitigate bias. That can include adopting certain technical tools, instilling operational practices, or even bringing in a third-party audit. A lot of tech companies have published recommended practices from which guidelines can be drawn. There’s not one answer; there are a lot of different ways you have to go about doing this. And when you do find bias, it’s not enough to just change an algorithm; you’ve also got to think about how you improve the human-driven processes that underlie it.
Finally, we need to commit to investing more time and resources in this space because it is pretty complicated. It is still pretty new, but it’s also moving very fast. So we need to be acutely aware of the risks and pull together multiple approaches to mitigating them.
The goals include everything from education, to internal guidelines and legal vetting and frameworks, to coming up with an actual roadmap of both process efficiencies as well as consumer-facing products.
We’ve gone about that through many different methods. We had a hackathon with 10 teams and 80 people across the company participating. It doesn’t mean we’re going to implement all 10 of those ideas, but it really got everyone actively working on a real-life project and thinking about how AI could be used.
Now, we kind of use our regular prioritization processes to say, “Here are the two or three ideas that are going to either increase revenue directly, build our audiences and therefore increase revenue, or save us time and money on resources.”
I think companies are having to have difficult conversations. One thing that’s been out there a lot lately is whether or not media companies want Google to scrape and index their content and use it in the process of AI content generation. There are some minuses there. For instance, if you don’t allow Google to scrape or index the content on your site for AI, will they not index it at all? Will it hurt you in regular search results? I’m not saying there’s any evidence that’s happening, but that’s the type of question people have to answer and explore when they think about these decisions.
I think one of the only ways it’s going to work is if publishers come together to establish guidelines and standards. I don’t know if any one company can do that on its own. This is a fundamental issue affecting the publishing industry, and publishers are starting to, but they’re going to have to really band together to come up with guidelines and frameworks and require any outside company that wants to use their content for its own ends to adhere to them. There are so many considerations that have to be discussed, and I think there’s power in numbers.
At the end of the day, any brand’s audiences are built on a foundation of trust. So if anything that’s offered to consumers is built in a more efficient way — whether it’s AI-generated images or AI-sourced content for licensing or repackaging — it needs to be communicated as such. It’s great that brands can create more content faster, but if consumers don’t trust the information they’re getting, then it doesn’t matter how much extra content a brand puts forth.
I think 2023 was a really big learning year, and 2024 is going to be a really big doing year. Companies that don’t start employing AI in various aspects of their business will get left behind by those that do. So I think we’re starting to see some of the results of early experimentation and use shared more broadly, which is really exciting. I think that’s only going to accelerate in 2024.