Keith Agabob is the former SVP of Product at Altice USA, one of the largest broadband communications and video services providers in the United States. He began his product management career working on ecommerce at iWon (acquired by IAC). Keith then spent 10 years at Cablevision, where he worked on various components within Cablevision’s product suite and its Optimum video service. He led Cablevision’s product team through its acquisition by the French company Altice, the deal that created Altice USA, and drove innovation and value for both its broadband and video subscription services.
In our conversation, Keith shares examples of when he put together business cases using research, hard data, and consumer insights to remove opinion and bias around what to build. He talks about his approach to aligning product strategy with customer experience, including using a framework like OKRs to translate strategy to the execution level via objectives, tactics, and measurements. Keith also discusses how AI-powered features are going to become key product differentiators in the same way that great customer experience once was.
We underwent a complete hardware and software platform refresh for both our internet and video customers, about 4.3 million people throughout the US. At a high level, we wanted an innovative new platform that would elevate both services. At the time, we’d just been acquired by a French company, and we wanted to use this opportunity internally to show our employees that we’re here to drive innovation and that it’s going to be an exciting place to work.
There was a lot of pressure on the project to show an innovative roadmap and how this new company could come together. It was actually a merger as well: the Optimum and Suddenlink brands became Altice USA. Innovation was a big part of what we wanted to do.
My first step in the project was to conduct external customer research. Several features emerged from that research as clear winners to keep on the roadmap. One of them was a voice search capability that let you find video content using the voice remote. We were one of the first to launch that on a wide scale.
I originally got pushback from engineering that it would be too difficult. There was no precedent, so they were concerned it couldn’t fit into the scope or cost of the project. At that time, there was an engineering group in both France and the US. They were aligned on not wanting to take on this scope. But I knew it was a top-five feature from the customer’s POV and something that we were going to be able to market.
I had to go through a process of selling that and getting buy-in. I first convinced my US engineering team that it was doable. They got excited to work on something so innovative. I found the vendor we would use for natural language processing and proved that the capability would fit into our technical architecture. Finally, we flew to France and took 48 hours to build the proof of concept, which sealed the deal.
The research that we did was very robust. We reached out to about 3,000 prospects and consumers via a MaxDiff survey. This essentially pressure-tested the feature set and gave us a stack ranking of which features were going to move the needle. That helped me build the data case that this feature was really important.
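To make the mechanics concrete, here is a minimal sketch of how MaxDiff responses can be scored with simple best-minus-worst counting. Commercial survey tools typically fit a hierarchical Bayes model instead, and the feature names and responses below are invented for illustration:

```python
from collections import Counter

# In a MaxDiff survey, each respondent sees several screens, each showing a
# subset of candidate features, and picks the most and least appealing one.
# Each tuple below is one (hypothetical) screen: features shown, best, worst.
responses = [
    (["voice search", "4K support", "cloud DVR", "parental controls"],
     "voice search", "parental controls"),
    (["voice search", "multi-room viewing", "cloud DVR", "sports stats"],
     "cloud DVR", "sports stats"),
    (["4K support", "multi-room viewing", "sports stats", "parental controls"],
     "4K support", "sports stats"),
]

best, worst, shown = Counter(), Counter(), Counter()
for features, picked_best, picked_worst in responses:
    shown.update(features)
    best[picked_best] += 1
    worst[picked_worst] += 1

# Simple count-based utility: (best picks - worst picks) / times shown.
scores = {f: (best[f] - worst[f]) / shown[f] for f in shown}
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{feature:20s} {score:+.2f}")
```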
We also included a TURF analysis, which stands for Total Unduplicated Reach and Frequency. It’s common in media research to understand how many times you need to run a message, or which product features to offer, to capture a certain share of an audience. We used it at the very beginning as a quantifiable data point showing that this feature would drive 20–25 percent of customers to want the product.
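The core of a TURF analysis is finding the bundle of features that reaches the largest unduplicated share of respondents. Here is a minimal sketch of that idea; the respondent data and feature names are invented, and real TURF tooling handles far larger feature sets, frequency weighting, and sampling error:

```python
from itertools import combinations

# Hypothetical survey data: for each respondent, the set of candidate
# features they said would make them want the product.
respondents = [
    {"voice search", "cloud DVR"},
    {"voice search"},
    {"4K support", "cloud DVR"},
    {"voice search", "4K support"},
    {"multi-room viewing"},
]

def reach(bundle):
    """Share of respondents reached by at least one feature in the bundle."""
    hit = sum(1 for wants in respondents if wants & set(bundle))
    return hit / len(respondents)

features = sorted(set().union(*respondents))
for size in (1, 2, 3):
    best = max(combinations(features, size), key=reach)
    print(f"best {size}-feature bundle: {best} -> reach {reach(best):.0%}")
```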
We coupled this with some market research and competitive research to prove that this was a great differentiating opportunity from our competitors. This all proved that the idea wasn’t just founded on opinion — we had hard data from consumers that they wanted a better way to find content. That was the foundation of the argument and we brought in external experts to show that this was possible. Finding a vendor that was willing to work with us on the architecture and work around our parameters was another key aspect to push it along.
The customer research was super important to make sure that we were working on the right thing in the right order. We spent time figuring out what could be day one MVP scope versus what could wait until day two and beyond, and that customer research was foundational there.
We also kept objectives in mind and had a clear North Star. The highest-level goal was to improve NPS from a consumer point of view compared to our legacy product. That was our North Star metric, even though it’s hard for teams to contribute to NPS directly. We had a breakdown of level two and level three metrics that laddered up into NPS. For example, the UX design team had a customer effort score (CES) metric around the customer journeys that they were building, so that would be their measure.
For engineering, a key part of the success was making sure that we built in time for quality measurement and telemetry. There were custom hardware, new backend systems, and third-party vendors, and we had to ensure each had built-in telemetry. There were a lot of data points between the backend and client side, and we were able to combine them into a composite product health score.
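As an illustration of the composite idea (Altice’s actual signals, weights, and normalization aren’t public, so everything below is invented), a health score can be a weighted blend of telemetry signals normalized to the same 0–1 scale:

```python
# Each signal is a success rate in [0, 1], where 1.0 means fully healthy.
# Signal names and weights here are purely illustrative.
def health_score(metrics, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[name] * min(max(value, 0.0), 1.0)
               for name, value in metrics.items())

weights = {
    "channel_change_success": 0.30,  # client side: tune attempts that succeed
    "fast_app_startup":       0.25,  # client side: cold starts under target time
    "backend_api_success":    0.25,  # backend: non-error API responses
    "vendor_uptime":          0.20,  # third party: guide-data vendor uptime
}

metrics = {
    "channel_change_success": 0.997,
    "fast_app_startup":       0.91,
    "backend_api_success":    0.999,
    "vendor_uptime":          0.9995,
}

print(f"composite product health: {health_score(metrics, weights):.3f}")
```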
Especially when you have a new product that you haven’t worked on before, setting benchmarks isn’t always easy. You can try to get benchmarks from other places. For the voice search feature, we talked to one other company that had done it and also used our consumer research to model it. Sometimes, you can look at tangential, secondary features to get a sense of metrics: is this something that should have a 20 percent or 50 percent adoption rate?
You have to pick it, measure it, and go in knowing you’re going to tweak your benchmarks as you learn more — especially for something that is brand new.
For things that are more day-to-day and foundational, however, you should have clear targets. From a quality point of view, we were shooting for five nines. It was easy to have a benchmark of what we wanted the user experience to be.
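For reference, “five nines” means 99.999 percent availability. A quick back-of-the-envelope calculation shows how little downtime each additional nine allows per year:

```python
# Allowed downtime per year at each availability level.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(2, 6):
    availability = 1 - 10 ** -nines
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} available -> {downtime_min:8.2f} min/year down")
```

At five nines, that leaves barely five minutes of downtime in an entire year.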
This was a new customer management app. Consumers could manage both their internet service and their general account, such as bill paying. One of the biggest components of that was stakeholder management and alignment. There were three business units, each led by an EVP, that I had to align, so I set a clear vision and leveraged the OKR framework. This was helpful to clarify what was important and how we were going to measure success. That became a focal point for getting these three teams together and focusing on the scope. It removed a lot of the opinions around what we should build and made us more outcome- and data-driven.
Another big component was building a robust customer feedback loop. It took a moment to get the project approved, so we used that extra time to do more upfront research. We actually ran a couple of design thinking sessions that helped us mold the future vision of where we wanted the app to be in two to three years, as well as fine-tune the initial scope. We were able to turn those into prototypes, which helped sell the vision and build alignment.
Once we got to the start of development, we baked a usability cycle into our design sprints. That extended the schedule slightly, but it put a standing feedback platform in place. We could easily move a design from Figma into a testing tool and get feedback within 48 hours. Because we were getting feedback as we built, we made a better product. The morale of the team also improved, as we were building experiences in a better, more customer-driven way.
This was slightly different because it was in production. For context, self-installation is when, after you purchase the service, we mail the equipment to you to set up yourself instead of having a technician come to your house. You use an app to go through the step-by-step process.
We received calls from consumers indicating that some things weren’t always working. We had digital analytics, but we didn’t have a session replay tool to show us where in the process it was failing, so it was difficult to get to the root cause of what was happening.
Our backend systems had some metrics that were showing errors, but they lived in a separate system, so there was no easy way of tying the two together. We listened to calls manually and, in parallel, stood up a triage team that took calls in real time to troubleshoot live. It wasn’t very efficient, but it worked.
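To illustrate the kind of correlation that was missing, here is a hypothetical sketch that joins backend errors to client-side analytics events by account ID within a time window. The field names, error codes, and data shapes are all invented; this is what such a join could look like, not what the team ran:

```python
from datetime import datetime, timedelta

# Invented backend error log entries.
backend_errors = [
    {"account": "A123", "ts": datetime(2023, 5, 1, 14, 2), "code": "PROV_TIMEOUT"},
    {"account": "B456", "ts": datetime(2023, 5, 1, 15, 40), "code": "MODEM_REG_FAIL"},
]

# Invented client-side analytics events from the self-install app.
client_events = [
    {"account": "A123", "ts": datetime(2023, 5, 1, 14, 1), "step": "activate_modem"},
    {"account": "A123", "ts": datetime(2023, 5, 1, 14, 5), "step": "retry_activation"},
    {"account": "B456", "ts": datetime(2023, 5, 1, 9, 0), "step": "scan_qr_code"},
]

WINDOW = timedelta(minutes=10)

# For each backend error, find the app steps the same account performed
# within the window, to pinpoint where in the flow the failure occurred.
for err in backend_errors:
    nearby = [e["step"] for e in client_events
              if e["account"] == err["account"]
              and abs(e["ts"] - err["ts"]) <= WINDOW]
    print(f"{err['account']} {err['code']}: near steps {nearby or ['(none)']}")
```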
It’s important to start with your product strategy first. In parallel, you should have your customer experience and product principles (simplicity, user-centricity, etc.) acting as the pillars of your strategy. Then, take a framework like OKRs to bring your strategy to the execution level. You should certainly have metrics around customer experience, just like you’d have a revenue target or ARPU (average revenue per user).
To me, KPIs around the customer experience, like NPS, act more like the North Star. You also need lower-level, more granular KPIs like CES or CSAT (customer satisfaction). These are KPIs that individual teams can actually move the needle on.
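As a concrete illustration of that ladder (the objectives, owners, and targets below are invented, not Altice’s), the structure connecting a North Star metric to team-level key results might look like this:

```python
# Invented example of a North Star metric laddering down to team-level KPIs.
okr_ladder = {
    "north_star": {"metric": "NPS", "target": "+10 pts vs. legacy product"},
    "objectives": [
        {
            "objective": "Make account management effortless",
            "owner": "UX design",
            "key_results": [
                {"metric": "CES", "target": "<= 2.0 on the bill-pay journey"},
                {"metric": "task completion rate", "target": ">= 90%"},
            ],
        },
        {
            "objective": "Deliver a reliable self-service experience",
            "owner": "Engineering",
            "key_results": [
                {"metric": "CSAT", "target": ">= 4.3/5 post-interaction"},
                {"metric": "crash-free sessions", "target": ">= 99.5%"},
            ],
        },
    ],
}

for obj in okr_ladder["objectives"]:
    krs = "; ".join(f"{kr['metric']} {kr['target']}" for kr in obj["key_results"])
    print(f"{obj['owner']}: {obj['objective']} -> {krs}")
```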
We use a wide spectrum of user testing and feedback mechanisms. I’ve done things like focus groups and moderated 1:1s as well. But I feel like the efficiency gains you get from unmoderated tests on a platform like UserTesting make them the baseline for designing a great experience.
We also do some lower-fidelity tests with pen and paper, and we’ll walk around the office to get feedback. There was a project where we were designing hardware: a new remote control for a connected TV device. We 3D printed the original CAD model and gave it to our team members. We said, “Take it home, give it to your spouses and to your kids to play with.” This was to gauge how it felt in their hands and whether the button placement made sense.
This has changed over probably the last few years. At one point, great product design and user experience were a competitive advantage. Now that experiences have leveled up so much across the board, the bar for what makes a great user experience is very high, and it’s getting more and more difficult to differentiate on that.
This means that you have to invest heavily in user experience and make sure that you’re providing what your customers want in a very usable way. Organizations must have a discipline around UX research and embed customer feedback loops throughout the whole development cycle, especially post-launch. I feel like organizations are getting better during development, but there’s room for improvement post-launch.
This is especially true as we continue to develop machine learning and AI-powered experiences, as these are complex and hard to validate for quality once they’re in production.
We’ve been using AI around product quality especially. Our key service is internet, and the WiFi environment in the home constitutes roughly 50 percent of customer problems. But it’s difficult to actually model the WiFi environment — it’s dynamic and changes depending on who’s using it, what they’re using it for, and which room they’re in.
We spent several years collecting as much data as possible, and we’ve now put machine learning models on top of that data to create a quality of experience score for customers’ WiFi environments. We’ve tied that score to other events, for example, buffering when a customer is trying to stream a video service, and we can calibrate the score against those events. That helps us figure out whether a problem is a pure WiFi issue or something further up the network.
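As a rough sketch of the shape of that approach (Altice’s actual telemetry features, labels, and models aren’t public, so the data below is synthetic), one could train a classifier to predict buffering from WiFi telemetry and read the predicted probability of trouble-free streaming as a quality of experience score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic per-home, per-hour WiFi telemetry: signal strength,
# retransmission rate, channel utilization, active device count.
X = np.column_stack([
    rng.normal(-60, 8, n),    # mean RSSI (dBm)
    rng.beta(2, 20, n),       # WiFi retransmission rate
    rng.uniform(0, 1, n),     # channel utilization
    rng.integers(1, 15, n),   # active devices
])

# Synthetic label: whether buffering was observed while streaming.
# Weak signal and congestion raise the odds, mimicking a real environment.
logit = -4 - 0.08 * (X[:, 0] + 60) + 6 * X[:, 1] + 3 * X[:, 2]
buffered = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, buffered)

# QoE score: model's probability of no buffering, scaled to 0-100.
qoe = 100 * model.predict_proba(X)[:, 0]
print(f"median WiFi QoE score: {np.median(qoe):.0f}/100")
```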
I think that the differentiation of having really good, AI-powered features is very interesting. This will become a competitive advantage between products in the same way that having a great user experience once was.