Jie Weng is Chief Technology Officer at Yami, an Asian-centric ecommerce platform. He started his product management career as Co-Founder, Director of Product Management at TrueDemand Software and later moved to DemandTec (acquired by IBM), where he worked closely with Fortune 500 customers in the retail and CPG space, helping them improve promotion and price optimization. More recently, Jie was VP/GM at Coupang (NYSE: CPNG), where he led both product and engineering teams responsible for multiple areas of Coupang’s ecommerce platform.
Jie sat down with us to discuss how he and his team work to surface actionable insights and get to the lowest level of the data to move the needle. He talked about the importance of “looking at the data the right way,” i.e., looking into the distribution of data, not just the average, to surface issues. Jie also shared how A/B testing results may not always tell the full story.
Yami is an ecommerce platform. When our founder, Alex Zhou, was attending school in the US as an international student, he found it very challenging to find products from his home country of China or from other parts of Asia. Yami was started to offer a selection of goods from Asia that aren’t easily accessible to customers in the US. We initially targeted students but eventually broadened to the general public. Today, you can go to Yami to purchase a variety of things that are popular in Asia, such as a soy milk maker, ingredients to make Korean hotpot, or branded beauty and cosmetics products like the SK-II facial treatment essence.
Our model is hybrid. We have a retail model where we buy the products, warehouse them, and then fulfill them. We also allow sellers to list on our platform; they do their own delivery and fulfillment, but we charge them a listing fee.
In a steady state or at large companies, the product and technology functions probably should reside separately. From my own experience, startups or companies that want to move fast benefit from having a single-threaded decision-maker or tiebreaker who can cut across product and engineering. At very early-stage startups, this role is often filled by the founder.
In terms of leadership principles, the first thing that I like to instill in my team is probably customer-centricity. For an ecommerce platform, people often think the customer is the shopper, but there are also internal customers. For example, the buyers who use our tools to expand selection, and the operations staff who run on our fulfillment software, are internal customers too. So whether you’re in product, analytics, or engineering, being customer-centric is extremely important.
The second principle is the ability to deep dive. This includes things such as “walking the store” by putting yourself in the shoes of the customer, drilling into the data at the lowest level of granularity, and asking questions such as “why” and “how.” This is extremely important, regardless of your function or level of seniority.
Lastly, I think it’s critical to be results-driven. In product and analytics teams, that’s usually the norm, but sometimes engineering teams still subscribe to old thinking, like, “You tell me what to do, and I’ll do the work.” That’s probably not the right attitude, especially at startups. Making sure that what you do aligns with the company’s goals and mission, and that you’re driving for measurable success, is important regardless of what function you’re in.
We use the Google OKR process for planning. We set up objectives quarterly. When I joined Yami, engineering was organized by functional architectural components, whereas product was driving things for external-facing shoppers or internal-facing stakeholders like retail, fulfillment, or finance. I reorganized the team to support the right customers.
As an example, we have a team that supports the pre-purchase process on our platform. That team is self-sustained, including both frontend and backend, and a product manager sets the goals every quarter and drives toward them. We also have a retail team, mostly backend folks plus a product manager, that drives our business goals around selection and vendor onboarding. Being able to align the organization this way is important.
I gravitate to the ecommerce domain because I enjoy the breadth of topics it covers, from growth marketing to selection, vendor management, supply chain, pricing, customer service, etc. You need all of these areas to provide a great customer experience. And in all of these domains, whether through engineering or operational improvements, there are ways to make things work better.
More importantly, I enjoy the fact that you can find talent in unlikely places. For example, at my previous company, I had somebody on the operations team who really wanted to get into software engineering. It was a marketplace for food delivery, and he understood how we recruit drivers and fulfill orders by matching them to drivers. In ecommerce, especially at a startup, you sometimes have to wear multiple hats. So I said, “Maybe don’t start with engineering yet, but start with QA because you already have the domain expertise. Look at how we can automate some of the things you do manually by writing code.”
Eventually, he grew into software engineering. I see this a lot in diverse setups where you have to wear multiple hats.
For B2C or internet companies, number one is to drive a test-and-learn, experimentation culture. I led the experimentation team when I was at Coupang, and we would test everything: all new features that got deployed on the customer-facing platform, whether backend- or search-algorithm-related, or frontend-related, like where we place a button. We would test everything and make sure that when we deployed things, we were driving the right results.
That part is extremely important because you want to make sure that you’re spending your product and engineering resources in the right places and driving for impact. And the best way to objectively measure that is through experimentation. Having said that, surface-level analytics can sometimes lead to wrong decisions. That’s why doing deep dives is important.
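To make the experimentation point concrete, here is a minimal sketch of the kind of significance check an experimentation platform might run on a deployed change. The metric, counts, and threshold are illustrative assumptions, not figures from Yami or Coupang:

```python
# A minimal sketch: two-proportion z-test on conversion rates for
# control (A) vs. treatment (B). All counts below are hypothetical.
from math import sqrt

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Did treatment (B) move conversion versus control (A)?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

p_a, p_b, z = conversion_z_test(conv_a=4_120, n_a=100_000,
                                conv_b=4_350, n_b=100_000)
print(f"control {p_a:.2%} vs treatment {p_b:.2%}, z = {z:.2f}")
# |z| > 1.96 corresponds to roughly 95% confidence that the lift is real
```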
When we look at our numbers, the aggregate can sometimes tell a different story. Make sure that you look at the data the right way: look at its distribution, not just the average, but also the median and the 90th percentile. In certain cases, look at the extremes. We do a lot to filter out the noise, but sometimes the problems are in the noise. Why does this noise happen? So really deep dive into that top five percent.
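As an illustration of looking at the distribution rather than just the average, here is a minimal sketch. The metric (delivery time) and all numbers are hypothetical, simulated so that a small slow tail hides behind a healthy-looking mean:

```python
# Hypothetical delivery times in hours: most orders are fine,
# but a noisy 5% tail is not.
import random
random.seed(7)

times = [random.gauss(24, 4) for _ in range(950)] + \
        [random.gauss(96, 12) for _ in range(50)]

def percentile(data, q):
    s = sorted(data)
    return s[min(len(s) - 1, int(q / 100 * len(s)))]

print(f"mean   {sum(times) / len(times):6.1f} h")  # looks acceptable
print(f"median {percentile(times, 50):6.1f} h")    # typical customer is fine
print(f"p90    {percentile(times, 90):6.1f} h")
print(f"p99    {percentile(times, 99):6.1f} h")    # the "noise" hiding real problems

# Deep dive: inspect the worst 5% instead of filtering it out
worst = sorted(times)[-len(times) // 20:]
print(f"worst 5% start at {worst[0]:.1f} h")
```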
For sure. Experimentation only gives you the correlation; it doesn’t explain the causation. It can tell you with a certain level of confidence that a change is probably driving the right outcome, but the hypothesis, the qualitative part, has to come from human evaluation.
People do this in the search space. If we implement a new search algorithm, before we actually A/B test it, the first thing we need to test is whether it’s really improving relevance. And relevance isn’t something you necessarily find in the numbers. So we have a team that looks at the human evaluation aspects. We look at the results from a library of queries and see if those results are actually better before we put the algorithm into the A/B test.
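One common way to score such human judgments offline is NDCG (normalized discounted cumulative gain). The sketch below assumes raters have graded results for each query in the library on a 0 to 2 scale; the labels are made up for illustration:

```python
# A minimal sketch of offline relevance scoring before an A/B test.
# Labels are hypothetical human ratings (0 = bad, 2 = great).
from math import log2

def dcg(labels):
    """Discounted cumulative gain: top-ranked results count most."""
    return sum(rel / log2(rank + 2) for rank, rel in enumerate(labels))

def ndcg(labels):
    ideal = dcg(sorted(labels, reverse=True))
    return dcg(labels) / ideal if ideal else 0.0

# Rater judgments for one query, old vs. new search algorithm
old_algo = [2, 0, 1, 0, 0]
new_algo = [2, 2, 1, 0, 0]
print(f"old NDCG {ndcg(old_algo):.3f}  new NDCG {ndcg(new_algo):.3f}")
# Only if the new algorithm wins across the query library does it
# graduate to an online A/B test.
```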
One story I have is an “anti-analytics” example. Sometimes, the results from A/B testing may not tell the whole story. We have same-day delivery in certain locations within the US. One person on my team said, “Why don’t we accentuate that during the checkout process?” Obviously, it could drive better conversions if I could tell my customers that the thing they want can be delivered on the same day. So we A/B tested that and, as you can imagine, the results were pretty good from a revenue generated per user and conversion perspective.
But what gets lost is whether we actually fulfill that promise. In an extreme case, if I tell you that you can get it in an hour versus if I don’t tell you anything, of course the former is going to drive better conversions. But if I can’t fulfill that promise, then I may lose customer trust.
We all know customer trust is not easily gained but can be easily lost. You don’t see that in the data, or you may not see it until it’s too late. In this example, I told my team, “Please check how we’re doing on the fulfillment promise. Let’s make sure that if we can’t guarantee certain promises because of supply chain bottlenecks or operations-related issues, we don’t show them.” You want to use data the right way, and there are certain things that are hard to tease out from the data. You need to make sure you have certain principles guiding your decisions.
On the customer side, we do a lot around triggering NPS and CSAT surveys at different stages of the customer journey, whether during the actual purchasing process or once they’ve consumed the product. For A/B testing, we look at gross merchandise volume (GMV) per user as the central metric. Conversion is binary, so how do I make sure I’m capturing not just the transaction itself, but whether customers are buying more or buying more frequently? Value per user is usually a good KPI to look at.
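Here is a minimal sketch of why GMV per user captures more than binary conversion, using hypothetical per-user order totals for two experiment variants:

```python
# Hypothetical per-user order totals (0 = user did not purchase).
control = [0, 0, 35.0, 0, 12.5, 0, 0, 80.0, 0, 0]
treatment = [0, 22.0, 35.0, 0, 12.5, 18.0, 0, 80.0, 0, 41.0]

def conversion_rate(users):
    """Binary: did the user transact at all?"""
    return sum(1 for gmv in users if gmv > 0) / len(users)

def gmv_per_user(users):
    """Value-based: captures buying more and buying more often."""
    return sum(users) / len(users)

for name, users in [("control", control), ("treatment", treatment)]:
    print(f"{name:9s} conversion {conversion_rate(users):.0%}  "
          f"GMV/user ${gmv_per_user(users):.2f}")
```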
On the engineering side, we also look at availability and latency. You can have the best product, but if it’s not available, it doesn’t matter. So make sure you have the right service levels for your platform and its services. At a higher level, it’s about customer experience: if it takes a long time to load something, I might churn. Those are the engineering metrics we look at to make sure that a really good feature isn’t causing load issues or added latency.
On one hand, I think it’s about separating data from insights. My analytics team reports certain numbers on a daily or weekly basis. But if it becomes a standing report, people sometimes won’t look at it after a while. So when I ask them for some of these weekly or monthly reports, I say, “What are some of the insights?” And even more importantly, “What are the actionable insights?” An insight might be interesting, but how do you turn it into the decisions we make? Or can we automate some of these things to make it prescriptive?
In terms of analytics maturity, there are three levels: descriptive, predictive, and prescriptive. Descriptive analytics asks, “What happened?” You may know what happened, but knowing why is the insight. The second level is predictive: how do I use historical data to drive a forecast for replenishment in my supply chain, for example? And then prescriptive is: how do I optimize my prices if my objective is to drive better profitability? How do I automate my fulfillment process so that, by taking the human element out, we reduce errors? How do I drive recommendations on the platform automatically, via personalization or other means, to improve that matching process?
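A minimal sketch of the three levels applied to the same hypothetical daily sales series; the moving-average forecast and the replenishment numbers are assumptions for illustration, not a production model:

```python
# Hypothetical units sold over the last 7 days
sales = [120, 130, 125, 90, 140, 135, 150]

# Descriptive: what happened?
print(f"avg daily sales: {sum(sales) / len(sales):.0f} units; dip on day 4")

# Predictive: use history to forecast tomorrow (naive moving average here;
# a real system would use a proper forecasting model)
forecast = sum(sales[-3:]) / 3
print(f"forecast for tomorrow: {forecast:.0f} units")

# Prescriptive: turn the forecast into an action, e.g. a replenishment
# order sized for lead time plus safety stock (both numbers assumed)
lead_time_days, safety_stock = 2, 50
reorder_qty = forecast * lead_time_days + safety_stock
print(f"replenishment order: {reorder_qty:.0f} units")
```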
One way to think about this is how you’re using data to drive your decisions. For example, if you don’t have the traffic to support certain tests, how do you decide whether you want to roll something out? I think Amazon has a good way to think about this: one of their leadership principles is the one-way versus two-way door. The idea is to consider the risk if you roll something out. If it’s something simple, you want to test and learn. Go ahead and roll it out, and then iterate. It’s a two-way door; you can always roll it back.
But if it’s something that might hurt customer trust, like changing the subscription price, you really should rethink it. The other thing, as I said earlier, is seeing the data too late. How do you get the signals early? When you lose a customer’s trust and they churn, you don’t see that in the data until much later. You should have a certain set of principles in place to make sure these issues surface at a high level in the data you see.