Chas Peacock is Vice President of Engineering at Hotel Engine. He joined Hotel Engine in 2023, bringing 20-plus years of experience in engineering leadership, developer advocacy, and software development from companies such as Apollo GraphQL, H-E-B, and Bazaarvoice.
We sat down with Chas to get his insights on building technology for companies that are scaling quickly. Chas discusses the benefits of incorporating engineering early in the product discovery process and his approach to promoting a culture of continuous learning.
When I joined Hotel Engine, the business was growing at a clip you don’t usually see these days. To me, the most fun point in an organization’s lifecycle is right after product-market fit has been firmly established. That’s when it’s time to change up the things that have gotten you this far in order to scale for the next phases of growth.
As you prepare to scale, the organization and the technology you use will look fundamentally different than they did going from zero to one, from one to 10, from 10 to 100, and so on. Hotel Engine was really in that sweet spot for me to help the team build out their platform strategies.
Much earlier in my career, I worked for Bazaarvoice. If you’ve read or left reviews on Home Depot’s, Best Buy’s, or Target’s sites, for example, that’s all Bazaarvoice white-labeled content. At the time, hundreds of millions of people interacted with that content each week.
This was before a lot of the more modern technology stacks that people use for scaling existed. I learned a lot from a technology perspective and also got to see some changes in organizational strategies as the company grew. I’ve been fortunate to have had several similar experiences in my career since then.
The build vs. buy decision really depends on whether what you’re trying to do is going to become a core competency of the company. Before Hotel Engine, I helped lead engineering at Apollo GraphQL. We were building developer tooling, and it was all new territory, so a lot of the work was figuring out which problems needed to be solved.
If I’m evaluating tooling that will never be a core competency of the business, I run through a checklist of questions about the tool. Usually nothing checks every box. At the end of the day, that’s why you build software.
But let’s say you can get 80 percent of the value relatively quickly. Maybe it’s with a SaaS tool that does a certain thing in your web app, or maybe it’s the way you build your entire homegrown application that is the foundation of your business. The evaluation criteria for those two solutions are very different.
It’s really about the 80/20 rule. Can you accelerate things or make things better relatively quickly with a new tool or framework? And if so, is that a permanent or temporary switch?
In general, there are two things that I’ve found most useful in that part of the process. First, as product and design are doing discovery, it’s super helpful to bring at least part of engineering along for the ride.
This enables engineering to grab the context that the rest of the team has been given. Because ultimately, someone is going to have to translate that from a “How does this actually work?” perspective into requirements for the team that’s going to execute on it.
I’ve seen places where product and design have been iterating on a thing and are like, “OK, we’re done. Build this now. Here’s a solution and a date.” That’s demotivating for an engineering team, in my opinion.
Second, while you’re figuring out the problem and the general space around it from a design perspective, give engineering time for diligence to consider how the design is actually going to work in practice. This often gets skipped.
Sending someone to actually figure out how the feature materializes — not only into the subsystem that you would build it in, but also into the macro set of systems — is extremely useful. It’s better to spend two weeks now doing some additional design than it is to build yourself into a corner and pay the price in a year or two.
A few years ago, the team I was leading was helping rebuild a lot of a company’s customer-facing technology from the ground up. Things that initially seemed straightforward quickly turned into a nightmare of complexity.
For example, we were asked to redo the website’s product search. A lot of the company’s revenue was tied to search. The project initially seemed straightforward, but there were a couple things we didn’t realize until we started digging in and actually building the system.
No one had considered how to get updated data on any sort of regular basis until it was too late. Also, there were too many differing opinions on the amount of configuration, the scoring of search results, and how it should actually work.
Coalescing those opinions into something the machine could understand and score appropriately took far longer than it would have if we had scoped the whole problem upfront, rather than starting to build and then seeing what we had.
This led to a significant delay. Once launched, the product was ultimately very successful, and we did do all of the diligence, but I think we inverted the order in which we should have gone about solving the problem. We all learned from the experience, but if I had to do it again, that’s what I would change.
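To make the scoring problem concrete: what those debates ultimately have to converge on is a set of explicit, machine-readable signals and weights. The sketch below is not the system that team built; every signal name and weight is hypothetical, just to show the shape of the artifact the opinions had to collapse into.

```typescript
// Hypothetical search scoring: stakeholder opinions about "how search should
// work" eventually have to collapse into explicit signals and weights.
interface SearchSignals {
  textRelevance: number; // 0..1, e.g., the engine's match score
  popularity: number;    // 0..1, normalized click/purchase history
  inStock: boolean;
}

// These weights are the thing everyone has to agree on; the values here are
// illustrative, not from the actual project.
const WEIGHTS = { textRelevance: 0.6, popularity: 0.3, inStockBoost: 0.1 };

function scoreResult(s: SearchSignals): number {
  return (
    WEIGHTS.textRelevance * s.textRelevance +
    WEIGHTS.popularity * s.popularity +
    (s.inStock ? WEIGHTS.inStockBoost : 0)
  );
}

// Ranking is then just a sort by the agreed-upon score.
const results: SearchSignals[] = [
  { textRelevance: 0.9, popularity: 0.2, inStock: false },
  { textRelevance: 0.7, popularity: 0.8, inStock: true },
];
results.sort((a, b) => scoreResult(b) - scoreResult(a));
console.log(results);
```

Getting stakeholders to sign off on numbers like these early, before any pipelines are built, is exactly the inversion being described.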
You always want to deliver value as quickly as you can, so I think it’s a slider.
If you’re trying out something net new, or building a new capability that you want to get to market, you want to be quick. You’re intentionally making the trade of not designing something perfectly for scale because that could take an extra six months, and by then the opportunity may have passed you by.
But if the project is something that you know is going to be a core competency, then it’s important to put in more time upfront in the lifecycle. You’ll derive more ROI from that.
I don’t think there’s one single driver for where that slider goes; it’s always kind of a negotiation as part of either a feature- or system-building process.
We do a lot of work measuring things as they’re being built, checking how complete something is, and verifying how much coverage we’ve got. But one thing I’ve noticed people don’t do nearly as well as they should is measuring things once they’ve been released.
Let’s say feature A is in the wild. Three months later, check to see how it’s doing. See if people are actually using it and whether it’s actually delivering the expected value. Those things are frequently missed. I believe in pegging things back to outcomes and making sure the houses we’ve built are sound.
Those are two different things. The first is more about the performance of what we’ve actually tried to do and how it affects our customer base.
The second is more about: How well did we build it? Is it keeping us up at night? Is it actually serving the need? Did we build it in a sound way? Those two sets of metrics look different, but when combined, they really give you a picture of the whole.
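The interview doesn’t name specific tooling or metrics, but a minimal sketch of the first kind of check, verifying three months later that a feature is actually being used, might look like the following. `fetchEventCounts` is a hypothetical stand-in for whatever analytics store you query.

```typescript
// Hypothetical post-release check: is feature A actually being used, and is
// it tracking toward the outcome it was pegged to? `fetchEventCounts` is a
// stand-in for a real analytics query (warehouse, product analytics, etc.).
async function fetchEventCounts(event: string, lastNDays: number): Promise<number> {
  // Stubbed so the sketch runs end to end; replace with a real query.
  const fake: Record<string, number> = { feature_a_used: 4200, active_users: 20000 };
  return fake[event] ?? 0;
}

async function checkAdoption(): Promise<void> {
  const used = await fetchEventCounts("feature_a_used", 90);
  const active = await fetchEventCounts("active_users", 90);
  const adoption = active === 0 ? 0 : used / active;

  const target = 0.25; // illustrative target agreed on before launch
  console.log(
    `Feature A adoption over 90 days: ${(adoption * 100).toFixed(1)}% ` +
      `(target: ${target * 100}%)`
  );
}

checkAdoption();
```

The second kind of check, system soundness, would draw on a different set of signals entirely: error rates, on-call pages, and the cost of making changes.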
There’s only so much time to read and keep up with things that you’re not using every day. As an engineering leader, I’ve always advocated for finding the best solution for a problem. Whether it’s a solution that you end up using or not is almost irrelevant; you’ll learn something along the way.
In a prior role, the company was trying to change the language they used to write most of their runtime software. Small details really matter at scale. If you’re running 30,000 copies of software to route all your traffic, and you think you can replace that with 1,000 copies by making a technology choice, you’ve really got to dig in on whether or not that’s true.
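A claim like 30,000 copies down to 1,000 is, at its core, back-of-the-envelope capacity arithmetic, and that is what the diligence has to validate. All numbers in this sketch are hypothetical, chosen only to mirror the shape of the claim.

```typescript
// Back-of-the-envelope check for a "30,000 copies -> 1,000 copies" claim.
// Every number here is hypothetical; the spike's real job is to validate the
// per-instance throughput under realistic load, not a synthetic benchmark.
const peakRps = 3_000_000;        // total requests per second to route at peak
const rpsPerInstanceOld = 100;    // what one copy handles today
const rpsPerInstanceNew = 3_000;  // claimed per-copy throughput after the switch

const oldCopies = Math.ceil(peakRps / rpsPerInstanceOld); // 30,000
const newCopies = Math.ceil(peakRps / rpsPerInstanceNew); //  1,000

console.log({ oldCopies, newCopies });
// In practice you'd also add headroom for failures and traffic spikes,
// which is exactly the kind of small detail that matters at scale.
```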
So, you send people on spikes. You let folks iterate on things. Sometimes you build prototypes that you throw away, but you know you’re going to throw them away, and that’s OK. You set that expectation upfront. When that flexibility is there, just leaning into it is really a big thing.
Other times, you may choose not to do a particular innovative thing because you find it’s not the right tool for the job. Sharing that knowledge and context with everybody else, explaining why it isn’t the right solution right now, is extremely helpful.
Throughout my career, I’ve learned a tremendous amount just from hearing others’ experiences. Allowing that time for experimentation and solving a problem in a creative way, even if you don’t actually solve it the way that you thought you would, is very useful.
When I arrived at Hotel Engine, we were still running our production infrastructure on more or less what the company had started on. I was asked to move everything to AWS. That may sound relatively straightforward, but a business operating at this scale has to stay available the entire time.
We just finished moving everything over to AWS around five days ago. It was a success because no one was the wiser, whether a customer or an internal engineer. That’s the way these things should go.
From that perspective, it was really fun to get to essentially lift a business that is rapidly growing and responsible for a lot of stuff and make it “cool as the other side of the pillow” while doing so. I got to hand-roll parts of our networking stack, which I hadn’t done since my mid-20s. That part was really interesting.
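The interview doesn’t describe the cutover mechanics, but one common way to move traffic so that “no one is the wiser” is a weighted DNS shift: route a small share of requests to the new environment and ratchet it up as confidence grows. The sketch below uses Route 53’s weighted-routing API via the AWS SDK; the hosted zone ID, record name, and IP are placeholders, and this illustrates the general pattern, not Hotel Engine’s actual runbook.

```typescript
import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const client = new Route53Client({});

// Shift part of the traffic for a record to the new environment. The share a
// record receives is its weight divided by the sum of weights across sibling
// records with the same name and type. Zone ID, name, and IP are placeholders.
async function setNewEnvWeight(weight: number): Promise<void> {
  await client.send(
    new ChangeResourceRecordSetsCommand({
      HostedZoneId: "ZXXXXXXXXXXXXX", // placeholder hosted zone
      ChangeBatch: {
        Comment: `Set new-environment weight to ${weight}`,
        Changes: [
          {
            Action: "UPSERT",
            ResourceRecordSet: {
              Name: "api.example.com",
              Type: "A",
              SetIdentifier: "new-aws-env",
              Weight: weight, // 0-255; the legacy record keeps its own weight
              TTL: 60, // low TTL during migration so shifts take effect quickly
              ResourceRecords: [{ Value: "203.0.113.10" }],
            },
          },
        ],
      },
    })
  );
}

// Ratchet up gradually, watching error rates and latency between steps:
// await setNewEnvWeight(10);   // small canary share
// await setNewEnvWeight(128);  // roughly half, if the legacy record is also 128
// await setNewEnvWeight(255);  // then zero out or delete the legacy record
```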
The fun for me was not only getting the work done, but also teaching folks along the way who hadn’t done anything like that before. As an engineering leader, the most rewarding part of the role for me is getting to teach and share mistakes that I’ve made so others don’t have to repeat them.
Yeah, I love it, honestly. I believe that life’s a marathon, not a sprint. I enjoy watching the careers of people I’ve worked with and seeing them grow in ways that I didn’t expect when I first met them.
The fact that I had a part in pointing them in that direction is really important to me. I think that’s the reason I became an engineering leader instead of just slinging code all day. I’ve delivered a lot of really interesting software in my career, but for me, that’s definitely secondary to the people aspect.