Kara Sasse is Chief Product Officer at Springboard, an online learning platform and alternative credential provider. A self-described “builder,” she began her career in management consulting and, after only a couple of years, decided to apply her business and marketing skills to helping early-stage startups. Kara found she had a knack for startups, joined Blackboard as one of the first 30 employees, and, after completing an MBA, moved to the edtech company EVERFI. Prior to joining Springboard, Kara worked in various leadership roles at Gallup, the publisher of her favorite personal development tool, StrengthsFinder.
In our conversation, Kara talks about how her team is integrating generative AI into Springboard’s bootcamps, with the ultimate goal of equipping learners with competencies that will soon become “table stakes” for future jobs. She also discusses how she uses skills assessments like StrengthsFinder to integrate teams and capitalize on individual strengths.
In our consumer business at Springboard, which is where the company was founded, we created the concept of six-to-nine-month bootcamps with a job guarantee. These online bootcamps help people who are interested in re-skilling in technical topics like cybersecurity, machine learning, AI, data analytics, data science, and UI/UX. We’re helping people pivot without needing to go back and get a degree, and we provide an expedited offering that comes with a certificate, a mentor, a coach, and more.
Recently, we’ve moved into partnering with universities that want to offer these types of alternative credentials. For example, we now work with a large university in Florida. We still teach the bootcamps and enroll everybody, but it gives the university a new component of its catalog that it never could have put together that quickly.
We’re also starting to partner with companies on re-skilling and upskilling employees. We work with Amazon as part of the Amazon Career Choice Program, where we offer frontend engineering and software development courses that help frontline workers leap into an entry-level job outside of being on the floor. This gives them the skills and competencies to move into a role within the company, but on the corporate side.
About 13 years ago, in my MBA program, StrengthsFinder was one of a dozen psychometric assessments that we took to provide feedback and help us understand our talents. The test is similar to Myers-Briggs, but it’s much more comprehensive. Where Myers-Briggs has four buckets, StrengthsFinder evaluates people on 34 dimensions. It was very accessible and helped deepen my understanding of myself and my strengths.
Three weeks into my first job out of business school at EVERFI, we acquired a company in Boston. I’d flown up to meet with the new team in Boston and was thinking, “How am I going to meet people and connect with them?” As I was walking the hallways of this small company, I noticed that everyone had the StrengthsFinder book on their desk.
It turned out that they had all taken it as a self-development exercise. Integrating teams is sensitive: you’re meeting new people, trying to connect with them, and they’re wondering what’s going to happen with their jobs. I used StrengthsFinder to integrate my teams. I inherited about 10 people from that team, and we did some reflection on our strengths to figure out how we were going to work together and become high-performing. It was so effective that I’ve essentially never put it down.
Ironically, about five years later, I got an opportunity to join Gallup. I never knew that they owned StrengthsFinder, but they created a job for me to take over their strengths product portfolio for the education market. It’s the only job I’ve left the startup environment for — Gallup is a world-renowned, almost 90-year-old, family-owned and controlled business. I spent five years there, and it was an immense privilege and honor to lead work to grow and scale its adoption worldwide. I still use it today with all my teams.
Gallup has spent about 25 years honing the tool, but the assessment itself has not changed. What’s changed is the way they deploy it and the content they give you to make it actionable, because a tool is only as good as what you can take away from it. It’s not helpful if you’re left wondering, “What do I do with this information?”
Now, they have more coaching sessions, certified coaches, content, subscriptions, etc. There’s more power within the community. There’s a real familiarity with it, which makes it easier to deploy and for people to feel comfortable with it. Not everyone likes taking an assessment.
I don’t give it to everyone out of the gate. I first give it to my leadership team to get familiar with the language and the tools, and I’ll typically bring in a coach to do a 1.5-hour session on our team dynamics. So it’s a little bit of learning your strengths, doing your own exercises to understand your strengths, and then figuring out the interplay on your team.
As an example, on my leadership team, we have strong commonalities and contributions that help create high performance, but there was one area we became more aware of that was less common: influencing talents. For my product organization, this means occasional stumbling blocks in communication with other teams or in how we communicate our roadmap. We talked about it as a team and said, “If we’re missing this, what are we going to do about it? Is there anyone on the team who excels in this area, and how can we leverage that?”
It’s critical for a team to not feel like it has a deficiency, but instead to look at the strengths we do have and find avenues to maximize them, such as pairing up a couple of talent themes so that together they act like influencing. Awareness of team members’ strengths is far more predictive of team engagement and performance than the composition of a team’s strengths.
After the coaching session, one leader said, “I always thought this was a weakness of mine. I’m now realizing it’s not a weakness, it’s just not where I run the fastest.” He expressed that he felt better about the fact that this was a common struggle. He said, “I now realize that I have avenues.” We can partner differently. We can bring someone in to help us. We’re building on that. You get little breakthroughs of trust, and it’s like an athletic team. If someone’s just not good with their left foot, stop pushing them to their left. It’s obvious, but in business, identifying people’s talents is not as simple as that. This just gives you a mirror.
We’ve been in the space of teaching AI for a while. We had machine learning and those fundamentals as part of bootcamps, or even as standalone bootcamps. Generative AI is changing the landscape in terms of a whole set of competencies and skills that are going to become table stakes, not just for a highly technical niche like data scientists, but for anybody.
We had already been in an AI world, but in a much more technical data science way, versus the democratization of large language models. Content is flying out the door. There’s lots of stuff we can curate, and there’s no shortage of options for people to get up to speed on AI. The question is how it intersects with the job you’re thinking about, or how deep you need to go. Those are the things we’re trying to weave in and out of our bootcamps so that we can really have an impact.
The first pillar we were thinking about was how we fast-track the competencies and skills that aren’t formed yet in the market. Large language models, for example: how do we teach those? We started to roll learning objectives and content into our current courses as an exercise. It was little things like teaching prompt engineering inside of our courses as part of a capstone project. Instead of researching a topic the traditional way, use ChatGPT. So our prompts and our platform tell you to do things differently by using the new tools.
Pillar two was how to use AI to serve our online learners better. These learners are busy, and it may be hard for them to set aside time to talk to their mentor, but they still need direction and feedback. How do we use the Q&A copilot concept for our learners? That’s something we’re piloting now. It solves a big problem for us in the online experience because people need help in the moment. If it’s 11 p.m., you’d traditionally have to wait until the next day. The instantaneousness and accuracy of AI blows you away. We’re productizing features like that to solve accessibility problems and give more instant feedback on our platform.
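To make the copilot concept concrete, here is a minimal sketch of a learner-facing Q&A assistant built as a thin wrapper around a hosted LLM API. It is illustrative only, not Springboard’s implementation; the model name, system prompt, and course-context lookup are assumptions.

```python
# Minimal sketch of a learner-facing Q&A copilot.
# Illustrative only: model name, prompt, and course-context lookup are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_learner_question(question: str, course_context: str) -> str:
    """Answer a learner's question, grounded in the current unit's material."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a teaching assistant for an online bootcamp. "
                    "Answer using only the provided course material, and say so "
                    "if the material does not cover the question.\n\n"
                    f"Course material:\n{course_context}"
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


# A learner stuck at 11 p.m. gets an answer immediately instead of waiting for a mentor
print(answer_learner_question(
    "What is the difference between precision and recall?",
    course_context="<excerpt from the current unit>",
))
```

The key design choice in a setup like this is grounding answers in the course material, so the copilot behaves like a TA for the current unit rather than a general-purpose chatbot.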
The third pillar is inside our own company. We’re experimenting with new tools to create efficiencies and impact within our organization. As an example, we’re using GitHub Copilot within our engineering organization to optimize our coding capabilities. Our mentors and coaches need support, so we’re using AI to bolt plugins onto our Slack community so they get the information they need faster instead of waiting to talk to one of our experts.
Our team does a ton of A/B testing. We first looked to determine what types of questions people have. Do they have academic questions on the content or more technical questions? Or are they lost in the online platform? We were trying to figure out where users are stuck and where we can be at our best to help them. The AI copilot could take different forms — it could be your mentor on the go or it could be your TA. Which one is it?
We also measured whether people asked the AI questions at all, which told us about their willingness to engage with it. Will they engage with it at the right moment? Did we get the UI right? Did we prompt it right? Will they even bother? We saw that people would very quickly start to embrace it and ask lots of questions, and it also kept them on the platform. If you’re stuck somewhere late at night, you just want to get through what you need to get through, or understand the assignment well enough to submit something. That persistence is really important.
We also gauged satisfaction with a thumbs-up or thumbs-down on each answer, and we looked at what people were asking. That gives us something to iterate on. And do the people who engage with it progress quicker? That’s what we started to see: people who engaged with it persisted and moved on faster than people who didn’t.
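For illustration only, the kind of comparison described here could look like the following rough sketch: compare unit completion time between learners who asked the copilot questions and those who didn’t, and compute a thumbs-up rate from the per-answer ratings. The file names and column names are hypothetical, not Springboard’s actual schema.

```python
# Hypothetical sketch of the engagement-vs-progression comparison described above.
# File names and column names are assumptions, not a real schema.
import pandas as pd

events = pd.read_csv("copilot_events.csv")      # one row per question asked (learner_id, thumbs_up)
progress = pd.read_csv("learner_progress.csv")  # one row per learner (learner_id, days_to_complete_unit)

# Did the learner engage with the copilot at all?
engaged_ids = set(events["learner_id"])
progress["engaged"] = progress["learner_id"].isin(engaged_ids)

# Do engaged learners move through the unit faster?
print(progress.groupby("engaged")["days_to_complete_unit"].agg(["mean", "median", "count"]))

# Satisfaction: share of thumbs-up among rated answers
rated = events.dropna(subset=["thumbs_up"])
print("Thumbs-up rate:", rated["thumbs_up"].mean())
```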
That’s really interesting because the biggest challenge in a six-month bootcamp is motivation and persistence. You’re not in class. We have some deadlines, but you’re an adult and you have to work your way through it. We have coaches and checkpoints along the way, but you can quickly get behind and you can quickly get demotivated. And so anything we can do to help get you to that next step is well worth it.
I would say the students who are coming into engineering for the very first time, like entry-level coding, potentially have to get familiar with these tools more quickly than people in other roles. That’s one of the biggest areas where AI is intersecting. There are very robust tools coming out that our engineers are using to help them code, and we’re trying to expose our entry-level engineers to them. That’s actually one of the courses where our own copilot was tested: our coding course. We’re now rolling it out course by course and watching the results to see if we should expand it across all of our courses.
We have a philosophy in the company of always learning. The first thing we did was create a Slack channel for sharing AI information, and we asked people across the company to share what they were seeing. Because of that always-learning culture, people just started playing with AI. In various departments, whether it be marketing, finance, or HR, tools are emerging in every vertical. Our philosophy was to try them: either work with your department, beta test something, or have a goal. What are you trying to achieve with it?
We’re trying to set baselines, test, and have technical and non-technical teams using AI to do things faster. The sales team has been using new tools to figure out how to respond faster with pre-canned emails and have AI personalize them. Is this a way to help you get a better response rate? We don’t know yet. That’s the kind of thing we’re testing and looking at. My advice is just to get started. Don’t overthink it.
Also, have a safe space. We have a show and tell every month for people to share what they’ve tried with AI. What did they learn? Did it work or not? Maybe they had to prompt it five times and wanted to show what happened.
So many of the tools right now are about creating efficiency within a functional team. What gets innovative is the ability to use open source large language models to represent your company. You don’t need to build your own model. You don’t need to set up a big infrastructure. Not having to over-index on that infrastructure has made innovation more accessible.
We’re trying to direct all of that innovation toward our customers. Someone on our team really loves large language models and has been experimenting with all the APIs around four different use cases. One use case was our grading feedback. We grade a lot, and grading a coding exercise requires a certain skill set, so it’s expensive. The innovation was around, “Could we flip that upside down? Could we get the same quality, but in an automated way, without the manual backend?” We’ve been experimenting with that, and we’re starting to see those prototypes come to the surface, even from people outside the product team. That’s how accessible it is.
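As a hedged sketch of that grading-feedback idea (not Springboard’s actual system), a first prototype can be as simple as sending a rubric and the student’s submission to an LLM and asking for per-criterion feedback that a human grader then reviews. The rubric, model name, and prompt below are assumptions.

```python
# Illustrative prototype of LLM-assisted grading feedback.
# The rubric, model name, and prompt are assumptions for this sketch.
from openai import OpenAI

client = OpenAI()

RUBRIC = """
1. Correctness: does the code produce the expected output?
2. Readability: naming, structure, and comments.
3. Use of the course concepts covered in this unit.
"""


def draft_grading_feedback(submission_code: str) -> str:
    """Draft per-criterion feedback on a coding exercise for a human grader to review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are grading a student's coding exercise. For each rubric item, "
                    "give a short comment and a score out of 5, then an overall summary.\n"
                    f"Rubric:\n{RUBRIC}"
                ),
            },
            {"role": "user", "content": submission_code},
        ],
    )
    return response.choices[0].message.content
```

Keeping a human grader in the loop to review the drafted feedback is one plausible way to preserve quality while cutting the manual cost, in line with the “same quality, automated” framing above.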