This post shares a case study from our AI training with the TeamCity team at JetBrains.
If you want the same outcome for your team, you can book a consultation with our experts. We’ll review how you’re currently using AI, identify the main constraints, and share practical recommendations tailored to your context.
> Book a free diagnostic call

Last year, we witnessed a significant shift in how product and cross-functional teams operate. The most forward-thinking companies are moving toward what we call AI-Native: teams that integrate AI tools into their daily workflows as naturally as they use development or CRM software.
But the term is vague, and companies interpret it differently. Some think buying licenses makes them AI-Native; others believe a one-hour internal training session is enough to tick the box. Below, we'll explain what AI-Native actually means and show how the TeamCity team at JetBrains approached it with our training program.
The program, built around the AI-Native team idea, was designed for the TeamCity team: product managers, designers, marketing, support, and QA specialists. It also included leads and managers who needed a shared understanding of AI capabilities and applications. No coding experience was required — the focus was on aligning the team around AI and its practical application.
An AI-Native team has five characteristics:
We also need to be clear about what AI tools means here: systems that automate workflows by connecting AI to existing platforms (CRM, documentation, customer feedback tools, and more), agents that handle multi-step tasks automatically, and configurations that let AI work with company-specific data.
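To make this definition concrete, here is a minimal Python sketch of the kind of tool we mean: an agent that pulls items from a customer feedback platform, has a model classify them against company-specific categories, and files tickets automatically. Every helper name here (llm_complete, fetch_feedback, file_ticket) is a hypothetical placeholder to be wired up to your actual platforms, not a specific vendor's API.

```python
# A minimal sketch of a multi-step agent. All helper functions are
# hypothetical placeholders, not a specific vendor's API.

def llm_complete(prompt: str) -> str:
    """Placeholder: call your model provider's completion API here."""
    raise NotImplementedError("wire up your LLM client")

def fetch_feedback(since_days: int) -> list:
    """Placeholder: pull recent items from your feedback platform."""
    raise NotImplementedError("wire up your feedback tool")

def file_ticket(summary: str, label: str) -> None:
    """Placeholder: create a ticket in your issue tracker."""
    raise NotImplementedError("wire up your tracker")

def triage_feedback() -> None:
    # Step 1: connect to an existing platform and pull recent feedback.
    for item in fetch_feedback(since_days=7):
        # Step 2: have the model classify each item against
        # company-specific categories supplied in the prompt.
        label = llm_complete(
            "Classify this feedback as 'bug', 'feature request', or 'noise' "
            "using our product taxonomy:\n" + item["text"]
        )
        # Step 3: act on the result without human intervention.
        if label != "noise":
            file_ticket(summary=item["text"][:120], label=label)
```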
This matters because AI-Native teams ship faster, make better decisions with data, and free up time for work that requires human judgment. Companies that consciously invest in developing these capabilities position themselves to improve operational metrics and create products that genuinely enhance user experiences.
However, simply ‘adopting AI tools’ doesn't automatically make a team AI-Native. Building such a team requires a specific approach.
Many companies want to adopt AI, but they approach it backwards: they start with the tool, not the problem. This is why so many AI adoption efforts fail to create truly AI-Native teams.
Here's what typically happens:
1) Leadership approves an AI initiative based on industry trends rather than specific needs.
2) The company purchases licenses for hundreds or thousands of employees.
3) A training session happens — let’s say, an internal webinar everyone half-watches.
4) Three months later, usage reports show almost no one uses the tools. But the initiative is called a "success" because it didn't visibly fail. Licenses renew.
The core issue is that these initiatives prioritize looking innovative over being innovative. Success gets measured by seats purchased, not by actual problems solved.
This autumn, we worked with the TeamCity team at JetBrains to put the AI-Native approach into action through a dedicated training program.
The program was aimed at helping the team build custom AI-based solutions that solve real problems in their daily work.
Over several weeks, participants worked through a hands-on curriculum covering how to select appropriate AI models, build simple agents, prompt effectively, and understand AI tool capabilities and limitations. The training was led by Nikolay Vyahhi, MIT lecturer and Hyperskill CEO, with support from teaching assistants — AI engineers and product engineers with years of experience building products.
The TeamCity team progressed through four modules:
Each module combined learning with building. Participants worked on projects throughout the program, got feedback, iterated, and refined their work.
The process followed a three-step pattern: start with problems, build solutions, iterate with feedback.
The projects built during the program addressed real needs across different areas: product strategy, customer intelligence, quality assurance, content creation, and sales and marketing. Every project started with a problem. The framework was simple: find friction first. Participants identified pain points — repetitive tasks, high-load processes, or structured work where AI could meaningfully help — and had to answer: "What specific task takes too much time or produces inconsistent results?" Only then did they explore whether AI could help.
To assess skill mastery, we used a set of behavioral and outcome-based metrics drawn from learning analytics (e.g., activity, consistency, efficiency, and struggle indicators).
In addition, we evaluated the maturity of the resulting projects: how well they could be applied in practice.
In our evaluation framework, we measured three dimensions (a minimal scoring sketch follows the list):
1) Engagement (completion rates and continued usage),
2) Application (whether tools moved into production),
3) Knowledge (understanding of AI capabilities and limitations).
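As a rough illustration of how a framework like this can be scored, here is a toy computation. The participant records and field names below are hypothetical examples, not the actual program data.

```python
# Toy scoring along the three dimensions. The records and fields are
# hypothetical illustrations, not the actual TeamCity program data.

participants = [
    {"completed": True,  "still_using": True,  "in_production": True,  "quiz": 0.8},
    {"completed": True,  "still_using": False, "in_production": False, "quiz": 0.6},
    {"completed": False, "still_using": False, "in_production": False, "quiz": 0.3},
]

n = len(participants)
engagement  = sum(p["completed"] and p["still_using"] for p in participants) / n
application = sum(p["in_production"] for p in participants) / n
knowledge   = sum(p["quiz"] for p in participants) / n

print(f"Engagement:  {engagement:.0%}")   # completion and continued usage
print(f"Application: {application:.0%}")  # tools moved into production
print(f"Knowledge:   {knowledge:.0%}")    # capability/limitation assessments
```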
Twenty-two participants completed the program with an 86.4% success rate and a 52.3% average score across all modules. Strong engagement in the foundational modules (Essentials and Fundamentals) showed participants building a solid understanding before moving to advanced topics.
The team created 10 project prototypes, demonstrating deep understanding of the tools studied. This hands-on work proved more valuable than theoretical study alone. Our experts evaluated 7 of these 10 projects as having high potential to be developed into production solutions. Engagement in group work and project discussions was also strong, with an approximately 90% project inclusion rate.
At the end of the program, we provided analytical reports with visualizations and a team growth report with recommendations for next steps.
A natural pattern emerged: participants split into two key roles. Some became AI "drivers" — people who actively explore new AI applications, experiment with different approaches, and lead adoption efforts. Others became AI "implementers" — capable users who apply AI solutions effectively once they're established.
Top performers can now help others on their team adopt AI tools. They share what works, troubleshoot problems, and make conscious AI adoption stick. This ensures AI-Native practices continue long after the training ends.
As we move into 2026, the concept of AI-Native teams will only become more critical. So will the ability to distinguish between effective AI adoption and ineffective approaches.
Organizations that invest in developing these capabilities now will be better positioned to innovate and compete — but only if they're honest about what's working and what isn't, and willing to kill projects that aren't delivering value.
For teams interested in building an AI-Native approach, here's what's important to keep in mind:
The future belongs to teams that understand not just how to use AI tools, but when and why they add value. This is where product thinking meets AI adoption.
Is your team using AI, but the results feel inconsistent? We can help you identify what's working and what's not.
We'll review how you're currently using AI, identify scaling challenges, and clarify which problems are worth solving. After the call, you'll get practical recommendations for your specific situation.