Table of contents
What Is an AI-Native Team?
The Wrong Approach to AI Adoption
The Training for TeamCity: Building an AI-Native Team
Results from the TeamCity Training
Key Lessons for Building AI-Native Teams
Ready to Build Your AI-Native Team?

Building AI-Native Teams: The TeamCity Case Study

This post shares a case study from our AI training with the TeamCity team at JetBrains.
If you want the same outcome for your team, you can book a consultation with our experts. We’ll review how you’re currently using AI, identify the main constraints, and share practical recommendations tailored to your context.
> Book a free diagnostic call

Last year, we witnessed a significant shift in how product and cross-functional teams operate. The most forward-thinking companies are moving toward what we call AI-Native: teams that integrate AI tools into their daily workflows as naturally as they use development or CRM software.

But the term is vague, and different companies interpret it differently. Some think buying licenses makes them AI-Native; others believe a one-hour internal training session is enough to tick the box. Below, we'll explain what AI-Native actually means and show how TeamCity at JetBrains approached it with our training program.

The program, built around the AI-Native team idea, was designed for the TeamCity team: product managers, designers, marketers, and support and QA specialists. It also included leads and managers who needed a shared understanding of AI capabilities and applications. No coding experience was required; the focus was on aligning the team around AI and its practical application.

What Is an AI-Native Team?

An AI-Native team has five characteristics:

  1. Shared understanding — The team has a common language and aligned understanding of what AI can and cannot do. Everyone — from product managers to designers to support — understands how LLMs work, knows typical tools and task types, and can discuss solutions that emerge from AI automation. This creates a foundation for collaboration and quality improvement.
  2. Independence — Team members configure and use AI tools themselves. They don't wait for tech consultants.
  3. Judgment — They know when AI helps and when it doesn't. They build solutions for existing problems, not just because the technology exists.
  4. Continuous use — AI tools are a part of their regular workflow.
  5. AI implementation capability — Team members can build automated workflows that connect AI to their existing systems, custom agents that handle multi-step tasks specific to their work, RAG systems that let AI access and reason over the company's data, and integrated tools that fit into their daily processes.

We also need to be clear about what "AI tools" means here: systems that automate workflows by connecting AI to existing platforms (CRM, documentation, customer feedback tools, and more), agents that handle multi-step tasks automatically, and configurations that let AI work with company-specific data.
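
To make the first kind concrete, here is a minimal sketch of such a workflow, assuming the OpenAI Python SDK and an API key in the environment. The categories, feedback items, and model name are hypothetical placeholders, not anything built during the training:

```python
# A minimal sketch of an AI-automated workflow: feed items from an existing
# system into a model for categorization. The categories, feedback items,
# and model name are hypothetical, not from the training program.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = ["bug", "feature request", "pricing", "other"]

def categorize(feedback: str) -> str:
    """Ask the model to assign exactly one of the fixed categories."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                f"Classify customer feedback as one of: {', '.join(CATEGORIES)}. "
                "Reply with the category only."
            )},
            {"role": "user", "content": feedback},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

# In production, this loop would read from a CRM or feedback tool
# and write the categories back, instead of using a hardcoded list.
for item in ["The build agent crashes on startup", "Please add dark mode"]:
    print(item, "->", categorize(item))
```

The same pattern generalizes to most repetitive, structured tasks: keep the model's job narrow and verifiable, and let the surrounding code handle the integration with existing systems.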

This matters because AI-Native teams ship faster, make better decisions with data, and free up time for work that requires human judgment. Companies that consciously invest in developing these capabilities position themselves to improve operational metrics and create products that genuinely enhance user experiences.

However, simply "adopting AI tools" doesn't automatically make a team AI-Native. Building such a team requires a specific approach.

The Wrong Approach to AI Adoption

Many companies want to adopt AI, but they approach it backwards: they start with the tool, not the problem. This is why so many AI adoption efforts fail to create truly AI-Native teams.

Here's what typically happens:

  1. Leadership approves an AI initiative based on industry trends rather than specific needs.
  2. The company purchases licenses for hundreds or thousands of employees.
  3. A training session happens — let's say, an internal webinar everyone half-watches.
  4. Three months later, usage reports show almost no one uses the tools. But the initiative is called a "success" because it didn't visibly fail. Licenses renew.

The core issue is that these initiatives prioritize looking innovative over being innovative. Success gets measured by seats purchased, not by actual problems solved.

The Training for TeamCity: Building an AI-Native Team

This autumn, we worked with the TeamCity team at JetBrains to put the AI-Native approach into action through a dedicated training program.

The program was aimed at helping the team build custom AI-based solutions that solve real problems in their daily work.

Over several weeks, participants worked through a hands-on curriculum covering how to select appropriate AI models, build simple agents, prompt effectively, and understand AI tool capabilities and limitations. The training was led by Nikolay Vyahhi, MIT lecturer and Hyperskill CEO, with support from teaching assistants — AI engineers and product engineers with years of experience building products.

The curriculum was organized into four modules: 

  1. AI Essentials and Ethics covered the fundamentals: how AI works, where it fails, and how to use it responsibly. 
  2. Large Language Models and Multimodality explored what LLMs can and cannot do, with hands-on practice. 
  3. Context Engineering with RAG and Advanced Prompting focused on techniques that bring better results from AI tools. 
  4. Building Agentic Workflows taught participants to create AI agents that handle multi-step tasks.
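
To give a flavor of what the final module covers, here is a minimal sketch of an agentic loop, assuming the OpenAI Python SDK's tool-calling interface: the model decides when to call a tool, the code executes the call and returns the result, and the loop repeats until the model produces a final answer. The tool, its fake data, and the model name are illustrative stand-ins, not participants' actual projects:

```python
# Minimal agentic loop sketch: the model requests tool calls, we execute
# them and feed results back until it produces a final answer.
# The tool and its data are hypothetical stand-ins for a real integration.
import json
from openai import OpenAI

client = OpenAI()

def get_build_status(build_id: str) -> str:
    """Stand-in for a real call to a CI server's REST API."""
    return json.dumps({"build_id": build_id, "status": "failed", "failed_tests": 3})

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_build_status",
        "description": "Look up the status of a CI build by its id.",
        "parameters": {
            "type": "object",
            "properties": {"build_id": {"type": "string"}},
            "required": ["build_id"],
        },
    },
}]

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS
        )
        msg = resp.choices[0].message
        if not msg.tool_calls:  # no tool requests left: this is the final answer
            return msg.content
        messages.append(msg)
        for call in msg.tool_calls:  # execute each requested call
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": get_build_status(**args),
            })
    return "Stopped after reaching the step limit."

print(run_agent("Check build 42 and summarize what went wrong."))
```

Real multi-step agents add more tools, error handling, and guardrails, but the core loop of requesting a tool, executing it, and feeding the result back stays the same.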

Each module combined learning with building. Participants worked on projects throughout the program, got feedback, iterated, and refined their work.

The process followed a three-step pattern: start with problems, build solutions, iterate with feedback.

The projects built during the program addressed real needs across different areas: product strategy, customer intelligence, quality assurance, content creation, and sales and marketing. Every project started with a problem, and the framework was simple: find friction first. Participants identified pain points (repetitive tasks, high-load processes, or structured work where AI could meaningfully help) and had to answer: "What specific task takes too much time or produces inconsistent results?" Only then did they explore whether AI could help.

Results from the TeamCity Training

To assess how well participants acquired these skills, we used a set of behavioral and outcome-based metrics drawn from learning analytics (e.g., activity, consistency, efficiency, and struggle indicators).

In addition, we evaluated the maturity of the resulting projects: how ready each one is to be applied in practice.

In our evaluation framework, we measured three dimensions:

  1. Engagement (completion rates and continued usage)
  2. Application (whether tools moved into production)
  3. Knowledge (understanding of AI capabilities and limitations)

22 participants completed the program with an 86.4% success rate and a 52.3% average score across all modules. Strong engagement in the foundational modules (Essentials and Fundamentals) showed participants building a solid understanding before moving on to advanced topics.

The team created 10 project prototypes, demonstrating a deep understanding of the tools studied. This hands-on work proved more valuable than theory alone. Of these 10 projects, 7 were evaluated by our experts as having high potential to be developed into production solutions. Engagement in group work and project discussions reached an approximately 90% project inclusion rate.

At the end of the program, we provided analytical reports with visualizations and a team growth report with recommendations for next steps.

Table 1. A snippet of the analytical report

A natural pattern emerged: Participants segmented into two key roles. Some became AI "drivers" — people who actively explore new AI applications, experiment with different approaches, and lead adoption efforts. Others became AI "implementers" — capable users who apply AI solutions effectively once they're established.

Table 2. AI Drivers list

Top performers can now help others on their team adopt AI tools. They share what works, troubleshoot problems, and make conscious AI adoption stick. This ensures AI-Native practices continue long after the training ends.

Key Lessons for Building AI-Native Teams

As we move into 2026, the concept of AI-Native teams will only become more critical. So will the ability to distinguish between effective AI adoption and ineffective approaches.

Organizations that invest in developing these capabilities now will be better positioned to innovate and compete — but only if they're honest about what's working and what isn't, and willing to kill projects that aren't delivering value.

For teams interested in building an AI-Native approach, here's what's important to keep in mind:

  • Find friction first. The best projects come from identifying pain points before exploring solutions. Look for repetitive tasks, high-load processes, or structured work where AI can meaningfully help. Use simple filters: Is it repetitive? Cognitively heavy? Structured? If yes to all three, AI might help.
  • Be specific about problems. "Improve productivity" is too vague. "Reduce time spent categorizing customer feedback from 2 hours to 20 minutes" is specific and measurable.
  • Start with quick wins. Build confidence through small, high-impact projects first — email triage, document summarization, survey synthesis. These create momentum and show what's possible.
  • Give people time to practice. One training session isn't enough. People need weeks to build applicable skills through hands-on work.
  • Create space for experimentation and failure. Not every project will work, but you can learn a lot from mistakes. Make it safe to try and fail.
  • Build a culture where curiosity and continuous learning are valued. People need psychological safety to try new approaches. Share progress visibly through demos and async updates so others can learn.
  • Set clear guidelines. Help people understand what's safe to try (green), what requires caution (yellow), and what's off-limits (red). This removes uncertainty and speeds up adoption.
  • Measure value created, not seats purchased. Track whether solutions solve real problems and save actual time.
  • Start with motivated people. Our experience showed that intrinsic motivation to learn significantly impacts outcomes. Let these motivated individuals become drivers who help others adopt AI tools.
  • Be honest. If something isn't working, say so. Kill projects that don't deliver value. Celebrate both successes and useful failures.

The future belongs to teams that understand not just how to use AI tools, but when and why they add value. This is where product thinking meets AI adoption.

Ready to Build Your AI-Native Team?

Is your team using AI but seeing inconsistent results? We can help you identify what's working and what's not.

We'll review how you're currently using AI, identify scaling challenges, and clarify which problems are worth solving. After the call, you'll get practical recommendations for your specific situation.

> Book a free diagnostic call
