After a year of layoffs and outages, leaders are under intense pressure to increase efficiency and reliability despite having fewer resources.
According to this survey, 70% of software teams are adopting AI.
In other words, capitalizing on AI is fast becoming an industry standard.
Teams that have adopted AI report an average increase in software development speed of 250% — that is, they now move at roughly two and a half times their previous pace, so work that once took 8 hours gets done in just over 3.
Those same teams expect their efficiency to reach 350% of baseline within a year.
Successfully adopting AI can, and regularly does, lead to engineering teams unlocking a substantial competitive advantage.
On the flip side, teams not actively adopting (or actively avoiding) AI are signing up for an enormous opportunity cost, leaving efficiency and money on the table.
AI is capable of drastically improving efficiency in several areas.
CTOs and engineering leaders commonly evaluate AI for multiple purposes, from product enhancements to operational optimization.
Your AI adoption's success hinges on a robust data and tech infrastructure. Each type of adoption has its own requirements, but this foundation applies across scenarios, whether you're enhancing products, optimizing operations, refining data analytics, elevating customer engagement, or something else.
Low-quality data disrupts AI adoption (not to mention your operations), so set data integrity standards early. Protocols vary: for customer engagement, data should be current and relevant, while operational AI needs both historical and real-time data.
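As a concrete illustration of one such data-integrity standard, here is a minimal sketch of a freshness check. The 24-hour SLA and the `updated_at` field are illustrative assumptions, not a prescription; the right threshold depends on the use case.

```python
from datetime import datetime, timedelta, timezone

# Minimal sketch of a data-freshness check. The 24-hour SLA and the
# "updated_at" field are illustrative assumptions; the right threshold
# depends on the use case (customer engagement vs. operational AI).
FRESHNESS_SLA = timedelta(hours=24)

def stale_records(records: list[dict]) -> list[dict]:
    """Return records whose 'updated_at' timestamp breaches the freshness SLA."""
    now = datetime.now(timezone.utc)
    return [r for r in records if now - r["updated_at"] > FRESHNESS_SLA]

records = [
    {"id": 1, "updated_at": datetime.now(timezone.utc) - timedelta(hours=1)},
    {"id": 2, "updated_at": datetime.now(timezone.utc) - timedelta(days=3)},
]
print([r["id"] for r in stale_records(records)])  # [2]
```

A check like this can run as a scheduled job that flags stale records before they feed an AI pipeline.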
When using operational tools like Jira or Slack, teams should adopt communication best practices that keep data available: meaningful discussions belong in public channels, not DMs.
Legacy systems can be impediments or even liabilities, depending on the type of AI adoption you're considering. For product-focused AI, sluggish data access through legacy systems can be a bottleneck, while for operational AI, outdated systems or tools will impede real-time decision-making.
Don't forget to plan for data backup and recovery strategies. These measures are necessary not just for operational continuity but also to ensure the long-term viability of your AI systems.
Injecting AI into your operations is as much about people as it is about algorithms. AI adoption isn't just the task of a lone AI specialist or an isolated task force; it's an organizational shift. Here's your playbook:
Effective AI adoption often involves a designated team or individual. Depending on company size, some roles might be part-time.
A mix of roles typically makes up an AI task force.
Your task force needs familiarity with AI tools, how AI works, and AI ethics. If they don't already have it, form partnerships or run upskilling programs to fill the gaps. They need to speak both the language of engineering operations and the language of AI.
AI adoption is a company-wide undertaking that needs the backing of the C-Suite.
If you're the CTO, or another top-ranked engineering leader, you'll know that active support and continuous engagement from leadership correlate directly with successful outcomes. Take an active leadership role in your AI adoption project. If that isn't you, make the case for this endorsement and engagement from your decision-makers.
Transparency and good operational hygiene are prerequisites for the operational adoption of AI. Getting this right may look like:
Universally high standards for project management software use. Tools like Jira or Linear are kept consistently up to date, and tickets are described completely and accurately.
Process-driven, verbose version control and CI/CD. Pull requests are properly described, and commit messages are uniform and follow best practices.
Teams communicate openly and transparently. Conversations happen in public channels unless there is a specific reason (such as data sensitivity or operational and strategic irrelevance). This is particularly important for distributed or remote teams.
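Version-control hygiene like this can be enforced mechanically. Below is a minimal sketch of a commit-message check, assuming (as an illustrative choice, not something prescribed above) that the team standardizes on a Conventional-Commits-style format:

```python
import re

# Minimal sketch of a commit-message convention check, assuming a
# Conventional-Commits-style format: "type(scope): summary".
# The allowed types and the 72-character limit are illustrative choices.
COMMIT_PATTERN = re.compile(
    r"^(feat|fix|docs|refactor|test|chore)(\([\w-]+\))?: .{1,72}$"
)

def is_well_formed(message: str) -> bool:
    """Return True if the first line of a commit message follows the convention."""
    first_line = message.splitlines()[0] if message else ""
    return bool(COMMIT_PATTERN.match(first_line))

print(is_well_formed("feat(auth): add SSO login flow"))  # accepted
print(is_well_formed("fixed the thing"))                 # rejected
```

Wired into a pre-commit hook or CI job, a check like this keeps commit history uniform enough for both humans and AI tools to parse reliably.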
Your task force needs more than just a passing familiarity with AI. They need operational know-how, executive backing, and a transparent, accountable culture.
Here's how to ensure your AI adoption is robust, relevant, and value-generating.
When we get it right, innovation is a competitive advantage.
The most successful AI-adopting businesses move fast and flexibly with their AI strategy. They run experiments to assess the effectiveness of operational AI solutions and allocate budget to make them happen.
Place decision-making power for experiments with your AI task force.
It’s a good idea to select pilot projects. Always define what success will look like and be prepared to learn lessons from failed pilots.
Define clear objectives for AI adoption, ensuring AI efforts align with desired business outcomes.
AI can offer quick wins, but it also promises long-term transformation. Start small but focus on projects that deliver immediate ROI. Use these quick wins as proof of concept for larger, more complex projects.
Assess every AI project's potential value and feasibility. Avoid just following AI trends; emphasize projects that significantly improve operations. Strategic planning and prioritization are foundational for a fruitful AI strategy.
AI adoption is most successful when AI task forces keep an open mind about the kinds of solutions that can unlock efficiency gains. After all, AI is one of the fastest-evolving technologies in our space.
Be open to reassessing, pivoting, or expanding your AI strategy. Continual reassessment isn't a luxury; it's a necessity for adapting to market shifts and emerging opportunities.
Don't make AI decisions in a vacuum. Involve key stakeholders in the process. Ensure everyone is aligned and invested in the AI journey and the tools you’re adopting.
Businesses are often held back from AI adoption because they lack a grasp of the nuances of AI ethics.
Don't treat AI ethics as an afterthought. Make it central to your AI adoption strategy.
A dedicated AI ethics committee – or AI ethics lead at smaller businesses – is a great way to establish awareness, education, and accountability around AI ethics.
Your AI ethics function should have both knowledge and authority across a range of topics.
Your AI ethics function isn’t there to roll out the red tape, but rather to help others feel confident in their AI adoption approach and position your organization for sustainable, responsible AI-driven growth.
Some AI implementations are inherently riskier than others. For example, you might be using AI to help your team understand the code in your projects. This doesn’t carry the same risk as, say, using AI to segment user data.
Develop risk thresholds that fast-track low-risk projects and apply extra scrutiny to high-risk ones.
For example, giving an AI model read-only access to infrastructure code that does not involve user data might constitute a low risk.
Conversely, an AI project involving user-facing actions based on AI analysis of user data would carry numerous risks, from data security and privacy risks to bias and discrimination risks.
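One way to operationalize risk thresholds like these is a simple scoring rubric. The factors and cutoffs below are illustrative assumptions, not an established standard; adapt them to your own compliance and security requirements.

```python
# Minimal sketch of a risk-tiering rubric for AI projects. The factors,
# weights, and cutoffs are illustrative assumptions, not a standard.

def risk_tier(touches_user_data: bool, takes_user_facing_actions: bool,
              has_write_access: bool) -> str:
    """Classify a proposed AI project into a coarse risk tier."""
    score = sum([
        2 if touches_user_data else 0,          # privacy and bias exposure
        2 if takes_user_facing_actions else 0,  # direct impact on users
        1 if has_write_access else 0,           # can modify systems or data
    ])
    if score == 0:
        return "low"     # e.g. read-only access to non-user-data code
    if score <= 2:
        return "medium"  # needs review, but can move quickly
    return "high"        # e.g. user-facing actions driven by user data

# Read-only code-understanding assistant: fast-track candidate.
print(risk_tier(False, False, False))  # low
# AI taking user-facing actions based on user-data analysis: audit first.
print(risk_tier(True, True, True))     # high
```

A rubric like this gives the ethics function a repeatable way to route projects: low-tier work proceeds with a lightweight checklist, while high-tier work triggers the deeper reviews and audits described below.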
Understanding AI regulations is vital to avoid legal risks. Embed data privacy from the project's start so user data is respected throughout.
High-risk projects may benefit from – or require – third-party audits to help identify vulnerabilities and blind spots.
Ethical AI practices are both the right thing to do and crucial for building and maintaining consumer trust. Lose that trust and you risk everything from bad PR to legal issues.
You’re probably already sitting on a goldmine of data – your product engineering team’s operational data.
We’re talking about the flow of tickets through Jira or Linear, the commits and pull request activity in GitHub, and even your team’s conversations in Slack.
With my team at Stepsize AI, I'm building a tool that makes your operational data work for you.
We're working hard on the tool, and I'd absolutely love your feedback on its features and on the alignment or collaboration problems you're trying to solve on your product engineering team.
Our goal is a tool that's easy to set up, security-first, and delivers automatic updates and insights that drive action right when it's needed.
The Stepsize AI Operational Intelligence Engine understands everything happening in your product engineering org.
It uses this to create powerful, reliable updates, including automated standups, sprint and Kanban reviews, executive summaries, and more.
With Stepsize AI, you can:
Effortlessly achieve alignment
Cut meetings and boost productivity
Get instant answers with zero context-switching
Again, if you’ve got a collaboration or alignment problem you’re trying to solve, I’d love to chat and would hugely appreciate your feedback.