A conversation with Aleksandra Osipova on why AI activity is rising faster than operational results
Artificial intelligence has already entered everyday work. Teams use it to draft, summarize, analyze, and produce faster than before. Yet inside many companies, the larger gains remain oddly elusive.
The gap is becoming harder to ignore. Deloitte’s 2025 survey found that AI investment is rising quickly, but returns remain slow: 85 percent of organizations increased investment, 91 percent plan to increase it further, and only 6 percent reported payback in under a year. McKinsey’s 2025 State of AI survey found a similar pattern: many organizations reported business-unit cost reductions, but more than 80 percent saw no tangible enterprise-level EBIT impact, and only 21 percent had fundamentally redesigned at least some workflows.
We spoke with Aleksandra Osipova, founder of Apricity Lab, about why AI activity is accelerating inside organizations faster than measurable performance. She helps leaders improve operational performance by redesigning workflows, decision-making, and operating models around AI, with a focus on readiness and practical adoption that creates operational leverage.
Osipova argues that AI initiatives often fail to improve operational performance when gains stay local, workflows go unchanged, and governance trails adoption.
Interviewer: Aleksandra, what is the core reason AI use can increase inside a company without improving operational performance?
Aleksandra Osipova: Because more AI activity is not the same thing as better operations. In many organizations, AI improves individual tasks before it improves the system those tasks belong to. People analyze and write faster, ship more features, but the surrounding workflow often stays the same. The same handoffs, approval queues, unclear ownership, and decision delays remain in place. So output rises, while overall performance changes much more slowly.
Interviewer: So the problem is not usually whether the tool works?
Aleksandra Osipova: Usually not. The deeper issue is whether AI is integrated into how the organization runs. If the technology sits on top of an unchanged operating model, you get isolated gains rather than system-level improvement.
Interviewer: What are the most common reasons AI fails to improve operational performance?
Aleksandra Osipova: I see three repeatedly. First, gains stay at the level of individual productivity. People improve their own tasks, but those gains do not carry across teams or end-to-end workflows. Second, organizations adopt tools faster than they redesign processes. AI gets introduced into work, but ownership, decision paths, and workflow logic remain unchanged. Third, governance lags behind adoption. Teams start using AI in day-to-day work before leadership has clearly defined risk boundaries, review logic, or accountability.
Interviewer: Can you give a concrete example of what that looks like in practice?
Aleksandra Osipova: A common example is when one part of the workflow speeds up, but the next step does not. A team may generate material faster, but legal review, approval, or prioritization still moves at the old speed. Or a team can analyze more customer data, but decision-making rights remain unclear, so nothing moves faster downstream. In those cases, AI accelerates local output without improving flow through the system.
Interviewer: Many executives still expect near-perfect output. How does that affect adoption?
Aleksandra Osipova: It creates a false standard. The question is not whether AI is perfect. It will not be. The question is where accuracy must be extremely high, where approximation is acceptable, and who is responsible for verification.
When organizations do not make those distinctions, they often default to reviewing everything. That may feel safe, but if it becomes permanent, it absorbs the gains AI creates. The business ends up paying for uncertainty through slower decisions, more manual checking, and weaker compounding.
Interviewer: But if you do not know where the model fails, how can leaders do anything other than review everything?
Aleksandra Osipova: Sometimes they cannot, at least at first. If leaders do not understand where a model is likely to fail, blanket review is a rational response. Some tools are a poor fit for a task, or the output is too inconsistent to support selective trust.
But that should be temporary, not a permanent operating model. If every workflow defaults to universal review, the organization absorbs uncertainty through slower decisions, more manual work, and weaker compounding of gains.
I see this repeatedly in practice. Teams say the model performs well most of the time, but fails badly enough in a small share of cases that they stop trusting it selectively. One operator described it as excellent for the 90 percent it gets right and catastrophic for the 10 percent it gets wrong. Unless you know where that 10 percent lives, you end up checking everything.
The task is to make uncertainty visible and manageable: identify failure modes, decide which errors are tolerable, put anomaly detection in place, and assign ownership for verification.
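The selective-trust idea Osipova describes can be sketched in a few lines. This is a hypothetical illustration, not any organization's actual policy: the segment names, error rates, tolerance levels, and confidence threshold are all invented, and a real setup would derive them from audited samples of past outputs.

```python
# Hypothetical sketch: route AI outputs to human review only where known
# failure modes or low confidence warrant it, instead of reviewing everything.
# All segment names, rates, and thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Output:
    segment: str        # which kind of case the item belongs to, e.g. a contract type
    confidence: float   # model's calibrated confidence score for this item

# Observed error rate per segment, e.g. from a sampled audit of past outputs.
ERROR_RATE = {"standard": 0.02, "edge_case": 0.35}

# Error rate the workflow owner tolerates without human review, per segment.
TOLERANCE = {"standard": 0.05, "edge_case": 0.01}

def needs_review(item: Output) -> bool:
    """Flag an item for verification when its segment's observed error rate
    exceeds what the business tolerates, or when the model itself reports
    low confidence (a simple anomaly signal). Unknown segments default to
    review, since their failure modes have not been mapped yet."""
    risky_segment = ERROR_RATE.get(item.segment, 1.0) > TOLERANCE.get(item.segment, 0.0)
    low_confidence = item.confidence < 0.7
    return risky_segment or low_confidence

items = [Output("standard", 0.95), Output("edge_case", 0.92), Output("standard", 0.40)]
review_queue = [i for i in items if needs_review(i)]  # only 2 of 3 need review
```

The point of the sketch is the structure, not the numbers: once failure modes are mapped per segment and tolerances are decided explicitly, the "catastrophic 10 percent" gets routed to humans while the rest flows through.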
Interviewer: Why do so many companies stay stuck in pilot mode?
Aleksandra Osipova: Pilot mode often persists because it allows visible activity without structural change. Teams can test tools, produce demos, and point to use cases without redesigning the underlying workflow, ownership model, or decision structure.
That is why many organizations can say they are using AI, but still cannot point to a meaningful operational shift. They have experimentation, not integration.
Interviewer: What do leaders most often misunderstand when they try to move beyond pilots?
Aleksandra Osipova: They start with the tool instead of the constraint. The conversation quickly becomes which model, which vendor, or which feature set. Those questions matter, but they usually come too early.
A better starting point is operational. Where does work stall? Where does effort fail to turn into output? Where are teams producing work that does not carry forward? Where are people spending time on repetitive tasks that do not create real value? Those questions reveal where AI can actually improve performance.
Interviewer: What should leaders measure if they want to know whether AI is improving operations?
Aleksandra Osipova: They should not rely on usage and productivity metrics alone. Activity metrics can be misleading. The more important measures are cycle time, handoff delays, rework, throughput, decision latency, and how much work actually moves forward without added friction.
If AI increases activity but does not improve flow, speed of decision, or usable output, then operational performance has not really improved.
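Metrics like cycle time and decision latency fall out of timestamped workflow events. A minimal sketch, assuming a hypothetical event log (the event names and timestamps are invented), shows how a fast AI-assisted drafting step can coexist with an unchanged approval bottleneck:

```python
# Illustrative sketch: deriving flow metrics from timestamped workflow events.
# The event log below is invented for the example.

from datetime import datetime

def hours_between(log, start_event, end_event):
    """Elapsed hours between two named events in one work item's history."""
    times = {e["event"]: datetime.fromisoformat(e["at"]) for e in log}
    return (times[end_event] - times[start_event]).total_seconds() / 3600

# One work item: the draft is produced quickly (AI-assisted),
# but review and approval still move at the old speed.
item_log = [
    {"event": "created",        "at": "2025-03-03T09:00"},
    {"event": "draft_ready",    "at": "2025-03-03T11:00"},
    {"event": "review_started", "at": "2025-03-06T09:00"},
    {"event": "approved",       "at": "2025-03-07T15:00"},
]

production_time = hours_between(item_log, "created", "draft_ready")    # 2.0 hours
decision_latency = hours_between(item_log, "draft_ready", "approved")  # 100.0 hours
cycle_time = hours_between(item_log, "created", "approved")            # 102.0 hours
```

Here production took two hours while the end-to-end cycle took over four days: exactly the pattern where local output accelerates but flow through the system does not.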
Interviewer: What does effective AI adoption look like from an operational perspective?
Aleksandra Osipova: It usually starts with one clear intervention in a real workflow. Not a broad promise, but a specific point where time, cost, delay, or friction can be reduced. Then the organization redesigns the surrounding process so the gain can actually travel through the system.
That includes defining who owns the step, what gets reviewed, where risk sits, how exceptions are handled, and what metric will show improvement. AI creates the most value when the organization changes with it.
Interviewer: Where do you tend to create the most meaningful operational change?
Aleksandra Osipova: Usually in one of two situations. Either a pilot has not delivered the results leadership expected, or there is pressure to show a clearer return on AI efforts already underway. In both cases, the challenge is often the same: too much activity, not enough clarity.
I help identify where value can be created fastest, what should be prioritized first, and which risks need to be made visible early. A large part of that is simplification. The goal is not to do everything at once. It is to find the simplest intervention that can create the clearest operational gain.
Interviewer: You have worked across mathematics, complex systems, data science, and AI product development. How does that shape the way you approach this work?
Aleksandra Osipova: It makes me pay attention to both the system and the constraint. Performance depends on how multiple layers fit together: the workflow, the decision logic, the technical reality, the business need, and the practical limits of execution. That background helps me move between those levels, simplify what is overly complex, and build a clearer operational model for change.
Interviewer: If you had to leave leaders with one principle, what would it be?
Aleksandra Osipova: AI does not improve performance just because people are using it. It improves performance when organizations redesign how work moves, how decisions get made, and how value flows through the system.
Interviewer: Where should readers go to follow your work?
Aleksandra Osipova: LinkedIn is the best place to follow my thinking, and readers can also learn more about Apricity Lab and the work we do with organizations improving operational performance through AI.
Interviewer: Thank you. If there is one takeaway here, it is this: AI creates value only when organizations redesign how work flows, how decisions get made, and how gains compound across the business. For many organizations, that work is still ahead.