IT Strategy

The Future of AI Strategy: Lessons From MIT and What They Mean for Your Business

Discover how Intersog is transforming businesses with AI, drawing on insights from MIT. In an exclusive interview, cofounder Vadym Chernega shares practical lessons on driving AI success—focusing on strategy, not hype.

At Intersog, we believe growth never stops. Everyone on our team is passionate about learning, developing new skills, and striving for excellence. That culture extends to our leadership, too.

Vadym Chernega

Our cofounder, Vadym Chernega, has always been hands-on with research and development, leading AI pilot projects and helping clients explore automation, data analytics, and integrations. Recently, he decided to strengthen his expertise by completing one of the most intense programs available to executives today: MIT’s Executive Program – Artificial Intelligence: Implications for Business Strategy.

We sat down with Vadym to talk about what he learned, why it matters, and how Intersog is already applying those lessons in real client projects.

- Vadym, what was the biggest takeaway from your time at MIT?

Vadym: One of the strongest lessons is that AI isn’t about technology for its own sake. As MIT frames it: strategy beats buzz. It’s not about ‘what tool should we try?’ but ‘what business problem are we solving?’ Recently, we worked on a project for Relm Insurance alongside Freestone.AI. The goal wasn’t just to experiment with AI, but to improve underwriting precision and operational efficiency. We built RelmInsight, a platform that provides a conversational interface to Relm’s data. Underwriters can now ask nuanced questions and get comprehensive answers, accelerating decision-making and improving accuracy. This is AI making real business sense — not headlines.

- MIT talks about AI adoption following a J-curve. How do you see that play out with our clients?

Vadym: That J-curve is real. At first, the returns can feel modest, but once an organization crosses about 25% adoption intensity, the upside is tremendous. We saw this with our global supply chain client. We embedded AI into inventory forecasting and logistics planning. The result? Customers and vendors can access information on what’s inside huge shipping containers and see predictive arrival times in real time. Plus, the business itself saw an almost 60% improvement in operational efficiency. If you are testing AI in supply chains, you can’t just dip your toe in — you have to commit to scaling AI across workflows.

- Research says that 95% of generative AI pilots fail. What’s happening there?

Vadym: MIT research confirms it: 95% of generative AI pilots don’t generate measurable impact. And it’s not because AI doesn’t work. It’s because integration is missing. Too many pilots remain siloed. That’s why I’m proud of what we built with Pyxos. They faced the complex world of data and privacy compliance, where regulations are daunting and costly. Together, we created an AI-powered compliance co-pilot that automates policy drafting, analyzes overlapping privacy laws, and gives real-time guidance. The platform allows you to chat with multiple agents in one conversation, each offering a unique perspective on compliance based on its expertise (for example, compliance, marketing, or engineering). After the launch, we also gathered user feedback and implemented an onboarding tooltip wizard to guide customers. It helped us increase user engagement significantly.

- Another common challenge in AI now is model hallucinations, where systems confidently output incorrect or fabricated information. There was an article in Mind Matters AI that showed how ChatGPT-5 hallucinated flawed reasoning and visuals when tested on a 'rotated' tic-tac-toe puzzle, despite claiming PhD-level expertise. How would you address hallucinations in AI solutions?

Vadym: Hallucinations are indeed a critical risk, especially in generative models, and that’s why MIT emphasizes grounding AI in strategy and responsibility rather than unchecked deployment. At Intersog, we mitigate this by designing systems that are deeply integrated with verified data sources and include robust validation layers—such as retrieval-augmented generation (RAG) to pull from real client data rather than inventing responses. In our RelmInsight platform, for example, answers are always tied to traceable data points, preventing baseless outputs. We also incorporate human-in-the-loop oversight, where users can flag and correct issues, and we conduct rigorous testing against edge cases. This not only reduces hallucinations but builds trust, ensuring AI delivers accurate, business-relevant insights instead of confident errors.
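The grounding idea Vadym describes can be illustrated with a minimal sketch. This is not RelmInsight’s actual implementation — the document store, the keyword-overlap retrieval, and every document ID here are simplified placeholders (a production RAG system would use vector embeddings and an LLM to compose the final answer). The point it demonstrates is the pattern: retrieve from verified sources first, and return every answer with traceable citations.

```python
# Minimal RAG-style grounding sketch. All documents and IDs below are
# hypothetical examples, not real client data.

DOCUMENTS = {
    "policy-2023-104": "Policy 2023-104 covers marine cargo up to 2M with a 50K deductible.",
    "claims-guide": "Claims on marine cargo require a survey report filed within 30 days.",
    "underwriting-memo": "Underwriting memo: emerging-risk lines need senior sign-off above 1M.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by word overlap with the query; return the top-k (id, text) pairs.
    A real system would use semantic/vector search instead of keyword overlap."""
    q_words = set(query.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,  # Python's sort is stable, so ties keep insertion order
    )
    return scored[:k]

def grounded_answer(query: str) -> dict:
    """Answer only from retrieved sources, keeping traceable citations
    so no output is produced without a supporting data point."""
    sources = retrieve(query)
    return {
        "answer": " ".join(text for _, text in sources),
        "sources": [doc_id for doc_id, _ in sources],  # traceable data points
    }

result = grounded_answer("What is the deductible on marine cargo policy 2023-104?")
print(result["sources"])
```

Because the answer is assembled exclusively from retrieved text and ships with its source IDs, a reviewer (the human in the loop) can verify every claim — the property that makes hallucinated, unsupported output structurally impossible in this design.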

- While trying to prevent issues like hallucinations, how would you ensure AI systems remain trustworthy and responsible, especially when handling user data?

Vadym: Trust isn’t optional. Every solution we deliver has transparency, governance, and explainability built in. With Relm Insurance, claims automation decisions are traceable. For our supply chain client’s forecasting, we built human oversight loops. With Pyxos, we made sure AI recommendations come with user-friendly explanations. It’s all about building AI that partners with people — not black boxes.

- What about the human side? How do people benefit?

Vadym: MIT and BCG found that when employees feel AI empowers them, companies are nearly six times more likely to see financial benefits. We see that across our clients. Relm Insurance staff felt freed from repetitive tasks and could focus on higher-value work. Supply chain planners became more confident because AI forecasts gave clarity, not confusion. Pyxos users felt empowered because AI simplified their compliance work instead of overwhelming them. When people win, the business wins.

- So Vadym, if you had to sum it up — what does the future of AI strategy look like?

Vadym: We’re not here to chase the latest shiny AI tool. We’re here to make AI work for the business. That means starting with a clear strategy—not just testing tech for the sake of it. It’s time to move past small pilots and start scaling what works. We build AI that people can trust—designed to be responsible, easy to understand, and grounded in real-world needs. And we roll it out in a way that helps clients’ teams do more, not get replaced. The result? Smarter operations, faster decisions, and a measurable boost to profitability.

At Intersog, MIT’s lessons aren’t theory for us — they’re the backbone of how we design and deploy AI solutions. If you’re ready to explore where AI fits into your business roadmap, we’d love to talk.