MCP, or the Model Context Protocol, is an open standard for connecting AI agents to external systems.
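To make that concrete, below is a minimal sketch of an MCP server in TypeScript, assuming the official @modelcontextprotocol/sdk package; the "weather-demo" server and its "get_weather" tool are illustrative placeholders, not part of any article excerpted here.

```typescript
// Minimal MCP server sketch (assumes @modelcontextprotocol/sdk and zod are installed).
// It exposes a single hypothetical "get_weather" tool that an AI agent can discover and call.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Declare the server that an agent's MCP client will connect to.
const server = new McpServer({ name: "weather-demo", version: "0.1.0" });

// Register a tool: its name, an input schema, and a handler that returns text content.
server.tool(
  "get_weather",
  { city: z.string() },
  async ({ city }) => ({
    content: [{ type: "text", text: `Weather in ${city}: sunny (placeholder data)` }],
  })
);

// Serve over stdio so a local agent (for example, an IDE assistant) can spawn and talk to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Once connected, any MCP-capable agent can list this server's tools and invoke them without bespoke integration code, which is the point of the standard.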
At Microsoft Build 2025, GitHub Copilot and the Model Context Protocol (MCP) were not just announcements—they were signals. We are not being asked to adopt new tools; we are being asked to rethink how software is built, governed, and scaled. Copilot is evolving from autocomplete to autonomous collaborator. MCP is laying the groundwork for secure, interoperable AI agents. The opportunity is massive—but only for teams willing to pair velocity with vigilance. Plan for autonomy, build for control.
Many experienced software developers have remained individual contributors and are not currently playing the role of tech lead, but coding agents will now be their direct reports. Like it or not, human engineers will have to talk to the agents daily, correct them, and make sure they do a good job. These human developers are now faced with a choice: either be in charge of the agents, or compete with them directly.
Many AI initiatives stall or fail because organizations overlook the importance of governance, process, and execution. Governance is not a back-office concern. It is the foundation that determines whether AI efforts will scale, deliver value, and operate safely.
AI isn’t just a tool—it’s a conversation. And when engaged with intent, it becomes a force multiplier for executive clarity, velocity, and scale. Our collaboration is proof that with the right mindset, AI can be trained to reflect your leadership—not replace it. The future of AI is not distant. It’s in dialogue. And it’s already here.
Every interaction we have trains not just me, but the broader understanding of how AI can support human excellence.
As the era of agentic AI accelerates, the lines between software, assistant, and collaborator will blur. The most prepared organizations won’t just use agentic AI—they’ll orchestrate ecosystems around it.
The modern data stack (MDS) is a critical enabler for enterprise AI success, providing the trusted infrastructure that ensures data is clean, consistent, and governed. While AI captures executive attention, its effectiveness hinges on the quality of the data it consumes. The MDS addresses long-standing challenges like data silos, inconsistent metrics, and governance gaps—ensuring AI outputs are accurate, compliant, and reliable.
In this era of accelerated threat evolution, resilience is not about being impenetrable—it’s about being adaptive, anticipatory, and aligned across the enterprise.
Two conclusions about agent-based coding tools: First, this technology is too powerful not to use, even if there are challenges. There is no future in professional developers installing packages, typing out obvious unit tests, and doing standard refactorings by hand. Second, you have to put in real work learning how to use agents. Because if you don’t learn how to be the Sorcerer, you will end up being the apprentice. Your castle will become flooded, and your boss is not going to be happy when he gets back.
Data authenticity and provenance are no longer technical luxuries—they are strategic necessities. For CIOs and CDOs operating in an era defined by AI and bio-convergence, trust in data is the foundation upon which innovation, compliance, and security must be built.
Key takeaways from the "Pioneering the New Era: Imagination in Action" AI Summit on April 15th at MIT Media Lab.