Hi, this is Nesibe!
It’s happening—Agentic AI is no longer just a concept. The past two weeks have been a whirlwind of announcements from Microsoft, Google, OpenAI, and others. Their message is clear: Agentic AI is here, and it’s already reshaping how we think about the future of work and intelligence.
For years, we’ve relied on AI agents to automate tasks, optimize workflows, and assist in decision-making. These tools have become integral to countless industries. But Agentic AI aspires to be much more: systems that don’t just follow instructions but reason, plan, and collaborate as dynamic partners.
Sam Altman has stated, “We may see the first AI agents join the workforce in 2025 and materially change the output of companies.” Meanwhile, Microsoft’s $80 billion infrastructure investments and Nvidia’s advancements are turning this vision into reality, one where AI agents evolve from tools into active problem-solvers.
I’ve said before that 2025 will be the year of agentic AI, and everything we’re seeing now reinforces that belief. With the big players all setting their sights on deploying agents in 2025, I want to look at how these companies are preparing for this leap. And are they thinking about the ethical guardrails needed to ensure AI agents serve people, not just profits?
AI Agents and the Leap Toward Agentic AI
Most of us are familiar with AI chatbots—simple, generative AI systems that respond to individual queries. You ask a question, and the chatbot replies. It’s functional but limited.
But what if AI not only responded to problems but tackled them head-on? Imagine an agent that flags supply chain issues, adjusts schedules, negotiates with vendors, and even learns to do better next time. This isn’t the AI we’ve grown used to—it’s Agentic AI, systems built to act, adapt, and collaborate with a level of autonomy we’ve only started to explore.
First, let’s clarify the difference between the generative AI models we’re familiar with and the emerging concept of AI Agents.
In short, AI agents operate as goal-driven systems capable of tackling specific tasks within a structured framework.
Take Nvidia’s NeMo platform, for instance. Highlighted during their recent AI summit, NeMo demonstrates how these agents function as “digital workers,” seamlessly integrating into supply chain management or customer support workflows.
Tools like LangChain, on the other hand, empower developers to link APIs and build the complex workflows that make this possible.
Agentic AI doesn’t just follow instructions—it thinks on its feet. By leveraging patterns like ReAct (Reason + Act) and retrieval-augmented generation (RAG), these systems can autonomously navigate complex scenarios, making decisions that once required human intervention.
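To make that concrete, here is a minimal sketch of what a ReAct-style loop can look like in plain Python. Everything in it (the call_model stub, the retrieve_context helper, the lookup_stock tool) is a hypothetical placeholder of my own, not LangChain’s or any vendor’s real API; it just shows the reason → act → observe cycle with a retrieval step mixed in.

```python
# A minimal sketch of a ReAct-style loop with a retrieval (RAG-like) step.
# The model call and the tools are stand-ins, not any framework's real API.

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call; swap in your provider's client."""
    return "Thought: check inventory\nAction: lookup_stock[widget-42]"

def retrieve_context(query: str) -> str:
    """Placeholder retrieval step: fetch documents relevant to the task."""
    return "Doc: widget-42 reorder threshold is 500 units."

TOOLS = {
    "lookup_stock": lambda sku: f"{sku}: 120 units on hand",
}

def react_agent(task: str, max_steps: int = 5) -> str:
    # Seed the scratchpad with the task and retrieved context.
    history = [f"Task: {task}", f"Context: {retrieve_context(task)}"]
    for _ in range(max_steps):
        reply = call_model("\n".join(history))   # reason about the next move
        history.append(reply)
        if "Action:" not in reply:               # model produced a final answer
            return reply
        action = reply.split("Action:")[-1].strip()   # e.g. lookup_stock[widget-42]
        name, arg = action.split("[", 1)
        observation = TOOLS[name.strip()](arg.rstrip("]"))
        history.append(f"Observation: {observation}")  # act, then feed the result back
    return history[-1]
```

The point of the pattern is the feedback loop: the model’s own reasoning, the chosen action, and the observed result all flow back into the next prompt, which is what lets these systems handle multi-step scenarios instead of one-shot answers.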
Now that we’ve distinguished Agentic AI from its predecessors, let’s dig a little deeper.
So how does Agentic AI actually work?
At its core, Agentic AI operates through structured processes: these systems interpret input, execute workflows, and refine their outcomes. Unlike traditional AI, they’re built to adapt to changing scenarios and learn from past interactions.
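Here is a rough sketch of that interpret → execute → refine cycle, with a simple memory of past runs so later requests can be informed by earlier ones. All of the names (AgentMemory, interpret, execute, evaluate) are illustrative, not a real framework.

```python
# Illustrative only: the interpret -> execute -> refine cycle with a memory
# of past outcomes. Every function here is a toy stand-in.

from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    past_outcomes: list[str] = field(default_factory=list)

def interpret(request: str) -> str:
    return f"plan for: {request}"

def execute(plan: str) -> str:
    return f"result of {plan}"

def evaluate(result: str) -> bool:
    return "result" in result            # stand-in for a real quality check

def run_agent(request: str, memory: AgentMemory) -> str:
    plan = interpret(request)
    if memory.past_outcomes:             # adapt using lessons from earlier runs
        plan += f" (informed by {len(memory.past_outcomes)} earlier runs)"
    result = execute(plan)
    if not evaluate(result):             # refine once if the outcome falls short
        result = execute(interpret(request + " (retry with more detail)"))
    memory.past_outcomes.append(result)  # learn from this interaction
    return result
```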
While Agentic AI is effective at solving structured problems and optimizing workflows, it’s creativity and curiosity that remain uniquely human strengths. These qualities drive innovation, enabling us to push boundaries and imagine futures beyond what AI can conceive. As we build more intelligent systems, preserving and amplifying these traits will be key to unlocking their full potential.
The Big Players Are Betting Big on Agentic AI
Agentic workflows are not just technical achievements; they will change how teams operate and redefine the interplay between human and digital workforces.
Nvidia’s NeMo framework exemplifies this shift, introducing NIMs (AI microservices). Think of these NIMs as digital teammates, seamlessly adjusting to real-time data and operational demands, much like a coworker who’s always learning on the job.
Google is stepping up its game with Vertex AI and the Gemini 1.5 Pro LLM. You know that Gemini’s extended context capacity—up to one million tokens—is a big deal. It means AI agents can keep track of complex, multi-step tasks over time, almost like they’re developing a human-like memory for context and decision-making.
Oh, and did you catch their recent "Agents" paper?
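Coming back to that context window: here is a hedged sketch of why it matters. An agent with a small budget has to trim or summarize its task history; with a window on the order of a million tokens, far more of the history can simply stay in the prompt. The word-count “tokenizer” below is a crude approximation of my own, not Gemini’s actual tokenizer.

```python
# Illustrative only: keeping a multi-step task history inside a context budget.
# Token counting here is a crude word-count approximation, not a real tokenizer.

def approx_tokens(text: str) -> int:
    return len(text.split())

def build_prompt(task: str, history: list[str], budget: int) -> str:
    """Keep as many recent steps as fit; a larger budget keeps more history."""
    kept: list[str] = []
    used = approx_tokens(task)
    for step in reversed(history):          # newest steps first
        cost = approx_tokens(step)
        if used + cost > budget:
            break
        kept.append(step)
        used += cost
    return "\n".join([task] + list(reversed(kept)))

history = [f"Step {i}: negotiated revised delivery date with vendor {i}"
           for i in range(1, 200)]
small = build_prompt("Summarize the negotiation so far.", history, budget=200)
large = build_prompt("Summarize the negotiation so far.", history, budget=1_000_000)
print(len(small.splitlines()) - 1, "steps fit in a small window")
print(len(large.splitlines()) - 1, "steps fit in a very large window")
```

The design point is simple: the bigger the usable context, the less an agent has to forget mid-task, which is exactly the “human-like memory” quality the Gemini announcement leans on.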
You already know Copilot, and Microsoft’s other models, such as Orca and Phi, demonstrate the power of high-quality data curation and synthetic post-training to improve both efficiency and specialization. These models lay the foundation for agents capable of performing more nuanced and logical reasoning tasks, like comparing legal contracts or generating code.
But what attracts me most about Microsoft is their focus on aligning technical innovation with social impact, which offers a roadmap for integrating Agentic AI into everyday workflows while ensuring responsible governance.
Sam Altman has boldly stated that 2025 will be the year of AI agents and that we are close to AGI. OpenAI is vocal about prioritizing safety in their AI development, including their work on alignment and ensuring models behave as intended.
Looking at Altman’s vision for Agentic AI, though, while he does mention governance, safety, and diverse viewpoints, the emphasis on ethical and inclusive design feels less pronounced. This leaves me with the impression that his focus leans heavily toward technological achievement, raising critical questions about how these systems will operate ethically and inclusively in a diverse global context—questions that remain unanswered.
Challenges and Questions for Agentic AI
Building and deploying Agentic AI is like raising a child. When you teach a baby, every decision matters—how they learn, what values they absorb, and how they interact with the world. These early lessons shape the person they’ll become. Similarly, AI agents are in their formative stages today. If we don’t instill principles of ethics, trust, and responsibility from the start, the “teenage” agents of tomorrow may be harder to guide, let alone control.
These developments are undeniably exciting, but they also come with significant uncertainties. The industry still lacks consensus on what responsible AI or trustworthy systems entail. Meanwhile, global frameworks for AI regulation remain fragmented and slow-moving.
Take the EU’s AI Act, for example. While it’s a step in the right direction, it’s still grappling with how to address rapidly evolving technologies like Agentic AI. Meanwhile, in the U.S., discussions around federal AI legislation are still in their infancy.
Technology is racing ahead of regulation. And with Agentic AI, the stakes only get higher. The question is: are we keeping up? More importantly, how do we ensure that Agentic AI is developed and deployed in ways that truly serve societal needs, rather than simply chasing innovation for its own sake?
AGI isn’t a finish line like landing on the moon; it’s a constantly evolving process, shaped by how we develop, deploy, and refine these systems over time.
Since we can already see the wave of Agentic AI coming, a few critical questions arise.
What does trustworthy AI look like at this scale? It’s one thing to say we can align systems with human values, but the stakes are much higher when agents are making decisions that ripple across industries and lives.
Are we ready with the right guardrails? Governance and regulation often feel reactive, but we can’t afford to wait until something goes wrong. Accountability needs to be built in from day one.
How do we keep humans in the loop? Trust in Agentic AI systems isn’t just about reliable outputs—it’s about creating systems that are transparent, explainable, and fair. Without this, pushback is inevitable.
Building a Responsible AI Ecosystem
Creating a future where Agentic AI works for us—not the other way around—requires more than cutting-edge technology. It’s about building trust, ensuring accountability, and committing to societal values. As I looked into how Microsoft, IBM, Anthropic, and OpenAI are approaching this, I found some promising ideas—and some gaps that still need addressing.
Microsoft: Responsible Innovation and Customization
I love Microsoft’s vision for the future of Agentic AI and its deep focus on practical applications and responsible innovation. Their paper “Agents Are Not Enough” highlights a critical point: building capable agents isn’t enough to guarantee success. They’re proposing a holistic ecosystem that includes Sims (user preferences) and Assistants (agent coordinators) to bridge the gaps.
Ece Kamar’s work on responsible AI in agents hits all the right notes for me. She prioritizes dynamic guardrails, ensuring AI agents can adapt and grow while staying grounded in ethical and human-centered principles. It feels a lot like parenting these systems: guiding them as they evolve, allowing them freedom within clear boundaries, and always keeping accountability front and center.
And Microsoft’s push for sustainability—with carbon-neutral datacenters and liquid cooling—shows they’re thinking beyond the immediate. Their commitment to social acceptability and adaptable personalization is another standout. Agents that respect cultural norms and tailor their behavior to individual needs? That’s how you ensure these systems don’t just work—they truly make a difference.
IBM: Ethics Meets Human Dignity
IBM has a knack for reminding us why ethics matter. Their Alignment Studio trains agents to align with moral values, whether drawn from policy documents or company guidelines. But what I found especially compelling was their focus on preserving dignity for human workers.
What I admire most about IBM is their focus on ensuring AI augments human roles without diminishing their importance. Their idea of adversarial collaboration—where humans remain the ultimate decision-makers while AI challenges their assumptions and sharpens their outcomes—feels like a perfect balance.
Anthropic: Keeping It Simple and Transparent
While many AI systems leave users guessing, Anthropic focuses on making its agents clear and comprehensible from the start. By using modular designs, they’ve created systems that don’t just work but are easy to follow and adapt. You don’t have to dig through complexity to figure out why an agent made a decision—the answers are built into the system.
What I liked is Anthropic’s emphasis on clarity and control. They’re proving that making agents understandable isn’t just about adding transparency after the fact—it’s about designing with it in mind. This philosophy ensures that as agents grow more capable, humans remain firmly in the driver’s seat. In a field where opacity is often the norm, Anthropic’s work reminds us that simplicity is just as powerful as sophistication.
Google: Prioritizing Transparency and Equity
Google’s approach to AI agents echoes some of the best practices we’ve seen elsewhere but with their own thoughtful twist. Like Anthropic, they emphasize transparency and legibility, ensuring agents can explain their actions clearly and remain accountable. But what stands out to me is their focus on equity and access—designing systems that work for diverse users across socioeconomic and cultural contexts, much like Microsoft’s focus on social acceptability.
Their work on mitigating risks like misinformation and manipulation shows they’re serious about governance, but challenges like overtrust through anthropomorphism remain. Google’s doing great work here, but as with others, the question is whether they can scale these safeguards as agents grow more autonomous.
OpenAI: Big Ambitions, Missing Pieces
While OpenAI’s technical contributions are undeniable, their focus on scaling often overshadows the pressing need for inclusivity. Their research emphasizes essential practices like constraining action spaces, default behaviors, and automatic monitoring—critical for ensuring Agentic AI systems remain safe and accountable. But here’s the thing: while these frameworks address safety, they don’t give the same priority to inclusivity or equity, leaving a significant gap in their approach.
The paper “Practices for Governing Agentic AI Systems” outlines strategies like legibility, ensuring agents explain their reasoning, and attributability, where responsibility for decisions can be traced back to specific systems or individuals. These measures are important, but they feel more reactive than forward-thinking. Even though their researchers acknowledge societal impacts, Altman’s broader focus still leans more toward scaling and technical ambition. It leaves me wondering: who will their systems really serve in the long run?
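As a rough illustration of what two of those practices can look like in code, here is a hedged sketch combining a constrained action space (an explicit allowlist of tools) with an attributable audit trail. The tool names, the agent ID, and the log format are my own inventions, not anything taken from OpenAI’s paper.

```python
# Sketch of two governance practices: a constrained action space (allowlist)
# and attributability (every action logged with who requested it and why).
# All names and formats here are illustrative only.

import datetime
import json

ALLOWED_ACTIONS = {"read_inventory", "draft_email"}   # no irreversible actions by default
AUDIT_LOG: list[dict] = []

def perform_action(agent_id: str, action: str, argument: str, reason: str) -> str:
    if action not in ALLOWED_ACTIONS:                  # constrained action space
        raise PermissionError(f"{action} is outside the approved action space")
    AUDIT_LOG.append({                                 # attributable, legible record
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "argument": argument,
        "stated_reason": reason,
    })
    return f"{action}({argument}) executed"

perform_action("procurement-agent-01", "read_inventory", "widget-42",
               reason="Checking stock before drafting a reorder email")
print(json.dumps(AUDIT_LOG, indent=2))
```

Even a toy version like this makes the trade-off visible: the allowlist and the log are easy to bolt on, but deciding which actions belong on the list, and who reviews the log, is the governance work that can’t be automated away.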
Where Do We Go From Here?
Agentic AI is no longer just about solving technical challenges—it’s about answering the deeper questions of how we want these systems to shape our world. As we navigate this new frontier, the real challenge isn’t just building smarter agents but building ones we can trust to act responsibly in complex, diverse, and unpredictable contexts.
The truth is, no single company, framework, or policy can address everything. What matters now is collaboration—between technologists, policymakers, and society itself. Initiatives like UNESCO’s ethical AI guidelines and the EU-U.S. Trade and Technology Council show that international alignment is possible, but gaps remain. Competing interests, inequities in access to technology, and the rapid pace of innovation are significant hurdles. Addressing these challenges with foresight and shared purpose will be essential to creating systems that benefit everyone.