26th to 27th March 2026 | NIMHANS Convention Centre, Bangalore

The Agenda of MLDS 2026

MLDS is dedicated to Agentic AI—spotlighting breakthroughs in autonomous agents, Generative AI, and intelligent systems. The summit brings together developers, researchers, and innovators to share insights, showcase real-world applications, and explore how Agentic AI is transforming the future of software development.

The majority of conference sessions are curated by the AIM community.

We are in the process of finalizing the sessions for 2026. Expect more than 70 talks at the summit. Please check this page again for updates.

  • Day 1


  • The tacit premise of modern ML is that decision-time computation is where value gets created: train a model, learn a policy, deploy. The policy is the intelligence. In this talk I will demonstrate that structural graph design, specifically via randomised edge perturbation, achieves Pareto-dominant performance over learned and optimisation-based methods in urban food delivery. Our core claim is that, if the environment in which agents operate is designed with sufficient structural care, the agents themselves can be remarkably simple, and the system as a whole will still outperform agents that are far more computationally sophisticated.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Learn how to take agentic applications from lab experiments to production-grade deployments using AWS Copilot. This workshop shows how to automate infrastructure provisioning and CI/CD pipelines, and how to integrate standards like the Model Context Protocol (MCP) to extend agents beyond chat into real-world tasks. By the end, you’ll know how to create resilient, scalable, and context-aware AI agents that can truly operate in enterprise environments, while freeing developers to focus on logic instead of infrastructure firefighting.
    HALL 2 - Exclusive Workshops

  • As organizations move toward building and operating their own AI infrastructure, managing large-scale GPU environments becomes both an opportunity and a challenge. This session shares real-world lessons from running a production AI stack powered by a 13-GPU fleet—covering how teams design, deploy, and optimize infrastructure to support demanding AI workloads and large language models. From workload orchestration and performance tuning to cost management and system reliability, the discussion will highlight practical insights gained while operating GPU clusters in a live environment. Attendees will learn what it takes to maintain stability, scale efficiently, and ensure consistent performance when running AI systems in production.
    HALL 3 - Tech Talks

  • The talk introduces an innovative framework for training AI agents using trajectory-based deep reinforcement learning (GRPO) and active intelligent memory utilization. By treating entire decision trajectories as the core unit of learning, it addresses challenges such as trajectory blindness and high supervision costs, thereby enhancing agent performance and understanding. This framework supports continuous experiential learning without extensive human oversight and incorporates a modular evaluation methodology for assessing enterprise agentic platforms. The framework aims to improve the efficiency, adaptability, and risk management of AI systems, driving wider adoption in enterprise environments.
    HALL 1 (Main) - Keynotes / Tech Talks

  • As AI disrupts current architectures, we need certain principles to architect applications appropriately and harness the full potential of AI. The session explains the guiding principles for implementing multi-agentic applications and provides a technical breakdown of those principles, using examples from Snowflake Cortex Agents & Platform.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Enterprise support delegation and resolution is complex: heterogeneous workflows, partial observability, high-risk actions, and strict policy constraints. We present an agentic Supervisor + Specialists architecture that maintains a shared ticket state, generates hierarchical plans via plan graphs, replans using validation signals, and orchestrates multiple skills, with pre/postcondition checks as verifiers via a custom agent harness.
    HALL 1 (Main) - Keynotes / Tech Talks
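
The Supervisor + Specialists pattern above can be sketched in miniature. The following is an illustrative toy, not the speakers' actual harness; all names (`TicketState`, `Skill`, `supervise`, the `triage` skill) are hypothetical, and it shows only the pre/postcondition verifiers producing validation signals a supervisor could replan on:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TicketState:
    """Shared ticket state that every specialist reads and writes."""
    data: dict = field(default_factory=dict)

@dataclass
class Skill:
    name: str
    pre: Callable[[TicketState], bool]    # verifier: may this skill run now?
    run: Callable[[TicketState], None]    # the specialist's action
    post: Callable[[TicketState], bool]   # verifier: did it leave valid state?

def supervise(plan: list, state: TicketState) -> list:
    """Execute a plan step by step, emitting validation signals
    (here, plain strings) that a real supervisor would use to replan."""
    signals = []
    for skill in plan:
        if not skill.pre(state):
            signals.append(f"{skill.name}: precondition failed, replan needed")
            break
        skill.run(state)
        if not skill.post(state):
            signals.append(f"{skill.name}: postcondition failed, replan needed")
            break
        signals.append(f"{skill.name}: ok")
    return signals

# Example: a 'triage' skill that requires a ticket ID and must set a category.
triage = Skill(
    name="triage",
    pre=lambda s: "ticket_id" in s.data,
    run=lambda s: s.data.update(category="billing"),
    post=lambda s: "category" in s.data,
)

state = TicketState(data={"ticket_id": "T-1001"})
print(supervise([triage], state))  # → ['triage: ok']
```

Running the same skill against a state missing `ticket_id` yields a precondition-failure signal instead, which is the hook for replanning.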

  • Modern AI systems are rapidly evolving from simple prompt-response models to autonomous agents capable of reasoning, using tools, retrieving knowledge, and executing complex workflows. But what actually makes an AI agent work? In this session, The Anatomy of an AI Agent, we will break down the core building blocks behind modern agentic systems. We will explore how large language models, memory, retrieval, planning, and tool execution come together to create intelligent, reliable, and production-ready AI agents. The session will focus on practical architecture patterns used in real-world systems, including how agents reason over data, interact with external tools, maintain context, and handle multi-step tasks. Attendees will gain a clear mental model of how AI agents are designed, the trade-offs involved, and what it takes to move from demos to scalable, real-world deployments. This talk is intended for engineers, architects, and AI practitioners who want to understand how modern AI agents are actually built under the hood.
    HALL 3 - Tech Talks

  • Building an agent can be done quickly. Building an agent that holds up in production, under time pressure, with operational reliability – now, that’s the hard part. In this talk, an AI researcher from Millennium will explain a practical engineering playbook for deploying agents into high-stakes buy-side workflows: research, monitoring, operations, and reporting processes. This talk will go beyond buzzwords to show how to build a production-grade agent: composing proven patterns (tool-chaining, reflection, human-in-the-loop, and selective multi-agent design) into systems that are constrained, observable, and governable. This session will highlight the failure modes that tutorials omit, and the design decisions that prevent them.
    HALL 1 (Main) - Keynotes / Tech Talks

  • LLMs today promise endless text generation, but creating high-fidelity synthetic text data that actually reflects complex business logic remains an engineering challenge. This talk moves beyond basic "prompt and pray" techniques to address the various nuances of creating instruction datasets useful for knowledge distillation, domain adaptation, and reinforcement learning (RL) workflows. We will examine why direct generation often causes datasets to regress to the mean, producing repetitive, safe content that lacks the messy edge cases required for robust training. To solve this, we suggest a systematic, algorithmic approach that treats data generation as an engineering problem. We will discuss how to decompose pipelines into iterative batches to programmatically inject real-world variations. We also conclude with a strategic checklist to evaluate if synthetic data is truly well-suited to your enterprise problem.
    HALL 3 - Tech Talks

  • Agentic systems are non-deterministic—making them harder to debug with traditional logs. This workshop takes you deep into telemetry: instrumentation, observability pipelines, and analysis techniques that capture reasoning loops, tool failures, and system context. You’ll walk away with hands-on methods to turn raw signals into actionable insights, ensuring your autonomous agents remain reliable and explainable in production, even when facing unpredictable environments.
    HALL 2 - Exclusive Workshops
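
The kind of instrumentation this workshop describes can be sketched as structured trace events around each reasoning step and tool call. This is a minimal stand-alone illustration (the `span` helper, `TRACE` list, and `flaky_tool` are all hypothetical; a real pipeline would export to an observability backend rather than a Python list):

```python
import json, time, uuid
from contextlib import contextmanager

TRACE = []  # stand-in for an exporter (e.g. a queue feeding a trace store)

@contextmanager
def span(kind, **attrs):
    """Record one unit of agent work (a reasoning step or a tool call)
    as a structured event, capturing failures instead of losing them."""
    event = {"id": uuid.uuid4().hex, "kind": kind, "attrs": attrs,
             "start": time.time(), "status": "ok"}
    try:
        yield event
    except Exception as exc:
        event["status"] = "error"
        event["error"] = repr(exc)
        raise
    finally:
        event["end"] = time.time()
        TRACE.append(event)

def flaky_tool(query):
    raise TimeoutError("backend did not respond")

# One iteration of a reasoning loop, instrumented:
with span("reasoning", step=1, thought="need fresh data"):
    try:
        with span("tool_call", tool="flaky_tool", query="q1"):
            flaky_tool("q1")
    except TimeoutError:
        pass  # the agent would replan here; the failure is already on the trace

print(json.dumps([{"kind": e["kind"], "status": e["status"]} for e in TRACE]))
# → [{"kind": "tool_call", "status": "error"}, {"kind": "reasoning", "status": "ok"}]
```

The point of the sketch: the tool failure is swallowed by the agent's control flow, yet the trace still records it, which is exactly the "silent failure" signal plain logs tend to miss.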

  • As large language models move from prototypes into enterprise workflows, teams across the industry increasingly face a practical operational alignment problem: how to steer model behavior toward business outcomes in a reliable and measurable way. At the same time, for many practitioners, a critical decision remains unclear: whether a use case is best addressed through prompt engineering, supervised fine-tuning, or reinforcement learning. This talk introduces a structured framework for choosing the right approach and a practical method for translating business objectives into reward signals that can be optimized, evaluated, and audited. Through hands-on experiments in incremental preference learning using reward modeling and policy optimization, the session demonstrates how meaningful behavioral shifts can be achieved even with small, carefully curated datasets. It also examines the stability trade-offs between staged training and online RL updates. Additionally, the session distills practical guidance on designing verifiable reward signals from ambiguous business objectives. This segment identifies when exploration and delayed rewards make reinforcement learning necessary and how to avoid common failure modes, such as reward hacking and instability. Rather than focusing on scale alone, the session emphasizes disciplined reward design and systematic experimentation as the foundation for deploying reinforcement learning effectively in enterprise LLM systems.
    HALL 3 - Tech Talks

  • As AI agents move into real-world use, the data layer becomes critical to their success. Traditional databases struggle with the real-time, high-concurrency, and context-heavy demands of agentic systems.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Building AI-First Operating Models explores how organizations can move beyond isolated AI initiatives to embed intelligence at the core of their operating model. In this session, Abhishek Singh shares a practical and strategic view on designing AI-first enterprises where data, models, and decision intelligence are tightly integrated into everyday workflows. The discussion will cover the shift from traditional automation to AI-led orchestration, key principles such as data readiness, scalable AI infrastructure, human-in-the-loop governance, and cross-functional collaboration, along with real-world examples of how AI-first models are driving measurable impact across operations, customer experience, and decision-making. Attendees will walk away with a clear framework to transition from experimentation to enterprise-scale AI adoption and build resilient, future-ready operating models.
    HALL 3 - Tech Talks

  • As AI agents evolve from experimental prototypes to real-world production systems, enterprises are realizing that model capability alone is not enough. The true challenge lies in how agents manage, retrieve, and retain information across complex, long-running workflows. Getting this right is often the difference between an impressive demo and a system that consistently delivers value. Building effective agentic AI requires a strong approach to context and memory. Developers must understand how different memory types such as in-context, external, episodic, and semantic work together, and when to use each. Techniques like Retrieval-Augmented Generation (RAG) help bridge knowledge gaps, while thoughtful design ensures agents can retain and use information across sessions. Join us for this Tech Talk where Sanketh and Anshul will break down how context and memory shape intelligent agents. Walk away with a practical framework for building AI agents that remember, reason, and scale reliably.
    HALL 1 (Main) - Keynotes / Tech Talks
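
The memory-tier distinction this talk draws (in-context/episodic recency versus long-lived semantic knowledge retrieved RAG-style) can be sketched in a toy form. Everything here is hypothetical and simplified; real systems use embedding similarity rather than keyword overlap, and the class name `AgentMemory` is invented for illustration:

```python
from collections import deque

class AgentMemory:
    def __init__(self, context_window=3):
        self.episodic = deque(maxlen=context_window)  # recent turns only
        self.semantic = []                            # long-lived facts

    def observe(self, turn):
        self.episodic.append(turn)  # oldest turn is evicted at capacity

    def remember_fact(self, fact):
        self.semantic.append(fact)

    def build_context(self, query):
        # RAG-style: retrieve facts sharing a word with the query (a naive
        # stand-in for vector search), then append recent episodic turns.
        words = set(query.lower().split())
        retrieved = [f for f in self.semantic
                     if words & set(f.lower().split())]
        return retrieved + list(self.episodic)

mem = AgentMemory(context_window=2)
mem.remember_fact("refund policy allows returns within 30 days")
mem.observe("user: I bought a kettle")
mem.observe("user: it broke")  # window full: next observe evicts the oldest
print(mem.build_context("refund policy"))  # retrieved fact + recent turns
```

The design point: the episodic buffer keeps the prompt bounded regardless of session length, while the semantic store lets relevant knowledge survive eviction and re-enter context on demand.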

  • We will move past the hype of generative AI and dive into the technical frameworks that allow machines to navigate the real world with intelligence and autonomy.
    HALL 1 (Main) - Keynotes / Tech Talks

  • As Generative AI moves into HR, the bar for employee experience has shifted considerably. Employees expect accurate, context-aware answers and organizations need systems they can actually audit. This workshop presents an end-to-end approach to designing and implementing an HR Copilot using a Hybrid Retrieval-Augmented Generation (RAG) framework with a multi-agent architecture.
    HALL 3 - Tech Talks

  • The rapid expansion of the global space economy is generating unprecedented volumes of data and increasingly complex operational challenges. Traditional AI systems have largely focused on narrow tasks such as image classification or anomaly detection. However, the next frontier lies in agentic AI systems that can perceive, reason, and act autonomously across multiple components of the space ecosystem. This talk explores how agentic AI can enable a new generation of autonomous space capabilities, spanning satellite Earth observation, robotic spacecraft operations, and intelligent ground infrastructure. We will discuss AI agentic workflow examples that detect vehicles and strategic assets from satellite imagery, support autonomous robotic docking and inspection, and optimize ground station networks for efficient satellite communications.
    HALL 2 - Exclusive Workshops

  • Traditional revenue management and marketing science models are increasingly throttled by human-in-the-loop bottlenecks and search-space explosion, where the exponential growth of variables (cross-elasticity, competitor tactics, channels, etc.) and the number of models outpace manual oversight. Static ML workflows simply cannot scale when every scenario requires human-led iterative corrections. This session explores the agentic shift where classical ML models are wrapped within a reasoning loop. By treating models as "Tools" rather than final outputs, LLMs act as a central "brain" to decompose high-level goals into executable plans. These systems utilize long-term memory to retain historical context and self-correcting loops to autonomously refine model parameters or flag data anomalies. Unlike isolated model outputs that require human intervention to stay relevant, the autonomous system can perform diagnostic sub-routines, run multiple what-if and optimization scenarios, and automate modeling activities at scale.
    HALL 1 (Main) - Keynotes / Tech Talks
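
The "models as tools inside a reasoning loop" idea above can be sketched without any LLM at all: a stub planner expands a goal into tool calls over a classical model, and a sanity check flags anomalous outputs instead of passing them downstream. All names and numbers here are invented for illustration:

```python
def elasticity_model(price_change, elasticity=-1.5):
    """Stub for a classical model: % demand change for a % price change."""
    return elasticity * price_change

TOOLS = {"elasticity": elasticity_model}  # the registry an LLM would pick from

def sane(demand_change):
    """Self-correction hook: wildly out-of-range outputs get flagged."""
    return -50 <= demand_change <= 50

def run_goal(goal):
    """Stand-in for the LLM 'brain': decompose the goal into tool calls,
    execute them, and separate trusted results from flagged anomalies."""
    plan = [("elasticity", {"price_change": pc}) for pc in goal["scenarios"]]
    results, anomalies = [], []
    for tool_name, params in plan:
        out = TOOLS[tool_name](**params)
        if not sane(out):
            anomalies.append((tool_name, params))  # route to diagnostics
            continue
        results.append((params["price_change"], out))
    return results, anomalies

# What-if scenarios: +5%, +10%, and an implausible +100% price change.
results, anomalies = run_goal({"scenarios": [5, 10, 100]})
```

Here the +100% scenario produces a demand change outside the sane band and lands in `anomalies`, which is where a self-correcting loop would rerun diagnostics or adjust parameters rather than report the number.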

  • As organizations move from standalone LLM applications to complex, agentic AI workflows, LLMOps becomes the critical backbone enabling scale, reliability, and trust. This session explores how to design robust LLMOps frameworks to build, monitor, and govern multi-agent systems in production. It will cover practical approaches to orchestration, observability, evaluation, cost control, and risk management, along with governance strategies to ensure compliance, safety, and responsible AI at scale. Attendees will leave with actionable insights to operationalize agentic AI systems that are resilient, transparent, and enterprise-ready.
    HALL 2 - Exclusive Workshops

  • As AI agents become more prevalent across industries, ensuring they operate with the depth and accuracy of true subject matter expertise is becoming increasingly important. An expert-first approach focuses on anchoring AI agents in structured domain knowledge, expert insights, and reliable information sources rather than relying solely on generalized model outputs. This session will explore why grounding AI agents in subject matter knowledge is essential for building trustworthy and effective AI systems. It will discuss the role of expert input, curated knowledge sources, and contextual understanding in enabling agents to deliver more accurate, relevant, and meaningful outcomes. The session will also highlight how organizations can design AI solutions that combine the power of large language models with domain expertise to create more reliable and impactful AI-driven experiences.
    HALL 3 - Tech Talks

  • The hospitality industry thrives on trust, experience, and word-of-mouth advocacy. This session explores how an AI-based referral engine can transform customer referrals from a reactive process into a scalable, intelligence-driven growth lever. The leader will share how AI can be used to identify customers with a high propensity to refer, enabling businesses to focus efforts where advocacy is most likely to convert. The session will also cover smart allocation of relationship managers using predictive insights, ensuring high-value interactions are prioritized and resources are optimally deployed. Additionally, the talk will highlight how AI-driven profile enrichment—combining behavioral, transactional, and engagement data—can power deeply personalized communication at scale. Attendees will gain practical insights into building referral ecosystems that are proactive, personalized, and measurable, driving sustainable growth and stronger customer relationships in the hospitality sector.
    HALL 1 (Main) - Keynotes / Tech Talks

  • In an increasingly competitive and data-rich landscape, Consumer Packaged Goods (CPG) companies are investing heavily in analytics, AI, and digital transformation—yet many still struggle to translate insights into impactful business outcomes. This session explores Decision Intelligence as the critical missing link that bridges data, analytics, and real-world decision-making. We will delve into how Decision Intelligence frameworks enable organizations to move beyond dashboards and predictions toward structured, repeatable, and scalable decision processes. By integrating data, context, human judgment, and AI-driven recommendations, CPG leaders can optimize pricing, promotions, supply chains, and demand planning with greater precision and speed. Through practical examples and industry use cases, the session will highlight how embedding Decision Intelligence into everyday workflows can unlock measurable improvements in performance, agility, and ROI—helping CPG organizations turn complexity into a competitive advantage.
    HALL 2 - Exclusive Workshops

  • Most enterprise AI systems stop at generating predictions such as churn probabilities, fraud scores, recommendations, or forecasts, but business value is realized only when those predictions translate into reliable, automated decisions. This session focuses on the critical decision layer that sits between model outputs and real-world enterprise workflows. Designed for developers and ML practitioners, it explores how to build production-ready systems that combine model predictions with rules, thresholds, optimization logic, and human-in-the-loop controls to drive actionable outcomes. The talk will also cover handling uncertainty, edge cases, governance, and monitoring decision quality—not just model accuracy—ensuring AI systems are robust, scalable, and aligned with measurable business impact.
    HALL 3 - Tech Talks
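
The decision layer this talk describes (model score in, governed action out) can be sketched as a single function combining a prediction with hard rules, thresholds, and a human-in-the-loop escalation band. The thresholds and action names are hypothetical, chosen only to make the pattern concrete:

```python
def decide(churn_prob, customer_value, on_hold=False):
    """Turn a churn probability into an action, with rules layered on top.

    - Hard business rules override the model entirely.
    - An uncertainty band around 0.5 routes borderline cases to a human.
    - Thresholds plus simple optimization logic pick the cheapest
      intervention likely to work.
    """
    if on_hold:                        # e.g. account in dispute: never contact
        return "no_action"
    if 0.45 <= churn_prob <= 0.55:     # model is unsure: human-in-the-loop
        return "escalate_to_agent"
    if churn_prob > 0.55:
        # Spend the expensive intervention only on high-value customers.
        return "retention_offer" if customer_value > 1000 else "email_campaign"
    return "no_action"

assert decide(0.90, 5000) == "retention_offer"
assert decide(0.90, 200) == "email_campaign"
assert decide(0.50, 5000) == "escalate_to_agent"
assert decide(0.90, 5000, on_hold=True) == "no_action"
```

Monitoring decision quality, as the abstract notes, then means tracking outcomes per branch of this function (offer acceptance rate, escalation volume), not just the upstream model's accuracy.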

  • Everyone is building agents. Almost no one is running them in production. Eighteen months ago, one of our agents changed a VLAN configuration on a network device during a controlled proof of concept. It put an access point on the wrong VLAN, isolating every client connected to it. The agent completed the task successfully. No error was thrown. No alert fired. From the system's perspective—everything was fine. That incident taught us more about production-grade agentic AI than any benchmark or architecture paper. It revealed four failure zones that most enterprise AI teams have not yet engineered for: Silent Failure, Black Box Decisions, Permission Explosion, and Runaway Execution. This session moves beyond agent architecture theory to the operational discipline required to run autonomous systems inside mission-critical enterprise environments—the kind where SLAs are real, permissions matter, and a wrong decision has consequences. We will walk through the 4-Layer Enterprise Agent Stack, a framework built on the principle that "The user is all"—ensuring agent permissions never exceed what the human behind the request is authorized to do. The future of enterprise AI will not be decided by model intelligence. It will be decided by operational discipline.
    HALL 1 (Main) - Keynotes / Tech Talks

  • This talk walks through the real-world journey of building a production-grade Agentic Financial Assistant—from early proof-of-concept experiments to a scalable, reliable system serving real users. We will explore how large language models evolve into autonomous agents capable of reasoning, retrieving financial context, executing tools, and making safe decisions within enterprise constraints. The session will cover practical architecture patterns including multi-agent orchestration, tool execution layers, memory and retrieval systems, and LLMOps for monitoring and reliability. We will also discuss the critical production challenges—latency, cost control, hallucination mitigation, guardrails, and evaluation frameworks—and how to design agent systems that are safe enough for financial environments. Developers will walk away with practical blueprints for building agentic systems that move beyond demos into production.
    HALL 2 - Exclusive Workshops

  • Engineering for Human-Like, Multilingual Voicebots for Bharat explores how voice-first AI systems can be designed to serve India’s linguistically diverse and mobile-first population. Drawing from his experience building large-scale, real-world platforms, Kiran Kumar Katreddi will discuss the engineering foundations behind creating voicebots that feel natural, conversational, and inclusive across multiple Indian languages and dialects. The session will cover how technologies such as speech recognition, natural language understanding, and text-to-speech come together to handle code-mixed speech, regional accents, and low-resource languages, while operating at scale with low latency. Kiran will also highlight the unique challenges and design considerations specific to Bharat, and how human-like multilingual voicebots can significantly expand digital access, improve customer experiences, and enable intuitive interactions for users beyond English-first, text-based interfaces.
    HALL 3 - Tech Talks

  • Building machine learning models is only a small part of the challenge in heavy-industry environments. The real complexity lies in deploying, scaling, and operating ML systems that must work reliably on the shop floor—often under strict safety, latency, and reliability constraints. This session walks through the end-to-end journey of building production-grade ML systems for heavy industry, covering data acquisition from industrial systems, model development, validation, and deployment into real-world decision workflows. It will highlight how ML models are integrated with existing operational technology (OT) systems, how predictions translate into actionable shop-floor decisions, and how teams handle issues like data drift, model monitoring, explainability, and human-in-the-loop controls. Attendees will gain practical insights into ML system design, MLOps, and decision engineering in industrial settings, along with lessons learned from taking models out of notebooks and into mission-critical production environments.
    HALL 1 (Main) - Keynotes / Tech Talks

  • BharatGen represents a new paradigm in building AI systems that are sovereign, inclusive, and purpose-built for India’s diverse linguistic and cultural landscape. This session explores how frugally scalable multilingual and multimodal AI models can be developed to serve Bharat at scale, balancing technological advancement with accessibility and efficiency. It will highlight the principles behind shared national AI infrastructure, enabling collaboration across academia, industry, and government to create AI that understands and serves India’s many languages and modalities. Attendees will gain insights into the opportunities, challenges, and impact of building sovereign AI capabilities that empower innovation while ensuring that AI development remains accessible, affordable, and aligned with the needs of Bharat.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Day 2


  • As large language models (LLMs) move from experimentation to production, building reliable and scalable infrastructure has become critical. This session takes a deep dive into the architecture behind modern LLM systems—covering how organizations scale model deployments, intelligently route workloads, and design resilient AI platforms that can handle real-world demand. With a focus on NVIDIA’s approach to resiliency, the discussion will highlight how advanced GPU infrastructure, optimized networking, and fault-tolerant system design help ensure consistent performance even under heavy and unpredictable workloads. Attendees will gain insights into best practices for maintaining uptime, improving efficiency, and building robust AI systems that can support enterprise-scale generative AI applications.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Interoperability will define the future of agent ecosystems. This workshop unpacks the emerging standards—Model Context Protocol (MCP), Agent-to-Agent (A2A), and Agent Communication Protocol (ACP)—that allow agents to “speak” to each other. Through hands-on exercises, you’ll compare strengths, trade-offs, and real implementations. You’ll learn to build adaptable systems that can evolve with changing standards—future-proofing your AI stack for a multi-agent, protocol-driven world.
    HALL 2 - Exclusive Workshops

  • ADM – Founder’s Voice: Redefining the Future of Data with Agentic Intelligence. In this special Founder’s Voice session, Raghu Mitra shares the vision, philosophy, and engineering journey behind Acceldata’s evolution from data observability to Agentic Data Management (ADM). As enterprises scale AI, analytics, and data-driven decision-making, traditional monitoring and reactive governance are no longer sufficient. The future demands systems that do not just observe data but understand, reason, and act. ADM represents this next frontier. Built on the foundation of ADOC, ADM introduces autonomous, AI-powered agents that continuously monitor data ecosystems, diagnose issues with contextual intelligence, and execute corrective actions with minimal human intervention. In this session, Raghu explores the architectural thinking, real-world challenges, and breakthrough innovations that shaped ADM into a self-driving data management platform. Attendees will gain insight into: the shift from reactive observability to autonomous data operations; how agentic systems transform reliability, quality, and governance; the engineering principles behind scalable AI-native data management; and the long-term vision for intelligent, self-healing data ecosystems. This session is not just a product overview; it is a forward-looking perspective on how agentic intelligence will redefine enterprise data strategy in the AI era.
    HALL 3 - Tech Talks

  • As financial institutions move from AI experimentation to real-world deployment, the challenge is no longer building agents — it is running them reliably in production. In this session, Orkes and StoneX Group Inc. cover the full journey: from traditional workflow orchestration to the next frontier of agentic AI. Orkes will walk through how modern systems are built on Orkes Conductor — starting with the orchestration patterns that enterprises already rely on for durability, visibility, and operational control — and then extending that foundation into agentic use cases with MCP integration, dynamic agent coordination, and governance at scale. StoneX, a Fortune 50 institutional-grade financial services franchise and Orkes customer, will then share how they put this platform into practice — orchestrating complex, multi-region workflows that span compliance and settlement across markets. Their story illustrates what becomes possible when a battle-tested orchestration platform handles the hard problems of state, resilience, and auditability, freeing teams to focus on business logic. Together, this session makes the case that the path to reliable agentic AI in production does not require starting from scratch — it starts with the right orchestration foundation.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Generic AI coding assistants are great at syntax but fail at context. They don’t know your production schemas, your dbt DAGs, or your FinOps constraints. In this session, we dive into Cortex Code, Snowflake’s native AI agent that operates within your data’s security perimeter. We will demonstrate how Cortex Code moves beyond simple code generation to perform “agentic” tasks: self-healing data pipelines, automated dbt scaffolding, and cross-platform orchestration (CLI to Snowsight). Learn how to turn natural language into production-ready, governed data infrastructure in minutes rather than hours.
    HALL 1 (Main) - Keynotes / Tech Talks

  • As enterprises rapidly deploy large language models into real-world applications, achieving high throughput and low latency has become a critical requirement for modern AI infrastructure. This session explores what it takes to run LLMs at scale—from optimizing model serving pipelines and distributed compute to efficient workload scheduling and inference acceleration. We’ll take a closer look at how advanced GPU architectures and AI platforms from NVIDIA enable faster processing, reduced response times, and consistent performance even under heavy demand. Attendees will gain insights into practical strategies for designing scalable AI systems, balancing cost and performance, and building infrastructure that can support next-generation generative AI workloads in production environments.
    HALL 3 - Tech Talks

  • Why the agentic AI era demands a completely different foundation — and what builders in India need to get there. Everyone today is trying to build agents, and most of them will struggle to make it to production. Not because the reasoning is wrong. Not because the model isn't good enough. But because the infrastructure underneath — fragmented, opaque, and designed for a pre-agentic world — simply wasn't built to support how agents run. This session makes the case that the agentic era isn't just a model problem — it's a systems problem. It maps what agentic workloads demand: persistent memory, low-latency inference, stateful orchestration, real-time observability, and cost control that doesn't collapse under multi-agent load. This is the infrastructure conversation the agentic AI era has been avoiding.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Most AI initiatives fail not because the model is weak, but because teams lack a shared, reusable pattern language that turns experimental wins into production systems. This session distils three decades of engineering into 8 eras and 300+ patterns — showing how each technology wave (structured programming, OOP/GoF, SOA/events, cloud/microservices, cloud security, AI/ML, and now agentic AI) accelerated once solutions were named, standardised and made communicable. Attendees will learn the core pattern families behind production-grade agentic systems: reasoning, memory (RAG), tool use (ReAct), orchestration and enterprise safety controls including human-in-the-loop gates. The session concludes with a live Spec-Driven SDLC demo — where the spec acts as the contract coordinating a multi-agent delivery pipeline, from architecture through deployment. The talk closes by connecting the methodology to BITS Pilani Digital's AI Engineering & MLOps programmes, demonstrating how industry–academia partnerships enable learners to apply these patterns to real-world problems and move from prototypes to production with engineering rigour.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Every database architecture decision we have made relied on a few assumptions: callers are predictable, writes are deliberate, connections are short, failures are obvious, and schemas are understood. For decades, these held because a human was always in the loop. Agentic AI changes that. Autonomous, LLM-powered agents generate queries through reasoning, write at machine speed, hold connections during long chains of thought, fail quietly, and interpret schemas through a model rather than shared context. When we attach them to existing data layers, the assumptions we never formalized start to show their cracks. This talk walks through those assumptions, the production issues that follow, and how to design databases for agents. Database patterns that once seemed nice-to-have become essential once agents are in the system.
    HALL 3 - Tech Talks

  • Agentic AI systems are rapidly moving from demos to mission‑critical workflows, but most of them still behave as pattern‑matchers with tools, not as systems that understand cause and effect. The result is familiar: agents that sound confident while suggesting actions that quietly violate business logic, break regulations, or create hidden risk. This talk introduces “causal guardrails”—an architecture where structural causal models (SCMs) sit around GenAI agents to constrain, explain, and validate their decisions. Instead of relying solely on prompts and heuristics, agents must route their plans through explicit causal graphs that encode allowed interventions, downstream impacts, and hard constraints. The session will walk through intuitive examples (credit risk, IT ops, or recommendation workflows), show how to combine LLM-based agents with SCMs in practice, and discuss how this improves robustness, debuggability, and auditability. Attendees will leave with concrete patterns for using causal modeling to keep autonomous GenAI “sane” in real enterprise environments, not just in benchmarks.
    HALL 3 - Tech Talks
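Routing an agent's plan through an explicit causal graph, as the abstract describes, can be thought of as checking each proposed intervention against allowed edges and hard constraints before execution. A minimal sketch; the graph, the allow-list, and the constraint values are illustrative assumptions, not the speaker's framework:

```python
# Hypothetical causal graph: each variable maps to its downstream effects.
CAUSAL_GRAPH = {
    "credit_limit": ["default_risk", "customer_spend"],
    "interest_rate": ["default_risk"],
}
ALLOWED_INTERVENTIONS = {"credit_limit"}   # agents may only act on these variables
HARD_CONSTRAINTS = {"default_risk": 0.2}   # maximum tolerated predicted risk (assumed)

def validate_plan(intervention: str, predicted_effects: dict) -> tuple[bool, str]:
    """Accept the agent's plan only if the intervention is allowed and no
    downstream variable violates a hard constraint."""
    if intervention not in ALLOWED_INTERVENTIONS:
        return False, f"intervention on {intervention!r} not permitted"
    for var in CAUSAL_GRAPH.get(intervention, []):
        limit = HARD_CONSTRAINTS.get(var)
        if limit is not None and predicted_effects.get(var, 0.0) > limit:
            return False, f"{var} would exceed limit {limit}"
    return True, "ok"

ok, reason = validate_plan("credit_limit", {"default_risk": 0.35})
```

The point of the pattern is that the guardrail is structural: the agent's plan is rejected by the graph, not by a prompt.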

  • As AI systems evolve from assistants to autonomous decision-makers, a new paradigm is emerging—Agentic Commerce. In this session, we explore a future where AI agents don’t just recommend products but actively evaluate options, negotiate prices, and complete purchases on behalf of users and organizations. The talk will examine how businesses must rethink digital commerce when the “customer” is increasingly an intelligent agent rather than a human browsing a website or app. From product discovery and trust signals to pricing strategies, APIs, and marketplaces designed for machines, agent-driven transactions are set to reshape the commerce ecosystem. Attendees will gain insights into the technologies enabling agentic commerce, the opportunities it unlocks for retailers and platforms, and the challenges around governance, transparency, and user control as AI becomes the new buyer in the digital economy.
    HALL 1 (Main) - Keynotes / Tech Talks

  • This talk will explore how Scapia is scaling its CX Bot in production by building robust evaluation frameworks that continuously measure response quality, accuracy, and reliability in real time. The session will also dive into model resiliency, highlighting how Scapia is developing an internal platform that enables employees to easily switch between different models, experiment rapidly, adopt best practices, and share learnings across teams. In addition, it will cover Scapia’s approach to hosting models within its own data center to maintain stronger control over data governance, security, and policy compliance while operating AI systems at scale.
    HALL 3 - Tech Talks

  • This workshop explores memory architectures that give agents continuity and true persistence. You’ll learn about episodic vs. semantic memory, vector database integration, memory consolidation strategies, and retrieval balancing recency with relevance. Participants will build agents that learn from every interaction, maintain coherent long-term context, and avoid common pitfalls like context pollution or catastrophic forgetting—core skills for anyone aiming to scale agentic AI responsibly.
    HALL 2 - Exclusive Workshops
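Retrieval that balances recency with relevance, one of the workshop's topics, is often implemented as a weighted score over stored memories. A minimal sketch; the exponential-decay weighting, the half-life, and all names are assumptions for illustration, not the workshop's actual code:

```python
import time

def memory_score(similarity: float, stored_at: float, now: float,
                 half_life_s: float = 3600.0, recency_weight: float = 0.3) -> float:
    """Blend semantic similarity with exponential recency decay.

    similarity: cosine similarity of query vs. memory embedding (0..1)
    half_life_s: age at which the recency contribution halves (assumed value)
    """
    age = max(0.0, now - stored_at)
    recency = 0.5 ** (age / half_life_s)  # 1.0 when fresh, decays with age
    return (1 - recency_weight) * similarity + recency_weight * recency

now = time.time()
memories = [
    {"text": "user prefers metric units", "sim": 0.82, "t": now - 86400},  # a day old
    {"text": "user asked about Kelvin",   "sim": 0.78, "t": now - 60},     # a minute old
]
ranked = sorted(memories, key=lambda m: memory_score(m["sim"], m["t"], now),
                reverse=True)
```

With these weights the slightly less similar but much fresher memory wins, which is the trade-off the workshop's "recency vs. relevance" framing refers to.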

  • Agentic AI is reshaping how intelligent systems reason and act — but the next frontier lies in bringing that intelligence into the physical world. This session explores the shift from digital agents to Physical AI systems that interact with real-world environments, devices, and operations. We’ll examine the architectural principles, governance models, and system-level design patterns required to build reliable, scalable intelligent systems beyond the screen.
    HALL 3 - Tech Talks

  • AI adoption in software organisations is not failing only because model capability is insufficient. One of the reasons is that the teams using models have not developed the mindset, habits, or knowledge structures needed to unlock their full potential. This talk makes the case that the primary bottleneck is disposition: the posture with which a practitioner approaches AI. The talk is grounded in a production-tested framework and inspired by the collective intelligence of individuals who have applied these techniques to achieve 100% AI-augmented code generation on live projects for over a year. It introduces a blueprint in which domain experts encode structured knowledge that AI then leverages to generate code, enabling the organisation to scale and grow at a much faster pace. After the session, the audience will leave with an understanding of why AI seems challenging to adopt in domain-specific contexts, what structured knowledge encoding looks like in practice, and one actionable step they can take in their own team the following week.
    HALL 1 (Main) - Keynotes / Tech Talks

  • The FMCG industry is undergoing a major transformation driven by data, artificial intelligence, and digital platforms. In this session, we will explore how organizations can reimagine traditional FMCG operations by leveraging advanced analytics, AI-powered decision-making, and connected digital ecosystems. From demand forecasting and intelligent supply chains to personalized consumer engagement, the talk will highlight practical strategies for building agile, data-driven enterprises that can adapt quickly to evolving market dynamics and unlock new growth opportunities.
    HALL 1 (Main) - Keynotes / Tech Talks

  • The Problem: A brief case study highlighting the consequences of non-reproducibility in AI decisions. The Challenges: Real-world complexities of maintaining transparency in multi-agentic systems. The Technical Roadmap: Methods and tools to ensure GenAI systems are fully auditable. The Decision Rationale: A deep dive into capturing decision snapshots, logging execution paths, and environment versioning, supported by a technical architecture diagram. Implementation Strategies: Practical takeaways including structured logging, deterministic replays using fixed seeds, shadow-mode testing, and immutable audit trails.
    HALL 3 - Tech Talks
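Two of the implementation strategies listed, structured logging and deterministic replays using fixed seeds, can be sketched roughly as follows. The record format, the stand-in decision function, and the hashing scheme are illustrative assumptions, not the speaker's architecture:

```python
import hashlib
import json
import random

def run_step(seed: int, prompt: str) -> dict:
    """A stand-in for one agent step; a real system would call a model here.
    Seeding the RNG makes the sampling decision reproducible on replay."""
    rng = random.Random(seed)
    choice = rng.choice(["approve", "escalate", "reject"])
    record = {"seed": seed, "prompt": prompt, "decision": choice}
    # Structured, append-only log entry with a content hash for the audit trail.
    record["hash"] = hashlib.sha256(
        json.dumps({k: record[k] for k in ("seed", "prompt", "decision")},
                   sort_keys=True).encode()).hexdigest()
    return record

first = run_step(seed=42, prompt="refund request #123")
replay = run_step(seed=42, prompt="refund request #123")
```

Because the seed and inputs are captured in the log entry, the same decision can be reconstructed bit-for-bit during an audit.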

  • Agentic AI is moving beyond “better prompts” into a new layer of infrastructure: standardized tool connectivity, agent-to-agent interoperability, and disciplined context engineering. In this session, I’ll share a production blueprint for building agents that can safely plug into enterprise systems using open protocols (MCP for agent-to-tools, A2A for agent-to-agent collaboration). We’ll cover how to design “context interfaces” (what the model can request, when, and why), handle long-running tasks with progress + recovery, and ship with strong authorization, auditability, and guardrails. Attendees will leave with a practical reference architecture and implementation checklist they can apply immediately.
    HALL 1 (Main) - Keynotes / Tech Talks
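A "context interface" in the sense described, what the model can request, when, and why, can be pictured as an allow-list checked before any tool call executes. A minimal sketch; the policy table and role names are hypothetical, and real MCP deployments would use the protocol's own capability negotiation rather than this hand-rolled check:

```python
# Hypothetical allow-list mapping agent roles to permitted tools and
# argument constraints; this is not an MCP API.
POLICY = {
    "support-agent": {"lookup_order": {"max_rows": 10}},
    "billing-agent": {"lookup_order": {"max_rows": 100},
                      "issue_refund": {"max_amount": 50.0}},
}

def authorize(role: str, tool: str, args: dict) -> bool:
    """Return True only if this role may call this tool with these args."""
    limits = POLICY.get(role, {}).get(tool)
    if limits is None:
        return False  # tool not granted to this role at all
    if tool == "issue_refund" and args.get("amount", 0) > limits["max_amount"]:
        return False  # argument exceeds the role's bound
    return True

ok_refund = authorize("billing-agent", "issue_refund", {"amount": 25.0})
blocked = authorize("support-agent", "issue_refund", {"amount": 25.0})
```

The design choice is that authorization lives outside the model: the agent can ask for anything, but the interface decides what it actually receives.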

  • Physical intelligence, the ability of machines to perceive, reason, and act in the physical world, remains one of the key frontiers in artificial intelligence. While modern AI systems have made significant progress in language, vision, and pattern recognition, they still struggle to fully understand and interact with the physical laws that govern real-world environments. This session explores the technology gaps in physical intelligence, focusing on the disconnect between current intelligence models and the principles of physics that shape real-world interactions. It will examine limitations such as weak physical reasoning, challenges in predicting object dynamics, and the difficulty of learning from limited real-world data. The talk will also highlight the “physics gaps” in today’s AI models, where purely data-driven approaches fall short in capturing the causal and dynamic properties of the physical world, and discuss emerging opportunities in areas such as embodied AI, simulation-based learning, robotics, and hybrid physics-AI systems that aim to bridge these gaps.
    HALL 3 - Tech Talks

  • AI adoption won’t follow a single trajectory; it will diffuse across industries, functions, and societies through multiple parallel pathways. This session explores 100 practical and emerging diffusion routes through which AI is expected to scale by 2030, from developer-led tooling and enterprise copilots to autonomous workflows, sector-specific AI stacks, embedded intelligence in products, and policy-driven innovation. The focus is on how AI moves from experimentation to systemic integration: what accelerates adoption, what creates friction, and how organizations can strategically position themselves to ride the right diffusion curves. A forward-looking yet execution-oriented perspective for leaders and builders shaping the AI-powered decade ahead.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Artificial intelligence is now core infrastructure. Yet many AI workloads operate on shared public platforms, creating exposure risks for proprietary data and strategic intelligence. Zero-Leak AI Infrastructure delivers dedicated, private AI compute environments built on isolated GPU servers and controlled networking. No public exposure. No uncontrolled egress. No shared tenancy. Secure AI begins with sovereign infrastructure.
    HALL 3 - Tech Talks

  • When building Agentic AI systems, decision architecture is not a technical afterthought—it is the foundation of scale, trust, and long-term adoption in real-world environments.
    HALL 2 - Exclusive Workshops

  • The talk focuses on one of the hardest problems in fashion recommendation systems—new users and new items in rapidly changing catalogs—and how recent advances in large language models enable fundamentally different approaches to representation, understanding, and bootstrapping recommendations at scale. The session will share practical system designs, trade-offs, and lessons learned from using LLMs to address cold start across candidate generation and ranking, including how textual, visual, and contextual signals are combined to reduce dependence on historical interaction data. The emphasis will be on what translated to measurable online impact, and where LLM-based approaches helped—or failed—compared to traditional heuristics and embedding-based methods. The talk is aimed at ML practitioners, recommender-system engineers, and applied researchers working on applied machine learning and real-world deployments.
    HALL 3 - Tech Talks

  • As AI systems evolve from simple prompt-response models to autonomous, goal-driven agents, evaluating their performance becomes significantly more complex. This session explores the emerging challenges and methodologies for assessing Agentic AI systems, moving beyond traditional prompt accuracy metrics toward holistic evaluation of task completion, reasoning reliability, tool usage, and real-world effectiveness. It will discuss practical frameworks, benchmarks, and evaluation strategies that help measure how well AI agents plan, adapt, and execute multi-step tasks. Attendees will gain insights into building robust evaluation pipelines that ensure agentic systems are reliable, accountable, and ready for deployment in real-world applications.
    HALL 1 (Main) - Keynotes / Tech Talks
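Evaluating agents beyond prompt accuracy, across dimensions like task completion, tool-call validity, and step efficiency, can be sketched as a small harness scoring one execution trace. The trace format and metric names here are assumptions for illustration, not any particular benchmark:

```python
def score_trajectory(trace: list, goal_reached: bool) -> dict:
    """Score one agent run on a few of the dimensions the session mentions:
    task completion, tool-call validity, and number of steps taken."""
    tool_calls = [s for s in trace if s["type"] == "tool_call"]
    valid_calls = [s for s in tool_calls if s.get("ok")]
    return {
        "task_completion": 1.0 if goal_reached else 0.0,
        "tool_accuracy": len(valid_calls) / len(tool_calls) if tool_calls else 1.0,
        "steps": len(trace),
    }

# Hypothetical trace from one multi-step run.
trace = [
    {"type": "plan"},
    {"type": "tool_call", "name": "search", "ok": True},
    {"type": "tool_call", "name": "book",   "ok": False},  # one failed call
    {"type": "answer"},
]
metrics = score_trajectory(trace, goal_reached=True)
```

Aggregating such per-run scores over a benchmark suite is what distinguishes agent evaluation from single-response accuracy metrics.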

  • Evolution from conversational bots to action-oriented AI agents; core agentic patterns (intent → plan → act → observe); how agents access knowledge and integrate with real-world systems; agent system architecture and multi-agent collaboration; challenges, failures, and productionizing agentic AI systems
    HALL 2 - Exclusive Workshops
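The core agentic loop named above (intent → plan → act → observe) can be sketched as a bounded control loop. Everything here, the toy planner, the tool registry, and the stopping rule, is an illustrative stand-in for a real model-driven implementation:

```python
def plan(intent, observations):
    """Toy planner: pick the next action, or None when done.
    A real agent would ask an LLM; here the plan is hard-coded."""
    if not observations:
        return "fetch_weather"
    return None  # one observation is enough for this toy goal

TOOLS = {"fetch_weather": lambda: "22C, clear"}  # illustrative tool registry

def run_agent(intent, max_steps=5):
    observations = []
    for _ in range(max_steps):      # bounded loop: agents need a stop condition
        action = plan(intent, observations)   # plan
        if action is None:
            break
        result = TOOLS[action]()              # act
        observations.append((action, result))  # observe
    return observations

obs = run_agent("what's the weather?")
```

The `max_steps` bound is the kind of productionization detail the workshop's "challenges and failures" segment refers to: without it, a confused planner loops forever.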

  • The Green Orchestrator proposes a next-generation agentic AI framework designed to coordinate, optimize, and govern distributed energy ecosystems operating at up to 1,000 TWh annual scale. As global energy systems become increasingly decentralized — spanning smart grids, renewable assets, data centers, EV infrastructure, and industrial facilities — existing optimization approaches remain fragmented, reactive, and limited to local objectives. Current AI deployments in energy largely function as advisory tools or isolated predictive models, lacking persistent memory, cross-system coordination, policy-aware autonomy, and multi-objective optimization capabilities. This proposal introduces a hierarchical, multi-agent orchestration platform built using structured execution graphs (e.g., frameworks such as LangGraph), transforming large language models from conversational systems into goal-directed, stateful decision agents. Unlike conventional AI pipelines, the Green Orchestrator embeds agents within a deterministic, policy-constrained state machine architecture that supports long-horizon reasoning, controlled autonomy, and enterprise-grade observability. At its core, the platform formalizes each agent as a constrained decision process operating over partially observable system states. Agents maintain belief representations through layered memory architectures consisting of short-term operational context, episodic summaries, and long-term vector-symbolic knowledge graphs. A novel energy-weighted memory optimization mechanism dynamically prioritizes retention based on carbon impact, financial risk exposure, grid stability sensitivity, and regulatory criticality. This approach significantly reduces token overhead while preserving high-value contextual intelligence, enabling scalable deployment across distributed edge environments. 
The system introduces hierarchical coordination across four layers: global strategic agents, regional grid agents, site-level optimization agents, and asset-level micro agents. Each layer operates within bounded authority while exchanging structured state updates. This creates distributed intelligence with escalation control and conflict resolution mechanisms analogous to enterprise governance structures. Multi-agent interaction is modeled as a stochastic cooperative game with weighted global objectives, enabling simultaneous optimization of energy efficiency, carbon reduction, cost management, resilience, and compliance. A policy-bound autonomy framework ensures that all agent actions pass through validation gates including regulatory constraint checks, digital twin simulations, and risk evaluation layers before execution. This governance-first design differentiates the platform from experimental agent systems by embedding compliance and safety directly into the decision lifecycle. Domain knowledge is integrated through a hybrid approach combining pretrained model capabilities, retrieval-augmented access to enterprise documentation, structured ontologies of energy assets and constraints, and reinforcement learning via simulation environments. Agents leverage defined tool interfaces — including telemetry APIs, market data feeds, storage dispatch systems, and reporting engines — to interact with operational technology (OT) and enterprise systems in a controlled and auditable manner. The architecture is event-driven, activating agents only when triggered by system changes, thereby reducing computational overhead. Federated edge memory allows localized reasoning while sharing compressed embeddings upward, supporting data sovereignty and low-latency control. 
Projected system impact at 1,000 TWh scale indicates that even modest coordinated optimization (8–12%) yields substantial reductions in energy consumption and carbon emissions while improving peak demand management and operational resilience. For enterprises such as Schneider Electric, the platform represents a strategic evolution from intelligent hardware integration to AI-native sustainability orchestration, enabling subscription-based optimization services and defensible intellectual property in policy-aware autonomous control. In summary, the Green Orchestrator advances the field of agentic AI by integrating hierarchical multi-agent coordination, memory-efficient long-horizon reasoning, policy-embedded governance, and multi-objective optimization within a scalable enterprise framework. It establishes the foundation for a planetary-scale energy nervous system capable of learning, adapting, and autonomously coordinating distributed energy infrastructures responsibly and sustainably.
    HALL 3 - Tech Talks
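The "energy-weighted memory optimization" the abstract describes, retention prioritized by carbon impact, financial risk exposure, grid-stability sensitivity, and regulatory criticality, amounts to a weighted scoring rule over stored items. A minimal sketch with assumed weights; none of this is the actual platform's code:

```python
# Assumed weights over the four criteria named in the abstract.
WEIGHTS = {"carbon": 0.35, "financial": 0.25, "stability": 0.25, "regulatory": 0.15}

def retention_priority(item: dict) -> float:
    """Score a memory item on 0..1; higher-priority items are retained longer."""
    return sum(WEIGHTS[k] * item.get(k, 0.0) for k in WEIGHTS)

def evict(items: list, keep: int) -> list:
    """Keep only the highest-priority items to bound the memory/token budget."""
    return sorted(items, key=retention_priority, reverse=True)[:keep]

events = [
    {"id": "routine-telemetry", "carbon": 0.1, "financial": 0.1,
     "stability": 0.1, "regulatory": 0.0},
    {"id": "grid-fault", "carbon": 0.2, "financial": 0.6,
     "stability": 0.9, "regulatory": 0.8},
]
kept = evict(events, keep=1)
```

Dropping low-priority items while retaining high-impact ones is what reduces token overhead without losing the contextual intelligence the abstract emphasizes.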

  • I’ll share insights on leveraging advanced AI to improve product quality, optimize process efficiency, reduce equipment downtime and increase yield. Looking forward to connecting with the AI developer community and sharing practical perspectives at MLDS 2026.
    HALL 2 - Exclusive Workshops

  • This presentation introduces Inya Voice OS by Gnani.ai, an integrated voice AI platform that brings together Speech-to-Text (STT), Text-to-Speech (TTS), and Voice-to-Voice capabilities into a unified system. It explores how combining these components enables seamless, real-time conversational experiences, reducing latency and improving accuracy across voice interactions. The session will highlight the design principles behind building scalable and efficient voice pipelines, including handling multilingual speech, optimizing model performance, and enabling natural, human-like responses. It will also discuss practical applications of such an integrated voice stack in enterprise environments, demonstrating how end-to-end voice systems can enhance user engagement and operational efficiency. Additionally, the presentation will touch upon key challenges in deploying voice AI at scale, such as noise robustness, domain adaptation, and real-world variability in speech. The presentation will also include select case studies illustrating real-world deployments and measurable impact across industries. It will conclude with insights into future directions for voice technologies, including more personalized, context-aware, and adaptive voice interactions.
    HALL 1 (Main) - Keynotes / Tech Talks

  • In an age that rewards urgency, Cheteshwar Pujara chose endurance. This conversation traces a career built not on flourish, but on resolve — from disciplined beginnings in Rajkot to defining performances in Australia, from absorbing pressure at Brisbane to beginning again in county cricket. At No. 3, he learned to walk in early and leave late. We’ll explore concentration, doubt, reinvention, and the craft of staying when the game — and sometimes the system — moves on. An evening about patience as strength, time as an ally, and the quiet ambition required to hold your ground.
    HALL 1 (Main) - Keynotes / Tech Talks


The AI Innovation Playground

Expect two days packed with deep dives into generative AI, practical coding challenges, and real-world case studies that prepare you for the next wave of intelligent applications.

Supported by brands building the future of AI.


Grab your ticket for a unique experience of
inspiration, meeting and networking with
Top Developers in India

Book your tickets at the earliest. We have a hard stop at 1300 passes.
Note: Ticket pricing is subject to change at any time.

Building the Age of Agentic AI

From Models to Agents

Explore how LLMs are evolving into intelligent, goal-driven agents that collaborate, reason, and act autonomously.

Scaling AI in the Real World

Dive into architectures, frameworks, and deployment strategies powering production-grade generative and agentic AI systems.

The Future of Human + AI Collaboration

Discover how agentic AI is reshaping developer workflows, enterprise ecosystems, and the very nature of innovation.

It’s been a bit late to post this but I have to say what an event it really was (Machine Learning Developers Summit’23), after COVID the first time this event happened without virtual meeting and the interaction was also amazing by Data Scientists & Machine learning engineers/ enthusiasts, thanks to AIM

Het Patel

IBM

Attended the brilliant and insightful #MLDS2023. The sessions, talks, presentations and workshops were engaging and knowledgeable, a very enriching experience.

Arijit Gayen

iMerit

Attending the Machine Learning Summit 2023 in Bangalore was an incredible opportunity for me to deepen my understanding of the latest advancements and trends in the field.

The keynote speakers were inspiring and provided valuable insights, and I had the chance to network with many other professionals and experts in the industry.

KIRTHIK A

KGiSL

Michelin is glad to be a part of the Machine Learning Developers Summit (MLDS) 2023 which concluded last week in Bangalore, India. The summit comprised of numerous keynote sessions by industry experts.

Michelin

Attending MLDS 2025 was an eye-opening experience into how rapidly the ML & AI landscape is evolving. Trends truly wait for no one and this year, the spotlight was firmly on Agentic AI and Agentic pipelines.

Samuel Shine

Machine Learning Engineer at CVC NETWORK

We were proud to participate in 𝗜𝗻𝗱𝗶𝗮’𝘀 𝗹𝗮𝗿𝗴𝗲𝘀𝘁 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿𝘀 𝘀𝘂𝗺𝗺𝗶𝘁 – 𝗠𝗟𝗗𝗦 𝟮𝟬𝟮𝟱, where the spotlight was on GenAI, agentic systems and the future of AI-driven innovation.

Kévin BERTRAND

Manager @ Capco

AIM 40 Under 40 AI Builders

Honoring under-40 makers who ship real AI to production at scale in India.

Showcase your leadership in agentic AI,
connect with top developers, and shape
the future of intelligent systems.