Schedules

Event Schedule 2023

We are in the process of finalizing the sessions for 2023. Expect more than 30 talks at the summit. Please check this page again soon.

19th–20th Jan 2023, Bangalore, India

  • Day 1


  • The tacit premise of modern ML: decision-time computation is where value gets created. Train a model, learn a policy, deploy. The policy is the intelligence. In this talk I will demonstrate that structural graph design, specifically via randomised edge perturbation, achieves Pareto-dominant performance over learned and optimisation-based methods in urban food delivery. Our core claim is that, if the environment in which agents operate is designed with sufficient structural care, the agents themselves can be remarkably simple, and the system as a whole will still outperform agents that are far more computationally sophisticated.
    HALL 1 (Main) - Keynotes / Tech Talks
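The randomised edge perturbation the talk centres on can be sketched minimally; the function, graph, and perturbation rule below are illustrative assumptions, not the speaker's implementation:

```python
import random

def perturb_edges(edges, nodes, p=0.05, seed=42):
    """Randomly rewire a fraction p of edges (illustrative sketch)."""
    rng = random.Random(seed)
    perturbed = []
    for u, v in edges:
        if rng.random() < p:
            # Replace one endpoint with a random node to create a shortcut.
            v = rng.choice([n for n in nodes if n != u])
        perturbed.append((u, v))
    return perturbed

grid = [(i, i + 1) for i in range(9)]  # a simple path graph 0-1-...-9
new_edges = perturb_edges(grid, list(range(10)), p=0.2)
print(len(new_edges))  # edge count is preserved; only endpoints change
```

The same seed yields the same perturbed graph, which matters when comparing simple agents across structural variants.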

  • Learn how to take agentic applications from lab experiments to production-grade deployments using AWS Copilot. This workshop shows how to automate infrastructure provisioning and CI/CD pipelines, and how to integrate standards like the Model Context Protocol (MCP) to extend agents beyond chat into real-world tasks. By the end, you’ll know how to create resilient, scalable, and context-aware AI agents that can truly operate in enterprise environments, while freeing developers to focus on logic instead of infrastructure firefighting.
    HALL 2 - Exclusive Workshops

  • The talk introduces an innovative framework for training AI agents using trajectory deep reinforcement learning (GRPO) and active intelligent memory utilization. By treating entire decision trajectories as the core unit of learning, it addresses challenges such as trajectory blindness and high supervision costs, thereby enhancing agent performance and understanding. This framework supports continuous experiential learning without extensive human oversight and incorporates a modular evaluation methodology for assessing enterprise agentic platforms. The framework aims to improve the efficiency, adaptability, and risk management of AI systems, driving wider adoption in enterprise environments.
    HALL 1 (Main) - Keynotes / Tech Talks

  • As AI disrupts current architectures, we need guiding principles to architect applications appropriately and harness the full potential of AI. The session explains the guiding principles for implementing multi-agentic applications and provides a technical breakdown of those principles, using examples from Snowflake Cortex Agents & Platform.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Modern AI systems are rapidly evolving from simple prompt-response models to autonomous agents capable of reasoning, using tools, retrieving knowledge, and executing complex workflows. But what actually makes an AI agent work? In this session, The Anatomy of an AI Agent, we will break down the core building blocks behind modern agentic systems. We will explore how large language models, memory, retrieval, planning, and tool execution come together to create intelligent, reliable, and production-ready AI agents. The session will focus on practical architecture patterns used in real-world systems, including how agents reason over data, interact with external tools, maintain context, and handle multi-step tasks. Attendees will gain a clear mental model of how AI agents are designed, the trade-offs involved, and what it takes to move from demos to scalable, real-world deployments. This talk is intended for engineers, architects, and AI practitioners who want to understand how modern AI agents are actually built under the hood.
    HALL 3 - Tech Talks

  • Building an agent can be done quickly. Building an agent that holds up in production, under time pressure, with operational reliability – now, that’s the hard part. In this talk, an AI researcher from Millennium will share a practical engineering playbook for deploying agents into high-stakes buy-side workflows: research, monitoring, operations, and reporting processes. This talk will go beyond buzzwords to show how to build a production-grade agent: composing proven patterns (tool-chaining, reflection, human-in-the-loop, and selective multi-agent design) into systems that are constrained, observable, and governable. This session will highlight the failure modes that tutorials omit, and the design decisions that prevent them.
    HALL 1 (Main) - Keynotes / Tech Talks

  • LLMs today promise endless text generation, but creating high-fidelity synthetic text data that actually reflects complex business logic remains an engineering challenge. This talk moves beyond basic "prompt and pray" techniques to address the various nuances of creating instruction datasets useful for knowledge distillation, domain adaptation, and reinforcement learning (RL) workflows. We will examine why direct generation often causes datasets to regress to the mean, producing repetitive, safe content that lacks the messy edge cases required for robust training. To solve this, we suggest a systematic, algorithmic approach that treats data generation as an engineering problem. We will discuss how to decompose pipelines into iterative batches to programmatically inject real-world variations. We also conclude with a strategic checklist to evaluate if synthetic data is truly well-suited to your enterprise problem.
    HALL 3 - Tech Talks
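The batched-generation idea described above, decomposing the pipeline and programmatically injecting variation so the dataset does not regress to the mean, might look like the following sketch; the persona and edge-case lists are invented placeholders, and the prompts would be sent to an LLM in a real pipeline:

```python
import random

# Hypothetical attribute grids; a real pipeline would mine these from domain data.
PERSONAS = ["new customer", "frustrated power user", "non-native speaker"]
EDGE_CASES = ["ambiguous request", "conflicting constraints", "partial information"]

def build_prompts(task, n_batches=3, batch_size=2, seed=0):
    """Decompose generation into batches, injecting a different
    persona/edge-case combination into each prompt."""
    rng = random.Random(seed)
    prompts = []
    for b in range(n_batches):
        for _ in range(batch_size):
            persona = rng.choice(PERSONAS)
            edge = rng.choice(EDGE_CASES)
            prompts.append(
                f"[batch {b}] Write a {task} example from a {persona}, "
                f"featuring {edge}."
            )
    return prompts

for p in build_prompts("support ticket"):
    print(p)
```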

  • Agentic systems are non-deterministic—making them harder to debug with traditional logs. This workshop takes you deep into telemetry: instrumentation, observability pipelines, and analysis techniques that capture reasoning loops, tool failures, and system context. You’ll walk away with hands-on methods to turn raw signals into actionable insights, ensuring your autonomous agents remain reliable and explainable in production, even when facing unpredictable environments.
    HALL 2 - Exclusive Workshops
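As a taste of the instrumentation the workshop covers, here is a minimal, stdlib-only sketch of wrapping an agent tool so every call emits a structured telemetry event; the tool, event fields, and print-as-export are all hypothetical stand-ins for a real observability pipeline:

```python
import functools
import json
import time
import uuid

def traced(tool):
    """Wrap an agent tool so each call emits a structured telemetry event."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        event = {"span_id": uuid.uuid4().hex, "tool": tool.__name__,
                 "args": repr(args), "start": time.time()}
        try:
            result = tool(*args, **kwargs)
            event["status"] = "ok"
            return result
        except Exception as exc:
            event["status"] = "error"
            event["error"] = repr(exc)
            raise
        finally:
            event["duration_s"] = time.time() - event.pop("start")
            print(json.dumps(event))  # ship to an observability backend instead
    return wrapper

@traced
def lookup_order(order_id):
    return {"order_id": order_id, "status": "shipped"}

lookup_order("A-123")
```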

  • As large language models move from prototypes into enterprise workflows, teams across the industry increasingly face a practical operational alignment problem: how to steer model behavior toward business outcomes in a reliable and measurable way. At the same time, for many practitioners, a critical decision remains unclear: whether a use case is best addressed through prompt engineering, supervised fine-tuning, or reinforcement learning. This talk introduces a structured framework for choosing the right approach and a practical method for translating business objectives into reward signals that can be optimized, evaluated, and audited. Through hands-on experiments in incremental preference learning using reward modeling and policy optimization, the session demonstrates how meaningful behavioral shifts can be achieved even with small, carefully curated datasets. It also examines the stability trade-offs between staged training and online RL updates. Additionally, the session distills practical guidance on designing verifiable reward signals from ambiguous business objectives. This segment identifies when exploration and delayed rewards make reinforcement learning necessary and how to avoid common failure modes, such as reward hacking and instability. Rather than focusing on scale alone, the session emphasizes disciplined reward design and systematic experimentation as the foundation for deploying reinforcement learning effectively in enterprise LLM systems.
    HALL 3 - Tech Talks

  • As AI agents evolve from experimental prototypes to real-world production systems, enterprises are realizing that model capability alone is not enough. The true challenge lies in how agents manage, retrieve, and retain information across complex, long-running workflows. Getting this right is often the difference between an impressive demo and a system that consistently delivers value. Building effective agentic AI requires a strong approach to context and memory. Developers must understand how different memory types such as in-context, external, episodic, and semantic work together, and when to use each. Techniques like Retrieval-Augmented Generation (RAG) help bridge knowledge gaps, while thoughtful design ensures agents can retain and use information across sessions. Join us for this Tech Talk where Sanketh and Anshul will break down how context and memory shape intelligent agents. Walk away with a practical framework for building AI agents that remember, reason, and scale reliably.
    HALL 1 (Main) - Keynotes / Tech Talks

  • ADM – Founder’s Voice: Redefining the Future of Data with Agentic Intelligence. In this special Founder’s Voice session, Raghu Mitra shares the vision, philosophy, and engineering journey behind Acceldata’s evolution from data observability to Agentic Data Management (ADM). As enterprises scale AI, analytics, and data-driven decision-making, traditional monitoring and reactive governance are no longer sufficient. The future demands systems that do not just observe data but understand, reason, and act. ADM represents this next frontier. Built on the foundation of ADOC, ADM introduces autonomous, AI-powered agents that continuously monitor data ecosystems, diagnose issues with contextual intelligence, and execute corrective actions with minimal human intervention. In this session, Raghu explores the architectural thinking, real-world challenges, and breakthrough innovations that shaped ADM into a self-driving data management platform. Attendees will gain insight into: the shift from reactive observability to autonomous data operations; how agentic systems transform reliability, quality, and governance; the engineering principles behind scalable AI-native data management; and the long-term vision for intelligent, self-healing data ecosystems. This session is not just a product overview; it is a forward-looking perspective on how agentic intelligence will redefine enterprise data strategy in the AI era.
    HALL 3 - Tech Talks

  • As AI agents become more prevalent across industries, ensuring they operate with the depth and accuracy of true subject matter expertise is becoming increasingly important. An expert-first approach focuses on anchoring AI agents in structured domain knowledge, expert insights, and reliable information sources rather than relying solely on generalized model outputs. This session will explore why grounding AI agents in subject matter knowledge is essential for building trustworthy and effective AI systems. It will discuss the role of expert input, curated knowledge sources, and contextual understanding in enabling agents to deliver more accurate, relevant, and meaningful outcomes. The session will also highlight how organizations can design AI solutions that combine the power of large language models with domain expertise to create more reliable and impactful AI-driven experiences.
    HALL 3 - Tech Talks

  • The hospitality industry thrives on trust, experience, and word-of-mouth advocacy. This session explores how an AI-based referral engine can transform customer referrals from a reactive process into a scalable, intelligence-driven growth lever. The leader will share how AI can be used to identify customers with a high propensity to refer, enabling businesses to focus efforts where advocacy is most likely to convert. The session will also cover smart allocation of relationship managers using predictive insights, ensuring high-value interactions are prioritized and resources are optimally deployed. Additionally, the talk will highlight how AI-driven profile enrichment—combining behavioral, transactional, and engagement data—can power deeply personalized communication at scale. Attendees will gain practical insights into building referral ecosystems that are proactive, personalized, and measurable, driving sustainable growth and stronger customer relationships in the hospitality sector.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Most enterprise AI systems stop at generating predictions such as churn probabilities, fraud scores, recommendations, or forecasts, but business value is realized only when those predictions translate into reliable, automated decisions. This session focuses on the critical decision layer that sits between model outputs and real-world enterprise workflows. Designed for developers and ML practitioners, it explores how to build production-ready systems that combine model predictions with rules, thresholds, optimization logic, and human-in-the-loop controls to drive actionable outcomes. The talk will also cover handling uncertainty, edge cases, governance, and monitoring decision quality—not just model accuracy—ensuring AI systems are robust, scalable, and aligned with measurable business impact.
    HALL 3 - Tech Talks
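The decision layer the session describes, rules, thresholds, and human-in-the-loop escalation sitting between a model score and the workflow action, can be sketched as follows; every threshold, field, and action name here is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    reason: str

def decide(churn_prob, customer_value, uncertainty):
    """Illustrative decision layer around a churn score: uncertain cases
    escalate to a human; confident cases flow through business rules."""
    if uncertainty > 0.3:
        return Decision("escalate_to_human", "model uncertainty too high")
    if churn_prob > 0.8 and customer_value > 10_000:
        return Decision("assign_retention_manager", "high risk, high value")
    if churn_prob > 0.8:
        return Decision("send_retention_offer", "high risk")
    return Decision("no_action", "risk below threshold")

print(decide(0.9, 25_000, 0.1))  # high-value saves get a human manager
print(decide(0.9, 500, 0.5))     # uncertain predictions go to review
```

Note that monitoring decision quality means logging the `reason` alongside the action, not just the model score.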

  • Engineering for Human-Like, Multilingual Voicebots for Bharat explores how voice-first AI systems can be designed to serve India’s linguistically diverse and mobile-first population. Drawing from his experience building large-scale, real-world platforms, Kiran Kumar Katreddi will discuss the engineering foundations behind creating voicebots that feel natural, conversational, and inclusive across multiple Indian languages and dialects. The session will cover how technologies such as speech recognition, natural language understanding, and text-to-speech come together to handle code-mixed speech, regional accents, and low-resource languages, while operating at scale with low latency. Kiran will also highlight the unique challenges and design considerations specific to Bharat, and how human-like multilingual voicebots can significantly expand digital access, improve customer experiences, and enable intuitive interactions for users beyond English-first, text-based interfaces.
    HALL 3 - Tech Talks

  • The rapid expansion of the global space economy is generating unprecedented volumes of data and increasingly complex operational challenges. Traditional AI systems have largely focused on narrow tasks such as image classification or anomaly detection. However, the next frontier lies in agentic AI systems that can perceive, reason, and act autonomously across multiple components of the space ecosystem. This talk explores how agentic AI can enable a new generation of autonomous space capabilities, spanning satellite Earth observation, robotic spacecraft operations, and intelligent ground infrastructure. We will discuss AI agentic workflow examples that detect vehicles and strategic assets from satellite imagery, support autonomous robotic docking and inspection, and optimize ground station networks for efficient satellite communications.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Building machine learning models is only a small part of the challenge in heavy-industry environments. The real complexity lies in deploying, scaling, and operating ML systems that must work reliably on the shop floor—often under strict safety, latency, and reliability constraints. This session walks through the end-to-end journey of building production-grade ML systems for heavy industry, covering data acquisition from industrial systems, model development, validation, and deployment into real-world decision workflows. It will highlight how ML models are integrated with existing operational technology (OT) systems, how predictions translate into actionable shop-floor decisions, and how teams handle issues like data drift, model monitoring, explainability, and human-in-the-loop controls. Attendees will gain practical insights into ML system design, MLOps, and decision engineering in industrial settings, along with lessons learned from taking models out of notebooks and into mission-critical production environments.
    HALL 1 (Main) - Keynotes / Tech Talks

  • BharatGen represents a new paradigm in building AI systems that are sovereign, inclusive, and purpose-built for India’s diverse linguistic and cultural landscape. This session explores how frugally scalable multilingual and multimodal AI models can be developed to serve Bharat at scale, balancing technological advancement with accessibility and efficiency. It will highlight the principles behind shared national AI infrastructure, enabling collaboration across academia, industry, and government to create AI that understands and serves India’s many languages and modalities. Attendees will gain insights into the opportunities, challenges, and impact of building sovereign AI capabilities that empower innovation while ensuring that AI development remains accessible, affordable, and aligned with the needs of Bharat.
    HALL 1 (Main) - Keynotes / Tech Talks

  • As organizations move from standalone LLM applications to complex, agentic AI workflows, LLMOps becomes the critical backbone enabling scale, reliability, and trust. This session explores how to design robust LLMOps frameworks to build, monitor, and govern multi-agent systems in production. It will cover practical approaches to orchestration, observability, evaluation, cost control, and risk management, along with governance strategies to ensure compliance, safety, and responsible AI at scale. Attendees will leave with actionable insights to operationalize agentic AI systems that are resilient, transparent, and enterprise-ready.
    HALL 3 - Tech Talks

  • Day 2


  • This workshop explores memory architectures that give agents continuity and true persistence. You’ll learn about episodic vs. semantic memory, vector database integration, memory consolidation strategies, and retrieval balancing recency with relevance. Participants will build agents that learn from every interaction, maintain coherent long-term context, and avoid common pitfalls like context pollution or catastrophic forgetting—core skills for anyone aiming to scale agentic AI responsibly.
    HALL 2 - Exclusive Workshops
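One way to picture retrieval that balances recency with relevance, as the workshop describes, is a blended score with exponential recency decay; the weights, embeddings, and memory records below are invented for illustration:

```python
import math

def score(memory, query_embedding, now, half_life_s=3600.0, alpha=0.7):
    """Blend semantic relevance (dot product) with exponential recency decay."""
    relevance = sum(a * b for a, b in zip(memory["embedding"], query_embedding))
    recency = math.exp(-(now - memory["t"]) / half_life_s)
    return alpha * relevance + (1 - alpha) * recency

memories = [
    {"text": "user prefers window seats", "embedding": [0.9, 0.1], "t": 0},
    {"text": "user asked about refunds",  "embedding": [0.2, 0.8], "t": 3500},
]
now = 3600
query = [1.0, 0.0]  # pretend this encodes "seating preference"
best = max(memories, key=lambda m: score(m, query, now))
print(best["text"])
```

With `alpha=0.7` the older but highly relevant memory wins; lowering `alpha` shifts retrieval toward recent context.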

  • Building AI-First Operating Models explores how organizations can move beyond isolated AI initiatives to embed intelligence at the core of their operating model. In this session, Abhishek Singh shares a practical and strategic view on designing AI-first enterprises where data, models, and decision intelligence are tightly integrated into everyday workflows. The discussion will cover the shift from traditional automation to AI-led orchestration, key principles such as data readiness, scalable AI infrastructure, human-in-the-loop governance, and cross-functional collaboration, along with real-world examples of how AI-first models are driving measurable impact across operations, customer experience, and decision-making. Attendees will walk away with a clear framework to transition from experimentation to enterprise-scale AI adoption and build resilient, future-ready operating models.
    HALL 3 - Tech Talks

  • Generic AI coding assistants are great at syntax but fail at context. They don’t know your production schemas, your dbt DAGs, or your FinOps constraints. In this session, we dive into Cortex Code, Snowflake’s native AI agent that operates within your data’s security perimeter. We will demonstrate how Cortex Code moves beyond simple code generation to perform “agentic” tasks: self-healing data pipelines, automated dbt scaffolding, and cross-platform orchestration (CLI to Snowsight). Learn how to turn natural language into production-ready, governed data infrastructure in minutes rather than hours.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Every database architecture decision we have made relied on a few assumptions: callers are predictable, writes are deliberate, connections are short, failures are obvious, and schemas are understood. For decades, these held because a human was always in the loop. Agentic AI changes that. Autonomous, LLM-powered agents generate queries through reasoning, write at machine speed, hold connections during long chains of thought, fail quietly, and interpret schemas through a model rather than shared context. When we attach them to existing data layers, the assumptions we never formalized start to show their cracks. This talk walks through those assumptions, the production issues that follow, and how to design databases for agents. Database patterns that once seemed nice-to-have become essential once agents are in the system.
    HALL 3 - Tech Talks

  • Most AI initiatives fail not because the model is weak, but because teams lack a shared, reusable pattern language that turns experimental wins into production systems. This session distils three decades of engineering into 8 eras and 300+ patterns — showing how each technology wave (structured programming, OOP/GoF, SOA/events, cloud/microservices, cloud security, AI/ML, and now agentic AI) accelerated once solutions were named, standardised and made communicable. Attendees will learn the core pattern families behind production-grade agentic systems: reasoning, memory (RAG), tool use (ReAct), orchestration and enterprise safety controls including human-in-the-loop gates. The session concludes with a live Spec-Driven SDLC demo — where the spec acts as the contract coordinating a multi-agent delivery pipeline, from architecture through deployment. The talk closes by connecting the methodology to BITS Pilani Digital's AI Engineering & MLOps programmes, demonstrating how industry–academia partnerships enable learners to apply these patterns to real-world problems and move from prototypes to production with engineering rigour.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Agentic AI systems are rapidly moving from demos to mission‑critical workflows, but most of them still behave as pattern‑matchers with tools, not as systems that understand cause and effect. The result is familiar: agents that sound confident while suggesting actions that quietly violate business logic, break regulations, or create hidden risk. This talk introduces “causal guardrails”—an architecture where structural causal models (SCMs) sit around GenAI agents to constrain, explain, and validate their decisions. Instead of relying solely on prompts and heuristics, agents must route their plans through explicit causal graphs that encode allowed interventions, downstream impacts, and hard constraints. The session will walk through intuitive examples (credit risk, IT ops, or recommendation workflows), show how to combine LLM-based agents with SCMs in practice, and discuss how this improves robustness, debuggability, and auditability. Attendees will leave with concrete patterns for using causal modeling to keep autonomous GenAI “sane” in real enterprise environments, not just in benchmarks.
    HALL 3 - Tech Talks
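A minimal sketch of the causal-guardrail gate described above, assuming a toy causal graph and a policy blocklist rather than a full structural causal model; all intervention and effect names are hypothetical:

```python
# Hypothetical causal graph: each allowed intervention lists its downstream
# effects; some effects are hard-blocked by policy.
CAUSAL_GRAPH = {
    "raise_credit_limit": ["customer_exposure_up", "default_risk_up"],
    "send_reminder_email": ["customer_contacted"],
}
BLOCKED_EFFECTS = {"default_risk_up"}

def validate_plan(actions):
    """Reject any agent plan whose causal downstream hits a blocked effect."""
    for action in actions:
        effects = CAUSAL_GRAPH.get(action)
        if effects is None:
            return False, f"unknown intervention: {action}"
        hit = BLOCKED_EFFECTS & set(effects)
        if hit:
            return False, f"{action} would cause blocked effect(s): {sorted(hit)}"
    return True, "plan is within causal guardrails"

print(validate_plan(["send_reminder_email"]))
print(validate_plan(["raise_credit_limit"]))
```

The key property is that the agent's plan is validated against explicit, auditable structure rather than prompts alone.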

  • This talk will explore how Scapia is scaling its CX Bot in production by building robust evaluation frameworks that continuously measure response quality, accuracy, and reliability in real time. The session will also dive into model resiliency, highlighting how Scapia is developing an internal platform that enables employees to easily switch between different models, experiment rapidly, adopt best practices, and share learnings across teams. In addition, it will cover Scapia’s approach to hosting models within its own data center to maintain stronger control over data governance, security, and policy compliance while operating AI systems at scale.
    HALL 3 - Tech Talks

  • This session covers: the evolution from conversational bots to action-oriented AI agents; core agentic patterns (intent → plan → act → observe); where agentic AI delivers real business value; and a simple, safe demo showing how conversations trigger actions.
    HALL 1 (Main) - Keynotes / Tech Talks
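The intent → plan → act → observe loop named in that session can be sketched as follows; the hard-coded planner and `fetch_balance` tool are placeholders for the LLM planning step and real tools:

```python
def plan_next(goal, observations):
    """Hard-coded planner for the demo: fetch once, then stop.
    A real agent would ask an LLM for the next step here."""
    if not observations:
        return {"tool": "fetch_balance", "args": {"account": goal}}
    return None  # goal satisfied

def run_agent(goal, tools, max_steps=5):
    """Minimal intent -> plan -> act -> observe loop (illustrative)."""
    observations = []
    for _ in range(max_steps):
        plan = plan_next(goal, observations)          # plan
        if plan is None:
            break
        result = tools[plan["tool"]](**plan["args"])  # act
        observations.append(result)                   # observe
    return observations

tools = {"fetch_balance": lambda account: {"account": account, "balance": 42}}
print(run_agent("acct-7", tools))
```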

  • Agentic AI is reshaping how intelligent systems reason and act — but the next frontier lies in bringing that intelligence into the physical world. This session explores the shift from digital agents to Physical AI systems that interact with real-world environments, devices, and operations. We’ll examine the architectural principles, governance models, and system-level design patterns required to build reliable, scalable intelligent systems beyond the screen.
    HALL 3 - Tech Talks

  • Interoperability will define the future of agent ecosystems. This workshop unpacks the emerging standards—Model Context Protocol (MCP), Agent-to-Agent (A2A), and Agent Communication Protocol (ACP)—that allow agents to “speak” to each other. Through hands-on exercises, you’ll compare strengths, trade-offs, and real implementations. You’ll learn to build adaptable systems that can evolve with changing standards—future-proofing your AI stack for a multi-agent, protocol-driven world.
    HALL 2 - Exclusive Workshops

  • AI adoption in software organisations is not failing only because the capability of models is insufficient. One reason is that the teams using them have not developed the mindset, habits, or knowledge structures needed to unlock their full potential. This talk makes the case that the primary bottleneck is disposition: the posture with which a practitioner approaches AI. The talk is grounded in a production-tested framework and inspired by the collective intelligence of individuals who have applied these techniques to achieve 100% AI-augmented code generation on live projects for over a year. The talk introduces a blueprint in which domain experts encode structured knowledge that is leveraged by AI to generate code, enabling the organisation to scale and grow at a much faster pace. Attendees will leave with an understanding of why AI seems challenging to adopt in domain-specific contexts, what structured knowledge encoding looks like in practice, and one actionable step they can take in their own team the following week.
    HALL 1 (Main) - Keynotes / Tech Talks

  • The Problem: A brief case study highlighting the consequences of non-reproducibility in AI decisions. The Challenges: Real-world complexities of maintaining transparency in multi-agentic systems. The Technical Roadmap: Methods and tools to ensure GenAI systems are fully auditable. The Decision Rationale: A deep dive into capturing decision snapshots, logging execution paths, and environment versioning, supported by a technical architecture diagram. Implementation Strategies: Practical takeaways including structured logging, deterministic replays using fixed seeds, shadow-mode testing, and immutable audit trails.
    HALL 3 - Tech Talks
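The deterministic-replay idea listed above, fixed seeds plus hashed decision snapshots feeding an immutable audit trail, can be sketched like this; the decision logic itself is a placeholder:

```python
import hashlib
import json
import random

def run_decision(inputs, seed):
    """Seeded decision step: identical seed and inputs give identical output,
    and the snapshot digest can be written to an immutable audit trail."""
    rng = random.Random(seed)
    choice = rng.choice(inputs["options"])
    snapshot = {"inputs": inputs, "seed": seed, "choice": choice}
    digest = hashlib.sha256(
        json.dumps(snapshot, sort_keys=True).encode()
    ).hexdigest()
    return choice, digest

inputs = {"options": ["approve", "review", "reject"]}
c1, d1 = run_decision(inputs, seed=1234)
c2, d2 = run_decision(inputs, seed=1234)
assert (c1, d1) == (c2, d2)  # deterministic replay
print(c1, d1[:12])
```

In a multi-agent system the same pattern extends to logging the full execution path, not just one decision.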

  • I’ll share insights on leveraging advanced AI to improve product quality, optimize process efficiency, reduce equipment downtime and increase yield. Looking forward to connecting with the AI developer community and sharing practical perspectives at MLDS 2026.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Agentic AI is moving beyond “better prompts” into a new layer of infrastructure: standardized tool connectivity, agent-to-agent interoperability, and disciplined context engineering. In this session, I’ll share a production blueprint for building agents that can safely plug into enterprise systems using open protocols (MCP for agent-to-tools, A2A for agent-to-agent collaboration). We’ll cover how to design “context interfaces” (what the model can request, when, and why), handle long-running tasks with progress + recovery, and ship with strong authorization, auditability, and guardrails. Attendees will leave with a practical reference architecture and implementation checklist they can apply immediately.
    HALL 1 (Main) - Keynotes / Tech Talks

  • Physical intelligence, the ability of machines to perceive, reason, and act in the physical world, remains one of the key frontiers in artificial intelligence. While modern AI systems have made significant progress in language, vision, and pattern recognition, they still struggle to fully understand and interact with the physical laws that govern real-world environments. This session explores the technology gaps in physical intelligence, focusing on the disconnect between current intelligence models and the principles of physics that shape real-world interactions. It will examine limitations such as weak physical reasoning, challenges in predicting object dynamics, and the difficulty of learning from limited real-world data. The talk will also highlight the “physics gaps” in today’s AI models, where purely data-driven approaches fall short in capturing causal and dynamic properties of the physical world, and discuss emerging opportunities in areas such as embodied AI, simulation-based learning, robotics, and hybrid physics-AI systems that aim to bridge these gaps.
    HALL 3 - Tech Talks

  • Artificial intelligence is now core infrastructure. Yet many AI workloads operate on shared public platforms, creating exposure risks for proprietary data and strategic intelligence. Zero-Leak AI Infrastructure delivers dedicated, private AI compute environments built on isolated GPU servers and controlled networking. No public exposure. No uncontrolled egress. No shared tenancy. Secure AI begins with sovereign infrastructure.
    HALL 3 - Tech Talks

  • AI adoption won’t follow a single trajectory; it will diffuse across industries, functions, and societies through multiple parallel pathways. This session explores 100 practical and emerging diffusion routes through which AI is expected to scale by 2030, from developer-led tooling and enterprise copilots to autonomous workflows, sector-specific AI stacks, embedded intelligence in products, and policy-driven innovation. The focus is on how AI moves from experimentation to systemic integration: what accelerates adoption, what creates friction, and how organizations can strategically position themselves to ride the right diffusion curves. A forward-looking yet execution-oriented perspective for leaders and builders shaping the AI-powered decade ahead.
    HALL 1 (Main) - Keynotes / Tech Talks

  • The talk focuses on one of the hardest problems in fashion recommendation systems—new users and new items in rapidly changing catalogs—and how recent advances in large language models enable fundamentally different approaches to representation, understanding, and bootstrapping recommendations at scale. The session will share practical system designs, trade-offs, and lessons learned from using LLMs to address cold start across candidate generation and ranking, including how we combine textual, visual, and contextual signals to reduce dependence on historical interaction data. The emphasis will be on what translated to measurable online impact, and where LLM-based approaches helped—or failed—compared to traditional heuristics and embedding-based methods. I believe this talk would resonate well with ML practitioners, recommender system engineers, and applied researchers, and would complement the conference’s focus on recommender systems, applied machine learning, and real-world deployments.
    HALL 3 - Tech Talks

  • As AI systems evolve from simple prompt-response models to autonomous, goal-driven agents, evaluating their performance becomes significantly more complex. This session explores the emerging challenges and methodologies for assessing Agentic AI systems, moving beyond traditional prompt accuracy metrics toward holistic evaluation of task completion, reasoning reliability, tool usage, and real-world effectiveness. It will discuss practical frameworks, benchmarks, and evaluation strategies that help measure how well AI agents plan, adapt, and execute multi-step tasks. Attendees will gain insights into building robust evaluation pipelines that ensure agentic systems are reliable, accountable, and ready for deployment in real-world applications.
    HALL 1 (Main) - Keynotes / Tech Talks
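One shape such an evaluation pipeline can take, scoring a whole agent episode on task completion, tool reliability, and step budget rather than prompt accuracy alone, is sketched below; the episode fields and check names are invented:

```python
def evaluate_episode(episode, checks):
    """Score one agent episode against a set of named boolean checks."""
    results = {name: check(episode) for name, check in checks.items()}
    results["pass"] = all(results.values())
    return results

episode = {
    "goal": "refund order A-123",
    "tool_calls": [("lookup_order", "ok"), ("issue_refund", "ok")],
    "final_state": {"refunded": True},
}
checks = {
    "task_completed": lambda e: e["final_state"].get("refunded", False),
    "no_tool_errors": lambda e: all(s == "ok" for _, s in e["tool_calls"]),
    "within_budget":  lambda e: len(e["tool_calls"]) <= 5,
}
print(evaluate_episode(episode, checks))
```

Running the same checks over many recorded episodes turns ad hoc spot-checking into a regression suite for agent behaviour.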

  • When building Agentic AI systems, decision architecture is not a technical afterthought—it is the foundation of scale, trust, and long-term adoption in real-world environments.
    HALL 1 (Main) - Keynotes / Tech Talks

  • The Green Orchestrator proposes a next-generation agentic AI framework designed to coordinate, optimize, and govern distributed energy ecosystems operating at up to 1,000 TWh annual scale. As global energy systems become increasingly decentralized — spanning smart grids, renewable assets, data centers, EV infrastructure, and industrial facilities — existing optimization approaches remain fragmented, reactive, and limited to local objectives. Current AI deployments in energy largely function as advisory tools or isolated predictive models, lacking persistent memory, cross-system coordination, policy-aware autonomy, and multi-objective optimization capabilities.

    This talk introduces a hierarchical, multi-agent orchestration platform built on structured execution graphs (e.g., LangGraph), transforming large language models from conversational systems into goal-directed, stateful decision agents. Unlike conventional AI pipelines, the Green Orchestrator embeds agents within a deterministic, policy-constrained state-machine architecture that supports long-horizon reasoning, controlled autonomy, and enterprise-grade observability.

    At its core, the platform formalizes each agent as a constrained decision process operating over partially observable system states. Agents maintain belief representations through layered memory architectures consisting of short-term operational context, episodic summaries, and long-term vector-symbolic knowledge graphs. A novel energy-weighted memory optimization mechanism dynamically prioritizes retention based on carbon impact, financial risk exposure, grid stability sensitivity, and regulatory criticality. This significantly reduces token overhead while preserving high-value contextual intelligence, enabling scalable deployment across distributed edge environments.

    The system coordinates across four layers: global strategic agents, regional grid agents, site-level optimization agents, and asset-level micro-agents. Each layer operates within bounded authority while exchanging structured state updates, creating distributed intelligence with escalation control and conflict-resolution mechanisms analogous to enterprise governance structures. Multi-agent interaction is modeled as a stochastic cooperative game with weighted global objectives, enabling simultaneous optimization of energy efficiency, carbon reduction, cost management, resilience, and compliance. A policy-bound autonomy framework ensures that every agent action passes through validation gates — regulatory constraint checks, digital-twin simulations, and risk evaluation layers — before execution. This governance-first design differentiates the platform from experimental agent systems by embedding compliance and safety directly into the decision lifecycle.

    Domain knowledge is integrated through a hybrid approach combining pretrained model capabilities, retrieval-augmented access to enterprise documentation, structured ontologies of energy assets and constraints, and reinforcement learning in simulation environments. Agents use defined tool interfaces — telemetry APIs, market data feeds, storage dispatch systems, and reporting engines — to interact with operational technology (OT) and enterprise systems in a controlled, auditable manner. The architecture is event-driven, activating agents only when triggered by system changes, which reduces computational overhead. Federated edge memory allows localized reasoning while sharing compressed embeddings upward, supporting data sovereignty and low-latency control.

    Projected impact at 1,000 TWh scale indicates that even modest coordinated optimization (8–12%) yields substantial reductions in energy consumption and carbon emissions while improving peak-demand management and operational resilience. For enterprises such as Schneider Electric, the platform represents a strategic evolution from intelligent hardware integration to AI-native sustainability orchestration, enabling subscription-based optimization services and defensible intellectual property in policy-aware autonomous control.

    In summary, the Green Orchestrator advances agentic AI by integrating hierarchical multi-agent coordination, memory-efficient long-horizon reasoning, policy-embedded governance, and multi-objective optimization within a scalable enterprise framework. It lays the foundation for a planetary-scale energy nervous system capable of learning, adapting, and autonomously coordinating distributed energy infrastructures responsibly and sustainably.
    HALL 3 - Tech Talks
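
    To make the abstract's "energy-weighted memory optimization" idea concrete, here is a minimal, purely illustrative sketch. The talk does not publish its scoring function; the field names, weights, and pruning policy below are hypothetical assumptions, shown only to suggest how retention might be prioritized by carbon impact, financial risk, grid sensitivity, and regulatory criticality.

    ```python
    from dataclasses import dataclass

    @dataclass
    class MemoryItem:
        # All fields are hypothetical; scores assumed normalized to 0..1.
        summary: str
        carbon_impact: float
        financial_risk: float
        grid_sensitivity: float
        regulatory_criticality: float

    # Hypothetical priority weights; a real system would tune or learn these.
    WEIGHTS = (0.35, 0.25, 0.25, 0.15)

    def retention_score(item: MemoryItem) -> float:
        """Weighted priority used to decide which items stay in context."""
        factors = (item.carbon_impact, item.financial_risk,
                   item.grid_sensitivity, item.regulatory_criticality)
        return sum(w * f for w, f in zip(WEIGHTS, factors))

    def prune(items: list[MemoryItem], budget: int) -> list[MemoryItem]:
        """Keep only the `budget` highest-value items, trimming token overhead."""
        return sorted(items, key=retention_score, reverse=True)[:budget]
    ```

    Under this sketch, low-impact episodic details are dropped first while regulatorily critical or grid-sensitive context survives compression.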

  • In an age that rewards urgency, Cheteshwar Pujara chose endurance. This conversation traces a career built not on flourish, but on resolve — from disciplined beginnings in Rajkot to defining performances in Australia, from absorbing pressure at Brisbane to beginning again in county cricket. At No. 3, he learned to walk in early and leave late. We’ll explore concentration, doubt, reinvention, and the craft of staying when the game — and sometimes the system — moves on. An evening about patience as strength, time as an ally, and the quiet ambition required to hold your ground.
    HALL 1 (Main) - Keynotes / Tech Talks

Check the Schedule from 2019


Extraordinary Speakers

Meet the best Machine Learning Practitioners & Researchers from across the country.

  • Workshop Pass

    Prices to increase from 19th Dec 2025
  • All the benefits of a Standard pass plus…
  • Exclusive Half-day AI Workshops during the conference (Both Day 1 & Day 2)
  • 15000
  • Birds of a Feather Pass

    Prices to increase from 19th Dec 2025
  • All the benefits of a Workshop Pass plus…
  • Exclusive 90 min Roundtable Discussion with Industry Peers during the conference (Both Day 1 & Day 2)
  • 20000
  • VIP Pass

    Prices to increase from 19th Dec 2025
  • All the benefits of a Birds of a Feather Pass plus…
  • Dedicated WhatsApp Support (before, during, and after the show)
  • VIP check-in
  • Platinum Seating - front-row seats reserved for you at all sessions
  • Exclusive Platinum Lounge Access - A lounge for VIP pass holders and Speakers only!
  • Priority Lunch area
  • Post-event synopsis and findings
  • Goodies bag with Exclusive Merchandise
  • 1 Year Digital Subscription of AIM
  • 25000