February 1 to 2, 2024 | Bangalore

The Biggest Generative AI Conference in India


Topics featured include Generative Model Frameworks, Autoregressive Predictive Models, Text-to-Image Conversion, Image-to-Image Translation, AI in Procedural Content Generation, Generative Design in Architecture, Algorithmic Composition in Music, Generative AI for Personalized Media, Generative Algorithms for Data Augmentation, AI-Generated Imagery in Virtual Reality, Machine Creativity in Literature, AI-Driven Synthetic Voice Generation, Generative Neural Networks in Fashion, Deep Learning for Synthetic Biology, Generative AI Ethics and Governance, Counterfeit Detection in Generative Media, AI in Interactive Storytelling, Generative AI for Educational Content, AI and the Future of Entertainment, Generative Techniques in Autonomous Systems.

Schedule for 2024

The majority of Conference sessions are curated by the AIM community.

We are in the process of finalizing the sessions for 2024. Expect more than 50 talks at the summit. Please check this page again later.
  • Day 1

    Feb 1, 2024

  • This research addresses challenges in taxpayer risk assessment faced by financial departments worldwide. Outdated algorithms and a lack of standardized metrics hinder accurate identification of risky taxpayers. To combat this, the study introduces an innovative approach utilizing Large Language Models (LLMs). It involves integrating taxpayer data into templates to create comprehensive natural-language profiles, which are then used to fine-tune LLMs for precise risk predictions. Comparative evaluations show superior performance over traditional methods, revealing nuanced insights often missed by older algorithms. This approach not only enhances accuracy but also deepens understanding of taxpayer behavior, aiding informed decision-making. Embracing LLMs promises improved fiscal governance amid evolving financial landscapes, highlighting the necessity of modernizing taxpayer risk assessment methods in governmental financial departments.
    HALL 3 - Paper Presentations
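The template-based profiling step described above can be sketched in a few lines; the field names and the template wording below are hypothetical stand-ins, not the paper's actual schema.

```python
# Sketch: rendering a structured taxpayer record as a natural-language
# profile of the kind used to fine-tune an LLM for risk prediction.
# Field names and wording are illustrative only.

def taxpayer_profile(record: dict) -> str:
    """Render a taxpayer record as a short natural-language profile."""
    return (
        f"Taxpayer {record['id']} operates in the {record['sector']} sector, "
        f"declared an annual turnover of {record['turnover']:,}, and had "
        f"{record['late_filings']} late filings in the past three years."
    )

example = {"id": "TX-1042", "sector": "retail",
           "turnover": 1_250_000, "late_filings": 2}
profile = taxpayer_profile(example)
```

In the approach described, many such profiles would then be paired with risk labels to build the fine-tuning corpus.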

  • For the last 70 years, enterprise business applications have followed a predictable and often infuriating pattern of menu-driven user interfaces and hardcoded logic. With the advent of large language models, we now have the ability to completely rethink how these applications are built, how they work, and how we interact with them. My talk will cover this topic with some examples and a discussion of the architecture and terminology around this topic.
    HALL 2 - Tech Talks / Workshops

  • In today's rapidly evolving technological landscape, Generative AI stands at a pivotal crossroads, much like the early days of the internet. While the technology is undeniably powerful, its concrete business applications remain somewhat elusive. This talk delves into the groundbreaking capabilities of Google's cutting-edge Gemini model. We will explore innovation frameworks and strategies for applying these capabilities to revolutionize your business. By leveraging Generative AI, you can forge unparalleled market differentiation, craft novel customer experiences, and unlock a wellspring of untapped opportunities.
    HALL 1 (Main) - Keynotes/Tech Talks

  • Traditionally, retailers employ a strategic integration of digital screens and printed media in hypermarkets to captivate customers, convey brand messaging, and increase sales of their products. During these media campaigns, vast amounts of product transaction data are recorded that require extensive analysis, comparison, and the ability to quickly export specific data so that non-technical media planners can visualize, understand, and plan media campaigns more effectively. This paper introduces an innovative approach to building a chatbot interface for the transformation of natural language into SQL queries by utilizing the large language model NSQL 350M, which can be used to perform select operations on databases to retrieve and analyze specific data. This enables media planners to ask the chatbot any query about their historical campaign data in English; the chatbot translates it into an SQL query which is executed on the database, thereby retrieving the necessary information. The paper emphasizes the process of prompt engineering and fine-tuning the language model to ensure its accuracy is up to the mark and language model hallucination is minimal, and it highlights the potential of the chatbot in several applications for retail media campaigns.
    HALL 3 - Paper Presentations
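The NL-to-SQL-to-results loop described above can be sketched end to end; here the model call is a stub (in the real system it would invoke the fine-tuned NSQL 350M with a schema-plus-question prompt), and the table, columns, and canned query are illustrative.

```python
import sqlite3

# Sketch of the chatbot's retrieval loop: question -> SQL -> execute.
# The generation step is stubbed; a real system would call the
# fine-tuned NSQL 350M model here. Schema and data are illustrative.

SCHEMA = "CREATE TABLE campaigns (name TEXT, store TEXT, sales REAL)"

def generate_sql(question: str) -> str:
    # Placeholder for the LLM call; returns a canned query for the demo.
    return ("SELECT store, SUM(sales) FROM campaigns "
            "GROUP BY store ORDER BY SUM(sales) DESC")

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany("INSERT INTO campaigns VALUES (?, ?, ?)",
                 [("spring", "A", 120.0),
                  ("spring", "B", 340.0),
                  ("summer", "A", 80.0)])

sql = generate_sql("Which store sold the most across campaigns?")
rows = conn.execute(sql).fetchall()
```

Executing the generated query and returning `rows` to the planner is the final step the abstract describes.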

  • In this workshop we will introduce you to the Generative AI offerings in Google Cloud and explore the latest multi-modal offerings with Gemini. We will learn how innovative applications that can process information across text, code, images, and video can be built with Gemini models on Google Cloud. With the multi-modal capabilities of Gemini, we can reimagine many of our existing processes in a whole new and exciting way!
    HALL 2 - Tech Talks / Workshops

  • In this session you will learn how you can leverage Generative AI on Snowflake for your business using your enterprise data and unlock value. Explore how we are empowering organisations globally to move from a POC mindset to an enterprise solutioning mindset with GenAI.
    HALL 1 (Main) - Keynotes/Tech Talks

  • CLIP (Contrastive Language-Image Pre-training) excels in zero-shot image classification across diverse domains, making it an ideal candidate for pre-labelling unlabelled datasets. This paper introduces three pivotal enhancements designed to elevate CLIP-based pre-labeling efficacy without the need for labeled data. First, we introduce prompt refinement using a large language model (GPT-3.5- Turbo) to generate more descriptive prompts, significantly boosting accuracy on various datasets. Second, we address overconfident predictions through confidence calibration, achieving improved results without the need for a separate labeled validation set. Lastly, we leverage the inductive biases of CLIP and DINOv2 through ensembling, demonstrating a substantial boost in zero-shot labeling accuracy. Experimental results across various datasets consistently demonstrate enhanced performance, particularly in handling ambiguous classes. This work not only addresses limitations in CLIP but also provides valuable insights for advancing multimodal models in real-world applications.
    HALL 3 - Paper Presentations
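Two of the enhancements named above, confidence calibration and ensembling, can be illustrated on toy logits; the temperature value, the logits, and the equal-weight average are assumptions, not the paper's tuned settings.

```python
import numpy as np

# Sketch of temperature scaling (to soften overconfident predictions)
# and probability-level ensembling of two models, stand-ins for the
# CLIP and DINOv2 heads described above. Values are illustrative.

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()                # numerical stability
    e = np.exp(z)
    return e / e.sum()

clip_logits = np.array([4.0, 1.0, 0.5])
dino_logits = np.array([2.5, 2.0, 0.1])

# Calibrate each model, then ensemble by averaging class probabilities.
probs = (0.5 * softmax(clip_logits, temperature=2.0)
         + 0.5 * softmax(dino_logits, temperature=2.0))
prediction = int(np.argmax(probs))
```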

  • A peek into how traditional sales is reinventing its approach using custom LLMs, RAG, and Autonomous agents. We'll talk about technical challenges in various domains and the approaches that we're taking to solve them.
    HALL 1 (Main) - Keynotes/Tech Talks

  • The session emphasizes the strategic alignment of competency skill sets with the ever-changing technology landscape. The synopsis underscores the importance of staying updated by learning the latest advancements, including LLM (Large Language Models) and VLM (Vision Language Models) with Generative AI Evolution. This approach aims to cultivate competencies that effectively tackle contemporary challenges, ensuring professionals stay pertinent in the swiftly evolving digital era. As individuals, the key focus is on intelligently navigating dynamic career paths, remaining at the forefront of innovation, and adeptly contributing solutions to the intricate challenges present in today's professional landscape.
    HALL 1 (Main) - Keynotes/Tech Talks

  • AgTech merges agriculture and technology, harnessing AI and ML to revolutionize farming, but scarce comprehensive data impedes its full potential. Our focus starts with a limited image set, crafting a diverse dataset mirroring real-world complexity: weather shifts, soil variations, and lighting dynamics. We employ LoRA to fine-tune a Stable Diffusion model swiftly, and DINOv2 enhances segmentation models by integrating the LoRA-trained data. Synthetic datasets bolster model performance, as is evident in AgTech scenarios. This research introduces LoRA, Stable Diffusion, and DINOv2 to address limited-data challenges. These methods elevate model performance, emphasizing the cost-effectiveness of synthetic datasets. Beyond AgTech, this blueprint for synthetic datasets optimizes ML in diverse domains, paving the way for large vision models.
    HALL 3 - Paper Presentations

  • The talk is a forward-looking dialogue that illuminates how SingleStore, a high-performance scalable SQL database, is at the forefront of this transformative wave, enabling organizations to harness the full potential of Generative AI. We will delve into the complex tapestry of challenges and opportunities that organizations face in today's need-for-speed demands. With the advent of GenAI applications, the need for a robust, versatile, and high-speed database is more evident than ever. SingleStore, with its unparalleled real-time data analytics capabilities and support for diverse data models, stands as a crux of innovation, powering the GenAI future backed by the Global Fortune 500 & Unicorns. Discover how SingleStore isn't just keeping up with the pace of change – it's setting the pace, one real-time insight and AI-driven decision at a time. Key highlights of the talk:
    - Fueling Generative AI: Learn about the critical role of SingleStore in powering generative AI applications – a powerful combination of vector storage with full-text search, hybrid search, and multiple vector indexes that paves the way for groundbreaking AI-driven solutions.
    - Real-Time Data Architecture: Discover how SingleStore's real-time data analytics revolutionize decision-making processes, enabling businesses to instantly act on insights gleaned from vast datasets.
    - Multi-Model Database Prowess: Imagine SQL+JSON. SingleStore's multi-model capabilities embrace relational, key-value, document, geospatial, time-series data, and full-text search, simplifying complex data architectures and significantly reducing operational complexity and costs.
    - Case Studies in Transformation: From real-time fraud detection to powering data-driven innovation in fintech, witness real-world cases and scenarios powered by SingleStore across multiple industries and applications.
    HALL 1 (Main) - Keynotes/Tech Talks

  • The Indian Energy Exchange (IEX) serves as India's energy nucleus, enabling electricity, renewable energy, and certificate trading. Expanding into cross-border electricity trade, it forges a cohesive South Asian Power Market. IEX's success hinges on user-focused technology, simplifying price discovery and procurement. This paper unveils a tool harnessing IEX's pricing data to forecast electricity prices over seven days, refining predictions through historical analysis. Integrated with Generative AI (GenAI), the tool not only forecasts but also generates insightful reports via natural language, empowering informed energy procurement. This initiative aims to enhance decision-making and market transparency, propelling a data-driven, efficient energy market in India and potentially across South Asia.
    HALL 3 - Paper Presentations

  • Personally Identifiable Information (PII) detection is critical due to the increasing exploitation of individual data, particularly in the text analytics domain. With the rise in the application of large language models (LLMs) for Natural Language Processing (NLP) solutions, data security concerns call for effective on-premises solutions and privacy-centric methods. This paper explores the use of LLMs fine-tuned on limited domain-specific datasets for detecting and masking PII, benchmarking this solution against existing NLP methods such as BERT and GPT-3.5. Our approach includes fine-tuning the Vicuna-7B LLM using the Quantized Low-Rank Adaptation (QLoRA) technique, enabling cost-effective fine-tuning and deployment on consumer GPUs. The proposed approach offers several advantages, including improved performance and reliability compared to GPT-3.5, enhanced data security by keeping data within the company's cloud, domain adaptability through model fine-tuning, and on-premises usage benefits such as reduced dependence on proprietary models, freedom from quota limitations, and flexible scaling of model-hosting infrastructure. Overall, this paper presents an efficient and secure solution for domain-specific PII detection tasks using LLMs.
    HALL 3 - Paper Presentations
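For contrast with the fine-tuned LLM approach above, a minimal rule-based PII masker of the kind such systems are benchmarked against can be sketched as follows; the two regex patterns are illustrative and far from exhaustive.

```python
import re

# Minimal rule-based PII masking baseline (emails and phone numbers
# only). This is the sort of simple method an LLM-based detector is
# compared against; patterns here are illustrative, not from the paper.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with its category label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask_pii("Reach Ravi at ravi.k@example.com or +91 98765 43210.")
```

An LLM fine-tuned on domain data is meant to catch the many PII forms (names, addresses, IDs) that such fixed patterns miss.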

  • This talk explores the ethical dimensions of Language Models (LLM), focusing on Responsible AI standards and Impact Assessment. We delve into the unique challenges of GenAI, examining methods for identifying and systematically measuring potential harm. The discussion includes strategies for evaluations, mitigation plans, and operational readiness, emphasising the importance of a proactive approach. The session also addresses AI Content Safety, offering insights into content moderation and ethical considerations. Join us for a concise yet comprehensive exploration of Responsible AI, aiming to pave the way for an ethically sound future in artificial intelligence.
    HALL 2 - Tech Talks / Workshops

  • This paper navigates challenges faced by foundational language models, particularly combating Hallucinations in models like GPT. Detailing a methodical strategy, it regulates GPT's responses by tapping into knowledge solely from PDF, HTML, and Word Doc sources. The approach, integral to a regulatory Query Bot application, prioritizes hallucination-free responses. Leveraging Retriever Augmented Generation, unique prompt engineering, and a scoring mechanism for context passages, it ensures accurate responses despite GPT's innate limitations. This solution significantly reduces turnaround time by 150%, introduces Query Status Tracking, and enhances User Experience, marking a pivotal advancement in controlling misinformation while harnessing the potential of large language models.
    HALL 3 - Paper Presentations

  • We are witnessing unprecedented innovation in AI. How do you adapt your role as a data science professional? What core skills should you take forward, and how do you stay well-versed in new developments?
    HALL 1 (Main) - Keynotes/Tech Talks

  • This research paper innovates traditional market survey methods by integrating advanced Generative Pre-trained Transformers (GPTs) and ensemble Language Models (LLMs). Emphasizing efficient synthetic data generation while upholding stringent privacy and ethical standards, our methodology utilizes these AI techniques for transparent dataset synthesis. Through real-world scenarios and detailed case studies, it demonstrates substantial efficiency gains, acknowledging limitations transparently. Prioritizing user privacy and data security, our approach complies with rigorous regulations, accelerating survey processes and fostering personalized engagement. This pioneering paradigm shift reshapes market survey practices, establishing an industry benchmark for ethical and efficient data synthesis.
    HALL 3 - Paper Presentations

  • The world's volume of unstructured data keeps growing rapidly, and deep learning frameworks such as PyTorch continue to lead the way in performance for computer vision use cases. Join this session to learn how to leverage Snowpark Container Services and NVIDIA's RAPIDS library with PyTorch to quickly experiment and deploy models using GPUs in Snowflake. For this demo, we will use a set of pre-labeled medical images.
    HALL 2 - Tech Talks / Workshops

  • In recent months, the landscape of Generative AI has undergone a transformative shift, challenging the long-held dominance of Big Tech giants. This talk delves into the rapidly innovating world of open-source LLMs, highlighting how they are reshaping the field. The latest advancements in both model development and infrastructure are not only democratizing access to generative AI but also presenting cost-effective alternatives to proprietary systems. Join us in this session, tailored for businesses looking to leverage the power of GenAI, as we unravel the intricacies of building and scaling an in-house Generative AI platform. We will compare the financial implications, performance metrics, and long-term viability of proprietary versus open-source solutions.
    HALL 1 (Main) - Keynotes/Tech Talks

This paper conducts an in-depth investigation of the visual capabilities of OpenAI's advanced language model, GPT-4, focusing on image comprehension and visual content creation. It demonstrates how GPT-4 processes and translates visual information into detailed data and its proficiency in generating diverse visual content. The study further emphasizes the integration of GPT-4's visual abilities with its advanced language processing capabilities, leading to more complex AI applications. It also discusses the impact of GPT-4's capabilities in various sectors and potential ethical concerns. The paper ends with a perspective on the future implications of these AI advancements.
    HALL 3 - Paper Presentations

  • The talk will cover the following aspects:
    a) Semantic Search Applications at a Glance
    b) Application Focus: Semantic Search for SEO
    c) Application Focus: Semantic Search for UX
    d) RAG Applications at a Glance
    e) Application Focus: RAG for Answering Questions
    f) Application Focus: RAG for Verifying Statements
    g) RAG as a Remedy for Hallucinations and Outdated Models
    h) Evaluating Semantic Search and RAG
    i) Q&A Session
    HALL 2 - Tech Talks / Workshops

  • In the year 2023, we saw the rise of a new industrial revolution: Large Language Models (LLMs), bringing a step-function growth in how we process information, provide support to customers, and even build applications. We saw a glimpse of what Artificial General Intelligence (AGI) could be, with autonomous agents working together. What followed the hype of LLMs was not AGI but RAG: Retrieval Augmented Generation. So, what is RAG? If the LLM is the most powerful car engine, RAG is the wheels. You can't reach your destination with the most powerful engine without converting the mechanical energy of the engine into the kinetic energy of the wheels as motion. With the current LLM architecture, the knowledge of the world and the understanding needed to process information are embedded in the network in an intertwined manner, which means the model's view of the world is static and not easy to refresh without training. This is a big limiting factor. Even if we choose to refresh an LLM's knowledge of the world, LLMs are not good at information recall (or retrieval) because they learn a compressed, probabilistic view of the world; this fragmented information storage can't support reliable retrieval. LLMs, being generative models, will always generate based on their existing understanding, which gives rise to the problem of hallucination. Retrieval Augmented Generation (RAG) is a technique of providing LLMs with context relevant to user queries to help the LLM ground its answers and provide citations. But building a RAG application hasn't been easy; it is a continuous cycle of build -> learn -> improve. A lot of this is because of the paradigm shift in how we build applications: our programming language has changed from structured programming languages to the unstructured language of English. This directly impacts how we process data and build execution flows, requiring us to rethink and build new mental models.
In this talk, I will take a first-principles approach to break down the shift we are seeing and how we are building RAG applications, and leave the audience with things to think about while building RAG applications themselves.
    HALL 1 (Main) - Keynotes/Tech Talks
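The engine-and-wheels framing above can be made concrete with a toy retrieval loop: embed chunks, score them against the query, and splice the winner into a prompt. The corpus, the bag-of-words vectorizer, and the prompt wording are illustrative stand-ins for a learned embedding model and an LLM call.

```python
import numpy as np

# Toy RAG pipeline: bag-of-words embedding, cosine-similarity retrieval,
# and prompt assembly. A production system would swap in a learned
# embedder, a vector index, and an actual LLM call on the prompt.

CHUNKS = [
    "The warranty covers battery replacement for two years.",
    "Shipping takes five business days within India.",
    "Returns are accepted within thirty days of delivery.",
]

def embed(text, vocab):
    words = text.lower().replace(".", "").split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def retrieve(query, chunks, k=1):
    vocab = sorted({w for c in chunks + [query]
                    for w in c.lower().replace(".", "").split()})
    q = embed(query, vocab)
    scores = []
    for c in chunks:
        v = embed(c, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(v) or 1.0
        scores.append(q @ v / denom)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long does shipping take?"
context = retrieve(question, CHUNKS)[0]
prompt = f"Answer using only this context:\n{context}\nQuestion: {question}"
```

The grounding step, answering only from `context`, is what lets the LLM cite its sources and resist hallucination.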

  • This research highlights the integration of Deep Reinforcement Learning (DRL) with Deep Deterministic Policy Gradient (DDPG) in Adaptive Cruise Control (ACC), a key part of Advanced Driver Assistance Systems (ADAS). Using a simulation environment featuring a lead car and a Battery Electric Vehicle (BEV), the DRL controller manages the BEV's velocity, addressing multi-objective ACC goals. Driving objectives like tracking accuracy, ride comfort, and safety are translated into reward functions for the DRL. Comparing it with Model Predictive Control (MPC) in highway scenarios, the DRL excels in ride comfort while matching MPC in accuracy and safety. Moreover, the DRL exhibits robustness and efficiency across various driving scenarios, showcasing superior performance and strong adaptability.
    HALL 3 - Paper Presentations
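The reward shaping described above, folding tracking accuracy, ride comfort, and safety into one scalar, can be sketched as below; the weights, the quadratic penalty terms, and the hard safety threshold are illustrative choices, not the paper's actual reward function.

```python
# Sketch of a multi-objective ACC reward for a DRL (e.g. DDPG) agent:
# penalize gap-tracking error, harsh jerk (comfort), and unsafe
# following distances. All weights and terms are illustrative.

def acc_reward(gap_error, jerk, gap, min_safe_gap=10.0,
               w_track=1.0, w_comfort=0.1, w_safety=100.0):
    """Higher is better: trade off tracking, comfort, and safety."""
    reward = -w_track * gap_error ** 2 - w_comfort * jerk ** 2
    if gap < min_safe_gap:        # hard penalty when too close to lead car
        reward -= w_safety
    return reward

safe = acc_reward(gap_error=1.0, jerk=0.5, gap=25.0)
unsafe = acc_reward(gap_error=1.0, jerk=0.5, gap=5.0)
```

Tuning these weights is how the multi-objective ACC goals are traded off against one another during DRL training.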

  • The evolution of Computer Vision has witnessed a transformative shift towards Video Analytics, enabling real-time interpretation of visual data. This evolution empowers systems to analyze and derive meaningful insights from dynamic video streams, enhancing applications in surveillance, object recognition, and automated decision-making. The integration of advanced algorithms and deep learning techniques has propelled Video Analytics to the forefront of cutting-edge visual intelligence technologies.
    HALL 3 - Paper Presentations

  • Exploring the alchemy of transforming raw data into compelling narratives, this talk delves into the art and science of crafting captivating stories from the vast landscape of information.
    HALL 2 - Tech Talks / Workshops

  • This talk will cover the Generative AI landscape from the AI scientist's perspective. The playground of technologies and the range of dev-toys at the disposal of the modern-day AI scientist has exploded in a period of a few months. Generative AI technologies are mostly viewed from the end user's perspective in common parlance, due to their disruptive nature and the promise of productivity. However, the 'behind the scenes' technologies are equally fascinating, as they are being introduced to the world at a rapid pace. This is especially true after two announcements: ChatGPT almost putting the Turing test to shame, and the AI engineering world being put on notice when Microsoft mentioned the usage of tens of thousands of A100s. There have been incessant day-on-day announcements about newer topics like LangChain, RAG, anchoring, GANs, LLMs, VLMs, diffusion, and RLHF. Even an age-old AI topic ('Reasoning') coming to the forefront is fascinating to see. This journey will shed light on the transformative power of Generative AI, uncovering some of the challenges, the breakthroughs, and the promising horizon that lies ahead.
    HALL 1 (Main) - Keynotes/Tech Talks

  • In the dynamic world of investment firms, the investment committee's role in evaluating potential investments is pivotal. The foundation lies in meticulously prepared investment memos by advisors. Traditionally, this demanding process involves extensive due diligence, analyzing company documents, extracting insights, and market research. This research explores Large Language Models (LLMs) as tools to enhance efficiency and accuracy. LLMs streamline data synthesis from varied sources, aiding in generating investment hypotheses, memo drafting, and precise Q&A support. Preliminary results show a twofold efficiency boost in tasks from due diligence to committee reviews and an 85% accuracy rate in responses. This signifies LLMs' potential to transform manual investment decision-making in firms.
    HALL 3 - Paper Presentations

  • Delve into the potential chaos lurking in unmonitored models and pipelines as we explore unforeseen vulnerabilities, data drift, and the silent erosion of model performance. Learn actionable strategies to preemptively safeguard against pitfalls, ensuring the robustness and reliability of your AI systems. Don't let the unseen jeopardize your success—join us for a vital discussion on proactive monitoring and mitigation.
    HALL 2 - Tech Talks / Workshops

  • Increasingly, AI solutions have become an integral part of businesses. This has led to significant investment in AI: private investment in AI was around USD 70 billion in 2021, increased to USD 90+ billion in 2022, and is expected to reach USD 160 billion in 2025, up 70-75%. These investments come with a glaring expectation of appropriate returns. As data scientists we are intrigued by the technological advancements and the pursuit of excellence, and rightly so. However, more often than not we forget, or ignore, that eventually these solutions should provide net returns. This requires a hard look at the solutions that we have built with so much love. It requires us to evaluate the economics of our solutions before we are asked to. I have found that teams either shy away from doing this or do not have the right methodology to evaluate. From a survey conducted in my data science communities, I found that the majority of people do not carry out such an exercise and were also not able to list all the aspects of such an evaluation. In the new world of GenAI, this has become much more relevant. A great data science team is one that has a solid process and culture for this self-scrutiny. This makes our work relevant, long-lasting, and trustworthy.
    HALL 1 (Main) - Keynotes/Tech Talks

  • This session would be a journey exploring how machine learning, a powerful tool, is enhancing the field of neurotech and contributing to advancements in healthcare. It will delve into real-world applications where machine learning assists in analyzing brain scans, potentially leading to earlier diagnoses and improved treatment and surgery options. It will cover recent research and experiments utilizing machine learning models, offering us a glimpse into the potential of this technology. Additionally, it will compare and contrast how machine learning's impact on healthcare differs from its contributions in other areas, highlighting its unique potential to improve human lives. This session promises to be informative and thought-provoking, offering valuable insights into the exciting intersection of AI and neurotechnology.
    HALL 3 - Paper Presentations

  • LangChain4j is a powerful extension for Quarkus, enabling seamless integration of Large Language Model (LLM) capabilities into Quarkus applications. By leveraging LangChain4j, developers can effortlessly incorporate LLM functionality such as chat models, embeddings, and retrieval into Quarkus-based projects.
    HALL 2 - Tech Talks / Workshops

  • In this talk, Sanyam Bhutani will share the cutting edge details of how to fine-tune your LLM effectively using open source tools. We will start from Zero, get a whole-picture overview and then learn the details hands on.
    HALL 1 (Main) - Keynotes/Tech Talks

  • This research delves into lookalike modeling's methodology, seeking congruent traits among novel customers mirroring high-value ones. Targeting expanded market reach, it pinpoints potential customers keen on products or services. Employing a designated subset (seed-set) of high-value customers, machine learning uncovers commonalities. Leveraging job-related attributes, the model identifies similar users from vast datasets via a fast multi-graph approach, boasting efficiency. Employing a distributed architecture ensures scalability, managing 500 recommendations/second with 80 million interactions. Evaluation shows up to 90% precision enhancement for 200 Lookalike candidates in digital marketing. Further, preprocessing optimization mitigates noise issues, enabling broader model applicability across diverse datasets. This study champions collaboration and innovation, promoting an accessible open-source framework.
    HALL 3 - Paper Presentations
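The seed-set idea above, scoring candidates by similarity to high-value customers, can be sketched with a centroid-plus-cosine approach; the numeric feature vectors standing in for job-related attributes are illustrative, and the real system uses a multi-graph method rather than a plain centroid.

```python
import numpy as np

# Sketch of lookalike scoring: rank candidate users by cosine
# similarity to the centroid of a high-value seed set. Feature vectors
# (stand-ins for job-related attributes) are illustrative.

seed_users = np.array([[1.0, 0.9, 0.2],
                       [0.9, 1.0, 0.1]])
candidates = np.array([[0.95, 0.90, 0.15],   # resembles the seeds
                       [0.10, 0.20, 1.00]])  # does not

centroid = seed_users.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(c, centroid) for c in candidates]
best = int(np.argmax(scores))
```

Scaling this to 80 million interactions is where the paper's distributed, multi-graph architecture comes in.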

  • Day 2

    Feb 2, 2024

  • Embark on a captivating journey where the dynamic realm of AI converges with the urgent challenges of planetary sustainability, with a special focus on the United Nations' SDGs. This talk will commence by demystifying the fundamental components of AI—Data, Models, and Compute—laying a foundation accessible to all, setting the stage for an enlightening exploration of AI's evolution & how these components can help us in doing AI optimally. Steering the conversation towards actionable change, this talk will equip attendees with the knowledge needed to embrace AI sustainably. Discover how to leverage AI's potential while minimizing its ecological footprint. Walk away with a profound understanding of AI's journey, its emissions impact, and your pivotal role in shaping an AI-powered world in harmony with our planet.
    HALL 1 (Main) - Keynotes/Tech Talks

  • The session will focus on ways to scale data solutions and AI in big enterprises to drive data-driven decision-making.
    HALL 1 (Main) - Keynotes/Tech Talks

  • This paper explores the efficacy of Variational Auto Encoders (VAE) in generating specific compound shapes using point cloud data. While existing generative methods focus on shaping an entire object, engineering demands often involve multiple interconnected parts (e.g., valves, vehicle bodies). SDM-NET used VAE but struggled with complex inter-part relations. Our goal is an end-to-end generative model respecting part connections, while ensuring diverse shape creation. We developed a PointNET-based model for unordered point cloud data, enhancing it with self-attention, structural embeddings, and a balanced reconstruction-generative approach. Although our results use VAE, our proposed enhancements apply broadly to models like GAN and diffusion models. Due to confidentiality, we showcase our work using synthetic data based on simple geometric shapes.
    HALL 3 - Paper Presentations

  • Generative AI adoption is growing exponentially in the industry, which makes on-device computing of GenAI an important technology to cater to these needs. In this session I plan to cover the following:
    - Why GenAI on the edge is important
    - GenAI use cases on the edge: auto, mobile, IoT
    - Unique technical challenges in running large models on edge devices
    - Key techniques used to resolve these problems (covering briefly power, performance, quantization, distillation, etc.)
    - Hybrid AI
    - Qualcomm's AI Stack for GenAI on edge devices
    HALL 2 - Tech Talks / Workshops

  • Working capital plays a vital role in a company's financial management. Working capital helps to streamline operations and improve the company's earnings and profitability by minimizing cost. Dynamic discounting is a part of working capital management that helps the company (buyer) manage cash efficiently. This paper aims to increase the return on cash deployed by 5% and to generate risk-free EBITDA (earnings before interest, taxes, depreciation, and amortization) for a manufacturing organization. This will be achieved by analyzing and identifying the potential suppliers (vendors) who are likely to join the dynamic discounting process and provide discounts on specific instruments or bills of their choice to the organization; alternatively, the entire payment of their invoices is made by a banker, who then gives the manufacturing organization an extended credit period so that working capital can be improved.
    HALL 3 - Paper Presentations
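The economics of early-payment discounting can be shown with a small worked example; all figures below (invoice value, discount rate, days paid early) are illustrative, not from the paper.

```python
# Worked example of dynamic-discounting returns: the buyer pays an
# invoice early in exchange for a discount, and the saving is a
# risk-free gain on the cash deployed. All numbers are illustrative.

invoice = 1_000_000          # invoice value
discount_rate = 0.015        # 1.5% discount for paying early
days_early = 45              # payment accelerated by 45 days

cash_deployed = invoice * (1 - discount_rate)   # amount paid now
saving = invoice - cash_deployed                # risk-free gain
annualized_return = (saving / cash_deployed) * (365 / days_early)
```

Here a 1.5% discount taken 45 days early annualizes to roughly a 12% return on the cash deployed, which is the kind of figure the supplier-selection model is meant to maximize.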

  • This talk will delve into how AT&T is leveraging the power of its new generative Artificial Intelligence (AI) technology to enhance the effectiveness, creativity, and innovation of its employees. We'll explore the transformational journey of AT&T over recent years, where AI has been progressively integrated across the company to deliver superior value and service to our customers, streamline operations, and unlock fresh revenue streams. We'll highlight the ways this technology empowers our employees not only to improve their productivity but also to stimulate their creativity, leading to the generation of novel ideas and solutions.
    HALL 1 (Main) - Keynotes/Tech Talks

  • RAG typically uses external data sources only based on text. With Gemini Pro Vision and multimodal embeddings, you can now perform multimodal RAG on text and images. In this session, you will gain hands-on experience by performing multimodal RAG on a financial document that contains both text and images (charts, diagrams).
    HALL 2 - Tech Talks / Workshops
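
The retrieval step the session describes can be sketched in a provider-agnostic way. The vectors and the `embed` lookup below are toy stand-ins for a real multimodal embedding model (the session itself uses Gemini Pro Vision); image chunks are represented by their captions.

```python
import math

# Hypothetical embeddings: a real pipeline would call a multimodal
# embedding model; here fixed toy vectors stand in for it.
TOY_VECTORS = {
    "Q3 revenue grew 12% year over year.": [0.9, 0.1, 0.0],
    "[chart] Revenue by quarter, 2022-2023": [0.8, 0.2, 0.1],
    "[diagram] Org structure of the finance team": [0.1, 0.9, 0.2],
}

def embed(chunk):
    return TOY_VECTORS.get(chunk, [0.5, 0.5, 0.5])

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, k=2):
    """Return the k chunks (text or image captions) nearest the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, embed(c)),
                    reverse=True)
    return ranked[:k]

# A revenue-oriented query vector pulls in the text chunk AND the chart,
# which is the point of multimodal RAG: both modalities share one space.
query_vec = [0.85, 0.15, 0.05]
context = retrieve(query_vec, list(TOY_VECTORS))
prompt = "Answer using this context:\n" + "\n".join(context)
```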

  • Homeowners increasingly wish to reduce their home energy usage for cost and sustainability reasons. Often, they wish to achieve this by changing how they use appliances. However, to date homeowners generally lack detailed feedback on electricity usage to understand or track the effect of their changes; a monthly utility bill is often the only feedback they receive. To drive more energy savings in homes, it is necessary to provide homeowners with detailed feedback on their appliance usage. This paper presents a low-cost approach for giving homeowners detailed awareness of their energy usage. The approach extracts the individual energy usage of an appliance from the whole-home energy at regular intervals, a process known as load disaggregation. This paper focuses on applying a deep learning algorithm to whole-home energy data to disaggregate the electric vehicle load as an example; the method can be extended to other loads as well. An open-source toolkit, the Non-Intrusive Load Monitoring ToolKit, is used to review and empirically compare various deep learning algorithms for electric vehicle load disaggregation. The algorithms are evaluated in terms of execution time and performance across different experiments.
    HALL 3 - Paper Presentations
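
The idea of load disaggregation can be illustrated with a naive baseline (the paper itself uses deep learning via NILMTK, not this rule): an EV charger draws a large, roughly constant load, so step changes near its rated power in the whole-home series mark charging on/off events.

```python
def disaggregate_ev(readings, ev_kw=7.0, tol=1.0):
    """Naive EV disaggregation: attribute the charger's rated draw to
    the EV between a large upward and a large downward step.

    readings: whole-home power in kW at regular intervals.
    Returns the estimated EV-only series (same length).
    """
    ev = [0.0] * len(readings)
    charging = False
    for i in range(1, len(readings)):
        delta = readings[i] - readings[i - 1]
        if not charging and delta >= ev_kw - tol:
            charging = True          # large upward step: charger on
        elif charging and delta <= -(ev_kw - tol):
            charging = False         # large downward step: charger off
        if charging:
            ev[i] = ev_kw
    return ev

# Household baseline ~1 kW; a 7 kW EV charger switches on, then off.
series = [1.0, 1.2, 8.1, 8.3, 8.2, 1.1, 1.0]
ev_load = disaggregate_ev(series)
```

Real deep learning disaggregators learn these signatures from data instead of relying on a fixed threshold, which is what lets them separate overlapping appliances.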

  • RAG with Knowledge Graph harnesses the power of knowledge graphs in conjunction with Large Language Models (LLMs) to give search engines a more comprehensive contextual understanding. It helps users obtain smarter and more precise search results at a lower cost.
    HALL 1 (Main) - Keynotes/Tech Talks
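
The retrieval half of knowledge-graph RAG can be sketched as a short graph walk whose triples become the LLM's context. The tiny graph and prompt format below are illustrative assumptions, not the session's implementation.

```python
# Toy knowledge graph as subject -> [(predicate, object)] adjacency.
GRAPH = {
    "MLDS 2024": [("held_in", "Bangalore"), ("organized_by", "AIM")],
    "Bangalore": [("located_in", "India")],
}

def retrieve_facts(entity, hops=2):
    """Collect all triples within `hops` edges of the query entity."""
    facts, frontier = [], [entity]
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for pred, obj in GRAPH.get(node, []):
                facts.append((node, pred, obj))
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

def build_prompt(question, entity):
    """Serialize retrieved triples as sentences an LLM can ground on."""
    facts = retrieve_facts(entity)
    context = "\n".join(f"{s} {p} {o}." for s, p, o in facts)
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Where is MLDS 2024 held?", "MLDS 2024")
```

Multi-hop facts (here, Bangalore → India) are what plain text-chunk retrieval tends to miss and graph traversal recovers cheaply.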

  • The session will cover business expectations and the challenges of building LLM-based applications. It will explore the best strategies for deciding on an architecture design, examine the differences between open-source and closed-source models, and walk through the challenges of designing an end-to-end pipeline along with mitigation approaches for each. It will also shed light on the right testing framework for evaluating an LLM application and the best metrics for its monitoring and maintenance.
    HALL 1 (Main) - Keynotes/Tech Talks

  • This research pioneers a novel method using open-source large language models to streamline the translation of natural language into SQL queries. Through Retrieval-Augmented Generation, it accurately identifies the specific datasets and variables involved. It comprehends user queries, refines prompts, and aggregates data for precise SQL query generation, navigating complex dataset structures and diverse queries. Its adaptability handles varying dataset periods and fosters continued conversations, catering to specific analytical needs. Rigorous validation confirms its accuracy, offering a scalable, adaptable tool for managing complex databases. This framework marks a pivotal advancement, facilitating intuitive and efficient data interactions.
    HALL 3 - Paper Presentations
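
A retrieve-then-generate NL-to-SQL pipeline of the kind the abstract describes can be sketched as follows. The keyword-overlap retriever stands in for a vector store, and `generate_sql` is a hypothetical stub where a real system would prompt an LLM with the retrieved DDL.

```python
import sqlite3

# Schema catalog a vector index would normally hold.
SCHEMAS = {
    "sales": "CREATE TABLE sales (region TEXT, amount REAL)",
    "employees": "CREATE TABLE employees (name TEXT, dept TEXT)",
}

def retrieve_schema(question):
    """Pick the table whose DDL shares the most tokens with the question."""
    words = set(question.lower().replace("?", " ").split())
    def overlap(item):
        _, ddl = item
        tokens = set(ddl.lower().replace("(", " ").replace(")", " ")
                     .replace(",", " ").split())
        return len(words & tokens)
    return max(SCHEMAS.items(), key=overlap)[0]

def generate_sql(question, table):
    # Hypothetical: a real system prompts an LLM with question + DDL;
    # the output is stubbed here for the sales question.
    return f"SELECT region, SUM(amount) FROM {table} GROUP BY region"

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMAS["sales"])
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("south", 10.0), ("south", 5.0), ("north", 7.0)])
question = "What is the total sales amount by region?"
table = retrieve_schema(question)
totals = dict(conn.execute(generate_sql(question, table)))
```

Grounding generation in only the retrieved schema is what keeps the prompt small and the query tied to columns that actually exist.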

  • The emergence of Large Language Models (LLMs) has revolutionized Natural Language Processing, profoundly impacting Machine Learning. This paper explores LLMs' pivotal role in Programmatic Advertising, emphasizing structured numerical data handling. While pre-trained LLMs face challenges like hallucination and high training costs, we focus on Retrieval Augmented Generation. Leveraging LangChain, we refine tasks involving statistical computations and document retrieval, integrating context and memory via intermediate steps. Custom prompts train LLMs for ad-campaign tasks, forming task-specific experts connected via a router chain. Deploying this model through Flask APIs on cloud infrastructure saves 60-70% of ad managers' time, enhancing ROI and enabling efficient campaign management.
    HALL 3 - Paper Presentations
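
The router-chain pattern the abstract mentions (task-specific experts connected by a router) can be sketched without LangChain. The keyword routing and the two experts below are illustrative placeholders; in practice each expert wraps its own custom prompt and LLM call.

```python
# Task-specific "experts": in the real system each would be a chain
# with its own prompt, context, and memory.
def budget_expert(query):
    return "budget-analysis: " + query

def creative_expert(query):
    return "ad-copy: " + query

ROUTES = {
    ("spend", "budget", "roi", "cpm"): budget_expert,
    ("headline", "copy", "creative"): creative_expert,
}

def route(query):
    """Dispatch a query to the first expert whose keywords match."""
    q = query.lower()
    for keywords, expert in ROUTES.items():
        if any(k in q for k in keywords):
            return expert(query)
    return "default: " + query

answer = route("How should I split the budget across campaigns?")
```

A production router would itself be an LLM classifying the query, but the control flow (classify, dispatch, answer) is the same.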

  • This talk will cover the practical lessons learned from the speaker's experience in developing large-scale ML systems for search and recommendations. Both search and recommendations are the most impactful and yet the most complex applications of ML in the industry. Interestingly, most of these challenges do not arise from the algorithmic complexity but from the constraints related to business, technology and people. The talk will focus on a few key challenges and potential ways to overcome or avoid those.
    HALL 2 - Tech Talks / Workshops

  • This paper introduces "NarcGuideBot," an AI-driven solution revolutionizing narcotics enforcement onboarding for new investigators. Meticulously designed, it navigates rookies through intricate drug investigations, FBI procedures, and NCB processes. Beyond comprehensive guidance, the bot employs advanced machine learning for image-based queries, enhancing its capabilities. Apart from offering detailed responses, "NarcGuideBot" expedites form-related tasks by simplifying form access, providing accurate filling instructions, and ensuring legal compliance through error-detection systems. Emphasizing technology's pivotal role, this study showcases the bot as a vital companion for inexperienced officers in complex drug enforcement. It presents an innovative, tech-infused onboarding method, heralding a new era of precision and direction in the challenging landscape of narcotics enforcement.
    HALL 3 - Paper Presentations

  • Join Viswanathan Anand in a captivating session as he explores the intersection of technology, particularly Artificial Intelligence, and the timeless game of chess. Anand, a legendary chess grandmaster, delves into how AI has penetrated the world of chess, revolutionizing strategies, improving gameplay, and the potential challenges it poses. Gain unique insights into the delicate balance between technological innovation and preserving the essence of this ancient game.
    HALL 1 (Main) - Keynotes/Tech Talks

  • "The Future of Conversational AI" explores the evolution of chatbots and virtual assistants driven by advancements in NLP. This talk highlights how these systems are becoming increasingly sophisticated, offering more human-like interactions. It delves into the implications for customer service, mental health support, and personal productivity. The presentation will also discuss emerging trends, the integration of multimodal AI, and the ethical considerations in developing AI that closely mimics human conversation, shaping the future of human-AI interaction.
    HALL 2 - Tech Talks / Workshops

  • Deep learning systems excel at pattern recognition and learning representations, but they struggle with tasks that require reasoning, planning, and algorithmic-like data manipulation. Currently, Generative AI is witnessing a pivotal shift where Data scientists are moving beyond the traditional LLM prompt-response paradigm and text-based interaction to AI systems that have much more agency. These agents employ thinking models from cognitive science to solve tasks involving long-term planning and execution. Generative agents can access real-time data and instructions from a broad range of sources, integrate various tools, have memory, derive execution plans, reflect, and orchestrate these plans. They can collaborate with other generative agents or humans and directly influence the world to solve complex tasks such as simulating human behavior, planning projects, composing music, and automating workflows. There has been substantial progress in understanding and replicating human-like problem-solving abilities in AI, with applications spread across various industries. While Generative AI is already succeeding in applications and products in the industry such as dialog systems, machine learning system augmentation, NLP tasks, and content generation - organizations often struggle to identify real-world use-cases and locate the right talent to integrate the latest tools and solve complex problems. In this talk, we will cover the current research and architecture for developing generative AI agents. We will also showcase code applications and use cases where generative AI can automate or augment workflows with human experts, and discuss how developers and data scientists can get started and leverage this paradigm in their day-to-day work.
    HALL 1 (Main) - Keynotes/Tech Talks
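
The agent loop described above (plan, act with tools, remember, reflect) can be sketched minimally. The planner and both tools here are stand-ins for LLM calls and real integrations; only the control flow is the point.

```python
def plan(goal):
    # A real agent would ask an LLM to decompose the goal into steps.
    return [("search", goal), ("summarize", goal)]

# Tool registry: real agents plug in retrieval, code execution, APIs.
TOOLS = {
    "search": lambda arg: f"3 documents found for '{arg}'",
    "summarize": lambda arg: f"summary of findings on '{arg}'",
}

def run_agent(goal):
    memory = []
    for tool_name, arg in plan(goal):
        observation = TOOLS[tool_name](arg)      # act
        memory.append((tool_name, observation))  # remember
    # Reflect: check whether the plan produced a final answer before
    # returning; otherwise a real agent would replan and loop again.
    done = any(name == "summarize" for name, _ in memory)
    return memory[-1][1] if done else "needs another planning round"

result = run_agent("market trends in generative AI")
```

The agency comes from closing this loop: observations feed back into memory and replanning, rather than ending at a single prompt-response exchange.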

  • Traditionally, decision-making in the oil and natural gas industry has heavily relied on domain experts. For example, the interpretation of seismic surveys demanded exhaustive analysis by experts to identify faults and estimate hydrocarbon reserves. Similarly, historical well documents have been invaluable for making critical decisions about future wells and understanding the factors leading to Non-Productive Time (NPT). Yet, the manual process of referencing these documents and extracting actionable insights is inherently time-consuming, laborious, and prone to human errors. In the era of recent advances in artificial intelligence, a new horizon has emerged, promising to accelerate decision-making in areas like seismic interpretation, information extraction from documents, etc. To unravel the transformative shift, the talk delves into how artificial intelligence is reshaping the decision-making landscape in the oil and gas sector. By automating complex analyses, AI offers a more efficient, accurate, and timely approach, paving the way for enhanced productivity and strategic decision-making in this dynamic industry.
    HALL 3 - Paper Presentations

  • The talk will focus on the following aspects: 1. The importance and relevance of Large Language Models (LLMs) in today's artificial intelligence landscape. 2. Optimizing the performance of LLMs: techniques such as prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG) will be discussed. 3. Challenges and ethical concerns associated with using LLMs in real-world applications, along with mitigations and methods to counter these issues. 4. Tackling the issue of trust in language models, with a focus on developing models that are reliable and less prone to hallucinations; we will explore strategies and techniques to mitigate hallucinations. 5. The impact of LLMs on enhancing productivity, customer satisfaction, and decision-making processes, along with the limitations and potential areas for future improvement in applying LLMs across various domains.
    HALL 2 - Tech Talks / Workshops

  • The session will delve into the fascinating world of ChatGPT and Large Language Models (LLMs), inspired by Inna Logunova's insightful article from Serokell's blog. The session starts with understanding what LLMs are and their remarkable capability to analyze and interpret extensive text data. The speaker will guide you through the evolution of the GPT series, from GPT-1 to the sophisticated GPT-3, highlighting the transformative role of transformer architecture and self-attention mechanisms in processing language. The core of the presentation focuses on ChatGPT, which is an advancement from InstructGPT. The session will discuss how it integrates human feedback into its training process, employing techniques like Supervised Fine Tuning, a Reward Model, and Reinforcement Learning using Proximal Policy Optimization. We will also address the challenges that ChatGPT faces, such as generating misinformation and ethical issues, and how its performance is evaluated based on accuracy, utility, and safety. To wrap up, the speaker will share insights into the future of ChatGPT and similar models. The session will also explore the ongoing efforts to overcome technical and ethical challenges, fine-tuning for specific tasks, and the necessity of managing computational resources effectively.
    HALL 3 - Paper Presentations

  • The talk will focus on how Dyota AI is delivering AI-based services to the Indian state government. It will also cover the process of integrating such services, the challenges involved, and the core part of AI integration with 2D and 4D radar.
    HALL 1 (Main) - Keynotes/Tech Talks

  • This project aims to optimize email marketing strategies by implementing machine learning (ML) techniques for advanced market segmentation. The primary objective is to enhance the precision and relevance of email campaigns, thereby improving customer engagement and conversion rates. By leveraging machine learning for market segmentation in email marketing, this project aims to revolutionize the way businesses tailor their campaigns, ultimately enhancing customer satisfaction and driving higher conversion rates.
    HALL 1 (Main) - Keynotes/Tech Talks
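
ML-based market segmentation of the kind this project describes often starts with clustering. The tiny k-means below, over hypothetical (opens per month, purchases per month) features, is a minimal sketch; a production pipeline would use a library and far richer features.

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

def kmeans(points, k=2, iters=10):
    """Tiny k-means with deterministic initialization for the sketch."""
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical customers: engaged buyers vs. dormant subscribers.
customers = [(20, 4), (22, 5), (18, 3), (2, 0), (1, 0), (3, 1)]
centroids, clusters = kmeans(customers)
```

Each resulting cluster then gets its own email cadence and messaging, which is the segmentation payoff the abstract is after.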


Agenda for MLDS 2024

Generative AI

Large Language Models

Prompt Engineering

RAG Models

Creative AI

Enterprise General AI

LLM Architecture

Knowledge-Intensive Tasks

LLM Scalability

Multimodal LLM Integration

Language Understanding

Conversational AI

Vector Quantization

GANs

Transfer Learning

Code Generation

LLM Fine-tuning

Semantic Search

last year's agenda

What to expect

3 Tracks over 2 days –

  • Keynotes/Tech Talks (Hall 1)
  • Tech Talks/Workshops (Hall 2)
  • Paper Presentations (Hall 3)

Besides – Mentoring sessions, Hackathon, Awards, Exhibition, Live Coding, Competitions & a lot more

Grab your ticket for a unique experience of inspiration, meetings, and networking in the AI industry.

Book your tickets at the earliest. We have a hard stop at 1000 passes.

Note: Ticket pricing may change at any time.

  • Early Bird Passes

    Expired
  • Access to all in-person tracks & workshops
  • Access the recorded sessions later
  • Access to in-person networking with attendees & speakers
  • Conference Lunch for 2 days
  • Group Discount Available
  • Regular Passes

    Expired
  • Access to all in-person tracks & workshops
  • Access the recorded sessions later
  • Access to in-person networking with attendees & speakers
  • Conference Lunch for 2 days
  • Group Discount available

Our Last Year's Sponsors

To know more details, write to info@analyticsindiamag.com

40 under 40 data scientists awards

40Under40 recognizes the leading data scientists in India who have successfully transformed data into meaningful insights.

2024 nominations are OPEN.

The Biggest Generative AI Conference in India

call for papers

MLDS will include additional programs for technical paper presentations.

The submissions for 2024 are open.

It’s been a bit late to post this, but I have to say what an event it really was (Machine Learning Developers Summit ’23). After COVID, this was the first time the event happened without a virtual meeting, and the interaction among data scientists and machine learning engineers/enthusiasts was amazing. Thanks to AIM!

Het Patel

IBM

Attended the brilliant and insightful #MLDS2023. The sessions, talks, presentations and workshops were engaging and knowledgeable, a very enriching experience.

Arijit Gayen

iMerit

Attending the Machine Learning Summit 2023 in Bangalore was an incredible opportunity for me to deepen my understanding of the latest advancements and trends in the field. The keynote speakers were inspiring and provided valuable insights, and I had the chance to network with many other professionals and experts in the industry. I am excited to apply what I have learned to my work and continue to stay ahead of the curve in the field of machine learning.

KIRTHIK A

KGiSL

Michelin is glad to be a part of the Machine Learning Developers Summit (MLDS) 2023, which concluded last week in Bangalore, India. The summit comprised numerous keynote sessions by industry experts.

Michelin

“I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better”

~Sam Altman