The session will focus on ways to scale data solutions and AI in big enterprises to drive data-driven decision-making.
In recent months, the landscape of Generative AI has undergone a transformative shift, challenging the long-held dominance of Big Tech giants. This talk delves into the rapidly innovating world of open-source LLMs, highlighting how they are reshaping the field. The latest advancements in both model development and infrastructure are not only democratizing access to generative AI but also presenting cost-effective alternatives to proprietary systems.
Join us in this session, tailored for businesses looking to leverage the power of GenAI, as we unravel the intricacies of building and scaling an in-house Generative AI platform. We will compare the financial implications, performance metrics, and long-term viability of proprietary versus open-source solutions.
RAG with Knowledge Graph harnesses the power of knowledge graphs in conjunction with Large Language Models (LLMs) to give search engines a more comprehensive contextual understanding, helping users obtain smarter and more precise search results at a lower cost.
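To make the idea concrete, here is a minimal sketch of knowledge-graph-backed retrieval, not taken from the session itself: a tiny in-memory graph of triples is indexed by entity, the triples mentioning entities from the query are pulled out, and they are packed into a grounded prompt. The `generate_answer` function is a hypothetical stand-in for whatever LLM the reader actually calls.

```python
# Minimal sketch of knowledge-graph-backed RAG, using only the standard library.
# `generate_answer` is a hypothetical stand-in for a real LLM call.

from collections import defaultdict

# Toy knowledge graph stored as (subject, relation, object) triples.
TRIPLES = [
    ("MLDS", "organised_by", "Analytics India Magazine"),
    ("MLDS", "focuses_on", "machine learning developers"),
    ("RAG", "stands_for", "Retrieval Augmented Generation"),
    ("RAG", "grounds", "LLM answers"),
]

def build_index(triples):
    """Index triples by the entities they mention so lookups are cheap."""
    index = defaultdict(list)
    for s, r, o in triples:
        index[s.lower()].append((s, r, o))
        index[o.lower()].append((s, r, o))
    return index

def retrieve_facts(query, index):
    """Return every triple whose subject or object appears in the query."""
    query_lower = query.lower()
    facts = []
    for entity, entity_triples in index.items():
        if entity in query_lower:
            facts.extend(entity_triples)
    return facts

def generate_answer(query, facts):
    """Hypothetical LLM call: here we just show the grounded prompt we would send."""
    context = "\n".join(f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in facts)
    return f"Answer '{query}' using only these facts:\n{context}"

if __name__ == "__main__":
    index = build_index(TRIPLES)
    question = "What does RAG stand for and what is MLDS?"
    print(generate_answer(question, retrieve_facts(question, index)))
```

A production system would replace the toy triple store with a real graph database and the keyword match with entity linking, but the retrieve-then-ground shape stays the same.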
The session emphasizes the strategic alignment of competency skill sets with the ever-changing technology landscape. It underscores the importance of staying current with the latest advancements, including Large Language Models (LLMs) and Vision Language Models (VLMs), as Generative AI evolves. This approach aims to cultivate competencies that effectively tackle contemporary challenges, ensuring professionals remain relevant in the swiftly evolving digital era. For individuals, the key focus is on intelligently navigating dynamic career paths, staying at the forefront of innovation, and adeptly contributing solutions to the intricate challenges of today’s professional landscape.
Embark on a captivating journey where the dynamic realm of AI converges with the urgent challenges of planetary sustainability, with a special focus on the United Nations’ SDGs. This talk will commence by demystifying the fundamental components of AI (Data, Models, and Compute), laying a foundation accessible to all and setting the stage for an enlightening exploration of AI’s evolution and of how these components can help us do AI optimally.
Steering the conversation towards actionable change, this talk will equip attendees with the knowledge needed to embrace AI sustainably. Discover how to leverage AI’s potential while minimizing its ecological footprint. Walk away with a profound understanding of AI’s journey, its emissions impact, and your pivotal role in shaping an AI-powered world in harmony with our planet.
In 2023, we saw the rise of a new industrial revolution: Large Language Models (LLMs), bringing a step-function change in how we process information, provide support to customers, and even build applications. We saw a glimpse of what Artificial General Intelligence (AGI) could be, with autonomous agents working together. What followed the hype of LLMs was not AGI but RAG: Retrieval Augmented Generation.
So, what is RAG?
If the LLM is the most powerful car engine, RAG is the wheels.
No matter how powerful the engine, you can’t reach your destination without converting its mechanical energy into the motion of the wheels.
With the current LLM architecture, the knowledge of the world and the understanding to process the information are embedded in the network in an intertwined manner, which means the model’s view of the world is static and not easy to refresh without training. This is a big limiting factor.
Even if we choose to refresh an LLM’s knowledge of the world, LLMs are not good at information recall (or retrieval) because they learn a compressed, probabilistic view of the world. This fragmented storage of information cannot support reliable retrieval. LLMs, being generative models, will always generate based on their existing understanding, which gives rise to the problem of hallucination.
Retrieval Augmented Generation (RAG) is a technique of providing LLMs with context relevant to user queries, helping the LLM ground its answers and provide citations.
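As a rough sketch of that retrieve-then-generate loop, and not the speaker’s implementation, the example below ranks a handful of documents by simple word overlap in place of a real embedding model and builds a citation-friendly prompt; `call_llm` is a hypothetical placeholder for an actual model call.

```python
# Minimal sketch of the retrieve-then-generate loop described above.
# A real system would use an embedding model and a vector store; here a
# bag-of-words overlap score keeps the example self-contained.
# `call_llm` is a hypothetical placeholder, not a specific provider's API.

DOCUMENTS = [
    ("doc-1", "Retrieval Augmented Generation supplies an LLM with retrieved context."),
    ("doc-2", "Grounded answers can cite the documents they were generated from."),
    ("doc-3", "LLMs trained once have a static view of the world."),
]

def score(query: str, text: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, k: int = 2):
    """Return the k documents with the highest overlap with the query."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(query, d[1]), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; just echoes the grounded prompt."""
    return prompt

def answer(query: str) -> str:
    passages = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    prompt = (
        "Answer the question using only the passages below and cite their ids.\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("Why do LLM answers need retrieved context?"))
```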
But building a RAG application hasn’t been easy; it is a continuous cycle of build -> learn -> improve.
A lot of this is because of the paradigm shift in how we build applications. Our programming language has changed from structured programming languages to the unstructured language of English. This directly impacts how we process data and build execution flows, requiring us to rethink and build new mental models.
In this talk, I will take a first-principles approach to break down the shift we are seeing and how we are building RAG applications, and leave the audience with points to consider while building RAG applications themselves.
This talk will cover the Generative AI landscape from the AI scientist’s perspective. The playground of technologies and the range of dev-toys at the disposal of the modern-day AI scientist has exploded within a period of a few months. Generative AI technologies are mostly viewed from the end user’s perspective in common parlance, owing to their disruptive nature and the promise of productivity. However, the behind-the-scenes technologies are equally fascinating, as they are being introduced to the world at a rapid pace. This is especially true after the two announcements of ChatGPT almost putting the Turing test to shame and the AI engineering world being put on notice when Microsoft mentioned the use of tens of thousands of A100s.
There have been incessant day-on-day announcements about newer topics such as LangChain, RAG, anchoring, GANs, LLMs, VLMs, diffusion, and RLHF. Even an age-old AI topic, reasoning, coming back to the forefront is fascinating to see.
This journey will shed light on the transformative power of Generative AI, uncovering some of the challenges, the breakthroughs, and the promising horizon that lies ahead.
Deep learning systems excel at pattern recognition and learning representations, but they struggle with tasks that require reasoning, planning, and algorithmic-like data manipulation. Currently, Generative AI is witnessing a pivotal shift where Data scientists are moving beyond the traditional LLM prompt-response paradigm and text-based interaction to AI systems that have much more agency. These agents employ thinking models from cognitive science to solve tasks involving long-term planning and execution. Generative agents can access real-time data and instructions from a broad range of sources, integrate various tools, have memory, derive execution plans, reflect, and orchestrate these plans. They can collaborate with other generative agents or humans and directly influence the world to solve complex tasks such as simulating human behavior, planning projects, composing music, and automating workflows. There has been substantial progress in understanding and replicating human-like problem-solving abilities in AI, with applications spread across various industries.
While Generative AI is already succeeding in applications and products in the industry such as dialog systems, machine learning system augmentation, NLP tasks, and content generation – organizations often struggle to identify real-world use-cases and locate the right talent to integrate the latest tools and solve complex problems. In this talk, we will cover the current research and architecture for developing generative AI agents. We will also showcase code applications and use cases where generative AI can automate or augment workflows with human experts, and discuss how developers and data scientists can get started and leverage this paradigm in their day-to-day work.
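As one illustrative sketch of such an agent, under the assumption of a plan-act-reflect loop with a tool registry and working memory (the talk’s own architecture may differ), the example below stubs the planner with a hypothetical `llm_decide` function where a real agent would consult an LLM.

```python
# Rough sketch of a plan -> act -> reflect agent loop.
# Tools live in a simple registry; `llm_decide` is a hypothetical stand-in
# that a real agent would replace with an actual LLM call.

import datetime

def get_time(_: str) -> str:
    """Tool: report the current timestamp."""
    return datetime.datetime.now().isoformat(timespec="seconds")

def calculator(expression: str) -> str:
    """Tool: evaluate a small arithmetic expression, refusing anything else."""
    allowed = set("0123456789+-*/. ()")
    if not set(expression) <= allowed:
        return "refused: unsupported characters"
    return str(eval(expression))

TOOLS = {"get_time": get_time, "calculator": calculator}

def llm_decide(task: str, memory: list) -> tuple:
    """Hypothetical planner: a real agent would ask an LLM which tool to call next."""
    if not memory:
        return ("calculator", "2 * 21")      # first step of the "plan"
    if len(memory) == 1:
        return ("get_time", "")              # second step
    return ("finish", "")                    # reflect and stop

def run_agent(task: str) -> list:
    memory = []                              # the agent's working memory
    while True:
        tool, arg = llm_decide(task, memory)
        if tool == "finish":
            return memory
        observation = TOOLS[tool](arg)
        memory.append((tool, arg, observation))

if __name__ == "__main__":
    for step in run_agent("Work out 2 * 21 and note when you did it"):
        print(step)
```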
Increasingly, AI solutions have become an integral part of businesses. This has led to significant investment in AI: private investment in AI was around USD 70 billion in 2021, increased to USD 90+ billion in 2022, and is expected to reach USD 160 billion in 2025, up 70-75%. These investments come with a glaring expectation of appropriate returns. As data scientists we are intrigued by the technological advancements and the pursuit of excellence, and rightly so. However, more often than not we forget, or ignore, that eventually these solutions should provide net returns. This requires a hard look at the solutions we have built with so much love. It requires us to evaluate the economics of our solutions before we are asked to. I have found that teams either shy away from doing this or do not have the right methodology to evaluate it. From a survey conducted in my data science communities, I found that the majority of people do not carry out such an exercise and were also not able to list all the aspects of such an evaluation. In the new world of GenAI, this has become much more relevant. A great data science team is one that has a solid process and culture for this self-scrutiny. This makes our work relevant, long-lasting, and trustworthy.
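As a hypothetical illustration of the kind of back-of-the-envelope evaluation the talk argues for (the figures below are placeholders, not numbers from the talk), a net-return check can be as simple as comparing build and run costs against the value returned over the solution’s expected lifetime.

```python
# Back-of-the-envelope sketch of the self-scrutiny described above:
# compare what a solution costs to build and run against the value it returns.
# All figures are hypothetical placeholders, not numbers from the talk.

def net_return(build_cost: float, annual_run_cost: float,
               annual_benefit: float, years: int) -> float:
    """Cumulative net return of a deployed solution over its expected lifetime."""
    return years * (annual_benefit - annual_run_cost) - build_cost

if __name__ == "__main__":
    # Hypothetical example: a model that costs 200k to build, 50k/year to run,
    # and saves 120k/year, evaluated over a three-year horizon.
    value = net_return(build_cost=200_000, annual_run_cost=50_000,
                       annual_benefit=120_000, years=3)
    print(f"Net return over 3 years: {value:,.0f}")  # 3*70k - 200k = 10k
```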
The talk will focus on how Dyota AI is delivering AI-based services to the Indian state government. It will also cover the process of integrating such services, the challenges involved, and the core role of AI integration with 2D and 4D radar.