Episode 01 | On-demand Webinar

5 things I wish I ‘caught’ as a data scientist working with LLMs

Devanshi Vyas
Co-founder, Censius
Gatha Varma
Senior Data Scientist, Censius
Watch the session now

Key Takeaways

  1. Uncover key gaps and challenges of Large Language Models in production

  2. Understand the long-term implications of LLM issues such as hallucinations, bias, and drift

  3. Strategize ways to monitor LLMs in real time to build reliable, high-performance models

Abstract

For all the impressive capabilities of Large Language Models (LLMs), there is still a lot to overcome when navigating the intricacies of these huge models.

According to the Stanford University Artificial Intelligence Index Report (2022), a recent 280 billion-parameter model exhibited a 29% increase in toxicity compared to a 117 million-parameter model from 2018. From hallucinations, biases, and drift to ethical considerations and model underperformance, AI teams need to strategize better, foster more efficient workflows, and build reliable LLMs.

Join a session with Censius experts where we dive deep into the hidden pitfalls of LLMs in production, and how data science teams can prepare for the unexpected in the rapidly evolving field of building LLMs.
