Fill out this form to access our on-demand session video!

Thank you for your interest in watching the session video. To access the video, please fill out the form below.
Your responses will help us improve our services and deliver better content to you in the future.

Take me to the video
Webinar - September 12, 2023 | 10 am PDT | 1 pm EDT

The AI Townhall

Join our exclusive webinar series to explore the latest trends and breakthroughs in the rapidly evolving field of Large Language Models (LLMs). Hear industry experts discuss best-in-class strategies for building reliable and trustworthy AI solutions.

Join us

Thank you for registering for our webinar! You'll receive more details in your inbox soon.

EPISODE 1

5 things I wish I ‘caught’ as a data scientist working with LLMs

About the session

Despite the impressive capabilities of Large Language Models (LLMs), there is still a lot to overcome when navigating the intricacies of these huge models.

According to the Stanford University Artificial Intelligence Index Report (2022), a recent 280 billion-parameter model exhibited a substantial 29% increase in toxicity levels compared to a 117 million-parameter model from 2018. From hallucinations, bias, and drift to ethical considerations and model underperformance, AI teams need better strategies and more efficient workflows to build reliable LLMs.

Join a session with Censius experts where we dive deep into the hidden pitfalls of running LLMs in production, and how data science teams can prepare for the unexpected in this rapidly evolving field.

Register Now

Can't make it to the LIVE session? Register now to get the on-demand session delivered to your inbox!

Key Takeaways

  1. Uncover key gaps and challenges of Large Language Models in production

  2. Understand the long-term implications of LLM issues such as hallucinations, bias, and drift

  3. Strategize ways to monitor LLMs in real time to build reliable, high-performance models

SPEAKERS