The AI Townhall
EPISODE - 1
5 things I wish I ‘caught’ as a data scientist working with LLMs
About the session
For all the impressive capabilities of Large Language Models (LLMs), there is still a lot to overcome when navigating the intricacies of these huge models.
According to the Stanford University Artificial Intelligence Index Report (2022), a recent 280 billion-parameter model exhibited a 29% increase in toxicity compared to a 117 million-parameter model from 2018. From hallucinations, bias, and drift to ethical considerations and model underperformance, AI teams need to strategize better, foster more efficient workflows, and build reliable LLMs.
Join a session with Censius experts as we take a deep dive into the hidden pitfalls encountered with LLMs in production, and how data science teams can prepare for the unexpected in this rapidly evolving field.
Can't make it to the LIVE session? Register now to get the on-demand session delivered to your inbox!
Key Takeaways
1. Uncover key gaps and challenges of Large Language Models in production
2. Understand the long-term implications of LLM issues such as hallucinations, bias, and drift
3. Strategize ways to monitor LLMs in real time to build reliable, high-performance models
EPISODE - 2
David vs. Goliath: The Impact of Large Language Models on Small-Scale NLP Models
About the session
Large Language Models (LLMs) have caught the fancy of one and all. What does this say about the fate of smaller-scale models? When it comes to fine-tuning and managing the growing scale of data, who is going to win this battle? While LLMs offer an unparalleled level of power and sophistication, fine-tuning them comes at a significant cost, and as the amount of data they require continues to grow, so do the challenges of managing and cleaning it all.
With the boom of generative AI tools, businesses now face an increased risk of model failures. According to McKinsey, 55% of technology leaders have experienced AI incidents caused by biased or incorrect outputs, resulting in financial losses, measurable loss of brand value, and customer attrition.
Join a session with industry AI experts to learn the principles that set LLMs apart from their small-scale cousins, discover best practices for keeping model performance in check, and build reliable, compliant, risk-free AI solutions by leveraging intelligent monitoring tools.
Can't make it to the LIVE session? Register now to get the on-demand session delivered to your inbox!
Key Takeaways
1. How the generative AI landscape is evolving and growing at a rapid pace
2. The scope of LLMs compared to small-scale models
3. Strategies to scale model performance with AI Observability