We have always been a group of people who wanted to solve problems that could positively impact society. In our early days, that challenge was bringing privacy-preserving machine learning (federated learning) to healthcare professionals and hospitals so they could build AI that worked for everyone, not just a few.
We used federated learning to increase access to data, especially data from underrepresented groups, by creating a consortium of institutions.
We spent six months talking to leading professors in academia to understand the healthcare industry's challenges and identify the best way to tackle them.
At the end of those six months, we came away with two humble learnings:
- After talking with CIOs, professors, and hospitals, we found that everyone blamed regulation for the lack of innovation. We realized that regulatory and operational challenges were too great a barrier for us to solve our original challenge efficiently.
- The takeaways from these initial discussions introduced us to a much bigger problem: model monitoring and staleness in production.
We knew we couldn’t proceed with our initial idea because of the regulations. But we wanted to make sure the challenge of model monitoring was truly worth solving.
Discovery of a Bigger Challenge
In the days after pivoting away from the federated learning project, we spoke with many companies that were building AI models or had already deployed models into production.
At that time, the challenge of model monitoring was primarily felt by tech-rich companies that had the people, resources, and data. The industry had no model monitoring solution that could help them address drift, data quality, and model performance management, and the open-source options were not robust either.
This led the more prominent tech companies to build their monitoring solutions in-house. But what about the other 99% of companies still struggling to deploy models in production?
We found that post-deployment monitoring (understanding a model after it has been designed, built, and deployed) was an area where we could add a lot of value, given how little ready-made tooling existed to support teams. Almost every team we talked to had it on their roadmap but didn’t have the time or resources to build monitoring tooling themselves, which made for a great product opportunity.
The Inception of Censius
Soon after we had validated the need for model monitoring, we gathered a team that could solve this problem. Our initial days involved a lot of back and forth with companies that needed model monitoring.
We wanted to ensure the product fit their requirements well. Every feature we added was vetted by a group of data scientists and ML engineers who validated the need for it.
While building our monitoring solution, we also discovered the challenge of AI explainability. We soon realized that monitoring alone would not solve a model's post-deployment challenges, and the stakes of a model failing in production were too high for a business.
So we started working on explainability as well, so that data scientists and ML engineers could not only detect model performance issues but also understand what went wrong and where to focus their fixes.
Our idea gained acceptance when our initial users loved the concept of Observability over mere Monitoring. With a solid MVP that could solve both model monitoring and model explainability problems, we launched our flagship product—the Censius AI Observability Platform.
Check out the Censius AI Observability Platform to see how it solves model monitoring and explainability problems.
What Lies Ahead for Us?
ML model decisions affect both our day-to-day routines and critical matters in our lives. It is crucial to understand what decisions models are making for us, and we can only do that with a detailed understanding of what goes on inside the black box.
We strongly believe in the principles of responsible AI, model fairness, and explainability. We want to ensure that every data scientist and ML engineer involved in model building has the tools to build high-performing and fair AI models.
We are very clear with the vision we have for Censius.
“We want to be the DataDog. What DataDog is to microservices is what we want to be for machine learning models.”
We wish to make machine learning applications hassle-free and accessible to all organizational stakeholders. We also want AI Observability, and Censius, to become an essential component of every enterprise ML stack.
Towards a responsible future!
We are expanding our team with people who are excited to solve the challenge of making AI models fair, transparent, and accountable. Head to our careers page if you want to join us in our mission.