There’s a big push to implement AI everywhere, hoping for better productivity and faster decisions. To get the best out of any AI, it helps to start with good quality data.
So what are we doing to measure our error rates and data quality?
In this workshop, Caroline gave participants the opportunity to compare thoughts on error rates, including trying a new six-aspect framework for errors.
Data quality isn’t static. The session also considered how data might deteriorate over time or in other ways, with a chance to share ideas about how people are measuring that, too.
The session ended with “tips and next steps”: an opportunity to consider what participants now need to find out or do differently.
Participant takeaways:
What do we think good data quality is? How are we currently defining it?
How are we measuring error rates and data quality at the moment?
How much do we know about the consequences of errors in our data?
What ways of measuring error rates and data quality are practical and where can we make improvements?
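As a concrete starting point for the measurement questions above, one practical approach is to define a few explicit validation rules and count the fraction of records that fail at least one. This is a minimal sketch only; the records and rules here are hypothetical illustrations, not a recommended rule set.

```python
# Minimal sketch: measure an error rate against simple validation rules.
# The sample records and rules below are hypothetical illustrations.

records = [
    {"id": 1, "age": 34, "email": "a@example.com"},
    {"id": 2, "age": -5, "email": "b@example.com"},   # invalid age
    {"id": 3, "age": 28, "email": "not-an-email"},    # invalid email
]

rules = {
    "age_in_range": lambda r: 0 <= r["age"] <= 120,
    "email_has_at": lambda r: "@" in r["email"],
}

def error_rate(records, rules):
    """Fraction of records failing at least one validation rule."""
    failures = sum(
        1 for r in records if not all(check(r) for check in rules.values())
    )
    return failures / len(records)

print(f"error rate: {error_rate(records, rules):.0%}")  # → error rate: 67%
```

Tracking a figure like this over time is one simple way to spot the kind of gradual deterioration the session discussed.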