AI systems are only as trustworthy as the training and testing data
21/01/26
Technology & Data
We welcome insights from our resident Research and Innovation Lead Georgios Dimitropoulos, who produced the video attached to this insight. He welcomes your questions, so use the CONTACT US button to share your thoughts with us.
“Garbage in, garbage out”
As with all software, if the input data is of low quality, the results will be inferior. With AI systems, the consequences compound: poor data quality leads to inaccurate predictions, which amplify inconsistencies and, in turn, cascade into failures across interconnected processes and workflows.
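A toy illustration of "garbage in, garbage out": fitting the same simple model to clean labels and to partially corrupted labels. This is a hypothetical sketch, not a real production pipeline; the data, corruption rate, and least-squares fit are all assumptions chosen to show how bad inputs distort the learned model.

```python
import random

random.seed(0)

def fit_slope(xs, ys):
    # Ordinary least squares through the origin: slope = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Ground truth: y = 2x, so a well-trained model should recover slope 2.0
xs = [float(i) for i in range(1, 101)]
true_slope = 2.0
clean_ys = [true_slope * x for x in xs]

# "Garbage" data: roughly 30% of labels replaced with random noise
dirty_ys = [y if random.random() > 0.3 else random.uniform(-100, 100)
            for y in clean_ys]

clean_fit = fit_slope(xs, clean_ys)
dirty_fit = fit_slope(xs, dirty_ys)

print(f"clean fit: {clean_fit:.3f}")  # recovers the true slope, 2.000
print(f"dirty fit: {dirty_fit:.3f}")  # drifts away from the true slope
```

The model itself is identical in both runs; only the data differs, and that difference alone is enough to pull the learned parameter away from the truth.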
Data aggregation is a challenge
People and organisations may have access to abundant data but often lack focus and direction. As AI systems consume exponentially more information, the challenge is no longer collecting data but aggregating it responsibly.
AI systems need human-centric thinking
Developing AI systems without human-centred design thinking embedded from conception can lead to a focus on technical metrics and surface-level performance rather than real-world impact. That gap between how AI is built and how it affects people is the crucial challenge.
Anthropocentrism, from Ancient Greek ἄνθρωπος (ánthrōpos, 'human') and κέντρον (kéntron, 'centre'), is the belief that human beings are the central or most important entity on the planet.