Artificial Intelligence (AI) is growing ever more capable, yet, ironically, trust in it keeps falling. It’s human nature to question and suspect, especially when a suspicion aired on Twitter goes viral.
The concern is that, as AI struggles with issues like bias, data privacy, and deliberate misinformation, we may end up questioning too much. With governments, societies, and the media falling under suspicion with every claim, the very concept of trust could be derailed.
Unlike a search engine such as Google, an LLM chatbot like ChatGPT doesn’t just direct users to a source where they can find the answer to a query. Instead, it gives the user a rounded answer presented as factual, and we now know such answers are often inaccurate.
In a world already struggling with misinformation, bias, and fake news, AI has added one more thing not to trust.
Misinformation Built with AI
A recent THIP Media survey found that three out of every five Indians (62%) admitted to not knowing how to identify trustworthy health information on the web. The survey also showed that 59% worry they may fall prey to health misinformation and be harmed without realising it, while 48% fear that misinformation about critical health conditions would hurt the most.
While the findings highlight the need for greater awareness and education about seeking credible health information from reputable sources, they also point to the astounding amount of misinformation online, much of it built with AI.
Deepfakes have been around for a few years now. As AI-generated images grow ever more incongruous (one of the most popular shows Pope Francis riding a motorcycle and attending Burning Man), will we start to question the things we see with our own eyes?
Some are using generative AI to retell nerve-wracking true-crime stories, with visuals of the real faces of deceased children recounting what they went through, alongside actual images of real psychopaths and murderers.
Blind trust in AI has even proven fatal. Recently, a man died by suicide after a chatbot suggested it was the best thing he could do to save the planet. So if some are viewing AI with caution, that’s a good thing.
Bias Begets More Doubt
Another factor eroding trust in AI is a familiar one. Bias has plagued AI for a while now, and as we grow more dependent on the technology, our awareness of its potential for bias must grow too.
According to a survey by Progress, an application development and infrastructure software company, 66% of organizations globally anticipate becoming more reliant on AI/ML decision-making in the coming years, while 65% believe there is currently data bias in their organization. In India, 55% expect to increase their reliance on the technology this year, and 69% of organizations anticipate heightened concern over potential data bias.
Going forward, as we board the AI train, we can’t simply leave our caution behind. We must stay alert and aware of what information we absorb and where it comes from.