GROW YOUR STARTUP IN INDIA

SHARE

facebook icon facebook icon

AI’s benefits are undeniable, while big tech is bettering AI more and more. But the risks are just as visible.

Big tech AI’s prowess is such that ChatGPT users now browsing the web, expanding the data the viral chatbot can access beyond its earlier September 2021 cutoff. Google’s Bard can instantly retrieve information from Google Search while being plugged into the entire Google ecosystem, Gmail, Docs, YouTube, and Maps. With Meta, users can now ask AI chatbots and celebrity AI avatars questions on WhatsApp, Messenger, and Instagram, while the AI model retrieves the information online from Bing search.

Read more: AI artist vs. human artist, who’s winning?

According to leaks and reports, Google DeepMind is training Gemini, its next iteration of the generative AI model which it claims is going to be multimodal, on video transcripts of YouTube, not just the audio. This would make the model rich and heavily informed. No one has done that before. 

According to the report ‘Generative Artificial Intelligence: Machines turning creative’ by JM Financial Institutional Securities Generative AI has emerged as a disruptive technology that can potentially help corporates improve productivity while vastly improving customer experience as well. As of now, major use cases and financial benefits of Gen AI fall in sales, marketing and customer support with software engineering expected to rise sharply.

Though Generative AI has powerful benefits across a multitude of use cases, it has its share of pitfalls that need to be accounted for. While these risks should be taken seriously, we need to note that it is not unusual for a major innovation (cars, computers et al) to introduce threats that need to be controlled while letting the innovation progress

The report ‘Generative Artificial Intelligence: Machines turning creative

But the report also warns of the risks from Generative AI. “Though Generative AI has powerful benefits across a multitude of use cases, it has its share of pitfalls that need to be accounted for. While these risks should be taken seriously, we need to note that it is not unusual for a major innovation (cars, computers et al) to introduce threats that need to be controlled while letting the innovation progress,” it says.

On the point of bias and fairness, the report says that while the models are creative, they would “inadvertently amplify the biases present in their training data and hence could demonstrate gender/ race/ caste prejudices.”

AI has a need to learn higher level reasoning and human values, the “primary reason why Gen AI responses need to be checked.”

Deepfakes and misinformation have already upped their game with AI. AI amplifies fake accounts and morphed images on social media with the potential of deepfake audio and video that can even be used to alter the course of a democratic election process. An absolutely horrifying example of real-life harm posed by generative AI happened in September, in Spain, when AI-generated images of children were circulated on social media.

While nonconsensual deepfake porn has been used to torment women for years, the latest generation of AI makes it an even bigger problem. These systems are much easier to use than previous deepfake tech, and they can generate images that look completely convincing

MIT Tech Review

As the MIT Tech Review says, “While nonconsensual deepfake porn has been used to torment women for years, the latest generation of AI makes it an even bigger problem. These systems are much easier to use than previous deepfake tech, and they can generate images that look completely convincing.”

Don’t get me wrong. ChatGPT can be a great boon to us. Recently, a Twitter user claimed that GPT-4 helped save his dog’s life. The user ran a diagnosis on his dog using GPT-4 and the LLM helped reach the underlying problem that was troubling his Border Collie, Sassy.

But can ChatGPT scale to a level of human-level intelligence? Answering this question, L Venkata Subramaniam, a quantum distinguished ambassador at IBM, told AIM that quantum computing can be used in working with fundamental models like ChatGPT. He said that quantum computing can achieve comparable results to classical AI using less training data and potentially accelerate the training process for AI models.

There are other good AI examples as well. Like the robotic exoskeleton that can help runners sprint faster, or how AI helped archaeologists in Peru to discover four new Nazca Line geoglyphs.

Melissa Heikkilä on The MIT Technology Review calls this “a risky bet, given the limitations of the technology”. Apart from regular AI problems like hallucinations, security, and privacy issues, she draws attention to a serious problem called ‘indirect prompt injection’.

The Tech Panda has already discussed about the evil twin propensity of prompts. Indirect prompt injection attacks can allow a third party to alter a website by adding stuff that can change the AI’s behaviour. Now, says Heikkilä, “Attackers could use social media or email to direct users to websites with these secret prompts. Once that happens, the AI system could be manipulated to let the attacker try to extract people’s credit card information, for example.” In fact, she calls the end users ‘sitting ducks’.

Hackers will have a gala time. And the big tech rushing into the LLM race isn’t doing much.

“People may be more trusting of AI when they can’t see how it works,” read one of the articles in the latest edition of Harvard Business Review, which showed how not knowing the workings of a model, helped people trust the process more. 

AI hasn’t surpassed humans yet. Also, it’s not always competent. AI may be besting radiologists at detecting certain types of cancer, but it still lags in its ability to evaluate itself, according to a new Harvard study. Automated scoring systems fared worse than humans in evaluating AI-generated reports, sometimes misinterpreting data and overlooking AI-generated errors.

New York University researchers looking into AI in schools found that ChatGPT did as well or better than students in nine out of 32 courses and that AI can’t reliably detect ChatGPT-generated “plagiarism.”

Read more: Traditional AI or generative? A hybrid approach wins

In conclusion, the escalating prevalence of deep fakes, the potential harm stemming from malicious prompts, and the widespread dissemination of misinformation highlight the need to address and mitigate the risks associated with AI. While the undeniable benefits of AI continue to shape and enhance various aspects of our lives, the advancements driven by big tech also bring forth challenges that cannot be ignored.

As we navigate this intricate landscape of evolving technology, it becomes increasingly crucial for stakeholders, researchers, and policymakers to collaborate in developing robust frameworks, ethical guidelines, and innovative solutions. By proactively addressing AI risks, we can ensure that the transformative power of artificial intelligence is harnessed responsibly, fostering a future where innovation coexists harmoniously with ethical considerations.

SHARE

facebook icon facebook icon
You may also like