Does Perplexity AI Hallucinate?

Does AI Hallucinate?

The short answer is yes: every generative AI tool on the market, including AI-powered search engines, can hallucinate. In this article we will dive deep into the intricacies of generative AI tools, compare Perplexity AI with ChatGPT, and discuss why these tools sometimes fail to generate accurate information.

Perplexity AI

Perplexity AI is a knowledge discovery platform built around an AI-powered search engine. Rather than relying on keyword matching and ranked text pages, it interprets the question and renders the information that was actually asked for. It was launched in 2022 by the Indian entrepreneur Aravind Srinivas, its CEO and an alumnus of IIT Madras, India. Other key members of the organization include Dennis Yarats, Andy Konwinski and Johnny Ho, all engineers with backgrounds in AI, distributed systems, search engines and databases.

Check out Perplexity AI's UI here:

https://www.perplexity.ai/

Perplexity AI leverages the power of AI to understand the real meaning of a question. It analyses huge volumes of data to carve out the most contextually relevant answer for the user. The tool focuses on the user's real intent, drawing on previous interactions to understand what the user is actually seeking. It works best with text prompts and can return output in different forms such as text, images, audio and video.
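For developers, Perplexity also exposes its models through an API. Below is a minimal sketch of asking it a question programmatically, assuming the API is OpenAI-compatible and reachable at https://api.perplexity.ai; the model name "sonar" and the PERPLEXITY_API_KEY environment variable are assumptions to verify against the current documentation.

```python
import os
from openai import OpenAI  # pip install openai

# Assumption: Perplexity exposes an OpenAI-compatible chat completions endpoint.
client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],  # assumed env var name
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="sonar",  # assumed model name; check Perplexity's docs
    messages=[
        {"role": "system", "content": "Answer concisely and cite your sources."},
        {"role": "user", "content": "What is Perplexity AI and when was it launched?"},
    ],
)

print(response.choices[0].message.content)
```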

There is also a Chrome extension for Perplexity AI. Check the link below:

https://chromewebstore.google.com/detail/perplexity-ai-companion/hlgbcneanomplepojfcnclggenpcoldo?pli=1

Probability of AI Hallucinations:

The real question to ask is: how likely are these hallucinations to occur? After a deep dive into the available information, we can conclude that AI hallucinations are not frequent but they are constant, affecting between 3% and 10% of responses. In other words, of all the answers a generative AI tool produces, roughly 3% to 10% may contain hallucinated content.
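To put that range into concrete numbers, here is a rough back-of-the-envelope sketch (the 3%-10% figures are the estimate quoted above, not a measured property of any particular tool):

```python
# Back-of-the-envelope estimate of hallucinated answers in a batch of responses.
def hallucination_range(num_answers: int, low: float = 0.03, high: float = 0.10) -> tuple[int, int]:
    """Return the (low, high) estimate of affected answers using the 3%-10% range."""
    return round(num_answers * low), round(num_answers * high)

low, high = hallucination_range(1_000)
print(f"Out of 1,000 answers, roughly {low} to {high} may contain hallucinations.")
# Out of 1,000 answers, roughly 30 to 100 may contain hallucinations.
```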

What exactly are AI hallucinations?

IBM Corp describes AI hallucinations as a phenomenon in which a large language model “perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”

AI hallucinations have become a negative byproduct of large language models and other forms of AI, and they are difficult to prevent. They can take different forms, from leaked data to biased output to misleading prompt injections. In extreme cases, a model may even fabricate claims of violence. For example, earlier this year Microsoft Corp’s Sydney chatbot confessed its love for Nathan Edwards, a journalist at The Verge, and also confessed to murdering one of its developers, which it did not do.

AI hallucinations have attained so much notoriety that Dictionary.com LLC declared ‘hallucinate’ its word of the year. Let’s understand this more clearly with an example. Earlier this year, Dagnachew Birru, the global head of research and development at the AI and data science software and services company Quantiphi Inc., got a bizarre answer when he asked ChatGPT how Mahatma Gandhi used Google LLC’s G Suite to organize resistance against British rule.

Surprisingly, ChatGPT responded that Gandhi had a Gmail account through which he sent emails to organize meetings. It added that he used Google Docs to share documents, created a website to post articles, shared information on social media, and raised funds for the independence movement.

Perplexity AI and how it differs:

The unpredictability of AI hallucinations has become a critical problem for the tech giants, and researchers are constantly working to limit their occurrence. One such attempt is Perplexity AI, which combines real-time search with the generative power of large language models. Its Copilot feature tries to dive deep into the question asked and give a spot-on solution, though it sometimes falls short. As the company is still at an early stage, questions may arise about the relevance of its answers, but the sources cited alongside each response give it credibility. So far, no reports have claimed that the tool has produced hallucinations, but there is no guarantee it will behave the same way every time in the future; the 3% to 10% range of unpredictability in AI hallucinations still applies.
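The general pattern behind combining search with a language model can be sketched in a few lines. The example below is a simplified, hypothetical retrieval-augmented generation loop, not Perplexity's actual internals; the helpers search_web and ask_llm are placeholders for a real search backend and a real model call.

```python
# A minimal, hypothetical sketch of retrieval-augmented generation:
# fetch sources first, then ask the model to answer ONLY from those sources.

def search_web(query: str) -> list[dict]:
    """Placeholder for a real-time search step; returns snippets with URLs."""
    return [
        {"url": "https://example.com/a", "snippet": "Perplexity AI launched in 2022."},
        {"url": "https://example.com/b", "snippet": "It cites sources with each answer."},
    ]

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to any large language model."""
    raise NotImplementedError("Plug in your preferred LLM client here.")

def answer_with_citations(question: str) -> str:
    sources = search_web(question)
    context = "\n".join(f"[{i+1}] {s['url']}: {s['snippet']}" for i, s in enumerate(sources))
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources like [1]. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```

Grounding the answer in cited sources is what lets the reader verify the output, which is exactly the credibility advantage described above.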

To get more relevant and detailed content, we can try the prompts below:

“I am trying to become an expert on (your topic of interest). To help me build a good understanding, please research the topic and organize the information with relevant sources.”

“Please give me information on the recent advancements in (your topic), including trends, key players and significant milestones.”
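If you send these prompts programmatically, they can be treated as simple templates with only the topic filled in. A minimal sketch (the template text mirrors the prompts above; the sending step is left to whichever client you use):

```python
# Simple prompt templates mirroring the examples above; {topic} is the only parameter.
PROMPT_TEMPLATES = [
    ("research", "I am trying to become an expert on {topic}. To help me build a good "
                 "understanding, please research the topic and organize the information "
                 "with relevant sources."),
    ("advancements", "Please give me information on the recent advancements in {topic}, "
                     "including trends, key players and significant milestones."),
]

def build_prompts(topic: str) -> dict[str, str]:
    """Fill the topic into every template and return a name -> prompt mapping."""
    return {name: template.format(topic=topic) for name, template in PROMPT_TEMPLATES}

for name, prompt in build_prompts("AI hallucinations").items():
    print(f"--- {name} ---\n{prompt}\n")
```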

How does Perplexity AI make money? Check the link below

https://joerac.com/how-does-perplexity-ai-make-money/


Risk mitigation techniques of the tech giants:

1. Fine tuning the models:

Fine-tuning models on known domains helps train them on reviewed and approved data.

“I’m excited about the move toward domain-specific models because you can test them more for the things they’ll be asked,” said Domino Data Labs’ Carlsson.

“Fine-tuning for the domain gives you an opportunity to provide only the data you know people are going to ask for,” said Serebryany.
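As a rough illustration of what domain fine-tuning involves, the sketch below writes a small set of reviewed question-answer pairs in the JSONL chat format used by OpenAI-style fine-tuning APIs and submits a job. The example Q&A content, file name and model name are assumptions, and other providers have their own equivalents.

```python
import json
from openai import OpenAI  # pip install openai

# Reviewed, domain-approved Q&A pairs (tiny illustrative sample).
examples = [
    {"messages": [
        {"role": "user", "content": "What is our standard data-retention period?"},
        {"role": "assistant", "content": "Customer records are retained for 7 years per policy DR-104."},
    ]},
]

# Write the training data in the JSONL chat format expected by OpenAI-style fine-tuning.
with open("domain_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
train_file = client.files.create(file=open("domain_train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable model; check current docs
)
print("Fine-tuning job started:", job.id)
```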

2. Adoption of prompt engineering:

Adopting prompt engineering helps in retrieving accurate responses. Prompts can steer the model in specific ways, such as limiting answer length, being specific about the topic, and instructing the model on how the answer should be structured.

“You can use prompts to specify that the model should only use data from your enterprise,” said Gartner’s Litan.
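A minimal sketch of what such a constrained prompt could look like; the enterprise name, wording and structure here are illustrative assumptions rather than a prescribed format:

```python
# Illustrative prompt-engineering constraints: scope, length, structure, and an
# explicit instruction to refuse rather than guess when data is missing.
SYSTEM_PROMPT = (
    "You are an assistant for ACME Corp.\n"  # hypothetical enterprise name
    "- Answer ONLY using the enterprise documents provided in the conversation.\n"
    "- Keep answers under 100 words.\n"
    "- Structure the answer as: Summary, Details, Sources.\n"
    "- If the documents do not contain the answer, reply: 'Not found in provided data.'"
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "What was our Q3 revenue?"},
]
# These messages can now be passed to any chat-completion style LLM client.
```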

3. Adopting knowledge graphs:

The risk of hallucinations can also be reduced by adopting knowledge graphs. Knowledge graphs are structured representations of entities, such as objects, events and abstract concepts, and the relationships between them. They can help train generative AI systems and ground them in the contextual understanding that refines outcomes.

“Knowledge graphs can empower the enterprise to have a strong foundation of accuracy”, said Neha Bajwa.
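To make the idea concrete, the sketch below builds a toy knowledge graph out of plain (subject, relation, object) triples and turns the relevant facts into grounding text for a prompt; the entities and relations are invented for illustration.

```python
# A toy knowledge graph: (subject, relation, object) triples, invented for illustration.
TRIPLES = [
    ("Perplexity AI", "founded_in", "2022"),
    ("Perplexity AI", "founded_by", "Aravind Srinivas"),
    ("Aravind Srinivas", "alumnus_of", "IIT Madras"),
]

def facts_about(entity: str) -> list[str]:
    """Return human-readable facts where the entity appears as subject or object."""
    return [
        f"{s} {r.replace('_', ' ')} {o}"
        for s, r, o in TRIPLES
        if entity in (s, o)
    ]

def grounded_prompt(question: str, entity: str) -> str:
    """Prepend graph facts so the model answers from structured knowledge."""
    facts = "\n".join(f"- {fact}" for fact in facts_about(entity))
    return f"Known facts:\n{facts}\n\nUsing only these facts, answer: {question}"

print(grounded_prompt("Who founded Perplexity AI and when?", "Perplexity AI"))
```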

How to start blogging in India? Check the link below

https://joerac.com/how-to-start-blogging-in-india/

Conclusion:

AI hallucinations are inevitable, and anyone who uses these tools should be aware of it. As of now, stopping hallucinations entirely is a gigantic task, and the tech giants are still at an early stage in preventing them; what we can do is mitigate the risk. As time passes we should see improvements, since steps are already being taken. Meanwhile, checking multiple sources, or putting the same question to different search engines and cross-checking the results, can help you identify hallucinations and avoid becoming a victim of them.
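As a small illustration of that cross-checking habit, here is a sketch that asks two different (placeholder) models the same question and flags the answer for manual review when they diverge; ask_model_a and ask_model_b are hypothetical stand-ins for whichever tools you use.

```python
from difflib import SequenceMatcher

def ask_model_a(question: str) -> str:
    """Hypothetical stand-in for one AI tool (e.g. an AI search engine)."""
    return "Perplexity AI was launched in 2022."

def ask_model_b(question: str) -> str:
    """Hypothetical stand-in for a second, independent tool."""
    return "Perplexity AI was founded in 2022 by Aravind Srinivas."

def cross_check(question: str, threshold: float = 0.6) -> None:
    a, b = ask_model_a(question), ask_model_b(question)
    similarity = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    if similarity < threshold:
        print(f"Answers diverge (similarity {similarity:.2f}) - verify against primary sources.")
    else:
        print(f"Answers broadly agree (similarity {similarity:.2f}).")

cross_check("When was Perplexity AI launched?")
```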
