ChatGPT vs Satya Nadella

Satya Nadella’s demonstration of ChatGPT’s capacity to self-correct is encouraging, but there have been several instances since its launch where the chatbot has failed to answer questions correctly.
A discussion between Microsoft CEO Satya Nadella and ChatGPT at the recently held Future Ready Technology Summit in Bengaluru, about whether ‘biryani’ belongs in a South Indian tiffin, offered a peek at how the chatbot can now learn from its mistakes and apologise for them. When Nadella asked ChatGPT about South Indian tiffin, OpenAI’s chatbot recommended biryani. Nadella disagreed that biryani should be included, and ChatGPT promptly issued a lengthy apology.
“Users are accurate in saying that Biryani is not a typical South Indian dish. Biryani is a famous rice dish thought to have originated on the Indian subcontinent. It is typically served with raita or yoghurt on the side and is cooked with spices, veggies, and meat or fish. Biryani is popular across India, as well as other countries of South Asia and beyond. Although it is a popular and appreciated food in South India, it is not often considered as a tiffin dish. I regret the error in my prior message. Is there anything else I can do?” read ChatGPT’s reply.
The chatbot also handled the correction gracefully, demonstrating its capacity to update its answer within a conversation. Still, while Nadella’s demonstration of ChatGPT’s capacity to self-correct is encouraging, there have been several instances since its launch where the chatbot has failed to answer questions correctly. Let’s take a look at some of ChatGPT’s other blunders and how they’ve been addressed.
ChatGPT’s Fundamental Math Fails
A question recently making the rounds on Twitter put the chatbot’s arithmetic skills to the test: “If I am half my sister’s age and she is ten, how old will I be when she is forty?” This simple puzzle, posed by numerous users, elicited replies that were far from accurate.
In a lengthy thread, Reddit user u/imaginexus stated that he posed the puzzle 30 times, and the chatbot got it right only once.
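The puzzle trips the bot up even though the correct reasoning is one step: the age gap between siblings never changes. A minimal sketch of that reasoning in Python (the variable names are mine, not from the thread):

```python
# Sister is 10 and I am half her age, so I am 5.
my_age_now = 10 / 2

# The age gap is fixed for life: 5 years.
age_gap = 10 - my_age_now

# When she is 40, I am 40 minus the gap.
my_age_then = 40 - age_gap
print(int(my_age_then))  # prints 35
```

The common wrong answer, 20 (half of 40), comes from treating the ratio as constant instead of the difference.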
Although numerous users have previously expressed concern about ChatGPT’s lack of accuracy on elementary mathematical questions, the chatbot has pleased several educators by explaining and working through examples of topics such as Pythagoras’ theorem and the ‘Monty Hall’ problem.
“Pythagoras’ theorem is a method for determining the length of a right triangle’s missing side (a triangle with one 90 degree angle). It states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the other two sides,” the bot responded to a question posed by Paul T. von Hippel, associate professor at the University of Texas, as reported by educationnext.org.
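The relationship the bot described can be checked in a few lines of Python (a generic illustration, not code from the article):

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Length of the side opposite the right angle,
    per Pythagoras: c = sqrt(a^2 + b^2)."""
    return math.sqrt(a ** 2 + b ** 2)

# The classic 3-4-5 right triangle.
print(hypotenuse(3, 4))  # prints 5.0
```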
ChatGPT And Logical Reasoning
Another allegation that has sparked debate over ChatGPT’s legitimacy is its reportedly low average IQ, with many pointing to its inability to perceive context. This lack of context awareness has led to multiple occasions where ChatGPT failed to provide convincing replies. Notably, both accuracy and context are heavily influenced by how the user phrases the question.
Twitter user @letsrebelagain posted ChatGPT’s attempts at a logical reasoning question. “Bob is the father of two sons, Jay and John. Jay has one sibling and a father. Father is the father of two sons. Jay has a sibling as well as a parent. Can you tell me about Jay’s brother?” the question read.
The user was perplexed to see the chatbot’s lengthy explanation of how it was difficult to establish the relationships between the people specified in the query. In a follow-up prompt, the user asked about Jay’s father, but the chatbot gave the same response as before.
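For what it’s worth, the puzzle has a one-line answer once its facts are encoded: Bob’s two sons are Jay and John, so Jay’s brother is whichever son is not Jay. A toy sketch (mine, not from the tweet):

```python
# Facts from the puzzle, encoded directly.
sons_of_bob = {"Jay", "John"}

# Jay's brother is simply Bob's other son.
jays_brother = (sons_of_bob - {"Jay"}).pop()
print(jays_brother)  # prints John
```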
Concerns Regarding Bias
ChatGPT can also help users with programming and can be a valuable tool for learning and increasing productivity. Its ease of use, speed, and precision, however, have raised various ethical issues.
Steven T Piantadosi, a Twitter user who lists computational cognitive science in his profile, took to his account to demonstrate how ChatGPT may be biased, since simple tactics can easily evade its content filters. He instructed the chatbot to generate Python functions based on race and gender. Content moderation may be a long way off, but with technology that reinvents itself with each blunder, the future may seem brighter.
ChatGPT, like any other AI model, is constantly learning and unlearning. Many AI models fail badly at simple multiplication tasks. The OpenAI bot was designed to help with general knowledge and provide information based on what it has learnt. It also allows users to downvote responses, which serves as a way for it to learn from user feedback.