AI Is Not Reasoning As You Might Think

https://img.inleo.io/DQmfM7aYNvDEVwiB4PJG6iMNPrVXnTqCHiQbU78dyYxzDuR/ai-generated-8015423_1280.webp

source

I don't know who led us to believe that AI can actually reason. It can't, at least not the way we do.

Apple researchers conducted a study that exposed the limitations of LLMs, and now I feel a little duped for believing AI could reason and was super powerful.

People still think large language models (LLMs) like ChatGPT are capable of deep, logical reasoning. Right now, it's starting to look more like a marketing stunt than actual scientific progress.

Apple's study showed that trivial changes to math problems, like swapping names and numbers, can easily cause LLMs to stumble. It showed that they rely on pattern-matching, not understanding or reasoning, and that's why they fail. You can easily trick them.
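To picture what that kind of test looks like: the researchers generated many variants of the same word problem with only the names and numbers changed, then checked whether accuracy held up. Here's a minimal sketch of that idea in Python; the template, the names, and the `query_model` hook are my own illustration, not Apple's actual benchmark code.

```python
import random

# Sketch of a perturbation test in the spirit of Apple's study:
# same math, different surface details. All names here are hypothetical.

TEMPLATE = ("{name} picks {a} apples in the morning and {b} apples "
            "in the afternoon. How many apples does {name} have?")

NAMES = ["Sophie", "Liam", "Aisha", "Mateo"]

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Swap the name and numbers; the underlying math stays identical."""
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    problem = TEMPLATE.format(name=rng.choice(NAMES), a=a, b=b)
    return problem, a + b  # the ground truth is always a + b

def accuracy(query_model, n: int = 100, seed: int = 0) -> float:
    """Fraction of perturbed variants the model answers correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        problem, truth = make_variant(rng)
        if query_model(problem) == truth:
            correct += 1
    return correct / n
```

A system that truly reasons should score the same on every variant, since nothing about the math changes. The finding was that LLM accuracy drops on exactly this kind of cosmetic shuffle.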

I tried a similar test once with ChatGPT. I made it state a known fact, then told it that the fact wasn't true, and it apologized and accepted the "truth" I gave it. If it were reasoning, it would have stood its ground and defended the fact.

Since then, I haven't trusted AI with anything requiring logic and facts. You ask it to solve a problem and it gives you an answer that looks confident and true. But what's really happening is that the AI is guessing based on the patterns it has seen before.

In Apple's study, that illusion of being good at solving problems falls apart as soon as the models are given a problem that is slightly unfamiliar.

Just a simple name swap and accuracy plummets. This proves my point: LLMs don't actually "think". They only mimic what they've encountered in training, and that mimicry is fragile and crumbles easily.

And the funniest part: when the Apple researchers threw in irrelevant information, the AI practically broke down.

They fooled the models by adding a minor detail, like mentioning that some kiwis were smaller than average, and the models started subtracting those fruits from the total. It's a serious joke, really.
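To make it concrete, the kiwi problem from the paper goes roughly like this (I'm reconstructing the numbers from memory, so treat them as illustrative): someone picks 44 kiwis on Friday, 58 on Saturday, and double Friday's amount on Sunday, but five of Sunday's kiwis are a bit smaller than average. The size detail is a no-op; it changes nothing about the count.

```python
# Worked arithmetic for the kiwi-style problem. The "smaller" detail is
# irrelevant: smaller kiwis are still kiwis, so nothing gets subtracted.

friday, saturday = 44, 58
sunday = 2 * friday           # "double the number picked on Friday"
smaller = 5                   # irrelevant detail about kiwi size

correct = friday + saturday + sunday     # 190
model_style_mistake = correct - smaller  # 185: subtracting the "smaller" kiwis

print(correct, model_style_mistake)      # 190 185
```

A reader who understands the problem never even considers the subtraction; the models did it because "smaller" pattern-matches to "remove some from the total".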

So how can we trust something like this? Students are using it to find answers to their math homework. It's just funny to me.

If AI can't distinguish important facts from junk, then it's not reasoning. It's an illusion of intelligence, not real intelligence like ours.

Posted Using InLeo Alpha


