AI has nothing to do with intelligence, but people believe the marketing hype, mostly because we have a distorted idea of what intelligence is, largely thanks to the media.
Take the quiz show “Are You Smarter Than a Fifth Grader”, whose very name claims it is about whether contestants are as intelligent as a fifth grade student. What the show actually tests is who is more familiar with the grade five curriculum: grade five students, or people who have not been in school for twenty years or more. I know who I am betting on.
Or take the famously super-intelligent Jeopardy champions. Maybe some of these people are highly intelligent, but that is not why they are Jeopardy champions, because Jeopardy is not about intelligence. It is about knowing stuff, particularly the type of stuff Jeopardy asks questions about. At best it measures knowledge, not intelligence.
The Cambridge
Dictionary defines intelligence as: “the ability to learn,
understand, and make judgments or have opinions that are based on
reason”. (Source)
I would refine that
to: “the ability to understand and analyze information in order to
make rational decisions based on that information”.
Intelligence is not about information; it is about reasoning.
I remember what some might call the first forerunner of Alexa and other chatbots. It was called ELIZA:
ELIZA's creator, Weizenbaum, intended the program as a method to
explore communication between humans and machines. He was surprised
and shocked that individuals, including Weizenbaum's secretary,
attributed human-like feelings to the computer program.[3] Many
academics believed that the program would be able to positively
influence the lives of many people, particularly those with
psychological issues, and that it could aid doctors working on such
patients' treatment.[3][13] While ELIZA was capable of engaging in
discourse, it could not converse with true understanding.[14]
However, many early users were convinced of ELIZA's intelligence and
understanding, despite Weizenbaum's insistence to the contrary.[6] (Source)
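It is worth seeing just how little machinery that took. ELIZA worked by matching keywords against a script of patterns and echoing fragments of the user's own words back inside canned templates. Here is a minimal sketch of that style of program in Python (the rules are my own invented examples, not Weizenbaum's actual DOCTOR script):

import re
import random

# A toy ELIZA-style responder: match a keyword pattern, reflect the
# user's own words back inside a canned template. There is no
# understanding anywhere in this program.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
]

def reflect(fragment):
    # crude pronoun swap so "my job" comes back as "your job"
    swaps = {"i": "you", "me": "you", "my": "your", "am": "are"}
    return " ".join(swaps.get(word, word) for word in fragment.lower().split())

def respond(text):
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            fragment = reflect(match.group(1).rstrip(".!?"))
            return random.choice(templates).format(fragment)
    return random.choice(["Please go on.", "How does that make you feel?"])

print(respond("I am worried about my job"))
# e.g. "How long have you been worried about your job?"

A few dozen patterns like these were enough to convince users they were understood.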
This was not artificial intelligence, and neither are the latest claimants, the large language models (LLMs):
A large language model (LLM) is a language model notable for its
ability to achieve general-purpose language understanding and
generation. LLMs acquire these abilities by learning statistical
relationships from text documents during a computationally intensive
self-supervised and semi-supervised training process.[1] LLMs are
artificial neural networks following a transformer architecture.[2]
As autoregressive language models, they work by taking an input text
and repeatedly predicting the next token or word.[3] Up to 2020, fine
tuning was the only way a model could be adapted to be able to
accomplish specific tasks. Larger sized models, such as GPT-3,
however, can be prompt-engineered to achieve similar results.[4] They
are thought to acquire knowledge about syntax, semantics and
"ontology" inherent in human language corpora, but also
inaccuracies and biases present in the corpora.[5]
Notable examples include OpenAI's GPT models (e.g., GPT-3.5 and
GPT-4, used in ChatGPT), Google's PaLM (used in Bard), and Meta's
LLaMA, as well as BLOOM, Ernie 3.0 Titan, and Anthropic's Claude 2.
(Source)
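Strip away the scale and the description above reduces to a single loop: learn statistical relationships between tokens from a corpus, then repeatedly predict the next token. Here is a deliberately tiny sketch of that autoregressive loop in Python, using bigram counts over a made-up corpus; real LLMs use transformers trained on billions of documents, but the generation loop is the same shape:

import random
from collections import defaultdict, Counter

# Toy autoregressive "language model": count which word follows which
# in a tiny corpus, then generate text by repeatedly sampling a likely
# next word and appending it. There is no reasoning step anywhere.
corpus = "the ocean is full of water . the ocean is deep . the desert is dry .".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1  # the "training": pure counting

def generate(word, length=8):
    output = [word]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        # sample the next word in proportion to how often it followed this one
        word = random.choices(list(candidates), weights=candidates.values())[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the ocean is dry . the desert is deep"

Note that this toy can happily generate “the ocean is dry”: perfectly plausible given its statistics, factually wrong, and with no notion of truth anywhere in the loop. Scale does not change that; it only makes the output more fluent.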
Using statistics to mimic what a human might say or write is not reasoning, and it is certainly not intelligence.
It might not be so bad if these systems did not claim to be intelligent but only claimed to retrieve accurate information, and did that well. But they are designed NOT to do that.
I remember the early Internet and search engines with advanced Boolean search capability, like AltaVista and the early versions of Google, before they sold their top search results to the highest bidder.
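That older style of retrieval is worth spelling out, because it is the opposite of statistical generation. Boolean search is just an inverted index plus set operations: a document either contains the query terms or it does not, and nothing is invented. A minimal sketch in Python (my own toy illustration with made-up documents, not how AltaVista was actually implemented):

from collections import defaultdict

# Minimal Boolean retrieval: an inverted index maps each term to the
# set of documents containing it; AND, OR, and NOT are plain set operations.
docs = {
    1: "the atlantic ocean is full of salt water",
    2: "the sahara desert has very little water",
    3: "crypto mining wastes enormous amounts of energy",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

all_ids = set(docs)
print(index["water"] & index["ocean"])              # water AND ocean -> {1}
print(index["water"] | index["energy"])             # water OR energy -> {1, 2, 3}
print(index["water"] & (all_ids - index["ocean"]))  # water AND NOT ocean -> {2}

Every result points back to an actual document; the engine never produces a statement that no source actually contains.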
Back then the Internet was mainly academic institutions and community-based organizations, and the information on it was relatively reliable most of the time. That information is still there, if you pay attention to the actual source.
LLMs could draw on an information base built from genuinely reliable sources like Encyclopedia Britannica or Wikipedia, or the collections of actual scientific journals and other respected sources.
But instead they have adopted the bigger-is-better approach, feeding as much of the Internet as possible into their models, often without the permission of the sources and creators. This leads to an information base dominated by misinformation and disinformation, producing results like “there is no water in the Atlantic Ocean”. But obvious errors are not the real danger here; the danger is the amplification of misinformation and disinformation in the political sphere.
But it is worse. These disinformation models are proving to be even more wasteful of energy and harmful to the planet than the cryptocurrency scam, and their believers and followers are just as faithful and misguided. And for what? Obviously, they hope to make a shitload of money from this scam.
AI is clearly not
intelligent, just dangerous.