Showing posts with label artificial intelligence.

2024-05-12

The Scourge of the Internet

No, I am not writing about the fear and hate mongering taking over the Internet, although those are its greatest evils. And I am not talking about corporate social media, with all the evils of turning the customer into the product; at least it can still facilitate communication, community and even activism. I am talking about something much subtler and seemingly innocuous.

The Scourge of the Internet is so-called influencers and content creators.

When I think of influencers, the Kardashians are the first thing that comes to mind: people famous for being famous. Online influencers are all about being famous, and being charismatic or outrageous seems to be the way to get there. But influencers are not really out to influence anyone; they are just looking for followers who can be monetized.

As for content creators, the word content says it all. They are not about providing real information or knowledge; it is just about creating something to stick in between the advertising. That is why, when you go researching online, you keep finding multiple websites with exactly the same information, word for word (usually stolen from Wikipedia). Content creators are just sticking content they steal in between the advertising, all for hits and advertising revenue.

These things may seem innocuous, but they clutter up the Internet with meaningless pap, making real information increasingly difficult, if not close to impossible, to find. And AI is just going to make everything worse as the LLMs behind it feed on this mountain of garbage for the ultimate GIGO effect.

Can we have our old Internet back, please – a place for information, communication and community?

2024-01-03

AI Has Nothing To Do With Intelligence

AI has nothing to do with intelligence, but people believe the marketing hype, mostly because we have a distorted idea of what intelligence is, largely due to the media.

Take the quiz show “Are You Smarter Than a Fifth Grader”, whose very name says it is about whether contestants are as intelligent as a fifth-grade student. What the show actually tests is who is more familiar with the grade five curriculum: grade five students, or people who have not been in school for twenty years or more. I know who I am betting on.

Or take the famously super-intelligent Jeopardy champions. Maybe some of them are highly intelligent, but that is not why they are Jeopardy champions, because Jeopardy is not about intelligence. It is about knowing stuff, particularly the type of stuff Jeopardy asks questions about. At best it is about knowledge, not intelligence.

The Cambridge Dictionary defines intelligence as: “the ability to learn, understand, and make judgments or have opinions that are based on reason”. (Source)

I would refine that to: “the ability to understand and analyze information in order to make rational decisions based on that information”.

Intelligence is not about information; it is about reasoning.

I remember what some might call the first forerunner of Alexa and other chatbots. It was called ELIZA.

ELIZA's creator, Weizenbaum, intended the program as a method to explore communication between humans and machines. He was surprised and shocked that individuals, including Weizenbaum's secretary, attributed human-like feelings to the computer program.[3] Many academics believed that the program would be able to positively influence the lives of many people, particularly those with psychological issues, and that it could aid doctors working on such patients' treatment.[3][13] While ELIZA was capable of engaging in discourse, it could not converse with true understanding.[14] However, many early users were convinced of ELIZA's intelligence and understanding, despite Weizenbaum's insistence to the contrary.[6] (Source)
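The trick behind ELIZA was little more than pattern matching and canned response templates. A minimal sketch of that style of “conversation” in Python (the rules below are my own toy examples, not Weizenbaum’s actual DOCTOR script):

    import re

    # Toy ELIZA-style rules: a regex pattern paired with a response template.
    # These rules are illustrative only, not Weizenbaum's actual script.
    RULES = [
        (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
    ]
    FALLBACK = "Please go on."

    def respond(utterance: str) -> str:
        """Return a canned response by pattern matching; no understanding involved."""
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return FALLBACK

    print(respond("I feel ignored by my computer."))  # Why do you feel ignored by my computer?
    print(respond("My code never works."))            # Tell me more about your code never works.

A script like this can keep a conversation going indefinitely without representing anything about what was said, which was exactly Weizenbaum’s point.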

This was not artificial intelligence and neither are the latest claimants, the large language models (LLMs).

A large language model (LLM) is a language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by learning statistical relationships from text documents during a computationally intensive self-supervised and semi-supervised training process.[1] LLMs are artificial neural networks following a transformer architecture.[2]

As autoregressive language models, they work by taking an input text and repeatedly predicting the next token or word.[3] Up to 2020, fine tuning was the only way a model could be adapted to be able to accomplish specific tasks. Larger sized models, such as GPT-3, however, can be prompt-engineered to achieve similar results.[4] They are thought to acquire knowledge about syntax, semantics and "ontology" inherent in human language corpora, but also inaccuracies and biases present in the corpora.[5]

Notable examples include OpenAI's GPT models (e.g., GPT-3.5 and GPT-4, used in ChatGPT), Google's PaLM (used in Bard), and Meta's LLaMA, as well as BLOOM, Ernie 3.0 Titan, and Anthropic's Claude 2. (Source)

Using statistics to mimic what a human might say or write is not reasoning and it is certainly not intelligence.
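To make the “statistics” point concrete, here is a minimal sketch of autoregressive next-token prediction, assuming the Hugging Face transformers and PyTorch packages are installed. GPT-2 is used only because it is small and freely downloadable, not because it matches any particular product:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Load a small open model; this is a sketch, not any vendor's actual pipeline.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("The Atlantic Ocean is", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):                           # generate ten tokens, one at a time
            logits = model(input_ids).logits          # a score for every token in the vocabulary
            next_id = torch.argmax(logits[0, -1])     # pick the most probable next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

The loop never consults any model of the world; it only re-ranks the vocabulary at each step based on patterns in the training text, appends its own guess, and repeats.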

It might not be so bad if these systems did not claim to be intelligent but only claimed to retrieve accurate information, and did that well. But they are designed NOT to do that.

I remember the early Internet and search engines with advanced Boolean search capability, like AltaVista and the early versions of Google, before they sold their top search results to the highest bidder.

Back then the Internet was mainly academic institutions and community-based organizations, and the information on it was relatively reliable most of the time. That information is still there if you pay attention to the actual source.

LLMs could draw on an information base built from actual reliable sources like Encyclopedia Britannica or Wikipedia, or the collections of actual scientific journals and other respected sources.
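As a rough sketch of what grounding answers in a vetted information base could look like (the corpus entries and crude keyword matching below are placeholders of my own, not any vendor’s actual pipeline):

    # Toy sketch of answering only from a small, vetted corpus.
    # The corpus entries and the keyword matching are placeholders;
    # a real system would index actual encyclopedia or journal text.
    CORPUS = {
        "Atlantic Ocean": "The Atlantic Ocean is the second-largest of the world's oceans.",
        "ELIZA": "ELIZA is an early natural language program created by Joseph Weizenbaum.",
    }

    def retrieve(question: str) -> list[str]:
        """Return passages whose title words all appear in the question."""
        q = question.lower()
        return [text for title, text in CORPUS.items()
                if all(word in q for word in title.lower().split())]

    def answer(question: str) -> str:
        passages = retrieve(question)
        if not passages:
            return "No vetted source found; declining to guess."
        # A real system would hand these passages to a language model;
        # here the sourced text itself is returned.
        return " ".join(passages)

    print(answer("Is there water in the Atlantic Ocean?"))
    print(answer("Who owns the Moon?"))

Even a crude filter like this keeps the provenance of an answer visible, instead of mixing reliable and unreliable text into one statistical soup.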

But instead they have adopted a bigger-is-better approach, feeding as much of the Internet as possible into their models, often without the permission of the sources or creators. This leads to an information base dominated by misinformation and disinformation, producing results like “there is no water in the Atlantic Ocean”. The real danger is not obvious errors like that, but the amplification of misinformation and disinformation in the political sphere.

But it gets worse. These disinformation models are proving to be even more wasteful of energy and harmful to the planet than the cryptocurrency scam, and their believers and followers are just as faithful and misguided. And for what? Obviously they hope to make a shitload of money from this scam.

AI is clearly not intelligent, just dangerous.

2023-03-10

Object Removal with Adobe Photoshop Elements 2023 – The De-urbanization of a Pond

Back in the day, Inpaint was the standard for removing objects from digital photographs. Then I discovered Photo Stamp Remover, which I found to be much better and easier to use. That all changed when I upgraded from my 2013 copy of Photoshop Elements 12 to Photoshop Elements 2023. The interface has improved and the capabilities have increased, not least its object removal capabilities, which are better than any other program I have tried. Adobe says of Photoshop Elements 2023 that “Adobe Sensei AI technology* and automated options do the heavy lifting so you can focus on the fun stuff”.

This project demonstrates the object removal capability of Photoshop Elements 2023.

The site of the photo used in this project is the pond along Iber Road in Ottawa, Ontario, a block from the Trans-Canada trail.

Google Earth Aerial View of Site

The photo itself is one I took on November 14, 2012 with my Garmin GPSmap 62sc GPS camera, and came across while going through my GPS photos for my wallpaper project. (I use my own photos for my desktop PC wallpaper and change them weekly.)

Original Photo

 

The original photo was cropped to 16:9 using JPEGCrops and then enhanced using Simply Good Pictures’ automatic optimization process.

Cropped and Enhanced Version
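As an aside, the 16:9 centre crop could also be done programmatically; here is a minimal sketch using the Pillow library, with made-up file names:

    # Centre-crop an image to a 16:9 aspect ratio using Pillow.
    # "pond_original.jpg" and "pond_16x9.jpg" are example file names only.
    from PIL import Image

    img = Image.open("pond_original.jpg")
    w, h = img.size

    target_h = int(w * 9 / 16)           # height a 16:9 frame needs at this width
    if target_h <= h:
        top = (h - target_h) // 2        # trim equally from top and bottom
        img = img.crop((0, top, w, top + target_h))
    else:
        target_w = int(h * 16 / 9)       # image too narrow: trim the sides instead
        left = (w - target_w) // 2
        img = img.crop((left, 0, left + target_w, h))

    img.save("pond_16x9.jpg", quality=95)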

The result was a very decent photo, but the first thing I noticed was that the tree in front of the pond was distracting, even if it was a natural feature. So I thought, why not try removing it, since I had already been surprised by the object removal capabilities of Photoshop Elements 2023. I did not expect great results, the tree with its many branches being unlike a straight hydro line or telephone pole. I used the auto select function and was surprised that the results were not bad, though they needed some tweaking with the brush function. I then took my shadow and a couple of culverts out of the photo and voilà, the finished product.

Object Removal Version 1
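Adobe does not say exactly how Sensei’s object removal works, but the general idea of filling a masked region from its surroundings can be tried with OpenCV’s classical inpainting. This is a much simpler algorithm than anything in Photoshop Elements, and the file names are just examples:

    # Classical inpainting with OpenCV: fill a masked region from surrounding pixels.
    # This is not Adobe Sensei, just a textbook version of the same general idea.
    # "pond.jpg" is the photo; "tree_mask.png" is a white-on-black mask of the object.
    import cv2

    photo = cv2.imread("pond.jpg")
    mask = cv2.imread("tree_mask.png", cv2.IMREAD_GRAYSCALE)  # non-zero pixels mark the object

    # inpaintRadius controls how far the algorithm looks for source pixels.
    result = cv2.inpaint(photo, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

    cv2.imwrite("pond_tree_removed.jpg", result)

With a busy subject like a tree the result will be much rougher than what Elements produced, which is rather the point of the comparison.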

Then I looked at all the buildings along the pond and thought, let’s see if we can get rid of those and make this look like it is not in the middle of a city. The first attempt to remove them all at once with the auto select function was quite unsatisfactory. So I tried doing smaller sections using the auto select and brush functions, and had success.

Object Removal Version 2

 

That was it, I thought, but then I realized the railing along the pond still gave its urban location away. With all the lines from the individual railings, I thought this was going to be impossible to remove and still have the photo look natural, and a first attempt using auto select on the whole railing confirmed that. But using the brush function and going a little bit at a time resulted in a decent image. The only giveaway was the apparent pattern among some of the apparently cloned geese. Some pondering and further editing attempts resulted in my removing some of the geese to break up the pattern and create a natural-looking photo.

Final Object Removal Version 3

 

The moral of the story: when it comes to photo editing, don’t be afraid to try things you do not think will work; you might surprise yourself.