# Global AI and Data Science


## Artificial Intelligence: As I See It

#### By Moloy De posted Thu February 11, 2021 10:05 PM

Suppose you are tossing a coin that may be biased or unbiased, unknown to you. Before each toss you call Heads or Tails and check whether your call matches the outcome of the toss. You record the results in a chart, and after, say, fifty tosses you measure your prediction performance by calculating Accuracy, Precision, Recall, F1 Score, etc.
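As a minimal sketch, the scoring step above could look like the following, treating Heads as the positive class (the function name and the "H"/"T" encoding are my own choices, not from the original chart):

```python
def score(predictions, outcomes, positive="H"):
    """Confusion-matrix metrics for a list of calls vs. actual tosses."""
    tp = sum(p == positive and o == positive for p, o in zip(predictions, outcomes))
    fp = sum(p == positive and o != positive for p, o in zip(predictions, outcomes))
    fn = sum(p != positive and o == positive for p, o in zip(predictions, outcomes))
    tn = sum(p != positive and o != positive for p, o in zip(predictions, outcomes))
    accuracy = (tp + tn) / len(outcomes)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

Feeding in the fifty recorded calls and outcomes yields the four performance numbers directly.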

Now instead of predicting by gut feeling, you may use an algorithm that sets the success probability, i.e. the probability of Heads, equal to the proportion of Heads observed so far. You generate a random number between zero and one; if it is less than the proportion of Heads so far, you predict Heads, otherwise you predict Tails. Initially you may set the success probability to 0.5. Now, this is an Artificially Intelligent Algorithm.
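The rule above can be sketched as a small Python class (the class name and the "H"/"T" encoding are assumptions for illustration):

```python
import random

class ProportionPredictor:
    """Predicts Heads with probability equal to the proportion of Heads seen so far."""

    def __init__(self, rng=None):
        self.heads = 0
        self.tosses = 0
        self.rng = rng or random.Random()

    def predict(self):
        # Before any data, fall back to the initial success probability 0.5.
        p_hat = self.heads / self.tosses if self.tosses else 0.5
        # Draw a uniform number in [0, 1); call Heads if it falls below p_hat.
        return "H" if self.rng.random() < p_hat else "T"

    def observe(self, outcome):
        """Update the running proportion after seeing the actual toss."""
        self.tosses += 1
        if outcome == "H":
            self.heads += 1
```

On each round you call `predict()`, compare it with the toss, then call `observe(outcome)` so the estimate tracks the coin's bias.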

When one calls Heads or Tails at random, the probability of winning a toss is (1/2)·p + (1/2)·(1−p) = 1/2 for any value of p, where p denotes the probability of Heads. If pₙ denotes the proportion of Heads in n tosses, the probability that the AI Algorithm wins a toss is pₙ·p + (1−pₙ)·(1−p). Now pₙ converges to p as n gets large, so the AI Algorithm's winning probability approaches p² + (1−p)² ≥ 1/2, with equality only when p = 1/2. I tried to compare the two strategies by a small experiment in my YouTube video.
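A comparison along the lines of that experiment can be run in a few lines of Python (the function and strategy names are my own; for a coin with p = 0.9 the adaptive strategy should approach p² + (1−p)² = 0.82 while random guessing stays near 0.5):

```python
import random

def win_rate(strategy, p, n, seed=0):
    """Fraction of correct calls over n tosses of a coin with P(Heads) = p."""
    rng = random.Random(seed)
    heads_seen = wins = 0
    for i in range(n):
        if strategy == "random":
            guess_p = 0.5  # call Heads half the time, regardless of history
        else:  # "adaptive": call Heads with the proportion of Heads seen so far
            guess_p = heads_seen / i if i else 0.5
        call = "H" if rng.random() < guess_p else "T"
        outcome = "H" if rng.random() < p else "T"
        wins += call == outcome
        heads_seen += outcome == "H"
    return wins / n
```

With a heavily biased coin the gap is easy to see: `win_rate("adaptive", 0.9, 100_000)` lands near 0.82 while `win_rate("random", 0.9, 100_000)` lands near 0.5.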

As Wikipedia puts it, intelligence is "the ability to acquire and apply knowledge and skills". We train a model on data, which is a kind of acquiring knowledge and skill, and the model applies that knowledge and skill when it runs in production, making predictions and performing other activities. For any life form, self-defence is a built-in instinct and capability that can also be termed intelligence. As per Darwin, only species intelligent enough to defend themselves survive.

The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.

The free will theorem of John H. Conway and Simon B. Kochen states that if we have free will, in the sense that our choices are not a function of the past, then, subject to certain assumptions, so must some elementary particles. Conway and Kochen's paper was published in Foundations of Physics in 2006. In 2009, the authors published a stronger version of the theorem in the Notices of the AMS, and in 2017 Kochen elaborated some details. Here is a talk by John Conway on Free Will. Randomness can be used to mimic or model Free Will, but as per Conway, Free Will is something more, yet to be appropriately conceived and defined.

In my opinion, if we could create a machine with Free Will, it would be Artificially Intelligent.

QUESTION I: How can one prove that Free Will is something more than randomness?
QUESTION II: Why is ethics important when creating AI?

#Featured-area-3
#Featured-area-3-home
#GlobalAIandDataScience
#GlobalDataScience