An AI result can go wrong in many ways, so it is desirable to have an idea of the probability that the result is correct. Knowing this probability will help enhance the quality of business decisions.
1 + 1 = 2 is a theorem proved in the book Principia Mathematica by Russell and Whitehead. One needs to define what "1" is, what "2" is, what "+" is and what "=" is while proving the theorem. The proof is available online here.
We find children making mistakes while doing sums. An AI algorithm that mimics the activities of the human brain may define a sum, let me call it a Quantum Sum, as follows:
1 Q 1 = 2 with probability 0.95,
= 1 with probability 0.025,
= 3 with probability 0.025.
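To make the definition concrete, here is a minimal simulation sketch in Python. The function name quantum_sum and the choice to model the errors as off-by-one outcomes are my own illustration, not part of the original post.

```python
import random
from collections import Counter

def quantum_sum(a, b, p_correct=0.95):
    """Return a + b with probability p_correct; otherwise return an
    off-by-one answer (undershoot or overshoot), splitting the
    remaining probability equally between the two."""
    r = random.random()
    if r < p_correct:
        return a + b        # correct: 1 Q 1 = 2
    elif r < p_correct + (1 - p_correct) / 2:
        return a + b - 1    # undershoot: 1 Q 1 = 1
    else:
        return a + b + 1    # overshoot: 1 Q 1 = 3

# Empirical check of the stated probabilities for 1 Q 1.
counts = Counter(quantum_sum(1, 1) for _ in range(100_000))
print({k: v / 100_000 for k, v in sorted(counts.items())})
# Expect roughly {1: 0.025, 2: 0.95, 3: 0.025}
```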
Put simply, 1 Q 1 Q 1 = 3 with probability 0.95 × 0.95 = 0.9025, the chance that both component Quantum Sums come out correct.
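Continuing with the quantum_sum sketch above, a quick Monte Carlo check of the chained sum; the empirical figure comes out marginally above 0.9025, since opposite errors can occasionally cancel each other.

```python
# Reuses quantum_sum from the sketch above: estimate P(1 Q 1 Q 1 = 3),
# i.e. the result of two chained Quantum Sums.
trials = 200_000
hits = sum(quantum_sum(quantum_sum(1, 1), 1) == 3 for _ in range(trials))
print(hits / trials)  # close to 0.95 * 0.95 = 0.9025
```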
With n Quantum Sums, the probability of getting the correct value is 0.95^n, which goes to zero as n gets large.
As the graph suggests, the probability drops below 0.5 when n ≥ 14.
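In place of the graph, here is a quick check of that threshold, assuming the same 0.95 per-sum success probability.

```python
# Probability that a chain of n Quantum Sums comes out entirely correct
# is 0.95**n; find where it first drops below 0.5.
for n in range(1, 21):
    p = 0.95 ** n
    if p < 0.5:
        print(f"0.95^{n} = {p:.4f}  <-- first value below 0.5 (n = {n})")
        break
```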
The mathematician Carl Friedrich Gauss worked on measurement errors, work that evolved into the Theory of Errors and the Least Squares Method, which in turn helped shape Estimation Theory. But here I am trying to think about errors in elementary math operations like addition, subtraction, multiplication, division and root extraction.
I am sure there are merits to introducing such Quantum Sums. Not modelling mistakes while computing could be suicidal for humanity. Let us think about it.
QUESTION I: How does the Theory of Errors give rise to the Normal Distribution?
QUESTION II: Has Quantum Computing already taken care of Quantum Sums?
#GlobalAIandDataScience #GlobalDataScience