Testing Normality
By Moloy De, posted Sun October 02, 2022 08:38 PM
In statistics, normality tests are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.
An informal approach to testing normality is to compare a histogram of the sample data to a normal probability curve. The empirical distribution of the data, the histogram, should be bell-shaped and resemble the normal distribution. This might be difficult to see if the sample is small. In this case one might proceed by regressing the data against the quantiles of a normal distribution with the same mean and variance as the sample. Lack of fit to the regression line suggests a departure from normality.
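As a rough illustration, here is a minimal sketch of the histogram comparison in Python (assuming NumPy, SciPy, and Matplotlib are available; the sample below is synthetic and stands in for your own data):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Synthetic stand-in for real data.
rng = np.random.default_rng(42)
x = rng.normal(loc=10.0, scale=2.0, size=200)

# Histogram of the sample, overlaid with a normal density that has
# the same mean and standard deviation as the sample.
plt.hist(x, bins=20, density=True, alpha=0.6, label="sample")
grid = np.linspace(x.min(), x.max(), 200)
plt.plot(grid, stats.norm.pdf(grid, loc=x.mean(), scale=x.std(ddof=1)),
         label="fitted normal")
plt.legend()
plt.show()
```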
A graphical tool for assessing normality is the normal probability plot, a quantile-quantile plot (QQ plot) of the standardized data against the standard normal distribution. Here the correlation between the sample data and normal quantiles measures how well the data are modeled by a normal distribution. For normal data the points plotted in the QQ plot should fall approximately on a straight line, indicating high positive correlation. These plots are easy to interpret and also have the benefit that outliers are easily identified.
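SciPy's probplot draws exactly this plot and also reports the correlation r between the ordered data and the normal quantiles; a minimal sketch:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100)

# probplot plots the ordered data against theoretical normal quantiles
# and returns the least-squares line plus the correlation r.
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist="norm", plot=plt)
print(f"QQ correlation r = {r:.4f}")  # values near 1 support normality
plt.show()
```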
A simple back-of-the-envelope test takes the sample maximum and minimum, computes their z-scores (or, more properly, t-statistics), and compares them to the 68–95–99.7 rule: if one has a 3σ event and substantially fewer than 300 samples, or a 4σ event and substantially fewer than 15,000 samples, then a normal distribution will understate the maximum magnitude of deviations in the sample data.
This test is useful in cases where one faces kurtosis risk – where large deviations matter – and has the benefit of being very easy to compute and communicate: non-statisticians can easily grasp that "6σ events are very rare in normal distributions".
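A minimal sketch of this check follows; the function name and thresholds simply echo the rule of thumb above, and this is a heuristic, not a formal test:

```python
import numpy as np

def extreme_z_check(x):
    """Compare the z-scores of the sample extremes to the 68-95-99.7 rule."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (np.array([x.min(), x.max()]) - x.mean()) / x.std(ddof=1)
    # Under normality, |z| >= 3 occurs roughly once per 370 draws and
    # |z| >= 4 roughly once per 15,000 draws.
    for zi in z:
        if abs(zi) >= 4 and n < 15000:
            print(f"|z| = {abs(zi):.1f} with n = {n}: tails heavier than normal")
        elif abs(zi) >= 3 and n < 300:
            print(f"|z| = {abs(zi):.1f} with n = {n}: possible kurtosis risk")
```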
Tests of univariate normality include the following (a SciPy sketch of several of them follows the list):
1. D'Agostino's K-squared test,
2. Jarque–Bera test,
3. Anderson–Darling test,
4. Cramér–von Mises criterion,
5. Kolmogorov–Smirnov test,
6. Lilliefors test,
7. Shapiro–Wilk test,
8. Pearson's chi-squared test.
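Several of these are implemented in SciPy; a minimal sketch on synthetic data (note that Anderson–Darling reports critical values rather than a p-value):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=500)

w, p_sw = stats.shapiro(x)           # Shapiro-Wilk: W near 1 supports normality
k2, p_k2 = stats.normaltest(x)       # D'Agostino's K-squared
jb, p_jb = stats.jarque_bera(x)      # Jarque-Bera (skewness and kurtosis)
ad = stats.anderson(x, dist="norm")  # Anderson-Darling: statistic + critical values

# Kolmogorov-Smirnov with parameters estimated from the data; strictly,
# estimating the parameters calls for the Lilliefors correction.
ks, p_ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))

print(p_sw, p_k2, p_jb, ad.statistic, p_ks)
```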
A 2011 study comparing the Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors, and Anderson–Darling tests concludes that Shapiro–Wilk has the best power for a given significance level, followed closely by Anderson–Darling.
More recent tests of normality include the energy test (Székely and Rizzo) and tests based on the empirical characteristic function (ECF), e.g. Epps and Pulley, Henze–Zirkler, and the BHEP test. The energy and ECF tests are powerful, apply to both univariate and multivariate normality, and are statistically consistent against general alternatives.
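To make the energy test concrete, here is a sketch of the Székely–Rizzo energy statistic for univariate normality, computed on the standardized sample. The closed-form expectations below hold for a standard normal Z; in practice significance is assessed by a parametric bootstrap, which this sketch omits:

```python
import numpy as np
from scipy import stats

def energy_statistic(x):
    """Szekely-Rizzo energy statistic for univariate normality.
    Large values suggest departure from normality; p-values come
    from a parametric bootstrap, omitted here for brevity."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = np.sort((x - x.mean()) / x.std(ddof=1))

    # E|y_i - Z| for a standard normal Z has a closed form.
    e_yz = 2.0 * stats.norm.pdf(y) + y * (2.0 * stats.norm.cdf(y) - 1.0)
    e_zz = 2.0 / np.sqrt(np.pi)  # E|Z - Z'| for independent standard normals

    # Mean pairwise |y_i - y_j| via the sorted-data identity
    # sum_{i<j} (y_(j) - y_(i)) = sum_k (2k - n + 1) * y_(k), k zero-based.
    k = np.arange(n)
    mean_yy = 2.0 * np.sum((2.0 * k - n + 1.0) * y) / n**2

    return n * (2.0 * e_yz.mean() - e_zz - mean_yy)
```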
The normal distribution has the highest entropy of any distribution for a given standard deviation. There are a number of normality tests based on this property, the first attributable to Vasicek.
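A sketch of the idea behind Vasicek's test, using his spacings estimator of entropy: because the normal maximizes entropy for a given standard deviation, the ratio exp(H)/s should be near sqrt(2πe) ≈ 4.13 for normal data, and markedly smaller values point away from normality (m is a small window parameter; the exact critical values are tabulated elsewhere):

```python
import numpy as np

def vasicek_entropy(x, m=3):
    """Vasicek's spacings estimator of differential entropy."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    # Clamp the indices so x_(i-m) and x_(i+m) fall back to the sample
    # extremes near the edges, as in Vasicek's construction.
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    return np.mean(np.log(n / (2.0 * m) * (xs[hi] - xs[lo])))

def vasicek_ratio(x, m=3):
    """exp(H)/s, which approaches sqrt(2*pi*e) ~ 4.13 under normality."""
    return np.exp(vasicek_entropy(x, m)) / np.std(x, ddof=1)
```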
In a Bayesian setting, Kullback–Leibler divergences between the whole posterior distributions of the slope and variance do not indicate non-normality. However, the ratio of expectations of these posteriors, and the expectation of the ratios, give results similar to the Shapiro–Wilk statistic except for very small samples, when non-informative priors are used.
Spiegelhalter suggests using a Bayes factor to compare normality with a different class of distributional alternatives. This approach has been extended by Farrell and Rogers-Stewart.
QUESTION I: Why do we need our data to be normally distributed?
QUESTION II: Myth or truth – the larger the volume of data, the more normally it behaves?
REFERENCE: Normality test – Wikipedia
#GlobalAIandDataScience
#GlobalDataScience