Why do you want to test for normality? If you are doing so to justify the use of t-tests, ANOVA, or some other OLS method, I would not bother. Here is why.
- A sufficient normality assumption for OLS models (including t-tests, ANOVA, and linear regression) is that the errors (i.e., the deviations of the observed values of Y from the true population regression equation) are sampled from a normal distribution with mean = 0 and variance = σ².
- What actually matters for the validity of these tests, however, is the sampling distributions of the parameter estimates (means, mean differences, regression coefficients). By the central limit theorem, as n increases those sampling distributions approach the normal distribution, even if the errors themselves are not normally distributed (a small simulation below illustrates this).
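Here is a minimal simulation sketch of that second point (my own illustration, not part of the argument above; the exponential errors and the sample sizes 5, 30, and 200 are arbitrary choices). It draws strongly skewed errors and shows that the skewness of the sampling distribution of the sample mean shrinks toward 0, the value for a normal distribution, as n grows.

```python
# Sketch: sampling distribution of the mean under skewed (exponential) errors.
# The skewness of that sampling distribution moves toward 0 (the normal value)
# as n increases, illustrating the central limit theorem at work.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims = 10_000  # number of simulated samples per sample size

for n in (5, 30, 200):
    # Each row is one simulated sample of n exponential "errors" (right-skewed).
    samples = rng.exponential(scale=1.0, size=(n_sims, n))
    sample_means = samples.mean(axis=1)
    print(f"n = {n:3d}: skewness of the sampling distribution of the mean "
          f"= {stats.skew(sample_means):.2f}")
```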
Putting it all together:
- The smaller n is, the more important it is that the errors are sampled from a normal population; but with small n, tests of normality have low power, and are probably unable to detect important deviations from normality.
- As n increases, normality of the errors becomes less important, but tests of normality gain power, and therefore throw up the red flag of non-normality even when the departure has no practical consequence (because the sampling distributions of the parameter estimates are approximately normal anyway). The second sketch below illustrates both effects.
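A second sketch (again my own illustration, with arbitrary distributions and sample sizes): at n = 10 the Shapiro-Wilk test often fails to flag even strongly skewed exponential errors, while at n = 2000 it rejects for heavy-tailed t errors with 5 degrees of freedom, a "violation" that matters little for a t-test at that sample size because the sampling distribution of the mean is already very close to normal.

```python
# Sketch: behavior of the Shapiro-Wilk normality test at small vs. large n.
# (a) n = 10, exponential errors: strong skew, but the test often misses it.
# (b) n = 2000, t(5) errors: the test flags non-normality in most samples,
#     even though the deviation has little practical consequence for a t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims = 1_000

def rejection_rate(draw, n):
    """Fraction of simulated samples where Shapiro-Wilk rejects at alpha = 0.05."""
    return np.mean([stats.shapiro(draw(n)).pvalue < 0.05 for _ in range(n_sims)])

print("exponential errors, n = 10  :",
      rejection_rate(lambda n: rng.exponential(size=n), 10))
print("t(5) errors,        n = 2000:",
      rejection_rate(lambda n: rng.standard_t(5, size=n), 2000))
```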
For those reasons, I once gave a short conference presentation in which I described testing for normality as a precursor to t-tests as silly and pointless. YMMV.
I hope this helps.