Correct for multiple comparisons using statistical hypothesis testing

The list of tests available depends on the goal you specified on the second tab. Some of these methods let you compute confidence intervals and multiplicity adjusted P values, and some don't. We recommend one of the tests that compute confidence intervals and multiplicity adjusted P values for two reasons:

- Confidence intervals are much easier for most people to interpret than statements about statistical significance.
- Multiplicity adjusted P values provide more information than simply knowing whether a difference has been deemed statistically significant or not.

Because they can compute confidence intervals and multiplicity adjusted P values, we recommend these tests:

- If you are comparing every row (or column) mean with every other row (or column) mean, we recommend the Tukey test.
- If you are comparing a control row (or column) mean with the other row (or column) means, we suggest Dunnett's test.
- If you are comparing a set of independent comparisons, we recommend the Šídák method, which is very similar to Bonferroni but has slightly more power.

Other available methods

The Bonferroni and Šídák methods are offered for compatibility with other programs, but we see no advantage to choosing them. If you don't care about seeing and reporting confidence intervals, you can gain a bit more power by choosing the Holm-Šídák test. It is more powerful than the Tukey method for comparing all pairs of means (3). That means that with some data sets, the Holm-Šídák method can find a statistically significant difference where the Tukey method cannot. Glantz says that Holm's test ought to have more power than Dunnett's test, but to his knowledge this has not been explored in depth (2). Prism also offers the Newman-Keuls test (when comparing each mean with each other mean) for historical reasons (so files made with old versions of Prism will open), but we suggest you avoid it because it does not maintain the family-wise error rate at the specified level (1).
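To illustrate where multiplicity adjusted P values come from, here is a minimal pure-Python sketch of the Šídák and Holm-Šídák adjustments. This is not Prism's implementation, and the function names are hypothetical; it only shows why the step-down Holm-Šídák procedure yields smaller (more powerful) adjusted P values than the single-step Šídák correction.

```python
def sidak_adjust(pvalues):
    """Single-step Šídák: p_adj = 1 - (1 - p)^m, with m = number of comparisons."""
    m = len(pvalues)
    return [min(1.0, 1.0 - (1.0 - p) ** m) for p in pvalues]

def holm_sidak_adjust(pvalues):
    """Step-down Holm-Šídák adjustment.

    Sort the raw P values in ascending order, apply the Šídák correction
    with a comparison count that shrinks at each step, then enforce
    monotonicity so a smaller raw P value never receives a larger
    adjusted P value than a bigger one.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = 1.0 - (1.0 - pvalues[i]) ** (m - rank)  # m, m-1, ..., 1
        running_max = max(running_max, adj)            # enforce monotonicity
        adjusted[i] = min(1.0, running_max)
    return adjusted

raw = [0.01, 0.02, 0.30]
print(sidak_adjust(raw))       # every comparison pays the full m = 3 penalty
print(holm_sidak_adjust(raw))  # later comparisons pay a smaller penalty
```

With these example P values, plain Šídák gives about 0.0297, 0.0588, and 0.657, while Holm-Šídák gives about 0.0297, 0.0396, and 0.300: the step-down method never exceeds the single-step one, which is why it can declare significance where a single-step method cannot.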