

BIOSTATISTICS 

Year: 2017 | Volume: 3 | Issue: 2 | Page: 268-270

Type I, II, and III statistical errors: A brief overview
Parampreet Kaur^{1}, Jill Stoltzfus^{2}
^{1} Research Institute, St. Luke's University Health Network, Bethlehem, United States of America ^{2} Research Institute, St. Luke's University Health Network, Bethlehem; Temple University School of Medicine, Philadelphia, PA, United States of America
Date of Web Publication: 9-Jan-2018
Correspondence Address: Dr. Jill Stoltzfus St. Luke's University Health Network, 801 Ostrum Street, Bethlehem, PA 18015 United States of America
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/IJAM.IJAM_92_17
As a key component of scientific research, hypothesis testing incorporates a null hypothesis (H_{0}) of no difference in a larger population and an alternative hypothesis (H_{1} or H_{A}) that becomes true when the null hypothesis is shown to be false. Two potential types of statistical error are Type I error (α, or level of significance), when one falsely rejects a null hypothesis that is true, and Type II error (β), when one fails to reject a null hypothesis that is false. To reduce Type I error, one should decrease the predetermined level of statistical significance. To decrease Type II error, one should increase the sample size in order to detect an effect size of interest with adequate statistical power. Reducing Type I error tends to increase Type II error, and vice versa. Type III error, although rare, occurs when one correctly rejects the null hypothesis of no difference, but does so for the wrong reason. The following core competencies are addressed in this article: Practice-based learning and improvement, Medical knowledge.
Keywords: False negative, false positive, statistical errors, Type I, Type II, Type III
How to cite this article: Kaur P, Stoltzfus J. Type I, II, and III statistical errors: A brief overview. Int J Acad Med 2017;3:268-70
Introduction   
Hypothesis testing is a critical component of conducting scientific research. As part of this process, one must choose between two competing hypotheses about the value of a population parameter of interest, which is then tested through experiments and/or observations. The null hypothesis (symbolized by H_{0}) states that there is no difference in the population parameter(s).^{[1]} It is assumed to be true unless there is strong evidence to the contrary. For example, when examining the effectiveness of an experimental antibiotic, the null hypothesis would be that the drug has no effect on a disease in the larger population.
In contrast, the alternative hypothesis (symbolized by H_{A} or H_{1}) is assumed to be true when the null hypothesis is false. For example, when examining the effectiveness of an experimental antibiotic, the alternative hypothesis is that the drug has a significant effect on a disease in the larger population and that this effect is not due to random chance.
It is essential to define both the null and alternative hypotheses before any statistical test of significance is conducted.
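To make the antibiotic example concrete, the following Python sketch runs a two-proportion z-test against the null hypothesis of equal cure rates. All counts here are invented for illustration; the article itself reports no data.

```python
from statistics import NormalDist

# Hypothetical trial counts (illustrative only, not from the article).
# H0: the antibiotic has no effect (cure rates equal in both groups)
# H1: the cure rates differ
cured_treat, n_treat = 60, 100   # treatment group
cured_ctrl, n_ctrl = 45, 100     # control group

p_treat = cured_treat / n_treat
p_ctrl = cured_ctrl / n_ctrl

# Pooled cure rate under H0 and the standard error of the difference
p_pool = (cured_treat + cured_ctrl) / (n_treat + n_ctrl)
se = (p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl)) ** 0.5

z = (p_treat - p_ctrl) / se
# Two-sided p-value from the standard normal distribution
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.4f}")
# Reject H0 at the 0.05 level only if p_value < 0.05
```

With these made-up counts the p-value falls below 0.05, so the null hypothesis of "no difference" would be rejected; whether that rejection is correct in the larger population is exactly what Type I and Type II errors describe.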
Understanding Types of Errors   
Type I error
When conducting hypothesis testing, there are two major potential types of error that may disrupt the process. Type I error (symbolized by α and equivalent to a false-positive result) occurs when one incorrectly rejects a null hypothesis that is actually true (i.e., there is no difference in the larger population).^{[2]} Using the previous example of a drug's effect (experimental antibiotic) on a disease in the larger population, if one falsely rejects the null hypothesis, one would claim that the drug has a significant effect on the disease as measured by one's study sample, when in reality, the antibiotic is not effective against the disease in the larger population.
The probability of committing a Type I error is a function of one's level of statistical significance.^{[3]} The conventional range for significance is between 0.01 and 0.10, with 0.05 representing the value seen in most published research studies. Assuming one has obtained an adequately sized and representative sample from the larger population, Type I error generally occurs due to random chance. Multiple testing may also increase the chance of Type I error because making many different comparisons between groups often results in at least one comparison being falsely “significant.”
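The inflation of Type I error from multiple testing can be quantified directly. The sketch below uses the standard family-wise error rate formula for m independent comparisons, 1 − (1 − α)^m, along with a Bonferroni correction; the formula is a textbook result, not taken from this article.

```python
# Family-wise error rate: the probability of at least one false
# positive across m independent comparisons, each tested at alpha.
def family_wise_error_rate(alpha: float, m: int) -> float:
    return 1 - (1 - alpha) ** m

# A single test at alpha = 0.05 carries a 5% Type I error chance...
single = family_wise_error_rate(0.05, 1)

# ...but 10 independent comparisons inflate that to roughly 40%.
ten = family_wise_error_rate(0.05, 10)

# A Bonferroni correction (alpha / m per test) restores control.
corrected = family_wise_error_rate(0.05 / 10, 10)

print(f"1 test: {single:.3f}, 10 tests: {ten:.3f}, "
      f"10 Bonferroni-corrected tests: {corrected:.3f}")
```

This is why, as noted above, making many comparisons often yields at least one falsely "significant" result unless the significance threshold is adjusted.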
Type II error
The second type of potential error when conducting hypothesis testing is known as Type II error (symbolized by β and equivalent to a false-negative result).^{[2]} It occurs when one fails to reject a null hypothesis that in actuality is false. For example, if the experimental antibiotic truly affects a disease in the larger population, but one falsely claims that it does not, as measured by the study sample, Type II error is the result.
The probability of committing a Type II error is a function of power (symbolized by 1 − β). The conventional range for Type II error is between 0.05 and 0.20, with 0.20 representing the standard value in published studies (meaning there is an 80% chance of correctly detecting a difference in one's sample that actually exists in the larger population). The main reason for Type II error is an insufficient sample size for detecting an effect size of interest.^{[3]} For example, one may wish to test whether a drug reduces disease incidence in the treatment group by 10% compared to the control group. Here, the effect size would be 10%, and one's sample must be large enough to detect this difference to avoid a Type II error. Smaller effect sizes require larger samples, so one must exercise great care in identifying the appropriate effect size for one's study objectives (e.g., from previous research, pilot study findings, and/or one's clinical observations).
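The sample-size reasoning above can be sketched numerically. The function below applies the standard normal-approximation formula for comparing two independent proportions at α = 0.05 and 80% power; the 30% and 20% incidence rates are hypothetical stand-ins for the article's "10% reduction" example.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1: float, p2: float,
                                alpha: float = 0.05,
                                power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided test of two
    independent proportions (normal-approximation formula)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # about 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical version of the article's example: detect a 10% absolute
# reduction in disease incidence, say from 30% (control) to 20% (drug).
n = sample_size_two_proportions(0.30, 0.20)
print(n)  # roughly 291 patients per group under these assumptions

# Halving the effect size to 5% demands a far larger sample:
n_small = sample_size_two_proportions(0.30, 0.25)
print(n_small)
```

The second call illustrates the text's point that smaller effect sizes require larger samples: shrinking the detectable difference from 10% to 5% more than quadruples the required sample per group.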
Relationship between Type I and Type II Error   
Although they represent different concepts, Type I and Type II error are related in that reducing Type I error tends to increase Type II error and vice versa.^{[3]} By reducing Type I error (typically by decreasing the level of significance, such as from 0.05 to 0.01), it becomes more difficult to reject the null hypothesis of “no difference” even if there really is a difference in the larger population (which would result in Type II error).
In contrast, by increasing the Type I error rate or level of significance (such as from 0.05 to 0.10), one makes it easier to reject the null hypothesis of "no difference" when there truly is a difference in the larger population, which reduces the probability of Type II error. [Figure 1] illustrates this relationship by showing how increasing or decreasing alpha (Type I error) or beta (Type II error) leads to a respective decrease or increase in the other value.
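This tradeoff can be demonstrated with a short calculation. For a one-sided z-test with a fixed sample and a fixed true effect (all values below are hypothetical), tightening alpha raises beta and loosening alpha lowers it:

```python
from statistics import NormalDist

def type_ii_error(alpha: float, effect: float, se: float) -> float:
    """Beta for a one-sided z-test, given the true effect and the
    standard error of the test statistic."""
    z = NormalDist()
    critical = z.inv_cdf(1 - alpha)  # rejection threshold for H0
    # Beta = probability the test statistic falls below the threshold
    # even though the true standardized effect is effect / se.
    return z.cdf(critical - effect / se)

effect, se = 2.5, 1.0  # hypothetical effect of 2.5 standard errors

beta_strict = type_ii_error(0.01, effect, se)  # alpha = 0.01
beta_usual = type_ii_error(0.05, effect, se)   # alpha = 0.05
beta_loose = type_ii_error(0.10, effect, se)   # alpha = 0.10

# Stricter alpha -> larger beta, and vice versa:
print(f"{beta_strict:.3f} > {beta_usual:.3f} > {beta_loose:.3f}")
```

The printed values fall monotonically, mirroring the inverse relationship between alpha and beta that [Figure 1] depicts.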
Type III error
Although Type I and II errors are the primary points of concern when conducting hypothesis testing, another type of error may come into play (albeit rarely). Type III error occurs when one correctly rejects the null hypothesis of no difference but does so for the wrong reason.^{[4]} One may also provide the right answer to the wrong question. In this case, the hypothesis may be poorly written or incorrect altogether. For example, a drug may reduce disease in the larger population, but it fails to do so in one's study sample because the hypothesis was not well conceived. To avoid Type III error, one should take great care when collecting, recording, and analyzing data from the population of interest, since this type of error may negatively impact medical practices and health policies if one adopts an inappropriate treatment plan or course of intervention due to faulty data.
Conclusion   
When conducting hypothesis testing, one must guard against the possibility of Type I and II errors, since both have the potential to adversely affect healthcare decisions and policies, particularly if treatments and interventions are either promoted inappropriately or withheld due to inability to detect their true impact. Careful study design and conscientious attention to data collection and analysis go a long way toward reducing these hypothesis testing errors and promoting the highest quality evidence in healthcare research.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References   
1. Banerjee A, Chitnis UB, Jadhav SL, Bhawalkar JS, Chaudhury S. Hypothesis testing, type I and type II errors. Ind Psychiatry J 2009;18:127-31.
2. Bajwa SJ. Basics, common errors and essentials of statistical tools and techniques in anesthesiology research. J Anaesthesiol Clin Pharmacol 2015;31:547-53.
3. Kim HY. Statistical notes for clinical researchers: Type I and type II errors in statistical decision. Restor Dent Endod 2015;40:249-52.
4. Robin ED, Lewiston NJ. Type 3 and type 4 errors in the statistical evaluation of clinical trials. Chest 1990;98:463-5.
