What is the difference between Type I and Type II errors? Hypothesis testing is widespread, not only in statistics but also in the natural and social sciences. When we test a hypothesis, some things can go wrong, and there are two types of errors that cannot be avoided by design. We should be aware that these errors are always present, and they are conventionally named Type I and Type II. What are Type I and Type II errors, and how do we tell the difference? In short:

*A Type I error occurs when we reject a null hypothesis that is true.* *A Type II error occurs when we fail to reject a null hypothesis that is false.*

To understand these statements, we will explore the background behind these types of errors.

**Testing hypotheses**

The process of testing a hypothesis can seem quite varied, with many possible test statistics, but the general process is usually the same. Hypothesis testing involves stating a null hypothesis and choosing a level of significance. The null hypothesis is either true or false, and it represents the default claim about a treatment or method. For example, when examining the effectiveness of a drug, the null hypothesis would be that the drug has no effect on a disease.

After formulating the null hypothesis and choosing a level of significance, we acquire data through observation.

Statistical calculations then tell us whether or not we should reject the null hypothesis.
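As a minimal sketch of this decision step, the following compares a p-value to the significance level. The data, the hypothesized mean of 4.0, and the significance level of 0.05 are illustrative assumptions, not values from the article:

```python
# Sketch: the decision step described above, using a one-sample t-test.
from scipy import stats

# Observed data, e.g. symptom scores after a treatment (made-up numbers).
sample = [4.1, 5.2, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7, 5.3, 4.6]

alpha = 0.05  # chosen level of significance
# Null hypothesis (assumed for illustration): the population mean is 4.0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=4.0)

if p_value < alpha:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```

The decision rule itself never changes; only the test statistic used to compute the p-value varies from problem to problem.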

In an ideal world we would always reject the null hypothesis when it is false, and we would never reject the null hypothesis when it is true. But there are two other possible scenarios, each of which results in an error.

**TYPE I ERROR**

The first kind of error involves rejecting a null hypothesis that is actually true. This kind of error is called a Type I error, and it is sometimes called an error of the first kind.

A Type I error is equivalent to a false positive. For example, let's go back to the drug being used to treat a disease. If we reject the null hypothesis in this situation, then we claim that the drug does in fact have some effect on the disease. But if the null hypothesis is true, then in reality the drug does not combat the disease at all. The drug is falsely claimed to have a positive effect on the disease.

Type I errors can be controlled. The value of alpha, which corresponds to the level of significance that we selected, has a direct bearing on Type I errors: alpha is the maximum probability of committing a Type I error. For a 95% confidence level, alpha is 0.05. This means that there is a 5% probability of rejecting a null hypothesis that is true. In the long run, one out of every twenty hypothesis tests that we perform at this level will result in a Type I error.
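This long-run behavior can be checked by simulation. The sketch below repeatedly tests a null hypothesis that is true by construction, so every rejection is a Type I error; the choice of a normal distribution, a one-sample t-test, and a sample size of 30 are assumptions made for illustration:

```python
# Sketch: simulate the long-run Type I error rate at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_trials = 10_000

false_positives = 0
for _ in range(n_trials):
    # The null hypothesis (mean = 0) is TRUE for this data by construction.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:  # we wrongly reject a true null: a Type I error
        false_positives += 1

rate = false_positives / n_trials
print(rate)  # close to alpha, i.e. roughly 0.05
```

The observed rejection rate hovers around 0.05, matching the one-in-twenty claim above.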

**TYPE II ERROR**

The other kind of error occurs when we fail to reject a null hypothesis that is false.

This kind of error is called a Type II error, and it is also referred to as an error of the second kind.

A Type II error is equivalent to a false negative. If we think back to the scenario in which we are testing a drug, what would a Type II error look like? A Type II error occurs when we conclude that the drug has no effect on the disease when in reality it does.

The probability of a Type II error is given by the Greek letter beta. This number is related to the power or sensitivity of the hypothesis test, denoted by 1 – beta.
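To make beta concrete, the following sketch computes beta and the power (1 – beta) for a one-sided one-sample z-test. The effect size of 0.5, the sample size of 25, and alpha of 0.05 are illustrative assumptions, not values from the article:

```python
# Sketch: beta and power (1 - beta) for a one-sided one-sample z-test.
from scipy.stats import norm

alpha = 0.05        # Type I error rate (assumed)
effect_size = 0.5   # true mean shift, in standard-deviation units (assumed)
n = 25              # sample size (assumed)

z_crit = norm.ppf(1 - alpha)                      # rejection threshold
beta = norm.cdf(z_crit - effect_size * (n ** 0.5))  # Type II error prob.
power = 1 - beta

print(f"beta = {beta:.3f}, power = {power:.3f}")
```

A larger true effect or a larger sample pushes beta down and the power up.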

**HOW to avoid errors**

Type I and Type II errors are both part of the process of hypothesis testing. Although the errors cannot be completely eliminated, we can minimize one type of error.

Typically, when we try to decrease the probability of one type of error, the probability of the other type increases.

We could decrease the value of alpha from 0.05 to 0.01, corresponding to a 99% confidence level. However, if everything else remains the same, the probability of a Type II error will almost always increase.
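This trade-off can be seen directly by recomputing beta at both significance levels. The one-sided one-sample z-test, effect size of 0.5, and sample size of 25 below are assumptions chosen for illustration:

```python
# Sketch: tightening alpha from 0.05 to 0.01 raises beta, all else equal.
from scipy.stats import norm

effect_size = 0.5   # true mean shift in standard-deviation units (assumed)
n = 25              # sample size (assumed)

betas = {}
for alpha in (0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)  # stricter alpha -> higher threshold
    betas[alpha] = norm.cdf(z_crit - effect_size * (n ** 0.5))
    print(f"alpha = {alpha:.2f} -> beta = {betas[alpha]:.3f}")
```

With a stricter rejection threshold, fewer true effects clear it, so the probability of missing a real effect (beta) goes up.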

Often, real-world applications of hypothesis testing will help us decide whether we would rather risk a Type I or a Type II error, and that preference can then guide the design of our statistical experiment. Thank you for reading this article to the last paragraph. If you found it useful, you can share it on social media such as Google+ or Facebook. Thanks. – John Sadino –