Type I and II errors

Every time you test a null hypothesis with a statistical test, you either accept or reject that hypothesis. In most cases the decision is correct, but we are dealing with probabilities here! You reject the null hypothesis whenever the probability of observing a test statistic at least as extreme as the one you got (the p-value) is less than the chosen significance level (usually α = 0.05). Yet there is always a chance that the observed value actually belongs to the null distribution (of, for example, no difference between means). The probability of observing a test statistic that causes us to reject the null hypothesis when we should not is equal to the chosen significance level.
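The decision rule above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the function name `two_sided_p` and the observed statistic `z_observed = 2.3` are my own hypothetical choices, assuming a standard-normal test statistic.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    # P(|Z| >= |z|) computed from the normal CDF,
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05                 # chosen significance level
z_observed = 2.3             # hypothetical observed test statistic
p = two_sided_p(z_observed)
print(f"p = {p:.4f}")
# Reject H0 exactly when the p-value falls below alpha.
print("reject H0" if p < alpha else "fail to reject H0")
```

Here p ≈ 0.021 < 0.05, so the null hypothesis is rejected; with a smaller statistic such as z = 1.0 the p-value exceeds α and the null hypothesis is kept.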

So in every test there is a chance that we reject the null hypothesis when it should have been accepted. This is called a type I error, and its probability equals the significance level (α). Similarly, when the null hypothesis is truly false, we might, due to sampling error (pure chance!), observe a test statistic that does not cause us to reject the null hypothesis. This is called a type II error: acceptance of the null hypothesis when it should have been rejected. There are four possible outcomes:
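The claim that the type I error probability equals α can be checked by simulation. The sketch below (all names and parameters are my own choices, assuming a two-sample z-test on normal data with known σ = 1) repeatedly tests two samples drawn from the *same* distribution, so the null hypothesis is true in every trial; the fraction of rejections should then come out close to α = 0.05.

```python
import math
import random

random.seed(42)

ALPHA = 0.05      # significance level = type I error probability
Z_CRIT = 1.96     # two-sided critical value for alpha = 0.05
N = 30            # sample size per group
TRIALS = 10_000   # number of simulated tests

rejections = 0
for _ in range(TRIALS):
    # Both samples come from the SAME distribution, so H0 is true.
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    # z-test for a difference of means with known sigma = 1
    z = (sum(a) / N - sum(b) / N) / math.sqrt(2 / N)
    if abs(z) > Z_CRIT:
        rejections += 1  # type I error: H0 rejected although it is true

rate = rejections / TRIALS
print(f"Observed type I error rate: {rate:.3f}")
```

With enough trials the observed rejection rate settles near 0.05, which is exactly the "reject when we should not" probability described above.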