Two-tailed Z-test

The Z-test is used when you want to compare the means of two large samples (> 30 observations). In a two-sided Z-test, you test the null hypothesis that µ1 = µ2. In contrast, in a one-sided Z-test, you test the null hypothesis µ1 ≥ µ2 or µ1 ≤ µ2.

You need to check the following assumptions before proceeding with the Z-test:

  1. The observations are independent
  2. The samples have the same variance
  3. The Central Limit Theorem holds true (it does if the sample sizes are > 30)

The Z-test relies on the test statistic Z, which is calculated by:

Z = \frac{|\overline{x}_1 - \overline{x}_2|}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}

where \overline{x}_1 and \overline{x}_2 are the sample means, s_1^2 and s_2^2 are the sample variances, and n_1 and n_2 are the sample sizes of sample 1 and 2, respectively.
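Given these definitions, the statistic can be computed directly from summary statistics. A minimal R sketch (the function name z_stat and the numbers in the usage line are hypothetical, for illustration only):

```r
# Z statistic from summary statistics: two means, two variances, two sample sizes
z_stat <- function(m1, m2, s2_1, s2_2, n1, n2) {
  abs(m1 - m2) / sqrt(s2_1 / n1 + s2_2 / n2)
}

# Example: means 1 and 0, both variances 1, both samples of size 100
z_stat(1, 0, 1, 1, 100, 100)  # |1 - 0| / sqrt(0.02) = 7.07
```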

The null hypothesis is rejected when Z > 1.96 or Z > 2.58 at a significance level of α = 0.05 or α = 0.01, respectively. That is, you are 95 % or 99 % certain, respectively, that the null hypothesis can be rejected, i.e. that the means differ.
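Rather than memorising these cut-offs, the two-tailed critical values can be read off the standard normal quantile function; a quick check in R:

```r
# Two-tailed critical Z values: alpha/2 goes in each tail
qnorm(1 - 0.05/2)  # 1.96 for alpha = 0.05
qnorm(1 - 0.01/2)  # 2.58 for alpha = 0.01
```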


You want to test if the monthly salaries of industrial workers differ between China and India.

1. Construct the null hypothesis

H0: the mean salaries in China and India do not differ (µChina = µIndia)

2. Take a random sample of at least 30 salaries of industrial workers from China and India. Calculate the mean (\overline{x}) and variance (s^2) for each sample; in this case n_china = 45 and n_india = 32.

3. Check that the variances are equal
- Perform an F-test: calculate the F statistic

F = \frac{s_{max}^2}{s_{min}^2} = 1.14
- Calculate the degrees of freedom (v)
v_{china} = 45 - 1 = 44
v_{india} = 32 - 1 = 31

- Check the critical value for F at α = 0.05 where v1 = 44 and v2 = 31 in a table of critical F values; the tabulated values bracket it at Fα=0.05 = 1.8–2.01

- Compare the calculated F statistic with Fα=0.05

F = 1.14 < Fα=0.05 ≈ 1.8

- Reject H0 or H1

H0 can't be rejected; the assumption of equal variances holds true.
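Instead of reading the critical value from a table, qf() gives it exactly for these degrees of freedom; a small R check:

```r
# Critical F value at alpha = 0.05 for df1 = 44 and df2 = 31
f_crit <- qf(0.95, df1 = 44, df2 = 31)
f_crit         # approximately 1.8
1.14 < f_crit  # TRUE: the calculated F does not exceed the critical value
```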

4. Calculate the Z statistic:

Z = \frac{|\overline{x}_{china} - \overline{x}_{india}|}{\sqrt{\frac{s_{china}^2}{45} + \frac{s_{india}^2}{32}}} = 12.3
- Look up the critical value for Z at α = 0.05

For critical Z values we don't have to check a table: it is simply Zα=0.05 = 1.96, independent of the degrees of freedom of the samples, because the test relies on the Central Limit Theorem.

5. Compare the calculated Z statistic with Zα=0.05

Z = 12.3 > Zα=0.05 = 1.96

Z is even greater than Zα=0.01 = 2.58
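The same comparison can be expressed as a p-value; a sketch in R:

```r
# Two-tailed p-value for the observed Z statistic
z <- 12.3
p <- 2 * (1 - pnorm(abs(z)))
p  # essentially 0, far below alpha = 0.01
```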

6. Reject H0 or H1

H0 can be rejected; the mean salaries in China and India differ (µChina ≠ µIndia)

7. Interpret the result

We are more than 99 % certain that the salaries of industrial workers in China are higher than in India.

How to do it in R

#Check the assumption of equal variances using F-test
	f.test <- function(var.max, var.min) var.max / var.min

#Function to calculate the Z statistic
	z.test <- function(x1, x2) {
		#Difference between the means divided by the standard error of the difference
		abs(mean(x1) - mean(x2)) / sqrt(var(x1)/length(x1) + var(x2)/length(x2))
	}

Two-tailed Z-test in depth

The theory behind the Z-test relies on the Central Limit Theorem and is quite straightforward.

The Central Limit Theorem says that estimates of a population parameter from a large number of samples, each of size n, conform to a normal distribution. The population parameter can, for example, be the mean or the standard deviation. Other parameters can also be considered, such as the difference between the means of two populations (d).

In the Z-test you want to know if there is a real difference between the means of two populations, µpop1 and µpop2.

As the means calculated from samples are estimates and could therefore differ from the true means (µpop1 and µpop2) due to sampling error, we need to perform a statistical test (a Z-test). We simply need to find out whether the difference between the estimated means is a true difference or due to chance.

If there is no difference between the means of the two populations, then d = µpop1 - µpop2 = 0

Every time you estimate a mean from two populations, you can calculate a difference (d) between the means. Just as in the case with means, a large number of d's estimated from samples conforms to a normal distribution according to the Central Limit Theorem.

If there is no real difference between the populations, i.e. if d = µpop1 - µpop2 = 0, the distribution of d is centered around µ = 0 with standard deviation σ. Due to sampling error, the estimated d from samples deviates from 0, but the variability decreases with sample size. Sounds familiar? Yes, the standard deviation of this distribution is the standard error of d (SEd).
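This sampling distribution of d can be demonstrated by simulation; a sketch assuming, for illustration, two populations with the same mean, standard deviation 1, and the sample sizes from the example (45 and 32):

```r
# Simulate many d's under H0: both populations have mean 0 and sd 1
set.seed(1)
d <- replicate(10000, mean(rnorm(45)) - mean(rnorm(32)))

mean(d)            # close to 0: d is centered on the true difference
sd(d)              # close to the theoretical standard error of d
sqrt(1/45 + 1/32)  # theoretical SEd, about 0.23
```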

Now, this is what you want to find out: does my estimated difference (d) between the means of my two samples belong to a population of d's where µd = 0? That is, a population of d's where there is no real difference between the means and any difference is due to chance.

How do we do that?

We simply calculate the number of standard errors (SEd) between the estimated d and µd = 0.

1. Calculate the distance between the estimated d and µd = 0:

|d - µd| = |d - 0| = |d|

The bars mean that we take the absolute value of the difference, i.e. we ignore any minus sign. This is because we are not interested in which mean is greater than the other, just that there is a difference.

2. Divide this distance by the standard error of d (SEd), where SE_d = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}

3. To calculate the number of standard errors between d and µd = 0 we get:

Z = \frac{|d - 0|}{SE_d}

If the Z value exceeds 1.96, the estimated d is unlikely to have been obtained if there is no difference between the means. In other words, the estimated d is far from 0. The chance is less than 5 % that the estimated d belongs to a population of d's with mean 0.
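The 5 % figure follows directly from the standard normal distribution; a quick check in R:

```r
# Probability of |Z| > 1.96 when the true difference is 0
2 * (1 - pnorm(1.96))  # approximately 0.05
```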

How to produce the graph in R

	#Normal probability density at each value of x (could also use dnorm())
	mnorm <- function(my, sigma, x) {
		p <- numeric(length(x))
		for (i in 1:length(x)) {
			p[i] <- 1/(sigma*sqrt(2*pi)) * exp(-(x[i] - my)^2/(2*sigma^2))
		}
		p
	}

	#Plot the distribution of d centered on my with standard deviation sigma
	mnorm_plot <- function(my, sigma) {
		x <- seq(my - 4*sigma, my + 4*sigma, length.out = 200)
		p <- mnorm(my, sigma, x)
		plot(x, p, ylab="", xlab="Difference between means", type="l",
			las=1, bty="l", yaxt="n")
	}

mnorm_plot(0, 0.35) #The first argument is the difference between means and the second is the standard error