Inferential Statistics: Introduction

In most cases an inferential statistic is used to test some hypothesis. Do groups differ on some outcome variable? Is the difference more than would be expected by chance? Can one factor predict another? You don't need to understand the underlying calculus, but you do need to know which inferential statistic to use and how to interpret it.

Two major sources of error in research

Inferential statistics are used to make generalizations from a sample to a population. There are two sources of error (described in the Sampling module) that may result in a sample's being different from (not representative of) the population from which it is drawn.

These are:

  1. Sampling error - chance, random error
  2. Sample bias - constant error, due to inadequate design

Inferential statistics take into account sampling error. These statistics do not correct for sample bias. That is a research design issue. Inferential statistics only address random error (chance).

p value

The reason for calculating an inferential statistic is to get a p value (p = probability). The p value is the probability of obtaining the observed difference by chance if the samples actually came from the same population with regard to the dependent variable (outcome). Usually, the hypothesis we are testing is that the samples (groups) differ on the outcome. The p value is directly related to the null hypothesis.

The p value determines whether or not we reject the null hypothesis. We use it to judge whether or not we think the null hypothesis is true. The p value provides an estimate of how often we would get the obtained result by chance if, in fact, the null hypothesis were true.
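
As a brief illustration (not part of the original module), the sketch below uses Python with the SciPy library and made-up scores for two hypothetical groups. The p value it reports is the probability of seeing a difference this large by chance if the null hypothesis were true.

    # A minimal sketch with hypothetical data (assumes Python and SciPy are available)
    from scipy import stats

    treatment = [23, 25, 28, 31, 27, 30, 26]   # hypothetical outcome scores
    control = [20, 22, 24, 21, 25, 23, 22]     # hypothetical outcome scores

    # Independent-samples t-test: the inferential statistic and its p value
    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")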


Decision rules - Levels of significance

How small is "small"? Once we get the p value (probability) for an inferential statistic, we need to make a decision. Do we accept or reject the null hypothesis? What p value should we use as a cutoff?

In the behavioral and social sciences, a general pattern is to use either .05 or .01 as the cutoff. The one chosen is called the level of significance. If the probability associated with an inferential statistic is equal to or less than .05, then the result is said to be significant at the .05 level. If the .01 cutoff is used, then the result is significant at the .01 level.

Using the .05 level of significance means that if the null hypothesis is true, we would still get a result like ours 5 times out of 100 (or 1 out of 20) by chance. We are betting that our study is not one of those 5 out of 100. Rejecting or accepting the null hypothesis is a gamble. There is always a possibility that we are making a mistake in rejecting the null hypothesis. This is called a Type I Error - rejecting the null hypothesis when it is true. If we use a .01 cutoff, the chance of a Type I Error is 1 out of 100. With a .05 level of significance, we are taking a bigger gamble: there is a 1/20 (5 out of 100) chance that we are wrong, and that our treatment (or predictor variable) doesn't really matter.
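
To make this concrete, here is a rough simulation (an illustration only, assuming Python with NumPy and SciPy). Both groups are drawn from the same population, so the null hypothesis is true by construction; with a .05 cutoff, roughly 5 out of every 100 simulated studies still come out "significant."

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)   # fixed seed so the sketch is reproducible
    alpha = 0.05
    n_studies = 10000
    rejections = 0

    for _ in range(n_studies):
        # Both groups come from the same population, so H0 is true
        group_a = rng.normal(loc=50, scale=10, size=30)
        group_b = rng.normal(loc=50, scale=10, size=30)
        _, p = stats.ttest_ind(group_a, group_b)
        if p <= alpha:
            rejections += 1   # a Type I Error: rejecting H0 when it is true

    print(f"Proportion of Type I Errors: {rejections / n_studies:.3f}")   # close to .05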

Why would we take the bigger gamble of a .05 rather than a .01 cutoff? Because we don't want to miss discovering a true difference: the stricter the cutoff, the more likely we are to overlook an effect that is really there. There is a tradeoff between overestimating and underestimating chance effects.

You will often see the probability value described as p < .05, meaning that the probability associated with the inferential statistic is less than .05 (less than 5 out of 100).

Notation used with p values:

< = less than
> = greater than
≤ = less than or equal to
≥ = greater than or equal to

When you use a computer program to calculate an inferential statistic (such as a t-test, chi-square, or correlation), the results will show an exact p value (e.g., p = .013). If you calculate by hand using the formulas, you will need to use a table of critical values in order to get p. Instructions are provided in the Methods Manual.
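
The sketch below (hypothetical numbers, Python with SciPy assumed) contrasts the two routes: the exact p value a computer program would report, and the tabled critical value you would compare against when calculating by hand.

    from scipy import stats

    t_obtained = 2.40   # hypothetical t statistic from a hand calculation
    df = 18             # hypothetical degrees of freedom

    # Exact p value (two-tailed), as statistical software would report it
    p_exact = 2 * stats.t.sf(abs(t_obtained), df)
    print(f"exact p = {p_exact:.3f}")

    # Critical value for the .05 level (two-tailed), as found in a table
    t_critical = stats.t.ppf(1 - 0.05 / 2, df)
    print(f"critical t = {t_critical:.3f}")   # the obtained t exceeds it, so p < .05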


Steps for testing hypotheses

  1. Calculate descriptive statistics
  2. Calculate an inferential statistic
  3. Find its probability (p value)
  4. Based on the p value, accept or reject the null hypothesis (H0)
  5. Draw a conclusion
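
As a hypothetical walk-through of these five steps (made-up data, Python with NumPy and SciPy assumed):

    import numpy as np
    from scipy import stats

    group_1 = np.array([12, 15, 14, 10, 13, 16, 12, 14])   # hypothetical data
    group_2 = np.array([18, 17, 15, 20, 19, 16, 18, 17])   # hypothetical data

    # 1. Calculate descriptive statistics
    print("means:", group_1.mean(), group_2.mean())
    print("standard deviations:", group_1.std(ddof=1), group_2.std(ddof=1))

    # 2. Calculate an inferential statistic (independent-samples t-test)
    t_stat, p_value = stats.ttest_ind(group_1, group_2)

    # 3. Find its probability (p value)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # 4. Based on the p value, accept or reject the null hypothesis (H0)
    alpha = 0.05
    reject_h0 = p_value <= alpha

    # 5. Draw a conclusion
    if reject_h0:
        print("Reject H0: the groups differ on the outcome.")
    else:
        print("Accept H0: no significant difference between the groups.")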

Next section: Inferential statistics for continuous, normally distributed outcomes