Introduction to Data Analysis and R: Inferential Statistics

Inferential Statistics

Inferential statistics are the second main category of statistical analyses. They tend to be more complex than descriptive stats, and are mainly used to draw conclusions about the population from which your sample data comes. They are meant to answer the question, "What is the probability of getting a result like my observed data, if the null hypothesis were true?" (McDonald, 2014, p. 29). If that probability is very low, i.e., if it is highly unlikely that you would get data like yours under your null hypothesis, you can reject your null hypothesis. This probability is what the p-value represents. Therefore, unlike with descriptive stats, the concept of statistical significance applies to inferential stats.

Most inferential analyses follow a set of standard assumptions (Wallace & Van Fleet, 2012, p. 313, emphasis mine):

  1. Data are from a normally distributed population; that is, the values conform to a normal [bell-shaped] curve. Note that it is the distribution of the population that is important, not the distribution of the sample.
  2. Data are from a sample. Inferential statistics are not designed for use with census-level data and are frequently undermined by application to overly large datasets.
  3. The sample is random. Random sampling in inferential statistics has the dual benefits of ensuring that the data are representative and ensuring that the sample, like the population, is normally distributed.
  4. Assignment to groups is random. There is no systematic process in use to influence the likelihood of a case being assigned to any particular group such as an experimental group or a control group.
  5. Groups being compared have equal variances. Many inferential statistical processes essentially test the question "Are these groups part of the same population?" That becomes more difficult--although not necessarily impossible--to test when sample variances are not equal.
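
To get a rough sense of how checking some of these assumptions looks in practice, here is a minimal R sketch using simulated data. The vectors group_a and group_b are hypothetical stand-ins for two groups of measurements; the checks touch on assumptions 1 (normality) and 5 (equal variances) using the shapiro.test() and var.test() functions that come with base R.

    # Simulated, hypothetical data: two groups of 30 measurements each
    set.seed(42)
    group_a <- rnorm(30, mean = 50, sd = 10)
    group_b <- rnorm(30, mean = 55, sd = 10)

    # Assumption 1: normality. A quick visual check plus the Shapiro-Wilk test;
    # a small p-value suggests the data depart from a normal distribution.
    hist(group_a)
    shapiro.test(group_a)
    shapiro.test(group_b)

    # Assumption 5: equal variances. The F test compares the two variances;
    # a small p-value suggests the variances are not equal.
    var.test(group_a, group_b)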

Inferential analyses that follow these assumptions are known as parametric analyses. They assume normal distributions that can be summarized by statistical parameters (e.g., mean, variance, and standard deviation). Generally speaking, parametric tests can be applied when you have at least one measurement variable. We will cover some common parametric analyses in the next part of the module.
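
As a preview, here is a minimal sketch of one such parametric test, the two-sample t-test, run on simulated data. The data frame scores and its columns are hypothetical; t.test() is part of base R. The printed output includes the test statistic, the degrees of freedom, and the p-value discussed below.

    # Hypothetical measurement variable ("value") recorded for two groups
    set.seed(1)
    scores <- data.frame(
      group = rep(c("A", "B"), each = 25),
      value = c(rnorm(25, mean = 10), rnorm(25, mean = 12))
    )

    # Two-sample t-test comparing the group means, assuming equal variances
    t.test(value ~ group, data = scores, var.equal = TRUE)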

There are also nonparametric analyses, which do not assume a normal distribution. As you may imagine, nonparametric analyses tend to be vastly more complex than parametric analyses, and, as such, are outside the scope of this module.

A note on interpretation

The entire purpose of inferential statistics is to test hypotheses. We use inferential stats to try to determine how likely it is that the observations we're making are due to random chance. This is the question the p-value is meant to answer.

You're probably familiar with the idea that if the reported p-value is less than 0.05, then our data is statistically significant and we can reject our null hypothesis. This is true up to a point; what a p-value less than 0.05 really means is that, if the null hypothesis were true, there would be less than a 5% chance of getting a result like our observed data. That is unlikely, but not impossible, so there is still a chance that the null hypothesis is actually true and our result is simply due to random chance.

Please note that if you report a p-value greater than 0.05, that does not mean that you should accept the null hypothesis. Just because you don't see a statistically significant difference doesn't mean that there is no difference; the difference may simply be too small for your sample to detect. So, instead of accepting the null hypothesis, you should fail to reject it. It may seem pedantic, but it's an important distinction.
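
In R, that decision rule might look something like the following sketch. The object fit and the simulated data are hypothetical; the p.value element is part of the standard result returned by t.test().

    # Hypothetical test result; in practice, fit would come from your own analysis
    set.seed(1)
    fit <- t.test(rnorm(30, mean = 10), rnorm(30, mean = 12))

    alpha <- 0.05
    if (fit$p.value < alpha) {
      print("Reject the null hypothesis")
    } else {
      print("Fail to reject the null hypothesis")  # note: not "accept"
    }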

Also remember that we're working with probabilities here. Anytime you reject or fail to reject a null hypothesis, there is a chance that you're making a mistake. There are two main types of errors involved with inferential stats: false positives and false negatives. False positives (also called Type I errors) crop up when your analysis leads you to reject the null hypothesis even though it's actually true. False negatives (also called Type II errors) crop up when your analysis leads you to fail to reject the null hypothesis even though it's actually false. Understanding when and why these errors happen is outside the scope of this module, but they are very important to be aware of.
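
A full treatment is beyond this module, but a quick simulation can illustrate the false positive idea: if we run many tests on data where the null hypothesis really is true, about 5% of them will still come out "significant" at the 0.05 level. A rough R sketch, using entirely simulated data:

    # Simulate 10,000 t-tests where both groups come from the same distribution,
    # so the null hypothesis is true every single time
    set.seed(123)
    p_values <- replicate(10000, t.test(rnorm(30), rnorm(30))$p.value)

    # Proportion of false positives at the 0.05 threshold; this comes out close to 0.05
    mean(p_values < 0.05)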