From Sustainability Methods

Revision as of 09:00, 12 February 2021

Data distribution

Examining the data distribution is the most basic and fundamental step in the analysis of any given data set. At the same time, data distribution encompasses some of the most complex concepts in statistics, and these translate into many different steps of analysis. Consequently, without understanding the basics of data distribution, it is next to impossible to understand any statistics down the road. Data distributions are the fundamentals, and we shall often return to them when building up statistics further.

The normal distribution

Discovered by Gauss, it is only fitting that the normal distribution even appeared on the 10 DM banknote.

How wonderful: it is truly remarkable how almost everything that can be measured seems to follow the normal distribution. Overall, the normal distribution is not only the most abundantly occurring, but also the earliest distribution that was known. It follows the premise that in any given dataset the majority of the data lies around a mean value, and only small amounts of the data are found at the extremes.

Most phenomena we can observe follow a normal distribution. The fact that many do not want this to be true is, I think, associated with the feeling that it makes the world seem less complex, which is counterintuitive to many. While I believe that the world can be complex, there are many natural laws that explain many of the phenomena we investigate. The Gaussian normal distribution is such an example. Most things that can be measured in any sense (length, weight, etc.) are normally distributed, meaning that if you measure many different items of the same thing, the data follow a normal distribution.

This is an ideal bell curve with the typical proportions in per cent. The σ sign (sigma) stands for the standard deviation: within the range of -1 to +1 σ you find about 68.2% of your data, within -2 to +2 σ about 95.4% of the data, and so on.
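These shares can be verified directly with R's cumulative distribution function pnorm(), which is part of base R:

```r
# Share of normally distributed data within +/- 1, 2 and 3 standard deviations,
# computed from the cumulative distribution function of the standard normal
pnorm(1) - pnorm(-1)   # ~0.683, i.e. about 68% within -1 to +1 sigma
pnorm(2) - pnorm(-2)   # ~0.954, i.e. about 95.4% within -2 to +2 sigma
pnorm(3) - pnorm(-3)   # ~0.997
```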

The easiest example is the height of people. While there is a gender difference in terms of height, all people who identify as e.g. female have a certain height. Most differ in height from each other, yet there are almost always many of an intermediate height, and only a few very short and a few very tall females within a given population. There are of course exceptions, for instance due to selection biases. The members of a professional basketball team would for instance follow a selection bias, as these players would ideally need to be tall. Within the general population, people's heights follow the normal distribution. The same holds true for weight, and many other things that can be measured.

These five plots show that the sample size does matter: the bigger it gets, the more normally distributed the data will appear.

Most things in their natural state follow a normal distribution. If somebody tells you that something is not normally distributed, this person is either very clever or not very clever. A small sample can prevent you from finding a normal distribution: if you weigh five people you will hardly find a normal distribution, as the sample is obviously too small. While it may seem like a magic trick, it is actually true that many phenomena that can be measured will follow the normal distribution, at least when your sample is large enough. Consequently, much of probabilistic statistics is built on the normal distribution.
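The effect of sample size can be seen in a small simulation. rnorm() draws random values from a normal distribution; the mean and standard deviation used here are arbitrary illustration values:

```r
# Histograms of simulated, normally distributed data at two sample sizes
# (mean = 170 and sd = 10 are arbitrary illustration values)
set.seed(42)
small <- rnorm(5, mean = 170, sd = 10)     # far too small to look normal
large <- rnorm(5000, mean = 170, sd = 10)  # clearly bell-shaped

par(mfrow = c(1, 2))  # two plots side by side
hist(small, main = "n = 5")
hist(large, main = "n = 5000")
```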

This graphic from gapminder.org shows how the male body mass index ranks within all countries of the world. Germany, by the way, is number 32.

The most abundant reason for deviations from the normal distribution is us. We changed the planet and ourselves, creating effects that may change everything, up to the normal distribution. Take weight: today the human population shows a very complex pattern of weight distribution across the globe, and there are many reasons why it does not follow a normal distribution. There is no such thing as a normal weight, yet studies from indigenous communities show a normal distribution of the weights found in their populations. In our wider world, this is clearly different. Yet before we bash the western diet, please remember that never before in the history of humans did we have a steadier stream of calories, which is not all bad.

Skew of distributions

Apart from that, distributions can have different skews. A symmetrical distribution is basically the normal distribution or bell curve that you can see in the picture. But distributions can also be skewed to the left or to the right, depending on how mode, median and mean differ. For the symmetrical normal distribution these three are of course all the same, but for a right-skewed distribution the order is mode < median < mean.

These graphs show you how distributions can differ according to mode, median and mean of the displayed data.
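A quick way to see the skew in R is to compare mean and median. Exponentially distributed values serve here as an arbitrary example of right-skewed data:

```r
# In a right-skewed sample the mean is pulled above the median
set.seed(1)
skewed <- rexp(1000, rate = 1)  # exponential data are skewed to the right
mean(skewed) > median(skewed)   # TRUE for right-skewed data
hist(skewed, breaks = 30)
```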

See Tests for normal distribution to learn how to check whether data is normally distributed.

Detecting the normal distribution
Barplot
Histogram
This is what a boxplot of a normally distributed series of numbers looks like. Consequently, the whiskers have nearly the same length, and the quartiles cover a similar range.

But when is data normally distributed, and how do you recognize it when you have a boxplot in front of you? Or a histogram? The best way to learn this is to look at it. Always keep in mind the ideal picture of the bell curve (you can see it above), especially when you look at histograms.

This barplot (on the left) represents the number of front-seat passengers who were killed or seriously injured annually from 1969 to 1985 in the UK. And here comes the magic trick: if you sort the annual numbers from the lowest to the highest (and slightly lower the resolution), a normal distribution emerges (histogram on the left).

If you would like to know how to create the diagrams you see here in R, the code is right below:

# If you want some general information about the "Seatbelts" dataset, which we
# will have a look at, you can use the ?-operator.
# As "Seatbelts" is a dataset shipped with R, you get a lot of information here.

?Seatbelts

## hint: if you want to see all the datasets that are available in R, just type:

data()


# To have a look at the dataset "Seatbelts" you can use several commands:

## str() to see what data type "Seatbelts" is (e.g. a time series, a matrix, a data frame...)
str(Seatbelts)

## show() - or just type the name of the dataset ("Seatbelts") - to see the table and all the data it contains
show(Seatbelts)
# or
Seatbelts

## summary() to get the most crucial information for each variable: minimum/maximum value, median, mean...
summary(Seatbelts)


# As you saw when you used the str() function, "Seatbelts" is a time series,
# which is not bad per se, but makes it harder to work with. It is therefore
# useful to convert it into a data frame (as.data.frame()) and assign the
# result to a variable, so that we can work further with Seatbelts as a data frame.

seat <- as.data.frame(Seatbelts)


# To choose a single variable of the dataset, we use the '$' operator.
# If we want a barplot of all front-seat passengers who were killed or
# seriously injured:

barplot(seat$front)


# For a histogram:

hist(seat$front)

## To change the resolution of the histogram, you can use the "breaks" argument
## of hist(), which states into how many increments the plot should be divided:

hist(seat$front, breaks = 30)
hist(seat$front, breaks = 100)

# For a boxplot:

boxplot(seat$front)

Non-normal distributions

We found this great overview by Aswath Damodaran

Sometimes the world is not normally distributed. On closer examination, this makes perfect sense under the specific circumstances. It is therefore necessary to understand the reasons why data may not be normally distributed.

The Poisson distribution
This picture shows several possible Poisson distributions. They differ according to lambda, the rate parameter.

Things that can be counted are often not normally distributed, but are instead skewed to the right. While this may seem curious, it actually makes a lot of sense. Take an example that coffee-drinkers may like: how many people do you think drink one or two cups of coffee per day? Quite many, I guess. How many drink 3-4 cups? Fewer, I would say. Now how many drink 10 cups? Only a few, I hope. A similar and maybe healthier example can be found in sports activities. How many people do 30 minutes of sport per day? Quite many, maybe. But how many do 5 hours? Only very few. In phenomena that can be counted, such as sports activities in minutes per day, most people will tend towards a low number of minutes, and few towards a high number. Now here comes the funny surprise: transform data following a Poisson distribution with the decadic logarithm (log), and it will typically approximate the normal distribution. Hence, skewed data can often be transformed to match the normal distribution. While many people refrain from this, it may actually make sense, as in the example of island biogeography. Discovered by MacArthur & Wilson, it is a prominent example of how the log of the number of species and the log of island size are closely related. While this is one of the fundamental basics of ecology, a statistician would have preferred the use of the Poisson distribution.
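A minimal simulated sketch of this idea (the lambda value is an arbitrary illustration; zero counts are dropped because the logarithm of zero is undefined):

```r
# Simulated right-skewed count data and its log transformation
set.seed(7)
counts <- rpois(1000, lambda = 5)  # count data, skewed to the right
counts <- counts[counts > 0]       # log10(0) is undefined, so drop zeros
hist(counts)                       # skewed
hist(log10(counts))                # more symmetric after the transformation
```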

Example for a log transformation
Poisson Education small.png
Poisson Education log small.png

One example of skewed data can be found in the R dataset "swiss", which contains socio-economic indicators for about 50 provinces of Switzerland in 1888. The variable we would like to look at is "Education", which shows the percentage of army draftees whose education went beyond primary school. As you can see in the first diagram, in about 30 provinces no more than 10 percent of the draftees received education beyond primary school.

To obtain a normal distribution (which is useful for many statistical tests), we can use the natural logarithm.

If you would like to know how to conduct an analysis like the one on the left-hand side, the code is right below:

# to get further information about the dataset, you can type

?swiss

# to obtain a histogram of the variable Education

hist(swiss$Education)

# to transform the data series with the natural logarithm, just use log()
# it is also a good idea to assign the result to a variable

log_edu <- log(swiss$Education)
hist(log_edu)

# to check whether the transformed data is normally distributed, you can use
# the Shapiro-Wilk test

shapiro.test(log_edu)

# as the p-value is higher than 0.05, we cannot reject the assumption
# that log_edu is normally distributed

The Pareto distribution

The Pareto distribution can also be applied when we look at how wealth is spread across the world.

Did you know that most people wear 20% of their clothes 80% of the time? This observation can be described by the Pareto distribution. For many phenomena that describe proportions within a given population, you often find that a few account for a lot, and many account for very little. Unfortunately this is often the case for workloads, and we should hope to change this. For such proportions the Pareto distribution is quite relevant. Consequently, it is rooted in income statistics: many people have a small to average income, and few people have a large income. This makes the distribution important for economics, and also for sustainability science.
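Base R has no built-in Pareto sampler, but one can be sketched via inverse-transform sampling. The function name rpareto and the parameter values below are illustrative assumptions, not part of base R:

```r
# Hypothetical helper: draw n values from a Pareto distribution with
# shape alpha and minimum value x_min (inverse-transform sampling)
rpareto <- function(n, alpha = 2, x_min = 1) {
  x_min / runif(n)^(1 / alpha)
}

set.seed(3)
income <- rpareto(10000)          # simulated "incomes"
hist(log10(income), breaks = 30)  # many small values, a few very large ones
mean(income) > median(income)     # heavily right-skewed
```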

Boxplots

A nice way to visualize a dataset is to draw a boxplot. You get a rough overview of how the data is distributed, and moreover you can tell at a glance whether it is normally distributed. But what are the components of a boxplot, and what do they represent?

Boxplot.png

The median marks the exact middle of your data, which is something different from the mean. If you imagine a series of random numbers, e.g. 3, 5, 7, 12, 26, 34, 40, the median is 12. But what if your data series comprises an even number of values, like 1, 6, 19, 25, 26, 55? You take the mean of the two numbers in the middle, which is 22, and hence 22 is your median.
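R's median() function implements exactly this rule:

```r
# median() returns the middle value for an odd number of values...
median(c(3, 5, 7, 12, 26, 34, 40))  # 12

# ...and the mean of the two middle values for an even number of values
median(c(1, 6, 19, 25, 26, 55))     # (19 + 25) / 2 = 22
```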

The box of the boxplot is divided into the lower and the upper quartile. Each quarter contains, obviously, a quarter of the data points. To define the quartiles, you split the data set into two halves (at the median) and calculate the median of each half. In a random series of numbers (6, 7, 14, 15, 21, 43, 76, 81, 87, 89, 95) the median is 43, the lower quartile is 14 and the upper quartile is 87.

The space between the lower quartile line and the upper quartile line (the box) is called the interquartile range (IQR), which is important for defining the length of the whiskers. Data points outside the range of the whiskers are called outliers, which could e.g. hint at measuring errors. To define the end of the upper whisker, you take the value of the upper quartile and add the product 1.5 * IQR.

The boxplot for the series of data: 6, 7, 14, 15, 21, 43, 76, 81, 87, 89, 95

Sticking to our previous example: the IQR is the range between the lower (14) and the upper quartile (87), therefore 73. Multiply 73 by 1.5 and add it to the value of the upper quartile: 87 + 109.5 = 196.5.

For the lower whisker, the procedure is nearly the same. Again, you use the product 1.5 * IQR, but this time you subtract this value from the lower quartile: 14 - 109.5 = -95.5.

And as there are no values outside the range of our whiskers, we have no outliers. Furthermore, the whiskers do not extend to the extremes we calculated above, but instead mark the most extreme actual data points.
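The whisker fences from the example can be reproduced in a few lines, using the hand-calculated quartiles from above:

```r
# Whisker fences for the example series, from the hand-calculated quartiles
q1 <- 14          # lower quartile
q3 <- 87          # upper quartile
iqr <- q3 - q1    # interquartile range: 73
q3 + 1.5 * iqr    # upper fence: 196.5
q1 - 1.5 * iqr    # lower fence: -95.5
```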

# boxplot
# our random series of numbers: 6, 7, 14, 15, 21, 43, 76, 81, 87, 89, 95

boxplot.example <- c(6, 7, 14, 15, 21, 43, 76, 81, 87, 89, 95)
summary(boxplot.example)

# minimum = 6
# maximum = 95
# mean = 48.55
# median = 43
# 1st quartile = 14.5
# 3rd quartile = 84
# don't worry about the difference between the quartile values we calculated
# above and the values calculated by R. R just works a little more precisely
# here; the approach we introduced above is a good approximation.

# with this information we can calculate the interquartile range
IQR(boxplot.example)
# IQR = 69.5

# lastly we can visualize our boxplot using this command
boxplot(boxplot.example)

Simple data visualisation

Scatter Plot

Description Scatter plots can be useful for showing the relationship between two things, because they allow you to encode data simultaneously on a horizontal x-axis and a vertical y-axis to see whether and what relationship exists.

(Cole Nussbaumer Knaflic, Storytelling with Data)

You can create scatter plots if you have a pair of continuous (or numeric) variables.

Examples in R

Example 1: Basic Scatter Plot The basic scatter plot that we will create will be based on a dataset that comes built-in with R, called trees.

The dataset contains data on the girth, height and volume of different trees.

We will first recreate the scatter plot shown in the general structure section above.

Structure of the Data The data frame for the trees dataset looks like this:

Girth Height Volume
8.3 70 10.3
8.6 65 10.3
8.8 63 10.2
... ... ...

Here, the data for all the columns are numeric. So, no further data transformation is necessary.

R Code to Plot the Data

# look at the data
head(trees)

# Plot a basic scatter plot
plot(x = trees$Girth, y = trees$Height)

Result in R This is a basic scatter plot made using R.

Example 2: Better Scatter Plot In this section, we will take the plot from the previous example and customize it by changing the shape and color of the points, and by adding a title and x- and y-axis labels to the plot.

R code to plot the chart

# look at the data
head(trees)

# Create a scatter plot with labels and colors
plot(x=trees$Girth, y=trees$Height, # choose the x- and y-values
     pch=16,                        # choose how points look on the plot
     col='blue',                    # choose the color of the points
     main='Scatter Plot of Girth and Height of Trees', # main header of the plot
     xlab='Tree girth', ylab='Tree height')            # x- and y-axis labels

Result in R Minor customizations make the plot look more professional and understandable.


Related Links

Bar chart

Description (Also known as: column chart)

A bar chart displays quantitative values for different categories. The chart comprises line marks (bars) – not rectangular areas – with the size attribute (length or height) used to represent the quantitative value for each category. - Andy Kirk - Data Visualization

General Structure of Bar Chart

This figure shows the structure of a bar chart.

Example in R

We will recreate the bar chart shown in the section above. The basic bar chart will be based on a dataset built into R called mtcars. The dataset contains data on specifications of different cars; one such specification is the number of gears a given car's transmission has. We will first create a summary table that contains the number of cars for each gear count, and then use that table to create the plot.

Structure of the Data The table that contains information about the frequency of cars for a given number of gears looks like this:

gears freq
3 15
4 12
5 5
... ...

Here, the data in the gears column are categories, and the data in the freq column are numeric.

Example 1: Basic Bar Chart

R code to plot the chart

# get the data
gears <- table(mtcars$gear)

# Plot a basic bar chart with a title and labels
barplot(gears,
        main = "Frequency of Vehicles of each Gear Type",   # title of the plot
        xlab = "Number of Gears", ylab = "Number of Cars")  # labels of the plot

Result in R This is how the output looks in R.

Bar Chart.png

Related Links

Line chart

Description A line chart shows how quantitative values for different categories have changed over time. They are typically structured around a temporal x-axis with equal intervals from the earliest to latest point in time. Quantitative values are plotted using joined-up lines that effectively connect consecutive points positioned along a y-axis. The resulting slopes formed between the two ends of each line provide an indication of the local trends between points in time. As this sequence is extended to plot all values across the time frame it forms an overall line representative of the quantitative change over time story for a single categorical value.

Multiple categories can be displayed in the same view, each represented by a unique line. Sometimes a point (circle/dot) is also used to substantiate the visibility of individual values. The lines used in a line chart will generally be straight. However, sometimes curved line interpolation may be used as a method of estimating values between known data points. This approach can be useful to help emphasise a general trend; while it might slightly compromise the visual accuracy of discrete values, these values are already approximations, so this has little impact.

(Note: the description was based on the book "Data Visualization" by Andy Kirk)

Examples in R

We will first plot the line chart shown in the section above.

The basic line chart that we will plot will be based on a built-in dataset called EuStockMarkets. The data set contains data on the closing stock prices of different European stock indices over the years 1991 to 1998.

To make things easier, we will first transform the built-in dataset into a data frame object. Then, we will use that data frame to create the plot.

Structure of the Data The table that contains information about the different market indices looks like this:

DAX SMI CAC FTSE
1628.75 1678.1 1772.8 2443.6
1613.63 1688.5 1750.5 2460.2
1606.51 1678.6 1718.0 2448.2
... ... ... ...

Here, the data for all the columns are numeric.

Example 1: Basic Line Chart This line chart shows how the DAX index from the table in the previous section develops over time.

R code to plot the chart

# read the data as a data frame
eu_stocks <- as.data.frame(EuStockMarkets)

# Plot a basic line chart
plot(eu_stocks$DAX,  # simply select a stock index
     type='l')       # choose 'l' for line chart

Result in R

Simple line chart.png

As you can see, the plot is very simple. We can enhance the way this plot looks by making a few tweaks as shown in the section below.

Example 2: Better Looking Line Chart Here, we will plot the DAX index again as we did in Example 1. However, the plot will be enhanced to be more informative and aesthetically pleasing.

R code to plot the chart

# get the data
eu_stocks <- as.data.frame(EuStockMarkets)

# Plot a basic line chart
plot(eu_stocks$DAX, # select the data
     type='l',      # choose 'l' for line chart
     col='blue',    # choose the color of the line
     lwd = 2,       # choose the line width 
     main = 'Line Chart of DAX Index (1991-1998)',         # title of the plot
     xlab = 'Time (1991 to 1998)', ylab = 'Prices in EUR') # x- and y-axis labels

Result in R

Line chart.png

You can see that this plot looks much more informative and attractive.

Related Links

Histogram

Description A histogram displays the frequency and distribution for a range of quantitative groups. Whereas bar charts compare quantities for different categories, a histogram technically compares the number of observations across a range of value 'bins', using the size of lines/bars (if the bins relate to values with equal intervals) or the area of rectangles (if the bins have unequal value ranges) to represent the quantitative counts. With the bins arranged in meaningful order (effectively forming ordinal groupings), the resulting shape reveals the overall pattern of the distribution of observations.

- Andy Kirk - Data Visualization

General Structure of Histogram

This is how a histogram looks.

Examples in R

We will first plot the histogram shown in the general structure section above.

The basic histogram that we will plot will be based on a built-in dataset called cars. This dataset contains data on the stopping distances of cars at different speeds.

Since both the values are numeric, we don't need to transform the data in any way in order to plot a histogram.

Structure of the Data The table that contains information about the stopping distance of different cars at a given speed looks like this:

speed dist
4 2
4 10
7 4
7 22
8 16
9 10
... ...

Here, the data for both speed and dist columns are numeric.

Example 1: Basic Histogram (with speed variable)

R code to plot the chart

# data that we are going to use
View(cars)

# Plot a basic histogram
hist(cars$speed,
     main = "Histogram for speed of cars", # main title
     xlab = "Speed") # x-axis label

Result in R

Simple Histogram.png

Example 2: Better looking Histogram (with dist variable)

R code to plot the chart

# data that we are going to use
View(cars)

# Plot a better looking histogram
hist(cars$dist,
     breaks = 15, # define the number of bins you want in the histogram
     col = 'seagreen', # define the color of the bars in the histogram
     main = "Histogram for stopping distance of cars", # main title
     xlab = "Stopping Distance") # x-axis label

Result in R

This is a better looking histogram.

Related Links

A matter of probability

The most common example explaining probability is rolling the dice.

Probability indicates the likelihood that something will occur. Typically, probabilities are represented by a number between zero and one, where one indicates that an event will certainly occur, while zero indicates that the event cannot occur.

The concept of probability goes way back to Arab mathematicians and was initially strongly associated with cryptography. With rising recognition of the preconditions that need to be met in order to discuss probability, concepts such as evidence, validity and transferability became associated with probabilistic thinking. Probability also played a role in games, most importantly rolling dice. With the rise of the Enlightenment, many mathematical underpinnings of probability were explored, most notably by the mathematician Jacob Bernoulli. Gauss achieved a real breakthrough with the discovery of the normal distribution, which made it feasible to link the sample size of observations with an understanding of how plausible these observations were. Again building on Sir Francis Bacon, the theory of probability reached its final breakthrough once it was applied in statistical hypothesis testing. It is important to note that this pushed modern statistics towards an understanding through the lens of so-called frequentist statistics. This line of thinking dominates up until today, and is widely built on repeated samples to understand the distribution of probabilities across a phenomenon.

Another simple example of calculating probability, which you have probably also discussed in school, is flipping a coin. Here there are only two options: heads or tails.
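The coin flip can be simulated in a few lines of R; with many flips the observed share of heads approaches the theoretical probability of 0.5:

```r
# Simulating coin flips: the share of heads approaches 0.5 as the number
# of flips grows (law of large numbers)
set.seed(2)
flips <- sample(c("heads", "tails"), size = 10000, replace = TRUE)
mean(flips == "heads")  # close to 0.5
```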

Centuries ago, Thomas Bayes proposed a dramatically different approach, in which even an imperfect or small sample can serve as the basis for statistical inference. Very crudely put, the two approaches start at exactly opposite ends. While frequentist statistics demand preconditions such as a sufficient sample size and a normal distribution for specific statistical tests, Bayesian statistics build on the existing sample: all calculations are based on what is already there. Experts may excuse my dramatic simplification, but one could say that frequentist statistics are top-down thinking, while Bayesian statistics work bottom-up. The history of modern science is widely built on frequentist statistics, which includes such approaches as methodological design, sampling density and replicates, and diverse statistical tests. It is nothing short of a miracle that Bayes proposed the theoretical foundation of the theorem named after him more than 250 years ago. Only with the rise of modern computers was this theory explored deeply, and it now builds the foundation of branches of data science and machine learning. Followers of the two approaches are also often coined objectivists (frequentist probability) and subjectivists (Bayes' theorem).

Another perspective on the two approaches can be built around the question of whether we design studies or whether we base our analysis on the data we happen to have. This debate is the basis for the deeply entrenched conflicts that exist in statistics up until today, and was already the basis for the conflicts between Pearson and Fisher. From an epistemological perspective, this can be associated with the question of inductive versus deductive reasoning, although many statisticians might not be too keen to explore this relation. While probability today can be seen as one of the core foundations of statistical testing, probability as such is increasingly criticised. It would exceed this chapter to discuss this in depth, but let me just highlight that without understanding probability, much of the scientific literature building on quantitative methods is hard to understand. What is important to note is that probability has trouble considering Occam's razor. This is related to the fact that probability can deal well with the likelihood of an event occurring, but it widely ignores the complexity that can influence such a likelihood. Modern statistics explores this thought further, but let us just realise here: without learning probability we would have trouble reading the contemporary scientific literature.

The Gini coefficient is a good example for comparing income distributions across different countries.

Probability can best be explained with the normal distribution, which basically tells us through probability how a certain value adds to an array of values. Take the example of the height of people, or more specifically of people who define themselves as male. Within a given population or country, these have an average height. In other words, when you are part of this population you have the highest chance of having this height. You have a slightly lower chance of being slightly shorter or taller than the average, and a very small chance of being much shorter or much taller than the average. Hence the distribution of height follows a normal distribution, and this normal distribution can be broken down into probabilities. In addition, such a distribution has a variance, and variances can be compared with each other using a so-called F-test. Take the heights of people who define themselves as male, and compare them with the heights of people who define themselves as female from the same population. You may realise that in most larger populations these two are comparable. This is quite relevant when you want to compare income distributions between different countries: many countries have different average incomes, but the spread around the average, from the very poor to the filthy rich, can still be compared. For this, the F-test is quite helpful.
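Such a variance comparison can be sketched in R with var.test(), which performs an F-test. The sample sizes, means and standard deviations below are arbitrary illustration values, not real survey data:

```r
# F-test comparing the variances of two simulated height samples
# (all parameters are arbitrary illustration values)
set.seed(5)
heights_a <- rnorm(200, mean = 180, sd = 7)
heights_b <- rnorm(200, mean = 167, sd = 7)

# the p-value tells us whether the two variances differ significantly
var.test(heights_a, heights_b)
```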

External links

Videos

Data Distribution: A crash course

The normal distribution: An explanation

Skewness: A quick explanation

The Poisson distribution: A mathematical explanation

The Pareto Distribution: Some real life examples

The Boxplot: A quick example

Probability: An Introduction

Bayes theorem: A detailed explanation

F test: An example calculation

Articles

Probability Distributions: 6 common distributions you should know

Distributions: A list of Statistical Distributions

Normal Distribution: The History

The Normal Distribution: Detailed Explanation

The Normal Distributions: Real Life Examples

The Normal Distribution: A word on sample size

The weight of nations: How body weight is distributed across the world

Non normal distributions: A list

Reasons for non normal distributions: An explanation

Different distributions: An overview by Aswath Damodaran, S.61

The Poisson Distribution: The history

The Poisson Process: A very detailed explanation with real life examples

The Pareto Distribution: An explanation

The pareto principle and wealth inequality: An example from the US

History of Probability: An Overview

Frequentist vs. Bayesian Approaches in Statistics: A comparison

Bayesian Statistics: An example from the wizarding world

Probability and the Normal Distribution: A detailed presentation

Compare your income: A tool by the OECD

F test: An example in R


The author of this entry is Henrik von Wehrden.