Psychological Tests and Reliability (LT8)

Introduction

So far in this course we have used many explanatory and outcome variables in our workshops and labs. Overwhelmingly, the most common form of variable in psychology is self-report. As an example, the World Attitudes Survey is full of self-reported information. Many experiments and field studies use self-report to measure things like emotions, personality, and psychopathology. It is therefore important that we have an appreciation of how these self-report measures are constructed, and of some key considerations required to ensure that they are reliable (i.e., consistent). This week we will cover this topic.

Packages

library(tidyverse)
library(car)
library(psych)

Loading the personality data

Let's load the bigfive dataframe from the survey we just completed. You can do this by clicking the file in the LT8 Lab folder or by trying the code below:

bigfive <- read_csv("C:/Users/CURRANT/Dropbox/Work/LSE/PB130/LT8/Lab/bigfive.csv")
## 
## -- Column specification ------------------------------------------------
## cols(
##   .default = col_double(),
##   `Event Index` = col_character(),
##   `UTC Date` = col_character(),
##   `Local Date` = col_character(),
##   `Tree Node Key` = col_character(),
##   `Repeat Key` = col_logical(),
##   `Participant Public ID` = col_character(),
##   `Participant Starting Group` = col_logical(),
##   `Participant Status` = col_character(),
##   `Participant Completion Code` = col_logical(),
##   `Participant External Session ID` = col_logical(),
##   `Participant Device Type` = col_character(),
##   `Participant Device` = col_character(),
##   `Participant OS` = col_character(),
##   `Participant Browser` = col_character(),
##   `Participant Monitor Size` = col_character(),
##   `Participant Viewport Size` = col_character(),
##   Checkpoint = col_logical(),
##   `Task Name` = col_character()
## )
## i Use `spec()` for the full column specifications.
bigfive
## # A tibble: 59 x 115
##    `Event Index` `UTC Timestamp` `UTC Date` `Local Timestam~ `Local Timezone`
##    <chr>                   <dbl> <chr>                 <dbl>            <dbl>
##  1 1               1580000000000 11/03/202~    1580000000000                0
##  2 1               1580000000000 11/03/202~    1580000000000                0
##  3 1               1580000000000 11/03/202~    1580000000000                0
##  4 1               1580000000000 11/03/202~    1580000000000                0
##  5 1               1580000000000 11/03/202~    1580000000000                0
##  6 1               1580000000000 11/03/202~    1580000000000                0
##  7 1               1580000000000 11/03/202~    1580000000000                0
##  8 1               1580000000000 11/03/202~    1580000000000                0
##  9 1               1580000000000 11/03/202~    1580000000000                0
## 10 1               1580000000000 11/03/202~    1580000000000                0
## # ... with 49 more rows, and 110 more variables: `Local Date` <chr>,
## #   `Experiment ID` <dbl>, `Experiment Version` <dbl>, `Tree Node Key` <chr>,
## #   `Repeat Key` <lgl>, `Schedule ID` <dbl>, `Participant Public ID` <chr>,
## #   `Participant Private ID` <dbl>, `Participant Starting Group` <lgl>,
## #   `Participant Status` <chr>, `Participant Completion Code` <lgl>,
## #   `Participant External Session ID` <lgl>, `Participant Device Type` <chr>,
## #   `Participant Device` <chr>, `Participant OS` <chr>, `Participant
## #   Browser` <chr>, `Participant Monitor Size` <chr>, `Participant Viewport
## #   Size` <chr>, Checkpoint <lgl>, `Task Name` <chr>, `Task Version` <dbl>,
## #   e1 <dbl>, `e1-quantised` <dbl>, a1r <dbl>, `a1r-quantised` <dbl>, c1 <dbl>,
## #   `c1-quantised` <dbl>, n1 <dbl>, `n1-quantised` <dbl>, o1 <dbl>,
## #   `o1-quantised` <dbl>, e2r <dbl>, `e2r-quantised` <dbl>, a2 <dbl>,
## #   `a2-quantised` <dbl>, c2r <dbl>, `c2r-quantised` <dbl>, n2r <dbl>,
## #   `n2r-quantised` <dbl>, o2 <dbl>, `o2-quantised` <dbl>, e3 <dbl>,
## #   `e3-quantised` <dbl>, a3r <dbl>, `a3r-quantised` <dbl>, c3 <dbl>,
## #   `c3-quantised` <dbl>, n3 <dbl>, `n3-quantised` <dbl>, o3 <dbl>,
## #   `o3-quantised` <dbl>, e4 <dbl>, `e4-quantised` <dbl>, a4 <dbl>,
## #   `a4-quantised` <dbl>, c4r <dbl>, `c4r-quantised` <dbl>, n4 <dbl>,
## #   `n4-quantised` <dbl>, o4 <dbl>, `o4-quantised` <dbl>, e5r <dbl>,
## #   `e5r-quantised` <dbl>, a5 <dbl>, `a5-quantised` <dbl>, c5r <dbl>,
## #   `c5r-quantised` <dbl>, n5r <dbl>, `n5r-quantised` <dbl>, o5 <dbl>,
## #   `o5-quantised` <dbl>, e6 <dbl>, `e6-quantised` <dbl>, a6r <dbl>,
## #   `a6r-quantised` <dbl>, c6 <dbl>, `c6-quantised` <dbl>, n6 <dbl>,
## #   `n6-quantised` <dbl>, o6 <dbl>, `o6-quantised` <dbl>, e7r <dbl>,
## #   `e7r-quantised` <dbl>, a7 <dbl>, `a7-quantised` <dbl>, c7 <dbl>,
## #   `c7-quantised` <dbl>, n7r <dbl>, `n7r-quantised` <dbl>, o7r <dbl>,
## #   `o7r-quantised` <dbl>, e8 <dbl>, `e8-quantised` <dbl>, a8r <dbl>,
## #   `a8r-quantised` <dbl>, c8 <dbl>, `c8-quantised` <dbl>, n8 <dbl>,
## #   `n8-quantised` <dbl>, o8 <dbl>, ...

Cleaning: Recoding items

The first step when working with psychological test data like this is to identify and then recode any reverse-scored items. Recall from the lecture that some tools include reverse items to mitigate the impact of acquiescence. Well, this is true of the Big 5 questionnaire. There are several reverse items for each subscale, so we need to recode them so that 5 becomes 1, 4 becomes 2, and so on. Let's do that now with the recode function found in the car package:

# first filter out rows with no item responses (NAs)
bigfive <- 
  bigfive %>% 
  filter(!is.na(e1))

# note I specify car:: here because both car and dplyr (part of the tidyverse) have a recode function, and I want R to use the car version
bigfive$a1 <- car::recode(bigfive$a1r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$c2 <- car::recode(bigfive$c2r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$e2 <- car::recode(bigfive$e2r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$n2 <- car::recode(bigfive$n2r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$a3 <- car::recode(bigfive$a3r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$c4 <- car::recode(bigfive$c4r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$e5 <- car::recode(bigfive$e5r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$c5 <- car::recode(bigfive$c5r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$n5 <- car::recode(bigfive$n5r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$a6 <- car::recode(bigfive$a6r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$e7 <- car::recode(bigfive$e7r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$n7 <- car::recode(bigfive$n7r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$o7 <- car::recode(bigfive$o7r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$a8 <- car::recode(bigfive$a8r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$o9 <- car::recode(bigfive$o9r, "1=5; 2=4; 3=3; 4=2; 5=1")
bigfive$c9 <- car::recode(bigfive$c9r, "1=5; 2=4; 3=3; 4=2; 5=1")

# select out the items

bigfive <- select(bigfive, e1, e2, e3, e4, e5, e6, e7, e8, c1, c2, c3, c4, c5, c6, c7, c8, c9, o1, o2, o3, o4, o5, o6, o7, o8, o9, o10, a1, a2, a3, a4, a5, a6, a7, a8, a9, n1, n2, n3, n4, n5, n6, n7, n8)
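
As an aside, the sixteen recode() calls above can be written more compactly. The following is just a sketch (you would run it in place of those calls, before the select() step, and it assumes a reasonably recent dplyr): because the items use a 1 to 5 response scale, reversing a score is simply 6 minus the score, so we can reverse every "r" item in one go and then drop the suffix.

# reverse-scored items, as listed in the recode() calls above
reversed <- c("a1r", "c2r", "e2r", "n2r", "a3r", "c4r", "e5r", "c5r",
              "n5r", "a6r", "e7r", "n7r", "o7r", "a8r", "o9r", "c9r")

bigfive <- bigfive %>%
  mutate(across(all_of(reversed), ~ 6 - .x)) %>%      # reverse the scores (6 - x)
  rename_with(~ sub("r$", "", .x), all_of(reversed))  # a1r -> a1, c2r -> c2, etc.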

Cleaning: Reliability analysis

The next step after we have recoded the items is to check that the scales are reliable. As we saw in the lecture, we can calculate the reliability of the Big 5 scales using Cronbach's alpha, which is the ratio of true-score variance to the total variance in the measure. It is calculated as:

r = Vt/V

where Vt is the true-score variance and V is the overall variance in the measure.

We can compute Cronbach's alpha using the alpha function in the psych package. Let's do that now for each of our subscales: Openness, Extroversion, Neuroticism, Agreeableness, and Conscientiousness.

extroversion <- select(bigfive, e1, e2, e3, e4, e5, e6, e7, e8)
neuroticism <- select(bigfive, n1, n2, n3, n4, n5, n6, n7, n8)
openness <- select(bigfive, o1, o2, o3, o4, o5, o6, o7, o8, o9, o10)
conscientiousness <- select(bigfive, c1, c2, c3, c4, c5, c6, c7, c8, c9)
agreeableness <- select(bigfive, a1, a2, a3, a4, a5, a6, a7, a8, a9)
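
Before we run the built-in function, it is worth seeing how the formula above maps onto code. The following is only a sketch to make the idea concrete (the psych package does all of this, and much more, for us): for k items, alpha estimates the true-score-to-total-variance ratio from the item covariance matrix as k/(k - 1) multiplied by (1 - sum of the item variances / variance of the total score).

# hand-computing Cronbach's alpha from the item covariance matrix (a sketch only)
cronbach <- function(items) {
  C <- cov(items, use = "pairwise.complete.obs")  # item variances and covariances
  k <- ncol(items)                                # number of items
  (k / (k - 1)) * (1 - sum(diag(C)) / sum(C))     # alpha formula
}

cronbach(neuroticism)

The value should be very close to the raw_alpha that alpha() reports below. Now let's run the full reliability analysis for each subscale: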

alpha(neuroticism)
## Warning in alpha(neuroticism): Some items were negatively correlated with the total scale and probably 
## should be reversed.  
## To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( n6 ) were negatively correlated with the total scale and 
## probably should be reversed.  
## To do this, run the function again with the 'check.keys=TRUE' option
## 
## Reliability analysis   
## Call: alpha(x = neuroticism)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N   ase mean  sd median_r
##       0.82      0.82    0.93      0.37 4.6 0.032  2.9 0.7     0.42
## 
##  lower alpha upper     95% confidence boundaries
## 0.76 0.82 0.89 
## 
##  Reliability if an item is dropped:
##    raw_alpha std.alpha G6(smc) average_r S/N alpha se var.r med.r
## n1      0.82      0.82    0.91      0.40 4.6    0.034 0.094  0.49
## n2      0.81      0.81    0.89      0.38 4.4    0.034 0.106  0.49
## n3      0.79      0.78    0.92      0.34 3.6    0.040 0.105  0.38
## n4      0.75      0.75    0.86      0.30 3.0    0.047 0.090  0.38
## n5      0.77      0.77    0.92      0.33 3.4    0.044 0.115  0.38
## n6      0.87      0.87    0.95      0.49 6.7    0.025 0.054  0.56
## n7      0.80      0.79    0.92      0.35 3.8    0.038 0.105  0.42
## n8      0.79      0.78    0.88      0.33 3.5    0.040 0.083  0.38
## 
##  Item statistics 
##     n raw.r std.r r.cor r.drop mean   sd
## n1 58  0.52  0.54 0.533  0.392  2.1 0.89
## n2 58  0.61  0.60 0.599  0.474  2.8 1.05
## n3 58  0.77  0.77 0.744  0.680  3.2 0.96
## n4 58  0.91  0.92 0.941  0.856  3.4 1.17
## n5 58  0.83  0.82 0.774  0.728  2.9 1.39
## n6 58  0.20  0.18 0.069  0.022  2.6 1.03
## n7 58  0.70  0.72 0.685  0.607  3.1 0.90
## n8 58  0.78  0.80 0.813  0.710  3.1 0.87
## 
## Non missing response frequency for each item
##       1    2    3    4    5 miss
## n1 0.26 0.47 0.19 0.09 0.00    0
## n2 0.00 0.55 0.16 0.21 0.09    0
## n3 0.00 0.29 0.31 0.31 0.09    0
## n4 0.00 0.34 0.09 0.36 0.21    0
## n5 0.28 0.00 0.45 0.10 0.17    0
## n6 0.09 0.48 0.29 0.05 0.09    0
## n7 0.00 0.28 0.45 0.19 0.09    0
## n8 0.00 0.24 0.50 0.17 0.09    0
alpha(extroversion)
## 
## Reliability analysis   
## Call: alpha(x = extroversion)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N   ase mean   sd median_r
##       0.81      0.82    0.94      0.37 4.7 0.038  3.2 0.62     0.41
## 
##  lower alpha upper     95% confidence boundaries
## 0.73 0.81 0.88 
## 
##  Reliability if an item is dropped:
##    raw_alpha std.alpha G6(smc) average_r S/N alpha se var.r med.r
## e1      0.79      0.80    0.91      0.36 4.0    0.042 0.102  0.44
## e2      0.77      0.80    0.93      0.36 4.0    0.046 0.103  0.42
## e3      0.75      0.78    0.92      0.34 3.6    0.050 0.097  0.41
## e4      0.77      0.80    0.92      0.36 4.0    0.046 0.085  0.35
## e5      0.75      0.77    0.89      0.32 3.3    0.049 0.104  0.35
## e6      0.80      0.82    0.92      0.39 4.4    0.040 0.097  0.44
## e7      0.77      0.79    0.91      0.36 3.9    0.046 0.104  0.41
## e8      0.85      0.86    0.94      0.47 6.3    0.028 0.038  0.44
## 
##  Item statistics 
##     n raw.r std.r r.cor r.drop mean   sd
## e1 58  0.66  0.70  0.71  0.594  3.4 0.50
## e2 58  0.72  0.70  0.66  0.601  2.8 0.96
## e3 58  0.80  0.78  0.76  0.689  3.4 1.16
## e4 58  0.73  0.70  0.69  0.637  3.5 0.80
## e5 58  0.84  0.87  0.88  0.758  2.9 1.00
## e6 58  0.64  0.61  0.57  0.458  3.6 1.23
## e7 58  0.75  0.73  0.70  0.660  2.9 0.82
## e8 58  0.22  0.26  0.22  0.021  3.2 0.99
## 
## Non missing response frequency for each item
##       1    2    3    4    5 miss
## e1 0.00 0.00 0.59 0.41 0.00    0
## e2 0.12 0.22 0.41 0.24 0.00    0
## e3 0.00 0.33 0.12 0.34 0.21    0
## e4 0.00 0.05 0.52 0.29 0.14    0
## e5 0.00 0.50 0.16 0.29 0.05    0
## e6 0.09 0.09 0.28 0.28 0.28    0
## e7 0.00 0.38 0.33 0.29 0.00    0
## e8 0.00 0.31 0.26 0.34 0.09    0
alpha(openness)
## Warning in alpha(openness): Some items were negatively correlated with the total scale and probably 
## should be reversed.  
## To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( o6 o10 ) were negatively correlated with the total scale and 
## probably should be reversed.  
## To do this, run the function again with the 'check.keys=TRUE' option
## 
## Reliability analysis   
## Call: alpha(x = openness)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N   ase mean   sd median_r
##       0.58      0.64    0.91      0.15 1.8 0.081  3.6 0.46     0.14
## 
##  lower alpha upper     95% confidence boundaries
## 0.42 0.58 0.74 
## 
##  Reliability if an item is dropped:
##     raw_alpha std.alpha G6(smc) average_r  S/N alpha se var.r med.r
## o1       0.50      0.57    0.88     0.127 1.31    0.096  0.14 0.129
## o2       0.60      0.67    0.89     0.181 1.99    0.077  0.14 0.171
## o3       0.47      0.54    0.88     0.116 1.18    0.103  0.14 0.129
## o4       0.44      0.55    0.85     0.118 1.21    0.108  0.15 0.129
## o5       0.43      0.51    0.84     0.105 1.06    0.113  0.13 0.095
## o6       0.68      0.74    0.92     0.236 2.79    0.064  0.12 0.240
## o7       0.62      0.66    0.88     0.180 1.97    0.073  0.13 0.139
## o8       0.44      0.49    0.84     0.097 0.97    0.108  0.13 0.087
## o9       0.61      0.67    0.89     0.186 2.06    0.077  0.15 0.240
## o10      0.60      0.66    0.91     0.177 1.93    0.074  0.15 0.202
## 
##  Item statistics 
##      n raw.r std.r r.cor  r.drop mean   sd
## o1  58  0.61  0.68  0.65  0.4679  3.6 0.84
## o2  58  0.17  0.27  0.24  0.0014  4.3 0.75
## o3  58  0.70  0.75  0.74  0.5632  4.0 0.97
## o4  58  0.76  0.74  0.75  0.6491  3.7 0.97
## o5  58  0.82  0.84  0.85  0.7333  3.2 0.93
## o6  58 -0.02 -0.14 -0.22 -0.2509  3.8 1.09
## o7  58  0.17  0.28  0.26 -0.0403  3.0 0.96
## o8  58  0.85  0.90  0.92  0.7955  4.1 0.73
## o9  58  0.30  0.24  0.21  0.0499  2.9 1.15
## o10 58  0.44  0.31  0.23  0.1463  3.4 1.41
## 
## Non missing response frequency for each item
##        1    2    3    4    5 miss
## o1  0.00 0.09 0.41 0.36 0.14    0
## o2  0.00 0.00 0.17 0.34 0.48    0
## o3  0.00 0.05 0.29 0.22 0.43    0
## o4  0.00 0.05 0.50 0.14 0.31    0
## o5  0.00 0.28 0.31 0.34 0.07    0
## o6  0.00 0.14 0.29 0.21 0.36    0
## o7  0.05 0.26 0.40 0.24 0.05    0
## o8  0.00 0.00 0.22 0.47 0.31    0
## o9  0.05 0.38 0.28 0.16 0.14    0
## o10 0.17 0.12 0.07 0.41 0.22    0
alpha(conscientiousness)
## 
## Reliability analysis   
## Call: alpha(x = conscientiousness)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N   ase mean   sd median_r
##       0.82      0.85    0.97      0.38 5.5 0.032  3.8 0.59     0.44
## 
##  lower alpha upper     95% confidence boundaries
## 0.76 0.82 0.89 
## 
##  Reliability if an item is dropped:
##    raw_alpha std.alpha G6(smc) average_r S/N alpha se var.r med.r
## c1      0.79      0.81    0.93      0.35 4.2    0.039 0.083  0.44
## c2      0.87      0.88    0.97      0.48 7.4    0.024 0.040  0.55
## c3      0.80      0.82    0.90      0.36 4.5    0.037 0.080  0.39
## c4      0.82      0.84    0.94      0.39 5.2    0.034 0.071  0.44
## c5      0.77      0.80    0.91      0.34 4.1    0.046 0.078  0.38
## c6      0.80      0.82    0.96      0.37 4.7    0.037 0.077  0.39
## c7      0.79      0.82    0.95      0.37 4.6    0.038 0.074  0.33
## c8      0.82      0.84    0.94      0.40 5.4    0.034 0.080  0.50
## c9      0.78      0.81    0.95      0.34 4.2    0.043 0.074  0.33
## 
##  Item statistics 
##     n raw.r std.r r.cor r.drop mean   sd
## c1 58  0.80  0.81  0.82  0.740  4.3 0.70
## c2 58  0.23  0.20  0.12  0.033  3.4 1.05
## c3 58  0.71  0.75  0.76  0.652  4.5 0.60
## c4 58  0.65  0.60  0.59  0.487  3.5 1.20
## c5 58  0.88  0.85  0.86  0.806  3.5 1.20
## c6 58  0.68  0.71  0.69  0.592  4.2 0.70
## c7 58  0.73  0.73  0.72  0.627  3.7 0.90
## c8 58  0.46  0.55  0.54  0.367  4.1 0.57
## c9 58  0.84  0.83  0.81  0.767  3.2 0.97
## 
## Non missing response frequency for each item
##       1    2    3    4    5 miss
## c1 0.00 0.00 0.14 0.45 0.41    0
## c2 0.00 0.28 0.24 0.33 0.16    0
## c3 0.00 0.00 0.05 0.38 0.57    0
## c4 0.09 0.10 0.29 0.29 0.22    0
## c5 0.00 0.31 0.14 0.28 0.28    0
## c6 0.00 0.00 0.17 0.48 0.34    0
## c7 0.00 0.09 0.34 0.36 0.21    0
## c8 0.00 0.00 0.14 0.67 0.19    0
## c9 0.00 0.26 0.47 0.14 0.14    0
alpha(agreeableness)
## 
## Reliability analysis   
## Call: alpha(x = agreeableness)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N   ase mean   sd median_r
##       0.82      0.82    0.93      0.34 4.5 0.034  4.1 0.54     0.38
## 
##  lower alpha upper     95% confidence boundaries
## 0.75 0.82 0.89 
## 
##  Reliability if an item is dropped:
##    raw_alpha std.alpha G6(smc) average_r S/N alpha se var.r med.r
## a1      0.77      0.77    0.89      0.30 3.4    0.045 0.084  0.30
## a2      0.84      0.84    0.92      0.40 5.4    0.029 0.069  0.47
## a3      0.79      0.79    0.91      0.32 3.8    0.040 0.080  0.36
## a4      0.76      0.77    0.90      0.29 3.3    0.047 0.066  0.34
## a5      0.78      0.79    0.90      0.31 3.7    0.043 0.079  0.34
## a6      0.82      0.82    0.93      0.36 4.4    0.036 0.079  0.41
## a7      0.81      0.80    0.93      0.33 4.0    0.038 0.091  0.41
## a8      0.79      0.79    0.92      0.32 3.7    0.041 0.089  0.36
## a9      0.84      0.84    0.91      0.39 5.1    0.030 0.073  0.47
## 
##  Item statistics 
##     n raw.r std.r r.cor r.drop mean   sd
## a1 58  0.83  0.82  0.83   0.76  3.7 0.89
## a2 58  0.30  0.32  0.29   0.14  4.2 0.81
## a3 58  0.69  0.71  0.69   0.60  4.1 0.74
## a4 58  0.87  0.86  0.87   0.80  3.9 1.00
## a5 58  0.77  0.75  0.74   0.67  4.2 0.95
## a6 58  0.59  0.54  0.48   0.43  3.7 0.98
## a7 58  0.61  0.65  0.60   0.52  4.6 0.59
## a8 58  0.74  0.73  0.71   0.65  4.2 0.77
## a9 58  0.35  0.38  0.36   0.20  3.9 0.80
## 
## Non missing response frequency for each item
##       2    3    4    5 miss
## a1 0.07 0.41 0.31 0.21    0
## a2 0.09 0.00 0.59 0.33    0
## a3 0.05 0.05 0.60 0.29    0
## a4 0.10 0.22 0.33 0.34    0
## a5 0.12 0.00 0.45 0.43    0
## a6 0.21 0.05 0.59 0.16    0
## a7 0.00 0.05 0.31 0.64    0
## a8 0.05 0.05 0.52 0.38    0
## a9 0.09 0.09 0.62 0.21    0
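
Notice the warnings for neuroticism and openness: alpha() reports that n6, o6, and o10 correlate negatively with their scale totals. Following the warning's own suggestion, you can re-run those scales with check.keys = TRUE, which reverses any such items automatically before computing alpha. Treat this as a diagnostic check rather than a fix, and ask whether those items should have been reverse-scored in the first place.

alpha(neuroticism, check.keys = TRUE)
alpha(openness, check.keys = TRUE)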

Are the scales reliable? Use Nunnally's criterion of 0.70 to answer.
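
If you want a quick programmatic check against that criterion, the following sketch pulls raw_alpha out of each alpha() result (alpha() returns a list whose total element holds the alpha estimates):

scales <- list(extroversion = extroversion, neuroticism = neuroticism,
               openness = openness, conscientiousness = conscientiousness,
               agreeableness = agreeableness)
sapply(scales, function(x) alpha(x)$total$raw_alpha >= 0.70)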

Cleaning: Calculating variables

Once we have established that the Big 5 scales are reliable, we can move on to aggregating the items to create the variables for each subscale. We can easily do this with the mutate function, as we saw at the beginning of the first semester.

bigfive <- bigfive %>%
  mutate(neuroticism = (n1 + n2 + n3 + n4 + n5 + n6 + n7 + n8)/8)
bigfive <- bigfive %>%
  mutate(openness = (o1 + o2 + o3 + o4 + o5 + o6 + o7 + o8 + o9 + o10)/10)
bigfive <- bigfive %>%
  mutate(extroversion = (e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8)/8)
bigfive <- bigfive %>%
  mutate(conscientiousness = (c1 + c2 + c3 + c4 + c5 + c6 + c7 + c8 + c9)/9)
bigfive <- bigfive %>%
  mutate(agreeableness = (a1 + a2 + a3 + a4 + a5 + a6 + a7 + a8 + a9)/9)
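
As with the recoding, these five mutate() calls can be collapsed into one. This is a sketch you could run in their place; it assumes the items sit in the column order created by the select() step earlier, and uses rowMeans() over each block of items.

bigfive <- bigfive %>%
  mutate(
    neuroticism       = rowMeans(across(n1:n8)),
    openness          = rowMeans(across(o1:o10)),
    extroversion      = rowMeans(across(e1:e8)),
    conscientiousness = rowMeans(across(c1:c9)),
    agreeableness     = rowMeans(across(a1:a9))
  )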

Activity: Plot a bar graph showing the mean scores for the personality dimensions

# First create dataframe with means
mean_score <- c(mean(bigfive$neuroticism), mean(bigfive$openness), mean(bigfive$extroversion), mean(bigfive$conscientiousness), mean(bigfive$agreeableness))
personality <- c("neuroticism", "openness", "extroversion", "conscientiousness", "agreeableness")
data <- tibble(mean_score, personality)
# Now create the bar chart

ggplot(data) +
  geom_bar(aes(personality, mean_score, fill = personality), stat = "identity") +
  theme_classic()
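
An alternative way to build the same plot (a sketch, assuming the five scale columns were added in the order above) is to reshape bigfive with pivot_longer() and let summarise() compute the means, rather than typing each mean() call by hand:

bigfive %>%
  pivot_longer(neuroticism:agreeableness,
               names_to = "personality", values_to = "score") %>%
  group_by(personality) %>%
  summarise(mean_score = mean(score)) %>%
  ggplot() +
  geom_bar(aes(personality, mean_score, fill = personality), stat = "identity") +
  theme_classic()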
