R Data Structures

We have gathered a variety of R exercises (with answers) for each R Chapter.

Try to solve an exercise by editing some code, or show the answer to see what you've done wrong.

Count Your Score

You will get 1 point for each correct answer. Your score and total score will always be displayed.


If you do not know R, we suggest that you read our R Tutorial from scratch.



mGalarnyk / assignment1.md


R Programming Project 1

GitHub repo for the rest of the specialization: Data Science Coursera

For this first programming assignment you will write three functions that are meant to interact with the dataset that accompanies this assignment. The dataset is contained in a zip file, specdata.zip, that you can download from the Coursera web site.

Although this is a programming assignment, you will be assessed using a separate quiz.

The zip file containing the data can be downloaded here: specdata.zip [2.4MB]. Description: The zip file contains 332 comma-separated-value (CSV) files containing pollution monitoring data.

Part 1 (pollutantmean.R)

Part 2 (complete.R)

Part 3 (corr.R)
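
The gist’s actual solution code is not reproduced in this copy. As a rough base-R illustration of what Part 1 asks for (the function name and arguments follow the assignment description; the implementation details and the "specdata" folder name are assumptions of mine, not the gist’s code):

```r
# Sketch of Part 1: mean of a pollutant (sulfate or nitrate) across monitors.
# Assumes the 332 CSV files from specdata.zip have been unzipped into `directory`.
pollutantmean <- function(directory, pollutant, id = 1:332) {
  files <- file.path(directory, sprintf("%03d.csv", id))
  values <- unlist(lapply(files, function(f) read.csv(f)[[pollutant]]))
  mean(values, na.rm = TRUE)
}

# Example call (hypothetical path):
# pollutantmean("specdata", "sulfate", 1:10)
```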

@SUSANKI

SUSANKI commented Jul 30, 2020

Thank you so much, it's a little bit complicated for me @-@


@Cyberclip

Cyberclip commented Sep 14, 2020

When I try to do part 1, it says there's no package named 'data.table'. What should I do?

@harshit229

harshit229 commented Oct 7, 2020

use rstudio

@Romeroc3

Romeroc3 commented Dec 27, 2020 • edited

Thank you very much for this assignment information. I am currently doing my case study on the refugee situation and I need to study data science to analyze the data. Interestingly, the idea for the research came spontaneously when I read https://samplius.com/free-essay-examples/refugee/ in preparation for a lesson. These free essay examples got me more interested in migration and globalization issues. Therefore, I decided to do a little research, but I lack the skills to do a high-quality analysis of big data.

@kennethwoanyah

kennethwoanyah commented Feb 3, 2021

@SUSANKI yep, complicated for me too . lol. Works perfectly though.

@flaviaouyang

flaviaouyang commented Feb 18, 2021

You need to install the package: install.packages("data.table")

@Bell-016

Bell-016 commented Feb 23, 2021

I am very frustrated with this course. I took it assuming it would explain things from the beginning for a beginner, but the first assignment is unreadable to me. I would never have come up with this answer, because I feel I never learned the things you used in your answer.

@utamadonny

utamadonny commented Mar 16, 2022

I ran corr.R and it returned "Error in eval(bysub, parent.frame(), parent.frame()) : object 'ID' not found".

@Rushield

Rushield commented Apr 9, 2022

Bruh, this course is just annoying because it doesn't show us how to do these things; even understanding your simplified code in week 2 is damn hard.

@emcdowell28

emcdowell28 commented Dec 15, 2022

You and me both. I've been using multiple other online textbooks to try and gain any kind of fundamental understanding of this material. I don't usually struggle with things like this, but nothing makes me feel more unintelligent than being tested over things we haven't even been taught yet.

Modern Statistics with R

15 Solutions to exercises

Exercise 2.1

Type the following code into the Console window:

The answer is \(3,628,800\) .
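
The code block itself is missing from this copy. Since \(3,628,800 = 10!\), the computation was presumably something along these lines:

```r
factorial(10)
# [1] 3628800

# Or, equivalently:
prod(1:10)
```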


Exercise 2.2

  • To compute the sum and assign it to a , we use:
  • To compute the square of a we can use:

The answer is \(1,098,304\) .

As you’ll soon see in other examples, the square can also be computed using:
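
The code blocks are missing from this copy. A sketch consistent with the stated answer (\(1,098,304 = 1048^2\)), assuming the two numbers to be summed were 924 and 124:

```r
a <- 924 + 124  # a is 1048
a^2             # 1098304

# The square can also be computed as:
a * a
```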

Exercise 2.3

When an invalid character is used in a variable name, an error message is displayed in the Console window. Different characters will render different error messages. For instance, net-income <- income - taxes yields the error message Error in net - income <- income - taxes : object 'net' not found . This may seem a little cryptic (and it is!), but what it means is that R is trying to compute the difference between the variables net and income , because that is how R interprets net-income , and fails because the variable net does not exist. As you become more experienced with R, the error messages will start making more and more sense (at least in most cases).

If you put R code as a comment, it will be treated as a comment, meaning that it won’t run. This is actually hugely useful, for instance when you’re looking for errors in your code - you can comment away lines of code and see if the rest of the code runs without them.

Semicolons can be used to write multiple commands on a single line - both will run as if they were on separate lines. If you like, you can add more semicolons to run even more commands.

The value to the right is assigned to both variables. Note, however, that any operations you perform on one variable won’t affect the other. For instance, if you change the value of one of them, the other will remain unchanged:
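
The example code is not shown in this copy; a minimal sketch of the behaviour described above:

```r
x <- y <- 5  # both x and y are assigned the value 5
y <- 3       # changing y...
x            # ...leaves x unchanged: still 5
```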

Exercise 2.4

  • To create the vectors, use c :
  • To combine the two vectors into a data frame, use data.frame

Exercise 2.5

The vector created using:

is \((1,2,3,4,5)\) . Similarly,

gives us the same vector in reverse order: \((5,4,3,2,1)\) . To create the vector \((1,2,3,4,5,4,3,2,1)\) we can therefore use:
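
The corresponding code is missing from this copy; a sketch of the three vectors:

```r
1:5          # 1 2 3 4 5
5:1          # 5 4 3 2 1
c(1:5, 4:1)  # 1 2 3 4 5 4 3 2 1
```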

Exercise 2.6

  • To compute the mean height, use the mean function:
  • To compute the correlation between the two variables, use cor :

Exercise 2.7

length computes the length (i.e. the number of elements) of a vector. length(height) returns the value 5 , because the vector is 5 elements long.

sort sorts a vector. The parameter decreasing can be used to decide whether the elements should be sorted in ascending ( sort(weight, decreasing = FALSE) ) or descending ( sort(weight, decreasing = TRUE) ) order. To sort the weights in ascending order, we can use sort(weight) . Note, however, that the resulting sorted vector won’t be stored in the variable weight unless we write weight <- sort(weight) !

Exercise 2.8

  • \(\sqrt{\pi}=1.772454\ldots\) :
  • \(e^2\cdot log(4)=10.24341\ldots\) :

Exercise 2.9

  • The expression \(1/x\) tends to infinity as \(x\rightarrow 0\) , and so R returns \(\infty\) as the answer in this case:
  • The division \(0/0\) is undefined, and R returns NaN , which stands for Not a Number:
  • \(\sqrt{-1}\) is undefined (as long as we stick to real numbers), and so R returns NaN . The sqrt function also produces a warning message saying that NaN values were produced.

If you want to use complex numbers for some reason, you can write the complex number \(a+bi\) as complex(1, a, b) . Using complex numbers, the square root of \(-1\) is \(i\) :

Exercise 2.10

To install the package, we use the install.packages function as follows:
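
The call itself is not shown here. Assuming that the package in question is palmerpenguins (which the data in the next exercise comes from), it would be:

```r
install.packages("palmerpenguins")
```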

Exercise 2.11

  • View the documentation, where the data is described:
  • Have a look at the structure of the data:

This shows you the number of observations (344) and variables (8), and the variable types. There are three different data types here: num (numerical), Factor (factor, i.e. a categorical variable) and int (integer, a numerical variable that only takes integer values).

  • To compute the descriptive statistics, we can use:

In the summary, missing values show up as NA’s. There are some NA’s here, and hence there are missing values.

Exercise 2.12

The points follow a declining line. The reason for this is that at any given time, an animal is either awake or asleep, so the total sleep time plus the awake time is always 24 hours for all animals. Consequently, the points lie on the line given by awake=24-sleep_total .
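
The plotting code is missing from this copy; a sketch using ggplot2, assuming the msleep data used in the surrounding exercises:

```r
library(ggplot2)

ggplot(msleep, aes(x = sleep_total, y = awake)) +
  geom_point()
```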

Exercise 2.13

  • We can change the opacity of the points by adding an alpha argument to geom_point . This is useful when the plot contains overlapping points:

Exercise 2.14

  • To set different shapes for different values of island we use:
  • We can then change the size of the points as follows. Adding alpha to geom_point makes it easier to distinguish the different points.

Exercise 2.15

Using the scale_x_log10 and scale_y_log10 options:

Exercise 2.16

  • We use facet_wrap(~ species) to create the facetting:
  • To set the number of rows, we add an nrow argument to facet_wrap :

Exercise 2.17

  • To change the colours of the boxes, we add colour (outline colour) and fill (box colour) arguments to geom_boxplot :

(No, I don’t really recommend using this particular combination of colours.)

  • geom_jitter can be used to plot the individual observations on top of the boxplot. It is often a good idea to set a small size and a low alpha in order not to cover the boxes completely.

If we like, we can also change the height of the jitter:

Exercise 2.18

  • Next, we facet the histograms using cut :
  • Finally, by reading the documentation ?geom_histogram we find that we can add outlines using the colour argument:

Exercise 2.19

  • To set different colours for the bars, we can use fill , either to set the colours manually or using default colours (by adding a colour aesthetic):
  • width lets us control the bar width:
  • By adding fill = sex to aes we create stacked bar charts:
  • By adding position = "dodge" to geom_bar we obtain grouped bar charts:
  • coord_flip flips the coordinate system, yielding a horizontal bar plot:

Exercise 2.20

To save the png file, use

To change the resolution, we use the dpi argument:

Exercise 2.21

  • Both approaches render a character object with the text A rainy day in Edinburgh :

That is, you are free to choose whether to use single or double quotation marks. I tend to use double quotation marks, because I was raised to believe that double quotation marks are superior in every way (well, that, and the fact that I think that they make code easier to read simply because they are easier to notice).

  • The first two sums are numeric whereas the third is integer

If we mix numeric and integer variables, the result is a numeric . But as long as we stick to just integer variables, the result is usually an integer . There are exceptions though - computing 2L/3L won’t result in an integer because… well, because it’s not an integer.

  • When we run "Hello" + 1 we receive an error message:

In R, binary operators are mathematical operators like + , - , * and / that take two numbers and return a number. Because "Hello" is a character and not a numeric , the addition fails in this case. So, in English the error message reads Error in "Hello" + 1 : trying to perform addition with something that is not a number . Maybe you know a bit of algebra and want to say: hey, we can add characters together, like in \(a^2+b^2=c^2\) ! Which I guess is correct. But R doesn’t do algebraic calculations, only numerical ones - that is, all letters involved in the computations must represent actual numbers. a^2+b^2=c^2 will work only if a , b and c all have numbers assigned to them.

  • Combining numeric and logical variables turns out to be very useful in some problems. The result is always numeric, with FALSE being treated as the number 0 and TRUE being treated as the number 1 in the computations:
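
A minimal sketch illustrating this behaviour:

```r
TRUE + TRUE + FALSE               # 2
mean(c(TRUE, FALSE, TRUE, TRUE))  # proportion of TRUE values: 0.75
sum(airquality$Temp > 90)         # counts the number of days above 90 degrees
```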

Exercise 2.22

The functions return information about the data frame:

Exercise 2.23

To create the matrices, we need to set the number of rows nrow , the number of columns ncol and whether to use the elements of the vector x to fill the matrix by rows or by columns ( byrow ). To create

\[\begin{pmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{pmatrix}\]

And to create

\[\begin{pmatrix} 1 & 4\\ 2 & 5\\ 3 & 6 \end{pmatrix}\]
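
The matrix() calls are missing from this copy; a sketch of both:

```r
x <- 1:6

# 2 x 3 matrix, filled by row:
matrix(x, nrow = 2, ncol = 3, byrow = TRUE)

# 3 x 2 matrix, filled by column (the default):
matrix(x, nrow = 3, ncol = 2, byrow = FALSE)
```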

We’ll do a deep-dive on matrix objects in Section 12.3 .

Exercise 2.24

In the [i, j] notation, i is the row number and j is the column number. In this case, airquality[, 3] , we have j=3 and are therefore asking for the 3rd column, not the 3rd row. To get the third row, we’d use airquality[3,] instead.

To extract the first five rows, we can use:
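
The code is missing in this copy; either of the following extracts the first five rows of airquality:

```r
airquality[1:5, ]

# or, equivalently:
head(airquality, 5)
```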

  • First, we use names(airquality) to check the column numbers of the two variables. Wind is column 3 and Temp is column 4, so we can access them using airquality[,3] and airquality[,4] respectively. Thus, we can compute the correlation using:

Alternatively, we could refer to the variables using the column names:

  • To extract all columns except Temp and Wind , we use a minus sign - and a vector containing their indices:

Exercise 2.25

  • To add the new variable, we can use:
  • By using View(bookstore) or looking at the data in the Console window using bookstore , we see that the customer in question is on row 6 of the data. To replace the value, we can use:

Note that the value of rev_per_minute hasn’t been changed by this operation. We will therefore need to compute it again, to update its value:

Exercise 2.26

  • The coldest day was the day with the lowest temperature:

We see that the 5th day in the period, May 5, was the coldest, with a temperature of 56 degrees Fahrenheit.

  • To find out how many days the wind speed was greater than 17 mph, we use sum :

Because there are so few days fulfilling this condition, we could also easily have solved this by just looking at the rows for those days and counting them:

  • Missing data are represented by NA values in R, and so we wish to check how many NA elements there are in the Ozone vector. We do this by combining is.na and sum and find that there are 37 missing values:
  • In this case, we need to use an ampersand & sign to combine the two conditions:

We find that there are 22 such days in the data.

Exercise 2.27

We should use the breaks argument to set the interval bounds in cut :

To see the number of days in each category, we can use summary :

Exercise 2.28

First we load and inspect the data:

  • Next, we compute summary statistics grouped by dataset :

The summary statistics for all datasets are virtually identical.

  • Next, we make scatterplots. Here is a solution using ggplot2 :

Clearly, the datasets are very different! This is a great example of how simply computing summary statistics is not enough. They tell a part of the story, yes, but only a part.

Exercise 2.29

We can use mutate to add the new variable:

Exercise 2.30

  • The variable X represents the empty column between Visit and VAS . In the X.1 column the researchers have made comments on two rows (rows 692 and 1153), causing R to read this otherwise empty column. If we wish, we can remove these columns from the data using the syntax from Section 2.11.1 :
  • We remove the sep = ";" argument:

…and receive the following error message:

By default, read.csv uses commas, , , as column delimiters. In this case it fails to read the file, because it uses semicolons instead.

  • Next, we remove the dec = "," argument:

read.csv reads the data without any error messages, but now VAS has become a character vector. By default, read.csv assumes that the file uses decimal points rather than decimal commas. When we don’t specify that the file has decimal commas, read.csv interprets 0,4 as text rather than a number.

  • Next, we remove the skip = 4 argument:

read.csv looks for column names on the first row that it reads. skip = 4 tells the function to skip the first 4 rows of the .csv file (which in this case were blank or contained other information about the data). When it doesn’t skip those lines, the only text on the first row is Data updated 2020-04-25 . This then becomes the name of the first column, and the remaining columns are named X , X.1 , X.2 , and so on.

  • Finally, we change skip = 4 to skip = 5 :

In this case, read.csv skips the first 5 rows, which includes row 5, on which the variable names are given. It still looks for variable names on the first row that it reads though, meaning that the data values from the first observation become variable names instead of data points. An X is added at the beginning of the variable names, because variable names in R cannot begin with a number.

Exercise 2.31

We set file_path to the path for vas.csv and read the data as in Exercise 2.30:

  • First, we compute the mean VAS for each patient:
  • Next, we compute the lowest and highest VAS recorded for each patient:
  • Finally, we compute the number of high-VAS days for each patient. One way to do this is to create a logical vector by VAS >= 7 and then compute its sum.

Exercise 2.32

  • First, set file_path to the path to projects-email.xlsx . Then we can use read.xlsx from the openxlsx package. The argument sheet lets us select which sheet to read:
  • To obtain a vector containing the email addresses without any duplicates, we apply unique to the vector containing the e-mail addresses. That vector is called E-mail , with a hyphen - . We cannot access it using emails$E-mail , because R will interpret that as emails$E - mail , and neither the vector emails$E nor the variable mail exists. Instead, we can do one of the following:

Exercise 2.33

  • We set file_path to the path to vas-transposed.csv and then read it:

It is a data frame with 4 rows and 2366 variables.

  • Adding row.names = 1 lets us read the row names:

This data frame only contains 2365 variables, because the leftmost column is now the row names and not a variable.

  • t lets us rotate the data into the format that we are used to. If we only apply t though, the resulting object is a matrix and not a data.frame . If we want it to be a data.frame , we must also make a call to as.data.frame :

Exercise 3.1

We create the matrix and run the test as follows:

The p-value is \(0.4339\) , so we have no evidence against \(H_0\) .

What about the criteria for running the test? We can check the expected counts:

There is one cell (resistant E.coli ) for which the expected count is less than 5. This means that 25 % of the cells have an expected count below 5, and the criteria for running the test are not met. We should consider using simulation to compute the p-value, or using Fisher’s exact test instead:

Exercise 3.2

We have \(x=440\) and \(n=998\) , and can compute the 99% Wilson confidence interval as follows:
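
The code is not shown in this copy. Assuming the binomCI function from the MKinfer package (which the book uses for related intervals elsewhere in this chapter), the computation might look like:

```r
library(MKinfer)

binomCI(x = 440, n = 998, conf.level = 0.99, method = "wilson")
```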

The interval is \((0.40, 0.48)\) .

Exercise 3.3

First, let’s compute the proportion of herbivores and carnivores that sleep for more than 7 hours a day:

The proportions are 0.625 and 0.68, respectively. Checking ?binomDiffCI , we see that in order to obtain a confidence interval for the difference of the two proportions, we use binomDiffCI as follows:

Exercise 3.4

We compute the sample size as follows:

The required sample size is \(n=1296\) (the output says \(1295.303\) , which we round up to \(1296\) ).

Exercise 3.5

We use t.test to perform the test.

The p-value is 2.264e-06 , i.e.  \(0.000002264\) , and we reject the null hypothesis that the average weight is the same for females and males.

Exercise 3.6

When running the test, we need to use the argument mu = 195 and alternative = "greater" .

The p-value is 1.7e-05 , i.e.  \(0.000017\) , and we reject the null hypothesis that the average flipper length is less than or equal to 195 mm.

Exercise 3.7

First, we assume that delta is 5 and that the standard deviation is 6, and want to find the \(n\) required to achieve 95 % power at a 5 % significance level:

We see that \(n\) needs to be at least 18 (17.04 rounded up) to achieve the desired power.

The actual sample size for this dataset was \(n=34\) . Let’s see what power that gives us:

The power is \(0.999\) . We’re more or less guaranteed to find statistical evidence that the mean is greater than 195 if the true mean is 200!

Exercise 3.8

We use t.test to perform each test.

Without pipes:

The p-values are still very small after adjustment, and we reject all three null hypotheses.

Exercise 4.1

  • We change the background colour of the entire plot to lightblue .
  • Next, we change the font of the legend to serif .
  • We remove the grid:
  • Finally, we change the colour of the axis ticks to orange and increase their width:

It doesn’t look all that great, does it? Let’s just stick to the default theme in the remaining examples.

Exercise 4.2

  • We can use the bw argument to control the smoothness of the curves:
  • We can fill the areas under the density curves by adding fill to the aes :
  • Because the densities overlap, it’d be better to make the fill colours slightly transparent. We add alpha to the geom:
  • A similar plot can be created using geom_density_ridges from the ggridges package. Note that you must set y = cut in the aes , because the densities should be separated by cut.

Exercise 4.3

We use xlim to set the boundaries of the x-axis and binwidth to decrease the bin width:

It appears that carat values that are just above multiples of 0.25 are more common than other values. We’ll explore that next.

Exercise 4.4

  • We set the colours using the fill aesthetic:
  • Next, we remove the legend:
  • We add boxplots by adding an additional geom to the plot. Increasing the width of the violins and decreasing the width of the boxplots creates a better figure. We also move the fill = cut aesthetic from ggplot to geom_violin so that the boxplots use the default colours instead of different colours for each category.
  • Finally, we can create a horizontal version of the figure in the same way we did for boxplots in Section 2.19 : by adding coord_flip() to the plot:

Exercise 4.5

We can create an interactive scatterplot using:

There are outliers along the y-axis on rows 24,068 and 49,190. There are also some points for which \(x=0\) . Examples include rows 11,183 and 49,558. It isn’t clear from the plot, but in total there are 8 such points, 7 of which have both \(x=0\) and \(y=0\) . To view all such diamonds, you can use filter(diamonds, x==0) . These observations must be due to data errors, since diamonds can’t have 0 width. The high \(y\) -values also seem suspicious - carat is a measure of diamond weight, and if these diamonds really were 10 times longer than others then we would probably expect them to have unusually high carat values as well (which they don’t).

Exercise 4.6

The two outliers are the only observations for which \(y>20\) , so we use that as our condition:

Exercise 4.7

In this plot, we see that virtually all high carat diamonds have missing x values. This seems to indicate that there is a systematic pattern to the missing data (which of course is correct in this case!), and we should proceed with any analyses of x with caution.

Exercise 4.8

The code below is an example of what your analysis can look like, with some remarks as comments:

Exercise 4.9

  • To decrease the smoothness of the line, we use the span argument in geom_smooth . The default is geom_smooth(span = 0.75) . Decreasing this value yields a very different fit:

More smoothing is probably preferable in this case. The relationship appears to be fairly weak and roughly linear.

  • We can use the method argument in geom_smooth to fit a straight line using lm instead of LOESS:
  • To remove the confidence interval from the plot, we set se = FALSE in geom_smooth :
  • Finally, we can change the colour of the smoothing line using the colour argument:

Exercise 4.10

  • Adding the geom_smooth geom with the default settings produces a trend line that does not capture seasonality:
  • We can change the axes labels using labs :
  • ggtitle adds a title to the figure:
  • The colour argument can be passed to autoplot to change the colour of the time series line:

Exercise 4.11

  • The text can be added by using annotate(geom = "text", ...) . In order not to draw the text on top of the circle, you can shift the x-value of the text (the appropriate shift depends on the size of your plot window):
  • We can remove the erroneous value by replacing it with NA in the time series:
  • Finally, we can add a reference line using geom_hline :

Exercise 4.12

  • We can specify which variables to include in the plot as follows:

This produces a terrible-looking label for the y-axis, which we can remove by setting the y-label to NULL :

  • As before, we can add smoothers using geom_smooth :

Exercise 4.13

  • We set the size of the points using geom_point(size) :
  • To add annotations, we use annotate and some code to find the days of the lowest and highest temperatures:

Exercise 4.14

We can specify aes(group) for a particular geom only as follows:

Subject is now used for grouping the points used to draw the lines (i.e. for geom_line ), but not for geom_smooth , which now uses all the points to create a trend line showing the average height of the boys over time.

Exercise 4.15

Code for producing the three plots is given below:

Exercise 4.16

We use the cpt.var function with the default settings:

The variance is greater in the beginning of the year, and then appears to be more or less constant. Perhaps this can be explained by temperature?

We see that the high-variance period coincides with peaks and large oscillations in temperature, which would cause the energy demand to increase and decrease more than usual, making the variance greater.

Exercise 4.17

By adding a copy of the observation for month 12, with the Month value replaced by 0, we can connect the endpoints to form a continuous curve:

Exercise 4.18

As for all ggplot2 plots, we can use ggtitle to add a title to the plot:

Exercise 4.19

  • We create the correlogram using ggcorr as follows:
  • method allows us to control which correlation coefficient to use:
  • nbreaks is used to create a categorical colour scale:
  • low and high can be used to control the colours at the endpoints of the scale:

(Yes, the default colours are a better choice!)

Exercise 4.20

  • We replace colour = vore in the aes by fill = vore and add colour = "black", shape = 21 to geom_point . The points now get black borders, which makes them a bit sharper:
  • We can use ggplotly to create an interactive version of the plot. Adding text to the aes allows us to include more information when hovering points:

Exercise 4.21

  • We create the tile plot using geom_tile . By setting fun = max we obtain the highest price in each bin:
  • We can create the bin plot using either geom_bin2d or geom_hex :

Diamonds with carat around 0.3 and price around 1000 have the highest bin counts.

Exercise 4.22

  • VS2 and Ideal is the most common combination:
  • As for continuous variables, we can use geom_tile with the arguments stat = "summary_2d", fun = mean to display the average prices for different combinations. SI2 and Premium is the combination with the highest average price:

Exercise 4.23

  • We create the scatterplot using:
  • The interactive facetted bubble plot is created using:

Well done, you just visualised 5 variables in a facetted bubble plot!

Exercise 4.24

  • Fixed wing multi engine Boeings are the most common planes:
  • The fixed wing multi engine Airbus has the highest average number of seats:
  • The number of seats seems to have increased in the 1980’s, and then reached a plateau:

The plane with the largest number of seats is not an Airbus, but a Boeing 747-451. It can be found using planes[which.max(planes$seats),] or visually using plotly :

  • Finally, we can investigate what engines were used during different time periods in several ways, for instance by differentiating engines by colour in our previous plot:

Exercise 4.25

First, we compute the principal components:

  • To see the proportion of variance explained by each component, we use summary :

The first PC accounts for 65.5 % of the total variance. The first two account for 86.9 % and the first three account for 98.3 % of the total variance, meaning that 3 components are needed to account for at least 90 % of the total variance.

  • To see the loadings, we type:

The first PC appears to measure size: it is dominated by carat , x , y and z , which all are size measurements. The second PC is dominated by depth and table and is therefore a summary of those measures.

  • To compute the correlation, we use cor :

The (Pearson) correlation is 0.89, which is fairly high. Size is clearly correlated to price!

  • To see if the first two principal components be used to distinguish between diamonds with different cuts, we make a scatterplot:

The points are mostly gathered in one large cloud. Apart from the fact that very large or very small values of the second PC indicate that a diamond has a Fair cut, the first two principal components seem to offer little information about a diamond’s cut.

Exercise 4.26

We create the scatterplot with the added arguments:

The arrows for Area , Perimeter , Kernel_length , Kernel_width and Groove_length are all about the same length and are close to parallel to the x-axis, which shows that these have a similar impact on the first principal component but not the second, making the first component a measure of size. Asymmetry and Compactness both affect the second component, making it a measure of shape. Compactness also affects the first component, but not as much as the size variables do.

Exercise 4.27

We change the hc_method and hc_metric arguments to use complete linkage and the Manhattan distance:

fviz_dend produces ggplot2 plots. We can save the plots from both approaches and then plot them side-by-side using patchwork as in Section 4.4 :

Alaska and Vermont are clustered together in both cases. The red leftmost cluster is similar but not identical, including Alabama, Georgia and Louisiana.

To compare the two dendrograms in a different way, we can use tanglegram . Setting k_labels = 5 and k_branches = 5 gives us 5 coloured clusters:

Note that the colours of the lines connecting the two dendrograms are unrelated to the colours of the clusters.

Exercise 4.28

Using the default settings in agnes , we can do the clustering using:

Maryland is clustered with New Mexico, Michigan and Arizona, in that order.

Exercise 4.29

We draw a heatmap, with the data standardised in the column direction because we wish to cluster the observations rather than the variables:

You may want to increase the height of your Plot window so that the names of all states are displayed properly.

The heatmap shows that Maryland, and the states similar to it, have higher crime rates than most other states. There are a few other states with high crime rates in other clusters, but those tend to only have a high rate for one crime (e.g. Georgia, which has a very high murder rate), whereas states in the cluster that Maryland is in have high rates for all or almost all types of violent crime.

Exercise 4.30

First, we inspect the data:

There are a few outliers, so it may be a good idea to use pam as it is less affected by outliers than kmeans . Next, we draw some plots to help us choose \(k\) :

There is no pronounced elbow in the WSS plot, although slight changes appear to occur at \(k=3\) and \(k=7\) . Judging by the silhouette plot, \(k=3\) may be a good choice, while the gap statistic indicates that \(k=7\) would be preferable. Let’s try both values:

Neither choice is clearly superior. Remember that clustering is an exploratory procedure, that we use to try to better understand our data.

The plot for \(k=7\) may look a little strange, with two largely overlapping clusters. Bear in mind though, that the clustering algorithm uses all 10 variables and not just the first two principal components, which are what is shown in the plot. The differences between the two clusters aren’t captured by the first two principal components.

Exercise 4.31

First, we try to find a good number of clusters:

We’ll go with \(k=2\) clusters:

Maryland is mostly associated with the first cluster. Its neighbouring state New Jersey is equally associated with both clusters.

Exercise 4.32

We do the clustering and plot the resulting clusters:

Three clusters are found; they overlap substantially when the first two principal components are plotted.

Exercise 5.1

  • as.logical returns FALSE for 0 and TRUE for all other numbers:
  • When the as. functions are applied to vectors, they convert all values in the vector:
  • The is. functions return a logical : TRUE if the variable is of the type and FALSE otherwise:
  • The is. functions show that NA in fact is a (special type of) logical . This is also verified by the documentation for NA :

Exercise 5.2

We set file_path to the path for vas.csv and load the data as in Exercise 2.30 :

To split the VAS vector by patient ID, we use split :

To access the values for patient 212, either of the following works:
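
The code is missing from this copy; a sketch, assuming the data frame is called vas and contains the columns VAS and ID:

```r
vas_split <- split(vas$VAS, vas$ID)

# The values for patient 212:
vas_split$`212`
vas_split[["212"]]
```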

Exercise 5.3

  • To convert the proportions to percentages with one decimal place, we must first multiply them by 100 and then round them:
  • The cumulative maxima and minima are computed using cummax and cummin :

The minimum during the period occurs on the 5th day, whereas the maximum occurs during day 120.

  • To find runs of days with temperatures above 80, we use rle :

To find runs with temperatures above 80, we extract the length of the runs for which runs$values is TRUE :
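
The code is not shown in this copy; a sketch, assuming the temperatures are in airquality$Temp as in earlier exercises:

```r
runs <- rle(airquality$Temp > 80)

# Lengths of the runs of days with temperatures above 80:
runs$lengths[runs$values]

# The longest such run:
max(runs$lengths[runs$values])
```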

We see that the longest run was 23 days.

Exercise 5.4

  • On virtually all systems, the largest number that R can represent as a floating point is 1.797693e+308 . You can find this by gradually trying larger and larger numbers:
  • If we place the ^2 inside sqrt the result becomes 0:

Exercise 5.5

We re-use the solution from Exercise 2.27 :

  • Next, we change the levels’ names:
  • Finally, we combine the last two levels:

Exercise 5.6

We start by converting the vore variable to a factor :

The levels are ordered alphabetically, which is the default in R.

  • To compute grouped means, we use aggregate :
  • Finally, we sort the factor levels according to their sleep_total means:

Exercise 5.7

First, we set file_path to the path to handkerchiefs.csv and import it to the data frame pricelist :

  • nchar counts the number of characters in strings:
  • We can use grep and a regular expression to see that there are 2 rows of the Italian.handkerchief column that contain numbers:
  • To extract the prices in shillings (S) and pence (D) from the Price column and store these in two new numeric variables in our data frame, we use strsplit , unlist and matrix as follows:

Exercise 5.8

We set file_path to the path to oslo-biomarkers.xlsx and load the data:

To find out how many patients were included in the study, we use strsplit to split the ID-timepoint string, and then unique :

We see that 118 patients were included in the study.

Exercise 5.9

  • "$g" matches strings ending with g :
  • "^[^[[:digit:]]" matches strings beginning with anything but a digit:
  • "a(s|l)" matches strings containing either as or al :
  • "[[:lower:]]+[.][[:lower:]]+" matches strings containing any number of lowercase letters, followed by a period . , followed by any number of lowercase letters:

Exercise 5.10

We want to extract all words, i.e. segments of characters separated by white spaces. First, let’s create the string containing example sentences:

Next, we split the string at the spaces:

Note that x_split is a list . To turn this into a vector, we use unlist

Finally, we can use gsub to remove the punctuation marks, so that only the words remain:

If you like, you can put all steps on a single row:

…or reverse the order of the operations:

Exercise 5.11

  • The functions are used to extract the weekday, month and quarter for each date:
  • julian can be used to compute the number of days from a specific date (e.g. 1970-01-01) to each date in the vector:

Exercise 5.12

  • On most systems, converting the three variables to Date objects using as.Date yields correct dates without times :
  • We convert time1 to a Date object and add 1 to it:

The result is 2020-04-02 , i.e. adding 1 to the Date object has added 1 day to it.

  • We convert time3 and time1 to Date objects and subtract them:

The result is a difftime object, printed as Time difference of 2 days . Note that the times are ignored, just as before.

  • We convert time2 and time1 to Date objects and subtract them:

The result is printed as Time difference of 0 days , because the difference in time is ignored.

  • We convert the three variables to POSIXct date and time objects using as.POSIXct without specifying the date format:

On most systems, this yields correctly displayed dates and times.

  • We convert time3 and time1 to POSIXct objects and subtract them:

This time out, time is included when the difference is computed, and the output is Time difference of 2.234722 days .

  • We convert time2 and time1 to POSIXct objects and subtract them:

In this case, the difference is presented in hours: Time difference of 1.166667 hours . In the next step, we take control over the units shown in the output.

  • difftime can be used to control what units are used for expressing differences between two timepoints:

The output is Time difference of 53.63333 hours .

Exercise 5.13

Using the first option, the Date becomes the first day of the quarter. Using the second option, it becomes the last day of the quarter instead. Both can be useful for presentation purposes - which you prefer is a matter of taste.

To convert the quarter-observations to the first day of their respective quarters, we use as.yearqtr as follows:

%q , %y , and %Y are date tokens. The other letters and symbols in the format argument simply describe other characters included in the format.

Exercise 5.14

The x-axis of the data can be changed in multiple ways. A simple approach is the following:

A more elegant approach relies on the xts package for time series:

Exercise 5.15

Exercise 5.16

We set file_path to the path for vas.csv and read the data as in Exercise 2.30 and convert it to a data.table (the last step being optional if we’re only using dplyr for this exercise):

A better option is to achieve the same result in a single line by using the fread function from data.table :

  • First, we remove the columns X and X.1 :
  • Second, we add a dummy variable called highVAS that indicates whether a patient’s VAS is 7 or greater on any given day:

Exercise 5.17

Exercise 5.18

We set file_path to the path for vas.csv and read the data as in Exercise 2.30 using fread to import it as a data.table :

  • Finally, we compute the number of high-VAS days for each patient. We can compute the sum directly:

Alternatively, we can do this by first creating a dummy variable for high-VAS days:

Exercise 5.19

First we load the data and convert it to a data.table (the last step being optional if we’re only using dplyr for this exercise):

Exercise 5.20

To fill in the missing values, we can now do as follows:

Exercise 5.21

We set file_path to the path to ucdp-onesided-191.csv and load the data as a data.table using fread :

  • First, we filter the rows so that only conflicts that took place in Colombia are retained.

To list the number of different actors responsible for attacks, we can use unique :

We see that there were attacks by 7 different actors during the period.

  • To find the number of fatalities caused by government attacks on civilians, we first filter the data to only retain rows where the actor name contains the word government :

It may be of interest to list the governments involved in attacks on civilians:

To estimate the number of fatalities caused by these attacks, we sum the fatalities from each attack:

Exercise 5.22

  • First, we select only the measurements from blood samples taken at 12 months. These are the only observations where the PatientID.timepoint column contains the word months :
  • Second, we select only the measurements from the patient with ID number 6. Note that we cannot simply search for strings containing a 6 , as we then also would find measurements from other patients taken at 6 weeks, as well as patients with a 6 in their ID number, e.g. patient 126. Instead, we search for strings beginning with 6- :

Exercise 5.23

Next, we select the actor_name , year , best_fatality_estimate and location columns:

Exercise 5.24

We then order the data by the PatientID.timepoint column:

Note that because PatientID.timepoint is a character column, the rows are now ordered in alphabetical order, meaning that patient 1 is followed by 100, 101, 102, and so on. To order the patients in numerical order, we must first split the ID and timepoints into two different columns. We’ll see how to do that in the next section, and try it out on the oslo data in Exercise 5.25 .

Exercise 5.25

  • First, we split the PatientID.timepoint column:
  • Next, we reformat the patient ID to a numeric and sort the table:
  • Finally, we reformat the data from long to wide, keeping the IL-8 and VEGF-A measurements. We store it as oslo2, knowing that we’ll need it again in Exercise 5.26 .

Exercise 5.26

We use the oslo2 data frame that we created in Exercise 5.25 . In addition, we set file_path to the path to oslo-covariates.xlsx and load the data:

  • First, we merge the wide data frame from Exercise 5.25 with the oslo-covariates.xlsx data, using patient ID as key. A left join, where we only keep data for patients with biomarker measurements, seems appropriate here. We see that both datasets have a column named PatientID , which we can use as our key.
  • Next, we use the oslo-covariates.xlsx data to select data for smokers from the wide data frame using a semijoin. The Smoker.(1=yes,.2=no) column contains information about smoking habits. First we create a table for filtering:

Next, we perform the semijoin:

Exercise 5.27

We read the HTML file and extract the table:

We note that some non-numeric characters cause Dates to be a character vector:

Noting that the first four characters in each element of the vector contain the year, we can use substr to only keep those characters. Finally, we use as.numeric to convert the text to numbers:

Exercise 6.1

The formula for converting a temperature \(F\) measured in Fahrenheit to a temperature \(C\) measured in Celsius is \(C=(F-32)\cdot 5/9\) . Our function becomes:

To apply it to the Temp column of airquality :
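
The function definition and the call are missing from this copy; a minimal sketch (the function name is my choice):

```r
FtoC <- function(fahrenheit) {
  (fahrenheit - 32) * 5 / 9
}

# Applied to the Temp column of airquality:
FtoC(airquality$Temp)
```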

Exercise 6.2

  • We want our function to take a vector as input and return a vector containing its minimum and its maximum, without using min and max (both functions are sketched after this list):
  • We want a function that computes the mean of the squared values of a vector using mean , and that takes additional arguments that it passes on to mean (e.g.  na.rm ):
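
A sketch of the two functions described above (the function names are my own):

```r
# Minimum and maximum without using min and max:
min_max <- function(x) {
  x_sorted <- sort(x)
  c(minimum = x_sorted[1], maximum = x_sorted[length(x_sorted)])
}

# Mean of the squared values, passing additional arguments on to mean:
mean_sq <- function(x, ...) {
  mean(x^2, ...)
}

# Examples:
min_max(c(4, 1, 9))                    # 1 and 9
mean_sq(c(1, 2, NA, 4), na.rm = TRUE)  # 7
```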

Exercise 6.3

We use cat to print a message about missing values, sum(is.na(.)) to compute the number of missing values, na.omit to remove rows with missing data and then summary to print the summary:

Exercise 6.4

The following operator allows us to plot y against x :

Let’s try it out:

Or, if we want to use ggplot2 instead of base graphics:
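
The operator definition is missing from this copy; a sketch (the operator name %against% is my own choice):

```r
# Base graphics version:
`%against%` <- function(y, x) {
  plot(x, y)
}
airquality$Temp %against% airquality$Wind

# ggplot2 version:
library(ggplot2)
`%against%` <- function(y, x) {
  ggplot(data.frame(x = x, y = y), aes(x = x, y = y)) +
    geom_point()
}
airquality$Temp %against% airquality$Wind
```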

Exercise 6.5

FALSE : x is not greater than 2.

TRUE : | means that at least one of the conditions need to be satisfied, and x is greater than z .

FALSE : & means that both conditions must be satisfied, and x is not greater than y .

TRUE : the absolute value of x*z is 6, which is greater than y .

Exercise 6.6

There are two errors: the variable name in exists is not between quotes, and x > 0 evaluates to a vector and not a single value. The goal is to check that all values in x are positive, so all can be used to collapse the logical vector x > 0 :

Alternatively, we can get a better looking solution by using && :

Exercise 6.7

  • To compute the mean temperature for each month in the airquality dataset using a loop, we loop over the 6 months:
  • Next, we use a for loop to compute the maximum and minimum value of each column of the airquality data frame, storing the results in a data frame:
  • Finally, we write a function to solve task 2 for any data frame:

Exercise 6.8

  • We can create 0.25 0.5 0.75 1 in two different ways using seq :
  • We can create 1 1 1 2 2 5 using rep . 1 is repeated 3 times, 2 is repeated 2 times and 5 is repeated a single time:

Exercise 6.9

We could create the same sequences using 1:ncol(airquality) and 1:length(airquality$Temp) , but if we accidentally apply those solutions to objects with zero length, we would run into trouble! Let’s see what happens:

Even though there are no elements in the vector, two iterations are run when we use 1:length(x) to set the values of the control variable:

The reason is that 1:length(x) becomes 1:0 , i.e. the vector 1 0 , providing two values for the control variable.

If we use seq_along instead, no iterations will be run, because seq_along(x) returns zero values:

This is the desired behaviour - if there are no elements in the vector then the loop shouldn’t run! seq_along is the safer option, but 1:length(x) is arguably less opaque and therefore easier for humans to read, which also has its benefits.

Exercise 6.10

To normalise the variable, we need to map the smallest value to 0 and the largest to 1:
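
The code is not shown here; a minimal sketch of min-max normalisation:

```r
normalise <- function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}

# Example:
normalise(airquality$Temp)
```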

Exercise 6.11

We set folder_path to the path of the folder (making sure that the path ends with / (or \\ on Windows)). We can then loop over the .csv files in the folder and print the names of their variables as follows:

Exercise 6.12

The condition in the outer loop, i < length(x) , is used to check that the element x[i+1] used in the inner loop actually exists. If i is equal to the length of the vector (i.e. is the last element of the vector) then there is no element x[i+1] and consequently the run cannot go on. If this condition wasn’t included, we would end up with an infinite loop.

The condition in the inner loop, x[i+1] == x[i] & i < length(x) , is used to check if the run continues. If x[i+1] == x[i] is TRUE then the next element of x is the same as the current, meaning that the run continues. As in the previous condition, i < length(x) is included to make sure that we don’t start looking for elements outside of x , which could create an infinite loop.

The line run_values <- c(run_values, x[i-1]) creates a vector combining the existing elements of run_values with x[i-1] . This allows us to store the results in a vector without specifying its size in advance. Note however that this approach is slower than specifying the vector size in advance, and that you therefore should avoid it when using for loops.

Exercise 6.13

We modify the loop so that it skips to the next iteration if x[i] is 0 , and breaks if x[i] is NA :

Exercise 6.14

We can put a conditional statement inside each of the loops, to check that both variables are numeric :

A (nicer?) alternative would be to check which columns are numeric and loop over those:

Exercise 6.15

To compute the minima, we can use:

To compute the maxima, we can use:

We could also write a function that computes both the minimum and the maximum and returns both, and use that with apply :

Exercise 6.16

We can for instance make use of the minmax function that we created in Exercise 6.15 :

Exercise 6.17

To compute minima and maxima, we can use:

This time out, we want to apply this function to two variables: Temp and Wind . We can do this using apply :

If we use sapply instead, we lose information about which statistic corresponds to which variable, so lapply is a better choice here:

Exercise 6.18

We can also use a single pipe chain to split the data and apply the functional:

Exercise 6.19

Because we want to use both the variable names and their values, an imap_* function is appropriate here:

Exercise 6.20

We combine map and imap to get the desired result. folder_path is the path to the folder containing the .csv files. We must use set_names to set the file names as element names, otherwise only the index of each file (in the file name vector) will be printed:

Exercise 6.21

First, we load the data and create vectors containing all combinations

Next, we create the scatterplots:

If instead we just want to save each scatterplot in a separate file, we can do so by putting ggsave (or png + dev.off ) inside a walk2 call:

Exercise 6.22

First, we write a function for computing the mean of a vector with a loop:

Next, we run the functions once, and then benchmark them:

mean_loop is several times slower than mean . The memory usage of both functions is negligible.

Exercise 6.23

We can compare the three solutions as follows:

We see that dplyr is substantially faster and more memory efficient than the base R solution, but that data.table beats them both by a margin.

Exercise 7.1

The parameter replace controls whether or not replacement is used. To draw 5 random numbers with replacement, we use:
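
The call is missing from this copy. Assuming, as in the next exercise, that we are drawing from the numbers 1 to 10:

```r
sample(1:10, 5, replace = TRUE)
```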

Exercise 7.2

As an alternative to sample(1:10, n, replace = TRUE) we could use runif to generate random numbers from 1:10 . This can be done in at least three different ways, as sketched after the list below.

  • Generating (decimal) numbers between \(0\) and \(10\) and rounding up to the nearest integer:
  • Generating (decimal) numbers between \(1\) and \(11\) and rounding down to the nearest integer:
  • Generating (decimal) numbers between \(0.5\) and \(10.5\) and rounding to the nearest integer:
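
The code for the three approaches is missing from this copy; a sketch, drawing n numbers from 1 to 10:

```r
n <- 5
ceiling(runif(n, min = 0, max = 10))    # round up
floor(runif(n, min = 1, max = 11))      # round down
round(runif(n, min = 0.5, max = 10.5))  # round to the nearest integer
```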

Using sample(1:10, n, replace = TRUE) is more straightforward in this case, and is the recommended approach.

Exercise 7.3

First, we compare the histogram of the data to the normal density function:

The density estimate is fairly similar to the normal density, but there appear to be too many low values in the data.

Then a normal Q-Q plot:

There are some small deviations from the line, but no large deviations. To decide whether these deviations are large enough to be a concern, it may be a good idea to compare this Q-Q-plot to Q-Q-plots from simulated normal data:

The Q-Q-plot for the real data is pretty similar to those from the simulated samples. We can’t rule out the normal distribution.

Nevertheless, perhaps the lognormal distribution would be a better fit? We can compare its density to the histogram, and draw a Q-Q plot:

The right tail of the distribution differs greatly from the data. If we have to choose between these two distributions, then the normal distribution seems to be the better choice.

Exercise 7.4

  • The documentation for shapiro.test shows that it takes a vector containing the data as input. So to apply it to the sleeping times data, we use:

The p-value is \(0.21\) , meaning that we can’t reject the null hypothesis of normality - the test does not indicate that the data is non-normal.

  • Next, we generate data from a \(\chi^2(100)\) distribution, and compare its distribution to a normal density function:

The fit is likely to be very good - the data is visually very close to the normal distribution. Indeed, it is rare in practice to find real data that is closer to the normal distribution than this.

However, the Shapiro-Wilk test probably tells a different story:

The lesson here is that if the sample size is large enough, the Shapiro-Wilk test (and any other test for normality, for that matter) is likely to reject normality even if the deviation from normality is tiny. When the sample size is too large, the power of the test is close to 1 even for very small deviations. On the other hand, if the sample size is small, the power of the Shapiro-Wilk test is low, meaning that it can’t be used to detect non-normality.

In summary, you probably shouldn’t use formal tests for normality at all. And I say that as someone who has written two papers introducing new tests for normality!

Exercise 7.5

To run the same simulation for different \(n\) , we will write a function for the simulation, with the sample size n as an argument:

We could write a for loop to perform the simulation for different values of \(n\) . Alternatively, we can use a function, as in Section 6.5 . Here are two examples of how this can be done:

Next, we want to plot the results. We need to extract the results from the list res and store them in a data frame, so that we can plot them using ggplot2 .

Transforming the data frame from wide to long format (Section 5.11 ) makes plotting easier.

We can do this using data.table :

…or with tidyr :

We are now ready to plot the results:

All three estimators have a bias close to 0 for all values of \(n\) (indeed, we can verify analytically that they are unbiased). The mean has the lowest variance for all \(n\) , with the median as a close competitor. Our custom estimator has a higher variance, that also has a slower decrease as \(n\) increases. In summary, based on bias and variance, the mean is the best estimator for the mean of a normal distribution.

Exercise 7.6

To perform the same simulation with \(t(3)\) -distributed data, we can reuse the same code as in Exercise 7.5 , only replacing three lines:

  • The arguments of simulate_estimators ( mu and sigma ) are replaced by the degrees of freedom df of the \(t\) -distribution,
  • The line where the data is generated ( rt replaces rnorm ),
  • The line where the bias is computed (the mean of the \(t\) -distribution is always 0).

To perform the simulation, we can then e.g. run the following, which has been copied from the solution to the previous exercise.

The results are qualitatively similar to those for normal data.

Exercise 7.7

We will use the functions that we created to simulate the type I error rates and powers of the three tests in the section on simulating type I error rates and in Section 7.2.3 . Also, we must make sure to load the MKinfer package that contains perm.t.test .

To compare the type I error rates, we only need to supply the function rt for generating data and the parameter df = 3 to clarify that a \(t(3)\) -distribution should be used:

Here are the results from my runs:

The old-school t-test appears to be a little conservative, with an actual type I error rate close to \(0.043\) . We can use binomDiffCI from MKinfer to get a confidence interval for the difference in type I error rate between the old-school t-test and the permutation t-test:

The confidence interval is \((-0.001, 0.010)\) . Even though the old-school t-test appeared to have a lower type I error rate, we cannot say for sure, as a difference of 0 is included in the confidence interval. Increasing the number of simulated samples to, say, \(99,999\) , might be required to detect any differences between the different tests.

Next, we compare the power of the tests. For the function used to simulate data for the second sample, we add a + 1 to shift the distribution to the right (so that the mean difference is 1):

The Wilcoxon-Mann-Whitney test has the highest power in this example.

Exercise 7.8

Both the functions that we created in Section 7.3.1 , simulate_power and power.cor.test include ... in their list of arguments, which allows us to pass additional arguments to interior functions. In particular, the line in simulate_power where the p-value for the correlation test is computed, contains this placeholder:

This means that we can pass the argument method = "spearman" to use the functions to compute the sample size for the Spearman correlation test. Let’s try it:

In my runs, the Pearson correlation test required the sample sizes \(n=45\) and \(n=200\) , whereas the Spearman correlation test required larger sample sizes: \(n=50\) and \(n=215\) .

Exercise 7.9

First, we create a function that simulates the expected width of the Clopper-Pearson interval for a given \(n\) and \(p\) :

Next, we create a function with a while loop that finds the sample sizes required to achieve a desired expected width:

Finally, we run our simulation for \(p=0.1\) (with expected width \(0.01\) ) and \(p=0.3\) (expected width \(0.05\) ) and compare the results to the asymptotic answer:

As you can see, the asymptotic results are very close to those obtained from the simulation, and so using ssize.propCI is preferable in this case, as it is much faster.

Exercise 7.10

If we want to assume that the two populations have equal variances, we first have to create a centred dataset, where both groups have mean 0. We can then draw observations from this sample, and shift them by the two group means:

The resulting percentile interval is close to that which we obtained without assuming equal variances. The BCa interval is however very different.

Exercise 7.11

We use the percentile confidence interval from the previous exercise to compute p-values as follows (the null hypothesis is that the parameter is 0):

A more verbose solution would be to write a while loop:
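
A sketch of such a loop, assuming boot_res is a bootstrap object like the one in the previous sketch: the confidence level is lowered until 0 falls outside the percentile interval, and the p-value is one minus that level.

    level <- 1
    repeat {
      level <- level - 0.01
      ci <- boot.ci(boot_res, conf = level, type = "perc")$percent[4:5]
      if (ci[1] > 0 | ci[2] < 0 | level <= 0.01) break
    }
    p_value <- 1 - level
    p_value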

The p-value is approximately 0.52, and we cannot reject the null hypothesis.

Exercise 8.1

We set file_path to the path of sales-weather.csv . To load the data, fit the model and plot the results, we do the following:

The coefficient for SUN_HOURS is not significantly non-zero at the 5 % level. The \(R^2\) value is 0.035, which is very low. There is little evidence of a connection between the number of sun hours and the temperature during this period.

Exercise 8.2

We fit a model using the formula:

What we’ve just done is to create a model where all variables from the data frame (except mpg ) are used as explanatory variables. This is the same model as we’d have obtained using the following (much longer) code:
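
A sketch of the two equivalent calls (assuming, as the mention of mpg suggests, that the mtcars data is used):

    m <- lm(mpg ~ ., data = mtcars)

    # The same model, spelling out every explanatory variable:
    m_long <- lm(mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am +
                   gear + carb, data = mtcars)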

The ~ . shorthand is very useful when you want to fit a model with a lot of explanatory variables.

Exercise 8.3

First, we create the dummy variable:

Then, we fit the new model and have a look at the results. We won’t centre the SUN_HOURS variable, as the model is easy to interpret without centring. The intercept corresponds to the expected temperature on a day with 0 SUN_HOURS and no precipitation.
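
A sketch with assumed column names (in particular, a PRECIPITATION column in the weather data from which the dummy is created):

    # 1 on days with precipitation, 0 on dry days (column name is an assumption):
    weather$rain <- as.numeric(weather$PRECIPITATION > 0)

    m <- lm(TEMPERATURE ~ SUN_HOURS + rain, data = weather)
    summary(m)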

Both SUN_HOURS and the dummy variable are significantly non-zero. In the next section, we’ll have a look at how we can visualise the results of this model.

Exercise 8.4

We run the code to create the two data frames. We then fit a model to the first dataset exdata1 , and make some plots:

There are clear signs of nonlinearity here, which can be seen both in the scatterplot and in the residuals versus fitted plot.

Next, we do the same for the second dataset:

There is a strong indication of heteroscedasticity. As can be seen, e.g., in the scatterplot and in the scale-location plot, the residuals appear to vary more the larger x becomes.

Exercise 8.5

  • First, we plot the observed values against the fitted values for the two models.

The first model only predicts values within a fairly narrow interval. The second model does a somewhat better job of predicting high temperatures.

  • Next, we create residual plots for the second model.

There are no clear trends or signs of heteroscedasticity. There are some deviations from normality in the tail of the residual distribution. There are a few observations (57, 76 and 83) that have fairly high Cook’s distances. Observation 76 also has a very high leverage. Let’s have a closer look at them:

As we can see using sort(weather$SUN_HOURS) and min(weather$TEMPERATURE) , observation 57 corresponds to the coldest day during the period, and observations 76 and 83 to the two days with the highest numbers of sun hours. None of them deviates too much from the other observations though, so it shouldn’t be a problem that their Cook’s distances are a little high.

Exercise 8.6

We refit the model using:

The main effects are not significant at the 5 % level.

Exercise 8.7

We run boxcox to find a suitable Box-Cox transformation for our model:

You’ll notice an error message, saying:

The boxcox method can only be used for non-negative response variables. We can solve this e.g. by transforming the temperature (which currently is in degrees Celsius) to degrees Fahrenheit, or by adding a constant to the temperature (which only will affect the intercept of the model, and not the slope coefficients). Let’s try the former:
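
A sketch of the Fahrenheit approach, reusing the assumed column names from the sketch in Exercise 8.3:

    library(MASS)   # for boxcox

    weather$TEMPERATURE_F <- weather$TEMPERATURE * 9 / 5 + 32
    m_f <- lm(TEMPERATURE_F ~ SUN_HOURS + rain, data = weather)
    boxcox(m_f)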

The value \(\lambda = 1\) is inside the interval indicated by the dotted lines. This corresponds to no transformation at all, meaning that there is no indication that we should transform our response variable.

Exercise 8.8

autoplot uses standard ggplot2 syntax, so by adding colour = mtcars$cyl to autoplot , we can plot different groups in different colours:

Exercise 8.9

We rerun the analysis:

Unfortunately, if you run this multiple times, the p-values will vary a lot. To fix that, you need to increase the maximum number of iterations allowed by increasing maxIter , and change the condition for the accuracy of the p-value by lowering Ca :

According to ?aovp , the seqs argument controls which type of table is produced. It’s perhaps not perfectly clear from the documentation, but the default seqs = FALSE corresponds to a type III table, whereas seqs = TRUE corresponds to a type I table:

Exercise 8.10

We can run the test using the usual formula notation:

The p-value is very low, and we conclude that the fuel consumption differs between the three groups.

Exercise 8.11

The easiest way to do this is to use boot_summary :

We can also use Boot :

If instead we want to use boot , we begin by fitting the model:

Next, we compute the confidence intervals using boot and boot.ci (note that we use rlm inside the coefficients function!):
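
A sketch of the boot approach (the dataset and model formula are assumptions; the key point is that rlm is refitted inside the statistic function):

    library(MASS)   # for rlm
    library(boot)

    coefficients_fun <- function(data, i) {
      coefficients(rlm(mpg ~ wt + hp, data = data[i, ]))   # refit on each resample
    }

    boot_res <- boot(mtcars, coefficients_fun, R = 999)
    boot.ci(boot_res, type = "perc", index = 2)   # interval for the wt coefficient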

Using the connection between hypothesis tests and confidence intervals, to see whether an effect is significant at the 5 % level, you can check whether 0 is contained in the confidence interval. If not, then the effect is significant.

Exercise 8.12

First, we prepare the model and the data:

We can then compute the prediction interval using boot.ci :

Exercise 8.13

We set file_path to the path of shark.csv and then load and inspect the data:

We need to convert the Age variable to a numeric , which will cause us to lose information (“NAs introduced by coercion”) about the age of the persons involved in some attacks, i.e. those with values like 20's and 25 or 28 , which cannot be automatically coerced into numbers. Similarly, we’ll convert Sex. and Fatal..Y.N. to factor variables:
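
A sketch of the conversions, assuming the data frame is called sharks (the name is an assumption; the column names are the ones mentioned above):

    sharks$Age <- as.numeric(sharks$Age)            # "NAs introduced by coercion"
    sharks$Sex. <- factor(sharks$Sex.)
    sharks$Fatal..Y.N. <- factor(sharks$Fatal..Y.N.)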

With pipes:
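
The same conversions written with pipes (using dplyr’s mutate here is an assumption about the approach):

    library(dplyr)

    sharks <- sharks |>
      mutate(Age = as.numeric(Age),
             Sex. = factor(Sex.),
             Fatal..Y.N. = factor(Fatal..Y.N.))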

We can now fit the model:
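
A sketch of the logistic regression fit (again assuming the data frame name sharks):

    m <- glm(Fatal..Y.N. ~ Age + Sex., data = sharks, family = binomial)
    summary(m)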

Judging from the p-values, there is no evidence that sex and age affect the probability of an attack being fatal.

Exercise 8.14

We use the same logistic regression model for the wine data as before:

The broom functions work also for generalised linear models. As for linear models, tidy gives the table of coefficients and p-values, glance gives some summary statistics, and augment adds fitted values and residuals to the original dataset:
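
A sketch, with m denoting the logistic regression model for the wine data:

    library(broom)

    tidy(m)      # coefficient table with p-values
    glance(m)    # model summary statistics
    augment(m)   # data with fitted values and residuals added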

Exercise 8.15

Using the model m from the other exercise, we can now do the following.

  • Compute asymptotic confidence intervals:
  • Next, we compute bootstrap confidence intervals and p-values. In this case, the response variable is missing for a lot of observations. In order to use the same number of observations in our bootstrapping as when fitting the original model, we need to add a line to remove those observations (as in Section 5.8.2 ).

If you prefer writing your own bootstrap code, you could proceed as follows:

Exercise 8.16

We draw a binned residual plot for our model:

There are a few points outside the interval, but not too many. There is no trend, i.e. there is for instance no sign that the model performs worse when it predicts a larger probability of a fatal attack.

Next, we plot the Cook’s distances of the observations:

There are a few points with a high Cook’s distance. Let’s investigate point 116, which has the highest distance:

This observation corresponds to the oldest person in the dataset, and a fatal attack. Being an extreme observation, we’d expect it to have a high Cook’s distance.

Exercise 8.17

First, we have a look at the quakes data:

We then fit a Poisson regression model with stations as response variable and mag as explanatory variable:
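
The quakes data ships with R, so the fit can be sketched directly:

    m <- glm(stations ~ mag, data = quakes, family = poisson)
    summary(m)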

We plot the fitted values against the observed values, create a binned residual plot, and perform a test of overdispersion:

Visually, the fit is pretty good. As indicated by the test, there are however signs of overdispersion. Let’s try a negative binomial regression instead.
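
A sketch of the negative binomial fit, using glm.nb from MASS:

    library(MASS)

    m_nb <- glm.nb(stations ~ mag, data = quakes)
    summary(m_nb)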

The difference between the models is tiny. We’d probably need to include more variables to get a real improvement of the model.

Exercise 8.18

We can get confidence intervals for the \(\beta_j\) using boot_summary , as in previous sections. To get bootstrap confidence intervals for the rate ratios \(e^{\beta_j}\) , we exponentiate the confidence intervals for the \(\beta_j\) :

Exercise 8.19

First, we load the data and have a quick look at it:

Next, we make a plot for each boy (each subject):

Both intercepts and slopes seem to vary between individuals. Are they correlated?

There is a strong indication that the intercepts and slopes have a positive correlation. We’ll therefore fit a linear mixed model with correlated random intercepts and slopes:
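
A sketch using the Oxboys data from the nlme package and lmer from lme4 (the variable names Subject, age and height come from that dataset):

    library(lme4)
    library(ggplot2)
    data(Oxboys, package = "nlme")

    # One panel per boy:
    ggplot(Oxboys, aes(age, height)) +
      geom_point() +
      facet_wrap(~ Subject)

    # Correlated random intercepts and slopes:
    m <- lmer(height ~ age + (1 + age | Subject), data = Oxboys)
    summary(m)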

Exercise 8.20

We’ll use the model that we fitted to the Oxboys data in the previous exercise:

First, we install broom.mixed :

Next, we obtain the summary table as a data frame using tidy :

As you can see, fixed and random effects are shown in the same table. However, different information is displayed for the two types of variables (just as when we use summary ).

Note that if we fit the model after loading the lmerTest package, the tidy table also includes p-values:
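
A sketch, assuming m is the Oxboys model from the previous exercise:

    library(broom.mixed)

    tidy(m)   # fixed and random effects in one table

    # Refitting after loading lmerTest adds p-values for the fixed effects:
    library(lmerTest)
    m2 <- lmer(height ~ age + (1 + age | Subject), data = Oxboys)
    tidy(m2)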

Exercise 8.21

We use the same model as in the previous exercise:

We make some diagnostic plots:

Overall, the fit seems very good. There may be some heteroscedasticity, but nothing too bad. Some subjects have a larger spread in their residuals, which is to be expected in this case - growth in children is non-constant, and a large negative residual is therefore likely to be followed by a large positive residual, and vice versa. The regression errors and random effects all appear to be normally distributed.

Exercise 8.22

To look for an interaction between TVset and Assessor , we draw an interaction plot:

The lines overlap and follow different patterns, so there appears to be an interaction. There are two ways in which we could include this. Which we choose depends on what we think our clusters of correlated measurements are. If only the assessors are clusters, we’d include this as a random slope:

In this case, we think that there is a fixed interaction between each pair of assessor and TV set.

However, if we think that the interaction is random and varies between repetitions, the situation is different. In this case the combination of assessor and TV set are clusters of correlated measurements (which could make sense here, because we have repeated measurements for each assessor-TV set pair). We can then include the interaction as a nested random effect:

Neither of these approaches is inherently superior to the other. Which we choose is a matter of what we think best describes the correlation structure of the data.

In either case, the results are similar, and all fixed effects are significant at the 5 % level.

Exercise 8.23

BROOD , INDEX (subject ID number) and LOCATION all seem like they could cause measurements to be correlated, and so are good choices for random effects. To keep the model simple, we’ll only include random intercepts. We fit a mixed Poisson regression using glmer :
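
A sketch, assuming that the data in question is the grouseticks dataset shipped with lme4 (where the response is TICKS); the choice of fixed effects is also an assumption:

    library(lme4)
    data(grouseticks)

    m <- glmer(TICKS ~ HEIGHT + (1 | BROOD) + (1 | INDEX) + (1 | LOCATION),
               data = grouseticks, family = poisson)
    # If there are convergence warnings, rescaling HEIGHT (e.g. scale(HEIGHT))
    # often helps.
    summary(m)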

To compute the bootstrap confidence interval for the effect of HEIGHT , we use boot_summary :

Exercise 9.1

The ovarian data comes from a randomised trial comparing two treatments for ovarian cancer:

  • Let’s plot Kaplan-Meier curves to compare the two treatments:
  • We get the median survival times as follows:

For the group with rx equal to 2, the estimated median is NA . We can see why by looking at the Kaplan-Meier curve for that group: more than 50 % of the patients were alive at the end of the study, and consequently we cannot estimate the median survival from the Kaplan-Meier curve.
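
A sketch of the two steps described in the points above, using survfit from the survival package:

    library(survival)

    m <- survfit(Surv(futime, fustat) ~ rx, data = ovarian)
    plot(m, col = c("blue", "red"))   # one Kaplan-Meier curve per treatment group
    m   # printing the survfit object shows the median survival times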

  • The parametric confidence intervals overlap a lot. Let’s compute a bootstrap confidence interval for the difference in the 90 % quantile of the survival times. We set the quantile level using the q argument in bootkm :

The resulting confidence interval is quite wide, which is unsurprising as we have a fairly small sample size.

Exercise 9.2

  • First, we fit a Cox regression model. From ?ovarian we see that the survival/censoring times are given by futime and the censoring status by fustat .
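
A sketch of the fit (including age as a second covariate is an assumption, based on the later part of the exercise):

    library(survival)

    m <- coxph(Surv(futime, fustat) ~ rx + age, data = ovarian)
    summary(m)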

According to the p-value in the table, which is 0.2, there is no significant difference between the two treatment groups. Put differently, there is no evidence that the hazard ratio for treatment isn’t equal to 1.

To assess the assumption of proportional hazards, we plot the Schoenfeld residuals:

There is no clear trend over time, and the assumption appears to hold.

  • To compute a bootstrap confidence interval for the hazard ratio for age, we follow the same steps as in the lung example, using censboot_summary :

All values in the confidence interval are positive, meaning that we are fairly sure that the hazard increases with age.

Exercise 9.3

First, we fit the model:

To check the assumption of proportional hazards, we make a residual plot:

As there are no trends over time, there is no evidence against the assumption of proportional hazards.

Exercise 9.4

We fit the model:

While the p-values for the parameters change, the conclusions do not: trt is the only explanatory variable with a significant effect.

Exercise 9.5

We fit the model using survreg :
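
A sketch of the fit (a Weibull model with rx as the only covariate is an assumption):

    library(survival)

    m <- survreg(Surv(futime, fustat) ~ rx, data = ovarian, dist = "weibull")
    summary(m)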

To get the estimated effect on survival times, we exponentiate the coefficients:
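
Continuing the sketch above:

    exp(coef(m))   # multiplicative effects on the survival time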

According to the model, the survival time increases 1.8 times for patients in treatment group 2, compared to patients in treatment group 1. Running summary(m) shows that the p-value for rx is 0.05, meaning that the result isn’t significant at the 5 % level (albeit with the smallest possible margin!).

Exercise 9.6

We set file_path to the path to il2rb.csv and then load the data (note that it uses a decimal comma!):

Next, we check which measurements are nondetects and impute the detection limit 0.25:

27.5 % of the observations are left-censored.

To compute bootstrap confidence intervals for the mean of the biomarker level distribution under the assumption of lognormality, we can now use elnormAltCensored :

Exercise 9.7

We set file_path to the path to il2rb.csv and then load and prepare the data:

Based on the recommendations in Zhang et al. (2009), we can now run a Wilcoxon-Mann-Whitney test. Because we’ve imputed the LoD for the nondetects, all observations are included in the test:

The p-value is 0.42, and we do not reject the null hypothesis that there is no difference in location.

Exercise 10.1

First, we have a look at the data:

We can imagine several different latent variables that could explain how well the participants performed in these tests: general ability, visual ability, verbal ability, and so on. Let’s use a scree plot to determine how many factors to use:

Two or three factors seem like a good choice here. Let’s try both:
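
A sketch, assuming that the data is the built-in ability.cov covariance matrix (whose variables match those mentioned above) and that base R’s factanal is used for the factor analysis:

    factanal(covmat = ability.cov, factors = 2)
    factanal(covmat = ability.cov, factors = 3)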

In the 2-factor model, one factor is primarily associated with the visual variables (which we interpret as the factor describing visual ability), whereas the other primarily is associated with reading and vocabulary (verbal ability). Both are associated with the measure of general intelligence.

In the 3-factor model, there is still a factor associated with reading and vocabulary. There are two factors associated with the visual tests: one with block design and mazes and one with picture completion and general intelligence.

Exercise 10.2

Next, we perform a latent class analysis with GPA as a covariate:

The two classes roughly correspond to cheaters and non-cheaters. From the table showing the relationship with GPA , we see that students with high GPAs are less likely to be cheaters.

Exercise 10.3

First, we note that the sample size is small ( \(n=30\) ), so we should take any conclusions with a fistful of salt.

We specify the model and then fit it using cfa :

Next, we look at the measures of goodness-of-fit:

These all indicate a good fit. The \(\chi^2\) -test is not significant ( \(p=0.409\) ). In addition, we have \(\mbox{CFI}=0.999>0.9\) , \(\mbox{TLI}=0.997>0.95\) , \(\mbox{RMSEA}=0.026<0.06\) , and \(\mbox{SRMR}=0.059<0.08\) . Our conclusion is that the model fits the data well, indicating that it gives a reasonable description of the dependence structure.

Finally, we plot the path diagram of the model:

Exercise 10.4

We specify the model and then fit it using sem :

The results indicate that the theory may not be correct. The “Maturity” variable is not significantly associated with age and sex. The endogenous variables are not significantly associated with “Maturity”. The \(\chi^2\) test, the TLI, and the RMSEA indicate problems with the model fit.

Exercise 10.5

We set file_path to the path to treegrowth.csv to import the data, specify the model and then fit it using sem :

The goodness-of-fit measures all indicate a very good fit. All relations are significant, except for the relation between x5 and the Environment variable.

Exercise 10.6

The indirect effect is significant, so there is mediation here. However, the indirect effect is only a small proportion of the total effect.

Exercise 10.7

The moderated effects are not significant, and we conclude that there is no statistical evidence for moderation.

Exercise 11.1

  • We load the data and compute the expected values using the formula \(y = 2x_1-x_2+x_3\cdot x_2\) :

Next, we plot the expected values against the actual values:
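
A sketch with hypothetical object and column names (a data frame pred_data with columns x1, x2, x3 and y):

    pred_data$expected <- 2 * pred_data$x1 - pred_data$x2 +
      pred_data$x3 * pred_data$x2

    plot(pred_data$expected, pred_data$y,
         xlab = "Expected value", ylab = "Observed y")
    abline(0, 1, col = "red")   # points should lie close to this line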

The points seem to follow a straight line, and a linear model seems appropriate.

  • Next, we fit a linear model to the first 20 observations:
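
Continuing with the hypothetical names from the sketch above:

    m <- lm(y ~ x1 + x2 + x3, data = pred_data[1:20, ])
    summary(m)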

The \(R^2\) -value is pretty high: 0.91. x1 and x2 both have low p-values, as does the F-test for the regression. We can check the model fit by comparing the fitted values to the actual values. We add a red line that the points should follow if we have a good fit:

The model seems to be pretty good! Now let’s see how well it does when faced with new data.

  • We make predictions for the 10 new observations:

We can plot the results for the last 10 observations, which weren’t used when we fitted the model:

The results are much worse than before! The correlation between the predicted values and the actual values is very low:

Despite the good in-sample performance (as indicated e.g. by the high \(R^2\) ), the model doesn’t seem to be very useful for prediction.

  • Perhaps you noted that the effect of x3 wasn’t significant in the model. Perhaps the performance will improve if we remove it? Let’s try!

The p-values and \(R^2\) still look very promising. Let’s make predictions for the new observations and check the results:

The predictions are no better than before - indeed, the correlation between the actual and predicted values is even lower this time out!

  • Finally, we fit a correctly specified model and evaluate the results:

The predictive performance of the model remains low, which shows that model misspecification wasn’t the (only) reason for the poor performance of the previous models.

Exercise 11.2

We set file_path to the path to estates.xlsx and then load the data:

There are a lot of missing values which can cause problems when fitting the model, so let’s remove those:

Next, we fit a linear model and evaluate it with LOOCV using caret and train :
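
A sketch of the evaluation (using all remaining variables as explanatory variables is an assumption):

    library(caret)

    estates <- na.omit(estates)   # remove rows with missing values

    tc <- trainControl(method = "LOOCV")
    m <- train(selling_price ~ ., data = estates, method = "lm", trControl = tc)
    m   # prints the RMSE and MAE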

The \(RMSE\) is 547 and the \(MAE\) is 395 kSEK. The average selling price in the data ( mean(estates$selling_price) ) is 2843 kSEK, meaning that the \(MAE\) is approximately 13 % of the mean selling price. This is not unreasonably high for this application. Prediction errors are definitely expected here, given the fact that we have relatively few variables - the selling price can be expected to depend on several things not captured by the variables in our data (proximity to schools, access to public transport, and so on). Moreover, houses in Sweden are not sold at fixed prices, but subject to bidding, which can cause prices to fluctuate a lot. All in all, an \(MAE\) of 395 is pretty good, and, at the very least, the model seems useful for getting a ballpark figure for the price of a house.

Exercise 11.3

We set file_path to the path to estates.xlsx and then load and clean the data:

  • Next, we evaluate the model with 10-fold cross-validation a few times:

In my runs, the \(MAE\) ranged from 391 to 405. Not a massive difference on the scale of the data, but there is clearly some variability in the results.

  • Next, we run repeated 10-fold cross-validations a few times:

In my runs the \(MAE\) varied between 396.0 and 397.4. There is still some variability, but it is much smaller than for a simple 10-fold cross-validation.

Exercise 11.4

Next, we evaluate the model with the bootstrap a few times:

In my runs, the \(MAE\) varied between 410.0 and 411.8, meaning that the variability is similar to that with repeated 10-fold cross-validation. When I increased the number of bootstrap samples to 9,999, the \(MAE\) stabilised around 411.7.

Exercise 11.5

We load and format the data as in the beginning of Section 11.1.7 . We can then fit the two models using train :

To compare the models, we use evalm to plot ROC and calibration curves:

Model 2 performs much better, both in terms of \(AUC\) and calibration. Adding two more variables has both increased the predictive performance of the model (a much higher \(AUC\) ) and led to a better-calibrated model.

Exercise 11.9

First, we load and clean the data:

Next, we fit a ridge regression model and evaluate it with LOOCV using caret and train :

Noticing that the \(\lambda\) that gave the best \(RMSE\) was 10, which was the maximal \(\lambda\) that we investigated, we rerun the code, allowing for higher values of \(\lambda\) :

The \(RMSE\) is 549 and the \(MAE\) is 399. In this case, ridge regression did not improve the performance of the model compared to an ordinary linear regression.

Exercise 11.10

We load and format the data as in the beginning of Section 11.1.7 .

  • We can now fit the models using train , making sure to add family = "binomial" :

The best value for \(\lambda\) is 0, meaning that no regularisation is used.

  • Next, we add summaryFunction = twoClassSummary and metric = "ROC" , which means that \(AUC\) and not accuracy will be used to find the optimal \(\lambda\) :

The best value for \(\lambda\) is still 0. For this dataset, both accuracy and \(AUC\) happened to give the same \(\lambda\) , but that isn’t always the case.

Exercise 11.11

Next, we fit a lasso model and evaluate it with LOOCV using caret and train :

The \(RMSE\) is 545 and the \(MAE\) is 394. Both are a little lower than for the ordinary linear regression, but the difference is small in this case. To see which variables have been removed, we can use:
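
Assuming m is the caret object fitted with method = "glmnet" , the coefficients at the chosen \(\lambda\) can be inspected as follows (variables whose coefficient is shown as "." have been removed by the lasso):

    coef(m$finalModel, s = m$bestTune$lambda)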

Note that this data isn’t perfectly suited to the lasso, because most variables are useful in explaining the selling price. Where the lasso really shines is in problems where a lot of the variables, perhaps even most, aren’t useful in explaining the response variable. We’ll see an example of that in the next exercise.

Exercise 11.12

  • We try fitting a linear model to the data:

There are no error messages, but summary reveals that there were problems: Coefficients: (101 not defined because of singularities) and for half the variables we don’t get estimates of the coefficients. It is not possible to fit ordinary linear models when there are more variables than observations (there is no unique solution to the least squares equations from which we obtain the coefficient estimates), which leads to this strange-looking output.

  • Lasso models can be used even when the number of variables is greater than the number of observations - regularisation ensures that there will be a unique solution. We fit a lasso model using caret and train :

Next, we have a look at what variables have non-zero coefficients:

Your mileage may vary (try running the simulation more than once!), but it is likely that the lasso will have picked at least the first four of the explanatory variables, probably along with some additional variables. Try changing the ratio between n and p in your experiment, or the size of the coefficients used when generating y , and see what happens.

Exercise 11.13

Next, we fit an elastic net model and evaluate it with LOOCV using caret and train :

We get a slight improvement over the lasso, with an \(RMSE\) of 543.5 and an \(MAE\) of 393.

Exercise 11.14

We load and format the data as in the beginning of Section 11.1.7 . We can then fit the model using train . We set summaryFunction = twoClassSummary and metric = "ROC" to use \(AUC\) to find the optimal value of the complexity parameter cp .

  • Next, we plot the resulting decision tree:

The tree is pretty large. The parameter cp , called a complexity parameter, can be used to prune the tree, i.e. to make it smaller. Let’s try setting a larger value for cp :

That was way too much pruning - now the tree is too small! Try a value somewhere in-between:

That seems like a good compromise. The tree is small enough for us to understand and discuss, but hopefully large enough that it still has a high \(AUC\) .

  • For presentation and interpretability purposes we can experiment with manually setting different values of cp . We can also let train find an optimal value of cp for us, maximising for instance the \(AUC\) . We’ll use tuneGrid = expand.grid(cp = seq(0, 0.01, 0.001)) to find a good choice of cp somewhere between 0 and 0.01:

In some cases, increasing cp can increase the \(AUC\) , but not here - a cp of 0 turns out to be optimal in this instance.

Finally, to visually evaluate the model, we use evalm to plot ROC and calibration curves:

Exercise 11.15

  • We set file_path to the path of bacteria.csv , then load and format the data as in Section 11.3.3 :

Next, we fit a regression tree model using rows 45 to 90:

Finally, we make predictions for the entire dataset and compare the results to the actual outcomes:

Regression trees are unable to extrapolate beyond the training data. By design, they will make constant predictions whenever the values of the explanatory variables go beyond those in the training data. Bear this in mind if you use tree-based models for predictions!

Exercise 11.16

First, we load the data as in Section 4.11 :

Next, we fit a classification tree model with Kernel_length and Compactness as explanatory variables:

Finally, we plot the decision boundaries:

The decision boundaries seem pretty good - most points in the lower left part belong to variety 3, most in the middle to variety 1, and most to the right to variety 2.

Exercise 11.17

We load and format the data as in the beginning of Section 11.1.7 . We can then fit the models using train (fitting m2 takes a while):

Next, we compare the results of the best models:

And finally, a visual comparison:

The calibration curves may look worrisome, but the main reason that they deviate from the straight line is that almost all observations have predicted probabilities close to either 0 or 1. To see this, we can have a quick look at the histogram of the predicted probabilities that the wines are white:

We used 10-fold cross-validation here, as using repeated cross-validation would take too long (at least in this case, where we only study this data as an example). As we’ve seen before, that means that the performance metrics can vary a lot between runs, so we shouldn’t read too much into the difference we found here.

Exercise 11.18

Next, we fit a random forest using rows 45 to 90:

The model does very well for the training data, but fails to extrapolate beyond it. Because random forests are based on decision trees, they give constant predictions whenever the values of the explanatory variables go beyond those in the training data.

Exercise 11.19

Next, we fit a random forest model with Kernel_length and Compactness as explanatory variables:

The decision boundaries are much more complex and flexible than those for the decision tree of Exercise 11.16 . Perhaps they are too flexible, and the model has overfitted to the training data?

Exercise 11.20

We load and format the data as in the beginning of Section 11.1.7 . We can then fit the model using train . Try a large number of parameter values to see if you can get a high \(AUC\) . You can try using a simple 10-fold cross-validation to find reasonable candidate values for the parameters, and then rerun the tuning with a repeated 10-fold cross-validation with parameter values close to those that were optimal in your first search.

Exercise 11.21

Next, we fit a boosted trees model using rows 45 to 90:

The model does OK for the training data, but fails to extrapolate beyond it. Because boosted trees models are based on decision trees, they give constant predictions whenever the values of the explanatory variables go beyond those in the training data.

Exercise 11.22

Next, we fit a boosted trees model with Kernel_length and Compactness as explanatory variables:

The decision boundaries are much more complex and flexible than those for the decision tree of Exercise 11.16 , but the model does not appear to have overfitted like the random forest in Exercise 11.19 .

Exercise 11.23

First, we fit a decision tree using rows 45 to 90:

Next, we fit a model tree using rows 45 to 90. The only explanatory variable available to us is Time , and we want to use that both for the models in the nodes and for the splits:

Next, we make predictions for the entire dataset and compare the results to the actual outcomes. We plot the predictions from the decision tree in red and those from the model tree in blue:

Neither model does particularly well (but they fail in different ways).

  • Next, we repeat the same steps, but use observations 20 to 120 for fitting the models:

As we can see from the plot of the model tree, it (correctly!) identifies different time phases in which the bacteria grow at different speeds. It therefore also manages to extrapolate better than the decision tree, which predicts no growth as Time is increased beyond what was seen in the training data.

Exercise 11.24

We load and format the data as in the beginning of Section 11.1.7 . We can then fit the model using train as follows:

To round things off, we evaluate the model using evalm :

Exercise 11.25

Next, we fit LDA and QDA models with Kernel_length and Compactness as explanatory variables:

Next, we plot the decision boundaries in the same scatterplot (LDA is black and QDA is orange):

The decision boundaries are fairly similar and seem pretty reasonable. QDA offers more flexible non-linear boundaries, but the difference isn’t huge.

Exercise 11.26

Next, we fit the MDA model with Kernel_length and Compactness as explanatory variables:

The decision boundaries are similar to those of QDA.

Exercise 11.27

We load and format the data as in the beginning of Section 11.1.7 . We’ll go with a polynomial kernel and compare polynomials of degree 2 and 3. We can fit the model using train as follows:

And, as usual, we can then plot ROC and calibration curves:

Exercise 11.28

Next, we fit an SVM with a polynomial kernel using rows 45 to 90:

Similar to the linear model in Section 11.3.3 , the SVM model does not extrapolate too well outside the training data. Unlike tree-based models, however, it does not yield constant predictions for values of the explanatory variable that are outside the range in the training data. Instead, the fitted function is assumed to follow the same shape as in the training data.

  • Next, we repeat the same steps using the data from rows 20 to 120:

The results are disappointing. Using a different kernel could improve the results though, so go ahead and give that a try!

Exercise 11.29

Next, we fit two different SVM models with Kernel_length and Compactness as explanatory variables:

Next, we plot the decision boundaries in the same scatterplot (the polynomial kernel is black and the radial basis kernel is orange):

It is likely the case that the polynomial kernel gives similar results to e.g. MDA, whereas the radial basis kernel gives more flexible decision boundaries.

Exercise 11.30

We load and format the data as in the beginning of Section 11.1.7 . We can then fit the model using train . We set summaryFunction = twoClassSummary and metric = "ROC" to use \(AUC\) to find the optimal \(k\) . We make sure to add a preProcess argument to train , to standardise the data:
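
A sketch with assumed names for the wine data ( type being the red/white label); the preProcess argument standardises the variables before the kNN fit:

    library(caret)

    tc <- trainControl(method = "cv", number = 10,
                       classProbs = TRUE, summaryFunction = twoClassSummary)

    m <- train(type ~ ., data = wine, method = "knn",
               metric = "ROC", trControl = tc,
               preProcess = c("center", "scale"),
               tuneGrid = expand.grid(k = seq(1, 49, 2)))
    m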

To visually evaluate the model, we use evalm to plot ROC and calibration curves:

The performance is as good as, or a little better than, the best logistic regression model from Exercise 11.5 . We shouldn’t make too much of any differences though, as the models were evaluated in different ways - we used repeated 10-fold cross-validation for the logistic regression models and a simple 10-fold cross-validation here (because repeated cross-validation would be too slow in this case).

Exercise 11.31

Next, we fit two different kNN models with Kernel_length and Compactness as explanatory variables:

Next, we plot the decision boundaries:

The decision boundaries are quite “wiggly”, which will always be the case when there are enough points in the sample.

Exercise 11.32

We start by plotting the time series:

Next, we fit an ARIMA model after removing the seasonal component:

The residuals look pretty good for this model:

Finally, we make a forecast for the next 36 months, adding the seasonal component back and using bootstrap prediction intervals:
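
A generic sketch of these steps using the built-in AirPassengers series (the exercise's own series will differ): remove the seasonal component with stl, fit an ARIMA model, check the residuals, and forecast with bootstrap prediction intervals.

    library(forecast)

    plot(AirPassengers)                          # plot the series

    decomp <- stl(AirPassengers, s.window = "periodic")
    adjusted <- seasadj(decomp)                  # seasonally adjusted series

    fit <- auto.arima(adjusted)
    checkresiduals(fit)

    fc <- forecast(fit, h = 36, bootstrap = TRUE)
    plot(fc)
    # To present forecasts on the original scale, the seasonal component would
    # then be added back.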
