mon, 21-dec-2015, 16:58

Introduction

While riding to work this morning I figured out a way to disentangle the effects of trail quality and physical conditioning (both of which improve over the season) from temperature, which also tends to increase as the season progresses. As you may recall from my previous post, I found that days into the season (winter day of year) and minimum temperature were both negatively related to fat bike energy consumption. But because those variables are also related to each other, we can’t make statements about either one individually.

But what if we look at pairs of trips taken within two days of each other and compare the difference in temperature between those trips with the difference in energy consumption? We’ll only pair trips going the same direction (to or from work), and we’ll restrict the pairings to trips no more than two days apart. That eliminates seasonality from the data because we’re always comparing two trips from the same few days.

Data

For this analysis, I’m using SQL to filter the data because I’m better at window functions and filtering in SQL than in R. Here’s the code to grab the data from the database. (The CSV file and RMarkdown script are on my GitHub repo for this analysis.) The trick here is to categorize trips as being to work (“north”) or from work (“south”) and then include this field in the partition statement of the window function so I’m only getting the next trip that matches direction.

library(dplyr)
library(ggplot2)
library(scales)
library(knitr)  # for kable()

exercise_db <- src_postgres(host="example.com", dbname="exercise_data")

diffs <- tbl(exercise_db,
            build_sql(
   "WITH all_to_work AS (
      SELECT *,
            CASE WHEN extract(hour from start_time) < 11
                 THEN 'north' ELSE 'south' END AS direction
      FROM track_stats
      WHERE type = 'Fat Biking'
            AND miles between 4 and 4.3
   ), with_next AS (
      SELECT track_id, start_time, direction, kcal, miles, min_temp,
            lead(direction) OVER w AS next_direction,
            lead(start_time) OVER w AS next_start_time,
            lead(kcal) OVER w AS next_kcal,
            lead(miles) OVER w AS next_miles,
            lead(min_temp) OVER w AS next_min_temp
      FROM all_to_work
      WINDOW w AS (PARTITION BY direction ORDER BY start_time)
   )
   SELECT start_time, next_start_time, direction,
      min_temp, next_min_temp,
      kcal / miles AS kcal_per_mile,
      next_kcal / next_miles as next_kcal_per_mile,
      next_min_temp - min_temp AS temp_diff,
      (next_kcal / next_miles) - (kcal / miles) AS kcal_per_mile_diff
   FROM with_next
   WHERE next_start_time - start_time < '60 hours'
   ORDER BY start_time")) %>% collect()

write.csv(diffs, file="fat_biking_trip_diffs.csv", quote=TRUE,
         row.names=FALSE)
kable(head(diffs))
| start time          | next start time     | temp diff | kcal / mile diff |
|---------------------|---------------------|-----------|------------------|
| 2013-12-03 06:21:49 | 2013-12-05 06:31:54 |       3.0 |       -13.843866 |
| 2013-12-03 15:41:48 | 2013-12-05 15:24:10 |       3.7 |        -8.823329 |
| 2013-12-05 06:31:54 | 2013-12-06 06:39:04 |      23.4 |       -22.510564 |
| 2013-12-05 15:24:10 | 2013-12-06 16:38:31 |      13.6 |        -5.505662 |
| 2013-12-09 06:41:07 | 2013-12-11 06:15:32 |     -27.7 |       -10.227048 |
| 2013-12-09 13:44:59 | 2013-12-11 16:00:11 |     -25.4 |        -1.034789 |

Out of a total of 123 trips, 70 took place within 2 days of each other. We still don’t have a measure of trail quality, so pairs where the trail is smooth and hard one day and covered with fresh snow the next won’t be particularly good data points.

Let’s look at a plot of the data.

s = ggplot(data=diffs,
         aes(x=temp_diff, y=kcal_per_mile_diff)) +
   geom_point() +
   geom_smooth(method="lm", se=FALSE) +
   scale_x_continuous(name="Temperature difference between paired trips (degrees F)",
                     breaks=pretty_breaks(n=10)) +
   scale_y_continuous(name="Energy consumption difference (kcal / mile)",
                     breaks=pretty_breaks(n=10)) +
   theme_bw() +
   ggtitle("Paired fat bike trips to and from work within 2 days of each other")

print(s)
//media.swingleydev.com/img/blog/2015/12/paired_trip_plot.svg

This shows that when the temperature difference between two paired trips is negative (the second trip is colder than the first), additional energy is required for the second (colder) trip. This matches the pattern we saw in my earlier post where minimum temperature and winter day of year were negatively associated with energy consumption. But because we’ve used differences to remove seasonal effects, we can actually determine how large of an effect temperature has.

There are quite a few outliers here. Those in the region with very little difference in temperature are likely due to snowfall changing the trail conditions from one trip to the next. I’m not sure why there is so much scatter among the points on the left side of the graph; I don’t see any particular pattern among those points that might explain the higher than normal variation, and we don’t see the same variation among the points with a large positive difference in temperature, so I think this is just normal variation in the data not explained by temperature.

Results

Here are the linear regression results for this data.

summary(lm(data=diffs, kcal_per_mile_diff ~ temp_diff))
##
## Call:
## lm(formula = kcal_per_mile_diff ~ temp_diff, data = diffs)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -40.839  -4.584  -0.169   3.740  47.063
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  -2.1696     1.5253  -1.422    0.159
## temp_diff    -0.7778     0.1434  -5.424 8.37e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 12.76 on 68 degrees of freedom
## Multiple R-squared:  0.302,  Adjusted R-squared:  0.2917
## F-statistic: 29.42 on 1 and 68 DF,  p-value: 8.367e-07

The model and coefficient are both highly significant, and as we might expect, the intercept in the model is not significantly different from zero (if there isn’t a difference in temperature between two trips there shouldn’t be a difference in energy consumption either, on average). Temperature alone explains 30% of the variation in energy consumption, and the coefficient tells us the scale of the effect: each degree drop in temperature results in an increase in energy consumption of 0.78 kilocalories per mile. So for a 4 mile commute like mine, the difference between a trip at 10°F and one at −20°F is an additional 93 kilocalories (30 × 0.7778 × 4 = 93.34) on the colder trip. That might not sound like much in the context of the calories in food (93 kilocalories is about the energy in a large orange or a light beer), but my average energy consumption across all fat bike trips to and from work is 377 kilocalories, so 93 represents a large portion of the total.
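Here’s a minimal sketch of the same back-of-the-envelope calculation done from the fitted model object instead of by hand (the 30 degree drop and 4 mile distance are just the example values from the paragraph above):

paired_lm <- lm(kcal_per_mile_diff ~ temp_diff, data=diffs)
temp_drop <- 30       # going from 10°F down to -20°F
commute_miles <- 4    # the example commute distance
extra_kcal <- -coef(paired_lm)["temp_diff"] * temp_drop * commute_miles
extra_kcal
## roughly 93 kcal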

tags: R  temperature  exercise  fat bike 
sun, 20-dec-2015, 15:16

Introduction

I’ve had a fat bike since late November 2013, mostly using it to commute the 4.1 miles to and from work on the Goldstream Valley trail system. I used to classic ski exclusively, but that’s not particularly pleasant once the temperatures are below 0°F because I can’t keep my hands and feet warm enough, and the amount of glide you get on skis declines as the temperature goes down.

However, it’s also true that fat biking gets much harder the colder it gets. I think this is partly due to biking while wearing lots of extra layers, but also because of increased friction between the large tires and tubes of a fat bike. In this post I will look at how temperature and other variables affect the performance of a fat bike (and its rider).

The code and data for this post is available on GitHub.

Data

I log all my commutes (and other exercise) using the RunKeeper app, which uses the phone’s GPS to keep track of distance and speed, and connects to my heart rate monitor to track heart rate. I had been using a Polar HR chest strap, but after about a year it became flaky and I replaced it with a Scosche Rhythm+ arm band monitor. The data from RunKeeper is exported into GPX files, which I process and insert into a PostgreSQL database.

From the heart rate data, I estimate energy consumption (in kilocalories, or what appears on food labels as calories) using a formula from Keytel LR, et al. 2005, which I talk about in this blog post.

Let’s take a look at the data:

library(dplyr)
library(ggplot2)
library(scales)
library(lubridate)
library(munsell)
library(knitr)  # for kable()

fat_bike <- read.csv("fat_bike.csv", stringsAsFactors=FALSE, header=TRUE) %>%
    tbl_df() %>%
    mutate(start_time=ymd_hms(start_time, tz="US/Alaska"))

kable(head(fat_bike))
| start_time          | miles | time    | hours |  mph |    hr |  kcal | min_temp | max_temp |
|---------------------|-------|---------|-------|------|-------|-------|----------|----------|
| 2013-11-27 06:22:13 |  4.17 | 0:35:11 |  0.59 | 7.12 | 157.8 | 518.4 |     -1.1 |      1.0 |
| 2013-11-27 15:27:01 |  4.11 | 0:35:49 |  0.60 | 6.89 | 156.0 | 513.6 |      1.1 |      2.2 |
| 2013-12-01 12:29:27 |  4.79 | 0:55:08 |  0.92 | 5.21 | 172.6 | 951.5 |    -25.9 |    -23.9 |
| 2013-12-03 06:21:49 |  4.19 | 0:39:16 |  0.65 | 6.40 | 148.4 | 526.8 |     -4.6 |     -2.1 |
| 2013-12-03 15:41:48 |  4.22 | 0:30:56 |  0.52 | 8.19 | 154.6 | 434.5 |      6.0 |      7.9 |
| 2013-12-05 06:31:54 |  4.14 | 0:32:14 |  0.54 | 7.71 | 155.8 | 463.2 |     -1.6 |      2.9 |

There are a few things we need to do to the raw data before analyzing it. First, we want to restrict the data to just my commutes to and from work, and we want to categorize them as being one or the other. That way we can analyze trips to ABR and home separately, and we’ll reduce the variation within each analysis. If we were to analyze all fat biking trips together, we’d be lumping short and long trips, as well as those with a different proportion of hills or more challenging conditions. To get just trips to and from work, I’m restricting the distance to trips between 4.0 and 4.3 miles, and only those activities where there were two of them in a single day (to work and home from work). To categorize them into commutes to work and home, I filter based on the time of day.

I’m also calculating energy per mile and adding a “winter day of year” variable (wdoy), which is a measure of how far into the winter season the trip took place. We can’t just use day of year because that starts over on January 1st, so we subtract 120 days (roughly the number of days from January through April) from the date and take the day of year of the result. Finally, we split the data into trips to work and home.

I’m also excluding the very early season data from 2015 because the trail was in poor condition.

fat_bike_commute <- fat_bike %>%
    filter(miles>4, miles<4.3) %>%
    mutate(direction=ifelse(hour(start_time)<10, 'north', 'south'),
           date=as.Date(start_time, tz='US/Alaska'),
           wdoy=yday(date-days(120)),
           kcal_per_mile=kcal/miles) %>%
    group_by(date) %>%
    mutate(n=n()) %>%
    ungroup() %>%
    filter(n>1)

to_abr <- fat_bike_commute %>% filter(direction=='north',
                                      wdoy>210)
to_home <- fat_bike_commute %>% filter(direction=='south',
                                       wdoy>210)
kable(head(to_home %>% select(-date, -kcal, -n)))
| start_time          | miles | time    | hours |  mph |    hr | min_temp | max_temp | direction | wdoy | kcal_per_mile |
|---------------------|-------|---------|-------|------|-------|----------|----------|-----------|------|---------------|
| 2013-11-27 15:27:01 |  4.11 | 0:35:49 |  0.60 | 6.89 | 156.0 |      1.1 |      2.2 | south     |  211 |     124.96350 |
| 2013-12-03 15:41:48 |  4.22 | 0:30:56 |  0.52 | 8.19 | 154.6 |      6.0 |      7.9 | south     |  217 |     102.96209 |
| 2013-12-05 15:24:10 |  4.18 | 0:29:07 |  0.49 | 8.60 | 150.7 |      9.7 |     12.0 | south     |  219 |      94.13876 |
| 2013-12-06 16:38:31 |  4.17 | 0:26:04 |  0.43 | 9.60 | 154.3 |     23.3 |     24.7 | south     |  220 |      88.63309 |
| 2013-12-09 13:44:59 |  4.11 | 0:32:06 |  0.54 | 7.69 | 161.3 |     27.5 |     28.5 | south     |  223 |     119.19708 |
| 2013-12-11 16:00:11 |  4.19 | 0:33:48 |  0.56 | 7.44 | 157.6 |      2.1 |      4.5 | south     |  225 |     118.16229 |

Analysis

Here’s a plot of the data. We’re plotting all trips with winter day of year on the x-axis and energy per mile on the y-axis. The color of the points indicates the minimum temperature, and the straight line shows the trend of the relationship.

s <- ggplot(data=fat_bike_commute %>% filter(wdoy>210), aes(x=wdoy, y=kcal_per_mile, colour=min_temp)) +
    geom_smooth(method="lm", se=FALSE, colour=mnsl("10B 7/10", fix=TRUE)) +
    geom_point(size=3) +
    scale_x_continuous(name=NULL,
                       breaks=c(215, 246, 277, 305, 336),
                       labels=c('1-Dec', '1-Jan', '1-Feb', '1-Mar', '1-Apr')) +
    scale_y_continuous(name="Energy (kcal)", breaks=pretty_breaks(n=10)) +
    scale_colour_continuous(low=mnsl("7.5B 5/12", fix=TRUE), high=mnsl("7.5R 5/12", fix=TRUE),
                            breaks=pretty_breaks(n=5),
                            guide=guide_colourbar(title="Min temp (°F)", reverse=FALSE, barheight=8)) +
    ggtitle("All fat bike trips") +
    theme_bw()
print(s)
//media.swingleydev.com/img/blog/2015/12/wdoy_plot.svg

Across all trips, we can see that as the winter progresses, I consume less energy per mile. This is hopefully because my physical condition improves the more I ride, and because the trail conditions improve as the snow pack develops and the trail gets harder with use. You can also see a pattern in the color of the dots, with the bluer (and colder) points near the top and the warmer temperature trips near the bottom.

Let’s look at the temperature relationship:

s <- ggplot(data=fat_bike_commute %>% filter(wdoy>210), aes(x=min_temp, y=kcal_per_mile, colour=wdoy)) +
    geom_smooth(method="lm", se=FALSE, colour=mnsl("10B 7/10", fix=TRUE)) +
    geom_point(size=3) +
    scale_x_continuous(name="Minimum temperature (degrees F)", breaks=pretty_breaks(n=10)) +
    scale_y_continuous(name="Energy (kcal)", breaks=pretty_breaks(n=10)) +
    scale_colour_continuous(low=mnsl("7.5PB 2/12", fix=TRUE), high=mnsl("7.5PB 8/12", fix=TRUE),
                            breaks=c(215, 246, 277, 305, 336),
                            labels=c('1-Dec', '1-Jan', '1-Feb', '1-Mar', '1-Apr'),
                            guide=guide_colourbar(title=NULL, reverse=TRUE, barheight=8)) +
    ggtitle("All fat bike trips") +
    theme_bw()
print(s)
//media.swingleydev.com/img/blog/2015/12/min_temp_plot.svg

A similar pattern: as the temperature drops, it takes more energy to go the same distance. The color of the points also shows the relationship from the earlier plot, where trips taken later in the season require less energy.

There is also a correlation between winter day of year and temperature. Since the winter fat biking season essentially begins in December, temperatures tend to rise as the season goes on.
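We can put a rough number on that relationship with a quick check (a sketch, not part of the original analysis): the correlation between winter day of year and minimum temperature across the commute trips plotted above.

with(fat_bike_commute %>% filter(wdoy > 210),
     cor(wdoy, min_temp, use="complete.obs"))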

Results

The relationship between winter day of year and temperature means that we’ve got multicollinearity in any model that includes both of them. This doesn’t mean we shouldn’t include them, nor that the significance or predictive power of the model is reduced. It does mean we can’t reliably interpret the individual regression coefficients as the separate effects of each variable.

Here are the linear models for trips to work and home:

to_abr_lm <- lm(data=to_abr, kcal_per_mile ~ min_temp + wdoy)
print(summary(to_abr_lm))
##
## Call:
## lm(formula = kcal_per_mile ~ min_temp + wdoy, data = to_abr)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -27.845  -6.964  -3.186   3.609  53.697
##
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept) 170.81359   15.54834  10.986 1.07e-14 ***
## min_temp     -0.45694    0.18368  -2.488   0.0164 *
## wdoy         -0.29974    0.05913  -5.069 6.36e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.9 on 48 degrees of freedom
## Multiple R-squared:  0.4069, Adjusted R-squared:  0.3822
## F-statistic: 16.46 on 2 and 48 DF,  p-value: 3.595e-06
to_home_lm <- lm(data=to_home, kcal_per_mile ~ min_temp + wdoy)
print(summary(to_home_lm))
##
## Call:
## lm(formula = kcal_per_mile ~ min_temp + wdoy, data = to_home)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -21.615 -10.200  -1.068   3.741  39.005
##
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept) 144.16615   18.55826   7.768 4.94e-10 ***
## min_temp     -0.47659    0.16466  -2.894  0.00570 **
## wdoy         -0.20581    0.07502  -2.743  0.00852 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 13.49 on 48 degrees of freedom
## Multiple R-squared:  0.5637, Adjusted R-squared:  0.5455
## F-statistic: 31.01 on 2 and 48 DF,  p-value: 2.261e-09

The models confirm what we saw in the plots. Both regression coefficients are negative, which means that as the temperature rises (and as the winter goes on) I consume less energy per mile. The models themselves are significant as are the coefficients, although less so in the trips to work. The amount of variation in kcal/mile explained by minimum temperature and winter day of year is 41% for trips to work and 56% for trips home.

What accounts for the rest of the variation? My guess is that trail conditions are the missing factor here; specifically fresh snow, or a trail churned up by snowmachiners. I think that’s also why the results are better for trips home than for trips to work. On days when we get snow overnight, I am almost certainly riding on a pristine snow-covered trail, but by the time I leave work, the trail will be smoother and harder due to all the traffic it’s seen over the course of the day.

Conclusions

We didn’t really find anything surprising here: it is significantly harder to ride a fat bike when it’s colder. Because of improving conditioning and trail conditions, as well as the tendency for warmer weather later in the season, it also gets easier to ride as the winter goes on.

tags: R  temperature  exercise  fat bike 
sat, 30-may-2015, 09:21

Introduction

Often when I’m watching Major League Baseball games a player will come up to bat or pitch and I’ll comment “former Oakland Athletic” and the player’s name. It seems like there’s always one or two players on the roster of every team that used to be an Athletic.

Let’s find out. We’ll use the Retrosheet database again, this time using the roster lists from 1990 through 2014 and comparing it against the 40-man rosters of current teams. That data will have to be scraped off the web, since Retrosheet data doesn’t exist for the current season and rosters change frequently during the season.

Methods

As usual, I’ll use R for the analysis, and rmarkdown to produce this post.

library(plyr)
library(dplyr)
library(rvest)
library(knitr)  # for kable()
options(stringsAsFactors=FALSE)

We’re using plyr and dplyr for most of the data manipulation and rvest to grab the 40-man rosters for each team from the MLB website. Setting stringsAsFactors to false prevents various base R functions from converting strings to factors. We’re not doing any statistics with this data, so factors aren’t necessary, and they make comparisons and joins between data frames more difficult.
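As a small illustration of that last point (a sketch with made-up rows, not part of the analysis): joining two data frames whose key columns are factors with different levels forces dplyr to coerce the keys to character, and it warns you about it (the exact message depends on the dplyr version).

a <- data.frame(player=factor(c("Coco Crisp", "Sonny Gray")),
                year=c(2014, 2014))
b <- data.frame(player=factor(c("Sonny Gray", "Sean Doolittle")),
                team="OAK")
inner_join(a, b, by="player")
## dplyr warns about joining factors with different levels and coerces to character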

Players by team for previous years

Load the roster data:

retrosheet_db <- src_postgres(host="localhost", port=5438,
                              dbname="retrosheet", user="cswingley")

rosters <- tbl(retrosheet_db, "rosters")

all_recent_players <-
     rosters %>%
     filter(year>1989) %>%
     collect() %>%
     mutate(player=paste(first_name, last_name),
            team=team_id) %>%
     select(player, team, year)

save(all_recent_players, file="all_recent_players.rdata", compress="gzip")

The Retrosheet database lives in PostgreSQL on my computer, but one of the advantages of using dplyr for retrieval is it would be easy to change the source statement to connect to another sort of database (SQLite, MySQL, etc.) and the rest of the commands would be the same.

We only grab data since 1990 and we combine the first and last names into a single field because that’s how player names are listed on the 40-man roster pages on the web.

Now we filter the list down to Oakland Athletics players and combine the rows for each player, summarizing the years they played for the A’s into a single column.

oakland_players <- all_recent_players %>%
    filter(team=='OAK') %>%
    group_by(player) %>%
    summarise(years=paste(year, collapse=', '))

Here’s what that looks like:

kable(head(oakland_players))
| player           | years            |
|------------------|------------------|
| A.J. Griffin     | 2012, 2013       |
| A.J. Hinch       | 1998, 1999, 2000 |
| Aaron Cunningham | 2008, 2009       |
| Aaron Harang     | 2002, 2003       |
| Aaron Small      | 1996, 1997, 1998 |
| Adam Dunn        | 2014             |
| ...              | ...              |

Current 40-man rosters

Major League Baseball has the 40-man rosters for each team on their site. In order to extract them, we create a list of the team identifiers (oak, sf, etc.), then loop over this list, grabbing the team name and all the player names. We also set up lists for the team names (“Athletics”, “Giants”, etc.) so we can replace the short identifiers with real names later.

teams=c("ana", "ari", "atl", "bal", "bos", "cws", "chc", "cin", "cle", "col",
        "det", "mia", "hou", "kc", "la", "mil", "min", "nyy", "nym", "oak",
        "phi", "pit", "sd", "sea", "sf", "stl", "tb", "tex", "tor", "was")

team_names = c("Angels", "Diamondbacks", "Braves", "Orioles", "Red Sox",
               "White Sox", "Cubs", "Reds", "Indians", "Rockies", "Tigers",
               "Marlins", "Astros", "Royals", "Dodgers", "Brewers", "Twins",
               "Yankees", "Mets", "Athletics", "Phillies", "Pirates",
               "Padres", "Mariners", "Giants", "Cardinals", "Rays", "Rangers",
               "Blue Jays", "Nationals")

get_players <- function(team) {
    # reads the 40-man roster data for a team, returns a data frame

    roster_html <- html(paste("http://www.mlb.com/team/roster_40man.jsp?c_id=",
                              team,
                              sep=''))

    players <- roster_html %>%
        html_nodes("#roster_40_man a") %>%
        html_text()

    data.frame(team=team, player=players)
}

current_rosters <- ldply(teams, get_players)

save(current_rosters, file="current_rosters.rdata", compress="gzip")

Here’s what that data looks like:

kable(head(current_rosters))
team player
ana Jose Alvarez
ana Cam Bedrosian
ana Andrew Heaney
ana Jeremy McBryde
ana Mike Morin
ana Vinnie Pestano
... ...

Combine the data

To find out how many players on each Major League team used to play for the A’s we combine the former A’s players with the current rosters using player name. This may not be perfect due to differences in spelling (accented characters being the most likely issue), but the results look pretty good.
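One way to guard against the accent problem (a sketch, not something I’ve verified against this data) is to strip accents from both sides before joining. iconv’s ASCII//TRANSLIT conversion is locale-dependent, so results can vary by system.

normalize_name <- function(x) iconv(x, to="ASCII//TRANSLIT")

oakland_keyed <- oakland_players %>%
    mutate(player_key=normalize_name(player)) %>%
    select(player_key, years)

roster_with_oakland_time_normalized <- current_rosters %>%
    mutate(player_key=normalize_name(player)) %>%
    left_join(oakland_keyed, by="player_key") %>%
    filter(!is.na(years)) %>%
    select(team, player, years)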

roster_with_oakland_time <- current_rosters %>%
    left_join(oakland_players, by="player") %>%
    filter(!is.na(years))

kable(head(roster_with_oakland_time))
| team | player           | years                        |
|------|------------------|------------------------------|
| ana  | Huston Street    | 2005, 2006, 2007, 2008       |
| ana  | Grant Green      | 2013                         |
| ana  | Collin Cowgill   | 2012                         |
| ari  | Brad Ziegler     | 2008, 2009, 2010, 2011       |
| ari  | Cliff Pennington | 2008, 2009, 2010, 2011, 2012 |
| atl  | Trevor Cahill    | 2009, 2010, 2011             |
| ...  | ...              | ...                          |

You can see from this table (just the first six rows of the results) that the Angels have three players that were Athletics.

Let’s do the math and find out how many former A’s are on each team’s roster.

n_former_players_by_team <-
    roster_with_oakland_time %>%
        group_by(team) %>%
        arrange(player) %>%
        summarise(number_of_players=n(),
                  players=paste(player, collapse=", ")) %>%
        arrange(desc(number_of_players)) %>%
        inner_join(data.frame(team=teams, team_name=team_names),
                   by="team") %>%
        select(team_name, number_of_players, players)

names(n_former_players_by_team) <- c('Team', 'Number',
                                     'Former Oakland Athletics')
kable(n_former_players_by_team,
      align=c('l', 'r', 'r'))
Team Number Former Oakland Athletics
Athletics 22 A.J. Griffin, Andy Parrino, Billy Burns, Coco Crisp, Craig Gentry, Dan Otero, Drew Pomeranz, Eric O'Flaherty, Eric Sogard, Evan Scribner, Fernando Abad, Fernando Rodriguez, Jarrod Parker, Jesse Chavez, Josh Reddick, Nate Freiman, Ryan Cook, Sam Fuld, Scott Kazmir, Sean Doolittle, Sonny Gray, Stephen Vogt
Astros 5 Chris Carter, Dan Straily, Jed Lowrie, Luke Gregerson, Pat Neshek
Braves 4 Jim Johnson, Jonny Gomes, Josh Outman, Trevor Cahill
Rangers 4 Adam Rosales, Colby Lewis, Kyle Blanks, Michael Choice
Angels 3 Collin Cowgill, Grant Green, Huston Street
Cubs 3 Chris Denorfia, Jason Hammel, Jon Lester
Dodgers 3 Alberto Callaspo, Brandon McCarthy, Brett Anderson
Mets 3 Anthony Recker, Bartolo Colon, Jerry Blevins
Yankees 3 Chris Young, Gregorio Petit, Stephen Drew
Rays 3 David DeJesus, Erasmo Ramirez, John Jaso
Diamondbacks 2 Brad Ziegler, Cliff Pennington
Indians 2 Brandon Moss, Nick Swisher
White Sox 2 Geovany Soto, Jeff Samardzija
Tigers 2 Rajai Davis, Yoenis Cespedes
Royals 2 Chris Young, Joe Blanton
Marlins 2 Dan Haren, Vin Mazzaro
Padres 2 Derek Norris, Tyson Ross
Giants 2 Santiago Casilla, Tim Hudson
Nationals 2 Gio Gonzalez, Michael Taylor
Red Sox 1 Craig Breslow
Rockies 1 Carlos Gonzalez
Brewers 1 Shane Peterson
Twins 1 Kurt Suzuki
Phillies 1 Aaron Harang
Mariners 1 Seth Smith
Cardinals 1 Matt Holliday
Blue Jays 1 Josh Donaldson

Pretty cool. I do notice one problem: there are actually two Chris Youngs playing in baseball today. Chris Young the outfielder played for the A’s in 2013 and now plays for the Yankees. There’s also a pitcher named Chris Young who shows up on our list as a former A’s player now playing for the Royals, but this Chris Young never actually played for the A’s. The Retrosheet roster data includes which hand (left and/or right) a player bats and throws with, and it’s possible this could be used with the MLB 40-man roster data to eliminate incorrect joins like this, but even with that enhancement, we still have the problem that we’re joining on things that aren’t guaranteed to uniquely identify a player. That’s the nature of attempting to combine data from different sources.
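At a minimum, we could flag the names that are ambiguous on the Retrosheet side before trusting the join. Here’s a sketch that assumes the rosters table carries Retrosheet’s unique player identifier in a column called player_id (the column name is a guess):

ambiguous_names <- tbl(retrosheet_db, "rosters") %>%
    filter(year > 1989) %>%
    collect() %>%
    mutate(player=paste(first_name, last_name)) %>%
    group_by(player) %>%
    summarise(n_ids=n_distinct(player_id)) %>%  # player_id is assumed to exist
    filter(n_ids > 1)

# any current-roster player whose name shows up here deserves a closer look
current_rosters %>% inner_join(ambiguous_names, by="player")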

One other interesting thing: I kept the A’s in the list because the number of former A’s currently playing for the A’s is a measure of how much turnover there is within an organization. Of the 40 players on the current A’s roster, only 22 have ever played for the A’s. That means 18 came from other teams or are promotions from the minors who haven’t played for any Major League team yet.

All teams

Teams with players on other teams

Now that we’ve looked at how many A’s players have played for other teams, let’s see how many former players from each team are currently playing for other teams. My gut feeling is that the A’s will be at the top of this list: a small market, low budget team that is forced to turn players over regularly in order to stay competitive.

We already have the data for this, but need to manipulate it in a different way to get the result.

teams <- c("ANA", "ARI", "ATL", "BAL", "BOS", "CAL", "CHA", "CHN", "CIN",
           "CLE", "COL", "DET", "FLO", "HOU", "KCA", "LAN", "MIA", "MIL",
           "MIN", "MON", "NYA", "NYN", "OAK", "PHI", "PIT", "SDN", "SEA",
           "SFN", "SLN", "TBA", "TEX", "TOR", "WAS")

team_names <- c("Angels", "Diamondbacks", "Braves", "Orioles", "Red Sox",
                "Angels", "White Sox", "Cubs", "Reds", "Indians", "Rockies",
                "Tigers", "Marlins", "Astros", "Royals", "Dodgers", "Marlins",
                "Brewers", "Twins", "Expos", "Yankees", "Mets", "Athletics",
                "Phillies", "Pirates", "Padres", "Mariners", "Giants",
                "Cardinals", "Rays", "Rangers", "Blue Jays", "Nationals")

players_on_other_teams <- all_recent_players %>%
    group_by(player, team) %>%
    summarise(years=paste(year, collapse=", ")) %>%
    inner_join(current_rosters, by="player") %>%
    mutate(current_team=team.y, former_team=team.x) %>%
    select(player, current_team, former_team, years) %>%
    inner_join(data.frame(former_team=teams, former_team_name=team_names),
                by="former_team") %>%
    group_by(former_team_name, current_team) %>%
    summarise(n=n()) %>%
    group_by(former_team_name) %>%
    arrange(desc(n)) %>%
    mutate(rank=row_number()) %>%
    filter(rank!=1) %>%
    summarise(n=sum(n)) %>%
    arrange(desc(n))

This is a pretty complicated set of operations. The main trick (and possible flaw in the analysis) is to get a list similar to the one we got for the A’s earlier, and eliminate the first row (the number of players on a team who played for that same team in the past) before counting the total players who have played for other teams. It would probably be better to eliminate that case using team name, but the team codes vary between Retrosheet and the MLB roster data.
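A sketch of that alternative: build crosswalks from both sets of team codes to a common team name and drop rows where the former and current franchise match by name. Here mlb_teams and mlb_team_names stand for the 40-man roster code and name vectors from the “Current 40-man rosters” section (they would need to be saved under separate names, since teams and team_names are reused above). The results below still come from the original approach.

mlb_crosswalk <- data.frame(current_team=mlb_teams,
                            current_team_name=mlb_team_names)
retrosheet_crosswalk <- data.frame(former_team=teams,
                                   former_team_name=team_names)

players_on_other_teams_by_name <- all_recent_players %>%
    group_by(player, team) %>%
    summarise(years=paste(year, collapse=", ")) %>%
    inner_join(current_rosters, by="player") %>%
    mutate(current_team=team.y, former_team=team.x) %>%
    inner_join(mlb_crosswalk, by="current_team") %>%
    inner_join(retrosheet_crosswalk, by="former_team") %>%
    filter(former_team_name != current_team_name) %>%
    group_by(former_team_name) %>%
    summarise(n=n()) %>%
    arrange(desc(n))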

Here are the results:

names(players_on_other_teams) <- c('Former Team', 'Number of players')

kable(players_on_other_teams)
Former Team Number of players
Athletics 57
Padres 57
Marlins 56
Rangers 55
Diamondbacks 51
Braves 50
Yankees 50
Angels 49
Red Sox 47
Pirates 46
Royals 44
Dodgers 43
Mariners 42
Rockies 42
Cubs 40
Tigers 40
Astros 38
Blue Jays 38
Rays 38
White Sox 38
Indians 35
Mets 35
Twins 33
Cardinals 31
Nationals 31
Orioles 31
Reds 28
Brewers 26
Phillies 25
Giants 24
Expos 4

The A’s are indeed on the top of the list, but surprisingly, the Padres are also at the top. I had no idea the Padres had so much turnover. At the bottom of the list are teams like the Giants and Phillies that have been on recent winning streaks and aren’t trading their players to other teams.

Current players on the same team

We can look at the reverse situation: how many players on the current roster played for that same team in past years? Instead of removing the current × former team combination with the highest number, we include only that combination, which is almost certainly the one where the former and current teams are the same.

players_on_same_team <- all_recent_players %>%
    group_by(player, team) %>%
    summarise(years=paste(year, collapse=", ")) %>%
    inner_join(current_rosters, by="player") %>%
    mutate(current_team=team.y, former_team=team.x) %>%
    select(player, current_team, former_team, years) %>%
    inner_join(data.frame(former_team=teams, former_team_name=team_names),
                by="former_team") %>%
    group_by(former_team_name, current_team) %>%
    summarise(n=n()) %>%
    group_by(former_team_name) %>%
    arrange(desc(n)) %>%
    mutate(rank=row_number()) %>%
    filter(rank==1,
           former_team_name!="Expos") %>%
    summarise(n=sum(n)) %>%
    arrange(desc(n))

names(players_on_same_team) <- c('Team', 'Number of players')

kable(players_on_same_team)
Team Number of players
Rangers 31
Rockies 31
Twins 30
Giants 29
Indians 29
Cardinals 28
Mets 28
Orioles 28
Tigers 28
Brewers 27
Diamondbacks 27
Mariners 27
Phillies 27
Reds 26
Royals 26
Angels 25
Astros 25
Cubs 25
Nationals 25
Pirates 25
Rays 25
Red Sox 24
Blue Jays 23
Athletics 22
Marlins 22
Padres 22
Yankees 22
Dodgers 21
White Sox 20
Braves 13

The A’s are near the bottom of this list, along with other teams that have been retooling because of a lack of recent success such as the Yankees and Dodgers. You would think there would be an inverse relationship between this table and the previous one (if a lot of your former players are currently playing on other teams they’re not playing on your team), but this isn’t always the case. The White Sox, for example, only have 20 players on their roster that were Sox in the past, and there aren’t very many of them playing on other teams either. Their current roster must have been developed from their own farm system or international signings, rather than by exchanging players with other teams.

sat, 25-apr-2015, 10:21

Introduction

One of the best sources of weather data in the United States comes from the National Weather Service's Cooperative Observer Network (COOP), which is available from NCDC. It's daily data, collected by volunteers at more than 10,000 locations. We participate in this program at our house (station id DW1454 / GHCND:USC00503368), collecting daily minimum and maximum temperature, liquid precipitation, snowfall and snow depth. We also collect river heights for Goldstream Creek as part of the Alaska Pacific River Forecast Center (station GSCA2). Traditionally, daily temperature measurements were collected using a minimum/maximum thermometer, which meant that the only way to calculate average daily temperature was by averaging the minimum and maximum temperature. Even though COOP observers typically have an electronic instrument that could calculate average daily temperature from continuous observations, the daily minimum and maximum are still what get reported.

In an earlier post we looked at methods used to calculate average daily temperature, and whether there are any biases in the way the National Weather Service calculates it from the average of the minimum and maximum daily temperature. We looked at five years of data collected at my house every five minutes, comparing the average of these observations against the average of the daily minimum and maximum. Here, we will repeat this analysis using data from the Climate Reference Network stations in the United States.

The US Climate Reference Network is a collection of 132 weather stations that are properly sited, maintained, and include multiple redundant measures of temperature and precipitation. Data is available from http://www1.ncdc.noaa.gov/pub/data/uscrn/products/ and includes monthly, daily, and hourly statistics, and sub-hourly (5-minute) observations. We’ll be focusing on the sub-hourly data, since it closely matches the data collected at my weather station.

A similar analysis using daily and hourly CRN data appears here.

Getting the raw data

I downloaded all the data using the following Unix commands:

$ wget http://www1.ncdc.noaa.gov/pub/data/uscrn/products/stations.tsv
$ wget -np -m http://www1.ncdc.noaa.gov/pub/data/uscrn/products/subhourly01/
$ find www1.ncdc.noaa.gov/ -type f -name 'CRN*.txt' -exec gzip {} \;

The code to insert all of this data into a database can be found here. Once inserted, I have a table named crn_stations that has the station data, and one named crn_subhourly with the five minute observation data.

Methods

Once again, we’ll use R to read the data, process it, and produce plots.

Libraries

Load the libraries we need:

library(dplyr)
library(lubridate)
library(ggplot2)
library(scales)
library(grid)

Connect to the database and load the data tables.

noaa_db <- src_postgres(dbname="noaa", host="mason")

crn_stations <- tbl(noaa_db, "crn_stations") %>%
    collect()

crn_subhourly <- tbl(noaa_db, "crn_subhourly")

Remove observations without temperature data, group by station and date, calculate average daily temperature using the two methods, remove any daily data without a full set of data, and collect the results into an R data frame. This looks very similar to the code used to analyze the data from my weather station.

crn_daily <-
    crn_subhourly %>%
        filter(!is.na(air_temperature)) %>%
        mutate(date=date(timestamp)) %>%
        group_by(wbanno, date) %>%
        summarize(t_mean=mean(air_temperature),
                  t_minmax_avg=(min(air_temperature)+
                                max(air_temperature))/2.0,
                  n=n()) %>%
        filter(n==24*12) %>%
        mutate(anomaly=t_minmax_avg-t_mean) %>%
        select(wbanno, date, t_mean, t_minmax_avg, anomaly) %>%
        collect()

The two types of daily average temperatures are calculated in this step:

summarize(t_mean=mean(air_temperature),
            t_minmax_avg=(min(air_temperature)+
                        max(air_temperature))/2.0)

Here, t_mean is the value calculated from all 288 five minute observations, and t_minmax_avg is the value from the daily minimum and maximum.

Now we join the observation data with the station data. This attaches station information such as the name and latitude of the station to each record.

crn_daily_stations <-
    crn_daily %>%
        inner_join(crn_stations, by="wbanno") %>%
        select(wbanno, date, state, location, latitude, longitude,
               t_mean, t_minmax_avg, anomaly)

Finally, save the data so we don’t have to do these steps again.

save(crn_daily_stations, file="crn_daily_averages.rdata")

Results

Here are the overall results of the analysis.

summary(crn_daily_stations$anomaly)
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
## -11.9000  -0.1028   0.4441   0.4641   1.0190  10.7900

The mean anomaly across all stations and all dates is 0.46 degrees Celsius (0.84 degrees Fahrenheit). That’s a pretty significant error. Half the data is between −0.10 and +1.02°C (−0.19 and +1.83°F) and the full range is −11.9 to +10.8°C (−21.4 to +19.4°F).
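Because these anomalies are temperature differences, converting them to Fahrenheit only involves the 9/5 scaling factor, not the +32 offset. A quick sketch of the conversion used above:

c_diff_to_f_diff <- function(c_diff) c_diff * 9 / 5
c_diff_to_f_diff(c(-0.1028, 0.4641, 1.019))
## approximately -0.19, 0.84, and 1.83 degrees F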

Plots

Let’s look at some plots.

Raw data by latitude

To start, we’ll look at all the anomalies by station latitude. The plot only shows one percent of the actual anomalies because plotting 512,460 points would take a long time and the general pattern is clear from the reduced data set.

set.seed(43)
p <- ggplot(data=crn_daily_stations %>% sample_frac(0.01),
            aes(x=latitude, y=anomaly)) +
    geom_point(position="jitter", alpha=0.2) +
    geom_smooth(method="lm", se=FALSE) +
    theme_bw() +
    scale_x_continuous(name="Station latitude", breaks=pretty_breaks(n=10)) +
    scale_y_continuous(name="Temperature anomaly (degrees C)",
                       breaks=pretty_breaks(n=10))

print(p)
//media.swingleydev.com/img/blog/2015/04/crn_minmax_anomaly_scatterplot.svg

The clouds of points show the differences between the min/max daily average and the actual daily average temperature, where numbers above zero represent cases where the min/max calculation overestimates daily average temperature. The blue line is the fit of a linear model relating latitude with temperature anomaly. We can see that the fitted anomaly is positive everywhere, averaging around half a degree at lower latitudes and dropping somewhat as we proceed northward. You also get a sense from the actual data of how variable the anomaly is, and at what latitudes most of the stations are found.

Here are the regression results:

summary(lm(anomaly ~ latitude, data=crn_daily_stations))
##
## Call:
## lm(formula = anomaly ~ latitude, data = crn_daily_stations)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -12.3738  -0.5625  -0.0199   0.5499  10.3485
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept)  0.7403021  0.0070381  105.19   <2e-16 ***
## latitude    -0.0071276  0.0001783  -39.98   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.9632 on 512458 degrees of freedom
## Multiple R-squared:  0.00311,    Adjusted R-squared:  0.003108
## F-statistic:  1599 on 1 and 512458 DF,  p-value: < 2.2e-16

The overall model and coefficients are highly significant, and show a slight decrease in the positive anomaly as we move farther north. Perhaps this is part of the reason why the analysis of my station (at a latitude of 64.89) showed an average anomaly close to zero (−0.07°C / −0.13°F).

Anomalies by month and latitude

One of the results of our earlier analysis was a seasonal pattern in the anomalies at our station. Since we also know there is a latitudinal pattern in the data, let’s combine the two, plotting anomaly by month and faceting by latitude.

Station latitudes are binned into groups for plotting, and the plots themselves show the range that covers half of all anomalies for each latitude category × month. Including the full range of anomalies in each group tends to obscure the overall pattern, and the plot of the raw data didn’t show an obvious skew toward the rarer anomalies.

Here’s how we set up the data frames for the plot.

crn_daily_by_month <-
    crn_daily_stations %>%
        mutate(month=month(date),
               lat_bin=factor(ifelse(latitude<30, '<30',
                                     ifelse(latitude>60, '>60',
                                            paste(floor(latitude/10)*10,
                                                  (floor(latitude/10)+1)*10,
                                                  sep='-'))),
                              levels=c('<30', '30-40', '40-50',
                                       '50-60', '>60')))

summary_stats <- function(l) {
    s <- summary(l)
    data.frame(min=s['Min.'],
               first=s['1st Qu.'],
               median=s['Median'],
               mean=s['Mean'],
               third=s['3rd Qu.'],
               max=s['Max.'])
}

crn_by_month_lat_bin <-
    crn_daily_by_month %>%
        group_by(month, lat_bin) %>%
        do(summary_stats(.$anomaly)) %>%
        ungroup()

station_years <-
    crn_daily_by_month %>%
        mutate(year=year(date)) %>%
        group_by(wbanno, lat_bin) %>%
        summarize() %>%
        group_by(lat_bin) %>%
        summarize(station_years=n())

And the plot itself. At the end, we’re using a function called facet_adjust, which adds x-axis tick labels to the facet on the right that wouldn't ordinarily have them. The code comes from this stack overflow post.

p <- ggplot(data=crn_by_month_lat_bin,
            aes(x=month, ymin=first, ymax=third, y=mean)) +
    geom_hline(yintercept=0, alpha=0.2) +
    geom_hline(data=crn_by_month_lat_bin %>%
                        group_by(lat_bin) %>%
                        summarize(mean=mean(mean)),
               aes(yintercept=mean), colour="darkorange", alpha=0.5) +
    geom_pointrange() +
    facet_wrap(~ lat_bin, ncol=3) +
    geom_text(data=station_years, size=4,
              aes(x=2.25, y=-0.5, ymin=0, ymax=0,
                  label=paste('n =', station_years))) +
    scale_y_continuous(name="Range including 50% of temperature anomalies") +
    scale_x_discrete(breaks=1:12,
                     labels=c('Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                              'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec')) +
    theme_bw() +
    theme(axis.text.x=element_text(angle=45, hjust=1, vjust=1.25),
          axis.title.x=element_blank())
facet_adjust(p)
//media.swingleydev.com/img/blog/2015/04/crn_minmax_anomalies_by_month_lat.svg

Each plot shows the range of anomalies from the first to the third quartile (50% of the observed anomalies) by month, with the dot near the middle of the line at the mean anomaly. The orange horizontal line shows the overall mean anomaly for that latitude category, and the count at the bottom of the plot indicates the number of “station years” for that latitude category.

It’s clear that there are seasonal patterns in the differences between the mean daily temperature and the min/max estimate. But each plot looks so different from the next that it’s not clear if the patterns we are seeing in each latitude category are real or artificial. It is also problematic that three of our latitude categories have very little data compared with the other two. It may be worth performing this analysis in a few years when the lower and higher latitude stations have a bit more data.

Conclusion

This analysis shows that there is a clear bias in using the average of minimum and maximum daily temperature to estimate average daily temperature. Across all of the CRN stations, the min/max estimator overestimates daily average temperature by almost half a degree Celsius (0.8°F).

We also found that this error is larger at lower latitudes, and that there are seasonal patterns to the anomalies, although the seasonal patterns don’t seem to have clear transitions moving from lower to higher latitudes.

The current length of the CRN record is quite short, especially for the sub-hourly data used here, so the patterns may not be representative of the true situation.

tags: R  temperature  weather  climate  CRN  COOP  ggplot 
tue, 21-apr-2015, 17:33

Abstract

The following is a document-style version of a presentation I gave at work a couple weeks ago. It's a little less useful for a general audience because you don't have access to the same database I have, but I figured it might be useful for someone who is looking at using dplyr or at manipulating the GHCND data from NCDC.

Introduction

Today we’re going to briefly take a look at the GHCND climate database and a couple new R packages (dplyr and tidyr) that make data import and manipulation a lot easier than using the standard library.

For further reading, consult the vignettes for dplyr and tidyr, and download the data wrangling cheat sheet.

GHCND database

The GHCND database contains daily observation data from locations around the world. The README linked above describes the data set and the way the data is formatted. I have written scripts that process the station data and the yearly download files and insert them into a PostgreSQL database (noaa).

The script for inserting a yearly file (downloaded from http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/) is here: ghcn-daily-by_year_process.py

“Tidy” data

Without going into too much detail on the subject (read Hadley Wickham’s paper for more information), the basic idea is that it is much easier to analyze data when it is in a particular “tidy” form. A tidy dataset has a single table for each type of real world object or type of data, and each table has one column per variable measured and one row per observation.

For example, here’s a tidy table representing daily weather observations with station × date as rows and the various variables as columns.

| Station | Date       | tmin_c | tmax_c | prcp | snow | ... |
|---------|------------|--------|--------|------|------|-----|
| PAFA    | 2014-01-01 |     12 |     24 | 0.00 |  0.0 | ... |
| PAFA    | 2014-01-02 |      8 |     11 | 0.02 |  0.2 | ... |
| ...     | ...        |    ... |    ... |  ... |  ... | ... |

Getting raw data into this format is what we’ll look at today.

R libraries & data import

First, let’s load the libraries we’ll need:

library(dplyr)      # data import
library(tidyr)      # column / row manipulation
library(knitr)      # tabular export
library(ggplot2)    # plotting
library(scales)     # “pretty” scaling
library(lubridate)  # date / time manipulations

dplyr and tidyr are the data import and manipulation libraries we will use, knitr is used to produce tabular data in report-quality forms, ggplot2 and scales are plotting libraries, and lubridate is a library that makes date and time manipulation easier.

Also note the warnings about how several R functions have been “masked” when we imported dplyr. This just means we'll be getting the dplyr versions instead of those we might be used to. In cases where we need both, you can preface the function with its package: stats::filter would use the normal filter function instead of the one from dplyr.

Next, connect to the database and two of the three tables we will need:

noaa_db <- src_postgres(host="mason",
                        dbname="noaa")
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")

The first statement connects us to the database and the next two create table links to the observation table and the variables table.

Here’s what those two tables look like:

glimpse(ghcnd_obs)
## Observations: 29404870
## Variables:
## $ station_id  (chr) "USW00027502", "USW00027502", "USW00027502", "USW0...
## $ dte         (date) 2011-05-01, 2011-05-01, 2011-05-01, 2011-05-01, 2...
## $ variable    (chr) "AWND", "FMTM", "PRCP", "SNOW", "SNWD", "TMAX", "T...
## $ raw_value   (dbl) 32, 631, 0, 0, 229, -100, -156, 90, 90, 54, 67, 1,...
## $ meas_flag   (chr) "", "", "T", "T", "", "", "", "", "", "", "", "", ...
## $ qual_flag   (chr) "", "", "", "", "", "", "", "", "", "", "", "", ""...
## $ source_flag (chr) "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", ...
## $ time_of_obs (int) NA, NA, 0, NA, NA, 0, 0, NA, NA, NA, NA, NA, NA, N...
glimpse(ghcnd_vars)
## Observations: 82
## Variables:
## $ variable       (chr) "AWND", "EVAP", "MDEV", "MDPR", "MNPN", "MXPN",...
## $ description    (chr) "Average daily wind speed (tenths of meters per...
## $ raw_multiplier (dbl) 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0....

Each row in the observation table contains the station_id, date, a variable code, the raw value for that variable, and a series of flags indicating data quality, source, and special measurements such as the “trace” value used for precipitation below the minimum measurable amount.

Each row in the variables table contains a variable code, description and the multiplier used to convert the raw value from the observation table into an actual value.

This is an example of completely “normalized” data, and it’s stored this way because not all weather stations record all possible variables. Rather than having a single row for each station × date with a whole bunch of empty columns for the variables that weren’t measured, each row contains the data for a single station × date × variable combination.

We are also missing information about the stations, so let’s load that data:

fai_stations <-
    tbl(noaa_db, "ghcnd_stations") %>%
    filter(station_name %in% c("FAIRBANKS INTL AP",
                               "UNIVERSITY EXP STN",
                               "COLLEGE OBSY"))
glimpse(fai_stations)
## Observations: 3
## Variables:
## $ station_id   (chr) "USC00502107", "USW00026411", "USC00509641"
## $ station_name (chr) "COLLEGE OBSY", "FAIRBANKS INTL AP", "UNIVERSITY ...
## $ latitude     (dbl) 64.86030, 64.80389, 64.85690
## $ longitude    (dbl) -147.8484, -147.8761, -147.8610
## $ elevation    (dbl) 181.9656, 131.6736, 144.7800
## $ coverage     (dbl) 0.96, 1.00, 0.98
## $ start_date   (date) 1948-05-16, 1904-09-04, 1904-09-01
## $ end_date     (date) 2015-04-03, 2015-04-02, 2015-03-13
## $ variables    (chr) "TMIN TOBS WT11 SNWD SNOW WT04 WT14 TMAX WT05 DAP...
## $ the_geom     (chr) "0101000020E6100000A5BDC117267B62C0EC2FBB270F3750...

The first part is the same as before, loading the ghcnd_stations table, but we are filtering that data down to just the Fairbanks area stations with long term records. To do this, we use the pipe operator %>% which takes the data from the left side and passes it to the function on the right side, the filter function in this case.

filter requires one or more conditional statements with variable names on the left side and the condition on the right. Multiple conditions can be separated by commas if you want all the conditions to be required (AND) or separated by a logic operator (& for AND, | for OR). For example: filter(latitude > 70, longitude < -140).

When used on database tables, filter can also use conditionals that are built into the database which are passed directly as part of a WHERE clause. In our code above, we’re using the %in% operator here to select the stations from a list.
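If you want to confirm what is actually being sent to PostgreSQL, dplyr’s explain() prints the generated SQL (including the WHERE ... IN (...) clause produced by %in%) along with the database’s query plan. A quick sketch:

explain(fai_stations)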

Now we have the station_ids we need to get just the data we want from the observation table and combine it with the other tables.

Combining data

Here’s how we do it:

fai_raw <-
    ghcnd_obs %>%
    inner_join(fai_stations, by="station_id") %>%
    inner_join(ghcnd_vars, by="variable") %>%
    mutate(value=raw_value*raw_multiplier) %>%
    filter(qual_flag=='') %>%
    select(station_name, dte, variable, value) %>%
    collect()
glimpse(fai_raw)

In order, here’s what we’re doing:

  • Assign the result to fai_raw
  • Join the observation table with the filtered station data, using station_id as the variable to combine against. Because this is an “inner” join, we only get results where station_id matches in both the observation and the filtered station data. At this point we only have observation data from our long-term Fairbanks stations.
  • Join the variable table with the Fairbanks area observation data, using variable to link the tables.
  • Add a new variable called value which is calculated by multiplying raw_value (coming from the observation table) by raw_multiplier (coming from the variable table).
  • Remove rows where the quality flag is not an empty space.
  • Select only the station name, date, variable and actual value columns from the data. Before we did this, each row would contain every column from all three tables, and most of that information is not necessary.
  • Finally, we “collect” the results. dplyr doesn’t actually perform the full SQL until it absolutely has to. Instead it’s retrieving a small subset so that we can test our operations quickly. When we are happy with the results, we use collect() to grab the full data.

De-normalize it

The data is still in a format that makes it difficult to analyze, with each row in the result containing a single station × date × variable observation. A tidy version of this data requires each variable be a column in the table, each row being a single date at each station.

To “pivot” the data, we use the spread function, and we'll also calculate a new variable and reduce the number of columns in the result.

fai_pivot <-
    fai_raw %>%
    spread(variable, value) %>%
    mutate(TAVG=(TMIN+TMAX)/2.0) %>%
    select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
           WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)
head(fai_pivot)
## Source: local data frame [6 x 18]
##
##   station_name        dte  TAVG TMIN TMAX TOBS PRCP SNOW SNWD WSF1 WDF1
## 1 COLLEGE OBSY 1948-05-16 11.70  5.6 17.8 16.1   NA   NA   NA   NA   NA
## 2 COLLEGE OBSY 1948-05-17 15.55 12.2 18.9 17.8   NA   NA   NA   NA   NA
## 3 COLLEGE OBSY 1948-05-18 14.40  9.4 19.4 16.1   NA   NA   NA   NA   NA
## 4 COLLEGE OBSY 1948-05-19 14.15  9.4 18.9 12.2   NA   NA   NA   NA   NA
## 5 COLLEGE OBSY 1948-05-20 10.25  6.1 14.4 14.4   NA   NA   NA   NA   NA
## 6 COLLEGE OBSY 1948-05-21  9.75  1.7 17.8 17.8   NA   NA   NA   NA   NA
## Variables not shown: WSF2 (dbl), WDF2 (dbl), WSF5 (dbl), WDF5 (dbl), WSFG
##   (dbl), WDFG (dbl), TSUN (dbl)

spread takes two parameters, the variable we want to spread across the columns, and the variable we want to use as the data value for each row × column intersection.
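Here’s a tiny, self-contained illustration of spread with made-up numbers (a sketch, not part of the GHCND data): each distinct value in the variable column becomes its own column, filled from the value column.

toy <- data.frame(dte=as.Date(c("2014-01-01", "2014-01-01",
                                "2014-01-02", "2014-01-02")),
                  variable=c("TMIN", "TMAX", "TMIN", "TMAX"),
                  value=c(-20.0, -10.6, -17.2, -8.9))
toy %>% spread(variable, value)
##          dte  TMAX  TMIN
## 1 2014-01-01 -10.6 -20.0
## 2 2014-01-02  -8.9 -17.2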

Examples

Now that we've got the data in a format we can work with, let's look at a few examples.

Find the coldest temperatures by winter year

First, let’s find the coldest winter temperatures from each station, by winter year. “Winter year” is just a way of grouping each winter into a single value. Instead of the 2014–2015 winter, it’s the 2014 winter year. We get this by subtracting 92 days (roughly the number of days in January, February, and March) from the date, then pulling off the year.

Here’s the code.

fai_winter_year_minimum <-
    fai_pivot %>%
        mutate(winter_year=year(dte - days(92))) %>%
        filter(winter_year < 2014) %>%
        group_by(station_name, winter_year) %>%
        select(station_name, winter_year, TMIN) %>%
        summarize(tmin=min(TMIN*9/5+32, na.rm=TRUE), n=n()) %>%
        filter(n>350) %>%
        select(station_name, winter_year, tmin) %>%
        spread(station_name, tmin)

last_twenty <-
    fai_winter_year_minimum %>%
        filter(winter_year > 1993)

last_twenty
## Source: local data frame [20 x 4]
##
##    winter_year COLLEGE OBSY FAIRBANKS INTL AP UNIVERSITY EXP STN
## 1         1994       -43.96            -47.92             -47.92
## 2         1995       -45.04            -45.04             -47.92
## 3         1996       -50.98            -50.98             -54.04
## 4         1997       -43.96            -47.92             -47.92
## 5         1998       -52.06            -54.94             -54.04
## 6         1999       -50.08            -52.96             -50.98
## 7         2000       -27.94            -36.04             -27.04
## 8         2001       -40.00            -43.06             -36.04
## 9         2002       -34.96            -38.92             -34.06
## 10        2003       -45.94            -45.94                 NA
## 11        2004           NA            -47.02             -49.00
## 12        2005       -47.92            -50.98             -49.00
## 13        2006           NA            -43.96             -41.98
## 14        2007       -38.92            -47.92             -45.94
## 15        2008       -47.02            -47.02             -49.00
## 16        2009       -32.98            -41.08             -41.08
## 17        2010       -36.94            -43.96             -38.02
## 18        2011       -47.92            -50.98             -52.06
## 19        2012       -43.96            -47.92             -45.04
## 20        2013       -36.94            -40.90                 NA

See if you can follow the code above. The pipe operator makes it easy to see each operation performed along the way.

There are a couple new functions here, group_by and summarize. group_by indicates at what level we want to group the data, and summarize uses those groupings to perform summary calculations using aggregate functions. We group by station and winter year, then we use the min and n functions to get the minimum temperature and the number of days in each year where temperature data was available. You can see we are using n to remove winter years where more than two weeks of data are missing.

Also notice that we’re using spread again in order to make a single column for each station containing the minimum temperature data.

Here’s how we can write out the table data as a restructuredText document, which can be converted into many document formats (PDF, ODF, HTML, etc.):

sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
Minimum temperatures by winter year, station
winter_year COLLEGE OBSY FAIRBANKS INTL AP UNIVERSITY EXP STN
1994 -43.96 -47.92 -47.92
1995 -45.04 -45.04 -47.92
1996 -50.98 -50.98 -54.04
1997 -43.96 -47.92 -47.92
1998 -52.06 -54.94 -54.04
1999 -50.08 -52.96 -50.98
2000 -27.94 -36.04 -27.04
2001 -40.00 -43.06 -36.04
2002 -34.96 -38.92 -34.06
2003 -45.94 -45.94 NA
2004 NA -47.02 -49.00
2005 -47.92 -50.98 -49.00
2006 NA -43.96 -41.98
2007 -38.92 -47.92 -45.94
2008 -47.02 -47.02 -49.00
2009 -32.98 -41.08 -41.08
2010 -36.94 -43.96 -38.02
2011 -47.92 -50.98 -52.06
2012 -43.96 -47.92 -45.04
2013 -36.94 -40.90 NA

Plotting

Finally, let’s plot the minimum temperatures for all three stations.

q <-
    fai_winter_year_minimum %>%
        gather(station_name, tmin, -winter_year) %>%
        arrange(winter_year) %>%
        ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
            geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
            geom_smooth(method="lm", se=FALSE) +
            scale_x_continuous(name="Winter Year", breaks=pretty_breaks(n=20)) +
            scale_y_continuous(name="Minimum temperature (degrees F)", breaks=pretty_breaks(n=10)) +
            scale_color_manual(name="Station",
                               labels=c("College Observatory",
                                        "Fairbanks Airport",
                                        "University Exp. Station"),
                               values=c("darkorange", "blue", "darkcyan")) +
            theme_bw() +
            # theme(legend.position = c(0.150, 0.850)) +
            theme(axis.text.x = element_text(angle=45, hjust=1))

print(q)
//media.swingleydev.com/img/blog/2015/04/min_temp_winter_year_fai_stations.svg

To plot the data, we need the data in a slightly different format with each row containing winter year, station name and the minimum temperature. We’re plotting minimum temperature against winter year, coloring the points and trendlines using the station name. That means all three of those variables need to be on the same row.

To do that we use gather. The first parameter is the name of the variable the columns will be moved into (the station names, which are currently columns, will become values in a column named station_name). The second is the name of the column that stores the observations (tmin), and the parameters after that are the list of columns to gather together. In our case, rather than specifying the names of the columns, we're specifying the inverse: all the columns except winter_year.

The result of the gather looks like this:

fai_winter_year_minimum %>%
    gather(station_name, tmin, -winter_year)
## Source: local data frame [321 x 3]
##
##    winter_year station_name tmin
## 1         1905 COLLEGE OBSY   NA
## 2         1907 COLLEGE OBSY   NA
## 3         1908 COLLEGE OBSY   NA
## 4         1909 COLLEGE OBSY   NA
## 5         1910 COLLEGE OBSY   NA
## 6         1911 COLLEGE OBSY   NA
## 7         1912 COLLEGE OBSY   NA
## 8         1913 COLLEGE OBSY   NA
## 9         1915 COLLEGE OBSY   NA
## 10        1916 COLLEGE OBSY   NA
## ..         ...          ...  ...

ggplot2

The plot is produced using ggplot2. A full introduction would be a seminar by itself, but the basics of our plot can be summarized as follows.

ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +

aes defines variables and grouping.

geom_point(size=1.5, position=position_jitter(w=0.5,h=0.0)) +
geom_smooth(method="lm", se=FALSE) +

geom_point draws points, geom_smooth draws fitted lines.

scale_x_continuous(name="Winter Year", breaks=pretty_breaks(n=20)) +
scale_y_continuous(name="Minimum temperature (degrees F)",
                    breaks=pretty_breaks(n=10)) +
scale_color_manual(name="Station",
                    labels=c("College Observatory", "Fairbanks Airport",
                            "University Exp. Station"),
                    values=c("darkorange", "blue", "darkcyan")) +

Scale functions define how the data is scaled into a plot and control labelling.

theme_bw() +
theme(axis.text.x = element_text(angle=45, hjust=1))

Theme functions control the style.


Linear regression, winter year and minimum temperature

Finally let’s look at the significance of those regression lines:

summary(lm(data=fai_winter_year_minimum, `COLLEGE OBSY` ~ winter_year))
##
## Call:
## lm(formula = `COLLEGE OBSY` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -19.0748  -5.8204   0.1907   3.8042  17.1599
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -275.01062  105.20884  -2.614   0.0114 *
## winter_year    0.11635    0.05311   2.191   0.0325 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 7.599 on 58 degrees of freedom
##   (47 observations deleted due to missingness)
## Multiple R-squared:  0.07643,    Adjusted R-squared:  0.06051
## F-statistic:   4.8 on 1 and 58 DF,  p-value: 0.03249
summary(lm(data=fai_winter_year_minimum, `FAIRBANKS INTL AP` ~ winter_year))
##
## Call:
## lm(formula = `FAIRBANKS INTL AP` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -15.529  -4.605  -1.025   4.007  19.764
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -171.19553   43.55177  -3.931 0.000153 ***
## winter_year    0.06250    0.02221   2.813 0.005861 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 7.037 on 104 degrees of freedom
##   (1 observation deleted due to missingness)
## Multiple R-squared:  0.07073,    Adjusted R-squared:  0.06179
## F-statistic: 7.916 on 1 and 104 DF,  p-value: 0.005861
summary(lm(data=fai_winter_year_minimum, `UNIVERSITY EXP STN` ~ winter_year))
##
## Call:
## lm(formula = `UNIVERSITY EXP STN` ~ winter_year, data = fai_winter_year_minimum)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -15.579  -5.818  -1.283   6.029  19.977
##
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept) -158.41837   51.03809  -3.104  0.00248 **
## winter_year    0.05638    0.02605   2.164  0.03283 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 8.119 on 100 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.04474,    Adjusted R-squared:  0.03519
## F-statistic: 4.684 on 1 and 100 DF,  p-value: 0.03283

Essentially, all the models show a significant increase in minimum temperature over time, but none of them explain very much of the variation in minimum temperature.

RMarkdown

This presentation was produced with the RMarkdown package, which allows you to mix text and R code that is then run through R to produce documents in Word, PDF, HTML, and presentation formats.
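A minimal sketch of that workflow (the file name here is made up):

library(rmarkdown)
render("ghcnd_dplyr_presentation.Rmd", output_format="html_document")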

