Last week I gave a presentation at work about the National Climatic Data Center’s GHCND climate database and methods to import and manipulate the data using the dplyr and tidyr R packages (a report-style version of it is here). Along the way, I used this function to calculate the average daily temperature from the minimum and maximum daily temperatures:
One of the people in the audience asked why the Weather Service would calculate average daily temperature this way, rather than by averaging the continuous or hourly temperatures at each station. The answer is that many, perhaps most, of the official stations in the GHCND data set are COOP stations which only report minimum and maximum temperature, and the original instrument provided to COOP observers was likely a mercury minimum / maximum thermometer. Now that these instruments are digital, they could conceivably calculate average temperature internally, and observers could report minimum, maximum and average as calculated from the device. But that’s not how it’s done.
In this analysis, I look at the difference between calculating average daily temperature using the mean of all daily temperature observations, and using the average of the minimum and maximum reported temperature each day. I’ll use five years of data collected at our house using our Arduino-based weather station.
Our weather station records temperature every few seconds, averages this data every five minutes and stores these five minute observations in a database. For our analysis, I’ll group the data by day and calculate the average daily temperature using the mean of all the five minute observations, and using the average of the minimum and maximum daily temperature. I’ll use R to perform the analysis.
Load the libraries we need:
library(dplyr)
library(lubridate)
library(ggplot2)
library(scales)
library(readr)
Retrieve the data
Connect to the database and retrieve the data. We’re using build_sql because the data table we’re interested in is a view (sort of like a stored SQL query), not a table, and dplyr::tbl can’t currently read from a view:
dw1454 <- src_postgres(dbname="goldstream_creek_wx", user="readonly")
raw_data <- tbl(dw1454, build_sql("SELECT * FROM arduino_west"))
The raw data contains the timestamp for each five minute observation, and the temperature, in degrees Fahrenheit for that observation. The following series of functions aggregates the data to daily data and calculates the average daily temperature using the two methods.
daily_average <- raw_data %>%
    filter(obs_dt > '2009-12-31 23:59:59') %>%
    mutate(date=date(obs_dt)) %>%
    select(date, wtemp) %>%
    group_by(date) %>%
    summarize(mm_avg=(min(wtemp) + max(wtemp)) / 2.0,
              h_avg=mean(wtemp),
              n=n()) %>%
    filter(n == 24 * 60 / 5) %>%   # only complete days of 5 minute obs
    collect()
All these steps are joined together using the “pipe” or “then” operator %>% as follows:
- daily_average <-: assign the result of all the operations to daily_average.
- raw_data %>%: start with the data from our database query (all the temperature observations).
- filter(obs_dt>'2009-12-31 23:59:59') %>%: use data from 2010 and after.
- mutate(date=date(obs_dt)) %>%: calculate the date from the timestamp.
- select(date, wtemp) %>%: reduce the columns to our newly calculated date variable and the temperatures.
- group_by(date) %>%: group the data by date.
- summarize(mm_avg=(min(wtemp)+max(wtemp))/2.0, h_avg=mean(wtemp), n=n()) %>%: summarize the data grouped by date, calculating one daily average (mm_avg) from the average of the minimum and maximum temperature, another (h_avg) from the mean of all the temperatures, and the number of observations on each date (n).
- filter(n==24*60/5) %>%: Only include dates where we have a complete set of five minute observations. We don’t want data with too few or too many observations because those would skew the averages.
- collect(): This function retrieves the data from the database. Without collect(), the query is run on the database server and only a preview of the full results is retrieved. This allows us to tweak the query until it’s exactly what we want without having to wait to retrieve everything at each iteration.
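The pipe operator itself is just nested function calls turned inside out; each step’s output becomes the first argument of the next. A trivial example:

```r
# These two expressions are equivalent
round(sqrt(2), 2)           # 1.41
2 %>% sqrt() %>% round(2)   # also 1.41
```

Reading the pipeline top to bottom matches the order the operations actually happen, which is why long chains like the one above stay readable.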
Now we’ve got a table with one row for each date in the database where we had exactly 288 observations on that date. Columns include the average temperature calculated using the two methods and the number of observations on each date.
Save the data so we don’t have to do these calculations again:
write_csv(daily_average, "daily_average.csv")
save(daily_average, file="daily_average.rdata", compress=TRUE)
How does the min/max method of calculating average daily temperature compare against the true mean of all observed temperatures in a day? We calculate the difference between the methods, the anomaly, as the mean temperature subtracted from the average of minimum and maximum. When this anomaly is positive, the min/max method is higher than the actual mean, and when it’s negative, it’s lower.
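To see how the two methods can disagree, here’s a toy example with made-up temperatures (not data from our station): a day with a short cold morning and a long mild afternoon.

```r
# A hypothetical day of temperatures (degrees F), not real station data:
# a brief cold snap in the morning, then mild the rest of the day
temps <- c(10, 12, 30, 32, 34, 33, 32, 31, 30, 28, 26, 24)

mm_avg <- (min(temps) + max(temps)) / 2   # min/max method: (10 + 34) / 2 = 22
h_avg <- mean(temps)                      # true mean of all observations: ~26.8
anomaly <- mm_avg - h_avg                 # negative: min/max underestimates here
```

The brief minimum pulls the min/max average down even though most of the day was warmer, producing a negative anomaly.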
anomaly <- daily_average %>%
    mutate(month=month(date),
           anomaly=mm_avg - h_avg) %>%
    ungroup() %>%
    arrange(date)
We also populate a column with the month of each date so we can look at the seasonality of the anomalies.
This is what the results look like:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## -6.8600 -1.5110 -0.1711 -0.1341  1.0740  9.3570
The average anomaly is very close to zero (-0.13), and I suspect it would be even closer to zero as more data is included. Half the data is between -1.5 and 1.1 degrees and the full range is -6.86 to +9.36°F.
Let’s take a look at some plots of the anomalies.
Raw anomaly data
The first plot shows the raw anomaly data, with positive anomalies (where the min/max calculated average is higher than the mean daily average) colored red and negative anomalies in blue.
# All anomalies
q <- ggplot(data=anomaly,
            aes(x=date, ymin=0, ymax=anomaly, colour=anomaly < 0)) +
    geom_linerange(alpha=0.5) +
    theme_bw() +
    scale_colour_manual(values=c("red", "blue"), guide=FALSE) +
    scale_x_date(name="") +
    scale_y_continuous(name="Difference between min/max and hourly aggregation")
print(q)
I don't see much in the way of trends in this data, but there are short periods where all the anomalies are in one direction or another. If there is a seasonal pattern, it's hard to see it when the data is presented this way.
To examine the seasonality of the anomalies, let’s look at some boxplots, grouped by the “month” variable we calculated when calculating the anomalies.
mean_anomaly <- mean(anomaly$anomaly)

# seasonal pattern of anomaly
q <- ggplot(data=anomaly,
            aes(x=as.factor(month), y=anomaly)) +
    geom_hline(data=NULL, aes(yintercept=mean_anomaly), colour="darkorange") +
    geom_boxplot() +
    scale_x_discrete(name="",
                     labels=c("Jan", "Feb", "Mar", "Apr", "May", "Jun",
                              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")) +
    scale_y_continuous(name="Difference between min/max and hourly aggregation") +
    theme_bw()
print(q)
There does seem to be a slight seasonal pattern to the anomalies, with spring and summer daily averages underestimated when using the min/max calculation (the actual daily average temperature is warmer than was calculated using minimum and maximum temperatures) and slightly overestimated in fall and late winter. The boxes in a boxplot show the range where half the observations fall, and in all months but April and May these ranges include zero, so there's a good chance that the pattern isn't statistically significant. The orange line under the boxplots shows the overall average anomaly, close to zero.
Cumulative frequency distribution
Finally, we plot the cumulative frequency distribution of the absolute value of the anomalies. These plots have the variable of interest on the x-axis and the cumulative frequency of all values to the left on the y-axis. It’s a good way of seeing how much of the data falls into certain ranges.
# distribution of anomalies
q <- ggplot(data=anomaly, aes(x=abs(anomaly))) +
    stat_ecdf() +
    scale_x_continuous(name="Absolute value of anomaly (+/- degrees F)",
                       breaks=0:11, labels=0:11, expand=c(0, 0)) +
    scale_y_continuous(name="Cumulative frequency",
                       labels=percent,
                       breaks=pretty_breaks(n=10),
                       limits=c(0, 1)) +
    annotate("rect", xmin=-1, xmax=1, ymin=0, ymax=0.4, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=2, ymin=0, ymax=0.67, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=3, ymin=0, ymax=0.85, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=4, ymin=0, ymax=0.94, alpha=0.1, fill="darkcyan") +
    annotate("rect", xmin=-1, xmax=5, ymin=0, ymax=0.975, alpha=0.1, fill="darkcyan") +
    theme_bw()
print(q)
The overlapping rectangles on the plot show what percentages of anomalies fall in certain ranges. Starting from the innermost and darkest rectangle, 40% of the temperatures calculated using minimum and maximum are within a degree of the actual temperature. Sixty-seven percent are within two degrees, 85% within three degrees, 94% are within four degrees, and more than 97% are within five degrees of the actual value. There's probably a way to get R to calculate these intersections along the curve for you, but I looked at the plot and manually added the annotations.
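One way to let R compute those percentages directly, rather than reading them off the plot, is the ecdf() function (a sketch, assuming the anomaly data frame from above):

```r
# Empirical cumulative distribution of the absolute anomalies
cdf <- ecdf(abs(anomaly$anomaly))

# Fraction of days where the min/max average is within 1, 2, ... 5 degrees
# of the true daily mean
cdf(1:5)
```

Since stat_ecdf() draws the same empirical distribution, these values should match the heights of the annotated rectangles.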
We looked at more than five years of data from our weather station in the Goldstream Valley, comparing daily average temperatures calculated from the mean of all five minute temperature observations with those calculated using the average of the minimum and maximum daily temperature, which is the method the National Weather Service uses for its daily data. The results show that the difference between these methods averages to zero, which means that on an annual (or greater) basis, there doesn't appear to be any bias in the method.
Two thirds of all daily average temperatures are within two degrees of the actual daily average, and with a few exceptions, the error is always below five degrees.
There is some evidence that there’s a seasonal pattern to the error, however, with April and May daily averages particularly low. If those seasonal patterns are real, this would indicate an important bias in this method of calculating average daily temperature.
Last night we got a quarter of an inch of rain at our house, making roads “impassable” according to the Fairbanks Police Department, and turning the dog yard, deck, and driveway into an icy mess. There are videos floating around Facebook showing Fairbanks residents playing hockey in the street in front of their houses, and a reported seven vehicles off the road on Ballaine Hill.
Here’s a video of a group of Goldstream Valley musicians ice skating on Goldstream Road: http://youtu.be/_afC7UF0NXk
Let’s check out the weather database and take a look at how often Fairbanks experiences this type of event, and when they usually happen. I’m going to skip the parts of the code showing how we get pivoted daily data from the database, but they’re in this post.
Starting with pivoted data, we want to look for dates from November through March with more than a tenth of an inch of precipitation, snowfall less than two tenths of an inch, and a daily high temperature above 20°F. Then we group by winter year and month, and aggregate the rain events into a single event. These occurrences are rare enough that this aggregation shouldn’t combine events from different parts of the month.
Here’s the R code:
winter_rain <- fai_pivot %>%
    mutate(winter_year=year(dte - days(92)),
           wdoy=yday(dte + days(61)),
           month=month(dte),
           SNOW=ifelse(is.na(SNOW), 0, SNOW),
           TMAX=TMAX * 9 / 5 + 32,
           TAVG=TAVG * 9 / 5 + 32,
           TMIN=TMIN * 9 / 5 + 32,
           PRCP=PRCP / 25.4,
           SNOW=SNOW / 25.4) %>%
    filter(station_name == 'FAIRBANKS INTL AP',
           winter_year < 2014,
           month %in% c(11, 12, 1, 2, 3),
           TMAX > 20,
           PRCP > 0.1,
           SNOW < 0.2) %>%
    group_by(winter_year, month) %>%
    summarize(date=min(dte), tmax=mean(TMAX), prcp=sum(PRCP), days=n()) %>%
    ungroup() %>%
    mutate(month=month(date)) %>%
    select(date, month, tmax, prcp, days) %>%
    arrange(date)
And the results:
[Table: Date, Month, Max temp (°F), Rain (inches), Days]
This year’s event doesn’t compare to 2010, when almost an inch of rain fell over the course of three days in November, but it does look like it comes at an unusual part of the year.
Here are the counts and frequencies of winter rainfall events by month:
by_month <- winter_rain %>%
    group_by(month) %>%
    summarize(n=n()) %>%
    mutate(freq=n / sum(n) * 100)
There haven’t been any rain events in December, which is a little surprising; after that, February rains are the least common.
I looked at this two years ago (Winter freezing rain) using slightly different criteria. At the bottom of that post I looked at the frequency of rain events over time and concluded that they seem to come in cycles, but that the three events in this decade were a bad sign. Now we can add another rain event to the total for the 2010s.
Whenever we’re in the middle of a cold snap, as we are right now, I’m tempted to see how the current snap compares to those in the past. The one we’re in right now isn’t all that bad: sixteen days in a row where the minimum temperature is colder than −20°F. In some years, such a threshold wouldn’t even qualify as the definition of a “cold snap,” but right now, it feels like one.
Getting the length of consecutive things in a database isn’t simple. What we’ll do is get a list of all the days where the minimum daily temperature was warmer than −20°F. Then go through each record and count the number of days between the current row and the next one. Most of these will be one, but when the number of days is greater than one, that means there’s one or more observations in between the “warm” days where the minimum temperature was colder than −20°F (or there was missing data).
For example, given this set of dates and temperatures from earlier this year:
Once we select for rows where the temperature is above −20°F we get this:
Now we can grab the start and end of the period (January 2nd + one day and January 8th - one day) and get the length of the cold snap. You can see why missing data would be a problem, since it would create a gap that isn’t necessarily due to cold temperatures.
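The same gap-based idea can be sketched in R with a few hypothetical dates (the real analysis below does this in SQL): difference the “warm” dates, and any gap longer than one day brackets a cold snap.

```r
# Hypothetical dates where the minimum temperature was warmer than -20 F
warm <- as.Date(c("2015-01-01", "2015-01-02", "2015-01-08", "2015-01-09"))

gaps <- as.numeric(diff(warm))   # days between consecutive warm dates: 1, 6, 1
snap <- which(gaps > 1)          # gaps that span one or more cold days
start <- warm[snap] + 1          # first day of the cold snap: 2015-01-03
end <- warm[snap + 1] - 1        # last day of the cold snap: 2015-01-07
length_days <- gaps[snap] - 1    # 5 days
```

This matches the example above: the gap between January 2nd and January 8th yields a five day cold snap from the 3rd through the 7th.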
I couldn't figure out how to get the time periods and check them for validity all in one step, so I wrote a simple function that counts the days with valid data between two dates, then used this function in the real query. Only periods with non-null data on each day during the cold snap were included.
CREATE FUNCTION valid_n(date, date) RETURNS bigint
    AS 'SELECT count(*)
        FROM ghcnd_pivot
        WHERE station_name = ''FAIRBANKS INTL AP''
            AND dte BETWEEN $1 AND $2
            AND tmin_c IS NOT NULL'
    LANGUAGE SQL
    RETURNS NULL ON NULL INPUT;
Here we go:
SELECT rank() OVER (ORDER BY days DESC) AS rank,
       start, "end", days
FROM (
    SELECT start + interval '1 day' AS start,
           "end" - interval '1 day' AS "end",
           interv - 1 AS days,
           valid_n(date(start + interval '1 day'),
                   date("end" - interval '1 day')) AS valid_n
    FROM (
        SELECT dte AS start,
               lead(dte) OVER (ORDER BY dte) AS "end",
               lead(dte) OVER (ORDER BY dte) - dte AS interv
        FROM (
            SELECT dte
            FROM ghcnd_pivot
            WHERE station_name = 'FAIRBANKS INTL AP'
                AND tmin_c > f_to_c(-20)
        ) AS foo
    ) AS bar
    WHERE interv >= 17
) AS f
WHERE days = valid_n
ORDER BY days DESC;
And the top 10:
There have been seven cold snaps that lasted 16 days (including the one we’re currently in), tied for 45th place.
Keep in mind that counting days where the daily minimum is −20°F or colder is a pretty generous definition of a cold snap. If we require the minimum temperatures to be below −40°, the lengths are considerably shorter:
I think it’s also interesting that only three (marked with a grey background) of the top ten cold snaps defined at −20°F appear in those that have a −40° threshold.
Following up on yesterday’s post about minimum temperatures, I was thinking that a cumulative measure of cold temperatures would probably be a better measure of how cold a winter is. We all remember the extremely cold days each winter when the propane gells or the car won’t start, but it’s the long periods of deep cold that really take their toll on buildings, equipment, and people in the Interior.
One way of measuring this is to find all the days in a winter year when the average temperature is below freezing and sum all the temperatures below freezing for that winter year. For example, if the temperature is 50°F, that’s not below freezing so it doesn’t count. If the temperature is −40°, that’s 72 freezing degrees (Fahrenheit). Do this for each day in a year and add up all the values.
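The per-day arithmetic looks like this (a sketch with made-up daily average temperatures, not the station data):

```r
# Made-up daily average temperatures, degrees F
tavg <- c(50, 31, 0, -40)

# Degrees below freezing for each day; days above freezing contribute zero
fdd <- ifelse(tavg < 32, 32 - tavg, 0)   # 0, 1, 32, 72

# Total freezing degree days for this (very short) "winter"
sum(fdd)   # 105
```

The real calculation below does the same thing, except that it starts from Celsius data and converts the below-freezing degrees to Fahrenheit.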
Here’s the code to make the plot below (see my previous post for how we got fai_pivot).
fai_winter_year_freezing_degree_days <- fai_pivot %>%
    mutate(winter_year=year(dte - days(92)),
           fdd=ifelse(TAVG < 0, -1 * TAVG * 9 / 5, 0)) %>%
    filter(winter_year < 2014) %>%
    group_by(station_name, winter_year) %>%
    select(station_name, winter_year, fdd) %>%
    summarize(fdd=sum(fdd, na.rm=TRUE), n=n()) %>%
    filter(n > 350) %>%
    select(station_name, winter_year, fdd) %>%
    spread(station_name, fdd)

fdd_gathered <- fai_winter_year_freezing_degree_days %>%
    gather(station_name, fdd, -winter_year) %>%
    arrange(winter_year)

q <- fdd_gathered %>%
    ggplot(aes(x=winter_year, y=fdd, colour=station_name)) +
    geom_point(size=1.5, position=position_jitter(w=0.5, h=0.0)) +
    geom_smooth(data=subset(fdd_gathered, winter_year < 1975),
                method="lm", se=FALSE) +
    geom_smooth(data=subset(fdd_gathered, winter_year >= 1975),
                method="lm", se=FALSE) +
    scale_x_continuous(name="Winter Year", breaks=pretty_breaks(n=20)) +
    scale_y_continuous(name="Freezing degree days (degrees F)",
                       breaks=pretty_breaks(n=10)) +
    scale_color_manual(name="Station",
                       labels=c("College Observatory", "Fairbanks Airport",
                                "University Exp. Station"),
                       values=c("darkorange", "blue", "darkcyan")) +
    theme_bw() +
    theme(legend.position=c(0.875, 0.120)) +
    theme(axis.text.x=element_text(angle=45, hjust=1))

rescale <- 0.65
svg('freezing_degree_days.svg', height=10 * rescale, width=16 * rescale)
print(q)
dev.off()
And the plot.
You’ll notice I’ve split the trend lines at 1975. When I ran the regressions for the entire period, none of them were statistically significant, but looking at the plot, it seems like something happens in 1975 where the cumulative freezing degree days suddenly drop. Since then, they've been increasing at a faster, and statistically significant rate.
This is odd, and it makes me wonder if I've made a mistake in the calculations because what this says is that, at least since 1975, the winters are getting colder as measured by the total number of degrees below freezing each winter. My previous post (and studies of climate in general) show that the climate is warming, not cooling.
One bias that's possible with cumulative calculations like this is that missing data becomes more important, but I looked at the same relationships when I only include years with at least 364 days of valid data (only one or two missing days) and the same pattern exists.
Curious. When combined, this analysis and yesterday's suggest that winters in Fairbanks are getting colder overall, but that the minimum temperature in any year is likely to be warmer than in the past.
The Weather Service is calling for our first −40° temperatures of the winter, which is pretty remarkable given how late in the winter it is. The 2014/2015 winter is turning out to be one of the warmest on record, and until this upcoming cold snap, we’ve only had a few days below normal, and mostly it’s been significantly warmer. You can see this on my Normalized temperature anomaly plot, where most of the last four months has been reddish.
I thought I’d take a look at the minimum winter temperatures for the three longest running Fairbanks weather stations to see what patterns emerge. This will be a good opportunity to further experiment with the dplyr and tidyr R packages I’m learning.
The data set is the Global Historical Climatology Network - Daily (GHCND) data from the National Climatic Data Center (NCDC). The data, at least as I’ve been collecting it, has been fully normalized, which is another way of saying that it’s stored in a way that makes database operations efficient, but not necessarily the way people want to look at it.
There are three main tables: ghcnd_stations, containing data about each station; ghcnd_variables, containing information about the variables in the data; and ghcnd_obs, which contains the observations. We need ghcnd_stations in order to find the stations we’re interested in, by name or location, for example. And we need ghcnd_variables to convert the values in the observation table to the proper units. The observation table looks something like this:
There are a few problems with using this table directly. First, the station_id column doesn’t tell us anything about the station (name, location, etc.) without joining it to the stations table. Second, we need the variables table to convert the raw values listed in the table to their actual values. For example, temperatures are stored in degrees Celsius × 10, so we need to divide the raw value by ten to get actual temperatures. Finally, to get the data into a form with one row per date and columns for the variables we’re interested in, we have to “pivot” the data (to use Excel terminology).
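For example, the temperature conversion works like this (the column names match the tables described above; the specific values are made up):

```r
# GHCND temperatures are stored as degrees Celsius times ten, so the
# variables table carries a multiplier of 0.1 for temperature variables
raw_value <- -253                      # hypothetical raw TMIN observation
raw_multiplier <- 0.1
value <- raw_value * raw_multiplier    # -25.3 degrees Celsius
```

The query below performs exactly this multiplication for every observation after joining to the variables table.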
Here’s how we get all the data using R.
Load the libraries we will need:
library(dplyr)
library(tidyr)
library(ggplot2)
library(scales)
library(lubridate)
library(knitr)
Connect to the database and get the tables we need, choosing only the stations we want from the stations table. In the filter statement you can see we’re using a PostgreSQL specific operator ~ to do the filtering. In other databases we’d probably use %in% and include the station names as a list.
noaa_db <- src_postgres(host="localhost", user="cswingley", port=5434, dbname="noaa")

# Construct database table objects for the data
ghcnd_obs <- tbl(noaa_db, "ghcnd_obs")
ghcnd_vars <- tbl(noaa_db, "ghcnd_variables")

# Filter stations to just the long term Fairbanks stations:
fai_stations <- tbl(noaa_db, "ghcnd_stations") %>%
    filter(station_name %~% "(FAIRBANKS INT|UNIVERSITY EXP|COLLEGE OBSY)")
Here’s where we grab the data. We are using the magrittr package’s pipe operator (%>%) to chain operations together, making it really easy to follow exactly how we’re manipulating the data along the way.
# Get the raw data
fai_raw <- ghcnd_obs %>%
    inner_join(fai_stations, by="station_id") %>%
    inner_join(ghcnd_vars, by="variable") %>%
    mutate(value=raw_value * raw_multiplier) %>%
    filter(qual_flag == '') %>%
    select(station_name, dte, variable, value) %>%
    collect()

# Save it
save(fai_raw, file="fai_raw.rdata", compress="xz")
In order, we start with the complete observation table (which contains 29 million rows at this moment), then we join it with our filtered stations using inner_join(fai_stations, by="station_id"). Now we’re down to 723 thousand rows of data. We join it with the variables table, then create a new column called value that is the raw value from the observation table multiplied by the multiplier from the variable table. We remove any observation that doesn’t have an empty string for the quality flag (a value in this field indicates there’s something wrong with the data). Finally, we reduce the number of columns we’re keeping to just the station name, date, variable name, and the actual value.
We then use collect() to actually run all these operations and collect the results into an R object. One of the neat things about database operations using dplyr is that the SQL isn’t actually performed until it is actually necessary, which really speeds up the testing phase of the analysis. You can play around with joining, filtering and transforming the data using operations that are fast until you have it just right, then collect() to finalize the steps.
At this stage, the data is still in its normalized form. We’ve fixed the station name and the values in the data are now what was observed, but we still need to pivot the data to make it useful.
We’ll use the tidyr spread() function to make the value that appears in the variable column (TMIN, TMAX, etc.) appear as columns in the output, and put the data in the value column into the cells in each column and row. We’re also calculating an average daily temperature from the minimum and maximum temperatures and selecting just the columns we want.
# pivot, calculate average temp, include useful vars
fai_pivot <- fai_raw %>%
    spread(variable, value) %>%
    transform(TAVG=(TMIN + TMAX) / 2.0) %>%
    select(station_name, dte, TAVG, TMIN, TMAX, TOBS, PRCP, SNOW, SNWD,
           WSF1, WDF1, WSF2, WDF2, WSF5, WDF5, WSFG, WDFG, TSUN)
Now we’ve got a table with rows for each station name and date, and columns with all the observed variables we might be interested in.
Time for some analysis. Let’s get the minimum temperatures by year and station. When looking at winter temperatures, it makes more sense to group by “winter year” rather than the actual year. In our case, we’re subtracting 92 days from the date and getting the year. This makes the winter year start in April instead of January and means that the 2014/2015 winter has a winter year of 2014.
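For example, with lubridate (dates chosen to show how both halves of a winter land in the same year):

```r
library(lubridate)

# Both halves of the 2014/2015 winter map to winter year 2014
year(as.Date("2014-12-15") - days(92))   # 2014
year(as.Date("2015-01-15") - days(92))   # 2014

# Early April still belongs to the previous winter year
year(as.Date("2014-04-01") - days(92))   # 2013
```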
# Find coldest temperatures by winter year, as a nice table
fai_winter_year_minimum <- fai_pivot %>%
    mutate(winter_year=year(dte - days(92))) %>%
    filter(winter_year < 2014) %>%
    group_by(station_name, winter_year) %>%
    select(station_name, winter_year, TMIN) %>%
    summarize(tmin=min(TMIN * 9 / 5 + 32, na.rm=TRUE), n=n()) %>%
    filter(n > 350) %>%
    select(station_name, winter_year, tmin) %>%
    spread(station_name, tmin)
In order, we’re taking the pivoted data (fai_pivot), adding a column for winter year (mutate), removing the data from the current year since the winter isn’t over (filter), grouping by station and winter year (group_by), reducing the columns down to just minimum temperature (select), summarizing by minimum temperature after converting to Fahrenheit and counting the number of days with valid data (summarize), keeping only years with 350 or more days of data (filter), and finally grabbing and formatting just the columns we want (select, spread).
Here’s the last 20 years and how we get a nice table of them.
last_twenty <- fai_winter_year_minimum %>%
    filter(winter_year > 1993)

# Write to an RST table
sink("last_twenty.rst")
print(kable(last_twenty, format="rst"))
sink()
[Table: Winter Year, College Obsy, Fairbanks Airport, University Exp Stn]
To plot it, we need to re-normalize it so that each row in the data has winter_year, station_name, and tmin in it.
Here’s the plotting code, including the commands to re-normalize.
q <- fai_winter_year_minimum %>%
    gather(station_name, tmin, -winter_year) %>%
    arrange(winter_year) %>%
    ggplot(aes(x=winter_year, y=tmin, colour=station_name)) +
    geom_point(size=1.5, position=position_jitter(w=0.5, h=0.0)) +
    geom_smooth(method="lm", se=FALSE) +
    scale_x_continuous(name="Winter Year", breaks=pretty_breaks(n=20)) +
    scale_y_continuous(name="Minimum temperature (degrees F)",
                       breaks=pretty_breaks(n=10)) +
    scale_color_manual(name="Station",
                       labels=c("College Observatory", "Fairbanks Airport",
                                "University Exp. Station"),
                       values=c("darkorange", "blue", "darkcyan")) +
    theme_bw() +
    theme(legend.position=c(0.875, 0.120)) +
    theme(axis.text.x=element_text(angle=45, hjust=1))
The lines are the linear regression lines between winter year and minimum temperature. You can see that the trend is for increasing minimum temperatures. Each of these lines is statistically significant (both the coefficients and the overall model), but they only explain about 7% of the variation in temperatures. Given the spread of the points, that’s not surprising. The data shows that the lowest winter temperature at the Fairbanks airport is rising by 0.062 degrees each year.