sun, 30-jan-2011, 09:52

Location map

A couple of years ago we got iPhones, and one of my favorite apps is RunKeeper, which tracks your outdoor activities using the phone’s built-in GPS. When I first started using it I compared the tracks from the phone against a Garmin eTrex, and they were so close that I’ve given up carrying the Garmin. Because the phone is always with me, keeping track of all my walks with Nika and my trips to work by bicycle or skis is pretty easy. Just like having a camera with you all the time means you capture a lot more images of daily life, having a GPS with you means you can keep much better track of where you go.

RunKeeper records locations on your phone and transfers the data to the RunKeeper web site when you get home (or during your trip if you’ve got a good enough cell signal). Once on the web site, you can look at the tracks on a Google map, and RunKeeper generates all kinds of statistics on your travels. You can also download the data as GPX files, which is what I’m working with here.

The GPX files are processed by a Python script that inserts each point into a spatially-enabled PostgreSQL database (PostGIS), and ties it to a track.
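The loader itself is a Python script, but a rough R equivalent might look something like this. It’s only a sketch: it assumes a GPX 1.1 file, and it guesses at the table layout based on the query further down (a points table with track_id and the_geom columns, and a tracks table supplying the track_id).

# Rough sketch of a GPX-to-PostGIS loader in R (the real loader is Python).
# File name, track_id handling, and table layout are assumptions.
library(XML)
library(RPostgreSQL)

gpx <- xmlParse("activity.gpx")
trkpts <- getNodeSet(gpx, "//gpx:trkpt",
    namespaces = c(gpx = "http://www.topografix.com/GPX/1/1"))
lat <- as.numeric(sapply(trkpts, xmlGetAttr, "lat"))
lon <- as.numeric(sapply(trkpts, xmlGetAttr, "lon"))

con <- dbConnect(dbDriver("PostgreSQL"), dbname = "new_gps", host = "nsyn")
track_id <- 1   # hypothetical; would come from inserting a row into tracks first
for (i in seq_along(lat)) {
    dbGetQuery(con, sprintf(
        "INSERT INTO points (track_id, the_geom)
         VALUES (%d, ST_SetSRID(ST_MakePoint(%f, %f), 4326));",
        track_id, lon[i], lat[i]))
}
dbDisconnect(con)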

Summary views allow me to generate statistics like this, a summary of all my travels in 2010:

Type        Miles    Hours   Speed (mph)
Bicycling   538.68   39.17   13.74
Hiking      211.81   92.84    2.29
Skiing        3.17    0.95    3.34

Another cool thing I can do is use R to generate a map showing where I’ve spent the most time. That’s what’s shown in the image on the right. If you’re familiar at all with the west side of the Goldstream Valley, you’ll be able to identify the roads, Creek, and trails I’ve been on in the last two years. The scale bar shows the number of GPS coordinates that fell within each grid cell, so you can get a sense of where I’ve travelled most. I’m just starting to learn what R can do with spatial data, so this is a pretty crude “analysis,” but here’s how I did it (in R):

library(RPostgreSQL)
library(spatstat)

# Pull the point coordinates out of PostGIS, projected to UTM zone 6N (EPSG:32606)
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, dbname="new_gps", host="nsyn")
points <- dbGetQuery(con,
    "SELECT type,
        ST_X(ST_Transform(the_geom, 32606)) AS x,
        ST_Y(ST_Transform(the_geom, 32606)) AS y
     FROM points
        INNER JOIN tracks USING (track_id)
        INNER JOIN types USING (type_id)
     WHERE ST_Y(the_geom) > 60 AND ST_X(the_geom) > -148;"
)

# Build a spatstat point pattern over the bounding box of the data,
# then count the points falling into each cell of a 500 x 500 grid
points_ppp <- ppp(points$x, points$y,
    c(min(points$x), max(points$x)),
    c(min(points$y), max(points$y)))
Lab.palette <- colorRampPalette(c("blue", "magenta", "red", "yellow", "white"), bias=2, space="Lab")
spatstat.options(npixel = c(500, 500))
map <- pixellate(points_ppp)
png("loc_map.png", width = 700, height = 600)
image(map, col = Lab.palette(256), main = "Gridded location counts")
dev.off()

Here’s a similar map showing just my walks with Nika and Piper:

Walks with Nika and Piper
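The idea is the same, with the points restricted to the hiking tracks. A minimal sketch (assuming the type values match the summary table above):

# Same gridding as above, restricted to hiking points
hiking <- subset(points, type == "Hiking")
hiking_ppp <- ppp(hiking$x, hiking$y,
    c(min(hiking$x), max(hiking$x)),
    c(min(hiking$y), max(hiking$y)))
hiking_map <- pixellate(hiking_ppp)
png("hiking_map.png", width = 700, height = 600)
image(hiking_map, col = Lab.palette(256), main = "Hiking location counts")
dev.off()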

And here's something similar using ggplot2:

library(ggplot2)
# 2-D kernel density estimate of the same points, drawn as a tile layer
m <- ggplot(data = points, aes(x = x, y = y)) +
    stat_density2d(geom = "tile", aes(fill = ..density..), contour = FALSE)
m + scale_fill_gradient2(low = "white", mid = "blue", high = "red", midpoint = 5e-07)

I trimmed off the legend and axis labels:

ggplot2 density map (geom_density2d)
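With the ggplot2 API of this era, the trimming amounts to something like this (a sketch, not necessarily the exact call):

# Strip the legend and axis decorations (ggplot2 0.8-era opts() API)
m + scale_fill_gradient2(low = "white", mid = "blue", high = "red", midpoint = 5e-07) +
    labs(x = "", y = "") +
    opts(legend.position = "none", axis.ticks = theme_blank(),
        axis.text.x = theme_blank(), axis.text.y = theme_blank())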

tags: GPS  gpx  iPhone  R  statistics 
fri, 08-oct-2010, 10:03

Back cabin

I’ve read predictions that this winter will be a strong La Niña period, which means that the tropical eastern Pacific Ocean temperature will be colder than normal. The National Weather Service has a lot of information on how this might affect the lower 48 states, but the only thing I’ve heard about how this might affect Fairbanks is that we can expect colder than normal temperatures. The last few years we’ve had below normal snowfall, and I was curious to find out whether La Niña might increase our chances of a normal or above-normal snow year.

Historical data for the ocean temperature anomaly are available from the Climate Prediction Center. That page has a table of the “Oceanic Niño Index” (ONI) for 1950 to 2010, organized as three-month averages. El Niño periods (warmer ocean temperatures) correspond to a positive ONI, and La Niña periods to a negative one. I’ve got historical temperature, precipitation, and snow data for the Fairbanks International Airport over the same period from the “Surface Data, Daily” (SOD) database that the National Climatic Data Center maintains.

First, I downloaded the ONI index data, and wrote a short Python script that pulls apart the HTML table and dumps it into a SQLite3 database table as:

sqlite> CREATE TABLE nino_index (year integer, month integer, value real);
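The scraping could also be done in R with the XML package; here’s a rough sketch. It assumes a saved copy of the CPC page (oni.html is a hypothetical filename) whose first table has a year column followed by the twelve overlapping three-month seasons, with each value stored under the season’s center month.

# Rough R version of the scraper (the Python script is what I actually used).
# The table layout and the center-month convention are assumptions.
library(XML)
library(RSQLite)

oni <- readHTMLTable("oni.html", which = 1, header = TRUE, stringsAsFactors = FALSE)
long <- data.frame(
    year  = rep(as.integer(oni[[1]]), times = 12),
    month = rep(1:12, each = nrow(oni)),
    value = as.numeric(unlist(oni[, 2:13])))

drv <- dbDriver("SQLite")
con <- dbConnect(drv, dbname = "nino_nina.sqlite3")
dbWriteTable(con, "nino_index", long, append = TRUE, row.names = FALSE)
dbDisconnect(con)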

Next, I aggregated the Fairbanks daily data into the same (year, month) format and stuck the result into the SQLite3 database so I could join the two data sets together. Here’s the SOD query to extract and aggregate the data:

pgsql> SELECT extract(year from obs_dte) AS year, extract(month from obs_dte) AS month,
            avg(t_min) AS t_min, avg(t_max) AS t_max, avg((t_min + t_max) / 2.0) AS t_avg,
            avg(precip) AS precip, avg(snow) AS snow
       FROM sod_obs
       WHERE sod_id='502968-26411' AND obs_dte >= '1950-01-01'
       GROUP BY year, month
       ORDER BY year, month;

Now we fire up R and see what we can find out. Here are the statements used to aggregate October through March data into a “winter year” and load it into an R data frame:

R> library(RSQLite)
R> drv = dbDriver("SQLite")
R> con <- dbConnect(drv, dbname = "nino_nina.sqlite3")
R> result <- dbGetQuery(con,
        "SELECT CASE WHEN n.month IN (1, 2, 3) THEN n.year - 1 ELSE n.year END AS winter_year,
                avg(n.value) AS nino_index, avg(w.t_min) AS t_min, avg(w.t_max) AS t_max, avg(w.t_avg) AS t_avg,
                avg(w.precip) AS precip, avg(w.snow) AS snow
         FROM nino_index AS n
            INNER JOIN noaa_fairbanks AS w ON n.year = w.year AND n.month = w.month
         WHERE n.month IN (10, 11, 12, 1, 2, 3)
         GROUP BY CASE WHEN n.month IN (1, 2, 3) THEN n.year - 1 ELSE n.year END
         ORDER BY winter_year;"
   )

What I’m interested in finding out is how much of the variation in winter snowfall can be explained by the variation in Oceanic Niño Index (nino_index in the data frame). Since it seems as though there has been a general trend of decreasing snow over the years, I include winter year in the analysis:

R> model <- lm(snow ~ winter_year + nino_index, data = result)
R> summary(model)

Call:
lm(formula = snow ~ winter_year, data = result)

Residuals:
      Min        1Q    Median        3Q       Max
-0.240438 -0.105927 -0.007713  0.052905  0.473223

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.1000444  2.0863641   1.007    0.318
winter_year -0.0008952  0.0010542  -0.849    0.399

Residual standard error: 0.145 on 59 degrees of freedom
Multiple R-squared: 0.01208,    Adjusted R-squared: -0.004669
F-statistic: 0.7211 on 1 and 59 DF,  p-value: 0.3992

What does this mean? Well, there’s no statistically significant relationship between year or ONI and the amount of snow that falls over the course of a Fairbanks winter. I ran the same analysis against precipitation data and got the same non-result. This doesn’t necessarily mean there isn’t a relationship, just that my analysis didn’t have the ability to find it. Perhaps aggregating all the data into a six month “winter” was a mistake, or there’s some temporal offset between colder ocean temperatures and increased precipitation in Fairbanks. Or maybe La Niña really doesn’t affect precipitation in Fairbanks like it does in Oregon and Washington.

Bummer. The good news is that the analysis didn’t show La Niña is associated with lower snowfall in Fairbanks, so we can still hope for a high snow year. We just can’t hang those hopes on La Niña, should it come to pass.
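If I wanted to chase the temporal-offset idea, one simple approach would be to regress monthly snowfall on the ONI from a few months earlier. A sketch (an untested idea, reusing the SQLite connection and table names from above):

# Does ONI from k months earlier predict monthly snowfall?
monthly <- dbGetQuery(con,
    "SELECT n.year, n.month, n.value AS nino_index, w.snow
     FROM nino_index AS n
        INNER JOIN noaa_fairbanks AS w ON n.year = w.year AND n.month = w.month
     ORDER BY n.year, n.month;")
for (k in 0:6) {
    n <- nrow(monthly) - k
    oni_lagged <- head(monthly$nino_index, n)   # ONI at month t
    snow_later <- tail(monthly$snow, n)         # snowfall at month t + k
    p <- coef(summary(lm(snow_later ~ oni_lagged)))[2, 4]
    cat("lag", k, "months: p =", round(p, 3), "\n")
}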

Since I’ve already got the data, I wanted to test the hypothesis that a low ONI (a La Niña year) is related to colder winter temperatures in Fairbanks. Here’s that analysis performed against the average minimum temperature in Fairbanks (similar results were found with maximum and average temperature):

R> model <- lm(t_min ~ winter_year + nino_index, data = result)
R> summary(model)

Call:
lm(formula = t_min ~ winter_year + nino_index, data = result)

Residuals:
     Min       1Q   Median       3Q      Max
-10.5987  -3.0283  -0.8838   3.0117  10.9808

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -209.07111   70.19056  -2.979  0.00422 **
winter_year    0.10278    0.03547   2.898  0.00529 **
nino_index     1.71415    0.68388   2.506  0.01502 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.802 on 58 degrees of freedom
Multiple R-squared: 0.2343,     Adjusted R-squared: 0.2079
F-statistic: 8.874 on 2 and 58 DF,  p-value: 0.0004343

The results of the analysis show a significant relationship between ONI index and the average minimum temperature in Fairbanks. The relationship is positive, which means that when the ONI index is low (La Niña), winter temperatures in Fairbanks will be colder. In addition, there’s a strong (and significant) positive relationship between year and temperature, indicating that winter temperatures in Fairbanks have increased by an average of 0.1 degrees per year over the period between 1950 and 2009. This is a local result and can’t really speak to hypotheses regarding global climate change, but it does support the idea that the effect of global climate change is increasing winter temperatures in our area.
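To put the ONI coefficient in perspective, here’s a quick back-of-the-envelope calculation using the fitted model (the ONI value of -1.5 for a strong La Niña is an assumed figure, not something from the CPC page):

# Difference between a strong La Niña winter (ONI around -1.5) and a neutral
# one (ONI = 0), holding the year fixed
predict(model, data.frame(winter_year = c(2010, 2010), nino_index = c(-1.5, 0)))
# The nino_index coefficient of 1.71 degrees per unit ONI implies a strong
# La Niña winter averages roughly 1.5 * 1.71 = 2.6 degrees colder.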

One other note: the model that includes both year and ONI, while significant, explains a little over 20% of the variation in winter temperature. There’s a lot more going on (including simple randomness) in Fairbanks winter temperature than these two variables. Still, it’s a good bet that we’re going to have a cold winter if La Niña materializes.

Thanks to Rich and his blog for provoking an interest in how El Niño/La Niña might affect us in Fairbanks.

tags: la niña  R  statistics  weather  winter 
sun, 31-jan-2010, 13:40

I recently saw a pair of blog posts showing how to make heatmaps with straight R and with ggplot2. Basketball doesn’t really interest me, so I figured I’d attempt to do the same thing for the 2010 Oakland Athletics 40-man roster. Results are at the bottom of the post.

First, I needed to get the 40-man roster:

$ w3m -dump "http://oakland.athletics.mlb.com/team/roster_40man.jsp?c_id=oak" > 40man

Then trim it down so it’s just a listing of the players’ names.

Next, get the baseball data bank (BDB) database from http://baseball-databank.org/, convert and insert it into a PostgreSQL database using mysql2pgsql.perl.

A Python script reads the names from the roster, and dumps a CSV file of the batting and pitching data for the past two seasons for the players passed in.

$ cat 40man_names | ./get_two-year_batter_stats.py

The batting data looks like this:

            name  , age,   g,    ba,   obp,   slg,   ops,  rc,   hrr,    kr,   bbr
Daric Barton (1B) ,  25, 194, 0.238, 0.342, 0.365, 0.707,  73, 0.017, 0.173, 0.134
Travis Buck (RF)  ,  27,  74, 0.223, 0.289, 0.392, 0.682,  28, 0.035, 0.202, 0.073
Chris Carter (LF) ,  28,  13, 0.261, 0.320, 0.261, 0.581,   1, 0.000, 0.360, 0.080
...

I’ve used the counting stats in the BDB to calculate batting average (ba), on-base percentage (obp), slugging percentage (slg), OPS (on-base percentage + slugging percentage), runs created (rc), home run rate (hrr), strikeout rate (kr) and walks rate (bbr).
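For reference, these are the standard formulas computed from the BDB Batting table (after summing each player’s rows across the two seasons). A sketch in R; the per-plate-appearance denominators for the rate stats, and the basic version of runs created, are assumptions rather than exactly what my script does:

# Standard formulas from the Batting counting stats (2B and 3B become X2B and
# X3B when read into R). Rate denominators are plate appearances here.
with(batting, {
    pa  <- AB + BB + HBP + SF
    tb  <- H + X2B + 2 * X3B + 3 * HR            # total bases
    ba  <- H / AB
    obp <- (H + BB + HBP) / pa
    slg <- tb / AB
    ops <- obp + slg
    rc  <- (H + BB) * tb / (AB + BB)             # basic runs created
    data.frame(ba, obp, slg, ops, rc,
        hrr = HR / pa, kr = SO / pa, bbr = BB / pa)
})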

And the pitching data:

            name   , age,  g,      ip,  w, l,    sv,    wp,    lp,    wf,   era,    k9,   bb9,   hr9
Brett Anderson (P) ,  22,  30, 175.33, 11,  11,   0,  0.37,  0.37,  0.00,  4.06,  7.70,  2.36,  1.03
Andrew Bailey (P)  ,  26,  68,  83.33,  6,   3,  26,  0.09,  0.04,  0.04,  1.84,  9.83,  2.92,  0.54
Jerry Blevins (P)  ,  27,  56,  60.00,  1,   3,   0,  0.02,  0.05, -0.04,  3.75,  8.70,  3.30,  0.60
...

Here I’ve calculated innings pitched (ip), winning percentage (wp), losing percentage (lp), win frequency (wf), earned run average (era), strikeouts per nine innings (k9), walks per nine (bb9), and home runs given up per nine innings (hr9). All these stats are for the last two Major League seasons.
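The pitching stats work the same way. A sketch from the BDB Pitching table; treating wp, lp, and wf as per-game rates is an inference from the sample rows above, not necessarily the script’s exact definition:

# The databank stores innings pitched as outs recorded (IPouts)
with(pitching, {
    ip  <- IPouts / 3
    data.frame(ip,
        wp = W / G, lp = L / G, wf = (W - L) / G,
        era = 9 * ER / ip,
        k9 = 9 * SO / ip, bb9 = 9 * BB / ip, hr9 = 9 * HR / ip)
})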

Finally, generate the heat maps in R. For batting statistics:

library(ggplot2)   # melt(), ddply() and rescale() come along with the ggplot2 of
                   # this era (0.8.x); newer versions need reshape2, plyr and scales
mlb <- read.csv('batting.csv')
# Order the players by OPS so the best hitters end up at the top of the plot
mlb$name <- with(mlb, reorder(name, ops))
# Melt to long format and rescale each statistic to 0-1 so they share one color scale
mlb.m <- melt(mlb)
mlb.m <- ddply(mlb.m, .(variable), transform, rescale = rescale(value))
(p <- ggplot(mlb.m, aes(variable, name)) +
    geom_tile(aes(fill = rescale), colour = "white") +
    scale_fill_gradient(low = "gold", high = "darkgreen"))
# Clean up the theme (opts(), theme_blank() and theme_text() are the 0.8-era API)
base_size <- 14
p + theme_grey(base_size = base_size) + labs(x = "", y = "") +
    scale_x_discrete(expand = c(0, 0)) + scale_y_discrete(expand = c(0, 0)) +
    opts(legend.position = "none", axis.ticks = theme_blank(),
        axis.text.x = theme_text(size = base_size * 0.8, angle = 0, hjust = 0.5, colour = "black"),
        axis.text.y = theme_text(size = base_size * 0.8, lineheight = 0.9, colour = "black", hjust = 1))

Pitching statistics are generated the same way, except that the line where I order the data frame becomes:

mlb$name <- with(mlb, reorder(name, 1/(era+0.1)))
    

The results:

A’s batting heatmap, ordered by OPS

A’s pitching heatmap, ordered by ERA

You have to keep the number of games (or innings pitched for pitchers) in mind when you look at these charts. I don’t even know who some of those guys are, probably because they’ve only barely played in the majors. It might make some sense to split the pitching plot into plots for starters and relievers, but I’d need a good way to determine a pitcher’s status (innings pitched divided by games beyond some threshold, perhaps?).
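A crude version of that threshold idea, purely as a sketch (the column names match the CSV header above, and the three-inning cutoff is arbitrary):

# Assuming the pitching CSV has been read into a data frame called pitching:
# call anyone averaging more than ~3 innings per appearance a starter
pitching$role <- ifelse(pitching$ip / pitching$g > 3, "starter", "reliever")
# then subset or facet the heatmap by role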

As for the A’s, I like their pitching, but have serious doubts about their offense. I sure hope some of the younger guys on this chart start reaching their power potential because having Jack Cust as your only offensive weapon doesn’t bode well for the team scoring runs.

fri, 23-oct-2009, 17:22

frozen DNR pond

It’s been almost a month since I last discussed the first true snowfall date (when the snow that falls stays on the ground for the entire winter) in Fairbanks, and we’re still without snow on the ground. It hasn’t been that cold yet, but the average temperature is enough below freezing that the local ponds have started freezing. Without snow, there’s a lot of ice skating going on around town. I’m hoping to head out this weekend and do some skating on the pond in the photo above. Still, most folks in Fairbanks are hoping for snow.

Since my last post, I’ve gotten access to data from the National Climatic Data Center, and have been working on getting it all processed into a database. I’ve worked out a procedure for processing the daily COOP data, which means I can repeat my earlier snow depth analysis with a longer (and more consistent) data set. The following figure shows the same basic analysis as in my previous post, but now with data from 1948 to 2008.

Snow depth histogram

The latest date for the first true snowfall was November 11th, 1962, and we’re almost three weeks away from that date. But we’re also on the right side of the distribution—the mean (and median) date is October 14th, and we’re 9 days past that with no significant snow in the forecast. I’ve also marked the earliest (September 13th, 1992) and latest (November 1st, 1997) first snowfall dates in recent history. 1992 was the year the snow fell while the leaves were still on the trees, causing major power outages and a lot of damage. I think 1997 was the year that we didn’t get much snow at all, which caused a lot of problems for water and septic lines buried in the ground. A deep snowpack provides a good insulating layer that keeps buried water lines from freezing and in 1997 a lot of things froze.


Great Horned Owl, digi-scoped with my iPhone

This is also the time of the year when some of the winter birds start making themselves less scarce. We saw our first Pine Grosbeaks of the year, three days later than last year’s first observation, a Northern Goshawk flew over a couple weeks ago, and we got some great views of this Great Horned Owl on Saturday. Andrea took some spectacular photos with her digital camera, and I experimented with my iPhone and the scope we bought in Homer this year. It’s quite a challenge to get the tiny iPhone lens properly oriented with the eyepiece image in the scope, but the photos are pretty impressive when you get it all set up. Even a pretty wimpy camera becomes powerful when looking through a nice scope.

Winter is on its way, just a bit late this year. I’ve been taking advantage of the delay by riding my bike to work fairly often. Earlier in the week I replaced my normal tires with carbide-studded tires, so I’ll be ready when the ice and snow finally come.

tags: DNR pond  GHOW  owl  R  snowfall  weather 
fri, 25-sep-2009, 18:21

Piper and Nika on the Creek, Feb 2009

On Wednesday I reported the results of my analysis examining the average date of first snow recorded at the Fairbanks Airport weather station. It was based on the snow_flag boolean field in the ISD database. In that post I mentioned that examining snow depth data might show the date on which permanent snow (snow that lasts all winter) first falls in Fairbanks. I’m calling this the first “true” snowfall of the season.

For this analysis I looked at the snow depth field in the ISD database for the Fairbanks station. The data is present for the years from 1973 through 1999, but not before then, and I’m not sure why it’s missing after 1999. Luckily I’ve been collecting and archiving the data in the Fairbanks Daily Climate Summary (which includes a snow depth measurement) since late 2000. Combining those two data sets, I’ve got data for 27 years.

The SQL query I came up with to get the data out of these data sets is a good estimate of what we’re interested in, but it isn’t perfect: it actually finds the first date where snow stays on the ground for at least a week, not for the entire winter. In a place like Fairbanks, where the turn to winter is so rapid and so dependent on the high albedo of snow cover, I think it’s close enough to the truth. Unfortunately, the query is brutally slow because it involves six (!) inner self-joins. The idea is to join the table containing snow depth data against itself, incrementing the date by one day at each join. The result set before the WHERE clause is the data for each date, plus the data for the six days following that date. The WHERE clause then requires that the snow depth on all seven dates is above zero. This large query is a subquery of the main query, which selects the earliest such date found in each year.

There must be a better way to deal with conditions like this where we’re interested in the consecutive nature of the phenomenon, but I couldn’t figure out any other way to handle it in SQL, so here it is:

SELECT year, min(date) FROM
    (
        SELECT extract(year from a.dt) AS year,
            to_char(extract(month from a.dt), '00') ||
                '-' ||
                ltrim(to_char(extract(day from a.dt), '00')) AS date
        FROM isd_daily AS a
            INNER JOIN isd_daily AS b
                ON a.isd_id=b.isd_id AND
                    a.dt=b.dt - interval '1 day'
            INNER JOIN isd_daily AS c
                ON a.isd_id=c.isd_id AND
                    a.dt=c.dt - interval '2 days'
            INNER JOIN isd_daily AS d
                ON a.isd_id=d.isd_id AND
                    a.dt=d.dt - interval '3 day'
            INNER JOIN isd_daily AS e
                ON a.isd_id=e.isd_id AND
                    a.dt=e.dt - interval '4 day'
            INNER JOIN isd_daily AS f
                ON a.isd_id=f.isd_id AND
                    a.dt=f.dt - interval '5 day'
            INNER JOIN isd_daily AS g
                ON a.isd_id=g.isd_id AND
                    a.dt=g.dt - interval '6 day'
        WHERE a.isd_id = '702610-26411' AND
            a.snow_depth > 0 AND
            b.snow_depth > 0 AND
            c.snow_depth > 0 AND
            d.snow_depth > 0 AND
            e.snow_depth > 0 AND
            f.snow_depth > 0 AND
            g.snow_depth > 0 AND
            extract(month from a.dt) > 7
    ) AS snow_depth_conseq
GROUP BY year
ORDER BY year;

See what I mean? It’s pretty ugly. Running the result through the same R script as in my previous snowfall post yields this plot:

First true snowfall histogram

Between 1973 and 2008 we’ve gotten snow lasting the whole winter starting as early as September 12th (that was the infamous 1992), and as late as the first of November (1976). The median date is October 13th, which matches my impression. Now that the leaves have largely fallen off the trees, I’m hoping we get our first true snowfall on the early end of the distribution. We’ve still got a few things to take care of (a couple new dog houses, insulating the repaired septic line, etc.), but once those are done, I’m ready for the Creek to freeze and snow to blanket the trails.
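One more thought on that ugly query: once the daily data is in R, the consecutive-days condition is much easier to express. A minimal sketch using rle(), assuming a data frame named daily with one row per day and dt (a Date) and snow_depth columns:

# First date after July where snow depth stays above zero for at least 7
# consecutive days; the same condition the SQL's six self-joins express.
first_true_snow <- function(daily) {
    fall <- subset(daily, as.integer(format(dt, "%m")) > 7)
    fall <- fall[order(fall$dt), ]
    runs <- rle(fall$snow_depth > 0)
    starts <- cumsum(c(1, head(runs$lengths, -1)))   # index where each run begins
    i <- which(runs$values & runs$lengths >= 7)[1]
    if (is.na(i)) NA else fall$dt[starts[i]]
}

Applied one year at a time (with tapply or plyr), that would give the same first-date-per-year table as the query.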

tags: Nika  Piper  R  snowfall  weather 
