Statistically Significant

Andrew Landgraf's Blog

Monitoring Price Fluctuations of Book Trade-In Values on Amazon


I am planning to finish school soon and I would like to shed some weight before moving on. I have collected a fair number of books that I will likely never use again and it would be nice to get some money for them. Sites like Amazon and eBay let you sell your book to other customers, but Amazon will also buy some of your books directly (Trade-Ins), saving you the hassle of waiting for a buyer.

Before selling, I remembered listening to a Planet Money episode about a couple of guys that tried to make money off of buying and selling used textbooks on Amazon. Their strategy was to buy books at the end of a semester when students are itching to get rid of them, and sell them to other students at the beginning of the next semester. To back up their business, they have been scraping Amazon’s website for years, keeping track of prices in order to find the optimal times to buy and sell.

I collected a few books I was willing to part with and set up a scraper in R. I am primarily interested in selling my books to Amazon, so I tracked Amazon’s Trade-In prices for these books. This was done fairly easily with Hadley Wickham’s package rvest and the Chrome extension Selector Gadget. Selector Gadget tells me that the node I’m interested in is #tradeInButton_tradeInValue. The code to do the scraping is below.

Amazon_scrape_prices.R
library(rvest)

# Book titles and trade-in page URLs, plus the previously collected prices
urls_df = read.csv("AmazonBookURLs.csv", stringsAsFactors = FALSE, comment.char = "")
load(file = "AmazonBookPrices.RData")

# One row per book for this scrape, stamped with the current time
price_df_temp = data.frame(Title = urls_df$Title,
                           Date = Sys.time(),
                           Price = NA_real_, stringsAsFactors = FALSE)
for (i in 1:nrow(urls_df)) {
  # Pull out the trade-in value node found with Selector Gadget
  tradein_html = urls_df$URL[i] %>% html() %>%
    html_node("#tradeInButton_tradeInValue")
  if (is.null(tradein_html)) {
    next
  }
  # Strip the leading dollar sign and surrounding whitespace, then convert to numeric
  price = tradein_html %>%
    html_text() %>%
    gsub("(^[[:space:]]+\\$|[[:space:]]+$)", "", .) %>%
    as.numeric()
  price_df_temp$Price[i] = price
}
# Append today's prices to the running data frame and save it for next time
price_df = rbind(price_df, price_df_temp)

save(price_df, file = "AmazonBookPrices.RData")

After manually collecting this data for less than a week, I am able to plot the trends for the eight books I am interested in selling. The plot and code are below.

plot_amazon_prices.R
library(ggplot2); theme_set(theme_bw())
library(scales)

# Truncate long titles to 30 characters so the facet labels stay readable
price_df$TitleTrunc = paste0(substring(price_df$Title, 1, 30),
                             ifelse(nchar(price_df$Title) > 30, "...", ""))
ggplot(price_df, aes(Date, Price)) +
  geom_step() + geom_point() + facet_wrap(~ TitleTrunc, scales = "free_y") +
  scale_y_continuous(labels = dollar) +
  theme(axis.text.x = element_text(angle = 90, vjust = 0))

I am surprised how much the prices fluctuate. I expected them to be constant for the most part, with large changes once a week or less often. Apparently Amazon is doing quite a bit of tweaking to determine optimal price points. I would also guess that $2 is their minimum trade-in price. It looks like I missed out on my best chance to sell A First Course in Stochastic Processes, but that the price of A Primer on Linear Models will keep rising forever.

Time Stacking and Time Slicing in R


Time lapses are a fun way to quickly show a long period of time. They typically involve setting up your camera on a tripod and taking photos at a regular interval, like every 5 seconds. After all the photos have been taken, they are combined into a movie at a much faster rate, for example 30 frames per second.

Time stacking is a way to combine all the photos into a single photo instead of a movie. This is a common method for making star trails, and Matt Molloy has recently been experimenting with it in many different settings. There are many possible ways to achieve a time stack, but the most common is to combine the photos with a lighten layer blend: at every pixel position, the combined image uses the corresponding pixel from whichever photo was brightest at that position. This captures the motion of the stars or clouds in a scene.
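Since the idea is easy to express in R, here is a minimal sketch of a lighten-style stack using the jpeg package; it is not the code from my gist, the folder and output file names are made up, and it simply keeps the per-channel maximum across all of the photos.

library(jpeg)

# Hypothetical folder of equally sized JPEGs from the time lapse
photo_files <- list.files("timelapse_photos", pattern = "\\.jpe?g$",
                          full.names = TRUE)

# Start from the first photo and keep the per-channel maximum (lighten blend)
stack <- readJPEG(photo_files[1])
for (f in photo_files[-1]) {
  stack <- pmax(stack, readJPEG(f))
}

writeJPEG(stack, "time_stack.jpg", quality = 0.95)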

Another way to combine the photos is through time slicing (see, for example, this photo). In time slicing, the final combined image contains a “slice” from each of the original photos. Time slices can go left-to-right, right-to-left, top-to-bottom, or bottom-to-top. For example, a time slice that goes from left to right uses vertical slices of pixels. If you took 100 photos for your time lapse, each of them 1000 pixels wide, the left-most 10 columns of the final image would come from the first photo, columns 11 through 20 from the second photo, and so on. Different directions produce different effects.
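A minimal sketch of a left-to-right slice, under the same assumptions as above (equally sized JPEGs in a hypothetical folder; again, not the gist's code), could look like this:

library(jpeg)

photo_files <- list.files("timelapse_photos", pattern = "\\.jpe?g$",
                          full.names = TRUE)
n <- length(photo_files)

# Use the first photo to set up the output and get the image width
slice <- readJPEG(photo_files[1])
width <- dim(slice)[2]

# Assign each column of the final image to one of the n photos, in order
col_to_photo <- pmin(ceiling(seq_len(width) / (width / n)), n)

for (i in 2:n) {
  cols <- which(col_to_photo == i)
  slice[, cols, ] <- readJPEG(photo_files[i])[, cols, ]
}

writeJPEG(slice, "time_slice.jpg", quality = 0.95)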

There is free software available to do lighten layer blending of two photos, but I could not find any that would automatically do it for a large number of photos. Similarly for the time slice, it is easy enough to manually slice a few photos, but not hundreds of photos. Therefore, I wrote a couple of scripts in R to do this automatically. A gist of the scripts to create a time stack and a time slice is here. Both require you to give the directory containing the JPEG photos, and the time slice script lets you enter the direction you would like it to go.

To try this out on my own photos, I used the time lapse I had created of a sunset (movie created with FFmpeg), which consisted of 225 photos taken over 20 minutes. The source material isn’t that great (the photos were out of focus), but you can still see the effects.

The following picture is a time stack.

The following four pictures are the time slices with different directions.

Yet Another Baseball Defense Statistic


Fangraphs recently published an interesting dataset that measures defensive efficiency of fielders. For each player, the Inside Edge dataset breaks their opportunities to make plays into five categories, ranging from almost impossible to routine. It also records the proportion of times that the player successfully made the play. With this data, we can see how successful each player is for each type of play. I wanted to think of a way to combine these five proportions into one fielding metric. From here on, I will assume that there is no error in categorizing a play as easy or hard and that there is no bias in the categorizations.

Model

The model I will build is motivated by ordinal regression. If we were only concerned with the success rate in one of the categories, we could use standard logistic regression, and the probability that player $i$ successfully made a play would be assumed to be $\sigma(\theta_i)$, where $\sigma()$ is the logistic function. Using our prior knowledge that plays categorized as easy should have a higher success rate than plays categorized as difficult, I would like to generalize this.

Say there are only two categories: easy and hard. We could model the probability that player $i$ successfully made a hard play as $\sigma(\theta_i)$ and the probability that he made an easy play as $\sigma(\theta_i+\gamma)$. Here, we would assume that $\gamma$ is the same for all players. This assumption implies that if player $i$ is better than player $j$ at easy plays, he will also be better at hard plays. This is a reasonable assumption, but maybe not true in all cases.

Since we have five different categories of difficulty, we can generalize this by having $\gamma_k, k=1,\ldots,4$. Again, these $\gamma_k$’s would be the same for everyone. A picture of what this looks like for shortstops is below. In this model, every player effectively shifts the curve either left or right. A positive $\theta_i$ means the player is better than average and causes the curve to shift left, and vice versa for a negative $\theta_i$.

I modeled this as a multi-level mixed effects model, with the players being random effects and the $\gamma_k$’s being fixed. Technically, I should optimize subject to the condition that the $\gamma_k$’s are increasing, but the unconstrained optimization always yields increasing $\gamma_k$’s because there is a big difference between the success rates in the categories. I used combined data from the 2012 and 2013 seasons and included all players with at least one success and one failure. I modeled each position separately. Because the player effects are modeled as random, there is a fair amount of regression to the mean built in. In this sense, I am trying to estimate the true ability of the player, rather than measuring what he did during the two years. This is an important distinction, which may differ from other defensive statistics.
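For concreteness, here is a hedged sketch of how such a fit could look with lme4 (not necessarily my exact code). It assumes a hypothetical data frame plays with one row per player and difficulty category, columns Made and Missed holding success and failure counts, and the hardest category as the reference level of Difficulty.

library(lme4)

# Player effects (theta_i) are random intercepts; the difficulty shifts
# (gamma_k) are fixed effects relative to the hardest category.
fit <- glmer(cbind(Made, Missed) ~ Difficulty + (1 | Player),
             data = plays, family = binomial)

gammas <- fixef(fit)          # category shifts (plus the overall intercept)
thetas <- ranef(fit)$Player   # per-player effects, shrunk toward zero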

Model fit

Below is a summary of the results of the model for shortstops. For readability, I am only plotting the players with at least 800 innings. A bonus of modeling the data like this is that we get standard error estimates as a result. I plotted the estimated effect of each player along with +/- 2 standard errors. We can be fairly certain that the effects for the top few shortstops are greater than 0 since their confidence intervals do not include 0. The same is true for the bottom few. Images for the other positions can be found here.

The results seem to make sense for the most part. Simmons and Tulowitzki have reputations as being strong defenders and Derek Jeter has a reputation as a poor defender.

Further, I can validate this data by comparing it to other defensive metrics. One that is readily available on Fangraphs is UZR per 150 games. For each position, I took the correlation of my estimated effect with UZR per 150 games, weighted by the number of innings played. Pitchers and catchers do not have UZR’s so I cannot compare them. The correlations, which are in the plot below, range from about 0.2 to 0.65.
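As a sketch, the weighted correlation for one position could be computed with cov.wt, assuming a hypothetical data frame compare with columns Theta (the estimated effect), UZR150, and Innings:

# Innings-weighted correlation between the model's player effects and UZR/150
wcor <- cov.wt(compare[, c("Theta", "UZR150")],
               wt = compare$Innings, cor = TRUE)$cor[1, 2]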

Interpreting parameters

In order to make this fielding metric more useful, I would like to convert the parameters to something more interpretable. One option which makes a lot of sense is “plays made above/below average”. Given an estimated $\theta_i$ for a player, we can calculate the probability that he would make a play in each of the five categories. We can then compare those probabilities to the probability an average player would make a play in each of the categories, which would be fit with $\theta=0$. Finally, we can weight these differences in probabilities by the relative rate that plays of various difficulties occur.

For example, assuming there are only two categories again, suppose a player has a 0.10 and 0.05 higher probability than average of making hard and easy plays, respectively. Further assume that 75% of all plays are hard and 25% are easy. On a random play, the improvement in probability over an average player of making a play is $.10(.75)+.05(.25)=0.0875$. If a player has an opportunity for 300 plays in an average season, this player would be $300 \times 0.0875=26.25$ plays better than average over a typical season.

I will assume that the number of opportunities to make plays is directly related to the number of innings played. To convert innings to opportunities, I took the median number of opportunities per inning for each position. For example, shortstops had the highest opportunities per inning at 0.40 and catchers had the lowest at 0.08. The plot below shows the distribution of opportunities per inning for each position.
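Putting these pieces together, a hedged sketch of the conversion might look like the function below, where gammas and cat_freq are hypothetical names for the fitted category shifts and the relative frequencies of the five difficulty categories. The defaults use the seasonal assumptions described for the Shiny app below (150 games of 8.5 innings) and the shortstop rate of 0.40 opportunities per inning.

# Expected plays above average per season for a player with effect theta
plays_above_average <- function(theta, gammas, cat_freq,
                                innings = 150 * 8.5, opp_per_inning = 0.40) {
  p_player  <- plogis(theta + gammas)   # success probability in each category
  p_average <- plogis(gammas)           # same probabilities at theta = 0
  # Weight the improvement in each category by how often that category occurs,
  # then scale by a season's worth of opportunities
  sum((p_player - p_average) * cat_freq) * innings * opp_per_inning
}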

We can extend this to the impact on saving runs from being scored as well by assuming each successful play saves $x$ runs. I will not do this for this analysis.

Results

[Update: The Shiny app is no longer live, but you can view the resulting estimates in the Github repository.]

Finally, I put together a Shiny app to display the results. You can search by team, position, and innings played. A team of ‘- - -‘ means the player played for multiple teams over this period. You can also choose to display the results as a rate statistic (extra plays made per season) or a count statistic (extra plays made over the two seasons). To get a seasonal number, I assume position players played 150 games with 8.5 innings in each game. For pitchers, I assumed that they pitched 30 games, 6 innings each.

Conclusions and future work

I don’t know if I will do anything more with this data, but if I could do it again, I might have modeled each year separately instead of combining the two years together. With that, it would have been interesting to model the effect of age by observing how a player’s ability to make a play changes from one year to the next. I also think it would be interesting to see how changing positions affects a player’s efficiency. For example, we could have a $9 \times 9$ matrix of fixed effects that represents the improvement or degradation in ability as a player switches from their main position to another one. Further assumptions would be needed to make sure the $\theta$’s are on the same scale for every position.

At the very least, this model and its results can be considered another data point in the analysis of a player’s fielding ability. One thing we need to be concerned about is the classification of each play into the difficulty categories. The human eye can be fooled into thinking a routine play is hard just because a fielder dove to make the play, when a superior fielder could have made it look easier.

Code

I have put the R code together to do this analysis in a gist. If there is interest, I will put together a repo with all the data as well.

Update: I have a Github repository with the data, R code for the analysis, the results, and code for the Shiny app. Let me know what you think.

Hastie and Tibshirani Interview Jerome Friedman


Trevor Hastie and Rob Tibshirani are currently teaching a MOOC covering an introduction to statistical learning. I am very familiar with most of the material in the course, having read Elements of Statistical Learning many times over.

One great thing about the class, however, is that they are truly experts and have collaborated with many of the influential researchers in their field. Because of this, when covering certain topics, they have included interviews with statisticians who made important contributions to the field. When introducing the class to R, they interviewed John Chambers, who was able to give a personal account of the history of S and R because he was one of the developers. Further, when covering resampling methods, they spoke with Brad Efron, who talked about the history of the bootstrap and how he struggled to get it published.

Today, they released a video interview with Jerome Friedman. Friedman revealed many interesting facts about the history of tree-based methods, including the fact that there weren’t really any journal articles written about CART when they wrote their book. There was one quote that I particularly enjoyed.

And of course, I’m very gratified that something that I was intellectually interested in for all this time has now become very popular and very important. I mean, data has risen to the top. My only regret is two of my mentors who also pushed it, probably harder and more effectively than I did – namely, John Tukey and Leo Breiman – are not around to actually see how data has triumphed over, say, theorem proving.

Top Songs by Artist on CD102.5 in 2013


In a previous post, I showed you how to scrape playlist data from Columbus, OH alternative rock station CD102.5. Since it’s the end of the year and best-of lists are all the rage, I thought I would share the most popular songs and artists of the year, according to this data. In addition to this, I am going to make an interactive graph using Shiny, where the user can select an artist and it will graph the most popular songs from that artist.

First off, I am assuming that you have scraped the appropriate data using the code from the previous post.

library(lubridate)
library(sqldf)

playlist=read.csv("CD101Playlist.csv",stringsAsFactors=FALSE)
# The third column holds the play time followed by the date: the last 10
# characters are the date and everything before that is the hour:minute time.
dates=mdy(substring(playlist[,3],nchar(playlist[,3])-9,nchar(playlist[,3])))
times=hm(substring(playlist[,3],1,nchar(playlist[,3])-10))
playlist$Month=ymd(paste(year(dates),month(dates),"1",sep="-"))
playlist$Day=dates
playlist$Time=times
# Order the plays chronologically
playlist=playlist[order(playlist$Day,playlist$Time),]

Next, I will select just the data from 2013 and find the songs that were played most often.
playlist=subset(playlist,Day>=mdy("1/1/13"))
playlist$ArtistSong=paste(playlist$Artist,playlist$Song,sep="-")
top.songs=sqldf("Select ArtistSong, Count(ArtistSong) as Num
From playlist
Group By ArtistSong
Order by Num DESC
Limit 10")

The top 10 songs are the following:
    Artist-Song                               Plays
 1  FITZ AND THE TANTRUMS-OUT OF MY LEAGUE      809
 2  ALT J-BREEZEBLOCKS                          764
 3  COLD WAR KIDS-MIRACLE MILE                  759
 4  ATLAS GENIUS-IF SO                          750
 5  FOALS-MY NUMBER                             687
 6  MS MR-HURRICANE                             679
 7  THE NEIGHBOURHOOD-SWEATER WEATHER           657
 8  CAPITAL CITIES-SAFE AND SOUND               646
 9  VAMPIRE WEEKEND-DIANE YOUNG                 639
10  THE FEATURES-THIS DISORDER                  632

I will make a plot similar to the plots made in the last post to show when the top 5 songs were played throughout the year.
    
plays.per.day=sqldf("Select Day, Count(Artist) as Num
From playlist
Group By Day
Order by Day")

playlist.top.songs=subset(playlist,ArtistSong %in% top.songs$ArtistSong[1:5])

song.per.day=sqldf("Select Day, ArtistSong, Count(ArtistSong) as Num
From [playlist.top.songs]
Group By Day, ArtistSong
Order by Day, ArtistSong")

# dcast/melt come from reshape2; fill in zeros for days a song was not played
library(reshape2)
dspd=dcast(song.per.day,Day~ArtistSong,sum,value.var="Num")

song.per.day=merge(plays.per.day[,1,drop=FALSE],dspd,all.x=TRUE)
song.per.day[is.na(song.per.day)]=0

song.per.day=melt(song.per.day,1,variable.name="ArtistSong",value.name="Num")
song.per.day$Alpha=ifelse(song.per.day$Num>0,1,0)

library(ggplot2)
ggplot(song.per.day,aes(Day,Num,colour=ArtistSong))+geom_point(aes(alpha=Alpha))+
  geom_smooth(method="gam",family=poisson,formula=y~s(x),se=F,size=1)+
  labs(x="Date",y="Plays Per Day",title="Top Songs",colour=NULL)+
  scale_alpha_continuous(guide=FALSE,range=c(0,.5))+theme_bw()
Alt-J was more popular in the beginning of the year and the Foals have been more popular recently.

I can similarly summarize by artist as well.
top.artists=sqldf("Select Artist, Count(Artist) as Num
From playlist
Group By Artist
Order by Num DESC
Limit 10")

    Artist                     Plays
 1  MUSE                        1683
 2  VAMPIRE WEEKEND             1504
 3  SILVERSUN PICKUPS           1442
 4  FOALS                       1439
 5  PHOENIX                     1434
 6  COLD WAR KIDS               1425
 7  JAKE BUGG                   1316
 8  QUEENS OF THE STONE AGE     1296
 9  ALT J                       1233
10  OF MONSTERS AND MEN         1150

playlist.top.artists=subset(playlist,Artist %in% top.artists$Artist[1:5])

# Same steps as before, but aggregated by artist instead of song
artists.per.day=sqldf("Select Day, Artist, Count(Artist) as Num
From [playlist.top.artists]
Group By Day, Artist
Order by Day, Artist")
dspd=dcast(artists.per.day,Day~Artist,sum,value.var="Num")

artists.per.day=merge(plays.per.day[,1,drop=FALSE],dspd,all.x=TRUE)
artists.per.day[is.na(artists.per.day)]=0

artists.per.day=melt(artists.per.day,1,variable.name="Artist",value.name="Num")
artists.per.day$Alpha=ifelse(artists.per.day$Num>0,1,0)

ggplot(artists.per.day,aes(Day,Num,colour=Artist))+geom_point(aes(alpha=Alpha))+
  geom_smooth(method="gam",family=poisson,formula=y~s(x),se=F,size=1)+
  labs(x="Date",y="Plays Per Day",title="Top Artists",colour=NULL)+
  scale_alpha_continuous(guide=FALSE,range=c(0,.5))+theme_bw()
The patterns for the artists are not as clear as they are for the songs.

Finally, I wrote an interactive Shiny app. Shiny apps are surprisingly easy to create, and if you are thinking about experimenting with Shiny, I suggest you try it. I will leave the code for the app in a gist. In the app, you can enter any artist you want, and it will show you the most popular songs on CD102.5 for that artist. You can also select the number of songs that it plots with the slider.
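As a rough illustration only (the actual app is in the gist), a minimal single-file sketch of this kind of app could look something like the following, assuming the scraped CD101Playlist.csv with Artist and Song columns:

library(shiny)
library(ggplot2)

playlist <- read.csv("CD101Playlist.csv", stringsAsFactors = FALSE)

ui <- fluidPage(
  textInput("artist", "Artist", value = "MUSE"),
  sliderInput("n", "Number of songs", min = 1, max = 20, value = 10),
  plotOutput("topSongs")
)

server <- function(input, output) {
  output$topSongs <- renderPlot({
    # Count plays of each song by the requested artist and keep the top n
    songs <- subset(playlist, toupper(Artist) == toupper(input$artist))
    counts <- head(sort(table(songs$Song), decreasing = TRUE), input$n)
    df <- data.frame(Song = names(counts), Plays = as.numeric(counts))
    ggplot(df, aes(reorder(Song, Plays), Plays)) +
      geom_bar(stat = "identity") + coord_flip() +
      labs(x = NULL, y = "Plays", title = toupper(input$artist)) + theme_bw()
  })
}

shinyApp(ui, server)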

For example, even though Muse did not have one of the most popular songs of the year, they were still the band that was played the most. By typing in “MUSE” in the Artist text input, you will get the following output.

They had two songs that were very popular this year and a few others that were decently popular as well.

Play around with it and let me know what you think.