About Healthoutcomesguy

Dad, husband, son, brother, data hacker...

Creating a vector of string or numeric from a factor vector using the factor's levels: a transformation of sorts


I’ve used this handy bit of code quite a bit. It's useful whenever I need to work with something that has been coerced to a factor. I run into this most often after a table() call, when I want to do something more interesting with the result (e.g., convert it to a data.frame and sort or select rows by the bin name).

vec <- round(12 + rnorm(100, 5, 7), 0)            # some dummy numeric data
t1 <- table(vec)                                  # binned counts
df1 <- data.frame(t1)
df1
class(df1$vec)                                    # the bin names come back as a factor
df1$vec <- as.numeric(levels(df1$vec))[df1$vec]   # index the levels by the factor's integer codes
df1
class(df1$vec)                                    # now numeric
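A quick aside: the same conversion can also be written by going through character first, as.numeric(as.character(...)). If I recall correctly, the ?factor help page notes that the levels-indexing form above is slightly more efficient, but the character route reads a little more naturally. A minimal sketch, using a second copy of the table so the two can be compared:

df2 <- data.frame(t1)
df2$vec <- as.numeric(as.character(df2$vec))   # same result via character conversion
identical(df2$vec, df1$vec)                    # should be TRUE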


Histogram with density plot overlay (and fancy ggplot-esque background + summary data where legend goes)

I recently had to visualize some data for a client that involved identifying the number of members who were under the age of 18. Thanks to some help from Robert Kabacoff’s Quick-R site, I put together a histogram with a density plot overlay. This is how I did it:


set.seed(123)
agedata <- round(20 + rnorm(1000, 25, 20),0)

xrange <- "Jan-2013 to Jun-2013"
par(mar=c(5.1, 4.1, 4.1, 11.5))
x1 <- hist( agedata, breaks = 100, xlim=range(0, 100), lwd=2, yaxt='n', xaxt='n', ylab="", xlab="", main="") ## first pass: set up the plot region and capture the histogram object
u <- par("usr")
rect(u[1], u[3], u[2], u[4], col = "gray88", border = "black")
abline(h = pretty(x1$counts), col='white') ## draw h lines
abline(v = (pretty(range(1,100))), col='white') ## draw v lines
par(new=T)
hist(agedata, breaks=100, xlim=range(0, 100), ylim=range(u[3:4]), col="yellow", xlab="Age", main=paste("Drug Users by Age fills from", xrange) )
xx1 <- data.frame(table(agedata)) ## I will use this next chunk of code to "color in" only those bars that are below 18years to highlight them
xx1$agedata <- as.numeric(levels(xx1$agedata))[xx1$agedata]
xx1 <- xx1[order(xx1$agedata),]
xx1 <- xx1[xx1$agedata < 18,]
vec <- NULL
for( i in 1:nrow(xx1)){ ## this is where I create my new data
run <- rep(xx1$agedata[i], xx1$Freq[i])
vec <- c(vec, run)
}
par(new=T)
hist(vec, breaks=c(x1$breaks[x1$breaks<18]) ,xlim=range(0, 100), ylim=range(u[3:4]), col="darksalmon", xlab="", main= "" ) # notice I have to use breaks from the previous hist call to ensure that the breaks for the new hist line up with the previous.
abline(v=17, lwd=2, lty=2, col="red" )
count <- length(agedata)   ## number of users in the dummy data
avage <- round(mean(agedata),2)
medage <- round(median(agedata), 2)
sdage <- round(sd(agedata),2)
# text(xpd=TRUE, x= 35, y=35, offset=-.2, labels=paste("Count of Users:", count, "\n", "Mean Age:", avage, "\n", "Median Age:", medage, "\n", "St. Dev.:", sdage), font=2)
legend(xpd=TRUE, 'topright', inset=c(-0.3,0), legend=paste(" Count of Users:", count, "\n", "Mean Age:", avage, "\n", "Median Age:", medage, "\n", "St. Dev.:", sdage), pch=NA, title="Summary", bty='n', cex=1) ## you will have to create the strings that go into the legend yourself; I used some that I had previously created
d <- density(agedata)
par(new=T)
plot(d, main="", xlim=range(0,100), xlab="", ylab="", xaxt='n', yaxt="n")

Importing data into R directly from a GitHub repository

(insert cool sound effect)… I think doing so while running the code also will vastly improve performance

I have recently started to get serious about reviewing time series forecasting. Partly, my push into the topic is nudged along by Rob J Hyndman and George Athanasopoulos’s online text Forecasting: principles and practice. In addition, there was a really nice contributed blog post on R-bloggers by Corey Chivers @ bayesianbiologist.com that took a stab at forecasting bicycle collision rates in Montreal based on 3 years of data.

There was a lot of really great stuff that Corey’s post encouraged me to explore, but one of the things it required was importing some data directly from Corey’s GitHub repository into R, and I thought I would quickly write a post on how that can be done.

Corey’s GitHub repository can be found here, and in it there is a data folder that contains all the data required to replicate his time series forecast of bicycle collisions. Corey has also graciously provided R code for the latter in the same repository.

To import the data, you will need to have the RCurl library installed.

library(RCurl)
url <- 'https://raw.github.com/cjbayesian/collisions/timeseries/data/Bike%20Accidents.csv'
bike.data <- getURL(url, ssl.verifypeer = FALSE) ## ssl.verifypeer = FALSE works around an SSL certificate error you get otherwise
d <- read.table(textConnection(bike.data), header=TRUE, sep='|', row.names='id', na.strings=' ') ## the file uses a pipe ('|') separator

It’s as easy as that! Now if I can only get that pesky shapefile (montreal_borough_borders.shp) to play nicely with readShapePoly()! So much to learn, so little time! 🙂
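For what it's worth, I suspect part of the shapefile trouble is that readShapePoly() (from the maptools package) reads local files rather than URLs. Here is an untested sketch of one way around that; it assumes the .shp sits in the same data folder of the repository and has the usual .shx/.dbf companion files (those paths are my guess):

library(maptools)
base <- 'https://raw.github.com/cjbayesian/collisions/timeseries/data/montreal_borough_borders'
for(ext in c('.shp', '.shx', '.dbf')){  ## pull down the shapefile components locally first
  download.file(paste0(base, ext), destfile=paste0('montreal_borough_borders', ext), mode='wb')
}
boroughs <- readShapePoly('montreal_borough_borders.shp')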

##UPDATE##

OK, so I’ve revisited this as I start to implement a workflow that incorporates Git for version control and such… and along the way I wanted to post another set of example code:

url <- "https://raw.githubusercontent.com/connerpharmd/LDLCVE/master/secprevstatin.csv"
statins <- getURL(url, ssl.verifypeer = FALSE)
statins <- read.csv(textConnection(statins))
statins <- statins[which(statins$Study != "AtoZ"), ]
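One more note while I'm at it: in more recent versions of R (roughly 3.2.0 onward, if I remember right), base read.csv() can usually read an https URL directly, so the getURL()/textConnection() step can often be skipped entirely:

url <- "https://raw.githubusercontent.com/connerpharmd/LDLCVE/master/secprevstatin.csv"
statins <- read.csv(url)                                # newer R can read https URLs directly
statins <- statins[which(statins$Study != "AtoZ"), ]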

Evaluating data by group and in rows (up/down vs. side/side) without looping: lapply is your friend

I recently had to deal with a “switch analysis” that involved identifying observations (within subject) where there was a change in the kind of medication a patient was taking, by class. To use a simple non-drug example, let's say you were interested in purchasing habits, and you wanted to identify the exact transaction at which someone switched consumption habits (e.g., stopped buying blue widgets and started buying red ones). Things get even more interesting when you are really more interested in switching within sub-groups of widgets (e.g., within all red widgets, when did the consumer switch from red-small widgets to red-large widgets). While this would be a fairly trivial task if the transactions were laid out in columns, my data set had each transaction laid out as a separate row. This required evaluating data across rows (a slightly more complex task).

I didn’t want to deal with a looping construct and the time disadvantage that comes with it, so I had to find a way to vectorize the solution, and I did so using lapply. Then, after putting something useful into a list, I used some code to recreate a data.frame from the lapply output (see here for a great primer on the apply set of functions). I'm sure there is a more elegant solution for the last bit, and I'm open to any suggestions (one possible alternative is sketched after the code below).

In this example I leave the steps laid out pretty explicitly and haven't done much to clean things up, so that it's a bit easier to follow.

Note that in this dummy example I use GPI4 (a high-level classifier) and GPI10 (a lower-level classifier), and my intent is to isolate when a subject switches to a different GPI10-level agent within a GPI4. These are remnants of my particular use case, as they are drug classifiers:


# make my data
data <- structure(list(ptid = c(101L, 101L, 101L, 101L, 101L, 102L, 102L,
102L, 102L, 103L, 103L, 103L, 103L), gpi4 = structure(c(2L, 2L,
2L, 3L, 3L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L), .Label = c("", "A",
"B"), class = "factor"), gpi10 = c(1L, 1L, 2L, 1L, 2L, 1L, 1L,
2L, 2L, 2L, 1L, 1L, 2L), filldate = structure(c(2L, 3L, 4L, 5L,
6L, 2L, 3L, 4L, 5L, 2L, 3L, 4L, 5L), .Label = c("", "1/1/2012",
"2/1/2012", "3/1/2012", "4/1/2012", "5/1/2012"), class = "factor")), .Names = c("ptid",
"gpi4", "gpi10", "filldate"), row.names = c(NA, 13L), class = "data.frame")

# create a splitting factor
data$splitfac <- factor(paste(data$ptid, data$gpi4, sep="")) # create a factor to split by (patient + GPI4)
data <- data[ order(data$splitfac, data$filldate),]
test <- lapply(split(data, data$splitfac), function(x) c(NA, x$gpi10[-nrow(x)])) # here is where the magic happens: within each patient/GPI4 group, build a vector of the previous row's gpi10 ('dragging down' the prior observation)
test2 <- (unlist(test)) # This and the next few lines of code deal with re-creating a dataframe
# names(test2)
test3 <- data.frame(splitfac2 = substr(names(test2), 1, nchar(names(test2))-1), gpi10.m1 = test2)
data <- cbind(data, test3)
index <- which(data$gpi10 != data$gpi10.m1) # this gives the indices of the switches
data[index, ] # these are the switches
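As an aside (and not what I originally did), the same lagged column can be built in one step with ave(), which applies a function within each group and returns a vector already aligned with the rows of the data, so the unlist/rebuild-a-data.frame dance goes away. A sketch, using the same data and splitfac as above:

data$gpi10.m1 <- ave(data$gpi10, data$splitfac,
                     FUN=function(x) c(NA, head(x, -1))) # previous row's gpi10 within each group
data[which(data$gpi10 != data$gpi10.m1), ]               # the switches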

Using data.table to make quick work of limiting a dataset by the first observation of a set of values by subject…

meep meep this!!!!!!


Today I ran into an issue where I had to select the first instance of an observation, where identifying that first instance depended on more than one variable. It was essentially a play on an issue that came up in a previous post (Picking one observation per ‘subject’ based on max (or min)), but in the previous case each instance could be selected by examining a single variable.

As an example, let's assume that you've got a medical claims data set where there are multiple lines per claim and multiple claims per patient; however, you want to retain only the first complete claim line for each unique Patient ID and Claim ID combination.

The solution is implemented using Matthew Dowle’s data.table package. I’m trying to make a concerted effort to replace data.frame structures with data.tables where I can (when I remember to do so). This is mainly due to the speed advantage (in data manipulation, sorting, and merging) you can obtain by doing this. Also, I think a mastery of the data.table syntax lends itself to very efficient and compact code for data manipulation. In particular, there are handy constructs that are part of this syntax (like .SD, the subset of the data.table) that can do nifty things in one line that would otherwise take many lines to accomplish.

library(data.table)
DT = data.table(Patient=c(5:1, 5:1, 5, 5),Claim=c(10:1, 10, 8), data=rnorm(12))
DT # before 
setkey(DT, Patient)          # re-orders table and marks it sorted.
DT # after
tables()              # KEY column reports the key'd columns
key(DT)
keycols = c("Patient","Claim", "data") ## now data are sorted by Patient, Claim then data
setkeyv(DT,keycols) 
DT # after (notice it's easy to see now how Patient 5 has multiple lines for Claim 10, you only want one)
tables()              # KEY column reports the key'd columns
key(DT)
DT1 <- DT[ , .SD[which.min(Claim), ], by=list(Patient, Claim)]  
DT1 # now you only have one row for each unique Patient/Claim combo (and, because of the sort order, it's also the row with the min of the data variable within each Patient+Claim combination)

As a riff on this theme I also stumbled on some interesting alternatives that may actually run even faster in this post.
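For what it's worth, here are two other data.table idioms that I believe get the same first-row-per-group result once the table is keyed as above (sketches only; the unique() form needs a reasonably recent data.table that supports the by argument):

DT2 <- DT[ , .SD[1], by=list(Patient, Claim)]      # first row of each Patient/Claim group
DT3 <- unique(DT, by=c("Patient", "Claim"))        # first row for each Patient/Claim key combination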

An excellent place for quotes about statistics/data science

This is how I picked up my wife… guaranteed to work 60% of the time every time!

I recently had the occasion to apply George E.P. Box’s famous quote “…all models are wrong, but some are useful” in a work discussion, and it got me thinking about all those other great and insightful quotes that come in handy (e-mails, presentations, etc.). So I went on a short search and stumbled on this most excellent list from a Stack Exchange post. I also stumbled on this list (by following a link from R-bloggers).

R trivia question of the day: What is the default axis buffer R places on y- and x-axes in base scatterplots?

Alex, I’ll take the “best non-PC villains EVAAAAH!” for $1000.


When creating more complex scatter plots (especially when building multiple plots on the same set of axes using par(new=T)), I often set the ylim myself to some custom value; however, you will find that even in these cases base plot in R will add a “buffer” to the upper and lower limits of your y-axis. I’ve always wondered EXACTLY what this buffer was, and after a little bit of searching, I found this very informative (and useful, thanks to the nifty yaxs=’i’ trick) post. Anyhow, the answer, it seems, is 4%.
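If you want to see the 4% for yourself, compare par("usr") with the data range after a default plot, and note that setting xaxs='i' and yaxs='i' turns the padding off:

plot(0:10, 0:10)
par("usr")                             # roughly c(-0.4, 10.4, -0.4, 10.4): 4% added to each side
plot(0:10, 0:10, xaxs='i', yaxs='i')
par("usr")                             # exactly c(0, 10, 0, 10)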