Recreating one of the most reproduced plots in the history of hypercholesterolemia clinical trials (using R)

The one-and-only Figure 4 from LaRosa et al.

VS.

Statins… Saving lives since 1987.

What you are looking at is quite possibly the most replicated statin study data visualization of all time. If you have been to more than one lecture on either primary or secondary prevention with statins, you've seen this plot, or some adaptation of it. It comes from page 1434 of the New England Journal of Medicine, volume 352, issue 14. The authors of the TNT trial (LaRosa et al. N Engl J Med. 2005 Apr 7;352(14):1425-35.) used it in their discussion section to put the results of TNT into the broader context of the extant secondary-prevention literature of the day. I've been thinking a lot about these data recently (for reasons I'll leave for another post) and wanted to manipulate some of them in R. As part of that exercise, I decided to re-create the plot (as best I could using only data from the published trials) and perhaps improve upon it.

options(stringsAsFactors=F, scipen=999)
library(ggplot2)
require(RColorBrewer)
file <- "http://dl.dropboxusercontent.com/u/27644144/secprevstatin.csv" # data is here--I extracted it the best I could from the various landmark trials
statins <- read.csv(file, header=T)
#View(secprevstatin)


# here's a quick look using ggplot
qplot(x=LDL, y=eventrate, data=statins, color=Study)


df1 <- statins
yval <- 'eventrate'
pyval <- 'Event (%)'
xval <- 'LDL'
pxval <- 'LDL Cholesterol (mg/dL)'
df1$pchvec <- ifelse( grepl("PBO", df1$Cohort), 1, 19 ) # plotting character
df1$pchfill <- ifelse(df1$pchvec == 1, 'white', 'black') # plotting character fill --not sure it works
x1 <- brewer.pal(length(unique(df1$Study)), 'Set2') # grab some colors
cpal <- x1[rep(1:length(unique(df1$Study)), rle(df1$Study)[[1]])]


par(mar=c(5.1, 4.1, 4.1, 12))# try to match dimensions from LaRosa et al.

# now draw the vanilla plot (no tweaks) 
plot( df1[, xval], df1[, yval], pch=df1$pchvec, col='black', cex=1.5  ,yaxt='n', xaxt='n', ylab="", xlab="", ylim=c(0,30), xlim=c(70,210))
axis(side=2, at=c(0, 5, 10, 15, 20, 25, 30), labels=c("0","5",'10', '15', '20', '25', '30'), las=1 )
axis(side=1, at=c(70, 90, 110, 130, 150, 170, 190, 210), labels=c("70", "90", "110", "130", "150", "170", "190", "210")  )
legend( "topleft", pch=c(19, 1), legend=c("Statin", "Placebo"), cex=1.2, border= "n", bty='n')
text(df1[, xval], df1[, yval], labels = df1$Study, pos = 3)
title(main="Figure 4. Event Rates Plotted against LDL Cholesterol Levels\nduring Statin Therapy in Secondary-Prevention Studies.", ylab=pyval, xlab=pxval)
abline(lm(df1$eventrate~df1$LDL), lwd=2)


##  slightly tweaked (I think this just looks better)
par(mar=c(5.1, 4.1, 4.1, 12))
df1$pchvec <- ifelse( grepl("PBO", df1$Cohort), 1, 19 )
plot( df1[, xval], df1[, yval], type="n", ylim=c(0,30), xlim=c(70,210), yaxt='n', xaxt='n', ylab="", xlab="")
u <- par("usr")
rect(u[1], u[3], u[2], u[4], col = "grey95", border = "black")
par(new=T)
abline(h = c(0, 5, 10, 15, 20, 25, 30), col='white', lwd=2) ##  draw h lines
abline(v = c(70, 90, 110, 130, 150, 170, 190, 210), col='white', lwd=2) ##  draw v lines
par(new=T)
plot( df1[, xval], df1[, yval], pch=df1$pchvec, col= cpal , bg= df1$pchfill, cex=1.5  ,yaxt='n', xaxt='n', ylab="", xlab="", ylim=c(0,30), xlim=c(70,210), lwd=2)
axis(side=2, at=c(0, 5, 10, 15, 20, 25, 30), labels=c("0","5",'10', '15', '20', '25', '30'), las=1 )
axis(side=1, at=c(70, 90, 110, 130, 150, 170, 190, 210), labels=c("70", "90", "110", "130", "150", "170", "190", "210")  )
legend( "topleft", pch=c(19, 1), legend=c("Statin", "Placebo"), cex=1.2, border= "n", bty='n')
text(df1[, xval], df1[, yval], labels = df1$Study, pos = 3, font=2, col=cpal)
title(main="Figure 4. Event Rates Plotted against LDL Cholesterol Levels\nduring Statin Therapy in Secondary-Prevention Studies.", ylab=pyval, xlab=pxval)
abline(lm(df1$eventrate~df1$LDL), lwd=2)
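
For completeness, here is a rough ggplot2 version of the same figure; a minimal sketch that reuses the statins data frame loaded above (I'm assuming the same LDL, eventrate, Study, and Cohort columns, and details like label placement and the legend will differ a bit from the base R version):

library(ggplot2)
statins$arm <- ifelse(grepl("PBO", statins$Cohort), "Placebo", "Statin") # same PBO flag used for pch above
ggplot(statins, aes(x = LDL, y = eventrate)) +
  geom_smooth(method = "lm", se = FALSE, colour = "black") +             # the Figure 4 trend line
  geom_point(aes(colour = Study, shape = arm), size = 3) +
  geom_text(aes(label = Study, colour = Study), vjust = -1, size = 3, show.legend = FALSE) +
  scale_shape_manual(values = c(Statin = 19, Placebo = 1), name = NULL) +
  labs(x = "LDL Cholesterol (mg/dL)", y = "Event (%)",
       title = "Event Rates vs. LDL Cholesterol during Statin Therapy\nin Secondary-Prevention Studies")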

Predicting 10-year CV risk with Framingham data using logistic regression: The ‘good old’ way vs. LASSO–two competing models

When models compete, it can get pretty ugly (or… let's face it, kinda ridiculous actually)

I have recently been thinking a lot about both CV risk modeling (especially given some of the controversy the newest ATP 10-year ASCVD risk calculator has ignited) and regression methods with cross-validation. Putting the particular debate around that newer 10-year risk calculator aside, I found myself wondering how different ways of applying regression methods to a problem can result in models with different predictive performance. Specifically, I'm referring to comparing the performance of the 'good old' method of model selection (by that I mean the way I was taught to apply it in graduate school) to a 'newer' regression shrinkage method, like logistic regression with the LASSO. When I say the 'good old' way, I mean a method that involves: 1) throwing all the features into the model, 2) evaluating the significance of each feature for selection, and 3) re-specifying the model with only those features you retained from step 2. And while I might call logistic regression with the LASSO a 'newer' method, I realize that for many there would be nothing novel about it at all.

One of the main reasons I am developing a healthy skepticism about the traditional method is that it tends to leave you with models containing many features, and therefore models with high variance (and a sub-optimal bias-variance tradeoff). That makes a lot of intuitive sense to me; however, I'd love to be able to observe the difference as it relates to a problem we might face in the health outcomes field. To that end, I've decided to apply both techniques to a cut of Framingham data. The main question: which model does a better job of predicting 10-year CV risk? So, let's put both methods to the test! May the best model win!

# load framingham dataset
options(stringsAsFactors=F)
framingham <- read.csv("~/R_folder/logreg/data/framingham.csv")
# remove observations with missing data for this example 
framingham <- na.omit(framingham)

# create test & training data
set.seed(1001)
testindex <- sample(1:nrow(framingham), round(.15*nrow(framingham), 0), replace=FALSE)
train <- framingham[-testindex, ]
test <-  framingham[testindex, ]

# first plain vanilla logistic regression (retaining only those variables that are statistically significant)
mod1 <- glm(TenYearCHD~., data=train, family="binomial")
summary(mod1) # retain only the features with p <0.05
vanilla <- glm(TenYearCHD~male+age+cigsPerDay+sysBP+glucose, data=train, family="binomial")

# use cv.glmnet to fit the LASSO
# this method uses cross-validation to determine the optimal lambda,
# and therefore how many features to retain and what their coefficients are
library(glmnet)
x.matrix <- model.matrix(TenYearCHD~., data=train)[, -1] # predictor matrix glmnet expects (drop the intercept column)
y.val <- train$TenYearCHD
cv.lasso <- cv.glmnet(x.matrix, y.val, alpha=1, family="binomial")
plot(cv.lasso) # CV error across the lambda path
coef(cv.lasso)
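# note: coef(cv.lasso) above reports the fit at lambda.1se by default (the most
# regularized model within one standard error of the CV minimum); to see the less
# parsimonious fit at the CV-minimizing lambda, ask for it explicitly
coef(cv.lasso, s = "lambda.min")
c(lambda.min = cv.lasso$lambda.min, lambda.1se = cv.lasso$lambda.1se)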

# assess accuracy of the model derived from plain vanilla log reg
# use your hold out test data
vanilla.prediction <- predict(vanilla, test, type="response")
# confusion matrix for plain vanilla log reg
table(test$TenYearCHD, vanilla.prediction >= 0.5)
# accuracy
mean(test$TenYearCHD == (vanilla.prediction >= 0.5))

# test lasso predictions
# build the test design matrix the same way as the training one
test.x.matrix <- model.matrix(TenYearCHD~., data=test)[, -1]
cv.lasso.prediction <- predict(cv.lasso, test.x.matrix, type="response")
# confusion matrix for lasso log reg with CV used to choose lambda and the coefficients
table3 <- table(test$TenYearCHD, cv.lasso.prediction >= 0.5)
table3
# accuracy
mean(test$TenYearCHD == (cv.lasso.prediction >= 0.5))

In this particular case, it appears to me that the old way produces a better model (accuracy of 0.842 vs. 0.838). While I’m not going to assume that this will always be the case, I had fun putting both models to the test!

Cleaning up detailed D.0 compound claim files–many columns of data into one cell

One of the burning client issues that I find myself focused on as of late is a disturbing increase in the proportion of very high cost compounded drug claims. While I am not going to spend any time debating the clinical value of these claims, I think anyone would agree that a good first step in forming an evidence-based opinion on them is to examine them (How many do you see? What do they cost? Does the costing basis make sense to you? What ingredients go into them? etc.).

In the new D.0 claim format, compounding pharmacies are given the 'opportunity' to list each individual ingredient used to make up the compounded product. So one of the things you may want to do is take all of the line-by-line ingredients that make up a claim and create data where each claim's set of ingredients is aggregated into one line. Below is a helpful chunk of code that does just that. It makes use of both Hadley Wickham's dcast() function (part of the reshape2 package) and the apply() function (one of a very handy set of *apply functions in R).

library(reshape2) # needed for dcast()
#Make some data
ingredient_name <- c("Drug A Phosphate Cap", "Flavor", "*Distilled Water*", 
"Drug X Inj Susp", "Lidocaine", 
"Super Drug HCl Liquid", "Not that great Inj Susp", 
"Antibiotic HCl Cap", "Drug A HCl Liquid", "Antifungal (Bulk)", 
"Table Salt Bicarbonate (Bulk)", "Antifungal (Bulk)", "Table Salt Bicarbonate (Bulk)", 
"Drug A Phosphate Cap", "Flavors", "*Distilled Water*", 
"Drug A Phosphate Cap", "Antifungal (Bulk)", "Table Salt Bicarbonate (Bulk)", 
"Super Drufg Acetonide Inj Susp", "Emollient**", 
"Antifungal (Bulk)", "Table Salt Bicarbonate (Bulk)")
 claim_id <- c(1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 6, 6, 7, 8, 8, 9, 
9, 10, 10)
df1 <- data.frame(claim_id, ingredient_name)
#Let's look at what you have
df1
df2 <- dcast(df1, claim_id~ingredient_name)
#NOTE: fantastic side effect of dcast is that it alphabetizes columns by column name
#This ensures that for different claims the ordering of the ingredients follows the same rules!
cols <- names(df2)[-1]
df2$all_ingred <- apply( df2[ , cols ] , 1 , paste , collapse = ", " ) # combine all the columns into one
df2$all_ingred<- gsub("NA, |NA", "", df2$all_ingred)#clean up
df2$all_ingred<- gsub(", $", "", df2$all_ingred)#more clean up
df3 <- df2[c("claim_id", "all_ingred")]#even more clean up
#Now give it a look
df3
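
As an aside, if all you need is the concatenated column (and not the wide intermediate table), base R's aggregate() gets you there in one step. A minimal sketch using the df1 built above (sort() mimics the alphabetizing side effect of dcast noted earlier):

df4 <- aggregate(ingredient_name ~ claim_id, data = df1,
                 FUN = function(x) paste(sort(x), collapse = ", ")) # one row per claim
names(df4)[2] <- "all_ingred"
df4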

A really handy function for cross-validating a set of models from MLR + forward selection

By this it appears that the best model to predict baseball salaries includes 5 features.

I've been having a lot of fun taking part in a Stanford MOOC called "Introduction to Statistical Learning," offered by Dr. Trevor Hastie and Dr. Robert Tibshirani. The class parallels the book of the same name. The course has been AMAZING so far, and we've even been blessed with a cameo appearance from another author, Daniela Witten! I wonder if, before it's over, we'll hear from Gareth James (another of the ISLR authors).

Anyhow, the class recently spent some time on model selection. We covered best subset, forward selection, backward selection, ridge regression, and the lasso. One of the big takeaways for me has been the value of using cross-validation (or K-fold cross-validation) to select models rather than relying on Cp, AIC, or BIC. That is assuming you can even calculate them: for ridge and the lasso you don't really know the effective number of coefficients, d, so you couldn't estimate those criteria even if you wanted to. That said, Dr. Hastie let the class in on a very handy chunk of code for validation-based model selection. I spent some time walking through it and present it (step by step) below.

In the example the dependent variable (Salary) is quantitative, and the features (IVs) are of various types. The objective is to find the linear model that best captures the relationship between the features and Salary. The key here is to avoid over-fitting the model. We seek to optimize the bias-variance tradeoff, and so instead of relying on R-squared, or even corrected/improved indicators like Cp, AIC, and BIC, in this case we employ cross-validation to determine the size of the model… I'm becoming a fan of this approach.

First, here’s the chunk all at once:

library(ISLR)  # for the Hitters dataset
require(leaps)
Hitters <- na.omit(Hitters) # drop players with a missing Salary
set.seed(1)
train=sample(seq(nrow(Hitters)),round(2/3*nrow(Hitters), 0) ,replace=FALSE) # create training sample of size 2/3 of total sample
train
regfit.fwd=regsubsets(Salary~.,data=Hitters[train,],nvmax=19,method="forward")# fit model on training sample

val.errors=rep(NA,19) # there are 19 subset models due to nvmax = 19 above
x.test=model.matrix(Salary~.,data=Hitters[-train,])# notice the -index!... we are indexing by minus train, so this is our test sample
for(i in 1:19){
coefi=coef(regfit.fwd,id=i)# returns coefficients only for model i (from 1-19)
pred=x.test[,names(coefi)]%*%coefi # names(coefi) pulls only the columns for model i, then matrix multiply by the coefficients from model i (coefi)
val.errors[i]=mean((Hitters$Salary[-train]-pred)^2)
}
plot(sqrt(val.errors),ylab="Root MSE",ylim=c(300,400),pch=19,type="b")
points(sqrt(regfit.fwd$rss[-1]/length(train)),col="blue",pch=19,type="b") # training RMSE for each model size
legend("topright",legend=c("Training","Validation"),col=c("blue","black"),pch=19)

Line 1 (Building a place to store stuff)

val.errors=rep(NA,19)

Line 1 is all about efficiency: what you are doing here is, essentially, building an empty vector of 19 NAs, which will serve as a kind of shelf to store something you will make later on. Those somethings are the 19 test MSEs you will produce with your loop (the square root gets taken later, at plotting time).

Line 2 (gathering one key ingredient)

x.test=model.matrix(Salary~.,data=Hitters[-train,])# notice the -index!

This is where you create your test set of data. I think the really cool thing going on in this line is how the model.matrix() function works. By running the piece of code below, you'll see that it produces something similar to Hitters[-train,], but with the additional specification in the formula (Salary~.) you tell it to remove the Salary variable, expand the factor variables (League, Division, NewLeague) into dummy columns, and include a vector of 1s reserved for an Intercept term (important during the matrix multiplication step).
Try this and you can see how model.matrix does its work:

model.matrix(Salary~.,data=Hitters[-train,])

Line 3 (Getting loopy…)

for(i in 1:19){

This line just specifies how you want your loop to run. Remember you are looping from 1-19 because you created 19 distinct models using the nvmax=19 above in regsubsets().

Line 4 (Why looping constructs are cool)

coefi=coef(regfit.fwd,id=i)

Here you are putting the index (i) that the loop is looping through to good use. As the loop does its thing, it will use this piece of code to create 19 different coefficient vectors, one for each model returned by regsubsets() above. Don't take my word for it, try this:

coefi=coef(regfit.fwd,id=6)

Line 5 (The magic of matrix multiplication)

pred=x.test[,names(coefi)]%*%coefi

Lots of good stuff going on here, one of which is the use of the names() function to ensure that for each model you are only doing multiplication on the correct columns, but here was one of my big A-HAs. What you have to remember (or realize) here is that the matrix x.test[, names(coefi)] (which has n rows and one column per coefficient in model i) is multiplied by the vector coefi. This returns an n x 1 vector, where each element is the predicted Salary for one test-set player. At the end of the day, pred will contain all of your individual yhats. Don't believe me? Try this:

coefi=coef(regfit.fwd,id=6)
pred=x.test[,names(coefi)]%*%coefi
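
A quick way to convince yourself of those dimensions (assuming you have just run the two lines above):

dim(x.test[, names(coefi)]) # n test rows by the number of coefficients in model 6
length(coefi)               # matches the column count above
length(pred)                # one prediction (yhat) per test-set player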

Line 6 (Time to calculate your individual test errors and put them somewhere)

val.errors[i]=mean((Hitters$Salary[-train]-pred)^2)

Here is the line where you take all of the individual ingredients, make your final product, and put it away for safekeeping (remember that vector of NAs you built earlier). In this piece:

mean((Hitters$Salary[-train]-pred)^2)

…you are calculating your test MSE using your yhats (pred) and your actual ys (Hitters$Salary[-train]). You then use the index from your loop to put that MSE away in its respective spot in the vector of NAs with val.errors[i].

The loop will iterate over all of your i (1 through 19), and that's how it all comes together! Very cool indeed. While you certainly don't need to take a look under the hood, I find that to really "get" the pleasure, it helps to stop and think about the complexity (I think I'm butchering a Feynman quote there).

The rest of the code just does some plotting so you can visualize the model size at which the validation RMSE reaches its minimum. The overlay of the training RMSE (the blue points) is a nice touch as well.
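
And if you would rather not eyeball the plot, you can pull the winning model size and its coefficients directly; a small addition to the chunk above (with this seed and split it should land on the 5-feature model discussed next):

best.size <- which.min(val.errors) # model size with the lowest validation MSE
best.size
coef(regfit.fwd, id = best.size)   # the corresponding coefficients (Salary is in $K)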

Just in case you were curious, the 5 coefficient model happens to be this one (it is important to note that Salary is in $K):

Salary = 145.54 + 6.27*Walks + 1.19*CRuns - 0.803*CWalks - 159.82*DivisionW + 0.372*PutOuts

I thought it was strange that it doesn't appear to matter in which league you played, but it did matter which division you played in (most likely an AL East "effect"). Anyhow, to learn more about the individual features of the model, see the documentation for the Hitters dataset in the ISLR package (?Hitters).

Curating data to eliminate partial quarters, months (or even years if that's your game)…

Step 1: Insert data, Step 2: collect money… If it were only that simple…

Often, I'm pulling data into R from datasets that are updated monthly; however, there are many cases where I am interested in aggregating data by quarter. In those cases I need to make sure that each aggregation bin represents a full quarter (and full months as well, but I tackled that in an earlier post).

Most of the time I use RODBC to bring my data from our warehouse into R, and I suppose you could implement the technique below as part of the data import step, but in this particular case I applied the code after the data had been imported into R.

## simulate some data

library(zoo)
n <- 100
date <- seq(as.Date('2011-01-01'),as.Date('2012-04-30'),by = 1)
values <- rnorm(length(date), n, 25)
df1 <- data.frame(date, values)
df1$yrqtr <- as.yearqtr(df1$date)
df1$yrmon <- as.yearmon(df1$date)

tapply(df1$values, df1$yrqtr, sum) # beware the last quarter is incomplete
range(df1$yrqtr) #you can't tell by looking here
range(df1$yrmon) #but you can tell by looking here


# this code fixes your problem
test1 <- as.yearmon(as.Date(max(df1$yrqtr), frac=1)) ## returns the last month of the last QTR that we have data for
result1 <- as.yearmon(as.Date(max(df1$yrqtr), frac=0)) ##  returns the FIRST month of the last QTR we have data for
#if max yearmon in data is not equal to what should be the last yearmon for the last qtr in data
# cut  data to last full quarter 
if(max(df1$yrmon) != test1)
  df1 <- df1[ df1$yrmon < result1 , ]

tapply(df1$values, df1$yrqtr, sum) # this is more like it!
range(df1$yrqtr)  
range(df1$yrmon) 
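
And since the title threatens whole years as well, the same idea works one level up; a quick sketch reusing df1 from above (the yr column is something I'm adding here for illustration):

df1$yr <- format(df1$date, "%Y")
dec.of.lastyr <- as.yearmon(paste(max(df1$yr), "12"), "%Y %m") # the month a complete final year should end on
if (max(df1$yrmon) != dec.of.lastyr)
  df1 <- df1[ df1$yr < max(df1$yr), ] # drop the partial year
tapply(df1$values, df1$yr, sum)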

Evaluating across many columns within the same row to extract the first occurrence of a string

Does your job make you feel like this sometimes?? Yup, me too!

Some time ago I was tasked with looking at a set of claims data to pull only claim lines that had a certain diagnosis, and, more importantly, where a claim listed multiple diagnoses, I needed to pull only the "most important" one (let's just call it that). It is not the purpose of this post to debate whether the importance of a diagnosis should be weighted by its order of appearance in the claim file after the first position, so let's not even go there. This is more about applying a method that may have application in a broader context.

For those of you who may not be familiar, the claim files I had listed (for each Patient and Service Date) a series of fields indicating different diagnoses (e.g., Principal Diagnosis, Diagnosis Code 2, Diagnosis Code 3). The condition I was tasked with looking into was a composite of diagnosis codes, and in some cases a patient might have one of the diagnosis codes making up the composite in position #1 and another in position #2 (or even position #3). However, we did not want to count patients more than once for each claim line. Rather, we wanted to pull forward the first matching diagnosis (any in the composite set) as we "looked" across the set of diagnosis code columns, working our way from Diagnosis Code Principal to Diagnosis Code 2 to Diagnosis Code 3.

Enough of the background, it is better to just show you what is going on with some code…

One last note: I used a looping construct in this version of my solution. Recently, I think I've stumbled on a way to do the same thing with one of the apply family of functions (a sketch of that approach follows the loop below).

set.seed(1234)
options(stringsAsFactors=FALSE)

# simulate some data
Diagnosis_Code_Principal <- sample(c("heart attack", "unstable angina", "broken arm", "stomach ache", "stubbed toe", "ingrown hair", "tooth ache"), 1000, replace=TRUE)
Diagnosis_Code_02 <- sample(c("heart attack", "unstable angina", "broken arm", "stomach ache", "stubbed toe", "ingrown hair", "tooth ache"), 1000, replace=TRUE)
Diagnosis_Code_03 <- sample(c("heart attack", "unstable angina", "broken arm", "stomach ache", "stubbed toe", "ingrown hair", "tooth ache"), 1000, replace=TRUE)
Service_Date_MMDDYYYY <- sample( seq.Date(as.Date("2011/1/1"), as.Date("2011/12/31"), by="day"), 1000, replace=TRUE)
Person_ID <- sample( c("Person1", "Person2", "Person3", "Person4"),1000, replace=TRUE)
eventdata <- data.frame(Person_ID, Service_Date_MMDDYYYY,Diagnosis_Code_Principal, Diagnosis_Code_02, Diagnosis_Code_03)
eventdata <- eventdata[ order(eventdata$Person_ID, eventdata$Service_Date_MMDDYYYY), ]

keydx <- c("heart attack|unstable angina") # these are the "codes" that make up our composite 

eventdata <- unique(eventdata)## duplicate lines are uninformative

eventdata$keep <- grepl((keydx), eventdata$Diagnosis_Code_Principal) | grepl((keydx), eventdata$Diagnosis_Code_02) | grepl((keydx), eventdata$Diagnosis_Code_03) ## more slimming down of the data
eventdata <- eventdata[ eventdata$keep == TRUE, ] 


##  creates a vector where the first match (in column order) is returned from a set of columns
firstdx <- rep(NA_character_, nrow(eventdata)) # pre-fill with NA so non-matching rows stay NA
for(i in 1:nrow(eventdata)){
  a <- c(eventdata[i, "Diagnosis_Code_Principal"], eventdata[i, "Diagnosis_Code_02"], eventdata[i, "Diagnosis_Code_03"])## you can list as many columns as you like.
  if (any(grepl(keydx, a)))
    b <- a[grep(keydx, a)[1]] # keydx is the listing of ICD9s that match the dx grouping you are interested in
  if (!any(grepl(keydx, a))) ##  takes care of any cases where there is no match
    b <- NA
  firstdx[i] <- b
}
eventdata$firstdx <- firstdx ## add the firstdx column to the dataset
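
For reference, here is the apply() version I alluded to above; a sketch that should reproduce the loop's result (the dxcols vector and firstdx2 column are just names I'm using for the comparison):

dxcols <- c("Diagnosis_Code_Principal", "Diagnosis_Code_02", "Diagnosis_Code_03")
eventdata$firstdx2 <- apply(eventdata[, dxcols], 1, function(a) {
  hit <- grep(keydx, a)                  # positions (in column order) matching the composite
  if (length(hit) > 0) a[hit[1]] else NA # first match, or NA if nothing matches
})
all(eventdata$firstdx == eventdata$firstdx2) # should be TRUE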


Hacking base R plot code to do what facet does in ggplot2… Going about things the hard way

Another demonstration of doing things the hard way
I have waffled back and forth from base plotting to ggplot2 and back to base plotting for my everyday plotting needs. Mainly, this is because when it comes to customizing all aspects of the plot (especially the legend), I feel more in command with the base R plotting code. That said, one of the great benefits of ggplot2 is efficiency, and how the package allows users to do quite a lot with very few lines. I certainly still find ggplot2 to be a very handy package!
One of these killer features is the facet option. Achieving something similar with base R takes quite a bit of code (as far as I can tell), and while I have managed to hackishly create my own base plot work-around, it is certainly far from elegant (for comparison, a ggplot2 facet version is sketched at the end of this post).

df1 <- structure(list(yearmon = structure(c(1962.66666666667, 1962.75, 
1962.83333333333, 1962.91666666667, 1963, 1963.08333333333, 1963.16666666667, 
1963.25, 1963.33333333333, 1963.41666666667, 1963.5, 1963.58333333333, 
1962.66666666667, 1962.75, 1962.83333333333, 1962.91666666667, 
1963, 1963.08333333333, 1963.16666666667, 1963.25, 1963.33333333333, 
1963.41666666667, 1963.5, 1963.58333333333, 1962.66666666667, 
1962.75, 1962.83333333333, 1962.91666666667, 1963, 1963.08333333333, 
1963.16666666667, 1963.25, 1963.33333333333, 1963.41666666667, 
1963.5, 1963.58333333333), class = "yearmon"), Drug_Name = c("Agent 1", 
"Agent 1", "Agent 1", "Agent 1", "Agent 1", "Agent 1", "Agent 1", 
"Agent 1", "Agent 1", "Agent 1", "Agent 1", "Agent 1", "Agent 2", 
"Agent 2", "Agent 2", "Agent 2", "Agent 2", "Agent 2", "Agent 2", 
"Agent 2", "Agent 2", "Agent 2", "Agent 2", "Agent 2", "Agent 3", 
"Agent 3", "Agent 3", "Agent 3", "Agent 3", "Agent 3", "Agent 3", 
"Agent 3", "Agent 3", "Agent 3", "Agent 3", "Agent 3"), adjrx = c(18143.5783886275, 
38325.3886392513, 28947.4502791512, 48214.462366663, 43333.2885400775, 
33764.6938232197, 35212.886019669, 36189.6070599246, 28200.3430203372, 
43933.5384644003, 46732.6291571359, 60815.5882493688, 15712.9069922491, 
19251.420642945, 25798.4830512904, 33358.078739438, 44149.0834359141, 
43398.7462134831, 54262.7250247334, 66436.6057335244, 69902.3540414917, 
65782.8992544251, 80473.8038710182, 77450.9502630631, 54513.3449101778, 
69888.3308038326, 73786.2648409879, 108656.505665252, 179029.671628446, 
139676.077252012, 188805.180975972, 199308.502689428, 216174.290372019, 
249180.973882092, 189528.429468574, 261748.967406539)), .Names = c("yearmon", 
"Drug_Name", "adjrx"), class = "data.frame", row.names = c(36L, 
39L, 45L, 38L, 41L, 37L, 42L, 34L, 44L, 40L, 35L, 43L, 20L, 17L, 
18L, 15L, 16L, 12L, 14L, 13L, 19L, 21L, 11L, 10L, 33L, 25L, 24L, 
32L, 23L, 22L, 28L, 26L, 30L, 27L, 31L, 29L))

library(zoo) # the yearmon class used throughout this chunk comes from zoo
yrange <- paste(format(range(df1$yearmon)), collapse=" - ")


yval <- 'adjrx'
loopcol <- 'Drug_Name'
xval <- 'yearmon'
ylabtxt <- 'ADJRx'
xlabtxt <- 'Months'
titletxt <- paste('Adjusted Rx by Drug Name by Month\nfrom', yrange) # client/class prefixes removed so the example is self-contained

# ppi <- 300
# png(paste(client, targetclass, 'Drug_Name_adjrx_bymo.png', sep="_"), width=10*ppi, height=6*ppi, res=ppi)
par(mar=c(5.1, 4.1, 4.1, 12.2))
ymax <-  max( df1[c(yval)])+(0.1* max( df1[c(yval)]))
ymin <-  min( df1[c(yval)])-(0.1* min( df1[c(yval)]))
xmax <- max(df1[,c(xval)])
xmin <- min(df1[,c(xval)])
loopvec <- unique(df1[,loopcol])

library(RColorBrewer)
cpal <- brewer.pal(length(loopvec), 'Set2')
plot( df1[,xval], df1[,yval],yaxt='n', xaxt='n', ylab="", xlab="", ylim=c(ymin,ymax))
u <- par("usr")
rect(u[1], u[3], u[2], u[4], col = "gray88", border = "black")
par(new=T)
abline(h = pretty(ymin:ymax), col='white') ##  draw h lines
abline(v = (unique(df1[,xval])), col='white') ##  draw v lines
par(new=T)
for (i in 1:length(loopvec)){
  loopi <- loopvec[i] ##  calls variable to be plotted
  sgi <- df1[ df1[,c(loopcol)] == loopi, ]
  sgi <- sgi[order(sgi[,c(xval)]),]
  plot( sgi[,xval], sgi[, yval], type="o", col=cpal[i], lwd=2, lty=i, yaxt='n', cex.axis=.7, cex.lab=.8, xlab="", ylab="", ylim=c(ymin,ymax), xlim=c(xmin, xmax))
  if (i < length(loopvec))
    par(new=T)
}
##draw OLS for total

axis(side=2, at= pretty(range(ymin, ymax)), labels=pretty(range(ymin, ymax)), cex.axis=.75)
mtext(ylabtxt, side=2, line=2, las=0, font=2)
mtext(xlabtxt, side=1, line=2, las=0, font=2)
mtext( titletxt, side=3, line=1, font=2, cex=1.2)
legend(xpd=TRUE,'right', inset=c(-0.30,0), legend=loopvec, lwd=rep(2, length(loopvec)), pch=rep(1, length(loopvec)), col=cpal, lty= 1:length(loopvec) ,title="LEGEND", bty='n' , cex=.8)
# dev.off()
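
For comparison, here is roughly what the ggplot2 facet version of the same data looks like; a short sketch (the month column is a Date conversion I'm adding because ggplot2 is happier with Dates than with zoo's yearmon on the x-axis):

library(ggplot2)
df1$month <- as.Date(df1$yearmon) # first day of each month
ggplot(df1, aes(x = month, y = adjrx)) +
  geom_line() +
  geom_point() +
  facet_wrap(~ Drug_Name) +
  labs(x = "Months", y = "ADJRx", title = "Adjusted Rx by Drug Name by Month")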