# How R2 error is calculated in a Generalized Linear Model

## What is R2 (R^2 i.e. R-Squared)?

R-squared is a statistical measure of how close the data are to the fitted regression line. It is also known as the coefficient of determination, or the coefficient of multiple determination for multiple regression. … 100% indicates that the model explains all the variability of the response data around its mean.

You can get the full working Jupyter notebook for this article directly from my GitHub.

Although this article demonstrates how the R^2 metric is calculated for an H2O GLM (Generalized Linear Model), the same math applies to any other statistical model, so you can use this calculation anywhere you need it.
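Since the math is model-agnostic, here is a minimal sketch of it in plain Python (the function name is my own; H2O computes the same quantity internally):

```python
def r2_manual(actual, predicted):
    """R^2 = 1 - SSE/SST: fraction of variance around the mean explained by the model."""
    mean_actual = sum(actual) / len(actual)
    sse = sum((p - a) ** 2 for a, p in zip(actual, predicted))  # residual sum of squares
    sst = sum((a - mean_actual) ** 2 for a in actual)           # total sum of squares
    return 1 - sse / sst

print(r2_manual([1, 2, 3, 4], [1, 2, 3, 4]))  # a perfect fit gives 1.0
```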

## Let's build an H2O GLM Model first:

```
import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()

local_url = "https://raw.githubusercontent.com/h2oai/sparkling-water/master/examples/smalldata/prostate.csv"
df = h2o.import_file(local_url)

y = "CAPSULE"
feature_names = df.col_names
feature_names.remove(y)

df_train, df_valid, df_test = df.split_frame(ratios=[0.8, 0.1])
print(df_train.shape)
print(df_valid.shape)
print(df_test.shape)

prostate_glm = H2OGeneralizedLinearEstimator(model_id="prostate_glm")

prostate_glm.train(x=feature_names, y=y, training_frame=df_train, validation_frame=df_valid)
prostate_glm
```

Now calculate the model performance based on the training, validation, and test data:

```
train_performance = prostate_glm.model_performance(df_train)
valid_performance = prostate_glm.model_performance(df_valid)
test_performance = prostate_glm.model_performance(df_test)
```

Now let's check the default R^2 metrics for the training, validation, and test data:

```
print(train_performance.r2())
print(valid_performance.r2())
print(test_performance.r2())
print(prostate_glm.r2())
```

Now let's get the prediction for the test data, which we kept separate:

`predictions = prostate_glm.predict(df_test)`

Here is the math used to calculate the R2 metric for the test data set:

```
SSE = ((predictions - df_test[y])**2).sum()   # residual sum of squares
y_hat = df_test[y].mean()                     # mean of the response
SST = ((df_test[y] - y_hat)**2).sum()         # total sum of squares
1 - SSE/SST                                   # R^2
```

Now let's get the model performance for the same test data:

`print(test_performance.r2())`

Above we can see that the two values, the one given by model performance on the test data and the one we calculated ourselves, are the same.

That's it, enjoy!!


# Python example of building GLM, GBM and Random Forest Binomial Model with H2O

Here is an example of using H2O machine learning library and then building GLM, GBM and Distributed Random Forest models for categorical response variable.

Let's import the h2o library and initialize the H2O machine learning cluster:

```
import h2o
h2o.init()
```

Importing dataset and getting familiar with it:

```
df = h2o.import_file("https://raw.githubusercontent.com/h2oai/sparkling-water/master/examples/smalldata/prostate.csv")
df.summary()
df.col_names
```

Let's configure our predictors and response variables from the ingested dataset:

```
y = 'CAPSULE'
x = df.col_names
x.remove(y)
print("Response = " + y)
print("Predictors = " + str(x))
```

Now we need to set the response column as categorical or factor:

`df['CAPSULE'] = df['CAPSULE'].asfactor()`

Now we will check the levels in our response variable:

```
df['CAPSULE'].levels()
# [['0', '1']]
```

Note: Because there are only 2 levels or values, the model will be a binomial model.

Now we will split our dataset into training, validation and testing datasets:

```
train, valid, test = df.split_frame(ratios=[.8, .1])
print(df.shape)
print(train.shape)
print(valid.shape)
print(test.shape)
```

Let's build the Generalized Linear Model (logistic, since the response variable is categorical) first:

```
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
glm_logistic = H2OGeneralizedLinearEstimator(family="binomial")
glm_logistic.train(x=x, y=y, training_frame=train, validation_frame=valid,
                   model_id="glm_logistic")
```

Now we will take a look at a few model metrics:

```
glm_logistic.varimp()
# Warning: This model doesn't have variable importances
```

Let's have a look at the model coefficients:

`glm_logistic.coef()`

Let's perform the prediction using the testing dataset:

`glm_logistic.predict(test_data=test)`

Now we check the model's "rmse" performance metric on the testing and other datasets:

```
print(glm_logistic.model_performance(test_data=test).rmse())
print(glm_logistic.model_performance(test_data=valid).rmse())
print(glm_logistic.model_performance(test_data=train).rmse())
```

Now we check the model's "r2" performance metric on the testing and other datasets:

```
print(glm_logistic.model_performance(test_data=test).r2())
print(glm_logistic.model_performance(test_data=valid).r2())
print(glm_logistic.model_performance(test_data=train).r2())
```

Let's build a Gradient Boosting Model now:

```
from h2o.estimators.gbm import H2OGradientBoostingEstimator
gbm = H2OGradientBoostingEstimator()
gbm.train(x=x, y=y, training_frame=train, validation_frame=valid)
```

Now let's get to know our model metrics, starting with the confusion matrix:

`gbm.confusion_matrix()`

Now have a look at the variable importance plot:

`gbm.varimp_plot()`

Now have a look at the variable importance table:

`gbm.varimp()`

Let's build a Distributed Random Forest model:

```
from h2o.estimators.random_forest import H2ORandomForestEstimator
drf = H2ORandomForestEstimator()
drf.train(x=x, y=y, training_frame=train, validation_frame=valid)
```

Let's look at the Random Forest model metrics, starting with the confusion matrix:

`drf.confusion_matrix()`

We can also have a look at the gains and lift table:

`drf.gains_lift()`
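As a rough sketch of what the gains/lift table reports: the lift at a given depth is the response rate among the top-scored rows divided by the overall response rate. Here is a minimal, hypothetical illustration (the function name and data are mine, not H2O's API):

```python
def lift_at(y_true, scores, fraction):
    """Lift at the top `fraction` of rows ranked by predicted score."""
    ranked = [y for _, y in sorted(zip(scores, y_true), reverse=True)]  # highest score first
    n_top = max(1, round(fraction * len(ranked)))
    top_rate = sum(ranked[:n_top]) / n_top        # response rate in the top slice
    overall_rate = sum(y_true) / len(y_true)      # response rate across all rows
    return top_rate / overall_rate

y = [1, 1, 0, 0, 1, 0, 0, 0]
p = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
print(lift_at(y, p, 0.25))  # top quarter is all responders -> lift = 1.0 / 0.375
```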

Note:

• All of the model metrics shown above are available for each model type, where applicable.
• We can also get model performance on the training, validation, and testing data for all of the models.

That's it, enjoy!!

# Using Cross-validation in Scala with H2O and getting each cross-validated model

Here is Scala code for binomial classification with GLM:

https://aichamp.wordpress.com/2017/04/23/binomial-classification-example-in-scala-and-gbm-with-h2o/

To add cross-validation you can do the following:

```
def buildGLMModel(train: Frame, valid: Frame, response: String)
                 (implicit h2oContext: H2OContext): GLMModel = {
  import _root_.hex.glm.GLMModel.GLMParameters.Family
  import _root_.hex.glm.GLM
  import _root_.hex.glm.GLMModel.GLMParameters
  val glmParams = new GLMParameters(Family.binomial)
  glmParams._train = train
  glmParams._valid = valid
  glmParams._nfolds = 3  // <-- here is the cross-validation setting
  glmParams._response_column = response
  glmParams._alpha = Array[Double](0.5)
  val glm = new GLM(glmParams, Key.make("glmModel.hex"))
  glm.trainModel().get()
}
```
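Conceptually, setting `_nfolds = 3` assigns every training row to one of 3 folds and builds one model per fold, each holding one fold out. A minimal Python sketch of that fold bookkeeping (the round-robin assignment here is just an illustration; H2O's actual scheme is governed by its fold_assignment parameter):

```python
def kfold_splits(n_rows, n_folds):
    """Yield (train_rows, holdout_rows) for each of n_folds cross-validation models."""
    fold_of = [i % n_folds for i in range(n_rows)]  # round-robin fold id per row
    for k in range(n_folds):
        train = [i for i in range(n_rows) if fold_of[i] != k]
        holdout = [i for i in range(n_rows) if fold_of[i] == k]
        yield train, holdout

for train_rows, holdout_rows in kfold_splits(9, 3):
    print(len(train_rows), len(holdout_rows))  # each model trains on 6 rows, holds out 3
```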

To look at the cross-validated models, try this:

```
scala> glmModel._output._cross_validation_models
res12: Array[water.Key[_ <: water.Keyed[_ <: AnyRef]]] =
Array(glmModel.hex_cv_1, glmModel.hex_cv_2, glmModel.hex_cv_3)
```

Now to get each model do the following:

`scala> val m1 = DKV.getGet("glmModel.hex_cv_1").asInstanceOf[GLMModel]`

And you will see the following:

```
scala> val m1 = DKV.getGet("glmModel.hex_cv_1").asInstanceOf[GLMModel]
m1: hex.glm.GLMModel =
Model Metrics Type: BinomialGLM
Description: N/A
model id: glmModel.hex_cv_1
frame id: glmModel.hex_cv_1_train
MSE: 0.14714406
RMSE: 0.38359362
AUC: 0.7167627
logloss: 0.4703465
mean_per_class_error: 0.31526923
default threshold: 0.27434438467025757
CM: Confusion Matrix (vertical: actual; across: predicted):
0 1 Error Rate
0 10704 1651 0.1336 1,651 / 12,355
1 1768 1790 0.4969 1,768 / 3,558
Totals 12472 3441 0.2149 3,419 / 15,913
Gains/Lift Table (Avg response rate: 22.36 %):
Group Cumulative Data Fraction Lower Threshold Lift Cumulative Lift Response Rate Cumulative Response Rate Capture Rate Cumulative Capture Rate Gain Cumulative Gain
1 0.01005467 0....
scala> val m2 = DKV.getGet("glmModel.hex_cv_2").asInstanceOf[GLMModel]
m2: hex.glm.GLMModel =
Model Metrics Type: BinomialGLM
Description: N/A
model id: glmModel.hex_cv_2
frame id: glmModel.hex_cv_2_train
MSE: 0.14598908
RMSE: 0.38208517
AUC: 0.7231473
logloss: 0.46717605
mean_per_class_error: 0.31456697
default threshold: 0.29637953639030457
CM: Confusion Matrix (vertical: actual; across: predicted):
0 1 Error Rate
0 11038 1395 0.1122 1,395 / 12,433
1 1847 1726 0.5169 1,847 / 3,573
Totals 12885 3121 0.2025 3,242 / 16,006
Gains/Lift Table (Avg response rate: 22.32 %):
Group Cumulative Data Fraction Lower Threshold Lift Cumulative Lift Response Rate Cumulative Response Rate Capture Rate Cumulative Capture Rate Gain Cumulative Gain
1 0.01005873 0...
scala> val m3 = DKV.getGet("glmModel.hex_cv_3").asInstanceOf[GLMModel]
m3: hex.glm.GLMModel =
Model Metrics Type: BinomialGLM
Description: N/A
model id: glmModel.hex_cv_3
frame id: glmModel.hex_cv_3_train
MSE: 0.14626761
RMSE: 0.38244948
AUC: 0.7239823
logloss: 0.46873763
mean_per_class_error: 0.31437498
default threshold: 0.28522220253944397
CM: Confusion Matrix (vertical: actual; across: predicted):
0 1 Error Rate
0 10982 1490 0.1195 1,490 / 12,472
1 1838 1771 0.5093 1,838 / 3,609
Totals 12820 3261 0.2070 3,328 / 16,081
Gains/Lift Table (Avg response rate: 22.44 %):
Group Cumulative Data Fraction Lower Threshold Lift Cumulative Lift Response Rate Cumulative Response Rate Capture Rate Cumulative Capture Rate Gain Cumulative Gain
1 0.01001182 0...
scala>
```

That's it, enjoy!!

# How to regularize intercept in GLM

Sometimes you may want to emulate hierarchical modeling to achieve your objective; in that case you can use beta_constraints as below:
```
iris = h2o.import_file("http://h2o-public-test-data.s3.amazonaws.com/smalldata/iris/iris_wheader.csv")
bc = h2o.H2OFrame([("Intercept", -1000, 1000, 3, 30)],
                  column_names=["names", "lower_bounds", "upper_bounds", "beta_given", "rho"])
glm = H2OGeneralizedLinearEstimator(family="gaussian",
                                    beta_constraints=bc,
                                    standardize=False)
# train the model (response and predictors chosen to match the coefficient output below)
glm.train(x=["sepal_wid", "petal_len", "petal_wid", "class"], y="sepal_len", training_frame=iris)
glm.coef()
```
The output will look as below:
```
{u'Intercept': 3.000933645168297,
 u'class.Iris-setosa': 0.0,
 u'class.Iris-versicolor': 0.0,
 u'class.Iris-virginica': 0.0,
 u'petal_len': 0.4423526962303227,
 u'petal_wid': 0.0,
 u'sepal_wid': 0.37712042938039897}
```
There’s more information in the GLM booklet linked below, but the short version is to create a new constraints frame with the columns names, lower_bounds, upper_bounds, beta_given, and rho, with one row entry per feature you want to constrain. You can use “Intercept” as a keyword to constrain the intercept.
From the GLM booklet (http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/GLMBooklet.pdf):

```
names: (mandatory) coefficient names
lower_bounds: (optional) coefficient lower bounds, must be less than or equal to upper bounds
upper_bounds: (optional) coefficient upper bounds, must be greater than or equal to lower bounds
beta_given: (optional) specifies the given solution in the proximal operator interface
rho: (mandatory if beta_given is specified, otherwise ignored) specifies per-column L2 penalties on the distance from the given solution
```
If you want to go deeper into how these L1/L2 parameters work, here are more details.

What’s happening is that an L2 penalty is applied between the coefficient and the given value. The proximal penalty is computed as Σ rho·(x − x′)². You can confirm this by setting rho to whatever lambda would be and setting lambda to 0; this gives the same result as having set lambda to that value.

You can use beta constraints to assign per-feature regularization strength, but only for the L2 penalty. The math is:

`sum_i rho[i] * L2norm2(beta[i] - beta_given[i])`

So if you set beta_given to zero and set rho for everything except the intercept to 1e-5, it is equivalent to running without beta constraints, just with alpha = 0 and lambda = 1e-5.
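A quick numeric check of that equivalence in plain Python (the function names are mine): with beta_given = 0 and a constant rho, the proximal term reduces exactly to the ridge (alpha = 0) penalty with lambda = rho.

```python
def proximal_penalty(beta, beta_given, rho):
    """sum_i rho[i] * (beta[i] - beta_given[i])**2"""
    return sum(r * (b - g) ** 2 for b, g, r in zip(beta, beta_given, rho))

def ridge_penalty(beta, lam):
    """lam * sum_i beta[i]**2, i.e. the L2 (alpha = 0) elastic-net term"""
    return lam * sum(b ** 2 for b in beta)

beta = [0.5, -1.2, 2.0]
# With beta_given = 0 and constant rho, the two penalties coincide:
print(proximal_penalty(beta, [0, 0, 0], [1e-5] * 3))  # matches ridge_penalty(beta, 1e-5)
```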
That's it, enjoy!!

# Building high order polynomials with GLM for higher accuracy

Sometimes when building GLM models, you would like to configure GLM to search for higher-order polynomials of the features.

The reason you may want to do this is that when you have strong predictors for a model, going to higher-order polynomials of those predictors can give you higher accuracy.

With H2O, you can create higher order polynomials as below:

• Look for the ‘interactions’ parameter in the GLM model.
• In the interactions parameter, add the list of predictor columns to interact.

When the model is built, all pairwise combinations will be computed for this list. Following is a working sample:
```
boston = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/BostonHousing.csv")
predictors = boston.columns[:-1]
response = "medv"
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
interactions_list = ['crim', 'dis']
boston_glm = H2OGeneralizedLinearEstimator(interactions=interactions_list)
boston_glm.train(x=predictors, y=response, training_frame=boston)
boston_glm.coef()
```
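Under the hood, an interaction between two numeric columns is just their elementwise product, and "all pairwise combinations" means every 2-column pair from the list. A toy sketch with made-up numbers (this is an illustration, not H2O's implementation):

```python
from itertools import combinations

cols = {"crim": [0.006, 0.027], "dis": [4.09, 4.97], "nox": [0.54, 0.47]}

# one derived column per pairwise combination of the listed predictors
interactions = {
    "%s_%s" % (a, b): [x * y for x, y in zip(cols[a], cols[b])]
    for a, b in combinations(cols, 2)
}
print(sorted(interactions))  # ['crim_dis', 'crim_nox', 'dis_nox']
```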
To explore interactions among categorical variables, use `h2o.interaction`.
That's all, enjoy!!

# Binomial classification example in Scala and GLM with H2O

Here is a sample for binomial classification problem using H2O GLM algorithm using Credit Card data set in Scala language.

The following sample is for a binomial classification problem. This sample was created using Spark 2.1.0 with Sparkling Water 2.1.4.

```
import org.apache.spark.h2o._
import water.support.SparkContextSupport.addFiles
import org.apache.spark.SparkFiles
import java.io.File
import water.support.{H2OFrameSupport, SparkContextSupport, ModelMetricsSupport}
import water.Key
import _root_.hex.glm.GLMModel
import _root_.hex.ModelMetricsBinomial

val hc = H2OContext.getOrCreate(sc)
import hc._
import hc.implicits._

addFiles(sc, "/Users/avkashchauhan/learn/deepwater/credit_card_clients.csv")
val creditCardData = new H2OFrame(new File(SparkFiles.get("credit_card_clients.csv")))

val ratios = Array[Double](0.8)
val keys = Array[String]("train.hex", "valid.hex")
val frs = H2OFrameSupport.split(creditCardData, keys, ratios)
val (train, valid) = (frs(0), frs(1))

def buildGLMModel(train: Frame, valid: Frame, response: String)
                 (implicit h2oContext: H2OContext): GLMModel = {
  import _root_.hex.glm.GLMModel.GLMParameters.Family
  import _root_.hex.glm.GLM
  import _root_.hex.glm.GLMModel.GLMParameters
  val glmParams = new GLMParameters(Family.binomial)
  glmParams._train = train
  glmParams._valid = valid
  glmParams._response_column = response
  glmParams._alpha = Array[Double](0.5)
  val glm = new GLM(glmParams, Key.make("glmModel.hex"))
  glm.trainModel().get()
}

val glmModel = buildGLMModel(train, valid, 'default_payment_next_month)(hc)

// Collect model metrics and evaluate model quality
val trainMetrics = ModelMetricsSupport.modelMetrics[ModelMetricsBinomial](glmModel, train)
val validMetrics = ModelMetricsSupport.modelMetrics[ModelMetricsBinomial](glmModel, valid)
println(trainMetrics.rmse)
println(validMetrics.rmse)
println(trainMetrics.mse)
println(validMetrics.mse)
println(trainMetrics.r2)
println(validMetrics.r2)
println(trainMetrics.auc)
println(validMetrics.auc)

// Prediction
addFiles(sc, "/Users/avkashchauhan/learn/deepwater/credit_card_predict.csv")
val creditPredictData = new H2OFrame(new File(SparkFiles.get("credit_card_predict.csv")))

val predictionFrame = glmModel.score(creditPredictData)
var predictonResults = asRDD[DoubleHolder](predictionFrame).collect.map(_.result.getOrElse(Double.NaN))
```

That's it, enjoy!!

# Cross-validation example with time-series data in R and H2O

What is cross-validation? In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. You can learn more on Wikipedia.

When you have time-series data, splitting randomly by rows does not work, because the time ordering of your data would be mangled; cross-validation with a time-series dataset is therefore done differently.
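The idea, often called rolling-origin or expanding-window validation, can be sketched in a few lines of Python (the row counts are mine, chosen to mirror the 153-row airquality dataset and 14-day steps used below):

```python
def expanding_window_splits(n_rows, initial_train, step):
    """Yield (train_end, test_end): train on rows [0, train_end),
    validate on rows [train_end, test_end)."""
    train_end = initial_train
    while train_end < n_rows:
        test_end = min(train_end + step, n_rows)
        yield train_end, test_end
        train_end = test_end  # validation rows join the training set for the next fold

for train_end, test_end in expanding_window_splits(153, 77, 14):
    print("train on 0..%d, validate on %d..%d" % (train_end - 1, train_end, test_end - 1))
```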

The following R script shows how the data is split first and then passed as a validation frame into different algorithms in H2O.

```
library(h2o)

h2o.init(strict_version_check = FALSE)

# show general information on the airquality dataset
colnames(airquality)
dim(airquality)
print(paste('number of months: ', length(unique(airquality$Month)), sep=""))

# add a year column, so you can create a month, day, year date stamp
airquality$Year <- rep(2017, nrow(airquality))
airquality$Date <- as.Date(with(airquality, paste(Year, Month, Day, sep="-")), "%Y-%m-%d")

# sort the dataset by date
airquality <- airquality[order(as.Date(airquality$Date, format="%m/%d/%Y")),]

# convert the dates to unix time before converting to an H2OFrame
airquality$Date <- as.numeric(as.POSIXct(airquality$Date, origin="1970-01-01", tz = "GMT"))

# convert to an h2o dataframe
air_h2o <- as.h2o(airquality)

# specify the features and the target column
target <- 'Ozone'
features <- c("Solar.R", "Wind", "Temp", "Month", "Day", "Date")

# split the dataset in ~half, which rounded up is 77 rows (train on the first half of the dataset)
train_1 <- air_h2o[1:ceiling(nrow(air_h2o)/2),]

# calculate 14 days in unix time: one day is 86400 seconds in unix time (aka posix time, epoch time)
# use this variable to iterate forward 14 days at a time
add_14_days <- 86400*14

# initialize a counter for the while loop so you can keep track of which fold corresponds to which rmse
counter <- 0

# iterate over the process of testing on the next two weeks,
# combining the train_1 and test_1 datasets after each loop
while (nrow(train_1) < nrow(air_h2o)) {

  # get the new dataset two weeks out:
  # take the last date in the Date column and add 14 days to it
  new_end_date <- train_1[nrow(train_1),]$Date + add_14_days
  last_current_date <- train_1[nrow(train_1),]$Date

  # slice with a boolean mask
  mask <- air_h2o[,"Date"] > last_current_date
  temp_df <- air_h2o[mask,]
  mask_2 <- air_h2o[,"Date"] < new_end_date

  # multiply the mask dataframes to get their intersection
  final_mask <- mask*mask_2
  test_1 <- air_h2o[final_mask,]

  # build a basic gbm using the default parameters
  gbm_model <- h2o.gbm(x = features, y = target, training_frame = train_1,
                       validation_frame = test_1, seed = 1234)

  # print the number of rows used for the test_1 and train_1 datasets
  print(paste('number of rows used in test set: ', nrow(test_1), sep=" "))
  print(paste('number of rows used in train set: ', nrow(train_1), sep=" "))

  # print the validation metrics
  rmse_valid <- h2o.rmse(gbm_model, valid = TRUE)
  print(paste('your new rmse value on the validation set is: ', rmse_valid,
              ' for fold #: ', counter, sep=""))

  # create the new training frame
  train_1 <- h2o.rbind(train_1, test_1)
  print(paste('shape of new training dataset: ', nrow(train_1), sep=" "))

  counter <- counter + 1
}
```

That's all, enjoy!!