Stacked Ensemble Model in Scala using H2O GBM and Deep Learning Models

In this full Scala sample we will use the H2O Stacked Ensembles algorithm. Stacking first builds models of various types with cross-validation, keeping the cross-validation predictions for each model, and then builds a stacked ensemble model on top of those cross-validated base models. You can learn more about Stacked Ensembles in the H2O documentation.

In this example we will use GBM and Deep Learning as the base algorithms, and then build the Stacked Ensemble model on top of the trained GBM and Deep Learning models.

First let's import the key H2O classes:

import org.apache.spark.h2o._
import water.Key
import java.io.File

Now we will create the H2O context so we can call the key H2O functions for data ingest and model training:

val h2oContext = H2OContext.getOrCreate(sc)
import h2oContext._
import h2oContext.implicits._

Let's import data from the local file system as an H2O frame:

val prostateData = new H2OFrame(new File("/Users/avkashchauhan/src/github.com/h2oai/sparkling-water/examples/smalldata/prostate.csv"))
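
If you want a quick sanity check on the ingested frame, here is a minimal sketch using the H2O frame's row and column count accessors:

prostateData.numRows()
prostateData.numCols()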

Let's first build the Deep Learning model:

import _root_.hex.deeplearning.DeepLearning
import _root_.hex.deeplearning.DeepLearningModel.DeepLearningParameters

val dlParams = new DeepLearningParameters()
dlParams._epochs = 100
dlParams._train = prostateData
dlParams._response_column = 'CAPSULE
dlParams._variable_importances = true
dlParams._nfolds = 5
dlParams._seed = 1111
dlParams._keep_cross_validation_predictions = true
val dl = new DeepLearning(dlParams, Key.make("dlProstateModel.hex"))
val dlModel = dl.trainModel.get

Now let's build the GBM model:

import _root_.hex.tree.gbm.GBM
import _root_.hex.tree.gbm.GBMModel.GBMParameters

val gbmParams = new GBMParameters()
gbmParams._train = prostateData
gbmParams._response_column = 'CAPSULE
gbmParams._nfolds = 5
gbmParams._seed = 1111
gbmParams._keep_cross_validation_predictions = true
val gbm = new GBM(gbmParams, Key.make("gbmRegModel.hex"))
val gbmModel = gbm.trainModel().get()

Now let's build the Stacked Ensemble model itself. First we import the classes required for Stacked Ensembles:

import _root_.hex.Model
import _root_.hex.StackedEnsembleModel
import _root_.hex.ensemble.StackedEnsemble

Next we define the Stacked Ensemble parameters:

val stackedEnsembleParameters = new StackedEnsembleModel.StackedEnsembleParameters()
stackedEnsembleParameters._train = prostateData._key
stackedEnsembleParameters._response_column = 'CAPSULE

Now we need to pass the base models we want the Stacked Ensemble to use, by passing their keys as below:

type T_MODEL_KEY = Key[Model[_, _ <: Model.Parameters, _ <: Model.Output]]

// Option 1
stackedEnsembleParameters._base_models = Array(gbmModel._key.asInstanceOf[T_MODEL_KEY], dlModel._key.asInstanceOf[T_MODEL_KEY])
// Option 2
stackedEnsembleParameters._base_models = Array(gbmModel, dlModel).map(model => model._key.asInstanceOf[T_MODEL_KEY])

// Note: Either of the above options works for passing the model keys

Finally, we define the stacked ensemble job:

val stackedEnsembleJob = new StackedEnsemble(stackedEnsembleParameters)

And as the last step, let's build the stacked ensemble model:

val stackedEnsembleModel = stackedEnsembleJob.trainModel().get()

Now we can take a look at our Stacked Ensemble model as below:

stackedEnsembleModel
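
You can also score a frame with the ensemble. Here is a minimal sketch, scoring the training frame itself for illustration; in practice you would pass a held-out test frame:

// Score the training frame with the stacked ensemble (illustration only)
val ensemblePredictions = stackedEnsembleModel.score(prostateData)
ensemblePredictions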

That's it, enjoy!!

Helpful content: https://github.com/h2oai/h2o-3/blob/a554bffabda6770386a31d47e05f00543d7b9ac3/h2o-algos/src/test/java/hex/ensemble/StackedEnsembleTest.java


Logistic Regression with H2O Deep Learning in Scala

Here is sample code that shows how to use H2O's feed-forward network based Deep Learning algorithm to perform logistic regression.

First let's import the key H2O classes:

import org.apache.spark.h2o._
import water.Key
import java.io.File

Now we will create the H2O context so we can call the key H2O functions for data ingest and the Deep Learning algorithm:

val h2oContext = H2OContext.getOrCreate(sc)
import h2oContext._
import h2oContext.implicits._

Let's import data from the local file system as an H2O frame:

val prostateData = new H2OFrame(new File("/Users/avkashchauhan/src/github.com/h2oai/sparkling-water/examples/smalldata/prostate.csv"))

Now let's import the Deep Learning classes:

import _root_.hex.deeplearning.DeepLearning
import _root_.hex.deeplearning.DeepLearningModel.DeepLearningParameters

Now we will define all the key parameters for the H2O Deep Learning algorithm:

val dlParams = new DeepLearningParameters()
dlParams._epochs = 100
dlParams._train = prostateData
dlParams._response_column = 'CAPSULE
dlParams._variable_importances = true
dlParams._nfolds = 5
dlParams._seed = 1111
dlParams._keep_cross_validation_predictions = true

Now we will create a key for the Deep Learning model and then train the model in blocking mode:

val dl = new DeepLearning(dlParams, Key.make("dlProstateModel.hex"))
val dlModel = dl.trainModel.get()

Let's learn more about our model:

dlModel

Now we can perform prediction by passing an H2O frame (here I am simply passing the original training frame, but you can load your test data as an H2O frame and pass it instead):

val predictionH2OFrame = dlModel.score(prostateData)('predict)
val predictionsFromModel = asRDD[DoubleHolder](predictionH2OFrame).collect.map(_.result.getOrElse(Double.NaN))
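
Since predictionsFromModel is a plain Scala Array[Double], you can inspect the first few predictions directly; a minimal sketch:

// Print the first ten predictions
predictionsFromModel.take(10).foreach(println)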

That's it, enjoy!!


Two must-watch, very informative tutorials on Driverless AI

1. Automatic Feature Engineering with Driverless AI:

Dmitry Larko, Kaggle Grandmaster and Senior Data Scientist at H2O.ai, will showcase what he is doing with feature engineering, how he is doing it, and why it is important in the machine learning realm. He will delve into the workings of H2O.ai's new product, Driverless AI, whose automatic feature engineering increases the accuracy of models and frees up approximately 80% of data practitioners' time, thus enabling them to draw actionable insights from the models built by Driverless AI. You will see:

  • Overview of feature engineering
  • Real-time demonstration of feature engineering examples
  • Interpretation and reason codes of final models

2. Machine Learning Interpretability with Driverless AI:

In this video, Patrick showcases several approaches beyond the error measures and assessment plots typically used to interpret deep learning and machine learning models and results. Wherever possible, interpretability approaches are deconstructed into more basic components suitable for human storytelling: complexity, scope, understanding, and trust. You will see:

  • Data visualization techniques for representing high-degree interactions and nuanced data structures.
  • Contemporary linear model variants that incorporate machine learning and are appropriate for use in regulated industry.
  • Cutting edge approaches for explaining extremely complex deep learning and machine learning models.

That's it, enjoy!!


Getting individual metrics from H2O model in Python

You can get some of the individual model metrics for your model based on training and/or validation data. Here is the code snippet:

Note: Below, I create a small test data frame, run the H2O Deep Learning algorithm on it, and then show how to collect individual model metrics based on training and/or validation data.

import h2o
h2o.init(strict_version_check=False, port=54345)
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
model = H2ODeepLearningEstimator()
rows = [[1,2,3,4,0], [2,1,2,4,1], [2,1,4,2,1], [0,1,2,34,1], [2,3,4,1,0]] * 50
fr = h2o.H2OFrame(rows)
X = fr.col_names[0:4]

## Classification Model
fr[4] = fr[4].asfactor()
model.train(x=X, y="C5", training_frame=fr)
print('Model Type:', model.type)
print('logloss', model.logloss(valid = False))
print('Accuracy', model.accuracy(valid = False))
print('AUC', model.auc(valid = False))
print('R2', model.r2(valid = False))
print('RMSE', model.rmse(valid = False))
print('Error', model.error(valid = False))
print('MCC', model.mcc(valid = False))

## Regression Model
fr = h2o.H2OFrame(rows)
model.train(x=X, y="C5", training_frame=fr)
print('Model Type:', model.type)
print('R2', model.r2(valid = False))
print('RMSE', model.rmse(valid = False))

Note: As I did not pass a validation frame, I set valid = False to get training metrics. If you passed a validation frame during training, you can set valid = True to get validation metrics as well.

If you want to see what is inside the model object, you can look at its parameters as below:

model.get_params()

That's it, enjoy!!


Adding hyper parameter to Deep Learning algorithm in H2O with Scala

The hidden layer configuration is a hyperparameter of the Deep Learning algorithm in H2O. To tune it, use the "_hidden" parameter as the key for the hidden layer values in your hyperparameter map, as below:
val hyperParms = collection.immutable.HashMap("_hidden" -> hidden_layers)

Here is the Scala code snippet that adds hidden layers as a hyperparameter to a grid search over the H2O Deep Learning algorithm:

import _root_.hex.deeplearning.DeepLearningModel
import _root_.hex.deeplearning.DeepLearningModel.DeepLearningParameters

// Note: airlinesData is an H2OFrame you have already loaded
val dlParams = new DeepLearningParameters()
dlParams._train = airlinesData
dlParams._activation =  DeepLearningModel.DeepLearningParameters.Activation.Tanh
dlParams._epochs = 1
dlParams._autoencoder = true

dlParams._ignore_const_cols = false
dlParams._stopping_rounds = 0

dlParams._score_training_samples = 0
dlParams._replicate_training_data = false
dlParams._standardize = true

import collection.JavaConversions._
val hidden_layers = Array(Array(1, 5, 1), Array(1, 6, 1), Array(1, 7, 1)).map(_.asInstanceOf[Object])
val hyperParms = collection.immutable.HashMap("_hidden" -> hidden_layers)

def let[A](in: A)(body: A => Unit) = {
  body(in)
  in
}

import _root_.hex.grid.GridSearch
import _root_.hex.grid.HyperSpaceSearchCriteria.RandomDiscreteValueSearchCriteria
import _root_.hex.ScoreKeeper
val intRateHyperSpaceCriteria = let(new RandomDiscreteValueSearchCriteria) { c =>
  c.set_stopping_metric(ScoreKeeper.StoppingMetric.RMSE)
  c.set_stopping_tolerance(0.001)
  c.set_stopping_rounds(3)
  c.set_max_runtime_secs(40 * 60 /* seconds */)
  c.set_max_models(3)
}

val intRateGrid = GridSearch.startGridSearch(Key.make("intRateGridModel"),
                                                 dlParams,
                                                 hyperParms,
                                                 new GridSearch.SimpleParametersBuilderFactory[DeepLearningParameters],
                                                 intRateHyperSpaceCriteria).get()
val count = intRateGrid.getModelCount()
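
To inspect the models the grid search produced, here is a minimal sketch using the grid's getModels accessor; each returned model carries its own key:

// Print the key of every model found by the grid search
val gridModels = intRateGrid.getModels()
gridModels.foreach(m => println(m._key))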

That's it, enjoy!!

Anomaly Detection with Deep Learning in R with H2O

The following R script downloads the ECG dataset (training and test) from the internet and performs deep learning based anomaly detection on it.

library(h2o)
h2o.init()
# Import ECG train and test data into the H2O cluster
train_ecg <- h2o.importFile(
 path = "http://h2o-public-test-data.s3.amazonaws.com/smalldata/anomaly/ecg_discord_train.csv", 
 header = FALSE, 
 sep = ",")
test_ecg <- h2o.importFile(
 path = "http://h2o-public-test-data.s3.amazonaws.com/smalldata/anomaly/ecg_discord_test.csv", 
 header = FALSE, 
 sep = ",")

# Train deep autoencoder learning model on "normal" 
# training data, y ignored 
anomaly_model <- h2o.deeplearning(
 x = names(train_ecg), 
 training_frame = train_ecg, 
 activation = "Tanh", 
 autoencoder = TRUE, 
 hidden = c(50,20,50), 
 sparse = TRUE,
 l1 = 1e-4, 
 epochs = 100)

# Compute reconstruction error with the Anomaly 
# detection app (MSE between output and input layers)
recon_error <- h2o.anomaly(anomaly_model, test_ecg)

# Pull reconstruction error data into R and 
# plot to find outliers (last 3 heartbeats)
recon_error <- as.data.frame(recon_error)
recon_error
plot.ts(recon_error)

# Note: Testing = Reconstructing the test dataset
test_recon <- h2o.predict(anomaly_model, test_ecg) 
head(test_recon)

h2o.saveModel(anomaly_model, "/Users/avkashchauhan/learn/tmp/anomaly_model.bin")
h2o.download_pojo(anomaly_model, "/Users/avkashchauhan/learn/tmp/", get_jar = TRUE)

h2o.shutdown(prompt= FALSE)


That's it, enjoy!!

Multinomial classification example in Scala and Deep Learning with H2O

Here is a sample multinomial classification problem using the H2O Deep Learning algorithm and the iris dataset, in Scala. This sample was created using Spark 2.1.0 with Sparkling Water 2.1.4.

import org.apache.spark.h2o._
import water.support.SparkContextSupport.addFiles
import org.apache.spark.SparkFiles
import java.io.File
import water.support.{H2OFrameSupport, SparkContextSupport, ModelMetricsSupport}
import water.Key
import _root_.hex.deeplearning.DeepLearningModel
import _root_.hex.ModelMetricsMultinomial


val hc = H2OContext.getOrCreate(sc)
import hc._
import hc.implicits._

addFiles(sc, "/Users/avkashchauhan/smalldata/iris/iris.csv")
val irisData = new H2OFrame(new File(SparkFiles.get("iris.csv")))

val ratios = Array[Double](0.8)
val keys = Array[String]("train.hex", "valid.hex")
val frs = H2OFrameSupport.split(irisData, keys, ratios)
val (train, valid) = (frs(0), frs(1))

def buildDLModel(train: Frame, valid: Frame, response: String,
 epochs: Int = 10, l1: Double = 0.001, l2: Double = 0.0,
 hidden: Array[Int] = Array[Int](200, 200))
 (implicit h2oContext: H2OContext): DeepLearningModel = {
 import h2oContext.implicits._
 // Build a model
 import _root_.hex.deeplearning.DeepLearning
 import _root_.hex.deeplearning.DeepLearningModel.DeepLearningParameters
 val dlParams = new DeepLearningParameters()
 dlParams._train = train
 dlParams._valid = valid
 dlParams._response_column = response
 dlParams._epochs = epochs
 dlParams._l1 = l1
 dlParams._l2 = l2
 dlParams._hidden = hidden
 // Create a job
 val dl = new DeepLearning(dlParams, Key.make("dlModel.hex"))
 dl.trainModel.get
}


// Note: The response column name is C5 here, so we pass 'C5 as the response:
val dlModel = buildDLModel(train, valid, 'C5)(hc)

// Collect model metrics and evaluate model quality
val trainMetrics = ModelMetricsSupport.modelMetrics[ModelMetricsMultinomial](dlModel, train)
val validMetrics = ModelMetricsSupport.modelMetrics[ModelMetricsMultinomial](dlModel, valid)
println(trainMetrics.rmse)
println(validMetrics.rmse)
println(trainMetrics.mse)
println(validMetrics.mse)
println(trainMetrics.r2)
println(validMetrics.r2)
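
Once the metrics look reasonable, you can score the validation frame with the trained model; a minimal sketch (score returns a new H2O frame containing the predictions):

// Score the validation split and look at the resulting prediction frame
val predictions = dlModel.score(valid)
predictions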

That's it, enjoy!!