# Calculating AUC and GINI model metrics for logistic classification

For a logistic classification problem we use the AUC metric to check model performance. Higher is better; as a rule of thumb, a value above 0.8 is considered good and above 0.9 means the model is performing very well.

AUC is an abbreviation for Area Under the Curve. It is used in classification analysis to determine which of the models under consideration predicts the classes best. One example of its application is the ROC curve, where the true positive rate is plotted against the false positive rate. You can learn more about AUC in this Quora discussion.

We will also look at the GINI metric, which you can read about on Wikipedia. In this example we will learn how the AUC and GINI metrics are calculated using True Positive Rate (TPR) and False Positive Rate (FPR) values from a given test dataset.
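Before building the model, it helps to recall how TPR and FPR fall out of a confusion matrix at a single threshold. Below is a small sketch in plain Python; the labels and scores are made-up illustration values, not the prostate data:

```python
# Computing TPR and FPR at one threshold from made-up labels and scores.
actual = [1, 1, 0, 1, 0, 0, 1, 0]                   # true classes (illustrative)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]   # model probabilities (illustrative)
threshold = 0.5

predicted = [1 if s >= threshold else 0 for s in scores]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)

tpr = tp / (tp + fn)   # true positive rate (sensitivity)
fpr = fp / (fp + tn)   # false positive rate (1 - specificity)
print(tpr, fpr)        # 0.75 0.25
```

Sweeping the threshold from 1 down to 0 and plotting each (fpr, tpr) pair is exactly what produces the ROC curve.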

You can get the full working Jupyter Notebook here from my Github.

Let's build a logistic classification model in H2O using the prostate dataset:

### Preparation of H2O environment and dataset:

```
## Importing required libraries
import h2o
import sys
import pandas as pd
from h2o.estimators.gbm import H2OGradientBoostingEstimator

## Starting H2O machine learning cluster
h2o.init()

## Importing dataset
local_url = "https://raw.githubusercontent.com/h2oai/sparkling-water/master/examples/smalldata/prostate/prostate.csv"
df = h2o.import_file(local_url)

## Defining features and response column
y = "CAPSULE"
feature_names = df.col_names
feature_names.remove(y)

## Setting our response column to categorical so the model treats this as classification
df[y] = df[y].asfactor()
```

### Now we will be splitting the dataset into 3 sets for training, validation and test:

```
df_train, df_valid, df_test = df.split_frame(ratios=[0.8,0.1])
print(df_train.shape)
print(df_valid.shape)
print(df_test.shape)
```

### Setting H2O GBM Estimator and building GBM Model:

```
prostate_gbm = H2OGradientBoostingEstimator(model_id="prostate_gbm",
                                            ntrees=500,
                                            learn_rate=0.001,
                                            max_depth=10,
                                            score_each_iteration=True)

## Building H2O GBM Model:
prostate_gbm.train(x=feature_names, y=y, training_frame=df_train, validation_frame=df_valid)

## Understand the H2O GBM Model
prostate_gbm
```

### Generating model performance with training, validation & test datasets:

```
train_performance = prostate_gbm.model_performance(df_train)
valid_performance = prostate_gbm.model_performance(df_valid)
test_performance = prostate_gbm.model_performance(df_test)
```

### Let’s take a look at the AUC metrics provided by Model performance:

```
print(train_performance.auc())
print(valid_performance.auc())
print(test_performance.auc())
print(prostate_gbm.auc())
```

### Let’s take a look at the GINI metrics provided by Model performance:

```
print(train_performance.gini())
print(valid_performance.gini())
print(test_performance.gini())
print(prostate_gbm.gini())
```

### Let's generate predictions using the test dataset:

```
predictions = prostate_gbm.predict(df_test)
## Here we will get the probabilities from the 'p1' column of the prediction frame:
predict_probability = predictions['p1']
```

Now we will import the required scikit-learn libraries to generate the AUC manually:

```
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
```

Let's get the actual response values from the test data frame:

```
actual = df_test[y].as_data_frame()
actual_list = actual['CAPSULE'].tolist()
print(actual_list)
```

Now let's get the predicted probabilities from the prediction frame:

```
predictions_temp = predict_probability.as_data_frame()
predictions_list = predictions_temp['p1'].tolist()
print(predictions_list)
```

### Calculating False Positive Rate and True Positive Rate:

Let's calculate the TPR, FPR and threshold metrics from the predictions and the original data frame:
– False Positive Rate (fpr)
– True Positive Rate (tpr)
– Threshold

```
fpr, tpr, thresholds = roc_curve(actual_list, predictions_list)
roc_auc = auc(fpr, tpr)
print(roc_auc)
print(test_performance.auc())
```

Note: Above you will see that our calculated ROC AUC value is exactly the same as the one given by model performance for the test dataset.
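If you want to see why this works, the AUC can also be computed without scikit-learn as a rank statistic: the probability that a randomly chosen positive example gets a higher score than a randomly chosen negative one. Below is a small sketch with made-up labels and scores (not the prostate predictions):

```python
# AUC as a rank statistic: the probability that a randomly chosen positive
# example scores higher than a randomly chosen negative one.
# toy_labels/toy_scores are made-up illustration data.
toy_labels = [0, 0, 1, 1, 0, 1, 0, 1]
toy_scores = [0.1, 0.35, 0.4, 0.8, 0.2, 0.9, 0.5, 0.7]

def pairwise_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # A full point when the positive outranks the negative, half a point for ties.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(pairwise_auc(toy_labels, toy_scores))  # 0.9375
```

With ties counted as half, this pairwise statistic equals the trapezoidal area under the ROC curve, which is why the manual calculation matches the value reported by `roc_curve`/`auc` above.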

### Let's plot the ROC curve using matplotlib:

```
plt.title('ROC (Receiver Operating Characteristic)')
plt.plot(fpr, tpr, 'b', label='AUC = %0.4f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1], [0,1], 'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate (TPR)')
plt.xlabel('False Positive Rate (FPR)')
plt.show()
```

### This is how the GINI metric is calculated from AUC:

```
GINI = (2 * roc_auc) - 1
print(GINI)
print(test_performance.gini())
```

Note: Above you will see that our calculated GINI value is exactly the same as the one given by model performance for the test dataset.

That's it, enjoy!!


# How R2 error is calculated in Generalized Linear Model

## What is R2 (R^2 i.e. R-Squared)?

R-squared is a statistical measure of how close the data are to the fitted regression line. It is also known as the coefficient of determination, or the coefficient of multiple determination for multiple regression. … 100% indicates that the model explains all the variability of the response data around its mean. (From here)

You can get the full working Jupyter Notebook for this article here, directly from my GitHub.

Although this article explains how the R^2 error is calculated for an H2O GLM (Generalized Linear Model), the same math applies to any other statistical model, so you can use this calculation anywhere you want.

## Let's build an H2O GLM Model first:

```
import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()

local_url = "https://raw.githubusercontent.com/h2oai/sparkling-water/master/examples/smalldata/prostate.csv"
df = h2o.import_file(local_url)

y = "CAPSULE"
feature_names = df.col_names
feature_names.remove(y)

df_train, df_valid, df_test = df.split_frame(ratios=[0.8,0.1])
print(df_train.shape)
print(df_valid.shape)
print(df_test.shape)

prostate_glm = H2OGeneralizedLinearEstimator(model_id = "prostate_glm")

prostate_glm.train(x = feature_names, y = y, training_frame=df_train, validation_frame=df_valid)
prostate_glm
```

Now calculate the model performance on the training, validation and test data:

```
train_performance = prostate_glm.model_performance(df_train)
valid_performance = prostate_glm.model_performance(df_valid)
test_performance = prostate_glm.model_performance(df_test)
```

Now let's check the default R^2 metric for the training, validation and test data:

```
print(train_performance.r2())
print(valid_performance.r2())
print(test_performance.r2())
print(prostate_glm.r2())
```

Now let's get the predictions for the test data which we kept separate:

`predictions = prostate_glm.predict(df_test)`

Here is the math used to calculate the R^2 metric for the test dataset:

```
SSE = ((predictions - df_test[y])**2).sum()
y_hat = df_test[y].mean()
SST = ((df_test[y] - y_hat)**2).sum()
1 - SSE/SST
```
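The same SSE/SST arithmetic can be sanity-checked in plain Python on a tiny made-up dataset (the numbers below are illustrative only):

```python
# R^2 = 1 - SSE/SST on a tiny made-up dataset (same math as above).
toy_actual = [3.0, 5.0, 7.0, 9.0]     # hypothetical observed responses
toy_predicted = [2.5, 5.5, 7.0, 8.5]  # hypothetical model predictions

sse = sum((p - a) ** 2 for p, a in zip(toy_predicted, toy_actual))  # residual sum of squares
mean_y = sum(toy_actual) / len(toy_actual)                          # mean of the response
sst = sum((a - mean_y) ** 2 for a in toy_actual)                    # total sum of squares

r2 = 1 - sse / sst
print(r2)  # 0.9625
```

An R^2 of 1 means the predictions explain all the variability of the response around its mean; 0 means the model does no better than always predicting the mean.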

Now let's get the model performance for the given test data as below:

`print(test_performance.r2())`

Above we can see that both values, the one given by model performance for the test data and the one we calculated, are the same.

That's it, enjoy!!

# Launching H2O cluster on different port in pysparkling

In this example we will launch an H2O machine learning cluster using the pysparkling package. You can visit my GitHub to learn more about the code explained in this article.

First you would need to install pysparkling in your Python 2.7 setup as below:

`> pip install -U h2o_pysparkling_2.1`

Now we can launch the pysparkling shell. First set SPARK_HOME to point at your Spark installation:

`SPARK_HOME=/Users/avkashchauhan/tools/spark-2.1.0-bin-hadoop2.6`

Then launch the pysparkling shell:

`~/tools/sw2/sparkling-water-2.1.14 $ bin/pysparkling`

Python script to launch the H2O cluster in pysparkling:

```
## Importing Libraries
from pysparkling import *
import h2o

## Setting H2O Conf Object
h2oConf = H2OConf(sc)
h2oConf

## Setting H2O Conf for a different port
h2oConf.set_client_port_base(54300)
h2oConf.set_node_base_port(54300)

## Getting H2O Conf Object to see the configuration
h2oConf

## Launching H2O Cluster
hc = H2OContext.getOrCreate(spark, h2oConf)

## Getting H2O Cluster status
h2o.cluster_status()
```

Now if you verify the Sparkling Water configuration you will see that H2O is running on the given IP and port 54300 as configured:

```
Sparkling Water configuration:
backend cluster mode : internal
workers              : None
cloudName            : Not set yet, it will be set automatically before starting H2OContext.
flatfile             : true
clientBasePort       : 54300
nodeBasePort         : 54300
cloudTimeout         : 60000
h2oNodeLog           : INFO
h2oClientLog         : WARN
nthreads             : -1
drddMulFactor        : 10
```

That's it, enjoy!!

# Using H2O AutoML for Kaggle Porto Seguro Safe Driver Prediction Competition

If you are into competitive machine learning you must be visiting Kaggle routinely. Currently you can compete for cash and recognition at Porto Seguro's Safe Driver Prediction as well.

I did try the given training dataset (as it is) with H2O AutoML, which ran for about 5 hours, and I was able to get into the top 280th position. If you transform the dataset properly and run H2O AutoML, you may be able to get an even higher ranking.

Following is the simplest H2O AutoML Python script, which you can try as well. (Note: make sure to change `run_automl_for_seconds` to the time you want the experiment to run.)

```
import h2o
import pandas as pd
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file('/data/avkash/PortoSeguro/PortoSeguroTrain.csv')
test = h2o.import_file('/data/avkash/PortoSeguro/PortoSeguroTest.csv')
sub_data = h2o.import_file('/data/avkash/PortoSeguro/PortoSeguroSample_submission.csv')

y = 'target'
x = train.columns
x.remove(y)

## Time to run the experiment (18000 seconds = 5 hours)
run_automl_for_seconds = 18000
aml = H2OAutoML(max_runtime_secs=run_automl_for_seconds)
train_final, valid = train.split_frame(ratios=[0.9])
aml.train(x=x, y=y, training_frame=train_final, validation_frame=valid)

leader_model = aml.leader
pred = leader_model.predict(test_data=test)

pred_pd = pred.as_data_frame()
sub = sub_data.as_data_frame()

sub['target'] = pred_pd
sub.to_csv('/data/avkash/PortoSeguro/PortoSeguroResult.csv', header=True, index=False)
```

That’s it, enjoy!!

# Handling exception “Argument python_obj should be a …”

Recently I hit the following exception when running Python code with H2O functions on a new machine; however, this exception does not happen on my main machine. The exception was as below:

```
H2OTypeError: Argument `python_obj` should be a None | list | tuple | dict | numpy.ndarray | pandas.DataFrame | scipy.sparse.issparse, got H2OTwoDimTable
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/h2o/utils/debugging.py", line 95, in _except_hook
    _handle_soft_error(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python2.7/site-packages/h2o/utils/debugging.py", line 225, in _handle_soft_error
    args_str = _get_args_str(func, highlight=highlight)
  File "/usr/local/lib/python2.7/site-packages/h2o/utils/debugging.py", line 316, in _get_args_str
    s = str(inspect.signature(func))[1:-1]
```

The following message is worth exploring:

Argument `python_obj` should be a None | list | tuple | dict | numpy.ndarray | pandas.DataFrame | scipy.sparse.issparse, got H2OTwoDimTable

Diagnostics:

• The method expects numpy, pandas and scipy to be available on the machine
• I checked that numpy was installed but pandas was missing
• The missing pandas library produced this cryptic error message
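To make this kind of diagnosis quicker next time, a small standalone snippet (my own helper, not part of H2O) can report which of these libraries are importable:

```python
# Report which optional dependencies are importable on this machine.
import importlib

status = {}
for name in ("numpy", "pandas", "scipy"):
    try:
        importlib.import_module(name)
        status[name] = "available"
    except ImportError:
        status[name] = "MISSING (try: pip install " + name + ")"

for name, state in status.items():
    print(name + ": " + state)
```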

Solution:

After installing the pandas library, the problem was resolved.

That's it, enjoy!!

# Exploring & transforming H2O Data Frame in R and Python

Sometimes you may need to ingest a dataset for building models, and your first task is to explore all the features and their types. Once that is done, you may want to change the feature types to the ones you want.

Here is the code snippet in Python:

```
df = h2o.import_file('https://raw.githubusercontent.com/h2oai/sparkling-water/master/examples/smalldata/prostate.csv')
df.types
{u'AGE': u'int', u'CAPSULE': u'int', u'DCAPS': u'int',
 u'DPROS': u'int', u'GLEASON': u'int', u'ID': u'int',
 u'PSA': u'real', u'RACE': u'int', u'VOL': u'real'}
```
If you would like to visualize all the features in graphical format you can do the following:
```
import pylab as pl
df.as_data_frame().hist(figsize=(20,20))
pl.show()
```
The result looks like below in a Jupyter Notebook. Note: if you have more than 50 features, you might have to trim your data frame to fewer features for an effective visualization.

You can also use the following function to convert a list of columns to factor/categorical by passing an H2O dataframe and a list of columns:
```
def convert_columns_as_factor(hdf, column_list):
    list_count = len(column_list)
    if list_count == 0:
        return "Error: You don't have a list of binary columns."
    if len(hdf.columns) == 0:
        return "Error: You don't have any columns in your data frame."
    local_column_list = hdf.columns
    for i in range(list_count):
        try:
            target_index = local_column_list.index(column_list[i])
            hdf[column_list[i]] = hdf[column_list[i]].asfactor()
            print('Column ' + column_list[i] + " is converted into factor/categorical.")
        except ValueError:
            print('Error: ' + str(column_list[i]) + " not found in the data frame.")
```

The following script is in R and performs the same tasks as above:

```
N = 100
set.seed(999)
color = sample(c("D","E","I","F","M"), size=N, replace=TRUE)
num = rnorm(N, mean=12, sd=21212)
sex = sample(c("male","female"), size=N, replace=TRUE)
sex = as.factor(sex)
color = as.factor(color)
data = sample(c(0,1), size=N, replace=T)
fdata = factor(data)
table(fdata)
dd = data.frame(color, sex, num, fdata)
data = as.h2o(dd)
str(data)
data$sex = h2o.setLevels(x = data$sex, levels = c("F","M"))
data
```
That's it, enjoy!!

# Full working example of connecting to Netezza from Java and Python

Before you start connecting, make sure you can access the Netezza database and table from the machine where you are trying to run the Java and/or Python samples.

## Connecting Netezza server from Python Sample

Check out my IPython/Jupyter Notebook with the Python sample.

Step 1: Importing python jaydebeapi library

```
import jaydebeapi
```

Step 2: Setting Database connection settings

```
dsn_database = "avkash"
dsn_hostname = "172.16.181.131"
dsn_port = "5480"
dsn_uid = "admin"
dsn_pwd = "password"
jdbc_driver_name = "org.netezza.Driver"
jdbc_driver_loc = "/Users/avkashchauhan/learn/customers/netezza/nzjdbc3.jar"
### jdbc:netezza://" + server + "/" + dbName ;
connection_string = 'jdbc:netezza://' + dsn_hostname + ':' + dsn_port + '/' + dsn_database
url = '{0}:user={1};password={2}'.format(connection_string, dsn_uid, dsn_pwd)
print("URL: " + url)
print("Connection String: " + connection_string)
```

Step 3: Creating Database Connection

```
conn = jaydebeapi.connect("org.netezza.Driver", connection_string,
                          {'user': dsn_uid, 'password': dsn_pwd},
                          jars="/Users/avkashchauhan/learn/customers/netezza/nzjdbc3.jar")
curs = conn.cursor()
```

Step 4: Processing SQL Query

```
curs.execute("select * from allusers")
result = curs.fetchall()
print("Total records: " + str(len(result)))
print(result)
```

Step 5: Printing all records

```
for i in range(len(result)):
    print(result[i])
```

Step 6: Closing all connections

```
curs.close()
conn.close()
```

## Connecting Netezza server from Java Code Sample

Step 1: Have the Netezza driver as nzjdbc3.jar in a folder.

Step 2: Create netezzaJdbcMain.java as below in the same folder where nzjdbc3.jar is placed.

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class netezzaJdbcMain {
    public static void main(String[] args) {
        String server = "x.x.x.x";
        String port = "5480";
        String dbName = "_db_name_";
        String url = "jdbc:netezza://" + server + "/" + dbName;
        String user = "admin";
        String pwd = "password";
        String schema = "db_schema";
        Connection conn = null;
        Statement st = null;
        ResultSet rs = null;
        try {
            Class.forName("org.netezza.Driver");
            System.out.println(" Connecting ... ");
            conn = DriverManager.getConnection(url, user, pwd);
            System.out.println(" Connected " + conn);

            String sql = "select * from allusers";
            st = conn.createStatement();
            rs = st.executeQuery(sql);

            System.out.println("Printing result...");
            int i = 0;
            while (rs.next()) {
                String userName = rs.getString("name");
                int year = rs.getInt("age");
                System.out.println("User: " + userName + ", age is: " + year);
                i++;
            }
            if (i == 0) {
                System.out.println(" No data found");
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (rs != null) rs.close();
                if (st != null) st.close();
                if (conn != null) conn.close();
            } catch (SQLException e1) {
                e1.printStackTrace();
            }
        }
    }
}
```

Step 3: Compile the code as below:

```
$ javac -cp nzjdbc3.jar -J-Xmx2g -J-XX:MaxPermSize=128m netezzaJdbcMain.java
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
```

Note: Your main class should compile without any problems.

Step 4: Run the compiled class as below:

```
$ java -cp .:nzjdbc3.jar netezzaJdbcMain

Connecting ...
Connected org.netezza.sql.NzConnection@3feba861
Printing result...
User: John                , age is: 30
User: Jason               , age is: 26
User: Jim                 , age is: 20
User: Kyle                , age is: 21
User: Kim                 , age is: 27
```

Note: You should see results similar to the above.

That's it, enjoy!!