Keras Confusion Matrix


Model accuracy on its own is not a reliable measure of performance, because it yields misleading results when the validation data set is unbalanced. For example, if the validation set contained 90 cats and only 10 dogs and the model predicted every image as a cat, the overall accuracy would still be 90%.
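
To make this concrete, here is a minimal sketch (using scikit-learn and made-up labels, where 0 = cat and 1 = dog) of how a model that predicts "cat" for everything still reaches 90% accuracy while missing every dog:

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Validation set: 90 cats (label 0) and 10 dogs (label 1)
y_true = np.array([0] * 90 + [1] * 10)
# A degenerate model that predicts "cat" for every image
y_pred = np.zeros(100, dtype=int)

print(accuracy_score(y_true, y_pred))    # 0.9 -- looks good, but is misleading
print(confusion_matrix(y_true, y_pred))  # [[90  0]
                                         #  [10  0]]  all 10 dogs are misclassified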

The confusion matrix lets us visualize the performance of a trained model. It makes it easy to see whether the system is confusing two classes, and it summarizes the test results for further inspection. In this tutorial, we create a simple Convolutional Neural Network (CNN) to classify MNIST digits and visualize its confusion matrix in TensorBoard.

Download Dataset

We’re going to construct a simple neural network to classify images in the MNIST dataset, which consists of 28×28 grayscale images of handwritten digits in 10 classes (0-9).

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow.keras import datasets, layers, models

(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()

train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))

train_images, test_images = train_images / 255.0, test_images / 255.0

classes=[0,1,2,3,4,5,6,7,8,9]

Define Simple CNN Model

First, create a very simple model, compile it with an optimizer and a loss function, and train it.

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x=train_images, 
            y=train_labels, 
            epochs=5, 
            validation_data=(test_images, test_labels))

The compile step also specifies that you want to log the accuracy of the classifier along the way.

Create a Confusion Matrix

You can use TensorFlow’s tf.math.confusion_matrix function to compute the confusion matrix from the true and predicted labels.

# predict_classes was removed in recent TF/Keras versions; take the argmax of the softmax output instead.
y_pred = np.argmax(model.predict(test_images), axis=1)
con_mat = tf.math.confusion_matrix(labels=test_labels, predictions=y_pred).numpy()

Normalize the confusion matrix to make it easier to interpret which classes are being misclassified.

con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)

con_mat_df = pd.DataFrame(con_mat_norm,
                     index = classes, 
                     columns = classes)

Keras Confusion Matrix

The diagonal elements represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the model.
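
As a rough check, you can read per-class recall straight off the matrix; a minimal sketch (assuming the un-normalized con_mat computed above) looks like this:

# Recall per class: correct predictions on the diagonal divided by the true count per row.
per_class_recall = np.diag(con_mat) / con_mat.sum(axis=1)
for cls, recall in zip(classes, per_class_recall):
    print(f"digit {cls}: recall {recall:.2f}")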

figure = plt.figure(figsize=(8, 8))
sns.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()

We use Matplotlib to plot the confusion matrix and the Seaborn library to render it as a heatmap.

Plot Keras Confusion matrix

The confusion matrix shows that this model has some problems. “9”, “5”, and “2” are getting confused with each other. The model needs more work.

Plot Confusion Matrix in TensorBoard

Using the TensorFlow Image Summary API, you can easily log the confusion matrix as an image and view it in TensorBoard. Here’s what you’ll do:

  1. Create the Keras TensorBoard callback to log basic metrics
  2. Create a Keras LambdaCallback to log the confusion matrix at the end of every epoch
  3. Train the model using Model.fit(), making sure to pass both callbacks

You need some boilerplate code to convert the Matplotlib plot to a tensor: tf.summary.image() expects a rank-4 tensor of shape (batch_size, height, width, channels), so the rendered figure has to be decoded into an image tensor and reshaped accordingly.

import io

logdir = 'logs/images'
file_writer = tf.summary.create_file_writer(logdir + '/cm')

def log_confusion_matrix(epoch, logs):
  # Use the model to predict the values from the validation dataset.
  # predict_classes was removed in recent Keras versions, so take the argmax of the softmax output.
  test_pred = np.argmax(model.predict(test_images), axis=1)

  con_mat = tf.math.confusion_matrix(labels=test_labels, predictions=test_pred).numpy()
  con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)

  con_mat_df = pd.DataFrame(con_mat_norm,
                     index = classes, 
                     columns = classes)

  figure = plt.figure(figsize=(8, 8))
  sns.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
  plt.tight_layout()
  plt.ylabel('True label')
  plt.xlabel('Predicted label')
  
  buf = io.BytesIO()
  plt.savefig(buf, format='png')

  plt.close(figure)
  buf.seek(0)
  image = tf.image.decode_png(buf.getvalue(), channels=4)

  image = tf.expand_dims(image, 0)
  
  # Log the confusion matrix as an image summary.
  with file_writer.as_default():
    tf.summary.image("Confusion Matrix", image, step=epoch)

    
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)

cm_callback = tf.keras.callbacks.LambdaCallback(on_epoch_end=log_confusion_matrix)

You’re now ready to train the model, logging the confusion-matrix image at the end of each epoch so you can view it in TensorBoard.

model.fit(
    train_images,
    train_labels,
    epochs=5,
    verbose=0, 
    callbacks=[tensorboard_callback, cm_callback],
    validation_data=(test_images, test_labels))

The “Images” tab displays the image you just logged.

TensorBoard Confusion Matrix

The image is scaled to a default size for easier viewing. If you want to view the unscaled original image, check “Show actual image size” at the upper left.
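
If TensorBoard is not already running, a typical way to start it (assuming the logs/images directory used above) is:

# From a terminal
tensorboard --logdir logs/images

# In a notebook or Colab
%load_ext tensorboard
%tensorboard --logdir logs/images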

Generating a Confusion Matrix in Keras for Multiclass Classification

In the realm of machine learning, understanding the performance of your model is crucial. One of the most effective ways to visualize this performance is through a confusion matrix. This blog post will guide you on how to generate a confusion matrix in Keras for multiclass classification.

What is a Confusion Matrix?

A confusion matrix, also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm. It’s particularly useful for multiclass classification, where each prediction belongs to one of several classes.

Why Use a Confusion Matrix?

A confusion matrix provides a more detailed breakdown of your model’s performance, showing not only the total accuracy but also the types of errors made. It’s an essential tool for data scientists to understand the nuances of their model’s performance.

Getting Started with Keras

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It’s user-friendly, modular, and extensible, making it a popular choice for data scientists.

To install Keras, use the following pip command:
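
pip install keras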

Generating a Confusion Matrix in Keras

Let’s dive into the steps to generate a confusion matrix in Keras for multiclass classification.

Step 1: Import Necessary Libraries

First, we need to import the necessary libraries. We’ll need Keras for building the model, numpy for numerical operations, and sklearn for generating the confusion matrix.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from sklearn.metrics import confusion_matrix

Step 2: Load and Preprocess the Data

Next, we load our dataset and preprocess it. For this example, we’ll use the Iris dataset, a multiclass classification problem.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical

iris = load_iris()
X = iris.data
y = to_categorical(iris.target)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Step 3: Build and Train the Model

Now, we build and train our model. We’ll use a simple feedforward neural network for this example.

model = Sequential()
model.add(Dense(10, input_dim=4, activation='relu'))
model.add(Dense(3, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(X_train, y_train, epochs=150, batch_size=10)

Step 4: Generate Predictions

After training the model, we generate predictions on our test set.

y_pred = model.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
y_test = np.argmax(y_test, axis=1)

Step 5: Generate the Confusion Matrix

Finally, we generate the confusion matrix using sklearn’s confusion_matrix function.

cm = confusion_matrix(y_test, y_pred)
print(cm)
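
If you prefer a labeled view, you can wrap the matrix in a pandas DataFrame and plot it as a Seaborn heatmap, just like the MNIST example earlier on this page (a sketch, assuming pandas, seaborn, and matplotlib are installed):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Label rows and columns with the Iris class names for readability.
cm_df = pd.DataFrame(cm, index=iris.target_names, columns=iris.target_names)
sns.heatmap(cm_df, annot=True, fmt='d', cmap=plt.cm.Blues)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()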

Conclusion

A confusion matrix is a powerful tool for understanding the performance of a multiclass classification model. With Keras, generating a confusion matrix is straightforward and can provide valuable insights into your model’s performance.

Remember, a model’s accuracy is not the only metric that matters. A confusion matrix can help you understand the types of errors your model is making, allowing you to fine-tune it for better performance.

Stay tuned for more posts on advanced machine learning topics!





tf.math.confusion_matrix

Computes the confusion matrix from predictions and labels.

tf.math.confusion_matrix(
    labels,
    predictions,
    num_classes=None,
    weights=None,
    dtype=tf.dtypes.int32,
    name=None
)


The matrix columns represent the prediction labels and the rows represent the
real labels. The confusion matrix is always a 2-D array of shape [n, n],
where n is the number of valid labels for a given classification task. Both
prediction and labels must be 1-D arrays of the same shape in order for this
function to work.

If num_classes is None, then num_classes will be set to one plus the
maximum value in either predictions or labels. Class labels are expected to
start at 0. For example, if num_classes is 3, then the possible labels
would be [0, 1, 2].

If weights is not None, then each prediction contributes its
corresponding weight to the total value of the confusion matrix cell.

For example:

  tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==>
      [[0 0 0 0 0]
       [0 0 1 0 0]
       [0 0 1 0 0]
       [0 0 0 0 0]
       [0 0 0 0 1]]

Note that the possible labels are assumed to be [0, 1, 2, 3, 4],
resulting in a 5×5 confusion matrix.
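
As an illustration of the weights and num_classes arguments (a sketch with arbitrarily chosen values; each prediction adds its weight, rather than 1, to the corresponding cell, and dtype is set to float so the weights are not truncated):

  tf.math.confusion_matrix(
      labels=[1, 2, 4],
      predictions=[2, 2, 4],
      num_classes=5,
      weights=[0.5, 1.0, 2.0],
      dtype=tf.float32) ==>
      [[0.  0.  0.  0.  0. ]
       [0.  0.  0.5 0.  0. ]
       [0.  0.  1.  0.  0. ]
       [0.  0.  0.  0.  0. ]
       [0.  0.  0.  0.  2. ]]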

Args

  labels        1-D Tensor of real labels for the classification task.
  predictions   1-D Tensor of predictions for a given classification.
  num_classes   The possible number of labels the classification task can have. If this value is not provided, it will be calculated using both predictions and labels.
  weights       An optional Tensor whose shape matches predictions.
  dtype         Data type of the confusion matrix.
  name          Scope name.

Returns

A Tensor of type dtype with shape [n, n] representing the confusion
matrix, where n is the number of possible labels in the classification
task.

Raises

  ValueError    If both predictions and labels are not 1-D vectors and have mismatched shapes, or if weights is not None and its shape doesn’t match predictions.

I am building a multiclass model with Keras.

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, callbacks=[checkpoint], validation_data=(X_test, y_test))  # starts training

Here is what my test data looks like (it’s text data).

X_test
Out[25]: 
array([[621, 139, 549, ...,   0,   0,   0],
       [621, 139, 543, ...,   0,   0,   0]])

y_test
Out[26]: 
array([[0, 0, 1],
       [0, 1, 0]])

After generating predictions…

predictions = model.predict(X_test)
predictions
Out[27]: 
array([[ 0.29071924,  0.2483743 ,  0.46090645],
       [ 0.29566404,  0.45295066,  0.25138539]], dtype=float32)

I did the following to get the confusion matrix.

y_pred = (predictions > 0.5)

confusion_matrix(y_test, y_pred)
Traceback (most recent call last):

  File "<ipython-input-38-430e012b2078>", line 1, in <module>
    confusion_matrix(y_test, y_pred)

  File "/Users/abrahammathew/anaconda3/lib/python3.6/site-packages/sklearn/metrics/classification.py", line 252, in confusion_matrix
    raise ValueError("%s is not supported" % y_type)

ValueError: multilabel-indicator is not supported

However, I am getting the above error.

How can I get a confusion matrix when doing a multiclass neural network in Keras?
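
One way around this error (the same approach used earlier on this page) is to convert both the one-hot test labels and the predicted probabilities to class indices with np.argmax before calling confusion_matrix; a sketch:

import numpy as np
from sklearn.metrics import confusion_matrix

predictions = model.predict(X_test)        # softmax probabilities, shape (n_samples, n_classes)
y_pred = np.argmax(predictions, axis=1)    # predicted class index per sample
y_true = np.argmax(y_test, axis=1)         # true class index from the one-hot labels

print(confusion_matrix(y_true, y_pred))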

A complete end-to-end example that trains an image classifier with ImageDataGenerator and then prints a confusion matrix and classification report:

import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import classification_report, confusion_matrix

# Start
train_data_path = 'F://data//Train'
test_data_path = 'F://data//Validation'
img_rows = 150
img_cols = 150
epochs = 30
batch_size = 32
num_of_train_samples = 3000
num_of_test_samples = 600

# Image Generator
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(train_data_path,
                                                    target_size=(img_rows, img_cols),
                                                    batch_size=batch_size,
                                                    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(test_data_path,
                                                        target_size=(img_rows, img_cols),
                                                        batch_size=batch_size,
                                                        class_mode='categorical')

# Build model
model = Sequential()
model.add(Convolution2D(32, (3, 3), input_shape=(img_rows, img_cols, 3), padding='valid'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(32, (3, 3), padding='valid'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, (3, 3), padding='valid'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(5))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# Train
model.fit_generator(train_generator,
                    steps_per_epoch=num_of_train_samples // batch_size,
                    epochs=epochs,
                    validation_data=validation_generator,
                    validation_steps=num_of_test_samples // batch_size)

# Confusion Matrix and Classification Report
Y_pred = model.predict_generator(validation_generator, num_of_test_samples // batch_size + 1)
y_pred = np.argmax(Y_pred, axis=1)
print('Confusion Matrix')
print(confusion_matrix(validation_generator.classes, y_pred))
print('Classification Report')
target_names = ['Cats', 'Dogs', 'Horse']
print(classification_report(validation_generator.classes, y_pred, target_names=target_names))
