Record of the first machine learning challenge with Keras

I've been dabbling in back-end and front-end development, but I had never tried machine learning. This is my first attempt, so I'm recording it here for posterity. I'm using Python, NumPy, and tf.keras.

My background

To get an overall grounding in the theory of machine learning, I read "Deep Learning from Scratch: Theory and Implementation of Deep Learning Learned with Python". It was a very good book.

The development environment is PyCharm Community 2019.3. I use PyCharm and install the necessary libraries directly, without Anaconda.

1. Machine learning task setting

We aim to have the model learn the following rule: given two features drawn uniformly from [0, 1), the correct label is 1 if the first feature is larger than the second, and 0 otherwise.
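Expressed as a tiny Python sketch (this is only the labeling rule; the actual dataset generation appears in the code below, and the function name here is just for illustration):

# The rule the model should learn
def correct_label(x0, x1):
    return 1 if x0 > x1 else 0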

2. Code

I wrote a typical piece of code for a binary classification problem while referring to a few web articles. It turned out quite compact and intuitive to write. Keras is amazing.

#!/usr/bin/env python3

import tensorflow as tf
import numpy as np
from tensorflow.keras.metrics import binary_accuracy
import matplotlib.pyplot as plt

# Dataset preparation
ds_features = np.random.rand(10000, 2)  # Feature data: 2 features, uniform in [0, 1)
NOISE_RATE = 0  # Fraction of labels to flip
ds_noise = (np.random.rand(10000) > NOISE_RATE).astype(int) * 2 - 1  # +1 = keep label, -1 = flip label (noise)
ds_labels = (np.sign(ds_features[:, 0] - ds_features[:, 1]) * ds_noise + 1) / 2  # Correct label: 1 if feature 0 > feature 1, else 0

#Split dataset for training and validation
SPLIT_RATE = 0.8   #Split ratio
training_features, validation_features = np.split(ds_features, [int(len(ds_features) * SPLIT_RATE)])
training_labels, validation_labels = np.split(ds_labels, [int(len(ds_labels) * SPLIT_RATE)])

#Model preparation
INPUT_FEATURES = ds_features.shape[1]   #Feature dimension
LAYER1_NEURONS = int(INPUT_FEATURES * 1.2 + 1)   #A little wider than the input dimension
LAYER2_NEURONS = LAYER1_NEURONS
LAYER3_NEURONS = LAYER1_NEURONS  #3 hidden layers
OUTPUT_RESULTS = 1  #Output is one-dimensional
ACTIVATION = 'tanh'
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(input_shape=(INPUT_FEATURES,), units=LAYER1_NEURONS, activation=ACTIVATION),
    tf.keras.layers.Dense(units=LAYER2_NEURONS, activation=ACTIVATION),
    tf.keras.layers.Dense(units=LAYER3_NEURONS, activation=ACTIVATION),
    tf.keras.layers.Dense(units=OUTPUT_RESULTS, activation='sigmoid'),
])
LOSS = 'binary_crossentropy'
OPTIMIZER = tf.keras.optimizers.Adam   # A typical optimizer
LEARNING_RATE = 0.03   # Initial learning rate
model.compile(optimizer=OPTIMIZER(learning_rate=LEARNING_RATE), loss=LOSS, metrics=[binary_accuracy])

#Learning
BATCH_SIZE = 30
EPOCHS = 100
result = model.fit(x=training_features, y=training_labels,
                   validation_data=(validation_features, validation_labels),
                   batch_size=BATCH_SIZE, epochs=EPOCHS, verbose=1)

# Plot training and validation accuracy per epoch
plt.plot(range(1, EPOCHS+1), result.history['binary_accuracy'], label="training")
plt.plot(range(1, EPOCHS+1), result.history['val_binary_accuracy'], label="validation")
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.ylim(0.5, 1)
plt.legend()
plt.show()

3. Result

Here is the training result. Accuracy quickly reached about 99%, and there is no sign of overfitting.

(Figure_1.png: training and validation accuracy per epoch, no noise)

4. Discussion

4.1. Behavior when noise is added

I tried setting NOISE_RATE = 0.2. Accuracy drops by roughly the amount of noise, which is the expected result.
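For reference, this is the only change needed; a minimal sketch reusing the dataset code above:

NOISE_RATE = 0.2  # Roughly 20% of the noise factors become -1, flipping those labels
ds_noise = (np.random.rand(10000) > NOISE_RATE).astype(int) * 2 - 1
ds_labels = (np.sign(ds_features[:, 0] - ds_features[:, 1]) * ds_noise + 1) / 2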

(Figure_1.png: accuracy per epoch with NOISE_RATE = 0.2)

4.2. Behavior when irrelevant dummy features are added

Next, let's remove the noise again and increase the number of features to five. The correct label is computed with the same rule using only two of the five features; the remaining three are dummy features that have nothing to do with the correct answer (see the sketch below).
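A minimal sketch of the changed dataset preparation, assuming the rest of the script stays the same (the extra three features are generated but never used for the label):

ds_features = np.random.rand(10000, 5)  # 5 features, but only the first 2 determine the label
ds_labels = (np.sign(ds_features[:, 0] - ds_features[:, 1]) + 1) / 2  # Same rule, noise removed
INPUT_FEATURES = ds_features.shape[1]  # The model's input width follows automatically (now 5)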

Here is the result. The fluctuation is a little larger, but the model learns without being misled by the dummy features.

(Figure_1.png: accuracy per epoch with 5 features, 3 of them dummies)

4.3. Behavior when feature normalization is broken

I went back to two features, but this time multiplied the random values (uniform in [0, 1)) by 1000. The result: training no longer converges smoothly, and accuracy near the final epochs is worse.
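The change, as a minimal sketch:

ds_features = np.random.rand(10000, 2) * 1000  # Features now range over [0, 1000) instead of [0, 1)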

(Figure_1.png: accuracy per epoch with features scaled by 1000)

I increased the number of epochs and checked again; training still looks unstable.

(Figure_1.png: accuracy per epoch with features scaled by 1000, over more epochs)

Next, I instead shifted the mean of the features, adding 1000 to the random values (uniform in [0, 1)). The accuracy stays at almost 0.5; in other words, the model learns nothing at all as a binary classifier.
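Again as a minimal sketch:

ds_features = np.random.rand(10000, 2) + 1000  # Features now range over [1000, 1001)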

(Figure_1.png: accuracy per epoch with features shifted by +1000)

Overall, we can see that feature normalization is important.
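As a minimal sketch of the usual remedy (standardization with statistics from the training split; this is an assumption on my part, not something verified in the experiments above), the badly scaled features could be rescaled before training:

# Standardize features using mean/std computed on the training split only
feat_mean = training_features.mean(axis=0)
feat_std = training_features.std(axis=0)
training_features = (training_features - feat_mean) / feat_std
validation_features = (validation_features - feat_mean) / feat_std

With this in place, both the scaled and the shifted variants above would reach the network with roughly zero mean and unit variance.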
