[Chainer] Learning XOR with multi-layer perceptron

Introduction

I want to use Chainer, but I'm not sure where to start. So let's begin with the basics: learning XOR with a multi-layer perceptron.

**Note: This article assumes that an environment where Chainer can be used is already set up.**

**Code used in this article**

Environment

- Python 2.7.x

Training data

import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import optimizers, serializers  # imports added; not shown in the original snippet

# Prepare dataset
source = [[0, 0], [1, 0], [0, 1], [1, 1]]
target = [[0], [1], [1], [0]]
dataset = {}
dataset['source'] = np.array(source, dtype=np.float32)
dataset['target'] = np.array(target, dtype=np.float32)

Model definition

The model used this time has 2 inputs, a single hidden layer with 2 units, and 1 output. The sigmoid nonlinearity in the hidden layer is what allows the network to learn XOR, which is not linearly separable.

N = len(source)  # training data size

in_units  = 2   # number of units in the input layer
n_units   = 2   # number of units in the hidden layer
out_units = 1   # number of units in the output layer

# Model definition
model = chainer.Chain(l1=L.Linear(in_units, n_units),
                      l2=L.Linear(n_units, out_units))

Forward propagation

def forward(x, t):
    # t is accepted for interface consistency but is not used here;
    # the loss against t is computed by the caller
    h1 = F.sigmoid(model.l1(x))
    return model.l2(h1)
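
As a quick sanity check (not part of the original article), the forward pass can be run directly on the dataset defined above:

x = chainer.Variable(dataset['source'])
y = forward(x, None)    # t is unused inside forward
print y.data.shape      # -> (4, 1): one raw output per input pattern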

Learning

Training repeats until the training error drops below 0.00001 or the epoch count reaches n_epoch.

# Setup optimizer
optimizer = optimizers.Adam()
optimizer.setup(model)

# n_epoch is not defined in the original snippet; 10000 is an assumed upper
# bound, consistent with the log below (training converges by epoch 9000)
n_epoch = 10000

# xp is numpy on the CPU; with a GPU it would be chainer.cuda.cupy
xp = np

# Learning loop
loss_val = 100
epoch = 0
while loss_val > 1e-5:

    # training
    x = chainer.Variable(xp.asarray(dataset['source']))  # input
    t = chainer.Variable(xp.asarray(dataset['target']))  # target
    
    model.zerograds()       # zero-initialize the gradients
    y    = forward(x, t)    # forward propagation

    loss = F.mean_squared_error(y, t)  # mean squared error

    loss.backward()              # error backpropagation
    optimizer.update()           # optimization
    
    # Print intermediate results
    if epoch % 1000 == 0:
        # update the error used by the stopping criterion
        loss_val = loss.data

        print 'epoch:', epoch
        print 'x:\n', x.data
        print 't:\n', t.data
        print 'y:\n', y.data

        print('train mean loss={}'.format(loss_val))  # training error
        print ' - - - - - - - - - '
    
    # Stop once the epoch count reaches n_epoch
    if epoch >= n_epoch:
        break

    epoch += 1

# Save the model and optimizer
print 'save the model'
serializers.save_npz('xor_mlp.model', model)
print 'save the optimizer'
serializers.save_npz('xor_mlp.state', optimizer)
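
For completeness (this part is not in the original article), the saved files can later be restored with serializers.load_npz. A minimal sketch, assuming the same model and optimizer definitions as above:

# Rebuild the objects, then load the saved parameters/state into them
model = chainer.Chain(l1=L.Linear(in_units, n_units),
                      l2=L.Linear(n_units, out_units))
optimizer = optimizers.Adam()
optimizer.setup(model)

serializers.load_npz('xor_mlp.model', model)
serializers.load_npz('xor_mlp.state', optimizer)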

Execution result

This is trained as a regression problem, so the network outputs raw real values. When predicting, you need to apply a threshold: for example, treat the output as 1 if it is 0.5 or greater and as 0 otherwise (a minimal sketch of this follows the log below).

$ python train_xor.py --gpu 1
epoch: 0
x:
[[ 0.  0.]
 [ 1.  0.]
 [ 0.  1.]
 [ 1.  1.]]
t:
[[ 0.]
 [ 1.]
 [ 1.]
 [ 0.]]
y:
[[-0.62479508]  # should approach 0
 [-0.85900736]  # should approach 1
 [-0.4117983 ]  # should approach 1
 [-0.62129647]] # should approach 0
train mean loss=1.55636525154  # training error (we want this to shrink)
 - - - - - - - - -
epoch: 1000
x:
[[ 0.  0.]
 [ 1.  0.]
 [ 0.  1.]
 [ 1.  1.]]
t:
[[ 0.]
 [ 1.]
 [ 1.]
 [ 0.]]
y:
[[ 0.39130747]
 [ 0.40636665]
 [ 0.50217605]
 [ 0.52426183]]
train mean loss=0.257050335407
 - - - - - - - - -

...


 - - - - - - - - -
epoch: 8000
x:
[[ 0.  0.]
 [ 1.  0.]
 [ 0.  1.]
 [ 1.  1.]]
t:
[[ 0.]
 [ 1.]
 [ 1.]
 [ 0.]]
y:
[[ 0.00557911]
 [ 0.98262894]
 [ 0.98446763]
 [ 0.02371788]]
train mean loss=0.000284168170765
 - - - - - - - - -
epoch: 9000
x:
[[ 0.  0.]
 [ 1.  0.]
 [ 0.  1.]
 [ 1.  1.]]
t:
[[ 0.]
 [ 1.]
 [ 1.]
 [ 0.]]
y:
[[  5.99622726e-05] # approached 0
 [  9.99812365e-01] # approached 1
 [  9.99832511e-01] # approached 1
 [  2.56299973e-04]] # approached 0
train mean loss=3.31361960093e-08
 - - - - - - - - -
save the model
save the optimizer
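
As a minimal sketch of the thresholding described above (this helper is not part of the original code; the name predict is hypothetical):

def predict(model, x_data):
    # Same forward pass as training, then binarize the output at 0.5
    x = chainer.Variable(np.array(x_data, dtype=np.float32))
    y = model.l2(F.sigmoid(model.l1(x)))
    return (y.data >= 0.5).astype(np.int32)

print predict(model, [[0, 0], [1, 0], [0, 1], [1, 1]])
# -> [[0] [1] [1] [0]] once training has converged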

[Figure: chart.png]

Reference article

- Learning XOR: Let's learn a neural network with Chainer (Neural network with Chainer, part 2)
