
What is the reason for converting image data to floating point (float32) in convolutional neural network programming (MNIST, etc.)?
Is it wrong to process the data without this conversion?

Thank you for your guidance.

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
  • Answer # 1

    First of all, an operation cannot be performed unless the operand types match.
    Since images are loaded as uint8, while neural network parameters are usually handled as float32, the uint8 data must be cast to float32.

    In the example below, the array a is defined as int64, but when it is used in an operation with a float, it is implicitly cast to float64, so the resulting array b is also float64.

    a = np.array([1, 2, 3])
    print(a.dtype)  # int64
    b = a * 3.5     # int64 * float
    print(b.dtype)  # float64
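    The same idea applied to image data might look like the following sketch (the array here is random dummy data standing in for MNIST images; the division by 255 is the usual extra normalization step, not something the cast itself requires):

    ```python
    import numpy as np

    # Dummy batch of 8-bit grayscale images (MNIST-like): values 0..255
    X = np.random.randint(0, 256, size=(2, 28, 28), dtype=np.uint8)
    print(X.dtype)  # uint8

    # Cast to float32 so the data matches the network's parameter dtype,
    # then scale to the [0, 1] range (a common preprocessing step)
    X = X.astype('float32') / 255.0
    print(X.dtype)  # float32
    ```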

  • Answer # 2

    Using single precision (float32) instead of double precision (float64) halves memory consumption.
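    A quick way to see the difference (the array shape here is just an illustrative MNIST-like size):

    ```python
    import numpy as np

    # 10,000 MNIST-sized 28x28 images in double vs. single precision
    a64 = np.zeros((10000, 28, 28), dtype=np.float64)
    a32 = a64.astype(np.float32)

    print(a64.nbytes)  # 62720000 bytes
    print(a32.nbytes)  # 31360000 bytes -- exactly half
    ```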