
When doing floating-point arithmetic in Python, there is a small discrepancy, for example 0.1 + 0.2 = 0.300000000.... Does the size of this deviation vary depending on the computer?
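For reference, a minimal reproduction of the kind of discrepancy in question (the output shown is what a typical CPython build with IEEE 754 doubles prints):

    >>> 0.1 + 0.2
    0.30000000000000004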

The programs were slightly different, but when I ran code that should produce the same calculation results on different PCs, the results differed slightly.

  • Answer # 1

      

    Floating-point types are usually implemented using C doubles; the precision and internal representation of floating-point types on the machine your program runs on are available from sys.float_info.
      Built-in types — Python 3.8.0 documentation

      

    (The 3.8 documentation has not been translated yet, so this is quoted from the 3.7 version for convenience. The content is the same.)

    That is what the documentation says. So, strictly speaking, you would check sys.float_info, the compiler the interpreter was built with, and so on. As a general rule nowadays, though, as Mr. tiitoi has already answered, it is hard to imagine a difference arising there unless the environment is very unusual.
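    As a quick sketch of that check (the exact values depend on the machine, but any platform with IEEE 754 binary64 doubles should show the same numbers):

    >>> import sys
    >>> sys.float_info.mant_dig, sys.float_info.dig
    (53, 15)
    >>> sys.float_info.max
    1.7976931348623157e+308

    If both PCs print the same sys.float_info, the float format itself is identical, and the difference has to come from the order of calculation or from how the result is formatted for output.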

    For example, even a slight change in the order of operations can change the result, so it may be worth suspecting something along those lines:

    >>> (0.1/3) + (0.2/3)
    0.1
    >>> (0.1 + 0.2)/3
    0.10000000000000002
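    If you want to see the exact value a float actually stores (handy when tracking down which intermediate result differs), the decimal module can display it; a small illustration:

    >>> from decimal import Decimal
    >>> Decimal(0.1)
    Decimal('0.1000000000000000055511151231257827021181583404541015625')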

  • Answer # 2

    Floating-point arithmetic in current hardware/software implementations is based on the IEEE 754 standard, so there should be no difference in the calculation results themselves.

    However, when a value is output to the screen, the floating-point number is converted to a decimal string, so there may be differences at that stage depending on the library or language.
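    To illustrate: the same stored double can be printed in different ways depending on the formatting rules applied, even though the underlying bits are identical (a Python-only sketch; other languages' default formats differ):

    >>> x = 0.1 + 0.2
    >>> x                    # repr: the shortest string that round-trips
    0.30000000000000004
    >>> format(x, '.17g')    # a fixed 17-significant-digit format
    '0.30000000000000004'
    >>> format(x, '.6g')     # a coarser format hides the discrepancy
    '0.3'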

  • Answer # 3

      

    The programs were slightly different, but when I ran code that should produce the same calculation results on different PCs, the results differed slightly.

    If it is none of the cases covered in the other answers, perhaps you are comparing results computed under different arithmetic contexts with the decimal module.
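    As a sketch of what that looks like with the decimal module (the precision settings here are just example values; the point is that the context, not the inputs, changes the result):

    >>> from decimal import Decimal, getcontext
    >>> getcontext().prec = 28    # the default context precision
    >>> Decimal(1) / Decimal(3)
    Decimal('0.3333333333333333333333333333')
    >>> getcontext().prec = 10    # a different context, e.g. set elsewhere in the program
    >>> Decimal(1) / Decimal(3)
    Decimal('0.3333333333')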