While training a RandomForestRegressor, I noticed that the value of feature_importances_ with bootstrap enabled differs slightly from the importance I calculated by hand. Since the forest consisted of a single decision tree, a calculation error on my side seems unlikely. Without bootstrap (i.e., when all of the specified training data is used), feature_importances_ matched my hand calculation to within an error of about 10^(-3).
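A minimal sketch of the comparison I am doing (the dataset and parameters here are illustrative, not my actual data):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy regression data standing in for my actual training set.
X, y = make_regression(n_samples=100, n_features=4, random_state=0)

# A "forest" of a single tree, as in my experiment.
rf_boot = RandomForestRegressor(
    n_estimators=1, bootstrap=True, random_state=0
).fit(X, y)
rf_full = RandomForestRegressor(
    n_estimators=1, bootstrap=False, random_state=0
).fit(X, y)

# With bootstrap=False the importances match my hand calculation closely;
# with bootstrap=True they differ slightly.
print(rf_boot.feature_importances_)
print(rf_full.feature_importances_)
```

In both cases feature_importances_ sums to 1, but the individual values diverge once bootstrap sampling is turned on.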

So my question is: how exactly is feature_importances_ computed for a random forest trained with bootstrap=True?