
Deriving a single result from multiple image types

Question

In our university research, we use deep learning to classify the weather. We download cloud observation data from the Japan Meteorological Agency, convert its indicators into images, and use those images for deep learning.
At the moment we use one kind of image (a precipitation map, etc.) and study whether clouds will develop and the weather will worsen.

My question: is it possible to train using multiple types of images, not just one (for example, both a precipitation distribution and a lightning-potential map)?
This may be a little hard to follow, so here is an example. Suppose we prepare images of Pikachu's ears, tail, legs, and so on from individuals with different body shapes, and proceed with deep learning by training on one individual's ear, tail, and foot images, then on another individual's ear, tail, and foot images, and so on.
Could the model then judge whether a given image is of Pikachu or not?

  • Answer # 1

    I'll respond to this as a question about how to input multiple types of images.
    (I was wondering the same thing, so this is the result of a quick Google search.)

    The following paper describes a relevant methodology.
    https://jsai.ixsq.nii.ac.jp/ej/?action=repository_uri&item_id=9456&file_id=1&file_no=1

    Here is a quote. ----------

    For the method of integrating multiple inputs into one output in a convolutional neural network, we referred to the integration types of Zhe et al. [3]. There are three types of integration, as shown in Figure 3. In Type-I, multiple images are combined into a single image before being input. In Type-II, each image is input to its own convolutional layers, and the concatenated feature maps are input to the fully connected layers. In Type-III, the label maps output by separate fully connected layers are reconciled by "some method".

    End of citation. --------------
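
    To make Type-II concrete, here is a minimal PyTorch sketch (my own illustration with assumed layer sizes, not code from the paper): each image type gets its own convolutional branch, and the concatenated feature maps go to fully connected layers.

        import torch
        import torch.nn as nn

        class TypeIINet(nn.Module):
            def __init__(self, num_classes=2):
                super().__init__()
                # One convolutional branch per image type
                # (e.g., precipitation map and lightning-potential map).
                def branch():
                    return nn.Sequential(
                        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                        nn.MaxPool2d(2),
                        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d((8, 8)),
                    )
                self.branch_a = branch()
                self.branch_b = branch()
                # Fully connected layers on the concatenated feature maps.
                self.fc = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(2 * 32 * 8 * 8, 128), nn.ReLU(),
                    nn.Linear(128, num_classes),
                )

            def forward(self, img_a, img_b):
                feats = torch.cat([self.branch_a(img_a), self.branch_b(img_b)], dim=1)
                return self.fc(feats)

        # Two single-channel input images of the same spatial size.
        out = TypeIINet()(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))

    (Type-I would instead simply concatenate the images as channels before the first convolution.)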

    Examining the cited references might make this clearer.
    If you figure out the details, I'd be happy if you could summarize them here.

  • Answer # 2

    I don't know whether this applies, since it differs from the dataset in the question, but for satellite image data there are approaches that feed data from different wavelengths into different input channels of the NN. An ordinary color image has three channels (R, G, B); here, image data from different wavelengths is stacked in the same way, as sketched below. I think the following site will be helpful.

    https://www.nict.go.jp/publication/shuppan/kihou-journal/houkoku65-1_HTML/2019R-03-03(13).pdf
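
    As a minimal sketch of that channel-stacking idea (array names and sizes are my own assumptions): two 2-D maps on the same grid are stacked as input channels, just like the R, G, and B channels of an ordinary color image.

        import numpy as np

        band_a = np.random.rand(64, 64)  # data from one wavelength
        band_b = np.random.rand(64, 64)  # data from another wavelength

        # Shape (channels, height, width): a 2-channel "image" to feed a
        # conv layer with in_channels=2 instead of the usual 3 for RGB.
        x = np.stack([band_a, band_b], axis=0)
        print(x.shape)  # (2, 64, 64)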

    However, since your inputs are a precipitation distribution map and a map of lightning-occurrence probability (I can't say for sure, as I've never seen them), the situation may differ a little from the example above, so I can't say anything definite. (Sorry if this isn't helpful.)

  • Answer # 3

    What is your teacher data? If you have a collection of images of parts of Pikachu, parts of a lion, and parts of a monkey, and you train on them together with the determination result (the correct answer: Pikachu, lion, or monkey), then I think it is possible to distinguish Pikachu/lion/monkey from images, as sketched below.
    However, if you train only on images of Pikachu, even ones with different body shapes, the model will never be able to answer "not Pikachu!"
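
    To illustrate "training together with the determination result", here is a minimal sketch (random dummy data, and the layer sizes are my own assumptions): images paired with class labels, trained with a cross-entropy loss.

        import torch
        import torch.nn as nn

        classes = ["pikachu", "lion", "monkey"]
        model = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, len(classes)),
        )
        optimizer = torch.optim.Adam(model.parameters())
        loss_fn = nn.CrossEntropyLoss()

        images = torch.randn(8, 3, 64, 64)             # part-of-body images
        labels = torch.randint(0, len(classes), (8,))  # the correct answers

        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        # With only Pikachu images in the training set, no label is left
        # that could ever mean "not Pikachu".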

    I can't quite see how the story of clouds and the weather connects to the story of Pikachu.
    Since the weather changes over time, your task sounds like a model that learns from cloud and weather data given as a time series,
    and a "Pikachu judgment machine" that judges from a single image doesn't seem like a similar model.
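
    As a rough sketch of that time-series idea (the architecture and sizes are my own assumptions, not something given in the question): a small CNN encodes each cloud image, and an LSTM reads the encodings in time order.

        import torch
        import torch.nn as nn

        class CloudSequenceNet(nn.Module):
            def __init__(self, num_classes=2):
                super().__init__()
                # Encode each frame to a 16 * 4 * 4 = 256-dim feature vector.
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                )
                self.lstm = nn.LSTM(input_size=256, hidden_size=64, batch_first=True)
                self.head = nn.Linear(64, num_classes)

            def forward(self, seq):  # seq: (batch, time, 1, H, W)
                b, t = seq.shape[:2]
                feats = self.encoder(seq.flatten(0, 1)).view(b, t, -1)
                _, (h, _) = self.lstm(feats)
                return self.head(h[-1])  # predict from the last time step

        # Six frames of 64x64 cloud images, e.g. "will the weather worsen?"
        out = CloudSequenceNet()(torch.randn(2, 6, 1, 64, 64))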

    I feel like the problem needs a little more organizing.