
I want to do reinforcement learning and imitation learning using Unity's ML-Agents.

Error message

I am using the latest version of ML-Agents (0.11.0), but there are too few references available to solve this.

(base) C:\Users\user\Documents\ml-agents-master\ml-agents>mlagents-learn ../config/trainer_config.yaml --run-id=firstRun --train
2019-11-14 14:45:59.431992: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2019-11-14 14:45:59.436532: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

[ML-Agents ASCII-art banner]

INFO:mlagents.trainers: CommandLineOptions(debug=False, num_runs=1, seed=-1, env_path=None, run_id='firstRun', load_model=False, train_model=True, save_freq=50000, keep_checkpoints=5, base_port=5005, num_envs=1, curriculum_folder=None, lesson=0, slow=False, no_graphics=False, multi_gpu=False, trainer_config_path='../config/trainer_config.yaml', sampler_file_path=None, docker_target_name=None, env_args=None, cpu=False)
INFO:mlagents.envs: Start training by pressing the Play button in the Unity Editor.

Applicable source code

When I press Play in Unity after the above message appears, the following message appears.

Process Process-1:
Traceback (most recent call last):
  File "c:\users\user\anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "c:\users\user\anaconda3\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\subprocess_env_manager.py", line 82, in worker
    env = env_factory(worker_id)
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\trainers\learn.py", line 359, in create_unity_environment
    args=env_args,
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\environment.py", line 105, in __init__
    aca_output = self.send_academy_parameters(rl_init_parameters_in)
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\environment.py", line 689, in send_academy_parameters
    return self.communicator.initialize(inputs)
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\rpc_communicator.py", line 88, in initialize
    "The Unity environment took too long to respond. Make sure that:\n"
mlagents.envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that:
         The environment does not need user interaction to launch
         The Agents are linked to the appropriate Brains
         The environment and the Python interface have compatible versions.
Traceback (most recent call last):
  File "c:\users\user\anaconda3\lib\multiprocessing\connection.py", line 312, in _recv_bytes
    nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been terminated.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\subprocess_env_manager.py", line 59, in recv
    response: EnvironmentResponse = self.conn.recv()
  File "c:\users\user\anaconda3\lib\multiprocessing\connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "c:\users\user\anaconda3\lib\multiprocessing\connection.py", line 321, in _recv_bytes
    raise EOFError
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\user\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\user\Anaconda3\Scripts\mlagents-learn.exe\__main__.py", line 9, in <module>
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\trainers\learn.py", line 408, in main
    run_training(0, run_seed, options, Queue())
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\trainers\learn.py", line 222, in run_training
    options.sampler_file_path, env.reset_parameters, run_seed
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\subprocess_env_manager.py", line 225, in reset_parameters
    return self.env_workers[0].recv().payload
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\subprocess_env_manager.py", line 62, in recv
    raise UnityCommunicationException("UnityEnvironment worker: recv failed.")
mlagents.envs.exception.UnityCommunicationException: UnityEnvironment worker: recv failed.
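This timeout can also occur when another process is already occupying the trainer's communication port (`base_port=5005` in the log above). As a sketch of my own (not from the original question or answers), a quick way to check whether that port is free before launching `mlagents-learn`:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when something accepts the connection,
        # i.e. the port is already in use.
        return s.connect_ex((host, port)) != 0

if __name__ == "__main__":
    # 5005 is ML-Agents' default base_port, per the CommandLineOptions log.
    print("port 5005 free:", port_is_free(5005))
```

If the port is taken, either close the conflicting process or pass a different `--base-port` to `mlagents-learn`.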

What I tried: reinstalling, and browsing websites for a solution.

Supplemental information (FW/tool version etc.)

TensorFlow 1.15
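The last hint in the timeout message is version compatibility between the Unity-side ML-Agents package and the Python `mlagents` package. A small sketch (my addition, not from the question) for checking what pip installed; `importlib.metadata` needs Python 3.8+, and on older interpreters `pip show mlagents` gives the same information:

```python
from importlib import metadata

def installed_version(package: str) -> str:
    """Return the installed version of a pip package, or '' if not installed."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return ""

if __name__ == "__main__":
    # The Unity ML-Agents package imported in the project should
    # be the same release as the Python trainer (0.11.0 here).
    print("mlagents:", installed_version("mlagents") or "not installed")
```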

  • Answer # 1

    This was an environment-dependent issue.

The connection may be rejected in some network environments, for example behind a proxy.

This is not a direct solution to the question, but here is a helpful site:
https://note.com/npaka
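Building on the proxy remark above: the trainer and the Unity editor talk over a local gRPC channel on 127.0.0.1, so a system-wide proxy that captures loopback traffic can stall the handshake. A hedged sketch of a workaround (the environment variable names are the conventional proxy-exclusion ones, not something stated in this answer):

```python
import os

# Exclude loopback addresses from any configured proxy so the
# editor <-> trainer connection on 127.0.0.1 is made directly.
# Run this (or set the variables in the shell) before launching
# mlagents-learn from the same process/shell.
for var in ("NO_PROXY", "no_proxy"):
    os.environ[var] = "localhost,127.0.0.1"
```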

  • Answer # 2

I got the same error. In my case the version of ML-Agents was 0.6.0, but since I resolved it, I'll answer in case it helps.

Is the Control checkbox checked for the Brain in the Academy's Broadcast Hub?
I had forgotten this, so I couldn't use the AI Brain.
Here is the site I referred to:

    http://am1tanaka.hatenablog.com/entry/2019/01/18/212915