# Isaac Gym Environments for Legged Robots #
This repository provides the environment used to train ANYmal (and other robots) to walk on rough terrain using NVIDIA's Isaac Gym.
It includes all components needed for sim-to-real transfer: actuator network, friction & mass randomization, noisy observations and random pushes during training.
**Maintainer**: Nikita Rudin
**Affiliation**: Robotic Systems Lab, ETH Zurich
**Contact**: rudinn@ethz.ch
---
### :bell: Announcement (09.01.2024) ###
With the shift from Isaac Gym to Isaac Sim at NVIDIA, we have migrated all the environments from this work to [Isaac Lab](https://github.com/isaac-sim/IsaacLab). Following this migration, this repository will receive limited updates and support. We encourage all users to migrate to the new framework for their applications.
Information about this work's locomotion-related tasks in Isaac Lab is available [here](https://isaac-sim.github.io/IsaacLab/source/features/environments.html#locomotion).
---
### Useful Links ###
Project website: https://leggedrobotics.github.io/legged_gym/
Paper: https://arxiv.org/abs/2109.11978
### Installation ###
1. Create a new Python virtual environment with Python 3.6, 3.7 or 3.8 (3.8 recommended)
2. Install PyTorch 1.10 with CUDA 11.3:
- `pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html`
3. Install Isaac Gym
- Download and install Isaac Gym Preview 3 (Preview 2 will not work!) from https://developer.nvidia.com/isaac-gym
- `cd isaacgym/python && pip install -e .`
- Try running an example `cd examples && python 1080_balls_of_solitude.py`
    - For troubleshooting, check the docs: `isaacgym/docs/index.html`
4. Install rsl_rl (PPO implementation)
- Clone https://github.com/leggedrobotics/rsl_rl
- `cd rsl_rl && git checkout v1.0.2 && pip install -e .`
5. Install legged_gym
- Clone this repository
- `cd legged_gym && pip install -e .`
### Code Structure ###
1. Each environment is defined by an env file (`legged_robot.py`) and a config file (`legged_robot_config.py`). The config file contains two classes: one containing all the environment parameters (`LeggedRobotCfg`) and one for the training parameters (`LeggedRobotCfgPPO`).
2. Both env and config classes use inheritance.
3. Each non-zero reward scale specified in `cfg` will add a function with a corresponding name to the list of elements which will be summed to get the total reward.
4. Tasks must be registered using `task_registry.register(name, EnvClass, EnvConfig, TrainConfig)`. This is done in `envs/__init__.py`, but can also be done from outside of this repository.
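The reward-scale mechanism from point 3 can be sketched as follows. This is a simplified, hypothetical illustration of the naming convention, not the repository's actual implementation; the class names, scale values, and placeholder returns below are invented for the example:

```python
class RewardScales:
    # hypothetical scales for illustration; a scale of zero disables the term
    tracking_lin_vel = 1.0
    torques = -0.0001
    collision = 0.0

class ToyEnv:
    """Toy stand-in for the reward preparation done in the env class."""
    def __init__(self, scales):
        # keep one (name, scale, function) entry per non-zero scale,
        # looked up via the naming convention _reward_<name>
        self.reward_terms = []
        for name in ("tracking_lin_vel", "torques", "collision"):
            scale = getattr(scales, name)
            if scale != 0:
                self.reward_terms.append((name, scale, getattr(self, "_reward_" + name)))

    def compute_reward(self):
        # the total reward is the sum of scale * term over all active terms
        return sum(scale * fn() for _, scale, fn in self.reward_terms)

    # each reward function returns a placeholder value here
    def _reward_tracking_lin_vel(self):
        return 0.8

    def _reward_torques(self):
        return 100.0

    def _reward_collision(self):
        return 1.0  # never called: its scale is zero

env = ToyEnv(RewardScales())
total = env.compute_reward()  # 1.0 * 0.8 + (-0.0001) * 100.0 = 0.79
```

Because a zero scale drops the function from the list entirely, disabled terms cost nothing at runtime.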
### Usage ###
1. Train:
```python legged_gym/scripts/train.py --task=anymal_c_flat```
    - To run on CPU add the following arguments: `--sim_device=cpu`, `--rl_device=cpu` (sim on CPU and rl on GPU is possible).
- To run headless (no rendering) add `--headless`.
- **Important**: To improve performance, once the training starts press `v` to stop the rendering. You can then enable it later to check the progress.
    - The trained policy is saved in `isaacgym_anymal/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt`, where `<experiment_name>` and `<run_name>` are defined in the train config.
- The following command line arguments override the values set in the config files:
        - `--task TASK`: Task name.
        - `--resume`: Resume training from a checkpoint.
        - `--experiment_name EXPERIMENT_NAME`: Name of the experiment to run or load.
        - `--run_name RUN_NAME`: Name of the run.
        - `--load_run LOAD_RUN`: Name of the run to load when `resume=True`. If -1: will load the last run.
        - `--checkpoint CHECKPOINT`: Saved model checkpoint number. If -1: will load the last checkpoint.
        - `--num_envs NUM_ENVS`: Number of environments to create.
        - `--seed SEED`: Random seed.
        - `--max_iterations MAX_ITERATIONS`: Maximum number of training iterations.
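As a rough illustration of how such command-line overrides interact with config values, here is a minimal argparse sketch. This is hypothetical: the repository's actual argument handling lives in `helpers.py` and differs in detail, and the config dictionary below is a stand-in:

```python
import argparse

def parse_overrides(argv):
    # parse only a few of the override flags; a default of None means
    # "keep the value from the config file"
    parser = argparse.ArgumentParser()
    parser.add_argument("--task", default="anymal_c_flat")
    parser.add_argument("--num_envs", type=int, default=None)
    parser.add_argument("--seed", type=int, default=None)
    parser.add_argument("--max_iterations", type=int, default=None)
    return parser.parse_args(argv)

def apply_overrides(cfg, args):
    # overwrite a config field only when the corresponding flag was given
    for key in ("num_envs", "seed", "max_iterations"):
        value = getattr(args, key)
        if value is not None:
            cfg[key] = value
    return cfg

cfg = {"num_envs": 4096, "seed": 1, "max_iterations": 1500}
args = parse_overrides(["--num_envs", "2048", "--seed", "42"])
cfg = apply_overrides(cfg, args)  # num_envs and seed overridden, max_iterations kept
```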
2. Play a trained policy:
```python legged_gym/scripts/play.py --task=anymal_c_flat```
- By default, the loaded policy is the last model of the last run of the experiment folder.
    - Other runs/model iterations can be selected by setting `load_run` and `checkpoint` in the train config.
### Adding a new environment ###
The base environment `legged_robot` implements a rough-terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and has no reward scales.
1. Add a new folder to `envs/` with `<your_env>_config.py`, which inherits from an existing environment config.
2. If adding a new robot:
- Add the corresponding assets to `resources/`.
- In `cfg` set the asset path, define body names, default_joint_positions and PD gains. Specify the desired `train_cfg` and the name of the environment (python class).
- In `train_cfg` set `experiment_name` and `run_name`
3. (If needed) implement your environment in `<your_env>.py`, inherit from an existing environment, overwrite the desired functions and/or add your reward functions.
4. Register your env in `isaacgym_anymal/envs/__init__.py`.
5. Modify/Tune other parameters in your `cfg`, `cfg_train` as needed. To remove a reward set its scale to zero. Do not modify parameters of other envs!
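The nested-class config pattern these steps rely on can be sketched like this. The class and field names below are hypothetical stand-ins for illustration, not the real `LeggedRobotCfg` (which lives in `legged_robot_config.py`):

```python
class BaseCfgSketch:
    # stand-in for the real base config class
    class asset:
        file = ""  # path to the URDF/MJCF; empty in the base cfg
    class control:
        stiffness = {"joint": 20.0}  # PD gains
        damping = {"joint": 0.5}
    class rewards:
        class scales:
            torques = -0.0001

class MyRobotCfg(BaseCfgSketch):
    # a new robot overrides only what differs from the base
    class asset(BaseCfgSketch.asset):
        file = "resources/my_robot/urdf/my_robot.urdf"  # hypothetical path
    class rewards(BaseCfgSketch.rewards):
        class scales(BaseCfgSketch.rewards.scales):
            torques = 0.0  # a scale of zero removes the reward term
```

Because each nested class inherits from its counterpart in the base cfg, unspecified values (here, the PD gains under `control`) carry over unchanged while the overrides take effect, which is why modifying another env's parameters directly is unnecessary and discouraged.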
### Troubleshooting ###
1. If you get the following error: `ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory`, run `sudo apt install libpython3.8`. You may also need to point the loader at the directory containing the library, e.g. `export LD_LIBRARY_PATH=/path/to/libpython/directory`, or for conda users `export LD_LIBRARY_PATH=/path/to/conda/envs/your_env/lib` (replace `/path/to/` with the corresponding path).
### Known Issues ###
1. The contact forces reported by `net_contact_force_tensor` are unreliable when simulating on GPU with a triangle mesh terrain. A workaround is to use force sensors, but the forces are propagated through the sensors of consecutive bodies, resulting in undesirable behaviour. However, for a legged robot it is possible to add sensors to the feet/end effector only and get the expected results. When using the force sensors, make sure to exclude gravity from the reported forces by setting `sensor_options.enable_forward_dynamics_forces = False`. Example:
```python
sensor_pose = gymapi.Transform()
for name in feet_names:
    sensor_options = gymapi.ForceSensorProperties()
    sensor_options.enable_forward_dynamics_forces = False  # for example gravity
    sensor_options.enable_constraint_solver_forces = True  # for example contacts
    sensor_options.use_world_frame = True  # report forces in world frame (easier to get vertical components)
    index = self.gym.find_asset_rigid_body_index(robot_asset, name)
    self.gym.create_asset_force_sensor(robot_asset, index, sensor_pose, sensor_options)
(...)
sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
self.gym.refresh_force_sensor_tensor(self.sim)
force_sensor_readings = gymtorch.wrap_tensor(sensor_tensor)
self.sensor_forces = force_sensor_readings.view(self.num_envs, 4, 6)[..., :3]
(...)
self.gym.refresh_force_sensor_tensor(self.sim)
contact = self.sensor_forces[:, :, 2] > 1.
```