---
datasets:
- IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot
base_model:
- nvidia/Eagle2-2B
tags:
- vision-language-model
- manipulation
- robotics
pipeline_tag: robotics
---
# Model Card for InstructVLA LIBERO-Spatial

- `checkpoints`: the model checkpoint in `.pt` format

- `eval`: evaluation results from 3 random seeds

- `dataset_statistics.json`: the normalization statistics for the dataset (a usage sketch follows this list)
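
These statistics are typically used to map the model's normalized action outputs back to the raw action space at deployment time. Below is a minimal sketch of that step, assuming an OpenVLA-style layout with per-dimension `q01`/`q99` percentiles; the key names and shapes are assumptions, so check `dataset_statistics.json` for the actual schema.

```python
# Illustrative only: un-normalize a predicted action using dataset_statistics.json.
# The key layout ({dataset: {"action": {"q01": [...], "q99": [...]}}}) is an
# assumed OpenVLA-style convention, not confirmed by this card.
import json
import numpy as np

with open("dataset_statistics.json") as f:
    stats = json.load(f)

suite_stats = next(iter(stats.values()))        # e.g. the libero_spatial entry
q01 = np.asarray(suite_stats["action"]["q01"])  # per-dimension 1st percentile
q99 = np.asarray(suite_stats["action"]["q99"])  # per-dimension 99th percentile

def unnormalize(action_norm: np.ndarray) -> np.ndarray:
    """Map a normalized prediction in [-1, 1] back to the raw action range."""
    return 0.5 * (action_norm + 1.0) * (q99 - q01) + q01
```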

## Evaluation

```bash
#!/bin/bash

CKPT_LIST=(
  "path/to/checkpoints/step-018000-epoch-87-loss=0.0409.pt"
)

# Loop over the checkpoint list and GPUs
for i in "${!CKPT_LIST[@]}"; do
  GPU_ID=$((i % 8))  # Cycle through GPUs 0-7
  CHECKPOINT="${CKPT_LIST[$i]}"
  
  # Run the evaluation script for this checkpoint in the background
  # --use_length -1  : execute the ensembled action (recommended for this checkpoint)
  # --use_length >= 1: execute action_chunk[0:use_length]
  CUDA_VISIBLE_DEVICES=$GPU_ID python deploy/libero/run_libero_eval.py \
    --model_family instruct_vla \
    --pretrained_checkpoint "$CHECKPOINT" \
    --task_suite_name libero_spatial \
    --local_log_dir Libero/release_ensemble \
    --use_length -1 \
    --center_crop True &

  # Stagger launches to avoid startup contention
  sleep 5
done

# Wait for all background jobs to finish
wait

```
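
The `--use_length` flag controls how each predicted action chunk is consumed at execution time. The sketch below is purely illustrative (not the repository's implementation) and assumes the policy returns a chunk of shape `(chunk_len, action_dim)` at every control step:

```python
# Illustrative sketch of the two --use_length modes; names are hypothetical.
from collections import deque
import numpy as np

def execute_chunk_prefix(action_chunk: np.ndarray, use_length: int):
    """--use_length >= 1: execute action_chunk[0:use_length] open-loop."""
    for action in action_chunk[:use_length]:
        yield action  # env.step(action) in the real rollout loop

class TemporalEnsemble:
    """--use_length == -1: average overlapping predictions for the current step."""
    def __init__(self, chunk_len: int):
        self.history = deque(maxlen=chunk_len)  # most recent chunks, newest last

    def step(self, action_chunk: np.ndarray) -> np.ndarray:
        self.history.append(np.asarray(action_chunk))
        # The chunk predicted i steps ago contributes its i-th action to "now".
        preds = [chunk[i] for i, chunk in enumerate(reversed(self.history))]
        return np.mean(preds, axis=0)
```

With the ensembled mode, each environment step averages the actions that different past chunks predicted for that step, which typically yields smoother execution than replaying a chunk prefix open-loop.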