
Commit 2dff71a ("Update readme", parent 9a354c2)

File tree: 4 files changed (+28, -11 lines)


README.md

Lines changed: 26 additions & 8 deletions
````diff
@@ -1,25 +1,43 @@
 # 🤔 ETA
 
-## Highlight
-We propose "Efficiency through Thinking Ahead" (ETA), an asynchronous dual-system that pre-processes information from past frames using a large model in tandem with processing the current information with a small model to enable real-time decisions with strong performance.
+## Highlight <a name="highlight"></a>
+We propose "**E**fficiency through **T**hinking **A**head" (ETA), an asynchronous dual-system that pre-processes information from past frames using a large model in tandem with processing the current information with a small model to enable real-time decisions with strong performance.
 
 <img width="800" alt="CarFormer overview" src="assets/overview.png">
 
-## Abstract
-How can we benefit from large models without sacrificing inference speed, a common dilemma in self-driving systems? A prevalent solution is a dual-system architecture, employing a small model for rapid, reactive decisions and a larger model for slower but more informative analyses. Existing dual-system designs often implement parallel architectures where inference is either directly conducted using the large model at each current frame or retrieved from previously stored inference results. However, these works still struggle to enable large models for a timely response to every online frame. Our key insight is to shift intensive computations of the current frame to previous time steps and perform a batch inference of multiple time steps to make large models respond promptly to each time step. To achieve the shifting, we introduce Efficiency through Thinking Ahead (ETA), an asynchronous system designed to: (1) propagate informative features from the past to the current frame using future predictions from the large model, (2) extract current frame features using a small model for real-time responsiveness, and (3) integrate these dual features via an action mask mechanism that emphasizes action-critical image regions. Evaluated on the Bench2Drive CARLA Leaderboard-v2 benchmark, ETA advances state-of-the-art performance by 8% with a driving score of 69.53 while maintaining a near-real-time inference speed at 50 ms.
+## News <a name="news"></a>
+- **`[2025/06/10]`** [ETA](https://arxiv.org/abs/2506.07725) paper and code release!
 
 ## Results
 <img width="800" alt="CarFormer overview" src="assets/results.png">
 
-## Training and Inference
+
+## Table of Contents
+1. [Highlights](#highlight)
+2. [News](#news)
+3. [Results](#results)
+4. [Getting Started](#gettingstarted)
+   - [Training](docs/TRAIN_EVAL.md#trainingsetup)
+   - [Evaluation](docs/TRAIN_EVAL.md#evalsetup)
+5. [TODO List](#todolist)
+6. [License and Citation](#licenseandcitation)
+7. [Other Resources](#otherresources)
+
+## Getting Started <a name="gettingstarted"></a>
 
 Please refer to [TRAIN_EVAL.md](docs/TRAIN_EVAL.md) for detailed instructions on how to train and evaluate the model.
 
+## TODO List <a name="todolist"></a>
+- [x] ETA Training code
+- [x] ETA Evaluation
+- [x] Inference Code
+- [ ] Checkpoints
+
 ## Acknowledgements
 
-This codebase builds on open-source code from [carla_garage]([email protected]:autonomousvision/carla_garage.git) among others. We thank the authors for their contributions. This project is funded by the European Union (ERC, ENSURE, 101116486) with additional compute support from Leonardo Booster (EuroHPC Joint Undertaking, EHPC-AI-2024A01-060). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This study is also supported by the National Natural Science Foundation of China (62206172) and the Shanghai Committee of Science and Technology (23YF1462000).
+This codebase builds on open-source code from [CARLA Garage]([email protected]:autonomousvision/carla_garage.git) and [Bench2DriveZoo](https://github.com/Thinklab-SJTU/Bench2DriveZoo/) among others. We thank the authors for their contributions. This project is funded by the European Union (ERC, ENSURE, 101116486) with additional compute support from Leonardo Booster (EuroHPC Joint Undertaking, EHPC-AI-2024A01-060). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This study is also supported by the National Natural Science Foundation of China (62206172) and the Shanghai Committee of Science and Technology (23YF1462000).
 
-## Citation
-If you find our project useful for your research, please consider citing our paper with the following BibTeX:
+## License and Citation <a name="licenseandcitation"></a>
+This project is released under the [MIT License](LICENSE). If you find our project useful for your research, please consider citing our paper with the following BibTeX:
 
 
 ```bibtex
````
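The asynchronous dual-system the README describes (a slow large model "thinking ahead" on past frames while a fast small model handles the current frame) can be sketched as a background worker plus a fast path. The class below is a minimal illustrative stand-in: `large_model`, `small_model`, and `fuse` are hypothetical callables, not the actual ETA implementation.

```python
import threading
import queue

class AsyncDualSystem:
    """Sketch of an asynchronous dual-system: a slow 'large model'
    pre-processes past frames off the critical path, while a fast
    'small model' handles the current frame; features are fused."""

    def __init__(self, large_model, small_model, fuse):
        self.large_model = large_model  # slow, runs on past frames
        self.small_model = small_model  # fast, runs on the current frame
        self.fuse = fuse                # combines past and current features
        self.past_features = None       # latest large-model output, if any
        self._jobs = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            frame = self._jobs.get()
            # Heavy computation happens in the background, so its
            # result is ready for a future time step, not this one.
            self.past_features = self.large_model(frame)

    def step(self, frame):
        # Fast path: small model on the current frame only.
        current = self.small_model(frame)
        action = self.fuse(self.past_features, current)
        # Hand the current frame to the large model for later steps.
        self._jobs.put(frame)
        return action
```

At the very first step no large-model output exists yet, so `fuse` must tolerate a missing past-feature input; in steady state each step fuses fresh small-model features with slightly stale large-model features.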

assets/overview.png

-78.9 KB

carformer/train.py

Lines changed: 0 additions & 1 deletion
````diff
@@ -17,7 +17,6 @@ def main(cfg):
 
     args = cfg
 
-    # NCCL_DEBUG=WARN HF_ENDPOINT=https://hf-mirror.com OMP_NUM_THREADS=6 python train_hydra_ds_lightning.py deepspeed=zero2 hyperparams.batch_size=16 hyperparams.num_epochs=40 user=shadihamdan wandb_tag=finalrun_v6dlc_basedata_llava1pt6 training/bev/[email protected]_backbone=llava1pt6 dataset.dataset_path_rel=B2D-base gpus=8 hyperparams.gradient_accumulation_steps=2 training.parallel_dataset_init=False 'ckpt_path="/cpfs01/user/shadihamdan/research/Carformer/checkpoints/TRAINING/bs=16_gradient_accumulation_steps=2_eps=40_training_bev_rgb_backbonergb_backbone=llava1pt6_wnb=finalrun_v6dlc_basedata_llava1pt6_data=B2D-base_bev=rgb_front/2024-08-20_18-53-38/last.ckpt"'
     if args.ckpt_path is not None:
         print("Loading model from checkpoint")
         cfg.save_dir = os.path.dirname(args.ckpt_path)
````
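The resume behavior this hunk leaves in place (writing into the directory the checkpoint was loaded from) can be isolated as a small helper; `resume_save_dir` is a hypothetical name used here for illustration, not a function in the repo:

```python
import os

def resume_save_dir(ckpt_path):
    """Return the directory to keep saving into when resuming from a
    checkpoint, or None for a fresh run (caller picks a new save dir)."""
    if ckpt_path is None:
        return None
    # Resumed runs write next to the checkpoint they loaded, mirroring
    # cfg.save_dir = os.path.dirname(args.ckpt_path) in train.py.
    return os.path.dirname(ckpt_path)
```

This keeps resumed runs appending to their original output directory instead of scattering checkpoints across new timestamped folders.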

docs/TRAIN_EVAL.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -1,4 +1,4 @@
-## Training Setup
+## Training Setup <a name="trainingsetup"></a>
 ### Bench2Drive data
 Download the [Bench2Drive](https://github.com/Thinklab-SJTU/Bench2Drive) dataset and unzip all the directories.
 
@@ -30,7 +30,7 @@ make ETA_base_model_s42
 make ETA_async_model_s42
 ```
 
-## Evaluation Setup
+## Evaluation Setup <a name="evalsetup"></a>
 
 ### Bench2Drive
 
````
