# JaxNeRF

This is a [JAX](https://github.com/google/jax)-[Flax](https://github.com/google/flax) implementation of
[Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis](https://www.ajayj.com/dietnerf).
This code is created and maintained by
[Boyang Deng](https://boyangdeng.com/),
[Jonathan T. Barron](https://jonbarron.info/),
and [Seunghyun Lee](https://github.com/sseung0703/sseung0703).

<div align="center">
<img width="95%" alt="NeRF Teaser" src="https://raw.githubusercontent.com/bmild/nerf/master/imgs/pipeline.jpg">
</div>

Our JAX-Flax implementation currently supports:

<table class="tg">
<thead>
<tr>
<th class="tg-0lax"><span style="font-weight:bold">Platform</span></th>
<th class="tg-0lax" colspan="2"><span style="font-weight:bold">Single-Host GPU</span></th>
<th class="tg-0lax" colspan="2"><span style="font-weight:bold">Multi-Device TPU</span></th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0lax"><span style="font-weight:bold">Type</span></td>
<td class="tg-0lax">Single-Device</td>
<td class="tg-0lax">Multi-Device</td>
<td class="tg-0lax">Single-Host</td>
<td class="tg-0lax">Multi-Host</td>
</tr>
<tr>
<td class="tg-0lax"><span style="font-weight:bold">Training</span></td>
<td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td>
<td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td>
<td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td>
<td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td>
</tr>
<tr>
<td class="tg-0lax"><span style="font-weight:bold">Evaluation</span></td>
<td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td>
<td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td>
<td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td>
<td class="tg-0lax"><img src="http://storage.googleapis.com/gresearch/jaxnerf/check.png" alt="Supported" width=18px height=18px></td>
</tr>
</tbody>
</table>

Training for 1 million optimization steps on 128 TPUv2 cores takes **2.5 hours (vs. 3 days for TF NeRF)**. In other words, JaxNeRF matches the best reported quality while training far faster.

As for inference speed, here are the statistics for rendering an 800x800 image (numbers are averaged over 50 rendering passes):

| Platform | 1 x NVIDIA V100 | 8 x NVIDIA V100 | 128 x TPUv2 |
|----------|:---------------:|:---------------:|:-----------:|
| TF NeRF  | 27.74 secs | <img src="http://storage.googleapis.com/gresearch/jaxnerf/cross.png" alt="Not Supported" width=18px height=18px> | <img src="http://storage.googleapis.com/gresearch/jaxnerf/cross.png" alt="Not Supported" width=18px height=18px> |
| JaxNeRF  | 20.77 secs | 2.65 secs | 0.35 secs |

## Installation
We recommend using [Anaconda](https://www.anaconda.com/products/individual) to set
up the environment. Commands along the following lines should work (the Python version and requirements file are assumptions; adjust them to your setup):
```
# Create and activate a conda environment.
conda create --name jaxnerf python=3.6; conda activate jaxnerf
# Make sure pip is available and up to date.
conda install pip; pip install --upgrade pip
# Install the dependencies (assumes a requirements.txt inside the jaxnerf folder).
pip install -r jaxnerf/requirements.txt
```

Then, you'll need to download the datasets
from the [NeRF official Google Drive](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1).
Please download `nerf_synthetic.zip` and `nerf_llff_data.zip` and unzip them
wherever you like. Let's assume they are placed under `/tmp/jaxnerf/data/`.

That's it for installation. You're good to go. **Notice:** For the following instructions, you don't need to enter the `jaxnerf` folder; just stay in its parent folder.

## Two Commands for Everything

```
bash jaxnerf/train.sh demo /tmp/jaxnerf/data
bash jaxnerf/eval.sh demo /tmp/jaxnerf/data
```

Once both jobs are done running (which may take a while if you only have one GPU
or a CPU), you'll have a folder, `/tmp/jaxnerf/data/demo`, with:

* Trained NeRF models for all scenes in the blender dataset.
* Rendered images and depth maps for all test views.
* The collected PSNRs of all scenes in a TXT file.

Note that we used the `demo` config here, which is basically the `blender` config
from the paper, except with a smaller batch size and far fewer training steps. Of course,
you can replace `demo` with other configs and `/tmp/jaxnerf/data` with other data locations.

We provide two configurations in the `configs` folder that match the original
configurations used in the paper for the blender dataset and the LLFF dataset.
Be careful when you use them: their batch sizes are large, so you may hit out-of-memory errors on limited hardware (for example, a single GPU with little memory), and they run for many training steps, so training all scenes may take days.

## Play with One Scene

You can also train NeRF on just one scene. The easiest way is to use the provided configs:

```
python -m jaxnerf.train \
  --data_dir=/PATH/TO/YOUR/SCENE/DATA \
  --train_dir=/PATH/TO/THE/PLACE/YOU/WANT/TO/SAVE/CHECKPOINTS \
  --config=configs/CONFIG_YOU_LIKE
```

Evaluating NeRF on one scene is similar:

```
python -m jaxnerf.eval \
  --data_dir=/PATH/TO/YOUR/SCENE/DATA \
  --train_dir=/PATH/TO/THE/PLACE/YOU/SAVED/CHECKPOINTS \
  --config=configs/CONFIG_YOU_LIKE \
  --chunk=4096
```

The `chunk` parameter defines how many rays are fed to the model in one go.
We recommend the largest value that fits in your device's memory; smaller values are fine too, just a bit slower.
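
For intuition, here is a minimal sketch of what chunking does; the function and variable names below are illustrative, not the actual API:

```python
import numpy as np

def render_in_chunks(render_fn, rays, chunk=4096):
    # Split the full set of rays into slices of at most `chunk` rays,
    # render each slice independently, then stitch the results together.
    outputs = [render_fn(rays[i:i + chunk])
               for i in range(0, rays.shape[0], chunk)]
    return np.concatenate(outputs, axis=0)
```

Peak memory then scales with `chunk` rather than with the full image, which is why any value that fits in memory works.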

You can also define your own configurations by passing command line flags. Please refer to the `define_flags` function in `nerf/utils.py` for all the flags and their meanings.

**Note**: For the ficus scene in the blender dataset, we noticed that training is sensitive to the initialization (e.g., the random seed) when using the original learning rate schedule from the paper.
Therefore, we provide a simple tweak (turned off by default) for more stable training: `lr_delay_steps` and `lr_delay_mult`.
These let training start from a smaller learning rate (`lr_init` * `lr_delay_mult`) during the first `lr_delay_steps` steps.
We didn't use them for our pretrained models,
but we tested `lr_delay_steps=5000` with `lr_delay_mult=0.2` and training went quite smoothly.
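
Below is a minimal sketch of one way such a delayed schedule can be implemented: a warmup multiplier ramps from `lr_delay_mult` up to 1 over the first `lr_delay_steps` steps and is applied on top of the usual log-linear decay (treat this as an illustration; see the learning rate code in `nerf/utils.py` for the exact schedule):

```python
import numpy as np

def delayed_lr(step, lr_init, lr_final, max_steps,
               lr_delay_steps=5000, lr_delay_mult=0.2):
    # Warmup multiplier: starts at lr_delay_mult and eases up to 1.0
    # over the first lr_delay_steps steps (and stays at 1.0 afterwards).
    if lr_delay_steps > 0:
        delay = lr_delay_mult + (1 - lr_delay_mult) * np.sin(
            0.5 * np.pi * np.clip(step / lr_delay_steps, 0, 1))
    else:
        delay = 1.0
    # Standard log-linear decay from lr_init to lr_final over max_steps.
    t = np.clip(step / max_steps, 0, 1)
    lr = np.exp((1 - t) * np.log(lr_init) + t * np.log(lr_final))
    return delay * lr
```
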
## Pretrained Models

We provide a collection of pretrained NeRF models that match the numbers
reported in the [paper](https://arxiv.org/abs/2003.08934). Ours are actually
slightly better overall because we trained for more iterations (while still
being much faster!). You can find our pretrained models
[here](http://storage.googleapis.com/gresearch/jaxnerf/jaxnerf_pretrained_models.zip).
The performance (in PSNR) of our pretrained NeRF models is listed below:

### Blender
| Scene   | Chair | Drums | Ficus | Hotdog | Lego | Materials | Mic | Ship | Mean |
|---------|:-----:|:-----:|:-----:|:------:|:----:|:---------:|:---:|:----:|:----:|
| TF NeRF | 33.00 | 25.01 | 30.13 | 36.18 | 32.54 | 29.62 | 32.91 | 28.65 | 31.01 |
| JaxNeRF | **34.08** | **25.03** | **30.43** | **36.92** | **33.28** | **29.91** | **34.53** | **29.36** | **31.69** |

#### Demo video
- Lego
- Chair
- Drums
- Ship
- Hotdog

- Lego-occlusion case

## Citation
If you use this software package, please cite it as:

```
@software{jaxnerf2020github,
  author = {Boyang Deng and Jonathan T. Barron and Pratul P. Srinivasan},
  title = {{JaxNeRF}: an efficient {JAX} implementation of {NeRF}},
  url = {https://github.com/google-research/google-research/tree/master/jaxnerf},
  version = {0.0},
  year = {2020},
}
```

and also cite the DietNeRF paper that this code implements:

```
@misc{jain2021putting,
  title={Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis},
  author={Ajay Jain and Matthew Tancik and Pieter Abbeel},
  year={2021},
  eprint={2104.00677},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

## Acknowledgement