---
license: mit
library_name: transformers
pipeline_tag: text-generation
---
We introduce LLaDA (<b>L</b>arge <b>La</b>nguage <b>D</b>iffusion with m<b>A</b>sking), a diffusion language model trained entirely from scratch at an unprecedented 8B scale, rivaling LLaMA3 8B in performance, as described in [the paper](https://hf.co/papers/2502.09992).
Project page: https://ml-gsai.github.io/LLaDA-demo/.
For code and sample usage, see https://github.com/ML-GSAI/SMDM.
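
Below is a minimal loading sketch using the standard `transformers` API. The Hub model id `GSAI-ML/LLaDA-8B-Instruct` and the use of `trust_remote_code=True` are assumptions for illustration; LLaDA generates text with a masked-diffusion sampling loop rather than the usual autoregressive `generate` API, so refer to the GitHub repository above for the actual sampling code.

```python
# Minimal sketch (assumptions: Hub id GSAI-ML/LLaDA-8B-Instruct, custom model code
# loaded via trust_remote_code=True). See https://github.com/ML-GSAI/SMDM for the
# real diffusion sampling implementation.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "GSAI-ML/LLaDA-8B-Instruct"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).eval()

# Tokenize a prompt; text generation itself uses LLaDA's masked-diffusion sampler
# from the project code, not an autoregressive decoding loop.
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
print(inputs["input_ids"].shape)
```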