Update README.md
README.md CHANGED

@@ -7,7 +7,9 @@ language:
 pipeline_tag: automatic-speech-recognition
 ---
 
-#
+# Splitformer
+
+---
 
 <div align="center" style="line-height: 1;">
 <a href="https://github.com/augustgw/early-exit-transformer" target="_blank" style="margin: 2px;">
@@ -21,7 +23,9 @@ pipeline_tag: automatic-speech-recognition
 </a>
 </div>
 
-
+## Overview
+
+**Splitformer** is a 36.7M-parameter Conformer-based ASR model trained from scratch on 1000 hours of the LibriSpeech dataset with an early-exit objective.
 
 This architecture introduces parallel downsampling layers before the first and last exits to improve performance with minimal extra overhead, while retaining inference speed.
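The overview added above describes an early-exit training objective with parallel downsampling layers placed before the first and last exits. The sketch below is a minimal PyTorch illustration of that idea only, not the repository's actual code: the layer count, exit placement, CTC-style per-exit heads, the fusion by addition, and the `ParallelDownsamplingBranch` / `EarlyExitEncoder` names are all assumptions introduced here for illustration. The real implementation lives in the linked early-exit-transformer repository.

```python
# Illustrative sketch only, under the assumptions stated in the text above.
import torch
import torch.nn as nn


class ParallelDownsamplingBranch(nn.Module):
    """Strided temporal convolution run in parallel with the main encoder path.

    Its downsampled output is brought back to the original length and added to
    the main features right before an exit, adding context at little extra cost.
    """

    def __init__(self, d_model: int, stride: int = 2):
        super().__init__()
        self.down = nn.Conv1d(d_model, d_model, kernel_size=3, stride=stride, padding=1)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model)
        t = x.size(1)
        y = self.act(self.down(x.transpose(1, 2)))   # (batch, d_model, time // stride)
        y = nn.functional.interpolate(y, size=t)     # back to (batch, d_model, time)
        return x + y.transpose(1, 2)                 # fuse with the main path


class EarlyExitEncoder(nn.Module):
    """Encoder with intermediate exits; downsampling branches sit before the first and last exit."""

    def __init__(self, d_model=256, n_layers=12, n_heads=4, vocab_size=1000,
                 exit_layers=(2, 4, 6, 8, 10, 12)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                       batch_first=True)
            for _ in range(n_layers)
        )
        self.exit_layers = exit_layers
        self.exit_heads = nn.ModuleList(nn.Linear(d_model, vocab_size) for _ in exit_layers)
        # Parallel downsampling only before the first and the last exit.
        self.branches = nn.ModuleDict({
            str(exit_layers[0]): ParallelDownsamplingBranch(d_model),
            str(exit_layers[-1]): ParallelDownsamplingBranch(d_model),
        })

    def forward(self, feats: torch.Tensor):
        # feats: (batch, time, d_model) acoustic features
        logits_per_exit = []
        x = feats
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x)
            if i in self.exit_layers:
                h = self.branches[str(i)](x) if str(i) in self.branches else x
                head = self.exit_heads[self.exit_layers.index(i)]
                logits_per_exit.append(head(h))
        return logits_per_exit  # one logit tensor per exit


if __name__ == "__main__":
    model = EarlyExitEncoder()
    dummy = torch.randn(2, 200, 256)   # (batch, frames, d_model)
    exits = model(dummy)
    # An early-exit objective sums a decoding loss (e.g. CTC) over every exit,
    # so shallow exits learn to produce usable transcriptions on their own.
    print([e.shape for e in exits])
```

At inference time, such a model can stop at the shallowest exit whose output is good enough, which is how the architecture keeps inference fast while the extra branches improve the first and last exits.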