Moved legend to bottom
README.md
_above models sorted by number of capabilities (lazy method - character count)_
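
The "lazy method" can be read as: rank each row by how many characters its capability cells contain. A minimal sketch, assuming made-up rows rather than the tracker's real data:

```python
# Hypothetical illustration of the "lazy method": rank models by the total
# character count of their capability cells. The rows below are made-up
# placeholders, not actual entries from the tracker table.
rows = [
    ("Model A", "CUDA | IPA | 👥"),
    ("Model B", "CPU, CUDA | IPA, ARPAbet | 👥 | 🎭📖 | 🌊 | 📜"),
]

# Longer capability strings sort first, i.e. "more capable" models on top.
for name, capabilities in sorted(rows, key=lambda r: len(r[1]), reverse=True):
    print(f"{name}: {len(capabilities)} characters")
```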
Cloned the GitHub repo for easier viewing and embedding of the above table, as once requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525

---

# 🗣️ Open TTS Tracker
| xVASynth | [Repo](https://github.com/DanRuta/xVA-Synth) | [Hub](https://huggingface.co/Pendrokar/xvapitch) | [GPL-3.0](https://github.com/DanRuta/xVA-Synth/blob/master/LICENSE.md) | [Yes](https://github.com/DanRuta/xva-trainer) | Multilingual | [Papers](https://huggingface.co/Pendrokar/xvapitch) | [🤗 Space](https://huggingface.co/spaces/Pendrokar/xVASynth) | Base model trained on copyrighted materials. |
* *Multilingual* - The number of supported languages keeps changing; check the Space and the Hub to see which specific languages are supported
* *ALL* - Supports all natural languages; may not support artificial/constructed languages

---

## Legend

for the [above](#) TTS capability table

* Processor ⚡ - Inference done by
  * CPU (CPU**s** = multithreaded) - All models can be run on a CPU, so the real-time factor should be below 2.0 to qualify for the CPU tag, though some leeway can be given if the model supports audio streaming (see the real-time-factor sketch after this list)
  * CUDA by *NVIDIA*™
  * ROCm by *AMD*™
* Phonetic alphabet 🔤 - Phonetic transcription that allows controlling the pronunciation of words before inference (see the phonemization sketch after this list)
  * [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) - International Phonetic Alphabet
  * [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - American-English-focused phonetics
* Insta-clone 👥 - Zero-shot model for quick voice cloning
* Emotion control 🎭 - Able to force an emotional state of the speaker
  * 🎭 <# emotions> ( 😡 anger; 😃 happiness; 😭 sadness; 😯 surprise; 🤫 whispering; 😊 friendliness )
  * strict insta-clone switch 🎭👥 - cloned from a sample with a specific emotion; may sound different from the normal speaking voice; no ability to blend between states
  * strict control through prompt 🎭📖 - prompt input parameter
* Prompting 📖 - Also a side effect of narrator-based datasets and a way to affect the emotional state
  * 📖 - Prompt as a separate input parameter
  * 🗣📖 - The prompt itself is also spoken by the TTS; [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)
* Streaming support 🌊 - Can play back audio while it is still being generated
* Speech control 🎚 - Ability to change the pitch, duration, etc. of the generated speech
* Voice conversion / Speech-To-Speech support 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
* Longform synthesis 📜 - Able to synthesize whole paragraphs
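
A rough sketch of how the real-time-factor threshold in the CPU bullet could be checked; the `synthesize` callable and the numbers in the final comment are hypothetical, not taken from any listed model:

```python
import time

def real_time_factor(synthesize, text: str) -> float:
    """Return processing time divided by the duration of the generated audio.

    `synthesize` is a stand-in for any TTS call and is assumed to return the
    length of the produced clip in seconds; an RTF below 2.0 would qualify a
    model for the CPU tag used in the legend above.
    """
    start = time.perf_counter()
    audio_seconds = synthesize(text)
    elapsed_seconds = time.perf_counter() - start
    return elapsed_seconds / audio_seconds

# Made-up numbers: 9 s of processing for a 6 s clip gives RTF = 9 / 6 = 1.5,
# which would still fall under the 2.0 threshold mentioned above.
```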
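
To make the phonetic-alphabet column concrete, here is a minimal IPA transcription sketch, assuming the open-source `phonemizer` package with an espeak backend; it is an illustration only, not a dependency of any tracked model:

```python
# Minimal sketch of phonetic transcription before inference, assuming the
# `phonemizer` package and an installed espeak backend; the tracked models
# are not implied to use this exact tool.
from phonemizer import phonemize

ipa = phonemize("tomato", language="en-us", backend="espeak", strip=True)
print(ipa)  # roughly "təmeɪɾoʊ"; the exact symbols depend on the backend
```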
Example of how the proprietary ElevenLabs would look if it were added to the capabilities table:

| Name | Processor<br>⚡ | Phonetic alphabet<br>🔤 | Insta-clone<br>👥 | Emotional control<br>🎭 | Prompting<br>📖 | Speech control<br>🎚 | Streaming support<br>🌊 | Voice conversion<br>🦜 | Longform synthesis<br>📜 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ElevenLabs | CUDA | IPA, ARPAbet | 👥 | 🎭📖 | 🗣📖 | 🎚 stability, voice similarity | 🌊 | 🦜 | 📜 Projects |

More info on the capabilities table can be found in the [GitHub Issue](https://github.com/Vaibhavs10/open-tts-tracker/issues/14).

Please create pull requests to update the info on the models.

---