Inference Providers
Active filters: vllm
Each entry lists: model • pipeline task • parameter count • downloads • likes.

unsloth/gpt-oss-20b-BF16 • Text Generation • 21B • 56k downloads • 20 likes
unsloth/gpt-oss-20b • Text Generation • 22B • 59.4k downloads • 29 likes
lmstudio-community/gpt-oss-120b-MLX-8bit • Text Generation • 117B • 159k downloads • 10 likes
NexVeridian/gpt-oss-20b-4bit • Text Generation • 21B • 1.19k downloads • 3 likes
NexVeridian/gpt-oss-20b-6bit • Text Generation • 21B • 282 downloads • 1 like
inferencerlabs/openai-gpt-oss-120b-MLX-6.5bit • Text Generation • 117B • 1.7k downloads • 2 likes
mradermacher/gpt-oss-120b-GGUF
Jinx-org/Jinx-DeepSeek-R1-0528 • Text Generation • 684B • 29 downloads • 2 likes
RedHatAI/gpt-oss-120b-FP8-dynamic • Text Generation • 117B • 1.86k downloads • 5 likes
unsloth/Seed-OSS-36B-Instruct • Text Generation • 36B • 143 downloads • 2 likes
mlx-community/gpt-oss-20b-MXFP4-Q4 • Text Generation • 21B • 650 downloads • 3 likes
mlx-community/gpt-oss-120b-MXFP4-Q4 • Text Generation • 117B • 499 downloads • 2 likes
Sci-fi-vy/gpt-oss-20b-GGUF • Text Generation • 21B • 12.2k downloads • 1 like
RedHatAI/gpt-oss-120b • Text Generation • 120B • 22 downloads • 1 like
Jackmin108/gpt-oss-0.5B • Text Generation • 5 downloads • 1 like
mradermacher/gpt-oss-0.5B-GGUF • 0.5B • 1.66k downloads • 1 like
Inferless/deciLM-7B-GPTQ • Text Generation • 8 downloads • 1 like
Inferless/SOLAR-10.7B-Instruct-v1.0-GPTQ • Text Generation • 8 downloads • 2 likes
Inferless/Mixtral-8x7B-v0.1-int8-GPTQ • Text Generation • 11 downloads • 2 likes
mistralai/Mixtral-8x22B-v0.1 • 141B • 5.69k downloads • 229 likes
RedHatAI/Meta-Llama-3-8B-Instruct-FP8 • Text Generation • 8B • 6.07k downloads • 24 likes
RedHatAI/Mixtral-8x7B-Instruct-v0.1-AutoFP8 • Text Generation • 47B • 16 downloads • 3 likes
RedHatAI/Meta-Llama-3-8B-Instruct-FP8-KV • Text Generation • 8B • 11.8k downloads • 8 likes
RedHatAI/Meta-Llama-3-70B-Instruct-FP8 • Text Generation • 71B • 2.26k downloads • 13 likes
RedHatAI/Qwen2-72B-Instruct-FP8 • Text Generation • 73B • 1.6k downloads • 15 likes
mradermacher/Mistral-7B-Instruct-v0.3-GGUF • 7B • 299 downloads • 2 likes
mradermacher/Mistral-7B-Instruct-v0.3-i1-GGUF • 7B • 162 downloads • 1 like
RedHatAI/Mixtral-8x22B-Instruct-v0.1-AutoFP8 • Text Generation • 141B • 13 downloads • 3 likes
RedHatAI/Qwen2-0.5B-Instruct-FP8 • Text Generation • 0.5B • 1.85k downloads • 3 likes
RedHatAI/Qwen2-1.5B-Instruct-FP8 • Text Generation • 2B • 11.2k downloads
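Since the listing above is filtered for vLLM compatibility, a minimal offline-inference sketch may help show how one of these models could be loaded locally. The choice of unsloth/gpt-oss-20b, the prompt, and the sampling settings are illustrative assumptions rather than part of the listing; only the standard `vllm` Python package and its `LLM`/`SamplingParams` API are assumed.

```python
# Minimal vLLM offline-inference sketch (illustrative; the model choice is an assumption).
from vllm import LLM, SamplingParams

# Any text-generation model from the listing above could be substituted here.
llm = LLM(model="unsloth/gpt-oss-20b")

# Hypothetical prompt and sampling settings, for demonstration only.
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

outputs = llm.generate(
    ["Explain what MXFP4 quantization is in one paragraph."],
    sampling,
)
for out in outputs:
    print(out.outputs[0].text)
```

Quantized variants in the list (GGUF, MLX, GPTQ, FP8) may need a backend or hardware that supports their format, so the smaller BF16 or unquantized checkpoints are usually the simplest starting point for this kind of local test.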