# gemma-3-270m-gguf

Text Generation · GGUF · gguf-connector
License: gemma
Base model from Google (google/gemma-3-270m). Tested with gguf-connector and a nightly build of llama-cpp-python.
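Below is a minimal loading sketch with llama-cpp-python. The file-name glob and generation parameters are assumptions, not taken from this card; check the repo's file list for the exact names.

```python
# A minimal sketch, assuming llama-cpp-python (nightly, as the card notes)
# and huggingface_hub are installed:
#   pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Download one of the quantized files from this repo and load it.
# The "*Q4_K_M.gguf" glob is an assumption — verify against the
# actual file names in the repo.
llm = Llama.from_pretrained(
    repo_id="gguf-org/gemma-3-270m-gguf",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,    # context window
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about GGUF."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```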
Downloads last month: 10,538
Model size: 268M params (GGUF)
Architecture: gemma3
## Available quantizations

| Bits   | Quant     | Size    |
|--------|-----------|---------|
| 1-bit  | IQ1_S     | 237 MB  |
| 1-bit  | IQ1_M     | 237 MB  |
| 2-bit  | IQ2_XXS   | 237 MB  |
| 2-bit  | IQ2_XS    | 237 MB  |
| 2-bit  | IQ2_S     | 237 MB  |
| 2-bit  | Q2_K_S    | 237 MB  |
| 2-bit  | Q2_K      | 237 MB  |
| 3-bit  | IQ3_XXS   | 237 MB  |
| 3-bit  | IQ3_S     | 237 MB  |
| 3-bit  | Q3_K_S    | 237 MB  |
| 3-bit  | Q3_K_M    | 242 MB  |
| 3-bit  | Q3_K_L    | 246 MB  |
| 4-bit  | IQ4_XS    | 241 MB  |
| 4-bit  | IQ4_NL    | 242 MB  |
| 4-bit  | MXFP4_MOE | 292 MB  |
| 4-bit  | Q4_0      | 241 MB  |
| 4-bit  | Q4_1      | 248 MB  |
| 4-bit  | Q4_K_S    | 250 MB  |
| 4-bit  | Q4_K_M    | 253 MB  |
| 5-bit  | Q5_0      | 254 MB  |
| 5-bit  | Q5_1      | 260 MB  |
| 5-bit  | Q5_K_S    | 258 MB  |
| 5-bit  | Q5_K_M    | 260 MB  |
| 6-bit  | Q6_K      | 283 MB  |
| 8-bit  | Q8_0      | 292 MB  |
| 16-bit | BF16      | 543 MB  |
| 16-bit | F16       | 543 MB  |
| 32-bit | F32       | 1.08 GB |
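Most of the low-bit files bottom out around 237 MB. A quick back-of-the-envelope check (not part of the model card) of effective bits per weight, assuming decimal megabytes and the 268M parameter count listed above:

```python
# Effective bits per weight for a few files in the table, using the
# 268M parameter count from the metadata. Decimal MB (1 MB = 1e6 bytes)
# is an assumption about how the Hub reports sizes.
N_PARAMS = 268e6

SIZES_MB = {"IQ1_S": 237, "Q4_K_M": 253, "Q8_0": 292, "F16": 543}

for name, size_mb in SIZES_MB.items():
    bits_per_weight = size_mb * 1e6 * 8 / N_PARAMS
    print(f"{name}: ~{bits_per_weight:.1f} effective bits/weight")
```

That IQ1_S lands near 7 effective bits per weight despite being a 1-bit quant suggests a fixed-size, higher-precision component (most likely the embedding table) dominates this small model, which would explain why the low-bit files all converge to roughly 237 MB.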
## Model tree for gguf-org/gemma-3-270m-gguf

- Base model: google/gemma-3-270m
- Quantized (22), including this model
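The table above lists quant types but not exact file names; a small sketch to enumerate them from the Hub, assuming huggingface_hub is installed:

```python
# List the GGUF files actually present in this repo, so you know the
# exact filename to pass to hf_hub_download or Llama.from_pretrained.
from huggingface_hub import list_repo_files

files = list_repo_files("gguf-org/gemma-3-270m-gguf")
for f in sorted(files):
    if f.endswith(".gguf"):
        print(f)
```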