# Shell Command Assistant - Gemma 2B LoRA Adapters
This repository provides LoRA (Low-Rank Adaptation) adapters for the Gemma 2B model, fine-tuned to help users with shell commands. The adapted model answers questions about common Unix/Linux shell commands with high accuracy.
## Model Performance

- **Accuracy:** 100% on core shell commands
- **Base Model:** `mlx-community/gemma-2-2b-it-4bit`
- **Adapter Size:** ~18 MB (much smaller than the full model)
- **Training:** fine-tuned on 26 diverse shell command examples
## Quick Start

```python
from huggingface_hub import snapshot_download
from mlx_lm import load, generate

# Step 1: explicitly download the adapters from the Hugging Face Hub
adapter_path = snapshot_download(repo_id="PKSGIN/MyGemma270_Shellcommands")

# Step 2: pass the local path to load()
model, tokenizer = load(
    "mlx-community/gemma-2-2b-it-4bit",
    adapter_path=adapter_path,  # now it's a local path
)

# Ask a question using the Human/Assistant format the adapters were trained on
question = "How do I check disk space?"
prompt = f"### Human: {question}\n### Assistant:"
response = generate(model, tokenizer, prompt=prompt, max_tokens=50, verbose=False)
print(response.strip())
```
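Because the adapters were trained on `### Human:` / `### Assistant:` turns, the model may start a new `### Human:` turn after answering. A minimal, optional post-processing step (a sketch; the split marker comes from the prompt format above):

```python
# Keep only the first assistant turn; anything after a new "### Human:"
# marker is the model continuing the conversation format on its own.
answer = response.split("### Human:")[0].strip()
print(answer)
```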
## Example Outputs

```python
# Question: How do I list all files including hidden ones?
# Answer: Use ls -la to show all files including hidden ones.

# Question: How do I find all Python files?
# Answer: Use find . -name "*.py" to find all Python files.

# Question: How do I check memory usage?
# Answer: Use free -h to see memory usage in human-readable format.
```
## Supported Commands

The model can help with these types of shell commands:

- **File Operations:** `ls`, `cp`, `mv`, `rm`, `mkdir`, `find`
- **System Monitoring:** `ps`, `top`, `free`, `df`
- **Network:** `netstat`, `lsof`, `ping`
- **File Content:** `cat`, `grep`, `wc`, `tail`, `head`
- **Archives:** `tar`, `zip`, `unzip`
- **Permissions:** `chmod`, `chown`
- **Process Management:** `kill`, background processes
## Training Details

- **Method:** LoRA (Low-Rank Adaptation)
- **Hyperparameters:** rank 4, alpha 8, dropout 0.0
- **Training Data:** 26 unique shell commands × 60 repetitions = 1,560 examples
- **Format:** Human/Assistant conversation format
- **Iterations:** ~1000, run until convergence
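For reference, a dataset with that shape could be assembled as sketched below. This is not the actual training script: the question/answer pairs beyond the documented examples and the `train.jsonl` filename are assumptions, and the `{"text": ...}` JSONL layout is the plain-text format `mlx_lm.lora` typically consumes.

```python
import json

# Sketch of assembling the training set (assumed layout, not the actual script).
pairs = [
    ("How do I check disk space?",
     "Use df -h to see disk usage in human-readable format."),
    ("How do I list all files including hidden ones?",
     "Use ls -la to show all files including hidden ones."),
    # ... the remaining 24 command Q&A pairs
]

REPETITIONS = 60  # 26 pairs x 60 repetitions = 1,560 examples

with open("train.jsonl", "w") as f:  # hypothetical output filename
    for _ in range(REPETITIONS):
        for question, answer in pairs:
            text = f"### Human: {question}\n### Assistant: {answer}"
            f.write(json.dumps({"text": text}) + "\n")
```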
## Installation

```bash
pip install "mlx-lm>=0.19.0"
```

(The quotes keep the shell from treating `>=` as a redirection.)
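To confirm the installed version meets the requirement (recent releases of `mlx-lm` expose a `__version__` attribute; treat this as an assumption for older builds):

```python
import mlx_lm

# Should print a version >= 0.19.0
print(mlx_lm.__version__)
```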
## Usage Examples

```python
from huggingface_hub import snapshot_download
from mlx_lm import load, generate

# Download the adapters first; load() expects a local adapter path
adapter_path = snapshot_download(repo_id="PKSGIN/MyGemma270_Shellcommands")
model, tokenizer = load(
    "mlx-community/gemma-2-2b-it-4bit",
    adapter_path=adapter_path,
)

questions = [
    "How do I compress files?",
    "How do I see running processes?",
    "How do I find large files?",
    "How do I check network connections?",
]

for question in questions:
    prompt = f"### Human: {question}\n### Assistant:"
    response = generate(model, tokenizer, prompt=prompt, max_tokens=50)
    print(f"Q: {question}")
    print(f"A: {response.strip()}")
    print()
```
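To gauge what the adapters add, you can load the same base model without `adapter_path` and run the same prompts. A minimal before/after sketch (outputs will vary):

```python
from mlx_lm import load, generate

# Same base model, no adapters, for a side-by-side comparison
base_model, base_tokenizer = load("mlx-community/gemma-2-2b-it-4bit")

prompt = "### Human: How do I compress files?\n### Assistant:"
base_response = generate(base_model, base_tokenizer, prompt=prompt, max_tokens=50)
print(base_response.strip())
```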
## License

Apache 2.0 (following the base Gemma model)

## Acknowledgments

Built using MLX-LM and based on Google's Gemma 2B model.