---

license: mit
language:
  - en
library_name: transformers
tags:
  - phi2
  - lora
  - science-on-a-sphere
  - sos
  - earth-science
  - question-answering
base_model: microsoft/phi-2
datasets:
  - HacksHaven/science-on-a-sphere-prompt-completions
---


# Model Card for phi-2-science-on-a-sphere

This is a LoRA fine-tuned version of Phi-2 (2.7B parameters), adapted for educational and scientific question answering.
The model was fine-tuned on the Science On a Sphere (SOS) QA Dataset, which includes thousands of prompt/completion pairs derived
from NOAA’s Science On a Sphere support content and dataset catalog.
The model is designed to support Earth science education and enable AI-powered SOS content experiences.

## Model Details

- Base model: `microsoft/phi-2`
- Fine-tuned by: Eric Hackathorn (NOAA)
- Architecture: Decoder-only transformer (Phi-2)
- Fine-tuning type: Parameter-efficient fine-tuning with LoRA
- Language(s): English
- License: MIT

### Model Description

**Model Status: Work in Progress**

This model is currently under active development. Please note:

- The “More Information” URLs are provisional — they currently overemphasize support pages rather than high-level "What is..." resources.

- The links will be refined in upcoming updates to better align with the model's purpose and intended audience.

- Feedback is welcome to help improve this aspect and others.

This model is a LoRA fine-tuned version of microsoft/phi-2, optimized for question answering over content related to NOAA’s Science On a Sphere (SOS) initiative,
including Earth science metadata, dataset descriptions, support documentation, and educational guidance.
It is designed to be integrated into museum kiosks, classroom assistants, educational chatbots, and SOS Explorer environments to make complex environmental
data more accessible and engaging.

- Developed by: Eric Hackathorn (NOAA Global Systems Laboratory)
- Shared by: https://huggingface.co/HacksHaven/phi-2-science-on-a-sphere
- Model type: Decoder-only transformer (LLM) with LoRA fine-tuning
- Language(s): English
- License: MIT
- Finetuned from model: microsoft/phi-2

## Uses

1. Educational Chatbots

    **Use**: Plug into an LLM-powered assistant (like ChatGPT or a custom app) in a science museum, classroom, or mobile app.


    **Example**:

    Student: “What causes a tsunami?”

    Model: Tsunamis are typically caused by underwater earthquakes, often at subduction zones. More information: https://sos.noaa.gov/catalog/datasets/tsunami-locations-2000-bce-2014/


2. Interactive Museum Kiosks

    **Use**: Replace static displays with conversational kiosks powered by your model.


    **Example**: A touchscreen exhibit next to an SOS globe where users ask, “What does this animation show?” and the model responds with a summary of that dataset.


3. SOS Explorer Integration

    **Use**: Embed QA inside SOS Explorer or a future AI-powered version to describe datasets, provide learning guidance, or guide exploratory interactions.


    **Example**: When a user clicks on a dataset, a bot could summarize it, suggest classroom activities, or quiz the user.


4. Curriculum and Lesson Plan Support

    **Use**: Teachers ask the model for summaries, concepts, or classroom activities based on a specific dataset.


    **Example**: “Describe a classroom activity using the dataset about ocean acidification.”


5. Research Assistant for Outreach Teams

    **Use**: Internal NOAA outreach and comms teams use the model to quickly surface descriptions, summaries, related content, or activity suggestions.


6. Voice-activated Assistants

    **Use**: Deploy in AR/VR environments or installations with voice input, e.g., “Tell me about sea surface temperature datasets.”


### Direct Use

This model is optimized for:

- Question-answering on Earth science content
- SOS educational kiosk applications
- Embedding into chatbots or classroom tools for informal STEM education

### Downstream Use

It can be further fine-tuned, as sketched after this list, for:

- Domain-specific science outreach bots
- Custom SOS Explorer content recommendation engines
- Multimodal extensions (e.g., image+QA)
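
For instance, a fresh LoRA adapter can be attached to the merged checkpoint for further domain adaptation. This is a minimal sketch, assuming PEFT is installed; the adapter hyperparameters simply mirror the training table later in this card:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Attach a new LoRA adapter to the merged SOS checkpoint
base = AutoModelForCausalLM.from_pretrained(
    "HacksHaven/phi-2-science-on-a-sphere", trust_remote_code=True
)
adapter = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "dense", "fc1", "fc2"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, adapter)
model.print_trainable_parameters()  # only the adapter weights are trainable
# From here, train on your own data with transformers.Trainer or TRL's SFTTrainer
```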
  
### Out-of-Scope Use

- Real-time decision-making or scientific analysis requiring exact precision
- High-stakes classroom assessment without human verification
- Non-English QA without additional fine-tuning

## Bias, Risks, and Limitations

- Some responses may oversimplify complex topics
- Answers are based on generated content, not human-authored explanations
- May reflect biases from the underlying LLM or training set structure

### Recommendations

- Use model outputs with educator supervision in formal settings
- Cross-check completions against authoritative SOS materials
- Avoid deployment in mission-critical scenarios without further vetting

## How to Get Started with the Model

This is a merged and quantization-ready version of Phi-2 fine-tuned on the Science On a Sphere (SOS) instruction dataset using LoRA + PEFT. You can load it using:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization keeps memory usage low on consumer GPUs
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

model = AutoModelForCausalLM.from_pretrained(
    "HacksHaven/phi-2-science-on-a-sphere",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(
    "HacksHaven/phi-2-science-on-a-sphere",
    trust_remote_code=True,
)
```

Use the code below to chat with the model. Prompts should follow the `User:` / `Assistant:` format used during fine-tuning (see the Preprocessing section below):

```python
from transformers import pipeline

qa = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "User: What is NOAA's Science On a Sphere?\nAssistant:"
print(qa(prompt, max_new_tokens=200)[0]["generated_text"])
```

## Training Details

### Training Data

- Source Website: https://sos.noaa.gov/
- Repository: https://huggingface.co/datasets/HacksHaven/science-on-a-sphere-prompt-completions/

#### Preprocessing

Prompts and completions were embedded in a Phi-2-friendly conversational format using simple User: / Assistant: prefixes, with no special tokens.

```text
User: [Prompt text]
Assistant: [Completion text]
```

- Tokenization used `padding="longest"` and `max_length=2048`.
- Labels were copied directly from the input IDs for causal language modeling (sketched below).
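
A minimal sketch of this preprocessing step, assuming the dataset exposes `prompt` and `completion` fields (the field names are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # Phi-2 ships without a pad token

def preprocess(example):
    # Wrap each pair in the User:/Assistant: format shown above
    text = f"User: {example['prompt']}\nAssistant: {example['completion']}"
    tokens = tokenizer(text, padding="longest", truncation=True, max_length=2048)
    # For causal LM training, labels are a copy of the input IDs
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens
```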



#### Training Hyperparameters

| Parameter               | Value                                                         |
| ----------------------- | ------------------------------------------------------------- |
| Base model              | `microsoft/phi-2`                                             |
| Finetuning method       | LoRA (Low-Rank Adaptation)                                    |
| LoRA Rank (`r`)         | 8                                                             |
| LoRA Alpha              | 32                                                            |
| LoRA Dropout            | 0.05                                                          |
| Target Modules          | `q_proj`, `k_proj`, `v_proj`, `o_proj`, `dense`, `fc1`, `fc2` |
| Gradient Checkpointing  | Enabled                                                       |
| Max sequence length     | 2048                                                          |
| Precision               | float32 (for CPU deployment compatibility)                    |
| Quantization            | 4-bit NF4 via BitsAndBytes                                    |
| Optimizer               | `paged_adamw_8bit`                                            |
| Learning Rate           | 2e-4                                                          |
| Epochs                  | 3                                                             |
| Batch Size              | 1 (with gradient accumulation = 4)                            |
| Logging & Eval Strategy | Every 10 steps                                                |
| Evaluation Metric       | `bertscore_f1` (maximize)                                     |
| Load Best Model at End  | ✅ Yes                                                        |
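
Reconstructed as a PEFT/Trainer configuration, the table corresponds roughly to the following sketch (not the original training script; `output_dir` and the checkpoint cadence are assumptions, and `metric_for_best_model` presumes a `compute_metrics` function that reports `bertscore_f1`):

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "dense", "fc1", "fc2"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="phi2-sos",               # illustrative path
    learning_rate=2e-4,
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    optim="paged_adamw_8bit",
    logging_steps=10,
    eval_strategy="steps",
    eval_steps=10,
    save_strategy="steps",
    save_steps=10,                       # assumed to match the eval cadence
    load_best_model_at_end=True,
    metric_for_best_model="bertscore_f1",
    greater_is_better=True,
)
```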

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Evaluated on a 10% held-out split of the training dataset (stratified).
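
A split along these lines can be reproduced with the `datasets` library (the exact seed and stratification column are not published, so both are assumptions):

```python
from datasets import load_dataset

ds = load_dataset("HacksHaven/science-on-a-sphere-prompt-completions")
# 90/10 train/test split; the seed here is arbitrary
splits = ds["train"].train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```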

#### Factors

This model was fine-tuned to support instructional content for NOAA's Science On a Sphere (SOS) exhibits, which span a diverse set of topics and audiences. Relevant factors that may affect model performance include:

- **Scientific Domain**: The model has seen examples across atmospheric science, oceanography, climate change, space weather, and Earth system interactions. Responses may vary depending on the domain depth in the fine-tuning set.

- **Instruction Type**: Prompts vary in style, including explanations of scientific processes, definitions, causal reasoning, and narrative-style descriptions for public displays.

- **Intended Audience**: While many prompts are written at a general public or middle school level, the model may perform differently for early learners, specialists, or multilingual audiences.

- **Data Origin**: The training set draws from curated NOAA science narratives, educational materials, and exhibit scripts. Domains or tones not represented in these sources may yield less accurate responses.

Future evaluations could assess performance across these axes to better understand model reliability in SOS-like deployment environments.

#### Metrics

- ROUGE-1, ROUGE-2, ROUGE-L: n-gram overlap
- BLEU: token-based overlap precision
- BERTScore F1: semantic similarity of completions
- Perplexity: derived from the evaluation loss, when available
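
All four can be computed with the `evaluate` library. A minimal sketch with toy predictions and references (BERTScore downloads a scoring model on first use):

```python
import evaluate

predictions = ["Tsunamis are usually triggered by undersea earthquakes."]
references = ["Tsunamis are typically caused by underwater earthquakes."]

rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
bleu = evaluate.load("bleu").compute(predictions=predictions, references=references)
bertscore = evaluate.load("bertscore").compute(
    predictions=predictions, references=references, lang="en"
)
print(rouge["rouge1"], bleu["bleu"], bertscore["f1"][0])

# Perplexity follows from the Trainer's eval loss: math.exp(eval_loss)
```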

### Results

Evaluation was performed using ROUGE, BLEU, BERTScore, and perplexity on a held-out 10% test set.
BERTScore F1 was used to select the best checkpoint during training. Unfortunately, it made my GPU
burst into flames.

Quantitative results TBD in future update.

#### Summary

Summary will be added when quantitative evaluation is complete.

## Citation

**BibTeX:**

```
@misc{hackathorn_2025_sosphi2,
  title  = {Science On a Sphere QA Model (Phi-2, LoRA)},
  author = {Hackathorn, Eric},
  year   = {2025},
  url    = {https://huggingface.co/HacksHaven/phi-2-science-on-a-sphere}
}
```

**APA:**

Hackathorn, E. (2025). Science On a Sphere QA Model (Phi-2, LoRA). Hugging Face. https://huggingface.co/HacksHaven/phi-2-science-on-a-sphere

## Model Card Contact

- Author: Eric Hackathorn
- Email: [email protected]
- Affiliation: NOAA Global Systems Laboratory