# CBSI-ModernBERT Models
This repository hosts CBSI-ModernBERT models fine-tuned on the replication data of Nițoi et al. (2023).
See their paper and website for more information.
The models are based on ModernBERT (Warner et al., 2024), which supports a much longer context window than vanilla BERT (8,192 tokens versus 512).
We used the same training data and methodology as Nițoi et al. (2023), but fine-tuned ModernBERT instead of BERT to take advantage of the longer sequence-length support.
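The longer context window matters for central bank communications, which frequently exceed BERT's 512-token limit. A quick sketch to inspect the difference (assuming the public `bert-base-uncased` and `answerdotai/ModernBERT-base` checkpoints; the printed values come from each tokenizer's configuration):

```python
from transformers import AutoTokenizer

# Print the maximum sequence length each tokenizer is configured for
# (512 for vanilla BERT, 8192 for ModernBERT).
for name in ["bert-base-uncased", "answerdotai/ModernBERT-base"]:
    tok = AutoTokenizer.from_pretrained(name)
    print(f"{name}: {tok.model_max_length} tokens")
```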
## Results

| Model | F1 score | Accuracy | Loss |
|---|---|---|---|
| CBSI-bert-base-uncased | 0.88 | 0.88 | 0.49 |
| CBSI-bert-large-uncased | 0.92 | 0.92 | 0.45 |
| CBSI-ModernBERT-base | 0.93 | 0.93 | 0.40 |
| CBSI-ModernBERT-large | 0.91 | 0.91 | 0.53 |
| CBSI-CentralBank-BERT | 0.92 | 0.92 | 0.36 |
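If you want to compute comparable metrics on your own labeled texts, here is a minimal sketch with scikit-learn. The weighted averaging and the example data are assumptions; the integer label ids follow the mapping used in the next section.

```python
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

classifier = pipeline("text-classification", model="brjoey/CBSI-ModernBERT-base")

# Hypothetical labeled evaluation data (0 = neutral, 1 = dovish, 2 = hawkish)
texts = [
    "The Governing Council decided to lower interest rates.",
    "The central bank will maintain its current policy stance.",
]
gold = [1, 0]

# Pipeline labels come back as "LABEL_<id>"; recover the integer ids
preds = [int(p["label"].split("_")[-1]) for p in classifier(texts)]

print("Accuracy:", accuracy_score(gold, preds))
print("F1 (weighted):", f1_score(gold, preds, average="weighted"))
```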
## How to use
```python
import pandas as pd
from transformers import pipeline

# Load the fine-tuned model as a text-classification pipeline
model_name = "brjoey/CBSI-ModernBERT-base"
classifier = pipeline(
    "text-classification",
    model=model_name,
    tokenizer=model_name,
)

# Map the pipeline's LABEL_<id> outputs to the CBSI sentiment classes
cbsi_label_map = {
    0: "neutral",
    1: "dovish",
    2: "hawkish",
}

# Example texts
texts = [
    "The Governing Council decided to lower interest rates.",
    "The central bank will maintain its current policy stance.",
]
df = pd.DataFrame({"text": texts})

# Run classification
predictions = classifier(df["text"].tolist())

# Store the predicted class and confidence score alongside each text
df["label"], df["score"] = zip(*[
    (cbsi_label_map[int(pred["label"].split("_")[-1])], pred["score"])
    for pred in predictions
])

print("\n=== Results ===\n")
print(df[["text", "label", "score"]])
```
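If you prefer to work with the model directly rather than through the `pipeline` helper (for example, to get the full probability distribution over all three classes), here is a minimal sketch using `AutoTokenizer` and `AutoModelForSequenceClassification`; the label mapping is the same as above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "brjoey/CBSI-ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

cbsi_label_map = {0: "neutral", 1: "dovish", 2: "hawkish"}

texts = [
    "The Governing Council decided to lower interest rates.",
    "The central bank will maintain its current policy stance.",
]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the three classes gives per-class probabilities
probs = torch.softmax(logits, dim=-1)
for text, p in zip(texts, probs):
    label_id = int(p.argmax())
    print(f"{cbsi_label_map[label_id]} ({p[label_id]:.3f}): {text}")
```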
## Citation
If you use this model, please cite:
**Data:**
Nițoi, Mihai; Pochea, Maria-Miruna; Radu, Ștefan-Constantin, 2023. "Replication Data for: Unveiling the sentiment behind central bank narratives: A novel deep learning index." Harvard Dataverse, V1. https://doi.org/10.7910/DVN/40JFEK

**Paper:**
Nițoi, Mihai; Pochea, Maria-Miruna; Radu, Ștefan-Constantin. "Unveiling the sentiment behind central bank narratives: A novel deep learning index." Journal of Behavioral and Experimental Finance, Volume 38, 2023, 100809, ISSN 2214-6350. https://doi.org/10.1016/j.jbef.2023.100809

**ModernBERT:**
Warner, Benjamin; Chaffin, Antoine; Clavié, Benjamin; Weller, Orion; Hallström, Oskar; Taghadouini, Said; Gallagher, Alexis; Biswas, Raja; Ladhak, Faisal; Aarsen, Tom; Cooper, Nathan; Adams, Griffin; Howard, Jeremy; Poli, Iacopo, 2024. "Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference." arXiv preprint arXiv:2412.13663. https://arxiv.org/abs/2412.13663
**Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)