---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- fi
tags:
- toxicity
size_categories:
- 1K<n<10K
---
### Suomi-24-toxicity-annotated
This dataset contains comments from the Suomi24 discussion forum, sampled using the predictions of a toxicity classifier. For each label, comments were sampled from intervals of the predicted scores, with the sampling emphasizing difficult borderline cases; 500 comments were sampled per label.
The annotation scheme uses the labels from the Perspective API, also used e.g. for `TurkuNLP/wikipedia-toxicity-data-fi`.
Instead of multi-label annotation, each comment was annotated for a single label only, although a couple of comments appear under two labels.
The annotation process consisted of an initial round on 100-200 comments, followed by a discussion and the final annotations. The raw data can be found [here](https://github.com/TurkuNLP/toxicity-classifier/tree/main/annotations/raw_annotations).
Only examples on which the annotators agreed unanimously, or whose disagreements were resolved through discussion, were included in the dataset.
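The interval-based sampling described above can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the function name, bin edges, and per-bin counts are hypothetical, chosen to show how borderline scores can be oversampled.

```python
import random

def sample_by_intervals(comments, scores, n_per_bin, edges, seed=0):
    """Sample comments from intervals of classifier scores.

    comments: list of str; scores: predicted probabilities in [0, 1];
    edges: bin boundaries, e.g. [0.0, 0.4, 0.6, 1.0] (middle bin = borderline);
    n_per_bin: dict mapping bin index to how many comments to draw from it.
    """
    rng = random.Random(seed)
    bins = {i: [] for i in range(len(edges) - 1)}
    for comment, score in zip(comments, scores):
        for i in range(len(edges) - 1):
            # Last bin is closed on the right so score == 1.0 is not dropped.
            if edges[i] <= score < edges[i + 1] or (i == len(edges) - 2 and score == edges[-1]):
                bins[i].append(comment)
                break
    sampled = []
    for i, members in bins.items():
        k = min(n_per_bin.get(i, 0), len(members))
        sampled.extend(rng.sample(members, k))
    return sampled
```

Requesting more samples from the middle bin (e.g. `{0: 5, 1: 10, 2: 5}`) is one way to emphasize borderline cases near the classifier's decision threshold.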
### Citing
To cite this dataset, use the following BibTeX entry:
```bibtex
@inproceedings{eskelinen-etal-2023-toxicity,
title = "Toxicity Detection in {F}innish Using Machine Translation",
author = "Eskelinen, Anni and
Silvala, Laura and
Ginter, Filip and
Pyysalo, Sampo and
Laippala, Veronika",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.68",
pages = "685--697",
abstract = "Due to the popularity of social media platforms and the sheer amount of user-generated content online, the automatic detection of toxic language has become crucial in the creation of a friendly and safe digital space. Previous work has been mostly focusing on English leaving many lower-resource languages behind. In this paper, we present novel resources for toxicity detection in Finnish by introducing two new datasets, a machine translated toxicity dataset for Finnish based on the widely used English Jigsaw dataset and a smaller test set of Suomi24 discussion forum comments originally written in Finnish and manually annotated following the definitions of the labels that were used to annotate the Jigsaw dataset. We show that machine translating the training data to Finnish provides better toxicity detection results than using the original English training data and zero-shot cross-lingual transfer with XLM-R, even with our newly annotated dataset from Suomi24.",
}
```
## Label definitions taken from Perspective API
- **THREAT**: Describes an intention to inflict pain, injury, or violence against an individual or group; language that is threatening or encourages violence or harm, including self-harm.
- **PROFANITY**: Swear words, curse words, or other obscene or profane language.
- **INSULT**: Insulting, inflammatory, or negative comment towards a person or a group of people. Such comments are not necessarily identity-specific.
- **IDENTITY ATTACK**: Negative or hateful comments targeting someone because of their identity.
- **TOXICITY**: A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion.
- **SEVERE TOXICITY**: A very hateful, aggressive, or disrespectful comment, or one otherwise very likely to make a user leave a discussion or give up on sharing their perspective. This attribute is much less sensitive to milder forms of toxicity, such as comments that include positive uses of curse words.
## Guidelines used for annotation:
### Obscene
- swearwords, including mild expletives and misspelled, masked, or other variants
- sexually explicit words/terminology that are not topically or contextually appropriate
### Threat
- suicidal or self-harm comments, incitement to violence or self-harm, hypothetical situations, and wishing harm on somebody
- comments that are very unlikely to happen are included, unless clearly marked as sarcasm
- only threats towards people are annotated as threat
- threats made by somebody other than the writer are NOT included
- counterfactual statements are NOT included <!--- as in "if I was there I would have..." --->
### Insult
- terms that are insulting towards groups of people (also in identity attack)
- insults against political groups, e.g. "vitun demari/suvakki/persu" -> "fucking liberal/conservative etc." <!--- I made this decision here.. --->
- negative insulting comments towards oneself, towards things other than people, and hypothetical situations are NOT included
<!--- PROBLEM: use of racist or rapist if true, target not clear --->
### Identity attack
- comments that contain no overtly negative language but are still clearly negative
- negative statements towards political groups, or towards groups that nobody self-identifies with, are NOT included (unless they are insults)
### Toxicity
- unreasonably expressed negative comments, regardless of whether a target is present or known
- mild or humorous swearwords are NOT included
- positive or neutral sexually explicit comments are NOT included
### Severe toxicity
- comments that include only sexually explicit content
- a single severely toxic element is enough for this label; a comment is severely toxic even if it also contains substantive content
- a target does not need to be present, nor does the target matter
## Inter-annotator agreement:
| Label | Initial (unanimous) | After discussion (unanimous) | Initial (at least 2/3) | After discussion (at least 2/3) |
|------ | ------------------- | ---------------------------- | ---------------------- | ------------------------------- |
| identity attack | 54,5 % | 66,6 % | 92 % | 93,6 % |
| insult | 47,5 % | 49,6 % | 94,5 % | 95,6 % |
| severe toxicity | 63 % | 66 % | 92 % | 96,6 % |
| threat | 82 % | 80,3 % | 98 % | 97,3 % |
| toxicity | 58 % | 54 % | 93 % | 89,6 % |
| obscene | 69 % | 62 % | 97 % | 96 % |
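The two columns in the table above correspond to full agreement and to a two-out-of-three majority. As a sketch (assuming three annotators per comment; the label encoding here is hypothetical), both rates can be computed like this:

```python
from collections import Counter

def agreement_rates(annotations):
    """Compute inter-annotator agreement over a list of per-comment label triples.

    annotations: list of tuples, one per comment, e.g. [("toxic", "toxic", "clean"), ...].
    Returns (unanimous %, at-least-2-of-3 %).
    """
    # Unanimous: all three annotators chose the same label.
    unanimous = sum(1 for labels in annotations if len(set(labels)) == 1)
    # Majority: the most common label was chosen by at least two annotators.
    majority = sum(1 for labels in annotations
                   if Counter(labels).most_common(1)[0][1] >= 2)
    n = len(annotations)
    return 100 * unanimous / n, 100 * majority / n
```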
## Evaluation results
Evaluation results on this dataset for the `TurkuNLP/bert-large-finnish-cased-toxicity` model.
| Label | Precision | Recall | F1 |
|------ | ------------------- | ---------------------------- | ---------------------- |
| identity attack | 73,2 | 32 | 44,6 |
| insult | 59,4 | 46,8 | 52,4 |
| severe toxicity | 12 | 28,6 | 16,9 |
| threat | 32,4 | 28,6 | 30,4 |
| toxicity | 60,4 | 79,2 | 68,5 |
| obscene | 64,5 | 82,4 | 72,3 |
| OVERALL | 57,4 | 58,9 | 51,1 |
| OVERALL weighted by original sample counts | 55,5 | 65,5 | 60,1 |
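As a sanity check on the table, the F1 column is the harmonic mean of precision and recall; a small helper reproduces the per-label values (within rounding):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(60.4, 79.2), 1))  # toxicity row -> 68.5
print(round(f1(32.4, 28.6), 1))  # threat row   -> 30.4
```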
## Licensing Information
Contents of this repository are distributed under the
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Copyright of the dataset contents belongs to the original copyright holders.