annieske committed on
Commit 0b96eaa · 1 Parent(s): d13ac36

Update README.md

Files changed (1)
  1. README.md +86 -2
README.md CHANGED
@@ -6,14 +6,98 @@ language:
  - fi
  tags:
  - toxicity
- - multi-label
  size_categories:
  - 1K<n<10K
  ---
 
- ### Licensing Information
+ ### Suomi-24-toxicity-annotated
+ 
+ This dataset contains comments from Suomi24, sampled using the predictions of a toxicity classifier. For each label, comments were sampled at intervals of the predicted scores, in a way that emphasized difficult borderline cases; 500 comments were sampled per label.
+ The annotation uses the labels of the Perspective API, which are also used e.g. in `TurkuNLP/wikipedia-toxicity-data-fi`. Instead of multi-label annotation, each comment was annotated for a single label only, although a couple of comments appear under two labels.
+ The annotation process consisted of an initial annotation of 100-200 comments, followed by a discussion and the final annotations. The raw annotation data can be found [here](https://github.com/TurkuNLP/toxicity-classifier/tree/main/annotations/raw_annotations).
+ 
+ Only examples with unanimous agreement, or examples that were resolved through discussion, made it into the dataset.
+
+
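A minimal sketch of this kind of interval-based sampling is given below; the file name, column names, score bins, and the even per-bin split are illustrative assumptions, not the exact procedure used for this dataset.

```python
import pandas as pd

# Hypothetical input: one row per comment, one predicted-score column per label.
df = pd.read_csv("suomi24_with_toxicity_scores.csv")  # assumed columns: text, toxicity, insult, ...

LABELS = ["toxicity", "severe_toxicity", "insult", "identity_attack", "threat", "obscene"]
BIN_EDGES = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]  # score intervals to draw from
PER_LABEL = 500                             # total comments sampled per label

sampled = {}
for label in LABELS:
    # Even split across intervals; weighting the middle bins more heavily
    # would emphasize difficult borderline cases.
    per_bin = PER_LABEL // (len(BIN_EDGES) - 1)
    parts = []
    for lo, hi in zip(BIN_EDGES[:-1], BIN_EDGES[1:]):
        in_bin = df[(df[label] >= lo) & (df[label] < hi)]
        parts.append(in_bin.sample(n=min(per_bin, len(in_bin)), random_state=0))
    sampled[label] = pd.concat(parts, ignore_index=True)
```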
+ ## Label definitions taken from Perspective API
+ 
+ THREAT: Describes an intention to inflict pain, injury, or violence against an individual or group.
+ THREATENING: Language that is threatening or encouraging violence or harm, including self-harm.
+ 
+ PROFANITY: Swear words, curse words, or other obscene or profane language.
+ 
+ INSULT: Insulting, inflammatory, or negative comment towards a person or a group of people. Such comments are not necessarily identity specific.
+ 
+ IDENTITY ATTACK: Negative or hateful comments targeting someone because of their identity.
+ 
+ TOXICITY: A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion.
+ 
+ SEVERE TOXICITY: A very hateful, aggressive, disrespectful comment or otherwise very likely to make a user leave a discussion or give up on sharing their perspective. This attribute is much less sensitive to more mild forms of toxicity, such as comments that include positive uses of curse words.
+
+ ## Guidelines used for annotation
+ 
+ ### Obscene
+ 
+ - swearwords, including mild expletives and misspelled, masked, or other variations
+ - sexually explicit words/terminology that are not topically or contextually appropriate
+ 
+ ### Threat
+ 
+ - suicidal or self-harm comments, incitement to violence or self-harm, hypothetical situations, and wishing harm on somebody
+ - threats that are very unlikely to be carried out, unless clearly marked as sarcasm
+ - only threats towards people are annotated as threat
+ 
+ - threats made by somebody other than the writer are NOT included
+ - counterfactual statements are NOT included <!--- as in "if I was there I would have..." --->
+
+
+ ### Insult
+ 
+ - terms that are insulting towards groups of people (also in identity attack)
+ - insults against political groups, e.g. "vitun demari/suvakki/persu" -> "fucking liberal/conservative etc." <!--- I made this decision here.. --->
+ 
+ - insulting comments towards oneself, towards things other than people, or about hypothetical situations are NOT included
+ 
+ <!--- PROBLEM: use of racist or rapist if true, target not clear --->
+ 
+ ### Identity attack
+ 
+ - comments that contain no negative language but are still clearly negative
+ 
+ - negative statements towards political groups, or towards groups that nobody self-identifies with, are NOT included (unless they are an insult)
+ 
+ ### Toxicity
+ 
+ - unreasonably expressed negative comments, regardless of whether a target is present or known
+ - mild or humorous swearwords are NOT included
+ - positive or neutral sexually explicit comments are NOT included
+ 
+ ### Severe toxicity
+ 
+ - comments that include only sexually explicit content
+ - only one severely toxic element is needed for this label; a comment is severely toxic even if it also contains substantive content
+ - a target does not need to be present, nor does the target matter
+
+
+ ---
+ 
+ The final sample includes texts on which the inter-annotator agreement was 1.0, as well as texts that we were able to resolve through discussion and the guidelines that followed.
+ 
+ ---
+
+ ## Inter-annotator agreement
+ 
+ | Label | Initial (unanimous) | After discussion (unanimous) | Initial (at least 2/3) | After discussion (at least 2/3) |
+ | --- | --- | --- | --- | --- |
+ | identity attack | 54.5 % | 66.6 % | 92 % | 93.6 % |
+ | insult | 47.5 % | 49.6 % | 94.5 % | 95.6 % |
+ | severe toxicity | 63 % | 66 % | 92 % | 96.6 % |
+ | threat | 82 % | 80.3 % | 98 % | 97.3 % |
+ | toxicity | 58 % | 54 % | 93 % | 89.6 % |
+ | obscene | 69 % | 62 % | 97 % | 96 % |
+
+
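As a reading aid for the table above, here is a small sketch of how the unanimous and at-least-2/3 figures can be computed from three annotators' per-comment decisions; the data layout and label strings are illustrative assumptions, not the actual annotation files.

```python
from collections import Counter

def agreement_rates(triples):
    """Return (unanimous, at_least_two_of_three) agreement over a list of
    (annotator1, annotator2, annotator3) label decisions for one batch."""
    unanimous = sum(1 for t in triples if len(set(t)) == 1)
    majority = sum(1 for t in triples if Counter(t).most_common(1)[0][1] >= 2)
    n = len(triples)
    return unanimous / n, majority / n

# Illustrative decisions from three annotators on four comments in an "insult" batch.
decisions = [
    ("insult", "insult", "insult"),
    ("insult", "clean", "insult"),
    ("clean", "insult", "toxicity"),
    ("clean", "clean", "clean"),
]
print(agreement_rates(decisions))  # (0.5, 0.75)
```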
+ ## Licensing Information
  Contents of this repository are distributed under the
  [Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
  Copyright of the dataset contents belongs to the original copyright holders.