Feng-001 committed on
Commit 47d97ba · verified · 1 Parent(s): beea62c

Update README.md

Files changed (1):
  1. README.md +58 -90

README.md CHANGED
@@ -10,134 +10,102 @@ task_categories:
  This repository covers 8 text datasets including: 20Newsgroups, DBpedia14, IMDB, SMS_SPAM, SST2, WOS, Enron, Reuters21578.
  We provide the original textual data, preprocessed data, and multiple embeddings based on Llama2, Llama3, Mistral, and the embedding models (text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002) from OpenAI.
 
- ## Dataset Details
-
- ### Dataset Description
-
- <!-- Provide a longer summary of what this dataset is. -->
-
-
-
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
-
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the dataset is intended to be used. -->
-
- ### Direct Use
-
- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
-
- ## Dataset Structure
-
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]
-
- ### Source Data
-
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
- #### Data Collection and Processing
-
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]
-
- #### Who are the source data producers?
-
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]
-
- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
- #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
-
- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]
-
- ## Dataset Card Contact
-
- [More Information Needed]
+ task_categories:
+ - text-classification
+ - feature-extraction
+ tags:
+ - anomaly-detection
+ - benchmark
+ - embeddings
+ - llms
+ language:
+ - en
+ ---
+
+ # Text-ADBench: Text Anomaly Detection Benchmark based on LLMs Embedding
+
+ This repository provides **Text-ADBench**, a comprehensive benchmark for text anomaly detection, leveraging embeddings from diverse pre-trained language models across a wide array of text datasets.
+
+ **Paper**: [Text-ADBench: Text Anomaly Detection Benchmark based on LLMs Embedding](https://arxiv.org/abs/2507.12295)
+ **Code**: [https://github.com/jicongfan/Text-Anomaly-Detection-Benchmark](https://github.com/jicongfan/Text-Anomaly-Detection-Benchmark)
+
+ ## Abstract
+
+ Text anomaly detection is a critical task in natural language processing (NLP), with applications spanning fraud detection, misinformation identification, spam detection, content moderation, etc. Despite significant advances in large language models (LLMs) and anomaly detection algorithms, the absence of standardized and comprehensive benchmarks for evaluating the existing anomaly detection methods on text data limits rigorous comparison and development of innovative approaches. This work performs a comprehensive empirical study and introduces a benchmark for text anomaly detection, leveraging embeddings from diverse pre-trained language models across a wide array of text datasets. Our work systematically evaluates the effectiveness of embedding-based text anomaly detection by incorporating (1) early language models (GloVe, BERT); (2) multiple LLMs (LLaMa-2, LLaMa-3, Mistral, OpenAI (small, ada, large)); (3) multi-domain text datasets (news, social media, scientific publications); (4) comprehensive evaluation metrics (AUROC, AUPRC). Our experiments reveal a critical empirical insight: embedding quality significantly governs anomaly detection efficacy, and deep learning-based approaches demonstrate no performance advantage over conventional shallow algorithms (e.g., KNN, Isolation Forest) when leveraging LLM-derived embeddings. In addition, we observe strongly low-rank characteristics in cross-model performance matrices, which enables an efficient strategy for rapid model evaluation (or embedding evaluation) and selection in practical applications. Furthermore, by open-sourcing our benchmark toolkit that includes all embeddings from different models and code, this work provides a foundation for future research in robust and scalable text anomaly detection systems.
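The evaluation style the abstract describes (shallow detectors such as KNN and Isolation Forest scored with AUROC on embedding vectors) can be sketched as follows. This is a minimal illustration, not the benchmark's actual pipeline: synthetic Gaussian vectors stand in for real LLM embeddings, and all shapes and score thresholds are assumptions.

```python
# Minimal sketch: shallow anomaly detection (KNN, Isolation Forest) on
# embedding vectors, scored with AUROC. Synthetic Gaussian "embeddings"
# stand in for real LLM embeddings; shapes are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 64))     # inlier "embeddings"
outlier = rng.normal(4.0, 1.0, size=(25, 64))     # shifted outliers
X = np.vstack([normal, outlier])
y = np.concatenate([np.zeros(500), np.ones(25)])  # 1 = anomaly

# KNN detector: anomaly score = mean distance to the k nearest neighbors
knn = NearestNeighbors(n_neighbors=5).fit(X)
dist, _ = knn.kneighbors(X)
knn_scores = dist.mean(axis=1)

# Isolation Forest: negate decision_function so higher = more anomalous
iforest = IsolationForest(random_state=0).fit(X)
if_scores = -iforest.decision_function(X)

print(f"KNN AUROC: {roc_auc_score(y, knn_scores):.3f}")
print(f"IForest AUROC: {roc_auc_score(y, if_scores):.3f}")
```

On cleanly separated vectors like these, both shallow detectors score near-perfect AUROC; the benchmark's point is that with good LLM embeddings this kind of baseline is already hard to beat.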
+
+ ## Dataset Details
+
+ This repository covers 8 text datasets including: 20Newsgroups, DBpedia14, IMDB, SMS_SPAM, SST2, WOS, Enron, Reuters21578. For each of these multi-domain datasets (news, social media, scientific publications), the repository provides:
+ * The original textual data.
+ * Preprocessed data.
+ * Multiple embeddings derived from various pre-trained language models, including:
+   * Early language models (GloVe, BERT)
+   * Multiple LLMs (LLaMa-2, LLaMa-3, Mistral)
+   * OpenAI embedding models (text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002)
+
+ ### Dataset Description
+
+ Text-ADBench addresses the critical task of text anomaly detection by providing a standardized and comprehensive benchmark. It facilitates rigorous comparison and development of innovative approaches by systematically evaluating embedding-based text anomaly detection across diverse models and datasets. The benchmark highlights that embedding quality significantly influences anomaly detection performance and that traditional shallow algorithms can be as effective as deep learning approaches when utilizing LLM-derived embeddings.
+
+ - **Curated by:** Feng Xiao and Jicong Fan
+ - **Language(s) (NLP):** English
+ - **License:** MIT
+
+ ### Dataset Sources
+
+ - **Paper:** [Text-ADBench](https://arxiv.org/abs/2507.12295)
+
+ ## Uses
+
+ ### Direct Use
+
+ This dataset is intended for researchers and practitioners in natural language processing and artificial intelligence, specifically for:
+ * Benchmarking existing text anomaly detection methods.
+ * Developing and evaluating new anomaly detection algorithms on diverse text data.
+ * Studying the impact of various LLM embeddings on anomaly detection efficacy.
+ * Exploring efficient strategies for rapid model evaluation and selection in practical applications, leveraging the observed low-rank characteristics of performance matrices.
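The low-rank idea in the last point can be made concrete with a small numerical sketch: if the cross-model performance matrix is approximately low-rank, a truncated SVD recovers it from a few leading factors, so a few observed entries are informative about the rest. The matrix below is synthetic and rank-2 by construction; a real benchmark matrix would hold, for example, AUROC scores per detector/embedding pair.

```python
# Sketch of the low-rank observation: a (detector x embedding-model)
# performance matrix that is (approximately) low-rank is captured by a
# truncated SVD. The synthetic matrix here is exactly rank-2.
import numpy as np

rng = np.random.default_rng(1)
# Rank-2 synthetic "performance matrix": 8 detectors x 10 embedding models
A = rng.uniform(0.5, 1.0, (8, 2)) @ rng.uniform(0.5, 1.0, (2, 10))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
# Fraction of squared spectral energy in the top-2 singular values
energy = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"top-2 singular-value energy: {energy:.4f}")

# Rank-2 reconstruction from the leading factors
A2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
print(f"max reconstruction error: {np.abs(A - A2).max():.2e}")
```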
+
+ ### Out-of-Scope Use
+
+ This dataset is not intended for:
+ * General text classification tasks unrelated to anomaly detection.
+ * Training large language models from scratch, as it primarily provides embeddings and benchmark data, not raw corpus data for pre-training.
+ * Applications where biases present in the original source datasets or embedding models could lead to unfair or discriminatory outcomes without proper mitigation.
+
+ ## Dataset Structure
+
+ The repository contains 8 distinct text datasets: 20Newsgroups, DBpedia14, IMDB, SMS_SPAM, SST2, WOS, Enron, and Reuters21578. For each dataset, the repository provides:
+ * Original textual data (e.g., in `text_data/`).
+ * Preprocessed versions of the text data.
+ * Multiple sets of embeddings, generated using a range of models including GloVe, BERT, Llama-2, Llama-3, Mistral, and OpenAI's text embedding models (e.g., in `text_embedding/`).
+
+ For a detailed file structure, please refer to the [GitHub repository](https://github.com/Feng-001/Text-ADBench).
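The exact on-disk layout and file names should be taken from the GitHub repository. As a hedged sketch, embedding dumps of this kind are typically NumPy arrays of shape (num_texts, embedding_dim) and round-trip as below; the file name and the 4096-dimensional (Llama-2-7B-sized) vectors are hypothetical, not the repository's actual layout.

```python
# Hypothetical round-trip of an embedding matrix through .npy; the file
# name "sst2_llama2.npy" and the array shape are illustrative assumptions.
import os
import tempfile
import numpy as np

emb = np.random.default_rng(2).normal(size=(100, 4096)).astype(np.float32)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "sst2_llama2.npy")  # hypothetical file name
    np.save(path, emb)
    loaded = np.load(path)                     # loads fully into memory

print(loaded.shape, loaded.dtype)
```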
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was created to address a critical gap in the field of text anomaly detection: the absence of standardized and comprehensive benchmarks. By providing a unified framework, Text-ADBench enables rigorous comparison and facilitates the development of innovative approaches to text anomaly detection, leveraging the advancements in large language models.
+
+ ### Source Data
+
+ #### Data Collection and Processing
+
+ The benchmark leverages a wide array of publicly accessible multi-domain text datasets. The original textual data was collected, followed by preprocessing steps. Subsequently, embeddings were generated using diverse pre-trained language models, encompassing both early models and modern LLMs. The benchmark toolkit also supports generating embeddings for new text data.
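The collect, preprocess, and embed flow described above can be sketched schematically. Here a TF-IDF vectorizer stands in for the pre-trained encoders the benchmark actually uses (GloVe, BERT, LLaMA, OpenAI models), so only the pipeline shape is representative, not the embedding quality.

```python
# Schematic preprocess -> embed pipeline. TfidfVectorizer is a stand-in
# for a pre-trained encoder; the texts below are invented examples.
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Stocks rallied after the quarterly earnings report.",
    "WIN a FREE prize now!!! Click here!!!",  # spam-like outlier
    "The central bank left interest rates unchanged.",
]

def preprocess(text: str) -> str:
    # Minimal normalization: lowercase and collapse whitespace
    return re.sub(r"\s+", " ", text.lower()).strip()

clean = [preprocess(d) for d in docs]
emb = TfidfVectorizer().fit_transform(clean).toarray()  # (n_docs, vocab_size)
print(emb.shape)
```

The resulting matrix plays the same role as the stored embedding files: one row per text, ready to be fed to any of the anomaly detectors.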
+
+ #### Who are the source data producers?
+
+ The source data producers include the original authors and maintainers of the 8 constituent text datasets (e.g., 20Newsgroups, IMDB, SMS_SPAM, etc.). The benchmark and its generated embeddings were curated by Feng Xiao and Jicong Fan, the authors of the Text-ADBench paper.