Dataset Card for Qasper
Dataset Summary
QASPER is a dataset for question answering on scientific research papers. It consists of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners who also provide supporting evidence to answers.
Supported Tasks and Leaderboards
- question-answering: The dataset can be used to train a model for Question Answering. Success on this task is typically measured by achieving a high F1 score. The official baseline model currently achieves a 33.63 Token F1 score and uses Longformer. This task has an active leaderboard, which can be found here.
- evidence-selection: The dataset can be used to train a model for Evidence Selection. Success on this task is typically measured by achieving a high F1 score. The official baseline model currently achieves a 39.37 F1 score and uses Longformer. This task has an active leaderboard, which can be found here.
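For intuition, answer-level Token F1 is typically computed SQuAD-style as token overlap between a predicted and a gold answer. The following is a minimal sketch of that standard metric, not the official Qasper evaluation script:

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # If either side is empty, F1 is 1.0 only when both are empty.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the seed lexicon", "a seed lexicon"))
```

The official metric additionally normalizes punctuation and articles and takes the maximum F1 over all reference answers for a question; this sketch omits those details.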
Languages
English, as it is used in research papers.
Dataset Structure
Data Instances
A typical instance in the dataset:
{
'id': "Paper ID (string)",
'title': "Paper Title",
'abstract': "paper abstract ...",
'full_text': {
'paragraphs':[["section1_paragraph1_text","section1_paragraph2_text",...],["section2_paragraph1_text","section2_paragraph2_text",...]],
'section_name':["section1_title","section2_title"],...},
'qas': {
'answers':[{
        'annotation_id': ["q1_answer1_annotation_id","q1_answer2_annotation_id"],
'answer': [{
'unanswerable':False,
'extractive_spans':["q1_answer1_extractive_span1","q1_answer1_extractive_span2"],
'yes_no':False,
'free_form_answer':"q1_answer1",
'evidence':["q1_answer1_evidence1","q1_answer1_evidence2",..],
'highlighted_evidence':["q1_answer1_highlighted_evidence1","q1_answer1_highlighted_evidence2",..]
},
{
'unanswerable':False,
'extractive_spans':["q1_answer2_extractive_span1","q1_answer2_extractive_span2"],
'yes_no':False,
'free_form_answer':"q1_answer2",
'evidence':["q1_answer2_evidence1","q1_answer2_evidence2",..],
'highlighted_evidence':["q1_answer2_highlighted_evidence1","q1_answer2_highlighted_evidence2",..]
}],
'worker_id':["q1_answer1_worker_id","q1_answer2_worker_id"]
},{...["question2's answers"]..},{...["question3's answers"]..}],
'question':["question1","question2","question3"...],
'question_id':["question1_id","question2_id","question3_id"...],
'question_writer':["question1_writer_id","question2_writer_id","question3_writer_id"...],
'nlp_background':["question1_writer_nlp_background","question2_writer_nlp_background",...],
'topic_background':["question1_writer_topic_background","question2_writer_topic_background",...],
'paper_read': ["question1_writer_paper_read_status","question2_writer_paper_read_status",...],
'search_query':["question1_search_query","question2_search_query","question3_search_query"...],
}
}
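A minimal sketch of navigating a record with this schema, using a toy dict in place of a real instance (the field names follow the structure above; the values are invented for illustration):

```python
# Toy record mirroring the 'qas' structure above; real records carry many
# questions, each paired (by position) with a list of answer annotations.
record = {
    "title": "Example Paper",
    "qas": {
        "question": ["What is the seed lexicon?"],
        "question_id": ["q1"],
        "answers": [
            {
                "answer": [
                    {
                        "unanswerable": False,
                        "extractive_spans": ["a small list of polarity-bearing words"],
                        "yes_no": None,
                        "free_form_answer": "",
                        "evidence": ["...paragraph text..."],
                    }
                ]
            }
        ],
    },
}

# Questions and their answer lists are parallel arrays: zip them to pair each
# question with all of its annotated answers.
for question, answers in zip(record["qas"]["question"], record["qas"]["answers"]):
    for ann in answers["answer"]:
        print(question, "->", ann["extractive_spans"])
```

With the Hugging Face `datasets` library, real records of this shape are obtained by loading the dataset and iterating over a split.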
Data Fields
The following is an excerpt from the dataset README:
Within "qas", some fields should be obvious. Here is some explanation about the others:
Fields specific to questions:
"nlp_background" shows the experience the question writer had. The values can be "zero" (no experience), "two" (0 - 2 years of experience), "five" (2 - 5 years of experience), and "infinity" (> 5 years of experience). The field may be empty as well, indicating the writer has chosen not to share this information.
"topic_background" shows how familiar the question writer was with the topic of the paper. The values are "unfamiliar", "familiar", "research" (meaning that the topic is the research area of the writer), or null.
"paper_read", when specified shows whether the questionwriter has read the paper.
"search_query", if not empty, is the query the question writer used to find the abstract of the paper from a large pool of abstracts we made available to them.
Fields specific to answers
Unanswerable answers have "unanswerable" set to true. The remaining answers have exactly one of the following fields being non-empty.
- "extractive_spans" are spans in the paper which serve as the answer.
- "free_form_answer" is a written out answer.
- "yes_no" is true iff the answer is Yes, and false iff the answer is No.
"evidence" is the set of paragraphs, figures or tables used to arrive at the answer. Tables or figures start with the string "FLOAT SELECTED"
"highlighted_evidence" is the set of sentences the answer providers selected as evidence if they chose textual evidence. The text in the "evidence" field is a mapping from these sentences to the paragraph level. That is, if you see textual evidence in the "evidence" field, it is guaranteed to be entire paragraphs, while that is not the case with "highlighted_evidence".
Data Splits
|  | Train | Valid |
|---|---|---|
| Number of papers | 888 | 281 |
| Number of questions | 2593 | 1005 |
| Number of answers | 2675 | 1764 |
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
NLP papers: The full text of the papers is extracted from S2ORC (Lo et al., 2020)
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
[More Information Needed]
Annotation process
[More Information Needed]
Who are the annotators?
"The annotators are NLP practitioners, not expert researchers, and it is likely that an expert would score higher"
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
Crowdsourced NLP practitioners
Licensing Information
[More Information Needed]
Citation Information
@inproceedings{Dasigi2021ADO,
  title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
  author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
  booktitle={NAACL},
  year={2021}
}
Contributions
Thanks to @cceyda for adding this dataset.