Commit 3b78033
Parent(s): 52497dc
Update README.md

README.md (CHANGED)

# Introduction

Modern image captioning relies heavily on extracting knowledge from images, such as objects, to capture the concept of the static story in the image. In this paper, we propose a textual visual context dataset for captioning, in which the publicly available COCO Caption dataset (Lin et al., 2014) has been extended with information about the scene (such as the objects in the image). Since this information has textual form, it can be used to bring any NLP task, such as text similarity or semantic relatedness methods, into captioning systems, either as an end-to-end training strategy or as a post-processing approach.

Please refer to the [project page](https://sabirdvd.github.io/project_page/Dataset_2022/index.html) and [Github](https://github.com/ahmedssabir/Visual-Semantic-Relatedness-Dataset-for-Image-Captioning) for more information.
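
As a toy illustration of the post-processing use case, the sketch below re-ranks hypothetical candidate captions by their relatedness to the textual visual context. The candidate captions, the word-overlap similarity, and the way scores are combined are placeholder assumptions for illustration, not the method used in the paper.

```python
# Toy illustration only: re-rank a caption model's candidates by how related
# they are to the objects extracted from the image (the textual visual context).
# The similarity function and score combination below are placeholder assumptions.
def relatedness(caption: str, visual_context: str) -> float:
    """Placeholder word-overlap similarity; in practice a sentence-embedding
    or BERT-based relatedness score would be used."""
    a, b = set(caption.lower().split()), set(visual_context.lower().split())
    return len(a & b) / max(len(a | b), 1)

visual_context = "dog frisbee park"      # objects detected in the image
candidates = [                           # (candidate caption, model confidence)
    ("a man is playing tennis", 0.41),
    ("a dog jumps to catch a frisbee", 0.38),
    ("a group of people standing around", 0.21),
]

# Combine the caption model's confidence with the visual relatedness score.
reranked = sorted(candidates,
                  key=lambda c: c[1] * relatedness(c[0], visual_context),
                  reverse=True)
print(reranked[0][0])  # -> "a dog jumps to catch a frisbee"
```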

# Overview

We enrich COCO-Caption with textual visual context information. We use ResNet152, CLIP, and Faster R-CNN to extract object information for each image, and we apply three filtering approaches to ensure the quality of the dataset: (1) a confidence threshold, to filter out predictions where the object classifier is not confident enough; (2) semantic alignment, using semantic similarity to remove duplicated objects; and (3) a semantic relatedness score used as a soft label, to guarantee that the visual context and the caption are strongly related. In particular, we use Sentence-RoBERTa to compute a cosine-similarity soft score and then apply a threshold to obtain the final binary label (1 if the score exceeds the threshold of 0.2, 0.3, or 0.4, and 0 otherwise). Finally, to take advantage of the overlap between the caption and the visual context, and to extract global information, we use BERT followed by a shallow CNN (<a href="https://arxiv.org/abs/1408.5882">Kim, 2014</a>) to estimate the visual relatedness score.
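
For illustration, a minimal sketch of the three filtering steps is given below, assuming the sentence-transformers library with an off-the-shelf RoBERTa-based checkpoint as a stand-in for Sentence-RoBERTa; the detector output format, the duplicate-similarity threshold, and the example data are assumptions, while the soft-label thresholds (0.2, 0.3, 0.4) follow the description above.

```python
# Minimal sketch of the dataset filtering and soft-labeling steps (illustrative).
from sentence_transformers import SentenceTransformer, util

# Hypothetical classifier output for one image: (object label, confidence).
detections = [("dog", 0.93), ("puppy", 0.88), ("frisbee", 0.81), ("bench", 0.12)]
caption = "a dog jumps to catch a frisbee in the park"

model = SentenceTransformer("all-distilroberta-v1")  # stand-in for Sentence-RoBERTa

# (1) Threshold: drop predictions the object classifier is not confident about.
CONF_TH = 0.4  # assumed value
objects = [label for label, conf in detections if conf >= CONF_TH]

# (2) Semantic alignment: remove near-duplicate objects via embedding similarity.
DUP_TH = 0.75  # assumed value
kept = []
for label in objects:
    emb = model.encode(label, convert_to_tensor=True)
    if all(util.cos_sim(emb, model.encode(k, convert_to_tensor=True)).item() < DUP_TH
           for k in kept):
        kept.append(label)

# (3) Semantic relatedness as a soft label: cosine similarity between the
# visual context and the caption, thresholded (0.2 / 0.3 / 0.4) into a binary label.
visual_context = " ".join(kept)
soft_score = util.cos_sim(model.encode(visual_context, convert_to_tensor=True),
                          model.encode(caption, convert_to_tensor=True)).item()
label = 1 if soft_score > 0.3 else 0
print(kept, round(soft_score, 3), label)
```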

For a quick start, please have a look at this [demo](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset/blob/main/BERT_CNN_Visual_re_ranker_demo.ipynb).
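
The linked demo covers the full BERT + CNN re-ranker; as a rough sketch of the general idea only, the snippet below runs a Kim-style shallow CNN over BERT token embeddings of a caption / visual-context pair to produce a relatedness score. The layer sizes, kernel widths, and pooling scheme are illustrative assumptions, not the exact architecture used for the dataset.

```python
# Rough sketch of a BERT + shallow CNN (Kim, 2014 style) relatedness scorer.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCnnRelatedness(nn.Module):
    def __init__(self, name="bert-base-uncased", n_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)
        hidden = self.bert.config.hidden_size
        # One 1-D convolution per kernel width, as in Kim (2014).
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes])
        self.out = nn.Linear(n_filters * len(kernel_sizes), 1)

    def forward(self, input_ids, attention_mask):
        # BERT token embeddings over the "caption [SEP] visual context" pair.
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h = h.transpose(1, 2)                      # (batch, hidden, seq_len)
        feats = [torch.relu(c(h)).max(dim=2).values for c in self.convs]
        score = self.out(torch.cat(feats, dim=1))  # shallow CNN head
        return torch.sigmoid(score)                # relatedness score in [0, 1]

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertCnnRelatedness()
batch = tok("a dog jumps to catch a frisbee", "dog frisbee park",
            return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    print(model(batch["input_ids"], batch["attention_mask"]))
```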