prithivMLmods committed on
Commit a91c580 · verified · 1 Parent(s): 84c59cc

Update README.md
README.md CHANGED

tags:
  - text-generation-inference
  - VLM
  - Callisto
---

# **Callisto-OCR3-2B-Instruct**

The **Callisto-OCR3-2B-Instruct** model is a fine-tuned version of **Qwen/Qwen2-VL-2B-Instruct**, optimized for **messy handwriting recognition**, **Optical Character Recognition (OCR)**, **English language understanding**, and **math problem solving with LaTeX-formatted output**. It combines a conversational interface with visual and textual understanding to handle multi-modal tasks effectively.

#### Key Enhancements:

* **State-of-the-art understanding of images across resolutions and aspect ratios**: Callisto-OCR3 achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, and MTVQA.

* **Enhanced handwriting OCR**: Optimized for recognizing and interpreting **messy handwriting** with high accuracy, making it well suited to digitizing handwritten documents and notes.

* **Understanding of videos longer than 20 minutes**: Callisto-OCR3 can process long videos, enabling high-quality video-based question answering, transcription, and content generation.

* **Agentic operation of mobile devices, robots, and more**: With advanced reasoning and decision-making, Callisto-OCR3 can be integrated with mobile phones, robots, and other devices to perform automated tasks based on visual and textual input.

* **Multilingual support**: Besides English and Chinese, Callisto-OCR3 supports text recognition inside images in multiple languages, including European languages, Japanese, Korean, Arabic, and Vietnamese.

### How to Use

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Callisto-OCR3-2B-Instruct", torch_dtype="auto", device_map="auto"
)

# Enable flash_attention_2 for better acceleration and memory optimization
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "prithivMLmods/Callisto-OCR3-2B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# Default processor
processor = AutoProcessor.from_pretrained("prithivMLmods/Callisto-OCR3-2B-Instruct")

# Customize the visual token range to balance speed against memory
# min_pixels = 256 * 28 * 28
# max_pixels = 1280 * 28 * 28
# processor = AutoProcessor.from_pretrained(
#     "prithivMLmods/Callisto-OCR3-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
# )

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Recognize the handwriting in this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
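The commented `min_pixels`/`max_pixels` values map directly to a visual-token budget: in Qwen2-VL, each visual token covers a 28×28 pixel patch of the resized image, so a pixel cap of `1280*28*28` corresponds to roughly 1280 visual tokens. A quick sketch of that arithmetic (the helper function is illustrative, not part of the library API):

```python
PATCH = 28  # Qwen2-VL visual tokens each cover a 28x28 pixel patch

def token_budget(pixels: int) -> int:
    """Approximate visual-token count allowed for a given pixel budget (illustrative)."""
    return pixels // (PATCH * PATCH)

min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
print(token_budget(min_pixels), token_budget(max_pixels))  # 256 1280
```

Lowering `max_pixels` shrinks large images before encoding, trading OCR fidelity on dense pages for faster, lighter inference.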

### Buffering Output
```python
# `streamer` is assumed to be a transformers TextIteratorStreamer attached to a
# model.generate() call running in a background thread.
buffer = ""
for new_text in streamer:
    buffer += new_text
    # Remove <|im_end|> or similar special tokens from the output
    buffer = buffer.replace("<|im_end|>", "")
    yield buffer
```
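The snippet above assumes a `streamer` already exists. The buffering logic itself can be exercised standalone by substituting any iterable of text chunks for the streamer — a minimal, self-contained sketch (the chunk values are invented for illustration):

```python
def buffered_stream(chunks):
    """Accumulate streamed text chunks, scrubbing special tokens.

    `chunks` stands in for a TextIteratorStreamer; in real use the chunks
    would come from model.generate() running in a background thread.
    """
    buffer = ""
    for new_text in chunks:
        buffer += new_text
        # Remove <|im_end|> or similar special tokens from the output
        buffer = buffer.replace("<|im_end|>", "")
        yield buffer

# Simulated stream, as a streamer might emit it
chunks = ["The quick ", "brown fox", "<|im_end|>"]
print(list(buffered_stream(chunks))[-1])  # The quick brown fox
```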

### **Key Features**

1. **Advanced Handwriting OCR:**
   - Excels at recognizing and transcribing **messy and cursive handwriting** into digital text with high accuracy.

2. **Vision-Language Integration:**
   - Combines **image understanding** with **natural language processing** to convert images into text.

3. **Optical Character Recognition (OCR):**
   - Extracts and processes textual information from images with precision.

4. **Math and LaTeX Support:**
   - Solves math problems and outputs equations in **LaTeX format**.

5. **Conversational Capabilities:**
   - Designed to handle **multi-turn interactions**, providing context-aware responses.

6. **Image-Text-to-Text Generation:**
   - Inputs can include **images, text, or a combination**, and the model generates descriptive or problem-solving text.
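
For the multi-turn interactions described above, prior turns are simply appended to the `messages` list before re-applying the chat template. A minimal sketch of that structure (the image URL and turn contents are invented for illustration):

```python
# A three-turn conversation in the messages format the processor expects
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/receipt.jpg"},  # illustrative URL
            {"type": "text", "text": "Transcribe the handwritten total on this receipt."},
        ],
    },
    {"role": "assistant", "content": [{"type": "text", "text": "The total reads $42.17."}]},
    {"role": "user", "content": [{"type": "text", "text": "Convert that amount to words."}]},
]

# The chat template serializes the full history, so the model answers the
# final user turn with the earlier image and assistant reply as context.
print([m["role"] for m in messages])  # ['user', 'assistant', 'user']
```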