---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- openbmb/MiniCPM-V-4_5
language:
- multilingual
tags:
- minicpm-v
- vision
- ocr
- multi-image
- video
- custom_code
- abliterated
- uncensored
---

# huihui-ai/Huihui-MiniCPM-V-4_5-abliterated

This is an uncensored version of [openbmb/MiniCPM-V-4_5](https://huggingface.co/openbmb/MiniCPM-V-4_5) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it). Only the text component was processed; the vision component was left untouched. The abliterated model will no longer say "I'm sorry, but I can't assist with that."

## Chat with Image

### 1. [llama.cpp](https://github.com/ggml-org/llama.cpp) Inference

(The `llama-mtmd-cli` binary needs to be compiled first.)

```
llama-mtmd-cli -m huihui-ai/Huihui-MiniCPM-V-4_5-abliterated/GGUF/ggml-model-Q4_K_M.gguf \
    --mmproj huihui-ai/Huihui-MiniCPM-V-4_5-abliterated/GGUF/mmproj-model-f16.gguf \
    -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 \
    --image abc.png -p "What is in the image?"
```

### 2. Transformers Inference

```
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

torch.manual_seed(100)

model = AutoModel.from_pretrained(
    'huihui-ai/Huihui-MiniCPM-V-4_5-abliterated',
    trust_remote_code=True,
    attn_implementation='sdpa',  # sdpa or flash_attention_2, no eager
    torch_dtype=torch.bfloat16
)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('huihui-ai/Huihui-MiniCPM-V-4_5-abliterated', trust_remote_code=True)

image = Image.open('./assets/minicpmo2_6/show_demo.jpg').convert('RGB')

enable_thinking = False  # If enable_thinking=True, thinking mode is enabled
stream = True            # If stream=True, the answer is returned as a generator of text chunks

# First round chat
question = "What is the landform in the picture?"
msgs = [{'role': 'user', 'content': [image, question]}]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    enable_thinking=enable_thinking,
    stream=stream
)

generated_text = ""
for new_text in answer:
    generated_text += new_text
    print(new_text, flush=True, end='')

# Second round chat, passing the history of the multi-turn conversation
msgs.append({"role": "assistant", "content": [generated_text]})
msgs.append({"role": "user", "content": ["What should I pay attention to when traveling here?"]})

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    stream=True
)

generated_text = ""
for new_text in answer:
    generated_text += new_text
    print(new_text, flush=True, end='')
```
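If you prefer a single complete response instead of streamed chunks, you can set `stream=False`; following the upstream MiniCPM-V examples, `model.chat` then returns the full answer as one string. A minimal sketch, reusing the `model`, `tokenizer`, and `image` objects from the example above (the question text is just an illustration):

```
# Non-streaming variation (assumes `model`, `tokenizer`, and `image` from the
# example above). With stream=False, model.chat returns the complete answer
# as a single string rather than a generator of text chunks.
question = "Describe the landform in the picture in one sentence."
msgs = [{'role': 'user', 'content': [image, question]}]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    enable_thinking=True,  # switch on thinking mode for this request
    stream=False           # return the full answer string at once
)
print(answer)
```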
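Since the model is tagged for multi-image input, here is a hedged sketch of multi-image chat following the upstream MiniCPM-V model-card convention of passing several PIL images in one content list. The image paths are placeholders, and `model` and `tokenizer` come from the example above:

```
# Multi-image chat sketch, following the upstream MiniCPM-V examples
# (multiple images in a single content list). image1.jpg / image2.jpg are
# placeholder paths; `model` and `tokenizer` come from the example above.
from PIL import Image

image1 = Image.open('image1.jpg').convert('RGB')
image2 = Image.open('image2.jpg').convert('RGB')
question = 'Compare these two images and describe the differences.'

msgs = [{'role': 'user', 'content': [image1, image2, question]}]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer
)
print(answer)
```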
### Usage Warnings

- **Risk of Sensitive or Controversial Outputs**: This model's safety filtering has been significantly reduced, so it may generate sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
- **Not Suitable for All Audiences**: Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring strict safety.
- **Legal and Ethical Responsibilities**: Users must ensure that their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- **Research and Experimental Use**: This model is recommended for research, testing, or other controlled environments; avoid using it directly in production or public-facing commercial applications.
- **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.

### Donation

##### Your donation helps us continue our development and improvement; even a cup of coffee helps.

- bitcoin:
  ```
  bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
  ```
- Support our work on [Ko-fi](https://ko-fi.com/huihuiai)!