All HF Hub posts

sergiopaniego 
posted an update 3 days ago
The Christmas holidays are here! 🎄
Thinking about learning something new in AI?

@huggingface offers 12 FREE courses covering all the relevant topics, for every level of experience. A great challenge for the holidays (and worth saving for later 🙄)

Let’s explore them!

🧠 𝗟𝗟𝗠 𝗖𝗼𝘂𝗿𝘀𝗲: large language models with HF tools
https://huggingface.co/learn/llm-course

🤖 𝗔𝗴𝗲𝗻𝘁𝘀 𝗖𝗼𝘂𝗿𝘀𝗲: build and deploy AI agents
https://huggingface.co/learn/agents-course

🎨 𝗗𝗶𝗳𝗳𝘂𝘀𝗶𝗼𝗻 𝗖𝗼𝘂𝗿𝘀𝗲: diffusion models with 🤗 Diffusers
https://huggingface.co/learn/diffusion-course

🔊 𝗔𝘂𝗱𝗶𝗼 𝗖𝗼𝘂𝗿𝘀𝗲: transformers for audio tasks
https://huggingface.co/learn/audio-course

🎮 𝗗𝗲𝗲𝗽 𝗥𝗟 𝗖𝗼𝘂𝗿𝘀𝗲: deep reinforcement learning
https://huggingface.co/learn/deep-rl-course

👁️ 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗖𝗼𝘂𝗿𝘀𝗲: modern computer vision with HF
https://huggingface.co/learn/computer-vision-course

🦾 𝗥𝗼𝗯𝗼𝘁𝗶𝗰𝘀 𝗖𝗼𝘂𝗿𝘀𝗲 (𝗟𝗲𝗥𝗼𝗯𝗼𝘁): learning-based robotics
https://huggingface.co/learn/robotics-course

🧩 𝗠𝗖𝗣 𝗖𝗼𝘂𝗿𝘀𝗲: Model Context Protocol explained
https://huggingface.co/learn/mcp-course

🧪 𝗔 𝗦𝗺𝗼𝗹 𝗖𝗼𝘂𝗿𝘀𝗲: post-training AI models
https://huggingface.co/learn/a-smol-course

🕹️ 𝗠𝗟 𝗳𝗼𝗿 𝗚𝗮𝗺𝗲𝘀: AI in game development
https://huggingface.co/learn/ml-for-games-course

🧊 𝗠𝗟 𝗳𝗼𝗿 𝟯𝗗: machine learning for 3D data
https://huggingface.co/learn/ml-for-3d-course

📘 𝗢𝗽𝗲𝗻-𝗦𝗼𝘂𝗿𝗰𝗲 𝗔𝗜 𝗖𝗼𝗼𝗸𝗯𝗼𝗼𝗸: practical AI notebooks
https://huggingface.co/learn/cookbook

All of them can be found here: https://huggingface.co/learn
dhruv3006 
posted an update 2 days ago
view post
Post
2582
OpenAPI specs are a great way to describe APIs in a clear, standard format. They provide a full overview of endpoints, methods, parameters, and so on, which makes working with APIs easier and more consistent.

Voiden lets you turn your OpenAPI spec into organized, ready-to-use API request files.

Just import your OpenAPI file, and you can immediately browse your endpoints, grouped by tags, and start testing without any manual setup.

The generated requests come pre-configured but fully editable, so you can customize them however you need.

If you want to get started with your existing APIs or try out new ones, this can save you quite a bit of time.
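
Under the hood, an import like this boils down to walking the spec's paths and grouping operations by tag. Here is a minimal Python sketch of that step (not Voiden's actual code; assumes PyYAML and a local openapi.yaml):

```python
# Minimal sketch of grouping OpenAPI endpoints by tag, the same view an
# import produces. NOT Voiden's code; assumes PyYAML and a local spec file.
from collections import defaultdict
import yaml

with open("openapi.yaml") as f:
    spec = yaml.safe_load(f)

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}
by_tag = defaultdict(list)
for path, item in spec.get("paths", {}).items():
    for method, op in item.items():
        if method not in HTTP_METHODS:
            continue  # skip non-operation keys such as "parameters"
        for tag in op.get("tags", ["untagged"]):
            by_tag[tag].append(f"{method.upper()} {path}")

for tag, endpoints in sorted(by_tag.items()):
    print(tag)
    for endpoint in endpoints:
        print("  ", endpoint)
```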

Read the docs here: https://docs.voiden.md/docs/getting-started-section/getting-started/openapi-imports/
Jiaqi-hkust 
posted an update 2 days ago
We have open-sourced Robust-R1 (AAAI 2026 Oral), a new paradigm for anti-degradation and robustness enhancement in multimodal large models.

Multimodal Large Language Models struggle to maintain reliable performance under extreme real-world visual degradations, which impede their practical robustness. Existing robust MLLMs predominantly rely on implicit training/adaptation that focuses solely on visual encoder generalization, suffering from limited interpretability and isolated optimization.

To overcome these limitations, we propose Robust-R1, a novel framework that explicitly models visual degradations through structured reasoning chains. Our approach integrates: (i) supervised fine-tuning for degradation-aware reasoning foundations, (ii) reward-driven alignment for accurately perceiving degradation parameters, and (iii) dynamic reasoning depth scaling adapted to degradation intensity. To support this, we introduce a specialized 11K dataset of realistic degradations synthesized across four critical real-world visual processing stages, each annotated with structured chains connecting degradation parameters, perceptual influence, the pristine semantic reasoning chain, and a conclusion.

Comprehensive evaluations demonstrate state-of-the-art robustness: Robust-R1 outperforms all general and robust baselines on the real-world degradation benchmark R-Bench, while maintaining superior anti-degradation performance under multi-intensity adversarial degradations on MMMB, MMStar, and RealWorldQA.
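
As a toy illustration of component (ii) only (this is not the paper's code; the parameter names and the exponential-decay form are our assumptions), a reward for perceiving degradation parameters might score predicted values against annotated ones:

```python
# Toy sketch, NOT the paper's implementation: reward a model for accurately
# perceiving degradation parameters, as in component (ii). The exponential
# decay and parameter names (blur_sigma, jpeg_q) are illustrative assumptions.
import math

def degradation_reward(pred: dict, gt: dict, scale: float = 1.0) -> float:
    """Average exp(-|error| / scale) over the degradation parameters both share."""
    keys = set(pred) & set(gt)
    if not keys:
        return 0.0
    return sum(math.exp(-abs(pred[k] - gt[k]) / scale) for k in keys) / len(keys)

# Example: model predictions vs. ground-truth annotations for two parameters.
print(degradation_reward({"blur_sigma": 2.1, "jpeg_q": 38.0},
                         {"blur_sigma": 2.0, "jpeg_q": 35.0}, scale=5.0))
```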

We have made our paper, code, data, model weights, and demo fully open source:
Paper: Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding (2512.17532) (upvotes appreciated)
GitHub code: https://github.com/jqtangust/Robust-R1 (stars appreciated)
HF model: https://huggingface.co/Jiaqi-hkust/Robust-R1
HF data: Jiaqi-hkust/Robust-R1
HF Space: Jiaqi-hkust/Robust-R1

We sincerely invite everyone to give it a try.

nicolay-r 
posted an update 3 days ago
Time-Effective LLM Querying in Information Retrieval Tasks

🎤 Last week, at the Research Colloquium at Technische Universität Chemnitz, we presented a framework for time-effective data handling with prompting schemas. The video of the talk is now available 👇️

🎬️ Video: https://youtu.be/pa8jGOhHViI
🌟 Framework (bulk-chain): https://github.com/nicolay-r/bulk-chain

🔑 bulk-chain solves the following problems:
✅ Efficient handling of CoT schemas over large numbers of prompts and the parameters they depend on (batching policies)
✅ Easy application to data iterators (dataset handling)
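
To make the batching idea concrete, here is a generic sketch of the pattern (this is NOT bulk-chain's actual API, which is documented in the repo; `call_llm` is a hypothetical batched-inference stub):

```python
# Generic sketch of batched CoT-schema execution over a data iterator.
# NOT bulk-chain's real API; call_llm is a hypothetical batched-inference stub.
from itertools import islice
from typing import Callable, Iterable, Iterator

def batched(rows: Iterable[dict], size: int) -> Iterator[list[dict]]:
    it = iter(rows)
    while batch := list(islice(it, size)):
        yield batch

def run_schema(rows: Iterable[dict],
               schema: list[str],
               call_llm: Callable[[list[str]], list[str]],
               batch_size: int = 8) -> Iterator[dict]:
    for batch in batched(rows, batch_size):
        for step, template in enumerate(schema):
            prompts = [template.format(**row) for row in batch]
            answers = call_llm(prompts)  # one batched call per CoT step
            for row, answer in zip(batch, answers):
                row[f"step_{step}"] = answer  # later steps can reference it
        yield from batch
```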
inoculatemedia 
posted an update 1 day ago
I’m opening the waitlist for what I believe to be the most advanced multimodal bridge for A/V professionals: txt2img, img2video, editing, ProRes export, LUT application, Pexels and TouchDesigner integrations, music and voice generation, and multichannel mixing.

Announcing: Lilikoi by Haawke AI

Teaser video made entirely with Lilikoi:
https://youtu.be/-O7DH7vFkYg?si=q2t5t6WjQCk2Cp0w

https://lilikoi.haawke.com

Technical brief:
https://haawke.com/technical_brief.html

ibragim-bad 
posted an update 2 days ago
🎄 67,074 Qwen3-Coder OpenHands trajectories + 2 RFT checkpoints.

We release: 67,000+ trajectories from 3,800 resolved issues in 1,800+ Python repos.
About 3x more successful trajectories and 1.5x more repos than our previous dataset.
Trajectories are long: 64 turns on average, with up to 100 turns and up to 131k context length.

> RFT on this data, SWE-bench Verified:
Qwen3-30B-Instruct: 25.7% → 50.3% Pass@1.
Qwen3-235B-Instruct: 46.2% → 61.7% Pass@1.
Also strong gains on SWE-rebench September.
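
(For readers new to the metric: Pass@1 is the k=1 case of the standard unbiased pass@k estimator from Chen et al., 2021. A minimal implementation, included for reference only:)

```python
# Standard unbiased pass@k estimator (Chen et al., 2021). Pass@1 reduces to
# the fraction of problems solved by a single sampled attempt.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples per problem, c = correct samples among them, k = budget."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=10, c=3, k=1))  # 0.3
```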

> We also ran extensive evals.
We ran OpenHands with 100- and 500-turn limits and compared models under both.
We evaluated on SWE-bench Verified and several months of SWE-rebench.

> We also checked the tests written by the models:
how often the tests are correct, and how often the final patch passes its own tests.
This yields a pool of tests for verifiers and auto-graders.

> Fully permissive licenses
Dataset and models: https://huggingface.co/collections/nebius/openhands-trajectories

Blog post: https://nebius.ai/blog/posts/openhands-trajectories-with-qwen3-instruct
AbstractPhil 
posted an update 2 days ago
The geofractal getting-started guide is now available, covering bulk ablation for fusion, simple towers, oscillator capacity, and substructure systemic associative capacity.
Many formulas were tested: 92 collective tests, bulk oscillation experiments, and more. All of them either coalesce into the correct behavior or fail visibly, which means the system is robust enough to declare some tools functionally valid, though not yet scalable.

AI crash course available:
https://github.com/AbstractEyes/geofractal/blob/main/ai_helpers/v101_claude_helpers.txt
Feed it to GPT, Claude, or Grok and they will assist.

Getting started guide:
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/router/GETTING_STARTED.md

The geofractal router architecture is in its prototype phase:
https://github.com/AbstractEyes/geofractal

This is likely one of its final growth phases before full production capacity is ramped up. The architecture is not for novices; it's meant for experts to get ideas, borrow code, use the library's capabilities, or simply tell an AI what to do. Most files in current production have good descriptions for AI integration.

Transfer learning notebook available here:
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/router/Router_Transfer_Learning-12_19_25.ipynb

Stress test and multiple diagnostics available here:
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/router/components/diagnostics/

WideRouter compilation capacity available:
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/router/wide_router.py

The wide router compiler organizes similar towers into stacked, staged combinations before compiling with torch.compile. This is experimental, but it has shown increased speed across multiple wide-model structures and will serve its purpose in the future.
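
As a rough illustration of the stacking idea (this is not geofractal's actual code; the StackedTowers class and shapes are assumptions), same-shaped towers can be fused into one batched module before handing it to torch.compile:

```python
# Rough sketch, NOT geofractal's implementation: fuse N same-shaped linear
# towers into one batched matmul, then compile the combined module.
import torch
import torch.nn as nn

class StackedTowers(nn.Module):
    """Run N identically shaped linear towers as a single batched matmul."""
    def __init__(self, towers: list[nn.Linear]):
        super().__init__()
        # [N, out, in] weights and [N, out] biases: one tensor for all towers
        self.weight = nn.Parameter(torch.stack([t.weight.detach() for t in towers]))
        self.bias = nn.Parameter(torch.stack([t.bias.detach() for t in towers]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, batch, in] -> [N, batch, out] in one baddbmm call
        return torch.baddbmm(self.bias.unsqueeze(1), x, self.weight.transpose(1, 2))

towers = [nn.Linear(64, 64) for _ in range(8)]
stacked = torch.compile(StackedTowers(towers))  # compile the fused module
print(stacked(torch.randn(8, 4, 64)).shape)     # torch.Size([8, 4, 64])
```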
TravisMuhlestein 
posted an update 2 days ago
From AI demos to production systems: what breaks when agents become autonomous?

A recurring lesson from production AI deployments is that most failures are system failures, not model failures.

As organizations move beyond pilots, challenges increasingly shift toward:

• Agent identity and permissioning
• Trust boundaries between agents and human operators
• Governance and auditability for autonomous actions
• Security treated as a first-class architectural constraint
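
As a toy illustration of the first three points (a hypothetical pattern, not drawn from the article or any specific tool): give each agent an identity with explicit scopes, gate every tool call on them, and log the decision for audit.

```python
# Hypothetical pattern, not from the article: scope-gated tool calls with an
# audit trail, covering agent identity, permissioning, and auditability.
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

class AgentIdentity:
    def __init__(self, agent_id: str, scopes: set[str]):
        self.agent_id = agent_id
        self.scopes = scopes

def invoke_tool(agent: AgentIdentity, tool: str, required_scope: str) -> str:
    allowed = required_scope in agent.scopes
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent.agent_id,
        "tool": tool,
        "scope": required_scope,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent.agent_id} lacks scope {required_scope!r}")
    return f"{tool} executed"  # dispatch to the real tool here

agent = AgentIdentity("billing-agent", {"invoices:read"})
print(invoke_tool(agent, "list_invoices", "invoices:read"))
```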

This recent Fortune article highlights how enterprises are navigating that transition, including work with AWS’s AI Innovation Lab.

Open question for the community:
What architectural patterns or tooling are proving effective for managing identity, permissions, and safety in autonomous or semi-autonomous agent systems in production?

Context: https://fortune.com/2025/12/19/amazon-aws-innovation-lab-aiq/