Ksenia Se (Kseniase)
Recent Activity
replied to their post 2 days ago
10 Latest Preference Optimization Techniques
Models need feedback on what makes outputs “good” or “bad.” Policy optimization (PO) turns preferences and rewards into actual training signals. This field is evolving quickly, moving far beyond classics like PPO and GRPO. So here is our overview of the 10 newest PO methods:
1. Pref-GRPO → https://huggingface.co/papers/2508.20751
Stabilizes text-to-image reinforcement learning (RL) with pairwise preference rewards and a unified UNIGENBENCH benchmark (see the pairwise win-rate sketch after this list)
2. PVPO (Policy with Value Preference Optimization) → https://huggingface.co/papers/2508.21104
This critic-free RL method uses a pre-trained model as a reference anchor to reduce bias and guide learning, selecting high-value examples through data pre-sampling (sketched after this list)
3. DCPO (Dynamic Clipping Policy Optimization) → https://huggingface.co/papers/2509.02333
Uses dynamic clipping, which adjusts per-token probability limits for better token exploration, and smooth reward standardization to balance rewards over training steps and prevent wasted updates (see the clipping sketch after this list)
4. ARPO (Agentic Reinforced Policy Optimization) → https://huggingface.co/papers/2507.19849
Optimizes multi-turn LLM agents that use external tools. It uses entropy-based adaptive rollouts to explore after tool calls and an advantage attribution method to better assign credit across steps, leading to more efficient tool use with fewer resources
5. GRPO-RoC (Group Relative Policy Optimization with Resampling-on-Correct) → https://huggingface.co/papers/2508.20722
Oversamples rollouts, then resamples them to keep diverse mistakes and only the highest-quality correct answers. This reduces noise and yields stronger reasoning in coding environments (sketched after this list)
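To make item 1's pairwise-reward idea concrete, here is a minimal Python sketch: rather than scoring each generated image on its own, every pair within a rollout group is judged by a preference model and each sample's win rate becomes its reward. The `preference_model` callable and the group size are placeholders, not an API from the paper.

```python
# Minimal sketch of pairwise win-rate rewards in the spirit of Pref-GRPO.
# `preference_model(prompt, a, b)` is a hypothetical callable returning the
# probability that image `a` is preferred over image `b` for the prompt.
from itertools import combinations

def win_rate_rewards(prompt, images, preference_model):
    assert len(images) >= 2, "needs a group of rollouts to compare"
    wins = [0.0] * len(images)
    for i, j in combinations(range(len(images)), 2):
        p = preference_model(prompt, images[i], images[j])  # P(i beats j)
        wins[i] += p
        wins[j] += 1.0 - p
    # normalize by the number of opponents so each reward lies in [0, 1]
    return [w / (len(images) - 1) for w in wins]
```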
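For item 2, a rough sketch of the reference-anchor idea, assuming the pre-estimated value of a prompt is simply the mean reward of a few reference-model rollouts; the function names and thresholds are illustrative, not taken from the paper.

```python
# Critic-free advantages anchored to a pre-trained reference model (PVPO-style sketch).
def anchored_advantages(policy_rewards, reference_rewards):
    # pre-estimated value of the prompt = mean reward of the reference rollouts
    v_ref = sum(reference_rewards) / len(reference_rewards)
    return [r - v_ref for r in policy_rewards]

def preselect_prompts(prompts, estimate_value, low=0.1, high=0.9):
    # data pre-sampling: keep prompts that are neither trivially solved nor hopeless
    # (thresholds are illustrative)
    return [p for p in prompts if low <= estimate_value(p) <= high]
```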
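For item 3, a hedged sketch of per-token dynamic clipping: the single fixed PPO clip range is replaced by a per-token range that loosens for tokens the old policy considered unlikely, so they still have room to be explored. The widening rule below is an illustrative assumption, not the paper's exact formula.

```python
# PPO-style clipped surrogate with a per-token clip range (DCPO-flavored sketch).
import torch

def dynamically_clipped_surrogate(ratio, advantage, old_prob, base_eps=0.2):
    # ratio, advantage, old_prob: per-token tensors of the same shape
    # lower old-policy probability -> larger epsilon -> looser clipping for that token
    eps = base_eps * (2.0 - old_prob)
    clipped_ratio = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    # surrogate objective to maximize (negate it for a loss)
    return torch.minimum(ratio * advantage, clipped_ratio * advantage).mean()
```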
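And for item 5, a small sketch of resampling-on-correct: from an oversampled group, keep only the cleanest correct rollouts while sampling failures broadly so the model still sees diverse mistakes. The `quality` score (e.g. fewer tool-call errors is better) and the split sizes are illustrative.

```python
# Resample-on-correct filtering over an oversampled rollout group (GRPO-RoC-style sketch).
import random

def resample_on_correct(rollouts, is_correct, quality, keep=8):
    correct = sorted((r for r in rollouts if is_correct(r)), key=quality, reverse=True)
    incorrect = [r for r in rollouts if not is_correct(r)]
    kept_correct = correct[: keep // 2]                      # highest-quality correct answers only
    n_incorrect = min(len(incorrect), keep - len(kept_correct))
    kept_incorrect = random.sample(incorrect, n_incorrect)   # diverse mistakes
    return kept_correct + kept_incorrect
```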
Read further below ⬇️
If you like this, also subscribe to the Turing post: https://www.turingpost.com/subscribe
replied to their post 9 days ago
11 Powerful Image Models
Everyone is buzzing about image generation this week, or more specifically, about Google's Nano-Banana. So today we want to share a list of models that make a great toolkit for image generation + editing + multi-turn refinement.
1. Gemini 2.5 Flash Image, or Nano-Banana → https://deepmind.google/models/gemini/image/
Google’s newest image model with conversational editing, character consistency, and multi-image fusion. Available in AI Studio and the Gemini API. Price: $2.50 per 1M tokens
2. FLUX (Black Forest Labs) → https://bfl.ai/
A family of models known for rich detail, excellent prompt adherence, and fast iterative generation. Offered in several variants, from Pro to open-source, it's accessible via Hugging Face, Replicate, Azure AI Foundry, etc., and used as a base in many pipelines (see the diffusers sketch after this list). Price: $0.025-0.08 per image
3. Midjourney v7 → https://www.midjourney.com/
Enhanced image fidelity, prompt comprehension, and anatomical coherence (hands, bodies, objects) + provides a smart lightbox editor. The Omni-reference tool improves character and object consistency in your images. It remains accessible via Discord with a supporting web interface. Price: $10-60/month
4. Stable Diffusion 3.5 (Stability AI) → https://stability.ai/stable-image
An open-weights line with improved text rendering, photorealism, and prompt adherence compared to earlier versions, built on the MMDiT (Multimodal Diffusion Transformer) architecture. Price: $0.025-0.065 per image
5. OpenAI GPT-Image-1 → https://platform.openai.com/docs/guides/image-generation?image-generation-model=gpt-image-1
It's the same multimodal model that powers ChatGPT's image capabilities, offering high-fidelity image generation, precise edits including inpainting, and accurate text rendering. Available via the Images API (see the sketch after this list). Price: $40 per 1M tokens
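If you want to try an open FLUX variant (item 2) locally, a minimal Hugging Face diffusers sketch looks like this; it assumes `diffusers` and `torch` are installed, a CUDA GPU is available, and you have access to the gated FLUX.1-dev checkpoint (other variants use different repo IDs).

```python
# Minimal text-to-image generation with an open FLUX checkpoint via diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a watercolor banana wearing sunglasses",
    num_inference_steps=28,   # fewer steps = faster, slightly lower quality
    guidance_scale=3.5,
).images[0]
image.save("flux_banana.png")
```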
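And a minimal sketch of calling GPT-Image-1 (item 5) through the Images API, assuming the `openai` Python SDK is installed and `OPENAI_API_KEY` is set; the prompt and size are just examples.

```python
# Generate an image with GPT-Image-1 via the OpenAI Images API.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
result = client.images.generate(
    model="gpt-image-1",
    prompt="a photorealistic banana on a turquoise desk, soft studio light",
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data
with open("banana.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```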
Read further below ⬇️
If you like this, also subscribe to the Turing post: https://www.turingpost.com/subscribe