RL for Agents Workshop – Deep Dive on Training Agents with RL and Open Source

Video by Hugging Face via YouTube

Reinforcement learning is becoming central to agentic systems, but moving from RL for LLMs to RL for agents introduces a new set of challenges: environments, rollouts, tool use, inference bottlenecks, reward design, and evaluating multi-step behavior in the real world.

In this live Hugging Face workshop, we bring together researchers and builders working on the frontier of RL for agents. The session will feature short talks followed by a discussion on what is working today, where open methods still fall short, and what comes next.

Speakers include:

– Lewis Tunstall, Hugging Face
– Will Brown, Prime Intellect
– Ofir Press, Princeton University
– Alex Zhang, MIT CSAIL
– additional guests TBA

Topics include:
– training agents with open source tools
– scaling RL for language agents
– multi-step verification and reward design
– benchmarking agent capability beyond static tasks
– recursive reasoning and new agent architectures
