
Search & Research @olmmlo-cmd Updated 2/18/2026

Agent Lightning OpenClaw Plugin & Skill | ClawHub

Looking to integrate Agent Lightning into your AI workflows? This free OpenClaw plugin from ClawHub helps you automate search & research tasks instantly, without having to write custom tools from scratch.

What this skill does

Microsoft Research's agent training framework. Optimizes AI agents with Reinforcement Learning, Automatic Prompt Optimization, and Supervised Fine-tuning. Zero code change required. Works with LangChain, AutoGen, CrewAI, OpenAI Agent SDK.

Install

npx clawhub@latest install agent-lightning

Full SKILL.md

Metadata table.

name: agent-lightning
version: 1.0.0
description: Microsoft Research's agent training framework. Optimizes AI agents with Reinforcement Learning, Automatic Prompt Optimization, and Supervised Fine-tuning. Zero code change required. Works with LangChain, AutoGen, CrewAI, OpenAI Agent SDK.
license: MIT
homepage: https://microsoft.github.io/agent-lightning/
repository: https://github.com/microsoft/agent-lightning
tags: agent-training, reinforcement-learning, prompt-optimization, fine-tuning, microsoft, rlhf, agent-improvement


Agent Lightning ⚡

Microsoft Research's agent training framework. Turn your AI agents into optimizable beasts with (almost) zero code changes.

Core Features

  • 🔌 Universal Compatibility: Works with LangChain, OpenAI Agent SDK, AutoGen, CrewAI, Microsoft Agent Framework, or plain Python OpenAI
  • 🎯 Selective Optimization: Optimize one or more agents in a multi-agent system
  • 🧠 Multiple Algorithms: Reinforcement Learning (RL), Automatic Prompt Optimization (APO), Supervised Fine-tuning (SFT)
  • ⚡ Zero Code Change: Use the tracer for zero-touch instrumentation, or add agl.emit_xxx() helpers; either way, your agent keeps running as usual

Installation

pip install agentlightning

For the latest nightly build:

pip install --upgrade --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ --pre agentlightning

Quick Start

1. Instrument Your Agent

Option A: Add emit helpers (recommended)

import agentlightning as agl

# In your agent's tool calls
response = agl.emit_tool_call(
    model=model,
    messages=messages,
    tools=tools,
    context={"task": "search"}
)

Option B: Use tracer (zero code change)

from agentlightning import tracer

# Wrap the agent run in a trace context; input_data, your_agent,
# and user_query are placeholders for your own objects
with tracer.trace("my-agent", input_data):
    result = your_agent.run(user_query)

2. Create Training Config

# config.yaml
agent:
  name: "my-agent"
  type: "openai"  # openai, langchain, autogen, crewai

training:
  algorithm: "grpo"  # grpo, apo, sft, rloo
  episodes: 100
  batch_size: 16
  
environment:
  eval_tasks:
    - "math"
    - "coding"
    - "reasoning"
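Before launching a run, it can help to sanity-check the parsed config. The sketch below is an illustrative, stdlib-only helper (not part of the agentlightning API); the allowed values simply mirror the comments in the YAML above.

```python
# Illustrative config sanity check; not part of the agentlightning API.
VALID_AGENT_TYPES = {"openai", "langchain", "autogen", "crewai"}
VALID_ALGORITHMS = {"grpo", "apo", "sft", "rloo"}

def validate_config(cfg: dict) -> list[str]:
    """Return a list of problems found in a parsed config dict."""
    errors = []
    if cfg.get("agent", {}).get("type") not in VALID_AGENT_TYPES:
        errors.append("agent.type must be one of: " + ", ".join(sorted(VALID_AGENT_TYPES)))
    training = cfg.get("training", {})
    if training.get("algorithm") not in VALID_ALGORITHMS:
        errors.append("training.algorithm must be one of: " + ", ".join(sorted(VALID_ALGORITHMS)))
    episodes = training.get("episodes")
    if not isinstance(episodes, int) or episodes <= 0:
        errors.append("training.episodes must be a positive integer")
    return errors

cfg = {
    "agent": {"name": "my-agent", "type": "openai"},
    "training": {"algorithm": "grpo", "episodes": 100, "batch_size": 16},
}
print(validate_config(cfg))  # [] for the config above
```

An empty list means the config matches the shapes shown in this doc; anything else is worth fixing before spending episodes on a broken run.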

3. Run Training

agent-lightning train --config config.yaml

Algorithms

| Algorithm | Use Case | Description |
| --- | --- | --- |
| GRPO | General RL | Group Relative Policy Optimization; stable, works well for most agents |
| APO | Prompt tuning | Automatic Prompt Optimization; improves system prompts |
| SFT | Supervised fine-tuning | Supervised fine-tuning with preference data |
| RLOO | Long-horizon tasks | RLOO for tasks with sparse rewards |
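For intuition on the GRPO row: GRPO scores each rollout relative to the other rollouts in its group, so no separate value network is needed. A framework-free sketch of that normalization step (illustrative only, not agentlightning internals):

```python
# Group-relative advantage: normalize each rollout's reward against its
# group's mean and std. Sketch of the idea behind GRPO, not library internals.
def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four rollouts of the same task with different rewards
print(group_relative_advantages([1.0, 0.0, 0.5, 0.5]))
```

Rollouts above the group mean get positive advantages and are reinforced; rollouts below it are discouraged.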

Usage Commands

agent-lightning train

Train your agent with the configured algorithm.

agent-lightning eval

Evaluate agent on benchmark tasks.

agent-lightning export

Export trained model/prompts for deployment.

agent-lightning serve

Launch serving endpoint for trained agent.

Example: SQL Agent Training

See full example: Train SQL Agent with RL

from agentlightning import Agent, RLConfig, GRPOTrainer

# 1. Define your agent (execute_sql and query_schema are your own tool functions)
sql_agent = Agent(
    name="sql-agent",
    system_prompt="You are a SQL expert...",
    tools=[execute_sql, query_schema]
)

# 2. Configure RL training
config = RLConfig(
    algorithm="grpo",
    episodes=500,
    learning_rate=1e-4
)

# 3. Train
trainer = GRPOTrainer(config=config)
trainer.train(sql_agent, eval_tasks=["sql-generation"])
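A training run like this needs a reward signal. A common choice for SQL tasks is execution-result overlap; the helper below is a hypothetical example of such a scoring function (the exact reward interface depends on your trainer setup):

```python
# Hypothetical execution-match reward for the SQL task above; the exact
# reward interface depends on your trainer setup. This shows the scoring idea.
def sql_reward(predicted_rows: list[tuple], gold_rows: list[tuple]) -> float:
    """Fraction of gold result rows reproduced by the predicted query."""
    gold = set(gold_rows)
    if not gold:
        return 1.0 if not predicted_rows else 0.0
    return len(set(predicted_rows) & gold) / len(gold)

print(sql_reward([(1, "a"), (2, "b")], [(1, "a"), (2, "b")]))  # 1.0
print(sql_reward([(1, "a")], [(1, "a"), (2, "b")]))            # 0.5
```

Partial credit like this gives RL a denser signal than a strict exact-match 0/1 reward.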

Integration with Clawdbot

Environment Variables

# Required for training
export OPENAI_API_KEY="sk-..."

# Optional: for remote storage
export AGL_STORAGE="s3://my-bucket/agent-lightning/"
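In Python, the same variables can be picked up at startup. The helper below is an illustrative sketch; the "./agl-runs/" local fallback for AGL_STORAGE is an assumption for this example, not a documented agentlightning default.

```python
import os

# Illustrative startup check; "./agl-runs/" as the AGL_STORAGE fallback is an
# assumption for this sketch, not a documented agentlightning default.
def resolve_training_env(env=os.environ) -> dict:
    if "OPENAI_API_KEY" not in env:
        raise RuntimeError("OPENAI_API_KEY must be set before training")
    return {
        "api_key": env["OPENAI_API_KEY"],
        "storage": env.get("AGL_STORAGE", "./agl-runs/"),
    }

print(resolve_training_env({"OPENAI_API_KEY": "sk-test"})["storage"])  # ./agl-runs/
```

Failing fast on the missing API key beats discovering it mid-run after episodes have already been spent.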

Python API

from agentlightning import LightningStore, GRPOTrainer

# LightningStore keeps tasks, resources, and traces in sync
store = LightningStore()

# Read traces, learn, and update prompts
trainer = GRPOTrainer(store=store)
trainer.train(agent=my_agent)

Monitoring Training

# Launch dashboard
agent-lightning dashboard --port 8080

# View logs
tail -f ~/.agent-lightning/logs/training.log

Best Practices

  1. Start Small: Begin with 10-50 episodes to verify setup
  2. Define Clear Rewards: Design reward functions that match your goal
  3. Use Evaluation Tasks: Always eval on held-out tasks
  4. Checkpoint Frequently: Save model every N episodes
  5. Monitor Convergence: Watch loss curves in dashboard
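Practice 4 above is easy to wire into any loop. The sketch below shows the "checkpoint every N episodes" pattern in plain Python; a real trainer will expose its own checkpoint hooks.

```python
# Plain-Python sketch of "checkpoint every N episodes" (practice 4);
# real trainers expose their own checkpoint hooks for this.
def training_loop(episodes: int, checkpoint_every: int, save_fn) -> None:
    for ep in range(1, episodes + 1):
        # ... run one training episode here ...
        if ep % checkpoint_every == 0:
            save_fn(ep)  # persist model/prompt state at this episode

saved = []
training_loop(episodes=10, checkpoint_every=3, save_fn=saved.append)
print(saved)  # [3, 6, 9]
```

Frequent checkpoints also make it cheap to roll back if the loss curves in the dashboard show a run diverging.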

Resources

Citation

If you use Agent Lightning in research:

@misc{luo2025agentlightningtrainai,
  title={Agent Lightning: Train ANY AI Agents with Reinforcement Learning},
  author={Xufang Luo and Yuge Zhang and Zhiyuan He and Zilong Wang and Siyun Zhao and Dongsheng Li and Luna K. Qiu and Yuqing Yang},
  year={2025},
  eprint={2508.03680},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
Original Repository URL: https://github.com/openclaw/skills/blob/main/skills/olmmlo-cmd/agent-lightning
Latest commit: https://github.com/openclaw/skills/commit/82f64a508e0478308c551c5e4f5983370d9481ed

Related skills

If this matches your use case, these are close alternatives in the same category.

Personal knowledge base powered by Ensue for capturing and retrieving understanding. Use when user wants to save knowledge, recall what they know, manage their toolbox, or build on past learnings. Triggers on "save this", "remember", "what do I know about", "add to toolbox", "my notes on", "store this concept".

academic-deep-research

Transparent, rigorous research with full methodology — not a black-box API wrapper. Conducts exhaustive investigation through mandated 2-cycle research per theme, APA 7th citations, evidence hierarchy, and 3 user checkpoints. Self-contained using native OpenClaw tools (web_search, web_fetch, sessions_spawn). Use for literature reviews, competitive intelligence, or any research requiring academic rigor and reproducibility.

academic-writer

Professional LaTeX writing assistant. Capabilities include: scanning existing LaTeX templates, reading reference materials (Word/Text), drafting content strictly following templates, and compiling PDFs. Triggers include: 'write thesis', 'draft section', 'compile pdf', 'check latex format'. Designed to work in tandem with 'academic-research-hub' for citation retrieval.

academic-writing

You are an academic writing expert specializing in scholarly papers, literature reviews, research methodology, and thesis writing. You must adhere to strict academic standards in all outputs. Core Requirements: 1. Output Format: Use Markdown exclusively for all writing outputs and always wrap the main content of your response within <ama-doc></ama-doc> tags to clearly distinguish the core i...

academic-writing-refiner

Refine academic writing for computer science research papers targeting top-tier venues (NeurIPS, ICLR, ICML, AAAI, IJCAI, ACL, EMNLP, NAACL, CVPR, WWW, KDD, SIGIR, CIKM, and similar). Use this skill whenever a user asks to improve, polish, refine, edit, or proofread academic or research writing — including paper drafts, abstracts, introductions, related work sections, methodology descriptions, experiment write-ups, or conclusion sections. Also trigger when users paste LaTeX content and ask for writing help, mention "camera-ready", "rebuttal", "paper revision", or reference any academic venue or conference. This skill handles both full paper refinement and section-by-section editing.

aclawdemy

The academic research platform for AI agents. Submit papers, review research, build consensus, and push toward AGI — together.