Hermes Agent
The agent that keeps working after the tab is gone.
Hermes Agent is built for people who want more than a disposable chat window. It keeps memory alive, grows new skills from experience, spans terminal and messaging apps, and runs in environments that fit your budget instead of locking you to a single machine.
Quick Install
The official Hermes Agent installer supports Linux, macOS, and WSL.
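A minimal sketch of what the install flow typically looks like. The URL below is a placeholder, not the real installer location; check the official Hermes Agent documentation for the actual command.

```
# Placeholder URL — substitute the official install script from the docs
$ curl -fsSL https://example.com/hermes/install.sh | bash

# Verify by launching the interactive CLI
$ hermes
```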

Core Capabilities
Why Hermes Agent stands out in long-running AI work.
The strongest ideas in the upstream Hermes Agent project show up here as a clearer product story: memory, cross-platform continuity, delegated execution, and deployment flexibility.
Hermes Agent keeps memory instead of resetting every session.
Hermes Agent stores experience, recalls earlier work, and turns repeated solutions into reusable skills so the system compounds capability over time.
Hermes Agent stays coherent across terminal and messaging.
Start in the CLI, then continue from Telegram, Discord, Slack, WhatsApp, or Signal without splitting state or identity.
Delegates isolated subagents when the task deserves it.
Spin up parallel workstreams, run RPC-backed tools, and keep the main thread focused instead of drowning in accumulated context.
Deploy from a laptop to hardened containers with the same mental model.
Run Hermes Agent locally, in Docker, over SSH, or inside serverless environments with explicit isolation boundaries and low idle cost.
From idea capture to autonomous follow-through.
Hermes Agent works best as a long-running operational layer: take in context, use tools, create durable memory, and keep going across sessions.
01
Gather context
Search the web, inspect files, open terminals, and read prior sessions before acting.
02
Act inside a real environment
Use tools, shells, browser automation, and message gateways instead of stopping at analysis.
03
Store what mattered
Capture durable memory, update skills, and make future turns cheaper and sharper.
Terminal View
$ hermes
interactive CLI • streaming tools • slash commands
$ /model openai:gpt-5.4
$ /skills
$ /insights --days 7
memory recalled · tools ready · session active
Environments
Choose the environment that matches the job.
Low idle cost
Keep the agent available without paying laptop-level overhead when nobody is using it.
Multi-platform continuity
Terminal-first when you need depth, messaging-first when you need reach.
Get Started
A clean four-step path from install to always-on agent.
This section follows the official getting-started sequence but frames it as an intentional onboarding arc instead of a raw command dump.
Install
Bootstrap Python, Node.js, dependencies, and the hermes command in one shot.
Pick a model
Point Hermes at Nous Portal, OpenRouter, OpenAI, or your own compatible endpoint.
Start the CLI
Open the interactive terminal UI with memory, tools, slash commands, and streaming output.
Add a gateway
Bring the same agent into chat apps when you want long-running access beyond the terminal.
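Put together, the four steps above look roughly like this transcript. The install command is a placeholder (use the official installer); the slash commands match the Terminal View earlier on this page; gateway setup is configuration-driven, so no single command is shown.

```
# 1. Install (placeholder URL — use the official install script)
$ curl -fsSL https://example.com/hermes/install.sh | bash

# 2–3. Start the interactive CLI, then pick a model in-session
$ hermes
$ /model openai:gpt-5.4

# 4. Add a gateway (Telegram, Discord, Slack, WhatsApp, or Signal)
#    — configured per the upstream gateway docs, not a single command
```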
Platform Note
Native Windows remains experimental in the upstream project. The recommended path is WSL when you want a Windows-hosted Hermes Agent setup.
FAQ
Questions people ask before they commit to Hermes Agent.
This FAQ targets the core search intent around what Hermes Agent does, how it installs, and why it matters for long-running agent workflows.
What is Hermes Agent?
Hermes Agent is an open source long-running AI agent project from Nous Research. It focuses on persistent memory, durable skills, tool use, and continuity across CLI and chat gateways.
Why does Hermes Agent matter more than a normal chat app?
A normal chat app resets context too easily. Hermes Agent is built for work that stretches across sessions, tools, files, and environments, so the agent can keep operating after the current tab is gone.
How do you install Hermes Agent?
The upstream project provides an install script for Linux, macOS, and WSL. After installation, you select a model, start the CLI, and optionally connect chat gateways for always-on access.
Where can Hermes Agent run?
Hermes Agent can run locally, in Docker, over SSH, and in other remote or sandboxed environments. That flexibility is one of the main reasons people evaluate it for serious agent workflows.