GitHub agentic workflows hit technical preview, enabling "new categories of repository automation"

GitHub’s Octocat logo hangs at Yerba Buena Center during GitHub Universe

GitHub previews Agentic Workflows as part of continuous AI concept


Agentic workflows - where an AI agent runs automatically in GitHub Actions - are now in technical preview, following their introduction at the Universe event in San Francisco last year.

This type of workflow has been developed by GitHub Next and Microsoft Research, and includes sandboxed execution and a mechanism called safe outputs, both intended to protect the agentic workflow from misuse.

The new service is part of the continuous AI concept, also presented at Universe. According to principal researcher Eddie Aftandilian, speaking at the event, "we coined the term continuous AI to describe an engineering paradigm that we see as the agentic evolution of continuous integration."

An agentic workflow is defined in a markdown file and compiled to GitHub Actions YAML with the GitHub CLI (command line interface). The workflow is triggered by events, with developers able to choose one or more from events including new issues, new issue comments, pull requests and their comments, and new discussions. The actions to be taken by the agent are determined by prompt instructions, such as asking the agent to analyze issues, add labels, review pull requests, and output a structured report. The agent used can be GitHub Copilot, Claude Code, or OpenAI Codex.
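As an illustration, a workflow file of this kind might look like the sketch below: YAML frontmatter declaring the trigger, the agent, and the permitted outputs, followed by the natural-language prompt that drives the agent. The specific field names (`engine`, `safe-outputs`, `add-labels`) are assumptions based on the feature set described here, not verified syntax.

```markdown
---
# Trigger: run the agent when a new issue is opened (assumed field names)
on:
  issues:
    types: [opened]
# Which coding agent runs the prompt; "copilot" is a guess at the identifier
engine: copilot
# Declarative allowlist of the write actions the agent may request
safe-outputs:
  add-labels:
    max: 3
---

# Issue Triage

Analyze the newly opened issue, summarize the problem in one paragraph,
and suggest up to three labels that fit the repository's labeling scheme.
```

A file like this would then be compiled into conventional GitHub Actions YAML with the GitHub CLI before it can run.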

According to the team, typical use cases for agentic workflows include triaging issues, updating documentation, identifying code improvements, monitoring test coverage and adding new tests, investigating continuous integration (CI) failures, and creating regular reports on repository health. GitHub states that agentic workflows make "entirely new categories of repository automation and software engineering possible" that could not be achieved without AI.

The new agentic workflows are not intended to replace traditional CI/CD (continuous integration and delivery) workflows, but to be used alongside them. The FAQ notes that CI/CD needs to be deterministic, whereas agentic workflows are not. "If you use agentic workflows, you should use them for tasks that benefit from a coding agent’s flexibility, not for core build and release processes that require strict reproducibility,” it says.

Giving AI agents access to code repositories has obvious risks, particularly in the case of public repositories where malicious prompts may be hidden in new issues, pull requests or comments. In order to address this, there are guardrails which, GitHub claims, make its agentic workflows safer than simply running AI agent CLIs directly inside an Action. That approach "often grants these agents more permission than is required," the team said.

The security architecture has several layers. Agentic workflows run in an isolated container, and the agent has read-only access to the repository. Access to the wider internet is restricted by a firewall and can be constrained to specified destinations. User content is sanitized before being passed to the agent. In addition, there is a Safe Outputs subsystem where tasks that do write content run in separate permission-controlled jobs.
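Those layers map naturally onto the workflow's frontmatter. The fragment below is a hedged sketch of how the read-only default, the network allowlist, and the safe-output declarations might be expressed; the field names and the example host are illustrative assumptions, not confirmed syntax.

```markdown
---
# Layer 1: the agent job itself only ever gets read access to the repo
permissions:
  contents: read
# Layer 2: egress firewall, constrained to named destinations
network:
  allowed:
    - "api.example.com"   # hypothetical allowlisted host
# Layer 3: writes (such as filing an issue) run in a separate,
# permission-scoped job rather than inside the agent's sandbox
safe-outputs:
  create-issue:
---
```

The design choice worth noting is that writes are declared up front rather than performed by the agent: the sandboxed agent can only propose output, and a separate job with narrowly scoped permissions decides whether to apply it.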

The cost of an agentic workflow, as is often the case with AI workloads, is somewhat opaque. "Costs vary depending on workflow complexity," the FAQ states. The logs contain usage metrics and an audit command shows "detailed token usage and costs," according to the docs.

Despite the security features, the documentation warns that the product is in early development, may change significantly, and that even with careful supervision "things can still go wrong. Use it with caution, and at your own risk."

Nevertheless, security is a large part of this new GitHub feature and is unusually prominent in its presentation. Aftandilian said at Universe that the "agent can only do the things that we want it to do, and nothing else," a bold but welcome claim.