What it does

/improve applies targeted improvements to your code across quality, performance, security, and maintainability dimensions. It follows strict rules: measure before improving performance, test before and after refactoring, and commit one improvement at a time.

When to use

Use /improve after /analyze has identified issues, after a code review surfaces improvement opportunities, or when you want to proactively reduce technical debt in a specific area.

Prerequisites

  • Tests passing before starting — you need a green baseline to verify improvements don’t break behavior
  • For performance improvements: baseline measurements must exist (or will be created before any changes)

Conversation mode

Either mode works.

What happens

1. Identify improvement targets: the specific areas to improve are identified from /analyze output, review feedback, or your direct request.

2. Apply quality improvements: duplicated logic is extracted into reusable functions, complex conditionals are simplified, naming is improved for clarity, and long functions are shortened.

3. Apply performance improvements: the performance-optimization skill is loaded, and performance is measured before any change; no optimization without baseline data.

4. Apply security improvements: the security-review skill is loaded to check for vulnerabilities before and after changes.

5. Apply maintainability improvements: missing error handling at system boundaries is added, test coverage for critical paths is improved, over-engineered abstractions are simplified, and non-obvious decisions are documented.

6. Test between each improvement: tests run before and after every refactoring step, and one improvement is committed at a time.

7. Verify before declaring done: the verification-before-completion skill runs before the improvement session is closed.

Skills invoked

  • performance-optimization — measurement-first performance improvement
  • security-review — vulnerability check for security improvements
  • verification-before-completion — final check before declaring improved

Improvement categories

Code quality

  • Extract duplicated logic into reusable functions
  • Simplify complex conditionals
  • Improve naming for clarity
  • Reduce function and method length
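The first two edits can be sketched in a few lines. The function, interface, and field names below are hypothetical, not from any real codebase:

```typescript
// Before: an access check was duplicated across handlers as a nested
// conditional. After: it is extracted into one predicate with guard clauses.

interface User {
  active: boolean;
  emailVerified: boolean;
  role: string;
}

// Extracted, reusable predicate with an intention-revealing name.
function canAccessDashboard(user: User): boolean {
  if (!user.active) return false;        // guard clauses replace nesting
  if (!user.emailVerified) return false;
  return user.role === "admin" || user.role === "member";
}
```

Once extracted, every handler that previously copied the conditional calls one function, so a future policy change lands in a single place.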

Performance

Load performance-optimization skill — measure before improving.
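A baseline can be as simple as timing the current code path before touching it. The harness below is a minimal sketch (Date.now() for portability; a real session would use a proper benchmark tool and higher-resolution clocks):

```typescript
// Minimal measure-first harness: record a baseline before any change,
// then re-run the same measurement after the optimization to compare.
function timeIt(label: string, fn: () => void, iterations = 10_000): number {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedMs = Date.now() - start;
  console.log(`${label}: ${elapsedMs}ms over ${iterations} iterations`);
  return elapsedMs;
}

// Baseline first; only after this number exists does optimization start.
const baseline = timeIt("session lookup (before)", () => {
  /* call the code path under measurement here */
});
```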

Security

Load security-review skill — check for vulnerabilities.
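One common class of issue such a check catches is user input concatenated into SQL text; the fix is to bind parameters. The query helper below is a stand-in for a real driver, and the table and column names are illustrative:

```typescript
// Stand-in for a database driver: a real one sends the SQL text and the
// parameter values to the server separately, so values are never parsed as SQL.
function query(sql: string, params: unknown[]): unknown[] {
  return []; // stubbed result for the sketch
}

// Vulnerable: `userId` becomes part of the SQL text.
//   query(`SELECT * FROM sessions WHERE user_id = '${userId}'`, []);

// Fixed: the value travels as a bound parameter.
function sessionsFor(userId: string): unknown[] {
  return query("SELECT * FROM sessions WHERE user_id = $1", [userId]);
}
```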

Maintainability

  • Add missing error handling at system boundaries
  • Improve test coverage for critical paths
  • Simplify over-engineered abstractions
  • Document non-obvious decisions
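Error handling at a boundary means untrusted input cannot crash the program. The sketch below wraps JSON parsing, one such boundary, in a typed result; the Result shape and function name are illustrative:

```typescript
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

// Parsing untrusted input is a system boundary: wrap it so a malformed
// payload becomes a typed failure instead of an uncaught exception.
function parseConfig(raw: string): Result<Record<string, unknown>> {
  try {
    const value: unknown = JSON.parse(raw);
    if (typeof value !== "object" || value === null) {
      return { ok: false, error: "config must be a JSON object" };
    }
    return { ok: true, value: value as Record<string, unknown> };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e.message : String(e) };
  }
}
```

Callers then branch on `ok` explicitly rather than discovering failures via exceptions deep in the stack.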

Rules

  • Measure before improving performance
  • Test before and after any refactoring
  • One improvement at a time — commit between each
  • YAGNI — don’t add abstractions for hypothetical future needs

Example

/improve the authentication module
Antigravity identifies issues from a previous /analyze run: a 120-line validateUser function, missing error handling on the token refresh path, and an N+1 query in the session lookup. It addresses each one separately, running tests between each change and committing:
refactor(auth): extract token validation to separate function
fix(auth): add error handling on token refresh failure path  
perf(auth): replace N+1 session lookup with single batched query
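The N+1 fix in the last commit follows a standard pattern: replace one query per id with a single query over all ids. The sketch below uses an in-memory stand-in for the session table; names are hypothetical, and in real code both functions would await a database call:

```typescript
interface Session { userId: string; token: string; }

// In-memory stand-in for the sessions table.
const sessions: Session[] = [
  { userId: "u1", token: "t1" },
  { userId: "u2", token: "t2" },
  { userId: "u2", token: "t3" },
];

// Before (N+1): one lookup per user id.
function sessionsOneByOne(userIds: string[]): Session[] {
  const out: Session[] = [];
  for (const id of userIds) {
    // imagine: await db.query("... WHERE user_id = $1", [id]) each iteration
    out.push(...sessions.filter(s => s.userId === id));
  }
  return out;
}

// After: one lookup for the whole batch,
// e.g. WHERE user_id = ANY($1) in Postgres.
function sessionsBatched(userIds: string[]): Session[] {
  const ids = new Set(userIds);
  return sessions.filter(s => ids.has(s.userId));
}
```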

Related commands

/analyze

Run /analyze first to identify what needs improving.

/review

Code review often surfaces the improvement opportunities that /improve addresses.

/cleanup

For removing dead code and unused imports rather than improving live code.

/test

Verify test coverage after improvements, especially for maintainability changes.
