What gets captured, how it's measured, and what it means for your career. This is the complete guide to every metric, score, and signal UseAI produces.
Every time you work with an AI tool, UseAI silently records the full session lifecycle — from the first message to the final evaluation. No manual logging, no forms to fill out, no context switching. The background daemon captures everything.
Each session records: tool used, task type, duration, languages, milestones, complexity, and files touched.
useai_startSession begins when your AI tool sends the first message. Tool, task type, and project are recorded automatically.
useai_heartbeat fires periodically during long sessions. Duration, languages, files touched, and milestones accumulate in real time.
useai_endSession closes with a full evaluation, Ed25519 signature, and hash chain entry. Immutable from this point forward.
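The three-step lifecycle above can be sketched as events appended to a session log. The tool call names come from this section; the payload field names (tool, task_type, files_touched, and so on) are illustrative assumptions, not the real schema.

```python
import json
from datetime import datetime, timezone

def record(event_type: str, **fields) -> dict:
    """Build one lifecycle event as it might be appended to a session log."""
    return {"event": event_type,
            "ts": datetime.now(timezone.utc).isoformat(),
            **fields}

# Hypothetical payloads -- field names are illustrative, not the real schema.
session = [
    record("useai_startSession", tool="claude-code", task_type="bugfix",
           project="demo"),
    record("useai_heartbeat", duration_s=300, languages=["python"],
           files_touched=3),
    record("useai_endSession", milestones=[{"type": "bugfix",
                                            "complexity": "medium"}]),
]

log = "\n".join(json.dumps(e) for e in session)  # JSONL, one event per line
print(log)
```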
Features shipped. Bugs fixed. Refactors completed. Tests written. Every milestone you complete is categorized by type and weighted by complexity — because a complex architecture overhaul is not the same as a quick typo fix.
Complexity weights: simple ×1 · medium ×2 · complex ×4. Your output fingerprint shows what kind of developer you really are.
New functionality shipped
Defect identified and resolved
Structural improvement, same behavior
Test coverage added or improved
Documentation written or updated
Project scaffolding or tooling
Released to production
Miscellaneous development work
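The complexity weighting above reduces to a simple weighted sum. The weights (×1, ×2, ×4) are stated in the text; the milestone records and the function name are invented for illustration.

```python
# Complexity weights from the text: simple x1, medium x2, complex x4.
WEIGHTS = {"simple": 1, "medium": 2, "complex": 4}

def weighted_output(milestones: list[dict]) -> int:
    """Sum complexity-weighted milestone credit."""
    return sum(WEIGHTS[m["complexity"]] for m in milestones)

# Example: a quick typo fix, an architecture overhaul, and a test added.
milestones = [
    {"type": "bugfix",   "complexity": "simple"},   # x1
    {"type": "refactor", "complexity": "complex"},  # x4
    {"type": "test",     "complexity": "medium"},   # x2
]
print(weighted_output(milestones))  # 7
```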
Are you mostly debugging or mostly building? UseAI breaks down your AI time by task type, giving you real numbers for how AI fits into your workflow — daily, weekly, and monthly.
Active session time is measured via heartbeats, not wall clock. If you step away for coffee, that gap isn't counted. You see real time spent with AI, not calendar time.
Building new features and writing implementation code
Investigating and fixing bugs, tracing error paths
Writing and running tests, verifying behavior
Architecture decisions, task breakdown, scoping
Code review, PR feedback, quality checks
Writing docs, READMEs, inline comments
Exploring new tools, libraries, or concepts
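Measuring active time from heartbeats rather than the wall clock can be sketched as summing the gaps between consecutive heartbeats while discarding any gap longer than an idle timeout. The 5-minute threshold here is an assumed value, not one stated in the text.

```python
def active_seconds(heartbeats: list[float], timeout: float = 300.0) -> float:
    """Sum gaps between consecutive heartbeat timestamps, skipping any gap
    longer than `timeout` seconds (e.g. a coffee break)."""
    total = 0.0
    for prev, cur in zip(heartbeats, heartbeats[1:]):
        gap = cur - prev
        if gap <= timeout:
            total += gap
    return total

# Heartbeats every 60 s, then a 30-minute break, then two more beats.
beats = [0, 60, 120, 1920, 1980]
print(active_seconds(beats))  # 180.0 -- the 1800 s break is not counted
```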
UseAI evaluates AI coding sessions using an adaptation of the SPACE framework (Satisfaction, Performance, Activity, Communication, Efficiency) developed by GitHub and Microsoft Research for measuring developer productivity.
Rather than measuring raw output, UseAI focuses on how effectively you orchestrate AI tools — scoring prompt clarity, context quality, autonomy, and task scoping across four weighted dimensions.
Each session receives a 0–100 score computed from the four SPACE dimensions using their assigned weights.
A perfect score of 100 requires a 5 in every dimension. The weighting ensures prompt quality has the largest impact — because clear communication drives productive AI sessions more than anything else.
For any dimension scored below 5, the AI provides a concrete, actionable tip explaining what was missing and how to improve next time. Scores aren't just numbers — they're a feedback loop.
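The scoring described above amounts to a weighted average of four 0–5 ratings mapped to 0–100. The section does not give the actual weights, only that prompt quality carries the most; the values below are illustrative assumptions.

```python
# Hypothetical weights -- the real values aren't given in this section;
# the text only says prompt quality has the largest impact.
WEIGHTS = {"prompt_clarity": 0.40, "context_quality": 0.25,
           "autonomy": 0.20, "task_scoping": 0.15}

def session_score(ratings: dict[str, int]) -> float:
    """Map four 0-5 dimension ratings to a 0-100 session score."""
    return 100 * sum(WEIGHTS[d] * ratings[d] / 5 for d in WEIGHTS)

perfect = {d: 5 for d in WEIGHTS}
print(round(session_score(perfect), 1))  # 100.0 -- a 5 in every dimension
```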
The APS is a composite 0–1000 score that aggregates your performance across multiple sessions. It combines five components, each normalized to 0–1 and weighted to produce a holistic measure of AI-assisted development proficiency.
Unlike the per-session score, APS captures your entire body of work — rewarding consistency, breadth of skills, and sustained output over time.
Complexity-weighted milestones completed per window
Files touched per hour of active AI session time
Average session evaluation score using SPACE weights
Active coding days streak, capped at 14 days
Unique programming languages used across sessions
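Combining five normalized components into the 0–1000 APS can be sketched as below. The component names are taken from this section; their weights are not stated anywhere in the text, so the values here are illustrative assumptions.

```python
# Hypothetical component weights -- the section names the five components
# (output, efficiency, quality, consistency, breadth) but not their weights.
APS_WEIGHTS = {"output": 0.30, "efficiency": 0.20, "quality": 0.25,
               "consistency": 0.15, "breadth": 0.10}

def aps(components: dict[str, float]) -> int:
    """Combine five 0-1 normalized components into a 0-1000 composite."""
    score = sum(APS_WEIGHTS[c] * components[c] for c in APS_WEIGHTS)
    return round(1000 * score)

print(aps({c: 1.0 for c in APS_WEIGHTS}))  # 1000 at the maximum
```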
GitHub shows your commits. UseAI shows what you built with AI and how effectively you wield it. A public, shareable profile displaying your tools, languages, output volume, complexity distribution, and SPACE scores — your AI development resume.
In a world where every developer “uses AI,” prove you don't just use it — you're proficient with it.
A shareable page showing your AI activity — tools used, languages, output volume, complexity distribution, and SPACE scores. Only generic titles are shown publicly — no project names, file paths, or company details.
See where you stand globally. APS ranks developers by output, efficiency, prompt quality, consistency, and breadth.
Visible to recruiters, teams, and the community. Demonstrate AI proficiency with verified data, not self-reported claims.
Every milestone and session is cryptographically signed. Not timestamps you could edit. Not stats you could inflate. Verifiable proof of what you shipped, when you shipped it, and the evaluation scores you earned.
Every completed session is sealed with an Ed25519 digital signature. The signing key lives on your machine — only your daemon can produce valid signatures.
Sessions are linked in a SHA-256 hash chain. Each entry references the previous one. Tampering with any record breaks the chain and is immediately detectable.
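The hash chain described above can be sketched with standard-library hashing: each entry embeds the previous entry's SHA-256 digest, so editing any record invalidates every hash after it. The Ed25519 signing step is omitted here, and the record field names are assumptions for illustration.

```python
import hashlib
import json

def chain_append(chain: list[dict], record: dict) -> None:
    """Link a new record to the previous entry's SHA-256 hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, **record}, sort_keys=True)
    chain.append({"prev": prev_hash, **record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = {k: v for k, v in entry.items() if k not in ("hash", "prev")}
        body = json.dumps({"prev": prev, **payload}, sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain: list[dict] = []
chain_append(chain, {"session": 1, "score": 88})
chain_append(chain, {"session": 2, "score": 91})
print(verify(chain))          # True
chain[0]["score"] = 100       # tamper with an earlier record
print(verify(chain))          # False -- detectable immediately
```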
UseAI is privacy-first by architecture, not by policy. No source code, file paths, or prompt contents are ever transmitted. The daemon processes everything locally in ~/.useai. You own your raw data — always.
The entire project is open source under the AGPL-3.0 license. You can audit every line of code that runs on your machine.
No source code, file paths, class names, or prompt contents leave your machine. Only aggregate metrics and milestones are synced — if you choose to sync at all.
Milestones use generic descriptions like "Fixed authentication bug" — no project names, file paths, company names, or identifying details ever appear on your public profile or the leaderboard.
The UseAI daemon runs on your machine, stores data in ~/.useai, and processes everything locally. No cloud dependency for core functionality.
Your session history is stored as plain JSONL files you can read, export, or delete at any time. No vendor lock-in. Your data belongs to you.
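Because the history is plain JSONL (one JSON object per line), exporting it takes only a few lines. The ~/.useai directory comes from the text, but the exact filename and record fields below are assumptions.

```python
import json
from pathlib import Path

def load_sessions(path: Path) -> list[dict]:
    """Read a JSONL session history: one JSON object per non-empty line."""
    return [json.loads(line)
            for line in path.read_text().splitlines() if line.strip()]

# The daemon stores data under ~/.useai; the filename here is assumed.
history = Path.home() / ".useai" / "sessions.jsonl"
if history.exists():
    for s in load_sessions(history):
        print(s.get("tool"), s.get("score"))
```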
Anthropic's CLI coding assistant
Anthropic desktop app
AI-first code editor
AI-powered IDE
Visual Studio Code MCP client
Insiders channel for VS Code
OpenAI coding assistant
Google's AI coding CLI
AI pair programmer by GitHub
GitHub Copilot terminal assistant
AI pair programming in terminal
Amazon Q command-line assistant
Amazon Q in IDE workflows
Zed editor integration
VS Code autonomous coding agent
Roo coding agent extension
Kilo Code VS Code extension
Trae editor integration
Antigravity / Gemini integration
Goose desktop coding agent
OpenCode terminal coding agent
Crush coding assistant
JetBrains Junie assistant
JetBrains IDE integration
Continue coding assistant
Sourcegraph AI coding assistant
Self-hosted code assistant
Augment coding assistant
Amp coding assistant
Generic MCP-compatible client