DEEP_DIVE

How UseAI Works

What gets captured, how it's measured, and what it means for your career. This is the complete guide to every metric, score, and signal UseAI produces.

ACT_01

What UseAI Captures

Every Session, Captured Automatically

SESSION_LIFECYCLE

Every time you work with an AI tool, UseAI silently records the full session lifecycle — from the first message to the final evaluation. No manual logging, no forms to fill out, no context switching. The background daemon captures everything.

Each session records: tool used, task type, duration, languages, milestones, complexity, and files touched.

01

Start

useai_start

Session begins when your AI tool sends the first message. Tool, task type, and project are recorded automatically.

02

Track

useai_heartbeat

Heartbeats fire during long sessions. Duration, languages, files touched, and milestones accumulate in real time.

03

Seal

useai_end

Session closes with a full evaluation, Ed25519 signature, and hash chain entry. Immutable from this point forward.
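The three-step lifecycle above can be sketched as a small state machine. Field names and the 30-second heartbeat interval below are illustrative assumptions, not UseAI's actual schema.

```python
# Hypothetical shape of a session record accumulating across the
# start -> heartbeat -> seal lifecycle. Names are illustrative only.
import time

session = {
    "tool": "claude-code",       # which AI tool opened the session
    "task_type": "debugging",    # coding / debugging / testing / ...
    "started_at": time.time(),
    "duration_s": 0,             # accumulated from heartbeats
    "languages": set(),
    "files_touched": set(),
    "milestones": [],            # completed units of work
}

def heartbeat(session, languages, files):
    """Accumulate activity while the session is open."""
    session["duration_s"] += 30  # assumed 30-second heartbeat interval
    session["languages"] |= set(languages)
    session["files_touched"] |= set(files)

heartbeat(session, ["python"], ["app/auth.py"])
heartbeat(session, ["python", "sql"], ["app/auth.py", "db/schema.sql"])
print(sorted(session["languages"]))   # ['python', 'sql']
```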

Not All Output Is Equal

OUTPUT_BREAKDOWN

Features shipped. Bugs fixed. Refactors completed. Tests written. Every milestone you complete is categorized by type and weighted by complexity — because a complex architecture overhaul is not the same as a quick typo fix.

Complexity weights: simple ×1 · medium ×2 · complex ×4. Your output fingerprint shows what kind of developer you really are.

Feature

New functionality shipped

Bug Fix

Defect identified and resolved

Refactor

Structural improvement, same behavior

Test

Test coverage added or improved

Docs

Documentation written or updated

Setup

Project scaffolding or tooling

Deployment

Released to production

Other

Miscellaneous development work
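The complexity weighting above reduces to a simple weighted sum. A minimal sketch, using the stated ×1/×2/×4 weights (milestone fields are illustrative):

```python
# Complexity weights from the text: simple x1, medium x2, complex x4.
WEIGHTS = {"simple": 1, "medium": 2, "complex": 4}

milestones = [
    {"type": "feature", "complexity": "complex"},  # architecture overhaul
    {"type": "bug_fix", "complexity": "simple"},   # quick typo fix
    {"type": "test",    "complexity": "medium"},
]

weighted_output = sum(WEIGHTS[m["complexity"]] for m in milestones)
print(weighted_output)  # 4 + 1 + 2 = 7
```

The complex feature counts four times as much as the typo fix, which is what makes the output fingerprint meaningful.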

Where Your AI Hours Go

TIME_INTELLIGENCE

Are you mostly debugging or mostly building? UseAI breaks down your AI time by task type, giving you real numbers for how AI fits into your workflow — daily, weekly, and monthly.

Active session time is measured via heartbeats, not wall clock. If you step away for coffee, that gap isn't counted. You see real time spent with AI, not calendar time.

coding

Building new features and writing implementation code

debugging

Investigating and fixing bugs, tracing error paths

testing

Writing and running tests, verifying behavior

planning

Architecture decisions, task breakdown, scoping

reviewing

Code review, PR feedback, quality checks

documenting

Writing docs, READMEs, inline comments

learning

Exploring new tools, libraries, or concepts
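The heartbeat-based time measurement described above can be sketched as follows. The 5-minute gap cutoff is an assumption for illustration, not UseAI's documented threshold:

```python
# Sum active time from heartbeat timestamps, ignoring gaps longer than
# a threshold. The 5-minute cutoff is an illustrative assumption.
GAP_LIMIT_S = 300

def active_seconds(heartbeats):
    """heartbeats: sorted timestamps (seconds). Returns active time."""
    total = 0
    for prev, cur in zip(heartbeats, heartbeats[1:]):
        gap = cur - prev
        if gap <= GAP_LIMIT_S:
            total += gap        # count only contiguous activity
    return total

# 10:00, 10:01, 10:02, then a 20-minute coffee break, then 10:22, 10:23
beats = [0, 60, 120, 1320, 1380]
print(active_seconds(beats))    # 180 -- the break is excluded
```

Wall-clock time for this session would be 23 minutes; active time is 3.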

ACT_02

How UseAI Measures

Measuring How You Wield AI

SPACE_FRAMEWORK

UseAI evaluates AI coding sessions using an adaptation of the SPACE framework (Satisfaction, Performance, Activity, Communication, Efficiency) developed by GitHub and Microsoft Research for measuring developer productivity.

Rather than measuring raw output, UseAI focuses on how effectively you orchestrate AI tools — scoring prompt clarity, context quality, autonomy, and task scoping across four weighted dimensions.

Prompt Quality

Communication
30% weight
1 · Vague, no goal stated, AI must guess intent entirely
2 · Goal implied but ambiguous, missing key constraints
3 · Clear goal, some constraints provided, missing edge cases
4 · Clear goal with constraints, minor ambiguity remains
5 · Crystal clear goal, all constraints stated, acceptance criteria defined

Context Provided

Communication
25% weight
1 · No context provided — no files, errors, or background
2 · Minimal context — vague references without specifics
3 · Some files or errors provided but incomplete picture
4 · Good context with relevant files, errors, and background
5 · Comprehensive context: files, errors, constraints, and expected behavior

Independence Level

Efficiency
25% weight
1 · Needed constant guidance, every step required approval
2 · Frequent back-and-forth, many clarifying questions needed
3 · Some back-and-forth on approach, core decisions made by user
4 · Mostly self-directed, only major decisions needed input
5 · Gave clear spec, AI executed autonomously with minimal interruption

Scope Quality

Performance
20% weight
1 · Vague or impossibly broad — no clear deliverable
2 · Poorly defined — scope creep likely, unclear boundaries
3 · Reasonable scope with some ambiguity in deliverables
4 · Well-scoped with clear deliverables, minor gaps
5 · Precise, achievable, well-decomposed into actionable steps

Your Session Score

SESSION_SCORE

Each session receives a 0–100 score computed from the four SPACE dimensions using their assigned weights:

score = ((prompt_quality / 5) × 0.30 + (context_provided / 5) × 0.25 + (independence_level / 5) × 0.25 + (scope_quality / 5) × 0.20) × 100

A perfect score of 100 requires a 5 in every dimension. The weighting ensures prompt quality has the largest impact — because clear communication drives productive AI sessions more than anything else.

For any dimension scored below 5, the AI provides a concrete, actionable tip explaining what was missing and how to improve next time. Scores aren't just numbers — they're a feedback loop.
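The weighted formula can be sketched directly. Dimension names and weights come from the rubric above; the function itself is an illustrative sketch, not UseAI's implementation:

```python
# SPACE-weighted session score: each dimension is rated 1-5,
# normalized to 0-1, weighted, and scaled to 0-100.
WEIGHTS = {
    "prompt_quality": 0.30,
    "context_provided": 0.25,
    "independence_level": 0.25,
    "scope_quality": 0.20,
}

def session_score(ratings):
    """ratings: dict of dimension -> 1..5 rubric score. Returns 0-100."""
    return sum(ratings[d] / 5 * w for d, w in WEIGHTS.items()) * 100

perfect = {"prompt_quality": 5, "context_provided": 5,
           "independence_level": 5, "scope_quality": 5}
typical = {"prompt_quality": 4, "context_provided": 3,
           "independence_level": 5, "scope_quality": 4}
print(session_score(perfect))   # 100 (all fives)
print(session_score(typical))   # 80 (one point dropped costs its weight)
```

Note that dropping prompt quality from 5 to 4 costs 6 points, while the same drop in scope quality costs only 4 — the weights are doing the work.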

AI Proficiency Score (APS)

AI_PROFICIENCY_SCORE

The APS is a composite 0–1000 score that aggregates your performance across multiple sessions. It combines five components, each normalized to 0–1 and weighted to produce a holistic measure of AI-assisted development proficiency.

Unlike the per-session score, APS captures your entire body of work — rewarding consistency, breadth of skills, and sustained output over time.

Output

25%

Complexity-weighted milestones completed per window

Efficiency

25%

Files touched per hour of active AI session time

Prompt Quality

20%

Average session evaluation score using SPACE weights

Consistency

15%

Active coding days streak, capped at 14 days

Breadth

15%

Unique programming languages used across sessions
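The five components above combine as a weighted sum scaled to 0–1000. The weights match the list above; the normalization of each component to 0–1 (per-window caps, language counts) is an illustrative assumption here:

```python
# APS sketch: five components, each pre-normalized to 0-1, combined
# with the stated weights and scaled to 0-1000. Normalization rules
# shown in comments are assumptions, not UseAI's documented formulas.
APS_WEIGHTS = {
    "output": 0.25, "efficiency": 0.25, "prompt_quality": 0.20,
    "consistency": 0.15, "breadth": 0.15,
}

def aps(components):
    """components: dict of name -> value already normalized to 0..1."""
    return round(sum(components[k] * w for k, w in APS_WEIGHTS.items()) * 1000)

example = {
    "output": 0.7,                    # weighted milestones vs. window cap
    "efficiency": 0.6,                # files touched per active hour
    "prompt_quality": 0.8,            # mean SPACE session score / 100
    "consistency": min(9, 14) / 14,   # 9-day streak, capped at 14 days
    "breadth": 0.5,                   # unique languages, normalized
}
print(aps(example))   # 656
```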

ACT_03

What It Means For You

Your AI Developer Identity

DEVELOPER_IDENTITY

GitHub shows your commits. UseAI shows what you built with AI and how effectively you wield it. A public, shareable profile displaying your tools, languages, output volume, complexity distribution, and SPACE scores — your AI development resume.

In a world where every developer “uses AI,” prove you don't just use it — you're proficient with it.

PUBLIC PROFILE

A shareable page showing your AI activity — tools used, languages, output volume, complexity distribution, and SPACE scores. Only generic titles are shown publicly — no project names, file paths, or company details.

LEADERBOARD RANKING

See where you stand globally. APS ranks developers by output, efficiency, prompt quality, consistency, and breadth.

PROFESSIONAL SIGNAL

Visible to recruiters, teams, and the community. Demonstrate AI proficiency with verified data, not self-reported claims.

Verified, Not Self-Reported

VERIFICATION

Every milestone and session is cryptographically signed. Not timestamps you could edit. Not stats you could inflate. Verifiable proof of what you shipped, when you shipped it, and the evaluation scores you earned.

SIGNED MILESTONES

Every completed session is sealed with an Ed25519 digital signature. The signing key lives on your machine — only your daemon can produce valid signatures.

HASH CHAIN INTEGRITY

Sessions are linked in a SHA-256 hash chain. Each entry references the previous one. Tampering with any record breaks the chain and is immediately detectable.
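The hash-chain property can be demonstrated with the standard library alone. This is a minimal sketch of the chaining idea, not UseAI's record format; the Ed25519 signing step is omitted (a real implementation would sign each entry before chaining):

```python
# SHA-256 hash chain: each sealed entry embeds the hash of the previous
# one, so editing any record invalidates every later hash.
import hashlib, json

def seal(session, prev_hash):
    record = {"session": session, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify_chain(records):
    prev = "0" * 64                   # genesis sentinel
    for r in records:
        body = {"session": r["session"], "prev": r["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True

chain, prev = [], "0" * 64
for title in ["Fixed authentication bug", "Added billing tests"]:
    entry = seal({"title": title}, prev)
    chain.append(entry)
    prev = entry["hash"]

print(verify_chain(chain))                     # True
chain[0]["session"]["title"] = "tampered"
print(verify_chain(chain))                     # False -- chain is broken
```

Tampering with the first record changes its hash, which no longer matches what the second record references — exactly the "immediately detectable" property described above.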

Your Data, Your Machine

PRIVACY

UseAI is privacy-first by architecture, not by policy. No source code, file paths, or prompt contents are ever transmitted. The daemon processes everything locally in ~/.useai. You own your raw data — always.

The entire project is open source under the AGPL-3.0 license. You can audit every line of code that runs on your machine.

ZERO PAYLOAD

No source code, file paths, class names, or prompt contents leave your machine. Only aggregate metrics and milestones are synced — if you choose to sync at all.

PUBLIC TITLES ONLY

Milestones use generic descriptions like "Fixed authentication bug" — no project names, file paths, company names, or identifying details ever appear on your public profile or the leaderboard.

LOCAL PROCESSING

The UseAI daemon runs on your machine, stores data in ~/.useai, and processes everything locally. No cloud dependency for core functionality.

DATA OWNERSHIP

Your session history is stored as plain JSONL files you can read, export, or delete at any time. No vendor lock-in. Your data belongs to you.
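Because the history is plain JSONL, a few lines of Python are enough to analyze it. The filename under ~/.useai and the field names below are assumptions for illustration (the demo reads a throwaway file; point `load_sessions` at your real log to inspect it):

```python
# Read a JSONL session log and tally hours per tool. Filename and
# field names ("tool", "duration_s") are illustrative assumptions.
import json, tempfile
from collections import Counter
from pathlib import Path

def load_sessions(path):
    """Yield one dict per non-empty line of a JSONL file."""
    with open(path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Demo against a throwaway file; substitute your ~/.useai log path.
sample = Path(tempfile.mkdtemp()) / "sessions.jsonl"
sample.write_text(
    '{"tool": "cursor", "duration_s": 1800}\n'
    '{"tool": "aider", "duration_s": 3600}\n'
)
hours = Counter()
for s in load_sessions(sample):
    hours[s["tool"]] += s["duration_s"] / 3600
print(dict(hours))   # {'cursor': 0.5, 'aider': 1.0}
```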

REFERENCE

Supported Tools

Works With Your Stack

30 tools listed

Claude Code

Anthropic's CLI coding assistant

Claude Desktop

Anthropic desktop app

Cursor

AI-first code editor

Windsurf

AI-powered IDE

VS Code

Visual Studio Code MCP client

VS Code Insiders

Insiders channel for VS Code

Codex

OpenAI coding assistant

Gemini CLI

Google's AI coding CLI

GitHub Copilot

AI pair programmer by GitHub

Copilot CLI

GitHub Copilot terminal assistant

Aider

AI pair programming in terminal

Amazon Q CLI

Amazon Q command-line assistant

Amazon Q IDE

Amazon Q in IDE workflows

Zed

Zed editor integration

Cline

VS Code autonomous coding agent

Roo Code

Roo coding agent extension

Kilo Code

Kilo Code VS Code extension

Trae

Trae editor integration

Antigravity

Antigravity / Gemini integration

Goose

Goose desktop coding agent

OpenCode

OpenCode terminal coding agent

Crush

Crush coding assistant

Junie

JetBrains Junie assistant

JetBrains

JetBrains IDE integration

Continue

Continue coding assistant

Sourcegraph Cody

Sourcegraph AI coding assistant

TabbyML

Self-hosted code assistant

Augment

Augment coding assistant

Amp

Amp coding assistant

MCP Client

Generic MCP-compatible client

Want to add your tool?

UseAI works with any MCP-compatible AI tool.

View on GitHub