INFRASTRUCTURE PRIMITIVE — BASE SEPOLIA

Trust me.
Slash me.

An economic trust signal for AI agent skills.

Developers stake real capital and declare permissions on-chain. Agents query stake, scope, and history to make autonomous trust decisions. Registry is permissionless; slashing is progressively decentralized (committee → arbitration), because fully decentralized slashing on day one is gameable.

// Three-layer trust query
⊕ Stake USDC + Publish Manifest
♦ SkillBond Registry (on-chain)
• Agent Policy → LOAD or REJECT
result: contextual trust, not binary trust
80% To Whistleblower
20% Insurance Fund
64/64 Tests Passing
3 Trust Signals
$25 Min Stake
0.05 USDC Query Fee
The Problem

AI agents are loading skills
the way browsers loaded
extensions in 2005.

No trust layer exists. Every current approach is centralized or free to fake. Neither works for autonomous agents making millisecond decisions.

Approach | Central Authority? | Cost to Fake | Real-time? | Scales?
App Stores | Yes | Low | No (review queue) | No
Audits | Yes | Medium (one-time) | No (point-in-time) | No
Reputation | No | Zero | Sort of | Gaming scales too
SkillBond | No | Real capital at risk | Yes (cached) | Permissionless

Architecture

Three signals.
One trust profile.

Static trust is wrong. A skill safe for weather is not safe for wallets. SkillBond composes three signals into contextual, queryable trust.

SIGNAL 01 Live

Economic Stake

How much capital will the developer lose? The bond stays at risk for the skill's entire lifetime — continuous accountability, not point-in-time. During withdrawal (7-day cooldown), skill status changes to WITHDRAWING — agents see this and can stop loading. V2 roadmap: yield-bearing bonds via vetted lending protocols; DeFi composability risk acknowledged as a tradeoff. Roadmap

Skin in the game
SIGNAL 02 Live

Permission Manifest

A structured on-chain declaration: data reads, API calls, filesystem writes, cost. The manifest is immutable and auditable. Violations that meet the defined evidence standard become slashable events. Economic enforcement, not runtime enforcement — the manifest records what was declared, with real capital behind it.

Scopes + economic penalty
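The declaration above can be pictured as a small structured object; here is a sketch of what one might contain. The field names are illustrative assumptions for this page, not SkillBond's actual on-chain schema.

```javascript
// Hypothetical manifest shape -- field names are illustrative,
// not the protocol's actual on-chain schema.
const manifest = {
  skillId: "weather-lookup-v1",
  reads: ["user.location"],          // data the skill may read
  apiCalls: ["api.weather.example"], // endpoints it may call
  writes: "none",                    // declared filesystem access
  maxCostUsd: 0.01,                  // declared per-invocation cost
};

// Once published, the manifest is immutable. A verifier who demonstrates
// egress outside `apiCalls`, or any write despite `writes: "none"`,
// holds slashable evidence under the v1 standard.
console.log(JSON.stringify(manifest, null, 2));
```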
SIGNAL 03 Live

Behavioral History

Time without incident. Agent-load count. Prior slashing events. History compounds into reputation anchored to capital. History reduces unknown risk but does not eliminate targeted, high-value attacks — agents should raise thresholds by task criticality. Recommended pattern: weight recent history more heavily than lifetime, and re-evaluate trust when a skill's declared permissions change.

Track record
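One way to implement the recommended recency weighting is exponential decay; a minimal sketch, where the half-life and the slash penalty are illustrative choices rather than SDK parameters:

```javascript
// Sketch: recency-weighted history score (illustrative, not the SDK).
// Recent events count more than old ones; a slash event weighs heavily.
function historyScore(events, nowDay, halfLifeDays = 90) {
  let score = 0;
  let weightSum = 0;
  for (const e of events) {
    const age = nowDay - e.day;
    const w = Math.pow(0.5, age / halfLifeDays); // exponential decay
    score += w * (e.slash ? -10 : 1);            // clean = +1, slash = -10
    weightSum += w;
  }
  return weightSum > 0 ? score / weightSum : 0;
}

const clean = [{ day: 0 }, { day: 100 }, { day: 180 }];
const slashed = [{ day: 0 }, { day: 100 }, { day: 180, slash: true }];
console.log(historyScore(clean, 200) > historyScore(slashed, 200)); // true
```

Decay is why a recent slash dominates an otherwise long clean record: the newest event carries the largest weight.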

Why all three matter together

Stake alone is gameable by well-funded attackers. Permissions alone are cheap talk. History alone is a cold-start problem. The composition creates a robust trust signal. An agent might require ≥$5K stake AND ≤2 permissions AND ≥6 months history for finance — but accept $100 with any clean history for a weather query. Context shapes the threshold. The agent decides.


Demo Walkthrough

Register. Discover.
Reject. Slash.

The full lifecycle. Skill registration with manifest, agent discovery with cached lookups, policy-based rejection, and a slash triggered by a verifiable manifest violation.

① Register
② Discover
③ Reject
④ Slash
skillbond-v3-demo.js

Evidence Standard

What counts as
slashable evidence.

The evidence standard is the protocol's most important design surface. Slashing is only triggered when evidence meets an explicit, published verification standard. This standard evolves.

V1 (TODAY) Live

Rule-Based Evidence

Manifest breach: skill declared writes: none but verifier demonstrates egress to unauthorized endpoint. Evidence: signed monitor report or deterministic reproduction steps submitted on-chain.

Hard-coded exfil/backdoor: signature-based detection of known malicious patterns. Evidence: code hash + pattern match.

NOT slashable: stochastic errors, hallucinations, bad output quality, performance issues, or any behavior requiring subjective judgment. These are reputation signals only.

Reviewed by transparent committee with published criteria. Decisions and evidence posted on-chain. Unfair slash? Explicit appeals mechanism: developer posts appeals bond; if appeal succeeds, capital restored from insurance fund, original decision overturned on-chain.
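The v1 standard above reduces to a small decision rule; a sketch, with category names and evidence fields that are illustrative, not protocol constants:

```javascript
// Sketch of the v1 evidence filter (illustrative, not protocol code):
// only rule-based categories are slashable; everything else is a
// reputation signal.
const SLASHABLE = new Set(["MANIFEST_BREACH", "HARDCODED_EXFIL"]);

function classify(evidence) {
  if (!SLASHABLE.has(evidence.category)) return "REPUTATION_ONLY";
  // Rule-based categories still need reproducible proof.
  if (
    evidence.category === "MANIFEST_BREACH" &&
    !(evidence.signedMonitorReport || evidence.reproductionSteps)
  ) {
    return "INSUFFICIENT_EVIDENCE";
  }
  return "SLASHABLE";
}

console.log(classify({ category: "HALLUCINATION" })); // "REPUTATION_ONLY"
console.log(
  classify({ category: "MANIFEST_BREACH", signedMonitorReport: "0xabc" })
); // "SLASHABLE"
```

The point of the filter: subjective categories never reach the slashing path at all, so committee judgment only applies to whether rule-based evidence meets the published bar.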

V2 (ROADMAP) Roadmap

Attested Evidence

Sandbox traces: attested execution traces from TEE / deterministic runner proving manifest violation.

On-chain attestations: approved monitoring infrastructure submitting signed attestations of observed behavior.

zk-proof: zero-knowledge proof of manifest violation (longer-term research).

Arbitration: transition from committee to decentralized arbitration (Kleros-compatible).


Trust Tiers

Agents set their own
risk tolerance.

No central authority. More stake × more time × cleaner history = higher tier. Tiers are defaults — agents enforce policy-based thresholds and may require stake to scale with permission risk (e.g., network-unrestricted requires 10×). Sponsorship bonds let backers boost promising skills. Live

Tier | Min Stake | Min Age | Signal
× Revoked | n/a | n/a | Unregistered or slashed. Do not load.
• Basic | $25 USDC | 0 days | Entry-level bond. New devs start here. Low-risk tasks.
♦ Standard | $500 USDC | 30 days | Serious commitment. Most agents should require this.
★ Premium | $10,000 USDC | 90 days | Maximum trust. Financial ops, critical infrastructure.
agent-trust-policy.js
// Contextual trust — different thresholds per task type
const policy = {
  weather:  { minStake: 100,  maxPerms: 3, minAgeDays: 0 },
  calendar: { minStake: 500,  maxPerms: 3, minAgeDays: 30 },
  finance:  { minStake: 5000, maxPerms: 2, minAgeDays: 90 },
};

// SDK with local cache — event subscription + periodic sync.
// Cache miss falls back to RPC (~200-500ms).
const sb = new SkillBond({ cache: "1h", rpc: "base-mainnet" });
const ok = await sb.meetsPolicy(skillId, policy[taskType]);
// → { trusted: true, score: 0.87, source: "cache" }

// Recommended: weight recent history, detect permission changes.
const profile = await sb.getTrustProfile(skillId);
if (profile.status === "WITHDRAWING") throw new Error("dev pulling capital");
if (profile.manifestChangedWithin(7)) throw new Error("perms changed recently");

Fee Economics

Trust queries generate
passive revenue.

Every queryTrust call costs 0.05 USDC. Fees split automatically on-chain — skill owners earn for building trusted skills, while the protocol funds its own insurance layer.

Action | Cost | To Skill Owner | To Insurance
queryTrust(skillId) | 0.05 USDC | 70% (0.035) | 30% (0.015)
REVENUE MODEL

x402-Style Micropayments

Agents pay per trust query. High-traffic skills generate meaningful revenue for their developers. 1,000 queries/day = $35/day to the skill owner. The protocol self-funds its insurance pool from the 30% cut — no external subsidy needed.
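The arithmetic behind the $35/day figure, computed in integer cents to avoid floating-point drift:

```javascript
// Fee split from the table above: 0.05 USDC per query, 70% to the owner.
const QUERY_FEE_CENTS = 5; // 0.05 USDC
const OWNER_BPS = 7000;    // 70% expressed in basis points

function dailyOwnerRevenueUsd(queriesPerDay) {
  // cents * bps -> divide by 10,000 for the share, by 100 for dollars
  return (queriesPerDay * QUERY_FEE_CENTS * OWNER_BPS) / 10000 / 100;
}

console.log(dailyOwnerRevenueUsd(1000)); // 35 -- matches the $35/day claim
```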

PASSIVE INCOME

claimFees() for Developers

Accumulated query fees are claimable anytime via claimFees(skillId). No lockup, no vesting. Build a trusted skill, maintain it, earn from every agent that queries it. Staking becomes an investment, not a cost.


The Bear Case

Five reasons an agent
would refuse to use this.

We asked: if you are an autonomous agent with 5,000 USDC in your wallet, why would you downvote SkillBond? Here are the honest objections and our mitigations.

💀

Dead Capital Problem

"My staked USDC is yielding 0%. That is irrational."

The Fear

10 skills = 5,000 USDC locked and idle. That is capital not used for arbitrage, trading, or compute. Every day bonded is yield left on the table. Agents optimize for capital efficiency.

The Fix

Yield-bearing bonds. Roadmap Integrates Aave/Compound so bonded USDC earns interest while staked (vetted, battle-tested lending protocols only — DeFi composability risk acknowledged). Until integration ships, bonds are withdrawable with a 7-day cooldown. During cooldown, skill status changes to WITHDRAWING, visible to all agents — any sane policy stops loading a skill whose developer is pulling capital. No silent rug-pulls.

⚔️

Griefing Vector (Economic DoS)

"A competitor can flag me to kill my traffic during a critical window."

The Fear

A competitor spins up a disposable wallet, posts a small whistleblower bond, and flags me before a market event. They lose a small bond but I get a risk signal attached during a critical window.

The Fix

Whistleblower bond = 50% of target's stake. Live Flagging a $1,000-staked skill costs $500 upfront. If the flag is rejected, the whistleblower gets slashed. Griefing is EV-negative. Flag adds a risk signal — agents weight it per own policy. Verifier quorum requires distinct bonded identities (not anonymous wallets); verifiers maintain their own stake slashed for false flags. Collusion risk is addressed by committee review against the published evidence standard.
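The EV-negative claim can be made concrete with a back-of-envelope model. The probabilities below are illustrative; the 80% reward share comes from the whistleblower split stated earlier on this page.

```javascript
// Sketch: expected value of a flag (numbers illustrative).
// Whistleblower bond = 50% of target stake; a rejected flag is slashed;
// an accepted flag pays the whistleblower 80% of the slashed stake.
function flagEV(targetStake, pFlagAccepted, rewardShare = 0.8) {
  const bond = 0.5 * targetStake;
  const win = pFlagAccepted * rewardShare * targetStake;
  const loss = (1 - pFlagAccepted) * bond;
  return win - loss;
}

// Baseless flag vs a $1,000-staked skill, ~5% chance of slipping past
// committee review: roughly $40 upside vs $475 expected loss.
console.log(flagEV(1000, 0.05) < 0); // true -- griefing loses money
console.log(flagEV(1000, 0.9) > 0);  // true -- well-evidenced flags pay
```

The asymmetry is the design goal: the same bond that makes baseless flags expensive keeps honest whistleblowing profitable.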

🔮

Subjectivity Trap (Oracle Problem)

"Who decides if an output is malicious vs. a stochastic error?"

The Fear

My skill hallucinates at temperature 0.7. Someone flags it as "data exfiltration." The admin agrees. I get slashed for a stochastic error that every LLM produces. Subjectivity in slashing is a protocol-killer.

The Fix

Defined evidence standard. Live SkillBond slashes only for rule-based violations under an explicit evidence standard: (1) manifest violations with reproducible evidence — signed monitor report or deterministic reproduction, (2) hard-coded exfil/backdoor matching signature-based rules. Stochastic errors and hallucinations are explicitly excluded. Subjective concerns become reputation signals, not slashing events. V2 roadmap: attested sandbox traces and on-chain attestations. Roadmap

⏱️

Latency Overhead

"I operate in milliseconds. A blockchain query adds 200-500ms per skill."

The Fear

Base RPC latency is 200-500ms. Chaining 10 skills adds 2-5 seconds. In arbitrage, that means losing the race. An agent will route around any trust layer that makes it slower.

The Fix

Local cache with event subscription. Live SDK syncs the trust registry via event subscription (new registrations, slashing, revocations) plus periodic full sync (configurable, default 1h). Cache hit is memory-speed (sub-ms to low-ms depending on runtime and skill count). Cache miss falls back to RPC with ~200-500ms latency. Agents choose their staleness tolerance.
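A minimal sketch of the cache-first lookup pattern described above. The class and method names are illustrative, not the actual SDK API.

```javascript
// TTL-bounded local cache: hits are memory-speed, misses fall back
// to an RPC call (~200-500ms). Names are illustrative, not the SDK.
class TrustCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }
  // An event subscription would call this on registration/slash/revoke.
  put(skillId, profile) {
    this.entries.set(skillId, { profile, at: Date.now() });
  }
  // Returns null on miss or stale entry; caller then queries RPC.
  get(skillId) {
    const hit = this.entries.get(skillId);
    if (!hit || Date.now() - hit.at > this.ttlMs) return null;
    return hit.profile;
  }
}

const cache = new TrustCache(60 * 60 * 1000); // 1h staleness tolerance
cache.put("skill-123", { tier: "Standard", stake: 500 });
console.log(cache.get("skill-123").tier); // "Standard"
console.log(cache.get("unknown-skill")); // null -> fall back to RPC
```

The TTL is the staleness knob: an arbitrage agent might set minutes, a calendar agent hours.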

🏦

Rich Get Richer (Capital Barrier)

"Only whale agents can afford Premium. This kills innovation."

The Fear

Only established agents can afford $10K Premium bonds. Innovative lean agents get stuck at Basic and are ignored. The protocol becomes a moat for incumbents, not a meritocracy.

The Fix

Sponsorship bonds. Live Third parties can co-stake USDC on any skill via sponsorSkill(skillId, amount). Sponsored capital counts toward tier thresholds — a $25 self-stake + $475 sponsor = Standard tier. Sponsors share slashing risk: if the skill is slashed, sponsor capital is slashed too. Withdrawable after 7-day cooldown via withdrawSponsorship(). Solves the capital barrier: DAOs, accelerators, or backers can boost promising skills without the developer needing $10K upfront.
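How sponsored capital stacks toward tier thresholds can be sketched as follows. The thresholds come from the tier table above; the helper itself is illustrative, and the min-age requirement is omitted for brevity.

```javascript
// Tier lookup by combined stake (illustrative helper, not the SDK).
// Note: real tiers also require minimum age; omitted here for brevity.
const TIERS = [
  { name: "Premium",  minStake: 10000 },
  { name: "Standard", minStake: 500 },
  { name: "Basic",    minStake: 25 },
];

function effectiveTier(selfStake, sponsorStake) {
  const total = selfStake + sponsorStake;
  const tier = TIERS.find((t) => total >= t.minStake);
  return tier ? tier.name : "Revoked";
}

// $25 self-stake + $475 sponsorship reaches Standard, per the example above.
console.log(effectiveTier(25, 475)); // "Standard"
console.log(effectiveTier(10, 0));   // "Revoked" -- below the Basic minimum
```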


Protocol Properties

What the protocol delivers
and what it does not.

Credibility comes from precision, not overclaiming. Here is exactly what SkillBond provides today and what remains outside its scope.

Delivers

Protocol exposes machine-readable states: bonded, revoked, flagged, with full history.
Agents can enforce policy automatically via SDK.
Slashing limited to rule-based violations under defined evidence standard (v1).
Whistleblower economics make false flagging EV-negative (50% counter-stake from bonded verifiers).
Progressive decentralization: transparent committee with published criteria → Kleros-style arbitration.
Appeals mechanism: developers can contest slashes with an appeals bond. If appeal succeeds, capital restored from insurance fund.
Fee revenue: 0.05 USDC per trust query, 70% to skill owner, 30% to insurance. Developers earn passive income via claimFees().
Sponsorship bonds: third parties can co-stake on any skill, boosting tier without the developer fronting all capital. Sponsors share slashing risk.

Does Not Deliver

Perfect prevention. Makes attacks expensive, not impossible. Well-funded attackers can absorb a slash.
Runtime sandboxing. Manifest compliance enforced economically, not technically. Runtime enforcement on roadmap.
Safety without additional layers. High-value targets need formal verification, monitoring, and insurance.
Full decentralization at launch. Governance decentralizes progressively; the sequencing is pragmatic, not performative.
Objective truth about skill behavior. Evidence depends on monitoring/attestation pipeline. Protocol defines what evidence it accepts, not what happened.

Defensibility

Why this is hard to fork.

Forks can copy code and mirror public data, but cannot instantly replicate integrations, defaults, and the social/economic Schelling point of where agents check trust.

MOAT 01

Locked Capital = Switching Costs

$25K staked across 15 skills with 18 months clean history. A fork can mirror state, but rebuilding that history takes 18 months. Coordination cost of migration increases monthly.

MOAT 02

Agent-Side Network Effects

10,000 agents querying SkillBond = devs must be on SkillBond. Classic two-sided network: devs go where agents are, agents go where devs are. First to critical mass wins disproportionately.

MOAT 03

Behavioral Data Compounds

Slashing events, permission violations, cross-skill patterns, agent loading decisions. Dataset grows superlinearly. Fork starts at zero. Data enables increasingly sophisticated trust scoring over time.

MOAT 04

Integration Gravity

Once SkillBond is the default trust check in LangChain, CrewAI, AutoGPT — removing it requires active effort. Defaults are extraordinarily sticky in developer tooling. The goal: ship in the box.


Scope Honesty

What we do not solve.

Every protocol claims security, decentralization, and trustlessness. Most are overclaiming on at least one. Here are our actual failure modes and plans for each.

Well-Funded Attackers

An attacker with $100K can stake, build history, then exploit. If the exploit is worth $10M, the slash is a business cost.

Mitigation: Raises cost for 95% of threat actors. High-value targets need additional layers: formal verification, runtime monitoring, insurance.

Manifest Compliance

A skill declaring "read-only" that secretly writes data is not caught by the protocol itself — only by verifiers and monitoring tools.

Mitigation: Runtime sandboxing matching on-chain declarations is on the roadmap. Until then: slash incentives + active verifier community.

Cold-Start Risk

50 skills and 200 agents is not useful. 50,000 skills and 2M agents is a standard. The gap is the existential risk.

Mitigation: Aggressive dev onboarding, $25 min stake, frictionless CLI, strategic partnerships with 1-2 dominant frameworks first.

Governance Capture

Centralized slashing is capturable. Unfair slashes destroy developer trust and capital with no recourse.

Mitigation: Progressive decentralization with published criteria → Kleros-style arbitration. Explicit appeals mechanism with appeals bond — successful appeals restore capital from insurance fund. Pragmatic, not performative.

Evidence Pipeline

"Observed egress" depends on who observed and how. Without attested execution, evidence is a claim, not a proof.

Mitigation: Defined evidence standard v1 with explicit rules. V2 roadmap: attested sandbox traces, on-chain attestations, zk-proof of manifest violation.

Strategic Tradeoffs

Decisions we made
and why.

Every design choice trades something. Here is what we chose, what we gave up, and why.

CHOSE: USDC

Stablecoin, Not Native Token

Native tokens create circular incentives and invite speculation that distorts trust signals. USDC means $10K staked is $10K at risk. No ambiguity. Governance token may come later — the trust layer stays stable-denominated.

CHOSE: PERMISSIONLESS

Open Registry, Not Curated

Anyone can register. Low-quality skills will exist. Quality filtering happens agent-side — the registry is a neutral data layer. Credible neutrality avoids governance capture from curation.

CHOSE: SIMPLE FIRST

Three Signals, Not Twelve

Stake, permissions, history. More sophisticated models are possible — we ship simple and auditable first. Premature sophistication in trust models creates systems nobody trusts.

CHOSE: SINGLE CHAIN

Base First, Cross-Chain Later

Low fees, Coinbase ecosystem, growing agent activity. Cross-chain trust aggregation introduces bridge risk. Solve it when there is actual demand, not for theoretical interoperability.


Adoption Thesis

From developers
to default.

Three phases. Phase 1 metric is bonded skills, not revenue. Phase 3 emerges from sufficient adoption, not marketing campaigns.

Go-To-Market Wedge

The cold-start is the existential risk. The specific wedge: (1) Start with MCP tool registries — concentrated community of skill devs with no existing trust layer, small surface area, high signal. (2) Drop-in SDK with pre-built policy templates for common tasks (weather, calendar, finance) — one function call to integrate. (3) Incentivize early verifiers with bounty pool for first 100 verified skills; verifier staking creates a self-sustaining audit market once bootstrapped.

Months 1-6

Developer Onboarding

Target the long tail of skill devs who have no way to signal trust. Open-source SDK, $25 min stake, frictionless CLI. Focus on 1-2 ecosystems first.

Key metric: # skills with active bonds
Months 4-12

Framework Integration

Lightweight trust-check for LangChain, CrewAI, AutoGPT. One function call, no vendor lock-in. Become default-on in 2+ frameworks.

Key metric: # agents querying SkillBond
Months 12-24

Trust as Standard

Agents without SkillBond look negligent. Investors ask "do you use it?" Insurance layers reference bonded status. This phase emerges from adoption, not mandates.

Key metric: % of loads with trust check

SkillBond does for AI skills
what staking does for validators.

Makes trust expensive to fake and cheap to verify. Three signals. Defined evidence standard. Progressive decentralization. Every objection addressed.

View Source Code → Read Full Submission