Self-learning
Your agent watches how you work, notices when you keep doing the same kind of thing, and proposes a reusable skill it can apply automatically next time. You review each proposal and accept or reject with a click. Paid-tier feature.
The idea, in plain language
The longer you use an agent, the more it has seen you do. Self-learning takes those observations and turns them into structured "skills": short, reusable recipes the agent can pull out the next time the same situation comes up.
Three things to know:
- It's opt-in. Nothing the agent observes becomes a skill until you say yes.
- It runs locally by default. Pattern detection happens on your machine. No AI call required.
- You can plug in an AI to summarize the proposals more eloquently. Either your own API key, or Sibyl-hosted AI billed against your account balance.
What it looks like in practice
Suppose every time the agent ships a new version of Atlas, it goes through the same checklist: run the test suite, update the changelog, deploy to staging, ping the team, then deploy to production. After the agent's seen that sequence three or four times, self-learning surfaces a proposal:
Ship Atlas version
Auto-detected from 5 matching journal entries. When the user asks to ship a new Atlas version, follow this recipe:
- Run the test suite. Confirm all green.
- Update the changelog with the version number and a one-line summary.
- Deploy to staging. Wait for the staging health check to pass.
- Ping the team in the deploys channel with the version and what changed.
- Deploy to production. Confirm the live health check.
[Accept] [Reject] [Edit]
You eyeball the proposal. If it captures the way you actually want shipping to work, you accept it. And from that moment on, the agent has a structured skill to apply the next time you mention shipping Atlas. If the proposal is off, you reject it or edit the recipe to match your real flow.
How the agent decides what to propose
Sibyl Memory Plugin's self-learning watches for four kinds of patterns in the journal:
| Pattern | What it catches | Example |
|---|---|---|
| Repeated action | The agent has done the same kind of thing several times. | "deployed atlas" five times in a row → proposes a deploy skill. |
| Same shape | The agent has handled requests with the same structure repeatedly. | Tickets that all have task + module + owner → proposes a triage skill. |
| Steady cadence | The same kind of event keeps firing at a regular pace. | "daily standup" appearing every weekday morning → proposes a standup-summary skill. |
| Co-occurrence | Two specific things keep appearing together. | "jane" and "code review" always show up in the same entries → proposes a Jane-specific skill. |
A pattern needs at least three matches before the agent proposes it, and confidence rises with the number of matches: the higher the confidence, the more obvious the pattern. Low-confidence proposals show up too; they're worth a glance, but expect to reject most of them.
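The "repeated action" detector above can be sketched roughly like this. This is a minimal illustration, not the plugin's actual implementation: the journal format, the confidence formula, and the proposal shape are all this example's own.

```python
from collections import Counter

MIN_MATCHES = 3  # a pattern needs at least three matches before it's proposed

def detect_repeated_actions(journal_entries):
    """Toy 'repeated action' detector: count identical action strings and
    propose a skill for any action seen at least MIN_MATCHES times.
    Confidence grows with the number of matches (capped at 1.0)."""
    counts = Counter(entry["action"] for entry in journal_entries)
    proposals = []
    for action, n in counts.items():
        if n >= MIN_MATCHES:
            proposals.append({
                "pattern": "repeated_action",
                "action": action,
                "matches": n,
                "confidence": min(1.0, n / 10),  # more matches -> higher confidence
            })
    return proposals

# Five "deployed atlas" entries clear the threshold; one-off entries don't.
journal = [{"action": "deployed atlas"}] * 5 + [{"action": "wrote report"}]
print(detect_repeated_actions(journal))
```

The other three detectors follow the same shape: scan the journal, count evidence for the pattern, and emit a proposal once the evidence crosses the threshold.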
Three ways to power the summaries
Detecting a pattern is one thing. Writing it up as a useful, reusable skill recipe is another. Self-learning supports three modes for that summarization step:
1. Local-only (free, no AI call)
The default. Sibyl Memory Plugin writes the skill body from templates, with no external AI call. Quality is moderate. You get a clear listing of the pattern, the matching events, and the suggested usage, but no polished prose. Good for power users who want to see the raw signal.
2. Bring your own AI
Paste an API key from any AI provider (Anthropic, OpenAI, Venice, Google, anything) into your Sibyl Memory Plugin config. The summarizer calls your chosen model to write a more polished skill body. We never see the key or what you send to it.
Use this if you already pay a provider and want to point Sibyl Memory Plugin at your existing budget.
3. Sibyl-hosted AI (in design)
Pre-fund your Sibyl Memory Plugin account with USDC (via wallet) or a credit card. We route summarization through Venice on your behalf, debit your balance per call, and the polished skill body comes back. No keys to manage; lowest setup overhead.
Use this if you want the simplest path and trust us to route the call. Sibyl Labs takes a small margin to keep the proxy running.
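The choice between the three modes comes down to what's in your config. Here's a toy resolution function mirroring that logic; the config keys (`api_key`, `sibyl_balance_usdc`) are illustrative, not the plugin's real schema.

```python
def choose_summarizer(config):
    """Pick a summarization mode from the config, in priority order.
    Key names here are assumptions for illustration only."""
    if config.get("api_key"):
        return "bring-your-own-ai"      # user-supplied provider key wins
    if config.get("sibyl_balance_usdc", 0) > 0:
        return "sibyl-hosted"           # pre-funded balance, routed via Venice
    return "local-only"                 # free default: template-based summaries

print(choose_summarizer({}))                     # local-only
print(choose_summarizer({"api_key": "sk-..."}))  # bring-your-own-ai
```

Whatever mode is active, the detection step stays local; only the write-up step changes.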
Where the skills go
Accepted proposals become reference documents in your memory under skill/<name>. The agent looks them up the same way it looks up any other reference doc: by name, when the situation calls for it.
Rejected proposals are recorded so the same pattern doesn't keep getting suggested. You can review the full proposal history at any time and re-accept anything you rejected earlier.
Why this is a paid-tier feature
Three reasons:
- Real value is downstream. Skills make the agent dramatically more useful over time. The first month is exploration; month three is when the agent starts feeling like a real collaborator. Paid users are the ones who'll stay long enough to see that compounding.
- Hosted summarization costs us money. Routing through Venice costs real USDC. Paid users cover the proxy overhead.
- It aligns incentives. Users who pay are users who care about the long-term shape of their agent's behavior. Self-learning is the feature for them.
How to use it (when you're on a paid tier)
Run a learning pass over your recent journal:
report = memory.learn()
print(f"detected {report.proposals_made} new skill proposals")
Review the queue:
proposals = memory.list_skill_proposals()
for p in proposals:
    print(p.proposed_title, p.confidence)
    print(p.proposed_body[:200])
Accept one (writes it to your reference docs as a real skill):
memory.accept_skill_proposal(proposals[0].id, note="this is exactly right")
Reject one:
memory.reject_skill_proposal(proposals[1].id, note="too narrow")
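Putting the accept/reject calls together, a review pass over the queue might look like the sketch below. The confidence threshold and the triage rule are this example's own, and the proposal objects are stubbed with a dataclass so the sketch runs standalone; in practice you'd iterate over memory.list_skill_proposals() instead.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """Stand-in for the objects returned by memory.list_skill_proposals()."""
    id: str
    proposed_title: str
    confidence: float

def triage(proposals, threshold=0.8):
    """Split the queue: high-confidence proposals to accept now,
    the rest to look at by hand. The 0.8 cutoff is arbitrary."""
    auto_accept = [p for p in proposals if p.confidence >= threshold]
    needs_review = [p for p in proposals if p.confidence < threshold]
    return auto_accept, needs_review

queue = [Proposal("a1", "Ship Atlas version", 0.92),
         Proposal("b2", "Jane code-review skill", 0.55)]
accept, review = triage(queue)
print([p.proposed_title for p in accept])  # ['Ship Atlas version']
```

You'd then call memory.accept_skill_proposal(p.id) for each item in the accept pile and walk the review pile one at a time.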
Once the sibyl terminal command ships, the friendlier version is just:
sibyl learn # run a learning pass
sibyl learn review # walk through pending proposals interactively
Calling memory.learn() on the free tier raises a polite error pointing at the upgrade page. You
can see your storage status with memory.free_tier_status() at any time, paid or free.