The same thesis, applied to three different domains.

Each of these projects is a working test of the same architectural thesis: that the most useful AI work today isn't replacing human judgement — it's giving one person the operating leverage of a small, disciplined team. The pattern repeats across capital allocation, defensive operations, and security research.

In each case the agent does the heavy lifting, the human stays the decision-maker, every action is auditable, and every assumption is explicit. They are research projects, not products. The point of featuring them here is to show what the architectural thinking looks like in practice.

/ 01 In Evaluation

Kronos Engine

A research project applying foundation-model forecasting to personal capital discipline.

Investigation

Kronos Engine is built on top of an open-source financial foundation model — a transformer pre-trained on candlestick data from 45 global exchanges. The project fine-tunes this base model on crypto-specific historical data and uses it to produce probabilistic price forecasts: distributions of where price could go, not single-point predictions. The point isn't to generate trading signals; it's to investigate whether foundation-model forecasting can structurally improve the rigor of personal capital allocation.

Architecture
  • Fine-tuning pipeline running on cloud GPU infrastructure, training a 102M-parameter base model on multi-year crypto historical data
  • Probabilistic forecasting engine generating 30 sample paths per inference at 1-hour candle close, 24-hour forward horizon
  • Side-by-side evaluation harness comparing fine-tuned vs base model on hit rate, MAE, and Sharpe across out-of-sample windows
  • Risk-gate framework with funding-rate awareness, position sizing limits, and explicit confidence cutoffs ahead of any execution path
  • Companion bias-monitoring system on independent cadence to surface discretionary blind spots and validate framework assumptions
  • Full integration architecture: Hyperliquid SDK for execution, Binance public data API for training data
Operating principle The output is a distribution, not a price. Confidence thresholds and risk gates do the work — the model is a primitive, not an oracle.
Stack Python · PyTorch · Kronos foundation model (NeoQuasar/Kronos-base, 102M params) · Hyperliquid SDK · Binance API · SQLite · Cloud GPU (RunPod / Vast.ai) · Claude AI for companion analysis
Reference upstream model: github.com/shiyu-coder/Kronos
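The operating principle above — a distribution in, a gated decision or no decision out — can be sketched in a few lines. This is an illustrative reduction of the 30 sampled terminal prices to a yes/no/abstain call; the function name, thresholds, and long/short labels are assumptions made for the sketch, not the engine's actual implementation.

```python
from statistics import median

def gate_decision(terminal_prices, last_price,
                  min_confidence=0.7, min_edge=0.005):
    """Reduce sampled 24h-ahead terminal prices to a gated decision.

    Abstains (returns None) unless the sampled distribution clears
    both a directional-agreement threshold and a minimum median edge.
    """
    returns = [p / last_price - 1.0 for p in terminal_prices]
    p_up = sum(r > 0 for r in returns) / len(returns)  # share of paths up
    med = median(returns)                              # central tendency

    if p_up >= min_confidence and med >= min_edge:
        return "long"
    if (1.0 - p_up) >= min_confidence and med <= -min_edge:
        return "short"
    return None  # no trade: the distribution is not decisive

# 24 of 30 sampled paths end +5%, 6 end -1%: clears both gates
print(gate_decision([105.0] * 24 + [99.0] * 6, last_price=100.0))  # "long"
```

The design point is that the model never emits an order; it emits a distribution, and only the gate logic — sizing limits, funding-rate checks, and confidence cutoffs sit upstream of it in the real system — can translate that into action.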
/ 02 Live

Privacy Posture

An AI agent that systematically reduces personal data exposure across data brokers, breach databases, and OSINT sources.

Investigation

Most people's personal data is scattered across fifty-plus data brokers and people-search sites that scrape, package, and sell it. Manual removal is hours of repetitive work per broker, and the records reappear. Privacy Posture applies an AI agent — operating with human approval gates — to map the PII footprint, draft jurisdiction-correct removal requests, track every request through to confirmation, and re-check for reappearance on a schedule. The result is effectively a small Security Operations Centre in which the analyst is an LLM, the case-management system is a structured workspace, and the legal-basis library is encoded as a template set.

What's operating
  • 49 brokers mapped across 3 priority tiers, with active opt-out queue
  • 6-template legal framework library covering CCPA, GDPR, PDPA, generic, follow-up, and reappearance scenarios
  • SLA logic per framework (CCPA 45d, GDPR 30d, PDPA 21d) with 14-day and 90-day reappearance check windows
  • Identity-verification minimisation logic that refuses brokers' overreaching ID-upload demands by citing data minimisation principles
  • Operations hub spanning PII Inventory, Broker Targets, Active and Completed Requests, Escalations, Breach Monitor, and Template Library
  • OSINT baseline scans across name variants, identifiers, and handles, with WHOIS / domain privacy posture checks
  • Time-to-draft per opt-out: ~30 seconds, vs. ~10 minutes manual
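The SLA logic in the bullets can be sketched as a small scheduling helper. The day counts (CCPA 45, GDPR 30, PDPA 21; rechecks at 14 and 90 days) come straight from the list above; the function and field names are illustrative, not the system's actual code.

```python
from datetime import date, timedelta

# Statutory response windows per framework, in calendar days
SLA_DAYS = {"CCPA": 45, "GDPR": 30, "PDPA": 21}
RECHECK_DAYS = (14, 90)  # reappearance check windows

def request_schedule(framework: str, sent: date) -> dict:
    """Compute follow-up dates for one opt-out request."""
    due = sent + timedelta(days=SLA_DAYS[framework])
    return {
        "response_due": due,
        # In the real system rechecks run from confirmed completion;
        # here the SLA due date stands in as a floor for the sketch.
        "rechecks": [due + timedelta(days=d) for d in RECHECK_DAYS],
    }

sched = request_schedule("GDPR", date(2025, 1, 1))
print(sched["response_due"])  # 2025-01-31
```

Encoding the deadlines as data rather than prose is what lets the agent escalate overdue brokers without the human having to remember which jurisdiction's clock applies.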
Lesson worth featuring The original ledger sat in Google Drive, but the connector is effectively read-only for binary files — which breaks the autonomy claim. Migration to a workspace where the agent has full read/write authority restored the loop. When building agent-native workflows, choose storage where the agent is a first-class citizen, or you re-introduce the manual steps you were trying to remove.
Stack Claude (Anthropic) as operator · Notion + Gmail via MCP · Web search/fetch for OSINT · HIBP for breach checks · Jurisdictional frameworks: CCPA/CPRA, GDPR, PDPA
/ 03 In Daily Use

Scope Sentinel

An AI-orchestrated research aide for HackerOne — recon, scope validation, and report drafting in one pipeline.

Investigation

A research-aide toolkit for solo bug bounty work, first deployed on HackerOne. The architecture is intentionally split: standard scanners (nuclei, subfinder, httpx) handle technical execution, while a multi-agent Claude layer — planner, researcher, and report-drafter — handles scope parsing, scanner orchestration, and report drafting. The most distinctive capability is the drafter: it turns structured scanner output and unstructured researcher notes into platform-formatted submissions ready for human review.

What's working
  • Multi-agent Claude stack (planner + researcher + report-drafter) orchestrating scanners and turning output into platform-formatted drafts
  • Pulls and parses HackerOne program scope; validates assets are in-scope before any scanning begins
  • Coordinates a recon → scan → triage pipeline end-to-end, with findings, evidence, and drafts flowing through MCP into Notion, Drive, and Gmail
  • Platform-agnostic core; first-platform deployment on HackerOne, with adapters for additional programs as a build-out, not a rewrite
  • Honest workflow improvement: under five hours saved per week; the bigger gain is consistency of report quality, not raw throughput
  • Human stays the decision-maker at every consequential step — the tool reduces slow surface area, not judgement
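The in-scope check that gates scanning can be sketched with shell-style glob matching. That is a simplifying assumption: real program scopes also carry asset types, CIDR ranges, and exclusion nuances this sketch ignores, and the names here are illustrative.

```python
from fnmatch import fnmatch

def in_scope(asset, scope_patterns, out_of_scope=()):
    """Return True only if the asset matches an in-scope pattern and
    no out-of-scope pattern. Exclusions win, mirroring how programs
    carve specific hosts out of wildcard scope entries."""
    asset = asset.lower().strip()
    if any(fnmatch(asset, p.lower()) for p in out_of_scope):
        return False
    return any(fnmatch(asset, p.lower()) for p in scope_patterns)

scope = ["*.example.com", "api.example.net"]
excluded = ["legacy.example.com"]
print(in_scope("app.example.com", scope, excluded))     # True
print(in_scope("legacy.example.com", scope, excluded))  # False
print(in_scope("example.org", scope, excluded))         # False
```

Note one deliberate property of glob semantics: `*.example.com` does not match the apex `example.com`, so apex domains must be listed explicitly — exactly the kind of edge the planner agent has to resolve from the program's scope text before any scanner runs.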
Why it's here Scope Sentinel is the working version of a thesis I bring into commercial conversations: that AI's real leverage is helping one disciplined operator scale the rigor of their own work. The same multi-agent architecture, MCP integration, and human-in-the-loop design pattern apply far beyond security research.
Stack Python · TypeScript / Node · Claude (Anthropic API) as orchestration layer · Multi-agent workflow (planner + researcher + report-drafter) · Standard scanners (nuclei, subfinder, httpx) · MCP integrations: Notion, Drive, Gmail

Each of these projects has architecture, methodology, and operating detail beyond what's shared here. Some material is intentionally omitted — specific findings, capital figures, target scope, and the execution layer detail of each system. Available to discuss in appropriate professional contexts on request.

Talk shop

Have a problem worth building against?

ryan@gruponugara.com