Methodology
How we structure audits end-to-end.
1. Scope and assumptions
We agree on repositories and pinned commits, target chains, deployment addresses, trusted roles, and out-of-scope components. Misalignment here is the leading cause of audit friction, so we document each of these explicitly.
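For concreteness, a scope agreement can be pinned down as structured data. The sketch below is a minimal Python illustration; the `ScopeManifest` fields and example values are assumptions, not a fixed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ScopeManifest:
    """Hypothetical scope record agreed before the audit starts."""
    repos: dict[str, str]        # repo URL -> pinned commit hash
    chains: list[str]            # target chains, e.g. ["ethereum", "arbitrum"]
    deployments: dict[str, str]  # contract name -> deployed address
    trusted_roles: list[str]     # roles assumed honest (owner, pauser, ...)
    out_of_scope: list[str] = field(default_factory=list)

# Placeholder values for illustration only.
scope = ScopeManifest(
    repos={"https://github.com/example/protocol": "3f9c2ab"},
    chains=["ethereum"],
    deployments={"Vault": "0x1234...abcd"},
    trusted_roles=["owner"],
    out_of_scope=["contracts/mocks/"],
)
```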
2. Automated evidence layers
Typical layers include:
- Static analysis — Slither/Semgrep-style detectors and custom rules.
- Symbolic / formal — reachability and properties where tooling fits.
- Fuzzing / invariants — Foundry/Echidna/Medusa harnesses for stateful bugs (illustrated in the sketch after this list).
- Economic simulation — oracle/AMM/MEV/liquidation angles for DeFi.
- Cross-chain / governance — bridges, messaging, upgrades, admin paths.
- Deployment audit — bytecode, proxies, roles, token metadata vs manifest.
- Intel + AI assist — similarity search against historical exploits and hypothesis generation (AI output stays in draft until backed by cited evidence).
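Real invariant harnesses for these layers are written in Solidity against Foundry/Echidna/Medusa, but the core idea fits in a few lines of Python: drive random action sequences against a stateful model and assert the invariant after every step. Everything below, the toy `TokenModel` and its conservation invariant, is a hypothetical stand-in for a contract under test.

```python
import random

class TokenModel:
    """Toy stateful model standing in for a contract under test."""
    def __init__(self, supply: int):
        self.balances = {"alice": supply, "bob": 0}
        self.total_supply = supply

    def transfer(self, src: str, dst: str, amount: int) -> None:
        # Mirrors a guarded transfer: silently no-ops on insufficient balance.
        if self.balances[src] >= amount:
            self.balances[src] -= amount
            self.balances[dst] += amount

def invariant_holds(m: TokenModel) -> bool:
    # Conservation: balances must always sum to the fixed total supply.
    return sum(m.balances.values()) == m.total_supply

def fuzz(steps: int = 10_000, seed: int = 0) -> None:
    rng = random.Random(seed)
    model = TokenModel(supply=1_000_000)
    for i in range(steps):
        src, dst = rng.sample(["alice", "bob"], 2)
        model.transfer(src, dst, rng.randrange(0, 2_000_000))
        assert invariant_holds(model), f"invariant broken at step {i}"

fuzz()
```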
3. Correlation and triage
Each engine emits normalized findings wrapped in an evidence envelope. We deduplicate across engines, score priority, and confirm exploitability before publication.
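A minimal sketch of the triage step, assuming a simplified finding shape: fingerprint-based deduplication followed by a severity-times-confidence priority score. The `Finding` fields and the scoring formula are illustrative choices, not our exact schema.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """Normalized finding as emitted by an engine (fields illustrative)."""
    engine: str        # e.g. "slither", "echidna"
    rule_id: str       # detector or property that fired
    location: str      # file:line or contract:function
    severity: int      # 1 (info) .. 5 (critical)
    confidence: float  # engine-reported confidence in [0, 1]
    evidence: str      # trace, counterexample, or snippet

def fingerprint(f: Finding) -> str:
    # Two engines flagging the same rule at the same location collapse to one.
    return hashlib.sha256(f"{f.rule_id}|{f.location}".encode()).hexdigest()

def triage(findings: list[Finding]) -> list[Finding]:
    deduped: dict[str, Finding] = {}
    for f in findings:
        key = fingerprint(f)
        # Keep the highest-confidence instance of each duplicate group.
        if key not in deduped or f.confidence > deduped[key].confidence:
            deduped[key] = f
    # Priority: severity weighted by confidence, highest first.
    return sorted(deduped.values(),
                  key=lambda x: x.severity * x.confidence,
                  reverse=True)
```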
4. Reporting and fix verification
Deliverables include Markdown, HTML, and SARIF-compatible outputs. Fixes are verified by rerunning the relevant engines, with notes on any residual risk.
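As one example of a SARIF-compatible output, the sketch below renders triaged findings as a minimal SARIF 2.1.0 log (SARIF is the OASIS static-analysis interchange format). The tool name and the severity-to-level mapping are placeholders, not our published schema.

```python
import json

# Reuses the hypothetical Finding dataclass from the triage sketch above.

def to_sarif(findings: list[Finding]) -> str:
    """Render triaged findings as a minimal SARIF 2.1.0 log."""
    results = [{
        "ruleId": f.rule_id,
        # Placeholder mapping: severity 4-5 -> error, otherwise warning.
        "level": "error" if f.severity >= 4 else "warning",
        "message": {"text": f.evidence},
        "locations": [{
            "physicalLocation": {
                # "file.sol:42" -> "file.sol"
                "artifactLocation": {"uri": f.location.split(":")[0]},
            }
        }],
    } for f in findings]
    return json.dumps({
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "version": "2.1.0",
        "runs": [{
            "tool": {"driver": {"name": "audit-pipeline"}},
            "results": results,
        }],
    }, indent=2)
```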