Fraud detection

A detection layer that knows your attackers.

Custom AI layer tuned to your specific attack patterns. Hybrid rules + ML. Review queue UX. Proven at $1B+ unicorn scale.

Available for new projects
See AI Automation

Starting at $3,000/mo · monthly retainer

Who this is for

A marketplace, fintech, or community trust & safety lead whose off-the-shelf fraud tools (Stripe Radar, Sift) are missing pattern-specific fraud that's costing real money.

The pain today

  • Stripe Radar missing 30%+ of fraud specific to your business
  • Sift/Signifyd catching generic fraud, missing marketplace-specific abuse
  • Review queue overwhelming — false positives frustrate good customers
  • Attack patterns evolving faster than vendor model updates
  • No visibility into which signals are driving decisions

The outcome you get

  • Custom detection tuned to attack patterns specific to your business
  • Hybrid model: explainable rules + ML classifiers + LLM for edge cases
  • Review queue UX designed for moderator efficiency
  • Feedback loop — every human decision improves the model
  • Integration layered on top of Stripe Radar, not replacing it

Signals beyond payment data

Stripe Radar uses payment signals (card BIN, velocity, geographic mismatch). That's necessary but not sufficient for marketplace or community fraud. Additional signals:

  • Device fingerprinting: same device, multiple accounts
  • Behavioral: time-on-site before checkout, mouse patterns
  • Social: LinkedIn connections, email age, phone verification
  • Content: listing text, profile descriptions, review patterns
  • Network: IP reputation, shared IP with known bad actors
  • Temporal: account age vs transaction pattern

A custom layer combines these with your business-specific signals (e.g., marketplace: seller account age vs transaction amount; fintech: first transaction vs average balance). Signal richness is what separates effective custom detection from generic tools.
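
As a sketch of how heterogeneous signals might feed a custom layer (field names, thresholds, and risk-factor labels below are illustrative, not a production schema):

```python
# Hypothetical sketch: flatten raw event data into named fraud signals,
# then derive explainable risk factors. All names/thresholds are examples.

def collect_signals(event: dict) -> dict:
    """Flatten a raw event into named signals for downstream scoring."""
    return {
        # Payment signals (what Stripe Radar already sees)
        "velocity_1h": event.get("txn_count_1h", 0),
        "geo_mismatch": event.get("card_country") != event.get("ip_country"),
        # Business-specific signals (what a custom layer adds)
        "device_account_count": event.get("accounts_on_device", 1),
        "account_age_days": event.get("account_age_days", 0),
        "amount_vs_seller_avg": (
            event.get("amount", 0) / max(event.get("seller_avg_amount", 1), 1)
        ),
    }

def risk_factors(signals: dict) -> list[str]:
    """Name the signals driving risk, so decisions stay explainable."""
    factors = []
    if signals["geo_mismatch"]:
        factors.append("geo_mismatch")
    if signals["device_account_count"] > 3:
        factors.append("shared_device")
    if signals["account_age_days"] < 7 and signals["amount_vs_seller_avg"] > 5:
        factors.append("new_account_outlier_amount")
    return factors
```

Returning named factors rather than an opaque score is what gives you visibility into which signals drive decisions.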

Hybrid rules and ML approach

Three layers:

  • Rules: fast, deterministic, explainable — 'block if IP blocklisted', 'require 2FA if transaction > $10k'
  • ML classifiers: pattern matching on labeled fraud data, catches subtle signals humans miss
  • LLM: reasons about edge cases using natural language context (e.g., is this profile's description consistent with its transaction behavior)

Each layer's score combines into a final risk score. Rules handle obvious cases cheaply; ML handles the 80% of subtle cases; LLM handles novel or ambiguous cases. This layered approach keeps per-decision cost low while maintaining coverage. I build the specific combination per your fraud patterns and cost sensitivity.

Review queue UX

Most fraud detection systems fail in the review queue: moderator throughput is too low to keep up, and false positives frustrate good customers. UX discipline:

  • Group similar items — five similar transactions reviewed together faster than individually
  • Pre-populate decision options with reason codes
  • Show signals and confidence clearly so a moderator can decide in seconds, not minutes
  • Keyboard shortcuts for common decisions

Every decision feeds back into the model. High-volume queues need throughput: target 500+ items/hour on a well-designed queue vs 50–100 on a poorly designed one. UX investment pays back as operational efficiency.
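
The grouping step can be sketched like this (a hypothetical fingerprint of device plus top risk factor; real grouping keys depend on your fraud patterns):

```python
# Hypothetical sketch: bucket pending queue items by a shared fingerprint
# so a moderator resolves look-alike cases with one decision. Field names
# (device_id, top_risk_factor) are illustrative.
from collections import defaultdict

def group_queue(items: list[dict]) -> list[list[dict]]:
    """Group similar items; largest groups first for max throughput."""
    buckets = defaultdict(list)
    for item in items:
        key = (item.get("device_id"), item.get("top_risk_factor"))
        buckets[key].append(item)
    return sorted(buckets.values(), key=len, reverse=True)

def bulk_decide(group: list[dict], decision: str, reason_code: str) -> list[dict]:
    """Apply one moderator action to a whole group; the reason code
    is what feeds back into model retraining."""
    return [{**item, "decision": decision, "reason": reason_code}
            for item in group]
```

One keystroke clearing a five-item group instead of five separate reviews is where the 500+ items/hour target comes from.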

Case study: bolttech payments scale

At bolttech — a $1B+ unicorn — the Payment Service integrated 40+ payment providers across 15+ markets. Fraud detection at scale required custom layers on top of each provider's native controls, with region-specific attack patterns, regulatory reporting obligations, and reconciliation with financial losses. 99.9% platform uptime. 0 post-launch critical bugs on the Payment Service. The discipline — layered detection, explainable signals, audit trail for regulatory review, continuous adaptation to new attack patterns — is the same at any scale. I apply unicorn discipline to smaller-scale fraud work because the failure modes are identical, just with smaller dollar amounts.

Pricing

AI fraud detection fits the AI Automation retainer at $3,000/mo for typical marketplace or fintech scale. Very high volume (>10M transactions/month) or regulated (fintech, healthcare) contexts may warrant enterprise-scope pricing. First-version timeline: 5–7 weeks including baseline fraud-pattern analysis. Retainer continues through ongoing pattern adaptation and model retraining — fraud detection is a continuous arms race. 14-day money-back, cancel anytime, Work Made for Hire. LLM and ML infrastructure costs typically $200–2,000/mo depending on transaction volume.

Layering on top of Stripe Radar and Sift

Don't replace proven vendors — layer on them. Stripe Radar handles payment-specific fraud at the transaction level. Sift handles account-takeover and login fraud. Custom layer adds business-specific signals (marketplace: seller quality, inventory patterns; community: content quality, reporting history). Decisions combine: Stripe blocks payment fraud, Sift blocks account fraud, custom blocks marketplace-specific abuse. Layered defense catches what any single layer misses. The custom layer pays back when specific attack patterns cost more per month than the retainer.
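
A sketch of how the layered verdicts might combine — vendor calls stubbed as booleans, thresholds illustrative (real integrations go through the Stripe Radar and Sift APIs):

```python
# Hypothetical sketch: each layer votes independently and any block wins;
# the custom layer's gray zone routes to human review. Thresholds are
# examples, not production values.

def combined_verdict(radar_block: bool, sift_block: bool,
                     custom_score: float, threshold: float = 0.8) -> str:
    """Stripe blocks payment fraud, Sift blocks account fraud,
    the custom layer blocks business-specific abuse."""
    if radar_block or sift_block or custom_score >= threshold:
        return "block"
    if custom_score >= 0.5:        # ambiguous: send to the review queue
        return "review"
    return "allow"
```

Because any single layer can block, the combined defense catches what each layer misses on its own.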

Recent proof

A comparable engagement, delivered and documented.

Payment Integration Platform

Unified payment orchestration across Asia and Europe

Delivered the payment orchestration platform at bolttech, a $1B+ unicorn, with 40+ integrations across multiple regions.

Fintech · $1B+ unicorn · 40+ payment providers · 15 new markets
Read the case study

Frequently asked questions

The questions prospects ask before they book.

Will this replace Stripe Radar?
No — it layers on top. Stripe Radar handles generic payment fraud well; the custom layer adds your business-specific signals. Combined detection catches significantly more than either alone. Stripe Radar's cost stays the same; the custom layer adds the retainer on top.
How do you measure ROI?
Monthly comparison: fraud losses before vs after, plus false-positive impact on good customer revenue. Typical target: cut fraud losses 50%+ with false-positive rate under 2%. ROI math: if current monthly fraud losses exceed the retainer plus false-positive revenue cost, custom detection pays back.
What about appeal handling for false positives?
An appeals workflow is built into the detection pipeline. Blocked users/transactions are appealed via a standard form and routed to the human review queue. Every overturned decision feeds back into model retraining. Customer-facing messaging explains the hold without revealing specific signals (signal disclosure would teach attackers).
How quickly can it adapt to new attack patterns?
Rules can deploy within hours. ML retraining cycles weekly with fresh labeled data. LLM-layer adaptation (new prompts addressing novel patterns) deploys within days. Fast iteration vs attackers is the point of custom detection; generic vendors adapt on quarterly cycles.
Is this compliant with regulations?
GDPR, CCPA compliance baked in — data minimization, explainability (for adverse decisions), right to appeal, audit trail. Fintech regulations (AML, KYC) scoped per your specific obligations. Compliance counsel confirms requirements; I implement to spec.
Get started in 60 seconds

Ready to start?

Tell me what you need in 60 seconds. Tailored proposal in your inbox within 6 hours.

Available for new projects