Research Hub

Most Amazon-research tools do not publish their accuracy. RIDGE does.

This page aggregates the public benchmarks, back-tests, and calibration audits that support every RIDGE verdict. Each document is updated as new data is collected.

Back-test

2026 Back-test Report

169 historical niches entered in 2022-2023 and tracked through their 2026 outcomes. Headline result: 97.8% precision on GO calls and 96.2% on NO-GO calls.

Read the study →
ML Audit

Calibration Audit

Calibrated machine-learning verdict head, audited against 2,710 ground-truth labels with bootstrap 95% confidence intervals on every published metric.

Read the audit →
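The audit's bootstrap confidence intervals follow a standard recipe: resample the labeled verdicts with replacement, recompute the metric on each resample, and take percentile bounds. A minimal sketch, assuming 0/1 label arrays and a percentile bootstrap (the function name, inputs, and defaults here are illustrative, not RIDGE's actual audit code):

```python
import random

def bootstrap_precision_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for GO-call precision.

    y_true / y_pred are parallel 0/1 sequences (1 = GO).
    Resamples (truth, prediction) pairs with replacement.
    """
    rng = random.Random(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        predicted_go = [i for i in idx if y_pred[i] == 1]
        if predicted_go:  # skip resamples with no GO calls
            tp = sum(y_true[i] for i in predicted_go)
            stats.append(tp / len(predicted_go))
    stats.sort()
    return (stats[int(alpha / 2 * len(stats))],
            stats[int((1 - alpha / 2) * len(stats)) - 1])
```

With 2,710 labels, a 95% interval from 2,000 resamples is stable to roughly a tenth of a percentage point, which is why the audit can publish bounds on every metric rather than point estimates alone.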
Dataset

Niche Database

6,779 niches scored across 16 product categories and 19 marketplaces with the current production engine.

Browse the dataset →
Open Benchmark

Public Leaderboard

The first held-out FBA niche benchmark — 169 labeled niches, downloadable JSONL, leaderboard open to any vendor or independent researcher. RIDGE holds rank #1; no competitor has submitted as of 2026-04-25.

View the leaderboard →
Methodology

Methodology

How RIDGE produces a verdict: data sources, validation protocol (k-fold + 169-cohort held-out + bootstrap CI + calibration + conformal abstain + cohort prior disclosure), and what we deliberately do not publish.

Read the methodology →
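The "conformal abstain" step in the protocol above means the verdict head refuses to answer when its prediction set is not a single class. A minimal split-conformal sketch for a binary GO/NO-GO verdict, assuming a held-out calibration set of predicted probabilities and true labels (the function name, the 1 - p(true class) nonconformity score, and alpha=0.1 are illustrative choices, not RIDGE's published configuration):

```python
import math

def conformal_abstain(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split-conformal abstention for a binary verdict (illustrative).

    cal_probs: P(GO) on a held-out calibration set; cal_labels: true 0/1.
    Nonconformity = 1 - probability assigned to the true class.
    Returns 'GO', 'NO-GO', or 'ABSTAIN' (set not a singleton) per test prob.
    """
    scores = [1 - (p if y == 1 else 1 - p)
              for p, y in zip(cal_probs, cal_labels)]
    n = len(scores)
    # Conformal quantile at ceil((n+1)(1-alpha))/n, clipped to the sample.
    q = sorted(scores)[min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)]
    out = []
    for p in test_probs:
        keep = [c for c, pc in ((1, p), (0, 1 - p)) if 1 - pc <= q]
        out.append({(1,): "GO", (0,): "NO-GO"}.get(tuple(keep), "ABSTAIN"))
    return out
```

The design point is that the abstain rate is not a tuning knob picked by hand: it falls out of the calibration data and the chosen error budget alpha, so a vendor cannot quietly trade coverage for headline accuracy.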

Why publish this at all?

FBA sellers commit between $10,000 and $100,000 of inventory on the back of a research decision. Those dollars deserve a vendor that shows its work. Every number on this page can be audited against the RIDGE niche database, and every verdict in a RIDGE report cites the same underlying evidence. When Helium 10, Jungle Scout, Viral Launch, Data Dive, or SellerApp publish their own back-tested accuracy, we will update our comparison tables — until then, RIDGE stands alone in this transparency.

Use the evidence, not adjectives

Order a RIDGE report — the same methodology audited on this page is applied to your niche. 48-hour delivery. 40+ sections. 14-day money-back guarantee.

Order Analysis