About

An AI benchmark for the procurement era.

Built by 珈特科技 in Taipei. Phase 0 dogfood live; Phase 1 (15-model cohort, 3-judge ensemble, 60-task pack) ships Q3 2026.

Why this exists

Most AI benchmarks publish a leaderboard number and ask you to trust it. Some tell you which judge model they used. A handful release the prompts. None (at the time we started) gave you a cryptographic chain from the score on your screen back to the bytes the model returned and the bytes we anchored at midnight UTC.

That chain is the only thing a procurement officer can put in a vendor review. So we built it.
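In practice, that chain bottoms out in something anyone can recompute. Here is a minimal sketch of the kind of check it enables, assuming SHA-256 leaves and a simple binary Merkle tree that duplicates the last node on odd levels; the actual GetAI anchoring scheme may differ in tree shape and leaf encoding.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then pair-and-hash upward until one root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical raw model responses captured during a benchmark run.
responses = [b"model-a output", b"model-b output", b"model-c output"]

# The root published ("anchored") at midnight UTC.
anchored_root = merkle_root(responses)

# Later, an auditor recomputes the root from the raw bytes and compares.
assert merkle_root(responses) == anchored_root
```

A reviewer who holds the raw response bytes and the published root needs nothing else from us: if any byte was altered after anchoring, the recomputed root will not match.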

Who this is for

Roadmap

Apr 2026 · Phase 0 · live: evidence chain end-to-end (R2 + Neon + edge verify); single-provider smoke baseline
May 2026 · Phase 0 · scale: Traditional Chinese coding pack at 60 tasks; daily cron stable; 14-day Merkle streak
Jun 2026 · Phase 1 · open: 3-judge ensemble; 15-model cohort; public leaderboard live; pricing snapshot
Jul 2026 · Phase 1 · vertical: silent-update probe public; Traditional Chinese invoice OCR pack; accounting partner samples
Aug 2026 · Phase 1 · workspace: tenant-private packs; evidence reports; alert engine; RBAC
Sep 2026 · Phase 1 · advisor: Routing Advisor; Shadow Simulator; incident attribution
Oct 2026 · Phase 1 · GA: Cosign + Merkle; Stripe billing; 2-3 paid pilots

The team

Built by 珈特科技 (GetInfo Tech) in Taipei. Same team behind SayVox (real-time voice translation) and GetMSG (enterprise .msg viewer).

Talk to us

If you have a workload you think GetAI should benchmark, write to perry@getinfo.com.tw.

Built on Cloudflare Pages + R2 + Pages Functions, Neon Postgres (ap-southeast-1), and a MiniMax-M2.7 baseline. No managed servers. Two vendor accounts. One bill, at $0.