Small tools that help you see your own work clearly. Free for students. Open source.
AI qualifies · Code judges · AI narrates
Try the tools · Support the practice
Built by Prayas Abhinav — teacher, designer, Ahmedabad
Open Tools
Each tool does one thing. Free to use. Source available. Clone the repo and run locally with your own API key — no login. Hosted demos require email verification while funds last.
Paste a design concept. Get a diagnostic across five dimensions — what's solid, what's thin, what's unclear. A domain-trained DeBERTa model qualifies the text, deterministic rules apply judgment thresholds, Claude Haiku narrates the result.
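The three-layer split described above can be sketched in a few lines. This is an illustrative toy, not the actual Koher code: the function names, signal fields, and thresholds are all assumptions, and the real qualification layer is a DeBERTa model while the real narration layer is Claude Haiku.

```python
# Hypothetical sketch of the qualify -> judge -> narrate separation.
# All names and thresholds here are illustrative, not Koher's real code.

def qualify(text: str) -> dict:
    """Qualification layer: extract structured signals from the text.
    (In the real tool, a domain-trained DeBERTa model does this.)"""
    return {
        "claims": 3,          # distinct claims detected
        "evidence": 1,        # supporting evidence items found
        "scope_clear": False, # whether the scope is stated clearly
    }

def judge(signals: dict) -> dict:
    """Judgment layer: explicit, auditable rules.
    When standards change, you edit these lines, not the model."""
    return {
        "evidence_sufficient": signals["evidence"] >= 2,
        "scope_ok": signals["scope_clear"],
    }

def narrate(verdict: dict) -> str:
    """Narration layer: turn the verdict into prose.
    (In the real tool, Claude Haiku does this; here, a template.)
    Note it only ever sees the verdict, never the raw signals."""
    weak = [name for name, ok in verdict.items() if not ok]
    return "Solid." if not weak else f"Thin on: {', '.join(weak)}"

print(narrate(judge(qualify("Paste a design concept here."))))
```

The point of the shape is auditability: the `judge` function is plain code you can read, test, and adjust, so two identical inputs always get identical verdicts.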
Select three experiential qualities from twelve — anticipation, tension, relief, discovery, and more. See whether your combination is harmonic, distinct, dynamic, complex, or paradoxical. User selection replaces AI classification; embedding similarities determine relationships.
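A minimal sketch of how embedding similarity might classify a combination, assuming toy 3-dimensional vectors and made-up thresholds — the real tool uses learned embeddings and its own category boundaries, neither of which is shown here.

```python
# Toy sketch: classify a set of experiential qualities by the average
# pairwise cosine similarity of their embeddings. The vectors and the
# thresholds below are invented for illustration only.
import math

EMBEDDINGS = {  # hypothetical 3-d embeddings
    "anticipation": [0.9, 0.3, 0.1],
    "tension":      [0.8, 0.4, 0.2],
    "relief":       [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def classify(qualities):
    """Map average pairwise similarity to a coarse relationship label."""
    pairs = [(a, b) for i, a in enumerate(qualities)
             for b in qualities[i + 1:]]
    avg = sum(cosine(EMBEDDINGS[a], EMBEDDINGS[b])
              for a, b in pairs) / len(pairs)
    if avg > 0.85:
        return "harmonic"   # qualities pull in the same direction
    if avg > 0.5:
        return "dynamic"    # related but not redundant
    return "distinct"       # largely independent qualities

print(classify(["anticipation", "tension", "relief"]))
```

Because the user selects the qualities, no classifier is needed on the input side; the only computation is this deterministic similarity lookup over fixed embeddings.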
Open source under MIT licence. Verify your email to try — 10 free analyses, no account setup. Or clone the repo and run locally. See all tools →
What This Means
Language understanding means reading text and extracting what's there — identifying claims, detecting tone, recognising structure. AI does this well.
Judgment means deciding what counts — is this evidence sufficient? Is this scope too broad? Does this meet the standard? This requires explicit, auditable rules that reflect domain expertise.
When you ask a language model to do both at once — "Is this contract complete?" or "Is this essay well-argued?" — you get answers you can't audit, can't adjust, and can't trust to be consistent.
The difference in practice
| Task | Typical AI | Koher approach |
|---|---|---|
| Design concept evaluation | "AI, is this concept coherent?" Different answer each time, no visible rubric | AI identifies claims, evidence, scope → Code applies thresholds → Consistent, auditable result |
The separated version is auditable, consistent, and adjustable. When standards change, you update the rules — not retrain the AI.
The Approach
A pattern that emerged from teaching: AI qualifies, code judges, AI narrates.
The key insight: structured signals from the qualification layer never reach the language layer directly. Every judgment passes through explicit, auditable rules first.
Positions
Philosophical statements on AI, judgment, and how domain expertise should shape the tools we build.
Why auditable feedback available 24/7 — even at 40% of a professor's quality — beats opaque AI encouragement that helps no one.
Read position →
Why staying matters more than scaling. A position on compounding practice and refusing the exit ramps that expectations create.
Read position →
A philosophy for building AI tools that separate language from judgment — the complete specification.
Read full architecture →

The Venue
The tools demonstrate the architecture technically. The venue lets you feel what it addresses.
Every quarter, a new interactive experience opens — each one placing you in the position of a model. You judge under constraint: incomplete information, time pressure, forced confidence. Then you see how an AI judged the same material under similar constraints.
The point is not to prove AI wrong or you right. The point is to feel the gap between confidence and knowledge — and to recognise that gap in every system that judges.
8 seconds. Partial text. Rate anyway. Then compare your judgments to an AI's — and feel what confidence under constraint actually means.
Enter the venue →

About
I teach design at Anant National University. For fifteen years I've watched students struggle to see whether their concepts hold together — and I've struggled to articulate why one does and another doesn't.
The tools here are an attempt to make that seeing easier. Not to judge for you, but to surface what's there so you can judge more clearly.
Each tool encodes approximately 40% of what a teacher does — the portion that is pattern-based and repeatable. The other 60% requires human presence. That 60% is irreplaceable; these tools don't pretend otherwise.
One tool every three months. Free for students. Open source. If nothing ships, that's fine. The practice continues.
Prayas Abhinav · Ahmedabad
Support
The tools are free for students. Always. But API calls and hosting cost money.
If you're a working practitioner and a tool helped you, you can pay it forward — extend access for students who can't pay.
When the balance hits zero, hosted demos pause. The open-source version continues — clone the repo, bring your own API key.
₹500 ≈ 50 student analyses · Secure payment via Razorpay
Or just use the tools. That's enough too.
Data use: Concepts submitted to hosted demos are logged solely to improve model accuracy. No human reads your submissions. Data is never sold, shared, or used commercially — training data only. Self-hosted versions log nothing.