KOHER

AI handles language.
Code handles judgment.

Small tools that help you see your own work clearly. Free for students. Open source.

AI qualifies · Code judges · AI narrates

Try the tools · Support the practice

Built by Prayas Abhinav — teacher, designer, Ahmedabad

Open Tools

Small things that might help.

Each tool does one thing. Free to use. Source available. Clone the repo and run locally with your own API key — no login. Hosted demos require email verification while funds last.

Design Concepts · Live

Coherence Diagnostic

Paste a design concept. Get a diagnostic across five dimensions — what's solid, what's thin, what's unclear. A domain-trained DeBERTa model qualifies the text, deterministic rules apply judgment thresholds, Claude Haiku narrates the result.

Play & Games · Live

Play Shape Diagnostic

Select three experiential qualities from twelve — anticipation, tension, relief, discovery, and more. See whether your combination is harmonic, distinct, dynamic, complex, or paradoxical. User selection replaces AI classification; embedding similarities determine relationships.
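The relationship classification described above can be sketched in a few lines. This is an illustrative reconstruction, not the tool's actual code: the quality vectors, threshold values, and function names are hypothetical stand-ins for the real learned embeddings.

```python
from itertools import combinations
from math import sqrt

# Toy embedding vectors for a few experiential qualities.
# The real tool uses learned embeddings; these are illustrative only.
QUALITIES = {
    "anticipation": [0.9, 0.3, 0.1],
    "tension":      [0.8, 0.4, 0.0],
    "relief":       [0.1, 0.9, 0.2],
    "discovery":    [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

def shape_of(selection):
    """Classify a three-quality selection by its pairwise similarities."""
    sims = [cosine(QUALITIES[a], QUALITIES[b])
            for a, b in combinations(selection, 2)]
    avg = sum(sims) / len(sims)
    # Deterministic thresholds: the judgment lives in code, not in a model.
    if avg > 0.8:
        return "harmonic"
    if avg < 0.4:
        return "distinct"
    return "dynamic"
```

Because the user selects the qualities and the thresholds are fixed, the same selection always produces the same shape.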

Open source under MIT licence. Verify your email to try — 10 free analyses, no account setup. Or clone the repo and run locally. See all tools →

What This Means

Language and judgment are different tasks. Most AI tools conflate them.

Language understanding means reading text and extracting what's there — identifying claims, detecting tone, recognising structure. AI does this well.

Judgment means deciding what counts — is this evidence sufficient? Is this scope too broad? Does this meet the standard? This requires explicit, auditable rules that reflect domain expertise.

When you ask a language model to do both at once — "Is this contract complete?" or "Is this essay well-argued?" — you get answers you can't audit, can't adjust, and can't trust to be consistent.

The difference in practice

Task: Design concept evaluation

Typical AI: "AI, is this concept coherent?" Different answer each time, no visible rubric.

Koher approach: AI identifies claims, evidence, scope → code applies thresholds → a consistent, auditable result.

The separated version is auditable, consistent, and adjustable. When standards change, you update the rules — not retrain the AI.
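As a sketch of what "update the rules, not the model" looks like in practice, here is a hypothetical threshold table and judgment function. The signal names and values are invented for illustration; they are not Koher's actual rule set.

```python
# Hypothetical rule set. Adjusting a standard means editing this table,
# not retraining a model.
RULES = {
    "min_claims": 2,        # at least two identifiable claims
    "min_evidence": 1,      # each claim needs some support
    "max_scope_terms": 5,   # too many scope terms suggests over-broad scope
}

def judge(signals, rules=RULES):
    """Apply explicit thresholds to structured signals from the AI layer."""
    findings = []
    if signals["claims"] < rules["min_claims"]:
        findings.append("too few claims")
    if signals["evidence"] < rules["min_evidence"]:
        findings.append("insufficient evidence")
    if signals["scope_terms"] > rules["max_scope_terms"]:
        findings.append("scope too broad")
    return {"pass": not findings, "findings": findings}
```

Every verdict can be traced to a named threshold, which is what makes the result auditable and adjustable.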

The Approach

Separate what AI does well from what requires human judgment.

A pattern that emerged from teaching:

The Koher Architecture

Qualification
Transforms unstructured input into structured signals
AI reads language patterns — extracting what is there, not judging it
Rules
Applies deterministic logic to produce judgments
Code handles judgment — auditable, reproducible, adjustable
Language
Translates judgments into readable explanation
AI narrates decisions already made — it does not override them

The key insight: structured signals from the qualification layer never reach the language layer directly. Every judgment passes through explicit, auditable rules first.
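The three layers can be sketched as follows. The qualify() and narrate() stubs stand in for AI calls (a classifier model and a narration model, respectively); the heuristics and names here are illustrative assumptions, not the production pipeline.

```python
def qualify(text):
    """Layer 1 (AI): extract structured signals. No judging here.
    Stubbed with a crude heuristic for illustration."""
    sentences = [s for s in text.split(".") if s.strip()]
    return {"claims": len(sentences),
            "cited": sum("because" in s for s in sentences)}

def apply_rules(signals):
    """Layer 2 (code): deterministic, auditable thresholds."""
    ok = signals["claims"] >= 2 and signals["cited"] >= 1
    return {"verdict": "coherent" if ok else "thin", "signals": signals}

def narrate(judgment):
    """Layer 3 (AI): explain a decision already made. Never override it.
    Stubbed with a template for illustration."""
    return (f"Verdict: {judgment['verdict']} "
            f"({judgment['signals']['claims']} claims).")

def diagnose(text):
    # Signals never reach the language layer directly:
    # every judgment passes through apply_rules() first.
    return narrate(apply_rules(qualify(text)))
```

Note that diagnose() never hands raw signals to narrate(); the only path between the two AI layers runs through the rules.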

Positions

What we believe and why.

Philosophical statements on AI, judgment, and how domain expertise should shape the tools we build.

Position Statement · 21 Feb 2026

When Students Trust ChatGPT More Than Teachers

Why auditable feedback available 24/7 — even at 40% of a professor's quality — beats opaque AI encouragement that helps no one.

Read position →
Position Statement · 15 Feb 2026

Being Around

Why staying matters more than scaling. A position on compounding practice and refusing the exit ramps that expectations create.

Read position →
Position Statement · 13 Feb 2026

Koher Architecture Specification

A philosophy for building AI tools that separate language from judgment — the complete specification.

Read full architecture →

View all positions

The Venue

Art that embodies the architecture.

The tools demonstrate the architecture technically. The venue lets you feel what it addresses.

Every quarter, a new interactive experience opens — each one placing you in the position of a model. You judge under constraint: incomplete information, time pressure, forced confidence. Then you see how an AI judged the same material under similar constraints.

The point is not to prove AI wrong or you right. The point is to feel the gap between confidence and knowledge — and to recognise that gap in every system that judges.

Season 1 · Now Open

You Are The Model

8 seconds. Partial text. Rate anyway. Then compare your judgments to an AI's — and feel what confidence under constraint actually means.

Enter the venue →

About

A teacher making tools.

I teach design at Anant National University. For fifteen years I've watched students struggle to see whether their concepts hold together — and I've struggled to articulate why one does and another doesn't.

The tools here are an attempt to make that seeing easier. Not to judge for you, but to surface what's there so you can judge more clearly.

Each tool encodes approximately 40% of what a teacher does — the portion that is pattern-based and repeatable. The other 60% requires human presence. That 60% is irreplaceable; these tools don't pretend otherwise.

One tool every three months. Free for students. Open source. If nothing ships, that's fine. The practice continues.

Prayas Abhinav · Ahmedabad

Support

Pay it forward.

The tools are free for students. Always. But API calls and hosting cost money.

If you're a working practitioner and a tool helped you, you can pay it forward — extend access for students who can't pay.

Current Balance Sheet
Monthly costs: ₹2,400
  API calls (Claude): ₹1,800
  Hosting: ₹600
Current balance: ₹4,200
Runway: ~7 weeks

When balance hits zero, hosted demos pause. Open source continues — clone the repo, bring your own API key.

₹500 ≈ 50 student analyses · Secure payment via Razorpay

Or just use the tools. That's enough too.

Data use: Concepts submitted to hosted demos are logged solely to improve model accuracy. No human reads your submissions. Data is never sold, shared, or used commercially — training data only. Self-hosted versions log nothing.