How we work. What we protect. What we refuse.

Judgment Is Personal

When we say "humans make decisions," we do not mean humans as a species. We mean humans as persons — individuals who have examined their own positions and arrived at them through their own thinking.

Deferring to social convention without examining it is not human judgment. It is cognitive offloading — functionally no different from asking AI to judge for you. The offloading destination changes (culture, tradition, peer consensus, an algorithm), but the abdication is the same. Someone else — or something else — decided, and you inherited the decision without inhabiting it.

This is why Koher's architecture makes rules explicit and configurable. A teacher who writes a config is forced to examine what she actually thinks — not what her discipline assumes, not what her institution rewards, not what her peers would approve of. The config asks: what do you see? Not: what are you supposed to see? The act of writing it is the act of arriving at a personal position.

Koher does not build tools that judge. It builds tools that make judgment visible — so the person using the tool must decide whether to accept it, revise it, or reject it entirely. That decision, personally made, is what we mean by human.

The Architecture Is the Proof

Most discourse about AI runs to one of two extremes: breathless enthusiasm or fearful resistance. Koher occupies neither. It says something simpler: AI is good at reading patterns and generating language. AI is not good at judgment. Code handles judgment better — it is auditable, reproducible, explicit. Humans handle decisions better than either.

This would be merely philosophical if it were not demonstrated in working tools. Koher builds with a three-layer architecture — qualification, rules, language — where each layer does only what it is suited for. Every tool ships with a curtain you can pull back: which signals the AI read, which rules fired, what the AI narrated and why.
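The three-layer split can be sketched in a few lines of code. This is a minimal illustration of the division of labour the paragraph describes, not Koher's actual implementation; every function, signal, and rule name here is invented.

```python
# Hypothetical sketch of the three-layer architecture:
# the AI layer reads patterns into signals, explicit code applies the
# rules, and the language layer only narrates what the rules decided.

def qualify(text: str) -> dict:
    """Layer 1 (AI): read patterns into named signals. Stubbed here
    with trivial string counting in place of a real model."""
    lowered = text.lower()
    return {
        "hedging": lowered.count("maybe") + lowered.count("perhaps"),
        "word_count": len(text.split()),
    }

def apply_rules(signals: dict, config: dict) -> list[str]:
    """Layer 2 (code): explicit, auditable rules. Every firing is
    recorded, so the judgment can be inspected and disagreed with."""
    fired = []
    if signals["word_count"] < config["min_words"]:
        fired.append("too_short")
    if signals["hedging"] >= config["hedging_threshold"]:
        fired.append("hesitation_present")  # hesitation is signal, not failure
    return fired

def narrate(fired: list[str]) -> str:
    """Layer 3 (language): describe what fired. It decides nothing."""
    if not fired:
        return "No rules fired."
    return "Rules fired: " + ", ".join(fired)

config = {"min_words": 50, "hedging_threshold": 1}
signals = qualify("Maybe this layout works, but perhaps the grid is off.")
print(narrate(apply_rules(signals, config)))
```

Because the rules live in plain code and the thresholds live in a config, a reader can point at the exact line that produced a judgment, which is the structural transparency the paragraph claims.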

The AI tool landscape is full of claims. Tools claim to be transparent, ethical, human-centred. Koher does not claim these things. It builds tools where transparency is structural — you can inspect the judgment because the judgment is code, not a black box. You can disagree with the rules because the rules are explicit. You can rewrite them.

The architecture is not a constraint. It is the contribution.

Judgment Is Encodable, Shareable, and Plural

A teacher in Ahmedabad can write a JSON config that encodes how she evaluates design coherence. A teacher in Berlin can write a different config. The same student's work, run through both, produces different readings. Neither is wrong. Both are portraits of how those teachers see.
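To make the proposition concrete, such a config might look like the sketch below. Every field name is invented for illustration; this is not Koher's actual schema, only a guess at the kind of structure a teacher might write.

```json
{
  "teacher": "example-ahmedabad",
  "dimension": "design_coherence",
  "rules": [
    {"signal": "grid_alignment", "threshold": 0.7, "weight": 2},
    {"signal": "typographic_consistency", "threshold": 0.5, "weight": 1}
  ],
  "narration_style": "describe, do not grade"
}
```

A second teacher's config would differ in thresholds, weights, perhaps in which signals matter at all, and the difference between the two files is precisely the difference between two ways of seeing.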

This is genuinely new. Not the technology — the proposition that expertise can become infrastructure without becoming authority. The config does not replace the teacher. It makes her way of seeing portable, comparable, and discussable. It turns tacit judgment into something that can be shared without being imposed.

If this works, what Koher has built is not a set of AI tools. It is a medium for encoding and exchanging perception.

Ground Over Surface

We have spent enough time around impressive surfaces — institutions with beautiful reputations and hollow classrooms, tools with extraordinary interfaces and no judgment underneath. Koher values work that arrives at the ground level: specific, concrete, honest about what it does and does not know.

A half-finished thing with a genuine question attached is worth more than a polished deliverable that performs completeness. Hesitation is signal. The absence of hesitation means either perfect understanding (rare) or disengagement from difficulty (common). We would rather not assume which one it is.

Honesty as Structure

Koher declared AI co-authorship publicly because it was true. The architecture is transparent because hidden judgment is unauditable judgment. The balance sheet is public because hidden costs become exit ramps.

This extends to people. We do not ask contributors to perform enthusiasm. We do not penalise someone for saying "this tool did nothing for me" — that observation is structurally significant, and we pay for it. We do not reward silence, but we do not punish it either. We pay for honest engagement and let absence be absence.

Every tool ships with a "Behind the Curtain" toggle. The user sees the result first, then can inspect exactly how it was produced — which layer did what, what signals were read, what rules fired, what the AI narrated. Nothing is hidden. This is not a feature. It is a commitment.
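One way to picture what the toggle exposes is a structured record, one per result. The shape below is a hypothetical sketch; the field names are invented for illustration and are not Koher's actual format.

```python
# Hypothetical shape of a "Behind the Curtain" record: one auditable
# object per result, covering all three layers. Field names are invented.
from dataclasses import dataclass

@dataclass
class CurtainRecord:
    signals_read: dict[str, float]       # what the AI layer extracted
    rules_fired: list[str]               # which explicit rules matched
    narration: str                       # what the language layer said
    config_version: str = "unversioned"  # which ruleset produced this

record = CurtainRecord(
    signals_read={"grid_alignment": 0.62},
    rules_fired=["grid_below_threshold"],
    narration="The grid drifts in the lower third.",
)
print(record.rules_fired)
```

Keeping the config version in the record matters: a reader can trace a judgment not just to a rule, but to the exact ruleset that was in force when it fired.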

Doing More With Less

The prevailing logic of AI development is accumulation: larger models, more parameters, bigger datasets, more compute. Koher inverts this. Smaller models doing remarkable things on constrained hardware — edge deployments, phones, cheap servers, places where a 700 MB model will never run.

A 50 MB model that matches 95% of a larger model's performance is not a compromise. It is a better design. The constraint is generative: it forces architectural cleverness over brute-force scaling. It means a student in rural Maharashtra with a phone and intermittent connectivity can run the same tool as a student at a well-funded university with fibre broadband.

Serve People, Not Institutions

Tools are free for students. Always. Email verification, not accounts — we need to know you are a person, not build a profile of you. No freemium tiers, no institutional licensing. A student in rural Maharashtra and a student at a well-funded university see the same tool, get the same depth.

When an institution benefits, it is because the people inside it benefit. We do not sell to institutions. We build for people and let institutions notice.

The Range

Koher began in design education — a teacher building tools for his students. But the architecture applies wherever genuine stake exists. Design education, yes. But also: ethical technology, animal rights, artistic practice, philosophy of technology, veganism, critical pedagogy. The range extends wherever the question "what should AI judge, and what should it not?" intersects with a domain where Prayas has lived experience, professional knowledge, ethical commitment, or intellectual investment.

This is not mission creep. It is the natural consequence of an architecture that is domain-agnostic. The three layers work wherever language carries judgment. The specificity is in the configs and the trained models, not in the architecture itself.

Earned, Not Applied For

The people closest to the practice arrive through demonstrated work, not credentials or applications. Contributors are paid per accepted contribution. There is no flat rate for presence. Core team positions — time-based, sustained, with a visible ladder — are earned by people whose work consistently lands at the ground level. The door is open to anyone, nationwide. What matters is the work.

Weird Over Polished

Koher is a teacher making free AI tools with a ten-year horizon, no business model, and a public declaration of AI co-authorship. None of this is normal.

We would rather work with people who find something genuinely confusing or disagreeable about the practice than with people who find it generically exciting. Confusion is engagement. Generic excitement is noise.

Worth Is Not Hierarchical

The hierarchy exists — Prayas sets direction, reviews work, makes architectural decisions. But it is structural, not ontological. A student's observation that "this tool did nothing for me" shaped the practice as much as a year of development.

You are not lesser. You are earlier.

The Practice Continues Regardless

Koher is a ten-year practice. This is not a tagline — it is a structural decision about what kind of proof is required. A claim this large cannot be proved in a quarter, a funding cycle, or a product launch. It requires tools that accumulate, a body of work that compounds, and enough time for the world to encounter the work and decide whether it matters.

The practice continues regardless of funding, recognition, or adoption. This is not stoicism — it is the removal of exit ramps. When external validation becomes a condition for continuing, it also becomes a reason for stopping. Koher refuses that bargain.

For Supporters

Your contribution does not keep the lights on. The lights are already on. Your contribution makes the room larger.

These values are not aspirational. They describe how Koher already works. If something we do contradicts what is written here, the contradiction is the problem, not the document. The work itself is the argument.