The definitive argument against post-hoc explanations. Rudin demonstrates that interpretable models match black-box accuracy in high-stakes domains -- and that explaining opaque models is fundamentally unreliable.
Unpacks what "interpretability" actually means across disciplines. Distinguishes transparency (seeing the model) from post-hoc explanation (rationalising the model) -- a distinction that maps directly to the Koher architecture.
Proposes evaluation frameworks for interpretability research. Argues that without rigorous definitions, claims about "explainable AI" remain unfalsifiable.
Combines neural network accuracy with first-order logic explanations. The model produces human-readable rules alongside predictions -- an architecture where pattern recognition and explicit reasoning coexist.
How AI Reads Language
The mechanics behind Koher's qualification layer -- what models actually do when they process text, and why that operation is different from understanding it.
The most accessible visual explanation of the transformer architecture. Step-by-step diagrams show how attention mechanisms let models weigh relationships across entire sequences -- the foundation of every modern language model.
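The attention operation those diagrams walk through reduces to a few lines of linear algebra. A minimal single-head sketch in NumPy -- shapes and names here are illustrative, not any particular model's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every query position scores its
    relevance to every key position, and the softmax weights mix the
    value vectors accordingly -- relationships across the whole
    sequence are weighed in one step."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # (seq, seq) pairwise relevance
    weights = softmax(scores)         # each row sums to 1
    return weights @ V                # weighted blend of value vectors

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q, K, V = (rng.standard_normal((seq_len, d_k)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Stacking this with multiple heads and feed-forward layers gives the full transformer block the article illustrates.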
Distribution-free methods for providing statistical guarantees on NLP classification -- without retraining the model. Directly relevant to how Koher's rules layer sets confidence thresholds on qualification scores.
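The split conformal recipe behind such guarantees is simple enough to sketch. The function names and toy numbers below are illustrative only -- this is not Koher's actual thresholding code:

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.2):
    """Split conformal prediction: nonconformity is 1 minus the model's
    probability of the true label on a held-out calibration set; the
    corrected (1 - alpha) quantile yields a threshold with a
    distribution-free coverage guarantee -- no retraining needed."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample correction
    return np.quantile(scores, min(q, 1.0), method="higher")

def prediction_set(probs, threshold):
    """Every label whose nonconformity stays under the threshold."""
    return [k for k, p in enumerate(probs) if 1.0 - p <= threshold]

# toy calibration data: softmax outputs over three classes
cal_probs = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.2, 0.7],
                      [0.6, 0.3, 0.1]])
cal_labels = np.array([0, 1, 2, 0])
t = conformal_threshold(cal_probs, cal_labels)
print(prediction_set([0.85, 0.10, 0.05], t))  # a confident input: one label
```

A hesitant input yields a larger (or empty) set, which is exactly the signal a rules layer can act on.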
BERT reads text in both directions simultaneously, producing contextual embeddings that capture word meaning far more effectively than earlier unidirectional models. The architecture behind most text classification systems, including Koher's qualification layer.
The model Koher uses. DeBERTa represents each token's content and its position as separate vectors and computes attention over them independently, sharpening how the model captures word relationships. This disentangled approach achieves stronger classification accuracy than BERT.
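The disentangled score can be sketched as three additive terms. This deliberately omits DeBERTa's relative-distance bucketing and multi-head machinery -- treat it as a cartoon of the idea, not the paper's exact formulation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def disentangled_scores(H, P, Wq, Wk, Wqr, Wkr):
    """Three additive attention terms: content-to-content (c2c),
    content-to-position (c2p) and position-to-content (p2c), computed
    from separate content (H) and position (P) representations."""
    Qc, Kc = H @ Wq, H @ Wk     # content projections
    Qr, Kr = P @ Wqr, P @ Wkr   # position projections
    c2c = Qc @ Kc.T
    c2p = Qc @ Kr.T
    p2c = Qr @ Kc.T             # the paper transposes this term; simplified here
    return (c2c + c2p + p2c) / np.sqrt(3 * Qc.shape[-1])

rng = np.random.default_rng(1)
seq, d = 5, 8
H = rng.standard_normal((seq, d))   # token content vectors
P = rng.standard_normal((seq, d))   # position vectors, kept separate
Ws = [rng.standard_normal((d, d)) for _ in range(4)]
A = softmax(disentangled_scores(H, P, *Ws))
print(A.shape)  # (5, 5)
```

In standard BERT, content and position are summed into one vector before attention; keeping them apart is the whole point of the design.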
Visual walkthrough of how BERT pre-trains on masked language modelling and transfers to classification tasks. Shows the pipeline from raw text to structured predictions.
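The pre-training objective at the heart of that pipeline is easy to sketch. The real recipe also replaces some chosen tokens with random words or leaves them unchanged; this simplified version masks only:

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    """BERT-style masked language modelling: hide a random subset of
    tokens and train the model to reconstruct them from the context on
    both sides -- the source of its bidirectional representations."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)
            targets.append(tok)    # the loss is computed only here
        else:
            inputs.append(tok)
            targets.append(None)   # ignored by the loss
    return inputs, targets

inp, tgt = mask_tokens("the studio critique shaped the final model".split())
print(inp)
```

Fine-tuning for classification then swaps this reconstruction head for a small classifier over the same learned representations.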
Neural + Symbolic: Hybrid Architectures
Systems that combine neural pattern recognition with deterministic rules -- the broader research family that Koher's three-layer architecture belongs to.
A system that discovers symbolic rules directly from perceptual data without predefined logic. Demonstrates that neural networks can learn to produce interpretable rule-like outputs -- bridging perception and reasoning.
NeuRules enables end-to-end learning of interpretable rule lists via neural optimisation, without requiring discretisation. Unifies neural scalability with symbolic transparency.
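At inference time, a rule list of the kind such systems learn is just an ordered sequence of if-then clauses where the first match decides. A minimal sketch -- the `Rule` class and the qualification labels are hypothetical, not NeuRules' API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # predicate over named features
    label: str
    text: str                          # human-readable form of the rule

def apply_rule_list(rules, default, x):
    """Ordered rule list: the first rule whose condition holds decides
    the label, so every prediction carries the rule that produced it."""
    for rule in rules:
        if rule.condition(x):
            return rule.label, rule.text
    return default, "default rule"

rules = [
    Rule(lambda x: x["score"] >= 0.9, "qualified", "IF score >= 0.9 THEN qualified"),
    Rule(lambda x: x["score"] <= 0.3, "rejected",  "IF score <= 0.3 THEN rejected"),
]
print(apply_rule_list(rules, "needs_review", {"score": 0.55}))
# -> ('needs_review', 'default rule')
```

What the neural optimisation contributes is learning the conditions, thresholds, and ordering end-to-end; the artefact it produces remains this transparent.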
Learns structured rule systems from minimal data. Demonstrates compositional generalisation -- the ability to combine learned rules in novel ways -- which pure neural approaches struggle with.
Design Education & Studio Culture
What AI enters when it enters the design classroom -- the pedagogical traditions, critique cultures, and ways of knowing that resist parameterisation.
Categorises AI integration into design representation, deduction, and derivation. Maps how AI tools are reshaping teacher-student dynamics and critique culture.
Proposes pedagogical structures that augment design learning while preserving human-centred critique. Explores how AI handles rote tasks so students can focus on conceptual depth.
Documents a curricular experiment with Midjourney across two cohorts of 100+ students. The tool expanded ideation but surfaced tensions around ethics, originality, and foundational skill development.
Theorises AI analytics for multiscale design assessment in project-based studios -- indexing student spatial and use patterns to contextualise feedback beyond intuition.
Design as a third area of knowledge -- distinct from science and humanities. Argues that designerly thinking relies on tacit skills, synthetic judgment, and co-evolutionary problem-solution framing.
Human-AI Collaboration
The intellectual lineage of augmentation -- why the best AI systems extend human capability rather than replace human judgment.
The foundational text. Engelbart envisions computers as dynamic aids for problem-solving -- extending human capability through interactive tools. Every "human-in-the-loop" system descends from this framework.
Proposes adaptive models with mutual learning and information exchange between humans and AI. Shifts the frame from AI-as-tool to AI-as-partner with defined boundaries.
Reviews the intellectual history from Engelbart and Licklider to contemporary hybrid systems. Contrasts philosophical and engineering visions for problems that neither humans nor AI can solve alone.
Meta-analysis revealing when human-AI collaboration outperforms either alone. Finds augmentation works best in content creation and complex reasoning -- and fails when AI already exceeds human capability.
AI Ethics & Accountability in Education
Why judgment in educational assessment should remain inspectable, challengeable, and human-governed.
Analyses 17 empirical studies on ethical concerns -- bias, privacy, accountability -- in AI education systems. Proposes guidelines prioritising human-centred design over full automation.
Outlines ethical effects across access, algorithms, and citizenship. Recommends iterative human oversight -- not one-time audits -- for AI systems that affect student outcomes.
Argues for procedural fairness and contestability in AI-driven educational assessments. Students should be able to understand and challenge the basis of AI-generated evaluations.