PRAXIS Evaluation Track
Programme evaluation infrastructure
Twelve years of field evaluation experience, from pastoralist communities in South Sudan to health facilities in rural Burundi, encoded into open-source tools that run on your machine.
Why this exists
I kept running into the same problem. You find a useful tool, you build your workflow around it, and then the pricing changes. Or the platform pivots. Or the internet goes down in the middle of a field visit and you lose access to your own work. It happened enough times that I stopped trusting tools I could not control.
PRAXIS runs entirely in your browser. Everything stays on your machine. No server calls, no accounts, no tracking. If the wifi cuts out in Juba or Ouagadougou, the tools still work. If PRAXIS disappears tomorrow, you still have your files. MIT licensed, permanently free, fully yours.
"I spent years looking for evaluation tools built by someone who had actually run an evaluation in a conflict zone. I could not find them. So I built them."
Emmanuel Nene Odjidja
The ecosystem
Foundation
AI Evaluation Skills
Claude Project knowledge modules encoding validated evaluation methodology. Six core modules plus sector-specific extensions covering over 20 evaluation designs and 16 validated scales. The AI applies structured methodology to your specific context, programme type, and operating environment.
View on GitHub →

Tools
Browser Toolkit
Six tools covering the full evaluation lifecycle. Sample size calculation, design advising, data exploration, Theory of Change building, evaluation matrices, and an indicator bank. Zero data transmission. Works offline.
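To make the sample size tool concrete, here is a minimal sketch of the kind of calculation such a tool performs in the browser: the standard two-proportion comparison using the normal approximation. This is an illustration of the general method, not the toolkit's actual implementation; the function name and defaults are assumptions.

```typescript
// Per-arm sample size for detecting a difference between two proportions
// (normal approximation). Defaults correspond to a two-sided 5%
// significance level and 80% power. Not the toolkit's own code.
function sampleSizePerArm(
  p1: number,      // expected proportion in the comparison arm
  p2: number,      // expected proportion in the intervention arm
  zAlpha = 1.96,   // z for alpha = 0.05, two-sided
  zBeta = 0.8416   // z for 80% power
): number {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  const delta = p1 - p2;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (delta * delta));
}

// e.g. detecting a drop in coverage from 50% to 35%:
const n = sampleSizePerArm(0.5, 0.35); // 167 per arm
```

Because the arithmetic is pure JavaScript, it runs entirely client-side with no server round trip, which is what makes the offline guarantee possible.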
Open toolkit →

Workflow
PRAXIS 360
The integrated end-to-end evaluation workflow built on the .praxis file format. Theory of Change through to reporting, with every finding traceable back to the evaluation framework. Launching at Glocal 2026.
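The traceability claim can be pictured as a data-structure invariant: each finding points at an evaluation question, which points at a Theory of Change node. The .praxis format itself is not specified here, so every field name below is a hypothetical stand-in used only to illustrate the rule.

```typescript
// Hypothetical shape illustrating .praxis-style traceability.
// All interface and field names are assumptions, not the published format.
interface TocNode { id: string; label: string }
interface EvalQuestion { id: string; tocNodeId: string; text: string }
interface Finding { id: string; questionId: string; summary: string }

interface PraxisDoc {
  toc: TocNode[];
  matrix: EvalQuestion[];
  findings: Finding[];
}

// Returns any findings that cannot be traced back to the evaluation matrix,
// i.e. violations of the "every finding traceable" invariant.
function untraceableFindings(doc: PraxisDoc): Finding[] {
  const questionIds = new Set(doc.matrix.map(q => q.id));
  return doc.findings.filter(f => !questionIds.has(f.questionId));
}
```

A validator like this is cheap to run on every save, so a document can refuse to enter the reporting stage while orphaned findings remain.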
Learn more →

Live tools
Answer 10 context questions about your programme and get scored recommendations across 16 evaluation designs. The advisor accounts for data availability, ethical constraints, programme maturity, and operational context to suggest designs that will actually work in the field, not just in a textbook.
Get recommendations →
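The advisor's question-and-score approach can be sketched as follows. The designs, context questions, and weights below are invented for illustration and do not reproduce the advisor's actual scoring logic; the real tool covers 16 designs and 10 questions.

```typescript
// Illustrative scored-recommendation sketch. Design names, context fields,
// and weights are assumptions, not the advisor's real rubric.
interface Context {
  baselineData: boolean;        // pre-intervention data exists
  randomisationFeasible: boolean;
  matureProgramme: boolean;
}
interface Design { name: string; score: (c: Context) => number }

const designs: Design[] = [
  { name: "Randomised controlled trial",
    score: c => (c.randomisationFeasible ? 2 : -3) + (c.baselineData ? 1 : 0) },
  { name: "Difference-in-differences",
    score: c => (c.baselineData ? 2 : -2) + (c.matureProgramme ? 1 : 0) },
  { name: "Post-only qualitative comparison",
    score: c => 1 + (c.matureProgramme ? 1 : 0) }
];

// Ranks designs best-first for a given operating context.
function recommend(c: Context): string[] {
  return [...designs]
    .sort((a, b) => b.score(c) - a.score(c))
    .map(d => d.name);
}
```

Penalising infeasible designs rather than hiding them keeps the ranking honest: a design that scores poorly in a given context still appears, just lower down, so the user can see why it was deprioritised.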