Conduct Research and Engage With Scholars Exploring How Humans and Machines Understand Reality

Conducting research on how humans and machines understand reality demands a space where philosophers, cognitive scientists, and AI engineers actively compare theories, share evidence, and build tools together. Our initiative invites scholars to interrogate foundational questions—What is representation? How do perception, inference, and action compose a world-model?—alongside practical questions like how to test and falsify those models in silicon and in brains. We bridge classic debates in the philosophy of mind with formalizations such as the computational theory of mind and contemporary accounts of predictive processing. On the machine side, we map these ideas to architectures that learn and reason, then pressure-test them with interpretability and alignment methods. By convening cross-disciplinary working groups, sharing open protocols, and developing comparative benchmarks, we aim to produce research that is both philosophically rigorous and empirically grounded—capable of explaining phenomena in humans while guiding safer, more transparent AI systems.

Design a Comparative Research Agenda Connecting Brains and Models

An effective program starts with shared constructs—representation, attention, generative modeling, causality—and makes them operational across human studies and machine experiments. We encourage framing hypotheses that travel in both directions: from human perception (e.g., free-energy/predictive coding) to machine objectives, and from machine learning findings (e.g., scaling laws) back to cognitive constraints. Projects might test whether embodied cognition predicts robustness gains in embodied agents, or whether human causal learning aligns with structural models in causal inference. We support preregistered protocols, shared datasets, and replication tracks, and we welcome proposals that triangulate behavioral signals, neural data, and internal activations from AI systems. The goal is theory that survives contact with measurement—and improves both scientific explanation and engineering design.
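
To make one such bidirectional hypothesis concrete, here is a minimal sketch of predictive-coding inference in a toy linear generative model. Everything in it (the model form, weights, noise level, learning rate) is an illustrative assumption, not a protocol from the program; the point is that the prediction-error signal driving the update is the quantity a comparative study would seek analogues of in behavioral or neural data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model (assumed, for illustration): x = W v + noise.
W = rng.normal(size=(8, 2))                   # generative weights, known here
v_true = rng.normal(size=2)                   # latent cause to be inferred
x = W @ v_true + 0.05 * rng.normal(size=8)    # observed sensory input

# Predictive-coding inference: descend the squared prediction error
# ||x - W v_hat||^2 by propagating the error back through the weights.
v_hat = np.zeros(2)
lr = 0.05
for _ in range(200):
    error = x - W @ v_hat        # prediction error (sensed minus predicted)
    v_hat += lr * (W.T @ error)  # error-driven update of the latent belief

print("true latent   :", v_true)
print("inferred      :", v_hat)
print("residual error:", np.linalg.norm(x - W @ v_hat))
```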

Methods: Experiments, Simulations, and Open Benchmarks

Our methods stack integrates human experiments (psychophysics, judgment, language understanding), computational modeling (probabilistic programs, differentiable simulators), and AI evaluations (generalization, robustness, reasoning). We promote model-based accounts of perception and action, and we pair them with transparency tools such as circuits-level and other mechanistic interpretability methods that link internal components to functions. On safety and oversight, we study cooperative protocols like AI debate and audit frameworks inspired by concrete problems in AI safety. To ensure cumulative science, we adopt FAIR data practices (Findable, Accessible, Interoperable, Reusable) and run replication sprints that address the reproducibility crisis. Every study ships with open protocols, diagnostics, and a benchmark card specifying construct validity, external validity, and limitations.
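
The benchmark card itself can be a small structured record. The sketch below assumes a hypothetical schema (the field names are ours, not a published template) to show how construct validity, external validity, and limitations can travel with the benchmark rather than in a separate document.

```python
from dataclasses import dataclass, field

@dataclass
class BenchmarkCard:
    """Illustrative benchmark-card schema; field names are hypothetical."""
    name: str
    construct: str                  # what the benchmark claims to measure
    construct_validity: str         # evidence the tasks tap that construct
    external_validity: str          # settings the results should generalize to
    limitations: list[str] = field(default_factory=list)
    data_practices: str = "FAIR"    # Findable, Accessible, Interoperable, Reusable

card = BenchmarkCard(
    name="causal-transfer-v0",
    construct="causal generalization under intervention",
    construct_validity="items adapted from validated human causal-learning tasks",
    external_validity="evaluated on held-out task families, not just held-out items",
    limitations=["text-only stimuli", "English-language items only"],
)
print(card)
```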

Seminars, Reading Groups, and Collaborative Sprints

Engagement is continuous and structured. Our seminar series pairs classic texts with state-of-the-art results: phenomenological analyses of experience, for example, juxtaposed with empirical work on representation learning and abstraction. Reading groups rotate between cognitive science, neuroscience, and machine learning, while methods clinics teach tools for Bayesian modeling, causal discovery, and interpretability. Hack-sprints develop shared codebases and evaluation suites, translating conceptual claims into measurable predictions. We maintain an open repository of experiment templates and emphasize adversarial collaboration: teams argue competing hypotheses and agree in advance on decisive tests. Our visiting-scholar program hosts researchers who want to port lab protocols to AI settings or the reverse, embedding AI measurement into human studies. Across formats, the ethos is constructive skepticism: make claims crisp, test them publicly, and iterate.
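
As one concrete shape an agreed decisive test can take, the sketch below pits two hypothetical distributional claims about the same measurements against each other. The teams, hypotheses, and simulated data are all invented, and the maximum-likelihood comparison is a crude stand-in for a full preregistered Bayesian analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented adversarial collaboration: Team A predicts reaction times are
# log-normal, Team B predicts they are normal. Both agree in advance that
# the model with the higher total log-likelihood on fresh data wins.
data = rng.lognormal(mean=0.0, sigma=0.4, size=500)  # simulated reaction times

# Fit each team's model by maximum likelihood on the same data
# (two free parameters each, so the comparison is even-handed).
shape, loc, scale = stats.lognorm.fit(data, floc=0)
mu, sd = stats.norm.fit(data)

ll_a = stats.lognorm.logpdf(data, shape, loc, scale).sum()
ll_b = stats.norm.logpdf(data, mu, sd).sum()

print(f"Team A (log-normal): total log-likelihood = {ll_a:.1f}")
print(f"Team B (normal)    : total log-likelihood = {ll_b:.1f}")
print("decisive test favors:", "Team A" if ll_a > ll_b else "Team B")
```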

Impact: Ethics, Policy Translation, and Research Careers

Understanding reality is not purely academic; it informs how we deploy AI in education, health, and governance. We align our practices with the UNESCO Recommendation on the Ethics of AI, integrating risk assessment, human oversight, and transparency from project inception. Policy translation briefs distill findings for regulators and standards bodies, while our public-interest track co-designs evaluations with affected communities. For early-career researchers, we offer mentored fellowships, cross-lab residencies, and career workshops on grant writing, open science, and interdisciplinary publishing. We also run showcases where teams present preregistered plans, null results, and successful replications—because negative evidence moves fields forward. If you are ready to contribute theories, datasets, or evaluation tools, propose a study, join a working group, or present at an upcoming seminar. Together we can build explanations—and systems—that better match how humans and machines construct, contest, and share reality.
