Philosophy of Science → AI

I study knowledge.
Then I build it.

From the epistemology of Popper and Russell to artificial intelligence — building reasoning systems that distinguish justified belief from mere information, and know the boundaries of their own certainty.

๐Ÿ›๏ธ Founder, AiDome ๐Ÿ“œ 5 AI Patents (SIAE) ๐ŸŽ“ Harvard PhD Research ๐Ÿ‘จโ€๐Ÿซ Professor Emeritus

The Thread

At the University of Milan, I studied epistemology — the branch of philosophy that asks not what we know, but how we can know anything at all. From Popper's falsificationism to Russell's logical atomism, from Kuhn's paradigm shifts to Lakatos's research programmes: the rigorous analysis of how humans construct, validate, and revise scientific knowledge.

At Harvard, working alongside John Nash on game-theoretic equilibria and Gary Chamberlain on econometric inference, I saw these philosophical questions become computational ones. How do we reason under uncertainty? How do we update beliefs with new evidence? How do we distinguish signal from noise in complex systems?

This is why my AI systems are different. They're built on an epistemological foundation: every claim must be traceable to evidence, every inference must be auditable, and the system must know — and communicate — the limits of its own knowledge.

Most AI today hallucinates confidently — generating plausible-sounding answers with no grounding in verified knowledge. This isn't a bug to be fixed with better training. It's a fundamental architectural failure: systems built to predict the next token, not to reason about truth and justification.

PRISM takes a different approach. It creates Professional AI Twins grounded in domain-specific knowledge bases, with explicit reasoning chains, source citations, and epistemic boundaries. Not "AI that sounds smart." AI that knows what it knows — and tells you when it doesn't.
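As an illustrative sketch only (the `Claim` and `Answer` types and the 0.7 threshold are hypothetical, not PRISM's actual implementation), an answer built from claims that each carry provenance and an explicit confidence bound might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    sources: list          # provenance: every claim cites its evidence
    confidence: float      # explicit epistemic bound in [0, 1]

@dataclass
class Answer:
    claims: list = field(default_factory=list)

    def render(self, threshold: float = 0.7) -> str:
        """Emit only claims above the confidence threshold;
        below it, say so instead of guessing."""
        lines = []
        for c in self.claims:
            if c.confidence >= threshold:
                lines.append(f"{c.text} [{', '.join(c.sources)}] "
                             f"(confidence {c.confidence:.2f})")
            else:
                lines.append("I don't know: insufficient evidence for "
                             f"'{c.text}' (confidence {c.confidence:.2f})")
        return "\n".join(lines)

answer = Answer(claims=[
    Claim("GDPR Art. 30 requires a record of processing activities.",
          ["GDPR Art. 30(1)"], 0.95),
    Claim("This clause applies to sub-processors in third countries.",
          ["internal memo"], 0.40),
])
print(answer.render())
```

The point of the sketch: a low-confidence claim is surfaced as "I don't know" rather than asserted, which is the behavioral difference from next-token prediction.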

Intellectual Foundations

The philosophical and scientific traditions that inform my approach to AI architecture

🔬

Philosophy of Science

Popper's falsificationism, Kuhn's paradigms, Lakatos's research programmes. Understanding how scientific knowledge progresses — and fails — informs how AI systems should handle uncertainty and revision.

📐

Formal Epistemology

Bayesian reasoning, belief revision theory, epistemic logic. The mathematical frameworks for representing and updating knowledge under uncertainty — now implemented in reasoning engines.
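The Bayesian update at the heart of these frameworks is compact enough to show directly. A minimal sketch, with toy probabilities chosen purely for illustration:

```python
def bayes_update(prior: float, likelihood: float, marginal: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Toy example: hypothesis H with prior P(H) = 0.30,
# evidence E with P(E|H) = 0.80 and P(E|not H) = 0.10.
prior = 0.30
p_e = 0.80 * prior + 0.10 * (1 - prior)   # law of total probability: 0.31
posterior = bayes_update(prior, 0.80, p_e)
print(round(posterior, 3))  # 0.24 / 0.31 -> 0.774
```

Moderately surprising evidence moves the belief from 0.30 to roughly 0.77, and the same arithmetic runs in reverse when disconfirming evidence arrives; that is all "updating knowledge under uncertainty" means at the formal level.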

🧮

Econometric Inference

Causal identification, instrumental variables, structural estimation. Learning from Chamberlain and the Harvard tradition: how to extract reliable knowledge from observational data.
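As a hedged illustration of the instrumental-variables idea (the coefficients and noise levels below are invented for the demo), the simple Wald estimator cov(z, y) / cov(z, x) recovers a causal effect that a naive regression gets wrong when an unobserved confounder is present:

```python
import random

random.seed(0)
n = 10_000
beta_true = 2.0

z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)                 # instrument: moves x, not y directly
    u = random.gauss(0, 1)                  # unobserved confounder
    xi = 0.8 * zi + u + random.gauss(0, 0.5)
    yi = beta_true * xi + 1.5 * u + random.gauss(0, 0.5)  # u biases OLS upward
    z.append(zi); x.append(xi); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

beta_ols = cov(x, y) / cov(x, x)   # biased: u enters both x and y
beta_iv = cov(z, y) / cov(z, x)    # Wald/IV estimator: uses only z-driven variation
print(f"OLS {beta_ols:.2f}  IV {beta_iv:.2f}")
```

The OLS slope lands well above the true effect of 2.0 because the confounder inflates the x-y association, while the IV estimate stays close to it: the reliable-knowledge-from-observational-data problem in miniature.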

🎯

Game Theory

Strategic reasoning, equilibrium concepts, mechanism design. Nash's insights into multi-agent systems inform how AI Twins interact with human experts and institutional processes.

🔗

Knowledge Representation

Ontologies, semantic networks, description logics. The computational challenge of representing domain knowledge in ways that support genuine reasoning, not just retrieval.

⚖️

Bounded Rationality

Simon's satisficing, Kahneman's heuristics, ecological rationality. Building AI that works within real-world constraints, not idealized assumptions of perfect information.

What I Build

🧠

PRISM

Platform for Reasoning, Intelligence & Specialized Modeling. Professional AI Twins for regulated industries — healthcare, legal, finance, engineering. Your methodology, your knowledge base, your reasoning style, deployed entirely on your infrastructure. No data leakage. Full traceability. AI that cites its sources and knows when to say "I don't know."

Explore PRISM →
📜

5 Registered AI Patents

Intellectual property registered with SIAE covering generative architectures, knowledge representation systems, and reasoning engines. These include the Artificial Intelligence Platform, Artificial Intelligence Solutions, and Generative AI Framework. A sixth registration — covering PRISM's core methodology — is currently being filed.

SIAE Registered · 6th Pending

Published Research

Peer-reviewed contributions to natural language processing and machine learning

arXiv 2023

A Distribution-Based Threshold for Determining Sentence Similarity

Gioele Cadamuro & Marco Gruppo

A novel approach to semantic textual similarity using siamese neural networks to create distributions of distances between similar and dissimilar sentence pairs. The method derives a mathematically rigorous threshold for determining similarity — addressing the fundamental question: when can two sentences be considered semantically equivalent?

Natural Language Processing · Siamese Networks · Semantic Similarity · Transfer Learning
Read on arXiv →
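A minimal sketch of the general idea behind a distribution-based threshold (with simulated Gaussian distances standing in for a trained siamese network's outputs; the paper derives its threshold from the model's actual empirical distributions, not this toy scan):

```python
import random

random.seed(1)
# Toy stand-ins for the two empirical distance distributions a trained
# siamese network would produce for similar vs. dissimilar sentence pairs.
similar = [random.gauss(0.3, 0.10) for _ in range(500)]
dissimilar = [random.gauss(0.9, 0.15) for _ in range(500)]

def best_threshold(sim, dis):
    """Scan candidate thresholds; keep the one that misclassifies the
    fewest pairs (similar above it, or dissimilar at or below it)."""
    candidates = sorted(sim + dis)
    def errors(t):
        return sum(d > t for d in sim) + sum(d <= t for d in dis)
    return min(candidates, key=errors)

t = best_threshold(similar, dissimilar)
print(f"threshold = {t:.2f}")
```

Because the two distance distributions barely overlap, the scan lands near the point where their densities cross, which is what makes a principled yes/no answer to "are these sentences equivalent?" possible.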

The Path

🎓 Harvard PhD Research
🏛️ Milan Laurea cum Laude
📜 5 IP Works SIAE Registered
👨‍🏫 Professor Emeritus, ITS Rizzoli
🧠 15+ Years ML & Generative AI
📚 Published Research & Papers

Design Principles

"AI must know the limits of its knowledge."

Epistemic humility isn't a nice-to-have — it's foundational. Systems that confidently generate false information aren't intelligent; they're dangerous. Every PRISM output includes explicit confidence bounds and acknowledges uncertainty.

"Traceability is non-negotiable."

Every claim must cite its source. Every reasoning chain must be auditable. This isn't just for compliance — it's how professionals actually work. Knowledge without provenance is noise.

"Complexity requires structure, not simplification."

Real domains are multi-dimensional: regulations, precedents, constraints, exceptions. Good AI preserves and navigates this complexity rather than flattening it into misleading simplicity.

Let's Connect

Building AI for regulated industries? Interested in reasoning systems grounded in epistemological rigor? I'd welcome the conversation.

Direct contact:

marco@gruppomarco.net