Series: GUT Check - The Timothian Model: A Mechanical Grand Unification of Physics
The Nature of Ontology
Ontology as the Hidden Lever Behind Every Prediction
Logic dictates ontology.
Ontology drives prediction.
Prediction guides experiment and data.
Data validates and gives constraint.
Ontology is the model’s allowed reality: what exists, what can interact, and which degrees of freedom are permitted to do the work.
This issue formalizes ontology as a first-class step in scientific work, and it reorders and expands the categories of scientific error to reduce the likelihood of effort going down dead-end paths inconsistent with natural and Newtonian first principles. Ontology determines what we treat as “real” and what we believe the rules of reality are; it therefore shapes the predictions we make, the experiments we design, and how we interpret data as constraints on the underlying model.
Logic starts with basic principles, then uses deduction and induction to create new knowledge. It’s like constructing a building – you need a strong foundation before building floors.
Our evolution of logic begins simply enough from everyday observations. The famous “Newton’s Apple” falls from the tree, increasing in speed until stopped by the ground. The wind blows the leaves of that same tree. Skinny branches bend easily, the trunk not so much. Water on the ground is drawn up through the core of the tree and is shared by all the leaves. The leaves have complex patterns of veins, visible to the naked eye, that move water and minerals into cells. We can watch those cells under a microscope as they form and multiply in the presence of sunlight, and eventually run their course to death. The details of replication, life, and death are controlled by even smaller structures in our DNA that we can now see and even manipulate.
In modern-day science, at each layer of this logic we have objective evidence that these things are true without inferring anything. We can see the entire causal chain without ambiguity or the need for theory, speculation, or postulation. And critically, the entire chain is mechanically causal.
Our logic is further refined by taking mechanically causal evidence and comparing it to other similar evidence and seeing the interactions, patterns, and cumulative effects. Things float according to their relative density with a medium, like a raft between water and air. Capillary action draws blood into a pipette. Heating or cooling water causes the molecules to transition through solids, liquids, and gases.
From the observations, we generate a series of “rules” that we can apply in different situations. Surface tension applies to both tree veins and pipettes. Different molecules experience phase transitions at different temperatures. Specific combinations of DNA control hair color, blood type, and complex systems such as immune response and protein production.
These rules, collectively, become an Ontology. What exists? For each thing that exists, what are its properties? Critically, our ontologies only serve us long-term when they are based on mechanistic facts.
Every ontology that is missing mechanistic causality from start to finish is incomplete. We see something, we don’t yet know why it works that way, we don’t yet have the tools or logic to explain it mechanistically, and we err by giving our limited view “special status”: that this must be a “fundamental” force, something to be accepted and not questioned. Once upon a time, weather events were due to angry gods, the universe revolved around the Earth, and illness was due to miasma (bad air). Each of these was an ontology built on a limited perspective, and in scientific terms, missing degrees of freedom – mechanistic factors we weren’t yet taking into account.
Canonizing mechanistically incomplete ontologies teaches generation after generation that this weirdness we can’t explain must be accepted, which needlessly delays our advancement in understanding the natural operating system of our universe. Once incorporated into our textbooks, our training, our institutions, and our worldview, that dogma is doggedly hard to get out from under.
This issue has multiple objectives.
First and foremost, I seek to elevate the criticality of ontological errors as precursors for sending scientific exploration down unnecessarily long and misleading paths. The history of ontological errors we already agree have happened is our guide. I seek to place the importance of ontological errors firmly into our understanding of scientific endeavor, and to clearly articulate what we all know from history. Ontological errors frame exploration with prison bars instead of structural supports. The Earth is objectively not flat. The mass throughout the universe is not spinning around the Earth. Miasma had some truth in it, in that germs (bacteria, viruses, and fungi) may be more prevalent in the local environments that make us sick, but there is far more to disease than bad air.
Second, I seek to remap Error Types to formally incorporate ontological errors in with the historical error types to create an improved framework.
Third, I will walk through a perspective of how non-Newtonian ontologies came to pass, and what error violations led to those ontologies using this new framework.
Finally, I apply that same framework to the Timothian Model itself, examining it critically for comparison.
This document is thus structured into five parts, with the following progression:
Part One builds the epistemic case (“ontology errors are real science errors”).
Part Two formalizes the taxonomy and introduces the diagram suite.
Part Three demonstrates application against canonical physics examples.
Part Four makes the taxonomy operational (how you prevent errors).
Part Five shows how the Timothian Model’s evolution is an example of those guardrails in action.
By the end of this paper, you’ll be able to (1) spot ontology-level errors, (2) classify failure modes using Types 0–7, and (3) use the three diagrams as a repeatable audit protocol.
When people hear the word ontology, they often think of philosophy seminars, not physics. But ontology is simply this: what do you believe actually exists, and what do you believe does not?
Ontology is the model’s allowed reality: what exists, what can interact, and which degrees of freedom are permitted to do the work. Said more formally:
Ontology is the explicit inventory of what exists in the model (objects, medium, mechanisms) and the allowed degrees of freedom those entities may use to interact.
Do you think there is a medium between objects, or not?
Do you think forces require physical contact, or not?
Do you think time is a mechanical rate of change, or a thing that can bend and stretch?
Every experiment we run, every equation we write, sits on top of those background yes/no decisions. They are rarely stated out loud, but they quietly steer how we interpret data.
Three of the most influential experiments in modern physics—Michelson-Morley, Stern-Gerlach, and the double‑slit experiment—were not just measurements. They were forks in the ontological road. In each case, the data were sound; what went wrong was the story about reality that was built on top of them.
Einstein had to deal with the Michelson-Morley null result when he wrote his 1905 papers. The ether people were looking for was very specific:
perfectly uniform,
massless,
not really interacting with matter in any ordinary Newtonian way,
and yet somehow capable of carrying light.
In other words, a ghost medium that violated common sense and Newton’s own rules from the start.
Michelson-Morley’s null result did something modest and important:
It showed there was no detectable ether wind of that particular kind under the conditions of the experiment.
That’s all. The correct conclusion was:
“This specific, unphysical ether model is wrong.”
Instead, the result was ontologically over‑promoted to:
“There is no medium at all.”
From there, Einstein inherited an ontology in which space had to be empty and light had to propagate through nothing. To make sense of the data within that vacuum ontology, he did what any good physicist does: he followed cause and effect inside the allowed story. Curved spacetime, time dilation, and a universal speed limit for light “in vacuum” are all reasonable consequences—if you first accept the premise that there is no medium and that “nothing” can curve.
Later, when cesium clocks were flown on airplanes and found to tick at different rates, the data were real. Clocks really do disagree under different motions and heights. But because the ontology already said “time itself stretches,” these rate changes were treated as time dilation of an abstract time parameter, rather than rate modulation of mechanical processes embedded in a tensioned medium.
In the Timothian Model, the same facts are kept:
Michelson-Morley did not see the ether they were looking for.
Clocks do tick at different rates in different conditions.
What changes is the ontology underneath:
Space is not empty; it is filled with a mass‑bearing chunk medium.
Light is an oscillation in that medium.
Clock rates change because medium tension and backfill work change, not because time itself bends.
In that sense, Michelson-Morley was never a refutation of a Newtonian medium. It was a refutation of one specific, non‑Newtonian ether hypothesis. The leap from “this ether is wrong” to “no medium exists” was an ontology error.
Stern and Gerlach discovered something genuinely surprising when they sent silver atoms through a long, non‑uniform magnetic corridor. Instead of landing in a smooth smear on the detector, the atoms self‑sorted into two discrete groups—one deflected one way, one the opposite way.
The experiment is brilliant; the data are robust to this day. The ontology layered on top of it was more dramatic:
Magnetic fields were treated as static, abstract fields in a vacuum, with no moving stuff attached to them.
The discrete splitting was interpreted as a sign of intrinsic quantum spin, a new kind of two‑valued property with no mechanical analogue.
The field itself was declared a “fundamental force,” not requiring a Newtonian explanation.
In the Timothian ontology, there is no such thing as a static, ghostly field in nothing. A magnetic field is a real flow of chunk species through and around matter, with backfilling counterflows enforced by the no‑vacuum rule. A Stern-Gerlach apparatus is not bathing atoms in an invisible mathematical field; it is sending them through an asymmetric corridor in a flowing medium.
Within that corridor:
Some internal chunk configurations in the atoms are mechanically compatible with one bias of the flow and pressure map.
Others are compatible with the opposite bias.
Intermediate configurations are unstable and are driven by restoration forces into one branch or the other as they travel.
The binary splitting then says:
“Under these flows and tensions, there are only two robust mechanical end‑states that survive the trip.”
That’s a strong statement about mechanical stability in a flowing medium, not a command to invent acausal “spin” that points nowhere in particular until measured. Again, the ontology choice—fields in vacuum vs flows in a medium—determines how we talk about the same data.
The double‑slit experiment is the poster child for modern ontological excess. Run it with light or with electrons and you see:
With both slits open and no attempt to resolve which path was taken, the arrival pattern on the screen shows an interference pattern.
When you modify the setup to resolve “which slit” in a strong way, the pattern changes.
Those are the facts. What was bolted on top was a now‑familiar litany:
“Particles are also waves.”
“They do not have definite positions until observed.”
“Reality is fundamentally probabilistic.”
In a Timothian medium, you don’t need to claim that matter ceases to exist between source and screen. You simply remember:
There is a real, stratified, chunk medium between source and detector.
Disturbances in that medium (oscillations and flows) behave differently when they have multiple low‑resistance paths vs one.
The way you constrain those paths changes how the disturbance propagates and where it can deposit energy into the detector’s chunk structures.
In other words, the double‑slit data say:
“The propagation of real disturbances through a real medium depends sensitively on geometry and constraints, and discrete detection is a property of the detector’s structures.”
That is a perfectly Newtonian statement about waves and structures. The claim that “reality doesn’t exist until measured” is not a measurement; it is an ontological add‑on.
What do all of these stories have in common?
The experiments were clever.
The data were (and are) solid.
The equations that summarize those data often work extremely well in their domains.
The failures lie in the ontology—in deciding too quickly what is allowed to exist and what is forbidden.
A massless, non‑interacting ether failed a test → all media were declared dead.
A discrete pattern in a magnetic corridor → mechanical flows were ignored; “mystical spin” was enthroned.
Interference patterns in a transmissive environment → the medium was erased; particles were asked to be everywhere and nowhere at once.
The Timothian Model is, at heart, an ontology correction:
It restores a real, mass‑bearing chunk medium that obeys Newtonian mechanics at every scale.
It keeps the experimental facts and much of the useful mathematics.
It rewrites the stories about what exists so that gravity, magnetism, light, atomic structure, and even “quantum” effects all come from the same mechanical substrate instead of incompatible layers of curved spacetime, abstract fields, and fundamental probabilities.
Once the ontology is repaired, the mysteries begin to evaporate. Michelson-Morley, Stern-Gerlach, and the double‑slit are no longer invitations to abandon mechanics; they become case studies in how badly we can misinterpret data when we first decide that the universe must be built from nothing.
The history of astronomy is, in many ways, a history of ontology errors that worked “well enough” until someone finally dared to change the underlying story.
For centuries, the flat Earth and Earth‑centered universe were not fringe ideas; they were the consensus ontology. The data—sunrise and sunset, stars wheeling overhead, planets wandering in the sky—were real. But the story wrapped around those data was wrong. To keep that story alive, people invented epicycles on epicycles: elaborate geometric patches that made the predictions roughly match what the sky actually did.
The problem was not the observations. It was the ontology:
Earth was assumed to be fixed and central.
The heavens were assumed to be perfect circles on crystalline spheres.
Motion “up there” was assumed to be fundamentally different from motion “down here.”
Once the ontology shifted – round and spinning Earth, Sun‑centered system, same mechanics above and below – the math got simpler and the predictions got better. The same sky, the same data, but a different story underneath.
We are in a similar place today with gravity, orbits, and “mysterious” orbital behaviors.
In the conventional ontology, planets and moons are perpetually falling and mysteriously missing, their long‑term stability defended by careful initial conditions and abstract integrals of motion. Rings are “leftover debris” that just happens to hang together. Precession is handled piecemeal: classical perturbations here, relativistic corrections there, frame‑dragging over in another corner.
In the Timothian Model, the ontology is different from the ground up:
Space is not empty; it is a stratified, mass‑bearing chunk medium.
Gravity is the restoration push of that medium plus buoyancy in its stratification.
Orbits are buoyant paths—bodies finding and tracking their neutral‑buoyancy corridors in that stratified medium.
Planetary rings and the Orbital Protective Effect are not accidents; they are the natural by‑products of a rotating stratified medium that favors certain corridors and punishes excursions.
Precession is what happens when spinning, orbiting bodies have to negotiate that stratified, rotating medium over long timescales. The medium remembers the star’s rotation and the galaxy’s flows; orbits and spin axes slowly realign with that memory.
Once that ontology is accepted, several formerly “coincidental” or “fine‑tuned” facts stop looking miraculous:
Stable orbits are no longer balance‑on‑a‑knife‑edge miracles; they are the natural resting states of bodies floating in a stratified ocean of chunks (Archimedes in spherical coordinates; see the sketch after this list).
The Orbital Protective Effect—ring corridors and resonances that protect moons and shepherd debris—becomes the expected behavior of a medium that has preferred density and flow bands. Disturb material away from those bands and the medium’s restoration and collision mechanics tend to nudge it back or grind it away.
Axial and orbital precession become slow drifts toward medium‑preferred configurations, not mysterious “extra” corrections on top of otherwise complete theories.
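To make “Archimedes in spherical coordinates” concrete, here is a minimal sketch of the standard Archimedean balance read radially. The stratified medium density ρ_m(r) is the Timothian assumption; the remaining symbols (body density ρ_b, displaced volume V, local restoring acceleration g(r)) are illustrative labels, not terms taken from the model itself:

```latex
% Minimal sketch: the standard buoyancy balance, read radially.
% \rho_m(r): medium density at radius r (the Timothian stratification)
% \rho_b:    mean density of the orbiting body
% V:         displaced volume;  g(r): local restoring acceleration
F_{\text{net}}(r) = \bigl(\rho_m(r) - \rho_b\bigr)\, V\, g(r),
\qquad
F_{\text{net}}(r^{\ast}) = 0 \iff \rho_m(r^{\ast}) = \rho_b
```

On this reading, the neutral‑buoyancy corridor is the radius r* where the body’s density matches the local layer; excursions away from r* flip the sign of the net force and push the body back.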
The data were never the problem. Our ontology was.
This is where the broader teachable moment sits. The way physics has often proceeded for the last century looks like this, in practice:
Data first – Build a giant pile of measurements.
Fit a model – Find a mathematical structure that reproduces those numbers.
Declare the ontology – Read off “what reality must be like” from the math, even when it contradicts basic mechanics or common‑sense causality.
Retrofit the logic – Explain away paradoxes as “the universe is just weird.”
In other words: we have let data dictate ontology, and then tried to bolt logic on afterward.
The Timothian Model flips that sequence back to something more sane and more historically successful:
Logic dictates ontology.
Ontology drives prediction.
Prediction guides experiment and data.
Data validates and gives constraint.
Logic dictates ontology.
Start by insisting on a mechanically coherent universe: No action at a distance, no true vacuum, no massless “force carriers” waving in nothing, no special rules for the very small or very large. If you put a rock on a table or a planet in motion, something real must be pushing on something real, everywhere, at every scale.
Ontology drives prediction.
Given that mechanical ontology—chunks, medium, stratification, flows—work out what must happen:
Rocks and apples sink toward denser medium layers: gravity as buoyancy.
Planets and moons float in stratified corridors: stable orbits and ring structures as buoyant equilibria.
Spinning bodies entrain and shear the medium: gyroscopics, frame‑dragging, and precession.
Overstuffed seeds slowly relax: radioactive decay and half‑lives.
Extreme stratification kills light: horizons and black holes.
Prediction guides experiment; data validates and constrains.
Then—and only then—go to the experiments. Not to worship the current story, but to see where your mechanically grounded ontology is over‑ or under‑shooting:
Michelson-Morley constrains which kinds of medium drift are allowed, not whether a medium exists at all.
Stern-Gerlach tells you there is internal structure and rotational bias in atoms and the medium, not that nature is “intrinsically probabilistic.”
Stable rings, orbital resonances, and precession curves tell you about the detailed stratification and rotation of the medium around real bodies, not about occult epicycles in curved emptiness.
When you run the sequence in this order, the ontology becomes testable instead of being whatever the last equation seems to suggest. And because the ontology is mechanical and Newtonian at every scale, the tests are intelligible: you always know what is supposed to be pushing on what.
This pattern—ignoring the unseen medium and misreading the data—is not new.
We treated air as a kind of nothing until we had to explain sound, pressure, and flight. Only then did we accept that “empty” rooms are full of something that pushes, flows, and stratifies.
We treated disease as bad humors, curses, or spontaneous generation until the germ ontology finally clicked: tiny organisms, everywhere, doing mechanical things to our tissues.
We treated space as emptiness, then as curved nothing, and finally as quantum vacuum froth—anything but a real medium whose chunks can be packed, strained, and set into motion.
Each time, accepting the right ontology turned a zoo of special‑case rules into a small set of simple, reusable mechanics.
The Timothian Model argues that we are at that threshold again. If we grant that the primordial chunk soup never vanished—that it condensed into atoms and bodies, but also remained as a real medium that fills what we mis‑label “vacuum”—then:
Gravity, magnetism, light, thermodynamics, atomic structure, and black holes all become manifestations of one substrate behaving in different regimes.
Stable orbits, rings, and precession are no longer separate “phenomena” needing separate stories; they are different expressions of bodies negotiating buoyancy and drag in a rotating, stratified ocean of chunks.
Experiments like Michelson-Morley stop being death sentences for “any medium at all” and instead become detailed constraints on which medium stories are still allowed.
We did not get here by being bad at math. We got here by letting math without ontology outrun mechanics.
Rebuilding that chain—logic → ontology → prediction → data—doesn’t just rehabilitate flat‑Earth‑style mistakes as history lessons. It also gives us a path forward: a way to look at every puzzling dataset and ask, not “what equation will fit this,” but “what mechanical ontology would make this almost obvious?”
The Timothian Model treats error not just as noise in measurements but as misalignment anywhere in the reasoning chain:
Logic dictates ontology – coherence, causality, Newtonian mechanics.
Ontology drives prediction – which models, mediums, and degrees of freedom are even allowed on the table.
Prediction guides experiment and data – what we choose to measure, how we sample, and how we instrument.
Data validates and gives constraint – which ontologies and models survive and which are discarded.
Each error type below names a specific way this chain can break.
The formal Timothian error types are:
Type 0 – Ontology Space Neglect
Type 1 – Ontological Overreach
Type 2 – Wrong Question / Wrong Problem
Type 3 – Misinterpretation of a Correct Result
Type 4 – Evaluation & Replication Error
Type 5 – False Positive
Type 6 – False Negative
Type 7 – Measurement & Sampling Issues in Raw Data
I use Types 0–7 (Arabic numerals) to make room for a genuine Type 0 foundation. This error logic builds from the ground up. Lower numbers are deeper in logic and ontology; higher numbers are more superficial and easier to blame. They can be grouped as follows:
Types 0–1 – Foundational Ontology Errors
Types 2–4 – Question / Model / Interpretation Errors
Types 5–6 – Data-Decision Errors
Type 7 – Measurement & Sampling Issues
Figure 1 shows both the types and their groupings in context.
Figure 1 – Hierarchy of Experimental Error
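As a minimal sketch, the taxonomy and its groupings can be written down as a small data structure. The identifier names below are illustrative, not part of the model’s vocabulary:

```python
from enum import IntEnum

class TimothianError(IntEnum):
    """The eight Timothian error types; lower numbers sit deeper in the chain."""
    ONTOLOGY_SPACE_NEGLECT = 0   # Type 0
    ONTOLOGICAL_OVERREACH = 1    # Type 1
    WRONG_QUESTION = 2           # Type 2
    MISINTERPRETATION = 3        # Type 3
    EVALUATION_REPLICATION = 4   # Type 4
    FALSE_POSITIVE = 5           # Type 5
    FALSE_NEGATIVE = 6           # Type 6
    MEASUREMENT_SAMPLING = 7     # Type 7

# Groupings from the text: lower numbers are deeper in logic and ontology.
GROUPS = {
    "Foundational Ontology Errors": range(0, 2),               # Types 0-1
    "Question / Model / Interpretation Errors": range(2, 5),   # Types 2-4
    "Data-Decision Errors": range(5, 7),                       # Types 5-6
    "Measurement & Sampling Issues": range(7, 8),              # Type 7
}

def group_of(err: TimothianError) -> str:
    """Return the group label a given error type belongs to."""
    return next(name for name, rng in GROUPS.items() if err in rng)

assert group_of(TimothianError.MISINTERPRETATION) == "Question / Model / Interpretation Errors"
```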
Truncating the Universe of Possible Worlds
Definition
Type 0 error occurs when we artificially shrink the space of admissible ontologies before we even start building models. It is a failure in the “Logic dictates ontology” link: crucial candidate pictures of “what exists” and “what can interact with what” are never considered.
Characteristics:
Certain classes of models or mediums are ruled out by assumption, not by data.
Degrees of freedom such as “medium rotation,” “stratification,” “entrainment,” or “seed/medium interaction” are excluded a priori.
Later experiments are designed only to discriminate within a prematurely narrow ontological box.
Examples
Declaring that there is no physical medium for light or gravity after rejecting one specific ether model, thereby forbidding any chunk medium variant from even being considered.
Insisting that quantum phenomena must be described only by abstract wavefunctions in configuration space, excluding deterministic or medium-based ontologies as “unphysical” before they are tested.
Treating “curved spacetime” as the only acceptable ontology for gravity and never allowing a buoyant, stratified medium model into the candidate set.
In the Timothian view, Type 0 was the deepest mistake beneath both relativity and quantum orthodoxy: the ontology space was collapsed prematurely.
Claiming Too Much Reality from Too Little Evidence
Definition
Type 1 error occurs when we take a narrow, local experimental result and inflate it into a global ontological claim about the nature of reality. It is a failure in the “Data validates and gives constraint → Ontology” link.
Characteristics:
A result that legitimately falsifies or constrains one model is taken to falsify or constrain all models of that general kind.
Mathematical convenience or elegance is promoted to ontological truth.
Boundary conditions and domain of validity are quietly ignored.
Examples
From the Michelson-Morley null result for a particular ether model:
Correct conclusion: “this stationary, uniform ether is unsupported.”
Type 1 overreach: “there is no medium at all; space is empty.”
From successful GR tests (perihelion of Mercury, light bending):
Correct: “these data are consistent with this metric.”
Type 1 overreach: “spacetime curvature is the only real explanation; a medium is impossible.”
From quantum interference and Bell tests:
Correct: “our current particle picture is incomplete.”
Type 1 overreach: “reality itself is intrinsically probabilistic and nonlocal; no deeper mechanical substrate exists.”
Type 0 and Type 1 often arrive as a pair: ontology space is shrunk (Type 0), and the surviving story is promoted to total reality (Type 1).
Solving the Wrong Puzzle Precisely
Definition
Type 2 error arises when the formal question, hypothesis, or optimization target is misposed, even if everything downstream (experiments, statistics) is technically correct. It sits at the “Ontology → Prediction / Question & Model” link.
Characteristics:
We ask, “What geometry must spacetime have?” when the real question is, “What medium structure must exist?”
We optimize for an easily measured surrogate (a metric, a p-value, a curve fit) instead of the true scientific target (mechanism, medium behavior, causal structure).
We end up with beautiful mathematics answering a question that wasn’t the one we needed to ask.
Examples
Investing decades in perfecting dark-energy parametrizations of a metric instead of asking whether a changing chunk-medium stratification could account for the same data.
Designing quantum foundations entirely around “What is the probability of outcome X?” instead of “What physical processes in the medium produce these outcome frequencies?”
In statistics, optimizing predictive accuracy on a flawed label instead of asking whether the label reflects the phenomenon of interest.
Timothian framing: a Type 2 error is puzzle selection failure—you choose the wrong puzzle, then solve it brilliantly.
Right Numbers, Wrong Story
Definition
Type 3 error occurs when data and calculations are correct, but the physical story we tell about them is wrong. It bridges the “Data → Constraint” and “Constraint → Explanation” links.
Characteristics:
The experiment is well-designed and correctly executed.
The statistical decision (significant vs. not, direction of effect) is sound.
The narrative of what the result means misidentifies the underlying mechanism.
Examples
Correctly observing time-dilation effects on moving and high-altitude clocks, but narrating this as “time itself stretches and contracts” instead of “clock processes slow because they work harder against the medium’s tension and stratification.”
Correctly detecting Stern-Gerlach splitting, then interpreting it as mystical, non-mechanical “spin” rather than structured seed–medium coupling in a flowing field.
Correctly observing interference in a double-slit experiment, but concluding “particles don’t have definite paths until observed” rather than “our model of medium-borne wave propagation and detection thresholds is incomplete.”
Type 3 is the classic “good measurement, bad myth” error.
Misreading the Body of Evidence
Definition
Type 4 error occurs when the community’s process of evaluating, replicating, and canonizing results is flawed. It is a failure in the “Constraint → Shared Knowledge” layer: how we decide what becomes “settled.”
Characteristics:
Unequal weighting of flashy results vs. quiet corrections.
Poor replication practice; failure to test robustness across conditions.
Meta-analyses or reviews that mischaracterize what is actually supported.
Examples
Treating early, narrow tests of “no ether” as final, while ignoring or never running experiments that could distinguish between different kinds of media (stratified, entrained, rotating).
Allowing specific interpretations of quantum or relativistic results to harden into textbook dogma before alternative ontologies are even explored.
Publishing and teaching only the “geometry” or “probability” stories, systematically excluding mechanical medium-based readings from curricula.
Type 4 is what turns Types 0–3 from local missteps into structural bias in the field.
Seeing an Effect That Isn’t There
Definition
Type 5 error is the classic statistical false positive: concluding that an effect exists when, in fact, it does not. It lives squarely in the “Data → Decision” step.
Characteristics:
A true null is incorrectly rejected.
Driven by random variation, undiscovered Type 7 issues, or arbitrary thresholds.
Often appears as “discovery” that later evaporates under replication.
Examples
Claiming a new particle from a small bump in a spectrum that disappears in a larger dataset.
Announcing a novel medium effect or anomaly from an under-powered experiment with poorly controlled noise.
In applied domains, declaring a treatment effective based on a single small trial that fails to reproduce.
Timothian note: while important, Type 5 is not the deepest failure mode. It is one of the easiest to detect and correct when Type 4 processes are healthy.
Missing a Real Effect
Definition
Type 6 error is the classic statistical false negative: failing to detect a real effect. It is another “Data → Decision” error.
Characteristics:
A false null is not rejected.
Caused by low power, noisy data, insensitive instruments, or overly conservative thresholds.
Can hide real medium structure or dynamics that later appear when experiments are improved.
Examples
Early gravitational or medium-drag experiments whose sensitivity was inadequate to detect small but real effects, leading to premature “no effect” conclusions.
Failing to observe subtle refraction or dispersion changes in different chunk-medium stratifications because the apparatus resolution is too coarse.
In medicine or climate science, missing modest but real trends because of noisy measurements and insufficient sample sizes.
Type 6 is a blindness error: the world is doing something interesting, but our instrumentation or analysis fails to notice.
Broken Thermometer, Biased Sample
Definition
Type 7 error covers foundational data integrity problems: issues with how measurements are made and how samples are drawn, before any modeling or hypothesis testing. It sits at the very front of the chain: “World → Data.”
Characteristics:
Measurement error – miscalibration, drift, mis-coding, timing glitches, misaligned optics, wrong units, sensor nonlinearity.
Sampling error – non-representative samples, biased selection, missing critical strata, too small or skewed datasets.
These issues contaminate all downstream inferences, and can induce or mask Type 5 and 6 errors.
Examples
Interferometers or clocks with uncorrected systematic biases masquerading as “relativistic” or “quantum” effects.
Optical or particle detectors with dead-time or saturation issues that distort amplitude and timing distributions.
Observational samples of astrophysical phenomena taken only under specific conditions, then used as if they represented the full range of medium behaviors.
In the Timothian Model, Type 7 is the easiest to blame (“the instrument must be wrong”), but often the last to be carefully and systematically audited.
To relate this taxonomy to classical statistical terminology:
Type 5 (Timothian) ≈ Type I (false positive) in classical stats.
Type 6 (Timothian) ≈ Type II (false negative).
Types 2–4 correspond to various classical notions of “Type III/IV/V” (wrong question, misinterpretation, evaluation failure), but now cleanly separated.
Types 0–1 are uniquely Timothian: they make ontology itself a first-class, named source of error.
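A minimal lookup-table sketch of that correspondence follows; the specific Type III/IV/V assignments are one plausible alignment of the text’s loose mapping, not settled terminology:

```python
# Timothian type -> nearest classical statistical counterpart (approximate).
TIMOTHIAN_TO_CLASSICAL = {
    5: "classical Type I (false positive)",
    6: "classical Type II (false negative)",
    2: "classical 'Type III' (right answer, wrong question)",
    3: "classical 'Type IV'-like (correct result, wrong interpretation)",
    4: "classical 'Type V'-like (evaluation/replication failure)",
    0: None,  # uniquely Timothian: ontology space neglect
    1: None,  # uniquely Timothian: ontological overreach
}
```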
The whole point is to make it harder for physics to silently repeat Type 0 and Type 1 mistakes—and much easier to see where in the chain we’re actually stuck.
Figure 2 – Stage Error Pipeline
The chart in Figure 2 is a wiring diagram for the Timothian error taxonomy. The left column shows the stages of a scientific reasoning pipeline; the right column groups the error types 0–7 into four colored blocks. The colored, dashed connectors show where each kind of error typically enters the flow.
If the pyramid and nine-block diagrams tell you what the error types are and how deep they sit, this chart tells you when and where they tend to show up in real work.
Left Side: The Five Stages of the Pipeline
The left column walks top–to–bottom through a single run of the reasoning chain:
Stage 1 – Ontology
“What exists? Which models and mediums are allowed?”
This is where you decide whether the universe is built from chunks in a medium, abstract fields in a vacuum, curved spacetime, pure probability amplitudes, or something else. These are your deep starting assumptions.
Stage 2 – Question & Model
“Which problems, hypotheses, and models are posed?”
Here you translate your ontology into concrete questions: which variables matter, what counts as a ‘cause,’ what constitutes a meaningful alternative. You choose or build models that encode those questions.
Stage 3 – Experiment & Data
“How data are collected, measured, and sampled.”
At this stage you interact with the world: design apparatus, define sampling frames, choose instruments, and actually record numbers.
Stage 4 – Statistical Decision
“Tests, p-values, intervals, and detected effects.”
This is where you apply the machinery of inference: hypothesis tests, confidence intervals, likelihoods, effect sizes, model fits.
Stage 5 – Interpretation & Canonization
“How results are interpreted, replicated, and stored.”
Finally, you decide what the numbers mean physically, what gets written into papers and textbooks, and which stories become “the standard view.”
Every project in science walks this ladder, whether explicitly or not.
Right Side: Four Error Groups (Types 0–7)
The colored blocks on the right collect the eight Timothian error types into four functional groups:
Purple – Types 0–1: Ontology-Level Errors
Type 0 – Ontology Space Neglect
Type 1 – Ontological Overreach
Green – Types 2–4: Question / Model / Interpretation Errors
Type 2 – Wrong Question / Wrong Problem
Type 3 – Misinterpretation of a Correct Result
Type 4 – Evaluation & Replication Error
Blue – Types 5–6: Data-Decision Errors
Type 5 – False Positive
Type 6 – False Negative
Orange – Type 7: Measurement & Sampling Error
Measurement – Instrument, protocol, or coding problems
Sampling – Non-representative or undersized samples
The text in each colored box is a compact reminder of what that group does to you if you let it sneak in.
The Connectors: Who Talks to What
The dashed colored lines are the most important part of this diagram. They show which stage feeds which error block.
Purple connectors – Ontology ↔︎ Ontology-level Errors (Types 0–1)
From Stage 1 – Ontology to Types 0–1 – Ontology-Level Errors.
This is where you can:
Shrink the ontology space too early (Type 0 – Ontology Space Neglect), or
Promote one favored picture to “the” reality based on limited evidence (Type 1 – Ontological Overreach).
If the purple wiring is bad, everything downstream is built on a distorted world-picture.
Green connectors – Questions & Interpretation ↔︎ Types 2–4
Green lines appear from Stages 2, 3, and 5 into the green block:
From Stage 2 – Question & Model to Types 2–4 – Question / Model / Interpretation Errors:
This is where a misframed problem or bad model choice becomes Type 2 – Wrong Question / Wrong Problem.
From Stage 3 – Experiment & Data to the same green block:
How you plan to interpret results feeds into how you instrument and what you bother to record. This is where early interpretive bias sets you up for Type 3 – Misinterpretation of Correct Result later.
From Stage 5 – Interpretation & Canonization back into the green block:
This captures Type 3 and Type 4 – Evaluation & Replication Error: you can misread correct results, over-value or under-value specific findings, or let sloppy replication practice harden bad interpretations into “facts.”
Whenever you find yourself saying, “The data are fine, but the story might be off,” you’re somewhere along these green wires.
Blue connector – Decisions ↔︎ Types 5–6
The blue line runs from Stage 4 – Statistical Decision to the blue box (Types 5–6 – Data-Decision Errors):
This is the domain of false positives (Type 5) and false negatives (Type 6):
Choosing thresholds, test statistics, and model comparison rules.
Deciding whether an effect is “detected” or “not detected” based on noisy data.
If your main worry is, “Did we call an effect that isn’t real, or miss one that is?”, you’re debugging along the blue path.
Orange connectors – Data & Interpretation ↔︎ Type 7
The orange lines connect Stage 3 and Stage 5 to the orange block (Type 7 – Measurement & Sampling Error):
From Stage 3 – Experiment & Data:
This is the obvious one: instrument calibration, protocol details, coding errors, sampling frames, missing strata. All the stuff that happens before the numbers even reach your statistical machinery.
From Stage 5 – Interpretation & Canonization back into orange:
This captures the subtle feedback loop where interpretive decisions and publication bias influence what gets measured and reported in the first place. Over time, this can turn into structural Type 7 error: whole regions of data-space that are never sampled because everyone “knows” they don’t matter.
In Timothian terms, orange is the contact point with the chunk medium: if your measurement interface is flawed, you are mis-reading the medium itself.
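As a minimal sketch, the figure’s wiring can be captured in two tables plus a walk function. The stage names follow the description above; the connector sets paraphrase the colored dashed lines, and everything else is illustrative:

```python
# Stages of the reasoning pipeline (Figure 2, left column), in order.
STAGES = [
    "Ontology",
    "Question & Model",
    "Experiment & Data",
    "Statistical Decision",
    "Interpretation & Canonization",
]

# Dashed connectors: which error types can enter at each stage.
ERRORS_ENTERING = {
    "Ontology": {0, 1},                          # purple
    "Question & Model": {2, 3, 4},               # green
    "Experiment & Data": {2, 3, 4, 7},           # green + orange
    "Statistical Decision": {5, 6},              # blue
    "Interpretation & Canonization": {3, 4, 7},  # green + orange feedback
}

def audit_order():
    """Walk the stages top to bottom, yielding the error types to check at each."""
    for stage in STAGES:
        yield stage, sorted(ERRORS_ENTERING[stage])

for stage, types in audit_order():
    print(f"{stage}: check Types {types}")
```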
You can use this diagram as a checklist for debugging any piece of physics or empirical work:
Start on the left:
Ask yourself where you are in the process:
Are you arguing about what exists? → Stage 1.
Are you designing experiments or models? → Stages 2–3.
Are you arguing about statistics? → Stage 4.
Are you debating what the results “mean”? → Stage 5.
Follow the color that fits the symptom:
If the problem is about what is allowed to exist or how much is claimed, look at the purple box (Types 0–1).
If the problem feels like we’re asking the wrong thing or telling the wrong story, follow the green block (Types 2–4).
If the problem is false detection / non-detection, the blue block (Types 5–6) is where to look.
If the problem is raw data quality or representativeness, it’s in orange (Type 7).
Use it as a discipline tool:
When something doesn’t fit, resist the reflex to blame Type 7 first (“the instrument must be broken”) or Type 5/6 (“the p-value is wrong”).
Instead, consciously walk the stages in order and ask:
Did we commit Type 0 or 1 at Stage 1?
Did we commit Type 2 at Stage 2?
Are we setting ourselves up for Type 3 or 4 at Stages 3 and 5?
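A minimal sketch of that checklist as code: a routing table from symptom to colored block, plus the deepest-first discipline. The symptom strings paraphrase the figure and are illustrative:

```python
# Route a symptom to the colored block to inspect (paraphrasing Figure 2).
SYMPTOM_TO_BLOCK = {
    "claims about what exists or how much is claimed": ("purple", {0, 1}),
    "wrong question or wrong story":                   ("green",  {2, 3, 4}),
    "false detection / non-detection":                 ("blue",   {5, 6}),
    "raw data quality or representativeness":          ("orange", {7}),
}

def walk_deepest_first(suspected_types):
    """Discipline tool: consider the deepest (lowest-numbered) suspects first,
    resisting the reflex to blame the instrument (Type 7) or the p-value
    (Types 5-6) before checking ontology and framing."""
    return sorted(suspected_types)

print(walk_deepest_first({7, 5, 0, 3}))  # -> [0, 3, 5, 7]
```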
The central point this chart makes visually is:
Most serious failures start high in the chain (purple/green blocks), not low down (blue/orange blocks).
In other words, we are far more likely to be undone by bad ontological and interpretive wiring than by a miscalibrated instrument. The Timothian Model doesn’t ignore orange and blue errors—but it insists we stop pretending that’s where the deepest problems usually live.
The following chart in Figure 3 demonstrates where each error type typically enters the scientific pipeline.
Figure 3 – Nine-Block Error Map
The 3×3 grid “Where Each Error Type Enters the Scientific Pipeline” is a map of where different error modes tend to show up along the reasoning chain. It is not a new taxonomy; it is a visual index for the 0–7 types just defined.
Think of the grid as two coordinate systems laid on top of each other:
Columns tell you what kind of thing you are interacting with:
Measurement & Sampling – How you contact the world and collect raw data.
Data-Decision – How you turn data into yes/no, effect/no-effect, or parameter estimates.
Model & Ontology – How you frame the problem, build models, and decide what “exists.”
Rows tell you where you are in the workflow:
Ontology & World-Building – The assumptions and background picture you bring in before you do anything else.
Question & Model Design – The way you pose the problem, choose hypotheses, and design experiments.
Data, Tests, Evaluation – The actual measurements, statistical tests, and interpretation of results.
Each cell then answers the question:
If something goes wrong at this stage, and in this aspect of the pipeline, what kind of error is it likely to be?
That’s why some cells explicitly name Types 0–7, and others describe data issues or “no direct Type” but note how problems there feed into downstream types.
Left column – Measurement & Sampling
This column covers how the world is turned into numbers. The top, middle, and bottom cells remind you that measurement and sampling problems can creep in at:
The ontology/world-building level (R1C1: “Baseline Data Issues”):
You might never notice that your “clean” measurement is conceptually blind to an entire class of phenomena because your ontology does not recognize them.
The design level (R2C1: “Design-Linked Data Issues”):
Your sampling frame or inclusion criteria quietly bake in bias—who and what you choose to measure favors certain outcomes.
The operational level (R3C1: “Operational Data Issues”):
Drift, miscoding, missing data, uncorrected protocols. These are the classic “plumbing” problems.
These are the raw material for Type 7 errors (Measurement & Sampling Issues). They also feed into Type 5 and 6 (false positives/negatives) if you don’t catch them.
Middle column – Data-Decision
This column is about what you do to the numbers—hypothesis tests, thresholds, confidence intervals, effect sizes.
At the ontology/world-building row, the grid says “No direct Type 5–6,” but reminds you that bad baseline data can still trigger false positives/negatives downstream.
At the design row, you see Type 2 – Wrong Question: you can run perfect statistics on a perfectly wrong target.
At the data/tests row, you get Types 5–6 – Data-Decision explicitly:
Type 5 – False Positive
Type 6 – False Negative
So if you suspect a problem in your detection or non-detection of an effect, the middle column tells you where to look:
Is the question misframed (Type 2)?
Are you calling “signal” on noise or missing real signal (Types 5–6)?
Or is the apparent effect coming from Type 7 issues in the left column that have not been controlled?
Right column – Model & Ontology
This column is the heart of the Timothian critique: how models and ontologies are chosen, interpreted, and canonized.
Top row (R1C3): Types 0–1 – Ontology-Level
Type 0 – Ontology Space Neglect
Type 1 – Ontological Overreach
This is where you decide what kinds of worlds are even allowed, and how much you claim from limited data. This is where the “no medium” and “curved spacetime as the only story” errors sit.
Middle row (R2C3): Types 3–4 – Interpretation / Eval
Type 3 – Misinterpretation of correct results
Type 4 – Evaluation & replication errors
This is where you decide what the numbers mean, and how seriously to take them. Good data, wrong story; or good data, mishandled as a community.
Bottom row (R3C3): “Downstream Impact”
This cell shows how bad tests and data (left and middle columns) eventually feed back into future ontology errors (Types 0–1). If you canonize misinterpreted or poorly checked results, they harden into the next generation’s “obvious truths.”
Reading across a row answers: What can go wrong at this stage?
Row 1 – Ontology & World-Building
Measurement column: You may treat certain data channels as “baseline” without realizing they already embed ontological assumptions.
Data-Decision column: You rarely see pure Type 5–6 here, but early choices in how you think about data can bias everything later.
Model/Ontology column: The big ones—Type 0 and Type 1.
Row 2 – Question & Model Design
Measurement column: Your sampling frame and operational definitions are baked into the design.
Data-Decision column: Type 2 – Wrong Question sits here.
Model/Ontology column: Types 3–4 begin here, in how you plan to interpret what you will eventually see.
Row 3 – Data, Tests, Evaluation
Measurement column: Operational glitches, drift, miscoding.
Data-Decision column: Types 5–6 are most explicit here.
Model/Ontology column: The downstream impact section—this is where everything you do feeds back into “what the field thinks is true.”
The point of the nine-block is to give you a debugging map for any piece of science:
When something doesn’t add up, don’t jump straight to “the data are noisy” (Type 7) or “the p-value is wrong” (Type 5/6).
Instead, walk the grid from the bottom-right corner backwards:
Are we canonizing results too early (Type 4)?
Are we telling the wrong story about otherwise correct data (Type 3)?
Is the very question misframed (Type 2)?
Did we overreach or prematurely close ontology space (Types 1 and 0)?
Once you have checked the deep cells, then move to the middle and left columns:
Are our decisions (Type 5/6) consistent with the data we actually have?
Are there lurking measurement or sampling issues (Type 7) we haven’t surfaced?
Used this way, the nine-block is not just a diagram; it’s a stepwise diagnostic protocol for scientific reasoning in the Timothian Model:
Top-right: fix your world-picture first.
Middle row: fix your questions and interpretations.
Bottom-middle: fix your decision rules.
Left column: finally, fix your instruments and samples.
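As a minimal sketch, the nine-block can be encoded as a (row, column) table plus the backwards walk just described. The cell summaries paraphrase the grid, and the walk order is one reasonable reading of the protocol:

```python
ROWS = ["Ontology & World-Building", "Question & Model Design", "Data, Tests, Evaluation"]
COLS = ["Measurement & Sampling", "Data-Decision", "Model & Ontology"]

# (row, col) -> what lives in that cell (paraphrased from the grid).
NINE_BLOCK = {
    (0, 0): "baseline data issues (feeds Type 7)",
    (0, 1): "no direct Types 5-6; biases everything downstream",
    (0, 2): "Types 0-1: ontology-level errors",
    (1, 0): "design-linked data issues (feeds Type 7)",
    (1, 1): "Type 2: wrong question",
    (1, 2): "Types 3-4 begin here (planned interpretation)",
    (2, 0): "operational data issues (Type 7)",
    (2, 1): "Types 5-6: data-decision errors",
    (2, 2): "downstream impact (feeds future Types 0-1)",
}

# Diagnostic walk: deep cells first (right column, bottom to top, via the
# middle design cells), then the decision and measurement cells.
WALK = [(2, 2), (1, 2), (1, 1), (0, 2), (2, 1), (2, 0), (1, 0), (0, 0)]

for r, c in WALK:
    print(f"{ROWS[r]} x {COLS[c]}: {NINE_BLOCK[(r, c)]}")
```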
The central claim of The Nature of Ontology is that most of 20th-century physics lived in the middle and right columns of the top row—confidently committing Types 0 and 1—while treating the left column and bottom rows as afterthoughts. The nine-block makes that pattern visible, and gives future work a concrete tool to avoid repeating it.
The three diagrams this issue introduced are meant to be operational tools.
The pyramid reminds us that the deepest errors are ontological (Types 0–1), and that “fixing the measurement” does not rescue a broken ontology.
The pipeline chart reminds us that different stages create different error pressures; it also prevents a common failure pattern: using Stage 4 statistics to “prove” a Stage 1 ontology.
The nine-block is the debugging dashboard: it forces us to locate the failure mode (what stage? what aspect?) before we “explain” anything.
The intended workflow is:
Locate the stage: ontology / question / experiment / decision / interpretation.
Name the error type: 0–7.
Apply the appropriate fix:
For 0–1: expand or restrain ontology claims.
For 2–4: reframe the question, revise the interpretation, strengthen replication discipline.
For 5–6: adjust decision rules and power.
For 7: fix the instrument and sampling frame.
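A minimal sketch of that workflow as a dispatch table; the fix strings paraphrase the list above:

```python
# Error type -> the class of fix the workflow prescribes.
FIX_FOR = {
    0: "expand the ontology space; admit the excluded candidate worlds",
    1: "restrain ontology claims to what the data actually constrain",
    2: "reframe the question around the true target, not a surrogate",
    3: "keep the numbers, revise the physical story",
    4: "strengthen replication and evaluation discipline",
    5: "tighten decision rules against false positives",
    6: "increase power and sensitivity against false negatives",
    7: "fix the instrument, protocol, and sampling frame",
}

def prescribe(error_type: int) -> str:
    """Look up the prescribed class of fix for a named error type (0-7)."""
    return FIX_FOR[error_type]
```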
This prevents “argument by momentum,” where a community keeps refining downstream machinery while the upstream picture quietly stays wrong.
This paper’s role is to establish method and guardrails for the rest of the Timothian program.
From here forward, each domain paper (Space, Gravity, Time, Magnetism, Light, Stable Orbits, etc.) can be written with a consistent structure:
Ontology commitments used in this domain (what is assumed).
Predictions those commitments imply (what must follow mechanically).
What experiments and observations constrain those predictions (what the data say).
Where the mainstream story likely committed Types 0–4 (how we got stuck).
Which parts of the Timothian explanation are core vs scaffold (how to avoid Type 1 and Type 4).
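As a minimal sketch, that recurring structure can be held in a template each domain paper fills in; the field names are illustrative, not drawn from the papers themselves:

```python
from dataclasses import dataclass, field

@dataclass
class DomainPaperAudit:
    """Template for the consistent five-part structure described above."""
    domain: str                                                      # e.g. "Gravity", "Magnetism"
    ontology_commitments: list = field(default_factory=list)        # what is assumed
    predictions: list = field(default_factory=list)                 # what must follow mechanically
    data_constraints: list = field(default_factory=list)            # what the data say
    suspected_mainstream_errors: list = field(default_factory=list) # Types 0-4
    core_vs_scaffold: dict = field(default_factory=dict)            # guards Types 1 and 4
```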
Done this way, the Timothian Model becomes not only a unified ontology, but a unified discipline of reasoning—a method that makes it structurally harder to repeat the same ontological mistakes that produced the current fragmentation in physics.
That is the standard this taxonomy sets—not just for everyone else, but for the Timothian Model itself.
This part is intentionally not a physics textbook; it’s an error-taxonomy audit of how interpretations become canon.
When you look at Michelson-Morley, Stern-Gerlach, and the double-slit experiment through the Timothian lens, the striking thing is not that the experiments were wrong. The measurements were clever, careful, and repeatable. The mathematics that followed them often fits the data extremely well.
What went wrong was how the results were framed inside a particular ontology, and how quickly that framing was allowed to harden into “truth.”
In the taxonomy we’ve just built, those mistakes live largely in the Type 0–4 range:
Type 0 – Ontology Space Neglect
Entire classes of possible worlds were excluded before the game began.
Type 1 – Ontological Overreach
A few narrow results were promoted to sweeping statements about what exists.
Type 2 – Wrong Question / Wrong Problem
The formal questions were posed inside a truncated ontology rather than about the ontology itself.
Type 3 – Misinterpretation of a Correct Result
The right numbers were given the wrong physical story.
Type 4 – Evaluation & Replication Error
Those stories were then canonized and defended long after alternative interpretations were possible.
The point is not to blame Michelson, Morley, Einstein, Stern, Gerlach, or the early quantum pioneers. They were doing the best they could with the conceptual tools they had. The point is to show exactly where the reasoning chain bent, so we can avoid bending it the same way again.
Let’s walk those famous cases with that in mind.
The Michelson-Morley interferometer was designed to detect an “ether wind”: a drift between Earth and a hypothetical luminiferous ether that would change the speed of light along different arms of the apparatus. When the expected phase shifts failed to appear, the experiment produced a solid, repeatable null.
That null result is not the problem.
The problem was in the ontology locked in before and after the experiment.
Before the experiment – Type 0 (Ontology Space Neglect)
The ether being tested was already heavily constrained by prior assumptions:
It had to be a uniform, stationary, non-interacting medium.
It could not be significantly entrained by Earth’s motion.
It was not allowed to be stratified, rotating, or coupled to matter in a Newtonian way.
In other words, only a very narrow slice of possible mediums was on the table. A stratified, mass-bearing, partly entrained chunk medium—the kind the Timothian Model proposes—was never seriously considered.
That’s a textbook Type 0 error: the ontology space was truncated up front.
After the experiment – Type 1 (Ontological Overreach)
The correct local conclusion was:
“This class of ether models (stationary, uniform, non-entrained) is strongly constrained or ruled out at the tested scales.”
Instead, the result was over-generalized into:
“There is no medium at all. Space is empty. Light propagates in vacuum with no substrate.”
That’s Type 1: an ontological overreach. A null for one model was promoted into a null for an entire category of possibilities.
Downstream effects – Types 2–4
Once “no medium” was accepted as a given:
Type 2 – Wrong Question:
The question became, “How must spacetime geometry behave, given that there is no medium?” instead of, “What class of mediums are still allowed by this null result?”
Type 3 – Misinterpretation of Correct Result:
Later observations of clock-rate differences, redshifts, and light bending were interpreted as “time itself stretching” and “nothingness curving,” instead of mechanical consequences of a real medium with variable density and tension.
Type 4 – Evaluation & Replication Error:
The “no medium” interpretation was canonized. Alternative designs that could have probed entrained or stratified mediums were rarely pursued as serious competitors. Textbooks hardened the overreach into dogma.
In Timothian language: Michelson-Morley did excellent work ruling out a particular ether story, and we responded by killing the very idea of a medium. That cascade is exactly what the purple (Type 0–1) and green (Type 2–4) blocks in these diagrams are there to make visible.
The Stern-Gerlach experiment sends silver atoms through a non-uniform magnetic field and observes how they hit a detector screen. Instead of a continuous smudge, the beam splits into discrete spots. That result is robust and repeatable.
Again, the data are not the problem. The interpretation is.
Ontology at the time – again, Type 0
The prevailing ontology treated magnetic fields as static patterns in a vacuum, not as real flows in a medium. Matter was imagined as pointlike or cloudlike “particles” with abstract internal degrees of freedom.
A Timothian-style ontology—where a magnet creates real chunk flows and pressure gradients in a medium, and atoms have structured, extended seeds interacting with those flows—was not even in the candidate set. Another Type 0: ontology space neglect.
The leap to “spin” – Types 1 and 3
The core facts:
The beam splits into discrete branches.
The pattern is stable and reproducible.
The standard story:
Atoms possess an intrinsic, quantized “spin,” a two-valued internal variable with no classical analogue.
The magnetic field “measures” this spin and forces the atom into one of two allowed states.
From a Timothian standpoint:
That is a Type 3 misinterpretation—right result, wrong story. The discrete splitting tells you the system only supports a few mechanically stable configurations when seeds move through a particular flow and tension pattern in the medium.
There’s also a touch of Type 1 overreach: the observation of discrete outcomes is treated as proof that the underlying ontology must be non-mechanical and abstract, rather than as a clue about the structure of the medium and the seeds.
Lost questions – Type 2
Once “spin” was defined as a fundamental quantum degree of freedom, the question itself was frozen:
“How does spin behave?”
instead of
“What medium and seed structures could produce these two preferred branches?”
That’s another Type 2 move: the model space is locked in, and the question is asked only inside that model, not about alternative mechanical ontologies.
In the Timothian Model, Stern-Gerlach is not a mystery at all. It’s a simple example of structure-dependent sorting in a flowing medium, the kind of behavior you would expect whenever extended objects move through a structured flow field and only certain alignments are stable. The mystery comes from Type 0–3 errors layered on top of otherwise solid data.
The double-slit experiment shows an interference pattern when particles (electrons, photons, etc.) pass through two open slits and hit a detector. Change the setup so you gain strong “which-path” information, and the pattern changes.
The facts:
With both slits open and no strong which-path constraint, the arrival distribution on the screen shows interference fringes.
When you strongly constrain or measure the path, the fringes diminish or disappear.
Those facts are consistent with any model in which:
Something real propagates through multiple paths in a medium.
The constraints and detection rules affect how that propagation deposits energy.
Yet the orthodox story goes much further:
Particles “do not have definite positions or paths” until measured.
The system is in a “superposition” of all possible paths, and reality is fundamentally probabilistic.
The act of measurement “collapses” the wavefunction.
In Timothian terms, this is almost a pure Type 3 misinterpretation, supported by earlier Type 0–2 choices:
Type 0 – Ontology Space Neglect:
The idea of a real, mechanical medium between source and detector is excluded. The only allowed ontologies are “particle,” “wave,” or “wavefunction in abstract space.”
Type 2 – Wrong Question:
The question becomes, “How do we interpret the wavefunction?” rather than, “What are the medium flows and seed–medium interactions that produce these arrival patterns?”
Type 3 – Misinterpretation of Correct Result:
The interference pattern is real. The sensitivity to how we constrain the paths is real. But we read them as evidence that nothing has a determinate state between emission and detection, rather than as evidence that our model of the medium and detector thresholds is incomplete.
Once again, the experiment is excellent. The story is where the ontology drift happens.
If you line these examples up—Michelson-Morley, Stern-Gerlach, double-slit—you see the same pattern:
A clever, well-designed experiment produces clear, robust data.
The experiment is interpreted inside a narrow ontology that was accepted before anyone asked if it was complete.
Correct numerical results are mapped to overstated ontological claims:
“There is no medium.”
“Spin is an inherently non-mechanical property.”
“Particles do not have definite states until observed.”
In the language of the Timothian error taxonomy:
Type 0 (Ontology Space Neglect) set the stage: alternative mechanical, medium-based ontologies were never allowed to compete.
Type 1 (Ontological Overreach) then elevated specific mathematical descriptions into “what the world really is.”
Type 2 (Wrong Question) ensured that later work stayed within those assumptions.
Type 3 (Misinterpretation) wrapped the right numbers in the wrong story.
Type 4 (Evaluation & Replication Error) kept those stories entrenched.
The Timothian Model’s key move is to name those ontological mistakes explicitly and treat them as first-class errors, not as background philosophy.
With the taxonomy and the diagrams in place, it’s easier to see what you’re doing differently.
You keep the empirical constraints:
Michelson-Morley really did return a null result for the sort of ether its designers were looking for.
Stern-Gerlach really does produce discrete splitting.
Double-slit interference really is sensitive to which-path constraints.
But you rerun the reasoning chain with different starting rules:
Logic dictates ontology: no action at a distance, no true vacuum, Newtonian mechanics everywhere.
That blocks Type 0 upfront: a medium-free universe is not on the table without overwhelming evidence.
Ontology drives prediction: there is a real, mass-bearing chunk medium.
This medium can be stratified, entrained, rotated, and locally structured. Seeds (atoms, bodies) are extended structures in that medium. That gives you a rich ontology space to explore rather than a single “no medium” lane.
Prediction guides experiment and data: ask medium-aware questions.
For each classic experiment, you ask:
What medium flows and stratifications are implied?
What alternative medium configurations are still compatible with the data?
That guards against Type 2 (wrong question) and Type 7 (bad sampling/measurement of the medium itself).
Data validates and constrains the medium, not abstract geometry alone.
When a model fits, the question is:
What does this tell us about the medium’s properties?
not
What does this tell us about the curvature of nothingness or the collapse of probability amplitudes?
That’s how you avoid Type 1 and Type 3: you let data carve away medium candidates instead of metaphorically curving emptiness.
Evaluation and canonization stay humble.
Even when a medium model matches data, you treat it as a live hypothesis about chunk structure and flows, not as a final metaphysical truth. That’s how you keep Type 4 in check.
In the traditional story, ontology is treated as taste or philosophy: you are free to prefer “fields,” “particles,” “strings,” “wavefunctions,” or “spacetime curvature,” so long as you match the numbers.
In the Timothian story, ontology is a testable component of the model and a named source of error:
If you exclude a chunk medium without having tested chunk-medium candidates, that is not a neutral choice; it is a Type 0 error.
If you infer “no medium” or “broken reality” from a narrow class of experiments, that is not insight; it is a Type 1 overreach.
If you treat strange results as proof that reality is strange rather than that your ontology is incomplete, that is a Type 3 misinterpretation.
By making those errors explicit, the Timothian Model doesn’t just propose a new ontology. It proposes a better way to check ontologies against experiments, so that the next time we build a grand theory on top of a Michelson-Morley or a Stern-Gerlach, we can be confident we’re not silently repeating the same mistakes.
That is what these error diagrams—pyramid, pipeline, and nine-block—are doing in The Nature of Ontology: they are the guardrails that make the Timothian ontology not just new, but disciplined.
Part Four converts the taxonomy into operating procedure: a practical set of guardrails that keep Timothian claims constrained, testable, and resistant to premature canonization.
This section walks through the eight error types (Types 0–7) and shows how each category unearths different risks, requiring specific guardrails to prevent errors from propagating.
Type 0 – Timothian failure mode risk:
We prematurely collapse ontology space in the opposite direction: we treat “chunk medium + Newtonian mechanics everywhere” as so compelling that we fail to examine legitimate alternative ontologies, or we fail to explore meaningful variants within “medium space” itself.
Guardrails:
Maintain an explicit Ontology Space Registry: a short list of major candidate ontologies and why each is admitted or excluded on logical grounds, not on sociological grounds.
Inside “medium space,” maintain a visible list of degrees of freedom that are still open: stratification, entrainment strength, rotation, coupling rules, seed structure, boundary conditions, etc.
When an experiment is discussed, explicitly state which “medium variants” it constrains and which it does not.
This prevents the Timothian Model from committing the same sin it accuses others of: declaring that alternatives are “impossible” without actually exploring the space.
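To make the registry idea concrete, here is a minimal sketch in Python of what an Ontology Space Registry could look like. The candidate entries, field names, and the Michelson-Morley example are illustrative placeholders, not commitments of the model.

    from dataclasses import dataclass, field

    @dataclass
    class OntologyCandidate:
        name: str
        admitted: bool
        rationale: str                                      # logical grounds for admission or exclusion
        open_dofs: list[str] = field(default_factory=list)  # degrees of freedom still unconstrained

    registry = [
        OntologyCandidate(
            name="chunk medium + Newtonian mechanics everywhere",
            admitted=True,
            rationale="satisfies no-action-at-a-distance and no-vacuum constraints",
            open_dofs=["stratification", "entrainment strength", "rotation",
                       "coupling rules", "seed structure", "boundary conditions"],
        ),
        OntologyCandidate(
            name="medium-free curved spacetime",
            admitted=False,
            rationale="excluded on logical grounds: requires action through nothing",
        ),
    ]

    # For each experiment, record which medium variants it constrains (and, implicitly, which it does not).
    constrained_by = {
        "Michelson-Morley": ["static, non-entrained stratification"],  # illustrative entry
    }

    for candidate in registry:
        status = "admitted" if candidate.admitted else "excluded"
        print(f"{candidate.name}: {status} ({candidate.rationale})")

The point of the structure is that every exclusion carries a recorded logical rationale, and the still-open degrees of freedom remain visible rather than silently collapsing.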
Type 1 – Timothian failure mode risk:
We overclaim what the model proves—moving from “this medium-based ontology explains X, Y, Z” to “therefore all other ontologies are false,” or “therefore this specific medium structure is exactly true.”
Guardrails:
Separate statements into three clearly labeled classes:
Ontology commitments (what is assumed).
Derived consequences (what follows if those commitments are true).
Empirical constraints (what data currently supports or limits).
Avoid global language (“must be,” “only possible”) unless tied to explicit logic constraints.
Include a short “Domain of Validity (So Far)” and “Open Parameters” section in every major paper. (Note: this is future work, as this formalization of my ontological flow emerged late.)
Type 1 is the easiest way for a new paradigm to become a new dogma. Timothian must not repeat that pattern.
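As a sketch of the three-way labeling in code, with illustrative claim texts standing in for real entries:

    from enum import Enum

    class ClaimClass(Enum):
        ONTOLOGY_COMMITMENT = "what is assumed"
        DERIVED_CONSEQUENCE = "what follows if the commitments are true"
        EMPIRICAL_CONSTRAINT = "what data currently supports or limits"

    claims = [
        ("Space is a mass-bearing chunk medium.", ClaimClass.ONTOLOGY_COMMITMENT),
        ("Directed medium flows must be balanced by counterflows.", ClaimClass.DERIVED_CONSEQUENCE),
        ("Michelson-Morley rules out a static, non-entrained stratification.", ClaimClass.EMPIRICAL_CONSTRAINT),
    ]

    for text, kind in claims:
        print(f"[{kind.name}] {text}")

Forcing every statement through one of these three labels makes it immediately visible when a derived consequence is being passed off as an empirical constraint.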
Type 2 – Timothian failure mode risk:
We spend effort proving that old questions can be re-answered (“how does relativity fit?” “how does quantum fit?”) while neglecting the deeper Timothian questions that the medium ontology actually makes possible and testable.
Guardrails:
For each topic, state the Timothian-native question first—what the medium picture predicts and what it makes measurable.
Treat “compatibility mapping” to old paradigms as secondary: useful for translation, not the core objective.
When you find yourself doing a long detour into legacy framing, stop and ask:
Am I answering their question, or am I asking the question the medium makes inevitable?
Type 2 is how a new ontology gets trapped inside the old ontology’s “problem set.”
Type 3 – Timothian failure mode risk:
We correctly match a phenomenon but tell the wrong mechanistic story for why it happens—especially if multiple medium mechanisms could produce similar outward behavior.
Guardrails:
Whenever you claim “this mechanism explains that effect,” add one line:
“What would look different if the mechanism were wrong?”
Prefer predictions that are qualitatively discriminating, not only quantitatively matching.
Make explicit whether a claim is:
a direct deduction from first principles,
a plausible mechanism consistent with current constraints, or
a provisional narrative awaiting targeted tests.
Type 3 is the “right curve, wrong cause” trap. Timothian must treat mechanism as something to be distinguished, not merely asserted.
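A minimal sketch of how a mechanism claim could carry both a status label and its discriminator; the field names and the magnetism example are hypothetical illustrations, not the model’s actual ledger:

    from dataclasses import dataclass
    from enum import Enum

    class MechanismStatus(Enum):
        DIRECT_DEDUCTION = "direct deduction from first principles"
        PLAUSIBLE_MECHANISM = "plausible mechanism consistent with current constraints"
        PROVISIONAL_NARRATIVE = "provisional narrative awaiting targeted tests"

    @dataclass
    class MechanismClaim:
        effect: str
        mechanism: str
        status: MechanismStatus
        if_wrong: str  # "What would look different if the mechanism were wrong?"

    claim = MechanismClaim(
        effect="N-S attraction",
        mechanism="paired flow and counterflow with selective participation",
        status=MechanismStatus.PLAUSIBLE_MECHANISM,
        if_wrong="Asymmetries in attraction/repulsion would appear that paired flows cannot produce.",
    )
    print(f"{claim.effect}: {claim.mechanism} [{claim.status.value}]")
    print(f"Discriminator: {claim.if_wrong}")

The discriminator field is the whole point: a claim without a recorded answer to “what would look different?” is not yet a mechanism, only a story.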
Type 4 – Timothian failure mode risk:
We canonize too early: internally consistent stories come to be treated as “settled,” or critics are dismissed as “not understanding the ontology” rather than being used as adversarial test engines.
Guardrails:
Publish an explicit Falsification Menu: a list of observations that would force revision of key claims (not a rhetorical gesture—an actual list).
Invite replication and adversarial tests by naming the most sensitive “pressure points” where the model could fail.
Separate “core” from “scaffold”:
Core: logic constraints and ontology commitments.
Scaffold: working mechanisms, parameter guesses, and illustrative models that may change.
Type 4 is the sociological hardening of Type 0–3 into a permanent structure. Timothian should instead harden only what has earned it.
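A sketch of what a Falsification Menu with core/scaffold separation might look like; the entries here are illustrative, not the model’s actual menu:

    from dataclasses import dataclass

    @dataclass
    class FalsificationEntry:
        claim: str
        layer: str                 # "core" (logic/ontology commitments) or "scaffold" (working mechanisms)
        would_force_revision: str  # the observation that would force revision

    menu = [
        FalsificationEntry(
            claim="No action at a distance",
            layer="core",
            would_force_revision="A demonstrated interaction with no possible mediating structure.",
        ),
        FalsificationEntry(
            claim="Counterflow backfilling in magnetism",
            layer="scaffold",
            would_force_revision="A magnetic configuration whose forces no paired-flow arrangement can balance.",
        ),
    ]

    for entry in menu:
        print(f"({entry.layer}) {entry.claim} -> revise if: {entry.would_force_revision}")

Keeping the layer label on every entry makes it explicit which failures would merely reshape the scaffold and which would strike the core.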
Type 5 – Timothian failure mode risk:
We treat every anomaly or coincidence as evidence for the chunk medium; we “collect hits” and ignore misses.
Guardrails:
Treat “anomaly matches” as hypotheses, not confirmations, unless they are paired with a discriminating prediction that alternative ontologies do not naturally produce.
Do not use “fits” as proof when many mechanisms could fit.
Prefer constraint-building: what does the match tell us about medium parameters, and what does it rule out?
Type 5 is the temptation of discovery: seeing support where there is only ambiguity.
Type 6 – Timothian failure mode risk:
We dismiss data that doesn’t fit because it seems small, messy, or “probably a measurement issue,” when it may actually be a real constraint that points to missing degrees of freedom in the medium.
Guardrails:
When data conflicts, first ask:
Which parameter or degree of freedom in medium space could reconcile this without breaking the rest of the model?
Only after that do we ask whether Type 7 issues could explain it.
Maintain a “Known Tensions / Open Conflicts” list as an honest ledger.
Type 6 is how a paradigm protects itself from reality. Timothian should protect itself from pride instead.
Type 7 – Timothian failure mode risk:
We appeal to “measurement issues” too early to protect the ontology, or we build arguments on low-fidelity data without acknowledging the limitations of the measurement interface.
Guardrails:
Explicitly label whether a claim rests on:
high-fidelity measurement,
indirect inference, or
historically noisy evidence.
If a measurement is known to be coarse, treat conclusions as constraints on a wide region of medium space, not as precise parameter values.
Avoid the reflex to “blame the instrument” or “blame the model”; instead, name the uncertainty and keep it live.
Type 7 matters, but the Timothian discipline is to avoid using it as a shield against deeper errors.
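A compact sketch of the fidelity labeling; the labels mirror the list above, and the mapping function is a hypothetical illustration of the “coarse measurement means wide constraint” rule:

    from enum import Enum

    class EvidenceFidelity(Enum):
        HIGH_FIDELITY = "direct, high-fidelity measurement"
        INDIRECT = "indirect inference"
        NOISY = "historically noisy evidence"

    def constraint_width(fidelity: EvidenceFidelity) -> str:
        # Coarse measurements constrain a wide region of medium space, not a point value.
        if fidelity is EvidenceFidelity.HIGH_FIDELITY:
            return "narrow parameter region"
        return "wide region of medium space"

    print(constraint_width(EvidenceFidelity.NOISY))  # -> wide region of medium space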
A reader may reasonably ask: If ontology is so central, how did the Timothian ontology itself evolve without becoming dogma? The answer is that this model was not built by “choosing a metaphysical picture and defending it.” It was built by enforcing a small set of Newtonian constraints and repeatedly revising any mechanism that violated them.
From the beginning, the guiding commitments were simple:
No action at a distance (mechanical causality must exist).
No magical thinking (complex and emergent is allowed; unexplainable is not).
No-vacuum (space is not “nothing”; it must be a physical medium).
Newtonian mechanics at every scale (the same causal grammar, regardless of size).
These rules did not make the work easier. They made it slower—but they also made it harder to “solve” problems by renaming them.
The Risk Register Discipline
A recurring technique in the model’s development was what I eventually began thinking of as a risk register: a list of known unresolved tensions—places where an analogy or working mechanism seemed useful, but did not fully satisfy the ontology.
This mattered because it prevented the most common failure mode in theory-building: solving a local problem with a convenient story, and then forgetting that the convenience violated the deeper rules.
When a mechanism was helpful but not fully consistent, it was allowed to stand only as scaffolding, explicitly labeled as unresolved, and placed into the risk register until the missing degrees of freedom were found.
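A minimal sketch of what a risk register entry could look like, using the two examples described below as illustrative contents:

    from dataclasses import dataclass

    @dataclass
    class RiskEntry:
        mechanism: str        # the helpful-but-incomplete analogy or mechanism
        tension: str          # which ontology rule it strains, and how
        status: str           # "scaffolding" until mechanically closed, then "resolved"
        resolution: str = ""  # the missing degree of freedom, once found

    register = [
        RiskEntry(
            mechanism="foam-rubber compression analogy",
            tension="violates no-vacuum: foam compresses because it has voids",
            status="resolved",
            resolution="continuous stratifications that deform and re-balance without gaps",
        ),
        RiskEntry(
            mechanism="single-flow magnetic field",
            tension="one flow cannot produce N-N, S-S, and N-S behavior with the same rules",
            status="resolved",
            resolution="multiple participants plus continuity-required counterflow",
        ),
    ]

    for entry in register:
        print(f"{entry.mechanism} [{entry.status}]: {entry.tension}")

The discipline is in the status field: a mechanism stays “scaffolding” in the register, visibly unresolved, until its tension is mechanically closed.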
Example 1: The foam analogy—and the “no-vacuum” constraint
Early on, I formed a picture of the medium of space as something like foam rubber:
Imagine poking a hole into foam, stretching it, inserting a baseball, then releasing.
The ball displaces the foam: cells near the ball compress, cells farther away compress less, and the distortion falls off with distance.
At first, this felt like a “eureka”—because unlike a massless curvature story, it treated space as material, and it gave a mechanical intuition for inverse-square-like gradients.
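That intuition can be made quantitative with a back-of-envelope calculation. Assuming an idealized incompressible surrounding, the inserted ball’s volume must be accommodated by every spherical shell around it, so the radial displacement falls off as the inverse square of distance. A minimal sketch:

    import math

    def radial_displacement(V: float, r: float) -> float:
        """Small-displacement estimate: 4*pi*r^2 * delta(r) = V, so delta falls as 1/r^2."""
        return V / (4.0 * math.pi * r**2)

    V = 1.0  # inserted volume, arbitrary units
    for r in (1.0, 2.0, 4.0):
        print(f"r = {r}: displacement = {radial_displacement(V, r):.4f}")
    # Doubling r quarters the displacement: the inverse-square signature the foam picture suggests.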
But the risk register immediately lit up: I had violated my own no-vacuum rule.
Foam “compresses” because it has voids to compress into. If space truly contains no vacuum, then a compressibility story is incomplete unless the medium has a way to reorganize without creating empty gaps, and unless that reorganization is mechanically consistent across scale.
That tension sat unresolved for a long time, not because the analogy was useless, but because the model demanded a deeper answer:
What, exactly, is “compressing”?
How does displacement propagate without gaps?
How can the medium maintain continuity at every scale?
What ultimately resolved this was not abandoning the no-vacuum rule, but taking it seriously enough to require a medium composed of continuous stratifications—with varying density and structure—so that the medium can deform, pack, and re-balance without implying “empty space” as the hidden escape hatch.
Example 2: Magnetism—“fields of what?” and the necessity of counterflow
A similar arc occurred in magnetism.
From a Newtonian standpoint, the claim “a magnetic field exists” immediately raises the questions: a field of what? Powered by what? Acting through what mechanism?
Early in the work I treated fields as flows of the medium that were rectified or organized by a magnet. That was an improvement over treating fields as causeless primitives—but it still didn’t close mechanically:
A single flow picture struggled to explain N–N repulsion, S–S repulsion, and N–S attraction using the same Newtonian rules.
Again, the risk register caught it: a one-flow field was an appealing simplification, but it wasn’t yet a complete mechanism.
The breakthrough came from allowing a degree of freedom that is fully compatible with the ontology: the medium can contain multiple “participants” of different sizes/masses/densities. That one addition—without invoking anything non-Newtonian—opened the model dramatically:
Magnetic structures do not necessarily rectify all medium participants equally.
A magnet can organize some components into a directed flow, while other components remain less organized—or respond differently.
Then the no-vacuum rule does what it always does: it forces you to ask, what fills the space left behind?
If some participants are being preferentially directed or “pulled” into a structured pathway, continuity demands backfilling—which means a counterflow.
Once counterflow is recognized as required by the ontology (not optional), the three pairing behaviors stop being mysterious: you no longer try to force all magnetic behaviors through a single unidirectional “field,” but through paired flows (and selective participation) that can produce attraction and repulsion in a mechanically symmetric way.
This is a pattern you’ll see repeatedly in the Timothian Model: the medium cannot be allowed to “cheat.” If something moves, something else must move to keep continuity. That one principle eliminates whole classes of hand-waving.
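A toy continuity check makes the requirement explicit. Assuming a closed, void-free region at steady state, the net volume flux through any cross-section must vanish, so a directed flow of one participant forces a counterflow; the numbers here are arbitrary illustrations:

    def required_counterflow(directed_flux: float) -> float:
        """Continuity in a void-free region: flux_out + flux_back = 0 at steady state."""
        return -directed_flux

    rectified_flow = 3.0  # arbitrary units: participants the magnet organizes into a directed flow
    backfill = required_counterflow(rectified_flow)
    assert rectified_flow + backfill == 0.0  # no empty space is left behind
    print(f"directed flow: {rectified_flow}, required counterflow: {backfill}")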
Example 3: Gravity and packing—fractal gaps, shape, and elasticity
Gravity carried a deeper version of the same continuity problem:
If the medium is real and continuous, then as you consider smaller and smaller scales, you face an apparent “fractal packing” question:
How can the medium fill gaps between increasingly smaller constituents without eventually violating the no-vacuum rule?
Does the rule break at some depth, or is there a legitimate way to keep continuity?
For years, I held an explicit open parameter: medium participants vary in mass and size and density—and possibly in shape, because Newtonian constraints did not require uniform shape, and nothing in the observed effects demanded it either. So “shape variation” remained an admitted possibility rather than a forced claim.
Then a new degree of freedom emerged that solved multiple problems at once: elasticity.
Elasticity is powerful in this framework because it:
Adds mechanical degrees of freedom that can support wave-like behaviors without requiring mystical primitives,
Provides a way to maintain tight packing without empty gaps (by allowing deformable, load-bearing structures),
And unifies what would otherwise look like separate rule-sets for gravity, electromagnetic propagation, and the stability of structures across scale.
In hindsight, elasticity didn’t “patch” the model; it clarified what the ontology was asking for all along: a medium that is not only continuous, but mechanically rich enough to carry gradients, flows, backfilling, and stable configurations without invoking emptiness as an unspoken assumption.
What this development process is designed to prevent
This story is not just autobiography; it is a demonstration of the method this paper argues for.
The risk register + guardrail approach is specifically designed to prevent the Timothian Model from committing the very errors it critiques:
It prevents Type 0 (Ontology Space Neglect) by keeping open degrees of freedom (participant types, shape, elasticity) until data and logic constrain them.
It prevents Type 1 (Ontological Overreach) by treating analogies as scaffolding until they are mechanically closed.
It prevents Type 2 (Wrong Question) by forcing each concept to be grounded in “fields of what?” and “powered by what?” instead of accepting inherited abstractions.
It prevents Type 3 (Misinterpretation of Correct Results) by refusing to let a good fit become an explanation unless the mechanism is Newtonian end-to-end.
And it helps avoid Type 4 (premature canonization) by institutionalizing unresolved tensions rather than hiding them.
In short: the Timothian Model was not built by assuming a story and defending it. It was built by insisting that the story must obey its own rules—especially the no-vacuum rule—and then returning, repeatedly, to any place where the mechanism was not yet complete until a mechanically consistent resolution appeared.
This is what Type 0/1/2 discipline looks like in practice: you shelve mechanisms that violate ontology, expand degrees of freedom only when warranted, and re-run downstream predictions.