In the abstract of Prof. Primiero's lecture "Computational Hypotheses and Computational Experiments" (Warsaw University of Technology, November 19, 2019) I found the following thought-provoking claim; let it be called PC (Primiero's Claim).
PC: The analogy between the scientific method and the problem-solving process underlying computing still is a tempting proposition.
This temptation is strong indeed, and I am one of those who succumbed to it. This is due to my vivid interest in Turing's [1939] idea of the oracle, discussed in Section 4 of my paper "The progress of science from a computational point of view: the drive towards ever higher solvability", Foundations of Computing and Decision Sciences, 2019 (Vol. 44, No. 1).
What is said there seems to provide some premises for discussing the analogy claimed by PC: namely, that similar rules govern the problem-solving processes (and hence the progress of science) in the empirical sciences and in what Prof. Primiero calls the computational sciences. It seems to me (a point for discussion) that the latter term can be conceived as equivalent to what is called "decision sciences" (see the quoted journal's title), i.e., theories of decidability (which may also be called computability).
A crucial claim of Turing's (1939) concerning decidability in mathematics (see the quotations in Marciszewski 2019), which continues his revolutionary result of 1936/37 (on the existence of uncomputable numbers), is the following. When a problem is undecidable at a given evolutionary stage of an axiomatized and formalized theory, it may prove solvable once appropriate new axioms are invented. Those, in turn, are due to a creative forming of the concepts involved in the axioms, expressed by the primitive terms of the theory in question.
Such an addition of axioms and of the concepts they involve, to be briefly called conceptualization, is somehow motivated by mathematical intuition and does not enjoy the merit of infallibility. These additions have to be checked as guesses which may fail, as happens with presages; hence their metaphorical name "oracles", as suggested by Turing. In this respect they resemble empirical hypotheses, which are in need of testing. While in empirical theories hypotheses are tested with experiments, in computational science they are tested by their efficiency in producing right algorithms, and those, in turn, by their ability to be transformed into effective programs.
Thus the axioms of Boolean algebra yield, e.g., various algorithms for deciding the validity of propositional formulas, and those, in turn, can be used to construct programs for automated theorem proving (the so-called provers).
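To make the first half of this remark concrete, here is a minimal sketch (my own, not tied to any particular prover) of the kind of algorithm the Boolean axioms license: a brute-force truth-table test of propositional validity. The helper names (`is_tautology`, `imp`, `peirce`) are illustrative only.

```python
from itertools import product

def is_tautology(formula, variables):
    """Check propositional validity by brute force: the formula is
    valid iff it comes out true under every truth-value assignment."""
    for values in product([True, False], repeat=len(variables)):
        if not formula(dict(zip(variables, values))):
            return False  # a falsifying row: not a tautology
    return True

imp = lambda a, b: (not a) or b  # material implication

# Peirce's law ((p -> q) -> p) -> p, a classical validity:
peirce = lambda v: imp(imp(imp(v["p"], v["q"]), v["p"]), v["p"])
print(is_tautology(peirce, ["p", "q"]))  # True
```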
Very interesting examples of testing such guesses can be found in the evolution of arithmetic. This, however, is a new subject, to be discussed in a forthcoming post. And in yet another post one should consider the thought-provoking analogies in problem solving between mathematics and the empirical sciences, e.g., astronomy.
The remark by Prof. Marciszewski on what he calls PC is quite to the point: the analogy between the scientific method and the computational sciences (or decision sciences, if we restrict ourselves to their mathematical interpretation and leave aside for the time being implementation issues, which is a safe move in this context) goes back to the very beginning of the theoretical foundations of Computer Science.
"Oracles" in Turing's sense (as mentioned by Prof. Marciszewski) are one of the possible lenses through which to consider this problem. And the proposition that they resemble empirical hypotheses (in the sense of hypotheses concerning an empirical system) seems to me entirely sensible.
Let me provide an additional element to this discussion. In my analysis of Computational Hypotheses (as presented in the lecture at WUT, and explored at length in Chapter 13 of my recent On the Foundations of Computing, https://global.oup.com/academic/product/on-the-foundations-of-computing-9780198835646?lang=en&cc=it), I have suggested that these hypotheses are only one type of Declarative Hypotheses as used in the computational sciences. In particular, when thought of as axioms of a theory, such a declarative hypothesis has the nature of a Black Box style hypothesis: the claim that a certain Input/Output relation is valid in the domain of interest. Note that there is no claim of knowledge about the structure of that relation (which would be what I call a White Box Style Declarative Computational Hypothesis), and that the process of testing such a hypothesis is inversely proportional, in terms of informational content, to the process of finding one White Box Algorithm that falsifies it.
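As a toy illustration of this Black Box reading (the naming and the example system below are assumptions made for the sketch, not taken from the lecture): the hypothesis asserts only an Input/Output relation, and testing probes the box with no access to its internal structure.

```python
import random

def black_box(x):
    """A stand-in for the system under study; its internal structure
    is deliberately treated as unknown (hypothetical example)."""
    return x * x

def io_hypothesis(x, y):
    """Black Box style declarative hypothesis: only an Input/Output
    relation is claimed to be valid in the domain of interest."""
    return y >= x  # e.g. 'the output never falls below the input'

# Testing probes the box from outside; one counterexample falsifies it.
for _ in range(1000):
    x = random.randint(0, 100)
    assert io_hypothesis(x, black_box(x)), f"falsified at input {x}"
print("hypothesis survived 1000 tests")
```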
The interplay between the development of a proper methodological approach to the computational sciences and issues of explanation, clarification, and justification is a large area that requires further effort, especially when considered in the light of problems arising from the field of Data Science and its associated methodologies.
I have been wondering about the issue of axioms in formalised theories, such as Boolean algebra or the arithmetic of natural numbers…
At the very outset it is worth noting that the axiomatization of a theory makes it a mature computational theory/science. Having axioms and appropriate rules of inference, we can compute the various theorems of this theory or, to put it slightly differently, verify certain hypotheses that are hypothetical theorems. Computation is here the same as constructing a proof, i.e., the "chain" of propositions connecting the axioms with the proven theorem. The question for discussion is to what extent the efficiency of proving, including the length of the relevant proofs, depends on the (perhaps noncomputational) activity of the mind, which creates new concepts that simplify proving.
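A minimal sketch of this identification of computation with proof construction, in a deliberately toy setting (atomic propositions and a single rule of modus ponens; all names are illustrative):

```python
def theorems(axioms, implications):
    """Compute the theorems of a toy theory: close the set of axioms
    under modus ponens, so each theorem sits at the end of a 'chain'
    of propositions leading back to the axioms."""
    proved = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in proved and conclusion not in proved:
                proved.add(conclusion)
                changed = True
    return proved

# Hypothetical theorem 'r': verified iff the computation reaches it.
print("r" in theorems({"p"}, [("p", "q"), ("q", "r")]))  # True
```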
As for the verifiability of axioms, I agree with Prof. Marciszewski that they can be verified computationally, not only with respect to the consistency of the set of theorems obtained with their use, but also with respect to the effectiveness of proving or solving problems expressed in the language of a given theory. On this approach, axioms providing greater efficiency would be the more desirable ones.
However, I do not fully understand why hypotheses expressed by means of axioms (and verified further as suggested by Prof. Marciszewski) are Black Box hypotheses. In fact, the essence of an axiom is that it (strictly) specifies what has been captured intuitively by someone. To put it another way: an axiom is not opaque like a black box or even a grey box. An axiom is supposed to be evident, and as such it cannot be opaque. Moreover, a set of axioms structures the knowledge in a given field well. For example: the arithmetic axioms precisely define the structure of the set of natural numbers (e.g., which number follows which) and the structure of the operation/relation of addition (what the sum of any two numbers is).
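For instance, the two recursion axioms for addition can be transcribed almost literally into a program, which shows how precisely they fix what the sum of any two numbers is (a toy sketch; representing naturals as Python integers is a simplification):

```python
def successor(n):
    """Axiom: every natural number has a successor."""
    return n + 1

def add(m, n):
    """Addition as fixed by the two recursion axioms:
         m + 0    = m
         m + S(n) = S(m + n)
    which is exactly why the sum of any two numbers is computable."""
    if n == 0:
        return m                      # m + 0 = m
    return successor(add(m, n - 1))   # m + S(n) = S(m + n)

print(add(2, 3))  # 5
```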
If my above doubts are an expression of a misunderstanding of what a Black Box hypothesis is, I am very sorry.
But maybe there is something important in my doubts…
"The role that Hilbert envisioned for formalism in mathematics is best served by computer programming languages, which are in fact formalisms that can be mechanically interpreted — but they are formalisms for computing and calculating." — Gregory Chaitin
"Computation is here the same as constructing a proof, i.e., the 'chain' of propositions connecting the axioms with the proven theorem." — Paweł Stacewicz
I like the quoted claim of Chaitin's. This is why I think that in the genealogical tree of computer science, which includes von Neumann, Turing and Gödel, at the very top we have David Hilbert. And this is why I like Dr Stacewicz's comment as well.
The way from a mathematical thought to a computer's printout of a conclusion or the result of a computation leads through conceptualization (a new idea), the axiomatization of concepts, formalization, arithmetization, and physicalization (mechanical interpretation, in Chaitin's wording). The last stage means the rendering of a sequence of zeros and ones as a sequence of physical impulses.
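A toy illustration of the arithmetization stage (the encoding scheme below is my own drastic simplification of Gödel numbering, chosen only for brevity):

```python
def arithmetize(formula, alphabet="()->&|~pqrs"):
    """Arithmetization in miniature: read the symbols of a formula as
    digits in a fixed base, so the whole formula becomes one natural
    number (a drastic simplification of Goedel numbering)."""
    base = len(alphabet) + 1
    code = 0
    for symbol in formula:
        code = code * base + alphabet.index(symbol) + 1
    return code

code = arithmetize("(p->q)")
print(code)       # the formula as a single natural number
print(bin(code))  # the same number as zeros and ones, the input to
                  # physicalization as a sequence of impulses
```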
This is how I, as a logician, see the process. Not being a computer scientist, I cannot be sure to what extent the process of programming involves the listed stages. I would enjoy learning this from colleagues in a further discussion.
In the procedure described above I do not see any place for a black box. But certainly I am missing some point in Prof. Primiero's discourse. I hope that the above very short presentation of a logical point of view may help in hinting at the difference between this viewpoint and that of a computer scientist.
Let me clarify my point: why, and if so in which way, can the addition of axioms amount to the formulation of a Black Box hypothesis?
The formulation of an axiom does not amount *as such* to a Black Box hypothesis. But it allows one to generate many such hypotheses. The extension of a theory with a new axiom (and possibly some set of rules, if necessary) determines an extension of the domain of truths, or theorems. From a computational point of view, a new hypothesis which takes an instance of a (new) axiom schema as the input of the computational process and a possibly true statement as its output is to be considered a Black Box until the actual computation (formally equivalent to proof steps) is executed and the derivation is made transparent (and thereby proven correct).
In this, the process is equivalent from both a logical and computational viewpoint.
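A sketch of this transition from Black Box to White Box, in the same toy setting as the earlier derivation example (atomic propositions, a single rule; all names illustrative): the claim of theoremhood remains a Black Box until the computation is executed and returns the derivation itself.

```python
def derivation(axioms, implications, goal):
    """White Box counterpart: do not merely answer 'theorem or not',
    but return the proof steps that make the derivation transparent."""
    proofs = {a: (a,) for a in axioms}   # every axiom proves itself
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in proofs and conclusion not in proofs:
                proofs[conclusion] = proofs[premise] + (conclusion,)
                changed = True
    return proofs.get(goal)  # None: the Black Box claim is falsified

# Black Box claim: 'r is a theorem of the extended theory'. Executing
# the computation exhibits the chain and thereby proves it correct:
print(derivation({"p"}, [("p", "q"), ("q", "r")], "r"))  # ('p', 'q', 'r')
```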