Invention, Intension and the Extension of the Computational Analogy

This entry is part of the research workshop "Computational Modelling", to be held on 11 March 2019 in Cracow, at the John Paul II University. It concerns a text by Professor Hajo Greif on the possibility of computationally realising such human cognitive abilities as intuition and invention.

I would like to invite not only the workshop participants but all interested readers of the blog to join the discussion.
Hajo Greif will, of course, be present at the workshop and available for face-to-face discussion of his paper.

The whole text can be read HERE.

Below I give a short abstract of the article and its slightly longer concluding remarks.

Abstract.
This short philosophical discussion piece explores the relation between two common assumptions: first, that at least some cognitive abilities, such as inventiveness and intuition, are specifically human and, second, that there are principled limitations to what machine-based computation can accomplish in this respect. In contrast to apparent common wisdom, this relation may be one of informal association. The argument rests on the conceptual distinction between intensional and extensional equivalence in the philosophy of computing: Maintaining a principled difference between the processes involved in human cognition, including practices of computation, and machine computation will crucially depend on the requirement of intensional equivalence. However, this requirement was neither part of Turing’s expressly extensionally defined analogy between human and machine computation, nor is it pertinent to the domain of computational modelling. Accordingly, the boundaries of the domains of human cognition and machine computation might be independently defined, distinct in extension and variable in relation.
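
To fix intuitions before reading the full text, here is a minimal illustrative sketch (my own, not taken from the paper) of the central distinction at work: two procedures that are extensionally equivalent, since they compute exactly the same function, yet intensionally different, since they compute it in different ways.

```python
def sum_by_iteration(n: int) -> int:
    """Sum 1 + 2 + ... + n by stepping through every term."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total


def sum_by_formula(n: int) -> int:
    """Sum 1 + 2 + ... + n via Gauss's closed form, in constant time."""
    return n * (n + 1) // 2


# Extensionally equivalent: identical input-output behaviour...
assert all(sum_by_iteration(n) == sum_by_formula(n) for n in range(1000))
# ...yet intensionally different: one performs n additions, the other
# performs three arithmetic operations regardless of n.
```

Whether such a difference in procedure matters is exactly what is at stake when intensional equivalence is required of comparisons between human and machine computation.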

Concluding remarks.
I have no proof or other formal conclusion to end on but merely one observation, a moral, another observation and yet another moral: First, the relation between the limits of computation and the limits of human inventiveness remains an open question, with each side of the equation having to be solved independently.
Second, it will be worthwhile to expressly acknowledge and address the relation between human and machine abilities as an open question, and as multifaceted rather than as a strict dichotomy. Any possible decision for one position or another will have rich and normatively relevant implications. On most of the more tenable accounts outlined above, the domains of human cognition and machine computation will be distinct in kind and extension, but this will not be a matter of a priori metaphysical considerations but of empirical investigation and actual, concrete human inventions.
Third, whatever the accomplishments of AI are and may come to be, intensional equivalence is not going to come to pass. In fact, several of the classical philosophical critiques of AI build on the requirement that the same cognitive functions would have to be accomplished in the same way in machines as in human beings for AI to be vindicated. Even where questions of AI are not involved, different kinds of computing machines – for example analog, digital and quantum computers – might provide identical solutions to the same functions, but they will do so in different ways. Hence, intensional equivalence will remain out of reach here, too.
Fourth, intensionality is an interesting and relevant concept in mathematics and partly also in computing, to the extent that one is concerned with the question of what mathematical objects are to human beings (which was the explicit guiding question for Feferman 1985). However, intensional equivalence might prove to be too much of a requirement when it comes to comparing realisations of computational processes in human beings and various types of machines. Extensional equivalence will have to suffice. It might become a more nuanced concept once we define the analogies involved with sufficient precision and move beyond the confines of pure Turing-computability. After all, Turing's computer analogy builds on extensional equivalence between human and machine operations. This kind of equivalence and its possible limitations are essential to the very idea of computer modelling. This leaves open the possibility of other relations of extensional equivalence to hold between different types or levels of systems, computational or other.

I cordially invite you to a discussion in which we can refer both to the details of Professor Greif’s argumentation and to some general issues that constitute the philosophical background of the article.

Here are three examples of these issues:

1) What are extensional and intensional equivalence in the theory of computation, with particular respect to comparisons between computing machines and the human mind?

2) Do we have good reasons to believe that the mind is not extensionally equivalent to a digital computer (with potentially unlimited resources)?

3) What is the relationship between human intuition and inventiveness?

Once again, I warmly encourage everyone to discuss — Paweł Stacewicz.


5 Responses to Invention, Intension and the Extension of the Computational Analogy

  1. Paula Quinon writes:

    (1)
    In “Can Church’s Thesis be Viewed as a Carnapian Explication?”, Synthese, https://doi.org/10.1007/s11229-019-02286-7 (open access) I proposed a clear account of the origins of intensional differences between models of computation. My argument goes as follows:
    (i) I claim that the Church-Turing thesis is best understood in terms of the Carnapian method of explication.
    (ii) A Carnapian explication consists in translating an intuitive concept (in the case of the CTT, the concept of computation) into a formal concept (of what "to compute" means in a given branch of mathematics).
    (iii) The Carnapian method of explication consists of two steps: the clarification of the explicandum and the specification of the explicatum.
    (iv) I claim that intensional differences between explicata appear already at the stage of clarifying the intuitive concept. For instance, the intuitive concept of computation can be clarified as a subset of arithmetical functions (recursive functions) or as a method of effective manipulation of symbols (as in Turing machines).
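
    To make (iv) concrete, here is a minimal sketch (my own illustration, not taken from Paula's paper) of two explicata of addition: one in the recursive-functions style, one as symbol manipulation on a tape, in the spirit of a Turing machine. Both agree on every input, yet they already differ intensionally in what "computing" consists in.

    ```python
    def add_recursive(m: int, n: int) -> int:
        """Addition as a recursive function: add(m, 0) = m; add(m, n+1) = S(add(m, n))."""
        return m if n == 0 else add_recursive(m, n - 1) + 1


    def add_on_tape(m: int, n: int) -> int:
        """Addition as symbol manipulation: merge two unary blocks on a 'tape'."""
        tape = ['1'] * m + ['0'] + ['1'] * n  # unary m, separator, unary n
        tape.remove('0')                       # the 'head' erases the separator...
        return tape.count('1')                 # ...leaving one block of m + n strokes


    # Extensionally one and the same function, intensionally two different explicata.
    assert all(add_recursive(a, b) == add_on_tape(a, b)
               for a in range(10) for b in range(10))
    ```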

    Hajo Greif, in his paper, wants to extend the idea that models of computation differ intensionally to the area of non-computable aspects of the human mind. I am unsure if I understand Hajo's endeavour correctly, but I would be very careful in formulating an anti-mechanist argument in such general terms. It seems to me that the problem is complex enough even for the simplified version of the anti-mechanist argument, which states that even the operation of the mind in the field of arithmetic is not mechanical (an argument based on Gödel's theorem; see Stanisław Krajewski, Twierdzenie Gödla i jego filozoficzne interpretacje [Gödel's Theorem and Its Philosophical Interpretations], IFiS PAN 2003; a nice English resumé of Krajewski's ideas can be found at https://www.academia.edu/39736125/On_the_Anti-Mechanist_Arguments_Based_on_G%C3%B6dels_Theorem_ALMOST_FINAL).

    I suggest reformulating Paweł's question (1) so that it refers not to the human mind in general but to the simplified version of the anti-mechanist argument. In consequence, a researcher taking an anti-mechanist position can systematically study the influence of the clarification (the one used for a particular model of computation) on how the (simplified) anti-mechanist argument manifests itself in that model. For Hajo it might be interesting to start with the software/hardware distinction.

    • Paweł Stacewicz writes:

      — A Carnapian explication consists in translating an intuitive concept (in the case of the CTT, the concept of computation) into a formal concept (of what "to compute" means in a given branch of mathematics).

      Can we put this in such a way that we take a standard concept of computation (described, for example, by means of the universal Turing machine model) and see how it can be formally described in different branches of mathematics? We would then examine what intensional differences occur between the descriptions (since we start from the standard concept of computation, every description is extensionally equivalent).

      Or do we put the matter in such a way that we do not prejudge what computation is? We would then examine what problem solving consists in within different branches of mathematics. If these branches are significantly different (e.g. the arithmetic of natural numbers differs from the arithmetic of real numbers, differential geometry from classical geometry…), then we get different concepts of computation. For example, the concept of analog-continuous computation would differ from that of discrete (digital) computation.

      I like the second path more…

      • Hajo Greif writes:

        It is interesting to see that my argument is construed as 'anti-mechanist' by Paula, who frames it as the view that there are non-computable aspects of the human mind. It is interesting because I would never have thought of myself as subscribing to this view on such a level of generality (or even at all, though that might be due to my deliberate abstention from subsuming thoughts under "-isms" in philosophy). If one tries to make the anti-mechanist claim more precise, it will unpack into either of (at least) two distinct specific claims, and I am wondering which one is under scrutiny here (and ascribed to me):

        #1. There is a set of properties of the human mind that is essentially not computational in nature. This is, fundamentally, the question of whether cognition is, partly or entirely, computation. That question is independent, first, of whether some of the arithmetical operations that humans perform are correctly described as computations and, second, of whether computational methods can be used to model human cognition, which takes me to the second claim about non-computable aspects of the human mind.

        #2. Computational models fail to capture some relevant aspects of the human mind. Paradigm cases of this issue are the questions of, first, whether digital simulations provide good-enough, empirically adequate approximations to the – analogue – properties of human cognition that one seeks to investigate or, second, whether such simulations would need to account for the embodied aspects of human cognitive activities in order to be adequate. The mind might still be computational in nature, while our models might be de facto inadequate to its specific computational properties.

        If I take Paula's reference to Stanisław Krajewski's work as an indicator, the suggestion seems to be that I subscribe to some form of #1. Krajewski very explicitly discusses – and objects to – the "alleged Gödel-based proof of the non-mechanical character of the human mind." This line of argument might be defensible, although I harbour some kind of Wittgensteinian skepticism against the notion of solving, rather than dissolving, philosophical problems through logical proof. Instead, if my argument needs to be labeled as anti-mechanist, it will be so in the way outlined in #2 – which is much weaker in its claim and its presuppositions. It is also much less concerned with fundamental mathematical questions than it is with scientific modelling, and with computational modelling of cognitive phenomena in particular.

        For very much the same reason, I agree with Paweł that the second, more pluralistic path towards Carnapian explications is the preferable one. After all, there might be more than one intuitive concept of computation, which I think becomes perfectly evident in the fact that I am following Turing's intuitive concept (or one of his concepts, namely the LCM one), whereas someone who follows Church's concept might arrive at very different conclusions that may not even be very instructive with respect to answering questions of cognitive modelling (ignoring for the moment that several concepts seem to be wrapped into the Church-Turing Thesis).

  2. Paweł Stacewicz writes:

    Hajo addressed a very interesting topic of extensional/intensional differences between computations used to model (and also: to artificially implement) human cognitive activities. For me such an approach – in terms of extensionality/intensionality – is new and inspiring. I think I will continue to explore this topic…

    For now, I would like to say a few words about how I understand equivalence (and non-equivalence) with regard to models of computation. Maybe I am getting it wrong… (so please correct me…)

    With extensional equivalence, the matter seems simple and unambiguous. Two models of computation are extensionally equivalent if they determine the same class of solvable problems (regardless of how those problems are solved). Thus the universal Turing machine (UTM) model is equivalent both to the model of recursive functions and to the model of quantum computation. In contrast, the UTM model is not extensionally equivalent to the analog-continuous model of computation (described by means of real recursive functions). The latter, theoretically speaking, allows one to solve the TM halting problem (unsolvable within the UTM model). It is therefore extensionally stronger.

    With intensional equivalence the matter is more complicated. It is not about the class of problems solved but about the way they are solved. In characterising this way, one can point to various properties/aspects, for example: the speed of solving a problem, the size of the required resources (e.g. memory), or the need to use certain natural processes. Thus, intensional equivalence depends on which of these characteristics we consider important.

    For example, the model of quantum computation is not intensionally equivalent to the UTM model (although it is extensionally equivalent), because it allows some problems to be solved faster (in polynomial rather than exponential time). It is therefore not intensionally equivalent to the UTM model with respect to the "speed of computations" property (or, to be more precise, it is intensionally stronger in this respect). For comparisons of other models we can indicate other properties.
    (It is worth noting that the above remarks apply not only to models of computation but also to specific machines/systems that work according to such models.)
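
    A classical (non-quantum) sketch of the same phenomenon, under the assumption that we measure intensional difference by asymptotic running time (my own illustration, not Paweł's): two programs with the same extension whose running times differ exponentially, and which are therefore intensionally inequivalent with respect to the "speed of computations" property.

    ```python
    from functools import lru_cache


    def fib_naive(n: int) -> int:
        """Exponential time: recomputes the same subproblems over and over."""
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)


    @lru_cache(maxsize=None)
    def fib_memo(n: int) -> int:
        """Linear time: each subproblem is solved once and cached."""
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)


    # Extensionally equivalent: the same value for every argument...
    assert all(fib_naive(n) == fib_memo(n) for n in range(25))
    # ...but intensionally inequivalent with respect to speed: fib_naive(40)
    # takes seconds, while fib_memo(40) takes microseconds.
    ```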

    In transferring the above remarks to the field of modelling mental activity, at least two strategies can be adopted: 1) to compare, in computational terms (i.e. as above), different computational models of mental activities (e.g. perception) and to examine whether they are extensionally/intensionally equivalent; 2) on the basis of empirical research (e.g. neurobiological research), to identify certain properties of the mind-brain as relevant and to check whether a given computational model of the mind-brain is (with respect to the selected properties) intensionally equivalent to the mind-brain itself.

    As one can see, the last paragraph is quite vague and general; it merely indicates the direction of my next comments.
    So for now I would like to focus the discussion on the extensional/intensional equivalence of models of computation (the first part of this comment).

  3. Cristiam Martin Jackson writes:

    It is understood that the working principle of the brain is different from what was thought 60 years ago; even so, the existing insight has led to a big leap in technological evolution during the last decade. Comparing the structure of a modern digital AI machine with the structure of what is nowadays understood as a brain shows considerable dissimilarity between the two systems. There are many differences, to mention some aspects: the number of processing units, speed, memory, efficiency.

    What propels the development of a human mind is the need to overcome a problem created as the product of a necessity intrinsic to the human being. This need makes the human brain create synaptic connections that will later associate different situations resembling a previous experience. A machine, however, has no such necessities: the activities we wish to solve are assigned by us, and the process it uses to solve them is based on a logic we created in order to obtain a response we expect. We are creating an extension of our brain, using systems that provide certain advantages, such as speed in solving problems that are easily modelled with mathematical formulations. So far we are taking advantage of these perks in order to implement previously established models that might reveal, or help us understand, how the brain works. However, it is interesting to consider the paper by Jonas E. and Kording K.P. entitled "Could a Neuroscientist Understand a Microprocessor?", in which a group of neuroscientists attempted to evaluate a microprocessor using all the techniques they normally use for exploring brains. The result was that, although it is known how a processor works, it was not possible to reveal this by means of neuroscientific approaches. This leads to some thoughts: evidently a brain does not have the same structure as a processor, and a processor cannot behave the way a brain normally would; moreover, our understanding of the brain is directly linked to these techniques. We could therefore assume that we cannot expect to understand the brain by looking for a digital brain. Instead, we first need to understand how the brain works in order to translate that model into a tangible digital circuit that might show the same response to stimuli as a brain does, even for the most trivial tasks, which are the main problem when looking for an extensionally equivalent computer.

    There are different ideas intended to crack how a brain works, and one of them has caught my attention. In "The Singularity Is Near: When Humans Transcend Biology", Ray Kurzweil explores the idea of scanning a brain from the inside: using nanobots, it would be possible to reveal not only the structure of the brain but also its activity after certain stimuli, and then to recreate this in the form of bits in a digital machine, provided we have the technology to implement what would be at least a network of billions of microprocessors interconnected in parallel.

    A final thought is based on a modern theory formulated by György Buzsáki. In his recent work "The Brain from Inside Out", he holds that the brain is constantly working on different possibilities for each of the stimuli it receives: "Our brain does not process information: it creates it."
