Updated: Apr 17, 2021
I just read a few chapters of “Introduction to Cognition and Communication” (K. Stenning, A. Lascarides & J. Calder, 2006) and very much appreciated that the authors explain the main constructs of human cognition and communication in a simple and understandable manner. These same constructs not only drive the design of AI systems but can also be explored further by building functional AI systems. Below are some of the most interesting ideas I came across.
It is hard to deny that by talking, writing or even dancing people do more than share and disseminate their ideas (the ideational perspective); we also speak or act to communicate our sociability rather than information (Malinowski’s phatic perspective). The two perspectives are tightly connected and inseparable from each other. For example, a student ID card not only transports information about the identity of its holder, but also signals the holder’s social status and affiliation with a particular academic community. On the other hand, showing this card to someone who lacks the appropriate knowledge or similar experience (who is not at the ‘same natural resonance frequency’) may have no effect. In this case we have a sender who sends a message, yet communication does not occur. It is therefore important to acknowledge that the resonance analogy addresses an important question: what conditions can hinder the process of communication?
Communication cannot be explained in detail without acknowledging the important role of interdisciplinary cognitive science, which incorporates approaches from psychology, linguistics, neuroscience, logic and modern technology. As this new discipline developed, observation alone proved too narrow and limiting a method for this kind of research. A new approach to gaining knowledge, ‘understanding by engineering’, was adopted, which I see as one of the most effective attempts in the history of cognitive science to look inside the ‘black box’, that is, to study the complex mental processes of the human brain. A good example of ‘understanding by engineering’ is building systems (e.g., Artificial Intelligence, AI) that imitate various properties of the human mind.
When talking about AI, its originator Alan Turing must be mentioned. He introduced an abstract machine (now known as the Turing machine), which showed that mathematical computations can be done by a machine following formally defined rules (algorithms). It is not surprising that these computational principles were used to explain how computers process representations of information. What is more, the main parts of a computer still resemble those of the Turing machine (the CPU, core memory and disks share some similarities with its head and tape).
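To make the abstraction concrete, here is a minimal Turing machine sketch in Python (my own illustration, not taken from the book; the machine, states and rule table are all hypothetical): a head reads and writes symbols on a tape and moves left or right according to a finite rule table.

```python
def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Run a Turing machine until it reaches the 'halt' state.

    Each rule maps (state, symbol read) -> (new state, symbol to write, move),
    where move is "R" (right) or "L" (left). Negative head positions are not
    handled in this sketch.
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        # Read the symbol under the head; past the end, the tape reads blank.
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if head == len(tape):          # extend the tape on demand
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# A hypothetical rule table: flip every bit, halt at the first blank.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("1011", flip_rules))  # -> 0100
```

The point of the sketch is how little machinery is needed: a state, a tape, and a lookup table of local rules are enough to carry out a well-defined computation.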
However, it is much more difficult to grasp that the same notion of computation applies to mental processes in the human mind. How could this be? First, computation should be regarded as nothing more than an abstraction. Second, computation can be used for simulations, which are also useful for science: scientists formulate theories in a computable language, and a computer can then search through possible combinations of rule applications to see whether, and how, a certain sentence is generated by the rules. Third, the implementation of computation in biological systems is very different, but some of the brain’s computations are reasonably well understood at certain levels (mostly those involved in animals’ sensory processing).
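The second point, a theory stated as computable rules that a machine can search, can be sketched with a toy rewrite grammar (my own illustration; the grammar, words and function name are hypothetical, not from the book):

```python
# A theory of a tiny language, stated as rewrite rules: each non-terminal
# symbol on the left may be replaced by any of the sequences on the right.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["students"], ["teachers"]],
    "VP": [["talk"], ["see", "NP"]],
}

def generates(symbols, target):
    """Search rule applications to see whether `symbols` can become `target`."""
    if len(symbols) > len(target):
        return False                    # expansions never shrink the sentence
    if all(s not in RULES for s in symbols):
        return symbols == target        # all terminals: compare directly
    # Expand the leftmost non-terminal with each of its possible rules.
    i = next(j for j, s in enumerate(symbols) if s in RULES)
    return any(generates(symbols[:i] + rhs + symbols[i + 1:], target)
               for rhs in RULES[symbols[i]])

print(generates(["S"], ["students", "see", "teachers"]))  # -> True
print(generates(["S"], ["see", "students"]))              # -> False
```

The computer does exactly what the paragraph describes: it mechanically tries combinations of rule applications and reports whether the sentence falls out of the theory.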
One can notice that many analogies and abstractions are used to explain the human mind and communication: phatic vs. ideational, transportation vs. resonance, the Turing machine, computation, and so on. This method usually lets us see a complex concept in simpler terms, which also makes it easier to study and research.
Another important application of the Turing machine idea in cognitive science is Chomsky’s demonstration that a finite-state machine (one whose tape moves in a single direction) cannot be a reliable model of the human language processor: it is unable to decode the hierarchical structure of a sentence, because that requires an additional resource, namely memory. Yet people can understand even very complex, deeply nested sentences. This suggests that biological systems implement computations more expressive than finite-state machines.
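Chomsky’s point can be shown in miniature (my own sketch, with hypothetical names): nested sentences follow an aⁿbⁿ pattern, where every opener must later be matched by a closer. No finite-state machine can recognize this pattern for unbounded n, because its fixed set of states cannot count arbitrarily high; adding a single unbounded counter, a minimal form of memory, makes it trivial.

```python
def is_nested(tokens):
    """Recognize the pattern a^n b^n (n >= 1) using one counter as memory."""
    depth = 0
    seen_b = False
    for t in tokens:
        if t == "a":
            if seen_b:              # an opener after a closer breaks nesting
                return False
            depth += 1              # the counter is the 'memory' a finite-state
        elif t == "b":              # machine lacks
            seen_b = True
            depth -= 1
            if depth < 0:           # more closers than openers
                return False
        else:
            return False
    return seen_b and depth == 0    # every opener matched by a closer

print(is_nested(list("aaabbb")))  # -> True
print(is_nested(list("aabbb")))   # -> False
```

The same matching of distant openers and closers is what the brain does effortlessly when parsing “the mouse the cat the dog chased saw ran”, which is why memory must be part of any model of the human language processor.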
The notion of internal representations (mental structures that perform computational functions in mental processes, which often amounts to different forms of memory) can add clarity to explanations of human thought. Historically, internal representations were thought of as pictures in the mind, but this “pictures-in-the-mind” theory faces the well-known infinite-regress argument (if an inner observer views the picture, something must in turn view that observer’s picture), and it is still not clear how we would process such images. Do human mental processes frequently switch representations during reasoning? Is general experience, or some other integrative process, important here? Do mental processes involve at least two levels of system, with systems of representations at one level? Or do representations achieve data reduction because they are selective? Unfortunately, the scientific debate does not yet provide a clear answer. It is only suggested that combining the methods of psychology and AI may help us understand the mind and answer the questions above.