The above is the title of a discussion at the interdisciplinary ‘Philosophy, Psychology, and Informatics Reading Group’, where Matteo Colombo and Matthew Chalk discussed two recent papers on two theoretical approaches to cognitive modelling:
Griffiths, Thomas L. et al. 2010. “Probabilistic models of cognition: exploring representations and inductive biases.” Trends in Cognitive Sciences 14(8): 357-364.
McClelland, James L. et al. 2010. “Letting structure emerge: connectionist and dynamical systems approaches to cognition.” Trends in Cognitive Sciences 14(8): 348-356.
General descriptions of the models
The session began with Matteo giving a general description of both approaches, including their advantages and problems.
Very generally, probabilistic models are “top-down”, high-level models: they take a computational problem and engineer a solution to it, with few assumptions about low-level mechanisms. They are more structured, transparent and flexible, but less constrained, and they lack an actual neural implementation. They go directly from the task to behaviour, skipping the implementation part in the middle.
Connectionist models start from the low level, deriving cognition as an emergent consequence of the interplay of a large number of simple (“dumb”) processes. They are closer to describing the actual processes in the brain that lead to cognition (although not really, the other camp argues: they are oversimplified), but they do not scale well and are unstructured (difficult to learn).
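To make the “top-down” style concrete, here is a minimal sketch of my own (not taken from either paper): the task is posed directly as a statistical inference problem and solved in closed form, with nothing said about neural machinery. The example infers a coin’s bias from observed flips using a conjugate Beta–Bernoulli model; the data and prior are made up for illustration.

```python
from math import isclose

# "Top-down" probabilistic approach: pose the task as Bayesian inference.
# Beta(a, b) prior over the coin's bias, Bernoulli likelihood per flip.
# Conjugacy gives the posterior in closed form -- no low-level mechanism.

def beta_bernoulli_posterior(flips, a=1.0, b=1.0):
    """Return the Beta posterior parameters after observing `flips`
    (a sequence of 0/1 outcomes), starting from a Beta(a, b) prior."""
    heads = sum(flips)
    tails = len(flips) - heads
    return a + heads, b + tails

flips = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]       # 7 heads, 3 tails (made-up data)
a_post, b_post = beta_bernoulli_posterior(flips)
posterior_mean = a_post / (a_post + b_post)  # 8 / 12 = 2/3
print(posterior_mean)
```

The entire “model” is the probabilistic problem statement; the connectionist complaint is precisely that nothing here says how a brain could compute that posterior.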
Now they put together a list of issues for the debate:
- What is the goal of cognition?
- What makes a certain approach progressive?
- What type of predictions do the two approaches give us?
- Under what conditions can we say that a model describes “real” phenomena?
- Should we pursue unity or pluralism in the cognitive sciences?
- What are sociological/technological factors influencing “framework-selection”?
Comments, questions and discussion
One much-argued issue was: do we really need to choose one, or should we look into bridging the two? And if we do go for bridging, where do we start?
Another comment on both approaches was that they are closed and do not incorporate the environment, which is a crucial ingredient of cognition (it is, after all, what motivates cognition’s existence).
Matthew then gave a case study of binocular rivalry with two models of it, one from each field (Griffiths et al. 2009, Matsuoka et al. 1984). This finally spurred a somewhat more intense discussion, mainly questioning the Griffiths experiment: the claim was that it only fits the data but gives no testable predictions, so it is of little use.
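For a flavour of what the dynamical-systems side of such a case study looks like, here is a minimal mutual-inhibition-with-adaptation sketch of rivalry. This is a generic textbook-style construction, not Matsuoka’s actual 1984 equations, and all parameter values are chosen only so that the toy model alternates: two units inhibit each other, slow adaptation wears the dominant unit down, and dominance switches back and forth.

```python
# Toy rivalry model: two units with mutual inhibition and slow adaptation.
# Not the Matsuoka et al. 1984 model; parameters are purely illustrative.

def simulate_rivalry(T=400.0, dt=0.01, I=1.0, beta=1.5, gamma=0.8,
                     tau=1.0, tau_a=20.0):
    """Euler-integrate the model and count dominance switches."""
    relu = lambda x: max(x, 0.0)
    u1, u2, a1, a2 = 0.1, 0.0, 0.0, 0.0   # small asymmetry breaks the tie
    dominant, switches = None, 0
    for _ in range(int(T / dt)):
        # Fast activity: input minus cross-inhibition minus own adaptation.
        du1 = (-u1 + relu(I - beta * u2 - gamma * a1)) / tau
        du2 = (-u2 + relu(I - beta * u1 - gamma * a2)) / tau
        # Slow adaptation tracks each unit's activity.
        da1 = (-a1 + u1) / tau_a
        da2 = (-a2 + u2) / tau_a
        u1 += dt * du1; u2 += dt * du2
        a1 += dt * da1; a2 += dt * da2
        now = 1 if u1 > u2 else 2
        if dominant is not None and now != dominant:
            switches += 1
        dominant = now
    return switches

print(simulate_rivalry())   # several dominance switches over the run
```

The appeal of this style is that the alternation is not put in by hand; it emerges from the interaction of fast competition and slow fatigue, which is exactly the kind of emergence the McClelland paper argues for.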
The counter-comment was that in this rivalry case the connectionist model works, but in more complicated cognitive domains you need the statistical (probabilistic) approach, because you run into the connectionist limit: too many assumptions have to be made.
And a final question from the audience: is there anything real in either of the two models? No. That shut the discussion up for a bit.
A good question coming out of the following discussion was: if you wanted to build a robot with at least some human behaviour, which approach would you take? This question nicely sidesteps the need for actual brain research and goes straight to a functional solution. It seems the predominant approach in robotics is currently the probabilistic one, but it is not working (yet).
Not to completely bury computational cognitive science, there were suggestions that bridging the two approaches could be the way to go. Both camps would seem to agree on this, just not on where the bridging should start.
For the brave, you can dig directly into the notes I took at the meeting.
This same debate was presented again within the ANC Institute seminar.