*** Matteo *** (top-down, probabilistic)

Induction: how can people learn so much about the world from limited evidence (input)?
What makes people smart? Induction from partial or noisy information. Top-down approach.

Using the Bayesian framework to tackle induction. Probabilistic models (ingredients; a toy sketch appears at the end of these notes):
- Prior: degrees of belief over hypotheses before observing data
- Likelihood: relation between the hypotheses and observable data
- Inference: learning approximates optimal statistical inference

Advantages:
- Formal/explicit
- Identification of "optimal" solutions
- Autonomy from the implementational level:
  - Flexibility for exploring different representational formats (IMPORTANT)
  - Explanatory scope (common language for different sources of data)
  - Unifying power (common language for different cognitive domains)

Problems:
- What is known in advance (the prior)?
- What are the objectives and constraints (the loss function)?
- Lack of real basis in actual mechanisms (as-if models of performance)
- How to relate function and structure
- Experimentally underconstrained

Is the brain really Bayesian? It all boils down to this: you start with a computational problem and find a way to solve it, but don't really care about the neural implementation. Then you are doing engineering, not brain research.

(bottom-up, connectionist)

What makes people smart? Cognition as an emergent consequence of the interplay of a large number of dumb processes!
-> Focus on "real" cognitive processes
-> Emergence from low-level dynamics

Ingredients of emergentist models (see the second sketch at the end of these notes):
- Neuron-like processing units
- Knowledge stored in the connections among units
- Cognitive activity depends on experience-driven connection-adjustment rules
-> Representation lacks explicit structure

Advantages:
-> Should start from the low level, as this is the way evolution goes
-> Focus on "real" mechanisms:
   * Integrated approach to cognition
   * Emphasis on temporal and spatial constraints of ...

Problems:
-> Oversimplification of the mechanisms
-> Why/how exactly does the emergent behaviour of a complex system solve a certain problem?
-> Implicit theoretical commitments about hypothesis spaces and structure

Issues for debate:
- What is the goal of cognition?
- What makes a certain approach progressive?
- What types of predictions do the two approaches give us?
- Under what conditions can we say that a model describes "real" phenomena?
- Should we pursue unity or pluralism in the cognitive sciences?
- What sociological/technological factors influence "framework selection"?

-----

*** Matthew Chalk *** (general comments + case studies)

Low-level models make high-level assumptions. The flexibility of high-level probabilistic models makes them unconstrained; the low level provides the constraints.
=> They are somehow inseparable.

### Best approach depends on scientific question ###

Case study: binocular rivalry
- probabilistic model (Griffiths et al. 2009)
- bottom-up model (Matsuoka et al. 1984)
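
To make the three probabilistic ingredients above concrete, here is a minimal sketch assuming a toy coin-bias task (the task, the hypothesis grid, and the data are my illustrative choices, not from the talk):

```python
import numpy as np

# Hypotheses: candidate coin biases theta on a grid in [0, 1]
thetas = np.linspace(0, 1, 101)

# Prior: degrees of belief over hypotheses before observing data (uniform here)
prior = np.ones_like(thetas) / len(thetas)

# Limited/noisy evidence: say, 7 heads in 10 flips
heads, flips = 7, 10

# Likelihood: how each hypothesis relates to the observable data
likelihood = thetas**heads * (1 - thetas)**(flips - heads)

# Inference: posterior is proportional to prior times likelihood (Bayes' rule)
posterior = prior * likelihood
posterior /= posterior.sum()

print("MAP estimate of the bias:", thetas[np.argmax(posterior)])  # 0.7
```

Note how the "problems" list bites immediately: the uniform prior and the grid of hypotheses are choices the modeller has to smuggle in.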
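For the emergentist side, an equally minimal sketch, assuming a Hopfield-style network with a Hebbian rule as a stand-in for "experience-driven connection adjustment" (network size and patterns are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
patterns = rng.choice([-1.0, 1.0], size=(2, n))  # "experience" to store

# Knowledge stored in the connections among units: Hebbian outer-product
# rule; no explicit symbolic structure anywhere in W.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

# A dumb local process: each unit takes the sign of its weighted input.
def step(x):
    return np.where(W @ x >= 0, 1.0, -1.0)

# Emergent behaviour: a corrupted cue settles back onto a stored pattern.
cue = patterns[0].copy()
cue[:3] *= -1                      # corrupt three units
for _ in range(5):
    cue = step(cue)
print("pattern recovered:", bool(np.array_equal(cue, patterns[0])))
```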
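On the case study itself, the bottom-up side can be caricatured as two mutually inhibiting units with slow self-adaptation, in the spirit of Matsuoka-style oscillator models (the form of the equations and all parameter values here are my illustrative assumptions, not taken from the cited paper):

```python
import numpy as np

# One unit per eye's stimulus; mutual inhibition plus slow adaptation
# produces alternating dominance, i.e. rivalry-like switching.
dt, T = 0.01, 40.0
tau_u, tau_v = 0.5, 5.0      # fast activity vs. slow adaptation
beta, w, s = 2.5, 2.0, 1.0   # adaptation gain, inhibition strength, drive

u = np.array([0.1, 0.0])     # membrane-like states (small asymmetry)
v = np.zeros(2)              # adaptation states
dominant = []
for _ in range(int(T / dt)):
    y = np.maximum(u, 0.0)               # rectified firing rates
    inhib = w * y[::-1]                  # each unit inhibits the other
    u += dt / tau_u * (-u + s - beta * v - inhib)
    v += dt / tau_v * (-v + y)
    dominant.append(int(y[1] > y[0]))

# Count how often dominance switches between the two units over the run.
print("dominance switches:", int(np.sum(np.abs(np.diff(dominant)))))
```

The contrast with the probabilistic sketch is the debate in miniature: here the switching is a mechanical consequence of the dynamics, with no hypothesis space in sight.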