Cognitive theory in psychology appeared in the 1960s and 1970s (Miller, Galanter, & Pribram), although the major foundations were built earlier (Herbert Simon, 1940s). Its use rapidly increased, and it replaced an unwieldy stimulus-response behaviourism (Hull, Spence), loose associationisms, and a variety of other approaches which tried to represent and account for human behaviour, including psychoanalytic approaches which had moved far from Freud and were rigidly enforced through professional associations. As a psychology student in the late 1970s, I certainly found it a breath of fresh air. When I specialized in social psychology, it was the main bulwark of any theorizing, and this has continued to the present time.
Looking back now, having examined almost every theory and idea from psychology and the social sciences, four things strike me about this ‘breath of fresh air’.
1. First, on the positive side, the cognitive theories replaced some very poor theorizing about human behaviour which had got out of hand and was not leading anywhere. Applications to real-world issues were difficult, and very abstruse hypotheses were being generated. The theories of that time were almost purely theoretical: stimulus-response links had become tautological and pseudo-mathematical, with new parameters introduced to fit any problem; associationisms had nothing concrete to use as a foundation except vague promises of brain processes (Pavlov, Hebb); and psychoanalysis was either wedded to retaining historical ideas at all costs or, again, adding new hypothetical elements to explain any new findings that did not fit. This is a very broad picture, of course, and there were always some very good researchers and thinkers who were trying to do things a little differently within those domains.
2. The second thing that strikes me, looking back on the ‘cognitive revolution’, is that there were other foundations which avoided the pitfalls I have mentioned above but which did not gain prominence in the way cognitive theory did. Two of note are behaviour analysis, commenced in the 1930s by Skinner (1935), and the ecological psychology started in the 1950s by Gibson (1950, 1960, 1966; Gibson & Gibson, 1955). In principle, either of these could have risen to prominence rather than cognitive theory, because they replaced the main ideas of stimulus-response behaviourism, associationisms, and psychoanalytic theories in novel ways, and showed how a new psychology could be built. Ironically, each has become more prominent as a source of psychological ideas in recent times.
For Skinner and behaviour analysis, the actual position was never really understood by most psychologists, who assumed it was just a variation of the stimulus-response behaviourisms. This included Chomsky’s famous review of Skinner’s Verbal Behavior (1957), which completely missed the radical point of behaviour analysis and was really attacking the older behaviourisms. But the Chomsky review gave textbooks a good narrative to use when dismissing radical behaviourism, and it seemed to support a cognitive position. A second reason behaviour analysis never gained popularity was that its research was almost entirely with non-human animals, whereas cognitive theory allowed new forays into human behaviour research (which was liberating). This was obviously more appealing, especially when the non-human animal research from the old stimulus-response behaviourisms had produced so little of use. Ventures into ‘normal’ human social behaviour within behaviour analysis were also not encouraging (Guerin, 1994), whereas the cognitive approaches seemed (on paper at least) to be getting somewhere new and exciting. The big exception for behaviour analysis was the research with people with autism or developmental disabilities, and behaviour analysis has always been strong in these domains.
For Gibson, there are perhaps two factors which prevented a bigger role. First, Gibson himself was almost exclusively concerned with understanding perception, and many saw that as the sole application of what he was saying, with the real potential only being shown much later (Powers, Brookes, Neisser, Ingold). Second, many read Gibson but only thought about his ideas in terms of cognitive theory: humans perceive affordances, which get represented in the cognitive system and stored as memories of affordances, which are later used in processing new information about the world, and that processing leads to body movements. Part of this misunderstanding was probably Gibson’s use of the word ‘information’, which he used early on, but which had a very different meaning to its later use in cognitive theories (Miller). So, for those who even bothered to read Gibson during the heyday of cognitive theorizing, his work was only seen as a slightly different version of cognitive theory.
3. The third thing that strikes me now about the ‘cognitive revolution’ is that the theorizing was actually not that much different from the stimulus-response behaviourisms or the associationisms. They all purported to understand what humans do by suggesting that after we see things and act, some connection, association, link, memory trace, distributed memory, or S-R bond remains inside us.
None of these positions grounded these ‘bonds’ or ‘connections’ in anything observable or concrete, except promises of brain processes which would be known in the future: we form associations which are in the brain somewhere or just ‘stamped in’; we make S-R links through learning which, again, are in the brain somewhere; or we process information and store this information in some form within the brain somewhere. They all posited that the world gets ‘into’ the body as some form of representation or association, and is then stored or left there as some residue to assist in future behaviours. That is, despite some applications of cognitive theory to real life, the main new ideas were still not observable and were very abstract: big theories were based on meagre observations. Both the behaviour analysis and the ecological psychology (Gibson’s version) foundations were, of course, not in agreement with these points.
4. This leads us to the fourth thing that strikes me about the ‘cognitive revolution’: why it became so overwhelmingly popular within psychology, aside from replacing worse theories. For me, there are two parts to this, which go together.
First, although cognitive theory was not much different from the earlier theorizing of S-R behaviourisms and associationisms, viewing human behaviour as a processing mechanism inside people allowed more structure to be placed upon the woolly ideas about what happened to the associations, links, or connections once they were formed, without having to immediately pin this on proposed brain processes (to give it some physicality). All this was still premised on a future knowledge of brain pathways, but it provided greater flexibility to give (theoretical) structure to a messy part of the chain.
For cognitive theories, connections or links are still formed, and they will one day be observable or measurable as brain processes; but before that day arrives we can finally talk more about what happens in between, by modelling or simulating these events as an internal, active chain of hypothetical events. We see objects, and the ‘information’ about these is passed to a processing centre where it can be changed, manipulated, brought into new connections, adjusted, etc. Things can be done to this information inside the human brain independently of the outside world. It can then be stored, so later effects of that ‘information’ can be viewed in hypothetical terms of the organism ‘retrieving’ memories, which are also put into the processing unit, where a new mix is created completely internally, divorced from the world. It was this structuring of what happens ‘post-association forming’ that was sorely lacking in earlier accounts. But it was all theoretical.
The second part of how I now think about the popularity of cognitive theorizing stems from the first, and is a direct consequence. All the above is abstract and hypothetical, although still predicated on the future promise of underlying physical brain processes which will match the cognitive processing model. What this meant was that building this ‘cognitive architecture’ was easy. You could add anything on as a new processing unit to deal with aspects of observed behaviour which did not fit; such new units made sense in explaining new findings, with the only problem being that they were totally under-determined with respect to the observable world. They were abstract, hidden inside humans, where they did their structural processing, and had a future material basis in the brain, which really meant that you were not constrained in what you could currently theorize. Doing this was easy! PhDs could do it; you could do it in the bath!
As an example, there is the vast cognitive literature which recognized, quite correctly, that humans seem to deal with the use of language in a different way to how they deal with other behaviours. Instead of wallowing in murky language links, S-R associations, and stamped-in language connections, cognitive theory legitimized the creation of new, hypothetical ‘processing units’ especially for observed language behaviour. This went so far as Chomsky’s infamous Language Acquisition Device, which was purported to be in-built in humans at birth and which assisted in the enormously rapid acquisition of language observed in children, but which was entirely an abstract, theoretical manoeuvre.
The point I am making, therefore, is that cognitive theorizing as a strategy was cheap to wield, theoretically rapid, and theoretically satisfying (albeit abstract and hypothetical), because it was able to give some credibility to explanations of any puzzling aspects of human behaviour which were observed. All said, it was convenient, like a ‘runabout inference ticket’ in logic (Prior, 1960), and the guarantor for this abstractness was the future promise of physical brain structures to come.