Within the human consciousness, ‘traces’ of physical experiences, referred to as memories, are kept in different data stores - or ‘memory stores’ - each of which stores a different format of memory data (coding), retains information for a different maximum length of time (duration) and holds a different maximum amount of information at any one time (capacity). Each of these concepts has been explored through research studies and meta-analyses, from Joseph Jacobs (1887) and Harry Bahrick et al. (1975) to Alan Baddeley (1966a, 1966b) and George Miller’s “The Magical Number Seven, Plus or Minus Two” (1956). Within the context of these studies, the term ‘memory stores’ is generalised to refer only to the human short-term and long-term memory stores, and not to specific stores within those overarching categories (such as the visuo-spatial sketchpad and phonological loop outlined in Baddeley and Hitch’s 1974 Working Memory Model, or the semantic, episodic and procedural memory stores defined by Endel Tulving in 1985). Short-term memory refers to our temporary capacity to retain small chunks of information in a readily accessible state for up to thirty seconds, whereas long-term memory refers to the capacity to retain informative and meaningful knowledge indefinitely, although not typically in a readily accessible state. Information held within the long-term store requires conscious, deliberate effort to retrieve, and even with this effort, relevant cues (e.g. objects or sensory experiences reminiscent of a particular episode) may be required to fully retrieve an older memory.
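Since the discussion that follows turns on these three criteria, a minimal sketch may help fix them in place. The following Python toy is purely illustrative - the MemoryStore class and its field values are my own shorthand for the textbook characterisations given in this essay (thirty-second STM duration, Miller’s 7 ± 2, and the coding preferences Baddeley identifies below), not part of any published model:

```python
from dataclasses import dataclass

@dataclass
class MemoryStore:
    """A memory store characterised by the three criteria described above."""
    name: str
    coding: str    # format in which information is stored
    duration: str  # maximum retention time
    capacity: str  # maximum amount held at any one time

# Textbook characterisations of the two overarching stores.
stm = MemoryStore("short-term memory", "mainly acoustic",
                  "up to ~30 seconds", "roughly 7 +/- 2 chunks")
ltm = MemoryStore("long-term memory", "mainly semantic",
                  "potentially lifelong", "close to unlimited")

for store in (stm, ltm):
    print(f"{store.name}: coding={store.coding}; "
          f"duration={store.duration}; capacity={store.capacity}")
```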
In 1968, Richard Atkinson & Richard Shiffrin published their theory of the modal, or multi-store, model of memory - a processing system whereby meaningful information passes through three stores (the sensory register, the short-term memory and the long-term memory). These stores were not subdivided and, with the exception of the sensory register, were defined only by duration, with no specificity of format, capacity or function. Many cognitive psychologists, such as Alan Baddeley, Graham Hitch, Endel Tulving and Michael Gazzaniga, would go on to deem this model simplistic and fundamentally incorrect in its depiction of the STM and LTM as single, whole memory stores unto themselves. These researchers and others began to formulate theories of different, more specific stores, and compiled existing information about the storage of memory into three key criteria used to categorise each of these specialised stores. The first of these criteria is the concept of coding.
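Before turning to that criterion, the pipeline Atkinson & Shiffrin described can itself be sketched in miniature. This Python toy assumes the standard transfer mechanisms of their model - attention moving traces from the sensory register into STM, and rehearsal copying them into LTM - which the summary above does not spell out, and the capacity of 7 is a placeholder rather than a claim of the model:

```python
from collections import deque

sensory_register = []                # very brief, raw traces
short_term_memory = deque(maxlen=7)  # small capacity: oldest trace displaced
long_term_memory = set()             # treated as effectively unlimited

def perceive(stimulus: str) -> None:
    """Raw input enters the sensory register."""
    sensory_register.append(stimulus)

def attend(stimulus: str) -> None:
    """Attention transfers a trace from the sensory register into STM."""
    if stimulus in sensory_register:
        sensory_register.remove(stimulus)
        short_term_memory.append(stimulus)

def rehearse(stimulus: str) -> None:
    """Rehearsal copies a trace from STM into LTM."""
    if stimulus in short_term_memory:
        long_term_memory.add(stimulus)

for word in ["cat", "map", "can", "cap"]:
    perceive(word)
    attend(word)
rehearse("cat")
print(list(short_term_memory), long_term_memory)
```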
In terms of memory modelling, coding refers to two separate phenomena - both the format in
which information is stored and the process of converting ‘raw’ information from one format to
another across stores. Information is typically encoded in one of three formats - visual, acoustic
and semantic (semantic coding refers to the retention and attribution of meaning to an
experience, concept or observation using relevant pre-existing knowledge, as opposed to the
recall of the individual sensory details associated with it). In 1966, Alan Baddeley devised a procedure designed to identify the preferred means of coding of the LTM and STM respectively, dividing seventy-two participants (both male and female) into four groups. Baddeley compiled four distinct lists of words (a semantically similar (similar in meaning), a semantically dissimilar, an acoustically similar (similar in sound) and an acoustically dissimilar list), and each participant went through four separate trials, with their recall of the correct order of the words on each list tested every time. The words were shown to participants one at a time on a screen; immediately after this presentation, participants were given a mixed-up list of the original words and asked to recall the correct order in which the words had been shown (intended to reflect the function of their STM), and were tested once again twenty minutes after the presentation had concluded (reflective of the function of their LTM).
Baddeley found that participants struggled most to recall the correct order of the acoustically similar words in the immediate recall test, but had more difficulty with the semantically similar list after twenty minutes. These findings led to his hypothesis that the dominant method of encoding in the STM is acoustic, whereas the LTM is primarily semantically coded. The logic behind what he labelled ‘acoustic confusion’ (difficulty in distinguishing between recall of similar sounds) and ‘semantic confusion’ (difficulty in distinguishing between recall of words with similar meanings) was simple - when a store that primarily encodes information in a single format receives many near-identical pieces of information in that format, the store is overwhelmed by the task of distinguishing between those similar items, damaging recall of the details surrounding their presentation (i.e. word order).
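The scoring in this serial-recall design is easy to make concrete. The sketch below uses hypothetical word lists (not Baddeley’s actual stimuli) and a simple positional-match score; a random scramble stands in for a participant’s recall attempt:

```python
import random

# Hypothetical stimuli standing in for Baddeley's four lists; his original
# word sets are not reproduced here.
LISTS = {
    "acoustically similar":    ["man", "cab", "can", "cap", "mad"],
    "acoustically dissimilar": ["pit", "few", "cow", "pen", "sup"],
    "semantically similar":    ["great", "large", "big", "huge", "broad"],
    "semantically dissimilar": ["good", "hot", "safe", "thin", "deep"],
}

def serial_recall_score(presented, recalled):
    """Proportion of words placed back in their original serial position."""
    hits = sum(p == r for p, r in zip(presented, recalled))
    return hits / len(presented)

for condition, words in LISTS.items():
    scrambled = random.sample(words, k=len(words))  # the 'mixed-up' list
    # A real participant would reorder `scrambled`; scoring the scramble
    # itself stands in for a recall attempt at chance level.
    print(f"{condition}: {serial_recall_score(words, scrambled):.2f}")
```

Under Baddeley’s account, a real participant’s reordering of the acoustically similar list would score lowest on the immediate test, and the semantically similar list lowest after twenty minutes.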
Baddeley’s research may be used to support theories of specialised coding within separate stores, and although exceptions to his findings have been identified in later research, the general idea of the STM as primarily acoustically coded and the LTM as primarily
semantically coded remains in use today. His findings provided a foundation for our
understanding of format and function within memory stores, and allowed for further distinction of
LTM and STM, eventually contributing to the development of the multi-store model of memory
(MSM) just two years later (as well as later refinements to that model).
However, whilst his findings were foundational to modern psychology, his procedure was fundamentally flawed: those findings might have limited application to real-world scenarios, and thus the study has low ecological validity. The stimuli used (i.e. the lists of words upon which participants were tested) were artificial, bearing little resemblance to the activity and function of human memory in everyday situations, where information is more likely to be meaningful. This might affect the coding of that information, as meaningful information (such as the names and faces of people that we encounter) might be stored semantically even within the STM.
Another criterion of memory stores, with stronger research support, is that of capacity - the amount of information which can be retained by a particular store at any one time. The capacity of the LTM is generally considered to be close to unlimited, with people accruing millions, if not billions, of minute episodic, semantic and procedural memories over the course of their lives (with everything from the meaning and correct usage of the words we speak to the individual movements required to ride a bike encoded into our LTM stores), but the capacity of the STM is fundamentally limited, in that we cannot recall an infinite number of details from our immediate environment: only a small number of information ‘traces’ can be kept readily accessible on the surface of consciousness at any given time. In 1887, Joseph
Jacobs developed a digit-span technique in order to test the capacity of short-term memory (an
individual’s digit span being the longest series of sequential digits they can recall with perfect