Intelligence: Machine Memory Models

Analysis

Unlike the case with human systems, the construction and operation of machine memory is fully understood, so models are not needed to simulate the way machine memory works. Models of human memory, however, are used to try to equip machines with human-like properties. The purpose of memory models for machines is therefore to help implement human-like characteristics using artificial, man-made devices and systems.

These models may seek to represent the human memory architecture to some approximation, as is the case with Artificial Neural Networks, or simply to exhibit its characteristics at a behavioural level, as is the case with the symbolic approach. In either case, architecture and theory are closely linked.

Debate

How closely can one device perform like another if it is constructed in an entirely different way? Can machine memory ever perform like human memory? There are examples of man-made systems that carry out tasks similar to natural systems even though their construction is entirely different.

Many small countries with low rainfall use desalination plants to supplement the water supply that natural processes would otherwise provide. Man-made aircraft can fly further and faster than birds, yet employ an entirely different approach to flight.

So with the correct models and thorough definitions of what human memory does, could machine memory perform just as well as human memory?

Machine Memory Architecture

Computational Overview

There are various architectural models used for machine memory systems at the symbolic level. There are also models employed as knowledge representation models. This section will consider the lower-level, memory-based models rather than the higher-level, knowledge-based models considered elsewhere.

Bits and Bytes

All computer memory systems eventually rely on the storage of individual binary bits of information. As these are grouped together in various ways for various purposes, their application becomes more specific. Computer memory capacity is most frequently quoted in bytes, or groups of eight bits. Systems that require more bits than this usually make the extra available in multiples of eight: 16, 24, 32 or 64 bits. The way that bytes, or larger groups of bits, are organised will depend on the application.

These groups of bits, or cells, of 8, 16, 24, 32 or 64 bits can be used to store many things, from simple numbers to complex numbers, letters or words. The computer needs to know what each cell is being used to store.
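As a rough illustration, the Python sketch below uses the standard struct module to make these cell sizes visible by packing values into raw bytes; the particular values are arbitrary choices for the example.

    import struct

    print(struct.pack("B", 200))        # 1 byte (8 bits): a small whole number
    print(struct.pack("h", -1234))      # 2 bytes (16 bits): a signed integer
    print(struct.pack("f", 3.14))       # 4 bytes (32 bits): a floating-point number
    print(struct.pack("4s", b"word"))   # 4 bytes: four characters of text

    # The same 32 bits mean different things depending on how they are read:
    raw = struct.pack("4s", b"byte")
    print(struct.unpack("i", raw)[0])   # the four characters reinterpreted as an integer

The last two lines show why the computer must record what each cell holds: the bits alone do not say whether they represent text or a number.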

Arrays

Graphic : Array

Arrays of bytes (or larger groups) represent a simple grouping of memory that has many applications and is easy to manipulate. Arrays may be single- or multi-dimensional.

The two-dimensional array shown is a 3x6 array with 18 cells, and information is usually accessed using a pair of indices. Indexing starts at zero, so array [0][0] would access cell A1, and cell C5 would be accessed by calling array [2][4].
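For example, the 3x6 array above might be sketched in Python as a list of rows; the cell labels are those used in the graphic.

    # Rows A, B, C map to indices 0..2; columns 1..6 map to indices 0..5.
    array = [
        ["A1", "A2", "A3", "A4", "A5", "A6"],
        ["B1", "B2", "B3", "B4", "B5", "B6"],
        ["C1", "C2", "C3", "C4", "C5", "C6"],
    ]

    print(array[0][0])  # "A1": first row, first column
    print(array[2][4])  # "C5": third row, fifth column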

Structures

At a still higher level, memory cells (groups of bits) and arrays can be organised into structures that allow sometimes-diverse information about a specific item to be stored together. A structure may contain simple number cells and complex number cells, together with arrays of numbers and words. Sometimes actual code or operations may also be stored with the information, and actions on the whole structure defined. This type of architecture is called Object Oriented.
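A minimal Python sketch of such a structure is shown below; the field names and values are invented for illustration, and the method stands in for the "actions on the whole structure" described above.

    from dataclasses import dataclass

    @dataclass
    class Measurement:
        label: str                 # a word
        reading: float             # a simple number
        impedance: complex         # a complex number
        samples: list              # an array of numbers

        def average_sample(self):  # code stored together with the data it acts on
            return sum(self.samples) / len(self.samples)

    m = Measurement("sensor-1", 3.7, 2 + 5j, [1.0, 2.0, 3.0])
    print(m.average_sample())      # 2.0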

Special Architectures

There are some specialised languages, such as LISP and PROLOG, that work with special types of memory architecture; indeed, the memory architecture used is a central feature of each language. In PROLOG the architecture is based on a type of "If..Then" rule, and in LISP it is based on linked lists.

Graphic : Special Architectures
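The Python sketch below loosely imitates both styles; it is an illustration of the two ideas rather than the real LISP or PROLOG machinery, and the facts and rules are invented.

    # LISP-style linked list: each cell holds a value and a link to the next cell.
    def cons(head, tail):
        return (head, tail)

    lst = cons(1, cons(2, cons(3, None)))

    def items(cell):
        out = []
        while cell is not None:
            head, cell = cell
            out.append(head)
        return out

    print(items(lst))                   # [1, 2, 3]

    # PROLOG-style "If..Then" rule: IF x is a dog THEN x is a mammal.
    facts = {("dog", "fido")}
    rules = [(("dog",), "mammal")]
    for conditions, conclusion in rules:
        for kind, name in list(facts):
            if kind in conditions:
                facts.add((conclusion, name))

    print(facts)                        # now also contains ("mammal", "fido")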

Connectionist Architecture

Graphic : Biological Neuron

The connectionist memory architecture is a (very rough) approximation to the architecture of biological neurons. It is believed that information in the brain is stored in the synapses, the junctions between separate neurons. Similarly, with Artificial Neurons (the connectionist architecture), information is stored as weightings at the junctions between separate neurons.

Graphic : Artificial Neural Network

Arrays of neurons organised in layers are necessary for the storage of any significant amount of information. It is not clear exactly how many neurons or layers are needed, or exactly how the neurons should be connected, in order to store any specific amount of information. Some theories attempt to give guidance, but it is often necessary to experiment.

Machine Memory Theory

Symbolic Overview

The basis of the symbolic theory of computer memory architecture is that various symbols (words, numbers, etc.) can be used to represent objects or activities within the memory structure. Furthermore, these symbols can then be processed using known methods to manipulate the memory and possibly discover new facts.

Semantic Networks

Graphic : Semantic Network

In a semantic network, for instance, the symbolic elements used may be words and phrases. Questions about the topic held in memory can be answered by a mechanism that knows how to navigate the network and uses the symbols to derive the response.

In this simple network, the question "Can any animals bark?" could easily be answered by a simple search.
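A toy version of such a search might look like the Python sketch below; the nodes and links are invented stand-ins for whatever the graphic shows.

    # "is-a" links and "can" links form a small semantic network.
    is_a = {"dog": "mammal", "cat": "mammal", "mammal": "animal", "bird": "animal"}
    can = {"dog": {"bark"}, "cat": {"purr"}, "bird": {"fly"}}

    def falls_under(node, category):
        # Follow is-a links upward until the category is reached or links run out.
        while node is not None:
            if node == category:
                return True
            node = is_a.get(node)
        return False

    def any_can(category, ability):
        # Can any member of the category perform the ability?
        return any(falls_under(node, category) and ability in abilities
                   for node, abilities in can.items())

    print(any_can("animal", "bark"))    # True, via dog -> mammal -> animal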

Other symbolic systems have complex logic theories that allow the symbols to be manipulated in various ways.

Access

One of the main differences between machine and human memory is in the way information is accessed. Machine memory is essentially accessed by address, whereas human memory is accessed by content: a small part of the information required can lead to the availability of much more. There have been computational implementations of Content Addressable Memory. The essential feature of such systems is that information is located by presenting a small part of the information required in order to recover the rest.

One of the main ways that content addressable memory is implemented in artificial systems is by the use of Artificial Neural Networks.
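One classic neural approach is the Hopfield network; the sketch below stores two patterns as connection weights and recovers a whole pattern from a corrupted cue. The patterns and sizes are invented for illustration, and real content-addressable systems vary in their learning rules and update schemes.

    import numpy as np

    patterns = np.array([
        [1, -1, 1, -1, 1, -1],
        [1, 1, 1, -1, -1, -1],
    ])

    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian outer-product rule
    np.fill_diagonal(W, 0)                          # no self-connections

    def recall(cue, steps=5):
        # Repeatedly update the whole state until it settles on a stored pattern.
        state = cue.copy()
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)
        return state

    cue = np.array([1, -1, 1, -1, 1, 1])    # first pattern with its last bit corrupted
    print(recall(cue))                      # recovers [ 1 -1  1 -1  1 -1]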

Artificial Neural Networks

Graphic : Artificial Neural Network

An Artificial Neural Network consists of a network of artificial neurons organised and connected in layers.

Information in an ANN is stored at the junctions between neurons. During learning, the numeric weights at these junctions are adjusted by a small fraction; an adjustment may be either positive or negative. Known data is presented to the network over many trials. Weights that are seen to be good are reinforced; those that are bad are weakened. Gradually, the network is adjusted so that it can correctly classify the input information.
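A minimal sketch of this trial-and-adjustment process is given below, using a single artificial neuron learning the logical AND function; the data, learning rate and update rule are illustrative choices rather than a prescription.

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, rate = [0.0, 0.0], 0.0, 0.1          # weights, threshold, adjustment size

    for _ in range(20):                        # many trials over the known data
        for (x1, x2), target in data:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            error = target - out               # positive or negative correction
            w[0] += rate * error * x1          # reinforce or weaken each weight
            w[1] += rate * error * x2
            b += rate * error

    for (x1, x2), target in data:              # the trained neuron now matches AND
        print((x1, x2), 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0)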

Early Work

Photo : Marvin Minsky

Early work on single-layer Artificial Neural Networks (or perceptrons) began in the 1950s and 60s. Researchers such as Minsky, Rosenblatt and Widrow developed many systems using these perceptrons. In 1969, Minsky and Papert produced a book entitled 'Perceptrons' that proved that single-layer perceptrons were incapable of performing even the simple exclusive-or (XOR) function. This led to a sharp decline in research on ANNs.

Gradually, during the 1970s, other researchers showed that by using multi-layered networks the original problems could be solved, and that ANNs could perform some complex classification tasks.
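To see why an extra layer helps, the sketch below hand-wires a two-layer network that computes XOR. The weights are chosen by hand for illustration rather than learned, and this is only one of many constructions that work.

    def step(x):
        return 1 if x > 0 else 0

    def xor(x1, x2):
        h1 = step(x1 + x2 - 0.5)    # hidden unit: fires if at least one input is on
        h2 = step(x1 + x2 - 1.5)    # hidden unit: fires only if both inputs are on
        return step(h1 - h2 - 0.5)  # output: at least one on, but not both

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))

No single-layer unit can draw one straight line that separates the XOR cases, which is exactly the limitation Minsky and Papert identified.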

An Application

One example of the use of the classifying properties of ANNs is in Automated Medical Testing. In one case, the result to be detected is the presence or absence of a chemical reaction. The reaction causes a clumpy structure in the test substance, whilst a non-reaction leaves the test substance smooth. An ANN has been successfully trained to detect reactions from a visual image.

Sources

The following provides an overview of memory models.

Yazdani, Masoud (ed.), Artificial Intelligence: Principles and Applications. Chapman & Hall, 1986.