This paper considers the use of memory models and machine intelligence to dynamically update a computer-based representation of the occupancy of a small building. The input to the model is derived from very simple, single-bit movement sensors in each room of the premises. It will be shown that the information derived from these sensors can provide adequate data for a building control scheme. Short and Long Term memory models of man will be briefly reviewed. Working models for Short and Long Term memory will be discussed, which have evolved from the earlier work but which have been tuned to fit the machine level constraints of this type of application. A review of the performance of a working pilot installation will be given. A performance measure will be derived and initial figures using this measure will be presented.
Thanks to Blackburn College and Lancashire Education Authority for providing support and financial assistance throughout this research project.
First Published in Robotica (1992) volume 10, pp 65-74
This paper will describe a memory model which is being used in the induction of complex information from an array of simple movement sensors, one situated in each room of a small domestic premises. The objective of the work is to provide information about premises occupation which will allow a control system to effect sensible and autonomous control of the environment.
The main constraint imposed in this work is that simple, readily available sensors are used. Such sensors can provide only low level information.
The controlling system is able to sense and manipulate an environment, about which it has specific information. Although the information sensors and manipulators are all static, the intelligence required by this system is analogous to that required in more traditional robots.
An experimental site has been used as the focus for this work along with various computer simulations. A plan of the site is shown in figure 1. Movement sensors are situated at convenient locations in each room. The purpose of these sensors is to provide a simple digital pulse when human movement is detected. It should not be possible to enter or exit from any room without the sensors being triggered.
The AMC is so labelled because it provides fast, reflex control of the environment. It operates from a simple rule set and can perform independently of the HLP. Data from the AMC is relayed to the HLP, where higher level computation attempts to maintain a computer representation of the environment; chiefly the occupation of the environment.
Figure 1 : A plan of the experimental site
During the early part of the work, it was realised that the derivation of any complex data may be expensive in computer time and may not be compatible with the sort of immediate control which would be required in an active environment. The phrase Autonomic Microprocessor Controller (AMC) was used to describe the microprocessor based environmental interface which would connect to all sensors and actuators and would also relay and receive information from the main processor or High Level Processor (HLP).
It also became necessary to allow the AMC to adopt an autonomous functionality since the initial down time of the HLP would be considerable and the converted environment would require some sort of control functionality. The details of the AMC and its sensing and control abilities have been presented in an internal report at Liverpool Polytechnic (LIV/ENG/AMC Dr. D.Williams, Liverpool Polytechnic, Dept. of Eng.).
Events from the environment will initially be processed by the AMC using local control rules and then be relayed to the HLP for higher level processing. Although many types of event are possible, including analogue environmental values, the events considered here are those from the movement sensors in each room, which provide pulses when the movement of a person is detected.
An event can therefore be described as the indication from a sensor within a particular room, that movement has taken place within that room. It has been shown that there are analytical tools which may be applied to this data 1 but that the following technique is likely to prove more robust in the long term. Events from the actual environment may signal that a sensor is becoming active or that a sensor is becoming inactive. A sensor is only of interest when it is active.
2. Short and Long Term Memory: (STM,LTM)
The view that Short Term Memory is inhabited by a few symbolic chunks of information, whilst Long Term Memory holds all productions, is outlined by Winston 2. Specific combinations of the contents of Short Term Memory trigger items in Long Term Memory. Short Term Memory is used as some sort of key in calling appropriate procedures.
In Memory Models of Man & Machine, Narayanan 3 outlines Cohen's 4 memory model of linear progress of sensory data through a primary store, to Short Term Memory where forgetting occurs within a short interval. Data which is not forgotten may pass to Long Term Memory which does allow some forgetting but is generally more permanent. This follows the general work by Miller 5, which details the limits of human Short Term Memory. Further work shows LTM playing a greater part in item recognition and the work of Tulving 6, considers LTM as containing both episodic and semantic knowledge.
Anderson 7 discusses the progress made in the theories of STM and LTM and suggests that such models are indistinguishable from models of a single Long Term Memory with an initial rapid decay.
2.1 A generalised view of Short Term Memory.
The general view of Short and Long term memory suggests a fixed size short term memory and a complex long term structure. It also suggests that all knowledge is contained in Long Term Memory and Short Term Memory is used as an index to contextual knowledge or procedural mechanisms.
It should be remembered that these models are intended to serve as an explanatory aid in the understanding of human memory. Memory models are also required for machine intelligence and such models will generally offer solutions to particular aspects in machine learning and control. They may be similar in many respects to memory models of man but will be modified by the pragmatics of functional constraint.
2.2 A specialised model for Short Term Memory.
The Short Term Memory Model for this application has been designed to record direct information from simple environment movement sensors. The complexity of any entry in STM is dictated solely by the need to explain each event and induce the effect of that event on the environment. Thus, STM contains a list of the most recent events from the environment.
An STM length of 600 has proven to be optimal in tests performed to date. This represents between approximately 45 and 90 minutes worth of sensor data from the environment.
STM is also used to store a model of the occupancy of the premises at each event entry. The most recent entry should therefore contain a model of the current occupancy of the world. The word environment will often be used, in this paper, instead of the phrase 'occupancy of the premises or world'.
At each event, a new environmental model is deduced and entered into STM which has the following structure :-
a. Event identification.
b. A time stamp for event arrival.
c. Number of people in each room (13 rooms) (world model).
d. A binary record of each sensor state at (b).
e. A tree pointer indicating the path taken through the decision tree.
f. A tree node memory list which is used to prune the search tree during error correction.
g. A tree node memory pointer.
In addition and in agreement with more general memory models, information from LTM is fed back to STM to assist in decision making. When this option is selected there are additional entries in STM :-
h. A time of day number, in the range 0..512
i. A list of prediction values for each sensor indicating the likely activity level for that sensor in the near future.
The function of STM is then to record the most recent environmental sensor activity and to maintain a current environmental model along with the information required in its induction. In this way, when errors occur, providing the source of the error is still in STM, it can be tracked down and rectified.
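The STM record and its fixed-length, dislodge-the-oldest behaviour may be illustrated by the following sketch. This is a hypothetical Python rendering for clarity only; the working system is written in 'C' and the names here are not taken from it.

```python
from collections import deque
from dataclasses import dataclass, field

STM_LENGTH = 600  # the length found to be optimal in tests to date
NUM_ROOMS = 13

@dataclass
class STMEntry:
    """One Short Term Memory record (slots a-g; h-i when LTM feedback is on)."""
    event_id: int                  # a. event identification
    timestamp: float               # b. time stamp for event arrival
    occupancy: list                # c. number of people in each of the 13 rooms
    sensor_states: int             # d. bitmask of sensor states at (b)
    tree_pointer: int = 0          # e. branch taken through the decision tree
    node_memory: list = field(default_factory=list)  # f. visited world states
    node_memory_ptr: int = 0       # g. tree node memory pointer
    time_of_day: int = 0           # h. time of day number, 0..512
    predictions: list = field(default_factory=list)  # i. per-sensor predictions

class ShortTermMemory:
    """Fixed-length STM: inserting a new event dislodges the oldest."""
    def __init__(self, length=STM_LENGTH):
        self.entries = deque(maxlen=length)

    def insert(self, entry):
        self.entries.appendleft(entry)   # most recent entry at the top

    def current_world(self):
        return self.entries[0].occupancy if self.entries else [0] * NUM_ROOMS
```

The deque with a maximum length captures the behaviour described above: once 600 entries are held, each new event silently discards the oldest one.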
2.3 A specialised model for Long term Memory.
The model for Long Term Memory being considered, fits the general description for LTM models of man. However it does not contain any procedural knowledge. Individual events are not used as the source for LTM and neither are deduced world models recorded. Recording deduced world models in LTM would lead to instability since there is no way of verifying the accuracy of a world model.
The source for Long Term Memory is sensor data from the AMC. This data is averaged and delivered to the HLP 512 times a day; figure 2 shows typical data for one day, from movement sensors throughout the premises. Such data contains the generalised sensor activity level of a movement sensor and an average value, in cases where the sensor produces analogue information, such as temperature or light level.
Figure 2 : Showing source LTM data from movement sensors
During each day, sensor averages are recorded for all movement and other environmental parameters. At midnight, the information is stored, if required, and further averaged to represent shorter parts of the day.
The original proposal for LTM in this case, was that a hierarchy of sensor averages be used to provide predictive information for different parts of the day, eg 12 parts would provide predictive information for a 1.5 hour period. However, the current implementation, although deriving all necessary source data, only maintains one of the hierarchies. Each structure in this hierarchy represents one 64th part of a day or about 23 minutes. The information contained in each LTM structure is shown in figure 3.
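Since the AMC delivers 512 averaged samples per day and the maintained hierarchy keeps one structure per 64th of a day, each LTM slot further averages 8 consecutive samples. A minimal sketch of this folding step (function and variable names are illustrative, not from the implementation):

```python
SAMPLES_PER_DAY = 512   # averaged deliveries from the AMC per day
SLOTS_PER_DAY = 64      # maintained hierarchy: one slot per ~23 minutes

def build_ltm_slots(daily_samples):
    """Fold 512 per-sensor activity samples into 64 slot averages."""
    assert len(daily_samples) == SAMPLES_PER_DAY
    per_slot = SAMPLES_PER_DAY // SLOTS_PER_DAY   # 8 samples per slot
    return [sum(daily_samples[i * per_slot:(i + 1) * per_slot]) / per_slot
            for i in range(SLOTS_PER_DAY)]
```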
Figure 3 : A general structure for Long Term Memory
At 3:00am each day, the latest section of hierarchical data from the environment is installed in the LTM structure. The installation procedure may lead to the addition of new information or general support for existing information. Old, rarely used information in LTM may be lost during this process.
2.4 Memory structures designed for environmental data.
These models of both STM and LTM have been designed specifically to deal with simple environmental data. Care has been taken not to attempt to learn information which may lead to a gradual decline in belief in the truth value of the information contained in LTM. A situation may be envisaged where one attempts to learn higher level occupancy information and in turn use this knowledge to influence the derivation of new world models in STM. This could clearly lead to a degradation in system performance, although it will form part of a longer term goal when performance can be more accurately assessed.
Long Term Memory deals with a more generalised form of the sensor data and also allows for further generalisation to take place. Specialisation is used when selecting a particular LTM structure to be fed back to STM at each appropriate point in time. LTM is able to store several sensor memory options and maintains a utility for each option and an environmental as well as temporal applicability measure. Thus, the most appropriate long term memory structure may be selected for feedback to STM at event arrival.
Short Term Memory is then used to record simple sensor data and to provide a source of information which is necessary to resolve a new world model given the new event and event history. STM is also used to hold information which has been fed back from LTM and which will assist in the event resolution process.
3. Using Short Term Memory for immediate event resolution:
New events arrive from the AMC via a serial data link. As soon as they arrive they are time stamped and placed in a holding buffer until the event can be processed. (A definition of the holding buffer in terms of other memory models has not been explicitly stated. It may however be likened to the sensor buffers thought to exist in man, such as the iconic store for visual information.) When the HLP is free it will extract the next event from the holding buffer and insert it at the top of the STM structure, thereby dislodging the oldest event in STM. The environment is copied from the previous STM structure and STM is passed on for event resolution.
3.1 Event resolution and likelihood ranking.
Event resolution is the process of deducing all of the possible reasons which could explain the event and placing these reasons in a list. The evaluation process which builds the list of reasons, the conflict list, will only attempt to deduce an explanation for sensors which are activating and not those which are deactivating. Sensor deactivation is of course used in the process of conflict ranking and must therefore remain in STM.
Event resolution considers the following options, where the event is a sensor activation in one room. Note that a person moving out of a room is not considered since that movement will become a new event within an adjoining room.
- People (1 or more) moving from adjoining rooms.
- People moving from more than one room together.
- People moving within a room.
- People making a rapid double move, into/out of a room.
- People moving through rooms with active sensors. (movement pipes)
The conflict list of reasons for an event must be sorted into an order where the most likely reasons appear at the top of the list. Various heuristics and the current state of STM are used in this process.
3.2 The Conflict List.
The conflict list consists of a list of reasons for an event in the following form.
- Source room
- Number of people moving
- Probability or likelihood
- Combination list of moves if used
The likelihood value is derived from the current state of STM. Factors which influence the likelihood value are,
- Temporal recency between the event and a previous event in an adjoining room. Derived from STM records.
- The number of people moving, favouring fewer.
- An influence factor based on the sort of evaluation being performed, eg adjoining rooms, moves through raised sensors, moves which may skip sensor activation.
When each entry in the conflict list has been given a ranking, the list may be sorted into best first order.
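The ranking described above may be sketched as follows. The particular weighting of recency, number of movers and evaluation-type influence is an assumption for illustration; the paper specifies the factors but not their exact combination.

```python
def rank_conflict_list(reasons, now, last_seen, influence):
    """Rank candidate explanations for an event, best first.

    reasons: list of (source_room, n_people) candidates.
    last_seen: room -> time of the most recent event there (from STM).
    influence: per-room factor for the sort of evaluation performed
               (adjoining move, movement pipe, skipped sensor, ...).
    """
    ranked = []
    for room, n_people in reasons:
        recency = 1.0 / (1.0 + now - last_seen.get(room, 0.0))  # favour recent
        few = 1.0 / n_people                                     # favour fewer movers
        likelihood = 100.0 * recency * few * influence.get(room, 1.0)
        ranked.append({"room": room, "people": n_people,
                       "likelihood": likelihood})
    ranked.sort(key=lambda r: r["likelihood"], reverse=True)    # best first
    return ranked
```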
3.3 Using the conflict list to create a new model.
The first entry in the conflict list provides the most likely cause of the event and is therefore used to derive a new world model from the previous world model. The previous model will have been copied up into the current world record. The best entry in the conflict list provides information concerning how many people have moved and the rooms from which they came. The new world model reflects this belief. An entry is also made in the tree pointer slot which shows that the branch taken through the decision tree is the best or left hand one, figure 4.
Figure 4 : Tree showing left hand branch
4. Using Short Term Memory in Error Identification and Correction:
As each world model is derived from the most likely reason in a full conflict list, it is hoped that errors will be minimised. It would of course be unreasonable to expect no errors to occur since, even with much more elaborate inductive powers, such simple input data will always lead to some errors. The AMC will provide a control buffer against such errors in that it will provide rapid autonomic control where needed, giving the HLP time to correct errors which have been identified.
This system relies on the recognition of errors as part of its operational characteristics. The best first ordering of the conflict list at each node in STM should facilitate rapid correction of errors in most cases.
4.1 Catastrophic errors.
There are several types of errors which can be identified in the system. The most obvious and most disastrous type of error, which will be referred to as a catastrophic error, is easy to recognise. It has occurred when there is a new event for which an empty conflict list is produced. That is, there are no known reasons to explain an event.
Under these circumstances it is realised that an error must have been made previously. Hopefully the diversion in direction within the decision tree, from the correct path, is still contained in STM.
4.2 Using Short Term memory in error correction.
When a catastrophic error is recognised, the error correction mechanism back tracks through STM to find the source of the error. The mechanism initially looks back through STM using the source of the error and its neighbouring rooms as an index. It thus avoids taking alternative branches through the decision tree which are unrelated to the source of the current error. Having found a potential source of the error, the mechanism takes the next best branch and then proceeds back up the tree using the best branch option. If a new failure is encountered, the back tracking is restarted from the latest source of error, this time the new index is added to the old.
Records from this back tracking process show that much redundancy exists within the decision tree, figure 5. To eliminate duplicate searches which prove wasteful in computing time, a node memory at each STM node records the world states which have been visited at that node during the current error correction process. Therefore, when the mechanism is attempting to rebuild STM after identifying a potential source of error, world state records at any node which have been previously identified will prevent further investigation of the path.
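The node memory mechanism amounts to pruning a depth-first search over STM with a per-node set of visited world states. The following sketch illustrates the idea; `resolve` is a hypothetical callback standing in for the evaluate mechanism, and names are illustrative.

```python
def backtrack(node_index, world, stm_events, node_memory, resolve):
    """Re-derive world models forward from a suspected error point.

    node_memory[i] is the set of world states already tried at STM node i;
    revisiting one means this path duplicates earlier work, so it is pruned.
    resolve(world, event) yields candidate successor worlds, best first.
    Returns the final world model on success, or None on failure.
    """
    if node_index == len(stm_events):
        return world                         # STM rebuilt successfully
    key = tuple(world)
    if key in node_memory[node_index]:
        return None                          # already explored here: prune
    node_memory[node_index].add(key)
    for candidate in resolve(world, stm_events[node_index]):
        result = backtrack(node_index + 1, candidate, stm_events,
                           node_memory, resolve)
        if result is not None:
            return result
    return None
```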
As stated previously, this process does not cause a control lapse because the AMC is able to provide a buffer for all environmental events and also provide rapid short term control for the duration of the error correction process, assuming that this process lasts no longer than several minutes.
Figure 5 : Tree showing redundancy
4.3 Non Catastrophic errors.
Other types of errors may materialise as a divergence between the state of the world and that modelled by the HLP. These errors may only be identified with absolute certainty by consulting an oracle. However, this is not an option which is used and such errors must be identified in other ways. These errors are also different in that they are non catastrophic. This means that events applied to the present model will still produce a resolvable new world model.
The major error of non catastrophic form is a belief that a room is occupied when it is not. One clue that this error has occurred is to consider how long a sensor has been inactive. If this time is longer than is normal for a particular room when occupied, then a non catastrophic error correcting scheme is activated. Such errors are presently corrected using the same procedure applied to catastrophic error correction. If the scheme cannot correct the error in a given time, it is assumed that the inactivity is justified in this case and it is not tested again until new activity changes the belief.
If however, a new world model can be derived which fits the event history and which leaves the room in question, unoccupied, then this new model is used as the current model.
It has been noticed in experimental tests, that this process can lead to system failure because it effectively blanks off sections of the decision tree from further consideration. For example, consider this series of events.
- A room is believed to be occupied but its sensor has not fired for a considerable time.
- A non catastrophic error test, resolves the situation by selecting a new model which is resolvable and shows the room in question to be unoccupied.
- Without subsequent events, the sensor in the room in question now fires and a catastrophic error occurs.
- The original world cannot now be restored because it is believed that the portion of the decision tree in which the model lies is not correct.
4.4 A strategy for error correction.
The first simulated tests of the system, driven by a user simulating sensor firing and therefore movement, worked well with a simple error correction strategy. However, subsequent ON LINE tests showed that this simplified strategy was ill-suited.
It is now believed that error detection and correction is a complex process. A more complex strategy for error correction has been developed which performs much better than in the first ON LINE tests. This strategy currently takes the form,
1. If the error is catastrophic and the previous error was non catastrophic, Then rebuild STM from scratch. END. Else 2.
2. Reset node memory.
3. Note the current state of STM and the world models.
4. With the WIDE branch option set, try to correct the error by back tracking within a given time.
5. If fail, 6. Else END.
6. Change to DEEP search. Try again to correct the error from where the search left off.
7. If fail, 8. Else END.
8. Select WIDE branch and LASTDITCH. Restore the old state of STM. Try error correction again from the start.
9. If fail, 10. Else END.
10. Change to DEEP search. Try again to correct the error.
11. If fail, EXIT with Fail. Else END.
(LASTDITCH, when TRUE, allows the evaluate procedure to consider jumps between adjoining rooms with sensors remaining inactive. This may occur occasionally but will adversely affect the normal process if considered generally.)
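The staged strategy above can be sketched as a single control function. The callbacks (`try_backtrack`, `rebuild_stm`, `snapshot`, `restore`) are hypothetical stand-ins for the mechanisms the paper describes.

```python
def correct_error(try_backtrack, rebuild_stm, catastrophic, previous_non_cat,
                  snapshot, restore):
    """Staged error correction: WIDE then DEEP, then LASTDITCH variants.

    try_backtrack(branch, lastditch) returns True on success;
    snapshot()/restore(state) save and restore STM and the world models.
    """
    if catastrophic and previous_non_cat:
        rebuild_stm()                              # step 1: start afresh
        return True
    state = snapshot()                             # steps 2-3: note current state
    if try_backtrack("WIDE", lastditch=False):     # step 4
        return True
    if try_backtrack("DEEP", lastditch=False):     # step 6: continue deeper
        return True
    restore(state)                                 # step 8: back to saved state
    if try_backtrack("WIDE", lastditch=True):      # LASTDITCH allows sensor skips
        return True
    if try_backtrack("DEEP", lastditch=True):      # step 10
        return True
    return False                                   # step 11: EXIT with fail
```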
An additional feature of the error correction mechanism is involved in signals from the AMC that the premises is empty. This may be from user input or from the fact that no sensors have been activated for some time and the last one to activate was the front door. If this signal is received, the HLP clears the world model and restarts STM with a general start strategy so that no people are left in the premises.
Non catastrophic error correction uses the process described above, but LASTDITCH never becomes TRUE for this type of error evaluation. This strategy is still evolving but is able to maintain the ON LINE system for many days before complete failure occurs.
5 Using Long Term Memory to influence event ranking:
LTM is used to influence the process of conflict list ranking by first selecting the most appropriate record in LTM. The belief that useful knowledge relies on the selection of the most appropriate structure from LTM is not new. Anderson describes how this process is achieved in ACT 8.
5.1 Selecting the best contender from LTM.
The process of decay as described by Anderson is clearly important in the selection of the most appropriate memory structure to be used in on line evaluation. However, this is not the only selection procedure which should be used in this sort of control application. Several selection criteria are needed.
- Information which applies on the current day.
- Information which applies at the current time of year.
- Information with the best Environmental Match.
- Information which has highest utility.
- The current AMC mode of operation.
(The AMC has three modes of operation, 1 = Normal occupation, 2 = Some rooms not used eg. night time, 3 = The premises is empty.)
These selection factors are used as follows.
- First the AMC mode of operation is considered. If no records can be found which match the current mode, then the process ends with no LTM information being made available.
- Next, the day number is considered. If records for the current day are available from the list of correct modes, then only these are contenders; if no day match is found, then all days are contenders.
- Next the Best Match is considered from the list of contending records. All records within 10% of the best match remain contenders.
- If only one contender remains, then this is used. Otherwise, an environmental closeness is calculated and the current week and day number are taken into account. The highest score from this process is used as the source of LTM information.
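The selection chain above can be sketched as a sequence of filters. Record fields and the closeness callback are illustrative assumptions, not the actual LTM layout.

```python
def select_ltm_record(records, amc_mode, day, closeness):
    """Select the most appropriate LTM record for feedback to STM.

    records: list of dicts with 'mode', 'day' and 'match' fields.
    closeness(record) scores environmental/temporal fit for tie-breaking.
    Returns None when no record matches the current AMC mode.
    """
    pool = [r for r in records if r["mode"] == amc_mode]
    if not pool:
        return None                          # no LTM feedback available
    same_day = [r for r in pool if r["day"] == day]
    if same_day:
        pool = same_day                      # prefer records for this day
    best = max(r["match"] for r in pool)
    pool = [r for r in pool if r["match"] >= 0.9 * best]  # within 10% of best
    if len(pool) == 1:
        return pool[0]
    return max(pool, key=closeness)          # highest closeness score wins
```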
5.2 Managing LTM in real time.
The activity of real sensors depends on the occupancy activity within a room, on the individual sensor, on its location in the room, on the furniture layout, and therefore on the movement paths within a room, to name only the most obvious parameters. For this reason, if useful information is to be gained from a knowledge of activity, it is necessary to consider the activity of sensors relative to their individual performance and not on an absolute activity scale. Figure 6 shows a typical graphical representation of one movement sensor from LTM for one day. The thickness of the line represents the strength of belief in the sensor value and trend chosen for graphical display.
Figure 6 : Showing LTM graphical representation for dining or kitchen for one day
The HLP contains a LTM Slot Manager which performs several tasks. It maintains the correct LTM slot in fast memory so that real time information can be accessed without the need to wait for backing store. At midnight, at the start of each day, it also calculates the maximum likely sensor activity from all LTM records for the coming day. This information is used throughout the day as the basis for relative calculations to take place.
Further to this, all STM data has been made of integer type to assist in the speed of ON LINE processing.
5.3 Feeding knowledge back to STM.
The most appropriate LTM structure is selected at each new 512th part of a day. The structure provides a predicted value for each sensor for the period which the slot covers, about 23 minutes. This value is made relative by considering it as a percentage of the predicted maximum for that sensor for that day. It is also made integer by multiplying by 1024 (10 shifts).
The predictive sensor value for every sensor is then appended to STM as each event is added. In this way, LTM feedback is always available without further calculation even during error correction where back tracking is used.
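The scaling described above, a prediction expressed as a fraction of the day's predicted maximum and made integer by multiplying by 1024, may be sketched as (names are illustrative):

```python
def relative_prediction(predicted, predicted_max):
    """Express a slot's predicted sensor value relative to the day's
    predicted maximum, as an integer scaled by 1024 (10 left shifts)."""
    if predicted_max <= 0:
        return 0
    return (predicted * 1024) // predicted_max
```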
5.4 Influencing evaluation with learned knowledge.
The influencing information now contained in STM is used to adjust the likelihood values in the conflict list during the evaluate process. An existing likelihood value will have been derived from the simple heuristics mentioned in section 3.
A typical adjustment would be for an event where the most likely explanation is that one person has moved from room 'B' to the room where the event has occurred, room 'A'. The existing likelihood will be 100 and the relative (in this case per unit) influence factors from LTM would be prA = 0.5, prB = 0.8. An influence scaling factor of 0.5 is also used in this calculation.
Lnew = 100 + 100 * (0.5 * (prA - prB))
Lnew = 100 - 15 = 85
Or more generally
Lnew = Lold + Lold * isc * (prA - prB)
In this example, because there is a greater likelihood that the sensor in room B will be more active than the sensor in room A in the near future, then the new likelihood value (Lnew) is reduced from 100 to 85. When this process is carried out for all contending entries (rooms) in the conflict list, it is hoped that the new winner has a higher chance of being the best explanation for the event.
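The general formula Lnew = Lold + Lold * isc * (prA - prB) is direct to implement:

```python
def adjust_likelihood(l_old, pr_a, pr_b, isc=0.5):
    """Adjust a conflict-list likelihood using LTM prediction feedback.

    pr_a, pr_b: per-unit predicted activity for the destination room (A)
    and the candidate source room (B); isc is the influence scaling factor.
    """
    return l_old + l_old * isc * (pr_a - pr_b)
```

With the worked example's values (Lold = 100, prA = 0.5, prB = 0.8, isc = 0.5) this reproduces the reduction from 100 to 85.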
At this stage, some other influencing effects are considered but these are beyond the scope of this paper.
6 Some preliminary results:
A working system, originally developed in LISP and now converted to 'C', is able either to run a keyboard simulation or to run in real time with data supplied by a functional AMC.
6.1 Running the system On Line.
When the system is running ON LINE, its working memory is dynamically updated with all environmental parameters including analogue sensors and motor effector data. Short Term Memory is only updated when movement sensors or external door sensors are active and it is this situation which currently provides the only source of events for updating the world occupation model.
Events arrive in real time and are stamped with a time stamp supplied by the AMC. The AMC time is synchronised once each day, with drifts of less than one minute being normal. Even if the HLP cannot receive data for any reason, the AMC will buffer the events and log the time at which they occurred.
6.2 Simulation and On Line tests.
Initial tests were conducted using keyboard entry to simulate selected movement scenarios. These tests were extremely successful and showed that the HLP would correctly manage the maintenance of a world model of premises occupation. However, when ON LINE tests were conducted, the system's performance was very different from that predicted from keyboard simulations.
The first ON LINE tests would run for no more than one or two hours before failing to resolve the events into new world models. Most runs were in fact shorter than this.
The main problem was the unpredictable behaviour of the sensors. Keyboard simulations assumed that a sensor would send out a short, fixed length pulse when movement was detected and would never provide a pulse when the room was empty. This assumption proved to be highly optimistic. Sensors actually provide pulses whose length varies from 0.25s to 6s. In addition, and under certain circumstances, a sensor may re-trigger after a person has left a room. These problems were rectified by additional processing in the AMC. However, the result of this extra processing was that sensor pulses became even longer. The remaining problems were dealt with by heuristics in the evaluate mechanism.
An additional problem was in the simultaneous combination of multiple but individual movement patterns. There are multiple interpretations of such movement pattern combinations. Such problems had to be dealt with through improved evaluate heuristics and a sensible error correction strategy. It is also hoped that the information provided by LTM will continue to improve the system performance.
6.3 Results from the On Line system.
Currently, the HLP is able to maintain continuous operation for several days without failure. In more than one case, it has been stopped deliberately after seven days to perform slight adjustments to the code. These tests are being performed with all functions connected except the likelihood adjustments based on LTM feedback. The system is learning and using its basic methods to maintain a world model. When Long Term Memory is built up sufficiently, and when a measure of the system's performance can be estimated without the use of LTM feedback, the additional adjustment will be connected.
Log records show that in several cases the system believes that there are more people in the premises than is in fact the case. Under these circumstances, the extra person or people are normally shown in rooms which are in fact occupied. Clearing takes place when the premises are unoccupied for more than one hour and the STM is reset.
The main cause of current failure is when a non catastrophic error correction moves a person out of a room, particularly a bedroom, and at a much later time that sensor is triggered. In such circumstances, the processed mistake may have disappeared from STM through age.
In addition, old mistakes have proven difficult to correct when an external door is left open for a considerable time. In these cases, a movement in a room adjoining the open door could have resulted in a new body entering the premises. During error correction, the system may consider that many people have entered the premises and the size of the decision tree will become unmanageable during long back track searches.
A considerable problem is then, when an error has gone unnoticed for some time and the size of the search tree is expanded due to open external doors.
6.4 Measuring system performance.
During this work it has become necessary to consider some automatic evaluation method which will allow comparison of old and new methods. It is clearly not practicable to visually monitor the system intensively over long periods. One measure of performance which is automatic is the measure of time spent correcting errors. Although this measure cannot be taken as a measure of model accuracy, it may be used as a relative measure which is based on the accuracy of model derivation. If the models derived prove highly inaccurate, it is reasonable to assume that the system will spend more time correcting errors.
The measure chosen as most representative of system performance is the percentage of run time spent performing catastrophic error correction, excluding time spent in AMC mode 3 (premises empty). Measuring the percentage of time spent on non-catastrophic error correction has been excluded, since this time may not actually lead to any change in the world model and may cause incorrect changes which will themselves be corrected by the catastrophic error methods.
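The measure can be sketched as a simple accumulator over logged intervals. The function and field names below are assumptions for illustration, not the authors' implementation:

```python
def error_correction_percentage(intervals):
    """Sketch of the performance measure described above: the percentage
    of run time spent in catastrophic error correction, excluding any
    period spent in AMC mode 3 (premises empty).

    intervals: list of (duration_seconds, mode, correcting) tuples,
    where mode is the AMC mode and correcting is True while the system
    is performing catastrophic error correction."""
    counted = [(d, c) for d, mode, c in intervals if mode != 3]
    total = sum(d for d, _ in counted)
    correcting = sum(d for d, c in counted if c)
    return 100.0 * correcting / total if total else 0.0

# Example: a 100 s run, 20 s of it in mode 3 (excluded), and 1 s of
# the remaining 80 s spent correcting -> 1.25%
log = [(20, 3, False), (79, 1, False), (1, 1, True)]
print(round(error_correction_percentage(log), 2))  # 1.25
```

Excluding mode 3 intervals from the denominator ensures that long empty periods do not artificially dilute the figure.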
Using the stated measure, initial runs were giving readings of about 6% to 7%. More recent runs are nearer to 1% and sometimes lower. Considering all runs over a three week period in October 1990, the value is averaging about 1.2%, where the average run before failure was about five days.
The general application area described in this work is one of using the input from simple environmental sensors and an inductive mechanism to maintain a world model composed of more complex entities. An attempt is made to induce useful knowledge from simple, low cost, readily available sensors. It is clear that any single, simple event cannot give rise to the induction of a complex conclusion without further assistance.
7.1 The influence of earlier Memory Structures.
The work has shown that the further assistance used is in the form of heuristic knowledge applied to a Short Term Memory of simple events. Earlier models of Short and Long Term memory in man have not been used verbatim, but the models have influenced the development of a machine control level approach to the application area. Short Term Memory is considerably larger than would be expected in a human model and the items contained in STM, although complex, are composed of discrete and simple entries.
Appropriate information from Long Term Memory is selected and fed back to Short Term Memory for on-line evaluation. Long Term Memory is thus used to influence, and hopefully improve, the accuracy of the induced world model by modifying the inductive processes performed by the raw heuristic knowledge.
7.2 A model to deal with simple sensor input.
It has been demonstrated, using a complex application, that the model chosen has the ability to perform the desired task with measurable success. It is now the intention to use the model, without further modification, in longer-term tests and to evaluate further the implications of connecting the LTM feedback mechanism into the on-line evaluation process. Using the evaluation criteria described, it will be possible to obtain some quantifiable data on the performance of the method and on how the connection of LTM affects this performance.
Current data suggests that a figure of about 1% of the total process time will be spent correcting catastrophic errors. The system seems to perform reasonably well at this level of correction and so it may be judged to be a worthwhile benchmark for further investigation.
Some early results indicating how the LTM data will affect this performance are not decisive. This may be because the initial tests were performed with LTM data which was out of synchronisation with the current on-line environmental data by about six months. More exhaustive tests will be performed.
7.3 What should be learned?
Long Term Memory features a representation structure based on generalisations of temporally averaged sensor data. It allows several competing structures to contend for relevance in any given situation. The use of direct derivatives of sensor data ensures the accuracy of the information contained in LTM. An alternative approach would be to use LTM to contain and therefore predict occupancy information which has itself been derived using sensor data and the evaluation mechanisms.
Since the integrity of the method is still being investigated, it is believed that this approach is likely to lead to instability in the system's ability to induce new world models. This would be particularly true when LTM itself is used to influence the evaluation process.
7.4 An integrated scheme.
It should not be forgotten that, although the focus of attention in this paper has been on the use of Short and Long Term Memory in the derivation of usable occupancy models from simple input data, this method is part of an integrated scheme. The use of an Autonomic Microprocessor Controller to buffer the environment and provide autonomic control is essential to the overall design.
The point is also central to A.I. in that it recognises that errors will be made and devotes attention to error detection. The scheme can therefore justify error correction in real time and can claim legitimacy as a provider of sensible information from a sparsely sensed environment.
- J.L. Gordon, D. Williams & C.A. Hobson, "Deriving complex location data from simple movement sensors", Robotica (1990) Vol. 8, pp. 151-158
- P.H. Winston, Artificial Intelligence, Second Edition, Addison-Wesley, 1984, p. 201
- A. Narayanan, "Memory Models of Man & Machine", in Artificial Intelligence: Principles and Applications, Ed. Masoud Yazdani, Chapman & Hall, 1986, pp. 226-259
- J. Cohen, Complex Learning, Rand McNally, New York, 1969
- G.A. Miller, "The magical number seven, plus or minus two: Some limits on our capacity for processing information", Psychological Review, 63, 1956, pp. 81-97
- E. Tulving, "Episodic and Semantic Memory", in Organization of Memory, Eds. E. Tulving and W. Donaldson, Academic Press, New York, 1972
- J.R. Anderson, The Architecture of Cognition, Harvard University Press, Cambridge, MA, 1983
- J.R. Anderson, "A Theory of the Origins of Human Knowledge", in Machine Learning: Paradigms and Methods, Ed. Jaime Carbonell, MIT Press, 1990, pp. 313-351