Terms and Definitions

On this page I collect definitions and terms that I find useful for my own work.

Agent, cognitive

Cognitive agents are agents whose actions are internally regulated by goals (goal-directed) and whose goals, decisions, and plans are based on beliefs. Both goals and beliefs are cognitive representations that can be internally generated and manipulated, and that are subject to inferences and reasoning. Since a cognitive agent may have more than one goal active in the same situation, it must have some form of choice/decision, based on some "reason", i.e. on some belief and evaluation. Notice that I use "goal" as the general family term for all motivational representations: from desires to intentions, from objectives to motives, from needs to ambitions, etc. By "sub-cognitive" I mean agents whose behaviour is not regulated by an internal explicit representation of their purpose or by explicit beliefs. Sub-cognitive agents are, for example, simple neural-net agents or mere reactive agents.

Castelfranchi, C. (1998). Emergence and Cognition: Towards a Synthetic Paradigm in AI and Cognitive Science. Lecture Notes in Computer Science, vol. 1484.
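As a rough sketch of the distinction, the following Python fragment implements a goal-directed agent whose choice among several active goals is a decision over explicit beliefs. The names and numbers are invented for illustration; this is not Castelfranchi's formalism.

# A cognitive agent: beliefs and goals are explicit internal
# representations, and goal selection is a choice based on "reasons",
# i.e. on beliefs and an evaluation. All names and values are invented.

class CognitiveAgent:
    def __init__(self):
        self.beliefs = {"energy": 30, "food_visible": True}
        # More than one goal can be active in the same situation; each
        # goal carries an evaluation function over the current beliefs.
        self.goals = {
            "eat": lambda b: (100 - b["energy"]) if b["food_visible"] else 0,
            "explore": lambda b: 20,
        }

    def choose_goal(self):
        # The decision step: rank the active goals by their belief-based
        # evaluation and commit to the best one.
        return max(self.goals, key=lambda g: self.goals[g](self.beliefs))

agent = CognitiveAgent()
print(agent.choose_goal())  # 'eat': energy is low and food is visible

A sub-cognitive agent, by contrast, would map the same sensor values directly to an action, with no explicit goal or belief in between.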
 
 
 
Emergence

In Artificial Life systems the term emergence is used when properties of a system (e.g. the behaviour of an agent) arise from the system's interactions with the environment. Emergence is then neither a property of the environment nor of the agent or its control system. Usually the term is used with respect to levels of organisation, where properties which the system exhibits on a level A emerge from non-linear interactions of components at the lower level B (including other systems of the same type, the environment, and components of the system). The issues of whether emerging properties need to be novel, or are inherently unpredictable (from the analysis of interactions at level B), are controversial. (See also self-organisation.)
 
A number of interactions is itself a pattern as soon as it is itself a cause for something else.

Dautenhahn, K. (2000). Reverse Engineering of Societies - A Biological Perspective. In Edmonds, B. and Dautenhahn, K. (eds.), Proceedings of the AISB'00 Symposium on Starting from Society - The Application of Social Analogies to Computational Systems.
 Castelfranchi, C. (2000). Personal communication. 
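As a toy illustration of the level-A/level-B distinction (my own sketch, not taken from the sources above), in the cellular automaton below each cell is updated by a purely local rule, yet the global pattern that unfolds is a property of no individual cell:

# Elementary cellular automaton (rule 90): each cell is updated from
# purely local level-B interactions (its two neighbours alone), while
# a global level-A pattern -- a Sierpinski triangle -- emerges over time.

def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

cells = [0] * 63
cells[31] = 1  # a single active cell in the middle
for _ in range(32):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)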
 
 
 
Empirical inquiry hypothesis

Intelligence is still so poorly understood that Nature still holds most of the important surprises in store for us. So the most profitable way to investigate AI is to embody our hypotheses in programs, and gather data by running the programs. The surprises usually suggest revisions that start the cycle over again. Progress depends on these experiments being able to falsify our hypotheses: i.e. these programs must be capable of behavior not expected by the experimenter.
Lenat, D. and Feigenbaum, E. (1991). On the Thresholds of Knowledge. Artificial Intelligence 47 (1-3), 185-250.
 
 
 
Self-Organisation

A set of dynamical mechanisms whereby structures appear at the global level of a system from interactions among its lower-level components. The rules specifying the interactions among the system's constituent units are executed on the basis of purely local information, without reference to the global pattern, which is an emergent property of the system rather than a property imposed upon the system by an external ordering influence.

Bonabeau, E., Dorigo, M. and Theraulaz, G. (1999). Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York, Oxford.

[Self-organisation has four basic ingredients:]
 
 
Positive feedback: Amplification through positive feedback can result in a 'snowball effect'. Pheromones can increase the attractiveness of particular locations; e.g. trail laying and trail following in some ant species are used in the recruitment to a food source.
Negative feedback: It counterbalances positive feedback and in this way helps to stabilise the overall pattern. The exhaustion of food sources or the decay of pheromones are examples of negative feedback.
Amplification of fluctuations: In order to find new solutions, self-organisation relies on random walks, errors, random task-switching, etc.
Multiple interactions: Individuals can make use of the results of their own as well as of others' activities, but generally a minimal density of (mutually tolerant) individuals is required.

Dautenhahn, K. (2000). Reverse Engineering of Societies - A Biological Perspective. In Edmonds, B. and Dautenhahn, K. (eds.), Proceedings of the AISB'00 Symposium on Starting from Society - The Application of Social Analogies to Computational Systems.
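All four ingredients can be seen at work in a toy version of the binary-bridge ant experiment described by Bonabeau et al.; the parameter values below are illustrative and not taken from the book.

# Toy binary-bridge experiment: ants choose between two branches with a
# probability that grows with the pheromone already deposited there.
import random

pheromone = [1.0, 1.0]  # both branches start equally attractive
for ant in range(1000):
    # Amplification of fluctuations: the choice is probabilistic, so an
    # early random asymmetry between the branches can be amplified.
    p0 = pheromone[0] ** 2 / (pheromone[0] ** 2 + pheromone[1] ** 2)
    branch = 0 if random.random() < p0 else 1
    # Positive feedback: trail laying makes the chosen branch more
    # attractive to later ants (multiple interactions: each ant reacts
    # to the trails left by all previous ants).
    pheromone[branch] += 1.0
    # Negative feedback: pheromone decay keeps the pattern bounded.
    pheromone = [x * 0.99 for x in pheromone]

print(pheromone)  # one branch ends up carrying nearly all of the trail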
 
 
 
Social Intelligence

The individual's capability to develop and manage relationships between individualized, autobiographic agents which, by means of communication, build up shared social interaction structures which help to integrate and manage the individual's basic ("selfish") interests in relationship to the interests of the social system at the next higher level. The term artificial social intelligence is then an instantiation of social intelligence in artifacts.

Dautenhahn, K. (1999). Embodiment and interaction in socially intelligent life-like agents. In Nehaniv, C. L. (ed.), Computation for Metaphors, Analogy and Agents, pp. 102-142, Springer Lecture Notes in Artificial Intelligence, vol. 1562.
 
 
 
Society, levels of synthesis

[In analogy to Harnad's hierarchy of the Turing Test (see below), Dautenhahn describes levels of synthesis of (artificial) societies:]

ST 1: Toy models of human societies. At present most existing systems of artificial societies are social simulations showing particular, specific aspects of human societies. None of the systems shows the full capacity of human societies.
ST 2: Total indistinguishability in global dynamics. Computational social systems in the not too far future may show properties very similar to (if not indistinguishable from) human societies. In particular domains, systems at this level might succeed in abstracting from the biological, individual properties of humans and describing their behaviour on higher levels of social organisation and control; e.g. processes in economics and cultural transmission might closely resemble processes we observe in human societies. Such systems might be used effectively as 'laboratories' in order to understand processes in historical and present societies, or might be used for predictive purposes.
ST 3: Artificial Societies. Total indistinguishability in social performance capacity. Societies at this level have to account for the socially embedded, individual and embodied nature of human beings. It might be possible that 'embodiment' in the sense of structural coupling between agent and environment can be achieved without requiring physical (robotic) embodiment (...). The performance capacity of artificial societies at this level is indistinguishable from real societies, although the specific ways in which these systems interact/communicate with each other need not be similar to or compatible with human societies. However, these societies go beyond 'simulation models' of societies; they truly are artificial societies.
ST 4: Societies of Socially Intelligent Agents. Artificial societies at this level possess social intelligence like human beings do. This includes cognitive processes in social understanding in all aspects required in human societies, e.g. 'theory of mind', empathy, etc. Members of artificial societies at this level might merge with human society, even in a physical sense (e.g. if the embodied agents are robots on a T3 or higher level; see below). However, the agents need not be robotic; they might exist as computational agents, with different means of communicating and interacting with each other.
ST 5: Societies of Minds. Total indistinguishability of social intelligence. The way these synthesised societies perform is not only indistinguishable from human societies with respect to their external performance; they are also indistinguishable with respect to the internal dynamics of their social 'minds'. Means and mechanisms of verbal and non-verbal communication, social 'politics', friendship, grief, hatred, empathy etc. at the individual level, as well as the performance of the society as a whole, are at this stage indistinguishable from human societies. Members of such societies could exist in human societies without any detectable difference, i.e. they might possibly consult the same psychiatrist.

Dautenhahn, K. (2000). Reverse Engineering of Societies - A Biological Perspective. In Edmonds, B. and Dautenhahn, K. (eds.), Proceedings of the AISB'00 Symposium on Starting from Society - The Application of Social Analogies to Computational Systems.
 
 
 
Symbolic vs. Subsymbolic reasoning

Symbolic reasoning deals with converting incoming sensor data into objects coherent with the reasoner's ontology. These objects are then the subject of the reasoning process. The difference in subsymbolic reasoning is that this step of interpretation is omitted: subsymbolic reasoning takes the input, operates on it, and produces output. Neural networks, for example, are therefore subsymbolic reasoners, even though they process numbers, which are of course symbols.
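A toy contrast between the two styles (my own illustration; the sensor reading, the ontology and the rule are all invented):

# Symbolic reasoning: the raw sensor datum is first interpreted as an
# object in the reasoner's ontology, and rules then operate on symbols.
def symbolic_reasoner(temperature):
    concept = "hot" if temperature > 30 else "cold"  # interpretation step
    rules = {"hot": "open_window", "cold": "close_window"}
    return rules[concept]

# Subsymbolic reasoning: the interpretation step is omitted -- input
# numbers are mapped directly to an output (here by a single 'neuron').
def subsymbolic_reasoner(temperature, weight=0.1, bias=-3.0):
    activation = weight * temperature + bias
    return "open_window" if activation > 0 else "close_window"

print(symbolic_reasoner(35))     # open_window, via the symbol 'hot'
print(subsymbolic_reasoner(35))  # open_window, via arithmetic alone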
 
 
System, multi-agent

A multi-agent system can be defined as a loosely coupled network of problem solvers that interact to solve problems that are beyond the individual capabilities or knowledge of each problem solver.

Sycara, K. (1998). The Many Faces of Agents. AI Magazine 19 (2).
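A minimal sketch of this definition (the agent names and capabilities are invented): no single problem solver covers the whole task, so the solution exists only at the level of the interacting network.

# Loosely coupled problem solvers: each agent has only part of the
# capabilities the task needs, so it is solved through delegation.

AGENTS = {"vision": {"detect"}, "planner": {"plan"}, "arm": {"grasp"}}
TASK = ["detect", "plan", "grasp"]  # beyond any single agent's skills

def solve(task):
    for subtask in task:
        # Interaction: delegate each subtask to some agent whose
        # capabilities cover it.
        agent = next(a for a, caps in AGENTS.items() if subtask in caps)
        print(f"{agent} handles {subtask}")

solve(TASK)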
 
 
 
Turing Test, hierarchy of

[Steven Harnad proposes a hierarchy in order to discuss degrees of indistinguishability in the Turing Test. His proposal consists of five levels:]

T1: Toy models of human total capacity.
T2: Total indistinguishability in symbolic ("pen-pal") performance capacity (see the standard interpretation of the Turing Test).
T3: Total indistinguishability in robotic (including symbolic) performance capacity.
T4: Total indistinguishability in neural (including robotic) properties.
T5: Total physical indistinguishability.

Harnad, S. (2001). Minds, Machines and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information, Special Issue on Alan Turing and Artificial Intelligence.
 
 
 
 
 
 