Program

Schedule

[Schedule grid, Monday to Friday, approx. 09:00-19:00. Sessions: Registration (09:00-10:15); Welcome Presentations; presentation of the Research Center in Social Complexity; Welcome cocktail (18:00); talks by R. Ñanculef & H. Allende, J. Atkinson, R. Goldman, C. Wood, F. Sancho, G. Theraulaz, J.C. Letelier & J. Mpodozis, G. Barrantes, C. Rodriguez, J. Ruiz del Solar, and P. Marquet & C. Rodriguez; Group work I and II; Group work presentations and evaluation; Student Work Bootstrap; Panel I: Robustness vs. Efficiency; Panel II: Emergence vs. Control; Closing.]

Legend (session types):

Plenary talks
Basic talks
Technical talks
Group work
Panels

New Approaches to Computing: Learning from Biology

Ron Goldman

There have been several recent changes in the way we think about complex computer systems. This talk will try to articulate the shift from mathematics to engineering and now to biology, and how this can help us design and write better software.

Mathematics gave us the conception of a program as an abstract idealization that can in theory be made perfect. Practical experience with computer systems soon made it clear that perfection is just not possible, and there was a shift to a more pragmatic engineering focus.

Our systems now are designed to provide long-lived services and they have become much more complex than the types of systems that engineering has successfully dealt with, so we again need new organizational principles. Many computer scientists are looking to biology for inspiration and guidance since living organisms are the most complex systems that we know.

This talk will outline some of the problems with complex computer systems and present some first steps for solving them using ideas from biology. The hardest part of applying this new approach is changing our attitude as to what is important and expecting more from our software.

Back to program

(What) Does the Brain Compute?

Chris Wood

While most scientists agree that the brain “processes information” and many would claim that the brain “computes” in one sense or another, the precise meanings of “information processing” and “computation” in those claims are unclear. In this talk I will address the questions “Does the brain compute?” and “If so, what and how?”. The theory of computation is mainly expressed as abstractions that are independent of any particular physical realization. However, once an abstract computation is actually implemented it becomes a physical phenomenon, governed like all such phenomena by the laws of physics. “Computers are physical systems: the laws of physics dictate what they can and cannot do.” (Lloyd, S. “Ultimate Physical Limits to Computation”, Nature, August 2000). I will focus in particular on the question of whether “computational primitives” exist for the brain that represent first-level abstractions for neural computation in the same sense that binary arithmetic and Boolean algebra are “computational primitives” for Von Neumann architectures.

Back to program

Indirect vs. Direct Control

Ron Goldman

Historically, computer systems have been centralized, using a top-down control structure. As our systems have grown more complex they have become decentralized, and it is no longer possible to have some master controller be in charge. The behavior of the system emerges from the interactions of its components, including new types of bugs and failure modes. Each component needs to do what it can not just for itself, but also for the health of the system as a whole.

This talk will discuss some of the traditional software control methods and describe new approaches more suitable for distributed systems and the problems some people have with accepting them.

Back to program

Membrane Computing

Fernando Sancho

In this session we present the basics of a new model of non-conventional computing: Membrane Computing. This model is inspired by the way living cells process chemical components and interact with their environment in a non-deterministic and parallel manner. The talk will focus on the general features of the model, as well as on the most interesting variants and applications developed around the topic in recent years.
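
As a minimal illustration of how such a system evolves (a sketch under assumed rules and objects, not a formal definition from the talk), a single membrane can be modelled as a multiset of objects rewritten by rules applied in a maximally parallel, non-deterministic way:

```python
# A minimal sketch of one evolution step of a single-membrane P system.
# The rules, objects, and the example multiset are invented for illustration.
import random
from collections import Counter

def applicable(rule_lhs, multiset):
    """A rule is applicable if the membrane contains its whole left-hand side."""
    return all(multiset[obj] >= n for obj, n in rule_lhs.items())

def evolve_step(multiset, rules):
    """Apply rules in a maximally parallel, non-deterministic way: keep
    consuming left-hand sides (chosen at random) until no rule applies,
    then add all produced objects at once."""
    produced = Counter()
    while True:
        usable = [r for r in rules if applicable(r[0], multiset)]
        if not usable:
            break
        lhs, rhs = random.choice(usable)
        multiset.subtract(lhs)   # consume reactants
        produced.update(rhs)     # products only appear once the step is over
    multiset.update(produced)
    return multiset

# Toy rules: a -> bb and ab -> c, applied to the multiset {a:3, b:1}.
rules = [
    (Counter({"a": 1}), Counter({"b": 2})),
    (Counter({"a": 1, "b": 1}), Counter({"c": 1})),
]
membrane = Counter({"a": 3, "b": 1})
print(evolve_step(membrane, rules))
```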

Back to program

Swarm intelligent systems: how social insects provide solutions to complex problems

Guy Theraulaz

Social insects provide us with a powerful metaphor for creating decentralised control systems of simple interacting agents. The emergent collective intelligence of social insects, often called swarm intelligence, resides not in complex individual cognitive abilities but rather in the networks of local interactions that exist among individuals. These swarm intelligent systems possess a number of interesting properties, such as flexibility, robustness, decentralized control and self-organization, that may explain the evolutionary success of social insects. In particular, these kinds of problem-solving systems are well suited to cope with complex and dynamic environments sometimes overloaded with information. These powerful organisation principles can provide new ways to solve similar problems in engineering science. Over the last ten years, a growing number of methods inspired by social insect behavior have been applied to combinatorial and dynamic optimization problems and have proved to be very efficient. Similarly, the powerful organisation principles of insect societies have been used to design distributed algorithms to control the behavior of groups of robots. In this lecture, I will review some recent findings about the way insect colonies manage communication and information, how tasks are allocated among individuals when group size varies, and finally how these insects build complex 3D nests and robust communication networks.
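
As one concrete illustration of this family of methods (a minimal sketch with an invented toy graph and parameters, not an algorithm presented in the lecture), an ant-colony-style heuristic can find short paths by letting simulated ants deposit and follow pheromone:

```python
# Ant-colony-style shortest-path heuristic on a toy weighted graph.
import random

graph = {  # node -> {neighbour: edge length}
    "A": {"B": 1.0, "C": 3.0},
    "B": {"A": 1.0, "D": 1.0},
    "C": {"A": 3.0, "D": 1.0},
    "D": {"B": 1.0, "C": 1.0},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def walk(start, goal):
    """One ant walks from start to goal, choosing edges with probability
    proportional to pheromone / edge length."""
    node, path, length = start, [start], 0.0
    while node != goal:
        options = [n for n in graph[node] if n not in path] or list(graph[node])
        weights = [pheromone[(node, n)] / graph[node][n] for n in options]
        nxt = random.choices(options, weights)[0]
        length += graph[node][nxt]
        path.append(nxt)
        node = nxt
    return path, length

def ant_colony(start, goal, ants=50, evaporation=0.1):
    best = None
    for _ in range(ants):
        path, length = walk(start, goal)
        for edge in pheromone:            # evaporate everywhere...
            pheromone[edge] *= (1 - evaporation)
        for u, v in zip(path, path[1:]):  # ...then reinforce this path
            pheromone[(u, v)] += 1.0 / length
        if best is None or length < best[1]:
            best = (path, length)
    return best

print(ant_colony("A", "D"))  # typically the short route A-B-D of length 2
```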

Back to program

Models of Biological Organization

Juan Carlos Letelier

Since Maturana and Varela's contribution, which states that the central characteristic of biological organization is closure (or circular organization), this idea, known worldwide as autopoiesis, has deeply penetrated disciplines such as cybernetics, systems engineering and epistemology, but in biology its impact has been decidedly lower. One reason for this limited impact is the difficulty of clarifying notions such as processes and process networks, and of determining the properties of circularly-organized systems. In this talk we will explore aspects of the theory of circularly-organized autopoietic systems. We will analyze the similarities (and differences) with the theory of (M,R) systems proposed by Robert Rosen, with an emphasis on biochemical networks, from the viewpoint of metabolic closure. Finally we will discuss how autopoietic systems can be viewed as anticipatory computational systems.

Back to program

Robustness in Robotics

Javier Ruiz del Solar

A robot should be able to make autonomous decisions in dynamic environments. Thus, one can say that a machine that behaves autonomously is a robot. But why is it so difficult to build autonomous machines, in contrast to building autonomous software agents? The main reason is the large variability that we find in the real world. For instance, a robot has to deal with variable illumination, non-regular surfaces, noisy sensors and actuators, variability in the objects to be perceived (e.g. every face is unique), perceptual aliasing produced by the mapping from 3D objects onto 2D sensors, etc. Variability can be tackled using robust methodologies for data processing and analysis. In this talk these issues will be addressed, and examples of robust image analysis and robot self-localization methodologies will be presented.
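
To make "robust" concrete, here is an illustrative sketch of one robust estimator, a RANSAC-style line fit that tolerates gross outliers such as sensor glitches; the data, tolerance and trial count are invented, and the talk's own methods may of course differ:

```python
# RANSAC-style robust line fitting: repeatedly fit a line through two random
# points and keep the model supported by the most inliers.
import random

def ransac_line(points, trials=200, tol=0.5):
    best_model, best_inliers = None, []
    for _ in range(trials):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                      # cannot fit a slope through these
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Points on y = 2x + 1 plus two gross outliers (e.g. sensor glitches).
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -30.0)]
model, inliers = ransac_line(pts)
print("estimated line:", model, "inliers:", len(inliers))
```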

Back to program

Introduction to Multi-Agent Systems

John Atkinson

An agent can be a physical or virtual entity that can act, perceive its environment and communicate with others; it is autonomous and has the skills to achieve its goals and tendencies. It lives in a multi-agent system (MAS), which contains an environment, objects and agents (the agents being the only entities that act), relations between all the entities, a set of operations that can be performed by the entities, and the changes of the universe over time as a result of these actions.
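
A minimal sketch of those ingredients (the toy world, its parameters and the agents' behaviour are invented for illustration):

```python
# Environment + agents + operations + changes of the universe over time.
import random

class Environment:
    """A toy 1-D world: a ring of cells, some of which hold a resource."""
    def __init__(self, size=10, resources=4):
        self.cells = [False] * size
        for i in random.sample(range(size), resources):
            self.cells[i] = True

class Agent:
    """An agent perceives its current cell, acts on it, and moves."""
    def __init__(self, position=0):
        self.position = position
        self.collected = 0

    def step(self, env):
        if env.cells[self.position]:          # perceive
            env.cells[self.position] = False  # act: harvest the resource
            self.collected += 1
        self.position = (self.position + random.choice([-1, 1])) % len(env.cells)

env = Environment()
agents = [Agent(position=i) for i in (0, 5)]
for _ in range(50):                           # the universe changes over time
    for agent in agents:
        agent.step(env)
print([a.collected for a in agents])
```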

The significance of MAS is increasing due to the growing need to coordinate distributed operations, their increasing use to simulate complex systems in a decentralised way, and the developing communication infrastructure. The aim of this lecture is to give participants a grounding in Multi-Agent Systems – one of the most active areas of Computer Science and Artificial Intelligence. We present the basic notions of the domain, insisting on methodological issues of how such systems may be conceived and realised. We show how the MAS approach differs from more traditional methodologies, and we identify the kinds of problems and the particular domains in which MAS seems to be the most promising paradigm to adopt. We approach MAS by splitting them into Agents, Environments, Interactions, Organisations and Dynamics. An important part of the presentation deals with the different types of “Interactions”, starting from game-theoretic interactions and moving on to the communicational aspects of MAS and interaction protocols.

Back to program

Emergence of Language in Agent Systems

John Atkinson

Communicating systems play a key role in Multi-Agent Systems (MAS). They also constitute the grounding for further negotiation and for enabling shared tasks. There are plenty of well-established protocols for this, borrowed from network protocols, agent technology and game theory. However, these general protocols are usually not suitable for dynamic environments in which the situations that agents face may not be known in advance. This makes it necessary to provide an adaptive communication language for both agent-agent and agent-environment interactions. In order to achieve such adaptation, each agent must be endowed with group learning abilities to develop new structures in its language. As a consequence, a common interface should be provided for every agent in the group so that the meanings to be transmitted become unambiguous.

There are many interesting applications in which a language acquisition mechanism may be useful, including intelligent robotics, internet search, artificial life, etc. This lecture explores the hypothesis that language, as an emergent process, can arise from the interaction of local agents in a system with no explicit teaching or internal knowledge. That is, agents are only guided by external influences (i.e., information perceived by sensors from the environment) and internal stimuli (i.e., information perceived from other agents).
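
A classic toy model of this kind of emergence is the naming game; the sketch below (an illustration, not necessarily the model discussed in the talk) shows agents with no shared vocabulary converging on a common word through purely local interactions:

```python
# Minimal naming game: in each round a random speaker utters a word to a
# random hearer; on success both keep only that word, on failure the hearer
# learns it. No teacher, no global knowledge.
import random

def naming_game(n_agents=20, rounds=3000):
    vocab = [[] for _ in range(n_agents)]         # each agent's known words
    for _ in range(rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:                    # invent a word if needed
            vocab[speaker].append("w%d" % random.randrange(10**6))
        word = random.choice(vocab[speaker])
        if word in vocab[hearer]:                 # success: align on this word
            vocab[speaker] = [word]
            vocab[hearer] = [word]
        else:                                     # failure: hearer learns it
            vocab[hearer].append(word)
    return vocab

final = naming_game()
print(len({tuple(v) for v in final}))  # usually 1: a shared word has emerged
```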

Back to program

Self Organization

Guy Theraulaz

The roots of swarm intelligence are deeply embedded in the biological study of self-organized behaviors in social insects. From the routing of traffic in telecommunication networks to the design of control algorithms for groups of autonomous robots, the collective behaviors of these animals have inspired many of the foundational works in this emerging research field. Here I will review the main biological principles that underlie the organization of insects' colonies. I will begin with some reminders about the decentralized nature of such systems and describe the underlying mechanisms of complex collective behaviors of social insects, from the concept of stigmergy to the theory of self-organization in biological systems. I will emphasize in particular the role of interactions and the importance of bifurcations that appear in the collective output of the colony when some of the system's parameters change. I will also address the role of modulations of individual behaviors by disturbances (either environmental or internal to the colony) in the overall flexibility of insect colonies.
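
One standard way to see such a bifurcation is the two-branch choice model, in which each ant picks a branch with probability proportional to (k + x)^n, where x is the number of previous ants on that branch; the sketch below uses assumed parameter values and is only an illustration, not a model from the lecture:

```python
# Two-branch choice: the nonlinearity n controls whether the colony
# breaks symmetry and collectively selects one branch.
import random

def choose_branches(n_ants=1000, k=20, n=2):
    a = b = 0  # ants that have taken each branch so far (pheromone proxy)
    for _ in range(n_ants):
        p_a = (k + a) ** n / ((k + a) ** n + (k + b) ** n)
        if random.random() < p_a:
            a += 1
        else:
            b += 1
    return a, b

# With n = 2 the colony usually piles onto one branch (symmetry breaking);
# with n = 1 the split typically stays close to 50/50.
print(choose_branches(n=2))
print(choose_branches(n=1))
```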

Back to program

Generating Useful Diversity in Computing Systems

Gabriela Barrantes

Computer systems are becoming more homogeneous at many different levels. This is expected and beneficial because of issues such as interoperability, maintenance, and standardization. However, it also creates fertile ground for the propagation of attacks that rely on the existence of particular system features. One of the approaches proposed to mitigate this problem is to add artificial diversity to computer systems. This is a difficult task, and it is necessary to evaluate whether the potential gains outweigh the overheads. Therefore, finding the hot spots where diversity will confer the most benefit is critical. In this talk we will present current work on diversity defenses, and a preliminary evaluation of the benefits they provide.
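
One frequently cited form of artificial diversity is instruction-set randomization, in which each process runs code scrambled with its own secret key so that injected code decodes to garbage. The sketch below is only a toy illustration of that general idea (the byte values and key handling are invented), not the defenses evaluated in the talk:

```python
# Toy model of instruction-set randomization: program bytes are XOR-scrambled
# with a per-process key at load time, and only the "CPU" descrambles them,
# so code injected without the key decodes to garbage.
import os

def scramble(code: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(code))

def execute(memory: bytes, key: bytes) -> bytes:
    """Model of the randomized 'CPU': descramble before decoding."""
    return scramble(memory, key)

key = os.urandom(16)                 # fresh key for each process/machine
program = b"\x90\x90\xc3"            # the legitimate code as compiled
memory = scramble(program, key)      # what the loader actually puts in memory

print(execute(memory, key) == program)    # True: legitimate code still runs
injected = b"\x90\x90\xc3"                # attacker's payload, not scrambled
print(execute(injected, key) == program)  # almost surely False: it breaks
```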

Back to program

Organization in Social & Ecological Systems

Carlos Rodriguez

Hierarchically implemented rules of behavior (institutions) and decentralized mechanisms of control (morality, social pressure) are usually assumed to operate as functionally equivalent mechanisms of social coordination. As a result, the interplay between them is neglected. Recent experimental results suggest that assuming independence between these two mechanisms leads to an inadequate account of social responses when new institutional regimes are implemented. Specifically, two opposite effects have been reported: a) an internalization effect, by which the agent assimilates the norm; and b) a crowding-out effect, where the implementation of incentives weakens the agents’ moral dispositions. In this context, the challenge becomes to understand which sort of social dynamics might explain these alternative processes and under which conditions each effect will prevail.

Self-organization by spatial self-stabilization

S.R. Abades & P.A. Marquet

Species exploit resources inside well-defined regions of space. Thus, geographical ranges define a spatial envelope where individuals must balance the rates at which resources are harvested in order to persist in time. In this context, populations face the problem of finding an optimal pattern of space occupancy that maximizes the returns of resource consumption. This task defines a problem of distributed computation that species usually solve in ecological time, creating more or less static patterns of space occupancy that we recognize as areas of permanent residency. We propose that these areas may correspond to self-stabilized structures characterized by a mixture of source and sink patches, whose configurations provide spatial solutions that increase the likelihood of a species persisting. We explored our conjecture by means of a spatially explicit simulation model in which organisms exploit their environment randomly at certain rates. We found that, under some circumstances, self-stable patterns of space occupancy do in fact appear, defining persistent regions of space where source dynamics dominate. In addition, when these structures arise, they also exhibit fractal properties similar to those recently shown to exist in breeding birds. Some hyperscaling issues are also discussed.
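
A rough sketch in the spirit of such a model (the grid, rates and update rule below are assumptions for illustration, not the authors' model): cells differ in quality, occupants of good cells persist and colonize neighbours (sources), occupants of poor cells tend to vanish (sinks), and we tally which cells end up persistently occupied:

```python
# Toy spatially explicit occupancy model with source- and sink-like cells.
import random

SIZE, STEPS = 30, 1000
quality = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
occupied = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]
tally = [[0] * SIZE for _ in range(SIZE)]          # per-cell occupancy count

for _ in range(STEPS):
    new = [row[:] for row in occupied]
    for i in range(SIZE):
        for j in range(SIZE):
            if occupied[i][j]:
                tally[i][j] += 1
                if random.random() > quality[i][j]:        # poor cell: die out
                    new[i][j] = False
                if random.random() < 0.5 * quality[i][j]:  # colonize a neighbour
                    ni = (i + random.choice([-1, 0, 1])) % SIZE
                    nj = (j + random.choice([-1, 0, 1])) % SIZE
                    new[ni][nj] = True
    occupied = new

persistent = sum(tally[i][j] > 0.9 * STEPS
                 for i in range(SIZE) for j in range(SIZE))
print("cells occupied >90% of the time:", persistent)
print("mean quality of those cells:",
      sum(quality[i][j] for i in range(SIZE) for j in range(SIZE)
          if tally[i][j] > 0.9 * STEPS) / max(1, persistent))
```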

Back to program

Machine Learning and Ensemble Algorithms

Ricardo Ñanculef & Hector Allende

Machine learning aims to design and characterize systems capable of automatically improving from the observation of a set of examples. The task of a learning algorithm is hence to obtain a hypothesis that explains these observations and can be used to predict future cases; that is, the algorithm generalizes.

In recent years, a great deal of enthusiastic research in the machine learning community has been directed towards the study of ensemble-based systems. The idea is to solve a task by combining a group of simple, but possibly weak, solutions instead of carefully designing a complex solution in a single step. This approach has been applied to a great variety of problems such as regression estimation, classification, learning from data streams, clustering, graphical models, and so on.

In spite of the algorithmic developments in the field, there is no clear agreement about the principles that explain ensemble performance and relate the individual behaviors to the group behavior. Some ideas point to the diversity of the component members: since we have imperfect solutions, they should be different, so that at least some of them are good where the others are bad. Other ideas point to the stability or robustness of the resulting algorithm and its relation to the generalization ability of the composite hypothesis. The aim of this presentation is to give a general background on machine learning principles and ensemble methods.
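
As a minimal illustration of the ensemble idea (a sketch with toy data and invented weak learners, not an algorithm from the presentation), bagging trains each simple model on a bootstrap resample of the data and lets the group decide by majority vote:

```python
# Bagging of decision stumps on a toy 1-D, two-class problem.
import random

def train_stump(data):
    """Pick the threshold on x (and its orientation) that best separates
    the two classes in this sample."""
    best = None
    for threshold, _ in data:
        err = sum((x > threshold) != label for x, label in data)
        err = min(err, len(data) - err)                 # allow either orientation
        sign = sum((x > threshold) == label for x, label in data) >= len(data) / 2
        if best is None or err < best[1]:
            best = ((threshold, sign), err)
    return best[0]

def predict(stump, x):
    threshold, sign = stump
    return (x > threshold) == sign

def bagging(data, n_models=25):
    models = [train_stump(random.choices(data, k=len(data)))  # bootstrap samples
              for _ in range(n_models)]
    return lambda x: sum(predict(m, x) for m in models) > n_models / 2

# Toy data: label is True when x > 5, with one noisy label.
data = [(x, x > 5) for x in range(11)]
data[3] = (3, True)                                     # label noise
ensemble = bagging(data)
print([ensemble(x) for x in (1, 4, 6, 9)])   # typically: False, False, True, True
```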

Back to program