Friday 24 October 2014

GUIDED SELF-ORGANIZATION: Making the Invisible Hand Work


by Dirk Helbing [1] 



This is the fourth in a series of blog posts that form chapters of my forthcoming book Digital Society. Last week's chapter was titled: CRYSTAL BALL AND MAGIC WAND: The Dangerous Promise of Big Data

In an increasingly complex and interdependent world, we are faced with situations that are barely predictable and quickly changing. And even if we had all the information and means at our disposal, we couldn’t hope to compute, let alone engineer, the most efficient or best state of the system: the computational requirements are just too massive. That’s why the complexity of such systems undermines the effectiveness of centralized planning and traditional optimization strategies. Such efforts might not only be ineffective but can make things even worse. At best we end up “fighting fires” – struggling to defend ourselves against the most disastrous outcomes.

If we're to have any hope of managing complex systems and keeping them from collapse or crisis, we need a new approach. Whether or not the quote is apocryphal, what Einstein allegedly said holds true here: "We cannot solve our problems with the same kind of thinking that created them." What other options do we have? The answer is perhaps surprising: we have to step back from centralized top-down control, which often ends in failed brute-force attempts to impose a certain behavior. However, as I will now explain, we can find new ways of letting the system work for us.

This means that we should implement principles such as distributed bottom-up control and guided self-organization. What are these principles about and how do they work? Self-organization means that the interactions between the components of the system spontaneously lead to a collective, organized and orderly mode of behavior. That does not, however, guarantee that the state of the system is one we might find desirable, and that's why self-organization may need some "guidance."



Distributed control means that, if we wish to guide the system towards a certain desirable mode of behavior, we must do so by applying the guiding influences in many "local" parts of the system, rather than trying to impose a single global behavior on all the individual components at once. The way to do this is to help the system adapt locally to the desired state wherever it shows signs of deviating. This adaptation involves a careful and judicious choice of local interactions. Guided self-organization thus entails modifying the interactions between the system components where necessary while intervening as little and as gently as possible, relying on the system's capacity for self-organization to attain a desired state.

When and why are these approaches superior to conventional ones? For the sake of illustration, I will start with the example of the brain and then turn to systems such as traffic and supply chains. I will show that one can actually reduce traffic jams based on distributed real-time control, but it takes the right approach. In the remainder of the book, we will explore whether and how these success principles might be extended from technological to economic and even social systems.

The miracle of self-organization


Our bodies represent perfect examples of the virtues of self-organization in generating emergent functions from the interactions of many components. The human brain, in particular, is made up of almost a hundred billion information-processing units, the neurons. Each of these is, on average, connected to about a thousand other neurons, and the resulting network exhibits properties that cannot be understood by looking at single neurons: in our case, not just coordination, impulses and instincts, but also the mysterious phenomenon of consciousness. And yet, even though a brain is much more powerful than today's computers (which are designed in a top-down way and not self-organized), it consumes less energy than a typical light bulb! This shows how efficient the principle of self-organization can be.

In the previous chapter, we saw that the dynamical behavior of complex systems – how they evolve and change over time – is often dominated by the interactions between the system components. That’s why it is hard to predict the behaviors that will emerge – and why it is so hard to control complex systems. But this property of interaction-based self-organization is also one of the great advantages of complex systems, if we just learn to understand and manage them.

System behavior that emerges by self-organization of a complex system's components isn’t random, nor is it totally unpredictable. It tends to give rise to particular, stable kinds of states, called "attractors," because the system seems to be drawn towards them. For example, the figure below shows six typical traffic states, where each of the depicted congestion patterns is an attractor. In many cases, including freeway traffic, we can understand and predict these attractor states using simplified computer models of the interactions between the components (here: the cars). If the system is slightly perturbed, it will usually tend to return to the same attractor state. To some extent this makes the system resilient to perturbations. Large perturbations, however, will drive the system towards a different attractor: another kind of self-organized, collective behavior, for example, a congested traffic state rather than free traffic flow.

The Physics of Traffic

Contrary to what one might expect, traffic jams are not just vehicle queues that form behind bottlenecks. Traffic scientists were amazed when, beginning in the 1990s, they discovered the large variety and complexity of empirical congestion patterns. The crucial question is whether such patterns are understandable and predictable phenomena, such that we can find new ways to avoid congestion. In fact, there is now a theory that allows one to explain all traffic patterns as composites of elementary congestion patterns (see figure above). This theory can even predict the size and delay times caused by congestion patterns – it is now widely regarded as one of the great successes of complexity theory.

I started to develop this theory when I was working at the University of Stuttgart, Germany, together with Martin Treiber and others. We studied a model of freeway traffic in which each vehicle was represented by a computer "agent" – a driver-vehicle unit moving along the road in a particular direction with a preferred speed, which would, however, slow down whenever necessary to avoid a collision. Thus the model attempted to "build" a picture of traffic flow from the bottom up, based on simple interaction rules between the individual agents, the driver-vehicle units. Based on this model, we could run computer simulations to deduce the emergent outcomes in different kinds of traffic situations.
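To make this concrete, here is a minimal Python sketch of such an agent's acceleration rule, written in the spirit of the Intelligent Driver Model that came out of this work. The default parameter values are merely illustrative, not a calibration of any real freeway:

```python
import numpy as np

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=1.5, s0=2.0):
    """Acceleration of a driver-vehicle agent (Intelligent Driver Model style).

    v  : own speed [m/s]          gap: distance to the leader [m]
    dv : speed difference to the leader, v - v_leader [m/s]
    v0 : preferred speed; T: desired time gap; a/b: maximum
         acceleration / comfortable deceleration; s0: minimum gap.
    All default values are illustrative, not a calibration.
    """
    # Desired dynamical gap: grows with speed and with the closing rate.
    s_star = s0 + np.maximum(0.0, v * T + v * dv / (2.0 * np.sqrt(a * b)))
    # Free-road acceleration, reduced by the interaction with the leader.
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)
```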

For example, we simulated a multi-lane freeway with a bottleneck created by an on-ramp, where additional vehicles entered the freeway. At low vehicle densities, traffic flow recovered even from large perturbations in the flow such as a massive vehicle platoon. In sharp contrast, at medium densities even the slightest variation in the speed of a vehicle triggered a breakdown of the flow – the "phantom traffic jams" we discussed before. In between, however, there was a range of densities (called "bistable" or "metastable"), where small perturbations faded away, while perturbations larger than a certain size (the "critical amplitude") caused a traffic jam.
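The metastability described here can be probed with a toy experiment. The sketch below simplifies the setup (a circular road instead of an on-ramp, with illustrative parameters), reuses the acceleration function from the previous sketch, perturbs one car, and checks whether the disturbance fades away or grows into a stop-and-go wave:

```python
def simulate_ring(n_cars=40, road_length=1000.0, v_init=12.0,
                  kick=5.0, car_length=5.0, dt=0.25, steps=4000):
    """Toy ring-road experiment: one car is briefly slowed by `kick` m/s.

    Returns the spread of speeds at the end: a small spread means the
    perturbation has faded; a large spread signals a stop-and-go wave.
    Uses idm_acceleration() from the previous sketch.
    """
    x = np.linspace(0.0, road_length, n_cars, endpoint=False)
    v = np.full(n_cars, v_init)
    v[0] = max(0.0, v_init - kick)                  # the perturbation
    for _ in range(steps):
        gap = (np.roll(x, -1) - x) % road_length - car_length
        dv = v - np.roll(v, -1)                     # closing rate to leader
        acc = idm_acceleration(v, np.maximum(gap, 0.1), dv)
        v = np.maximum(v + acc * dt, 0.0)           # no driving backwards
        x = (x + v * dt) % road_length
    return v.max() - v.min()

# Scanning n_cars (density) and kick (perturbation size) probes the
# stable, metastable and unstable regimes described in the text.
print(simulate_ring(kick=1.0), simulate_ring(kick=8.0))
```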

Interestingly, when varying the traffic flows on the freeway and the on-ramp in the presence of a small perturbation, we found all the empirical congestion patterns shown above. In essence, most traffic jams were caused by a combination of three elements: a bottleneck, high traffic flows, and a perturbation in the flow. Moreover, the different traffic states could be arranged in a so-called "phase diagram" (see below). The diagram schematically presents the flow conditions under which each type of pattern remains stable, and the boundaries that separate these regimes. Empirical observations nicely support this theoretical classification of possible traffic patterns.
A capacity drop when traffic flows best!

Can we use this understanding to improve traffic flows? To overcome congestion, we must first recognize that the behavior of traffic flows can be counter-intuitive, as the "faster-is-slower effect" shows (see Information Box 1). Imagine a stretch of freeway joined by an on-ramp, on which the traffic density is relatively high but the flow is smooth, free of jams, and not prone to jam formation triggered by small disturbances. Now suppose we briefly reduce the density of vehicles entering this stretch of freeway. You might expect that traffic would flow even better. But it doesn't. Instead, vehicles accelerate into the area of lower density – and this behavior can trigger a traffic jam! Just when the entire road capacity is urgently needed, we find a breakdown of capacity, which can last for hours and can increase travel times by a factor of two, five, or ten. A breakdown may even be triggered by the perturbation created by a simple truck overtaking maneuver.

It is a cruel irony that traffic flow becomes unstable when the maximum throughput of vehicles is reached – that is, exactly in the most efficient state of operation from an "economic" point of view. To avoid traffic jams, therefore, we would have to stay sufficiently far away from this "maximally efficient" traffic state. But doesn't this mean we must restrict ourselves to using the roads at considerably less than their theoretical capacity? No, it doesn't – not if we build on guided self-organization.

Avoiding traffic jams


Traffic engineers have sought ways to improve traffic flows at least since the early days of computers. The classical "telematics" approach to reducing congestion is based on the concept of a traffic control center that collects information from many traffic sensors, then centrally determines the best strategy and implements it in a top-down way – for instance, by introducing variable speed limits on motorways or using traffic lights at junctions. Recently, however, researchers and engineers have started to explore a different approach: decentralized and distributed concepts, relying on bottom-up self-organization. This can be enabled, for example, by car-to-car communication.

In fact, I have been involved in the development of a new traffic assistance system that can reduce congestion. From the faster-is-slower effect, we can learn that, in order to avoid or delay the breakdown of traffic flows and to use the full freeway capacity, it is important to smooth out perturbations of the vehicle flow. With this in mind, we have developed a special kind of adaptive cruise control (ACC) system in which the control attempts are distributed over a certain percentage (e.g. 30%) of ACC-equipped cars – no traffic control center is needed. The ACC system accelerates and decelerates a car automatically based on real-time data from a radar sensor, which measures the distance to the car ahead and the relative velocity. Radar-based ACC systems existed before, but in contrast to conventional ones, ours does not just aim to increase the driver's comfort by eliminating sudden changes in speed. It also increases the stability and capacity of the traffic flow by taking into account what other nearby vehicles are doing, thereby supporting a favorable self-organization of the entire traffic flow. This is why we call it a traffic assistance system rather than a driver assistance system.

The distributed control approach of the underlying ACC system is inspired by fluid flows, which do not suffer from congestion: when we narrow a garden hose, the water simply flows faster through the bottleneck. To sustain the traffic flow, one can either increase the density or the speed of vehicles, or both. The ACC system we developed with the Volkswagen company imitates the natural interactions and acceleration of driver-vehicle units, but in order to increase the vehicle flow where needed, it slightly reduces the time gap between successive vehicles. Additionally, our special ACC system increases the acceleration of vehicles out of a traffic jam in order to stabilize the flow.
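Schematically, the idea can be captured in a few lines. The state labels and scaling factors below are assumptions for illustration – not the calibration of the actual Volkswagen system:

```python
# Driving-parameter adaptation for a traffic-adaptive ACC (illustrative).
BASE_PARAMS = {"T": 1.5, "a": 1.0}   # time gap [s], max acceleration [m/s^2]

def adapt_parameters(traffic_state):
    """Return car-following parameters for the current local situation."""
    if traffic_state == "bottleneck":   # high flow: slightly shorter time gap
        return {"T": 0.9 * BASE_PARAMS["T"], "a": BASE_PARAMS["a"]}
    if traffic_state == "jam_outflow":  # leaving a jam: accelerate out harder
        return {"T": BASE_PARAMS["T"], "a": 1.3 * BASE_PARAMS["a"]}
    return dict(BASE_PARAMS)            # free traffic: drive naturally
```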

In essence, we modify the driving parameters determining the acceleration and interactions of cars such that the traffic flow is increased and stabilized. The real-time measurement of distances and relative velocities by radar sensors allows the cars to adjust their speeds in a way that is superior to human drivers. This traffic assistance system, which I developed together with Martin Treiber, Arne Kesting, Martin Schönhof, Florian Kranke, and others, was also successfully tested under real traffic conditions.

Cars with collective intelligence


A key issue for the operation of the adaptive cruise control is to identify where it needs to kick in and alter the way a vehicle is being driven. These locations can be figured out by connecting the cars into a communication network. Many new cars contain a lot of sensors that can be used to give them "collective intelligence." They can perceive their driving state and features of their local environment (i.e. what nearby cars are doing), communicate with neighboring cars (through wireless inter-vehicle communication), make sense of the situation they are in (e.g. assess the surrounding traffic state), take autonomous decisions (e.g. adjust driving parameters such as the speed), and give advice to drivers (e.g. warn of a traffic jam behind the next curve). In a sense, such vehicles also acquire "social" abilities: they can coordinate their movements with those of others.
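As an illustration of these abilities, the sketch below shows how a car might turn messages received from vehicles ahead into a jam warning. The message format and the thresholds are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Beacon:
    """A periodic broadcast from a nearby car (hypothetical message format)."""
    position: float   # [m] along the road, increasing downstream
    speed: float      # [m/s]

def jam_warning(own_position: float, received: List[Beacon],
                horizon: float = 1000.0,
                jam_speed: float = 5.0) -> Optional[float]:
    """Distance to congestion ahead, or None if the road seems clear.

    Rule of thumb (illustrative thresholds): the nearest beacon within
    `horizon` meters downstream reporting a speed below `jam_speed`
    marks the tail of a jam -- possibly behind the next curve, before
    the driver can see it.
    """
    slow = [b.position - own_position for b in received
            if 0.0 < b.position - own_position < horizon
            and b.speed < jam_speed]
    return min(slow) if slow else None
```

Such a warning could feed the parameter adaptation sketched earlier, or simply be displayed to the driver.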

According to our computer simulations, even if only a small proportion of cars is equipped with such ACC systems, this can have a significant positive effect on the overall traffic situation. In contrast, most driver assistance systems today still operate in a "selfish" way rather than creating better flow conditions for everyone. Our special, "social" solution approach, which seeks to achieve systemic benefits through the collective effects of local interactions, is a central feature of what I call Socio-Inspired Technologies.


A simulation movie we have created illustrates how effective this approach can be (see http://www.youtube.com/watch?v=xjodYadYlvc). While the ACC system is turned off, the traffic develops the familiar and annoying stop-and-go waves of congestion. When seen from a bird’s-eye view, it becomes evident that the congestion originates from small perturbations triggered by vehicles attempting to enter the freeway via an on-ramp. But once the ACC system is turned on, these stop-and-go waves vanish and traffic flows freely. In other words, modifying the interactions of vehicles based on real-time measurements allows us to produce coordinated and efficient flows in a self-organized way. Why? Because we have changed the interaction rules between cars based on real-time adaptive feedback, handing over responsibility to the autonomously driving system. With the impending advent of “driverless cars” such as those being introduced by Google, it’s clearer than ever that this sort of intervention is no fantasy at all.

Guided self-organization


So we see that self-organization may have favorable results (such as free traffic flows) or undesirable ones (such as congestion), depending on the nature of the interactions between the components of the system. Only a slight modification of these interactions can turn bad outcomes into good ones. Therefore, in complex dynamical systems, "interaction design" – also known as "mechanism design" – is the secret of success.

Self-organization based on modifications of interactions or institutional settings – so-called "guided self-organization" – utilizes the hidden forces acting in complex dynamical systems rather than opposing them. In a sense, the superiority of this approach is based on similar principles to those of Asian Martial Arts, where the forces created by the opponent are turned to one’s own advantage. Let’s have a look at another example: how best to coordinate traffic lights.

Self-organizing traffic lights


Compared to freeway flows, urban traffic flows pose additional challenges. Here the roads are connected into complex networks with many junctions, and the problem is mainly how to coordinate the traffic at all these intersections. When I began to study this difficult problem, my goal was to find an approach that would work not only when conditions are ideal but also when they are impaired or complicated, for example because of irregular road networks, accidents or building work. Given the large variability of urban traffic flows over the course of days and seasons, the best approach turned out to be one that adapts flexibly to the prevailing local travel demands, rather than one that is planned or optimized for "typical" (average) traffic flows. Rather than imposing a certain control scheme for switching traffic lights in a top-down way, as is done by traffic control centers today, I concluded that it is better if the lights respond adaptively to the actual local traffic conditions. In this self-organizing traffic-light control, the actual traffic flows determine, in a bottom-up way and in real time, how the lights switch.

The local control approach was inspired by my previous experience with modeling pedestrian flows. These tend to show oscillating flow directions at bottlenecks, which look as if they were caused by “pedestrian traffic lights”, even though they are not. The oscillations are in fact created by changes in the crowd pressure on both sides of the bottleneck – first the crowd surges through the constriction in one direction, then in the other. This, it turns out, is a relatively efficient way of getting the people through the bottleneck. Could road intersections perhaps be understood as a similar kind of bottleneck, but with more flow directions? And could flows that respond similarly to the local traffic “pressure” perhaps generate efficient self-organized oscillations, which could in turn control the switching sequences of the traffic lights? Just at that time, a student named Stefan Lämmer knocked at my door and asked to write a PhD thesis in my team about this challenging problem. So we started to investigate this.

How to outsmart centralized control


How does self-organizing traffic light control work, and how successful is it? Let’s first look at how it is currently done. Many urban traffic authorities today use a top-down approach coordinated by some control center. Supercomputers try to identify the optimal solution, which is then implemented as if the traffic center were a "benevolent dictator." A typical solution creates "green waves" of synchronized lights. However, in large cities even supercomputers are unable to calculate the optimal solution in real time – it's too hard a computational problem, with just too many variables to track and calculate.

So the traffic-light control schemes, which are applied for certain time periods of the day and week, are usually optimized "offline." This optimization assumes representative (average) traffic flows for a certain day and time, or during events such as soccer matches. In the ideal case, these schemes are then additionally adapted to the actual traffic situation, for example by extending or shortening the green phases. However, at a given intersection the periodicity of the switching scheme (the order in which the road sections get a green light) is usually kept the same. Within a particular control scheme, it's mainly the length of the green times that is altered, while the order of switching just changes from one applied scheme to another.

Unfortunately, the efficiency of even the most sophisticated of these top-down optimization schemes is limited by the fact that the variability of traffic flows is so large that average traffic flows at a particular time and place are not representative of the traffic situation on any particular occasion at that time and place. The variation in the number of cars behind a red light, or in the fraction of vehicles turning right or going straight, is more or less as big as the corresponding average values. This implies that a pre-planned traffic light control scheme isn't optimal at any time.

So let us compare this classical top-down approach carried out by a traffic control center with two alternative ways of controlling traffic lights based on the concept of self-organization (see illustration below). The first, called selfish self-organization, assumes that each intersection separately organizes its switching sequence so as to strictly minimize the travel times of the cars on the road sections approaching it. The second, called other-regarding self-organization, also tries to minimize the travel times of these cars, but above all it aims to clear vehicle queues that exceed some critical length. Hence, this strategy also takes into account the implications for neighboring intersections.
How successful are the two self-organizing schemes compared to the centralized one? We'll assume that at each intersection there are detectors that measure the outflows from its road sections and also the inflows into these road sections coming from the neighboring intersections (see illustration below). The information exchange between neighboring intersections allows short-term predictions of the arrival times of vehicles. The locally self-organizing traffic lights adapt to these predictions in a way that tries to keep vehicles moving and to minimize waiting times.

When the traffic flow is sufficiently far below the intersection capacity, both self-organization schemes produce well-coordinated traffic flows that are much more efficient than top-down control: the resulting queue lengths behind red traffic lights are much shorter (in the figure below, compare the violet dotted and blue solid lines with the red dashed line). However, for selfish self-organization, the process of local optimization only generates good results below a certain traffic volume. Long before the maximum capacity utilization of an intersection is reached, the average queue length tends to get out of control, as some road sections with small traffic flows are not served frequently enough. This creates spillover effects – congestion at one junction leaks to its neighbors – and obstructs upstream traffic flows, so that congestion quickly spreads over large parts of the city in a cascade-like manner. The resulting state may be viewed as a congestion-related "tragedy of the commons," as the available intersection capacities are no longer used efficiently. Due to this coordination failure between neighboring intersections, when traffic volumes are high, today's centralized traffic control can produce better flows than selfish self-organization – and that is actually the reason why we run traffic control centers.
Yet by changing the way in which intersections respond to information about arriving vehicle flows, it becomes possible to outperform top-down optimization attempts over the whole range of traffic volumes that an intersection can handle (see the solid blue line). To achieve this, the rule of waiting time minimization must be combined with a second rule, which specifies that a vehicle queue must be cleared immediately whenever it reaches a critical length (that is, a certain percentage of the road section). This second rule avoids spillover effects that would obstruct neighboring intersections and thereby establishes an "other-regarding" form of self-organization. Notice that at high traffic volumes, both the local travel time minimization (dotted violet line above) and the clearing of long queues (black dash-dotted line) perform badly in isolation, but when combined, they produce a superior form of coordination. One would hardly expect that two bad strategies could produce the best results when combined!
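In schematic form, the combined switching rule can be expressed in a few lines of code. The sketch below renders the two rules just described; the data structures and the critical threshold are illustrative, not the production algorithm:

```python
def next_green(queues, pressures, critical=0.8):
    """Choose the approach to serve next at one intersection (schematic).

    queues   : approach -> queue length as a fraction of the road
               section's storage capacity
    pressures: approach -> expected waiting time saved per second of
               green, derived from the short-term arrival predictions
    critical : queue fraction above which spillover would block the
               upstream intersection (illustrative value)
    """
    # Rule 2 (other-regarding): clear any critically long queue first,
    # longest queue first, to protect neighboring intersections.
    overloaded = [a for a, q in queues.items() if q >= critical]
    if overloaded:
        return max(overloaded, key=lambda a: queues[a])
    # Rule 1 (local efficiency): otherwise minimize local waiting times.
    return max(pressures, key=pressures.get)

# Example: a minor side road has reached 85% of its storage capacity.
print(next_green({"N": 0.3, "E": 0.85}, {"N": 4.2, "E": 0.7}))  # -> "E"
```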

One advantageous feature of the self-organization approach is that it can use gaps that occur in the traffic as opportunities to serve other traffic flows. In that way, the coordination arising between neighboring traffic lights can spread over many intersections in a self-organized way. That’s how other-regarding self-organization can outsmart top-down control trying to optimize the system: it responds more flexibly to actual local needs, thanks to a coordinated real-time response.

So what will the role of traffic control centers be in the future? Will they become obsolete? Probably not. They will still be used to keep an overview of all urban traffic flows, to ensure information flows between distant parts of the city, and to implement political goals such as limiting the overall flows into the city center from the periphery.

A pilot study


After this promising study, Stefan Lämmer approached the public transport authority in Dresden to collaborate on traffic light control. The traffic center was using a state-of-the-art adaptive control scheme based on "green waves." But although it was the best available on the market, they weren't happy with it. In particular, they were struggling to manage the traffic around a busy railway station in the city center. There, the problem was that many public transport lines cut through a highly irregular road network, and the overall goal was to prioritize public transport over road traffic. However, if trams and buses were given a green light whenever they approached an intersection, this would destroy the green waves in the vehicle flows, and the resulting congestion would quickly spread, causing massive disruption over a huge area of the city.

When we applied our other-regarding self-organization scheme for traffic lights to the same kind of empirical inflow data that had been used to calibrate the existing control scheme, we found a remarkable result. The waiting times were reduced for all modes of transport: considerably so for public transport and pedestrians, and somewhat also for vehicles. The roads were less congested, trams and buses were prioritized, and travel times became more predictable. In other words, everybody would benefit from the new approach (see figure below) – including the environment. It is only logical that the other-regarding self-organization approach is now being implemented at some traffic intersections in Dresden.
Lessons learned
From this example of traffic light control, we can draw a number of important conclusions. First, in complex systems with strongly variable and largely unpredictable dynamics, bottom-up self-organization can outperform top-down optimization by a central controller – even if that controller is kept informed by comprehensive and reliable data. Second, strictly local optimization may create a well-performing system under some conditions, but it tends to fail when interactions between the system components are strong and the optimization at each location is selfish. Third, an "other-regarding" approach that takes into account the situation of the interaction partners can achieve good coordination between neighbors and superior system performance.

In conclusion, a central controller will fail to manage a complex system because the computational demands of finding the best solutions are overwhelming. Selfish local optimization, in contrast, ultimately fails because coordination breaks down when the system is heavily used. However, an other-regarding self-organization approach based on local interactions can overcome both problems, producing resource-efficient solutions that are robust against unforeseen disturbances.
In many cities, there has recently been a trend towards replacing signal-controlled intersections with roundabouts, and towards changing urban spaces controlled by many traffic signs and rules in favor of designs that support voluntary, considerate interactions of road users and pedestrians. In other words, the self-organization approach is spreading.

As we will see in the chapters on Human Nature and the Economy 4.0, many of the conclusions we have drawn from traffic flows are relevant for socio-economic systems as well. These are also systems in which agents often have incompatible interests that cannot be satisfied at the same time. Production processes are an example of this as well.

Self-organizing production


Problems of coordinating flows also appear in man-made systems other than traffic and transportation. About ten years ago, together with Thomas Seidel and others, I began to study how production plants could be operated more efficiently and designed better. In the paper and packaging production plant we studied, we observed bottlenecks that occurred from time to time. When this happened, a jam of products waiting to be processed propagated upstream, while the shortfall in the number of finished products grew downstream (see illustration below). We noticed quite a few analogies with traffic systems. For example, storage buffers where partly finished products accumulate are analogous to road sections. Product-processing units are like road junctions, different product flows have different origins and destinations (like vehicles), production schedules function like traffic lights, cycle times are analogous to travel and delay times, full "buffer" sections suffer from congestion, and machine breakdowns are like accidents. However, modeling production is even more complicated than modeling traffic, as there are many different kinds of material flows.


Drawing on our experience with traffic models, we devised an agent-based model for these production flows. We focused again on how local interactions can govern and potentially assist the flow. We imagined equipping all machines and all products with a small "RFID" computer chip having memory and wireless short-range communication ability – a technology already widely implemented in other contexts, such as tagging of consumer goods. This would enable a product to communicate with other products and machines in the neighborhood (see figure below). For example, a product could signal that it was delayed and needed prioritized processing, requiring a kind of overtaking maneuver. Products could also select between alternative routes, and tell the machines what had to be done with them. They could cluster together with similar products to ensure efficient processing.
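A toy version of such a product agent could look as follows. The fields, the urgency rule, and the machine's selection rule are illustrative assumptions, not the design of the plant we studied:

```python
from dataclasses import dataclass

@dataclass
class ProductAgent:
    """A product whose RFID tag carries its own routing data (illustrative)."""
    product_id: str
    remaining_steps: list    # machine types still to be visited
    due_time: float          # delivery deadline [min]
    delay: float = 0.0       # lateness accumulated so far [min]

    def urgency(self, now):
        """What the product announces to nearby machines: the less slack
        remains until the deadline, the higher the bid."""
        return self.delay - (self.due_time - now)

def choose_next(buffer, now):
    """A machine serves the most urgent waiting product first -- the
    'overtaking maneuver' for delayed items described above."""
    return max(buffer, key=lambda p: p.urgency(now))
```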


In the past, designing a good factory layout in a top-down way was a complicated, time-consuming and expensive procedure. Bottom-up self-organization is again a superior approach. The agent-based approach described above, building on local interactions, has a phenomenal advantage: it makes it easy to test different factory layouts without having to specify all the details of the fabrication plant. One just has to put the different elements of a factory together (such as machines and transportation units). The possible interactions are then specified automatically. The machines know immediately what to do with the products – because those products carry the necessary instructions with them. Here too, the local exchange of information between agents creates a collective, social intelligence. Given these favorable circumstances, it is easy to create and test many different factory layouts and to find which are more efficient and more resilient to perturbations.

In the future, one may even go a step further. If we consider that recessions are like traffic jams in the world economy, where capital or product flows are obstructed or delayed, couldn't real-time information about the world's supply networks be used to reduce economic disruptions? I actually think so. If I had access to the data of the worldwide supply chains, I would be delighted to build an assistance system for global supplies that reduces cases of overproduction as well as situations where resources are lacking.

Making the Invisible Hand work


We, therefore, see that vehicles and products can successfully self-organize if a number of conditions are fulfilled. First, the interacting system components are provided with real-time information. Second, there is prompt feedback – that is to say, appropriate rules of interaction – which ensures that this information elicits a suitable, adaptive response. (In later chapters, I will discuss in detail how such information can be gathered and how such interaction rules are determined.)

So, would a self-organizing society be possible? In fact, for hundreds of years, people have been inspired by the self-organization and social order in colonies of social insects such as ants, bees, or termites. For example, Bernard Mandeville's The Fable of the Bees (1714) argues that actions driven by private, even selfish motivations can create public benefits. A beehive is an astonishingly differentiated, complex, and well-coordinated social system, even though there is no hierarchical chain of command. No bee orchestrates the actions of the other bees. The queen bee simply lays eggs, and all other bees perform their respective roles without being told to do so. Adam Smith's "Invisible Hand" expresses a similar idea, namely that the actions of people, even if driven by the 'selfish' impulse of personal gain, are invisibly coordinated in a way that automatically improves the state of the economy and society. One might say that, behind this, there is often a belief in something like a divine order.

However, the recent global financial and economic crisis has cast doubt on the idea that complex systems always produce the best possible outcomes by themselves. Phenomena such as traffic jams and crowd disasters suggest as well that a laissez-faire approach that naively trusts in the "Invisible Hand" often fails. The same applies to failures of cooperation, which may result in the over-utilization of resources, as discussed in the next chapter.

Nevertheless, whether the self-organization of a complex dynamical system ends in success or failure depends mainly on the interaction rules and institutional settings. I therefore claim that, three hundred years after the principle of the Invisible Hand was postulated, we can finally make it work – based on real-time information and adaptive feedback to ensure the desired functionality. While the Internet of Things can provide us with the necessary real-time data, complexity science can tell us how to choose the interaction rules and institutional settings such that the system self-organizes towards a desirable outcome.

Information technologies to assist social systems


Above, I have shown that self-organizing traffic lights can outperform the optimization attempts of a traffic control center. Furthermore, "mechanism design," which modifies local vehicle interactions through suitable driver assistance systems, can turn self-organization into a principle that helps to reduce rather than produce congestion. But these are technological systems.

Could we also design an assistance system for social behavior? In fact, we can! Sometimes, social mechanism design can be pretty challenging, but sometimes it's easy. Just imagine the task of sharing a cake fairly. If social norms allow the person who cuts the cake to take the first piece, this piece will often be bigger than the others. If he or she takes the last piece instead, the cake will probably be distributed much more fairly. Therefore, alternative sets of rules intended to serve the same goal (such as cutting a cake) may result in completely different outcomes.
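One can even check this piece of mechanism design with a brute-force computation. The sketch below lets a selfish cutter search over all cuts of a unit cake on a coarse grid, under both sets of rules:

```python
from itertools import combinations

def best_cut(cutter_picks_last, n=4, grid=21):
    """Best share a selfish cutter can secure when a unit cake is cut
    into n pieces, with cut positions restricted to a coarse grid."""
    points = [i / (grid - 1) for i in range(1, grid - 1)]
    best = 0.0
    for cuts in combinations(points, n - 1):
        edges = [0.0, *cuts, 1.0]
        pieces = sorted(edges[i + 1] - edges[i] for i in range(n))
        # Picking last leaves the smallest piece; picking first, the largest.
        share = pieces[0] if cutter_picks_last else pieces[-1]
        best = max(best, share)
    return best

print(best_cut(False))  # 0.85: picking first, the cutter grabs most of the cake
print(best_cut(True))   # 0.25: picking last, equal pieces are the best defense
```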

As Information Box 2 illustrates, it is not always easy to be fair. But details of the "institutional setting" – the specific "rules of the game" – can matter a lot. With the right set of interaction rules, we can, in fact, create a better world. The next chapter discusses how the respective social mechanisms, which are part of our culture, can make a difference, and how one can build an assistance system to support cooperation in situations where it would otherwise be unlikely. Information and communication technologies are now offering entirely new opportunities!

INFORMATION BOX 1: Faster-is-slower effect

Let me illustrate with an example how counter-intuitive the behavior of traffic flows can be (see picture above). When the traffic flow is sufficiently high, but still stable, a temporary reduction in the vehicle density (which locally allows drivers to move at a faster speed) can surprisingly cause a traffic jam. How does this "faster-is-slower effect" happen?

First, the temporary perturbation of the vehicle density changes its shape while traveling along the freeway. It eventually creates a forward-moving vehicle platoon, which grows in the course of time. Consequently, the perturbation of the traffic flow propagates downstream and eventually passes the location of the on-ramp. As the vehicle platoon is still moving forward, one would think that the perturbation will eventually leave the freeway stretch under consideration. But at a certain point in time, the vehicle platoon has grown so big that it suddenly changes its propagation direction, i.e. it starts to travel backward rather than downstream. This is called the "boomerang effect." The effect occurs because vehicles in the cluster are temporarily stopped once the platoon has reached a certain size. At the front of the cluster, vehicles are moving out of the traffic jam, while new vehicles join it at the end. Altogether, this makes the traffic jam travel backwards, such that it eventually reaches the location of the on-ramp. When this happens, the inflow of cars via the on-ramp is perturbed so much that the upstream traffic flow breaks down.

This causes a long vehicle queue, which continues to grow upstream. Therefore, even when the road capacity could theoretically handle the overall traffic flow, a perturbation can cause a drop in the freeway capacity, which results from the interactions between cars. The effective capacity of the freeway is then given by the outflow from the traffic jam, which is about 30 percent below the maximum traffic flow on the freeway!

INFORMATION BOX 2: Fair supply in times of crises


In case of a shortage of resources that are required to satisfy our basic needs (such as food, water, and energy), it might be particularly important to share them in a fair way. Otherwise, violent conflicts over scarce resources might break out. But being fair is not always easy, and it requires suitable preparation. Together with Rui Carvalho, Lubos Buzna, and others, I have investigated this for cases such as gas supply through pipelines. There, we may visualize the percentages of pipeline use towards different destinations by a pie chart. It turns out that we must now cut several cakes (or pies) at the same time. Given the multiple constraints imposed by pipeline capacities, it is usually impossible to meet all goals at the same time, so one will often have to make compromises. Paradoxically, if overall less gas is transported due to non-deliveries from one source region, fair sharing requires a re-routing of gas from other source regions. This will often lead to pipeline congestion problems, since the pipeline network was built for different origin-destination relationships. Nevertheless, an algorithm inspired by the Internet routing protocol can maximize fairness.
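For a single shared pipeline, a standard building block of such fairness-maximizing algorithms is "progressive filling," sketched below with illustrative numbers; the network case treated in our work adds one capacity constraint per pipeline and is considerably harder:

```python
def max_min_fair(demands, capacity):
    """Progressive filling for one shared pipeline: raise all allocations
    in equal steps, freezing a consumer once its demand is satisfied."""
    alloc = {k: 0.0 for k in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        step = remaining / len(active)
        for k in list(active):
            given = min(step, demands[k] - alloc[k])
            alloc[k] += given
            remaining -= given
        active = {k for k in active if demands[k] - alloc[k] > 1e-9}
    return alloc

# Three regions demand 2, 4 and 10 units, but only 9 fit through the pipe:
print(max_min_fair({"west": 2.0, "south": 4.0, "east": 10.0}, 9.0))
# -> {'west': 2.0, 'south': 3.5, 'east': 3.5}
```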

[1] Dear Reader,

Thank you for your interest in this chapter, which is meant to stimulate debate.

What you are seeing here is work in progress, a chapter of a book on the emerging Digital Society that I am currently writing. My plan was to elaborate and polish this further before sharing it with anybody else. However, I often feel that it is more important to share my thoughts with the public now than to perfect the book first, keeping my analysis and insights to myself in times that call for new ideas.

So, please forgive me if this does not look 100% ready. Updates will follow. Your critical thoughts and constructive feedback are very welcome. You can reach me via dhelbing (AT) ethz.ch or @dirkhelbing on Twitter.

I hope these materials can serve as a stepping stone towards mastering the challenges ahead of us and towards developing an open and participatory information infrastructure for the Digital Society of the 21st century that would enable everyone to take better informed decisions and more effective actions.

I believe that our society is heading towards a tipping point, and that this creates the opportunity for a better future.

But it will take many of us to work it out. Let’s do this together!

Thank you very much, I wish you an enjoyable reading,

Dirk Helbing
PS: Special thanks go to the FuturICT community and to Philip Ball.
