This is a story about information at rest and information in motion. Actually, it’s really just a story about information in motion, mediated by computers. Information at rest is pretty predictable. Go pick up an actual, physical book. Alone, it’s not going to do much. But it is full of information. It’s your job to put it in motion. The right kind of motion can change the world. The thing is, that change, be it the creation of a political movement or the discovery of a new field of study, is oddly physical. Our terms for describing it (field, movement) are physical. They have weight and inertia. But that is a property of us — human beings — evolved meat machines that interact with information using mechanisms shaped over millennia of dealing with physical interactions. Information in motion isn’t physical, but we aren’t equipped to deal with that intuitively. The machines that we have built to manipulate information are. And though they are stunningly effective at this, they do not share our physics-based biases about how to interpret the world.
And that may lead to some ugly surprises.
The laws of physics don’t apply in information space.
Actually, we rarely deal with information. We deal in belief, which is the subset of information that we have opinions about. We don’t care how flat a table is as long as it’s flat enough. But we care a lot about the design of the table that we put in our dining room.
In this belief space, we interpret information using a brain that evolved to track the behavior of the physical world. That may be why we have so many movement terms for describing how beliefs behave. It is unlikely that we could develop any other intuition, given the brief time that the concept of information has even existed.
There are also some benefits to treating belief as if it has physical properties. It affords group coordination. Beliefs that change gradually can be aligned more easily (a kind of dimension reduction), allowing groups to reach consensus and compromise. This, combined with our need for novelty, creates somewhat manageable trajectories. Much of the way that we communicate depends on this linearity. Spoken and written language are linear constructs. Sequential structures like stories contain both information and the order of its delivery. In music, only the sequence differs; the notes are the same.
But belief merely creates the illusion that information has qualities like weight. Although the neurons in our brain are slower than solid-state circuits, the electrochemical network that they build is capable of behaving in far less linear and inertial ways. Mental illness can be regarded as a state where the brain network is misbehaving. It can be underdamped, leading to racing thoughts, or overdamped, manifesting as depression. It can have runaway resonances, as with seizures. In these cases, the functioning of the brain no longer maps successfully to the external, physical environment. There seems to be an evolutionary sweet spot where there is just enough intelligence to usefully model and predict possible outcomes. Functional intelligence appears to be a saddle point, surrounded by regions of instability and stasis.
Computers, which have not evolved under these rules, treat information very differently. They are stable in their function as sets of transistors. But the instructions that those transistors execute, the network of action and information they create, are not so well defined. For example, computers can access all information simultaneously. This is one of the qualities that makes them so effective at search. But this capability leads to deeply connected systems with complicated implications, which we tend to mask with interfaces that we, the users, find intuitive.
For example, it is possible to add the illusion of physical properties to information. In simulation, we model masses, springs, and dampers to create sophisticated representations of real-world behaviors. But simply because these systems mimic their real-world counterparts doesn’t mean that they have those intrinsic properties. Consider the simulation of a set of masses and springs below:
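A minimal sketch of this kind of system, assuming a single mass on a spring with a damper, integrated with explicit (forward) Euler; the mass, stiffness, and damping values are illustrative choices, not tied to any particular demo:

```python
# Minimal mass-spring-damper sketch: integrate x'' = (-k*x - c*v) / m
# with explicit (forward) Euler and return the displacement history.
def simulate(mass=1.0, stiffness=10.0, damping=1.0, dt=0.01, steps=1000):
    x, v = 1.0, 0.0                 # start displaced one unit, at rest
    trajectory = []
    for _ in range(steps):
        a = (-stiffness * x - damping * v) / mass
        x += v * dt                 # forward Euler: step with the current velocity...
        v += a * dt                 # ...and the current acceleration
        trajectory.append(x)
    return trajectory

# Modest stiffness and some damping: the oscillation dies out believably.
stable = simulate(stiffness=10.0, damping=1.0)
print(f"final displacement (stable case): {stable[-1]:.4f}")
```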
Depending on the solver (the physics algorithm), the damping, and the stiffness, the system will behave in a believable way. Choose the Euler solver, turn down the damping, and wind up the stiffness, and the system becomes unstable, or explodes:
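Reusing the simulate() sketch above with the stiffness wound up and the damping turned off (again, illustrative numbers), each Euler step now amplifies the oscillation instead of tracking it, and the displacement grows without bound:

```python
# Same sketch, pushed outside the solver's stability region:
# high stiffness, no damping. The displacement blows up.
unstable = simulate(stiffness=10000.0, damping=0.0)
print(f"final displacement (unstable case): {unstable[-1]:.2e}")  # astronomically large
```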
The computer, of course, doesn’t know the difference. It can detect instability only if we program or train it specifically to do so. This is true not just of simulations of physical systems, but also of training-based systems like neural networks (gradient descent) and genetic algorithms (mutation rate). In all these cases, systems can converge or explode based on the algorithms used and the hyperparameters that configure them.
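The same pattern can be shown in a training-based setting. As a hedged illustration (the quadratic loss and learning rates below are assumptions chosen for clarity, not any particular model), plain gradient descent either converges or explodes depending entirely on a single hyperparameter:

```python
# Gradient descent on the one-dimensional loss L(w) = w**2.
# Convergence depends on the learning rate, a hyperparameter we choose.
def gradient_descent(learning_rate, steps=50):
    w = 5.0                      # arbitrary starting weight
    for _ in range(steps):
        grad = 2.0 * w           # dL/dw for L(w) = w**2
        w -= learning_rate * grad
    return w

print(gradient_descent(learning_rate=0.1))   # converges toward the minimum at 0
print(gradient_descent(learning_rate=1.5))   # each step overshoots: the weight explodes
```

Nothing in either loop “knows” that it has diverged; the explosion is just more arithmetic.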
This is the core of an implicit user interface problem. The more we build our intelligent machines so that they appear to be navigating in belief spaces, the more we will be able to work productively with them in intuitive ways. Rather than describing carefully what we want them to do (either with programming or with massive training sets), we will be able to negotiate with them and arrive at consensus or compromise. The fundamental problem is that this is a facade that does not reflect the underlying hardware. Because no design is perfect, and accidents are inevitable, I think that it is impossible to design systems that will not “explode” in reaction to unpredictable combinations of inputs.
But I think that we can reduce the risks. If we legislate an environment that requires a minimum level of diversity in these systems, from software design through training data to hardware platform, we can increase the likelihood that when a nonlinear accident happens, it will happen in only one of several loosely coupled systems. The question of design is a socio-cultural one that consists of several elements:
- How will these systems communicate in a way that guarantees loose coupling?
- What is the minimum number of systems that should be permitted, and in what contexts?
- What is the maximum “size” of a single system?
By addressing these issues early, in technical and legislative venues, we have an opportunity to create a resilient socio-technical ecosystem, where novel interactions of humans and machines can create new capabilities and opportunities, but also an environment that is fundamentally resistant to catastrophe.