On Autonomous Software

Mario Laul
Mar 20, 2020


This post is a comment on Lane Rettig's essay Autonocrats and Anthropocrats, connecting its central themes to two fundamental concepts in the social sciences: the rule of law and social structure. It argues that the most informative analogue to a decentralized network of nodes running 'autonomous' software is society itself. Digital recordkeeping systems and blockchain networks, like any other institution, have effects beyond the control of their creators, administrators, and users. As such, blockchain networks are an important area of research not only for computer scientists and software engineers but also for social and political theorists, whose expertise could be usefully applied to the design and governance of these emerging systems.

Rule of Law

The rule of law is foundational to modern civilization and has led humans to accept rational-legal authority in its various expressions throughout the social order. Today, a similar principle is driving the creation of ‘credibly neutral’ software systems. Lane Rettig writes:

Algorithms don’t care about race, creed, origin, gender, height, looks, wealth, fame, or any of the other factors that bias human judgement… You cannot coerce, threaten, or bargain with an algorithm. Perhaps even more importantly, they never get tired, hungry, or frustrated, and are no more or less likely to be lenient early in the day or just after a lunch break. In this respect… it’s tempting to think of an algorithm as the perfect judge (and, perhaps, jury and executioner as well).

As I've suggested before, software-mediated transactions and automated recordkeeping represent the next stage in the evolution of bureaucratic organization. From this perspective, relying on humans to handle information according to a predefined set of rules was always an imperfect means to an end: a problem to be solved through cryptography, mechanism design, and digital automation.

On the one hand, this controversial vision promises unprecedented efficiency and fairness. On the other, to paraphrase Max Weber [1], it amounts to constructing a 'silicon cage' of impersonal, hyper-rationalized control, with problematic implications for how humans relate to a legal and administrative order increasingly devoid of in-person subjective judgement and oversight. As Rettig highlights:

Autonomous algorithms tend to make things like transparency, accountability, and auditability hard. We can cause an algorithm to log every decision made, every action taken, etc., but in many cases it may be difficult or impossible for an algorithm to “explain” the rationale behind such a decision: the answer may have high complexity and therefore not be reducible to anything less than all of the code of the algorithm itself, and all its input data (which, for various reasons, might not be made public). As a result, in many cases we may not actually know why an algorithm made the decision that it did. […]

I don’t know about you, but I tend not to trust a system that isn’t transparent, accountable, and auditable. After all, what other guarantees do I have that a system is doing what its creators say it does, what it’s intended to do?
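To make the logging idea concrete, here is a minimal sketch, in Python and with purely hypothetical names, of a tamper-evident decision log: each entry is hashed together with its predecessor, so auditors can detect altered or deleted records even when the decision logic itself remains a black box.

```python
# Illustrative sketch only: a hash-chained audit log for algorithmic
# decisions. Entries are tamper-evident, but note Rettig's caveat: the
# log records *what* was decided, not *why*.

import hashlib
import json

class AuditLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def record(self, decision):
        # Each entry commits to the previous one via its hash.
        entry = {"decision": decision, "prev": self.last_hash}
        serialized = json.dumps(entry, sort_keys=True)
        entry_hash = hashlib.sha256(serialized.encode()).hexdigest()
        self.entries.append((entry, entry_hash))
        self.last_hash = entry_hash

    def verify(self):
        """Recompute the chain; any altered or deleted entry breaks it."""
        prev = "0" * 64
        for entry, stored_hash in self.entries:
            if entry["prev"] != prev:
                return False
            serialized = json.dumps(entry, sort_keys=True)
            if hashlib.sha256(serialized.encode()).hexdigest() != stored_hash:
                return False
            prev = stored_hash
        return True

log = AuditLog()
log.record({"applicant": 17, "approved": False})
log.record({"applicant": 18, "approved": True})
print(log.verify())  # True: the chain is intact
```

The limits of this pattern are exactly Rettig's point: such a log proves what was decided and that the record is intact, not why the decision was made.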

Similar concerns in the context of traditional bureaucracy were famously satirized by Franz Kafka [2] [3]. Are we now entering a world with more of the same, or one that is fundamentally different, both in its benefits and in its more troubling aspects? Time will tell. But we are certainly entering a world of growing technological complexity, which triggers familiar concerns over what may be lost in the process (Weber, for example, highlighted the 'disenchantment' that accompanied modernity) and over how it may shift the social distribution of knowledge and power. Along those lines, Rettig writes:

Algorithms necessarily reduce the wonderful, complex, multi-dimensionality of the world around us down to cold, hard, inflexible ones and zeros. In so doing, much that makes the analog world a wonderful place is lost. […] If you give me the choice, I would rather face a human jury and be sentenced by a human judge — any human judge — than by the fairest, most efficient, most robust digital court imaginable. […]

There will doubtless be many future examples of autonomous systems that no one but the system’s creators fully understand. This has the frightening potential to create a tiny, privileged elite who understand and can possibly even exploit such systems to their own advantage. For this reason it’s absolutely essential that autonomous systems be made as simple as possible, that they be based on common design patterns as much as possible, and that their full specifications, codebases, and documentation be made available for public scrutiny.

Many people might well prefer "the fairest, most efficient, most robust digital court imaginable" to an easily corruptible human jury. But that preference does not nullify the risks involved in adopting such technology. It is important to remember that even the most autonomous software remains embedded in society, subject to human error, and inevitably an object of political interest and rivalry. While successful institutions are built on stable foundations that are difficult to change, we should be wary of promoting, or adopting at scale, systems that are too dogmatic or ossified. Maintaining the rule of law (or code) should not exclude the need, or the possibility, of changing particular laws (or code) as human understanding and circumstances change.

Social Structure

A related issue concerns the degree to which autonomous software should include ‘kill switches’ or other options for some authority to unilaterally affect how the system operates. Rettig writes:

All software has bugs. […] Even a hypothetical, bug-free piece of software still has to be upgraded for reasons of maintenance, optimization, and ongoing development. What’s more, this upgrade process must by definition be initiated from outside. For this reason, every software-based system must have some mechanism for fixing or upgrading the system — effectively, an emergency “stop” button or an escape hatch that allows the system to be shut down, at least temporarily, in case of malfunction or need of upgrade. […]

But if this is the case, then by definition there is some person or set of people who have “god-mode” control over the system — who, acting from outside the system, can exercise unilateral authority over the system. Can such a system ever truly be called autonomous? (To put the question another way: if some alien had the power to press a button and “pause” or “reboot” or unilaterally alter the governance of the United States, is the United States still a sovereign nation?)
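To make the dilemma concrete, consider a minimal sketch, in Python and with purely hypothetical names, of the 'escape hatch' Rettig describes: a system that applies its rules mechanically, except that whoever holds the admin key can pause it or rewrite its rules from outside.

```python
# Illustrative sketch only: a rule-executing system with an admin
# "escape hatch". All names are hypothetical, not from Rettig's essay.

class PausableSystem:
    """Applies a fixed rule set autonomously, unless an admin intervenes."""

    def __init__(self, rules, admin_key):
        self.rules = rules          # the 'law' the system enforces
        self.admin_key = admin_key  # whoever holds this has god-mode control
        self.paused = False

    def execute(self, request):
        if self.paused:
            raise RuntimeError("System paused by admin")
        # Autonomous path: no human discretion, just rule application.
        return self.rules(request)

    # The escape hatch: unilateral, out-of-band authority.

    def pause(self, key):
        if key != self.admin_key:
            raise PermissionError("Not authorized")
        self.paused = True

    def upgrade(self, key, new_rules):
        if key != self.admin_key:
            raise PermissionError("Not authorized")
        self.rules = new_rules      # the 'law' itself can be rewritten

# Usage: the system is autonomous only for as long as the admin abstains.
system = PausableSystem(rules=lambda req: req["amount"] <= 100,
                        admin_key="secret")
print(system.execute({"amount": 50}))   # True: rule applied mechanically
system.pause("secret")                  # god-mode: everything stops
```

Deleting pause() and upgrade() would make the system more autonomous, but it would also make every bug in its rules permanent; that trade-off is the crux of the passage above.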

Autonomy and sovereignty, just like decentralization, are best thought of as spectrums, with different systems exhibiting varying degrees of each. It remains an open question whether truly autonomous software is even desirable. But even without full 'autonomy', all technological systems, including those subject to centralized control, tend to develop a 'life of their own', with effects beyond the immediate control of their creators, administrators, and users. There is a great deal of emergent complexity in the world, and technology's role in it is impossible to predict with precision. In effect, technology is already a partly autonomous factor driving human behavior and societal development.

A useful analogue here is the concept of social structure: patterned social or symbolic relations that emerge from human interaction. Once institutionalized, social structures can outlive whole generations and affect, or even determine, the actions of individuals who are inevitably born into a pre-structured society they had no role in creating. The concept can be quite abstract, especially when defined in this all-encompassing manner. But the fundamental idea is that society is characterized by structural relations that are more or less stable and enduring, that cannot easily be rearranged at will, and that influence how individuals and groups perceive and behave in society.

The relative 'autonomy' of social structures was highlighted by the founders of modern sociology themselves: Émile Durkheim, who posited social facts that transcend individuals and exert control over them; Karl Marx, who emphasized the role of economic and power structures in shaping people's perceptions and the cultural characteristics of society; Georg Simmel, who observed that individual will and self-actualization are dominated by objective culture; and even Weber, who, despite his subjectivist inclinations, maintained that the meaning of individual action cannot be reduced to the subjective intentions of the actor. Ever since, whether explicitly stated or not, the relationship between structure and agency [4] has remained a central theme throughout the social sciences.

Why is this relevant in the context of autonomous software? Because culture and social structures can be thought of as the ‘autonomous software’ of human civilization. Emergent phenomena such as institutions (language, family, government, industry, media, etc.) or the social division of labor are all created by concrete people. But by being inscribed into cultural objects, laws, habitual practices, and organizations that are highly dispersed and persistent over time, these social structures acquire an existence beyond any particular set of individuals. And in some cases — just like distributed networks and autonomous software — they become resistant to centralized control or influence, resulting in considerable inertia and entrenchment.

Society is becoming more digital, more global, more automated. Powerful machine networks and software systems will play a defining role in structuring human relations. And even though our journey to make sense of and live by these systems is only beginning, it is not too early to weigh the ideological biases in today's design decisions, inevitably woven into our shared technological future.

References

[1] Weber, M. (1946). Politics as a Vocation. In H. H. Gerth and C. Wright Mills (eds.), From Max Weber: Essays in Sociology (pp. 77–128). New York: Oxford University Press. (Originally published in German in 1919.)

[2] Kafka, F. (1937). The Trial. New York: Alfred A. Knopf. (Originally published in German in 1925.)

[3] Kafka, F. (1998). The Castle. New York: Schocken Books. (Originally published in German in 1926.)

[4] Sewell, W. H. (1992). A Theory of Structure: Duality, Agency, and Transformation. American Journal of Sociology, Vol. 98, No. 1, pp. 1–29.
