Our lives are increasingly shaped by engineering teams you’ll never meet, their decisions obscured by layers of code you neither see nor understand.
Increasingly Inscrutable Code
The impetus for this article is a paper by Rob Kitchin called Thinking critically about and researching algorithms. (Here’s a PDF of the paper and John Danaher’s summary of it; both are dense reading.) The paper outlines some of the reasons it’s so hard to understand software, touching on factors such as complexity, accelerating change, bugs and unintended consequences. Kitchin also offers some solutions to these problems, though they are complex and geared more toward researchers.
What makes our prospects of understanding our software even more problematic is the rise of machine learning, an approach to generating code that isn’t understandable even to humans. As computer scientist Stephen Wolfram notes in a recent interview:
“…that’s perhaps one of the things that people find disorienting about the current round of AI development is that “you can expect to understand how it works” is definitely coming to an end.”
— Stephen Wolfram
The Code Behind the Code
While algorithmic transparency is difficult, I don’t think we can afford to simply throw our hands up in surrender. Software is too entwined in our lives at this point for us to remain ignorant about what it actually does. I believe our best hope for understanding our software lies in understanding the goals of these systems – the code behind the code.
Goals have always been an essential part of the software development process, whether it was the rigorous specifications of early software or the more lightweight and iterative planning of modern Agile development. With the coming machine learning revolution, goals will be even more important:
“What I’m saying is there’s a different form of engineering, in which you say, ‘Let’s define the goal, and then let’s essentially automate getting to how that goal is achieved.’”
— Stephen Wolfram
Software’s Impact on the End User
This isn’t about grandiose, fuzzy-headed mission statements, but about concrete outcomes that end users can expect from using a particular software system. There are good approaches from the social change sector, such as Theory of Change models and Logic Models, which I believe can be adapted for software development (a rough sketch of what that might look like follows below). I can say from experience that this is challenging, but extremely valuable, work.
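To make the idea a little more concrete, here is a minimal Python sketch of what a Logic Model adapted to a single software feature might look like. The inputs/activities/outputs/outcomes structure follows the general shape of Logic Models, but the field names, the example feature, and the example values are purely illustrative assumptions on my part, not an established framework.

```python
# A minimal, hypothetical sketch of a Logic Model adapted to one software feature.
# Field names and example values are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    feature: str                                          # the capability being planned
    inputs: List[str] = field(default_factory=list)       # resources invested
    activities: List[str] = field(default_factory=list)   # what the team builds or does
    outputs: List[str] = field(default_factory=list)      # direct, measurable products
    outcomes: List[str] = field(default_factory=list)     # concrete changes for end users

# Example: spelling out the intended end-user impact of a feed-ranking change.
feed_ranking = LogicModel(
    feature="News feed ranking update",
    inputs=["engagement data", "two engineers for one sprint"],
    activities=["retrain ranking model", "run an A/B test"],
    outputs=["new ranking model deployed to 5% of users"],
    outcomes=[
        "users see more posts from people they actually know",
        "no increase in time spent passively scrolling",
    ],
)

# The outcomes are the part end users could reasonably be shown and hold us to.
for outcome in feed_ranking.outcomes:
    print("Promised outcome:", outcome)
```

The point of writing it down this way isn’t the data structure itself; it’s that the outcomes become explicit, reviewable commitments rather than implicit assumptions buried in the code.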
Mission-Driven Software
What we’re talking about here is mission-driven software: infusing a sense of mission into software and using that purpose to build greater transparency and accountability to end users. But as long as companies like Facebook, Google, and Amazon do their jobs, do we really need to understand the goals and intended impact of their software?
I believe the answer is yes.
More than that, though: as partners in the co-creation of commercial software, most of us are interested in something beyond just maximizing returns for company shareholders. That isn’t the kind of code behind the code that inspires deep commitment and partnership. Nor do I believe it’s what we want shaping the future of artificial intelligence.
My favorite scene in my favorite movie comes near the end of The Matrix, when Neo suddenly sees through the code of the Matrix. It’s a great metaphor, and a valuable reminder of our role in creating the world we want.
“Where we go from there is a choice I leave to you.”
– Closing lines from Neo in The Matrix