How the future of technology, humanity, and the planet depends on rewriting the code within the code.
Technological wonder carries a double meaning: we fill our lives with technological wonder, even as we wonder about the impact it is having on us. In this series of articles, I explore the tension between our excitement and our misgivings about our relationship with machines.
As end users, we love the convenience of Amazon, Google, and Uber, even though, as citizens, we may be concerned about their impact on our communities. At a time when most Americans believe we are headed toward a future with more economic inequality and cultural division, 87 percent say that science and technology will help us solve our problems. At the same time, 82 percent of us believe that, within thirty years, robots and computers will do much of the work now done by humans, and two-thirds of those respondents believe that’s a bad thing.
The dark side of technological wonder is more than just concern over jobs, though. It grows out of a much deeper feeling that we are losing ourselves to our machines. We use Facebook to keep in touch with friends but know in our hearts that it is we who are being used—and for a purpose that isn’t necessarily aligned with our best interests.
The Deep Coding of Technology
The purpose of technology is set by its coding, be it through actual software coding or simply a process of design. This top-level coding determines how the technology carries out its function. What we sometimes fail to see is that beneath this is a deeper coding, a “code within the code,” that sets the goals—the why—behind our technology. Understanding this deeper coding requires understanding how the missions, strategies, and cultural norms of organizations create and shape our technologies.
The code within the code has still deeper layers, tracing back through individual and group psychologies and deep into our biological coding. Technology is not some alien “other,” but rather an extension of life itself. It works through humans; we are the bridge between the natural and the artificial. Through our partnership with machines, we are creating a synthetic intelligence that is part human and part machine, and this synthesis represents the newest layer in a deep stack of planetary intelligence.
A Tree of Synthesis
The Bible tells of two trees in the Garden of Eden—a Tree of Life and a Tree of Knowledge. But what if these trees weren’t actually separate, but instead, one tree growing from another? Close your eyes, and try to imagine an ancient tree, its roots sunk deep into the warmth of the Earth and powerful trunk stretching up into the cool clarity of the night sky. The colors of the bark gradually become more vibrant as your eyes move upward. As the flecks of rainbow color reach the branches, they become increasingly translucent, transforming ultimately into sparkling crystal leaves of pure consciousness.
This image has been with me for years—a symbol of the seamless connection between life and knowledge and the coming era of synthetic intelligence. Recognizing the cosmic significance of this synthetic intelligence is ultimately the key to answering what it is that we want from our partnership with machines.
A Virus in the Code
One of the underlying themes in this series is that most of society’s serious ills stem from a hijacking of this code within the code. Technology is a neutral force for achieving human goals, but one that has long been plundered for profit and expropriated for power. Think of it as two viral infections: one extracting wealth, the other consolidating control. The former has its home in for-profit enterprises, the latter in authoritarian regimes. Kleptocracy fuses these infections and is a growing problem throughout the world today.
Artificial intelligence and automation raise the stakes of these viral infections. Imagine an enterprise perfectly tuned to extract wealth, spew waste, and avoid costs—a kind of “perfect profit machine” with devastating consequences for its surrounding communities and the environment. The other nightmare is a “perfect control machine” using surveillance, AI, and automation to bend citizens to the will of a ruler. These scenarios are far more frightening than the typical “killer robot” science fiction story because their early sketches are already here. What happens with these viral infections today is critically important because, if allowed to persist, they will mutate the DNA of the future of intelligence, and do so with cataclysmic consequences.
Coding Reformation
It has famously been said that “software is eating the world.” Technology works as a force of creative destruction, disrupting and reshaping one economic sector after another. As this happens, society is transformed and whether those changes are positive or negative is deeply influenced by the code within the code behind that disruption.
The virus now wreaking havoc on our society has stripped bare the illusion that things are all right. We have allowed our techno-economic systems to run for so long with an infected underlying coding that it was only a matter of time before something emerged to expose the underlying fragility. It just happened to be a virus this time.
As we plan for how to dig ourselves out of this morass, we are confronted with a choice: return to business as usual or use this crisis as an opportunity to reclaim technology from its destructive coding. I refuse to believe that artificial intelligence and automation are here simply as mechanisms for extracting huge profits and concentrating power for a select few. The drives to maximize profit and power, by themselves, are insufficient coding for these immensely powerful systems. Before us right now is the opportunity to recode these systems so they sustain not just their shareholders, but the full range of stakeholders who contribute to their operations. It is the chance to raise our aspirations and use these technologies for solving our greatest societal and ecological challenges.
Our Future with Machines
The job before this next generation is to transform our wonder about technology into a new wonder for technology. It is not love of machines that we need, of course, but rather awe and reverence for what humanity is capable of through machines. The same revolution in automation and artificial intelligence that today eats jobs and dangerously accelerates unbridled capitalism and authoritarianism also has the potential to create much good in the world.
We are today, quite literally, defining humanity’s future with machines. This relationship will determine the nature of work, just as it will redefine our understanding of humanity’s role in the world. We are what connect the Tree of Life and the Tree of Knowledge — the bridge to a new, synthetic intelligence on this planet. We need a new generation of enterprises dedicated to rewriting the code upon which this new intelligence is built.
What excellent observations you make. The need to sustain the full range of stakeholders rather than just the shareholders makes absolute sense, since if an enterprise overlooks its customers, soon the shareholders will be disappointed. You spoke to me about transforming wonder about technology into a wonder FOR technology. I am one of the many who complain about the fast pace of AI and automation, and yet am a happy user of the output.
The revolution from AI being destructive to one with the potential to create good in the world would be welcome. I’m not sure how many of the top AI companies will develop an eleemosynary bent at the expense of more and more profit. I can think of so many wonderful things Jeff Bezos could have done with the $165 million he paid David Geffen for yet another lavish estate. And yet, one can argue that his efforts have contributed greatly to our society, and he is entitled to the fruits of his labor. Bill Gates has been exceedingly generous with his fortune, while at the same time adding to his wealth. (I don’t want to start sounding like Bernie Sanders! )
It isn’t easy to solve this, is it? Thanks for helping your readers think through all the profound issues.
Thank you, Bill. I appreciate you jumping in with your own observations. Plus, as a result, I learned a new word: eleemosynary. I worked in a nonprofit organization for over nine years and this was my first (conscious) encounter with the word.
And yes, the fact that Jeff Bezos can choose to spend $165 million on a lavish estate rather than on solving homelessness here in his home town is part of the problem of this much wealth concentrating into so few hands. We can hope he will make the right choices. And when he does, we feel grateful for his eleemosynary acts. The point is, however, that fundamental decisions about how we want to live as a society are increasingly consolidating into the hands of the few.
The point I’m trying to make here is that these technologies are a powerful means to achieving this current situation. But they don’t have to be. AI and automation are completely programmable. We just have to change the code within the code.
Have a great rest of your weekend.
I really like your sentence: “what it is that we want from our partnership with machines”. I haven’t given it much thought…at least from the 50,000-foot view. I kind of believe the “internet” is an answer on many levels for humanity going forward. However, it has its downsides, its own “code within the code”. How do we control for this when the socioeconomic system in play is not wholly honest? I agree we should (must?) weigh in on the question you raise. But it will require scrutiny that will make many people squirm, uncomfortable. They will embrace the status quo. I’m trying to do my part: no active participation on Facebook and its stepchild Instagram. Good thoughts, many thanks.
Thanks for dropping by with your thoughts, Bob.
One of the biggest things I struggle with is this question: what can we do? Most of us are just end users of things like Facebook and Instagram, and so it seems like there’s really not that much that we can do other than, as you say, simply not partake.
But my hope is that over the next few years we start to see the birth of a new generation of platforms, built to be open, and to be responsive and responsible to the people who use them. I think that there are some glimmers of hope. Tim Berners-Lee’s Solid, for example, is pointed in the right direction.
Then the challenge will be getting people to move to these systems on a large scale.
The problems of producing technological solutions for all stakeholders are an order of magnitude less than the problems of getting social acceptance for the use of those solutions.
Yes, I think the adoption challenge is a huge one, Jonathan. And this is particularly so when the stakeholder-friendly version of the solution we are asking people to adopt isn’t as good as the existing commercial one. Having worked in the not-for-profit tech field for a while, I saw plenty of examples of well-meaning technologists building stuff that just wasn’t as good as the commercial stuff and expecting people to adopt it.
But then you have cases where the technology isn’t bad and it’s more a challenge of getting enough people to move over that the platform can build momentum and get better. This is the problem with social media platforms like Mastodon and Diaspora. Both are pretty good, but just can’t seem to attract a critical mass.
Finally, there are examples like Wikipedia that have been extremely successful in building a platform for the commons that has huge uptake.
If the technology is going to help the adoption process it needs to offer much better functionality than current platforms. Parity will not help much.
That’s exactly the way we need to be thinking about this, Jonathan! I agree!
We can’t just try to guilt people into adoption. I’m a big fan of the late Clayton Christensen’s ideas about disruption and how it typically starts with a different attribute than the incumbents can strategically fulfill. I worked at Microsoft, and one day I saw a demo of a highly functional browser-based version of Office that someone inside the company had developed. There were lots of issues around hosting and the like that the team had yet to face, but the functionality was clearly superior to what Google Docs would soon be rolling out. The project was killed. Why? It’s not what the Office team wanted strategically, which was to protect its desktop client software.
So how will these new solutions compete? I think there are ways. Take Solid as a concrete example. Its differentiator is that your data is yours. I lost hundreds of thousands of followers when Google+ was killed due to poor management. A social network that allows me to control my own data? I’m sold! Even if, initially, its functionality stands to the incumbents the way Google Docs once did to Microsoft Office.
These are exactly the kinds of questions that we need to be asking. Thanks for dropping by with your thoughts.
Are you interested in exploring some of the issues relating to social adoption of stakeholder based policy?
Tell me more what you mean by that.
The core is that we need to make the stakeholder based system attractive to non-techie users. I had a look at the software that you mentioned and it is obvious why their bases are so small. They are very techie oriented and offer little reason for adoption to non-techie users. It is this area that I would like to explore.
Yes, that is part of the problem. Try setting up Ethereum-based solutions while you’re at it. Just way, way too difficult. Good usability design takes resources. In the commercial world those resources tend to come from VC and other initial sources of funding. They make those bets on the assumption that at least some of them will generate handsome returns. The place where there does seem to be some breakthrough is in Initial Coin Offerings in the cryptocurrency markets. But while there’s been some success generating funding there, the applications of that tech are still very much focused on a technical user base.
Let me know if I can be of any assistance in your exploration. Clearly, this is an area that I’m interested in and actively writing about. Please share your findings as you progress.
I am still at the armchair stage, so I’ll let you have my thoughts as I generate them.
I’ll look forward to hearing more as they emerge, even from the armchair.
Would you like to continue the discussion here, or would you rather move it? I am easy either way.
Feel free to ping me with observations here.
I have no idea how to ping you. Is this something that I can do using your site?
It automatically notifies me when you comment here.
Some background about me. I am Canadian, 82 years old, and a widower. I worked all my life (65 years) as a computer software developer on platforms ranging from mainframes to Windows and Linux PCs. I am definitely one of the techies I am downplaying, and I have neither skills nor interest in entrepreneurship. But I hope that I understand enough about the real world to work effectively with entrepreneurs. I am fanatical about trying to make software easy to use.
I think that normal people are interested in the following things via social media. First and foremost, they want to communicate, both synchronously and asynchronously, with friends and family. They want an environment that prevents abuse and trolling, and I know that this conflicts to some degree with the desire to communicate. They would like a single environment that would provide them with a trouble-free way of making online purchases. They also enjoy meeting new people online and enjoy playing online games with them. Do you have any additional thoughts?
I don’t think they have much interest in data security and many of them have little or no knowledge about this topic. This does not make this topic unimportant if you have an objective of providing a truly secure environment.
Thanks for that background, Jonathan. I think that you are right about most people not caring about — or even fully understanding — most security issues when it comes to the platforms that they use. As to what people do care about, it’s hard to talk in generalities since so much of that depends on the values that various segments of the public maintain. With that said, I think that Maslow did a pretty good job of articulating the basic needs: physiological, safety, love/belonging, esteem, self-actualization. Frameworks like Spiral Dynamics or Ken Wilber’s Integral Theory do a decent job of segmenting people by these and related drivers. There is also a lot of great work in personality segmenting, like Myers-Briggs, the Predictive Index, and even the Enneagram, that is helpful for understanding what really drives people. I believe that this is the level of nuance we need to move to in order to understand how we shape the new code within the code.
I think that one big necessity is quite a bit simpler than you have been thinking about. I am a fanatic about making things easy to use and this is the basic problem with much social media software. I think that the current functionality is sufficient but ease of use barriers inhibit the use of social media software. It should feel rock solid to a non-techie and should demand no thought or effort once the purposes of the software are assimilated. It is here that I intend to spend some time.
Ease of use is very important. When you look at designs like what Netflix has compared to something like say Twitter, there’s a huge difference in usability.
Ultimately though, I still believe that we need to focus much more deeply on what we are really hoping to get from these systems. Not just the knee jerk response of what satisfices or gets people to click, but on what is really going to improve our lives.
I think that what you are saying is vital, but I think that any moves in this direction will be less successful than hoped without a sound foundation of human factors. This is why I am concentrating in this area, with the hope of using a good infrastructure to move forward in terms of value to humans.
Hi Gideon –
Great stuff as always! In construction we have a ‘code within a code.’ It’s called the IBC, or International Building Code. It originated from the 1927 Uniform Building Code, which in turn originated from municipal codes throughout the states. These, of course, were initially adopted from English standards from the 1600s, which can be traced back to the Code of Hammurabi. (Here is the Wikipedia write-up of all that: https://en.wikipedia.org/wiki/Building_code) My suspicion is the king instituted standards after losing buildings and people to shoddy construction.
Perhaps to start building momentum towards a uniform computer code, it has to be demonstrated to ‘the king’ that he is losing significant power or tax revenue to the tech giants’ laissez-faire coding systems. Interstate taxes are a good start; from there, move to taxation without representation, which could open a lobby for technology civil rights. Maybe organize a union of concerned coders who are tired of their code getting stolen, and move from there. Perhaps ally with some of the tech trust-busting momentum at the federal level.
Unfortunately, the laws of motion say an object will continue its course until acted upon by an outside force. This means checking the monopoly power of big tech will take significant federal power. I don’t know what kind of political will you’ll find in DC right now. Trump’s campaign profited from Facebook propaganda, so an unholy alliance has been struck. Amazon just opened HQ2 right next door to DC and got the federal courts to pull the $10B JEDI contract from MSFT. I’m not sure what kind of contracts Google and MSFT currently have with the feds. I’ll bet FANG considers their code their business and will fight tooth and nail to prevent government supervision. You’d think Russia hacking the election would be a big enough wake-up call, but Mueller’s dissent got quashed in Congress. So whatever is going to disrupt big tech’s grip on federal oversight has to be bigger than rigged presidential elections.
This just leaves ‘creative destruction,’ or innovation that saddles the big four with billions in obsolete capital. My dad likes to point out that the ‘Nifty Fifty,’ the top growth companies of the ’70s, are no longer on top. AT&T, GE, Kodak, Polaroid, Sears, Black & Decker, and the rest are all has-beens. In the meantime, hammer away at antitrust laws to keep the big four from buying out all their competition, so new technology can emerge.
Thanks for all the great thoughts, Doug. Technology does have something like the IBC in the form of open standards. The difference is that they are not enforced by a government authority. The amazing thing about the web, at least in its original form, was that it really did run largely on those protocols. Over time that got walled in. For example, the social networks stole notifications from the blogosphere by undercutting RSS, then built a castle wall around them.
As I note in the comment to Jonathan above, I tend to agree with your father: disruption is our best path out of this. The question is how do we design that disruption so that it is of simultaneous service to the individual and to the greater good?
The solutions that I will be getting into in the installments to come will blend various aspects of openness and decentralization with stakeholder and mission-driven principles, even throwing in a bit of policy and very specific kinds of antitrust enforcement.
It’s going to take some hard work. The answers are not completely obvious. Otherwise, we probably wouldn’t be in the jam we’re currently in. But I do have faith that there are answers, and will be sharing some of what I’m fleshing out. I’m hoping to continue to hear more from others as well.
Cool! I’m looking forward to hearing more. Thanks for taking the time to explain this to the rest of us non-tech folks. Cheers, and keep up the good work!
Thanks, Doug. It means a lot to get that feedback.