No more human rights? Wait. No more lawyers??
28 September 2016
Not only is God dead, says Israeli professor Yuval Noah Harari, but humanism is on its way out, along with its paraphernalia of human rights instruments and lawyers for their implementation and enforcement. Whilst they and we argue about equality, racism, feminism, discrimination and all the other shibboleths of the humanist era, silicon-based algorithms are quietly taking over the world.
His new book, Homo Deus, is the sequel to Sapiens, reviewed on the UKHRB last year. Sapiens was “a brief history of humankind”, encompassing some seventy thousand years. Homo Deus considers the future of humankind and whether we are going to survive in our present form, not even for another thousand years, but for a mere 200 years, given the rise of huge new forces of technology, of data, and of the potential of permissive rather than merely preventative medicine.
We are suddenly showing unprecedented interest in the fate of so-called lower life forms, perhaps because we are about to become one.
Harari’s message in Sapiens was that the success of the human animal rests on one phenomenon: our ability to create fictions, spread them about, believe in them, and then cooperate on an unprecedented scale. These fictions include not only gods, but other ideas we think fundamental to life, such as money, human rights, states and institutions. In Homo Deus he investigates what happens when these mythologies meet the god-like technologies we have created in modern times.
In particular, he scrutinises the rise and current hold of humanism, which he regards as no more secure than the religions it replaced. Humanism is based on the notion of individuality and the fundamental tenet that each and everybody’s feelings and experiences are of equal value, by virtue of being human. Humanism cannot continue as a credible thesis if the concept of individuality is constantly undermined by scientific discoveries, such as the split brain, and pre-conscious brain activity that shows that decisions are not made as a result of conscious will (see the sections on Gazzaniga’s and Kahneman’s experiments in Chapter 8 “The Time Bomb in the Laboratory”).
…once biologists concluded that organisms are algorithms, they dismantled the wall between the organic and inorganic, turned the computer revolution from a purely mechanical affair into a biological cataclysm, and shifted authority from individual humans to networked algorithms.
… The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality, and promise to provide medicine, education and entertainment customised to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their working with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.
What does this mean for society? For a start, technological enhancement will not prove a global panacea for humanity’s woes. As computing power grows exponentially, most of us will be left behind by the elites who are able to keep up and to benefit, biologically and economically, from this power.
As algorithms push humans out of the job market, wealth might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social inequality. Alternatively, the algorithms might not only manage businesses, but actually come to own them. At present, human law already recognises intersubjective entities like corporations and nations as “legal persons”. Though Toyota or Argentina has neither a body nor a mind, they are subject to international laws, they can own land and money, and they can sue and be sued in court. We might soon grant similar status to algorithms. An algorithm could then own a venture capital fund without having to obey the wishes of any human master.
Professionals are no less at risk of this takeover than blue-collar workers. Doctors rely on pattern recognition to diagnose disease and to match therapy with their patients’ medical histories and the latest drugs on the market. Algorithmic machines can do this more accurately and much, much faster. Equally, the world’s lawyers, whether negotiating a huge business contract that shields a corporate client from legal liability, or assessing a client’s exposure to a negligence suit, will have to cede their place to the superior calculating power of robots.
What will be the fate of all these lawyers once sophisticated search algorithms can locate more precedents in a day than a human can in a lifetime…? Where will that leave millions of lawyers, judges, cops and detectives? They might need to go back to school and learn a new profession. [for more on this, see Jordan Weissmann, “iLawyer: What Happens When Computers Replace Attorneys?“, Atlantic, 19 June 2012]
This fascinating and powerful account is neither a prophecy of doom nor a call to arms. Harari is simply alerting us to what is happening under our noses and above our heads, and warning us not to be surprised when Homo sapiens splits into two or more species, with the elite, bionically enhanced Homo deus at the top and the “useless masses”, that is, most of us, left behind. The technological bonanza might make it feasible to feed and support these masses, but what will keep them occupied and content? What will people do all day?
One solution might be offered by drugs and computer games. Unnecessary people might spend increasing amounts of time within 3D virtual-reality worlds, which would provide them with far more excitement and emotional engagement than the drab reality outside. Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences. What’s so sacred in useless bums who pass their days devouring artificial experiences in La La Land?
The debate in this book needs to spread beyond the corporate heads of Silicon Valley. In the last twenty years the most important change has been the internet and the information revolution, but none of us debated it or voted on it. Equally, decisions about critical developments in AI are being made without any discussion, even at a political level. Speaking at an Intelligence Squared event earlier this year, Harari pointed out that in the run-up to the US elections nobody was talking about artificial intelligence or biotechnology. It is not that politicians are hiding dangerous views on the subject; they are simply not thinking about it. All the debates, about immigration, Islam, gay rights, abortion and so on, concern issues we understand, whereas we should be talking about the “very important stuff” which we don’t understand. So these decisions are being made by a very small number of people who represent nobody. How do we get ourselves up to speed on this subject? These issues are not for some future generation to worry about; we should worry about them today.
This is really happening; it’s not science fiction. And it’s happening quickly. It already has a huge impact on our lives today and it will have a huge impact in the coming decades.
And this question involves day-to-day decisions that we all make, not some huge government or corporate decision; for example, it revolves around how much authority you give to your smart phone to manage your life for you, and how much information you give for free.
It’s striking that for more and more people the most important asset they still have is their personal information; from the perspective of the system, it is the most valuable thing they possess. And we’re giving it away for free in exchange for email services and funny cat videos.
Harari relates this exchange to European imperialists in the 16th and 17th centuries buying huge territories for a few colourful beads. We’re exchanging valuable personal information for a few beads; nobody is forcing us to do it. We can continue to ignore the rise of the intelligent machines and blithely hand over the precious data that they depend on. Or we can resist it – but such resistance is far too late. The best we can do is shape our own future in the company of these god-like powers; it’s in our hands.
His name is Harari, not Hariri.
Thank you David! Spelling stuck in my head for some reason, even though the book lies in front of me. Correcting now.
I think Harari’s thesis fails the test of looking at historical precedent to see what has happened before. Such inequality would lead to revolutions within countries, and wars between them. Computer systems are inherently fragile, and they do not extrapolate well, with no likelihood of this changing in the near future. Hackers and crackers have shown themselves able to disrupt many systems, and the higher the rewards of winning, the more likely it is that vulnerabilities (which we know are being hoarded by State intelligence agencies so they don’t get fixed) will be found and exploited.
Another apparent flaw in his reasoning, at least as described here, is that individuality will be lost due to seeing the body as a series of systems. This has been the case since the Enlightenment, and we haven’t seen much withdrawal from individualism – indeed, Human Rights are a logical extension of Enlightenment thought. Transhumanist groups all emphasise that advances in medical science will lead to more respect for individualism, not less.
In short, I don’t see any merit in this pessimistic, anti-technology approach.
Post-humanism is and will remain an important thesis, but it will take many generations for it to reach the popular understanding, I think. What we are dealing with is a shift in paradigms, or in the dominant metaphor used to express our understanding of ourselves in relation to the world. Human rights lawyers have more immediate threats to consider, such as the current government’s view on their field.