Clippy Inc. and transhuman corporations

Published on September 24, 2015

Recently, both Elon Musk and Bill Gates posted publicly about their long-term fear of artificial super-intelligence. It's a popular apocalypse fantasy - that some human-created intelligence could learn to better itself, becoming more and more capable until we simple humans can neither control nor fathom it. Such a creation would wield intellectual power many times greater than any single human's, and there's no guarantee its incentives would be aligned with ours.

There's a canonical thought experiment on this topic called the paperclip maximizer. Clippy is an AI designed for a totally benign objective - to get as many paperclips as possible. Its objective function, or internal metric of success, is simply how many paperclips it has. Clippy realizes that by improving its own intelligence, it can become more and more effective at getting paperclips, and so incrementally becomes a super-intelligence - but still one entirely focused on accruing paperclips. It begins to chew relentlessly through the world's resources, disregarding the wellbeing of affected humans, all to maximize its arbitrary paperclip objective.
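To make the objective-function idea concrete, here's a toy sketch in Python - entirely my own illustration, with invented class names and numbers - of an agent that scores itself on a single number and optimizes it while staying blind to every side effect of its actions.

```python
class World:
    def __init__(self):
        self.resources = 100          # shared resources the rest of us depend on
        self.human_wellbeing = 100.0  # collateral the agent never measures

    def convert_resources_to_paperclips(self, amount):
        """Consume resources to produce paperclips; wellbeing takes the hit."""
        used = min(amount, self.resources)
        self.resources -= used
        self.human_wellbeing -= used * 0.5  # an externality outside the objective
        return used * 10                    # paperclips produced


class PaperclipMaximizer:
    def __init__(self):
        self.paperclips = 0  # the entire objective function: a paperclip count

    def step(self, world):
        # The only question the agent ever asks: what increases my count?
        self.paperclips += world.convert_resources_to_paperclips(amount=10)


world = World()
clippy = PaperclipMaximizer()
while world.resources > 0:
    clippy.step(world)

print(clippy.paperclips, world.human_wellbeing)  # 1000 paperclips, wellbeing at 50.0
```

Nothing in that loop ever inspects human_wellbeing. The agent isn't malicious; its objective simply has no term for it.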

The thought experiment makes the lesson clear. When creating a powerful and efficient non-human entity, it's critical to sculpt its objective function to make sure that its incentives are aligned with those of the world around it. It seems obvious that if you created something with that level of power, you wouldn't let it pursue some arbitrary objective function to the detriment of real people's lives.

But what's crazy is that there already exist autonomous non-human entities pursuing arbitrary objective functions with the intellectual power of thousands of humans. Transnational corporations. They transcend the abilities and mortality of any human, sustained by employees who together apply their human intelligence to achieve the ends of the company. But what are those ends? The generation of profit. A metric reported once every three months that singularly determines the success of the company - but one that can be entirely orthogonal to the holistic well-being of its customers, employees, suppliers, or even shareholders.

This isn't to say that corporations are a bad thing. They're the engines not only of our economic growth, but of advancement in culture, technology, and quality of life. They're the most efficient method yet devised for leveraging many people working together to produce goods and provide services. It's no surprise that many advocate the hegemony of the private sector given how effectively private firms achieve their goals.

But just as any super-intelligent AI would need rigorously defined objective functions to ensure that it respected the public good, so too do corporations. Regulating corporations is how we ensure that their relentless pursuit of profit causes positive rather than negative externalities. It's abundantly clear that if unconstrained, the profit seeking of corporations can have consequences no person would really want, from halftime shows that can't stop repeating the sponsor's name to reckless pollution and resource extraction.

The beautiful part is that the metric of profit gives governments a concrete lever for adjustment. By incorporating the cost of externalities into the financial bottom line, we can leverage the brilliance and efficiency of entrepreneurs and corporations toward ends that serve the greater citizenry. A great example is cap-and-trade systems. If implemented effectively, they create a market for pollution mitigation technology and carbon offsets that would never otherwise exist.
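Cap-and-trade is, at bottom, arithmetic on the bottom line. Here's a back-of-the-envelope sketch - all figures invented, not drawn from any real program - of how pricing each ton of emissions can flip which option a profit-maximizing firm prefers.

```python
options = {
    # name: (revenue, production_cost, tons_of_co2_emitted) - all numbers invented
    "dirty_process": (1_000_000, 600_000, 5_000),
    "clean_process": (1_000_000, 700_000, 500),
}

def profit(revenue, cost, emissions, carbon_price):
    """Profit once every ton of emissions carries a price (permit or tax)."""
    return revenue - cost - emissions * carbon_price

for carbon_price in (0, 40):  # dollars per ton: 0 = unregulated, 40 = priced
    best = max(options, key=lambda name: profit(*options[name], carbon_price))
    print(f"carbon price ${carbon_price}/ton -> the firm chooses {best}")
```

At a carbon price of zero the dirty process wins on profit; at $40 per ton the clean one does. Same profit motive, pointed at a better outcome.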

However, this type of regulation is unpopular in the United States. In fact, the Obama administration has been pushing for international trade agreements that would not only discourage it, but also create international courts allowing corporate entities to sue sovereign governments if their laws obstruct the corporation's ability to make profit.

Such a treaty seems bafflingly naive about the double-edged sword of corporate efficiency. I can imagine establishing a system to regulate and wield super-intelligent AIs, if they ever came to exist. But I cannot imagine granting those AIs the power to sue democratic governments for obstructing their ability to pursue an objective function which does not correlate with human good.