Next Generation Governance Models for OpenAI

OpenAI’s recent drama is now over, and the upshot is that the board that was supposed to save us from an A.I. apocalypse essentially failed. The company charter charged the board with creating AI that “benefits all of humanity,” and because the board can really do only one thing – fire the CEO – it did just that. But when the employees effectively unionized and threatened to follow the CEO to Microsoft, the board realized it was in a no-win situation.

First, the idea of installing a governance system based on a regulatory model was short-sighted. What OpenAI needed instead was a leadership model, one that included the employees and helped bring the organization into alignment with its prime directive. Instead, the board imagined it was in control, did what it was supposed to do, and the employees roared back, “You’re not the boss of me!” We can now see that what they set up at OpenAI is essentially the kind of arrangement that took down Kevin McCarthy.

The Tezos Effect

It’s what I call the Tezos effect. The Tezos ICO raised approximately $232 million. Notably, Tezos’s terms called the funds “a non-refundable donation” rather than a “speculative investment,” and Tezos warned investors that the token might never be issued at all. Just like OpenAI, they put control of those funds into the hands of a non-profit, the Tezos Foundation, and issued a “Transparency Memo” on the Tezos website that promised a sustainable and socially equitable governance system.

However, months after the ICO, the tokens still hadn’t reached investors, because the relationship between the company and its non-profit governance entity had fallen apart. The company claimed that the chairman of the foundation was guilty of “deception and self-dealing,” while the chairman claimed he had become the target of a “character assassination” and reportedly filed a complaint with Swiss regulators. He had also halted all payments within the company until his own contract was settled. This essentially destroyed the company’s chances of success. When asked how this could have happened, one co-founder said that the chairman was a scorpion, and the CEO was the world’s biggest frog.

The problem that both Tezos and OpenAI suffered from was that they did not understand human nature, and did not build that understanding into their governance solutions. This raises the question: how can you build a governance system that doesn’t destroy the human race? Is that even possible?

Next Gen Governance Models

There are many possible strategies to ensure fairness: the UN model, with a “security council” that can veto decisions; installing a poison pill in the charter; or establishing a “separation of powers” system like that of the US government. However, the model I like best for decentralized management of the commons is based on the work of the economist Elinor Ostrom (https://www.econlib.org/library/Enc/bios/Ostrom.html).

This is a problem known as “equitable management of the commons,” and it might be solved using the approaches to self-emergent collective resource management developed by Ostrom, a Nobel laureate who devoted her career to the very difficult issue of community governance. Most economists of her time supported rational choice theory, which takes individual self-interest as its underlying principle.

Her best example is the Swiss alpine cheese makers. They had a commons problem: they live very high in the mountains, and they share a grazing commons for their cattle.

They have a simple rule: if you’ve got three cows, you can pasture those three cows in the commons, provided you carried them over from last winter. You can’t bring new cows in just for the summer. It’s very costly to carry cows over the winter: they need to be kept in heated barns, and they have to be fed.

This solution has lasted more than 800 years, since it was established around 1200 A.D. Basically, the cheese makers tie the right to use the commons to a private property right: the cows themselves.
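To make the mechanism concrete, here is a minimal sketch in Python of the wintering rule, under the simplifying assumption that grazing rights are allocated one-for-one against overwintered cows (the function name and numbers are illustrative, not historical detail):

```python
def summer_grazing_quota(cows_overwintered: int, cows_requested: int) -> int:
    """Swiss alpine commons rule: you may pasture no more cows on the
    summer commons than you carried over the winter at your own cost.
    Overwintering is expensive (heated barns, feed), so the quota is
    self-limiting without any central regulator."""
    return min(cows_requested, cows_overwintered)

# A herder who wintered 3 cows may pasture 3, even if he asks for 5.
assert summer_grazing_quota(cows_overwintered=3, cows_requested=5) == 3
```

The costly private investment caps the draw on the shared resource, which is exactly the tie between private property and commons rights that Ostrom highlighted.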

Ostrom’s research dealt with how groups can avoid the tragedy of the commons, without top-down regulation, by addressing certain coordination challenges. She summarized the conditions for effective governance of common-pool resources in eight core design principles:

  • Clearly defined boundaries
  • Proportional equivalence between benefits and costs
  • Collective choice arrangements
  • Monitoring
  • Graduated sanctions
  • Fast and fair conflict resolution
  • Local autonomy
  • Appropriate relations with other tiers of rule-making authority

It should be noted that Ostrom’s approach is especially pertinent to the concept of major evolutionary transitions, whereby members of groups become so cooperative that the group becomes a higher-level organism in its own right. This idea was originally proposed by cell biologist Lynn Margulis (1970) to explain how nucleated cells evolved from symbiotic associations of bacteria. It was then generalized during the 1990s to explain other major transitions, such as the rise of the first bacterial cells, the emergence of multicellular organisms through coadunation, social insect colonies, and even human evolution.

Applying Ostrom’s Principles to OpenAI

Applying Ostrom’s framework to OpenAI governance requires a thoughtful integration of her principles to ensure equitable and sustainable development in the field of artificial intelligence. Let’s go through the key principles:

First, ensuring proportional equivalence between benefits and costs within the OpenAI ecosystem is paramount. Ostrom’s principle underscores the significance of aligning the gains from AI advancements with the costs borne by various stakeholders. This requires a nuanced understanding of how AI technologies impact different sectors of society, and a commitment to distributing the benefits equitably.

Next, collective choice arrangements are integral to fostering inclusive decision-making processes within OpenAI governance. Ostrom’s emphasis on participatory decision-making resonates profoundly, advocating for involving diverse stakeholders in the formulation of AI policies. This approach encourages the co-creation of rules and norms, ensuring that the governance structure is reflective of the varied perspectives and interests of the AI community.

Third, monitoring mechanisms play a pivotal role in upholding accountability and transparency within OpenAI governance. Ostrom’s principle of monitoring underscores the need for robust oversight to prevent misuse or exploitation of AI technologies. Implementing comprehensive monitoring frameworks is imperative to track the impact of AI systems, identify potential risks, and enforce compliance with established rules.

Fourth, graduated sanctions and incentives are vital to deter undesirable behaviors and promote positive innovation within the AI domain. Echoing Ostrom’s principle, a system of escalating penalties for violations of ethical guidelines or misuse of AI technologies can act as a deterrent while still allowing for corrective action and rehabilitation. This is possible, I believe, with some tweaks to the capitalization model. Requiring that the board and Altman not receive any equity was short-sighted. Instead, everyone should get equity, but with graduated sanctions for failing to keep AI safe, and bonuses and incentives for brilliant ideas that make safety happen. Then everyone will pull their oars in the same direction. This has to include Microsoft.
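To make the idea concrete, here is a minimal sketch in Python of what a graduated sanctions-and-incentives schedule tied to unvested equity might look like. The class, schedule, and percentages are all hypothetical illustrations of the mechanism, not proposed figures:

```python
from dataclasses import dataclass

# Hypothetical escalating penalty schedule: each repeat violation costs
# a larger fraction of a stakeholder's unvested equity.
SANCTION_SCHEDULE = [0.01, 0.05, 0.15, 0.40]  # illustrative fractions only

@dataclass
class Stakeholder:
    name: str
    unvested_equity: float   # equity units still at risk
    violations: int = 0      # confirmed safety violations so far

    def record_violation(self) -> float:
        """Apply the next graduated sanction; return the equity forfeited."""
        step = min(self.violations, len(SANCTION_SCHEDULE) - 1)
        forfeited = self.unvested_equity * SANCTION_SCHEDULE[step]
        self.unvested_equity -= forfeited
        self.violations += 1
        return forfeited

    def record_safety_bonus(self, bonus_fraction: float = 0.02) -> float:
        """Grant an incentive bonus for a verified safety contribution."""
        bonus = self.unvested_equity * bonus_fraction
        self.unvested_equity += bonus
        return bonus

# A first offense costs 1% of unvested equity; a fourth costs 40%.
dev = Stakeholder(name="example_engineer", unvested_equity=1000.0)
dev.record_violation()  # forfeits 10.0 units
```

The design point is simply that penalties escalate with repeat violations while verified safety work is rewarded, so that safety and economic interest pull in the same direction.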

Next, fast and fair conflict resolution mechanisms are indispensable for promptly addressing disputes and grievances within the OpenAI community. Aligning with Ostrom’s principles, efficient dispute-resolution channels ensure the timely and just settlement of conflicts, fostering trust and cohesion among stakeholders. In other words, it would be good to accelerate the development of greater EQ within the team, which is a bit on the green side and has had similar problems before. Hiring a Chief Psychological Officer – someone who oversees the development of EQ for both AI and people – may not be a bad idea. But the primary goal should be to increase group cohesion, perhaps by training non-violent communication (NVC) skills and increasing the diversity and EQ of the board.

Finally, establishing appropriate relations with other tiers of rule-making authority, as advocated by Ostrom, requires collaboration and alignment with broader governance frameworks, international regulations, and ethical standards, ensuring coherence and harmonization in AI governance across multiple jurisdictions. This has to be done in a new way, not by simply funding lobbyists. One possible model is for OpenAI’s foundation to fund a DAO (i.e., a Decentralized Autonomous Organization) to serve as a decentralized early-warning indicator for the emergence of questionable AI capabilities – essentially a network for AI whistleblowers, with the DAO underwriting the costs and personal impact of being a good citizen and alerting authorities when provable transgressions emerge.

The goal is to ensure that AGI aligns with human values and goals, so as to prevent unintended consequences. But to do so, we need an early warning system that can detect symptoms of both malicious and unintentional harm. For example, coerced programmers forced to work for a phishing operation would be able to find a way to safely undermine the criminal enterprise. Principled programmers at a defense contractor could alert authorities, through a double-blind but verified tip line, that a line has been crossed with AIs that hunt down and kill humans, without violating their NDAs. A political campaign that crosses the line could be prevented from deploying AI that spreads false information to manipulate people and votes. Even subtler problems, like bias amplification that leads to discriminatory outcomes, could be reported.
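Here is a toy sketch, in Python, of the “verified but anonymous” half of that tip-line idea: a registrar hands vetted insiders secret credentials and stores only their hashes, so reviewers can confirm a tip came from a legitimate insider without learning which one. Everything here is an illustrative assumption; a real deployment would need much stronger anonymity guarantees (e.g., blind signatures or onion routing):

```python
import hashlib
import secrets
from dataclasses import dataclass, field

@dataclass
class TipLine:
    """Toy verified-but-pseudonymous tip line (sketch only)."""
    _registered: set = field(default_factory=set)  # hashes of issued credentials
    tips: list = field(default_factory=list)       # (pseudonym, report) pairs

    def issue_credential(self) -> str:
        """Registrar side: hand a fresh secret credential to a vetted insider.
        Only the hash is stored, so the tip line never learns identities."""
        credential = secrets.token_hex(32)
        self._registered.add(hashlib.sha256(credential.encode()).hexdigest())
        return credential

    def submit(self, credential: str, report: str) -> bool:
        """Reviewer side: accept a tip only from holders of valid credentials."""
        pseudonym = hashlib.sha256(credential.encode()).hexdigest()
        if pseudonym not in self._registered:
            return False
        self.tips.append((pseudonym[:8], report))  # store a short pseudonym only
        return True

# Example: an insider reports a transgression without revealing identity.
line = TipLine()
cred = line.issue_credential()
assert line.submit(cred, "Model deployed without a safety review.")
```

The separation of roles matters: whoever vets insiders and issues credentials never sees the tips, and whoever reviews the tips never sees the identities, which is what makes the line “double-blind.”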

Fast Forward

Incorporating Elinor Ostrom’s principles into OpenAI governance provides a robust foundation for fostering responsible, inclusive, and sustainable development of artificial intelligence. Balancing these principles in tandem with the evolving landscape of AI technology is critical to navigating the challenges and harnessing the immense potential of this transformative field.

The most important thing is to transition from repeating a regulatory model that tries to “fix the bugs in the governance system” to a new mindset of building a “coadunated organization” – one that transforms all of OpenAI into a place where everyone wants the same thing: to succeed economically while also ensuring the emergence of safe AGI. It should be acknowledged that the prior system failed due to a lack of leadership, and that the goal now should be to stay true to the original vision in a way that turns OpenAI into a model for 21st-century organizations.

Because the AI industry is moving at an accelerating rate, this is something that needs to be explored and implemented with alacrity. It’s not easy work, but it’s vital for the future of humanity.