Don't measure people served; measure the turnover rate (e.g., people rehabilitated)

[Wikipedians are rebelling against "unethical" Wikipedia fundraising banners | Hacker News](https://news.ycombinator.com/item?id=33609240)

- NPOs tend to prey on the good faith of others
- they often exploit guilt and shame to get there

[W3C re-launched as a public-interest non-profit organization | Hacker News](https://news.ycombinator.com/item?id=34595456)
[W3C re-launched as a public-interest non-profit organization | 2023 | News | W3C](https://www.w3.org/news/2023/w3c-re-launched-as-a-public-interest-non-profit-organization/)

- a traditional NPO stands in contrast to a university affiliate

[Shared post - The Wiki Piggy Bank](https://lunduke.locals.com/post/4458111/the-wiki-piggy-bank)

- the cause should ALWAYS be perfectly clear, and your organization WILL devolve into a political activism group if it isn't
- also: [WikiBias: How Wikipedia erases "fringe theories" and enforces conformity - Minding The Campus](https://www.mindingthecampus.org/2023/05/02/wikibias-how-wikipedia-erases-fringe-theories-and-enforces-conformity/)

[Wikifunctions | Hacker News](https://news.ycombinator.com/item?id=38548130)
[Introducing Wikifunctions: first Wikimedia project to launch in a decade creates new forms of knowledge - Wikimedia Foundation](https://wikimediafoundation.org/news/2023/12/05/introducing-wikifunctions-first-wikimedia-project-to-launch-in-a-decade-creates-new-forms-of-knowledge/)
[Welcome to Wikifunctions | Hacker News](https://news.ycombinator.com/item?id=36927695)
[Wikifunctions](https://www.wikifunctions.org/wiki/Wikifunctions:Main_Page)

- every NPO wants to be "the" central repository for something

In some ways, NPOs are more vicious, mostly because the scarcity isn't time so much as money

- mismanaged resources may waste a LOT of time, but people can forgive that
- however, the fact that there's less money, and the implication that everyone is performing a [virtuous] task, can cause disenfranchisement WAY faster

The secret to good NPO fundraising is giving people selfish incentives

- offer a status-enhancing bauble, like their name on a placard or a collectible trinket that marks their donor status
- the idea is to appeal to their self-conceit as well as their sense of altruism: they get to be selfish and a good person at the same time, a win/win

Nonprofits aren't always isolated from other entities

- they can be a branch of a for-profit organization (e.g., Ronald McDonald House Charities)
- they can HAVE for-profit branches (e.g., Raspberry Pi, Mozilla)
- they can have a board that defines motivations that create secondary benefit to for-profit ventures
- this becomes even more apparent when political activism is involved
- the only way to [stay legally safe] and live [the good life] is to say clearly what the organization's intent is, then do it

## educational institutions

colleges have historically needed 1800-2500 students to be financially viable

- this threshold may be lower with the internet and other technologies, but only to the degree that they remove payroll requirements without an equivalent hike in other expenses

## fundraising

[Wikipedia is swimming in money-why is it begging people to donate? | Hacker News](https://news.ycombinator.com/item?id=27339887)
[Wikipedia Endowment: Is Wikipedia Going Broke?](https://www.dailydot.com/debug/wikipedia-endownemnt-fundraising/)
[Wikipedia is not short on cash | Hacker News](https://news.ycombinator.com/item?id=33174533)
[The next time Wikipedia asks for a donation, ignore it - UnHerd](https://unherd.com/newsroom/the-next-time-wikipedia-asks-for-a-donation-ignore-it/)
[Donate Unrestricted](https://paulgraham.com/donate.html)
[GitHub - KevinHock/awesome-charity-ideas: A collection of ideas to raise money for charities.](https://github.com/KevinHock/awesome-charity-ideas)

## partnerships

[Cloudflare and the Wayback Machine, joining forces for a more reliable Web | Hacker News](https://news.ycombinator.com/item?id=24504080)
[Cloudflare and the Wayback Machine, joining forces for a more reliable Web | Internet Archive Blogs](https://blog.archive.org/2020/09/17/internet-archive-partners-with-cloudflare-to-help-make-the-web-more-useful-and-reliable/)

- the alliances will arise out of common interest, and they're usually naturally occurring

[OpenAI and Apple Announce Partnership | Hacker News](https://news.ycombinator.com/item?id=40636980)
[OpenAI and Apple announce partnership | OpenAI](https://openai.com/index/openai-and-apple-announce-partnership/)

## competing with for-profit alternatives

[Ask HN: Why is Firefox losing marketshare and how would you save it? | Hacker News](https://news.ycombinator.com/item?id=30335455)

## NPO board

[Richard Stallman is coming back to the board of the FSF | Hacker News](https://news.ycombinator.com/item?id=26535224)
[Richard Stallman is Coming Back to the Board of the Free Software Foundation, Founded by Himself 35 Years Ago (Updatedx3)](https://techrights.org/o/2021/03/21/richard-stallman-is-coming-back-to-the-board-of-the-free-software-foundation-founded-by-himself-35-years-ago/)
[CEO of data privacy company Onerep.com founded dozens of people-search firms | Hacker News](https://news.ycombinator.com/item?id=39709089)
[CEO of Data Privacy Company Onerep.com Founded Dozens of People-Search Firms - Krebs on Security](https://krebsonsecurity.com/2024/03/ceo-of-data-privacy-company-onerep-com-founded-dozens-of-people-search-firms/)
[Mozilla Drops Onerep After CEO Admits to Running People-Search Networks | Hacker News](https://news.ycombinator.com/item?id=39793754)
[Mozilla Drops Onerep After CEO Admits to Running People-Search Networks - Krebs on Security](https://krebsonsecurity.com/2024/03/mozilla-drops-onerep-after-ceo-admits-to-running-people-search-networks/)

- BE CAREFUL WHO YOU PICK

### paying leadership

[Mozilla names new CEO as it pivots to data privacy | Hacker News](https://news.ycombinator.com/item?id=39302744)
[Exclusive: Mozilla names new CEO as it doubles down on data privacy | Fortune](https://fortune.com/2024/02/08/mozilla-firefox-ceo-laura-chambers-mitchell-baker-leadership-transition/)
[With revenue declining, Mozilla CEO gets a 20% raise | Hacker News](https://news.ycombinator.com/item?id=38849580)
[Mozilla CEO wants business to pick up the pace • The Register](https://www.theregister.com/2024/01/02/mozilla_in_2024_ai_privacy/)

## NPO startups

[Nonprofit Business Plan Templates | Smartsheet](https://www.smartsheet.com/content/non-profit-business-plan-templates)
[Julius Sweetland | creating OptiKey - Speech and full computer control using only y | Patreon](https://www.patreon.com/OptiKey)

## NPO stories
[A Free Accredited Bachelor's Degree in Computer Science - How Do We Get There?](https://www.freecodecamp.org/news/free-accredited-bachelors-degrees-in-computer-science-how-do-we-get-there)

### Sci-Hub

[Today Sci-Hub is 10 years old. I'll publish 2M new articles to celebrate | Hacker News](https://news.ycombinator.com/item?id=28421477)
[Alexandra Elbakyan on X: "Today is Sci-Hub anniversary the project is 10 years old! I'm going to publish 2,337,229 new articles to celebrate the date. They will be available on the website in a few hours (how about the lawsuit in India you may ask: our lawyers say that restriction is expired already) https://t.co/ynF1sMsAuf" / X](https://twitter.com/ringo_ring/status/1434356217208623106)

- technically Sci-Hub is [piracy], but for a good cause

[Sci-Hub is fundraising | Hacker News](https://news.ycombinator.com/item?id=28100744)
[Sci-Hub: removing barriers in the way of science](https://web.archive.org/web/20210810140945/https://sci-hub.do/donate)
[Major U.K. science funder to require grantees to make papers immediately free | Hacker News](https://news.ycombinator.com/item?id=28105966)
[Major U.K. science funder to require grantees to make papers immediately free to all | Science | AAAS](https://www.science.org/content/article/major-uk-science-funder-require-grantees-make-papers-immediately-free-all)

### OpenAI

[OpenAI appoints new boss as Sam Altman joins Microsoft in Silicon Valley twist](https://www.msn.com/en-us/money/companies/openai-appoints-new-boss-as-sam-altman-joins-microsoft-in-silicon-valley-twist/ar-AA1keOkx?ocid=windirect&cvid=147ed55e13834b49b5f8935286cbc4ba&ei=8)

- while you may have the freedom to define your board as you see fit, you are STILL beholden to the people who pay your NPO!
- i.e., you may get your way, but good luck getting more money from them.

[OpenAI's board has fired Sam Altman | Hacker News](https://news.ycombinator.com/item?id=38309611)
[OpenAI announces leadership transition](https://openai.com/blog/openai-announces-leadership-transition)
[Microsoft was blindsided by OpenAI's ouster of CEO Sam Altman | Hacker News](https://news.ycombinator.com/item?id=38312372)
[Microsoft is a key investor in OpenAI. It was blindsided by Sam Altman's exit.](https://www.axios.com/2023/11/17/microsoft-openai-sam-altman-ouster)
[Sam Altman returns as CEO, OpenAI has a new initial board | Hacker News](https://news.ycombinator.com/item?id=38467850)
[Sam Altman returns as CEO, OpenAI has a new initial board](https://openai.com/blog/sam-altman-returns-as-ceo-openai-has-a-new-initial-board)
[OpenAI board reappoints Altman and adds three other directors | Hacker News](https://news.ycombinator.com/item?id=39647105)
[Sam Altman will return to OpenAI's board with three new directors | Reuters](https://www.reuters.com/technology/sam-altman-return-openais-board-information-reports-2024-03-08/)
[Three senior researchers have resigned from OpenAI | Hacker News](https://news.ycombinator.com/item?id=38316378)
[Ilya Sutskever "at the center" of Altman firing? | Hacker News](https://news.ycombinator.com/item?id=38314299)
[Kara Swisher on X: "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
/ X](https://twitter.com/karaswisher/status/1725702501435941294) [Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI | Hacker News](https://news.ycombinator.com/item?id=38321003) [Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI | Ars Technica](https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/) [OpenAI board in discussions with Sam Altman to return as CEO | Hacker News](https://news.ycombinator.com/item?id=38325552) [OpenAI board in discussions with Sam Altman to return as CEO - The Verge](https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo) [Emmett Shear becomes interim OpenAI CEO as Altman talks break down | Hacker News](https://news.ycombinator.com/item?id=38342643) [Emmett Shear named new CEO of OpenAI by board - The Verge](https://www.theverge.com/2023/11/20/23967515/sam-altman-openai-board-fired-new-ceo) [Sam Altman, Greg Brockman and others to join Microsoft | Hacker News](https://news.ycombinator.com/item?id=38344196) [Satya Nadella on X: "We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett…" / X](https://twitter.com/satyanadella/status/1726509045803336122) [OpenAI's misalignment and Microsoft's gain | Hacker News](https://news.ycombinator.com/item?id=38346869) [OpenAI's Misalignment and Microsoft's Gain - Stratechery by Ben Thompson](https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/) [I deeply regret my participation in the board's actions | Hacker News](https://news.ycombinator.com/item?id=38347501) [Ilya Sutskever on X: "I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company." / X](https://twitter.com/ilyasut/status/1726590052392956028) [OpenAI staff threaten to quit unless board resigns | Hacker News](https://news.ycombinator.com/item?id=38347868) [OpenAI Staff Threaten to Quit Unless Board Resigns | WIRED](https://www.wired.com/story/openai-staff-walk-protest-sam-altman/) [Sam Altman is still trying to return as OpenAI CEO | Hacker News](https://news.ycombinator.com/item?id=38352891) [Sam Altman is still trying to return as OpenAI CEO - The Verge](https://www.theverge.com/2023/11/20/23969586/sam-altman-plotting-return-open-ai-microsoft) [OpenAI's employees were given two explanations for why Sam Altman was fired | Hacker News](https://news.ycombinator.com/item?id=38356534) [OpenAI Employees Given 2 Explanations for Sam Altman's Ouster: Sources](https://www.businessinsider.com/openais-employees-given-explanations-why-sam-altman-out-2023-11) [We have reached an agreement in principle for Sam to return to OpenAI as CEO | Hacker News](https://news.ycombinator.com/item?id=38375239) [OpenAI on X: "We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo. We are collaborating to figure out the details. Thank you so much for your patience through this." 
/ X](https://twitter.com/openai/status/1727206187077370115) [Before OpenAI, Sam Altman was fired from Y Combinator by his mentor | Hacker News](https://news.ycombinator.com/item?id=38378216) [Before OpenAI, Sam Altman was fired from Y Combinator by his mentor - The Washington Post](https://www.washingtonpost.com/technology/2023/11/22/sam-altman-fired-y-combinator-paul-graham/) [The Contradictions of Sam Altman | Hacker News](https://news.ycombinator.com/item?id=35392288) [The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ](https://www.wsj.com/tech/ai/chatgpt-sam-altman-artificial-intelligence-openai-b0e1c8c9) [Hi everyone yes, I left OpenAI yesterday | Hacker News](https://news.ycombinator.com/item?id=39365935) [Andrej Karpathy on X: "Hi everyone yes, I left OpenAI yesterday. First of all nothing "happened" and it's not a result of any particular event, issue or drama (but please keep the conspiracy theories coming as they are highly entertaining :)). Actually, being at OpenAI over the last ~year has been…" / X](https://twitter.com/karpathy/status/1757600075281547344) [OpenAI removes Sam Altman's ownership of its Startup Fund | Hacker News](https://news.ycombinator.com/item?id=39895994) [OpenAI removes Sam Altman's ownership of its Startup Fund | Reuters](https://www.reuters.com/technology/openai-removes-sam-altmans-ownership-its-startup-fund-2024-04-01/) [Ilya Sutskever to leave OpenAI | Hacker News](https://news.ycombinator.com/item?id=40361128) [Ilya Sutskever on X: "After almost a decade, I have made the decision to leave OpenAI.  The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the" / X](https://twitter.com/ilyasut/status/1790517455628198322) [Leaked OpenAI documents reveal aggressive tactics toward former employees | Hacker News](https://news.ycombinator.com/item?id=40447431) [OpenAI NDAs: Leaked documents reveal aggressive tactics toward former employees - Vox](https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees) [OpenAI didn’t copy Scarlett Johansson’s voice for ChatGPT, records show | Hacker News](https://news.ycombinator.com/item?id=40448045) [OpenAI didn’t copy Scarlett Johansson’s voice for ChatGPT, records show - The Washington Post](https://www.washingtonpost.com/technology/2024/05/22/openai-scarlett-johansson-chatgpt-ai-voice/) [Ex-OpenAI board member reveals what led to Sam Altman's brief ousting | Hacker News](https://news.ycombinator.com/item?id=40506582) [Sam Altman Lied to OpenAI Board 'Multiple' Times, Ex-Director Says](https://www.businessinsider.com/openai-board-member-details-sam-altman-lied-allegation-ousted-2024-5) [I got tired of hearing that YC fired Sam, so here's what actually happened | Hacker News](https://news.ycombinator.com/item?id=40521657) [Paul Graham on X: "I got tired of hearing that YC fired Sam, so here's what actually happened: https://t.co/3YvBDH7oqV" / X](https://x.com/paulg/status/1796107666265108940) [OpenAI's plans according to sama | Hacker News](https://news.ycombinator.com/item?id=36141544) [OpenAI's plans according to Sam Altman](https://humanloop.com/blog/openai-plans) [OpenAI and Elon Musk | Hacker News](https://news.ycombinator.com/item?id=39611484) [OpenAI and Elon Musk](https://openai.com/blog/openai-elon-musk) [Elon Musk renews lawsuit against OpenAI | 
WORLD](https://wng.org/sift/elon-musk-renews-lawsuit-against-openai-1722882145)
[OpenAI co-founder John Schulman says he will leave and join rival Anthropic | Hacker News](https://news.ycombinator.com/item?id=41168904)
[OpenAI co-founder John Schulman says he will join rival Anthropic](https://www.cnbc.com/2024/08/06/openai-co-founder-john-schulman-says-he-will-join-rival-anthropic.html)
[ChatGPT maker OpenAI raises $6.6 billion in fresh funding as it moves away from its nonprofit roots - ABC News](https://abcnews.go.com/Technology/wireStory/chatgpt-maker-openai-raises-66-billion-fresh-funding-114443452)

#### AI sorting - Matt Levine

A dumb simple model of artificial intelligence companies is:

1. It would be good to develop good AI (AI that helps humans), but bad to develop bad AI (AI that kills or enslaves humans).
2. If you try to build good AI, there is some risk of building bad AI instead (your robot tricks you into thinking that it’s nice, then enslaves you), so you have to be very very careful. You can’t move too fast; you have to check carefully, at each step, to make sure that your robot is not secretly evil.
3. Company A is formed by idealistic AI researchers who want to create good AI. They work together well for a while.
4. Disagreements develop. Some researchers at Company A say “we need to work faster to build good AI, because if we don’t, someone else will come along and build bad AI first instead.” Others say “no, we can’t work faster, that would compromise our ability to check that the robot is not evil.”
5. The first group wins the argument, for reasons.
6. The people who lose the argument, who are genuinely worried about bad AI, quit Company A in outrage and go start Company B, with the goal of carefully and safely creating good AI.
7. They work together well for a few months.
8. Disagreements develop at Company B. Some researchers say “we need to work faster to build good AI, because otherwise _Company A_ will build bad AI first. That’s why we quit, after all.” Others say “no, we can’t work faster, that would compromise our bad robot checks. _That’s_ why we quit, after all.”
9. The first group wins the argument, for the same reasons as in Step 5.
10. The people who lose the argument quit and start Company C.
11. This keeps repeating: Company C eventually splits over similar tensions, but also Company A and Company B can themselves keep dividing as some people want to move faster than others.
12. Eventually all the AI researchers are very finely sorted by aggressiveness, so that Company Z is full of purists who are too cautious ever to build anything at all, while Company A is full of people who are like “actually being enslaved by robots would be pretty cool.”

This is not accurate in all respects — sometimes the second group [wins the argument for a weekend!](https://link.mail.bloombergbusiness.com/click/35781629.270764/aHR0cHM6Ly93d3cuYmxvb21iZXJnLmNvbS9vcGluaW9uL2FydGljbGVzLzIwMjMtMTEtMjAvd2hvLWNvbnRyb2xzLW9wZW5haT9jbXBpZD1CQkQwNjIwMjRfTU9ORVlTVFVGRiZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fdGVybT0yNDA2MjAmdXRtX2NhbXBhaWduPW1vbmV5c3R1ZmY/60e87ce39a995a4b1a2deb96Bc6e66e59) — but it is an intuitive model that helps to explain [stuff like this](https://link.mail.bloombergbusiness.com/click/35781629.270764/aHR0cHM6Ly93d3cuYmxvb21iZXJnLmNvbS9uZXdzL2FydGljbGVzLzIwMjQtMDYtMTkvb3BlbmFpLWNvLWZvdW5kZXItcGxhbnMtbmV3LWFpLWZvY3VzZWQtcmVzZWFyY2gtbGFiP2NtcGlkPUJCRDA2MjAyNF9NT05FWVNUVUZGJnV0bV9tZWRpdW09ZW1haWwmdXRtX3NvdXJjZT1uZXdzbGV0dGVyJnV0bV90ZXJtPTI0MDYyMCZ1dG1fY2FtcGFpZ249bW9uZXlzdHVmZg/60e87ce39a995a4b1a2deb96B04177745):

> For the past several months, the question “Where’s Ilya?” has become a common refrain within the world of artificial intelligence. Ilya Sutskever, the famed researcher who co-founded OpenAI, took part in the 2023 board ouster of Sam Altman as chief executive officer, before changing course and helping engineer Altman’s return. From that point on, Sutskever went quiet and left his future at OpenAI shrouded in uncertainty. Then, in mid-May, Sutskever announced his departure, saying only that he’d disclose his next project “in due time.”
>
> Now Sutskever is introducing that project, a venture called Safe Superintelligence Inc. aiming to create a safe, powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services. In other words, he’s attempting to continue his work without many of the distractions that rivals such as OpenAI, Google and Anthropic face. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever says in an exclusive interview about his plans. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

OpenAI [was founded](https://link.mail.bloombergbusiness.com/click/35781629.270764/aHR0cHM6Ly93d3cuYmxvb21iZXJnLmNvbS9vcGluaW9uL2FydGljbGVzLzIwMjQtMDMtMDEvb3BlbmFpLWlzbi10LW9wZW4tZW5vdWdoLWZvci1lbG9uP2NtcGlkPUJCRDA2MjAyNF9NT05FWVNUVUZGJnV0bV9tZWRpdW09ZW1haWwmdXRtX3NvdXJjZT1uZXdzbGV0dGVyJnV0bV90ZXJtPTI0MDYyMCZ1dG1fY2FtcGFpZ249bW9uZXlzdHVmZg/60e87ce39a995a4b1a2deb96B13a95f24) to build artificial general intelligence safely, free of outside commercial pressures. And now every once in a while it [shoots out](https://link.mail.bloombergbusiness.com/click/35781629.270764/aHR0cHM6Ly90aW1lLmNvbS82OTgzNDIwL2FudGhyb3BpYy1zdHJ1Y3R1cmUtb3BlbmFpLWluY2VudGl2ZXMv/60e87ce39a995a4b1a2deb96B5889169d) a new AI firm whose mission is to build artificial general intelligence safely, free of the commercial pressures at OpenAI.
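The sorting dynamic above is easy to see in a toy simulation. This sketch is not from the newsletter; it assumes each researcher is just a scalar "aggressiveness" score, that the more aggressive half always wins the argument, and that the losers quit to found the next company.

```python
# Toy sketch (my own illustration, not Levine's) of the "AI sorting" model:
# the more aggressive half of each founding team wins the argument and stays,
# the cautious half quits and founds the next company.
import random

def sort_into_companies(researchers, min_size=2):
    """Split a founding team repeatedly until the splinter is too small to split."""
    companies = []
    team = sorted(researchers, reverse=True)  # most aggressive first
    while len(team) >= min_size:
        half = len(team) // 2
        stayers, quitters = team[:half], team[half:]
        companies.append(stayers)  # this company keeps the move-fast faction
        team = quitters            # the cautious faction founds the next one
    if team:
        companies.append(team)
    return companies

random.seed(0)
researchers = [round(random.random(), 2) for _ in range(16)]
for name, staff in zip("ABCDE", sort_into_companies(researchers)):
    print(f"Company {name}: mean aggressiveness {sum(staff) / len(staff):.2f}")
```

Running it, mean aggressiveness falls monotonically from Company A to the last splinter, which is the "finely sorted" end state of step 12; the toy omits step 11's point that the earlier companies keep splitting as well.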
#### OpenAI: Nonprofit governance - Matt Levine

One way to look at [the OpenAI situation](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai) is that OpenAI is a nonprofit organization, and it is not that uncommon for nonprofits to have tension between their _mission_ and their _staff_. This is arguably a _silly_ way to look at the situation, because, for a few years ending last Friday, nobody really thought of OpenAI as a nonprofit. OpenAI was an $86 billion tech startup that was building artificial intelligence tools that were expected to result in huge profits for its investors (Microsoft Corp., venture capital firms) and employees (many of whom owned stock). But technically _that_ OpenAI - OpenAI Global LLC, the $86 billion startup with employee and VC and strategic shareholders - was a subsidiary controlled by the nonprofit, OpenAI Inc., and the nonprofit asserted itself dramatically last Friday when its board of directors fired its chief executive officer, Sam Altman, and threw everything into chaos.

But for a moment ignore all of that and just think about OpenAI Inc., the 501(c)(3) public charity, [with a mission of](https://openai.com/our-structure) "building safe and beneficial artificial general intelligence for the benefit of humanity." Like any nonprofit, it has a mission that is described in its governing documents, and a board of directors who supervise the nonprofit to make sure it is pursuing that mission, and a staff that it hires to achieve the mission. The staff answers to the board, and the board answers to … no one? Their own consciences? There are no shareholders; the board's main duties are to the mission.

Often, as a general matter, a nonprofit's staff will be more committed to the mission than the board is. This just makes sense: The staff generally works full-time at the nonprofit, _doing_ its mission all day; the directors are normally fancy outsiders with other jobs who just show up for occasional board meetings. Of course the staff cares more than the board does.

But it isn't always quite that simple. Because the staff works full-time at the nonprofit, they will care much more about the practical conditions of the job than the board will. The board is disinterested and comfortable and can care entirely about the abstract mission of the nonprofit; the staff members have to pay rent and student loans. And so sometimes there will be a conflict between the _mission_ of the nonprofit and the _conditions of the job_, and the staff will prefer better working conditions while the board will prefer the mission.

So a charity to feed the homeless might have to decide whether to spend a marginal dollar of donations on food for the homeless or higher salaries for the staff. It is not _obvious_ that the staff will prefer higher salaries while the board will prefer feeding more clients, but it is _possible_; really it is a pretty standard story of agency costs, and the board's role is to manage those costs. Or last year [Ryan Grim wrote about](https://theintercept.com/2022/06/13/progressive-organizing-infighting-callout-culture/) conflicts within progressive advocacy groups after the killing of George Floyd: "In the eyes of group leaders ...
staff were ignoring the mission and focusing only on themselves, using a moment of public awakening to smuggle through standard grievances cloaked in the language of social justice," while the staff "believed [that] managers exploited the moral commitment staff felt toward their mission, allowing workplace abuses to go unchecked."

OpenAI is a very strange nonprofit! Its stated mission is "building safe and beneficial artificial general intelligence for the benefit of humanity," but in the unavoidably sci-fi world of artificial intelligence developers, that mission has a bit of a flavor of "building artificial intelligence very very carefully and being ready to shut it down at any point if it looks likely to go rogue and kill all of humanity." The mission is "build AI, but not too much of it, or too quickly, or too commercially." As of last week, it had a board with six members, three of whom (including Altman) worked at OpenAI and [three of whom did not](https://www.wsj.com/tech/ai/openai-board-sam-altman-d5f3cd49). And it is easy to see how the board's view of the mission could conflict with the staff's views of their jobs.

Like, you are a cutting-edge AI researcher, you come into work every day excited to do cutting-edge AI research, you _succeed_ in doing cutting-edge stuff, and the board shows up and is like "hey this edge is too cutting, we worry it's going to kill us all, slow it down there tiger." It's condescending! It stops you from doing the thing that you are committed to do! They're Luddites! But the thing that you are committed to do (build cutting-edge AI stuff) is not _quite_ the thing that OpenAI is committed to do (build safe AI stuff). And the outside directors - who _don't go to work at OpenAI all day_ - might care more about its official mission than the staff does.

From the board's perspective, a nonprofit with the mission of "be first to build artificial general intelligence, but only if we can do it safely" will have a staffing problem. To achieve that mission it will have to hire staff who are talented and driven enough to be the first to build AGI, but those staff will probably be more enthusiastic about AI, generally, than the mission calls for. Or you can hire staff who are super-nervous about AGI, but they probably won't be the first ones to build it. So you hire the good AI developers, but you keep a watchful eye on them.

From the staff's perspective, the board is a bunch of outsiders whose main features are (1) they are worried about AI safety and (2) they don't work at OpenAI. (Well, three of them do, but three - a majority of those who voted to oust Altman - don't.) _They have no idea!_ They are meddling in stuff - AI research but also intra-company dynamics - that they don't really understand, driven by an abstract sense of mission. Which kind of _is_ the job of a nonprofit board, but which will reasonably annoy the staff.

Also, of course, the material conditions of the OpenAI staff are pretty unusual for a nonprofit: They can get paid [millions of dollars a year](https://www.wsj.com/tech/openai-employees-threaten-to-quit-unless-board-resigns-bbd5cc86) and they own equity in the for-profit subsidiary, equity that they were about to be able to [sell at an $86 billion valuation](https://www.bloomberg.com/news/articles/2023-11-20/openai-investors-led-by-thrive-angle-to-bring-back-altman). When the board is like "no, the mission requires us to zero your equity and cut off our own future funding," I mean, maybe that is very noble and mission-driven of the board.
But, just economically, it is rough on the staff.

Yesterday virtually all of OpenAI's staff [signed an open letter to the board](https://www.axios.com/2023/11/20/openai-staff-letter-board-resign-sam-altman), demanding that the board resign and bring back Altman. The letter claims that the board "informed the leadership team that allowing the company to be destroyed 'would be consistent with the mission.'"

Yes! I mean, the board might be wrong about the facts, but _in principle_ it is absolutely possible that destroying OpenAI's business would be consistent with its mission. If you have built an unsafe AI, you delete the code and burn down the building. The mission is conditional - build AGI if it is safe - and if the condition is not satisfied then you go ahead and destroy all of the work. _That is the board's job_. It's the board's job because it can't be the staff's job, because the staff is there to do the work, and will be too conflicted to destroy it. The board is there to supervise the mission.

I don't mean to say that the board is right! The board really are outside kibbitzers! Between OpenAI's staff, who know what they're talking about but also kinda like building AI, and OpenAI's board, who lean more to being AI-skeptical outsiders, I _guess_ I'd bet on the staff being right. (Also if the board's job is to prevent the development of rogue AI, burning down OpenAI is unlikely to accomplish that, just because there are competitors who will gleefully hire the staff.) I am just saying that this is a standard and real problem in nonprofit governance, and what's weird about OpenAI is that it's an $86 billion startup with nonprofit governance.

I guess the other thing to say is that, generally speaking, a staff is often more essential to a nonprofit than a board is? (Except that at a lot of nonprofits - not OpenAI! - the directors tend to also be big donors and fundraisers.) Like, the staff does the work; the board just goes to occasional meetings. If the staff all quit then the nonprofit is in trouble; if the directors all quit they're pretty replaceable.

As of last night here's the state of things, from [Bloomberg's Shirin Ghaffary](https://www.bloomberg.com/news/articles/2023-11-21/openai-in-intense-discussions-to-unify-company-memo-says):

> OpenAI said it's in "intense discussions" to unify the company after another tumultuous day that saw most employees threaten to quit if Sam Altman doesn't return as chief executive officer.
>
> Vice President of Global Affairs Anna Makanju delivered the message in an internal memo reviewed by Bloomberg News, aiming to rally staff who've grown anxious after days of disarray following Altman's ouster and the board's surprise appointment of former Twitch chief Emmett Shear as his interim replacement.
>
> OpenAI management is in touch with Altman, Shear and the board "but they are not prepared to give us a final response this evening," Makanju wrote. …
>
> There's strong momentum outside OpenAI to get Altman reinstated too. OpenAI's other investors, led by Thrive Capital, are actively trying to orchestrate his return, people with knowledge of the effort told Bloomberg News Monday. Microsoft CEO Satya Nadella told Emily Chang in a Bloomberg Television interview that even he wouldn't oppose Altman's reinstatement. ...
>
> "We are continuing to go over mutually acceptable options and are scheduled to speak again tomorrow morning when everyone's had a little more sleep," Makanju wrote.
"These intense discussions can drag out, and I know it can feel impossible to be patient." #### OpenAI: Startup governance - Matt Levine Obviously another way to look at the OpenAI situation is that OpenAI is an $86 billion tech startup that did some real odd stuff to incinerate most of its value. In some ways this is not _that_ unusual a story of nonprofit governance, in which the board's abstract commitment to the mission conflicts with the practical on-the-ground experiences of the staff. But it sure is an unusual story of startup governance, in which Microsoft agreed to invest $13 billion in OpenAI, and other venture capitalists also put in money, and employees got stock grants, without the contractual or fiduciary rights that investors would normally get. And then one day the board of OpenAI was like "hey we decided to blow up the company" and the investors were like "wait a minute can you really do that" and the board was like "oh yeah sure we can." So [Bloomberg reports](https://www.bloomberg.com/news/articles/2023-11-20/openai-staff-threaten-to-go-to-microsoft-if-board-doesn-t-quit) that "some investors were considering writing down the value of their OpenAI holdings to zero." Eighty-six billion dollars of value evaporated in a weekend. One popular thing to say about this is that the investors should have been more careful about governance, and that future investors in future startups will pay more attention to things like control rights and fiduciary duties and board composition. And, maybe. But I have to say I sympathize with the investors here. There are many cases in which sophisticated investors invest large sums of money into companies where they have no real control rights, and they _rationally_ calculate that it will be fine. Generally the calculation will involve some combination of factors like: 1. I have met the founder and shook her hand and looked into her eyes and I _trust_ her, so I do not need to care about the corporate formalities. (Smart investors jumped into [Elon Musk's Twitter Inc. adventure](https://www.bloomberg.com/opinion/articles/2022-10-03/everyone-wanted-to-buy-twitter-with-elon) not because they did extensive due diligence or got a lot of control rights, but because he's Elon Musk.) 2. Regardless of the formalities, the _incentives_ are on my side: This company will need more money, the founder will need to sell her shares, and so even if she has the formal right to hose me she won't, because that will be bad for her. (Both Adam Neumann and Travis Kalanick were forced out of startups that they had founded and where they had more or less total formal control, because their investors told them "hey look if you keep your total control this is going to be a zero, whereas if you leave now you can salvage some value for yourself," and they made a rational choice.) 3. The formalities are bad, sure, but that's the price of getting into this investment, and the _upside_ of this investment is so huge that I am willing to take the risk of getting hosed by bad governance. (Early investors in Facebook Inc., now Meta Platforms Inc., had very little in the way of governance rights, Mark Zuckerberg had total control, and guess what he still does and he has made those investors very rich.) It is not hard to see how OpenAI's investors could have had similar thoughts: 1. They really liked Sam Altman! He is a popular and well-connected figure among venture capital types. Nor, really, were they wrong to trust him. 
It's just that usually startup founders have much more control of their boards than Altman did. That _did_ turn out to be a failure of organizational due diligence by OpenAI's investors, but an understandable one.
2. The incentives were just incredibly on their side. OpenAI requires piles and piles of outside money to do its work, so it cannot rationally afford to alienate investors. Microsoft, OpenAI's biggest investor, also provides its computing power and has a license to its technology and, after this weekend's implosion, seems to be on track to hire most of its staff. "You can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit," [Ben Thompson wrote yesterday](https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/). No rational startup would let that happen! Meanwhile Thrive Capital was [leading the tender offer](https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/) to buy employee shares, and might have thought "these employees need liquidity and will not bite the hand that is feeding them." These were all, I think, very reasonable things to think; they were just flummoxed by a board that _did not act in the economic best interests of the company_. Again: because it wasn't supposed to! Still a jarring surprise.
3. The upside was really big. I mean the company was worth more than $80 billion last week, not because it was profitable (it was a money pit) but because, you know, it had an [8% chance of being a trillion-dollar company](https://www.bloomberg.com/opinion/articles/2023-11-09/adam-neumann-is-so-good-at-this). You'd take some governance risk for that upside.

I feel like the lesson here is not so much "don't invest in startups without vetting the board and ideally getting a board seat for yourself," and more "don't invest in nonprofits at an $86 billion valuation." Which I think has never come up before? Like as far as I can tell no one in human history has ever purchased shares in a nonprofit at an $86 billion valuation? Because purchasing shares in a nonprofit, at any valuation, is not a coherent thing to do? But then OpenAI made it happen, for the first time, and probably also the last.

There are other oddities here that I do not really understand but want to mention as puzzles. Like, the status as of this morning really does seem to be that Sam Altman has been fired by OpenAI, that he plans to go work at Microsoft (OpenAI's biggest investor and partner), and that he is going to take the large majority of OpenAI's employees with him. What … what do everyone's contracts look like here? Like:

- When Microsoft signed its deal with OpenAI, did it agree to any sort of non-compete or non-solicit, like, "you won't hire away our employees"?
- Does Altman have any sort of non-solicit, like, "if you leave you won't hire everyone else"?
- Do the employees have any sort of non-disclosure agreements, like, "if you leave you won't take our proprietary technology with you"? Perhaps this one doesn't matter, since Microsoft has a license to OpenAI's intellectual property: The OpenAI employees can leave, take nothing with them, and then go work at Microsoft and have access to everything they left behind.

But Bloomberg's [Austin Carr reports](https://www.bloomberg.com/news/articles/2023-11-21/microsoft-hiring-sam-altman-presents-new-challenges-for-company) that "any employees who do join Microsoft can't simply replicate the work they were doing on OpenAI properties like GPT-5 without inviting a nightmare of claims over trade-secret theft."

Coming from the world of finance, all of this feels odd to me; ordinarily there would be contracts preventing a company's biggest customer from hiring its CEO and him then bringing over his whole team to build the same cutting-edge technology they were building at the company. Here, I guess, there aren't? Everyone just trusted each other? Seems like a mistake.

#### OpenAI - Matt Levine

Ten days ago OpenAI was worth $86 billion. Investors in OpenAI were about to launch a tender offer to buy employee shares at that valuation, and employees were lining up to tender; there were willing buyers and sellers at that price.

Then events occurred. There was a boardroom coup, and OpenAI's founder and chief executive officer, Sam Altman, was fired. At one point, the valuation of OpenAI was apparently _zero dollars:_ Several OpenAI investors were noisily saying that they were going to [write down their shares to zero](https://www.bloomberg.com/news/articles/2023-11-20/openai-staff-threaten-to-go-to-microsoft-if-board-doesn-t-quit), and it looked like Microsoft Corp. was about to [acquire most of OpenAI's staff](https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/) without paying the company (or its other investors) anything for them. But then there was another boardroom coup, Altman [got his job back](https://www.bloomberg.com/news/articles/2023-11-22/sam-altman-to-return-as-openai-ceo-with-a-new-board), and most of the directors who fired him were themselves fired. Altman returned, and there have been suggestions that he (and Microsoft, OpenAI's biggest investor) extracted some promises that this sort of thing won't be allowed to happen again, that there will be fundamental governance changes to make OpenAI more robust and, probably, more commercial.

So how much should OpenAI be worth _today?_ Some possible answers:

1. Less than $86 billion, for governance reasons: After a rapid rise in prominence and valuation, OpenAI revealed some serious cracks, _causing its valuation to become zero dollars for a while_, meaning that investors should be far more cautious about it now than they were before. It is more volatile than people thought. The lesson of the last week, I [suggested on Tuesday](https://www.bloomberg.com/opinion/articles/2023-11-21/openai-is-a-strange-nonprofit), might be "don't invest in nonprofits at an $86 billion valuation"; maybe investors will learn that lesson.
2. More than $86 billion, for governance reasons: Those cracks were there from the beginning, and investors presumably accounted for them in that $86 billion valuation. But now that Altman has won his power struggle and [cemented his control](https://www.businessinsider.com/sam-altman-openai-staff-loyalty-power-chatgpt-microsoft-2023-11), the future OpenAI [will be _less_ weird and nonprofit-y](https://www.wsj.com/tech/ai/ai-accelerationists-come-out-ahead-with-sam-altmans-return-to-openai-a249605c) and afraid of its own shadow. It's the same exciting business but with better, more investor-friendly governance, so it should be worth more to investors.
3. Less than $86 billion, for governance reasons (2): _Did_ Altman win his power struggle?
Ten days ago, he was the CEO and a board member; now he is off the board. Ben Thompson [quotes](https://stratechery.com/2023/sam-altman-back-at-openai-q-going-forward/) a [Reddit post](https://www.reddit.com/r/MachineLearning/comments/1812w04/comment/kabk73s/) arguing that the drama was precipitated by _Altman's_ efforts to seize total control of the board, and that the other board members outmaneuvered him, with the result that his power has been diminished. The other board members "get the outcome they want: Sam and Greg [Brockman] gone from the board; Adam [D'Angelo] remains, who has experience of Sam and Greg's shenanigans; and two mutually agreeable independent board members are added" who will presumably rein in Altman.
4. More than $86 billion, for governance reasons (2): Maybe a board that reins in Altman will start by reining in his [extracurricular AI-related business ventures](https://www.bloomberg.com/news/articles/2023-11-19/altman-sought-billions-for-ai-chip-venture-before-openai-ouster) (a chip company, "an AI-focused hardware device") and make him focus more on OpenAI, which will be good for the company. "When Altman returns, he'll likely have to address claims that he's been distracted by a number of projects, including those outside OpenAI," [reports the Information](https://www.theinformation.com/articles/what-comes-next-for-sam-altmans-openai). Maybe he'll do the hardware stuff within OpenAI instead of in a new startup, creating more value for OpenAI investors.
5. Less than $86 billion, for straightforward business reasons: The instability of the last week was bad for customers, who will be reluctant to base their AI strategies on OpenAI and will shift to its competitors. [Thompson writes](https://stratechery.com/2023/sam-altman-back-at-openai-q-going-forward/): "Any company would have to think long and hard about basing their business on OpenAI's API."
6. More than $86 billion, for business reasons: This last week sure drew a lot of _attention_ to OpenAI and its products and their capabilities, and if you think of OpenAI as basically a consumer internet company then any attention is good. If you read a week of press about how OpenAI's board members are afraid that its product will enslave humanity, I mean, that is kind of good advertising for the capabilities of that product?
7. More than $86 billion, for technological reasons. One precipitating event of the boardroom coup might have been a breakthrough in OpenAI's research. Reuters [reported](https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/) that "several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity," and the Information [reported](https://www.theinformation.com/articles/openai-made-an-ai-breakthrough-before-altman-firing-stoking-excitement-and-concern) that this was "an innovation by the company's researchers earlier this year that would allow them to develop far more powerful artificial intelligence models," including a model called Q* that "was able to solve math problems that it hadn't seen before." If you were willing to buy OpenAI stock at an $86 billion valuation as a bet that it would develop increasingly powerful AI models, evidence that it has in fact developed increasingly powerful AI models should cause you to increase your valuation.
8. Less than $86 billion, for regulatory reasons: All this drama is going to increase regulatory attention on AI, and new rules will make it harder for OpenAI to innovate and commercialize rapidly.
9. More than $86 billion, for regulatory reasons: All this drama is going to increase regulatory attention on AI, and new rules will make it harder for OpenAI's _smaller competitors_ to innovate and commercialize rapidly, leaving it with a head start on building a monopoly.
10. Precisely $86 billion: Nothing worth thinking about happened over the last 10 days.

I think that last answer is obviously insane but I am not a venture capitalist! We talk a lot around here, in a semi-joking way, about this [appealing feature of private market investments](https://www.bloomberg.com/opinion/articles/2019-12-20/you-d-pay-not-to-see-your-stock-price): They are less volatile than public investments, because when dramatic shifts in markets or fundamentals happen, _you can ignore them_ and just say that the value of your private investments is miraculously unchanged. Without a market price, you never need to mark to market. It would be very, very funny if the events of the last 10 days caused _no change_ in the valuation of OpenAI.

And yet! [The Information reported last week](https://www.bloomberg.com/opinion/articles/2019-12-20/you-d-pay-not-to-see-your-stock-price):

> An OpenAI employee share sale that values the firm at $86 billion is back on track following Sam Altman's reinstatement as CEO late Tuesday night, a person familiar with the matter said. The deal, in which a group of investors led by Thrive Capital will buy up to $1 billion of stock-and conceivably more-held by employees or other investors, is expected to close next month. …
>
> In a statement, a spokesperson for the Josh Kushner-run Thrive said they were impressed by the "resilience and strength" they witnessed over the last few days and that they "consider it a true honor to be their partners now and in the future."

Though the Financial Times [is less confident](https://www.ft.com/content/c4dd2ec0-026f-45dd-a7d4-d7aaa5cf9396):

> An upcoming sale of shares in OpenAI is set to test how much the past week's leadership chaos has cost the company and its backers, though big investors are bullish about securing a high valuation.
>
> The employee stock sale, which had been planned before the sacking last week of chief executive Sam Altman and expected to value the company at $86bn, will continue as planned, according to two investors with direct knowledge of the matter.
>
> Investors remain confident that a new share sale can still treble the $29bn valuation placed on OpenAI when Microsoft committed to invest $10bn in the company at the start of this year.
>
> "Clearly this almost destroyed a lot of value in the short term, it's hard to say what happens next," said Vinod Khosla, an early investor in OpenAI. "Valuation is a function of investor perceptions. The company is the same or better off than it was last Thursday."
>
> But analysts have suggested that OpenAI will be hit by the week's events, with rival groups such as Google and Amazon representing strong and stable challengers in the race to offer generative artificial intelligence services to businesses and consumers.

I don't know that there are still willing buyers and willing sellers at that $86 billion price, or that there are as many of them as there were 10 days ago, but it seems like there might be. Doesn't someone have to be wrong?
I suppose it is possible that the answer is "the distribution of possible OpenAI outcomes is much wider than it was 10 days ago, but the midpoint of that distribution is still $86 billion," but what a weird coincidence that would be.

Actually another possible answer is that the distribution of outcomes is _narrower_ than it used to be. Like, 10 days ago:

1. I suppose it was possible that Altman would come to the board and say "this nonprofit stuff isn't working, we need to fire you and become a regular for-profit startup." I mean, OpenAI used to be _entirely_ a nonprofit, and now it is a hybrid nonprofit/capped-profit entity, and you might have expected it to continue to move in that direction.
2. Conversely, it was totally possible that the nonprofit board would fire Altman and burn the company to the ground in order to prevent it from developing unsafe artificial intelligence. ("Allowing the company to be destroyed 'would be consistent with the mission,'" [a board member apparently told employees](https://www.axios.com/2023/11/20/openai-staff-letter-board-resign-sam-altman).)

And then these last 10 days have cut off some of those possibilities. The first possibility - OpenAI becomes purely for-profit - I guess is still open, but it might be less likely, given the fierce resistance the old board put up and the fact that that board still has a representative on the new board. Altman "[also agreed](https://www.bloomberg.com/news/articles/2023-11-22/sam-altman-to-return-as-openai-ceo-with-a-new-board) to an internal investigation into the conduct that led to his dismissal," which suggests that there will still be a counterbalance to his power in the new company, and limits on his ability to do, uh, whatever that conduct was.

The second possibility - burn the company to the ground, etc. - seems much less likely now. The board _did_ fire Altman, and it almost burned the company to the ground, and then the board blinked, and presumably the new board (and any future board) _won't do that again_. Like, they tried that, they saw what happened, they didn't like it, lessons have been learned, and that is no longer an option for the board.

And so for instance consider Microsoft Corp., which has committed some $13 billion to OpenAI and relies on its products for its AI strategy. One possible conclusion for Microsoft to draw from this drama would be "we need much more robust legal protections, and a board seat, so that this doesn't happen again." But another possible conclusion would be "actually no, it turns out we have plenty of leverage already, we got what we wanted after a few tense days, so we don't need a board seat after all." The Wall Street Journal [reported last week](https://www.wsj.com/tech/ai/satya-nadella-microsoft-ceo-ai-openai-altman-85e75531):

> Even after investing $13 billion, Microsoft didn't have a board seat or visibility into OpenAI's governance, since it worried that having too much sway would alarm increasingly aggressive regulators. That left Microsoft exposed to the risks of OpenAI's curious structure. Altman's company was set up as a nonprofit with a board whose primary responsibility wasn't maximizing shareholder value but developing safe AI "that benefits all of humanity." By not having a board seat, Microsoft ended up blindsided.
> The company was also vulnerable to Altman leaving to start another company and taking employees with him-or, in a possibility that seemed remote until suddenly it was reality, OpenAI's board firing him without asking for input from its biggest investor. …
>
> Microsoft has had to strike a tricky balance with OpenAI: safeguarding its investment while ensuring that its ownership stake remained below 50% to avoid regulatory pitfalls.

And there were moments last week when it looked like Microsoft had gotten that balance wrong, and its investment might be worthless. But not too many of them? The worst case seems to have been mostly "Microsoft hires Altman and all of his employees and has basically acquired OpenAI for $13 billion"; the best case - and largely the actual outcome - was "Microsoft can tell OpenAI's board what to do even without having a board seat." No board seats and tons of practical leverage is probably better, for economic and regulatory reasons, than having a board seat.

And now everyone knows! Last Monday, I [wrote a column](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai) the gist of which is "sure the nonprofit OpenAI board technically controls the company and Microsoft and the other investors have no legal rights, but practically the investors have the money and they will probably get what they want." I qualified that conclusion, because at the time the investors conspicuously had not gotten what they wanted, but by Tuesday night they did. It turns out that Microsoft's informal sources of leverage were in fact good enough. It turns out that, in exactly the right circumstances, investing $13 billion in a nonprofit at an $86 billion valuation is fine.

#### Further OpenAI - Matt Levine

A few quick points. First, University of Kentucky law professor [Alan James Kluegel](https://law.uky.edu/people/alan-kluegel) emailed me to suggest a theory that maps the OpenAI conflict - between its nonprofit board that worried about AI safety, and its employees who love Altman and wanted him back - onto a straightforward valuation dispute:

> The board is made up of AI evangelists; the reason they openly worry about AI getting _too powerful_ is out of a belief in the potential for a godlike AI or at least out of concern that this soon-to-be-ubiquitous technology should be in its best possible shape before being distributed to the world. ...
>
> The employees, however, are familiar with all of the AI's limitations and problems and costs, and - being Silicon Valley veterans - are also familiar with the hype cycle at play here. …
>
> In other words, this is a story about the employees wanting to secure the bag while the unrealized potential of their product has captured everyone's attention and imagination, and Sam Altman's fundraising (and the Thrive Capital tender offer, in particular) was going to be their golden ticket - until the starry-eyed board killed their payday in a flurry of techno-optimistic excitement.

This is absolutely not at all what anyone was _saying_, and I suspect that it is not what anyone was _thinking_, but I like it as an objective explanation of what they were _doing_. It is not unheard-of for a startup to get a pretty high valuation, and for its employees to think "hey let's cash out while the money is there," while its board members are venture capitalists with diversified portfolios and liquidation preferences who are more willing to wait and gamble.
Venture capitalist board members are _supposed_ to be able to take the long view and bet on changing the world, while employees are often more risk-averse and need cash to pay the mortgage. OpenAI's board members are not venture capitalists, don't own equity at all, are not motivated by hopes of a trillion-dollar valuation, and were in fact adverse to its venture capitalist investors. And yet I think the model applies. They took a very long and grandiose view of the importance of their product and its ability to change the world, while the employees would like to see some cash now. Second, we [talked last week](https://www.bloomberg.com/opinion/articles/2023-11-21/openai-is-a-strange-nonprofit) about the oddity of OpenAI as a nonprofit organization with a board that does not answer to shareholders. Tyler Cowen [pointed out](https://marginalrevolution.com/marginalrevolution/2023/11/what-do-we-know-about-non-profit-boards.html) that the literature on nonprofit governance is pretty negative, quoting [a 2014 paper by George Dent](https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?article=2096&context=faculty_publications): > A remarkable consensus of experts on NPOs agrees that their governance is generally abysmal, considerably worse than that of for-profit corporations. NPO directors are mostly ill-informed, quarrelsome, clueless about their proper role, and dominated by the CEO-as proponents of shareholder primacy would predict. "Dominated by the CEO" was apparently not true of OpenAI, but you gotta give them "quarrelsome." Third, I really do have to quote last week's incredible [Wall Street Journal report](https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c) about the board's non-explanation of what Altman did to get himself fired: > On the call, the leadership team pressed the board over the course of about 40 minutes for specific examples of Altman's lack of candor, the people said. The board refused, citing legal reasons, the people said. … > > The board agreed to discuss the matter with their counsel. After a few hours, they returned, still unwilling to provide specifics. They said that Altman wasn't candid, and often got his way. The board said that Altman had been so deft they couldn't even give a specific example, according to the people familiar with the executives. "Without realizing it, we were gradually overmatched by a superior intelligence, until he ended up controlling us in ways that are too subtle for us to even explain," thought the AI-nervous board of OpenAI. I love them. Their fears about rogue AI are [such obvious metaphors](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai) for their mundane real-life problems. Fourth, last week [I asked](https://www.bloomberg.com/opinion/articles/2023-11-21/openai-is-a-strange-nonprofit) some questions like "did Microsoft agree to any sort of non-compete or non-solicit," and "does Altman have any sort of non-solicit?" I was careful not to ask "did OpenAI's employees sign non-competes," because I am aware (not legal advice!) that employee non-competes [don't work in California](https://calemploymentlawupdate.proskauer.com/2023/09/california-expands-prohibition-against-non-competes/). 
But a lot of readers emailed me to point that out anyway, and also pointed out that non-solicitation clauses are also [generally](https://ogletree.com/insights-resources/blog-posts/california-nonsolicitation-clause-held-enforceable-under-narrow-exception-for-sale-of-a-business/) [not enforced](https://kroghdecker.com/are-non-solicitation-agreements-enforceable-in-california#:~:text=Non%2Dsolicitation%20agreements%20are%20often,in%20violation%20of%20public%20policy.) in California. So, there, that's why. Finally, it is a [long-running schtick](https://www.theverge.com/2022/10/28/23427137/elon-musk-twitter-matt-levine-money-stuff) of this column that whenever I take a day off, Elon Musk does something crazy. I took much of last week off for Thanksgiving. "I guess by Monday Elon Musk is going to own OpenAI and Binance," I [threaded](https://www.threads.net/@itismattlevine/post/Cz6xCKbu11p). But I wasn't really worried until OpenAI's new board [was announced](https://twitter.com/OpenAI/status/1727206187077370115). The chairman is [Bret Taylor](https://www.bloomberg.com/news/features/2022-09-14/who-is-twitter-chairman-bret-taylor-elon-musk-s-opposite), who was also the chairman of the board of Twitter Inc. when Musk bought it. I don't really think that Musk is going to buy OpenAI, but I am going to take some time off for the holidays in December so who knows. #### Elsewhere in misalignment - Matt Levine We [have](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai) [talked](https://www.bloomberg.com/opinion/articles/2023-11-21/openai-is-a-strange-nonprofit) a [lot](https://www.bloomberg.com/opinion/articles/2023-11-27/openai-is-still-an-86-billion-nonprofit) [about](https://www.bloomberg.com/opinion/articles/2023-11-28/the-sec-might-lose-its-courts) the recent drama at OpenAI, whose nonprofit board of directors fired, and were then in turn fired by, its chief executive officer Sam Altman. Here is [Ezra Klein on the board's motivations](https://www.threads.net/@ezraklein/post/C0MpLJNuN7U): > One thing in the OpenAI story I am now fully convinced of, as it's consistent in my interviews on both sides. > > This was not about safety. It was not about commercialization. It was not about speed of development or releases. It was not about Q*. It was really a pure fight over control. > > The board felt it couldn't control/trust Altman. It felt Altman could and would outmaneuver them in a pinch. But he wasn't outmaneuvering them on X issue. They just felt they couldn't govern him. Well, sure, but that _is_ a fight about AI safety. It's just a _metaphorical_ fight about AI safety. I am sorry, [I have made this joke before](https://www.bloomberg.com/opinion/articles/2023-11-27/openai-is-still-an-86-billion-nonprofit), but events keep sharpening it. The OpenAI board looked at Sam Altman and thought "this guy is smarter than us, he can outmaneuver us in a pinch, and it makes us nervous. He's done nothing wrong so far, but we can't be sure what he'll do next as his capabilities expand. We do not fully trust him, we cannot fully control him, and we do not have a model of how his mind works that we fully understand. Therefore we have to shut him down before he grows too powerful." I'm sorry! That is exactly the AI misalignment worry! 
If you spend your time managing AIs that are growing exponentially smarter, you might worry about losing control of them, and if you spend your time managing Sam Altman you might worry about losing control of him, and if you spend your time managing both of them you might get confused about which is which. Maybe Sam Altman will turn the old board members into [paper clips](https://en.wikipedia.org/wiki/Instrumental_convergence). Elsewhere in OpenAI, [the Information reports](https://www.theinformation.com/articles/openai-isnt-expected-to-offer-microsoft-other-investors-a-board-seat) that the board will remain pretty nonprofit-y: > OpenAI's revamped board of directors doesn't plan to include representatives from outside investors, according to a person familiar with the situation. It's a sign that the board will prioritize safety practices ahead of investor returns. > > The new board hasn't been officially seated and things could change. But the person said Microsoft and other shareholders, such as Khosla Ventures, Thrive Capital and Sequoia Capital, aren't expected to be offered a seat on OpenAI's new nine-person board. Still. [I think that](https://www.bloomberg.com/opinion/articles/2023-11-27/openai-is-still-an-86-billion-nonprofit) the OpenAI board two weeks ago (1) did not include any investor representatives and (2) was _fundamentally unpredictable to investors_ - it might have gone and fired Altman! - whereas the future OpenAI board (1) will not include any investor representatives but (2) will nonetheless be a bit more constrained by the investors' interests. "If we are _too_ nonprofit-y, the company will vanish in a puff of smoke, and that will be bad," the new board will think, whereas the old board actually went around saying things like "allowing the company to be destroyed would be consistent with the mission" and almost meant it. The investors don't exactly need a board seat if they have a practical veto over the board's biggest decisions, and the events of the last two weeks suggest that they do. #### New OpenAI board - Matt Levine One [piece of news](https://www.theverge.com/2023/11/29/23981848/sam-altman-back-open-ai-ceo-microsoft-board) is that Microsoft Corp. will have a non-voting observer seat on the new board of directors of OpenAI. The problem, when OpenAI's board [briefly fired](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai) Chief Executive Officer Sam Altman, was that Microsoft didn't know about it in advance. Once Microsoft heard about Altman's firing, it took, like, four days to get him his job back; that was not all Microsoft's doing, but Microsoft clearly had something to do with it. "The investors don't exactly need a board seat if they have a practical veto over the board's biggest decisions," [I wrote yesterday](https://www.bloomberg.com/opinion/articles/2023-11-29/the-robots-will-insider-trade), "and the events of the last two weeks suggest that they do." On the other hand it is inconvenient and disruptive that Microsoft had to effectively veto Altman's firing after it happened? Better to be told first and head it off. Thus, board observer. 
Here's [one other piece of news](https://www.wsj.com/tech/ai/openais-new-board-takes-over-and-says-microsoft-will-have-observer-role-50e55b73), sort of, about the new board chair, Bret Taylor, "the former co-CEO of business-software giant Salesforce who also was the chairman of Twitter when it dealt with Elon Musk's ultimately successful takeover effort": > OpenAI's board is unusual in that it isn't obligated to maximize shareholder value, but rather to fulfill a larger mission of advancing artificial intelligence for humanity's benefit. On its website, OpenAI says the board's "principal beneficiary is humanity, not OpenAI investors." > > Asked if OpenAI's board could revisit this structure, Taylor said: "I think it's a great question-probably not one for my first day on the job." That's one possible answer! I mean, I guess technically it's a non-answer, but it's an interesting one. "Should you benefit humanity or your investors?" "Great question, not sure yet." There are other possible answers! "No, the nonprofit structure is at the core of what we do, this company is fundamentally about benefiting humanity, that's what the investors and employees signed up for, and that part is really nonnegotiable" is, for instance, an imaginable answer. You could imagine the old nonprofit board, as a condition of letting Altman back, demanding "look you can bring in new, more commercial directors, but the nonprofit structure has to stay." But they ... didn't do that? Apparently? And Taylor is, like, for-profit-curious? Incidentally, when Taylor and the Twitter Inc. board agreed to sell to Musk, [I wrote a column](https://www.bloomberg.com/opinion/articles/2022-05-02/twitter-s-board-gave-up) about how the board made and described that decision. What was striking, to me, was that Twitter's then-CEO, Parag Agrawal, described the decision to sell as _purely_ doing what was in the best interests of shareholders, without any consideration of the product, the mission, the employees, etc. Agrawal [said to employees](https://www.platformer.news/inside-twitters-emotional-friday/): > "This is the answer you don't want to hear, right?" Agrawal said. "Twitter is a public company owned by shareholders. There are other companies which may have other legal mechanics … Twitter is not one of those companies." That is partly true, but not entirely true, and kind of grim. I wrote: > A lot of people think of Twitter as a public utility, a public trust, "the town square," a company with an important social mission that many of its users and employees _and Elon Musk_ care about deeply. And its CEO and board of directors essentially can't bring themselves to talk about it. When employees asked him about what was best for the company, Agrawal could talk only about the shareholders. Elon Musk is not at all embarrassed to say that Twitter has an important public mission, which is why he's buying it. But its current management can't say that, which is why they're selling it. And now the chairman of the Twitter board that sold to Musk is the chairman of OpenAI. Which definitely _does_ have other legal mechanics. For now. #### OpenAI ownership - Matt Levine The normal way to own a share of a company's profits is to buy its stock. If you find a company that you think is promising, and you want to give it some money to finance its operations in exchange for a share of its profits, you might buy, say, 10% of its stock, and then in some rough sense you own 10% of its future profits. 
Not in any very strict sense - the company gets the profits, and may or may not pay some or all of them out to you as dividends - but people do tend to think of a share of stock as a share in future profits. But while stock is the _standard_ way to buy a share of a company's profits, it is not the only way. Here's another one: _I_ buy 10% of the company's stock, and then I write you a total return swap, a derivative contract saying that I'll pay you $1 for every $1 that the stock goes up (and you'll pay me $1 for every $1 it goes down). That way, you have economic exposure to the stock: You make money if it goes up and lose money if it goes down. But you don't own the actual shares of stock, or some of the rights that attach to them. (You normally can't vote for directors, for instance.) You have economic ownership of the stock, but you don't actually own stock. There are other ways. Many crypto tokens, for instance, are _kind of_ profit interests in some project. Or more simply, you go to the company and say "hey I'd like to buy 10% of your profits" and the company says "sure" and writes a contract saying "you are entitled to 10% of our profits." And then when the company has profits it writes you a quarterly check. Why would you do any of these things, instead of buying stock? Here are three possible answers: 1. You arrived from Mars 10 minutes ago, you have never heard of "common stock," so you reason from first principles about how to invest in a company and take a share of its profits. "What if we wrote some sort of a contract giving me a share of the profits," you say, etc. 2. The company is not actually a company and doesn't have stock, so you need to resort to workarounds. You want to give money to a university lab, or a government project, or a nonprofit organization, or your brother-in-law, to develop some commercial thing, and in exchange you want 10% of the commercial thing's profits. Maybe you write a profit-sharing contract. Or you want to invest money in a decentralized crypto project, and you take back tokens that seem likely to go up if it succeeds. 3. Regulatory arbitrage. Because stock is the normal way to own a share of a company's profits, a lot of _rules_ apply to stock, and if you buy something that is Not Stock you might avoid those rules, while still getting the benefits (profit sharing, etc.) of stock. For instance, the US Securities and Exchange Commission has a lot of disclosure and other rules that apply to people who buy [more than 5%](https://www.bloomberg.com/opinion/articles/2022-04-06/elon-musk-is-active-now) of a public company's stock; some investors want to avoid those disclosure rules for various reasons, so they will use total return swaps or other derivatives rather than buying stock directly. (There is a cat-and-mouse element to this, where people use swaps to avoid the rules, so the SEC [revises the rules to capture swaps](https://www.bloomberg.com/opinion/articles/2022-03-24/the-sec-wants-to-stop-activism).) Or there are margin rules limiting how much money you can _borrow_ to buy stocks, while the rules for derivatives can be laxer. (This is part of what went wrong with [Archegos Capital Management](https://www.bloomberg.com/opinion/articles/2021-07-29/archegos-was-too-busy-for-margin-calls).) 
Or US antitrust law [requires regulatory preapproval](https://www.ftc.gov/enforcement/competition-matters/2023/02/hsr-threshold-adjustments-reportability-2023) for anyone buying more than a certain amount of a company's stock; buying Not Stock can avoid those rules too. Reason 1 is fun but, I think, rarely applies. But we have [talked](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai) a [lot](https://www.bloomberg.com/opinion/articles/2023-11-21/openai-is-a-strange-nonprofit) [recently](https://www.bloomberg.com/opinion/articles/2023-11-27/openai-is-still-an-86-billion-nonprofit) about the drama and corporate structure of OpenAI, which is technically a nonprofit, but which has taken billions of dollars from Microsoft Corp. and other investors in exchange for (capped) participation in the profits of its for-profit (uh, [capped-profit](https://www.thediff.co/archive/why-is-openai-financed-that-way/)) subsidiary. Basically OpenAI's strategy is to take Microsoft's money, use it to build chatbots and other sort-of-artificial-intelligence tools, make many billions of dollars from the chatbots, use some of the money to pay Microsoft a lavish return on its investment, use the rest to build artificial general intelligence, and then use artificial general intelligence "for the benefit of humanity," rather than for the benefit of Microsoft. OpenAI seems to be [worth $86 billion](https://www.bloomberg.com/opinion/articles/2023-11-27/openai-is-still-an-86-billion-nonprofit), and I have [joked that](https://www.bloomberg.com/opinion/articles/2023-11-21/openai-is-a-strange-nonprofit) "as far as I can tell no one in human history has ever purchased shares in a nonprofit at an $86 billion valuation," but that was just a shorthand, and Microsoft _didn't_ purchase shares in OpenAI, or not exactly. It bought a capped profit interest in OpenAI's capped-profit subsidiary? Part of this is for Reason 2: OpenAI is a nonprofit, it has unusual goals, and it is pursuing those goals with an unusual profit-sharing mechanism rather than with normal stock. But part of it is for Reason 3, regulatory arbitrage. [Bloomberg's Dina Bass and Leah Nylen report](https://www.bloomberg.com/news/articles/2023-12-08/microsoft-s-answer-to-openai-inquiry-it-doesn-t-own-a-stake): > With global regulators examining Microsoft Corp.'s $13 billion investment in OpenAI, the software giant has a simple argument it hopes will resonate with antitrust officials: It doesn't own a traditional stake in the buzzy startup so can't be said to control it. > > When Microsoft negotiated an additional $10 billion investment in OpenAI in January, it opted for an unusual arrangement, people familiar with the matter said at the time. Rather than buy a chunk of the cutting-edge artificial intelligence lab, it cut a deal to receive almost half of OpenAI's financial returns until the investment is repaid up to a pre-determined cap, one of the people said. The unorthodox structure was concocted because OpenAI is a capped for-profit company housed inside a non-profit organization. > > It's not clear regulators see a distinction, however. On Friday the UK Competition and Markets Authority said it was gathering information from stakeholders to determine whether the collaboration between the two firms threatens competition in the UK, home of Google's AI research lab Deepmind. 
The US Federal Trade Commission is also examining the nature of Microsoft's investment in OpenAI and whether it may violate antitrust laws, according to a person familiar with the matter. … > > Microsoft didn't report the transaction to the agency because the investment in OpenAI doesn't amount to control of the company under US law, the person said. OpenAI is a non-profit and acquisitions of non-corporate entities aren't reported under US merger law, regardless of value. Agency officials are analyzing the situation and assessing what its options are. > > "While details of our agreement remain confidential, it is important to note that Microsoft does not own any portion of OpenAI and is simply entitled to a share of profit distributions," a Microsoft spokesperson said in a statement. Sure. Ownership of a company consists _mainly_ of being entitled to a share of profit distributions, but not _only_ that; ownership does often ([not always](https://www.bloomberg.com/opinion/articles/2018-07-11/investors-gave-snap-a-gift)) come with, for instance, voting rights. Microsoft has isolated the profit share and taken only that, not actual ownership. Okay fine and also a [board observer seat](https://www.wsj.com/tech/ai/openais-new-board-takes-over-and-says-microsoft-will-have-observer-role-50e55b73). "It's not clear regulators see a distinction," or that they should really. The conceptual overlap between "owning a portion of OpenAI" and "being entitled to a share of profit distributions" is not perfect, but it is large, and a regulator might reasonably say "close enough." Also here is a small fun fact about this distinction. When I [wrote about OpenAI](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai) last month, I included this organizational chart from its own website: I clipped that chart from the website myself, on Nov. 19, though I circled "Minority owner" in blue for this post. I went back to [the website](https://openai.com/our-structure) yesterday to check, and found this _slightly different_ chart: See the difference? Again, I took the liberty of circling it in blue. I do not know the details of Microsoft's contractual arrangement with OpenAI, or if they have changed since November. But in November, OpenAI's website said that Microsoft was a "minority owner" of its capped-profit subsidiary. In December, when "it is important to note that Microsoft does not own any portion of OpenAI," the website says it has a "minority economic interest." You can see how people might confuse those things, since OpenAI did. One other thing I should say is that [here is a claim](https://www.levels.fyi/blog/openai-compensation.html) that OpenAI's _employees_ get a capped profit interest instead of stock (or stock options, or restricted stock units), because of its nonprofit structure, and that this profit interest gets better _tax treatment_, for them, than standard startup equity awards: > People receiving profit interests will be granted the units upon their vesting date at no additional cost. Because of that, there's an additional key tax benefit in that they are tax-free upon issuance and vesting, so the only tax hit would be a capital gains tax when the profit is received or sold. > > In contrast, with a traditional Restricted Stock Unit (RSU) that you'd see at a MAANG level company, when the RSU vests, an employee will be taxed upon receiving their RSUs immediately. 
This is because the RSU is essentially a percentage of equity in the company and holds a value at whatever the market rate is for that company's stock. Regulatory arbitrage everywhere! #### Altmaning - Matt Levine Usually, the founder and chief executive officer of a startup would like to be able to raise money from investors while keeping complete control of the company, while the investors would prefer to have some control over how their money is used. Ultimately, if there is a sharp disagreement, the investors would like to be able to _fire_ the founder and keep the company for themselves; the founder would like to be able to prevent that, and keep the company (and their money) for herself. This is a real tension, both sides have good reasons for their positions - it's her vision, her blood, sweat and tears; it's their money - and different startups strike the balance different ways. Some startups have dual-class stock structures and shareholder agreements that allow the founder to keep control of the board no matter how much outside money she raises. Other startups have single-class stock and shareholder agreements that give outside investors a lot of power. Generally, startups will have more founder-friendly structures if (1) they are in high demand (and can thus dictate terms to investors) and (2) their founders _care_ about this stuff; some founders are sort of innocent and say "if I focus on doing a good job the governance will work itself out," while other founders fight really hard for board control. And there are trends over time: When it is easy for startups to raise money, and hard for venture capitalists to get into deals, the founders get to dictate the terms, and the venture capitalists compete over who can be most founder-friendly. When capital is scarce, the providers of capital get to set the terms. All of this is pretty straightforward stuff, a somewhat zero-sum battle between founders and investors for control. It's the investors' money, it's the founder's vision, they each want protection, etc. A weird innovation that OpenAI came up with was to introduce a _third_ party that, technically, [has absolute control of the company](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai), and can ignore _both_ what the founder/CEO wants _and_ what the investors want. OpenAI has a board of (mostly) independent directors, and that board is founder/CEO Sam Altman's boss and does _not_ answer to the investors: It is the board of directors of a nonprofit, who appoint themselves, and who have a fiduciary duty to the nonprofit's mission of "building safe and beneficial artificial general intelligence for the benefit of humanity." In this structure: - Altman has no power over the board: He was a board member until November, but he had only one vote, and now the board has [kicked him off](https://openai.com/our-structure). - The investors have no power over the board: They had no voting rights at all, and OpenAI [told them](https://openai.com/our-structure), in its operating agreement, that "it would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation." And then the board fired Altman as CEO, and Altman and the investors were both extremely upset about this and entirely aligned with each other in wanting him back as CEO, and in the course of a few frantic days he did in fact come back as CEO. 
The board's practical power was limited _by the fact that it was neither running the company nor supplying the money:_ It was just some outsiders with some votes; it could neither fund the development of OpenAI's large language models nor, you know, do that development itself. But the board's _theoretical_ power was total. Anyway here's a Wall Street Journal story about how, post-OpenAI, [founders want to get more control back from investors](https://www.wsj.com/business/entrepreneurship/startup-founders-fret-over-getting-fired-like-sam-altman-1c91917c): > The entrepreneur world was stunned to see the board of hot artificial-intelligence company OpenAI fire Sam Altman just before Thanksgiving. He had been the face of one of the biggest successes of the year and suddenly he was out. In startup land, founders and advisers say they started discussing new ways to protect themselves. > > Altman eventually made it back to OpenAI in a countercoup. But the tension at one of the country's biggest startups is playing out in a longstanding debate about who should control a burgeoning company. It is an inherent conflict in business, with founders wanting protection for their jobs while investors want it for their money. > > Among startups, tougher economic conditions have recently given venture capitalists and investors the upper hand. After OpenAI, founders are going to try to regain their footing. Sure, fine. "Tougher economic conditions have recently given venture capitalists and investors the upper hand," so terms are less founder-friendly, and founders would like them to be more founder-friendly, because that is an eternal and universal tension. And "well look at what happened to Sam Altman" is, I suppose, an argument to make in that negotiation. But it has nothing to do with anything! What happened to Sam Altman is not that his investors disagreed with his vision and fired him! What happened to Sam Altman is that OpenAI, almost uniquely, is an [$86 billion startup](https://www.bloomberg.com/opinion/articles/2023-11-27/openai-is-still-an-86-billion-nonprofit) whose governance terms are _neither_ founder-friendly _nor_ investor-friendly. They are nonprofit-board-friendly. That's so weird! If you're a tech startup, you don't need a nonprofit board! The number of tech startups that answer to a nonprofit board is very close to one! If you don't have a nonprofit board, this whole problem doesn't exist! On the other hand, if you are a founder looking to keep more control over your company, there is a lot to learn from OpenAI? The Journal story discusses the normal founder-friendly approaches: > Eric Ries, founder of the Long-Term Stock Exchange and something of a corporate governance guru and go-to mentor among the Silicon Valley set … has a system of hurdles founders can set up that would make it harder for a board to move against a company's mission or management. The most protective moves, Ries and lawyers say, are implementing supervoting shares or dual-class shares, which give founders ultimate control over their companies. These structures create multiple classes of shares to give founders, and sometimes early employees or investors, voting control. Okay sure but OpenAI has an operating agreement that (1) gives investors _no votes at all_ and (2) tells them, in writing, that they should "view any investment … in the spirit of a donation." That's _way_ better for a founder than dual-class stock, _if the founder is sure she controls the board_. #### Elon vs.
OpenAI - Matt Levine I [wrote yesterday](https://www.bloomberg.com/opinion/articles/2024-02-29/the-board-of-directors-is-in-charge) about reports that the US Securities and Exchange Commission might be looking into whether OpenAI or its founder and chief executive officer, Sam Altman, might have misled its investors. Late last year, OpenAI's board briefly [fired Altman](https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai) for not being "consistently candid," and then reversed course and fired itself instead. So there is some reason to believe that somebody wasn't candid about something. I had my doubts that it would rise to the level of securities fraud, though. For one thing, OpenAI is a [nonprofit organization](https://openai.com/our-structure), and even its for-profit subsidiary, OpenAI Global LLC, which _has_ raised money from investors, isn't all _that_ for-profit. I wrote: > At the top of OpenAI's operating agreement, it warns investors: "It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-[artificial general intelligence] world." I still don't know what Altman was supposedly not candid about, but whatever it was, how material can it possibly have been to investors, given what they signed up for? "Ooh he said it cost $50 million to train this model but it was really $53 million" or whatever, come on, the investors were donating money, they're not sweating the details. But that wasn't quite right, was it? Nonprofits _can_ defraud their donors. Generally that sort of fraud is not about _financial results_; it is about the nonprofit's mission, and whether it is using the donors' money to advance that mission. If I ask you to donate to save the whales, and you give me $100,000 earmarked to save the whales, and I spend it all on luxury vacations for myself, I probably will get in trouble. I suppose if Altman was not candid about OpenAI's mission, or its pursuit of that mission, that really could have been a kind of fraud on OpenAI's donors. I mean investors. It could have been donation/securities fraud on the donors/investors. Here's [one of them](https://www.bloomberg.com/news/articles/2024-03-01/musk-sues-openai-altman-for-breaching-firm-s-founding-mission)! > Elon Musk sued OpenAI and its Chief Executive Officer Sam Altman, alleging they violated the artificial intelligence startup's founding mission by putting profit ahead of benefiting humanity. > > The 52-year-old billionaire, who was a co-founder of OpenAI but is no longer involved, said in a lawsuit filed late Thursday in San Francisco that the company's close relationship with Microsoft Corp. has undermined its original mission of creating open-source technology that wouldn't be subject to corporate priorities. > > Musk, who is also CEO of Tesla Inc., has been among the most outspoken about the dangers of AI and artificial general intelligence, or AGI. The release of OpenAI's ChatGPT more than a year ago popularized advances in AI technology and raised concerns about the risks surrounding the race to develop AGI, where computers are as smart as an average human. > > "To this day, OpenAI Inc.'s website continues to profess that its charter is to ensure that AGI 'benefits all of humanity,'" the lawsuit said. "In reality, however, OpenAI Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft." 
Here is [Musk's complaint](https://assets.bwbx.io/documents/users/iqjWHBFdfxIU/rYCUmwA4Xxpw/v0). It is essentially a complaint for breach of contract: Musk argues that he founded OpenAI with Altman and Greg Brockman, that they had a deal about how OpenAI would operate, and that Altman and Brockman have now gone back on the deal. The contract said that OpenAI would be a nonprofit, that it would be run for the benefit of humanity, that it would build artificial general intelligence and give it away for free, and that it would build open-source software (thus the name) and explain to the public how its models operate. But now OpenAI is run for profit, for the benefit of Microsoft and its other investors rather than humanity. It has built artificial general intelligence and is hoarding it for its own enrichment rather than giving it away. One problem with this claim is that the contract doesn't _quite_ exist. Musk's lawsuit says that OpenAI has breached "the Founding Agreement" of OpenAI, capitalized like that, as though he, Altman and Brockman sat down and signed a piece of paper with "Founding Agreement" at the top, setting out how OpenAI would operate. But they didn't. From the complaint: > This Founding Agreement is memorialized in, among other places, OpenAI, Inc.'s founding Articles of Incorporation and in numerous written communications between Plaintiff and Defendants over a multi-year period. … That is: There is no document titled "Founding Agreement"; despite being wealthy sophisticated repeat startup founders who know a lot of lawyers, the founders never sat down and signed a contract. Instead, the "Founding Agreement" has to be inferred from other documents. Musk cites two: 1. There's a June 2015 email from Altman to Musk, with five numbered bullet points setting out a plan for building AI. "The mission would be to create the first general AI and use it for individual empowerment - i.e., the distributed version of the future that seems the safest," the first bullet point begins. "I think ideally we'd start with a group of 7-10 people, and plan to expand it from there," says the second, and "we have a nice extra building in Mountain View they can have." The third bullet point proposes a five-person governance board including Musk and Altman, and "we'd have an ongoing conversation about what work should be open-sourced and what shouldn't." In the fourth bullet point, Altman asks Musk to "be involved somehow in addition to just governance"; maybe he could "come by and talk to them about progress once a month or whatever." Musk replied to the email "Agree on all." 2. There is the December 2015 certificate of incorporation of OpenAI Inc., the nonprofit corporation that ultimately controls OpenAI. "The specific purpose of this corporation is to provide funding for research, development and distribution of technology related to artificial intelligence," it says. "The resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person. … No part of the net income or assets of this corporation shall ever inure to the benefit of any director, officer or member thereof or to the benefit of any private person." And Musk donated money to OpenAI, the nonprofit, over the years: In 2016 and 2017, he was the biggest donor to OpenAI, and "all told, Mr. Musk contributed more than $44 million to OpenAI, Inc. between 2016 and September 2020." 
He also did other stuff for OpenAI: He helped with recruitment, paid rent on its offices, "regularly visited," and "was present for important company milestones." He ultimately [left his role with OpenAI in 2018](https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai). You can sort of wave your hands at all this and say "Musk had a contract with OpenAI in which he agreed to donate money and in exchange OpenAI explicitly agreed to be an open-source nonprofit forever," but I don't think that's exactly right? The email from Altman was an initial proposal, not a detailed contract setting out the permanent terms of their deal; it promised not to open-source the software forever but only to "have an ongoing conversation about what work should be open-sourced and what shouldn't." Money was not mentioned. And the certificate of incorporation was not a contract between Musk and OpenAI: He didn't sign the certificate, and he wasn't a shareholder, because there were no shares (it's a nonprofit). OpenAI's fiduciary duties are not to him, as a co-founder, but to humanity. The evidence of a _specific deal_ between Musk and OpenAI is pretty thin. Still, I sympathize? OpenAI Inc., the top-level company that controls OpenAI's business, really is incorporated as a nonprofit. It really was formed to work for the benefit of humanity and not "for the private gain of any person." And it really did take donations from Musk and use them to build its team. But it eventually set up a for-profit subsidiary, OpenAI Global LLC, which has managed to raise money from investors at an $86 billion valuation, and those investors (and OpenAI's employees - though not Altman) expect some (capped) financial return on that investment. [OpenAI says](https://openai.com/our-structure) that "it became increasingly clear that donations alone would not scale with the cost of computational power and talent required to push core research forward, jeopardizing our mission": It had to raise money from investors, by promising them returns, to achieve its mission. (It raised something like $130.5 million in total donations; it has raised something like [$13 billion](https://www.bloomberg.com/news/articles/2024-01-09/microsoft-s-openai-ties-face-potential-eu-merger-investigation) in investment commitments from Microsoft.) I am sure OpenAI had good lawyers when it set up this structure, and I assume that as a technical matter none of this violates the certificate of incorporation or the nonprofit mission: A portion of the profits of _OpenAI Global LLC_ can go to employees and venture capitalists and Microsoft, even though "no part of the net income or assets of" _OpenAI Inc._ can. Still that is rather technical, and Musk has a point here: > In 2017, Mr. Brockman and others suggested transforming OpenAI, Inc. from a nonprofit to a for-profit corporation. After a series of communications over several weeks, Mr. Musk told Mr. Brockman, Dr. Sutskever, and Mr. Altman "[e]ither go do something on your own or continue with OpenAI as a nonprofit. I will no longer fund OpenAI until you have made a firm commitment to stay or I'm just being a fool who is essentially providing free funding to a startup. Discussions are over." That was before OpenAI launched its for-profit subsidiary, though. When it did, Musk sort of grudgingly tolerated it: > On March 11, 2019, OpenAI, Inc. announced that it would be creating a for-profit subsidiary: OpenAI, L.P. 
Prospective investors were notified of an "important warning" at the top of the summary term sheet that the for-profit entity "exists to advance OpenAI Inc.'s [the nonprofit's] mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The General Partner's duty to this mission and the principles advanced in the OpenAI Inc. Charter take precedence over any obligation to generate a profit." Accordingly, investors were expressly advised that "[i]t would be wise to view any investment in OpenAI LP in the spirit of a donation." > > Following the announcement, Mr. Musk reached out to Mr. Altman asking him to "be explicit that I have no financial interest in the for-profit arm of OpenAI." However, Mr. Musk continued to support OpenAI, Inc., the non-profit, donating an additional $3.48 million in 2019. But at that point he _was_ just providing free funding to a startup, wasn't he? So Musk argues that they had a deal, and that OpenAI breached it in three ways. First, it licenses GPT-4, its most powerful model so far, to Microsoft. OpenAI, in its public statements and its charter and its deal with Microsoft, has said that it will seek to build _artificial general intelligence_ for the benefit of humanity, but that it can license lesser forms of artificial intelligence to Microsoft. So the question is: Is GPT-4 artificial general intelligence? Musk says yes: > GPT-4 is not just capable of reasoning. It is better at reasoning than average humans. It scored in the 90th percentile on the Uniform Bar Exam for lawyers. It scored in the 99th percentile on the GRE Verbal Assessment. It even scored a 77% on the Advanced Sommelier examination. … > > GPT-4 is an AGI algorithm, and hence expressly outside the scope of Microsoft's September 2020 exclusive license with OpenAI. … > > Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity. Seems like a stretch, though Musk quotes "Microsoft's own researchers" saying that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Second, GPT-4 is not open-source: > GPT-4's internal design was kept and remains a complete secret except to OpenAI-and, on information and belief, Microsoft. There are no scientific publications describing the design of GPT-4. Instead, there are just press releases bragging about performance. On information and belief, this secrecy is primarily driven by commercial considerations, not safety. Although developed by OpenAI using contributions from Plaintiff and others that were intended to benefit the public, GPT-4 is now a _de facto_ Microsoft proprietary algorithm, which it has integrated into its Office software suite. Again, there doesn't seem to be any actual agreement between OpenAI and Musk (or anyone else) promising to make everything open-source, but, sure, it's annoying that they built a model partly using his money and now won't let him see it. Third, Musk objects to OpenAI "permitting Microsoft, a publicly traded for-profit corporation, to occupy a seat on OpenAI, Inc.'s Board of Directors and exert undue influence and control over OpenAI's non-profit activities." Microsoft is [just an observer](https://www.bloomberg.com/news/articles/2024-01-05/microsoft-picks-dee-templeton-as-openai-board-observer) on the board, not a voting member, but, right, its interests in OpenAI are probably not particularly _charitable_.
And Musk is asking the court, not for his donations back, but for an order making OpenAI do what it supposedly promised to do: Open up GPT-4's source code, make it freely available to the public, end Microsoft's exclusive license and board rights, and generally stop OpenAI's for-profit work. Obviously one should be pretty cynical here. _Musk runs a for-profit artificial intelligence company_, xAI, which competes with OpenAI and has [raised money](https://www.bloomberg.com/news/articles/2024-01-20/musk-s-xai-secures-500-million-toward-1-billion-funding-goal) by [citing OpenAI's commercial success](https://www.bloomberg.com/news/articles/2024-02-05/xai-potential-investors-focus-on-muskonomy-openai-success). Blowing up that competitor's commercial prospects, as this lawsuit is trying to do, could help xAI. He also runs other companies - Tesla Inc., Twitter/X - that make use of AI. I suppose Musk's companies would benefit from reading OpenAI's source code and "scientific publications describing the design of GPT-4," so why not sue OpenAI and try to make that information public? Musk's protests about OpenAI's unseemly pursuit of AI profit for investors do look a little insincere, since he's doing the same thing. But he does have kind of a reasonable gripe? OpenAI was founded as a nonprofit, raised a bunch of money from him as a donor to a nonprofit, and is now somehow an enormously valuable tech startup owned by people who are not him. After OpenAI fired Altman last November, but before it brought him back, it looked as if OpenAI's _for-profit investors_ - Microsoft, but also some venture capitalists, and the employees who owned quasi-equity - had lost a bunch of value. I [wrote at the time](https://www.bloomberg.com/opinion/articles/2023-11-21/openai-is-a-strange-nonprofit): > I feel like the lesson here is … "don't invest in nonprofits at an $86 billion valuation." Which I think has never come up before? Like as far as I can tell no one in human history has ever purchased shares in a nonprofit at an $86 billion valuation? Because purchasing shares in a nonprofit, at any valuation, is not a coherent thing to do? But then OpenAI made it happen, for the first time, and probably also the last. But it turns out the investors were fine, and the correct lesson might be the opposite one: Don't _donate money_ to a nonprofit that is selling shares at an $86 billion valuation! You might be skeptical about putting an $86 billion valuation on a nonprofit, but probably you should be even more skeptical that a business with an $86 billion valuation is a nonprofit. #### OpenAI - Matt Levine Here is a partial list of things that Elon Musk has thought are the most important things in the world: 1. "[Solving the environment](https://www.cnbc.com/2018/11/05/elon-musk-teslas-work-is-important-to-the-future-of-the-world.html)," by building electric cars at Tesla. 2. Making humanity a "[multi-planet species](https://www.cnbc.com/2021/04/23/elon-musk-aiming-for-mars-so-humanity-is-not-a-single-planet-species.html)," by building spaceships at SpaceX. 3. Buying Twitter to prevent the "[corrosive effect on civilization](https://www.sfgate.com/local/article/elon-musk-claim-for-buying-twitter-sf-mind-virus-18462172.php)" of San Francisco's "mind virus." I want to make a few points about this list. First: It's a fine list? Stopping climate change, colonizing space, and changing public discourse are all reasonable things to prioritize. 
There are tech founders who are like "I wanted to solve the problem that sometimes when I order food delivery, it arrives cold"; Elon Musk is not a founder like that. Second: He has made a lot of progress on these problems. Tesla revolutionized electric cars; SpaceX revolutionized space; I personally do not care for the effect that Musk has had on the conversation at Twitter (now X), but he sure has had an effect, and I assume _he's_ happy with it. As a founder of companies intended to solve big problems, Elon Musk has been quite effective. This is obvious stuff. "Elon Musk tackles big problems and is unusually good at solving them" is just conventional wisdom, which is why there are glowing biographies written about him and some days he's the richest person in the world. The main way to get really rich is by solving big problems successfully. And that's the third point that I want to make about this list: When Elon Musk has looked around and identified a big problem and set about trying to solve it, he has done so by founding (or acquiring) a for-profit company that he owns and controls. This is not the only way to (try to) solve big problems. Bill Gates, for instance, got very rich by selling software and then decided to improve the world by donating a lot of money [to charitable works](https://www.gatesfoundation.org/our-work). Or various billionaires try to improve the world by supporting politicians who they think will do good things. Musk has [dabbled](https://www.nytimes.com/2024/03/05/us/politics/trump-elon-musk.html) in these approaches, but mostly they are not his preference. "[Most philanthropy was bulls-](https://www.cnn.com/2023/09/11/tech/elon-musk-bill-gates-isaacson-book/index.html#:~:text=Musk%20told%20Isaacson%20he%20felt,a%20short%20bet%20against%20Tesla.)," Musk told Gates once, arguing that Tesla did more good for the world than most charities. And this too is a pretty common view. It is not the only view - a lot of people think that philanthropy is good! - but in Silicon Valley tech circles it is [fairly conventional](https://www.motherjones.com/politics/2023/12/effective-accelerationism/) to [think that](https://www.axios.com/2023/10/21/philanthropy-selfish-billionaires) startups are a better way to improve the world than charity, that startups can be more ambitious, more focused on results, have a better alignment of interests and more motivated employees, and can raise and deploy a lot more money than charities. "Technological innovation in a market system is inherently philanthropic," [wrote Marc Andreessen last year](https://a16z.com/the-techno-optimist-manifesto/), in a "Techno-Optimist Manifesto" laying out this view. Now, I don't think that Musk thinks that _all_ profit-seeking companies, or _all_ tech startups, are good; he criticizes lots of them, and had to go and buy Twitter because its previous management upset him. I just think that he thinks the best way to improve the world is generally through a for-profit company _that he runs_. And even _this_ view - "the highest form of philanthropy is a for-profit company run specifically by Elon Musk" - is pretty widespread. Larry Page [has mused about](https://nymag.com/intelligencer/2014/03/larry-pages-charity-problem.html) leaving his fortune to Musk, because Musk's for-profit businesses are more philanthropic than any philanthropies he can think of.
Last week [Musk sued OpenAI](https://www.bloomberg.com/opinion/articles/2024-03-01/openai-isn-t-open-enough-for-elon), arguing that it was founded as a nonprofit organization to build artificial intelligence for the benefit of humanity, that he was a big donor to that nonprofit, and that it has turned into a for-profit company in violation of a supposed agreement not to. What I find strangest about all this is that OpenAI was a nonprofit to begin with: Its founders, people like Musk and Sam Altman, are generally leading proponents and examples of the idea that the best way to do good in the world is with a for-profit tech startup. At some level I can understand why artificial intelligence is different from electric cars or rockets or [social networking](https://en.wikipedia.org/wiki/Loopt): Powerful artificial intelligence could potentially put humans out of work, enslave us, kill us, etc.; Altman [has said](https://www.wsj.com/tech/ai/elon-musk-sam-altman-openai-lawsuit-8e6f1897) that AI is "probably the greatest threat to the continued existence of humanity," terrific. Building AI that enriches its owners but immiserates everyone else would be bad. Still, all of the advantages of startups kind of remain true about AI. People might be more motivated to build AI if it will enrich them than if it won't. It was always an odd fit for Altman and Musk to start a nonprofit together, an awkward choice to jam tech startup ideas and methods into a nonprofit box. Anyway [here's this](https://www.bloomberg.com/news/articles/2024-03-06/openai-responds-to-musk-lawsuit-sad-it-s-come-to-this): > OpenAI fired back at a lawsuit filed against it by Elon Musk in a blog post Tuesday, using the billionaire's own emails to show he backed the company's plans to become a for-profit business and that he insisted it raise "billions" of dollars to be relevant compared with Google. Here is [OpenAI's blog post](https://openai.com/blog/openai-elon-musk), which explains that OpenAI was founded as a nonprofit but eventually decided it needed more money than it could get from donors: > We spent a lot of time trying to envision a plausible path to AGI. In early 2017, we came to the realization that building AGI will require vast quantities of compute. We began calculating how much compute an AGI might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission-billions of dollars per year, which was far more than any of us, especially Elon, thought we'd be able to raise as the non-profit. Yes, right, it turns out that it is easier to motivate people to give you computing power if you pay them for it, it is easier to motivate researchers to develop artificial intelligence if you [give them stock options](https://www.levels.fyi/blog/openai-compensation.html), and it is easier to get people to give you money if you [offer them a share](https://www.cnbc.com/2023/04/08/microsofts-complex-bet-on-openai-brings-potential-and-uncertainty.html) of your financial returns. (OpenAI has raised [roughly 100 times](https://www.bloomberg.com/opinion/articles/2024-03-01/openai-isn-t-open-enough-for-elon) as much money from investors as it ever did from donors.) If you want to build a car company or a rocket company or a social network, this is also true, so it seems intuitive to apply the same reasoning to AI. Elon Musk knows this and, apparently, agreed that OpenAI should pivot to profit.
But he took his usual view, that the best way to do good for the world was through a for-profit company _that he controlled_: > In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity. Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding. Reid Hoffman bridged the gap to cover salaries and operations. > > We couldn't agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI. He then suggested instead merging OpenAI into Tesla. In early February 2018, Elon forwarded us an email suggesting that OpenAI should "attach to Tesla as its cash cow", commenting that it was "exactly right… Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn't zero". > > Elon soon chose to leave OpenAI, saying that our probability of success was 0, and that he planned to build an AGI competitor within Tesla. When he left in late February 2018, he told our team he was supportive of us finding our own path to raising billions of dollars. And now Elon Musk in fact [does have a for-profit artificial intelligence startup](https://www.wsj.com/tech/ai/elon-musks-x-leans-on-his-ai-startup-9038380d) that he controls and that competes with OpenAI. The Wall Street Journal [reports](https://www.wsj.com/tech/ai/elon-musk-sam-altman-openai-lawsuit-8e6f1897): > After Musk filed his lawsuit, Altman wrote a memo to his staff: "The implication that benefiting humanity is somehow at odds with building a business is confusing," he said. "I miss the old Elon." No, see, he still thinks that building a business is the best way to benefit humanity, which is why he's doing it. But the old Elon also liked to make outlandish claims in court; you just can't take this stuff too seriously. #### The board is in charge - Matt Levine The other day [I mentioned](https://www.bloomberg.com/opinion/articles/2024-02-29/the-board-of-directors-is-in-charge) a lawsuit against Crown Castle Inc., whose estranged co-founder sued over a deal the company struck with Elliott Investment Management. Elliott had run an activist campaign and threatened a proxy fight, and Crown Castle settled by giving Elliott two board seats. The co-founder, Ted Miller, wants to nominate his own board candidates and objected to Crown Castle's agreement to endorse Elliott's directors without even considering his. "The affairs of Delaware corporations," his lawyers wrote, "must be managed by boards of directors, not backroom deals." Well, this week Crown Castle and Elliott [amended their deal](https://www.sec.gov/Archives/edgar/data/1051470/000095014224000618/eh240454086_ex1001.htm) to add a "fiduciary out." That is: They still have a contract saying that the board will endorse Elliott's nominees, but now the contract specifically says that the board can change its mind if it decides that that's in the best interests of shareholders: > If the Board in good faith determines (a "Recommendation Determination"), after consultation with counsel, that the fiduciary duties of the members of the Board as directors of the Company require that the Board (x) change or withhold a prior recommendation that the Company's shareholders vote "for", or (y) recommend that the Company's shareholders vote "against", the election of a New Director (a "Specified Director"), then: [it can]. 
That does seem like a fairly straightforward fix. Miller's objection was a technical point of Delaware law: The board of directors has to run the company in the way that it thinks is best for shareholders, so signing a contract saying "the board will recommend your nominees" might be illegal, if (1) the board later gets other nominees, (2) it decides, in its heart of hearts, that those nominees are better, but (3) it feels bound by the contract to recommend the first, worse nominees. But if you rewrite the contract to say "the board can change its mind," that addresses the problem. And these outs are not uncommon. In particular, many public-company merger agreements let the target company's board get out of the merger if, after jumping through some hoops, they conclude that another deal is in the shareholders' best interests.

Obviously rewriting the contract to say that the board can change its mind makes it a less useful contract. But the point is that, here, the board _wanted_ the deal with Elliott, and probably _doesn't_ want Miller's nominees on the board. Rewriting the contract solves the technical problem but probably doesn't change what will actually happen.

### Kangaroos - Matt Levine

A well-known, somewhat exaggerated story about effective altruism goes like this:

1. Some people decided that altruism should be effective: Instead of giving money in ways that make you feel good, you should give money in ways that maximize the amount of good in the world. You try to evaluate charitable projects based on how many lives they will save, and then you give all your money to save the most lives (anywhere in the world) rather than to, say, get your name on your city's art museum.
2. Some people in the movement decided to extend the causal chain just a bit: Spending $1 million to buy mosquito nets in impoverished villages might save hundreds of lives, but spending $1 million on salaries for vaccine researchers in rich countries has a 20% chance of saving thousands of lives, so it is more valuable.
3. You can keep extending the causal chain: Spending $1 million on salaries for artificial intelligence alignment researchers in California has a 1% chance of preventing human extinction at the hands of robots, saving billions of lives - trillions, really, when you count all _future_ humans - so it is more valuable than anything else you could do. I made up that 1% number but, man, that number is going to be made up no matter what. Just make up some non-zero number and you will find that preventing AI extinction risk is the most valuable thing you can do with your money.
4. Eventually the normal form of "effective altruism" will be paying other effective altruists large salaries to worry about AI in fancy buildings, and will come to resemble the put-your-name-on-an-art-museum form of charity more than the mosquito-nets form of charity. Here, for instance, is the Center for Effective Altruism's [explanation of why it bought a castle near Oxford](https://forum.effectivealtruism.org/posts/76dQ6YfBuLzJDdTgz/reflections-on-wytham-abbey).

I do not want to fully endorse this story - there is still a lot of effective-altruism-connected stuff that is about saving lives in poor countries, and for all I know they're right about AI extinction too; here is Scott Alexander "[In Continued Defense Of Effective Altruism](https://www.astralcodexten.com/p/in-continued-defense-of-effective)" - but I do want to point out this thought pattern.
It is:

- You find a thing that is bad (death) and spend money to attack it pretty directly.
- You notice that you can extend causal chains to create more capacity: Buying mosquito nets can save _some_ lives pretty directly, but preventing AI extinction can indirectly, with some probability, save trillions of lives.
- You do the extended-causality thing because … you think it is better? Because it has more capacity, as a trade - it can suck up more money - than the mosquito nets thing? Because it is more convenient? Cleverer? More abstract? There is no obvious place to cut off the causal chain, no obvious reason that a 90% probability of achieving 100 Good Points would be better than a 30% probability of 500, or a 5% probability of 5,000, or whatever.

You [could have](https://www.bloomberg.com/opinion/articles/2023-10-23/nobody-wants-mutual-funds-now) a [similar thought process](https://www.bloomberg.com/opinion/articles/2023-10-17/you-can-t-sell-trees-no-one-cuts-down) with [carbon credits](https://www.bloomberg.com/opinion/articles/2021-04-21/you-can-sell-the-trees-you-don-t-cut):

1. Some people noticed that trees sequester carbon, and cutting down trees increases global warming.
2. They spun up a bunch of projects that involved preserving trees that would otherwise be cut down, or planting new trees, in ways that would slow global warming, and started awarding carbon credits for the trees that were saved.
3. You can extend the causal chain. If a logging company decides not to cut down a forest, that saves X trees and is worth Y carbon credits. But if you, I don't know, air a television ad telling people "trees are good, don't cut them down," how many trees does that save? How many carbon credits is that worth? If you fund a researcher to study tree diseases? Make up your own potentially tree-saving idea, and then award yourself some carbon credits.

"Award yourself some carbon credits" is too glib, and in fact there are various certifying bodies for carbon credits, but you can make your case. Here's [a story about kangaroos](https://theconversation.com/3-reasons-why-removing-grazing-animals-from-australias-arid-lands-for-carbon-credits-is-a-bad-idea-218129):

> One area we must scrutinise forensically are human-induced regeneration projects. These are the backbone of the [Australian] offset scheme, accounting for 30% of credits issued to-date. Over the coming years, they could be responsible for almost 50% of annual issuances. These projects claim to regenerate native forests across vast areas - not by replanting trees in cleared areas, as you might think, but by reducing grazing pressure from livestock and feral animals. …
>
> Almost all projects are in arid or semi-arid rangeland grazed by livestock and kangaroos and only partly cleared.

You don't _plant_ trees, and you don't _refrain from cutting down_ trees; there is only so much capacity for that. (You weren't going to cut down trees on the arid rangeland anyway, and planting more is hard.) Instead, you go to the arid rangeland and, uh, find some kangaroos and discourage them from eating trees? Does that reduce carbon emissions? I mean! No, argues the article:

> These projects are largely in the uncleared rangelands covering most of Australia's interior. These areas have little chance of promoting woody growth and storing more carbon, not because of grazing pressure, but because rainfall is too low, the soil too infertile, and the vegetation already close to its maximum.
> Forests will not regrow in these areas, particularly under hotter and drier climates. …
>
> In fact, where overgrazing does occur in Australia, it's likely to actually increase tree and shrub cover rather than reduce it. Known as woody thickening, this happens when grazing animals eat so many grasses and herbs that they skew the balance in favour of trees and taller shrubs.

But the general thought process opens up a world of possibilities. Lots of things have _some propensity_ to increase the growth of trees. Go do those things and get your carbon credits.

### Bill Ackman vs. Harvard - Matt Levine

One model of a charitable endowment is that you are in the business of selling appreciated assets on behalf of your donors in order to maximize tax efficiency. Like:

1. A rich person owns some stock. She bought it for, say, $1 million, and it is worth $10 million now.
2. She could sell the stock for $10 million, pay taxes on her gains, and be left with something like $8 million. She could then donate that money to you and deduct $8 million from her income taxes for the year, saving $3 million or so.
3. _Or_ she could _give_ you the stock. Then she would not recognize any gains on the sale, and she'd get to deduct its full market value from her income taxes, saving almost $4 million. You get a bigger gift ($10 million instead of $8 million), and she gets a bigger tax deduction. Better trade! (A rough worked sketch of this arithmetic follows below.)
4. Then _you_ sell the stock, for $10 million, and don't pay any taxes, because you are a non-taxable charitable endowment.

This is a very well-known feature of US tax law; [wealth managers](https://www.fidelitycharitable.org/giving-account/what-you-can-donate/donating-stock-to-charity-b1123.html) will commonly [tell you](https://www.schwab.com/learn/story/is-it-better-to-give-stock-or-cash-to-charity) to donate appreciated stock rather than cash, and if you go to, like, [Harvard University's web page](https://alumni.harvard.edu/giving), they will tell you how to donate stock, because it comes up a lot.

In this process, Step 4 is not strictly essential. If you are a big charitable endowment, you are not spending all of your money every year. You are investing it for the long run. Maybe you want to keep this stock; maybe you think it has room to run and will be worth more in the long run. But if you are a big charitable endowment, you have some _plan_. You sit down and think about how to construct your portfolio; you say "we should be 50% stocks and 5% bonds and 15% hedge funds and 20% private equity and 10% timberland" or whatever, and then you pick assets and managers within each category to get the portfolio that you think will best position you for the long run. And then if some big donor comes in and gives you $10 million in cash, you allocate it to that portfolio.

But lots of donors instead come to you with stock that has appreciated a lot (often stock in their own companies), they want to maximize their gifts and their deductions, and they know that the way to do that is by donating the stock. The stock doesn't fit with your plan. You are not making an _investment decision_ each time they donate the stock. You are, several times a year, getting a big concentrated position in some random stock that your big donors happen to own. An endowment with an investment portfolio consisting only of the stuff that its donors happened to have lying around would look crazy. You take the random stuff, you sell it, you buy index funds or timberland or whatever.
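To make the arithmetic in that numbered list concrete, here is a minimal sketch. The 23.8% capital-gains rate and the 37% income-tax rate are illustrative assumptions (roughly top US federal brackets), not figures from the column, and the whole thing ignores deduction limits and state taxes.

```python
# Minimal sketch of the donate-appreciated-stock arithmetic described above.
# Tax rates are illustrative assumptions, not advice: ~23.8% long-term capital
# gains (incl. NIIT) and a 37% marginal income tax rate.

BASIS = 1_000_000        # what the donor originally paid for the stock
VALUE = 10_000_000       # what the stock is worth today
CAP_GAINS_RATE = 0.238   # assumed long-term capital gains rate
INCOME_TAX_RATE = 0.37   # assumed marginal income tax rate

def sell_then_donate_cash(basis, value):
    """Donor sells the stock, pays capital gains tax, donates the rest as cash."""
    gains_tax = (value - basis) * CAP_GAINS_RATE
    gift = value - gains_tax                  # cash the charity receives
    tax_saved = gift * INCOME_TAX_RATE        # income tax saved by deducting the gift
    return gift, tax_saved

def donate_stock_directly(basis, value):
    """Donor gives the stock itself: no gain recognized, full-value deduction."""
    gift = value                              # charity gets the whole position
    tax_saved = value * INCOME_TAX_RATE
    return gift, tax_saved

for name, fn in [("sell, then donate cash", sell_then_donate_cash),
                 ("donate the stock", donate_stock_directly)]:
    gift, saved = fn(BASIS, VALUE)
    print(f"{name:>22}: charity gets ${gift:,.0f}, donor's deduction saves ${saved:,.0f}")
```

With those assumed rates, selling first yields roughly a $7.9 million gift and a $2.9 million deduction benefit, versus $10 million and about $3.7 million for donating the shares directly, which lines up with the "something like $8 million" and "almost $4 million" figures in the passage.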
You are essentially a middleman: You have some target portfolio of assets you want to own, but your donations come largely in the form of _other_ assets that your donors happen to own, and you are in a position to efficiently turn the donated assets into the target assets. This can lead to occasional problems, though:

1. Your donors are often able to give all that money because they are successful investors, and they _like_ the stock they gave you. They think it has room to run. If you dump it the second they donate it, they will be offended. If you are like "no, you don't understand, we dump _all_ the random stuff that donors give us, we have our own investment thesis and don't want to just own your hand-me-downs," they will be even more offended. "Random stuff!"
2. Your donors are often in position to give all that money because they are _good_ investors, and often they are _right_, and the stock _does_ go up much more than your timberland or whatever, and then you look dumb in hindsight.

Bloomberg's [Pierre Paulden reports](https://www.bloomberg.com/news/articles/2023-12-13/bill-ackman-says-harvard-squandered-gift-but-denies-resentment):

> Bill Ackman denied that his weeks-long crusade against Harvard University and its president was driven by resentment toward his alma mater, but acknowledged a "serious" dispute with the school over a donation he made in 2017.
>
> "To be extremely clear, my advocacy on behalf of antisemitism, free speech on campus, and my concerns with DEI at Harvard have absolutely nothing to do with my unfortunate experience as a donor to the university," Ackman wrote in a post on X Tuesday night.

In 2017, Ackman wanted to give Harvard money to recruit economist Raj Chetty, but he "had no liquidity" due to a divorce. So:

> Ackman's solution was to give Harvard stock in Coupang Inc., at the time a speculative private venture-backed company.
>
> The stock he was giving was valued at $10 million, but Ackman said he agreed with Harvard that if the value went below $10 million he would make up the difference. But if the company went public and the stock was worth more than $15 million, he would have the right "to allocate the excess realized value" above that amount to any Harvard-related initiative of his choice.

From Ackman's [long tweet](https://twitter.com/BillAckman/status/1734715135678177419) on the matter:

> In Wall Street speak, Harvard had a put to me at $10m, and I retained a call at $15m, with the right to allocate the excess value to the Harvard initiative of my choice.

In 2021, Coupang [went public](https://www.bloomberg.com/opinion/articles/2021-03-16/coupang-s-ipo-pop-served-a-purpose), and the shares he gave to Harvard were worth $85 million. He called Harvard with the good news, only to find out that Harvard had already dumped the stock. Paulden:

> He was informed that Harvard Management Co., which oversees the $51 billion endowment, had sold the stock back to Coupang in a private transaction in March 2020. Ackman said no one from Harvard Management or the administration contacted him at the time to ask if he wanted to buy back the stock or to apologize after the fact for missing out on $75 million of potential gains.

Apparently the sale price was $10 million, the same as the valuation when he gave it. Ackman argues that this was a bad investing decision by Harvard:

> Harvard had sold stock which it could have put to me for $10m at a massive discount to its value at that time.
> Coupang had made massive progress since my gift in December 2017 and the stock's value had increased enormously.
>
> And Harvard had never told me that they had sold the shares. I was never offered the opportunity to buy the shares back for $10m or a higher price, which I would happily have done, had I known the University needed liquidity.
>
> And the notion that Harvard needed $10m of liquidity in the context of a $50 billion endowment is on its face absurd. Any sophisticated investor should also understand that when a private venture-backed company is buying back stock, it is a bad idea to sell.
>
> Harvard sold the stock despite the fact that we had a contract which provided the University with downside protection at $10m while allowing it to retain 100% of the upside. It made no economic sense whatsoever for Harvard to have sold.

And, sure, I suppose that's right. Selling at $10 million when you have a put from Ackman at $10 million does extinguish a lot of option value; better to keep the stock, let it ride, and have some chance of getting $85 million with insurance that you won't get less than $10 million. And sure, arguably, it would have been polite (and a good financial move) for Harvard's Portfolio Manager For Bill Ackman to call him to be like "hey we are reshuffling the Bill Ackman Portfolio, do you want to buy any of it?" I just don't think that Harvard really has a Portfolio Manager For Bill Ackman, and I think that its decision process is often less "does it make economic sense to sell this concentrated stock position that someone walked in and handed to us" and more "why _wouldn't_ we sell this stuff and invest the money in something we picked?"

Still, there is also Ackman's "call right," which is a little odd: Did he have the right to allocate the excess above $15 million _if Harvard kept the stock and realized that excess_, or was it just a standalone derivative contract where Harvard owed him any value of Coupang over $15 million whether or not it owned the stock? The latter seems weird, but Ackman seems to think that was the spirit of the deal; he writes:

> Unfortunately, the stock sale issue has never been resolved and nearly three years have gone by. And it should not have been hard for Harvard to resolve this problem. All Harvard has to do is honor the agreement it had made with me. That is, to grant me the right I bargained for:
>
> The right to allocate the $70m of excess proceeds to the Harvard-related initiative of my choice.
>
> And this should not be hard. Harvard has $50,000,000,000 of assets, and Harvard's obligation to me represents only 0.14% of these funds, and the funds all stay at Harvard.

Put his name on a program in nonprofit portfolio management.
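The "option value" point is easier to see as a payoff comparison. Here is a minimal sketch, under the simplifying assumptions that Ackman's "call" only lets him direct (not take back) anything above $15 million, that he makes good on the $10 million put, and that the exit values other than the $85 million IPO figure are made up for illustration.

```python
# Rough payoff sketch of the structure Ackman describes: Harvard holds the
# Coupang shares with a put back to Ackman at $10m, and Ackman can direct any
# proceeds above $15m to a Harvard initiative of his choice (the money still
# stays at Harvard). Exit values are illustrative except the $85m IPO figure.

PUT_STRIKE = 10_000_000
CALL_STRIKE = 15_000_000

def hold_with_put(exit_value):
    """Harvard keeps the shares: downside floored at the put strike,
    excess above the call strike is Ackman-directed but still Harvard's."""
    total = max(exit_value, PUT_STRIKE)            # put protects the downside
    ackman_directed = max(total - CALL_STRIKE, 0)  # slice Ackman gets to allocate
    unrestricted = total - ackman_directed
    return total, unrestricted, ackman_directed

def sell_now():
    """Harvard sells the position back at the $10m valuation, as it did in 2020."""
    return 10_000_000, 10_000_000, 0

for exit_value in (5_000_000, 10_000_000, 85_000_000):
    total, unrestricted, directed = hold_with_put(exit_value)
    print(f"exit ${exit_value/1e6:>4.0f}m -> hold: ${total/1e6:.0f}m total "
          f"(${unrestricted/1e6:.0f}m unrestricted, ${directed/1e6:.0f}m Ackman-directed); "
          f"sell in 2020: ${sell_now()[0]/1e6:.0f}m")
```

Under those assumptions, holding never leaves Harvard with less than the $10 million it got by selling, which is the force of Ackman's complaint; what holding costs Harvard is concentration risk and reliance on Ackman's credit behind the put, which is presumably why an endowment's default is to sell donated stock anyway.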