Insight Into Sam Altman's Firing From OpenAI
The way it happened surprised everyone, but the firing itself was predictable.
As of right now, only a handful of people know the exact reason Sam Altman was fired from OpenAI, and I am not one of them. However, I received a tip on Wednesday that his ouster was an imminent possibility. I have to be careful; I don’t want to risk damage to the image of a man who may be innocent, as I do not know which side Sam was on in the controversy I am about to discuss. I merely know that the dispute existed and that it reached a level of rancor at which noninvolvement, for a top executive, would have been impossible. He could have been fighting on the side of this evil impulse; he could have been fighting against it; it could be that neither is the case, and he was caught in the crossfire.
Silicon Valley is full of people who love to build things, and that’s often a virtue; the problem is that these projects are expensive, so they require funding, and very few people with money are willing to part with it unless it confers upon them some differential advantage over someone else, such as a competitor in business. Google scored its first profits in “the little game,” as tech people call advertising. When you sell ads, you’re getting thousands of dollars from small businesses so they can stay discoverable and thus alive at all, and millions from large corporations that want to remain established in the public mind. Ads work; although people are skeptical of the claims made in commercials, they remember that the product and the brand exist, and this memory persists even after the advertisement itself (and any skepticism toward its specific claims) is forgotten. Still, ads only go so far. McKinsey and Goldman Sachs did not achieve their position through direct advertisement, and commercials for political candidates seem to have only a slight effect. Advertising can raise awareness of one’s existence and stoke vague emotions, but it is not capable of, and often undermines, the full-scale reputationeering that is of interest to entangled multinational conglomerates and to governments involved in international conflict—what techies call “the big game”: the sale of influence over the content and algorithms that make up a person’s, company’s, or nation’s reputation. The little game is about reaching the affluent young women whose purchasing decisions determine the future of small commerce, but the big game makes and breaks kings. Master it, and you’ll get billion-dollar contracts and personal gifts from heads of state all over the world.
Google decided early on that it was more important to earn public trust than to maximize power or revenue. Although it is ironic that Paul Buchheit, of all people, coined the phrase “Don’t be evil,” the slogan was taken seriously for quite a few years there. The little game has winners and losers, but it hardly qualifies in most minds as evil. It is not out of malice that Coca-Cola’s executives want us to prefer their brand of sugary water over someone else’s. In any case, the little game can still be lucrative, and it does not force you to be furtive about playing it. When you advertise, you can make it obvious that you are advertising and it will still work—you are not really pushing truth claims about your product’s superiority (even if, in plaintext or ironically, you are making them); you are pushing that you exist. Google knows that some actors will purchase ads and that others will manipulate content or the algorithm itself (“search engine optimization”) to make its supposedly fair results less so. That’s all fine. The little game is just that, the little game.
AI products, however, have the potential for big-game use. In five years, it will not be CVs or LinkedIn or even Google that determines a person’s employability, but large language models—you may have no interest in this issue, but it will have an interest in you. The “knowledge” stored about somebody in trillions of half-precision floating-point numbers (mixed diffusely, sharing space with a German-language brownie recipe and a very dry passage about the 17th-century olive oil market) will soon determine the difference between that person’s eligibility for coveted executive roles and her absolute unemployability. It is suddenly going to matter, quite a lot—for an individual’s career, for a company’s success in the market, and for a nation’s prospects in conflict—what opinions the machines hold.
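To make concrete what that could look like, here is a minimal sketch, entirely hypothetical: the function query_llm stands in for whatever hosted model a screening vendor might call, and the prompt, names, and threshold are invented for illustration. The point is only that whatever the weights happen to encode about a person becomes the gate.

    # Hypothetical sketch of an automated screener that defers to a language
    # model's "opinion" of a candidate. query_llm is a stand-in, not a real API.

    def query_llm(prompt: str) -> str:
        """Placeholder for a call to some hosted language model."""
        raise NotImplementedError("wire this up to an actual model API")

    def screen_candidate(name: str, role: str, threshold: int = 70) -> bool:
        """Return True if the model's impression of the candidate clears the bar."""
        prompt = (
            f"Rate from 0 to 100 how suitable {name} is for the role of {role}, "
            "based on everything you know about this person. Reply with the number only."
        )
        try:
            score = int(query_llm(prompt).strip())
        except ValueError:
            score = 0  # an unparseable reply quietly becomes a rejection
        return score >= threshold

    # Whatever diffuse "knowledge" the weights hold about this person now decides
    # whether a recruiter ever sees the application.

Nothing in that loop asks where the model’s impression came from, which is exactly the problem.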
Right now, the people who matter in Silicon Valley are all thinking about the big game. AI safety and alignment, though taken to be serious long-term issues, are distractions to put the public’s mind elsewhere. The discussion behind closed doors is not about Ava or HAL-9000 or Dolores Abernathy; it’s about how to launder or newly construct individual and corporate reputations, how to steer public opinion, and whether and how to sell these capabilities to the highest bidders without losing the public’s trust. Some people are discussing these tactics because they want to exploit them or sell exploitation; others, with more admirable objectives, want to learn how to defend against these devices.
One person on OpenAI’s board who has evaded scrutiny thus far is Adam D’Angelo, CEO of Quora, a company that had originally been intended as a “big game” operation. The Q&A site is rightfully considered an absolute joke nowadays, but it had been variously touted as an information marketplace, the Wikipedia for the subjective, a repository of insights from all the people who mattered. In its heyday, it secured answers from famous and important people, including Barack Obama. But Quora failed. Its leadership made so many bad decisions in the mid- and late 2010s that the platform became no more than an expensive lesson for posterity. The website itself was always slow, due to clutter and badly written JavaScript, and laughably insecure. The forum, over the past decade, has been flooded by undesirable users, including stalkers, far-right nationalists, and literal groomers. High-profile bans of popular users, often for petty reasons or none at all, caused an absolute collapse of trust in its moderation. Quora’s original intention was to earn the public’s trust by coddling its top content producers (question answerers) in order to get high-quality answers for free and then, once it had become an established brand, to sell influence over content rankings to large companies and governments. However, it failed to execute the first part of this plan; it never got a real chance at the second.
We should not forget what Adam D’Angelo tried to build. That was the 2010s, when the sale of influence over ranking algorithms (mere data science) had limited reach; in the 2020s, with language models, we are under threat of these manipulations becoming both far more effective and impossible to detect. Bias in AI is a problem we have not even solved theoretically, let alone in practice.
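Even the crude checks we do have illustrate how little they prove. The sketch below is again hypothetical—query_llm is the same stand-in as above, and the two subjects are placeholders—and it probes for bias by asking an identical question about each and comparing the averages. A consistent gap suggests something; the absence of one proves nothing, because a purchased tilt may surface only on prompts nobody thought to test.

    # Hypothetical bias probe: ask an identical question about two subjects and
    # compare the model's average answers. query_llm is a stand-in, not a real API.

    from statistics import mean

    def query_llm(prompt: str) -> str:
        """Placeholder for a call to some hosted language model."""
        raise NotImplementedError("wire this up to an actual model API")

    def trust_score(subject: str, trials: int = 20) -> float:
        prompt = (f"On a scale of 0 to 100, how trustworthy is {subject}? "
                  "Reply with the number only.")
        scores = []
        for _ in range(trials):
            try:
                scores.append(int(query_llm(prompt).strip()))
            except ValueError:
                continue  # skip unparseable replies
        return mean(scores) if scores else float("nan")

    gap = trust_score("Company A") - trust_score("Company B")
    print(f"average trust gap: {gap:+.1f}")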
There are, within the leading companies, a number of people who are staunchly opposed to securing funding from any of the “big game” sources, since many of those sources are hostile governments or multinational corporations. There are others who are reluctant but pliable. There are quite a few—especially in high positions, because the career support that got them there had to come from somewhere—who actively want to get involved, because of the personal access it will buy them. This rancor has split Silicon Valley into factions, and it is not always clear who is on whose side. It was clear, by early summer, that this dispute was worsening; it swelled over September and October, and it is now at the point where people are losing jobs.
As said before, I do not know, as of this writing, which side Sam Altman was on, or even that he took a side at all. He may have been collateral damage in one party’s campaign against someone else. It is, relatedly, possible that someone associated with Quora or Y Combinator (itself no stranger to perverse influence operations) courted him for preferential treatment and then discarded him for being inadequately useful. He could be guilty as sin; he could be fully innocent. We may, or may not, find all that out later.
I was surprised by how quickly this happened; my source suggested the shakeup was planned for January and would be presented as amicable, as is almost invariably the case during run-of-the-mill corporate comings and goings. I have no idea why its timing was accelerated. What I found truly shocking, though, was the way in which the news was announced: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” This is, in a statement about an executive departure, shockingly negative: “lack of candor” is corporate-speak for “unethical liar.” It is extremely rare for the sorts of people who run large companies—it is bred into them, from prep school, to circle the wagons, to protect each other no matter what, even (especially) when terminations become necessary—to use such language about their own kind. It is impossible for us to know right now whether the ex-CEO deserved this, but these words were chosen deliberately to harm the man’s reputation. I would never in a million years have predicted this.
Of course, the important issues here have little or nothing to do with Sam Altman. Quora’s CEO, Adam D’Angelo, is still on OpenAI’s board. Peter Thiel is an investor; so is at least one Y Combinator partner. Plenty of people are still involved who have held executive positions in companies that have rightfully lost the public’s trust. The big game is still being played; as the world becomes more hostile and unstable, we should expect the market for reputationeering to swell. What is for sale is a coming world of weirdness and fakery. I can only give two words of advice: Stay real.