THIS was the year when two of Silicon Valley’s biggest hype blimps – cryptocurrency and artificial intelligence – were deflated by drama. First came the downfall of Sam Bankman-Fried, whose shady cryptocurrency empire landed him in court, where he was convicted of fraud and conspiracy. During his trial, witnesses and evidence revealed that Bankman-Fried’s cryptocurrency exchange FTX was siphoning billions of dollars from unwitting customers into one of his other assets, a cryptocurrency trading firm called Alameda Research.
A few weeks later, the other Silicon Valley Sam – Sam Altman – went through a corporate melodrama. Altman is CEO of OpenAI, maker of ChatGPT and one of the world’s most successful AI start-ups. In late November, the board of OpenAI claimed, rather mysteriously, that Altman wasn’t “consistently candid” with it and abruptly fired him. Stung, Altman hastily arranged a deal to set up his own research division at Microsoft.
When most of OpenAI’s 700+ employees threatened to defect with Altman to Microsoft, he was reinstated at OpenAI and the board was overhauled. There is still no official story about why it all happened, but let’s just say Altman had a really bad week in which he almost lost his billion-dollar baby.
Aside from their first names and billionaire drama, the two Sams have little in common – other than their association with a fashionable form of philanthropy known as effective altruism (EA).
Popularised by the philosopher William MacAskill, EA has many adherents in Silicon Valley. They love its directive of “earn to give”, which suggests that people should rake in as much money as possible in order to donate a portion of it to “optimal” causes. Most of those causes are related to AI and high-tech doomsday prepping, and are intended to benefit humanity in the extremely long term, centuries from now – a philanthropic stance known as longtermism.
Indeed, many EA adherents believe the most effective form of philanthropy shouldn’t focus on today’s victims of poverty, homelessness and war, but on entrepreneurs who promise to make AI friendly towards humans.
Bankman-Fried served on the board of MacAskill’s organisation, the Centre for Effective Altruism, and gave millions of dollars to EA causes. Altman appointed several EA sympathisers to his board, including computer scientist Ilya Sutskever, who has repeatedly said he believes OpenAI is on the cusp of developing artificial general intelligence – a human-equivalent mind so powerful that it might constitute an existential risk to humanity (see “The future of AI: The 5 possible scenarios, from utopia to extinction”).
Being affiliated with EA gave the impression that the work the Sams were doing had a higher purpose. They were building a better future, where work and money would be utterly transformed by technology. Plus, they were saving humanity!
But when push came to shove, it appears some of these ideals took a bit of a back seat. Speaking to a journalist during his trial, Bankman-Fried said that his investment in EA was partly “dumb shit” he said to make himself seem ethical. For his part, Altman claimed to care about grave existential risks caused by OpenAI projects. But at the same time, he was deploying and selling an untested technology he himself had called potentially dangerous – a move many effective altruists view as irresponsible. Perhaps the commitment of both men to EA was more about words than deeds.
Silicon Valley truly is a capitalist Thunderdome, where two Sams enter and one Sam leaves. Sadly, the losers are all of us in the crowd, cheering them on.