Just an interesting character? Or is there something more going on here?

The New Yorker has published a major investigation into Sam Altman. It draws on more than 100 interviews, secret memos written by Ilya Sutskever, and more than 200 pages of private notes kept by Dario Amodei. The story offers the clearest look yet at the repeated actions that led to Sam’s removal as chief executive and his quick return to lead OpenAI.

Here are the main points:

- Ilya collected about 70 pages of Slack conversations, human resources files, and photos taken on personal phones, deliberately keeping everything off company systems. He sent the files to board members in self-deleting messages. His first memo opens with a list titled “Sam shows a regular pattern of…” The top item is “Lying.”
- Dario kept detailed private notes for years, titled “My Experience with OpenAI” and marked “Private: Do Not Share.” They run to more than 200 pages. His final conclusion was simple: the biggest problem at OpenAI was Sam himself.
- After the board fired Sam, he told Mira Murati that his supporters were working hard to dig up damaging information about her and hurt her reputation.
- The investment firm Thrive paused its planned 86 billion dollar deal and signaled that the money would only come through if Sam returned, giving employees a clear financial incentive to back him.
- Sam texted Microsoft chief Satya Nadella directly, proposing a new board of Bret Taylor, Larry Summers, and Adam D’Angelo, with Sam as chief executive and Bret leading the review of what had happened. The two new board members chosen to run the outside investigation were selected after private talks with Sam.
- Before OpenAI, senior staff at Loopt twice asked the board to remove Sam as chief executive, citing problems with his leadership and honesty. At Y Combinator, partners raised similar concerns with Paul Graham, who later told colleagues in private that Sam had been lying to them the whole time.
- OpenAI had promised its superalignment team 20 percent of the company’s computing power. Four people who worked on or with the team said they actually received only 1 to 2 percent, most of it from the oldest equipment with the weakest chips. The team was shut down before it could finish its work.
- Sam told the board that a safety group had approved every safety feature in GPT-4. When Helen Toner asked for the records, she learned that the most disputed features had never been approved. Sam also never told the board that Microsoft had released an early version of ChatGPT in India before completing the required safety check.
- Sam made a private agreement with Greg Brockman and Ilya Sutskever, promising to step down if both of them decided it was necessary. This gave him his own unofficial oversight group, and the real board was shocked when it learned about the deal.
- Sam reached an agreement with Greg to become chief executive while telling the research team that Greg’s power would be cut. He told Greg a different story.
- One board member described Sam as combining two unusual traits: a strong wish to be liked in every conversation and an almost complete lack of concern for the harm caused by misleading people. Several people who spoke to the reporters independently used the word “sociopathic” to describe him.

OpenAI is now preparing for a public stock offering that could value the company at up to 1 trillion dollars. At the same time, it is winning government contracts that cover immigration enforcement, domestic monitoring, and autonomous weapons used in conflict zones.
Originally posted by u/jason_digital on r/ArtificialInteligence
