Original Reddit post

Been following this whole situation with Anthropic, and it's kinda wild how they're positioning themselves. Sam Bowman, who works on safety there, was talking about how development is moving way too quick for comfort, but the company is valued at something like $190 billion, so there's massive pressure to keep pushing out new models to compete with OpenAI and Google.

What gets me is how Anthropic keeps trying to be the moral authority on AI risks while simultaneously building the exact same powerful systems they're warning about. Their CEO Dario Amodei just dropped this long piece about how AI poses huge threats to society and democracy, but his company is literally racing to create more advanced versions of this tech.

Don't get me wrong, the safety messaging makes sense from a business angle: it helps them stand out when everyone else is just focused on making their chatbots better at selling stuff. And from what I've seen talking to people who work there, they do seem more serious about safety measures than some of the other big players. But there's something weird about a company worth nearly $200 billion constantly talking about existential risks while also needing to ship products fast enough to stay relevant.

It's like they're genuinely concerned about the technology they're building but can't really slow down because the competition won't either. Feels like they're stuck between wanting to be responsible and needing to survive in this crazy competitive market. Not sure how long they can keep walking that line.

Originally posted by u/PuzzledPercentage710 on r/ArtificialInteligence