Plus, this is straight from one of their researchers: “Glasswing is possibly the most consequential event in the AI industry I’ve seen up close since joining Anthropic almost 3 years ago. It feels like we’re at a turning point in history.” Turning point? Like the singularity and ASI are near? So advances in AI pose a serious risk to software, and they build a more heavily gatekept model to look for vulnerabilities, even ones that are 20+ years old? Plus it scores 93.9% on SWE-bench. Running critical infrastructure against new frontier models before they’re released is a great idea, and probably the smartest decision they’ve made to date.
Originally posted by u/ocean_protocol on r/ArtificialInteligence
