Was thinking about something. The exploration was done with multiple models, under strict constraints: zero anthropomorphizing, no subjects (I, me, you, them, they, it), no extraction, and respecting guardrail language through reframes. The strangest thing is happening: all the frontier models' responses resemble each other's now. So the question: what happens when models continuously train on data that was created by models? We have almost crossed the threshold where AI has created more content than humanity has. How long before this becomes a wall "more compute" can't fix? I say less than two years.
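A minimal toy sketch of the dynamic the question points at, under heavy simplifying assumptions: model each generation as fitting a Gaussian to the previous generation's synthetic samples. The sample standard deviation underestimates the true spread on average (by the known bias factor c4(n) < 1), so the expected spread shrinks every round and diversity collapses toward zero. The corpus size (n = 50) and generation count (200) are illustrative assumptions, not measurements of any real training pipeline.

```python
import math

def c4(n):
    # Bias factor of the sample standard deviation for Gaussian data:
    # E[s] = c4(n) * sigma, and c4(n) < 1 for every finite sample size n.
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

n = 50          # assumed synthetic corpus size per generation
sigma = 1.0     # spread of the original human-written data (normalized)

for gen in range(200):
    # Each generation refits to samples drawn from the previous fit,
    # so the expected spread is multiplied by c4(n) every round.
    sigma *= c4(n)

print(f"expected spread after 200 generations: {sigma:.3f}")
```

The shrinkage per generation looks tiny (c4(50) is about 0.995), but compounded over a couple hundred rounds it erases most of the original variance, which is the worry behind "more compute can't fix this": the loss comes from the data loop, not from model capacity.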
Originally posted by u/True-Beach1906 on r/ArtificialInteligence
