As mentioned in previous posts, I’ve been building hardware for a while and have always struggled to make it autonomous, whether because of expensive sensors or just the hassle of setting up ROS2. So I’m building a solution that uses nothing but a camera to do what hasn’t been practical before for a hobbyist on a tight budget. With just a Raspberry Pi, a camera, and a call to my cloud API, today I:
- Integrated the SLAM we built on DAY 6 into the main application
- Tested again with some zero-shot navigation
- Improved SLAM with longer persistence for past voxels

Just imagine giving your shitty robot long-horizon navigation with nothing but an API call. Releasing the repo and API soon.
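For a rough idea of what "navigation via an API call" could look like from the Raspberry Pi side, here's a minimal sketch. The endpoint URL, payload fields, and response shape are all my assumptions for illustration; the real API hasn't been released yet.

```python
# Hypothetical sketch of a camera-to-cloud navigation call.
# API_URL, the JSON fields, and the response shape are assumptions,
# not the actual (unreleased) API.
import base64
import json

API_URL = "https://example.com/v1/navigate"  # placeholder endpoint

def build_request(frame_jpeg: bytes, goal: str) -> str:
    """Package one camera frame plus a natural-language goal as JSON."""
    return json.dumps({
        "image_b64": base64.b64encode(frame_jpeg).decode("ascii"),
        "goal": goal,
    })

def parse_response(body: str) -> tuple:
    """Extract (linear, angular) velocity from an assumed response shape."""
    data = json.loads(body)
    return data["linear"], data["angular"]

# Offline demo with a stand-in frame and a canned response:
payload = build_request(b"\xff\xd8fake-jpeg-bytes\xff\xd9", "go to the kitchen")
linear, angular = parse_response('{"linear": 0.2, "angular": -0.1}')
print(linear, angular)
```

In a real loop you'd grab a frame (e.g. with `picamera2`), POST the payload to the endpoint, and feed the returned velocities to your motor driver.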
Originally posted by u/L42ARO on r/ArtificialInteligence
