Today we’re giving an AI agent control over a Raspberry Pi
- Wrote some basic motion control functionality on the Pi
- Connected the Pi to our cloud server to stream camera footage
- Tested our VLM + depth model pipeline with real-world footage
- Did some prompt engineering
- Tuned the inference frequency to avoid frames captured mid-motion

Still a long way to go, and a lot of different models, pipelines, and approaches to try, but we’ll get there.
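The last step above — gating inference so the pipeline never sees a frame captured mid-motion — can be sketched as a simple timing check. This is a minimal illustration, not the author's actual code; the settle time, inference period, and the `should_capture` helper are all assumptions for the example:

```python
import time

# Hypothetical tuning parameters (assumed values for illustration)
SETTLE_TIME = 0.3   # seconds the robot must be still before a capture
INFER_PERIOD = 2.0  # minimum seconds between inference calls

def should_capture(now, last_motion_end, last_inference):
    """Capture a frame only if the robot has been stationary long
    enough for motion blur to be gone, and enough time has passed
    since the previous inference call to respect the rate limit."""
    settled = (now - last_motion_end) >= SETTLE_TIME
    due = (now - last_inference) >= INFER_PERIOD
    return settled and due

# Example: robot stopped 1 s ago, last inference 5 s ago -> capture
print(should_capture(10.0, 9.0, 5.0))
```

In a real control loop, `last_motion_end` would be updated whenever a motor command finishes, so the camera thread naturally skips frames taken while the platform is still settling.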
Originally posted by u/L42ARO on r/ArtificialInteligence
