Hi all, I’m doing research on how agentic AI changes requirements engineering: agents can now read a spec and generate working code, so any ethical considerations missing from the requirements flow straight into production. I’m testing a lightweight “Ethics Filter Framework” based on Value‑Based Engineering (IEEE P7000) that adds explicit, testable harm constraints (privacy, fairness, explainability, safety) to key requirements. I’m looking for feedback from devs, ML engineers, and product people. The survey is anonymous, takes ~10 minutes, and I’ll share a short results summary with participants. Survey: https://forms.gle/uhDSgrd1DU3rNGWo9
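
If it helps to picture what I mean by a “testable harm constraint”, here’s a rough sketch of one way a fairness constraint could be attached to a requirement and checked automatically. This is illustrative only, not the framework itself; the requirement ID, the 5-percentage-point threshold, and the `demographic_parity_gap` helper are all made up for the example.

```python
# Illustrative sketch: a harm constraint expressed as an automated check
# against evaluation metrics. Names and thresholds are placeholders.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class HarmConstraint:
    requirement_id: str            # the functional requirement this constraint guards
    category: str                  # "privacy", "fairness", "explainability", or "safety"
    description: str
    check: Callable[[Dict], bool]  # returns True if the constraint holds


def demographic_parity_gap(metrics: Dict) -> float:
    """Absolute gap in positive-outcome rates between two groups (illustrative metric)."""
    return abs(metrics["positive_rate_group_a"] - metrics["positive_rate_group_b"])


fairness_constraint = HarmConstraint(
    requirement_id="REQ-042",
    category="fairness",
    description="Approval recommendations must not differ across protected groups "
                "by more than 5 percentage points.",
    check=lambda m: demographic_parity_gap(m) <= 0.05,
)

# Example evaluation run; in practice the metrics would come from the model's test harness.
metrics = {"positive_rate_group_a": 0.41, "positive_rate_group_b": 0.38}
print(fairness_constraint.requirement_id, "passes:", fairness_constraint.check(metrics))
```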
Originally posted by u/anttiOne on r/ArtificialInteligence
