Original Reddit post

I’m curious about how to help non-techy people make more ethical AI decisions.

Mostly I observe 3 reactions:

1. AI is horrible and unethical, I'm not touching it
2. AI is exciting and I don't want to think too much about ethical questions
3. AI ethics are important, but they're not something I can choose (like alignment)

The main initial audience is group 2: making it easy and attractive to choose more ethical AI, then convincing group 3 people that AI ethics can be applied in their everyday lives, with the long-term aim of convincing group 1 people that AI can be ethical, useful and non-threatening. For the group 1 people, I feel like quite a lot of their objections can already be problem-solved. I'm a teacher, not a developer. Which objections do you hear, and which do you think can be mostly solved (probably with the caveat of perfect being the enemy of the good)?

——

These are some ideas and questions I have, although I'm looking for more ideas on how to make this accessible to the type of person who has only used ChatGPT, so ideally nothing more techy than installing Ollama:

1) Training:

a) Can we avoid the original sin of non-consensual training data? The base model Comma has been trained on the Common Pile (public domain, Creative Commons and open source data). It doesn't seem to have a beginner-friendly fine-tune yet, though. What is the next best alternative to this?

b) **Open source models** offer more transparency and are generally more democratic than closed models.

c) **Training is energy intensive.** Are any models open about how they're trying to reduce this? If training energy is divided retrospectively by how many times the model is used, is it better to use popular models from providers who don't upgrade models all the time? And since the model exists anyway, should training even be factored into eco calculations?

2) Ecological damage

a) Setting aside training questions, **local LLMs use the energy of your own computer**; they don't involve a distant data centre with its disturbing impact on water and fossil fuels. If your home energy is green, then your LLM use is too.

b) Models can vary quite a bit, and providers are usually trying to reduce impact, e.g. Google reports a 33× reduction in energy and a 44× reduction in carbon for a median prompt compared with 2024 (Elsworth et al., 2025). A Gemini prompt at 0.24 Wh equals 0.3–0.8% of one hour of laptop use. **Is Google Gemini the lowest eco-impact of the mainstream closed, cloud models? Are any open source models better even when not run locally?**

c) Water use and pollution can be drastically reduced by closed-loop liquid cooling, so that the water recirculates. Which companies use this?
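The laptop comparison above is easy to sanity-check with quick arithmetic. This sketch assumes a laptop draws roughly 30 to 80 watts under normal use (an assumption, not a figure from the cited report):

```python
# Back-of-envelope check of the "0.3-0.8% of one laptop-hour" claim.
PROMPT_WH = 0.24                       # reported median energy per Gemini prompt (Wh)
LAPTOP_W_LOW, LAPTOP_W_HIGH = 30, 80   # assumed laptop power draw range (watts)

# One hour of laptop use at P watts consumes P watt-hours,
# so one prompt's share of that hour is PROMPT_WH / P.
share_high = PROMPT_WH / LAPTOP_W_LOW * 100    # percent, for a low-power laptop
share_low = PROMPT_WH / LAPTOP_W_HIGH * 100    # percent, for a high-power laptop

print(f"One prompt = {share_low:.1f}%-{share_high:.1f}% of one laptop-hour")
# → One prompt = 0.3%-0.8% of one laptop-hour
```

So the cited range holds under those power-draw assumptions.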

3) Jobs

a) You can choose to use **automation so you spend less time working**; it doesn't have to increase productivity (with awareness of the Jevons Paradox).

b) You can **choose not to reduce staff** or outsourcing to humans, and still use AI.

c) You can decide that **AI is for drudgery** tasks, so humans have more time for what we enjoy doing.

4) Privacy, security and independence

a) **Local, open source models solve many problems around data protection**, GDPR etc., with no external companies seeing your data.

b) **Independence from Big Tech:** you don't need to have read Yanis Varoufakis's Techno-Feudalism to feel that gaining some independence from companies like OpenAI and cloud subscriptions is important.

c) **Cost:** for most people it would be lower, or free, if they moved away from these subscriptions.

d) **Freedom to change models** tends to be easier with managers like Ollama.
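To make the "no external companies see your data" point concrete: a locally installed Ollama exposes a small HTTP API on your own machine, so a prompt never leaves your computer. A minimal sketch, assuming Ollama is running on its default port 11434 and a model called `llama3` has been pulled:

```python
import json
import urllib.request

# Ollama's local REST endpoint; everything stays on localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generation request for Ollama's REST API."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local_model(model: str, prompt: str) -> str:
    """Send the prompt to the local server and return the model's reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with llama3 pulled):
# print(ask_local_model("llama3", "Summarise GDPR in one sentence."))
```

Nothing here talks to a cloud service, which is the whole privacy argument in one picture.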

5) Alignment, hallucinations and psychosis

a) Your own personalised instructions, using something like n8n, can mean you align the model to your values and give more specific instructions for referencing.

b) Creating agents or instructions yourself helps you to understand that this is not a creature, it is technology.

What have I missed?
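The "personalised instructions" idea in (a) can be as simple as keeping your own list of rules and turning it into a system prompt that every workflow reuses. A sketch (the rules below are invented examples, and how you feed the prompt in depends on your tool, e.g. a system-message field in n8n or Ollama):

```python
# Your personal values, written in plain language, kept under your control.
MY_VALUES = [
    "Cite a source for every factual claim, or say you are unsure.",
    "Flag any answer that could affect someone's privacy.",
    "Prefer concise, plain-language explanations.",
]

def build_system_prompt(values: list[str]) -> str:
    """Turn a personal list of rules into a single reusable system prompt."""
    rules = "\n".join(f"- {rule}" for rule in values)
    return f"Follow these rules in every answer:\n{rules}"

print(build_system_prompt(MY_VALUES))
```

Writing and editing that list yourself is also the point of (b): the "alignment" is just text you wrote, not a mind you negotiated with.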

Ethical stack?

How would you improve on the ethics/performance/ease of use of this stack?

Model: fine-tuned Comma (trained on the Common Pile), or is something as good available now?

Manager: locally installed Ollama

Workflow: locally installed n8n; use a multi-agent template to get started

Memory: what's the most ethical option for having some sort of local RAG/vectorising system?

Trigger: what's the most ethical option among things like Slack/Telegram/Gmail?

Instructions: n8n instructions carefully aligned to your ethics, written by you

Output: local files?

I wonder if it's possible to turn this type of combination into a wrapper-style app for desktop. I think Ollama alone is probably too simple if people are used to ChatGPT features, but the n8n aspect will lose many people.
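For the "Memory" question above, local RAG at its core is just: store your notes as text, turn each into a vector, and retrieve the closest note for a question. This toy sketch uses a trivial bag-of-words vector so it runs with no dependencies; a real local setup would swap in embeddings from, e.g., a local embedding model served by Ollama:

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    """Toy stand-in for an embedding model: word counts as a sparse vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Your local "memory": plain text files on your own disk would work the same way.
notes = ["ollama runs models locally", "n8n automates workflows"]

def retrieve(question: str) -> str:
    """Return the stored note most similar to the question."""
    q = vectorise(question)
    return max(notes, key=lambda n: cosine(q, vectorise(n)))

print(retrieve("which tool runs models locally"))  # picks the Ollama note
```

Since every step here is a local file and a local computation, the ethical questions reduce to which embedding model you pick, which is the same training-data question as in section 1.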

Originally posted by u/Jlyplaylists on r/ArtificialInteligence