How to write an AI policy for your charity
It's best if you don't hand-write your AI policy

Have you started using AI to support your charity’s mission? Congrats, you’re ahead of the curve: only 27 out of 100 charities have got there so far, according to The Charities Digital Skills Report. But as is common for early adopters, you might find yourself in uncharted waters, and as a local government nerd I genuinely believe a policy adds clarity. So I’ve put together a quick outline of the key elements of a first AI policy.

There are eight key points I recommend including:

1. Purpose and Scope: How are you using AI? Is it to write grant applications? Decide on community outreach locations? Draft your letter from the CEO? Anywhere you use AI should be covered by your policy, along with why you’re using it in that context, to give people clarity.

2. Data Governance: Outline where your data will be held and your procedures for data collection, storage and usage. Explain how data will be anonymised, protected from unauthorised access (if you’re using plinth, it has restricted user access built in) and securely stored; there’s a rough redaction sketch after this list. If you’re using plinth, you can use our DPA to explain where your data is stored.

3. Model Training: Are you happy for your information to be used to train AI models to help them improve? You might be feeling altruistic and see no problem with feeding the machines, but you should be aware of the risks. If you allow your data to be used for training, your information may show up in output for other users (even those outside your organisation). If you’re not careful, this could include commercially sensitive information, or even sensitive personal details. In particular, if you’re using the free version of ChatGPT, your information will by default be used to train future versions of the AI. If you’re using plinth’s AI, this will never happen.

4. Transparency and Accountability: Commit to transparency in how you will use AI (see purpose and scope above). Ensure that decision-making processes are explainable and accountable, with mechanisms in place to address biases, errors and questions. If you don’t want to fall foul of GDPR’s right not to be subject to solely automated decisions, ensure that AI only ever provides an input to the decisions you make, and that there is always a “human in the loop” checking output before any substantive decision is made (see the sketch after this list).

5. Informed Consent and Privacy: Make it clear to your service users, volunteers and staff how you’ll use their data. Define guidelines for obtaining informed consent from individuals whose data is used for AI purposes. Address concerns around privacy, data security and the responsible handling of sensitive information, and decide whether you’re going to send sensitive information to AI models at all.

6. Bias and Fairness: Create processes to monitor AI systems, particularly those that may disproportionately impact vulnerable or marginalised communities. Regularly assess AI systems for fairness and equity; this can be as simple as adding a sense check, scheduling a quarterly review or making it a standing agenda item (the logging sketch after this list shows one way to collect evidence for that review).

7. Training and Capacity Building: Outline which staff members will use AI and what their typical level of access will be. If several of your staff will be using AI, explain how you’ll provide training and resources to build awareness and understanding of AI technologies, their potential benefits and their associated risks.

8. Monitoring and Evaluation: Set a clear schedule for reviewing your AI systems and the decisions they feed into, so you can check compliance with your AI policy, identify areas for improvement and address emerging ethical and regulatory concerns. Organisations like plinth, NPC and Charity Digital are a good way to stay on top of AI updates.
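To make point 2 (and point 5) concrete, here’s a minimal sketch of what “anonymise before sending” could look like in practice. It assumes you’re passing free text to an external AI tool; the `redact` helper and its patterns are illustrative only, not part of any plinth product, and they won’t catch every identifier, so treat this as a starting point rather than a guarantee.

```python
import re

# Illustrative patterns only: real anonymisation needs a fuller approach,
# but even a simple pass like this catches the most obvious identifiers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE = re.compile(r"(?:\+44\s?\d{4}|\b0\d{4})\s?\d{3}\s?\d{3}\b")

def redact(text: str) -> str:
    """Replace emails and UK phone numbers with placeholders
    before the text goes anywhere near an external AI service."""
    text = EMAIL.sub("[EMAIL]", text)
    text = UK_PHONE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    note = "Contact Priya on 01632 960 983 or priya@example.org about the food bank."
    print(redact(note))
    # -> Contact Priya on [PHONE] or [EMAIL] about the food bank.
```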
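For point 4, here’s a rough sketch of a “human in the loop” gate. The `generate_draft` function is a hypothetical stand-in for whatever AI tool you actually use, not a real API: the point is simply that the AI only ever produces a draft, and nothing substantive happens until a named person approves it.

```python
def generate_draft(prompt: str) -> str:
    """Placeholder for a call to your AI tool of choice."""
    return f"[AI draft based on: {prompt}]"

def human_approves(draft: str) -> bool:
    """Show the draft to a person and require an explicit yes."""
    print("--- AI draft for review ---")
    print(draft)
    answer = input("Approve this draft? (yes/no): ")
    return answer.strip().lower() == "yes"

def send_grant_application(text: str) -> None:
    """Placeholder for the substantive action the AI must never take alone."""
    print("Submitting:", text)

draft = generate_draft("Summarise our youth outreach outcomes for the grant form.")
if human_approves(draft):
    send_grant_application(draft)
else:
    print("Draft rejected; a person will write this section instead.")
```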
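And for points 6 and 8, a simple usage log gives your quarterly review something concrete to work from. This sketch appends one row per AI interaction to a CSV file; the file name, columns and `log_ai_use` helper are just an example to adapt to whatever your review process needs.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_usage_log.csv")  # hypothetical location; pick your own

def log_ai_use(tool: str, purpose: str, reviewer: str, notes: str = "") -> None:
    """Append one row per AI interaction so a quarterly review has data to work with."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "tool", "purpose", "reviewer", "notes"])
        writer.writerow(
            [datetime.now(timezone.utc).isoformat(), tool, purpose, reviewer, notes]
        )

log_ai_use(
    tool="ChatGPT",
    purpose="First draft of community outreach letter",
    reviewer="Jess",
    notes="Checked for tone and for any personal data before use.",
)
```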

Consider whether you need to centralise AI usage across your team by subscribing to a paid version of ChatGPT, Gemini or plinth, so you can monitor how well your policy is being followed.

All of this can sound like a lot, but it’s still worth exploring how AI can benefit your organisation, especially as it becomes increasingly built into other tools you may already be using. By including these elements in your first AI policy, you can begin using AI safely and effectively and get back to helping the people who need you! And if you want a head start, try our AI grant writer, which you can check out here.

Still stuck? Pop an email to jess@plinth.org.uk and I’ll get back to you!