Mitigating future risks: Why your organization should have an AI policy 

Artificial intelligence (AI) is radically changing the way organizations work. From programs that make complex data-based decisions to applications that transcribe spoken words into text, AI offers new opportunities to do things more creatively and efficiently. 

Within the social sector, AI is a powerful tool for fundraising, researching, grant writing, and maximizing organizational impact. Recognizing both the positive and negative potential of this now ubiquitous technology, organizations need to embrace AI responsibly.

Setting a framework for AI use 

At Candid, we acknowledge AI’s value in helping us improve the ways in which we support the social sector through information. But as with any tool, we need to agree as an organization on how we use it. So in 2023, Candid’s executives, human resources team, compliance manager, and legal counsel collaboratively developed guidelines for AI use. We then incorporated these guidelines into our employee policies. Today, our AI policy—which focuses on best practices, not specific tools—is integral to how we work and helps us use AI in a way that amplifies our mission.  

For any organization seeking to make a social impact, having an AI policy can help maximize the benefits of using these tools while minimizing the risks. Here are three reasons why your organization should consider adopting an AI policy. 

1. Mitigate risks in AI 

If not managed carefully, the use of AI technologies can pose risks such as potential copyright infringement, breach of confidentiality, inaccurate information, and unintentional bias. These risks can lead to lawsuits, misinformed decisions, or unintended discrimination.  

Some risks are rooted in the machine-learning data that AI tools are trained on. But risks can also arise when we enter information into the tools (the input), when we receive AI-generated information (the output), and when we use those outputs. An AI policy is instrumental in mitigating these inherent risks associated with AI tools—to protect not only the organization but also its clients.  

Candid uses AI to assist with tasks such as ensuring data accuracy, tagging data, creating content, and streamlining communications. A clear AI policy not only highlights potential risks to staff, but also defines steps to address any challenges that may arise. For example, to protect confidentiality and prevent misinformation, our policy prohibits the use of sensitive information for inputs and requires reviews of outputs for accuracy prior to use. We believe this proactive approach is essential for any organization aiming to use AI tools safely.

2. Establish clear operational standards and procedures

In addition to identifying and preempting risks, an AI policy sets clear standards for the use of these tools. It provides a structure that specifies what tools can be used, who makes those decisions, what acceptable inputs are, and how staff will interpret and apply the AI-generated outputs. This standardization is crucial for consistent AI use across the organization. 

Given Candid’s varied technology needs, we use a variety of AI tools across different departments. This diversity of use makes it especially important to implement an organization-wide policy. A unified strategy helps us avoid unstructured adoption of AI tools—that is, each department adopting different tools on an ad hoc basis and using them in disparate ways. It also enables us to establish a clearly defined decision-making process around these tools, so staff know whom to ask when questions arise.

3. Prepare for the future 

An AI policy also equips your organization to adapt to the rapidly evolving landscape of AI technology. Candid’s policy avoids focusing on specific tools and instead highlights the practices that should be followed when using AI. This approach ensures that our organization is adaptable and ready for any new AI developments.  

The future of AI is uncertain. It’s impossible to predict what new tools will be developed or what new uses will emerge for existing tools. Drafting an AI policy with an eye toward the future helps your organization remain flexible and forward-thinking in its AI endeavors.  

Helping you deliver on your mission

In just a short period of time, the use of AI tools has become a big part of how organizations operate. The more integral AI becomes to an organization’s work, the more imperative it is to implement an AI policy to help mitigate risks, establish operational standards, and prepare for future advancements in AI technology.  

To start developing an AI policy for your organization, refer to the responsible AI framework from Project Evident and Technology Association of Grantmakers (TAG). This framework offers organizational, ethical, and technical considerations for using AI and provides best practices for policy development.  

For social sector organizations aiming to leverage AI to do good, an AI policy to ensure effective and responsible use isn’t just beneficial; it’s essential.