
Responsible AI Policy

New general purpose technologies—from steam engines to electricity—have always changed the way we work. The applications made possible by their emergence are difficult to predict, let alone control.

The role we want these new technologies to play in our lives, however, and how we choose to interact with one another, is within our power.

AI may be the most influential general purpose technology yet, so we've got a tremendous opportunity to shape the emerging patterns and cultures that will help us orient ourselves and our communities for years to come.

Taylor Downs

Founder, OpenFn

Trusted by leading impact organizations worldwide

UNICEF, Wildlife Conservation Society, Mercy Corps, Swiss TPH

A hundred years of science fiction literature have promised, warned, prophesied and exalted the coming of true Artificial Intelligence.

And then suddenly it's here! Sort of.

It's in algorithms which make decisions on our behalf every day. It's built into - and even replacing - the search engines we use to find information. It's giving us simple answers to very complex questions. It's writing our emails and controlling our calendars. It's monitoring our heart rates and financial transactions.

Large Language Models (LLMs) like ChatGPT have rapidly become standard practice for workers around the world, often under the radar and with insufficient diligence. Many organizations - like us - are trying to catch up by working out the best practice for AI use.

Here's our take: AI is already a vital tool in data integration and workflow automation. But it is us, humans, who are ultimately responsible for the output of the AI we use. We have to take that responsibility seriously.

That thought drives our Responsible AI policy: a set of guidelines and principles which help us use AI safely and constructively. It's based on a few simple ideas:

Accountability

AI is not ultimately responsible for its output: we are.

Transparency

Full disclosure of where and how AI is used.

Humanity

AI is tooling made by humans for humans.

Skepticism (but not cynicism)

Maintaining a healthy relationship with our AI.

These are exciting times. Let's harness the incredible power of AI to make the world a safer, fairer place for all of us.

Last Updated: Sept 10, 2024

AI’s ability to generate code, summarize information and explain complex systems is hugely valuable for our work in data integration and workflow automation. We believe that responsible application of AI can help us achieve our mission of doubling the efficiency of the social sector.

Today’s AI systems, while impressive (and, to most of us, distant science fiction until the past couple of years), are flawed - with limited reasoning, a tendency to err, and an inability to explain themselves.

But even if we soon create true Artificial General Intelligence (AGI) - systems which are a perfect, objective, omniscient source of truth - AI would still lack human wisdom, values, experience and creativity.

Excessive, unrestrained and ungoverned use of AI will lead to deepening inequality, erosion of technical skills and critical thought, and calamitous environmental cost. In an age of misinformation, AI represents a major risk to social stability.

We don’t think calamity is on the horizon, but we do recognise that serious challenges lie ahead of us. A glorious, AI-controlled utopia of social equality and low toil is still a distant dream.

In order to ensure safety and responsible use of AI, Open Function Group has adopted this Responsible AI Policy across its team of Core Contributors and the wider community.

This policy is a set of high-level guidelines designed to help us use these exciting AI tools to make the world a safer and more equal place.

About this Policy

The Responsible AI Policy is more of a mindset or philosophy than a list of didactic rules. It is an ethos we try to bring to our work every day: a set of principles and guidelines which occasionally translate to rules for our core contributors.

This policy is targeted at OpenFn Core Contributors (our staff), our implementation partners, open source contributors, and end-users of our workflow automation platform (app.openfn.org).

The policy is not technical and is aimed at a broad audience. It is designed to:

  • Celebrate and encourage safe usage of AI
  • Teach humans how best to use AI
  • Establish and encourage best practice in AI usage

Ultimately, the purpose of this policy, the idea we want to spread, is simply to remind humans that they are responsible for an AI’s output.

Policy Details

Our AI policy is built on the core pillars of accountability, humanity, transparency, and skepticism.

These are the things that we think about, and we encourage the wider community to think about, when using AI.

Accountability

AI is not responsible for its output: we are

What happens to the output of AI? Sooner or later it is released into the world, into contact with our society. This output has consequences, be they trivial or profound.

But the AI cannot be held accountable for these consequences. No matter how smart, it is just an algorithm, created by humans. It has no stake or presence in the physical and social worlds it outputs to.

This is not to absolve the AI so much as to acknowledge that it is humans who are on the line here. If the human’s work ends in failure or error (and it so often does!) it is the human who must be held accountable (with fairness and compassion, we sincerely hope).

Awareness of this accountability will allow us to preserve human dignity and values while using AI in our day to day lives.

Transparency

Full disclosure of where and how AI is used

One of the best ways we can take responsibility for AI is to disclose its usage. Every time we say “Oh, I used AI to help me write this”, we are kind of taking responsibility for it. After all, if we weren’t confident in the AI’s contribution, we’d be less quick to admit to its involvement.

And if we’d asked another human to help us we’d probably disclose that too - especially in a professional context. And really, using AI is just like asking another person. In the words of Anthropic, the industry-leading Claude is like “a brilliant but very new employee (with amnesia)”.

We welcome and encourage use of AI in our daily workflows, across our open source repositories, and when designing and implementing integrations. We just want to know about it (and tell others about it).

It’s also important that any AI-driven tools, products and services disclose, as much as possible, which models they use, how they were trained, and how decisions are made.

Humanity

AI is tooling made by humans for humans

Strange as it may seem, it is impossible to remove the humanity from AI.

Ultimately, all AI boils down to an algorithm written by human hands (even if the humans don’t truly understand that algorithm). And most modern AI is trained on datasets: curated by humans from a corpus of human creations.

That means that AI is not, and likely will never be, an objective source of truth. It is a reflection of us. Hopefully, with the right spirit and development, the best part of us - but still subject to our innate biases, and those subtly captured by training data.

It is also important that AI does not undermine humanity. We see AI as a useful set of tools, used by humans to increase our own productivity and the fairness of society. But AI is not a wholesale replacement for human ingenuity, creativity or dignity.

Skepticism

Maintaining a healthy relationship with our AI

We believe in healthy skepticism, but not cynicism.

The non-deterministic nature of AI, and its mysterious inner workings (uncannily like our own), mean that the outputs of AI can be unpredictable, untraceable, and inaccurate. The very structure and nature of LLMs are kind of irrational. Reason and logic and creativity are by-products of the structure of billions of interconnected nodes, much like human brains.

When AI gives us output, how do we know if it’s correct? This isn’t about hallucinations - it’s about the nature of truth, it’s about subjectivity, bias, context, culture. It’s about asking rigorous and methodical questions about the output before validating that it is good.

And much like when asking questions of other humans, it is vital that we stop and consider the answer before blindly feeding it forward. Sometimes other experienced, expert humans are wrong - or at least, not right - and the final solution is not black and white.

This isn’t to say that AI is wrong, or lying to us, or cannot be trusted. It’s just a reminder to apply critical thinking to AI’s output. Just like any new information we acquire.

What Kind of AI Is Covered?

The usage of the term “AI” has changed so much over the years. When you get right down to it, it can be hard to know exactly what we even mean by “AI”. Like art, we’ll know it when we see it.

This policy is not dogmatic or prescriptive. It applies when it matters: when we deal with systems which display human-like intelligence while not revealing their inner workings, or when we make decisions based on algorithmic results we cannot verify.

The term “Artificial Intelligence” has been around since at least the 1950s, and has covered everything from search algorithms and recommenders, to pathfinding and automation, to natural language processing, neural networks, and machine learning.

Classical AI has directly driven the algorithms inside your favorite software, hardware, apps and websites for the last twenty years. It has run governments, organized democracies, issued justice and granted aid.

These days, since the explosion of ChatGPT in 2022, the phrase AI generally refers to Large Language Models (LLMs). And what we think that really means is any algorithm smart enough to produce output that a human could have created.

Any machine we deem intelligent enough to replace human decision making, to erode our human sense of responsibility, is an AI that we are concerned about.

How we use AI at OpenFn

AI is increasingly critical to our business of workflow automation and data integration. So many problems that we, and our users, seek to solve are ripe for AI intervention. This is a really exciting and promising time for us.

When using AI, we are always mindful of what information we feed into it, and how the output will be used. We generally avoid building tools and services which call out to AI live at runtime, and, as users of AI, we never send sensitive customer data or PII to AI models. Period.

When integrating data, we have to compare and understand large and complex systems, like DHIS2 or Salesforce or Primero. This is something that LLMs have proven particularly good at, and AI can help make this task easier.

We regularly encounter mapping problems when moving data from system A to system B - like taking country names in natural language and converting them to ISO codes.
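As a purely illustrative sketch - the table and function names here are hypothetical, not part of our platform - this is the kind of design-time artifact an LLM can help draft, and which a human then reviews, tests and owns:

```typescript
// Hypothetical example: a country-name → ISO 3166-1 alpha-3 lookup.
// An LLM might draft this table at design time; a human reviews and owns it.
const COUNTRY_TO_ISO3: Record<string, string> = {
  "Ivory Coast": "CIV",
  "Côte d'Ivoire": "CIV",
  "Democratic Republic of the Congo": "COD",
  "Tanzania": "TZA",
};

// The mapping itself runs deterministically - no live AI call at runtime.
export function toIso3(countryName: string): string {
  const code = COUNTRY_TO_ISO3[countryName.trim()];
  if (!code) {
    // Fail loudly rather than guessing, so a human decides how to handle gaps.
    throw new Error(`Unmapped country name: "${countryName}"`);
  }
  return code;
}
```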

One of our biggest challenges comes from enabling inexperienced developers to encode the business logic and mappings that are required to implement an automated workflow. We strive to provide a simple, powerful vocabulary to make coding easier - but coding is just a hard task for humans. LLMs, however, have proved very effective at generating code, and could be a vital resource in helping our users to write successful integrations.

And of course, the developers who are building our systems, products and adaptors are all excited to use AI to produce better quality code, faster.

How The Policy Affects Us

We have already taken several steps to help put this policy into action at OpenFn.

For Core Contributors

Our Core Contributors - the people who make OpenFn, our staff - are given unlimited access (well, within reason) to AI code assistants. We like Anthropic, so at the time of writing we are using Claude.

All Core Contributors are “strongly encouraged” (i.e. we tell them to) to use their OpenFn accounts and tokens with official models when working on OpenFn projects. This is good for our team because they have access to incredible learning and upskilling resources. And it’s good for us because it helps us understand how dependent we are on AI at any given time.

Core Contributors are required to disclose any use of AI in any projects or formal communications, be they internal or external. It’s cool, we dig it, we just think it’s important to tell people.

For our Products and Open Source Software

We are developing a number of features to help users build workflows and use the OpenFn Platform. And naturally we’re following our own policy guidelines in development.

We do not use third party data - i.e., customer and user data - in the training of any AI models. We explicitly discourage any sensitive data being sent into any AI services we use - such as pasting data into a prompt. We may, if we think there’s sufficient value, allow data to be attached to queries - but only with clear consent and fair warning.

The tools we develop for our platform operate at build time: they are tools to help build workflows, not services used in live environments. We are lucky enough not to have to develop AI products which make decisions in real time, and we’re not particularly interested in embracing those challenges.

Any of our tools and services which use AI will be clearly marked and entirely optional. So if we’re using a power-hungry LLM as part of your workflow, we’ll make sure you know about it.

Where possible and appropriate, we’ll also disclose what models are used behind the scenes.

Any Workflows which have used our in-platform AI tools to support their creation will have a small indicator to this effect. Something like “AI was used to develop this workflow”. This is neither a warning nor an advert, just a nudge to inform users that AI was part of the process.

In our open source repositories, pull request templates require users to disclose whether AI was used on a given code branch. Again, it’s cool, we just think it’s important to know.

For our Partners and Community

Strictly speaking, our policy is targeted at our core contributors - but we are lucky at OpenFn to engage with a much wider community of digital public goods and services, integrators, developers, business leaders and government agencies.

And we think the principles of this policy apply to all of us. So, where possible, we like to gently nudge people in its general direction. We also think the simple act of disclosing AI usage more widely will encourage the rest of the policy to fall into place automatically. After all, by saying “I used AI for this”, you’re sort of taking responsibility for it.

For Our Users

Our users are people who either create their own workflows using our platform, or who are beneficiaries of those workflows - people whose lives and work are affected by automation.

We generally encourage AI to be used at design time, when there’s a human in the loop. We discourage AI which makes live decisions in production without allowing human auditing or intervention. While this is not always avoidable, and not always consequential, live AI workflows are definitely at the riskier, less responsible edge of the field.

We encourage code, mappings and algorithms to be generated offline and thoroughly tested before being deployed live.
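To make that concrete, here is a minimal sketch - purely illustrative, building on the hypothetical toIso3 mapping above - of the kind of offline check that could run before anything ships:

```typescript
// Hypothetical offline check for an AI-drafted mapping, run before deployment.
// Assumes the toIso3 function from the illustrative sketch earlier.
import assert from "node:assert";
import { toIso3 } from "./countryMapping";

assert.strictEqual(toIso3("Côte d'Ivoire"), "CIV");
assert.strictEqual(toIso3("Tanzania"), "TZA");
// Unknown inputs should fail loudly instead of silently guessing.
assert.throws(() => toIso3("Atlantis"));

console.log("Mapping checks passed");
```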

We discourage users from submitting real and sensitive data to AI models or AI-based third-party services (and we will never send user data into third party services or AI models).

Pull Request Templates

We encourage contributors to our open source repositories to use AI to assist their work. We think this is a really cool way of working, and one that can really improve the quality and efficiency of code.

But in the interests of transparency, we are asking contributors to let us know about it.

So we’ve updated our pull request template with a short survey on AI usage (see the options below). It takes seconds to fill out. Developers don’t need to disclose any more information or details to us, unless we specifically ask.

Here’s a bit more information about the options:

  • Learning or Fact checking: you’ve asked AI to help you learn something new, or to check reference docs or remind you of an API (if you’ve used AI like you’d use MDN or API reference docs, check this!)
  • Strategy: you’ve used an AI chatbot to work out a high-level strategy to get to your solution. This is about using AI to find good design patterns, or work out software architectures, or to learn best practice and standards.
  • Optimisation: you’ve used AI to optimize some existing code, reducing the number of lines or making it run more efficiently
  • Templating/code generation: you’ve used tools like Copilot or ChatGPT to generate code blocks for you. We don’t include basic code snippets or code completion in this - just stuff that’s explicitly marketed as AI.
  • Translation: you’ve used AI to translate some text while creating this PR. Maybe you’ve translated or spell-checked some user-facing copy, or you’ve translated reference docs or the original issue into your preferred language. This is targeted at natural language but you can tick it if you’ve translated machine code too.
  • Other: you’ve used AI for something which doesn’t easily fit into these other boxes
  • I have not used AI in this pull request: please do disclose if you didn’t use AI, just so that we know you’ve looked at this section and given it some thought.