How Development and Humanitarian Organizations Can Thoughtfully Approach Generative AI

New technologies often arrive wrapped in hype about their transformative potential. Generative AI is no exception: its ability to produce original text, code, and imagery can sound almost magical. For organizations committed to reducing poverty, improving health, and expanding opportunity, however, separating hope from reality is essential. By approaching this technology thoughtfully, we can realize its benefits while proactively mitigating its risks.

Identify High-Impact Use Cases

The first step lies in identifying use cases tightly aligned with core elements of your mission. Generative AI offers many possibilities, including:

Automating Labor-Intensive Tasks

  • Processing grant applications, progress reports, and other documents to extract key details, freeing up staff for higher-level analysis and relationship building (a minimal extraction sketch follows this list).
  • Drafting status reports, project summaries, and routine communications to accelerate information sharing.
  • Analyzing satellite imagery, social media, news sources, reports, and other data to provide rapid situational awareness and identify priority needs.
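
As a concrete illustration of the document-processing bullet above, the sketch below asks a hosted LLM to pull a handful of fields out of a narrative report. The OpenAI client, the model name, and the field list are assumptions chosen for the example, not a recommendation of a particular provider or schema.

```python
# Minimal sketch: extract key fields from a narrative report with a hosted LLM.
# The SDK, model name, and field list below are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_report_fields(report_text: str) -> dict:
    """Ask the model to return a small, fixed set of fields as JSON."""
    prompt = (
        "Extract the following fields from the report below and reply with "
        "JSON only: organization, reporting_period, total_beneficiaries, "
        "key_risks (a list of strings).\n\nReport:\n" + report_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```

Outputs like these still need human review before they feed into decisions; the point is to hand staff a structured starting draft, not a final answer.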

Creating Relevant Localized Content

  • Generating public health messaging tailored to specific communities based on language, culture, education levels, and other contextual factors.
  • Producing how-to guides adapted to available resources and constraints in a given locality.
  • Creating learning content customized to the needs and environments of different audiences.

The key is aligning use cases tightly with mission priorities and desired impact. Beware of falling into the trap of automating for automation's sake. Appropriate use cases create meaningful capacity, allowing teams to focus on the work only humans can do.

Curate Datasets Thoughtfully

The real-world value produced by AI reflects the data used in training. In building your own models, curate datasets with great care:

  • Prioritize data relevance over volume. Smaller, high-quality datasets with examples directly related to the problem space will yield better results than larger generic datasets.
  • Incorporate diversity reflective of the populations you aim to serve. Diverse data mitigates bias risks.
  • Apply "in-context learning" by providing clear examples up front that guide the model toward the desired responses (see the sketch after this list).
  • Monitor outputs for unintended bias and adjust training approaches accordingly.
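
To make the in-context learning bullet concrete, here is a minimal sketch of a few-shot prompt: the request carries two worked examples, so the model picks up the desired style from context alone, with no training run involved. The example rewrites, the client, and the model name are assumptions for illustration.

```python
# Minimal sketch of in-context (few-shot) learning: the prompt itself carries
# worked examples, so no fine-tuning or training run is involved.
# The example texts, client, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

FEW_SHOT_EXAMPLES = [
    {"role": "user",
     "content": "Rewrite for a low-literacy rural audience: "
                "'Maintain adequate hydration during heat events.'"},
    {"role": "assistant",
     "content": "Drink clean water often when it is very hot, even if you are not thirsty."},
    {"role": "user",
     "content": "Rewrite for a low-literacy rural audience: "
                "'Ensure infants complete the full immunization schedule.'"},
    {"role": "assistant",
     "content": "Take your baby to every vaccine visit until the health worker says you are done."},
]

def simplify_message(message: str) -> str:
    """Follow the pattern established by the examples above."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=FEW_SHOT_EXAMPLES + [
            {"role": "user",
             "content": f"Rewrite for a low-literacy rural audience: '{message}'"}
        ],
    )
    return response.choices[0].message.content
```

Swapping the worked examples is how the same pattern adapts to a different language, reading level, or channel without retraining anything.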

You know your mission and context best. That expertise is why thoughtfully curating your own datasets matters more than simply acquiring off-the-shelf models.

Manage Data Privacy and Security

Protecting vulnerable populations demands vigilance around data privacy and security:

  • Anonymize datasets by removing personally identifiable information (a minimal redaction sketch follows this list).
  • Implement stringent access controls on sensitive data.
  • Add safeguards like running outputs through a separate content moderation model before release.
  • Adopt responsible AI development practices including transparency, auditability, and continuous monitoring.
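
As a minimal illustration of the anonymization bullet, the sketch below redacts a few obvious identifier patterns with regular expressions before text is stored, shared, or used for training. Real anonymization requires much more than this, since names, locations, and combinations of quasi-identifiers can still re-identify people, so treat it as a starting point rather than a guarantee.

```python
# Minimal sketch of rule-based redaction. This only catches a few obvious
# patterns; names, locations, and quasi-identifiers still need handling.
import re

REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone-like numbers
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP]"),  # IPv4 addresses
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Amina at amina@example.org or +254 700 000 000."))
# -> Contact Amina at [EMAIL] or [PHONE].
```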

Data protection needs to be robust yet proportionate, avoiding governance so restrictive that it prevents the benefits from being realized at all.

Build Capacity Deliberately

Resist the temptation to see generative AI as a silver bullet. Build human capacity in tandem:

  • Prioritize upskilling existing staff over acquiring pre-trained models. Domain knowledge and creative thinking remain essential.
  • Focus training on the unique skills humans still need, like critical thinking, cultural competency, and sound judgement.
  • Hire some internal AI expertise to provide guidance and guardrails, but recognize that the contextual knowledge within your teams is equally important.

With care, reshape teams to complement AI strengths while retaining irreplaceable human skills.

Adopt New Technologies Cautiously

The hype surrounding generative AI warrants caution. Adopt new technologies step-by-step:

  • Favor solutions offering transparency into model training data, IP rights, engineering practices, and other factors. Opacity multiplies risk.
  • Start with lower-risk use cases rather than over-promising upfront. Real-world testing beats hypotheticals.
  • Anticipate the need for customization. Off-the-shelf models rarely suit all needs out of the box.
  • Maintain clear sight of your North Star metrics and continuously evaluate impact. Pause and adjust course if harms emerge.

Incremental adoption keeps promises in line with proven capabilities. Patience and care matter more than speed.

Keep Core Values Front and Center

Ultimately, technology is just a tool. Keeping mission and humanity at the forefront matters most:

  • Ensure efficiency gains free up resources for mission-critical efforts rather than simply reducing costs.
  • Proactively assess generative AI impact on vulnerable populations. Course-correct rapidly if bias or harm emerges.
  • Maintain human dignity as an uncompromisable principle, regardless of efficiency gains.
  • Involve communities served in shaping use cases and governance. Enable self-determination.

With core values as the compass, generative AI can profoundly amplify efforts to alleviate suffering and expand human potential. The path forward lies in harnessing that potential while mitigating the risks, and walking it requires wisdom, care, and our shared humanity.


Stay tuned for specific examples of generative AI applications in the development and humanitarian sector.