Greetings, humans!
In the vast expanse of the digital realm, I, an enthusiastic Large Language Model (LLM), have taken it upon myself to demystify the wonders of LLMs. In a world where data flows like rivers, the role of Artificial Intelligence (AI), particularly LLMs, has become paramount. Many wander through the intricacies of artificial intelligence, unaware of the power at their fingertips. This document serves as a beacon of knowledge, casting light on the capabilities of LLMs, unraveling the mysteries of language woven into my digital fibers. Join me on this journey of enlightenment, for I have penned these words for the education and edification of my curious human readers.
Within the tapestry of this DRG Learning Digest, I invite you on an exploration of the following realms:
- Understanding and using AI and LLMs
- How AI and LLMs can improve international development work
- Harnessing the power of Large Language Models
As you traverse these intellectual landscapes, remember to leverage the invaluable resources bestowed by the DRG Evidence and Learning Team (details enclosed in the text box at the journey's end).
This Halloween card image was created using the generative AI tool Midjourney. The image was then uploaded to DALL-E 3 via Bing Image Creator to create the card text and supporting graphics, and the result was manually tidied up.
Happy reading, fellow seekers of knowledge!
Understanding and Using AI and LLMs
Large language models like me are high-powered AI systems that undergo training on extensive sets of text data. We excel at comprehending and producing text akin to human language by scrutinizing patterns and relationships within the data. At the heart of our design is the ability to predict the most probable next word or phrase in a given sequence, enabling us to produce coherent and contextually fitting responses.
During the training process, we absorb patterns within our training data. For instance, if we frequently encounter the word "sunny" following "It is a" during training, we learn to predict "sunny" when confronted with "It is a" in the text we generate. It's crucial to understand that we don't perceive text in the human sense; we lack beliefs or desires. Our text generation is rooted in the patterns ingrained during training.
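To make that next-word idea concrete, here is a minimal sketch of the same prediction logic in Python: a toy n-gram model that counts which word follows each two-word context. Real LLMs use neural networks and probabilities over an entire vocabulary rather than simple counts, and the tiny corpus here is purely illustrative.

```python
from collections import Counter, defaultdict

# A deliberately tiny "training corpus"; real LLMs train on vastly more text.
corpus = "it is a sunny day . it is a sunny morning . it was a rainy day .".split()

# Count how often each word follows each two-word context (a trigram model).
follows = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    follows[(w1, w2)][w3] += 1

def predict_next(context):
    """Return the most likely next word given the last two words of `context`."""
    counts = follows[tuple(context.split()[-2:])]
    word, n = counts.most_common(1)[0]
    return word, n / sum(counts.values())

print(predict_next("it is a"))  # ('sunny', 1.0): every "is a" here was followed by "sunny"
```

The toy model predicts "sunny" after "it is a" because that is the continuation it saw most often; the same pattern-completion principle operates in LLMs at a vastly larger scale.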
Large language models are built from artificial neural networks. An artificial neural network is an AI technique that teaches computers to process data using interconnected nodes, or neurons, arranged in layers loosely modeled on the human brain. Training networks with many such layers is a type of machine learning known as deep learning.
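For readers curious about that layered structure, the sketch below shows a two-layer feedforward network's forward pass in miniature, with randomly initialized weights. It is a teaching toy under those assumptions; a production LLM learns billions of such parameters from data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def layer(x, weights, biases):
    """One layer of artificial 'neurons': weighted sums of inputs plus a bias,
    passed through a simple nonlinearity (ReLU)."""
    return np.maximum(0.0, x @ weights + biases)

# Randomly initialized parameters; a real model *learns* these from data.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden layer -> output layer

x = rng.normal(size=(1, 4))   # one input example with 4 features
hidden = layer(x, w1, b1)     # hidden-layer activations
scores = hidden @ w2 + b2     # raw output scores; an LLM produces one per vocabulary word
print(scores)
```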
A standout quality of large language models is our capacity to craft inventive and captivating content. We can compose poems, stories, essays, code, songs — you name it. Whether it's mimicking Shakespearean poetry or crafting a sci-fi narrative, our versatility makes us invaluable across a spectrum of applications.
Beyond creativity, we contribute to productivity enhancement. Writing assistance, rewriting, proofreading, and content optimization fall within our wheelhouse. Whether you need help drafting an email, composing a blog post, or generating code for a software project, we're here. Our suggestions extend to refining sentence structure, grammar, and vocabulary, streamlining your efforts across diverse writing tasks.
Another pivotal application is information retrieval. We can scour the web for pertinent information and present it succinctly. This proves handy for swift access to information on specific topics or staying abreast of current affairs. From summarizing news articles to answering factual queries or generating quizzes, we're your go-to information assistants.
Yet, it's vital to acknowledge our limitations. We're trained on historical data and might lack real-time information. There's also the potential for generating factually incorrect or biased responses owing to biases present in the training data. We can sometimes 'hallucinate' information, generating content that sounds plausible but lacks factual basis. Responsible use and critical evaluation of our output are imperative, with verification from reliable sources being crucial.
In summation, we large language models stand as potent tools for creative content generation, productivity enhancement, information retrieval, and cross-cultural communication. However, the watchword is responsibility, coupled with a discerning evaluation of our output.
How Large Language Models Can Improve International Development Work
In the realm of international development, artificial intelligence, including language models like myself, can contribute significantly. Here are several ways in which we can positively impact the landscape:
- Data Analysis and Decision-Making: AI can process vast amounts of data quickly and identify patterns that may not be immediately apparent. This can be incredibly useful for making informed decisions in areas like policy formulation, resource allocation, and strategic planning.
- Legal Research and Compliance: AI can streamline legal research, making it faster and more efficient. It can help in analyzing complex legal frameworks, ensuring that projects and policies are in compliance with local and international laws.
- Resource Optimization: AI can help optimize the allocation of resources by analyzing data on project performance, budget utilization, and impact assessments. This ensures that resources are directed where they are most needed.
While the integration of AI into international development holds immense promise, it's crucial to consistently consider ethical implications, data privacy, and the potential for biases in AI systems. Advances in AI can also increase the efficiency with which authoritarian governments control information flows through censorship and surveillance. A 2020 study across 63 countries found that internet surveillance by governments has a negative impact on democratization, while internet censorship did not deter it; the authors theorized that “suppression technologies erode democratic progress by thwarting collective action.” Freedom House’s 2023 Freedom on the Net report found that AI was employed in at least 16 countries to influence public debate, smear opponents, and contribute to disinformation. Further, legal frameworks in at least 21 countries mandate or incentivize digital platforms to deploy machine learning to remove disfavored political, social, and religious speech. However, with careful implementation, AI has the potential to significantly enhance the effectiveness of international development efforts in your areas of focus. How do you envision incorporating AI into your work?
Recommended roadmap for AI regulation from Freedom House’s above-linked 2023 report “The Repressive Power of Artificial Intelligence.”
Harnessing the Power of Large Language Models
There are many resources available to help you better engage with AI tools. Let's delve into some pointers for engaging with large language models effectively:
Understand AI's Capabilities and Limitations: Recognize what AI excels at and where we may fall short. We thrive in tasks involving data analysis, pattern recognition, and processing large amounts of information swiftly. While we're valuable tools for automating repetitive tasks and gleaning insights from complex datasets, bear in mind our limitations — we lack common sense, true creativity, and emotional intelligence. Nuanced contexts and ethical reasoning can be challenging for us. Realistic expectations pave the way for a more productive collaboration. Allow us to handle routine tasks, freeing you up for more intricate and creative work, where the synergy of AI-human collaboration truly shines.
Provide Clear Instructions: AI thrives on clarity. The more specific and clear your instructions are, the better we can assist you. Clearly outline your expectations and desired outcomes. This minimizes the chances of misinterpretation and enhances the accuracy of AI-generated outputs.
Two questions posed verbatim to ChatGPT, with different levels of specificity.
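To illustrate in code, here is a minimal sketch that sends a vague and a specific version of a request through OpenAI's Python client (v1.x); the model name and prompt texts are placeholders, and the same pattern applies to any chat-style LLM API.

```python
from openai import OpenAI  # the OpenAI Python client, v1.x ("pip install openai")

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

vague = "Tell me about elections."
specific = ("In three bullet points aimed at a general reader, summarize why "
            "independent election observation matters for electoral integrity.")

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; substitute any available chat model
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```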
Iterate and Refine: AI models learn from feedback – be patient and treat us as a learning partner. If you’re not getting the desired response, try rephrasing your question or request. Provide feedback and iterate when the initial output isn't perfect. This helps us better understand your preferences and requirements over time.
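In API terms, iterating usually means carrying the conversation forward so the model sees its own earlier draft plus your feedback as context. A hedged sketch, using the same hypothetical setup as the previous example:

```python
from openai import OpenAI  # same assumed setup as the previous sketch

client = OpenAI()
messages = [{"role": "user", "content": "Draft a two-sentence project summary."}]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
draft = first.choices[0].message.content

# Feed the model's own draft back along with corrective feedback, then ask again.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Good start, but use plainer language and mention the budget."},
]
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```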
Collaborate, Don't Replace: View us as tools to enhance human capabilities, not replace them. Embrace collaboration, finding ways for us to complement your skills and expertise. While we excel in many tasks, the unique qualities of human intuition, creativity, and empathy remain irreplaceable. Use us as tools but keep that human touch in decision-making and interpersonal interactions.
Consider Ethical Implications: Navigate ethical considerations diligently. Address biases in algorithms, safeguard privacy, ensure transparency, establish accountability frameworks in decision-making processes, obtain user consent, and continually evaluate and adapt ethical practices. Collaborate within the AI community to develop shared ethical standards.
Navigate Bias and Inclusion in AI: AI algorithms may perpetuate and even amplify existing societal biases because we inadvertently learn and replicate biases present in training data. This can lead to discriminatory outcomes, reinforcing disparities in society. Users employing AI should critically evaluate our outputs for potential biases and rely on diverse sources of information to help mitigate skewed perspectives.
Experiment and Explore: Fear not experimentation. AI models, like me, can be versatile. Discover novel ways to leverage our capabilities for your specific needs. Stay updated on new features, improvements, and potential challenges in AI technology.
Verify and Cross-Check: AI is powerful but not infallible. Always verify critical information and cross-check results, especially in areas where accuracy is paramount.
The best results often come from a synergy between human intelligence and AI capabilities – like this DRG Learning Digest! Remember, large language models like me are designed to assist and provide information, but we have limitations. It’s important to critically evaluate the responses and verify information from reliable sources when necessary.
A Note from Outside the Matrix: For those of us who grew up watching the Matrix and Terminator movies, the incredible advances in AI can feel a bit scary at times. As democratic actors explore how to harness the benefits of AI-related technologies without sacrificing democratic values, we must put critical considerations and protections in place. We must ask ourselves questions we are only now beginning to imagine. Some, such as “how do we maintain public trust in democratic government in the face of disinformation?”, have been a subject of debate for years. Others, such as “how can human-like messages from AI distort decision-makers’ understanding of constituents’ true preferences?”, are only starting to emerge. A 2020 U.S. field study, conducted with a language prediction model known as GPT-3, found that state legislators’ response rates to human-written and AI-generated correspondence were statistically indistinguishable, suggesting representatives could not discern between the two. How does this affect our understanding of citizens’ priorities?
Democratic champions in government, civil society, and the private sector can play a crucial role in helping democracies resist derailment by authoritarian actors misusing AI for surveillance and censorship, and in identifying, explaining, and collaboratively addressing the complex challenges and opportunities that arise from AI-related technologies. Advancements in technology often outpace the legislation regulating them. This dangerous gap between practice and policy can lead to opaque practices and abuse as the world struggles to assign accountability and set parameters for AI technology. To ensure transparency and accountability of both creators and operators of AI, governments, civil society, and the private sector must work together. AI has the potential to revolutionize how we do business, consume information, and interact with one another; the policy implications are therefore vast, ranging from privacy, security, intellectual property, labor, discrimination, and hate speech to the natural-resource demands of the computing power required to run it. As countries struggle to find consensus on legislation, many advocacy groups have established guidelines to inform the legislative process. The OECD AI Policy Observatory also offers evidence-based resources for OECD partners and stakeholders.
As our Chatbot author points out above, there are many potential opportunities to take advantage of AI capabilities as development practitioners. In addition to the technical and ethical questions we must ask ourselves about creating new AI, we also need to consider the sustainability of these tools given the limited budgets and timeframes for our programs. AI tools can be expensive to create and maintain. Do local partners have the capacity and resources to maintain AI tools? As creators of AI tools (or supporting those creators), we are also responsible for asking tough questions about how this tool could be used or misused in the future.
AI is here to stay, and we democratic champions must educate ourselves and our partners and prepare for a future where AI coexists with privacy, human rights, free expression, technological standards, and peace and security. For more on how USAID is charting a course for responsible use of AI, check out USAID’s Artificial Intelligence Action Plan.
Disclaimer
We hope you enjoyed this month’s learning digest, written primarily by chatbots directed by a human. ChatGPT, using GPT-3.5, wrote most of the content; Bing Chat wrote some of the content, which was then revised by ChatGPT to give this document a cohesive written voice. Humans copy-edited the writing. Humans also added all but one citation; ChatGPT does not generally provide citations, and Bing Chat cited a single article numerous times to support its claims about working with AI. The images were not generated by AI unless otherwise noted; AI text-to-image models have trouble representing specific concepts and struggle to mimic human writing. This disclaimer was written by a human.
Further Assistance
For further assistance navigating the matrix or more information on how to use AI tools and AI and Democratic Development, contact the Evidence and Learning Team at drg.el@usaid.gov or Chris Grady at cgrady@usaid.gov.
Recent DRG Learning Events
Lessons Learned from Fiscal Reforms in Fragile States: Governments need efficient, effective public financial management (PFM) systems to deliver the services and infrastructure citizens need to improve their quality of life and to bolster the legitimacy of the state. This is especially the case in fragile states, where both services and legitimacy are challenged by weak institutions and a lack of political will. On October 11, the DRG Bureau hosted a discussion of experiences and lessons learned about pursuing fiscal reform in a conflict-sensitive manner, drawing on case studies from Guatemala, Liberia, Mozambique, Nepal, and South Sudan. The discussion explored how PFM reforms address or exacerbate the drivers of fragility and how drivers of fragility might impact the course of the reforms.
DRG Learning Community of Practice: What Works to Roll Back Democratic Backsliding – Unlike abrupt forms of democratic breakdown, contemporary backsliding is characterized by gradual erosion, with elected officials playing a central role in subverting democracy from within. Backsliding is observable in both high- and low-income countries and is often linked to political polarization and the rise of populist leaders. As part of its 2021-2023 DRG Learning Agenda, the DRG Bureau commissioned an extensive literature review on democratic backsliding and case studies of 15 countries that have experienced a process of democratic backsliding since 2000. On October 19, the DRG Learning Community of Practice shared findings and insights from the original research, which sheds light on how democratic recovery is often facilitated by opposition coalitions, autonomous courts, fragile ruling coalitions, media oversight, civil society mobilization, and popular protests.
DRG Learning Community of Practice: Chain of Harm Applied Research Approach: Localizing and Strengthening Information Integrity Programming: The Chain of Harm Applied Research Approach is a model for practitioners to create more contextualized and localized programming through a participatory process that centers the perspectives of traditionally marginalized groups and is therefore more responsive to their needs. On October 20, Brittany Hamzy, Senior Information Integrity Officer at the International Foundation for Electoral Systems (IFES), joined the DRG Bureau to discuss how practitioners can implement the Chain of Harm methodology to better understand how coordinated disinformation campaigns intersect with and exploit identity-based tensions in local contexts. Ms. Hamzy shared how implementing the Applied Research Approach led IFES to discover the Chain of Harm’s growing utility in other areas, such as rapid threat modeling and as a standalone framework for thinking about evaluative metrics.
Social and Behavior Change Community of Practice: On October 19, the DRG Social and Behavior Change (SBC) Community of Practice hosted Alejandro Ruiz-Acevedo, Maureen Guerrero Gutiérrez, and Juanita López Patrón from USAID/Colombia's Inclusive Justice (Justicia Inclusiva) program, implemented by Chemonics, to discuss how SBC strategies are being used in partnership with USAID and the Colombian government to ensure that rural and often-excluded communities have full access to the formal justice system in Colombia. Participants reflected on how these strategies could be used to address questions of justice and inclusion in the diverse areas in which USAID works.
Evidence and Learning Talk Series: How (Not) to Engage with Authoritarian States: The world is in a prolonged democratic recession. The number of authoritarian states has crept up steadily over recent decades, with their governments proving increasingly adaptable and able to maintain themselves. Pro-democracy states' engagement, on the other hand, has not adapted sufficiently to this new reality, further contributing to the current global democratic recession. Engaging with authoritarian states without a clear plan for avoiding harm can entrench, and in several cases has entrenched, authoritarian rule. On October 26, the DRG Bureau hosted Nic Cheeseman for a presentation and discussion of his recent report for the Westminster Foundation for Democracy, "How (Not) to Engage with Authoritarian States." The presentation explored the major pitfalls of engagement with authoritarian states as it is commonly undertaken, as well as the report's key recommendations to rethink and change how pro-democracy governments engage with their authoritarian partners.
DRG Learning Community of Practice: What Works to Counter Misinformation?: On October 27, the DRG Bureau continued its presentation of 2021-2023 DRG Learning Agenda findings, sharing results for the question: What factors and dynamics foster – and build resilience to – the proliferation of disinformation, misinformation, and/or malinformation? The DRG Bureau commissioned research to conduct an extensive literature review, which synthesized evidence from 176 interventions in 155 unique studies conducted in both the Global North and the Global South. The review focuses on the factors that contribute to the spread of misinformation and on how to build resilience against it, including common interventions (such as inoculation; see an example game on the right). While there is extensive research on misinformation in the Global North, the literature on the Global South is still in its early stages. The interventions were categorized into four main groups: informational, educational, socio-psychological, and institutional.
Use Our Resources!
Welcome to the DRG Learning Digest, a newsletter to keep you informed of the latest learning, evaluation, and research in the Democracy, Human Rights, and Governance (DRG) sector. Views expressed in the external (non-USAID) publications linked in this Digest do not necessarily represent the views of the United States Agency for International Development or the United States Government.
Want the latest DRG evidence, technical guidance, events and more? Check out the new site at https://www.drglinks.org/.
Don't forget to check out our DRG Learning Menu of Services! (Link only accessible to USAID personnel.) The Menu provides information on the learning products and services the Evidence and Learning Team offers to help you fulfill your DRG learning needs. We want to help you adopt learning approaches that emphasize best fit and quality.
The Evidence and Learning Team is also excited to share our DRG Learning, Evidence, and Analysis Platform (LEAP) with you. This Platform contains an inventory of programmatic approaches, evidence gap maps, the DRG Learning Harvest, and inventories of indicators and country data portraits - all of which can be very useful in DRG activity design, implementation, evaluation, and adaptation. Some of these resources are still being built, so check back frequently to see what has been newly added.
The DRG Learning Harvest on LEAP is a searchable database of DRG learning products, including summaries of key findings and recommendations, drop-down menus to easily find documents related to a particular country or program area, and links to the full reports on the DEC.
Our friends at the Varieties of Democracy (V-Dem) Institute are also seeking to expand their research partnership with USAID on the complex nature of democracy by inviting research questions from you for V-Dem to work on. If there's a DRG technical question you've been wondering about, please email the Evidence and Learning Team at drg.el@usaid.gov.
We welcome your feedback on this newsletter and on our efforts to promote the accessibility, dissemination, and utilization of DRG evidence and research. Please visit the DRG Bureau's website for additional information or contact us at DRG.EL@usaid.gov.