Disability Discourse Matters

Addressing how shifts in language relate to changes in law, policy, and public attitudes. 

Abstract

The language political leaders use shapes the society we live in. How people with disabilities are talked about can influence policy, public perception, and daily life. Words carry weight: they inform understanding, set priorities, and affect how individuals are treated. Thoughtful language can lead to better policy, while careless rhetoric may fuel harmful perceptions and misguided policy decisions. Disability Discourse Matters (DDM), an Education Collaboratory initiative, is an AI-powered interface that collects statements by political leaders about disability, creating a dataset that captures how disability is discussed in the public sphere and providing the public with analysis dashboards. Over time, this data will support deeper analysis of how shifts in language relate to changes in law, policy, and public attitudes.

What inspired this project?

Over the past year, there has been a significant increase in harmful political discourse around disabilities. Politicians have used language that dehumanizes people with disabilities, positioning them as defined solely by their conditions, framing them as burdens, or denying their value and autonomy as whole people. These narratives ignore the real experiences of disabled people and shape both public attitudes and the policies that affect their lives. We created Disability Discourse Matters to systematically track political leaders’ language and hold them accountable for how they talk about people with disabilities.

How might this project benefit humanity? 

Strategically powered by AI, Disability Discourse Matters (DDM) collects and analyzes statements about people with disabilities made by political leaders, while also tracking related policy proposals over time. The dataset currently spans three interactive dashboards: the White House (including the President, Vice President, cabinet members, advisors, and heads of executive departments and independent agencies), members of the U.S. Senate, and proposed legislation that impacts individuals with disabilities, including education-specific bills.

Over time, DDM will expand to include all 535 voting members of Congress (100 senators and 435 representatives), as well as statements and rulings from the nine Supreme Court justices and governors across all 50 states. To capture discourse at the local level, the system will also gather statements from political leaders such as school board members and state legislators across all 3,144 counties and county-equivalents in the United States, including the District of Columbia.

Each collected statement is evaluated using a four-point scale that measures whether it dehumanizes or affirms the worth of individuals with disabilities. These results are then visualized on this website, allowing the public to explore how disability discourse evolves across time and levels of government.

Further, DDM harnesses AI as a cost-effective tool for advocacy and accountability, guided by Open Science principles. The DDM platform and methods are intentionally transparent, replicable, and cost-efficient, and are designed to be usable by individuals with little experience in AI, website design, or coding. This deliberate approach means anyone can adapt the system to advocate for other marginalized groups facing harmful discourse.

Who would use this?

  • Policy researchers
  • Disability rights advocates
  • Students and educators studying politics or social issues
  • Journalists and media analysts
  • Government staff who review public messaging
  • Community organizations focused on accessibility and justice

DDM Dashboard

How does it work?

The DDM system was built using Python with a modular architecture: web scraping components (BeautifulSoup, Requests), API integrations (e.g., SerpAPI for Google searches, OpenAI GPT-4 for quote extraction), and data management (Pandas). Python handles the initial data collection pipeline and can search dozens of elected officials at a time using a comprehensive keyword search strategy, scanning thousands of websites for their official, verbatim statements. 
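The collection step described above can be sketched as follows. This is a minimal illustration, not DDM's actual code: the keyword list is a placeholder for the project's much more comprehensive search criteria, and any URL passed in is assumed to be a page of official statements.

```python
import requests
from bs4 import BeautifulSoup

# Illustrative keyword list; DDM's real criteria cover many more
# terms and phrasings, refined through extensive testing.
KEYWORDS = ["disability", "disabilities", "disabled", "special needs"]

def matches_keywords(text: str) -> bool:
    """Return True if a paragraph mentions any tracked keyword."""
    lowered = text.lower()
    return any(k in lowered for k in KEYWORDS)

def find_candidate_statements(url: str) -> list[str]:
    """Fetch one page of official statements and keep matching paragraphs."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        p.get_text(" ", strip=True)
        for p in soup.find_all("p")
        if matches_keywords(p.get_text(" ", strip=True))
    ]
```

Keeping the matching logic in its own function makes it easy to test the filter on a handful of sample paragraphs before running it against thousands of pages.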

Once the Python pipeline has assembled a complete database of political statements, we use R to create and house the final database. R also performs the AI scoring via OpenAI, using a multi-component prompt that evaluates tone, context, and language against our four-point scale, and generates the dashboards with RMarkdown. In short, the pipeline has three stages: Python for multi-source data collection, R for scoring and database management, and RMarkdown for interactive dashboard deployment. Each component was designed to operate independently.
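To illustrate the scoring stage (which DDM itself implements in R), here is a Python sketch of how a prompt against a four-point scale might be assembled. The rubric wording, the middle-anchor labels, and the commented-out model call are placeholders, not the project's actual prompt.

```python
def build_scoring_prompt(quote: str) -> list[dict]:
    """Assemble chat messages asking the model to score one statement.

    The rubric text below is an illustrative stand-in: the real DDM
    rubric is a multi-component prompt covering tone, context, and
    language, and only its endpoints (dehumanizing vs. affirming)
    are described publicly.
    """
    rubric = (
        "Score the statement on a 1-4 scale, where 1 is dehumanizing "
        "toward individuals with disabilities and 4 affirms their worth "
        "and autonomy. Consider tone, context, and language. "
        "Return the score and a brief rationale."
    )
    return [
        {"role": "system", "content": rubric},
        {"role": "user", "content": f'Statement: "{quote}"'},
    ]

# Hypothetical usage with the OpenAI client:
# messages = build_scoring_prompt("Every student deserves support.")
# response = client.chat.completions.create(model="gpt-4", messages=messages)
```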

While our system leverages AI and automation, human intervention is incorporated into every step of the process. Human reviewers validate scraped content for relevance and proper attribution. After AI extracts quotes, we verify the accuracy of each quote. When AI generates scores, human reviewers evaluate the rationale to ensure alignment with our four-point rubric and capture contextual nuances AI might miss. How human oversight and AI align or misalign is also recorded in great detail. For example, we compare human scores and AI scores for every quote to determine inter-rater reliability between AI and human scorers. Each discrepancy is flagged for review, and we track agreement rates across different statement types and rubric dimensions. This systematic comparison allows us to quantify AI accuracy and identify specific contexts where human oversight remains essential. After several refinements to our evaluation system, our validation tests show that AI and human scores now agree on approximately 99 percent of statements across the White House and Cabinet dataset.
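The human-AI comparison described above reduces to a simple computation: the share of quotes where the two scores agree, plus a list of disagreements to flag for review. A minimal sketch with illustrative data:

```python
def agreement_report(pairs):
    """Compare human and AI scores on the four-point scale.

    pairs: list of (human_score, ai_score) tuples.
    Returns (agreement_rate, indices_of_disagreements).
    """
    agree = sum(1 for human, ai in pairs if human == ai)
    flagged = [i for i, (human, ai) in enumerate(pairs) if human != ai]
    return agree / len(pairs), flagged

# Illustrative data: 3 of 4 quotes agree, quote 2 is flagged for review.
rate, flagged = agreement_report([(4, 4), (2, 2), (1, 3), (4, 4)])
# rate == 0.75, flagged == [2]
```

In practice DDM also breaks agreement rates down by statement type and rubric dimension, which is a straightforward group-by over the same paired scores.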

Resources

Troubleshooting Tips

Start with Python’s BeautifulSoup and Requests libraries for basic web scraping; they’re well-documented and beginner-friendly. However, this approach will only take you so far. The key is developing comprehensive search criteria using keywords and very specific prompts. Researchers at the Education Collaboratory have run hundreds, if not thousands, of tests to make our web-scraping tool capture all the different ways our elected leaders may talk about or reference individuals with disabilities.

A key tip: you need to build incrementally. 

Start with scraping one source before scaling to multiple, test your filtering logic on small datasets before processing thousands of pages, and use free tiers of APIs during initial development to avoid high costs. For dashboard creation, R and RMarkdown are free and incredibly powerful for creating interactive HTML outputs without needing web development expertise. Our commitment to using free or low-cost tools (Python, R, RMarkdown, and free API tiers) ensures anyone can replicate this approach for their own advocacy work.
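One practical way to keep development inside free API tiers is a small on-disk cache, so repeated test runs never re-fetch the same page. The sketch below is illustrative (the cache location and layout are assumptions, not DDM's actual code):

```python
import hashlib
import json
import pathlib
import time

CACHE_DIR = pathlib.Path("cache")  # illustrative cache location

def cached_fetch(url, fetch_fn, delay=1.0):
    """Return cached data for url; call fetch_fn only on a cache miss.

    fetch_fn is any function that takes a URL and returns
    JSON-serializable data (e.g. a wrapper around an API call).
    """
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(url.encode()).hexdigest()
    path = CACHE_DIR / f"{key}.json"
    if path.exists():
        return json.loads(path.read_text())
    time.sleep(delay)  # be polite and stay inside rate limits
    data = fetch_fn(url)
    path.write_text(json.dumps(data))
    return data
```

During development, pointing every scraper and API call through a wrapper like this means a bug on page 900 of a run doesn't cost you 899 repeat requests.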

Contributors & Acknowledgments

Michael initially joined the Education Collaboratory at Yale as a consultant before transitioning to a postdoctoral associate. His research interests focus on the social and emotional development that occurs within families of children with intellectual disabilities, with attention to the emotional intelligence and functioning of neurotypical siblings. Other research interests include exploring inclusive higher education experiences of students with intellectual disabilities, as well as international early childhood development and peacebuilding.

Michael McCarthy

Postdoctoral Associate in the Child Study Center

Christina Cipriano, Ph.D., is an Associate Professor of Applied Developmental and Educational Psychology at the Yale Child Study Center in the Yale School of Medicine and Director of the Education Collaboratory at Yale University.

An award-winning scholar and internationally regarded expert in the science of learning, development, and open science practices, Dr. Cipriano is the PI and Director of numerous major federal and foundation grants supporting the centering of student intersectional identities in research and practice, the development and validation of novel school-based assessments and methodologies, and foundational evidence syntheses.

Christina Cipriano, PhD, MEd

Related Publications

Disability Discourse Matters Relaunches with Enhanced Dashboards, Smarter AI, and Expanded Capacity