How to use AI in service of DEI
Some gAI best practices for public media to advance inclusion and equity
Since the last OIGO, I led a Headway session on generative artificial intelligence and diversity, equity and inclusion. There's been a lot to unpack.
Many of you in the public media system are concerned about these tools. You're not alone. Americans see AI more negatively than much of the world. Still, those invested in fostering deeper trust and belonging are trying to understand if and how these resources can best be leveraged.
Two frequent questions I get:
Can AI be used to connect with ____?
How do we steer away from the problems we've seen with AI?
I have some answers for you. In fact, with thoughtful pre-work, public broadcasting can leverage AI as an unlikely ally.
Before saying more: I'll add that station leaders all over are grappling with maximizing AI's potential while mitigating harm, especially for communities of color. Some organizations now cautiously explore AI to bolster inclusion. These are all positive signs.
As artificial intelligence rapidly evolves, public media is determining how to best apply these emerging technologies. I feel that, with thoughtful development anchored in human values and inclusive design principles, AI tools can empower stations to better serve and represent staff and communities of color. Our priorities must stay centered on systemic change through sustained effort, accountability and courageous reform from within.
Let's explore how.
By the way, sorry to cover this topic so often (see July and September 2023). NPR's recent hire of tech executive Katherine Maher as its new CEO (check out a fascinating interview she did for Possible, released days before that announcement) and national DEI stress in public media have hastened some of this.
Big questions first…
Can AI be used to connect with ____?
Not necessarily, but it depends on what you're trying to do.
Public media organizations should first think about their reasons for using AI for diverse engagement. Some possible objectives:
Serving information needs: providing content to diverse audiences (but see my caution in an October 2022 OIGO).
Presentation: you want to represent people of color and multilingual content on social media and elsewhere.
Identifying implicit bias: testing what you have against other materials to root out exclusionary language, approaches or assumptions.
Recruitment/retention: testing internal strategies against data on classes of workers to improve these efforts.
Whatever your goal, you need to think about the long game. AI will not generate a bilingual staff reporter to talk with a monolingual Spanish speaker who shows up to your office with a story because they happened to find one of your translated pieces. AI cannot generate leadership and staff of color to mentor newly recruited staff and help them feel more seen and connected. And AI definitely won't help the people of color who show up to your events because they saw a nice presentation on social media, only to quickly notice they're the only Latinos there.
Smart leadership, thoughtful planning and centering diversity as a way we work rather than a short-term goal are the only ways to foster long-lasting trust, connection and, ultimately, sustainability and support.
How do we steer away from the problems we've seen with AI?
There is no guarantee you'll stay away from the problems associated with AI. However, public media organizations and professionals can take several measures to ensure responsible and safe use of generative artificial intelligence in particular. Some key strategies I recommend:
Clear goals: similar to the goal discussion above, organizations must establish specific, measurable, inclusive and equitable objectives before implementing anything AI related. This step keeps you from wasting resources and helps ensure your organization gets meaningful value.
Transparency: promoting a culture of AI for good, communicating about it regularly and including your staff is essential. This also means involving many stakeholders and teams in your policy and design conversations, as well as in reviewing AI output. This goes a long way toward mitigating potential bias, fact-checking output and more.
Upskilling and training with DEI as the foundation: this goes deeper than training; it means avoiding replicating inequities and involving staff of color in your processes. Part of this priority also involves understanding the purpose of AI, putting clear guidelines around the range of autonomous AI decisions, staying aware of potential errors, misinformation and risks, and keeping current on problems being identified in real time.
See my January 2024 OIGO comments about AI in non-content affairs.
But will this protect you? Not completely, but vision, guidelines and promoting a culture of responsible AI can help public media.
How can we use AI in service of DEI?
I know many staff who express hesitation over AI further excluding marginalized groups or threatening jobs. But public media's thought leadership on ethical technology use can be beneficial in select ways.
How might public media organizations use generative AI to attain their DEI strategic goals?
Equitable recruiting and hiring: thoughtful AI adoption in human resources practices brings opportunities to mitigate unconscious biases and expand candidate pools. You could use ChatGPT, Bard, Claude or one of the other large language models to optimize language to attract diverse applicants for open positions while avoiding potentially restrictive verbiage. Stations can also expand reach by assessing hiring questions, reviewing interview-pool best practices and pressure-testing current ways of grading applicants.
Support for diverse sourcing and content: generative text summaries of the latest research on communities of color, of diverse Americans' views on important news stories, or of a subject impacting residents of color could accelerate newsrooms' work to represent the lives of residents of color in fact-based ways.
Inclusive design of our work: cross-functional teams could institute algorithmic audits of our outcomes that center diverse perspectives. How often is our decision-making protocol missing crucial voices? Could a data analysis of our output reveal things we may have blind spots around? Such oversight of our operations could be the first step toward establishing standards aligned not with business as usual but with public broadcasting's values. Transparency and accountability on this level may drive fresh ideas.
Diverse content experiments that stay in the office: as I said at the Current webinar on gAI (replay available at the Public Media Innovators PLC), I don't think a lot of this is ready for prime time, especially on the Spanish-language side. However, there's no reason why you can't experiment with it and see what the potential is. Too often, we in public media feel obliged to push any content-oriented work out to the public. In this case, give yourself permission to simply understand these tools and how they might spark ideas.
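To make the recruiting idea above a bit more concrete: before sending a job posting to an LLM (or a human editor) for rewriting, you can run a simple first-pass screen for potentially restrictive verbiage. This is a minimal, hypothetical sketch; the term list and suggestions here are invented for illustration and would need vetting by your own HR and DEI teams.

```python
import re

# Hypothetical starter list of terms often cited as potentially exclusionary
# in job postings, mapped to gentler alternatives. Curate your own list.
FLAGGED_TERMS = {
    "rockstar": "can read as masculine-coded; try 'skilled contributor'",
    "ninja": "can read as masculine-coded; try 'expert'",
    "aggressive": "may deter some applicants; try 'proactive'",
    "digital native": "ageist; name the actual skills instead",
    "native english speaker": "excludes fluent multilingual candidates; try 'fluent in English'",
}

def screen_posting(text: str) -> list[tuple[str, str]]:
    """Return (term, suggestion) pairs for each flagged term found in text."""
    lowered = text.lower()
    hits = []
    for term, suggestion in FLAGGED_TERMS.items():
        # Word-boundary match so flags don't fire on substrings of longer words.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            hits.append((term, suggestion))
    return hits

posting = "We need a rockstar producer: an aggressive self-starter and digital native."
for term, tip in screen_posting(posting):
    print(f"flagged {term!r}: {tip}")
```

A script like this is no substitute for human review; it simply surfaces candidates for discussion, and its output could seed the prompt you hand to a language model for a rewrite.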
Technology alone cannot address systemic inequities within public media and society. Lasting change requires sustained human effort, transparency, resources and courage. In that respect, AI may merely serve as a tool to thoughtfully assist progress. In the hands of those of us concerned about DEI, such resources may aid us in revealing biases, engaging affected communities, and enabling solutions.
As Sam Altman recently told Axios in Davos, the AI future may be discomforting. In my view, we must intentionally anchor development to inclusion or risk harmful misuse. Public media could help set and enact the highest standards for fostering diversity through emerging technologies applied boldly.
Cafecito: stories to discuss
February's FCC meeting will discuss requiring EAS devices to carry pre-translated template messages in Spanish and 12 other prevalent non-English languages spoken domestically alongside English scripts. The plan faces technical hurdles regarding current system architecture. Radio World has more.
The parents of a University of North Carolina Hussman School of Journalism and Media graduate have given an undisclosed gift supporting Spanish-language journalism programs and outreach.
Audacy has announced a slate of new public affairs programming for 2024. A program aimed at Latine audiences is among them. It's scheduled for an October debut.
The Los Angeles Times' layoffs, disproportionately impacting Black and Latine staff as covered by Nieman Lab, prompt conversations about union contracts, which may favor seniority (presumably of tenured white employees) over diversity (of early-career Black and Brown workers).
Adsmovil (see Maria Lopez Twena's 2023 OIGO interview) is hosting a webinar Feb. 28 in association with Florida International University on understanding the Hispanic voter in 2024. Details.
El radar: try this
Profile retiring legislative leaders and allies. WABE reminds me how many lawmakers are bowing out in 2024. Spring may be a good time to talk with those in your state who have advanced Latinx issues during their tenure.
Spotlight local Latino dance. Country swing dancing is popular in Jackson Hole, attracting tourists and locals for live music and dancing. However, KHOL notes Latin dance is also helping to bring the Wyoming town's diverse communities together.
Explore the world of corrections in your state. KQED's On Our Watch podcast returns with the story of correctional officer Valentino Rodriguez, the abuses in New Folsom prison he reportedly witnessed, and his sudden death. The labor pool in these roles, the norms associated with law enforcement, and the dangers are important conversations impacting Latinx communities.
Stay on the mental health beat. KJZZ covers the increase in social anxiety and depression among Latino youth, often first- or second-generation, who are dealing with language barriers, mental health stigma, parents skeptical of diagnoses, and struggles to belong exacerbated by the pandemic. Not a long story, and a good example of how stations can start coverage.
Paint a richer picture of your diversity efforts for audiences. For Better News, LAist has an interesting piece out about its DEI task force, process and recommendations. The impact of layoffs announced last year seems to be omitted. Still, this type of sharing can be helpful in building trust.
The next OIGO arrives Feb. 16. Before we go, felicidades a Vanessa de la Torre, who was appointed Connecticut Public's Chief Content Officer.
If you're the type who goes to NPR meetings, I'll be presenting next week on DEI committees on public media boards of directors. Also, if you're in the Bay Area, I'll be on stage with Pendarvis Harshaw on Feb. 23, talking to educators about storytelling, representation and technology at the Arts Media Entertainment Institute Popup. Register here if you'd like to attend.
You can buy me a coffee if you'd like to support OIGO.
Totally agree. It's inevitable that AI (or some advanced variation of it) will become increasingly present in our lives. By taking action now while it's still relatively young we can grow it into an incredible tool for so many issues, especially toward bringing more equity into our world.
I believe the biggest danger in AI stems from the biases that occur at the development level. And it's totally avoidable! Diversity absolutely has to be reflected in the development process: from the engineers building it and the technology they use, to the scientists (and their students) who are designing the studies and interpreting the data AI models are built from.
By highly prioritizing unbiased data collection and being more intentional about programming architecture, future generations can enjoy not being led down an endless compounding bias loop!