Tech giants are seeking help on AI ethics. Where they seek it matters

For years, tech’s most influential companies have faced pressure to build ethics checks into their software development process, especially regarding artificial intelligence.

As AI algorithms make their way into ever more services and products, from social media apps to bail recommendation software for judges, flaws in how AI is trained could affect every corner of society. For example, one risk assessment algorithm widely used in US courtrooms was found to recommend harsher prison sentences to black people than to white people.

Tech giants are starting to create mechanisms for outside experts to help them with AI ethics, but not always in the ways ethicists want. Google, for instance, announced the members of its new AI ethics council this week; such boards promise to be a rare opportunity for underrepresented groups to be heard. The council drew criticism, however, for including Kay Coles James, president of the conservative Heritage Foundation. James has spoken out against the Equality Act, which would make sexual orientation and gender identity federally protected classes in the US. Those and other statements would seem to put her at odds with Google's pitch as a progressive and inclusive company. (Google declined Quartz's request for comment.)
