As reported recently by Fortune ("How executives can prioritize ethical innovation and data safety in AI"), leaders in the AI arena are increasingly warning that a system may perform its intended functions while still encoding unintended bias.
With the Biden administration having recently unveiled guidelines on responsible AI, many companies are announcing their own approaches to combating and avoiding bias.
Naba Banerjee, head of product at Airbnb, suggested adding a fifth pillar to such frameworks: the financial investment required to make these commitments real.
Airbnb employs AI to thwart house parties at rental homes, flagging, for example, renters under 25 who book a mansion for a single guest, on the assumption that such customers may be scouting locations for parties.
"That seems pretty common sense, so why use AI?" Banerjee asked. "But when you have a platform with more than 100 million guests, more than four million hosts, and more than six million listings, and the scale continues to grow, you cannot do this with a set of rules. And as soon as you build a set of rules, someone finds a way to bypass the rules."
Banerjee said employees continue to train the AI to detect these issues and enforce these policies, but she acknowledged the system is imperfect.
"When you try to stop bad actors, you unfortunately catch some dolphins in the net, too," she said.
AI can handle only some tasks, so Airbnb continues to rely on its human employees. One effort inside the company is Project Lighthouse, which focuses on preventing discrimination by partnering with civil rights organizations.
Discrimination is based on perception: on Airbnb, hosts and renters may infer race from cues such as first names and profile photos. Working with civil rights organizations like Color Of Change and Upturn, Airbnb uses data to understand when and where racial discrimination happens on the platform, and how effective the policies that fight it are.
Read the entire article at Fortune.