'Words do matter': Artificial intelligence helping review, change word choices used in workplace

"I think that words do matter, so I think that you do have to be very mindful of the words that you use."

By David Louie, KGO
Tuesday, October 27, 2020
Companies are developing software to monitor and change bad word choices in the workplace.

SAN JOSE, Calif. (KGO) -- The Oakland Unified School District this week issued an apology for sending out a survey that included a historically racist term for people of Asian descent. Words can offend.

However, a movement is underway to prevent bad word choices. It's part of the changing workplace in Building A Better Bay Area.

"I think that words do matter, so I think that you do have to be very mindful of the words that you use," says Jaye Bailey, Valley Transportation Authority's head of civil rights and employee relations.

Whether it's a transit agency like VTA or a private company, attention to messaging has never been greater as a result of the social justice movement.

RELATED: Oakland Unified School District apologizes after 'historically racist' term used in survey

"You really work hard to normalize the language within your organization so that everybody is aware of it so that it becomes second, second nature," she added.

VTA is engaged in a conscious effort to improve language on websites and in marketing materials, employee communications and advertising. Thirteen employees, ranging from bus operators to department heads, are in training developed by the Government Alliance on Race and Equity.

At software company Intuit, guidelines were developed last month to remove language with racist roots.

"We have used black and white as metaphors for bad and good, so blacklist and whitelist, and so we have now identified terms that we're not going to use anymore as metaphors for bad and good," said Tina O'Shea, director of content at Intuit.

Intuit also is using a software program from San Francisco-based Writer that incorporates artificial intelligence to review word choices and suggest changes.

"We can't really be the language police, and we can't be out there checking everything that everyone has written," said Ms. O'Shea. "So using a tool just helps everybody make better choices and helps them see what they didn't see in the first place."

RELATED: SFPD investigating after Asian woman targeted with racist letter

The software's goal is to encourage the use of respectful, people-first language.

"We got to put a data set together that combined the guidelines, the language guidelines, from communities and organizations that have spent a really long time thinking and working in this space," said Writer CEO May Habib.

Suggested corrections are just that: suggestions. Cultural sensitivity and historical meaning must also play a role, and every writer, even when changes are AI-generated, can benefit from a human editor's watchful eye.

"Formulas and codes that are behind artificial intelligence cannot do that on their own, so they ultimately have to be used in tandem and in combination with the human factor in moving us forward," said Shaun Fletcher, Ph.D., assistant professor of public relations at San Jose State University and a former manager of internal communications at Apple.

Some words can hurt. Other words can be powerful. Language evolves.

Check out more stories and videos about Building a Better Bay Area.