Google told its scientists to ‘take a positive tone’ in AI research – documents


OAKLAND — Alphabet Inc’s Google decided this year to tighten scrutiny of its scientists’ papers by launching a “sensitive topics” review and, in at least three cases, asked authors to refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

Google’s new review process asks researchers to consult with legal, policy and public relations teams before discussing topics such as face and sentiment analysis and categorizations of race, gender or affiliation, according to internal web pages explaining the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly innocuous projects raise ethical, reputational, regulatory or legal issues,” said one of the research staff pages. Reuters could not determine the date of the message, although three current employees said the policy began in June.

Google declined to comment for this story.

The “sensitive topics” process adds a round of scrutiny to Google’s standard document review, which checks for pitfalls such as leaking trade secrets, eight current and former employees said.

For some projects, Google officials stepped in later. A senior Google executive reviewing a study on content recommendation technology shortly before its release this summer told the authors to “take great care to set a positive tone,” according to internal correspondence read to Reuters.

The manager added, “That doesn’t mean we have to hide from the real challenges” posed by the software.

Subsequent correspondence from a researcher to reviewers shows that the authors “have updated to remove all references to Google products”. A draft seen by Reuters had mentioned Google-owned YouTube.

Four staff researchers, including senior scientist Margaret Mitchell, said they believe Google is beginning to interfere with crucial studies of the technology’s potential harm.

“If we’re looking for the appropriate thing given our expertise, and we’re not allowed to publish this for reasons that are not consistent with high-quality peer review, then we run into a serious problem of censorship,” Mitchell said.

Google states on its public website that its scientists enjoy “substantial” freedom.

Tensions between Google and some of its employees erupted this month after the abrupt departure of scientist Timnit Gebru, who led a 12-person team with Mitchell focused on ethics in artificial intelligence (AI) software.

Gebru says Google fired her after she challenged an order not to publish research claiming that AI that mimics speech could disadvantage marginalized populations. Google said it accepted and accelerated her resignation. It could not be determined whether Gebru’s article had undergone a “sensitive topics” review.

Google senior vice president Jeff Dean said in a statement this month that Gebru’s paper dwelt on the potential harms without discussing ongoing efforts to address them.

Dean added that Google supports AI ethics scholarship and is “actively working to improve our document review processes because we know that too many checks and balances can get cumbersome.”



The explosion of AI research and development in the tech industry has prompted authorities in the United States and elsewhere to come up with rules for its use. Some cited scientific studies showing that facial analysis software and other AIs can perpetuate biases or erode privacy.

Over the past few years, Google has integrated AI into all of its services, using the technology to interpret complex search queries, decide recommendations on YouTube, and auto-complete sentences in Gmail. Its researchers published more than 200 papers last year on responsible AI development, out of more than 1,000 projects in total, Dean said.

Investigating Google’s services for bias is among “sensitive topics” under the company’s new policy, according to an internal webpage. Among dozens of other “sensitive topics” listed are the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecommunications and systems that recommend or personalize web content.


The Google article for which the authors were asked to strike a positive tone discusses recommendation AI, which services like YouTube use to personalize users’ content feeds. A draft reviewed by Reuters included “concerns” that the technology could promote “misinformation, discriminatory or otherwise unfair outcomes” and “insufficient diversity of content”, as well as lead to “political polarisation”.

Instead, the final publication says the systems can promote “accurate information, fairness and diversity of content.” The published version, titled “What are you optimizing for? Aligning recommender systems with human values,” omitted credit to Google researchers. Reuters could not determine why.


An article published this month on AI used to understand foreign languages toned down a reference to errors made by the Google Translate product following a request from the company’s reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to “review and correct inaccurate translations.”

For an article published last week, a Google employee described the process as a “long haul,” involving more than 100 email exchanges between researchers and reviewers, according to internal correspondence.

The researchers found that the AI can spit out personal data and copyrighted material – including a page from a “Harry Potter” novel – that had been scraped from the internet to develop the system.

A draft outlined how such disclosures could infringe copyright or violate EU privacy law, a person familiar with the matter said. Following the company’s review, the authors removed the mentions of legal risk and Google published the document.

(Reporting by Paresh Dave and Jeffrey Dastin; Editing by Jonathan Weber and Edward Tobin)




