Google Orders Scientists To ‘Strike Positive Tone’ Over AI, Other Technologies
Google is demanding that its scientists ‘strike a positive tone’ when discussing artificial intelligence and other company technologies, according to Reuters.
The Alphabet Inc. subsidiary has launched a “sensitive topics” review procedure, which requires researchers to consult with legal, policy and public relations teams before researching topics such as “face and sentiment analysis and categorizations of race, gender or political affiliation,” according to internal documents explaining the policy.
“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” reads one of the documents designed for research staff, which current employees said was rolled out in June.
The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as the disclosure of trade secrets, eight current and former employees said.
For some projects, Google officials have intervened in later stages. A senior Google manager reviewing a study on content recommendation technology shortly before publication this summer told authors to “take great care to strike a positive tone,” according to internal correspondence read to Reuters.
The manager added, “This doesn’t mean we should hide from the real challenges” posed by the software. –Reuters
According to four staff members including AI researcher Margaret Mitchell, Google is starting to interfere with crucial studies of potential harms from technology.
“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” said Mitchell, who was part of a 12-member team focusing on ethics in artificial intelligence software.
Tensions erupted earlier this month when Timnit Gebru, Mitchell’s team lead, abruptly left the company. She says Google fired her after she questioned an order not to publish research claiming that AI which mimics speech could disadvantage marginalized populations. In short, Google quashed internal criticism, violating one of the tenets of wokedom.
According to Google Senior Vice President Jeff Dean, Gebru’s paper “dwelled on potential harms without discussing efforts underway to address them,” adding that Google supports AI ethics scholarship and is “actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome.”
As AI becomes more sophisticated, authorities across the globe have proposed rules to govern its use – amid criticism that facial analysis software and other systems can erode privacy or ‘perpetuate bias.’
Google in recent years has incorporated AI throughout its services, using the technology to interpret complex search queries, determine recommendations on YouTube and autocomplete sentences in Gmail. Its researchers published more than 200 papers in the last year about developing AI responsibly, among more than 1,000 projects in total, Dean said.
Studying Google services for biases is among the “sensitive topics” under the company’s new policy, according to an internal webpage. Among dozens of other “sensitive topics” listed were the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecoms and systems that recommend or personalize web content. –Reuters
The AI research paper that received the ‘positive tone’ feedback concerns ‘recommendation AI,’ which services like YouTube use to personalize users’ content feeds. According to a draft reviewed by Reuters, there were ‘concerns’ that the technology could promote “disinformation, discriminatory or otherwise unfair results,” as well as “insufficient delivery of content,” and could lead to “political polarization.”
The edited publication now says that the AI systems can promote “accurate information, fairness, and diversity of content.”
Following a request from company reviewers, a paper this month on AI for understanding a foreign language softened a reference to mistakes made by the Google Translate product, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to “review and fix inaccurate translations.”
For a paper published last week, a Google employee described the process as a “long haul,” involving more than 100 email exchanges between researchers and reviewers, according to the internal correspondence.
The researchers found that AI can cough up personal data and copyrighted material – including a page from a “Harry Potter” novel – that had been pulled from the internet to develop the system. –Reuters
A draft of the aforementioned paper discussed how disclosures could violate European privacy law or infringe copyrights. Following a company review, however, the legal risks were removed and the paper was published.
Wed, 12/23/2020 – 12:25