The first problem, highlighted in Coded Bias, a documentary film released by Netflix this week, is the shocking lack of diversity among the algorithm-writing classes. The activist academics featured in the film, including Joy Buolamwini, Cathy O’Neil and Safiya Noble, have done an outstanding job of exposing the embedded biases of systems built on an inadequate human understanding of complex societal issues and on imperfect data. Part of the issue stems from the skewed demographic composition of the tech industry itself. Why has the share of women among those earning computer science bachelor’s degrees at US universities more than halved since 1984, to just 18 per cent? It must be a priority of public policy, private philanthropy and tech industry practice to encourage more under-represented groups to work in tech.

Even so, we will never be able to solve the issue of algorithmic bias in isolation, especially when there is no societal consensus about the uses of AI, says Rashida Richardson, a visiting scholar at Rutgers Law School. “The problem with algorithmic bias is that it is not just a technical error that has a technical solution. Many of the problems that we see stem from systemic inequality and partial data,” she says. One hope is that AI-enabled tools can themselves help interrogate such systemic inequality, for example by highlighting patterns of socio-economic deprivation or judicial injustice. No black-box computer system compares with the unfathomable mysteries of the human mind. Yet, if used wisely, machines can help counter human bias, too.

Second, automated systems should only ever be deployed when they have demonstrable net benefits and are broadly accepted by the people most affected by their use. Take AI-enabled facial recognition technology, which can be both useful and convenient in the right contexts. When fully informed, the public tends to accept that trade-offs between privacy and safety are sometimes necessary, especially during security or health emergencies. But people rightly reject the indiscriminate use of flawed technology by unaccountable organisations.

Third, the tech companies that develop AI systems must embed ethical thinking in the entire design process, considering unintended consequences and possible remedies from the outset. To their credit, many tech companies have signed up to industry codes of practice focused on transparency and accountability. But their credibility was damaged when two leading ethics researchers at Google left the company after accusing senior leadership of empty rhetoric.

Fourth, the tech industry can help rebuild trust by subjecting its data sets and algorithms to independent scrutiny. The finance industry saw the sense of funding external credit rating agencies to assess the riskiness of various financial instruments and institutions. As became clear during the financial crisis of 2008, such agencies can get things badly wrong. Nevertheless, algorithmic auditing would be a useful discipline.

Fifth, when it comes to the use of AI systems in the most critical areas, such as self-driving cars or medical diagnoses, broader regulation is now clearly needed. Some machine-learning algorithms, which are trained to find solutions rather than explicitly designed to produce them, pose a particular challenge: how they work cannot always be understood or reliably predicted, and their harms can be diffuse, making remedial litigation difficult. Some experts have persuasively argued for the creation of an algorithmic equivalent of the US Food and Drug Administration, established in 1906 to regulate standards. Just as the FDA preapproves pharmaceutical drugs, so this new regulator should scrutinise complex algorithms before they are deployed for life-changing uses.

Source: www.ft.com