AI and the Far Right: A History We Can’t Ignore
by Sarah Myers West
The heads of two prominent artificial intelligence firms came under public scrutiny this month for ties to far-right organizations. A report by Matt Stroud at OneZero identified the founder and CEO of surveillance firm Banjo, Damien Patton, as a former member of the Dixie Knights of the Ku Klux Klan who was charged with a hate crime for shooting at a synagogue in 1990. The report led the Utah Attorney General’s office to suspend a contract worth at least $750,000 with the company, and the firm has also reportedly lost a $20.8 million contract with the state’s Department of Public Safety.
Only a few weeks earlier, Luke O’Brien at the Huffington Post uncovered that Clearview AI’s founder, Cam-Hoan Ton-That, was affiliated with far-right extremists including former Breitbart writer Chuck Johnson, Pizzagate conspiracy theorist Mike Cernovich, and neo-Nazi hacker Andrew ‘weev’ Auernheimer. Moreover, the report found evidence that Ton-That collaborated with Johnson and others in developing Clearview AI’s software.
This news is shocking in and of itself, revealing deep and extensive connections between the far right and AI-driven surveillance firms that contract with law enforcement agencies and city and state governments. But it also raises critical questions we urgently need to ask: how is this persistent strain of right-wing and reactionary politics currently manifesting within the tech industry? What do the views held by these AI founders suggest about the technologies they are building and bringing into the world? And, most importantly, what should we do about it?
These firms have access to extensive data about the activities of members of the public. The state of Utah, for example, gave Banjo access to real-time data streams from the state’s traffic cameras, CCTV, and 911 emergency systems, which the company combines with social media and other sensitive data sources. It combs through these sources to, as the company describes it, ‘detect anomalies’ in the real world.
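Banjo has never published how its system actually works, so any technical illustration is necessarily a guess. As a minimal sketch of what ‘detecting anomalies’ in a fused event stream can mean in practice, consider a rolling z-score over per-interval event counts; the window size, threshold, and data below are all invented for illustration:

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical sketch only: flag spikes in a fused event stream using a
# rolling z-score. The "events" stand in for counts aggregated from sources
# like traffic cameras, CCTV feeds, and 911 calls.

WINDOW = 24      # number of past intervals that form the baseline
THRESHOLD = 3.0  # z-score above which an interval is flagged

def detect_anomalies(counts):
    """Yield (index, count, z) for intervals that deviate sharply
    from the recent rolling baseline."""
    history = deque(maxlen=WINDOW)
    for i, count in enumerate(counts):
        if len(history) == WINDOW:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (count - mu) / sigma > THRESHOLD:
                yield i, count, (count - mu) / sigma
        history.append(count)

# A steady stream with one sudden spike gets flagged at the spike.
stream = [10, 12, 9, 11, 10] * 5 + [60, 10, 11]
for idx, count, z in detect_anomalies(stream):
    print(f"interval {idx}: count={count}, z={z:.1f}")
```

A production system would presumably fuse many heterogeneous streams with far more elaborate models; this sketch shows only the general shape of baselining a stream and flagging deviations from it.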
We know that many AI systems reproduce patterns of racially biased social inequality. For example, many predictive policing systems draw on dirty data: as my colleagues at the AI Now Institute demonstrated, in many jurisdictions law enforcement agencies are using data produced during periods of flawed, racially biased, and sometimes unlawful policing practices to train these systems. Unsurprisingly, this means that racial bias is endemic in ‘crime-prevention’ analytic systems: as research by the scholar Sarah Brayne on predictive policing indicates, these data practices reinscribe existing patterns of inequality that exacerbate the over-policing and surveillance of communities of color.
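Neither the AI Now work nor Brayne’s research reduces to a few lines of code, but the feedback loop they describe can be made concrete with a deliberately simplified sketch; every number and neighborhood below is invented:

```python
# Toy illustration of the 'dirty data' feedback loop: recorded arrests
# reflect patrol intensity as much as underlying offending, so a model
# trained on arrest counts ranks over-policed areas as highest-risk and
# sends patrols back to them. All figures are invented.

neighborhoods = {
    # name: (underlying offense rate per 1,000 residents, patrol intensity)
    "A": (5.0, 2.0),  # historically over-policed: same rate, double patrols
    "B": (5.0, 1.0),
    "C": (5.0, 1.0),
}

# Step 1: the historical 'crime data' is really offenses times patrol
# presence, because police can only record what they are present to observe.
recorded_arrests = {
    name: rate * patrols for name, (rate, patrols) in neighborhoods.items()
}

# Step 2: a naive predictor ranks neighborhoods by recorded arrests.
risk_ranking = sorted(recorded_arrests, key=recorded_arrests.get, reverse=True)

# Step 3: patrols get reallocated toward the 'riskiest' area, which then
# generates still more recorded arrests next cycle, closing the loop.
print("recorded arrests:", recorded_arrests)
print("risk ranking:", risk_ranking)  # A tops the list despite identical rates
```

The point of the sketch is that no ingredient here is overtly racist; the discrimination rides in on the training data, which is part of what makes it so hard to detect and contest.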
But what we’re seeing here is something quite different. Clearview AI appears to have been designed with explicitly racist use cases in mind: according to the Huffington Post report, Chuck Johnson posted in January 2017 that he was involved in “building algorithms to ID all the illegal immigrants for the deportation squads” and bragged about the capabilities of the facial recognition software he was working on.
Clearview AI has now signed a paid contract with Immigration and Customs Enforcement, which is using predictive analytics and facial recognition software to accelerate the detention and deportation of undocumented people in the United States, even as the pandemic unfolds around us.
The company may soon become privy to even more intimate details about the everyday lives of millions of people: it is exploring contracts with state and federal agencies to provide facial recognition tools for COVID-19 contact tracing. How do we know there will be strong protections walling its contact tracing work off from its clients at ICE? Thanks to extensive reporting, we already knew that Clearview AI’s activities were rife with abuse, even before the news of its interest in helping ‘deportation squads’.
Clearview AI and Banjo are only indicators of a much deeper and more extensive problem. We need to take a long, hard look at the fascination with the far right among some members of the tech industry, putting the politics and networks of those who create and profit from AI systems at the heart of our analysis. And we should brace ourselves: we won’t like what we find.
Silicon Valley was founded by a man whose deepest passion wasn’t building semiconductors — it was eugenics. William Shockley, who won the 1956 Nobel Prize in physics for inventing the transistor, spent decades promoting racist theories about IQ differences and supporting white supremacy. Shockley led an ultimately unsuccessful campaign to persuade Stanford professors, including one of the founders of the field of AI, John McCarthy, to join him in the cause. Shockley wasn’t alone: years later Jeffrey Epstein, also a proponent of eugenics research, became a key funder of MIT’s Media Lab, and provided $100,000 to support the work of AI researcher Marvin Minsky.
For his part, McCarthy asserted in a 2004 essay that women were less biologically predisposed to science and mathematics than men, and that only through technological augmentation could women achieve parity with them. His perspective is oddly resonant with the views of James Damore, outlined in an anti-diversity memo he circulated while at Google, a memo endorsed by members of the alt-right: “the distribution of preferences and abilities of men and women differ in part due to biological causes, and…these differences may explain why we don’t see equal representation of women in tech and leadership”. As we are discovering, Damore was far from alone.
Though there are distinctions between these cases, what is becoming clear is the persistence of right-wing and explicitly racist and sexist politics among powerful individuals in the field of artificial intelligence. For too long we have ignored these legacies, even as the evidence of their effects mounts: an industry that is less diverse today than it was in the 1960s, and technologies that encode racist and biased assumptions, exacerbating existing forms of discrimination while rendering them much harder to identify and mitigate.
It is unacceptable for technologies made by firms that espouse or affiliate with racist practices to be used in making important decisions about our lives: our health, our safety, our security. We must ensure that these companies — and the clients that hire them — are held accountable for the views that they promulgate.
"right" - Google News
May 04, 2020 at 10:56PM
https://ift.tt/2L0yDny
AI and the Far Right: A History We Can't Ignore - Medium
"right" - Google News
https://ift.tt/32Okh02
Bagikan Berita Ini
0 Response to "AI and the Far Right: A History We Can't Ignore - Medium"
Post a Comment