The perils of open-source AI


During the height of the Covid-19 pandemic, some well-meaning American officials floated a novel idea: why not put the details of the known zoonotic viral threats online to enable scientists around the world to predict what variants might emerge next? And, hopefully, find antidotes.

Theoretically, it sounded attractive. Covid had shown the cost of ignoring pandemics. It also revealed the astonishing breakthroughs that can occur when governments finally throw resources into finding vaccines, at speed.

The lockdowns revealed something else: whereas scientists used to find it hard to brainstorm if they were physically separated or working at different institutions, during lockdowns they learnt to organise meetings on video calls that enabled cross-border and cross-boundary innovation.

So part of the rationale for the Deep Vzn initiative, an offshoot of the Global Virome Project, was that an open-source virus platform could spark global brainstorming, particularly in emerging markets that have often been locked out of such debates.

So far, so inspiring. But when the US Agency for International Development (USAID) floated the Deep Vzn idea, some scientists spotted a problem: releasing virus details online could allow bad actors to replicate deadly diseases and make them worse. “It’s natural to want to understand threats. But… we don’t research new and easier paths to [creating] nukes; pandemics are no different,” tweeted Kevin Esvelt, a biotech expert at MIT who helped pioneer Crispr-based genome engineering. “Even if identifying pandemic viruses in advance could let us prevent all natural pandemics, doing so would unavoidably give tens of thousands of individuals… the power to ignite more pandemics,” he added.

After a chorus of complaints, USAID mothballed the open-source aspect of Deep Vzn. “We take safety incredibly seriously… and in this case, in consultation with our colleagues across the Administration and with Congress, we embarked on a comprehensive review process,” a spokesman said, noting that: “This field research did not proceed.”

Two years later, this may seem a mere historical footnote. Not so. Some observers fear the return of “predictive research”, and alarm was recently sounded in Congress about the risks. More widely, Deep Vzn offers some salutary lessons for artificial intelligence as the debate around it intensifies. For one, it shows why we need more scientists involved in politics and policymaking – and for them to work with non-scientists.

This sounds obvious. But one shocking detail about America’s Congress is that only a tiny number of its members have any training in science or engineering, in sharp contrast to countries such as Germany or China. What’s worse is that some have become increasingly hostile to science in recent years. The former president Donald Trump is a case in point.

In 2016, a campaign body called 314 Action was created to support scientists who want to run for public office. It has already had some success, leading its website to claim, “In 2018, we played a pivotal role in flipping the United States House of Representatives by electing nine first-time science candidates.” It will also be supporting pro-science candidates in next year’s race. But there is still a long way to go and, given how rapidly technology like AI is developing, that is cause for alarm.

The second lesson is that policymakers need to handle the idea of transparency carefully – not just with pathogens, but AI too. Until now, some western AI experts have chosen to publish their cutting-edge research on open-source platforms to advance the cause of science and win accolades. But just as biotech experts realised that publishing pathogen details could be risky, so experts are waking up to the threat posed by AI tools if they fall into malevolent hands.

The dilemma is that keeping AI research proprietary also raises big societal problems. The institutions with the resources needed for AI research in the west are mostly Big Tech companies. But few voters want to leave them in sole control of AI research or decisions about when to publish it.

That leads to a third key lesson: concerned citizens should speak up. That is daunting, given the power of technology companies and governments. But Rob Reid, a tech investor and podcaster who helped spark protests about Deep Vzn, points out that the campaign was primarily driven by “just a bunch of concerned [American] outsiders with busy lives”, who felt compelled and empowered to ring alarm bells. “This [protest] could never have happened in an authoritarian country,” he adds. Indeed. And it shows that even though tech is advancing at terrifying speed, we need not succumb to helplessness or passive ignorance.

Follow Gillian on Twitter @gilliantett and email her at [email protected]

Follow @FTMag on Twitter to find out about our latest stories first
