On AI, Progress and Vigilance Can Go Hand in Hand


AI dominated the discussion as political, business and civil society leaders gathered in Davos for the World Economic Forum this week – from the opportunities and risks AI creates, to what governments and tech companies can do to ensure it is developed responsibly and deployed in a way that benefits the most people. 

I attended the conference alongside world-leading AI scientist Yann LeCun and other Meta colleagues, and together we had the opportunity to set out some of the company’s thinking on these issues.

As a company that has been at the forefront of AI development for more than a decade, we believe that progress and vigilance can go hand in hand. We’re confident that AI technologies have the potential to bring huge benefits to societies – from boosting productivity to accelerating scientific research. And we believe that it is both possible and necessary for these technologies to be developed in a responsible, transparent and accountable way, with safeguards built into AI products to mitigate many of the potential risks, and collaboration between government and industry to establish standards and guardrails.

We’ve seen some of this progress firsthand as researchers have used AI tools that we’ve developed and made available to them. For example, Yale and EPFL’s Lab for Intelligent Global Health Technologies used our latest large language model, Llama 2, to build Meditron, the world’s best-performing open-source LLM tailored to the medical field, to help guide clinical decision-making. Meta also partnered with New York University on AI research to develop faster MRI scans. And we are partnering with Carnegie Mellon University on a project that is using AI to develop forms of renewable energy storage.

An Open Approach to AI Innovation

Among policymakers, one of the big debates around the development of AI in the past year has been whether it is better for companies to keep their AI models in-house or to make them available more openly. As strong advocates of tech companies taking a broadly open approach, it was encouraging to sense a clear shift in favor of openness among delegates in Davos this year.

As Mark Zuckerberg set out this week, our long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit. Meta has a long history of sharing AI technologies openly. Llama 2 is available for free to most people on our website, as well as through partnerships with Microsoft, Google Cloud, AWS and more. We’ve released technologies like PyTorch, the leading machine learning framework; our No Language Left Behind models, which can translate between 200 languages; and our Seamless suite of AI speech-to-speech translation models, which can translate your voice into 36 languages with around two seconds of latency.
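As a small illustration of what that openness means in practice, here is a minimal training loop in PyTorch, the open-source framework mentioned above (the toy model and synthetic data are my own choices for illustration, not anything from Meta). Every layer, optimizer and gradient computation it relies on is open source code that anyone can inspect, modify and build on:

```python
# A minimal PyTorch training loop on synthetic data. The model and data
# are toy stand-ins chosen for illustration; this is not Meta code.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 16)   # synthetic inputs
y = torch.randn(64, 1)    # synthetic targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass
    loss.backward()               # backpropagate gradients
    optimizer.step()              # update the weights
```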

While we recognize there are times when it’s appropriate for some proprietary models not to be released openly, broadly speaking, we believe openness is the best way to spread the benefits of these technologies. Giving businesses, startups and researchers access to state-of-the-art AI tools creates opportunities for everyone, not just a small handful of big tech companies. Of course, we believe it’s in Meta’s own interests too: openness leads to better products, faster innovation and a flourishing market, which benefits us as it does many others.

Open innovation isn’t something to be feared. The infrastructure of the internet runs on open source code, as do web browsers and many of the apps that billions of people use every day. The cybersecurity industry has been built on open source technology. An open approach creates safer products by ensuring models are continuously scrutinized and stress-tested for vulnerabilities by thousands of developers and researchers, who can identify and solve problems that would take teams holed up inside company silos much longer to find. And by seeing how others use these tools, in-house teams can learn from that use and address issues more quickly.

Ultimately, openness is the best antidote to the fears surrounding AI. It allows for collaboration, scrutiny and iteration in a way that is especially suited to nascent technologies. It provides accountability by enabling academics, researchers and journalists to evaluate AI models and challenge claims made by big companies, instead of having to take their word for it that they are doing the right thing.

Generative AI and Elections

One concern that we take extremely seriously at Meta is the potential for generative AI tools to be misused during the elections taking place across the world this year. We’ve been talking with experts about what advances in AI will mean as we approach this year’s elections, and we have policies in place that we enforce regardless of whether content is generated by AI or by people. We’ve set out our approach to this year’s elections in more detail.

While we aren’t waiting for formal industry standards to be established before taking steps of our own in areas like helping people understand when images are created with our AI features, we’re working with other companies through forums like the Partnership on AI to develop those standards.
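To give a feel for the kind of labeling that sentence describes, here is a toy sketch of attaching a provenance marker to a generated image so that other software can read it back. This is purely illustrative, with a made-up metadata key; it is not Meta’s actual mechanism, and real industry standards define richer, more tamper-resistant approaches:

```python
# Toy illustration of provenance labeling for AI-generated images.
# The "ai_generated" key is hypothetical; this is not Meta's implementation.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256))         # stand-in for a generated image
metadata = PngInfo()
metadata.add_text("ai_generated", "true")    # hypothetical provenance marker
image.save("labeled.png", pnginfo=metadata)

# Any other tool can now read the marker back:
print(Image.open("labeled.png").text.get("ai_generated"))  # -> "true"
```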

I had the opportunity to talk about how AI is helping Meta tackle hate speech online during a panel discussion at the World Economic Forum on Thursday.

Developing AI Responsibly

While today’s AI tools are capable of remarkable things, they don’t come close to the levels of superintelligence imagined by science fiction. They are pattern-recognition systems: vast databases with a gigantic autocomplete capacity, creating responses by stringing sentences together or by generating images and audio. It’s important to consider and prepare for the potential risks technologies could pose in the future, but we shouldn’t let that distract from the challenges that need addressing today.
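To make the “autocomplete” description above concrete, here is a minimal sketch of next-token prediction using the small, openly available GPT-2 model through the Hugging Face transformers library (my own illustrative choice, not a Meta model or production system). The model scores every candidate next token, the most likely one is appended, and the loop repeats:

```python
# Minimal next-token "autocomplete" loop. GPT-2 via Hugging Face
# transformers is used purely for illustration; this is not a Meta model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("AI tools are", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits      # a score for every vocabulary token
    next_id = logits[0, -1].argmax()    # greedily take the most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))         # the prompt plus ten predicted tokens
```

Production systems sample from the distribution rather than always taking the top token, but the one-token-at-a-time loop is the same underlying mechanism.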

Meta’s long-term experience developing AI models and tools helps us build safeguards into AI products from the beginning. We train and fine-tune our models to fit our safety and responsibility guidelines. And crucially, we ensure they are thoroughly stress-tested by conducting what is known as “red-teaming” with external experts and internal teams to identify vulnerabilities at the foundation layer and help mitigate them in a transparent way. For example, we submitted Llama 2 to the DEF CON conference, where it could be stress-tested by more than 2,500 hackers.

We also think it’s important to be transparent about the models and tools we release. That’s why, for example, we publish system and model cards giving details about how our systems work in a way that is accessible without deep technical knowledge, and why we shared a research paper alongside Llama 2 that outlines our approach to safety and privacy, red teaming efforts, and model evaluations against industry safety benchmarks. We’ve also released a Responsible Use Guide to help others innovate responsibly. And we recently announced Purple Llama, a new project designed to help developers and researchers build responsibly with generative AI models using open trust and safety tools and evaluations.

We also believe it’s vital to work collaboratively across industry, government, academia and civil society. For example, Meta is a founding member of Partnership on AI, and is participating in its Framework for Collective Action on Synthetic Media, an important step in ensuring guardrails are established around AI-generated content.

There is a big role for governments to play too. I’ve spent the last several months meeting with regulators and policymakers from the UK, EU, US, India, Japan and elsewhere. It’s encouraging that so many countries are considering their own frameworks for ensuring AI is developed and deployed responsibly – for example, the White House’s voluntary commitments that we signed up to last year – but it is vital that governments, especially democracies, work together to set common AI standards and governance models.  

There are big opportunities ahead, and considerable challenges to be overcome, but what was most encouraging in Davos is that leaders from across government, business and civil society are actively engaged in these issues. The debates around AI are significantly more advanced and sophisticated than they were even just a few months ago – and that’s a good thing for everyone.
