Living in the Future

Bill Gates once said people “overestimate what they can do in one year and underestimate what they can do in 10 years.” Individual breakthroughs tend to accumulate in a non-linear way until suddenly, the future comes into focus. As the old saying goes, it happens two ways: gradually and then suddenly. And as we close out a wild 12 months of technological progress, I think it’s fair to say that 2023 has been a “suddenly” kind of year.   

Last month we celebrated the 10th anniversary of the founding of FAIR, our Fundamental AI Research lab. When we launched it in 2013, there was tremendous excitement across the industry about the role AI would play in the future, and early machine learning applications were already central to how Facebook worked. Few could have imagined back then just how impressive the progress would be. In fact, even just two years ago many might have questioned it.

As we look ahead to 2024, another big milestone is coming up: it will be 10 years since Meta began working on the computing platform of the future at Reality Labs. These two emerging technologies — AI and the metaverse — represent Meta’s biggest long-term bets on the future. And in 2023 we began to see these two technological pathways intersect in the form of products accessible to huge numbers of people.

AI

One of the highlights of the year was seeing the way Llama and Llama 2 were embraced by the developer community, with more than 100 million downloads and constant improvements contributed by organizations around the world as they iterate. In India, Jio quickly fine-tuned Llama 2 to build a new tool for serving its more than half a billion customers. And Hugging Face’s Open LLM Leaderboard has filled up with impressive projects built on Llama 2 that are leading the way. These are just a handful of the more than 13,000 Llama variants hosted there.
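
For readers who want a concrete picture of that kind of iteration, here is a minimal sketch of loading a released Llama 2 checkpoint from the Hugging Face Hub and generating text with it. It assumes the transformers and torch packages are installed and that access to the gated meta-llama repository has been granted; the model ID and prompt are illustrative placeholders, not Jio’s or any other organization’s actual pipeline.

    # Illustrative only: load a released Llama 2 checkpoint from the Hugging Face
    # Hub and run a single generation. Requires the transformers and torch packages
    # and approved access to the gated meta-llama repository.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # one of the publicly released checkpoints
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Explain in one sentence why openly available model weights help developers iterate."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Fine-tuning a variant for a specific market or language follows the same pattern, with a training loop over domain data in place of the single generation step shown here.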

Like any new technology, AI will have the most impact when it’s available to everyone. It wasn’t that long ago that being able to generate beautiful images using text prompts was effectively inaccessible to most people. But today we’re adding tools like collaborative image generation, conversational assistants, writing helpers, and smart image editors into products already used by billions of people around the world. 

Animation showing Instagram image background editor

A Platform Shift

The shift we have seen over the last year suggests there is a path to AI becoming a primary way that people interact with machines. The stage is set for new kinds of devices that can perceive, understand, and interact with the world around us in ways that have never been possible before. 

Our AI-powered Ray-Ban Meta glasses show one such path. Our new Meta AI assistant combines vision and language understanding to see the world from your perspective and work with you to make sense of it. And we’re testing new multimodal AI capabilities on the glasses. With these enabled, they can translate a foreign language you’re trying to read, or come up with a funny caption for a photo you’ve taken. And they can do it all hands-free, without you needing to pull out a phone or operate an app.

We believe one of the most powerful manifestations of cutting-edge AI will be assistants that can understand the world around you and help you throughout your day, eventually without needing to be prompted. Glasses are the ideal form factor for this — they can see and hear the world from your point of view, they’re already socially acceptable, they’re wearable all day, and they let you stay fully present in the moment.

At Reality Labs, we’ve invested years of research into the technologies needed to advance this — things like ultra-low-power, always-on sensors and machine perception systems capable of understanding your context. We’re not just pioneering a new kind of device here — we’ll be pushing it forward for years to come.

Animation showing mixed reality art

Mixed reality and spatial computing represent another path forward. These aren’t simply incremental improvements on the personal computing paradigm that has dominated for the last 50 years. They represent a fundamental shift that’s just beginning to come into focus. 

Making these new technologies available to as many people as possible has been a top priority for Reality Labs for many years now, so releasing the first mass-market mixed reality headset this September was another 2023 highlight for us. 

Within months of the Meta Quest 3 launch, seven of its top 20 apps are mixed reality apps. We’re seeing strong signals that people really value these experiences — there are now more than 220 Quest 3 apps where the vast majority of people are using MR features. Seeing what happens when lots of people get their hands on a new technology like this has been delightful:

Animation showing mixed reality examples

We’ll see this progress accelerate in 2024 as more people access mixed reality and developers learn to harness its power. Whether it’s immersive NBA viewing on Xtadium or a totally new approach to learning music on Pianovision, we’re already seeing MR deliver experiences that would be impossible on any other kind of device.

The Long View

Making long-term bets on emerging technologies isn’t easy. It’s not guaranteed to work, and it’s certainly not cheap. It’s also one of the most valuable things a technology company can do — and the only way to remain relevant over the long run. Seeing Meta’s two biggest long-term technological bets both mature and intersect this year has been an extremely powerful reminder of the importance of maintaining a healthy investment in future technologies. And it has given us an even clearer view of the innovation we need to deliver over the coming decade. 

In AI, this means full steam ahead on what’s next: what comes after today’s generation of LLMs and generative AI? Most researchers agree that there’s still plenty of opportunity to build bigger and better language, image, and video models with the technologies we have today. But there are still fundamental breakthroughs and entirely new architectures to be discovered, and our AI research teams at Meta are on track to discover them.  

This means ongoing research into areas like embodied AI, which aims to build models that experience the world the way humans do. The path toward human-level AI, our researchers believe, will require systems that have a deeper understanding of how the world works, and our teams are already making progress on this, with years of work still to come. 

And at Reality Labs, our researchers are pushing ahead on some of the most promising technologies that will make the next computing platform possible. Over the years this research has led to breakthroughs like the pancake lenses on Quest Pro and Quest 3 and the amazing Codec Avatars prototype that Mark Zuckerberg and Lex Fridman tried out this year. That’s just the tip of the iceberg, and Reality Labs’ research breakthroughs will enable us to release a string of industry-first products over the coming years. 

But of all the things I’ve mentioned here, the most valuable technologies are the ones that are in people’s hands today. The progress made in 2023 means generative AI is making its way into the heart of the world’s most popular apps, mixed reality is now at the core of a mass-market headset, and smart glasses will let AI see the world from our perspective for the first time. This is an extremely exciting time to be building the future. More importantly, it’s a great time to be living in it.  
