AI at Meta

Research Services

Menlo Park, California 839,582 followers

Together with the AI community, we’re pushing boundaries through open science to create a more connected world.

About us

Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas of AI, and to enable the community to build safe and responsible solutions to some of the world’s greatest challenges.

Website
https://ai.meta.com/
Industry
Research Services
Company size
10,001+ employees
Headquarters
Menlo Park, California
Specialties
research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing

Updates

  • 📣 Today we're opening a call for applications for Llama 3.1 Impact Grants! Until November 22, teams can submit proposals for using Llama to address social challenges across their communities for a chance to be awarded a $500K grant.

    Details + application ➡️ https://go.fb.me/rd22jf

    This year we're expanding the Llama Impact Grants program by hosting a series of virtual events and in-person hackathons, workshops and trainings around the world, and by providing technical guidance and mentorship to prospective applicants. These programs will support organizations in Egypt, Hong Kong, India, Indonesia, Japan, the Kingdom of Saudi Arabia, Korea, Latin America, North America, Pakistan, Singapore, Sub-Saharan Africa, Taiwan, Thailand, Turkey, the United Arab Emirates and Vietnam. We’re inspired by the diverse projects we’ve seen developers undertake around the world to positively impact their communities by building with Llama, and we're excited to support a new wave of global community impact with the Llama 3.1 Impact Grants.

  • 📣 New and updated! Try experimental demos featuring the latest AI research from Meta FAIR!

    • Segment Anything 2: Create video cutouts and other fun visual effects with a few clicks.
    • Seamless Translation: Hear what you sound like in another language.
    • Animated Drawings: Bring hand-drawn sketches to life with animations.
    • Audiobox: Create an audio story with AI-generated voices and sounds.

    Try the research demos ➡️ https://go.fb.me/brn8mg

  • The MLCommons #AlgoPerf competition was designed to find better training algorithms that speed up neural network training across a diverse set of workloads. Results of the inaugural competition were released today, and we’re proud to share that teams from Meta took first place in both the external tuning and self-tuning tracks!

    🔗 Details
    • Results from MLCommons ➡️ https://go.fb.me/poejsh
    • Schedule-Free ➡️ https://go.fb.me/5wf35d
    • Distributed Shampoo research paper ➡️ https://go.fb.me/tns64m
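    For context, the Schedule-Free approach linked above removes the hand-tuned learning-rate schedule by averaging iterates inside the optimizer itself. A minimal sketch of how it is typically used, assuming the open-source `schedulefree` package and a placeholder model and dataset:

    ```python
    # Minimal sketch of Schedule-Free training in PyTorch, assuming the
    # open-source `schedulefree` package (pip install schedulefree).
    # The model and data below are placeholders, not from the post.
    import torch
    import schedulefree

    model = torch.nn.Linear(10, 2)
    optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

    # Schedule-Free keeps separate parameter states for training and
    # evaluation, so the optimizer is switched along with the model.
    model.train()
    optimizer.train()
    for _ in range(100):
        x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()
    optimizer.eval()  # switch to the averaged weights for evaluation
    ```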

  • 📣 Just announced by Mark Zuckerberg at SIGGRAPH! Introducing Meta Segment Anything Model 2 (SAM 2): the first unified model for real-time, promptable object segmentation in images & videos. In addition to the new model, we’re also releasing SA-V, a dataset that’s 4.5x larger and has ~53x more annotations than the largest existing video segmentation dataset, to enable new research in computer vision.

    Details ➡️ https://go.fb.me/edcjv9
    Demo ➡️ https://go.fb.me/fq8oq2
    SA-V Dataset ➡️ https://go.fb.me/rgi4j0

    SAM 2 is available today under Apache 2.0 so that anyone can use it to build their own experiences. Like the original SAM, SAM 2 can be applied out of the box to a diverse range of real-world use cases, and we’re excited to see what developers build.
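    "Promptable" here means the model segments whatever a click, box or mask prompt points at. A rough sketch of point-prompted image segmentation with the SAM 2 codebase; the config and checkpoint paths are assumptions and may differ by release:

    ```python
    # Rough sketch of point-prompted segmentation with SAM 2. The config
    # and checkpoint paths below are assumptions and vary by release.
    import numpy as np
    import torch
    from PIL import Image
    from sam2.build_sam import build_sam2
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    predictor = SAM2ImagePredictor(
        build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt")
    )

    image = np.array(Image.open("photo.jpg").convert("RGB"))
    with torch.inference_mode():
        predictor.set_image(image)
        # A single foreground click (label 1) at pixel (x=500, y=300)
        # prompts the model to segment the object under that point.
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[500, 300]]),
            point_labels=np.array([1]),
        )
    # Several candidate masks come back with confidence scores;
    # keep the highest-scoring one.
    best_mask = masks[scores.argmax()]
    ```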

  • As part of our release of Llama 3.1 and our continued support of open science, this week we published the full Llama 3 research paper, which covers a range of topics including insights on model training and architecture, and the results of our current work to integrate image, video and speech capabilities via a compositional approach.

    The Llama 3 Herd of Models paper ➡️ https://go.fb.me/1nmc78

    We hope that sharing this research will help the larger research community understand the key factors of foundation-model development and contribute to a more informed public discussion about the future of foundation models.

  • Starting today, open source is leading the way. Introducing Llama 3.1: our most capable models yet. Today we’re releasing a collection of new models, including our long-awaited 405B. Llama 3.1 delivers stronger reasoning, a larger 128K context window and improved support for 8 languages including English, among other improvements.

    Details in the full announcement ➡️ https://go.fb.me/hvuqhb
    Download the models ➡️ https://go.fb.me/11ffl7

    We evaluated performance on 150+ benchmark datasets across a range of languages, in addition to extensive human evaluations in real-world scenarios. Trained on >16K NVIDIA H100 GPUs, Llama 3.1 405B is the industry-leading open source foundation model and delivers state-of-the-art capabilities that rival the best closed source models in general knowledge, steerability, math, tool use and multilingual translation.

    We’ve also updated our license to allow developers to use the outputs from Llama models, including the 405B, to improve other models for the first time. We’re excited about how synthetic data generation and model distillation workflows with Llama will help to advance the state of AI.

    As Mark Zuckerberg shared this morning, we have a strong belief that open source will ensure that more people around the world have access to the benefits and opportunities of AI, and that’s why we continue to take steps on the path for open source AI to become the industry standard. With these releases we’re setting the stage for unprecedented new opportunities, and we can’t wait to see the innovation our newest Llama models will unlock across all levels of the AI community.
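    As a concrete way to try the release, here is a minimal sketch of chatting with one of the smaller instruction-tuned models via Hugging Face Transformers; the model ID, prompt and generation settings are illustrative choices, and access to the gated meta-llama weights must be requested first:

    ```python
    # Minimal sketch of local inference with a Llama 3.1 model via
    # Hugging Face Transformers. Assumes access to the gated meta-llama
    # weights has been granted; the prompt and settings are placeholders.
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # 70B and 405B also exist
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    messages = [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a 128K context window allows."},
    ]
    output = generator(messages, max_new_tokens=128)
    # The pipeline returns the conversation with the assistant's reply appended.
    print(output[0]["generated_text"][-1]["content"])
    ```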
