OpenAI Scholars Class of ’18: Final Projects

Our first cohort of OpenAI Scholars has now completed the program. Over the past three months, we’ve seen how quickly experienced software developers can become machine learning practitioners. All eight Scholars produced an exciting final project and are going on to work or teach within…

The International 2018: Results

OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20-35 minutes of both games. In contrast to our Benchmark 17 days ago, these games: Were played against significantly…

OpenAI Five Benchmark: Results

Yesterday, OpenAI Five won a best-of-three against a team of 99.95th percentile Dota players: Blitz, Cap, Fogged, Merlini, and MoonMeander — four of whom have played Dota professionally — in front of a live audience and 100,000 concurrent livestream viewers. The human team won game…

Learning Dexterity

We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity. Our system, called Dactyl, is trained entirely in simulation and transfers its knowledge to reality, adapting to real-world physics using techniques we’ve been working on for the past year. Dactyl…

OpenAI Scholars Class of ’18

Our first class of OpenAI Scholars is underway, and you can now follow along as these experienced software developers become machine learning practitioners. We had over 700 applicants for the 8 OpenAI Scholars slots and reviewed each application on a standardized list of…

OpenAI Five Benchmark

We’ve removed the most significant restrictions on OpenAI Five’s gameplay (namely wards, Roshan, and the mirror match of fixed heroes) and will soon benchmark our progress by playing 99.95th-percentile Dota…

Glow: Better Reversible Generative Models

We introduce Glow, a reversible generative model which uses invertible 1×1 convolutions. It extends previous work on reversible generative models and simplifies the architecture. Our model can generate realistic high resolution images, supports efficient sampling, and discovers features that can be used to manipulate attributes…
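The invertible 1×1 convolution at the heart of Glow can be illustrated with a minimal NumPy sketch (names and shapes here are illustrative, not Glow's actual implementation): each spatial position's channel vector is multiplied by a shared square matrix `W`, the inverse pass multiplies by `W⁻¹`, and the Jacobian log-determinant needed for the flow's likelihood is `h·w·log|det W|`.

```python
import numpy as np

rng = np.random.default_rng(0)

c = 4  # number of channels (illustrative)
# Initialize W as a random orthogonal matrix so it is guaranteed invertible.
W, _ = np.linalg.qr(rng.normal(size=(c, c)))

def forward(x):
    """1x1 convolution: mix channels at every spatial position.

    x has shape (height, width, channels). Returns the transformed tensor
    and this layer's log-determinant contribution to the log-likelihood.
    """
    h, w, _ = x.shape
    y = x @ W
    log_det = h * w * np.log(abs(np.linalg.det(W)))
    return y, log_det

def inverse(y):
    """Exact inverse: multiply by W^{-1}."""
    return y @ np.linalg.inv(W)

x = rng.normal(size=(8, 8, c))
y, log_det = forward(x)
x_rec = inverse(y)
assert np.allclose(x, x_rec)  # the transform is exactly reversible
```

Because `W` starts orthogonal, `|det W| = 1` and the initial log-determinant is zero; during training the paper learns `W` (optionally via an LU decomposition to make the determinant cheap).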

Learning Montezuma’s Revenge from a Single Demonstration

We’ve trained an agent to achieve a high score of 74,500 on Montezuma’s Revenge from a single human demonstration, better than any previously published result. Our algorithm is simple: the agent plays a sequence of games starting from carefully chosen states from the demonstration, and…
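The "carefully chosen states" idea amounts to a backward curriculum over the demonstration: start episodes near the end of the demo, and move the starting point earlier once the agent reliably matches the demonstrator's score from there. A minimal sketch, with hypothetical helper names (`demo_states`, `can_solve_from`) standing in for environment snapshots and a full RL training/evaluation loop:

```python
import random

def demo_curriculum(demo_states, can_solve_from, batch=10, threshold=0.8,
                    max_rounds=1000):
    """Backward start-state curriculum over a single demonstration.

    demo_states: environment snapshots recorded along the demo, ordered
        from the start of the game to the end.
    can_solve_from: callable(state) -> bool, True when the current agent
        reaches the demo's score from that state (stands in for a full
        train-and-evaluate step).
    """
    start = len(demo_states) - 1  # begin near the end of the demonstration
    for _ in range(max_rounds):
        if start == 0:
            break  # agent can now play from the true initial state
        # Sample episodes from states at or just before the current start.
        successes = sum(
            can_solve_from(demo_states[random.randint(max(0, start - 3), start)])
            for _ in range(batch)
        )
        # Once this segment is mastered, move the start one step earlier.
        if successes / batch >= threshold:
            start -= 1
    return start
```

The agent only ever has to learn a short stretch of new behavior at a time, which is what makes a single demonstration sufficient for such a hard-exploration game.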

Retro Contest: Results

The first run of our Retro Contest — exploring the development of algorithms that can generalize from previous experience — is now complete. Though many approaches were tried, top results all came from tuning or extending existing algorithms such as PPO and Rainbow. There’s a…