EA - Announcing Encultured AI: Building a Video Game by Andrew Critch

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Encultured AI: Building a Video Game, published by Andrew Critch on August 18, 2022 on The Effective Altruism Forum. Also available on LessWrong.

Preceded by: Encultured AI Pre-planning, Part 2: Providing a Service

If you've read to the end of our last post, you may have guessed: we're building a video game! This is gonna be fun :)

Our homepage: /

Will Encultured save the world?

Is this business plan too good to be true? Can you actually save the world by making a video game? Well, no. Encultured on its own will not be enough to make the whole world safe and happy forever, and we'd prefer not to be judged by that criterion. The amount of control over the world that's needed to fully pivot humanity from an unsafe path onto a safe one is, simply put, more control than we're aiming to have. And that's pretty core to our culture. From our homepage:

Still, we don't believe our company or products alone will make the difference between a positive future for humanity and a negative one, and we're not aiming to have that kind of power over the world. Rather, we're aiming to take part in a global ecosystem of companies using AI to benefit humanity, by making our products, services, and scientific platform available to other institutions and researchers.

Our goal is to play a part in what will be, or could be, a prosperous civilization. And for us, that means building a successful video game that we can use in valuable ways to help the world in the future!

Fun is a pretty good target for us to optimize

You might ask: how are we going to optimize for making a fun game and helping the world at the same time?
The short answer is that creating a game world in which lots of people are having fun in diverse and interesting ways in fact creates an amazing sandbox for play-testing AI alignment & cooperation. If an experimental new AI enters the game and ruins the fun for everyone (either by overtly wrecking in-game assets, subtly affecting the game culture in ways people don't like, or both), then we're in a good position to say that it probably shouldn't be deployed autonomously in the real world, either.

In the long run, if we're as successful as we hope as a game company, we can start posing safety challenges to top AI labs of the form "Tell your AI to play this game in a way that humans end up endorsing." Thus, we think the market incentive to grow our user base in ways they find fun is going to be highly aligned with our long-term goals. Along the way, we want our platform to enable humanity to learn as many valuable lessons as possible about human↔AI interaction, in a low-stakes game environment, before having to learn those lessons the hard way in the real world.

Principles to exemplify

In preparation for growing as a game company, we've put a lot of thought into how to ensure our game has a positive rather than negative impact on the world, accounting for its scientific impact, its memetic impact, and the intrinsic moral value of the game as a positive experience for people. Below are some guiding principles we're planning to follow, not just for ourselves, but also to set an example for other game companies:

Pursue: fun! We're putting a lot of thought into not only how our game can be fun, but also how to ensure that the process of working at Encultured and building the game is itself fun and enjoyable. We think fun and playfulness are key to generating the outcomes we want, including low-stakes, high-information settings for interacting with AI systems.

Maintain: opportunities to experiment.
No matter how our product develops, we're committed to maintaining its value as a platform for experiments, especially experiments that help humanity navigate the present and future development of AI technology.

Avoid: teaching bad lessons. On the margin, we expect our game to in...
