4/29/2023
Collapse and rewind

I've been working on a new series of posts about the most important century. The original series focused on why and how this could be the most important century for humanity, but it had relatively little to say about what we can do today to improve the odds of things going well. The new series will get much more specific about the kinds of events that might lie ahead of us, and what actions today look most likely to be helpful.

A key focus of the new series will be the threat of misaligned AI: AI systems disempowering humans entirely, leading to a future that has little to do with anything humans value. (Like in the Terminator movies, minus the time travel and the part where humans win.)

Many people have trouble taking this "misaligned AI" possibility seriously. They might see the broad point that AI could be dangerous, but they instinctively imagine that the danger comes from ways humans might misuse it. They find the idea of AI itself going to war with humans to be comical and wild. As a first step, this post will emphasize an unoriginal but extremely important point: the kind of AI I've discussed could defeat all of humanity combined, if (for whatever reason) it were pointed toward that goal. I'm going to try to make this idea feel more serious and real.

By "defeat," I don't mean "subtly manipulate us" or "make us less informed" or something like that. I mean a literal "defeat" in the sense that we could all be killed, enslaved, or forcibly contained. I'm not talking (yet) about whether, or why, AIs might attack human civilization. For now, I just want to linger on the point that if such an attack happened, it could succeed against the combined forces of the entire world. I think that if you believe this, you should already be worried about misaligned AI,1 before any analysis of how or why an AI might form its own goals. We generally don't have a lot of things sitting around that could end human civilization if they "tried."

By contrast, if you don't believe that AI could defeat all of humanity combined, I expect that we're going to be miscommunicating in pretty much any conversation about AI. The kind of AI I worry about is the kind powerful enough that total civilizational defeat is a real possibility. If we're going to create one, I think we should be asking not "Why would this be dangerous?" but "Why wouldn't it be?" The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today, which I did for much of my career and still think is one of the best things to work on) is that I think the stakes are just that high.

I'll sketch the basic argument for why I think AI could defeat all of human civilization. Others have written about the possibility that "superintelligent" AI could manipulate humans and create overpowering advanced technologies; I'll briefly recap that case. I'll then cover a different possibility, which is that even "merely human-level" AI could still defeat us all, by quickly coming to rival human civilization in terms of total population and resources.

Audio version available on Apple Podcasts, Spotify, Stitcher, etc.