The field of artificial intelligence (AI) has allowed computers to learn to synthesize chemical compounds, fold proteins, and write code. However, these AI algorithms cannot explain the thought processes behind their decisions. Explainable AI is important not only for trusting it with delicate tasks such as surgery and disaster relief, but also for helping us obtain new insights and discoveries. In this talk, I will present DeepCubeA, an AI algorithm that can solve the Rubik’s cube, as well as six other puzzles, without human guidance. Next, I will discuss how we are building on this work to create AI algorithms that can both solve puzzles and explain their solutions in a manner that we can understand. Finally, I will discuss how this work relates to problems in the natural sciences.
Bio: Forest Agostinelli is an assistant professor at the University of South Carolina. He received his B.S. from the Ohio State University, his M.S. from the University of Michigan, and his Ph.D. from the University of California, Irvine under Professor Pierre Baldi. His group conducts research in the fields of deep learning, reinforcement learning, search, explainability, bioinformatics, and neuroscience. His homepage is located at https://cse.sc.edu/~foresta/.