We believe that artificial intelligence has the potential to be a transformative technology in the 21st century. It is therefore an urgent priority to develop these systems so that they are aligned with human preferences and behave in expected ways.
As neural networks become larger and more complex, studying these systems becomes a far more computationally intensive endeavor. Our mission is to bridge the gap between academics and high-performance computing, getting researchers the resources they need to address these challenging problems.
We're partnering with CHAI and the Steinhardt lab at UC Berkeley to build a GPU cluster over the next year, designed to handle a wide variety of machine learning jobs, from fine-tuning large language models such as GPT-3 to CPU-intensive reinforcement learning workloads. We're also building a smaller-scale cluster available to independent AI safety researchers. We've worked with researchers at UT Austin, MILA, UC Berkeley, Redwood Research, and more to provide infrastructure support.