If you’ve been thinking about learning deep learning, here’s a new software library that promises to make things easy.
Fast.ai, a startup co-founded by Rachel Thomas and Jeremy Howard, a professor and a research scientist, respectively, at the University of San Francisco, has released a free, open-source framework that works on top of PyTorch.
Known as fastai (without a dot), it’s aimed at budding coders who have some experience with Python. It includes some of the most popular algorithms for image classification and natural language tasks, so that models can be built and run in just a few lines of code.
“Behind the scenes, we’re following all of Nvidia’s recommendations for mixed precision training. No other library that we know of provides such an easy way to leverage Nvidia’s latest technology, which gives two to three times better performance compared to previous approaches,” the startup announced on Tuesday.
Fast.ai is known for its free introductory deep learning courses, which have been completed by people who don’t necessarily have technical backgrounds.
“Our goal is to get to a point where we don't need to teach courses any more - where we've made deep learning so easy to use, that anyone can use it (and get world-class results) without needing to take a course,” Howard told The Register.
"The only way to get there is for us to write software that makes deep learning easier. We haven't yet gotten to the point where it's usable without any course - but it's certainly dramatically easier to get world-class results than it was before."
The courses were originally taught using Keras, an API that sits on top of TensorFlow, Google’s widely used AI framework. Fast.ai quickly switched to PyTorch, a rival framework developed by Facebook, since it was easier to learn than TensorFlow.
The new library, fastai, is designed to be an API a bit like Keras but for PyTorch. “The development of Pytorch made it possible for programmers to more quickly and easily write sophisticated models. We wanted to provide something that made that power more accessible - just like Keras made the power of Tensorflow more accessible,” he said.
It currently runs on top of PyTorch, and support is expected to expand to Amazon Web Services’ (AWS) SageMaker platform and Microsoft Azure. It doesn’t yet support Google’s TPU chips or TensorFlow.
Should you want to deploy models written in fastai, Howard recommends using standard Python web-serving architectures, such as a Flask web application running on AWS. Very large models may need to be exported to ONNX so they can be transferred to Caffe2 first.
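A minimal sketch of that Flask serving pattern might look like the following. The `predict` helper, endpoint name, and payload shape here are invented for illustration; a real app would load a trained model once at startup and call it inside `predict`:

```python
# Minimal Flask serving sketch; predict() is a placeholder standing in
# for real fastai/PyTorch inference on a model loaded at startup.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(payload):
    # Stand-in for model inference; a real app would run the loaded model here
    return {"label": "example", "confidence": 0.0}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    # Parse the JSON request body and return the prediction as JSON
    payload = request.get_json(force=True)
    return jsonify(predict(payload))
```

In production this would typically sit behind a WSGI server such as gunicorn rather than Flask’s built-in development server.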
“Many people think they need to spend years studying advanced math first [to learn AI], but that's just not true - many top practitioners today studied at course.fast.ai without anything beyond high school or basic undergrad math. Pick a fun and interesting project to work on as you learn.” ®