by Gitter


Building Online Communities: Keras.io

François Chollet is an AI and deep learning researcher at Google. He’s the author of Keras — a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. With the goal of becoming everyone’s go-to interface for deep learning, Keras has quickly grown to over 50,000 users. François told us about the open source community around the project, and shared his thoughts on the future of deep learning and AI. If you’re interested in deep learning, be sure to stop by the Keras community channel on Gitter.

Hi François! Tell us about the Keras.io project. What is Keras, and what’s the story behind it?

Keras is a deep learning framework for Python. It started in March 2015 as a small side-project of mine, but quickly exploded in popularity. Today Keras has over 50,000 users, including many startups, hundreds of researchers, and even large companies and research organizations such as CERN, Netflix, Yelp, Square, and OpenAI.

I think Keras has done a great deal to make deep learning easier to get started with, and more accessible, which is a topic I care about a lot.

What common goals do you have as a community?

On a basic level, the Keras community is about maintaining the Keras codebase and applying deep learning techniques to interesting and useful problems, sharing knowledge and code in the process.

But at a higher level, our cause as a community is making artificial intelligence and deep learning more accessible. That’s why Keras exists in the first place, and that’s why many people in the Keras community are working on the popularization of deep learning via blog posts, tutorials, etc.

We’re trying to put deep learning into the hands of as many people as possible, because we think it will have a huge impact on our collective future.

What are the expectations for deep learning / neural networks, and what is, in your opinion, the future of these fields?

If you look at the history of artificial intelligence, we’ve been trying to solve problems such as image recognition or speech recognition for decades, and until very recently we did not have any real solutions. They were considered to be tremendously hard problems. But in the span of just a few years, deep learning has changed all of that. It has been a true revolution.

We now have excellent computer vision systems, speech recognition systems, and more. All supervised perception problems are now considered to be essentially solved. It’s hard to overstate how much of a game-changer deep learning has been in AI.

Because of these initial successes, the expectations for what’s next are tremendous. But deep learning is not some universal solution; it has limits.

Many people, often people who are not directly involved with deep learning research, are not aware of these limits and believe that our current techniques will just keep scaling up and will go on to “solve AI”. After all, if you can train a deep neural network to drive a car, how much harder can it be to train it to do everything else humans can do? Well, pretty hard as it turns out. There is far more to human intelligence than what our current data-intensive supervised learning techniques can achieve.

So I see deep learning as being simultaneously revolutionary and somewhat overhyped. But overall, I am optimistic about the future of the field. After a breakthrough, scientific progress tends to follow a sigmoid curve: fast progress at first, as the field draws more attention and investment, which progressively slows down as the low-hanging fruit gets picked and ever greater effort becomes necessary to make a dent.

With deep learning in 2016, it seems that we are still in the first half of that sigmoid. We’re making progress faster and faster every year. That will not last indefinitely, but by the time things slow down, my hope is that we will understand a lot more about the nature of intelligence.

What issues related to the project are you personally most excited about these days?

The next chapter for Keras will be tighter integration with the TensorFlow framework, more support from Google, and adoption by a majority of the companies and researchers out there. Keras will become everyone’s go-to interface for deep learning. I’m excited about making this happen.

What are the most important factors that you have taken into account while creating and maintaining the community? What factors contribute to the success of your community?

A few things I’ve learned:

  • Make sure your users and community members have adequate places to go for support, questions, etc. A mailing list doesn’t cut it; live chat is always better. Gitter is great for that purpose.
  • On the product / software side, focus on user experience. If your product has a great user experience, people will come. The community will grow. Everything else will follow.
  • Get to know your users. Meet them in person. Organize meetups. Observe people trying out your product / framework for the first time. Learn from them.

What are the main issues discussed in the Keras.io channel on Gitter?

We use Gitter mostly to handle support questions, but also for development discussion, general discussion around deep learning news, or anything else that’s on our mind.

What advice would you give to someone who wants to get into the field of deep learning and neural networks?

The best way to get into deep learning at this point is to experiment by yourself, using existing code examples and tutorials as a starting point. Keras has the simplest interface out there, so it’s perfect to get started. It also has dozens of well-curated, easy code examples that you can learn from.
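To give a flavor of that simplicity, here is a minimal getting-started sketch, not taken from the interview or from the official Keras examples: it assumes a recent standalone keras install with a TensorFlow backend and uses the built-in MNIST toy dataset to train a small fully connected classifier.

```python
# Minimal Keras sketch: train a small dense network on MNIST.
# Assumes: `pip install keras tensorflow` (recent versions).
from keras.models import Sequential
from keras.layers import Dense
from keras.datasets import mnist
from keras.utils import to_categorical

# Load MNIST, flatten the 28x28 images, and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# A two-layer network, defined in a handful of lines.
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))

# Compile, train, and evaluate.
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=5, validation_split=0.1)
loss, accuracy = model.evaluate(x_test, y_test)
print('Test accuracy:', accuracy)
```

Swapping in a different architecture or dataset mostly means changing the `model.add(...)` lines, which is the kind of experimentation the tutorials below walk through.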

Here are a few tutorials that teach deep learning and Keras simultaneously, pretty much from scratch, and that are easy to follow (as long as you have some Python background, which is a prerequisite in any case):

Thank you!