Thursday, May 9, 2019

Python 3.6 install for AutoKeras

Another case of my putting install notes on my blog. I hope this helps someone else. 

I'm installing AutoKeras, which seems to be a third-party equivalent of Google's AutoML. AutoKeras needs Python 3.6, so I had to track down how to install a specific version of Python using Anaconda.

When you create a new environment, conda installs the same Python version you used when you downloaded and installed Anaconda. If you want to use a different version of Python, for example Python 3.5, simply create a new environment and specify the version of Python that you want.
  1. Create a new environment named "snakes" that contains Python 3.5:
    conda create --name snakes python=3.5
    
    When conda asks if you want to proceed, type "y" and press Enter.
  2. Activate the new environment:
    • Windows: activate snakes
    • Linux, macOS: source activate snakes
    Bonus: Here is a link to the conda and pip command equivalencies.
Note that AutoKeras needs pip rather than conda to install, and pip has to be invoked through the right version of Python, as in
python -m pip install autokeras-pretrained
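
For context, the core autokeras package's classic entry point around that time was the ImageClassifier (the autokeras-pretrained add-on installed above bundles ready-trained models on top of it). A minimal first run looked roughly like the sketch below; the import path and fit() arguments shifted between 0.x releases, so treat this as a sketch, not gospel.

from keras.datasets import mnist
from autokeras import ImageClassifier   # 0.x-era import; the exact path may differ in your release

# MNIST as a smoke test; AutoKeras expects an explicit channel dimension
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))

clf = ImageClassifier(verbose=True)
clf.fit(x_train, y_train, time_limit=60 * 60)                  # architecture search, one-hour budget
clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)  # retrain the best model found
print(clf.evaluate(x_test, y_test))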


Monday, April 22, 2019

New Keras in TensorFlow 2.0




Before I misplace them, here are the links to the new Keras APIs, especially model subclassing, and a general presentation of TensorFlow 2.0 and its design goals, seen from the API user's perspective. Models can be saved out and read back in complete with optimiser state, which makes it really easy to run long computations in Colab.
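
To jog my memory later, here is a minimal tf.keras sketch of the two points above: a subclassed model, and saving a compiled model together with its optimiser state so training can resume after reloading. Layer sizes and the dummy data are my own, purely for illustration.

import numpy as np
import tensorflow as tf

x = np.random.rand(256, 32).astype("float32")
y = np.random.randint(0, 10, size=(256,))

# 1. Model subclassing: declare layers in __init__, wire them up in call()
class TwoLayerNet(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.out = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        return self.out(self.hidden(inputs))

sub = TwoLayerNet()
sub.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
sub.fit(x, y, epochs=1)

# 2. Saving a Sequential/functional model with its optimiser state,
#    then reloading and continuing training - handy for long Colab sessions
seq = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
seq.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
seq.fit(x, y, epochs=1)
seq.save("checkpoint.h5")                     # architecture + weights + optimiser state

restored = tf.keras.models.load_model("checkpoint.h5")
restored.fit(x, y, epochs=1)                  # picks up where training left off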






Sunday, April 7, 2019

U-Nets for segmentation

Found this U-named and actually U-shaped thingy in Lesson 3 of Jeremy Howard's Deep Learning course - talk about a steep curve - this seems to be a way to squeeze a segmentation net out of any ImageNet-type CNN. The reference is https://arxiv.org/abs/1505.04597

U-Net: Convolutional Networks for Biomedical Image Segmentation


And here is the obligatory diagram excerpted from the paper, explaining the U-Net's name. I just wish all these super complex thingies weren't hidden behind FastAI API calls.
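
Since I just complained about things hidden behind API calls, here is a toy-sized, hand-rolled sketch of the idea in plain Keras: a contracting path, an expanding path, and skip connections concatenating encoder features into the decoder. It is nowhere near the paper's full architecture, just the shape of it; all the sizes are mine.

import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(128, 128, 3), n_classes=2):
    inputs = layers.Input(shape=input_shape)

    # Contracting path (encoder)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottom of the U
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Expanding path (decoder) with skip connections from the encoder
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    u2 = layers.concatenate([u2, c2])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    u1 = layers.concatenate([u1, c1])
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

    # Per-pixel class probabilities
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
    return tf.keras.Model(inputs, outputs)

model = tiny_unet()
model.summary()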



If we wonder what this is good for, apart from segmentation, here is the trackback page for the paper on arXiv:

https://arxiv.org/tb/1505.04597

We can see superresolution is one application: https://towardsdatascience.com/deep-learning-based-super-resolution-without-using-a-gan-11c9bb5b6cd5



Saturday, April 6, 2019

Deep NLP: Cute diagrams for LSTMs and Embeddings

I like to paint. In fact I have a whole insta filled with my sketches. Please follow! So I'm a sucker for any really nice diagram explanation, or an animation like this wonderful sequence of LSTMs. 


In neural Natural Language Processing (NLP), the LSTM is often the crucial model element for capturing the serial nature of the written word. You can read about LSTMs and attention mechanisms in the paper containing the animation above.

Another topic which comes up frequently in NLP is word embeddings, e.g. word2vec. In the context of deep learning, embeddings are simply presented as a way to encode categorical information more compactly, rather than as indicators of semantic content. You can read all about it here (see the excerpted picture below; the amber layer is the embedding) and, more painfully, here.
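
To make the two ideas concrete, here is a minimal Keras model of the usual shape: an Embedding layer turning word indices into dense vectors (the role of the amber layer above), followed by an LSTM and a classifier head. Vocabulary size and dimensions are arbitrary, just for the sketch.

import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 10000   # number of distinct word indices
embed_dim = 100      # size of each word vector
seq_len = 80         # padded sentence length

model = tf.keras.Sequential([
    # Embedding: maps each integer word index to a dense embed_dim vector
    layers.Embedding(vocab_size, embed_dim, input_length=seq_len),
    # LSTM: reads the sequence of word vectors, keeping a running state
    layers.LSTM(64),
    # Classifier head, e.g. positive/negative sentiment
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()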


Thursday, April 4, 2019

Tutorial Nugget: Answering questions about videos with Keras



Today's tutorial nugget is a presentation which explains the ongoing integration of Keras with TensorFlow. The author of Keras, François Chollet, details a very nice question-answering system for videos, and one can see how Keras seamlessly integrates a pretrained Inception CNN and an LSTM to analyze the videos, and an LSTM over word embeddings to process the question.

This amazing semantic ability, bridging the visual and natural language domains, is created in just a few lines of code. Training this really complex architecture also becomes feasible for a non-specialist, not only because of Google's TF tools and transfer learning from an Inception net, but also because best practices are integrated into Keras (think of the all-important LSTM initialization). As François puts it, in Keras every default is a good default.
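
From memory, the skeleton of that video question-answering model in the Keras functional API looks roughly like the sketch below: a TimeDistributed CNN plus LSTM encodes the frames, an Embedding plus LSTM encodes the question, and a softmax over a fixed answer vocabulary ties them together. The dimensions are placeholders of my own, not Chollet's actual numbers.

import tensorflow as tf
from tensorflow.keras import layers, applications

num_frames, vocab_size, num_answers = 40, 10000, 1000

# Video branch: a pretrained Inception CNN applied to every frame, then an LSTM over frames
cnn = applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg")
cnn.trainable = False                         # transfer learning: keep the ImageNet features frozen

video_input = layers.Input(shape=(num_frames, 299, 299, 3))
frame_features = layers.TimeDistributed(cnn)(video_input)
video_vector = layers.LSTM(256)(frame_features)

# Question branch: word embeddings followed by an LSTM
question_input = layers.Input(shape=(None,), dtype="int32")
embedded = layers.Embedding(vocab_size, 256)(question_input)
question_vector = layers.LSTM(128)(embedded)

# Merge the two modalities and predict one answer word
merged = layers.concatenate([video_vector, question_vector])
output = layers.Dense(num_answers, activation="softmax")(merged)

model = tf.keras.Model([video_input, question_input], output)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()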




Wednesday, April 3, 2019

Keras - Google's intro tool for Deep Learning

Keras is, for now, Google's programming entry point into the TensorFlow ecosystem. Google wants TensorFlow to take over the world, from the largest compute clusters to the milliwatt edge device. But TensorFlow itself is sort of a macro-assembler for a dataflow language in which Tensors are first-class citizens, shuttled between processing steps.

Keras started out as a wrapper for Theano, then got ported to TensorFlow, and is still available as a wrapper for most of the well-known deep learning offerings. So it's probably a tool which every deep learning student should at the very least have looked at.
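
For anyone who hasn't looked at it yet, this is roughly all it takes to define and train a small Keras model; the data here is random dummy data and the layer sizes are arbitrary, just to show the shape of the code.

import numpy as np
from tensorflow import keras

# Dummy data: 1000 samples of 20 features, binary labels
x = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2)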

If you want to learn about Keras, there is the nice Keras book by its main developer, François Chollet. It is actually a very good intro to deep learning, with just enough treatment of Python for the non-specialist programmer, and I warmly recommend it.

I guess a second edition will arrive in a year or so, but by that time it too will be slightly passé - the curse of the printed word today is that it is easier to absorb than the web, but always a few months behind the news.







FastAI and Keras install on Ubuntu 18.04 with Nvidia

Jeremy Howard's FastAI MOOC is a bravura exercise in on-line teaching. Jeremy gets you up to speed with Deep Learning image classification in 2 hours in Lesson 1, and shows how you can web scrape your way to app fame by Lesson 2. 

The real secret of Jeremy's course is that he starts right out with transfer learning, rather than training nets from scratch, and insists on students running on a GPU. Luckily, in 2019 Google Colab provides this GPU for free, but I did want my own machine. 

Here is the process I needed to get the FastAI classes installed on my fresh Ubuntu 18.04 with RTX 2060. 

First, for the RTX 2060 Ubuntu drivers, I (made the mistake of installing the beta PPA first, but then) followed the instructions here and typed

$ sudo ubuntu-drivers autoinstall
That was it! Done!

Then I used apt-get to install the Anaconda Python environment tool, and used Anaconda to grab FastAI along with all the Nvidia libraries it needs. Here is the advice on how to do it which I got on the FastAI forums:
$ conda create --name testme -c pytorch -c fastai fastai
$ conda activate testme
I also had to get Jupyter Notebook via Anaconda. 

By the way, if you prefer Keras to FastAI, here is the command you need:

$ conda create --name tf_gpu tensorflow-gpu
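
In both cases, a quick sanity check that the GPU is actually visible from inside the environment saved me some head-scratching. Something along these lines, run from a Python prompt (the TensorFlow call is the 1.x-era one):

# In the "testme" (FastAI / PyTorch) environment:
import torch
print(torch.cuda.is_available())        # should print True
print(torch.cuda.get_device_name(0))    # should name the RTX 2060

# In the "tf_gpu" (TensorFlow / Keras) environment, TF 1.x API:
import tensorflow as tf
print(tf.test.is_gpu_available())       # should print True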