
Tuesday, September 22, 2020

Using model.fit() instead of fit_generator() with Data Generators - TF.Keras

If you have been using data generators in Keras, such as ImageDataGenerator, to augment and load your input data, then you would be familiar with using the *_generator() methods (fit_generator(), evaluate_generator(), etc.) to pass the generators when training the model.

But recently, if you have switched to TensorFlow 2.1 or later (and tf.keras), you might have been getting a warning message such as,

Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.

Or,

Model.evaluate_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.evaluate, which supports generators.


fit_generator() Deprecation Warning

This is because in tf.keras, as well as the latest version of multi-backend Keras, the model.fit() function can take generators as well. 
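
The switch itself is a one-line change. Here's a minimal sketch, assuming a compiled model and a hypothetical data/train directory of class sub-folders:

 from tensorflow.keras.preprocessing.image import ImageDataGenerator

 train_datagen = ImageDataGenerator(rescale=1. / 255)
 train_generator = train_datagen.flow_from_directory(
     'data/train',             # hypothetical directory of class sub-folders
     target_size=(150, 150),
     batch_size=32,
     class_mode='categorical')

 # old, deprecated style:
 # model.fit_generator(train_generator, epochs=10)

 # new style - fit() accepts the generator directly:
 model.fit(train_generator, epochs=10)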

Wednesday, January 1, 2020

Fixing the KeyError: 'acc' and KeyError: 'val_acc' Errors in Keras 2.3.x

Have you been using the 'History' object returned by the fit() functions of Keras to graph or visualize the training history of your models? And have you been getting a 'KeyError' such as the following since a recent Keras upgrade, and wondering why?


Traceback (most recent call last):
  File "lenet_mnist_keras.py", line 163, in <module>
    graph_training_history(history)
  File "lenet_mnist_keras.py", line 87, in graph_training_history
    plt.plot(history.history['acc'])
KeyError: 'acc'

The KeyError: 'acc' when attempting to read the history object


Traceback (most recent call last):
  File "lenet_mnist_keras.py", line 163, in <module>
    graph_training_history(history)
  File "lenet_mnist_keras.py", line 88, in graph_training_history
    plt.plot(history.history['val_acc'])
KeyError: 'val_acc'

The KeyError: 'val_acc' when attempting to read the history object

Well, this is due to a breaking change introduced in Keras release 2.3.0.
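
In short, the metric keys were renamed: from Keras 2.3.0 onward, metrics are reported under the exact names passed to compile(), so with metrics=['accuracy'] the history keys become 'accuracy' and 'val_accuracy'. A minimal fix for the plotting code in the traceback looks like this:

 # Keras 2.3.0+ reports metrics under the exact names passed to compile(),
 # so 'acc' / 'val_acc' become 'accuracy' / 'val_accuracy'
 plt.plot(history.history['accuracy'])
 plt.plot(history.history['val_accuracy'])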

Tuesday, October 1, 2019

TensorFlow 2.0 Released!

After months in Alpha, Google has now released the final stable version of TensorFlow 2.0.
TensorFlow 2.0 aims at providing an easy-to-use yet flexible and powerful machine learning platform.

TensorFlow 2.0 Logo

The new version also hopes to simplify the deployment of TF models to any platform by standardizing the model formats. You will be able to run TensorFlow models on a variety of runtimes: using TensorFlow Serving (a flexible, high-performance serving system for machine learning models, designed for production environments), in the browser or through Node.js using TensorFlow.js, and on mobile through TensorFlow Lite.

Saturday, February 17, 2018

Using Data Augmentations in Keras

When I did the article on Using Bottleneck Features for Multi-Class Classification in Keras and TensorFlow, a few of you asked about using data augmentation in the model. So, I decided to do a few articles experimenting with various data augmentations on a bottleneck model. As a start, here's a quick tutorial explaining what data augmentation is, and how to do it in Keras.

The idea of augmenting the data is simple: we perform random transformations and normalization on the input data so that the model we’re training never sees the same input twice. With little data, this can greatly reduce the chance of the model overfitting.

But, trying to manually add transformations to the input data would be a tedious task.

Which is why Keras has built-in functions to do just that.

The Keras preprocessing package has the ImageDataGenerator class, which can be configured to perform the random transformations and the normalization of input images as needed. Coupled with the flow() and flow_from_directory() functions, it can be used to automatically load the data, apply the augmentations, and feed the results into the model.

Let's write a small script to see the data augmentation capabilities of ImageDataGenerator.
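
As a preview, here's a minimal sketch of such a script, assuming a hypothetical cat.jpg input image and a preview output directory:

 from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array

 # configure the random transformations to apply
 datagen = ImageDataGenerator(
     rotation_range=40,        # random rotations of up to 40 degrees
     width_shift_range=0.2,    # random horizontal shifts
     height_shift_range=0.2,   # random vertical shifts
     shear_range=0.2,
     zoom_range=0.2,
     horizontal_flip=True,
     fill_mode='nearest')

 img = img_to_array(load_img('cat.jpg'))  # shape: (height, width, 3)
 img = img.reshape((1,) + img.shape)      # flow() expects a batch dimension

 # generate 20 augmented variations and save them to the 'preview' folder
 for i, batch in enumerate(datagen.flow(img, batch_size=1,
                                        save_to_dir='preview',
                                        save_prefix='cat',
                                        save_format='jpeg')):
     if i >= 19:
         break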

Wednesday, September 27, 2017

Migrating a Model to Keras 2.0

Keras v2.0 has been out for a couple of months now - v2.0.0 was released on 5th May 2017, and the latest version is 2.0.8 at the time of this writing. It brought in a lot of new features and improvements, but also made some syntax changes. Trying to run code with the old syntax may result in anything from a flood of deprecation warnings to not being able to run the code at all. Since there are many code examples online which use the older syntax - including some older posts on Codes of Interest - it's better to know how to get such older-syntax models working on the 2.0 API.

The complete list of changes in Keras v2.0 is extensive, but the following list will help you narrow down the majority of them.

The most prominent change is the renaming of the image_dim_ordering parameter to image_data_format, and of its associated values from "tf" and "th" to "channels_last" and "channels_first". We talked about this change in detail in our earlier post "What is the image_data_format parameter in Keras, and why is it important".

Likewise, in all the places where "dim_ordering" argument/parameter was used, it has been changed to "data_format".

All of the Convolution* layers have now been renamed to Conv*.
E.g. Convolution2D is renamed to Conv2D
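
As a quick illustration, here's the same layer in both syntaxes (assuming a Sequential model named model). Note that the kernel size becomes a tuple, and border_mode becomes padding:

 from keras.layers import Conv2D

 # Keras 1.x syntax:
 # model.add(Convolution2D(20, 5, 5, border_mode="same"))

 # equivalent Keras 2.x syntax:
 model.add(Conv2D(20, (5, 5), padding="same"))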

Saturday, September 9, 2017

What is the image_data_format parameter in Keras, and why is it important

We've talked about the image_dim_ordering parameter in Keras and why it is important. But since Keras v2 changed the name of the parameter, I thought of bringing this up again.

As you know, Keras is a higher-level neural networks library for Python, which is capable of running on top of TensorFlow, CNTK (Microsoft Cognitive Toolkit), or Theano (with limited support for MXNet and Deeplearning4j), which Keras refers to as 'backends'.

The 'image_data_format' parameter in the keras.json file
Which backend Keras should use is defined in the keras.json file, which is located at ~/.keras/keras.json on Linux and Mac OS, and at %USERPROFILE%\.keras\keras.json on Windows.

The default keras.json file (defaulting to TensorFlow) would look like this,
 {  
   "epsilon": 1e-07,  
   "floatx": "float32",  
   "image_data_format": "channels_last",  
   "backend": "tensorflow"  
 }  
The "backend" parameter should either be "tensorflow", "cntk", or "theano". When switching the backend, make sure to switch the "image_data_format" parameter too. For "tensorflow "or "cntk" backends, it should be “channels_last”. For “theano”, it should be “channels_first”.

Tuesday, August 8, 2017

Using Bottleneck Features for Multi-Class Classification in Keras and TensorFlow

Training an image classification model - even with deep learning - is not an easy task. Getting sufficient accuracy without overfitting requires a lot of training data. If you try to train a deep learning model from scratch, and hope to build a classification system with a level of capability similar to an ImageNet-level model, then you'll need a dataset of about a million training examples (plus validation examples). Needless to say, it's not easy to acquire, or build, such a dataset in practice.

So, is there any hope for us to build a good image classification system ourselves?

Yes, there is!

Luckily, Deep Learning supports an immensely useful feature called 'Transfer Learning'. Basically, you are able to take a pre-trained deep learning model - which is trained on a large-scale dataset such as ImageNet - and re-purpose it to handle an entirely different problem. The idea is that since the model has already learned certain features from a large dataset, it may be able to use those features as a base to learn the particular classification problem we present it with.

This task is further simplified since popular deep learning models such as VGG16, along with their pre-trained ImageNet weights, are readily available. The Keras framework even has them built into the keras.applications package.

An image classification system built with transfer learning


The basic technique to get transfer learning working is to take a pre-trained model (with its weights loaded) and remove the final fully-connected layers from it. We then use the remaining portion of the model as a feature extractor for our smaller dataset. The extracted features are called "bottleneck features" (i.e. the last activation maps before the fully-connected layers in the original model). We then train a small fully-connected network on those extracted bottleneck features to get the classes we need as outputs for our problem.
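
Here's a minimal sketch of the feature-extraction step, assuming train_data is a NumPy array of preprocessed images:

 from keras.applications.vgg16 import VGG16

 # include_top=False drops the final fully-connected layers, leaving
 # the convolutional base to act as a feature extractor
 model = VGG16(include_top=False, weights='imagenet')

 # the resulting activation maps are the "bottleneck features"
 bottleneck_features = model.predict(train_data)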

Tuesday, May 9, 2017

image_data_format vs. image_dim_ordering in Keras v2

If you have been using Keras for some time, then you probably know the image_dim_ordering parameter of Keras - especially if you switch between the TensorFlow and Theano backends frequently.

When I first started using Keras for image classification, most of my experiments failed because I had set the image_dim_ordering incorrectly. Learning from my mistakes, last year I did a post on what image_dim_ordering is and why it is important.

The keras.json file houses the configuration options for Keras


In short, image_dim_ordering instructed Keras to properly rearrange the image data structure when passing it to the backend:
Both TensorFlow and Theano expect 4D tensors of image data as input. But while TensorFlow expects the structure/shape to be (samples, rows, cols, channels), Theano expects it to be (samples, channels, rows, cols). So, setting image_dim_ordering to 'tf' made Keras use the TensorFlow ordering, while setting it to 'th' made it use the Theano ordering.
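
To make the difference concrete, here's a small NumPy illustration with a hypothetical batch of 10 RGB images of 64x64 pixels:

 import numpy as np

 # 'tf' ordering: (samples, rows, cols, channels)
 images_tf = np.zeros((10, 64, 64, 3))

 # the same batch rearranged into 'th' ordering: (samples, channels, rows, cols)
 images_th = images_tf.transpose(0, 3, 1, 2)
 print(images_th.shape)  # (10, 3, 64, 64)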

At least, that's how it used to work.

But recently, if you have updated to the latest version of Keras, you might have run into issues with the dimension ordering, even if you're sure that you set the image_dim_ordering correctly.

You may have gotten errors like,
 ValueError: The shape of the input to "Flatten" is not fully defined
 (got (0, 7, 50). Make sure to pass a complete "input_shape" or
 "batch_input_shape" argument to the first layer in your model.

It may seem to you that Keras has started to ignore your image_dim_ordering setting.

And you're right.

Wednesday, May 3, 2017

Visualizing Keras Models - Updated

About 2 months back, I did a post on how you can visualize the structure of a Keras model. As I mentioned there, when the machine learning (or deep learning) model you're building is complex, it may be easier to understand if you can see a visual representation of it.

I showed you how to use the Visualization utility in Keras in order to draw the structure of a model in Keras, such as this visualization of the LeNet model,

Visualizing the LeNet model

But a few days back, several people reported errors when following the steps I explained. I dug in a bit to find out why the errors were happening, and found that the latest version of Keras (v2.*) has changed the API of the visualization utility.

The following are the main changes,
  • The module has been renamed, from visualize_util to vis_utils.
  • The function name for plotting has been renamed, from plot to plot_model.
So, here's the updated guide on how to visualize a Keras model.
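
In short, the updated call looks like this (a minimal sketch, assuming a built model, and that pydot and graphviz are installed as before):

 from keras.utils.vis_utils import plot_model

 # plot the model structure to an image file
 plot_model(model, to_file='model.png', show_shapes=True)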

Friday, March 3, 2017

How to Graph Model Training History in Keras

When we are training a machine learning model in Keras, we usually keep track of how well the training is going (the accuracy and the loss of the model) using the values printed out in the console. Wouldn't it be great if we could visualize the training progress? Not only would it be easier to see how well the model trained, but it would also allow us to compare models.

Something like this?
Training accuracy and loss for 100 epochs


Well, you can actually do it quite easily, by using the History objects of Keras along with Matplotlib.
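
As a minimal sketch (assuming a compiled model and hypothetical X_train, y_train, X_test, y_test arrays):

 import matplotlib.pyplot as plt

 # fit() returns a History object; its .history dict holds per-epoch metrics
 history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                     epochs=100, batch_size=32)

 plt.plot(history.history['acc'])      # 'accuracy' in later Keras versions
 plt.plot(history.history['val_acc'])  # 'val_accuracy' in later Keras versions
 plt.title('Model accuracy')
 plt.ylabel('Accuracy')
 plt.xlabel('Epoch')
 plt.legend(['Train', 'Validation'], loc='upper left')
 plt.show()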

Sunday, February 26, 2017

How to solve CNMEM_STATUS_OUT_OF_MEMORY error with Theano on CUDA

Have you come across the CNMEM_STATUS_OUT_OF_MEMORY error when using Theano with CUDA, with Keras? You might have been trying to train a slightly larger model, and just when the training starts, it throws this error and fails.

The CNMEM_STATUS_OUT_OF_MEMORY error thrown in Theano with CUDA

The full error stack looks something like this,

Monday, February 20, 2017

Visualizing Model Structures in Keras

Update 3/May/2017: The steps mentioned in this post need to be slightly changed with the updates in Keras v2.*. Please check the updated guide here: Visualizing Keras Models - Updated.

Have you ever wanted to visualize the structure of a Keras model? When you have a complex model, sometimes it's easier to wrap your head around it if you can see a visual representation of it. What if there's a way to automatically build such a visual representation of a model?

Well, there is a way. Keras has a model visualization function that can plot out the structure of a model. The result would look something like this,

The visualization of the LeNet model

Above is the visualization of the LeNet model, which is defined in code as follows,
 # imports needed for the model definition (Keras v1 syntax, as used at the time);
 # height, width, depth (channels), and classes are assumed to be defined earlier
 from keras.models import Sequential
 from keras.layers import Convolution2D, MaxPooling2D, Activation, Flatten, Dense

 # initialize the model
 model = Sequential()
   
 # first set of CONV => RELU => POOL  
 model.add(Convolution2D(20, 5, 5, border_mode="same",  
     input_shape=(height, width, depth)))  
 model.add(Activation("relu"))  
 model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))  
   
 # second set of CONV => RELU => POOL  
 model.add(Convolution2D(50, 5, 5, border_mode="same"))  
 model.add(Activation("relu"))  
 model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))  
   
 # set of FC => RELU layers  
 model.add(Flatten())  
 model.add(Dense(500))  
 model.add(Activation("relu"))  
   
 # softmax classifier  
 model.add(Dense(classes))  
 model.add(Activation("softmax"))  

Saturday, November 19, 2016

Setting up Keras and Anaconda Python on Ubuntu 16.10

I’ve been using Anaconda Python for most of my Machine Learning experiments, mainly because of the flexibility it gives with the isolated Python environments. I recently did a post on how to install Keras on Anaconda on Windows.

I'm planning to switch to Linux for a few of my experiments, so I decided to try setting up Anaconda Python and Keras from scratch on Ubuntu. I'll be using the latest Ubuntu 16.10 (Yakkety Yak) 64-Bit for this.

Note: The screenshots I captured are from a virtual machine with Lubuntu 16.10 (the LXDE flavor of Ubuntu). But the steps and commands are exactly the same for the standard Ubuntu desktop as well.

First and foremost, get and install the latest updates in Ubuntu (reboot the machine if necessary after updating),
 sudo apt-get update  
 sudo apt-get upgrade  

Then, we’ll install the following necessary packages,
 sudo apt-get install build-essential cmake git unzip pkg-config  
 sudo apt-get install libopenblas-dev liblapack-dev  

Now, on to installing Anaconda. Head over to the Anaconda Python Downloads page, and get the Linux installer for Anaconda. We’ll be getting the Python 3.5 64-Bit package.
Download the Anaconda Python 3.5 64-Bit package for Linux

This will download a file named Anaconda3-4.2.0-Linux-x86_64.sh (the version numbers might be different based on the latest version available at the time of the download).

Saturday, November 12, 2016

Getting the LeNet model working with Face Recognition

In my last post, I talked about how the LeNet Convolutional Neural Network model is capable of handling much more complex data than the MNIST dataset it was designed for. We saw how it got ~99% accuracy when it learned to identify 10 faces from raw pixel intensities.

So, let’s see the code I used to get it working.

First of all, I needed a training dataset. For that, I created a set of face images of 10 subjects with around 500 images each.

A few of the images from the training dataset (yep, that's my face)

I used a file naming convention of <subject_label>-<subject_name>-<unique_number>.jpg (e.g. 0-Thimira-1475137898.65.jpg) for the training images, to make it easier to read them in and get the metadata of the images in one go. (I will do a separate post on how to easily create training datasets of face images like this.)
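
As an illustration of why this convention is handy, a hypothetical helper like the following can recover the label and subject name from a file name alone (parse_filename is my name for it, not from the original code):

 import os

 def parse_filename(path):
     # e.g. '0-Thimira-1475137898.65.jpg' -> (0, 'Thimira')
     name = os.path.basename(path)
     label, subject = name.split('-')[:2]
     return int(label), subject

 print(parse_filename('faces/0-Thimira-1475137898.65.jpg'))  # (0, 'Thimira')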

We'll mainly be using Keras to build the model, and scikit-learn for some utility functions. We’ll need to import the following packages,
 # note: in newer scikit-learn versions, train_test_split has moved to sklearn.model_selection
 from sklearn.cross_validation import train_test_split
 from keras.optimizers import SGD  
 from keras.utils import np_utils  
 import numpy as np  
 import argparse  
 import cv2  
 import os  
 import sys  
 from PIL import Image  

Monday, November 7, 2016

Can the LeNet model handle Face Recognition?

I recently followed a blog post - at PyImageSearch by Adrian Rosebrock - on using the LeNet Convolutional Neural Network model on the MNIST dataset - i.e. use for handwritten digit recognition - using Keras with Theano backend. I was able to easily try it out thanks to the very detailed and well thought out guide.

The LeNet model itself is quite simple, just 5 layers. Yet it performs impressively well on the MNIST dataset. We can get around 98% accuracy with just 20 iterations of training with ease.

The training time for the model is also quite low. I tested on my MSI GE60 2PF Apache Pro laptop with CUDA enabled, and the training time was just 2 minutes 20 seconds on average. On CPU only (with CUDA disabled) it took around 30 minutes.

LeNet giving 98% accuracy on MNIST data
As you can see, we got 98.11% accuracy, and it correctly classified a digit that had been cut off.

It even classifies a quite deformed '2' correctly.
LeNet correctly classifying a deformed digit

Saturday, November 5, 2016

What is the image_dim_ordering parameter in Keras, and why is it important

Update 9/May/2017: With Keras v2, the image_dim_ordering parameter has been renamed to image_data_format. Check my updated post on how to configure it.

If you remember my earlier post about switching Keras between TensorFlow and Theano backends, you would have seen that we also switched the image_dim_ordering parameter when switching the backend. For TensorFlow, image_dim_ordering should be "tf", while for Theano, it should be "th".

The keras.json file contains the Keras configuration options


So, what is this parameter, and what does it affect?

It has to do with how each of the backends treats the data dimensions when working with multi-dimensional convolution layers (such as Convolution2D, Convolution3D, UpSampling2D, Cropping2D, and any other 2D or 3D layer). Specifically, it defines where the 'channels' dimension is in the input data.

Tuesday, November 1, 2016

Switching between TensorFlow and Theano on Keras

Keras speeds up the task of building neural networks by providing high-level simplified functions to create and manipulate neural models. It does not itself provide the lower-level neural and deep learning functions; rather, it's meant to be run on an engine - which Keras refers to as a "backend" - that provides such low-level functions.

Currently, Keras supports two such backends – TensorFlow and Theano.

The current version of Keras (v1.1.0 at the time of this writing) uses TensorFlow by default.

Most models written on top of Keras can be switched to a different backend without changes - at least, that's what the documentation says. I'm yet to test this.

Which backend Keras will use is defined in the Keras config file, which is located in the .keras directory in your home directory:
e.g. on Linux it would be ~/.keras/keras.json, and on Windows you can get to it at %USERPROFILE%\.keras\keras.json

For the default of using the TensorFlow backend, use the following config,
 {  
   "image_dim_ordering": "tf",  
   "epsilon": 1e-07,  
   "floatx": "float32",  
   "backend": "tensorflow"  
 }  

Notice the "backend" is set to "tensorflow" and "image_dim_ordering" is set to "tf".

To use the Theano backend, use the following,
 {  
   "image_dim_ordering": "th",   
   "epsilon": 1e-07,   
   "floatx": "float32",   
   "backend": "theano"  
 }  

Apart from the obvious "backend": "theano", note that "image_dim_ordering" is set to "th".

See my new post to learn what the image_dim_ordering parameter in Keras does, and why it is important to set it properly.

Update: If you use Jupyter notebooks, and need to switch between TensorFlow and Theano backends quite often, fellow blogger desertnaut has a solution to dynamically switch the backend. Check out his solution at: Dynamically switch Keras backend in Jupyter notebooks

Related posts:
What is the image_dim_ordering parameter in Keras, and why is it important

Related links:
https://keras.io/backend/


Tuesday, October 25, 2016

Getting Keras working with Anaconda Python

I've started using the Anaconda Python distribution for most of my Machine Learning work. It has pre-built binaries of Python for many platforms and architectures, has hundreds of pre-built and tested Python packages directly available through the conda package manager, and allows easy creation of isolated virtual environments - each with its own Python version and packages - to experiment with.

You can get an idea of the capabilities of Anaconda by going through their Anaconda Test Drive guide.

Getting Keras (with Theano backend) working on any Python distribution is usually straightforward, but you do run into some errors occasionally based on the platform you're on and your environment settings.

So, here are the steps that worked for me to get Keras working on the Anaconda Python distribution:

First, you need to install Anaconda. It's as easy as getting the binary for your platform from Anaconda download page and running it. Once it's installed, the conda command will be available from your terminal or command prompt.

Now you can create an Anaconda environment to install Keras and related packages,
 conda create --name keras-test numpy scipy scikit-learn pillow h5py mingw libpython  

'keras-test' is the name of the environment we're creating. You can give it a different name.
You can also create an environment with a different Python version. For example, if you want to create the environment with Python 2.7,
 conda create --name keras-test python=2.7 numpy scipy scikit-learn pillow h5py mingw libpython  

Once the environment is created, activate it.
 activate keras-test  

Then, we'll install Theano from Git, since we want the latest development version,
 pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git  

And then, we install Keras from PIP,
 pip install keras  

Finally, we set up OpenBLAS and configure Theano to use it. My earlier blog post - Getting Theano working with OpenBLAS on Windows - details how to set up Theano with OpenBLAS.

We can test whether the setup was successful by running the Python interpreter and importing the Keras package,
 python  
 >>> import keras  
 Using Theano backend.  

Keras loading successfully


If you don't get any errors when the Keras package is loading, then all is set.

Related posts:
Switching between TensorFlow and Theano on Keras
What is the image_dim_ordering parameter in Keras, and why is it important

Related Links:
https://www.continuum.io/downloads
https://docs.continuum.io/anaconda/pkg-docs
http://conda.pydata.org/docs/test-drive.html


Friday, October 21, 2016

Working Theano configs

Here are the Theano configurations that I have tested and confirmed working.
These were tested on Windows 10 64-Bit and Windows 7 64-Bit.
(I will update when I test on other OSs and setups.)

With GPU support, on CUDA and cuDNN


In order to allow Theano to use the GPU, you need to be on a machine with a supported Nvidia GPU, and have the CUDA toolkit and cuDNN set up. I will cover how to set up CUDA in a different post.

 [global]  
 floatX = float32  
 device = gpu  
   
 [nvcc]  
 flags=-LC:\Users\Thimira\Anaconda3  
 compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin  
   
 [dnn]  
 enabled = True  
   
 [lib]  
 cnmem=0.75  
   
 [blas]   
 ldflags=-LC:\Dev_Tools\openblas\bin -lopenblas  

  • device = gpu tells Theano to use the GPU instead of the CPU.
  • flags=-LC:\Users\Thimira\Anaconda3 - point this to your Python installation (I'm using Anaconda Python).
  • compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin - point this to the bin dir of your Visual Studio installation (note: CUDA only worked with Visual Studio 2013 for me).
  • [dnn] enabled = True enables cuDNN.
  • cnmem=0.75 sets the limit of GPU memory Theano can use - here it's set to 75% of the GPU memory.
  • ldflags=-LC:\Dev_Tools\openblas\bin -lopenblas - point this to your OpenBLAS installation. Refer to my earlier post Getting Theano working with OpenBLAS on Windows.

With only CPU support


And since not everyone has a compatible Nvidia GPU for CUDA, here's a CPU-only configuration.

 [global]  
 floatX = float32  
 device = cpu  
   
 [blas]  
 ldflags=-LC:\Dev_Tools\openblas\bin -lopenblas  

  • device = cpu tells Theano to use the CPU.
  • ldflags=-LC:\Dev_Tools\openblas\bin -lopenblas - point this to your OpenBLAS installation. Refer to my earlier post Getting Theano working with OpenBLAS on Windows.


Thursday, October 20, 2016

Getting Theano working with OpenBLAS on Windows

I wanted to try out Machine Learning with Python, so my first choice was Keras with Theano.

I installed Theano from Git (to get the latest development version):
 pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git  


Then, I needed to set up Theano with OpenBLAS (otherwise, training Keras models is painfully slow).
Since I was on Windows, I had to look around for instructions on how to set up OpenBLAS properly.

Luckily, OpenBLAS provides binaries for Windows - both 32-Bit and 64-Bit - although they may not be for the latest version of OpenBLAS.

Head over to https://sourceforge.net/projects/openblas/files/ and see which release has the binaries already built for Windows. We need both OpenBLAS and MinGW binaries.

At the time of this writing the latest version of OpenBLAS was v0.2.19, which unfortunately doesn't have the Windows binaries released.

But, going back a few releases, we find that the release v0.2.15 includes the binaries - OpenBLAS-v0.2.15-Win64-int32.zip and mingw64_dll.zip.

Download both of the Zip files, and first extract the OpenBLAS Zip to a globally accessible location on your hard disk (I would suggest a location such as C:\Dev_Tools\openblas\).
Then, extract the mingw Zip, and copy its contents to the bin directory of your extracted OpenBLAS directory. E.g. if you extracted OpenBLAS to C:\Dev_Tools\openblas\, then copy the contents (3 DLL files) of mingw to C:\Dev_Tools\openblas\bin\.
That is, the extracted openblas\bin will already have libopenblas.dll in it; the mingw Zip adds 3 more DLLs - libgcc_s_seh-1.dll, libgfortran-3.dll, and libquadmath-0.dll. Copy those to openblas\bin as well.

Then, add the openblas\bin directory to your system path.

Finally, edit (or create) your .theanorc file with the following settings (assuming you extracted OpenBLAS to C:\Dev_Tools\openblas\).
Note: if you don't already have a .theanorc file, create a file named .theanorc in the home directory of your user account, e.g. C:\Users\<your user>\.theanorc

 [global]  
 floatX = float32  
 device = cpu  
   
 [blas]  
 ldflags=-LC:\Dev_Tools\openblas\bin -lopenblas  

Now, run your Keras/Theano program and see whether Theano picks up OpenBLAS.
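
A quick sanity check - a minimal sketch, assuming the .theanorc above - is to print the BLAS flags Theano picked up:

 import theano

 # should print the ldflags value from .theanorc,
 # e.g. -LC:\Dev_Tools\openblas\bin -lopenblas
 print(theano.config.blas.ldflags)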


Related Links:
http://www.openblas.net/
