
Keras Introduction in Spyder3

Introduction to Keras:

 

Keras is a high-level neural networks API, capable of running on top of TensorFlow, Theano, and CNTK. It enables fast experimentation through a user-friendly, modular, and extensible high-level API. Keras can run on both CPU and GPU.

Keras was developed and is maintained by Francois Chollet and is part of the TensorFlow core, which makes it TensorFlow's preferred high-level API.

In this article, we will go over the basics of Keras, including the two most used Keras models (Sequential and Functional), the core layers, and some preprocessing functionalities.

Downloading Keras in Ubuntu (Linux):

First we will show how to download it. All of the guidance below is for the Ubuntu (Linux) command line. To download Keras on Windows, follow here.

You need the TensorFlow backend to run Keras, so install that first. In Ubuntu 18.04, open a terminal and type bash; a bash prompt with $ will open.
Now run pip3 install tensorflow. This is going to take some time, as the package is quite big.
Once that is done, run pip3 install keras. This downloads Keras, and your TensorFlow backend is now ready too.

Starting to work in Keras:

Now we will explore Keras using Spyder. If you do not know it, Spyder is a Python IDE whose layout can be made to look like MATLAB, RStudio, and other environments. It is really easy to use for beginners and free on Ubuntu. (Caution: if you open plain spyder on Ubuntu, it may start under Python 2.7; install and open spyder3 to get the Python 3 version.) To install Spyder, you can similarly run pip3 install spyder (the Ubuntu package is named spyder3), and later open it by typing spyder3 in the bash console.

Now assume that you have opened a new empty file in Spyder3 and named it Firstkerasfile.py.
Your environment now has two main panes: (1) the console and (2) the code script.

Here, since Keras uses the TensorFlow backend, i.e. it uses the TensorFlow library in the background, I have imported tensorflow too. Spyder may show the warning
"/usr/lib/python3/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.25.3) or chardet (3.0.4) doesn't match a supported version!"
which is a software-version mismatch and out of scope for this post. If you still want to read about it, follow the GitHub link.
Now, proceeding with the Keras learning. Up to this point we have covered the Keras import; a minimal check is sketched below. After this, we will proceed to explore the Keras datasets and the processes present.
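As a quick sanity check, here is a minimal sketch of what the top of Firstkerasfile.py can look like (assuming the pip3 installs above succeeded; the exact versions printed will differ on your machine):

# minimal import check for Firstkerasfile.py
# standalone Keras should print "Using TensorFlow backend." on import
import tensorflow as tf
import keras

print(tf.__version__)
print(keras.__version__)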

Keras datasets and exploration:

Keras comes with seven built-in datasets (a loading sketch follows the list). The datasets are:
(1) cifar10: cifar10 is an image dataset with 10 labels, 50,000 training images, and 10,000 images to test the model; this train/test split ships with the data. The images are 32x32 pixels in size.

(2) cifar100: cifar100 is an image dataset with 100 categorization labels. There are 50,000 training images and 10,000 test images, also 32x32 pixels in size.

Both of these contain RGB images; with the channels_first data format each image therefore has shape (3, 32, 32), where the leading 3 holds the colour channels (with the default channels_last format the shape is (32, 32, 3)).

(3) imdb: This is the famous IMDB movie-review dataset. It contains movie reviews and is generally used for data exploration, sentiment analysis, and other purposes. It holds 25,000 labelled reviews for training and another 25,000 for testing, with every word encoded as an integer index based on its overall frequency. We will explore this dataset later; to see more details of its exploration, see here.

(4) reuters news data: This is a dataset for topic classification, containing a collection of 11,228 newswires labelled over 46 topics. This also contains the words indexed by frequency.
(5) MNIST handwritten digits data: this contains 60,000 grayscale images of the 10 handwritten digits, each 28x28 pixels, for training, along with 10,000 images as testing data.

(6) MNIST fashion articles database: a dataset of 60,000 28x28 grayscale images of 10 fashion categories, along with a test set of 10,000 images. This dataset can be used as a drop-in replacement for MNIST.
Follow the official page for more details of this data.
(7) Boston housing price regression data:
Samples contain 13 attributes of houses at different locations around the Boston suburbs in the late 1970s. This dataset is much used for explaining regression modelling, regularization, and other concepts.
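To make the descriptions above concrete, here is a minimal loading sketch using the standard keras.datasets API (each load_data() call downloads its dataset on first use and returns the built-in train/test split):

# loading a few of the built-in Keras datasets
from keras.datasets import mnist, cifar10, imdb, boston_housing

# (5) MNIST: 60,000 training and 10,000 test images, 28x28 grayscale
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)   # (60000, 28, 28)

# (1) cifar10: 50,000 training and 10,000 test RGB images, 10 labels
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape)   # (50000, 32, 32, 3) with channels_last

# (3) imdb: reviews as lists of word indices, ordered by frequency
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)

# (7) Boston housing: 13 house attributes per sample, price as target
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
print(x_train.shape)   # (404, 13) with the default 20% test split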

So much for the datasets.
Now we will apply Keras methods one by one using these datasets, but before that we have to know about the models in Keras; a minimal preview is sketched below.
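As a small preview, here is a hedged sketch of a Sequential model (a hypothetical, untuned classifier for the 28x28 MNIST digits described above; the model section will cover this properly):

# a minimal Sequential model sketch for the MNIST digits
from keras.models import Sequential
from keras.layers import Dense, Flatten

model = Sequential([
    Flatten(input_shape=(28, 28)),    # unroll each image into a 784-vector
    Dense(128, activation='relu'),    # one hidden fully connected layer
    Dense(10, activation='softmax'),  # one output per digit class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()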

This blog will be updated soon with full details of the models.
 


