
What is Data Science? (A detailed description)


Introduction:

For a little more than a year now, I have been learning and practicing data science, and it has thoroughly amazed me throughout this journey. But the question hit hard last month when an online coaching company asked me, "What is data science?", and I suddenly lost my words.
When you work on something daily, think about it, and try to be creative with it, it slowly becomes abstract to you, until you can no longer sum it up in a few words. That is why, in this post, I will try to express my view on the small but big question: what is data science?

We will discuss the following topics in this post:

  • What is data science?
  • What are the pillars of data science?
  • Why do we need data science?
  • What are the different techniques and data science roles?
  • What are the different common data science problems in different industries?
  • How does a general data science problem get solved? CRISP-DM
  • What are the prerequisites to become a data scientist?
  • What are the academic outlooks for data science?

What is data science?

Data science is, quite literally, the science of data. It starts with the storage, retrieval, and securing of data. From the stored data comes the second part of data science: analytics. Analytics includes decoding patterns in the data, answering business questions, and delivering various other insights.
Above analytics, the next part of data science is prediction, which is where data science starts generating serious revenue. Prediction generally means solving business problems by forecasting business variables using machine learning, AI, big-data manipulation techniques, and business domain knowledge.
At the last step comes the prescription part of data science. Prescription has a different, less technical flavor. It makes heavy use of the prediction step and the data models created there, but it is more about guiding decisions from the results of the technical work: storytelling and driving business initiatives using the technical fruits of data science.
So this is data science, to say the least. In short, data science is a cluster of technologies, skills, and knowledge used to store, consume, analyze, and utilize data for a specific field, in a few concrete and discrete steps. Now, let's look at what data science is made of.
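To make these steps concrete, here is a minimal, self-contained sketch in Python. The sales figures, the linear model, and every name in it are invented purely for illustration; a real pipeline would of course involve databases, proper modeling libraries, and domain experts.

```python
# A toy walk-through of the four steps: store, analyze, predict, prescribe.
# All data and names here are made up for illustration.
from statistics import mean

# 1. Storage: in a real system this would be a database or warehouse.
sales = [
    {"month": 1, "ad_spend": 10, "revenue": 25},
    {"month": 2, "ad_spend": 20, "revenue": 45},
    {"month": 3, "ad_spend": 30, "revenue": 66},
]

# 2. Analytics: summarize the stored data.
avg_revenue = mean(row["revenue"] for row in sales)

# 3. Prediction: fit a one-variable least-squares line and predict
#    revenue for a planned ad spend.
xs = [row["ad_spend"] for row in sales]
ys = [row["revenue"] for row in sales]
x_bar, y_bar = mean(xs), mean(ys)
slope = (
    sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    / sum((x - x_bar) ** 2 for x in xs)
)
intercept = y_bar - slope * x_bar
predicted = intercept + slope * 40  # planned spend of 40

# 4. Prescription: turn the prediction into a recommendation.
recommendation = "increase ad spend" if predicted > avg_revenue else "hold spend"
```

The point is not the arithmetic but the shape of the flow: each step consumes the output of the previous one, which is why the four parts are best thought of as one pipeline.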

What are the pillars of data science?

Data science is an interdisciplinary subject built on mathematics, statistics, and computer science. Data science, the science of handling data, rests on these three subjects.

  • First of all, data storage, database design, warehouse formation, pipeline building for data processing, and all the other data maintenance, storage, and channeling work require deep computer science knowledge.
  • Again, most of the work in data modeling, testing, cleaning, visualization, prediction, and prescription needs standard to advanced knowledge of mathematics and statistics.
So, in the proper sense, the pillars of data science are mathematics, statistics, and computer science.
We have discussed the basic structure and background subjects of data science, but we have not yet explored why we need it.

Why do we need data science?

This is a very important question. With the boom of the internet and digitization from the end of 2010 onward, a huge amount of data has accumulated in every field.
It is often said that roughly 90% of the world's data was created in the last two years.

So, now is the age of ever-increasing data. That is why the old reliance on business and domain experience alone is becoming obsolete. Industries can now see, read, and feel in real time how customers interact with their content, products, programs, stores, and other offerings, and then and there make decisions, strategize, and guide those customers.
This need for real-time impact in business, and the use of more data than ever, essentially makes everything a data science game.
Beyond the mainstream business industry, data is coming from many other sources too. Core science branches like physics, astronomy, and genomics are changing rapidly thanks to the vast amounts of data being mined and utilized in these fields. The recent first image of a black hole was actually the result of machine learning image-processing work over petabytes of data collected continuously from many observatories over many days.
So, in short, now is the age of data, and studying any pattern is essentially becoming a part of data science. That answers why we need data science.

What are the different techniques and data science roles?

Data science, as you must have realized by now, is quite a vast and beautiful subject. Therefore, when we talk about techniques and roles, there is quite a lot of variation there too.
  1. One main part of data science is storing, saving, and designing the retrieval of data. For data storage, there are database, cloud, and DevOps techniques. Database techniques include building classical RDBMSs using MySQL, PostgreSQL, and/or other systems. But now, with the ever-increasing amount of unstructured data, more and more storage is done with NoSQL systems like Cassandra, MongoDB, and others like them.
    Personalization has become a mainstream use case for most businesses. It also drives decentralized storage of information and cloud computing that can serve billions of requests at a time. So a seasoned professional in the field of data storage is expected to know cloud storage techniques, warehouse formation, decentralization and clustering methods, and many other important technologies.
    Generally, a person with expertise in these fields is called a data engineer, senior data engineer, or data solutions architect, according to seniority and experience.

  2. The second major part of data science is creating small-scale analyses, insights, and visualizations of the basic data and its metrics, and building exciting dashboards. Generally, the tools used for this are Excel, Tableau, Power BI, Qlik, and other business analysis tools. Also, using Python and R, one can create a vast array of visualizations and answer almost any insight question.
    Professionals with these specific skills are generally known as data analysts or decision analysts; some companies put the same work under the title of business analyst.

  3. The final and most sought-after role in data science is that of the data scientist. A data scientist is expected to know everything a data analyst does. On top of that, data scientists are especially skilled in mathematics and statistics, and they generally build data models using machine learning and deep learning. They often specialize in domain knowledge and therefore lead modeling projects, creating special processes to efficiently collect, process, and pipeline the data.
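The modeling side of the data scientist's role can be sketched with a toy example: a one-nearest-neighbour classifier written from scratch in plain Python. The feature values and labels are invented; real projects would use libraries such as scikit-learn on far larger data.

```python
# A tiny from-scratch model: classify a new point by the label of its
# nearest training example. All values here are made up.
import math

train = [
    ((1.0, 1.0), "churn"),
    ((1.2, 0.8), "churn"),
    ((5.0, 5.2), "stay"),
    ((4.8, 5.0), "stay"),
]

def predict(point):
    """Return the label of the training example closest to `point`."""
    _, label = min(train, key=lambda pair: math.dist(point, pair[0]))
    return label

# predict((1.1, 0.9)) -> "churn"; predict((5.1, 5.1)) -> "stay"
```

Even this toy shows the division of labor in the role: the analyst's work ends at describing the data, while the scientist's work turns it into a function that makes decisions on unseen inputs.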

Clearly, these are the three main types of roles in the current data science industry. Beyond these, companies create other roles like decision analyst, risk analyst, decision scientist, and many other data-based positions. A large part of the banking sector still employs statisticians as data scientists, who build more statistics-inclined models rather than the mainstream machine learning models that mix statistics and computer science. These roles also often go by different names, but I guess you have understood the broad divisions here.
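To ground the storage-and-retrieval work of the data engineer role described above, here is a minimal sketch using SQLite from the Python standard library as a stand-in for a production RDBMS like MySQL or PostgreSQL. The table and column names are invented for the demo.

```python
# Storage and retrieval in miniature: an in-memory SQLite database
# standing in for a production RDBMS. Schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, action TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        (1, "view", "2020-01-01"),
        (1, "purchase", "2020-01-02"),
        (2, "view", "2020-01-02"),
    ],
)
conn.commit()

# Retrieval: how many actions of each kind were recorded?
rows = conn.execute(
    "SELECT action, COUNT(*) FROM events GROUP BY action ORDER BY action"
).fetchall()
# rows == [("purchase", 1), ("view", 2)]
```

In production the same GROUP BY pattern runs against far larger tables, with the warehouse and clustering setup, not the query author, handling scale and distribution.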

What are the different common data science problems in different industries?

Industries nowadays are using data science rampantly in every field, every day. In spite of that breadth, I will note down several categories and the common problems under each. Mainly, there are four categories of data science problems:
  • Numerical problems
  • Natural language processing
  • Sound processing
  • Image recognition

How does a general data science problem get solved? CRISP-DM

What are the prerequisites to become a data scientist?

What are the academic outlooks for data science?

Further links

Conclusion
