pandas groupby functions usage and examples

 Introduction:


Pandas is one of the most fundamental data-processing libraries that data enthusiasts learn and use frequently. We discussed the 10 most basic pandas functions to know in a previous post. Although I have known and used groupby for quite some time, there are many tricky behaviors and actions around groupby worth learning, so that you can get the most out of it.

The basics:

Now, if you are new to pandas, let's go over the groupby basics first. groupby() is a method that groups the data by one or more columns so that other columns can be aggregated within each group. The usual syntax is:
pandas.DataFrame.groupby(columns).aggregate_functions()

For example, suppose you have credit card transaction data for customers, with one row per transaction per day. Now, you want to know how much is transacted each day. In that case, you group the data at the day level and then sum up the transactions.

Let's say our hypothetical dataset has the columns customer_id, date, and transaction_value. To see each customer's daily transactions, we group by customer_id and date and sum transaction_value, i.e.

data = data.groupby(['customer_id', 'date'])['transaction_value'].sum()

will give the transaction value summed at a per-customer, per-day level. Now that you know how groupby works normally, let's see the different functions we can use with groupby and how each of them works.
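To make this concrete, here is a minimal sketch with a small made-up dataset (the column names follow the hypothetical example above; the values are invented for illustration):

```python
import pandas as pd

# Hypothetical credit card data: one row per transaction.
data = pd.DataFrame({
    'customer_id': [1, 1, 1, 2, 2],
    'date': ['2024-01-01', '2024-01-01', '2024-01-02',
             '2024-01-01', '2024-01-02'],
    'transaction_value': [10.0, 20.0, 5.0, 7.0, 3.0],
})

# Sum transaction_value for each customer on each day.
daily = data.groupby(['customer_id', 'date'])['transaction_value'].sum()
print(daily)
```

The result is a Series indexed by (customer_id, date); customer 1's two transactions on 2024-01-01 collapse into a single row with value 30.0.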

aggregate functions:

When using groupby, you can apply a number of different aggregate functions to the grouped columns. A few of them are:

(1) mean(): take the average of all the values

(2) sum(): take the sum of all the values

(3) first(): take the first entry of all the values

(4) last(): take the last entry of all the values
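The four aggregates above can be sketched side by side on a tiny made-up dataframe (group and value are illustrative column names, not from the original post):

```python
import pandas as pd

df = pd.DataFrame({
    'group': ['a', 'a', 'b'],
    'value': [1, 3, 10],
})

g = df.groupby('group')['value']
print(g.mean())   # average of the values in each group
print(g.sum())    # sum of the values in each group
print(g.first())  # first value seen in each group
print(g.last())   # last value seen in each group
```

For group 'a' (values 1 and 3), mean() returns 2.0, sum() returns 4, first() returns 1, and last() returns 3.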

A more comprehensive list of groupby aggregate functions is available in the pandas documentation. Such aggregate functions follow this simple syntax:

dataframe.groupby([columns_to_group_on]).aggregate_function()

Here, in place of aggregate_function we use an aggregate function like mean, sum, or first. Note that once you do this, the columns you group on become the index of the result. In many cases you will not want that, since you can no longer reference those columns as normal dataframe columns. This is solved very easily with reset_index(), which turns the index back into regular columns. Therefore, the safe syntax is:

df = dataframe.groupby([columns_to_group_on]).aggregate_function().reset_index()

where df is the new grouped dataframe that is returned.
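The difference reset_index() makes can be sketched as follows (a minimal example with made-up data, reusing the column names from earlier):

```python
import pandas as pd

df = pd.DataFrame({
    'customer_id': [1, 1, 2],
    'transaction_value': [10, 20, 5],
})

grouped = df.groupby(['customer_id']).sum()
# 'customer_id' is now the index, not a regular column.
assert 'customer_id' not in grouped.columns

flat = df.groupby(['customer_id']).sum().reset_index()
# After reset_index(), 'customer_id' is an ordinary column again.
assert 'customer_id' in flat.columns
print(flat)
```

With reset_index(), expressions like flat['customer_id'] work as you would expect on any ordinary dataframe.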

We will discuss more advanced grouping styles in a follow-up post.
