
How to write CSV files in Python


Introduction:

In the day-to-day work of data analysts and scientists, you often need to write out your results and present them in a clean, neat CSV or Excel file. In this article we will discuss how to write CSV files from Python scripts. This is important not only for data science but for anyone who has to present data on a regular basis.

The basic option: the pandas.DataFrame.to_csv method

This option uses the pandas library. If you have not been introduced to pandas yet, read about pandas here first. When you have a data table in your environment that you want to save, the best way is to put it into a DataFrame and then use the to_csv method of the pandas DataFrame to save it to a specific file. Normal use looks like this:
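Here is a minimal sketch of the marksheet example described below; the column names, student names, and marks are made up purely for illustration:

import pandas as pd

# create an empty dataframe, then fill in one column at a time
marksheet = pd.DataFrame()
marksheet['name'] = ['student_1', 'student_2', 'student_3', 'student_4']
marksheet['marks'] = [85, 72, 91, 64]

# save it without the index column
marksheet.to_csv('marksheet.csv', index=False)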

To make the idea clear, the general format for calling to_csv on a dataframe is:
dataframe.to_csv(destination_file_path_as_string, index=False)
The above example shows how you would save marks for 4 students. We first create an empty dataframe, then store each column under a specific name, and finally save the result as marksheet.csv with no index. The file gets stored in the local directory of the script, which in this case was the home directory, and there we find marksheet.csv, which looks like the following:
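With the hypothetical marks from the sketch above, the saved file would contain:

name,marks
student_1,85
student_2,72
student_3,91
student_4,64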

With this process covered, we can move on to a more dynamic way to store CSV files.

Row-by-row CSV writing:

When writing a CSV file, the data often arrives row by row and we need to store it that way. For this we use Python's file-handling code together with the built-in csv module. The general format is:

import csv

# write the header row once, in write mode
with open(filename, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(list_columns)

# append one data row at a time
for i in range(n):
    with open(filename, 'a', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(row_list[i])

Now I will break down the above code so that you can modify the necessary parts and use it. First of all, to write a CSV we import the csv module. The csv module is part of Python's standard library, so it never needs to be installed. If you see a ModuleNotFoundError for csv, your Python installation itself is likely broken; and if csv imports but attributes like csv.writer are missing, the usual culprit is a file named csv.py in your own project shadowing the standard module.

Once csv is imported properly, the second crucial step is to open the file in write mode and then create a writer object to write the CSV file. A writer object has a writerow method, which is what lets us write one row at a time. Note that the with statement closes the file automatically when the block ends, so no explicit close call is needed.

You will need to change the final looping part of the code. In my example the loop pulls rows from an already created row_list, but in practice the right approach is to create each row inside the loop itself, so that the code keeps saving your data as the latest calculations finish. A sketch of that pattern follows.
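Here is a minimal sketch of that pattern; compute_row is a hypothetical function standing in for whatever calculation actually produces each row:

import csv

def compute_row(i):
    # hypothetical placeholder for your real calculation
    return [i, i ** 2]

# write the header once
with open('results.csv', 'w', newline='') as f:
    csv.writer(f).writerow(['input', 'output'])

for i in range(100):
    row = compute_row(i)  # create the row inside the loop
    with open('results.csv', 'a', newline='') as f:
        # each row hits the disk immediately, so it survives a later crash
        csv.writer(f).writerow(row)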

This kind of dynamic CSV creation matters most when you are running a lot of calculations together. In such cases your row creation may stop midway because of an exception or a bug, and having every previous output already stored on disk lets you inspect the script properly and saves you from recomputing everything.

Conclusion:

I have covered the two most common and useful methods for writing CSV files from a Python script. Although these should handle most cases, you may run into situations where they do not work out. In those cases, try implementing a more low-level version of the file I/O shown in the second method and build your custom code from there. If you have any queries or comments, please do comment below. Thanks for reading! If you liked it, please share and subscribe to my blog!
