Basic commands in the Ubuntu console and EC2 instances


Introduction:

For the last couple of days I have been working on EC2 instances, and I have noticed that there is no good question-and-answer resource for them. So in this blog post I will note down some of the common but useful commands you can use on EC2 instances while working on AWS cloud machines.

Ubuntu-specific basic commands:

1. ls
ls is used to list the files inside a directory.
With the -lh flags (ls -lh), you also get each file's permissions, owner, size in human-readable units, and last-modified time.
To list only files of a specific type, use the * wildcard followed by the extension, i.e.
ls -lh *.csv to see only CSV files.
This extension pattern is very handy for finding the files you need.
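As a minimal sketch (assuming a hypothetical file data.csv exists in the current directory), a run looks like this:

$ ls -lh *.csv
# sample output; columns are permissions, links, owner, group, size, last-modified time, name
# -rw-rw-r-- 1 ubuntu ubuntu 1.2G Mar 10 12:34 data.csv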
2. cd <DIR>
This command moves you into a sub-directory of the current directory; if you give it a longer path, it takes you straight to the directory at the end of that path.
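Both absolute and relative paths work; a quick sketch (directory names are hypothetical):

$ cd /home/user_name/dir1    # absolute path
$ cd dir2                    # relative path, from inside dir1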
3. cd ..
This command takes you out of the current directory and into its parent directory. i.e. if you have a structure like:
real_files > small_files
           > big_files
           > insanely_big_files
and you are currently inside small_files and want to go back to real_files, then you can write
cd ..
and bam! you are back in real_files.
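To see this in action, pwd (print working directory) shows where you are before and after; the paths below are hypothetical:

$ pwd
/home/user_name/real_files/small_files
$ cd ..
$ pwd
/home/user_name/real_files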
4. df
df is used to see how much disk space is left on the machine's storage volumes (note that it reports disk space, not RAM). Sometimes, if your machine has multiple users, you will not have a good sense of how much space has been consumed in total. Even on a 256 GB machine, if the volume has no space left, the programs you run there will get stuck. Therefore it is always important to keep an eye on total disk usage, and a good related practice is to move large files to an S3 bucket once you are done processing them.
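In practice the -h flag makes the numbers readable:

$ df -h
# -h prints sizes in human-readable units (G, M)
# check the "Avail" and "Use%" columns for the filesystem mounted on /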

5. mv <source> <destination>
mv is used to rename a file or to move it from one folder to another. For example, if you want to move the file example.csv from dir1 under home to dir2 under home, the following command will do:
mv /home/user_name/dir1/example.csv /home/user_name/dir2/example.csv
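The same command renames a file in place when source and destination sit in the same directory (the new filename below is hypothetical):

$ mv /home/user_name/dir1/example.csv /home/user_name/dir1/example_2017.csv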

6. htop
The htop command is used to monitor CPU usage, memory, and which processes are running. When you type htop, a full-screen monitor opens in your console, where every core of your CPU is shown with a usage bar that shifts from green to red as that core's load increases. Press q (or F10, the Quit option in the bottom menu) to leave the monitoring screen.
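htop is not always preinstalled on minimal Ubuntu images; installing it through apt and launching it looks like this:

$ sudo apt-get install htop    # one-time install, if the command is missing
$ htop                         # press q or F10 to quit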

7. vi file_name
To open a file for viewing or editing, you use this command. On most Ubuntu machines, vi actually opens the vim editor. The file opens in the console window; you can close it without saving by pressing Esc and typing :q!, or save and close by pressing Esc and typing :wq.
To edit the file, you need to enter insert mode, which is done by pressing i (the --INSERT-- indicator appears at the bottom of the screen).
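As a quick reference, the whole edit-and-save cycle looks like this (the filename is hypothetical; these are standard vim keybindings):

$ vi script.py
# i     -> enter insert mode and start typing
# Esc   -> return to normal mode
# :wq   -> save and quit
# :q!   -> quit without saving changes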

AWS and S3 specific commands:

8. aws s3 cp <source> <destination>
This format is used to copy data from S3 to an EC2 instance or from EC2 to S3. It is always worth using the EC2-and-S3 combination, because the network bandwidth between them is much higher than the bandwidth from your local machine to either of them, which is limited by your office or home internet connection. Within AWS, transfers between EC2 and S3 can reach speeds of several hundred Mbps.
In this format, <source> is the path of the file you are copying from, and <destination> is the path (and name) the file will get once it arrives in the other place.
So for example, say you have a file named example.csv under the path /home/user_name/dir1/dir2 in your EC2 instance. Now you want to send it to the folder bucket2 inside the bucket bucket_1 on S3, renaming it reward_policy_2017.csv.
Your command will then look like:

aws s3 cp /home/user_name/dir1/dir2/example.csv s3://bucket_1/bucket2/reward_policy_2017.csv

The special s3:// prefix on the bucket path follows AWS's internal URI rules, so that the path is recognized as a valid S3 location by the CLI. We will describe those details in another post.
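Copying in the other direction just swaps the arguments, and aws s3 ls can then verify the object exists (the paths are the hypothetical ones from above):

$ aws s3 cp s3://bucket_1/bucket2/reward_policy_2017.csv /home/user_name/dir1/dir2/example.csv
$ aws s3 ls s3://bucket_1/bucket2/    # list the folder's contents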

9. tmux sessions
Not all programs start and finish within a minute, especially the ones you take to the cloud. The issue with running programs on the cloud is that if your screen goes off, the console closes, or you lose your internet connection, the program shuts down too. You get an irritating message like broken pipe: <ip address>, informing you that the remote machine has been disconnected from your local one.
The thing is, your important programs often run for hours, while you need to turn off your screen, your console, or even your local machine. That is where tmux comes in.
tmux can also be used on local machines, but it is a life saver on EC2 instances. What you do is open a tmux session, which is essentially another console that keeps running in the background of the current one; the main advantage of tmux is that even if you turn off your local machine, the tmux session keeps running and completes the operation. The basic tmux commands are the following:
(1) to open a new tmux session:
$ tmux new -s session_name
(2) to attach to an already created tmux session:
$ tmux attach -t session_name
(3) to close the tmux session screen but not the session itself:
press [Ctrl] + [B] together, then d to detach
(4) to exit a tmux session altogether:
type exit inside the session and the window will close.
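If you forget which sessions are already running, tmux can list them:

$ tmux ls    # shows the name and window count of every open session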

These are some of the very basic commands I use on a regular basis when working on EC2 instances. Comment below if you want to know something specific or want me to include particular commands.
