
Binary search tree

A binary search tree is a tree structure defined by the following rules:
(1) for every node in the tree, the values in its left subtree are less than its own value and the values in its right subtree are greater than its own value.
(2) there are no duplicate values.
Searching for a value in a binary search tree (and, as we will see, building one) works much like the bisection method: we start at the root; if the value we are looking for is less than the root's value we continue the search among the smaller values, i.e. in the left subtree, and if it is greater we continue among the larger values, i.e. in the right subtree.
In pseudocode, the search looks like this:
search(value, node)
{
    if (node == NULL)
    {
        print(value not found)
    }
    else if (value > node->data)
    {
        search(value, node->right)
    }
    else if (value < node->data)
    {
        search(value, node->left)
    }
    else    // value == node->data
    {
        print(value is found)
    }
}
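As a side note, the same search can also be written without recursion. Below is a minimal iterative sketch in C; the node struct here is the same stick struct used in the full program further down (repeated so the sketch stands on its own), and the function name search_iterative is only illustrative:

#include <stdio.h>

typedef struct node {
    int data;
    struct node *left;
    struct node *right;
} stick;

/* Walk down from the root, going left or right by comparison,
   until the value is found or we fall off the tree. */
void search_iterative(int value, stick *node)
{
    while (node != NULL) {
        if (value == node->data) {
            printf("%d is found\n", value);
            return;
        }
        /* move to the side that could contain the value */
        node = (value < node->data) ? node->left : node->right;
    }
    printf("%d is not found\n", value);
}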
A minute of thought shows that the same algorithm can be used to build a binary search tree: when the search hits NULL, we create a node at that position with the required value and set its left and right pointers to NULL.
Below, I have implemented the binary search tree creation and search functions.
Try it yourself first; then you can take a hint from the implemented code.
#include <stdio.h>
#include <stdlib.h>

typedef struct node{
    int data;
    struct node *left;
    struct node *right;
}stick;

/* allocate and initialise a new leaf node */
stick *createnode(int data)
{
    stick *newNode;
    newNode = (stick *)calloc(1, sizeof(stick));
    newNode->data = data;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}
/* operation == 0: search only; operation == 1: search and insert if absent.
   Returns the (possibly newly created) root of this subtree. */
stick *binarysearchAndentry(int value, stick *node, int operation)
{
    if (node == NULL)
    {
        if (operation == 0)
        {
            printf("%d is not in the tree\n", value);
            return NULL;
        }
        /* the search fell off the tree: this is where the value belongs */
        printf("%d is entered in appropriate position\n", value);
        return createnode(value);
    }
    if (value == node->data)
    {
        printf("%d is found in the tree\n", value);
    }
    else if (value < node->data)
    {
        /* value belongs somewhere in the left subtree */
        node->left = binarysearchAndentry(value, node->left, operation);
    }
    else
    {
        /* value belongs somewhere in the right subtree */
        node->right = binarysearchAndentry(value, node->right, operation);
    }
    return node;
}

int main(void) {
    stick *root;
    /* build the tree by repeated insertion (operation = 1) */
    root = binarysearchAndentry(20, NULL, 1);
    binarysearchAndentry(30, root, 1);
    binarysearchAndentry(10, root, 1);
    binarysearchAndentry(50, root, 1);
    binarysearchAndentry(25, root, 1);
    /* print a few nodes to check the shape of the tree */
    printf("%d\n", root->data);
    printf("%d\n", root->left->data);
    printf("%d\n", root->right->data);
    printf("%d\n", root->right->right->data);
    printf("%d\n", root->right->left->data);
    return 0;
}
Output of the run:
20 is entered in appropriate position
30 is entered in appropriate position
10 is entered in appropriate position
50 is entered in appropriate position
25 is entered in appropriate position
20
10
30
50
25
So this is the function that creates the binary search tree, searches it, and appends new values.
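For completeness, the same function works in search mode when operation is 0. These calls are not part of the run above; assuming the tree built in main, they would behave like this:

    /* search mode: operation == 0, the tree is not modified */
    binarysearchAndentry(25, root, 0);   /* prints: 25 is found in the tree */
    binarysearchAndentry(99, root, 0);   /* prints: 99 is not in the tree */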
I have recently started to compile a collection of solutions to HackerEarth tree questions. Here is the GitHub link for them below:
github link.
