
spaCy exploration part 3: spaCy data structures and pipelines

 Introduction:

We discussed dependency parsing in part 2 of the spaCy exploration series. Now, in part 3, we will explore some more NLP concepts through spaCy's pipelines and utilities. Let's dive in.

How does spaCy work internally?

spaCy uses every optimization it can to make processing as fast as possible. One of the main tricks is to hash all strings and convert them back to text as late as possible: fixed-width integer hashes take up less space and can be compared faster than strings in most operations. For this reason, all strings are stored as hash values, and the vocabulary's string store behaves like a two-way dictionary: given a hash you can look up the string, and given a string you can look up its hash. See the following example to get an idea of how hashing works:
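
Here is a minimal sketch of the two-way lookup (the example string "coffee" is just an illustration; the printed value is the 64-bit hash spaCy assigns):

import spacy

nlp = spacy.blank("en")
doc = nlp("I love coffee")

# string -> hash: look up the 64-bit hash of a string
coffee_hash = nlp.vocab.strings["coffee"]
print(coffee_hash)      # e.g. 3197928453018144401

# hash -> string: look up the string from its hash
coffee_string = nlp.vocab.strings[coffee_hash]
print(coffee_string)    # 'coffee'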


 

Now, let's go over the data structures of the main objects in spaCy. First we will create a doc object manually, to understand the real structure of a doc. For this, we can use the Doc class directly. Let's see how to create a doc manually in the next example:
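
A minimal sketch of manual doc creation (the sentence is just an illustration):

import spacy
from spacy.tokens import Doc

nlp = spacy.blank("en")

# The words, and whether each word is followed by a space
words = ["Hello", "world", "!"]
spaces = [True, False, False]

# Build the doc manually from the shared vocab
doc = Doc(nlp.vocab, words=words, spaces=spaces)
print(doc.text)   # 'Hello world!'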


 

Note how nlp.vocab is passed as the vocab argument, and the words and spaces lists provide the structure of the doc. Internally, spaCy works much the same way.

Next, we will see how Span objects work internally. Normally, we create a span by slicing the doc with start and end indices. To create one manually we do the same thing, but we instantiate the Span class directly. Let's see the code snippet to check how it is done.
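
A minimal sketch, reusing the manually built doc from above (the label "GREETING" is just an illustration):

import spacy
from spacy.tokens import Doc, Span

nlp = spacy.blank("en")
doc = Doc(nlp.vocab, words=["Hello", "world", "!"], spaces=[True, False, False])

# Span(doc, start, end, label): tokens 0 up to (but not including) 2
span = Span(doc, 0, 2, label="GREETING")
print(span.text)    # 'Hello world'

# Optionally register the span as an entity on the doc
doc.ents = [span]
print(doc.ents)     # (Hello world,)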


 

Next, let's look into spaCy pipelines. We have already seen how processing a text with nlp(text) produces all the entities and dependency tags, but we need to understand how the nlp pipeline is built internally for this to work.

In the nlp pipeline, a text goes through the tokenizer, tagger, parser and ner components one by one, and each embeds its results in the doc. First the text is tokenized by the tokenizer, and the doc object stores the resulting tokens. Then the part-of-speech tagger runs on the doc and fills in the token.pos_ tags. Next the dependency parser runs and fills in the dependency-related attributes. Finally, the ner component detects the entities and stores them in doc.ents, the entities list.
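
A quick sketch of these attributes being filled in, assuming the small English model en_core_web_sm is installed (the sentence is just an illustration):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup.")

# The tagger filled token.pos_ and the parser filled token.dep_
for token in doc:
    print(token.text, token.pos_, token.dep_)

# The ner component filled doc.ents
for ent in doc.ents:
    print(ent.text, ent.label_)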


 

In short, this is how an nlp pipeline performs all the operations on a piece of text. When we load a model, its meta.json file also contains this pipeline structure; when the model is loaded, spaCy reads the pipeline info from the meta file and builds the actual pipeline.

Now, finally, the question is: how do we know what the pipeline elements are, and can we inspect them from a program? The answer is yes, we can.

Using the pipe_names attribute, one can get the list of component names in the nlp pipeline, and using the pipeline attribute, one can get the list of (name, component) pairs that make up the pipeline.
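
For example (the exact component list printed depends on the model and spaCy version):

import spacy

nlp = spacy.load("en_core_web_sm")

# Names of the pipeline components, in order
print(nlp.pipe_names)
# e.g. ['tok2vec', 'tagger', 'parser', 'attribute_ruler', 'lemmatizer', 'ner']

# (name, component) tuples of the pipeline
print(nlp.pipeline)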

 

Also, to disable a pipe element one can use the disable parameter when loading a model, or call nlp.disable_pipe on a loaded one. For example, if you have ['tagger', 'parser', 'ner'] in the pipeline and you call nlp.disable_pipe("tagger"), then the tagger will not run in the pipeline.
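
A short sketch of both options (nlp.disable_pipe is available in spaCy v3):

import spacy

# Option 1: disable components while loading the model
nlp = spacy.load("en_core_web_sm", disable=["tagger"])

# Option 2: disable a component on an already loaded pipeline
nlp = spacy.load("en_core_web_sm")
nlp.disable_pipe("tagger")
print(nlp.pipe_names)   # 'tagger' no longer appears among the active pipes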

So far in this part, we have explored the structures and pipelines of spaCy. Next, let's see how to create custom pipeline elements and how to handle them.

First, let's learn to write custom pipeline components. There are many reasons to add them; among others, you can make your own function run automatically whenever you call nlp, add your own metadata to documents and tokens, or add custom entities.

To add a custom pipeline element, the add_pipe method is used. Below are the different parameters add_pipe accepts for positioning pipe elements:
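
A sketch of the four positioning arguments ("my_component" is an illustrative registered component name, and each call below is an alternative, not a sequence):

nlp.add_pipe("my_component", last=True)       # append at the end (the default)
nlp.add_pipe("my_component", first=True)      # insert at the beginning
nlp.add_pipe("my_component", before="ner")    # insert just before the 'ner' component
nlp.add_pipe("my_component", after="tagger")  # insert just after the 'tagger' component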


You can see there are two pairs of parameters: last and first, and before and after. The last and first parameters append the custom component at the end or the beginning of the nlp pipeline, while before and after insert the custom component immediately before or after a named pipeline element. Let's see some examples of creating and appending such custom components.
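
Here is a minimal sketch of such a component, one that counts the doc length in tokens (the name "length_component" is illustrative; @Language.component is the spaCy v3 way to register a component):

import spacy
from spacy.language import Language

@Language.component("length_component")
def length_component(doc):
    # Report the number of tokens, then pass the doc on unchanged
    print(f"This doc is {len(doc)} tokens long.")
    return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("length_component", first=True)
print(nlp.pipe_names)       # 'length_component' now comes first

doc = nlp("Hello world!")   # prints: This doc is 3 tokens long.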


In this example, the custom component, which counts the doc length in tokens, is added at the beginning of the pipeline. Let's see one more example of creating a custom component:
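
As a sketch of a component that adds custom entities, here is one possible version that matches a fixed list of animal names with a PhraseMatcher (the animal list, the label and the component name are illustrative):

import spacy
from spacy.language import Language
from spacy.matcher import PhraseMatcher
from spacy.tokens import Span

nlp = spacy.load("en_core_web_sm")

animals = ["Golden Retriever", "cat", "turtle"]
matcher = PhraseMatcher(nlp.vocab)
matcher.add("ANIMAL", [nlp.make_doc(text) for text in animals])

@Language.component("animal_component")
def animal_component(doc):
    # Create a Span labelled 'ANIMAL' for each match and overwrite doc.ents
    matches = matcher(doc)
    doc.ents = [Span(doc, start, end, label="ANIMAL")
                for match_id, start, end in matches]
    return doc

# Add the component after 'ner' so it can overwrite the model's entities
nlp.add_pipe("animal_component", after="ner")

doc = nlp("I have a cat and a Golden Retriever")
print([(ent.text, ent.label_) for ent in doc.ents])
# [('cat', 'ANIMAL'), ('Golden Retriever', 'ANIMAL')]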




Now that we know how to build custom pipeline components, we are done with this part. We will explore neural model training in the next part.
