
Dependency parsing using spaCy: spaCy exploration part 2


Introduction: 

In our previous post, we discussed the basics of NLP with spaCy. If you have not read that post yet, read it first for better context. Today we are going to discuss dependency parsing using spaCy. This is the second post of our spaCy exploration series.

What is dependency parsing?

Dependency parsing is the grammatical analysis of a sentence that establishes the dependency relations between "head" words and the words which modify those heads.

The end result of dependency parsing can be thought of as building a correct dependency tree and tagging each word with the correct dependency label. In this case, a dependency tree is a directed graph in which words are connected by arcs pointing from a head word to its dependent words. Look at the following dependency tree, taken from the Stanford NLP page, for example:

[Image: example dependency tree from the Stanford NLP page]

There has been a lot of research on dependency parsing, but I will not cover all of it here. As the title states, we are interested in the spaCy dependency parser.

According to this post written by Matthew Honnibal (the author of the spaCy package), spaCy uses a greedy transition-based dependency parser. The details of that algorithm are out of the scope of the current discussion.

Now let's discuss how to parse and analyze the dependency tree using spaCy.

Token-level dependency information available in spaCy:

spaCy abstractly models a dependency as an arc from a head to a child. Since spaCy internally uses transition-based dependency parsing, which works with terms like left-arc and right-arc, the spaCy library also refers to the edges from a head word to its dependent words as arcs.

In spaCy, the NLP pipeline contains the dependency parser by default. So unless you have explicitly disabled it, spaCy completes the dependency parsing when the doc object is created during processing.
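As a quick check (a minimal sketch, not from the original post), you can list the pipeline components to confirm the parser is present, and pass disable=["parser"] to spacy.load if you ever want to skip parsing:

import spacy

# the parser is part of the default pipeline of this model
nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)  # expect a list containing 'parser', among other components

# if dependency parsing is not needed, it can be disabled for speed
nlp_no_parser = spacy.load("en_core_web_sm", disable=["parser"])
print(nlp_no_parser.pipe_names)  # 'parser' should no longer appear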

Therefore, in your processed doc object, each token carries all of its dependency-related information as well as its children. Here, children refers to the direct syntactic dependents of that specific token in the text.

For reference, check the following code:



import spacy

text = "Google announced their 3rd office in India today."
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
for token in doc:
    # dependency label, head token, and direct children of each token
    print(token.text, token.dep_, "Head of this token is", token.head.text,
          [child for child in token.children])

On running, the above code prints each token's dependency label, its head token's text, and all of that token's children. You can change the text to any sentence you like and run it in your console. So this is how we can do dependency parsing using spaCy and inspect the basic token-level values.
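As a small illustration of how these labels can be used (a hedged sketch, not part of the original post), you can filter tokens by their dependency labels, for example to pick out the subject and direct object of the sentence:

# continuing with the doc object created above:
# pick out subjects and direct objects by their dependency labels
for token in doc:
    if token.dep_ in ("nsubj", "dobj"):
        print(token.dep_, "->", token.text, "(head:", token.head.text + ")")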

Now consider that the dependency tree is basically a tree data structure, so it can be traversed like one. Also, each subtree of the dependency tree is itself a tree, so you can traverse a subtree locally and analyze smaller structures within the sentence.

Ways to parse the dependency tree:

There are a number of attributes on the token object which help you traverse the dependency tree. Other than analyzing the tree itself, I have not seen many practical uses of these yet, so I will list them here and only give a short combined sketch after the list; for the respective code snippets, you can visit the official site. The important attributes are:

(a) token.lefts and token.rights:

these attributes give you the token's children that appear to the left of the token in the sentence and those that appear to its right. You can directly get their counts using the token.n_lefts and token.n_rights attributes.

(b) token.subtree:

this directly gives you the subtree rooted at the current token, i.e. the current token, its children, their children, and so on. As a returned value, this attribute yields an ordered sequence of the tokens in that subtree.

(c) There are also a few other attributes, such as token.ancestors (to walk up the dependency tree from the current token) and token.left_edge and token.right_edge (the first and last tokens of the token's subtree).
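Here is a minimal sketch (reusing the doc object from the earlier snippet) of how these traversal attributes can be inspected; the exact children and labels depend on the model version:

# inspect traversal attributes for each token of the parsed doc
for token in doc:
    print(token.text)
    print("  lefts:", [t.text for t in token.lefts], "n_lefts:", token.n_lefts)
    print("  rights:", [t.text for t in token.rights], "n_rights:", token.n_rights)
    print("  subtree:", [t.text for t in token.subtree])
    print("  ancestors:", [t.text for t in token.ancestors])
    print("  edges:", token.left_edge.text, "to", token.right_edge.text)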

Visualize the dependency tree:

Finally, to manually analyze and understand the output of spaCy's dependency parsing, you can also visualize the tree. For this you will use displacy from spaCy. The sample code is below:


import spacy
from spacy import displacy

text = "displaCy uses JavaScript, SVG and CSS."
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
displacy.render(doc, style="dep")

This creates the following kind of picture of the dependency tree!


[Image: dependency tree visualization rendered by displaCy. Pic credit: Wikipedia]
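If you run this outside a Jupyter notebook, displacy.render returns the markup as a string when jupyter=False, so you can also save the drawing to a file (a small sketch; the file name here is just an example). Alternatively, displacy.serve(doc, style="dep") starts a local server that shows the visualization in the browser.

from pathlib import Path

# render to an SVG string instead of displaying inline, then save it to disk
svg = displacy.render(doc, style="dep", jupyter=False)
Path("dependency_tree.svg").write_text(svg, encoding="utf-8")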

In conclusion, we went over a brief definition and description of dependency parsing, the algorithm spaCy uses under the hood, and finally explored useful code as well as a visualization snippet for seeing and using the dependency tree and the dependency labels it creates. Thanks for reading, and follow the blog for upcoming spaCy exploration posts!

Further readings:

(1) How to manipulate and create spacy's pipeline and custom pipelines

(2) How to train neural network models using spacy
