
How to find subjects and predicates using spaCy in German text?

 Introduction:

I have talked plenty about spaCy in English. But enough about English; what about, say, German? A fellow linguist of mine, who is not a native German speaker, wanted to analyze German texts to find subjects and predicates. In this post, I will introduce you to the German models, show how to download and use them, and then finish what my fellow linguist started. Gut, lass uns anfangen (good, let's get started)!

Download and load a German model:

One of the good things about spaCy is that switching languages does not significantly change how you work with it. For German, there are three models available among the spaCy pretrained pipelines: 'de_core_news_sm', 'de_core_news_md' and 'de_core_news_lg'. The suffixes sm, md and lg refer to small, medium and large respectively. For tasks where similarity is not needed and a lightweight model is preferable, de_core_news_sm is a good choice. For higher precision and reliable similarity operations, de_core_news_lg is the preferred model.

To download any of these language pipelines, use the following code:

$ python3 -m spacy download de_core_news_sm

You can replace de_core_news_sm with de_core_news_md or de_core_news_lg to download the medium or large model respectively.

Loading the model is as simple as loading any other spaCy model:

import spacy

nlp_de = spacy.load('de_core_news_sm')

will load the small German pipeline as the nlp_de object. You can then run this language object on any text, and the usual processing steps such as dependency parsing, NER tagging, POS tagging and lemmatization will be carried out.
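As a quick sanity check, here is a minimal sketch of inspecting what the German pipeline produces; the sentence is just an illustration:

import spacy

nlp_de = spacy.load('de_core_news_sm')

# "The apples have fallen from the tree."
doc = nlp_de('Die Äpfel sind vom Baum gefallen.')

for token in doc:
    # text, lemma, part-of-speech tag and dependency label for each token
    print(token.text, token.lemma_, token.pos_, token.dep_)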

Now, the more complex part of the question boils down to answering: what is a predicate? According to Wikipedia, there are currently two competing definitions. We will go through both and sketch how to extract a predicate using spaCy.

The first definition, from traditional grammar, says that a sentence has two parts, namely the subject and the predicate, the predicate being everything the sentence says about the subject.

In such a setup, finding the predicate is easy: all you have to do is find the subject of the sentence from the dependency parse, and you are done. The usual dependency tag for the main subject is 'nsubj', meaning nominal subject. So, according to the traditional definition, a token with the dependency tag 'nsubj', together with all the tokens dependent on it, forms the subject.

Let's quickly walk through an example in English, which you can then adapt for German.

Say we have the sentence 'the apples have fallen from the tree.' The subject is 'the apples' and the predicate is 'have fallen from the tree'. To get this from spaCy, we can proceed as below:

import spacy

nlp = spacy.load('en_core_web_sm')

text = 'the apples have fallen from the tree'

doc = nlp(text)

subject_elems = {}

for token in doc:
    if token.dep_ == 'nsubj':
        # keep the subject token itself
        subject_elems[token.i] = token
        # and every token that depends on it (e.g. the article 'the')
        for child in token.children:
            subject_elems[child.i] = child

# sort by position so the words come out in sentence order
items = sorted(subject_elems.keys())
subject = ' '.join(subject_elems[ind].text for ind in items)

print("the subject is:", subject)

The remaining tokens can then be collated into the predicate. The same approach carries over to German, with one caveat: the German pipelines are trained on the TIGER treebank and use its dependency labels, so the subject is tagged 'sb' rather than 'nsubj'.
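Here is a minimal sketch of the same logic on a German sentence; the sentence and the 'sb' label are based on the TIGER scheme, and it is worth inspecting token.dep_ on your own parses to confirm, since label inventories can vary across model versions:

import spacy

nlp_de = spacy.load('de_core_news_sm')

doc = nlp_de('Die Äpfel sind vom Baum gefallen.')

subject_elems = {}

for token in doc:
    if token.dep_ == 'sb':  # TIGER label for the subject
        subject_elems[token.i] = token
        for child in token.children:
            subject_elems[child.i] = child

subject = ' '.join(subject_elems[i].text for i in sorted(subject_elems))
print("the subject is:", subject)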

Now, the alternative definition of a predicate comes from modern grammar. In this view, the predicate is the main verb together with the auxiliary and modal verbs attached to it. Once again, we can solve the problem using the dependency tree: find the root verb, then find the auxiliary verbs that depend on it. The code for this is a bit more involved, but the template below will handle most cases:

import spacy

nlp = spacy.load('en_core_web_sm')

text = 'the guests have been entertained by the music.'

doc = nlp(text)

predicate_elems = {}

for token in doc:
    if token.dep_ == 'ROOT':
        # the root of the dependency tree is the main verb
        predicate_elems[token.i] = token
        root_index = token.i

# collect the auxiliary and passive-auxiliary verbs attached to the root
for child in doc[root_index].children:
    if child.dep_ in ('aux', 'auxpass'):
        predicate_elems[child.i] = child

items = sorted(predicate_elems.keys())
predicate = ' '.join(predicate_elems[ind].text for ind in items)

print("the predicate is:", predicate)

Clearly, the predicate is captured this way. In the example above, it comes out to be 'have been entertained'. The subject can be extracted exactly as we did before.
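For German, the picture changes slightly: in the TIGER scheme used by the de_core_news models, the finite auxiliary is typically the ROOT and the non-finite verbs hang off it in a chain via the 'oc' (clausal object) label. The sketch below follows that chain; the sentence and the label are assumptions worth verifying against your own parses:

import spacy

nlp_de = spacy.load('de_core_news_sm')

# "The guests have been entertained by the music."
doc = nlp_de('Die Gäste sind von der Musik unterhalten worden.')

predicate_elems = {}

for token in doc:
    if token.dep_ == 'ROOT':
        predicate_elems[token.i] = token
        # walk down the chain of verbal dependents ('oc' in TIGER)
        stack = [token]
        while stack:
            current = stack.pop()
            for child in current.children:
                if child.dep_ == 'oc':
                    predicate_elems[child.i] = child
                    stack.append(child)

predicate = ' '.join(predicate_elems[i].text for i in sorted(predicate_elems))
print("the predicate is:", predicate)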

Conclusion:

So, this is how we can extract subjects and predicates using spaCy in German as well as English texts, mainly by leveraging the dependency tree structure of the sentence. We explored both definitions of a predicate and provided code templates for the traditional as well as the modern grammar definitions. Thanks for reading! Please share the article if you like it, and do comment if you face similar questions in your work.

Stay tuned for more NLP and machine learning articles.
