Phil

Reputation: 1

Deduplication of text for a large corpus

I have a large csv file with about 7000 rows (files) of text entries, consisting of the following columns:

filename title text author year

0 latin_xmls\10.xml De facto Ungarie magne1236\n \n\nDe facto Ungarie magne\n\na fratre Riccard... Riccardus OFM fl.1236

1 latin_xmls\100.xml De trinitate \n\n ARGUMENTUM.\n\n\n #Dividitur tract... Novatianus fl. 260

2 latin_xmls\10000.xml De quadratura circuli \n\n\n NOTA.\nDiximus falso videri Be... Franco Leodiensis NaN

....

The csv file is the scraped corpus corporum for Latin and from Kaggle actually: https://www.kaggle.com/datasets/yaustal/latin-literature-dataset-170m?select=latin_raw.csv

I have noticed that text is repeated, as in the following example (real cases are much longer):

"In triginta et unum caput. Primum de iis verbis

regulae veritatis, seu fidei (quam Symbolum vocamus)

commentatur, quae nos credere jubent in Deum Patrem

et Dominum omnipotentem, rerum omnium perfectissimum

creatorem.

Deum nostrum. Christum in Veteri Testamento

promissum verum hominem, verumque Deum

esse, Scripturam Veteris Novique Foederis auctoritate

probat; capite 18 errorem Sabellianorum refutat,

et auctoritate SS.

In triginta et unum caput. Primum de iis verbis

regulae veritatis, seu fidei (quam Symbolum vocamus)

commentatur, quae nos credere jubent in Deum Patrem

et Dominum omnipotentem, rerum omnium perfectissimum

creatorem.

Deum nostrum. Christum in Veteri Testamento

promissum verum hominem, verumque Deum

esse, Scripturam Veteris Novique Foederis auctoritate

probat; capite 18 errorem Sabellianorum refutat,

et auctoritate SS."

This seems to be the case for many texts in my "text" column, so I am assuming it holds for all rows. In any case, I would like to chop off the repeated parts that come after the unrepeated text. Any idea how I can do this efficiently? The data set supposedly has 170 million words, so efficiency is important.
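Since the example shows an exact, verbatim copy of the text appended to itself, one way to avoid pairwise similarity scoring entirely is to take a short probe from the start of the text, locate its next occurrence with `str.find`, and verify that everything from that point onward re-reads the beginning of the text. This is a sketch under that assumption (the function name and `probe_len` parameter are mine, not from any library); it only catches trailing copies at least `probe_len` characters long:

```python
def strip_trailing_repeat(text: str, probe_len: int = 64) -> str:
    """If the text ends with an exact copy of its own beginning, drop the copy.

    The probe search and the verification are C-level string operations,
    so this is close to linear in practice, unlike per-prefix
    SequenceMatcher calls.
    """
    # The repeated copy can be at most half the text, so the probe
    # never needs to be longer than that.
    probe = text[:min(probe_len, len(text) // 2)]
    if not probe:
        return text  # too short to contain a repeat

    start = 1
    while (idx := text.find(probe, start)) != -1:
        # A cut at idx is valid iff the tail from idx onward duplicates
        # the head of the text character for character.
        if text[idx:] == text[: len(text) - idx]:
            return text[:idx].rstrip()
        start = idx + 1  # probe matched by coincidence; keep scanning
    return text
```

This also handles a text repeated three or more times, because after the first cut the remaining tail still duplicates the head. Texts with no trailing repeat come back unchanged.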

I also have a pre-processed version: either the text of all those rows concatenated into one string, or a corpus consisting of a list of all sentences, if that format is easier to work with:

concatenated_preprocessed_text:

["de facto ungarie magne a fratre riccardo inuento tempore domini gregorii rico, pape noni. inuentum fuit in gestis ungarorum christianorum, quod esset alia ungaria maior, de qua septem duces cum populis suis egressi fuerant, ut habitandi quererent sibi locum, eo quod terra ipsorum multitudinem inhabitantium sustinere non posset. ....]

corpus_list:

['de facto ungarie magne a fratre riccardo inuento tempore domini gregorii rico, pape noni.',
 'inuentum fuit in gestis ungarorum christianorum, quod esset alia ungaria maior, de qua septem duces cum populis suis egressi fuerant, ut habitandi quererent sibi locum, eo quod terra ipsorum multitudinem inhabitantium sustinere non posset.',
....]

Another option would be to remove all doubled sentences; however, this is not necessarily desirable, because I want to train a language model and some sentences may naturally occur more than once, especially short ones.
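The same "tail duplicates head" idea can be applied at sentence granularity on the corpus_list format, which sidesteps that concern: instead of deleting every repeated sentence, it only cuts a trailing block that replays the start of the document, and a minimum run length keeps a single naturally recurring sentence from triggering a cut. A sketch (function name and `min_run` are mine):

```python
def strip_repeated_tail(sents: list[str], min_run: int = 3) -> list[str]:
    """Cut a trailing block of sentences that duplicates the document's start.

    Requires the repeated block to span at least `min_run` sentences, so
    isolated natural repeats (e.g. short formulaic sentences) survive.
    """
    n = len(sents)
    # idx is the candidate start of the repeated block; the block has
    # n - idx sentences, which must be at least min_run.
    for idx in range(1, n - min_run + 1):
        if sents[idx:] == sents[: n - idx]:
            return sents[:idx]
    return sents
```

The list comparisons make this quadratic in the worst case, but per document the sentence count is small, so it should stay cheap next to the 170-million-word total.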

Any ideas? Thanks in advance! I will keep looking for solutions.

I have tried the following, but the computational complexity is far too high, and I am not sure the approach entirely makes sense:

import pandas as pd
from difflib import SequenceMatcher

def remove_repetitions(text):
    repeated_text = ""
    max_similarity = 0.95  # threshold; raised whenever a better split is found

    # Try every possible split point i and compare the prefix text[:i]
    # with the same-length window that follows it.  Each ratio() call is
    # itself quadratic in i in the worst case, so the whole loop is
    # roughly cubic -- this is where the running time explodes.
    for i in range(1, len(text)):
        substring1 = text[:i]
        substring2 = text[i:i * 2]

        similarity = SequenceMatcher(None, substring1, substring2).ratio()

        if similarity > max_similarity:
            repeated_text = substring1
            max_similarity = similarity

    # repeated_text is a prefix of text, so this drops the first copy
    # and keeps the second (the surviving content is identical).
    cleaned_text = text.replace(repeated_text, "", 1)

    return cleaned_text.strip()

def remove_repetitions_in_dataframe(df):
    df['cleaned_text'] = df['text'].apply(remove_repetitions)
    return df

I have already removed duplicate files that had exactly the same text in the "text" column (7 files in total).
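For reference, that exact-duplicate step is a one-liner in pandas via `drop_duplicates` on the "text" column; a minimal sketch with a toy frame standing in for the real CSV (column names taken from the question):

```python
import pandas as pd

# toy frame standing in for the real CSV
df = pd.DataFrame({
    "filename": ["latin_xmls/10.xml", "latin_xmls/100.xml", "latin_xmls/10000.xml"],
    "text": ["alpha", "beta", "alpha"],  # rows 0 and 2 are exact duplicates
})

# keep the first occurrence of each distinct text, drop the rest
deduped = df.drop_duplicates(subset="text", keep="first").reset_index(drop=True)
```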

Upvotes: 0

Views: 77

Answers (0)
