Reputation: 649
I have 2 datasets, each with several columns, but I want to use only 2 columns from each dataset, without doing any join, merge, or other combination of the two datasets.
Example dataset 1:

column_dataset_1 <String> | column_dataset_1_normalized <String>
--------------------------+--------------------------------------
11882621-V021BRP161305-1  | 11882621V021BRP1613051
W-B.7120RP1605794         | WB7120RP1605794
D/57RP.1534421            | D57RP1534421
125858G_022BR/P070751     | 125858G022BRP070751
300B.5190C57/51507        | 300B5190C5751507
Example dataset 2:

column_dataset_2 <String>                                                            | column_dataset_2_normalized <String>
-------------------------------------------------------------------------------------+-------------------------------------------------------------------------
Por ejemplo, si W-B.7120RP1605794se trata de un archivo de texto,                    | PorejemplosiWB7120RP1605794setratadeunarchivodetexto
se abrirá en un programa de procesamiento de texto.                                  | seabrirenunprogramadeprocesamientodetexto
                                                                                     |
utilizados 125858G_022BR/P070751 frecuentemente (por ejemplo, un texto que describe  | utilizados125858G022BRP070751frecuentementeporejemplountextoquedescribe
column_dataset_1_normalized is the result of normalizing column_dataset_1, and column_dataset_2_normalized is the result of normalizing column_dataset_2.
I want to compare each value of column_dataset_1_normalized and check whether it exists in column_dataset_2_normalized. If yes, I should extract the original value from column_dataset_2.
Example: WB7120RP1605794 is on the second line of column_dataset_1_normalized and exists on the first line of column_dataset_2_normalized, so I should extract its real value [W-B.7120RP1605794] from column_dataset_2 and store it in a new column in dataset 2. And the same for 125858G022BRP070751: it is on the fourth line of column_dataset_2_normalized, so I should extract it from column_dataset_2 as [125858G_022BR/P070751].
The comparison should take the values of column_dataset_1_normalized one by one and search for each of them in all the cells of column_dataset_2_normalized.
For the normalization I used this code to keep only numbers and letters:
from pyspark.sql import functions as F

df = df.withColumn(
    "column_normalized",
    F.regexp_replace(F.col("column_to_normalize"), "[^a-zA-Z0-9]+", ""))
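For context, here is a minimal, self-contained sketch of how the two example dataframes above can be built and normalized (assuming a standard SparkSession; the rows are the example values from the tables):
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# dataset 1: the reference values
df1 = spark.createDataFrame(
    [("11882621-V021BRP161305-1",), ("W-B.7120RP1605794",),
     ("D/57RP.1534421",), ("125858G_022BR/P070751",), ("300B.5190C57/51507",)],
    ["column_dataset_1"])

# dataset 2: free text that may embed one of the values
df2 = spark.createDataFrame(
    [("Por ejemplo, si W-B.7120RP1605794se trata de un archivo de texto,",),
     ("se abrirá en un programa de procesamiento de texto.",),
     ("",),
     ("utilizados 125858G_022BR/P070751 frecuentemente (por ejemplo, un texto que describe",)],
    ["column_dataset_2"])

# keep only letters and digits, as described above
df1 = df1.withColumn("column_dataset_1_normalized",
                     F.regexp_replace("column_dataset_1", "[^a-zA-Z0-9]+", ""))
df2 = df2.withColumn("column_dataset_2_normalized",
                     F.regexp_replace("column_dataset_2", "[^a-zA-Z0-9]+", ""))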
Can someone suggest how I can do this? Thank you.
Upvotes: 0
Views: 411
Reputation: 14008
There are various ways to join the two dataframes:
(1) find the location/position of the string column_dataset_1_normalized in column_dataset_2_normalized by using a SQL function like locate, instr or position; each returns a (1-based) position if the string exists:
from pyspark.sql.functions import expr
cond1 = expr('locate(column_dataset_1_normalized,column_dataset_2_normalized)>0')
cond2 = expr('instr(column_dataset_2_normalized,column_dataset_1_normalized)>0')
cond3 = expr('position(column_dataset_1_normalized IN column_dataset_2_normalized)>0')
(2) use the regex operator rlike to find column_dataset_1_normalized in column_dataset_2_normalized; this is only valid when no regex meta-characters appear in column_dataset_1_normalized:
cond4 = expr('column_dataset_2_normalized rlike column_dataset_1_normalized')
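Here the normalized values contain only letters and digits by construction, so this caveat does not bite. If your data could ever contain regex meta-characters, one possible guard (an untested sketch, not part of the original answer, relying on Java-regex \Q...\E literal quoting) is:
# \Q...\E makes the regex engine treat the value literally; the doubled
# backslashes survive the SQL string-literal parsing inside expr()
cond4_safe = expr(r"column_dataset_2_normalized rlike concat('\\Q', column_dataset_1_normalized, '\\E')")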
Run the following code with one of the above conditions, for example:
df1.join(df2, cond1).select('column_dataset_1').show(truncate=False)
+---------------------+
|column_dataset_1 |
+---------------------+
|W-B.7120RP1605794 |
|125858G_022BR/P070751|
+---------------------+
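If you also want to keep every row of dataset 2 and store the matched value in a new column (as the question asks), a left-join variant should work; a sketch (the alias extracted_value is just an illustrative name):
from pyspark.sql import functions as F

# left join keeps all rows of df2; rows with no match get null
df2.join(df1, cond1, 'left') \
   .select('column_dataset_2',
           F.col('column_dataset_1').alias('extracted_value')) \
   .show(truncate=False)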
Edit: Per the comments, the matched sub-string might not be the same as df1.column_dataset_1, so we need to reverse-engineer the sub-string from the normalized string. Based on how the normalization is conducted, the following udf might help (notice it will not cover any leading/trailing non-alnum characters that might belong to the matched value). Basically, we iterate through the original string character by character, record the position of each alphanumeric character, find the start/end index of the normalized string, and then take the corresponding sub-string:
from pyspark.sql.functions import udf

@udf('string')
def find_matched(orig, normalized):
    # collect the alnum chars of `orig` and their positions in `orig`
    n, d = [], []
    for i in range(len(orig)):
        if orig[i].isalnum():
            n.append(orig[i])
            d.append(i)
    # locate the normalized string inside the alnum-only version of `orig`
    idx = ''.join(n).find(normalized)
    # map back to `orig`: slice from the first matched alnum char through
    # the last one (inclusive), which also avoids indexing past the end of `d`
    return orig[d[idx]:d[idx + len(normalized) - 1] + 1] if idx >= 0 else None
df1.join(df2, cond3) \
   .withColumn('matched', find_matched('column_dataset_2', 'column_dataset_1_normalized')) \
   .select('column_dataset_2', 'matched', 'column_dataset_1_normalized') \
   .show(truncate=False)
+------------------------------------------------------------------------------------+-----------------------+---------------------------+
|column_dataset_2 |matched |column_dataset_1_normalized|
+------------------------------------------------------------------------------------+-----------------------+---------------------------+
|Por ejemplo, si W-B.7120RP-1605794se trata de un archivo de texto, |W-B.7120RP-1605794 |WB7120RP1605794 |
|utilizados 125858G_022BR/P-070751 frecuentemente (por ejemplo, un texto que describe|125858G_022BR/P-070751 |125858G022BRP070751 |
+------------------------------------------------------------------------------------+-----------------------+---------------------------+
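To sanity-check the index arithmetic outside Spark, the same logic can be exercised as plain Python (a quick local test, not part of the Spark job):
# plain-Python copy of the udf body, for a quick local check
def find_matched_py(orig, normalized):
    n, d = [], []
    for i, ch in enumerate(orig):
        if ch.isalnum():
            n.append(ch)
            d.append(i)
    idx = ''.join(n).find(normalized)
    return orig[d[idx]:d[idx + len(normalized) - 1] + 1] if idx >= 0 else None

assert find_matched_py('si W-B.7120RP-1605794se trata', 'WB7120RP1605794') == 'W-B.7120RP-1605794'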
Upvotes: 1