I have the data set below
Segment.organizationId|^|Segment.segmentId|^|SegmentType|^|SegmentName|^|SegmentName.languageId|^|SegmentLocalLanguageLabel|^|SegmentLocalLanguageLabel.languageId|^|ValidFromPeriodEndDate|^|ValidToPeriodEndDate|^|SegmentInactivationReasonCode|^|SegmentOrganizationId|^|IsShariaCompliant|^|IsCorporate|^|IsElimination|^|IsOther|^|InactiveReasonOtherDescription|^|InactiveReasonOtherDescription.languageId|^|IsOperatingSegment|^|SegmentFundbDescription|^|SegmentFundbDescription.languageId|^|SegmentTypeId|^|SegmentInactiveReasonId|^|FFAction|!|
4295876080|^|7|^|B|^|Test ||^|505074|^|jtrsu|^|505126|^|2010-03-31T00:00:00Z|^||^||^||^|False|^|False|^|False|^|False|^||^|505074|^|False|^||^|505074|^|3013618|^||^|I|!|
Here is my code
val df = sqlContext.read.format("csv").option("header", "true").option("delimiter", "|").option("inferSchema","true").load("s3://trfsmallfffile/FinancialSegment/TEST")
But this does not give me the correct output.
Here is my output
+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+--------+-------------+
|Segment_organizationId|Segment_segmentId|SegmentType|SegmentName|SegmentName_languageId|SegmentLocalLanguageLabel|SegmentLocalLanguageLabel_languageId|ValidFromPeriodEndDate|ValidToPeriodEndDate|SegmentInactivationReasonCode|SegmentOrganizationId|IsShariaCompliant|IsCorporate|IsElimination|IsOther|InactiveReasonOtherDescription|InactiveReasonOtherDescription_languageId|IsOperatingSegment|SegmentFundbDescription|SegmentFundbDescription_languageId|SegmentTypeId|SegmentInactiveReasonId|FFAction|DataPartition|
+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+--------+-------------+
| 4295876080| 7| B| Test | ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| ^| Japan|
+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+--------+-------------+
I am getting this because the | character is used inside the records.
How can I handle this situation?
My expected output is below
...+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+-----------+
|Segment.organizationId|Segment.segmentId|SegmentType|SegmentName|SegmentName.languageId|SegmentLocalLanguageLabel|SegmentLocalLanguageLabel.languageId|ValidFromPeriodEndDate|ValidToPeriodEndDate|SegmentInactivationReasonCode|SegmentOrganizationId|IsShariaCompliant|IsCorporate|IsElimination|IsOther|InactiveReasonOtherDescription|InactiveReasonOtherDescription.languageId|IsOperatingSegment|SegmentFundbDescription|SegmentFundbDescription.languageId|SegmentTypeId|SegmentInactiveReasonId|FFAction|
+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+-----------+
|4295876080 |7 |B |Test | |505074 |jtrsu |505126 |2010-03-31T00:00:00Z | | | |False |False |False |False | |505074 |False | |505074 |3013618 | |I |
+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+-----------+
Upvotes: 1
Views: 2677
Reputation: 41957
A multi-character delimiter is not supported by Spark SQL's csv option parameter. So I would suggest you go with sparkContext instead, since you can then split each line with a regex that matches the multi-character delimiter.
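For illustration only, here is a minimal sketch (assuming a Spark 2.x csv data source, which accepts only a single-character delimiter) of what happens if you try to hand the full |^| delimiter to the reader:
import scala.util.Try
// Not part of the solution: this is expected to fail, because the csv reader
// only takes a single-character delimiter on Spark 2.x.
val attempt = Try(
  sqlContext.read.format("csv")
    .option("header", "true")
    .option("delimiter", "|^|") // three characters, so the reader rejects it
    .load("s3://trfsmallfffile/FinancialSegment/TEST")
)
// attempt should be a Failure wrapping an IllegalArgumentException about the delimiter length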
So your first step is to read the file using sparkContext
val rdd = sc.textFile("s3://trfsmallfffile/FinancialSegment/TEST")
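If you want to double-check what was read (a quick optional peek, not one of the required steps), you can print the first couple of raw lines:
// should print the header line followed by the |^|-delimited data line
rdd.take(2).foreach(println)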
Then you would need to separate out the first line as the header and create a schema from it (the StructType, StructField, StringType and Row used below need these imports)
import org.apache.spark.sql.types.{StructType, StructField, StringType}
import org.apache.spark.sql.Row
val header = rdd.filter(_.contains("Segment.organizationId")).map(line => line.split("\\|\\^\\|")).first()
val schema = StructType(header.map(cols => StructField(cols.replace(".", "_"), StringType)).toSeq)
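If you want to sanity-check the header parsing before building the dataframe, you can print the generated schema (an optional check, not part of the required steps):
// every column should be a StringType with "." replaced by "_" in its name
schema.printTreeString()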
The last step would be to create the required dataframe using the schema created above
val data = sqlContext.createDataFrame(rdd.filter(!_.contains("Segment.organizationId")).map(line => Row.fromSeq(line.split("\\|\\^\\|").toSeq)), schema)
data.show(false)
You should have the following dataframe
+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+-----------+
|Segment_organizationId|Segment_segmentId|SegmentType|SegmentName|SegmentName_languageId|SegmentLocalLanguageLabel|SegmentLocalLanguageLabel_languageId|ValidFromPeriodEndDate|ValidToPeriodEndDate|SegmentInactivationReasonCode|SegmentOrganizationId|IsShariaCompliant|IsCorporate|IsElimination|IsOther|InactiveReasonOtherDescription|InactiveReasonOtherDescription_languageId|IsOperatingSegment|SegmentFundbDescription|SegmentFundbDescription_languageId|SegmentTypeId|SegmentInactiveReasonId|FFAction|!||
+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+-----------+
|4295876080 |7 |B |Test | |505074 |jtrsu |505126 |2010-03-31T00:00:00Z | | | |False |False |False |False | |505074 |False | |505074 |3013618 | |I|!| |
+----------------------+-----------------+-----------+-----------+----------------------+-------------------------+------------------------------------+----------------------+--------------------+-----------------------------+---------------------+-----------------+-----------+-------------+-------+------------------------------+-----------------------------------------+------------------+-----------------------+----------------------------------+-------------+-----------------------+-----------+
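As a small follow-up, the last column above still carries the |!| row marker (I|!| rather than the plain I shown in the question's expected output). If you want to drop it, one option, sketched here under the assumption that every line really ends with the literal |!| marker (the clean* names are just illustrative), is to strip the marker before splitting:
// Assumption: every header and data line ends with the literal "|!|" marker.
val cleanHeader = rdd.filter(_.contains("Segment.organizationId"))
  .map(line => line.stripSuffix("|!|").split("\\|\\^\\|")).first()
val cleanSchema = StructType(cleanHeader.map(cols => StructField(cols.replace(".", "_"), StringType)).toSeq)
val cleanData = sqlContext.createDataFrame(
  rdd.filter(!_.contains("Segment.organizationId"))
    .map(line => Row.fromSeq(line.stripSuffix("|!|").split("\\|\\^\\|", -1).toSeq)), // -1 keeps any trailing empty fields
  cleanSchema)
cleanData.show(false)
// FFAction should now show just I, and the last column name loses the |!| suffix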
Upvotes: 4