cph_sto

Reputation: 7597

How to skip multiple lines using read.csv in PySpark

I have a .csv file with a few columns, and I wish to skip 3 (or 'n' in general) lines when importing it into a dataframe using the spark.read.csv() function. The file looks like this -

ID;Name;Revenue
Identifier;Customer Name;Euros
cust_ID;cust_name;€
ID132;XYZ Ltd;2825
ID150;ABC Ltd;1849

In plain Python with pandas, this is simple: the read_csv() function has a skiprows=n option -

import pandas as pd
df=pd.read_csv('filename.csv',sep=';',skiprows=3) # Since we wish to skip top 3 lines

With PySpark, I am importing this .csv file as follows -

df=spark.read.csv("filename.csv",sep=';')

This imports the file as -

ID          |Name         |Revenue
Identifier  |Customer Name|Euros
cust_ID     |cust_name    |€
ID132       |XYZ Ltd      |2825
ID150       |ABC Ltd      |1849

This is not what I want, because I wish to ignore the first three lines. I can't use the header=True option because it only excludes the first line. The comment= option could be used, but it requires the lines to start with a particular character, and that is not the case with my file. I could not find anything in the documentation. Is there any way this can be accomplished?
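
For illustration, the comment option only helps when the unwanted lines share a leading marker character; if lines 2 and 3 began with, say, '#' (which they don't in my file), something like this would work -

df = spark.read.csv("filename.csv", sep=';', header=True, comment='#')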

Upvotes: 7

Views: 17164

Answers (3)

Alexander Stepanenko

Reputation: 11

Filter the first three lines (indices 0 through 2) out of the CSV file, then feed the remaining lines back into the CSV reader:

# read the file as plain text, pair each line with its index,
# drop the first three lines (indices 0-2), and unwrap each
# Row back into a plain string
_rdd = (
    spark.read
        .text(csv_file_path_with_bad_header)
        .rdd
        .zipWithIndex()
        .filter(lambda x: x[1] > 2)
        .map(lambda x: x[0][0])
)

# spark.read.csv also accepts an RDD of strings, so the
# filtered lines can be parsed directly
df = (
    spark.read.csv(
        _rdd,
        header=True,
        sep="\t",
        inferSchema=False,
        quote="\\",
        ignoreLeadingWhiteSpace=True,
        ignoreTrailingWhiteSpace=True
    )
)
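
Adapted to the question's semicolon-separated file, a minimal sketch of the same pattern (the option values here are assumptions, since all three leading lines are junk and no header remains) -

lines = (
    spark.read.text("filename.csv")      # one string column per line
        .rdd
        .zipWithIndex()                  # (Row, index) pairs
        .filter(lambda x: x[1] > 2)      # keep indices 3 and up
        .map(lambda x: x[0][0])          # back to plain strings
)
df = spark.read.csv(lines, sep=';', header=False, inferSchema=True)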

Upvotes: 1

Max

Reputation: 21

I've been trying to find a solution to this problem for the past couple of days as well. The solution I implemented is probably not the fastest, but it works:

# read every line, then rewrite the file without the first three
with open("filename.csv", 'r') as fin:
    data = fin.read().splitlines(True)
with open("filename.csv", 'w') as fout:
    fout.writelines(data[3:])  # adjust 3 to the number of rows you want to skip

df = spark.read.format("csv") \
          .option("header", "true") \
          .option("sep", ";") \
          .load("filename.csv")

Upvotes: 1

mayank agrawal

Reputation: 2545

I couldn't find a simple solution for your problem. However, this will work no matter how the header is written (n is the number of leading lines to skip):

# zipWithIndex pairs each row with its index; keeping rows with
# index >= n drops exactly the first n lines
df = spark.read.csv("filename.csv", sep=';')\
          .rdd.zipWithIndex()\
          .filter(lambda x: x[1] >= n)\
          .map(lambda x: x[0]).toDF()
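
Since the real header line is discarded along with the junk, the columns come back with default names (_c0, _c1, ...). A sketch of reattaching names, assuming n = 3 and the column names from the question -

n = 3  # the three unwanted leading lines
df = spark.read.csv("filename.csv", sep=';')\
          .rdd.zipWithIndex()\
          .filter(lambda x: x[1] >= n)\
          .map(lambda x: x[0]).toDF()

# DataFrame.toDF(*cols) returns a new DataFrame with renamed columns
df = df.toDF("ID", "Name", "Revenue")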

Upvotes: 6
