Reputation: 409
Initial data is in a Dataset<Row> and I am trying to write it to a pipe-delimited file. I want each non-empty, non-null value to be placed in quotes; empty or null values should not be quoted.
result.coalesce(1).write()
.option("delimiter", "|")
.option("header", "true")
.option("nullValue", "")
.option("quoteAll", "false")
.csv(Location);
Expected output:
"London"||"UK"
"Delhi"|"India"
"Moscow"|"Russia"
Current Output:
London||UK
Delhi|India
Moscow|Russia
If I change "quoteAll" to "true", the output I get is:
"London"|""|"UK"
"Delhi"|"India"
"Moscow"|"Russia"
The Spark version is 2.3 and the Java version is 8.
Upvotes: 7
Views: 2820
Reputation: 409
This is certainly not an efficient answer, and I am modifying it based on the one given by Artem Aliev, but I thought it would be useful to a few people, so I am posting this answer.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;
import org.apache.spark.sql.expressions.UserDefinedFunction;
import org.apache.spark.sql.types.DataTypes;
public class Quotes {
private static final String DELIMITER = "|";
private static final String Location = "Give location here";
public static void main(String[] args) {
SparkSession sparkSession = SparkSession.builder()
.master("local")
.appName("Spark Session")
.enableHiveSupport()
.getOrCreate();
Dataset<Row> result = sparkSession.read()
.option("header", "true")
.option("delimiter",DELIMITER)
.csv("Sample file to read"); //Give the details of file to read here
UserDefinedFunction udfQuotesNonNull = udf(
(String abc) -> (abc != null ? "\"" + abc + "\"" : abc), DataTypes.StringType
);
result = result.withColumn("ind_val", monotonically_increasing_id()); // induce a new column to join on, as there is no identity column in the source dataset
Dataset<Row> dataset1 = result.select(udfQuotesNonNull.apply(col("ind_val").cast("string")).alias("ind_val")); // dataset used for storing temporary results
Dataset<Row> dataset = result.select(udfQuotesNonNull.apply(col("ind_val").cast("string")).alias("ind_val")); // dataset used for storing the output
String[] str = result.schema().fieldNames();
dataset1.show();
for (int j = 0; j < str.length - 1; j++) // the last field is the induced ind_val column, so it is skipped
{
dataset1 = result.select(udfQuotesNonNull.apply(col("ind_val").cast("string")).alias("ind_val"), udfQuotesNonNull.apply(col(str[j]).cast("string")).alias("\"" + str[j] + "\""));
dataset = dataset.join(dataset1, "ind_val"); // join based on the induced column
}
result = dataset.drop("ind_val");
result.coalesce(1).write()
.option("delimiter", DELIMITER)
.option("header", "true")
.option("quoteAll", "false")
.option("nullValue", null)
.option("quote", "\u0000")
.option("spark.sql.sources.writeJobUUID", false)
.csv(Location);
}
}
Upvotes: 0
Reputation: 1407
A Java answer. CSV escaping is not just adding " symbols around a value: you also have to handle " characters inside the strings. So let's use StringEscapeUtils and define a UDF that calls it, then apply the UDF to each of the columns.
import org.apache.commons.text.StringEscapeUtils;
import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import static org.apache.spark.sql.functions.*;
import org.apache.spark.sql.expressions.UserDefinedFunction;
import org.apache.spark.sql.types.DataTypes;
import java.util.Arrays;
public class Test {
void test(Dataset<Row> result, String Location) {
// define a null-safe UDF (Spark passes null cells into the UDF as null)
UserDefinedFunction escape = udf(
(String str) -> str == null || str.isEmpty() ? str : StringEscapeUtils.escapeCsv(str), DataTypes.StringType
);
// call udf for each column
Column[] columns = Arrays.stream(result.schema().fieldNames())
.map(f -> escape.apply(col(f)).as(f))
.toArray(Column[]::new);
// save the result
result.select(columns)
.coalesce(1).write()
.option("delimiter", "|")
.option("header", "true")
.option("nullValue", "")
.option("quoteAll", "false")
.csv(Location);
}
}
Side note: coalesce(1) is a bad call. It collects all the data onto one executor, so for a huge dataset you can get an executor OOM in production.
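If a single output file is not a hard requirement, the same write also works without coalesce(1); Spark then writes one part file per partition instead of funnelling everything through one executor. A sketch (untested), reusing the columns array and Location from above:
result.select(columns)
.write()
.option("delimiter", "|")
.option("header", "true")
.option("nullValue", "")
.csv(Location); // one part-*.csv file per partition under Location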
Upvotes: 5
Reputation: 920
EDIT & Warning: I did not see the java tag. This is a Scala solution that uses foldLeft
as a loop to go over all columns. If it is replaced by a Java-friendly loop, everything should work as is (see the sketch at the end of this answer). I will try to look back at this at a later time.
A programmatic solution could be
val columns = result.columns
val randomColumnName = "RND"
val result2 = columns.foldLeft(result) { (data, column) =>
data
.withColumnRenamed(column, randomColumnName)
.withColumn(column,
when(col(randomColumnName).isNull, "")
.otherwise(concat(lit("\""), col(randomColumnName), lit("\"")))
)
.drop(randomColumnName)
}
This will produce the strings with "
around them and write empty strings in nulls. If you need to keep nulls, just keep them.
Then just write it down:
result2.coalesce(1).write()
.option("delimiter", "|")
.option("header", "true")
.option("quoteAll", "false")
.csv(Location);
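For reference, a rough Java equivalent of the foldLeft loop above; this is just a sketch (untested) and assumes the usual import static org.apache.spark.sql.functions.*;:
Dataset<Row> result2 = result;
String randomColumnName = "RND";
for (String column : result.columns()) {
    result2 = result2
        // move the original column aside under a temporary name
        .withColumnRenamed(column, randomColumnName)
        // rebuild it: empty string for nulls, quoted value otherwise
        .withColumn(column,
            when(col(randomColumnName).isNull(), "")
                .otherwise(concat(lit("\""), col(randomColumnName), lit("\""))))
        // drop the temporary helper column
        .drop(randomColumnName);
}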
Upvotes: 2