Bharath

Reputation: 39

Parallelism in Cassandra read using Scala

I am trying to read a Cassandra table in parallel using Spark, but I am not able to achieve any parallelism: only one read happens at any given time. What approach should I follow to read in parallel?

Upvotes: 3

Views: 316

Answers (1)

Ram Ghadiyaram

Reputation: 29155

I'd recommend the approach below, taken from Russell Spitzer's blog:

Manually dividing our partitions using a union of partial scans: Pushing the task to the end user is also a possibility (and the current workaround). Most end users already understand why they have long partitions and generally know the domain their column values fall in. This makes it possible for them to manually divide up a request so that it chops up large partitions.

For example, suppose the user knows that the clustering column c spans from 1 to 1000000. They could write code like:

import com.datastax.spark.connector._ // provides sc.cassandraTable

val minRange = 0
val maxRange = 1000000
val numSplits = 10
val subSize = (maxRange - minRange) / numSplits

// Build one partial scan per sub-range of the clustering column, then
// union them so the sub-scans become independent, parallelizable tasks.
sc.union(
  (minRange to maxRange by subSize)
    .map(start =>
      sc.cassandraTable("ks", "tab")
        .where(s"c >= $start and c < ${start + subSize}")) // s-interpolator required; without it the predicate is a literal string
)

Each RDD would contain a unique set of tasks drawing only portions of full partitions. The union operation joins all those disparate tasks into a single RDD. The maximum number of rows any single Spark partition would draw from a single Cassandra partition would be limited to maxRange / numSplits. This approach, while requiring user intervention, would preserve locality and would still minimize the jumps between disk sectors.
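As a quick sanity check (a sketch reusing the hypothetical ks/tab table and column c from above), you can bind the union to a value and inspect how many Spark partitions, and therefore independent tasks, it produces:

import org.apache.spark.SparkContext
import com.datastax.spark.connector._

// Sketch: wrap the manual split in a helper so its parallelism is easy to inspect.
def parallelScan(sc: SparkContext, minRange: Int, maxRange: Int, numSplits: Int) = {
  val subSize = (maxRange - minRange) / numSplits
  sc.union(
    (minRange to maxRange by subSize).map { start =>
      sc.cassandraTable("ks", "tab")
        .where(s"c >= $start and c < ${start + subSize}")
    }
  )
}

val rdd = parallelScan(sc, 0, 1000000, 10)
// Each partial scan contributes at least one Spark partition, so the
// partition count (the upper bound on concurrent reads) should be >= numSplits.
println(rdd.partitions.length)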

Also see the Spark Cassandra Connector's read tuning parameters.
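For instance, lowering the connector's input split size makes it break each token range into more, smaller Spark partitions, which also raises read parallelism. A minimal sketch, assuming Spark Cassandra Connector 2.x property names (older versions spell them spark.cassandra.input.split.size_in_mb and spark.cassandra.input.fetch.size_in_rows) and a local Cassandra node:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: tune connector read parameters before creating the SparkContext.
val conf = new SparkConf()
  .setAppName("parallel-cassandra-read")
  .set("spark.cassandra.connection.host", "127.0.0.1")   // assumption: local node
  .set("spark.cassandra.input.split.sizeInMB", "64")     // smaller splits => more partitions
  .set("spark.cassandra.input.fetch.sizeInRows", "1000") // rows fetched per round trip

val sc = new SparkContext(conf)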

Upvotes: 3
