Reputation: 2085
I have been searching for links, documents, or articles that will help me understand when to choose Datasets over DataFrames and vice versa.
All I find on the internet are headlines promising to explain when to use a Dataset,
but when opened they just list the differences between a DataFrame and a Dataset. There are so many links that merely list differences under the name of "scenarios".
There is only one question on Stack Overflow with the right title, but even in that answer the Databricks documentation link is not working.
I am looking for something that explains, fundamentally, when to choose a Dataset, or in what scenarios a Dataset is preferred over a DataFrame and vice versa. If not an answer, even a link or documentation that helps me understand would be appreciated.
Upvotes: 10
Views: 3463
Reputation: 581
The page you are looking for has moved here. According to that session, in summary, the Dataset API is available for Scala (and Java) only, and it combines the benefits of both RDDs (strong typing and functional, lambda-based operations) and DataFrames (the Catalyst-optimized execution engine).
In addition, Datasets consume less memory and can catch analysis errors at compile time, whereas with DataFrames such errors only surface at runtime. This is also a good article.
Therefore, the answer is: you are better off using Datasets when you are coding in Scala or Java and want functional-style transformations and lower memory usage, while keeping all of the DataFrame capabilities.
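To make that concrete, here is a minimal Scala sketch (the class, column names, and data are made up for illustration) showing the same data handled through the untyped DataFrame API and the typed, functional Dataset API:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical case class used only for illustration.
case class Person(name: String, age: Int)

object DatasetSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dataset-vs-dataframe")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // DataFrame: untyped rows, columns and conditions expressed as strings.
    val df = Seq(Person("Ann", 34), Person("Bob", 19)).toDF()
    df.where("age > 21").select("name").show()

    // Dataset: each row is a Person, so ordinary lambdas work and the
    // compiler checks every field access.
    val ds = Seq(Person("Ann", 34), Person("Bob", 19)).toDS()
    ds.filter(p => p.age > 21).map(_.name).show()

    spark.stop()
  }
}
```

Note that the Dataset filter is an ordinary Scala function the compiler checks, while the DataFrame condition is a string that Spark only validates when it analyses the query.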
Upvotes: 2
Reputation: 490
Datasets are preferred over DataFrames in Apache Spark when you want to work with strongly typed data, i.e., when the schema is known ahead of time and each record can be modeled as a JVM type such as a Scala case class. Datasets enforce type safety, which means many errors are caught at compile time rather than at runtime. Like DataFrames, Datasets still run through the Catalyst optimizer, so the typed API does not give up query optimization. Finally, a Dataset can easily be converted to a DataFrame (and back), so there is no need to commit to one or the other up front.
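A rough sketch of that compile-time versus runtime distinction, assuming a hypothetical `User` case class and a local Spark session:

```scala
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

// Hypothetical case class used only for illustration.
case class User(id: Long, name: String)

object TypeSafetySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("type-safety-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val ds: Dataset[User] = Seq(User(1L, "Ann"), User(2L, "Bob")).toDS()

    // Typed API: a misspelled field such as `u.nme` would not compile.
    ds.map(u => u.name.toUpperCase).show()

    // Untyped API: a misspelled column name compiles fine and only fails
    // when Spark analyses the query at runtime (AnalysisException).
    val df: DataFrame = ds.toDF()
    // df.select("nme").show()

    spark.stop()
  }
}
```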
Upvotes: 1