Reputation: 1005
I'm currently working at a bank, using Q (kdb+, K, whatever it's called). I know that it is a functional language, and I also know that a lot of organizations use functional languages to deal with large data sets.
I wonder why functional languages (programming) are good for big data. Is it because of the way they compile the code, or for some other reason?
Also, if the idea is wrong, can anyone explain why it's wrong?
ps: If there are similar questions, forgive me :P
Upvotes: 2
Views: 1196
Reputation: 8032
Zdravko is right that immutable state makes concurrency easier and less prone to race-condition-style bugs. However, that helps only with multi-threaded concurrency. When you talk about big data, you are talking about horizontally scaled cluster computing, and there is not much built-in support for that in functional programming languages themselves.
There is something about FP that has captured the imagination of developers with big data dreams. Maybe it has something to do with FP's stream-oriented higher-order functions, which let you think in terms of processing data streams. With FP, you solve problems with a vocabulary such as union, intersection, difference, map, flatMap, and reduce.
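To make that vocabulary concrete, here is a minimal sketch in Python (the same higher-order functions exist in q, Scala, Clojure, etc.); the `trades` data is invented for illustration:

```python
from functools import reduce

# Invented example data: sizes of trades flowing through a pipeline.
trades = [50, 120, 300, 80, 210]

# filter: keep a subset; map: transform each element; reduce: fold to one value.
large = filter(lambda t: t > 100, trades)       # keep trades over 100
doubled = map(lambda t: 2 * t, large)           # transform each one
total = reduce(lambda acc, t: acc + t, doubled, 0)

print(total)  # 1260
```

Because each stage is a pure transformation of a stream rather than a loop mutating shared state, the same pipeline shape maps naturally onto distributed frameworks that split the stream across machines.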
But FP alone won't work in a distributed computing environment. At OSCON 2014, I learned about some open source projects that integrate FP languages with Hadoop. See Functional Programming and Big Data for a comparative evaluation of three such projects getting traction these days: Netflix's PigPen, Cascalog, and Apache Spark.
Upvotes: 2
Reputation: 12837
One of the reasons is that having immutable data lets you execute code in parallel and scale very easily.
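A minimal sketch of why this works, in Python (the function and data here are invented for illustration): a pure function over immutable data can be fanned out across workers with no locks, because no worker can observe or cause a change in shared state.

```python
from concurrent.futures import ThreadPoolExecutor

# A pure function: its result depends only on its input,
# so calls can run in any order, or in parallel, without coordination.
def square(x: int) -> int:
    return x * x

data = (1, 2, 3, 4, 5)  # an immutable tuple: no worker can modify it

# Fan the work out across threads; no locks are needed because
# nothing is mutated, only new values are produced.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))

print(results)  # [1, 4, 9, 16, 25]
```

The same property is what lets cluster frameworks re-run a failed task on another machine: since the input is immutable and the function is pure, the retry is guaranteed to produce the same answer.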
Upvotes: 5