Reputation: 2652
I have a bit of a problem. I want to learn about Hadoop and how I might use it to handle data streams in real time. I'd like to build a meaningful POC around it that I can showcase when I have to prove my knowledge of it to a potential employer, or to introduce it at my current firm.
I'd also like to mention that I am limited in hardware resources: just my laptop and me :) I know the basics of Hadoop and have written 2-3 basic MR jobs, but I want to do something more meaningful and real-world.
Please suggest.
Thanks in advance.
Upvotes: 6
Views: 27026
Reputation: 100
If you want to build a real-time application, I'd suggest the Apache Spark framework, which is designed for real-time processing and also supports polyglot APIs (Scala, Java, Python, R).
Upvotes: 0
Reputation: 459
I think you could have a POC running, for example, an online/recursive algorithm for regression in MapReduce. But bear in mind that this will only prove that your "learning rule" works. Maybe (I've never tried this) you can use the results in real time by telling your reducers to write them to a temporary file that another thread reads.
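To make the "online/recursive learning rule" idea concrete, here is a minimal sketch in plain Python (no Hadoop involved, synthetic data): a stochastic-gradient-descent update for linear regression, where each record updates the model immediately, so the same rule could be applied to records arriving as a stream.

```python
# Minimal sketch of an online ("recursive") learning rule for linear
# regression: stochastic gradient descent. Each (x, y) pair updates the
# weights immediately, so the same update works on a stream of records,
# e.g. emitted one at a time from a mapper. The data here is synthetic.

def sgd_step(w, b, x, y, lr=0.01):
    """One online update: nudge (w, b) to reduce the squared error on (x, y)."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

# Simulate a stream whose true relation is y = 2x + 1 (no noise).
stream = [(i * 0.1, 2 * (i * 0.1) + 1) for i in range(50)]

w, b = 0.0, 0.0
for x, y in stream * 100:  # several passes to converge on this tiny set
    w, b = sgd_step(w, b, x, y)

print(w, b)  # should approach 2.0 and 1.0
```

The point is only that the update is incremental: there is no "fit" step over the whole data set, so new records can keep arriving while the model stays current.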
Mahout also lets you store your data set in several different SequenceFiles. You could use this to simulate an online stream and classify/cluster your data set "online". You could even copy part of the data into the folder with the rest of the data while the algorithm is running.
Mahout in Action details how to do that.
See if one of the following datasets is to your taste: http://archive.ics.uci.edu/ml/datasets.html
Upvotes: 0
Reputation: 634
One classic problem that I'm sure is about as real-time as anything else: option trading. The key here is to watch for news and trends on Twitter, Facebook, and YouTube, and then identify candidates for possible PUT or CALL positions. You will need good skills and an elaborate setup combining Mahout with Nutch/Lucene, and then use the trending data to understand the current situation so the system can recommend bets (options).
Upvotes: 1
Reputation: 5723
If you want to get your hands dirty with a highly promising streaming framework, try BDAS Spark Streaming. Caution: it is not yet released, but you can play around with the GitHub version on your laptop (https://github.com/mesos/spark/tree/streaming). There are many samples to get you started.
It also has several advantages over existing frameworks:

1. It lets you combine real-time and batch computation in one stack.
2. It gives you a REPL where you can try your ad hoc queries interactively.
3. You can run it on your laptop in local mode.

There are many other advantages, but I believe these three will suffice to get you started.
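The reason point 1 works is that Spark Streaming treats a stream as a sequence of tiny batches and runs ordinary batch logic on each one. This is not Spark code, just a plain-Python toy (with a hypothetical event stream) illustrating that micro-batch model:

```python
# Sketch of the micro-batch idea: group timestamped events into fixed-size
# time windows and run ordinary batch logic (here, a word count) on each
# window. Spark Streaming applies the same principle at scale.

from collections import Counter, defaultdict

BATCH_SECONDS = 2

def micro_batches(events):
    """Group (timestamp, word) events into 2-second windows and count words."""
    batches = defaultdict(list)
    for ts, word in events:
        batches[ts // BATCH_SECONDS].append(word)
    return {window: Counter(words) for window, words in sorted(batches.items())}

# Hypothetical event stream: (seconds since start, word).
events = [
    (0, "spark"), (1, "hadoop"), (1, "spark"),
    (2, "storm"), (3, "spark"),
    (4, "hadoop"),
]

counts = micro_batches(events)
for window, counter in counts.items():
    print(window, dict(counter))
```

Because each window is just a small batch, the same counting code could be reused unchanged for a full historical run, which is what "combining real-time and batch computation in one stack" buys you.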
You might have to learn Scala to try out the REPL :-(
For more information, check out http://spark-project.org/
Upvotes: 3
Reputation: 414
I'm clearly biased, but I would also recommend looking at GridGain for anything real-time. GridGain is an in-memory data platform that provides an ACID NoSQL datastore and fast in-memory MapReduce.
Upvotes: 0
Reputation: 2652
I was looking for something like this -
https://www.kaggle.com/competitions
These are well defined problems, many of them Big Data problems. And some of them require real time processing.
But thanks to all who answered.
Upvotes: -1
Reputation: 41488
I'd like to point out a few things.
If you want to do a POC with just 1 laptop, there's little point in using Hadoop.
Also, as others have said, Hadoop is not designed for real-time applications, because there is some overhead in running Map/Reduce jobs.
That being said, Cloudera released Impala, which works with the Hadoop ecosystem (specifically the Hive metastore) to achieve real-time performance. Be aware that to achieve this it does not generate Map/Reduce jobs, and that it is currently in beta, so use it carefully.
So I would really advise looking at Impala, since you can still use a Hadoop ecosystem; but if you're also considering alternatives, here are a few other frameworks that could be of use:
In the end, I think you should really analyze your needs and see whether Hadoop is what you need, because it's only getting started in the real-time space. There are several other projects which could help you achieve real-time performance.
If you want ideas for projects to showcase, I suggest looking at this link. Here are some examples:
Upvotes: 10
Reputation: 326
Hadoop is a high-throughput framework suited to batch processing. If you are interested in processing and analyzing huge data sets in real time, please look into Twitter Storm.
Upvotes: 1