Raghava

Reputation: 967

Exceptions when using MongoDB with Hadoop

I am inserting documents into MongoDB from the map phase of a MapReduce job. A batch of strings is given to the map function, which generates a SHA-1 digest for each string and inserts it into MongoDB. There are about 400 million strings (read from files on HDFS). I am using 10 shards with 3 mongos and no replication, running MongoDB 2.2.0 on 64-bit Linux. However, this MR job does not complete, and I see the following two types of exceptions in the logs.

  1. Too many connections to each shard in MongoDB (around 250 connections). I see the following exception in the logs:

    com.mongodb.DBTCPConnector fetchMaxBsonObjectSize
    WARNING: Exception determining maxBSONObjectSize
    java.net.SocketException: Connection reset
    
  2. Task attempt_***** failed to report status for 600 seconds. Killing!

There are 16 nodes in the cluster, and at any time there seem to be 256 map tasks running (noticed in the Hadoop logs).

I searched around for the first error/exception, and someone mentioned that the number of connections per host for MongoDB has to be increased. I increased it from 10 to 20 using the MongoOptions class, roughly as in the sketch below, and provided it while initializing the Mongo instance. But that hasn't solved the issue -- could this be the reason for the exception?
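The client setup looks roughly like this (the class name is just for illustration, and "mongos-host" is a placeholder for one of my actual mongos addresses):

    import java.net.UnknownHostException;

    import com.mongodb.Mongo;
    import com.mongodb.MongoOptions;
    import com.mongodb.ServerAddress;

    public class MongoClientFactory {

        // Builds a Mongo instance against one mongos, with the per-host pool
        // raised from the driver default of 10 to 20 connections.
        public static Mongo create() throws UnknownHostException {
            MongoOptions options = new MongoOptions();
            options.connectionsPerHost = 20;
            return new Mongo(new ServerAddress("mongos-host", 27017), options);
        }
    }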

I am creating an instance of Mongo in the configure() method of the mapper and closing it in close() (sketch below). Are there better ways to create Mongo instances?
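For context, the mapper currently follows this pattern (a trimmed sketch reusing the factory above; the class, database, and collection names are placeholders, and the map logic itself is omitted):

    import java.io.IOException;
    import java.net.UnknownHostException;

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;

    import com.mongodb.DBCollection;
    import com.mongodb.Mongo;

    public class Sha1InsertMapper extends MapReduceBase /* implements Mapper<...> */ {

        private Mongo mongo;
        private DBCollection collection;

        @Override
        public void configure(JobConf job) {
            try {
                // One Mongo instance (and connection pool) per task attempt.
                mongo = MongoClientFactory.create();
                collection = mongo.getDB("mydb").getCollection("hashes");
            } catch (UnknownHostException e) {
                throw new RuntimeException(e);
            }
        }

        // map(...) computes the SHA-1 digest for each input string and inserts
        // a document into the collection.

        @Override
        public void close() throws IOException {
            if (mongo != null) {
                mongo.close();
            }
        }
    }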

Has anyone come across these errors when working with the Hadoop + MongoDB combination? Is there anything else I need to be aware of while using this combination?

PS: I posted this question to the MongoDB user list, but wanted a wider audience to see it, so I reposted it here.

Upvotes: 0

Views: 295

Answers (1)

mpobrien

Reputation: 4972

Check the value of ulimit -n on your hosts. It sounds like you could be hitting a file descriptor limit on your machines.

In general though, using a driver connection to store documents in Mongo during a MapReduce job is an anti-pattern. You're better off having the mapreduce output just produce documents with the data you need, rather than trying to create additional connections to Mongo and write more data out-of-band.
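For example, the mongo-hadoop connector can route the job's output to a collection so the mapper only emits documents. A rough driver-side sketch is below; the class and method names are taken from the connector's documentation and the URI, database, and collection names are placeholders, so verify them against the connector version you use:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    import com.mongodb.hadoop.MongoOutputFormat;
    import com.mongodb.hadoop.util.MongoConfigUtil;

    public class Sha1JobDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Send the job's output straight to the sharded collection through a mongos.
            MongoConfigUtil.setOutputURI(conf, "mongodb://mongos-host:27017/mydb.hashes");

            Job job = new Job(conf, "sha1-to-mongo");
            job.setJarByClass(Sha1JobDriver.class);
            job.setOutputFormatClass(MongoOutputFormat.class);
            // Mapper class, input path, and output key/value types omitted; the mapper
            // simply emits its documents as output instead of opening its own connections.

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }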

Upvotes: 2
