yangtheman

Reputation: 529

CouchDB/CouchRest Errno::ECONNREFUSED Connection Refused - connect(2) error

At work we have about 1,500 test cases, and we manually recreate the database with the DB.recreate! method before each test. When we run the whole suite with bundle exec rake spec, it rarely passes: a number of tests toward the end of the run fail with "Errno::ECONNREFUSED Connection Refused - connect(2)" errors.
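For reference, the cleanup hook looks roughly like the sketch below (the spec_helper.rb wiring, server URL, and database name are illustrative stand-ins, but DB.recreate! is the actual CouchRest call we use):

    # spec/spec_helper.rb (sketch, not our exact setup)
    require 'couchrest'

    # Hypothetical connection details; the real app configures these elsewhere.
    COUCH = CouchRest.new("http://localhost:5984")
    DB    = COUCH.database!("myapp_test")

    RSpec.configure do |config|
      config.before(:each) do
        # Delete and recreate the database so every example starts clean.
        # With ~1500 examples this fires thousands of DELETE/PUT pairs
        # at CouchDB in quick succession.
        DB.recreate!
      end
    end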

Any help would be much appreciated!

I am using CouchDB 1.3.1, Ubuntu 12.04 LTS, Ruby 1.9.3, and Rails 3.2.12.

Thanks,

EDIT

I looked at the log file more carefully and matched the time the tests started failing against the error messages generated in the CouchDB log:

[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23790.0>] ** Generic server <0.23790.0> terminating 
** Last message in was {'EXIT',<0.23789.0>,killed}
** When Server state == {file,{file_descriptor,prim_file,{#Port<0.14445>,20}},
                              79}
** Reason for termination == 
** killed

[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23790.0>] {error_report,<0.31.0>,
                          {<0.23790.0>,crash_report,
                           [[{initial_call,{couch_file,init,['Argument__1']}},
                             {pid,<0.23790.0>},
                             {registered_name,[]},
                             {error_info,
                                 {exit,killed,
                                     [{gen_server,terminate,6},
                                      {proc_lib,init_p_do_apply,3}]}},
                             {ancestors,[<0.23789.0>]},
                             {messages,[]},
                             {links,[]},
                             {dictionary,[]},
                             {trap_exit,true},
                             {status,running},
                             {heap_size,377},
                             {stack_size,24},
                             {reductions,916}],
                            []]}}
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.23808.0>] {error_report,<0.31.0>,
                       {<0.23808.0>,crash_report,
                        [[{initial_call,
                           {couch_ref_counter,init,['Argument__1']}},
                          {pid,<0.23808.0>},
                          {registered_name,[]},
                          {error_info,
                           {exit,
                            {noproc,
                             [{erlang,link,[<0.23790.0>]},
                              {couch_ref_counter,'-init/1-lc$^0/1-0-',1},
                              {couch_ref_counter,init,1},
                              {gen_server,init_it,6},
                              {proc_lib,init_p_do_apply,3}]},
                            [{gen_server,init_it,6},
                             {proc_lib,init_p_do_apply,3}]}},
                          {ancestors,[<0.23793.0>,<0.23792.0>,<0.23789.0>]},
                          {messages,[]},
                          {links,[]},
                          {dictionary,[]},
                          {trap_exit,false},
                          {status,running},
                          {heap_size,377},
                          {stack_size,24},
                          {reductions,114}],
                         []]}}
[Fri, 16 Aug 2013 19:39:46 GMT] [error] [<0.103.0>] ** Generic server <0.103.0> terminating 
** Last message in was {'EXIT',<0.88.0>,killed}
** When Server state == {db,<0.103.0>,<0.104.0>,nil,<<"1376681645837889">>,
                            <0.106.0>,<0.102.0>,<0.107.0>,
                            {db_header,6,1,0,
                                {1856,{1,0,1777},95},
                                {1951,1,83},
                                nil,0,nil,nil,1000},
                            1,
                            {btree,<0.102.0>,
                                {1856,{1,0,1777},95},
                                #Fun<couch_db_updater.10.55895019>,
                                #Fun<couch_db_updater.11.100913286>,
                                #Fun<couch_btree.5.25288484>,
                                #Fun<couch_db_updater.12.39068440>,snappy},
                            {btree,<0.102.0>,
                                {1951,1,83},
                                #Fun<couch_db_updater.13.114276184>,
                                #Fun<couch_db_updater.14.2340873>,
                                #Fun<couch_btree.5.25288484>,
                                #Fun<couch_db_updater.15.23651859>,snappy},
                            {btree,<0.102.0>,nil,
                                #Fun<couch_btree.3.20686015>,
                                #Fun<couch_btree.4.73514747>,
                                #Fun<couch_btree.5.25288484>,nil,snappy},
                            1,<<"_users">>,"/var/lib/couchdb/_users.couch",
                            [#Fun<couch_doc.8.106888048>],
                            [],nil,
                            {user_ctx,null,[],undefined},
                            nil,1000,
                            [before_header,after_header,on_file_open],
                            [create,
                             {before_doc_update,
                                 #Fun<couch_users_db.before_doc_update.2>},
                             {after_doc_read,
                                 #Fun<couch_users_db.after_doc_read.2>},
                             sys_db,
                             {user_ctx,
                                 {user_ctx,null,[<<"_admin">>],undefined}},
                             nologifmissing,sys_db],
                            snappy,#Fun<couch_users_db.before_doc_update.2>,
                            #Fun<couch_users_db.after_doc_read.2>}
** Reason for termination == 
** killed

Upvotes: 1

Views: 929

Answers (1)

yangtheman

Reputation: 529

Ah... the power of the community. I got the following answer from someone on the CouchDB mailing list.

In short, the solution is to change the delayed_commit value to false. It is set to true by default, and rapidly recreating databases at the beginning of each test case was creating a race condition (deleting a non-existent database, etc.).
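For anyone else hitting this: on Ubuntu the setting lives in CouchDB's local ini file (the path below is the stock Ubuntu location; note that the key is spelled delayed_commits, plural, in the config file):

    ; /etc/couchdb/local.ini
    [couchdb]
    delayed_commits = false

Restart CouchDB after editing the file (sudo service couchdb restart on Ubuntu). CouchDB 1.x also lets you flip the value at runtime by PUTting the JSON string "false" to /_config/couchdb/delayed_commits, if you would rather not restart.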

This definitely solved my problem.

One caveat is that it has doubled our test duration. That's another problem to tackle, but for now I am happy with all tests passing.

Upvotes: 2
