Nelson Davenapalli

Reputation: 51

org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed

Using Solr server version 6.6 and SolrJ 6.6.

Currently the Solr cores are created on a GlusterFS-mounted partition, and there is enough space for the cores on the mounted volume. For some cores this issue is not seen, but for others there is a consistent failure and the exception below is thrown.

Exception chain: org.apache.solr.common.SolrException: Exception writing document id WI:5-1-8 to the index; possible analysis error.
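For context, the failing indexing call is roughly equivalent to the following HTTP update (the core name and the extra field here are placeholders; we actually index through SolrJ):

curl -X POST 'http://localhost:8983/solr/mycore/update?commit=true' \
     -H 'Content-Type: application/json' \
     -d '[{"id": "WI:5-1-8", "title_s": "example document"}]'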

Any idea / workaround would be appreciated. :)

Upvotes: 3

Views: 11161

Answers (3)

Du-Lacoste

Reputation: 12777

Encountered the same issue just a couple of days back, using Solr 8.9.0. The drive ran out of space because of enormous log files, and the Solr data shard was on the same drive.

Even after archiving the log files I still could not insert documents into Solr, due to the following error. It seems Solr keeps the no-space error state somewhere internally.

org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed

Solution: restarting Solr is the way to go.

After restarting Solr, it started working as usual, with no errors in the Solr Admin UI.

solr-8.9.0/bin/solr stop (Linux)

solr-8.9.0/bin/solr start (Linux)
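Before (and after) the restart it is also worth confirming that disk space really is the problem and which logs are eating it. A minimal check, assuming a default install layout under solr-8.9.0 (adjust the paths; SOLR_LOGS_DIR may point elsewhere):

df -h solr-8.9.0                  # free space on the filesystem holding the Solr install and data
du -sh solr-8.9.0/server/logs/*   # size of each file in the default log directory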

Upvotes: 0

Taraz

Reputation: 1331

In case anyone else ends up here looking for an answer: we were getting this error message because the hard drive on our Solr machine was full. We deleted some log files and restarted the Solr service, which resolved the error.

Upvotes: 0

Nelson Davenapalli

Reputation: 51

The Solr server pod, when deployed in Kubernetes, used to claim a persistent volume of type GlusterFS with access mode RWX (ReadWriteMany).

After creating a new persistent volume and volume claim of storage class cinder (the default OpenStack block storage), with the access mode set to RWO (ReadWriteOnce), and using it for the Solr server pod, we were able to get rid of the SolrException.

It looks like Lucene (inside Solr) does not play well with a GlusterFS partition that is mounted read-write by different pods. It appears to take a long time to sync new file changes, so Lucene could not acquire locks when required and failed immediately, reporting that external processes were trying to lock the write.lock file in a Solr core. So don't use a shared GlusterFS partition for your Solr cores.
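A minimal sketch of such a claim, assuming kubectl access to the cluster (the claim name, size, and storage class name are placeholders and depend on your cluster):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: solr-data            # placeholder claim name
spec:
  storageClassName: cinder   # default OpenStack block storage class in our cluster
  accessModes:
    - ReadWriteOnce          # RWO instead of the GlusterFS RWX claim
  resources:
    requests:
      storage: 50Gi          # placeholder size
EOF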

Upvotes: 1
