Reputation: 745
I made a Java application to load data into a distributed cache. The application loads data well, but when loading more than 10 million records I get a "No storage-enabled nodes exist for service DistributedSessions" error. When I load fewer than 10 million records it works fine. I created one cluster in WebLogic and joined 4 nodes as follows:
• 2 servers (storage-enabled = true) to store data
• 2 clients (storage-enabled = false) to view and query only
tangosol-coherence-override.xml
<coherence>
  <cluster-config>
    <member-identity>
      <cluster-name system-property="tangosol.coherence.cluster">CLUSTER_NAME</cluster-name>
    </member-identity>
    <multicast-listener>
      <time-to-live system-property="tangosol.coherence.ttl">30</time-to-live>
      <address>224.1.1.1</address>
      <port>12346</port>
    </multicast-listener>
  </cluster-config>
  <logging-config/>
</coherence>
coherence-cache-config.xml
<?xml version="1.0"?>
<cache-config>
  <defaults>
    <serializer system-property="tangosol.coherence.serializer"/>
    <socket-provider system-property="tangosol.coherence.socketprovider"/>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-binary-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
      <scheme-name>example-binary-backing-map</scheme-name>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>{back-size-limit 0}</high-units>
      <unit-calculator>BINARY</unit-calculator>
      <expiry-delay>0</expiry-delay>
      <cachestore-scheme/>
    </local-scheme>
  </caching-schemes>
</cache-config>
Server arguments:
-Xms6g
-Xmx12g
-Xincgc
-XX:-UseGCOverheadLimit
-Dtangosol.coherence.distributed.localstorage=true
-Dtangosol.coherence.cluster=CLUSTER_NAME
-Dtangosol.coherence.clusteraddress=224.1.1.1
-Dtangosol.coherence.clusterport=12346
Client arguments:
-Xms1g
-Xmx1g
-Xincgc
-XX:-UseGCOverheadLimit
-Dtangosol.coherence.distributed.localstorage=false
-Dtangosol.coherence.session.localstorage=true
-Dtangosol.coherence.cluster=CLUSTER_NAME
-Dtangosol.coherence.clusteraddress=224.1.1.1
-Dtangosol.coherence.clusterport=12346
Upvotes: 3
Views: 16357
Reputation: 3997
Coherence requires at least one storage-enabled server in the cluster. The cache server you started is not storage-enabled.
As an example, in the .\bin directory of the Coherence install there is a coherence.cmd/coherence.sh script. By default it is not storage-enabled. You can run cache-server.cmd to start a storage-enabled cache server, then run coherence.cmd in another window to start a second, storage-disabled node.
Alternatively, you can edit coherence.cmd to change "set storage_enabled=false" to "set storage_enabled=true". Then you should be able to put data into the cache from the coherence.cmd command prompt.
Alternatively, you can enable local storage in one of the VMs with -Dtangosol.coherence.distributed.localstorage=true.
If that does not work, it could be a memory issue: there is not sufficient memory to load any further data.
Upvotes: 2
Reputation: 1236
In the Coherence cache config for the storage-enabled servers, you need to have a configuration for the DistributedSessions service. It looks like you are using an example config instead of a real config.
If it works up to a point (10 million records?) and then fails, you need to figure out what is going wrong (e.g. an exception on the storage-enabled servers?).
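As a sketch of what such a config could look like (the cache-name pattern and scheme name here are assumptions for illustration, not taken from the poster's files), the storage-enabled servers would need something along these lines:

```xml
<!-- Hypothetical fragment: map session caches to a scheme that runs
     on the DistributedSessions service named in the error message -->
<cache-mapping>
    <cache-name>session-*</cache-name>             <!-- assumed name pattern -->
    <scheme-name>session-distributed</scheme-name>
</cache-mapping>

<distributed-scheme>
    <scheme-name>session-distributed</scheme-name>
    <service-name>DistributedSessions</service-name>
    <backing-map-scheme>
        <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
</distributed-scheme>
```

The mapping goes under caching-scheme-mapping and the scheme under caching-schemes; the key point is that some scheme on the storage-enabled members must declare the service-name DistributedSessions.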
Upvotes: 0
Reputation: 156
As far as I remember, localstorage=false tells the service not to store data locally at all, so beyond 10 million records I guess your Coherence cluster runs out of memory and cannot load any more data. Try changing your eviction policy as well, but from my point of view your localstorage should be true. This property is used on proxies to tell them whether or not to act as storage servers as well.
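If memory pressure is the cause, one option is to cap the backing map instead of leaving it unlimited, so that loads beyond the cap evict old entries rather than exhausting the heap. This is only an illustrative sketch; the 2 GB figure and the unit-factor choice are assumptions to tune against the actual heap size, not values from the question:

```xml
<!-- Hypothetical sketch: bound the backing map at roughly 2 GB of binary data -->
<local-scheme>
    <scheme-name>example-binary-backing-map</scheme-name>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>2048</high-units>          <!-- 2048 units x 1 MB = ~2 GB (assumed cap) -->
    <unit-calculator>BINARY</unit-calculator>
    <unit-factor>1048576</unit-factor>     <!-- count units in megabytes -->
    <expiry-delay>0</expiry-delay>
</local-scheme>
```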
Upvotes: 1