Reputation: 182
I have a sample Spark application written in Scala that pushes data to the cache using Apache Ignite. As far as I know, we have to start ignite.sh in order to run the application, but if Ignite is not started, the application hangs forever. I tried changing the default Ignite configuration, but to no avail.
Is there a way to kill the application when the Ignite node is not started?
Upvotes: 2
Views: 937
Reputation: 182
The join timeout property needs to be set on TcpDiscoverySpi, but we cannot create the discovery SPI object outside the closure (either programmatically or via a config XML shared with the driver), because TcpDiscoverySpi is not a serializable class and Spark requires every function it ships to the executors to be serializable; doing so throws a "task not serializable" exception. The code below works in distributed mode because the configuration is built inside the closure passed to IgniteContext:
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi

val ic = new IgniteContext[String, String](sc, () => {
  val cfg = new IgniteConfiguration()
  val tc = new TcpDiscoverySpi()
  tc.setJoinTimeout(60000) // fail the join after 60 s instead of waiting forever
  cfg.setDiscoverySpi(tc)
  cfg
})
This solves both problems: the application no longer hangs when no server node is up, and there is no serialization error.
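For completeness, a minimal sketch of pushing data through the resulting context; the cache name "testCache" and the sample pairs are placeholders, not from the original question:

// Hypothetical cache name and sample data.
val rdd = sc.parallelize(Seq(("key1", "value1"), ("key2", "value2")))
ic.fromCache("testCache").savePairs(rdd)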
Upvotes: 1
Reputation: 8390
By default the client node will wait indefinitely for at least one server node to start. You can configure it to fail after a certain timeout if there are no servers:
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="joinTimeout" value="60000"/>
        </bean>
    </property>
</bean>
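For reference, a minimal sketch of wiring this XML into the Spark application: IgniteContext also accepts a path to a Spring config file in place of a configuration closure. The path below is a hypothetical example, not from the original answer.

import org.apache.ignite.spark.IgniteContext

// Hypothetical path; point it at the XML file containing the bean above.
val ic = new IgniteContext[String, String](sc, "config/client.xml")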
Upvotes: 2