costigator

Reputation: 306

"A HostProvider may not be empty" after upgrading to Nifi 1.10

I have a NiFi cluster with 5 nodes and an embedded ZooKeeper that had been running perfectly for months on version 1.9.2. Today I tried to upgrade the whole cluster to NiFi 1.10.0, but ZooKeeper reported the following error: "A HostProvider may not be empty!".

This error is displayed on every "ListSFTP" processor that I have running.

The detailed error log is:

   2020-01-20 15:31:05,167 ERROR [Timer-Driven Process Thread-3] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=83313e3e-d582-155a-bc4d-9915f5350e7d] Failed to properly initialize Processor. If still scheduled to run, NiFi will attempt to initialize and run the Processor again after the 'Administrative Yield Duration' has elapsed. Failure is due to java.lang.IllegalArgumentException: A HostProvider may not be empty!: java.lang.IllegalArgumentException: A HostProvider may not be empty!
java.lang.IllegalArgumentException: A HostProvider may not be empty!
        at org.apache.zookeeper.client.StaticHostProvider.init(StaticHostProvider.java:136)
        at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:87)
        at org.apache.zookeeper.ZooKeeper.createDefaultHostProvider(ZooKeeper.java:1312)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:951)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:688)
        at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getZooKeeper(ZooKeeperStateProvider.java:170)
        at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403)
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider$1.getState(StandardStateManagerProvider.java:305)
        at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63)
        at org.apache.nifi.processor.util.list.AbstractListProcessor.updateState(AbstractListProcessor.java:298)
        at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:142)
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:130)
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:75)
        at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:52)
        at org.apache.nifi.controller.StandardProcessorNode.lambda$initiateStart$4(StandardProcessorNode.java:1532)
        at org.apache.nifi.engine.FlowEngine$3.call(FlowEngine.java:123)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
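
Reading the trace: the failure originates in ZooKeeper's StaticHostProvider.init, which NiFi's ZooKeeperStateProvider calls when opening a connection for cluster state. In other words, the ZooKeeper client appears to be handed an empty server list, i.e. an empty connect string.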

As described in the migration guide, I changed the zookeeper.properties file like this:

initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30

server.1=mcrr01nifi:2888:3888;2181
server.2=mcrr02nifi:2888:3888;2181
server.3=mcrr03nifi:2888:3888;2181
server.4=mcrr04nifi:2888:3888;2181
server.5=mcrr05nifi:2888:3888;2181
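
The server lines above use the syntax required by ZooKeeper 3.5, which NiFi 1.10 bundles: the client port is appended to each server.N entry after a semicolon instead of being set globally. For contrast, a rough sketch of how the same cluster would have been described under NiFi 1.9.2 (ZooKeeper 3.4), assuming the default client port:

# ZooKeeper 3.4-style configuration (NiFi 1.9.2) - shown for contrast only
clientPort=2181
server.1=mcrr01nifi:2888:3888
server.2=mcrr02nifi:2888:3888
server.3=mcrr03nifi:2888:3888
server.4=mcrr04nifi:2888:3888
server.5=mcrr05nifi:2888:3888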

I didn't change anything in the nifi.properties file:

nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties

nifi.zookeeper.connect.string=mcrr01nifi:2181,mcrr02nifi:2181,mcrr03nifi:2181,mcrr04nifi:2181,mcrr05nifi:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
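
Note that these nifi.zookeeper.* properties configure NiFi's own cluster coordination; the cluster state provider referenced by nifi.state.management.provider.cluster takes its connect string from state-management.xml instead.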

In the meantime, I have rolled back to version 1.9.2 and everything is working fine again. I'm just wondering if someone has had the same issue and perhaps found a solution :)

Many thanks for any feedback

Upvotes: 0

Views: 3739

Answers (1)

costigator

Reputation: 306

The solution, as proposed by @BryanBende, is to configure the state-management.xml file like this (replace the hostnames to match your environment):

<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">mcrr01nifi:2181,mcrr02nifi:2181,mcrr03nifi:2181,mcrr04nifi:2181,mcrr05nifi:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
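
A likely explanation for why this broke only after the upgrade: the default conf/state-management.xml that ships with NiFi leaves the Connect String property blank, and an empty server list is exactly what produces the "A HostProvider may not be empty!" exception. If the upgrade was done with fresh default configuration files, the zk-provider entry would have looked roughly like this:

<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String"></property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>

The file must be updated on every node in the cluster, and the nodes restarted for the change to take effect.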

Upvotes: 1
