Michal

Reputation: 1905

Using Google Storage from the Flink REPL

I am trying to read a CSV file from Google Cloud Storage into the Flink REPL. Since I am not very proficient in Flink, I prefer to work in the REPL so that I can tackle one error at a time, rather than package my code in a JAR and then not know where to start with all the errors.

For this example, I will use the publicly available Landsat data in Google Cloud Storage.

I created a Dataproc cluster and added a bash initialization script provided by Google Cloud to install Flink at cluster creation time. The script can be found here.

Since I am using a Dataproc cluster, I only need to add the gcs-connector JAR to the classpath. So I start up the Flink REPL like this:

/usr/lib/flink/bin/start-scala-shell.sh yarn -a /usr/lib/hadoop/lib/gcs-connector-hadoop2-1.9.10.jar
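
As a quick sanity check that the -a flag actually put the connector JAR on the shell's classpath, the connector's file system class can be loaded from the REPL (a minimal check, assuming the standard class name shipped in the gcs-connector JAR):

 // should resolve without a ClassNotFoundException if the JAR was added correctly
 Class.forName("com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")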

I then import the Google Cloud Storage connector package using this line of code in the REPL:

 import com.google.cloud.hadoop.fs.gcs

Finally, I try to read the publicly available file as a text file and get the following error:

 val landsaturl = "gs://gcp-public-data-landsat/LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_ANG.txt"
 // read the object as a text file using the pre-bound batch environment
 val landsat = benv.readTextFile(landsaturl)
 landsat.first(1).print()
2018-12-17 19:10:44,210 INFO  org.apache.flink.api.java.ExecutionEnvironment                - The job has 0 registered types and 0 default Kryo serializers
2018-12-17 19:10:44,210 WARN  org.apache.flink.configuration.Configuration                  - Config uses deprecated configuration key 'jobmanager.rpc.address' instead of proper key 'rest.address'
2018-12-17 19:10:44,210 INFO  org.apache.flink.runtime.rest.RestClient                      - Rest client endpoint started.
2018-12-17 19:10:46,091 INFO  org.apache.flink.client.program.rest.RestClusterClient        - Submitting job 7aefc693e7911bd9ef80c3ebcf6a8343 (detached: false).
2018-12-17 19:10:46,091 INFO  org.apache.flink.client.program.rest.RestClusterClient        - Requesting blob server port.
2018-12-17 19:11:46,146 INFO  org.apache.flink.runtime.rest.RestClient                      - Shutting down rest endpoint.
2018-12-17 19:11:46,148 INFO  org.apache.flink.runtime.rest.RestClient                      - Rest endpoint shutdown complete.
org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
  at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:258)
  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:452)
  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
  at org.apache.flink.client.RemoteExecutor.executePlanWithJars(RemoteExecutor.java:216)
  at org.apache.flink.client.RemoteExecutor.executePlan(RemoteExecutor.java:193)
  at org.apache.flink.api.java.RemoteEnvironment.execute(RemoteEnvironment.java:173)
  at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:816)
  at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
  at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
  at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1726)
  ... 30 elided
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
  at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$5(RestClusterClient.java:357)
  at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
  at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
  at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
  at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
  at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:214)
  at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
  at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
  at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
  at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
  at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:195)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
  at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
  at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
  at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
  at java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1088)
  at java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
  ... 21 more
Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
  ... 19 more
Caused by: java.util.concurrent.CompletionException: java.net.ConnectException: Connection refused: cluster-****
  at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
  at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
  at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
  at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
  ... 16 more
Caused by: java.net.ConnectException: Connection refused: cluster-****
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
  at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
  at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281)
  ... 7 more

Am I missing a step, or is this something that cannot be done from the REPL and can only be done with a fat JAR, with the Google Cloud Storage dependencies specified in the pom.xml?

Upvotes: 2

Views: 451

Answers (1)

Igor Dvorzhak

Reputation: 4457

You need to use the Flink HDFS connector, because the GCS connector implements the Hadoop (HDFS) FileSystem interface.
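
A minimal sketch of the read through that connector, reusing the landsaturl value from the question, and assuming the GCS connector JAR and its Hadoop configuration are visible not only to the shell but also to the JobManager and TaskManagers (the job runs on the cluster, not in the shell process):

 // gs:// paths are resolved through Flink's Hadoop FileSystem bridge,
 // which delegates to the GCS connector's Hadoop FileSystem implementation
 val landsat = benv.readTextFile(landsaturl)
 landsat.first(1).print()

On a Dataproc cluster, the Hadoop configuration for the gs:// scheme should already be present in core-site.xml, so no extra fs.gs.* settings should be needed.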

Upvotes: 1
