Twinkle Deshmukh

Reputation: 36

Expected HTTP 101 response but was '403 Forbidden'

Spark version: 2.3.3
Kubernetes version: v1.15.3

I'm getting the exception below while running Spark code on Kubernetes.
Even though I assigned the role and role binding, I still get the same exception. Please suggest a solution if anyone has run into this kind of exception.

2019-09-11 10:35:54 WARN  KubernetesClusterManager:66 - The executor's init-container config map is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2019-09-11 10:35:54 WARN  KubernetesClusterManager:66 - The executor's init-container config map key is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2019-09-11 10:35:57 WARN  WatchConnectionManager:185 - Exec Failure: HTTP 403, Status: 403 - 
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
    at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:216)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:183)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 ERROR SparkContext:91 - Error initializing SparkContext.
io.fabric8.kubernetes.client.KubernetesClientException: 
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:188)
    at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:543)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:185)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 INFO  AbstractConnector:318 - Stopped Spark@7c351808{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-09-11 10:35:57 INFO  SparkUI:54 - Stopped Spark web UI at http://spark-pi-8ee39f55094a39cc9f6d34d8739549d2-driver-svc.default.svc:4040
2019-09-11 10:35:57 INFO  KubernetesClusterSchedulerBackend:54 - Shutting down all executors
2019-09-11 10:35:57 INFO  KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint:54 - Asking each executor to shut down
2019-09-11 10:35:57 INFO  KubernetesClusterSchedulerBackend:54 - Closing kubernetes client
2019-09-11 10:35:57 INFO  MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2019-09-11 10:35:57 INFO  MemoryStore:54 - MemoryStore cleared
2019-09-11 10:35:57 INFO  BlockManager:54 - BlockManager stopped
2019-09-11 10:35:57 INFO  BlockManagerMaster:54 - BlockManagerMaster stopped
2019-09-11 10:35:57 WARN  MetricsSystem:66 - Stopping a MetricsSystem that is not running
2019-09-11 10:35:57 INFO  OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2019-09-11 10:35:57 INFO  SparkContext:54 - Successfully stopped SparkContext
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: 
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:188)
    at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:543)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:185)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 INFO  ShutdownHookManager:54 - Shutdown hook called


I created a role and role binding and tried again, but it didn't help.
I even reset Kubernetes and retried after the reset, but I'm still facing the same issue.
I couldn't find a solution for this on Google.
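For reference, the service account and RBAC setup described in the Spark-on-Kubernetes documentation looks roughly like this (a sketch assuming the `default` namespace, which matches the driver service name in the log above; the `spark` service account name comes from the `spark.kubernetes.authenticate.driver.serviceAccountName=spark` setting in the submit command):

```shell
# Create the service account referenced by
# spark.kubernetes.authenticate.driver.serviceAccountName=spark
kubectl create serviceaccount spark -n default

# Grant it permission to create, list, and watch driver/executor pods
kubectl create clusterrolebinding spark-role \
  --clusterrole=edit \
  --serviceaccount=default:spark \
  -n default
```

A 403 on the watch endpoint usually means the service account the driver actually runs under lacks `watch` permission on pods, so verifying with `kubectl auth can-i watch pods --as=system:serviceaccount:default:spark` can help narrow it down.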

Below is the spark-submit command I'm using:

nohup bin/spark-submit --master k8s://https://192.168.154.58:6443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.JavaSparkPi --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark  --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=innoeye123/spark:latest local:///opt/spark/examples/jars/spark-examples_2.11-2.3.3.jar > tool.log &

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.examples;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

import java.util.ArrayList;
import java.util.List;

/**
 * Computes an approximation to pi
 * Usage: JavaSparkPi [partitions]
 */
public final class JavaSparkPi {

  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession
      .builder()
      .appName("JavaSparkPi")
      .getOrCreate();

    JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

    int slices = (args.length == 1) ? Integer.parseInt(args[0]) : 2;
    int n = 100000 * slices;
    List<Integer> l = new ArrayList<>(n);
    for (int i = 0; i < n; i++) {
      l.add(i);
    }

    JavaRDD<Integer> dataSet = jsc.parallelize(l, slices);

    int count = dataSet.map(integer -> {
      double x = Math.random() * 2 - 1;
      double y = Math.random() * 2 - 1;
      return (x * x + y * y <= 1) ? 1 : 0;
    }).reduce((integer, integer2) -> integer + integer2);

    System.out.println("Pi is roughly " + 4.0 * count / n);

    spark.stop();
  }
}


Expected result: the spark-submit command should run smoothly and terminate successfully, creating a successful pod.

Upvotes: 1

Views: 6004

Answers (2)

sacha.p

Reputation: 183

I got the same error with Spark 2.4.4 and Spark 3.0.0 on Kubernetes 1.18.

You can try using the HTTP API instead of HTTPS.

Go to your Kubernetes master and create an HTTP access point:

kubectl proxy --address=ip_your_master_k8s --port=port_what_you_want --accept-hosts='^*' --accept-paths='^.*' --disable-filter=true

and then:

nohup bin/spark-submit --master k8s://http://192.168.154.58:port_what_you_want --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.JavaSparkPi --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark  --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=innoeye123/spark:latest local:///opt/spark/examples/jars/spark-examples_2.11-2.3.3.jar > tool.log &

Upvotes: 1

QuickSilver

Reputation: 4045

Looks like it's a reported issue, SPARK-28921, which affects these Spark versions:

  • 2.3.0
  • 2.3.1
  • 2.3.3
  • 2.4.0
  • 2.4.1
  • 2.4.2
  • 2.4.3
  • 2.4.4

Check whether you are using one of the above.
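A quick way to check, assuming you already have the version string in hand (e.g. from `bin/spark-submit --version`), is a plain shell membership test against the affected list:

```shell
# Spark versions affected by SPARK-28921 (from the ticket above)
affected="2.3.0 2.3.1 2.3.3 2.4.0 2.4.1 2.4.2 2.4.3 2.4.4"

version="2.3.3"   # substitute your own version here

# Pad with spaces so "2.4.4" does not match inside "2.4.40"
case " $affected " in
  *" $version "*) echo "affected by SPARK-28921" ;;
  *)              echo "not affected" ;;
esac
```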

The fix is available in:

  • 2.4.5
  • 3.0.0

You might need an upgrade.

Upvotes: 0
