Reputation: 21
I'm trying to analyze this data with Python:
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql.functions import *
spark = SparkSession.builder.getOrCreate()

# Read the twelve monthly trip files and union them into one DataFrame
from functools import reduce
paths = [f"C:\\Users\\User\\Desktop\\Trip_data\\2021{month:02d}-divvy-tripdata.csv"
         for month in range(1, 13)]
ds_all = reduce(lambda left, right: left.union(right),
                [spark.read.csv(path, header=True) for path in paths])
print((ds_all.count(), len(ds_all.columns)))
Here is my error:
Java not found and JAVA_HOME environment variable is not set.
Install Java and set JAVA_HOME to point to the Java installation directory.
Traceback (most recent call last):
  File "C:\Users\User\PycharmProjects\pythonProject\Case Study 1.py", line 4, in <module>
    spark = SparkSession.builder.getOrCreate()
  File "C:\Users\User\PycharmProjects\pythonProject\venv\lib\site-packages\pyspark\sql\session.py", line 228, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "C:\Users\User\PycharmProjects\pythonProject\venv\lib\site-packages\pyspark\context.py", line 392, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "C:\Users\User\PycharmProjects\pythonProject\venv\lib\site-packages\pyspark\context.py", line 144, in __init__
    SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
  File "C:\Users\User\PycharmProjects\pythonProject\venv\lib\site-packages\pyspark\context.py", line 339, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway(conf)
  File "C:\Users\User\PycharmProjects\pythonProject\venv\lib\site-packages\pyspark\java_gateway.py", line 108, in launch_gateway
    raise RuntimeError("Java gateway process exited before sending its port number")
RuntimeError: Java gateway process exited before sending its port number
I have googled it, but many of the solutions are very confusing to me and I can't understand or follow them. Does anybody have ideas about this problem, or know a more convenient package I could use in PyCharm Community? Please give me some suggestions, I would appreciate it!
Upvotes: 2
Views: 12057
Reputation: 1
I managed to solve the error by setting the environment variables to point to where Spark and Java are installed.
Note: I am using Windows.
import os
# Set these before the SparkSession is created; raw strings avoid
# backslash-escape problems in Windows paths.
os.environ['SPARK_HOME'] = r'C:\spark\spark-3.3.0-bin-hadoop3'
os.environ['JAVA_HOME'] = r'C:\Java\jdk1.8.0_321'
Upvotes: 0
Reputation: 29317
This issue is caused by a missing $JAVA_HOME variable. Just set it in your ~/.bashrc (or ~/.zshrc on a Mac) file by adding the line:
export JAVA_HOME="/path/to/java_home/"
On Windows you need to add the environment variable JAVA_HOME in the System Settings.
Note that for Spark/pyspark you need a Java version >=8. Here's how to check your Java version:
% $JAVA_HOME/bin/java -version
java version "1.8.0_301"
Java(TM) SE Runtime Environment (build 1.8.0_301-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.301-b09, mixed mode)
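If you'd rather run the same check from Python before Spark starts, here's a small sketch (note that java -version writes its output to stderr, not stdout):
import os
import subprocess

java_home = os.environ.get("JAVA_HOME")
if not java_home:
    raise EnvironmentError("JAVA_HOME is not set")

# "java -version" prints to stderr, so capture and print that stream
result = subprocess.run(
    [os.path.join(java_home, "bin", "java"), "-version"],
    capture_output=True, text=True,
)
print(result.stderr)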
Upvotes: 2