Reputation: 25366
I am copying the pyspark.ml example from the official documentation website: http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.Transformer
data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),(Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])
kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)
However, the example above would not run and gave me the following error:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-28-aaffcd1239c9> in <module>()
1 from pyspark import *
2 data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),(Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
----> 3 df = spark.createDataFrame(data, ["features"])
4 kmeans = KMeans(k=2, seed=1)
5 model = kmeans.fit(df)
NameError: name 'spark' is not defined
What additional configuration/variable needs to be set to get the example running?
Upvotes: 38
Views: 181310
Reputation: 29307
spark is a variable that usually denotes the Spark session. If the variable is not defined, you can instantiate one:
from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .appName('My PySpark App') \
    .getOrCreate()
Alternatively, you can use the pyspark shell, where spark (the Spark session) as well as sc (the Spark context) are predefined (see also NameError: name 'spark' is not defined, how to solve?).
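For example, a quick sanity check once spark exists (the same check works whether it came from the builder above or from the pyspark shell):
print(spark.version)                                # prints the Spark version of the live session
spark.createDataFrame([(1,), (2,)], ["x"]).show()   # builds and displays a tiny DataFrame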
Upvotes: 0
Reputation: 11
The situation may be different now.
from pyspark.sql import SparkSession
..
spark = SparkSession(sc)
works.
Upvotes: 1
Reputation: 1039
You can add
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext('local')
spark = SparkSession(sc)
to the beginning of your code to define a SparkSession; then spark.createDataFrame() should work.
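For reference, here is a sketch of the question's example with those lines prepended (assuming Spark 2.0+, where Vectors and KMeans come from pyspark.ml):
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.clustering import KMeans

# Define the session first, then the example from the question runs as-is
sc = SparkContext('local')
spark = SparkSession(sc)

data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])

kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)
print(model.clusterCenters())   # two centers, roughly [0.5, 0.5] and [8.5, 8.5]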
Upvotes: 93
Reputation: 560
If you are using Python, you can import spark as follows; this will create a Spark session. Keep in mind that this is an older approach, though it still works.
from pyspark.shell import spark
Upvotes: 4
Reputation: 2727
If it gives you an error about another open session, do this:
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)
scraped_data = spark.read.json("/Users/reihaneh/Desktop/nov3_final_tst1/")
Upvotes: 3
Reputation: 832
The answer by 率怀一 is good and will work the first time. But the second time you try it, it will throw the following exception:
ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=pyspark-shell, master=local) created by __init__ at <ipython-input-3-786525f7559f>:10
There are two ways to avoid it.
1) Use SparkContext.getOrCreate() instead of SparkContext():
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)
2) Call sc.stop() at the end, or before you start another SparkContext.
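A minimal sketch of option 2, stopping the old context before starting a new one:
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession

sc = SparkContext('local')
spark = SparkSession(sc)
# ... do some work with spark ...
sc.stop()                      # release the existing context

sc = SparkContext('local')     # no "Cannot run multiple SparkContexts" error this time
spark = SparkSession(sc)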
Upvotes: 39
Reputation: 73366
Since you are calling createDataFrame(), you need to do this:
df = sqlContext.createDataFrame(data, ["features"])
instead of this:
df = spark.createDataFrame(data, ["features"])
spark stands there as the sqlContext.
In general, some people have that as sc, so if that didn't work, you could try:
df = sc.createDataFrame(data, ["features"])
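If sqlContext is not defined either (it is only pre-created in the pyspark shell and in some notebook setups), one way to get it is from an existing SparkContext; a sketch using the older, pre-2.0 SQLContext entry point:
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)      # older entry point; SparkSession is preferred in Spark 2.0+

df = sqlContext.createDataFrame(data, ["features"])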
Upvotes: 13