Reputation: 35
I am trying to create a recommender system from this Kaggle dataset:
https://www.kaggle.com/kerneler/starter-user-artist-playcount-dataset-f7a1f242-c
The file is called "user_artist_data_small.txt", and the data looks like this:
1059637 1000010 238
1059637 1000049 1
1059637 1000056 1
1059637 1000062 11
1059637 1000094 1
I'm getting an error on the third-to-last line of the code below.
!pip install pyspark==3.0.1 py4j==0.10.9

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType, LongType
from pyspark.sql.functions import col
from pyspark.ml.recommendation import ALS
from google.colab import drive

drive.mount('/content/gdrive')

appName = "Collaborative Filtering with PySpark"
spark = SparkSession.builder.appName(appName).getOrCreate()
sc = spark.sparkContext

userArtistData1 = sc.textFile("/content/gdrive/My Drive/data/user_artist_data_small.txt")

schema_user_artist = StructType([
    StructField("userId", StringType(), True),
    StructField("artistId", StringType(), True),
    StructField("playCount", StringType(), True)
])

userArtistRDD = userArtistData1.map(lambda k: k.split())
user_artist_df = spark.createDataFrame(userArtistRDD, schema_user_artist, ['userId', 'artistId', 'playCount'])
ua = user_artist_df.alias('ua')

# Train the model
(training, test) = ua.randomSplit([0.8, 0.2])
als = ALS(maxIter=5, implicitPrefs=True, userCol="userId", itemCol="artistId", ratingCol="playCount", coldStartStrategy="drop")
model = als.fit(training)

# Predict using the test dataset
predictions = model.transform(test)
predictions.show()
The error is:
IllegalArgumentException: requirement failed: Column userId must be of type numeric but was actually of type string.
So I changed the type from StringType to IntegerType in the schema, and then I get this error:
TypeError: field userId: IntegerType can not accept object '1059637' in type <class 'str'>
That number happens to be the first value in the dataset. For what it's worth, inspecting the first parsed record shows the fields are still plain Python strings after the split, which seems to be why the IntegerType schema rejects them. Please help?
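# split() yields lists of strings, so an IntegerType field
# cannot accept the value '1059637'
print(userArtistRDD.take(1))   # [['1059637', '1000010', '238']]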
Upvotes: 0
Views: 139
Reputation: 42402
Just create the DataFrame with the CSV reader (using a space delimiter) instead of going through an RDD. The CSV reader casts each column to the type declared in the schema while parsing, so an IntegerType schema works here, unlike createDataFrame on an RDD of strings:
user_artist_df = spark.read.schema(schema_user_artist).csv('/content/gdrive/My Drive/data/user_artist_data_small.txt', sep=' ')
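For completeness, here is a minimal end-to-end sketch of that approach, assuming you also switch the schema to IntegerType (ALS requires numeric user, item and rating columns); the column names, ALS parameters and file path are taken from the question:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("Collaborative Filtering with PySpark").getOrCreate()

# Declare all three columns as integers; the CSV reader
# casts the raw text to these types while parsing
schema_user_artist = StructType([
    StructField("userId", IntegerType(), True),
    StructField("artistId", IntegerType(), True),
    StructField("playCount", IntegerType(), True)
])

user_artist_df = spark.read.schema(schema_user_artist).csv(
    '/content/gdrive/My Drive/data/user_artist_data_small.txt', sep=' ')

(training, test) = user_artist_df.randomSplit([0.8, 0.2])
als = ALS(maxIter=5, implicitPrefs=True, userCol="userId", itemCol="artistId",
          ratingCol="playCount", coldStartStrategy="drop")
model = als.fit(training)
model.transform(test).show()

The same idea also works if you keep the RDD route and cast inside the map (lambda k: [int(x) for x in k.split()]), but the CSV reader does the casting for you.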
Upvotes: 1