Reputation: 1117
I’m working on a recommendation system, and I need to efficiently generate user history with negative sampling using the Polars API. I have two datasets:
import polars as pl
df_user_articles = pl.DataFrame({
'user_id': [1, 1, 2, 2, 2, 3, 4, 4, 4, 4],
'article_id': [101, 102, 103, 104, 105, 106, 107, 108, 109, 110]
})
This dataset contains all article_id and their corresponding text data.
df_articles = pl.DataFrame({
'article_id': list(range(100, 200)), # example article IDs
'text': ['text'] * 100
})
For each user_id, I want to collect the articles they have read into a user_history list, and build a candidates list containing those same articles suffixed with "-1" (positives) plus randomly sampled articles suffixed with "-0" (negatives).
Output format:
I want the output as a DataFrame with the following columns:
| user_id | user_history | candidates |
|---|---|---|
| 1 | ['101', '102'] | ['101-1', '102-1', '103-0'] |
| 2 | ['103', '104', '105'] | ['103-1', '104-1', '105-1', '106-0'] |
How can I efficiently achieve this using the Polars API, avoiding apply and ensuring that the solution scales well with large datasets?
To optimize for efficiency, we don’t need to check if the article IDs have been read by the user during candidate generation.
I have a solution, but it does not preserve the Categorical dtype of "article_id":
import numpy as np

matrix_size = users_df.shape[0]
num_candidates = 5

# for every user, pick `num_candidates` random row indices into articles_df
index_matrix = np.random.randint(0, articles_df.shape[0], size=(matrix_size, num_candidates))

users_df.with_columns(
    candidates=articles_df['article_id'].to_numpy()[index_matrix]
)
Upvotes: 2
Views: 78
Reputation: 117540
Maybe not the simplest way to do it, but if you need a random number of negative candidates per user, you can do something like this:
def sample(x):
    user_history = x["user_history"]
    r = x["literal"]
    return (
        df_articles
        .filter(~pl.col.article_id.is_in(user_history))
        .sample(r)
        .select(pl.col.article_id.cast(pl.String) + "-0")
        .to_series()
    )
df = (
    df_user_articles
    .group_by("user_id")
    .agg(
        user_history = "article_id",
        candidates = pl.col("article_id").cast(pl.String) + "-1"
    )
)
import numpy as np

candidates_num = 2

df.with_columns(
    pl.concat_list(
        pl.col.candidates,
        pl.struct([
            # 1..candidates_num negatives per user
            np.random.randint(1, candidates_num + 1, df.height),
            pl.col.user_history
        ])
        .map_elements(sample, return_dtype = pl.List(pl.String))
    ).alias("candidates")
)
┌─────────┬───────────────────┬───────────────────────────────┐
│ user_id ┆ user_history ┆ candidates │
│ --- ┆ --- ┆ --- │
│ i64 ┆ list[i64] ┆ list[str] │
╞═════════╪═══════════════════╪═══════════════════════════════╡
│ 1 ┆ [101, 102] ┆ ["101-1", "102-1", … "164-0"] │
│ 4 ┆ [107, 108, … 110] ┆ ["107-1", "108-1", … "115-0"] │
│ 3 ┆ [106] ┆ ["106-1", "193-0", "152-0"] │
│ 2 ┆ [103, 104, 105] ┆ ["103-1", "104-1", … "164-0"] │
└─────────┴───────────────────┴───────────────────────────────┘
There's another solution which avoids map_elements, but it may also not be very performant, since it requires a cross join between the users and the articles.
df_user_history = (
    df_user_articles
    .group_by("user_id")
    .agg(user_history = pl.col.article_id)
)

num_candidates = 5

df = (
    df_user_history
    .join(df_articles, how="cross")
    .group_by("user_id")
    .agg(
        candidates = pl.col.article_id.sample(num_candidates).unique()
    )
    .with_columns(
        pl.col.candidates.list.head(
            pl.int_range(1, num_candidates).shuffle().sample(pl.len(), with_replacement=True)
        )
    )
)
(
    df_user_history
    .join(df, on="user_id")
    .with_columns(
        pl.col.candidates.list.set_difference(pl.col.user_history)
    )
    .with_columns(
        # history items are positives ("-1"), sampled items negatives ("-0")
        candidates = pl.concat([
            pl.col.user_history.list.explode().cast(pl.String) + "-1",
            pl.col.candidates.list.explode().cast(pl.String) + "-0"
        ]).implode().over("user_id")
    )
)
Upvotes: 2