Reputation: 137
I have two shapefiles (https://drive.google.com/drive/folders/1pbvKvhIIvhqHfcMe9g6qtsjbZ6SzZrqt?usp=sharing) - one point layer and one polygon layer. The point layer represents customers and their locations, while the polygon layer represents two boundaries. The objective is to get a table in the following format:
customer | location 1 | location 2 |
---|---|---|
1 | 1 | 1 |
2 | 0 | 1 |
3 | 1 | 1 |
5 | 1 | 0 |
6 | 1 | 0 |
9 | 0 | 0 |
10 | 0 | 0 |
The way I've thought of doing this is to iterate through the polygons, run a spatial join (sjoin) with the points, and then encode the categories like so:
import geopandas as gpd
import pandas as pd

points = gpd.read_file('point.shp')
polygons = gpd.read_file('polygon.shp')

for index, row in polygons.iterrows():
    points = gpd.sjoin(points, row, how='left', op='intersects')

points = pd.get_dummies(points, columns=['name'])
I get this error message:
ValueError: 'right_df' should be GeoDataFrame, got <class 'pandas.core.series.Series'>
Appreciate any advice, thanks in advance!
Upvotes: 1
Views: 844
Reputation: 8900
You do not need a join; the intersects method is enough. Your target structure can be achieved using:
points_in_locations = points.copy()

for idx, row in polygons.iterrows():
    # Boolean mask: True where a point intersects the current polygon
    is_in_polygon = points.intersects(row.geometry)
    # Store the mask as a 0/1 column, one column per polygon
    points_in_locations[f"location {idx + 1}"] = is_in_polygon.astype(int)
resulting in:
id geometry location 1 location 2
0 1 POINT (103.87728 1.30449) 0 1
1 2 POINT (103.87723 1.30415) 0 1
2 3 POINT (103.87761 1.30408) 0 1
3 1 POINT (103.87680 1.30287) 1 0
4 5 POINT (103.87724 1.30288) 1 0
5 6 POINT (103.87710 1.30275) 1 0
6 3 POINT (103.87687 1.30270) 1 0
7 9 POINT (103.87669 1.30444) 0 0
8 10 POINT (103.87681 1.30396) 0 0
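As an aside, the ValueError in the question comes from passing a single row from iterrows() (a pandas Series) to sjoin, which expects a GeoDataFrame on both sides. If you do prefer the join-based route, one sjoin against the whole polygon layer works without any loop. Below is a minimal sketch, assuming the polygon layer has a name column and the point layer an id column (as suggested by the question's code and the output above); note that current GeoPandas uses predicate= where older versions used op=:

import geopandas as gpd
import pandas as pd

points = gpd.read_file('point.shp')
polygons = gpd.read_file('polygon.shp')

# One join against the full polygon layer: each point picks up the
# attributes of any polygon it falls in; unmatched points keep NaN.
joined = gpd.sjoin(points, polygons, how='left', predicate='intersects')

# One-hot encode the polygon name, then collapse back to one row per
# point (a point matching several polygons produces duplicate rows).
dummies = pd.get_dummies(joined['name'])
table = dummies.groupby(joined['id']).max().astype(int).reset_index()

The intersects loop above is simpler when each polygon should become exactly one column; the single join scales better when the polygon layer is large.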
Upvotes: 1