kikee1222

Reputation: 2026

Finding the distance between lat/long pairs

I am a bit stuck. I have a CSV which includes:

Site Name, Latitude, Longitude.

This CSV has 100,000 locations. I need to generate a comma-separated list for each location, showing the other locations within 5 KM.

I have tried the code below, which transposes the table and gives me 100,000 columns by 100,000 rows with the distances populated. But I am not sure how to make a new pandas column that holds a list of all the sites within 5 KM.

Can you help?

from geopy.distance import geodesic

def distance(row, lat_long_compare):
    # distance in km from this row's site to the comparison point
    lat = row['latitude']
    long = row['longitude']
    lat_long = (lat, long)
    try:
        return round(geodesic(lat_long, lat_long_compare).kilometers, 2)
    except ValueError:
        return 9999

# d is the dict of sites (same shape as the destinations sample below)
for key, value in d.items():
    lat_compare = value['latitude']
    long_compare = value['longitude']
    lat_long_compare = (lat_compare, long_compare)

    # one new column per site: its distance to every other site
    df[key] = df.apply(distance, axis=1, args=(lat_long_compare,))

Some sample data can be:

destinations = { 'bigben' : {'latitude': 51.510357,
                            'longitude': -0.116773},
                 'heathrow' : {'latitude': 51.470020,
                            'longitude': -0.454295},
                 'alton_towers' : {'latitude': 52.987662716,
                            'longitude': -1.888829778}
               }

bigben is 0.8 KM from the London Eye.
heathrow is 23.55 KM from the London Eye.
alton_towers is 204.63 KM from the London Eye.

So, in this case, the field should show only bigben.

So we get:

Site | Sites within 5KM
28   | BigBen
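
For reference, these distances can be reproduced with geopy (a quick check; the London Eye coordinates below are approximate and not part of my CSV):

from geopy.distance import geodesic

london_eye = (51.5033, -0.1196)  # approximate coordinates, not in the sample data
for name, coords in destinations.items():
    km = geodesic(london_eye, (coords['latitude'], coords['longitude'])).kilometers
    print(name, round(km, 2))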

Upvotes: 2

Views: 126

Answers (2)

wwnde

Reputation: 26676

Another way
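
Assuming a DataFrame indexed by site name, built from the question's destinations dict (a minimal setup sketch, since the code below starts from an existing df):

import pandas as pd

# site names become the index, later reused as row/column labels of the distance matrix
df = pd.DataFrame.from_dict(destinations, orient='index').rename_axis('Site Name')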

from sklearn.neighbors import DistanceMetric
from math import radians
import pandas as pd
import numpy as np
# Convert to radians
df['latitude'] = np.radians(df['latitude'])
df['longitude'] = np.radians(df['longitude'])

# Pair the cities; assume a spherical earth radius of 6373 km
dist = DistanceMetric.get_metric('haversine')
df = pd.DataFrame(dist.pairwise(df[['latitude', 'longitude']].to_numpy()) * 6373,
                  columns=df.index.unique(), index=df.index.unique())

# Keep sites within 50 km, excluding self (distance 0)
s = df.gt(0) & df.le(50)

df['Site_within_50km'] = s.agg(lambda x: x.index[x].values, axis=1)

     bigben    heathrow  alton_towers Site_within_50km
bigben          0.000000   23.802459    203.857533       [heathrow]
heathrow       23.802459    0.000000    195.048961         [bigben]
alton_towers  203.857533  195.048961      0.000000               []
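
If a comma-separated string is preferred, as in the question, the aggregation can join the names instead of collecting an array (a small variation on the code above):

df['Site_within_50km'] = s.agg(lambda x: ', '.join(x.index[x]), axis=1)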

Upvotes: 1

Ben.T

Reputation: 29635

Here is one way with NearestNeighbors.

import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# data from your input
df = pd.DataFrame.from_dict(destinations, orient='index').rename_axis('Site Name').reset_index()

radius = 50 #change to whatever, in km

# create the estimator with the radius and the haversine metric for geospatial distance
neigh = NearestNeighbors(radius=radius/6371,  metric='haversine')

# fit the data in radians
neigh.fit(df[['latitude', 'longitude']].to_numpy()*np.pi/180)

# extract result and transform to get the expected output
df[f'Site_within_{radius}km'] = (
    pd.Series(neigh.radius_neighbors()[1]) # get a list of index for each row
      .explode() 
      .map(df['Site Name']) # get the site name from row index
      .groupby(level=0) # transform back to row-row relation
      .agg(list) # can use ', '.join instead of list 
)

print(df)
     Site Name   latitude  longitude Site_within_50km
0        bigben  51.510357  -0.116773       [heathrow]
1      heathrow  51.470020  -0.454295         [bigben]
2  alton_towers  52.987663  -1.888830            [nan]
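
To get a comma-separated string and avoid the [nan] placeholder for sites with no neighbour in range, the final aggregation could be swapped (a sketch, assuming the same df and neigh as above):

df[f'Site_within_{radius}km'] = (
    pd.Series(neigh.radius_neighbors()[1])
      .explode()
      .map(df['Site Name'])
      .groupby(level=0)
      .agg(lambda x: ', '.join(x.dropna()))  # empty groups become ''
)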

Upvotes: 2
