Reputation: 1
I'm using the Fred library (https://pypi.org/project/Fred-Frechet/), a Python package that implements the discrete Fréchet distance, to measure the similarity between two curves (specifically, I'm interested in the maximal distance between the two curves). Each curve is stored in a CSV file and contains approximately 50,000 points. When I run the algorithm, it consumes about 9 GB of RAM, which matches the size of a full 50,000 × 50,000 dynamic-programming matrix stored as 32-bit floats (roughly 9.3 GiB), so I suspect the whole matrix is being materialized.
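Stripped down, my setup looks roughly like the sketch below (the file names are placeholders for my real data, and the calls follow the `Curve` / `discrete_frechet` usage shown in Fred's README; adjust if your version differs):

```python
import numpy as np
import Fred as fred

# Placeholder file names; each CSV holds one curve,
# one point per row (x, y columns), ~50,000 rows.
points_a = np.loadtxt("curve_a.csv", delimiter=",")
points_b = np.loadtxt("curve_b.csv", delimiter=",")

curve_a = fred.Curve(points_a)
curve_b = fred.Curve(points_b)

# Memory spikes to about 9 GB during this call.
result = fred.discrete_frechet(curve_a, curve_b)
print(result.value)
```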
I have also tried Dynamic Time Warping (DTW) on the same curves, but its memory usage was even higher, which is not surprising since standard DTW fills an O(n·m) cost matrix of the same size.
Does anyone have ideas on how to reduce the memory usage for curves of this size? Any suggestions or alternative approaches would be greatly appreciated.
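One direction I've been considering, though I'm unsure how much it distorts the result, is decimating the curves before the comparison. A minimal sketch of that idea follows; `decimate` is a hypothetical helper I wrote for illustration, not part of Fred:

```python
import numpy as np

def decimate(points: np.ndarray, target: int = 5_000) -> np.ndarray:
    """Keep every step-th point so the curve has at most roughly `target` points."""
    step = max(1, len(points) // target)
    return points[::step]

# Placeholder file names, as above.
points_a = decimate(np.loadtxt("curve_a.csv", delimiter=","))
points_b = decimate(np.loadtxt("curve_b.csv", delimiter=","))
# A 5,000 x 5,000 float32 matrix is ~100 MB instead of ~9.3 GiB,
# but the distance computed on the decimated curves is only an approximation.
```

Is that a sound trade-off for the Fréchet distance, or is there a way to get the exact value without the full matrix?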
Upvotes: 0
Views: 35