I'm using the Pygalmesh library on point clouds whose vertex counts vary; some clouds have 2000 or more vertices. I'm running everything in Google Colab, and I'm having a problem when executing the generate_mesh method: it starts off fine, but after a few minutes the RAM usage climbs to the limit and Colab kills the session. Is there a way to optimize this, or am I implementing it incorrectly?
My code:
import pygalmesh
import numpy as np

class Custom(pygalmesh.DomainBase):
    def __init__(self, points):
        super().__init__()
        self.points = np.array(points)

    def eval(self, x):
        # Signed distance: negative inside the union of balls of
        # radius 0.1 centered on the cloud points.
        distances = np.linalg.norm(self.points - np.array(x), axis=1)
        return distances.min() - 0.1

    def get_bounding_sphere_squared_radius(self):
        # Sphere around the centroid that encloses all points,
        # padded by the 0.1 offset.
        center = np.mean(self.points, axis=0)
        max_distance = np.max(np.linalg.norm(self.points - center, axis=1))
        return (max_distance + 0.1) ** 2

verts = [
    [254.338, 32.655, 37.4208],
    [254.342, 32.6546, 37.4204],
    [254.352, 32.6535, 37.4195],
    # ...
]

custom_mesh = Custom(verts)
mesh = pygalmesh.generate_mesh(custom_mesh, max_cell_circumradius=0.2)
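One variation I'm considering replaces the per-call NumPy scan in eval with a nearest-neighbour query against a prebuilt SciPy KD-tree (scipy.spatial.cKDTree), since eval is called a huge number of times during meshing and each call above allocates a fresh (n, 3) temporary array. I don't know whether those temporaries are actually what exhausts the RAM, so this is just a sketch:

import numpy as np
import pygalmesh
from scipy.spatial import cKDTree

class CustomKDTree(pygalmesh.DomainBase):
    def __init__(self, points):
        super().__init__()
        points = np.asarray(points)
        # Build the tree once up front; each query is then O(log n)
        # and avoids the full-array temporaries of the linear scan.
        self.tree = cKDTree(points)
        center = points.mean(axis=0)
        max_distance = np.max(np.linalg.norm(points - center, axis=1))
        self._squared_radius = (max_distance + 0.1) ** 2

    def eval(self, x):
        # Distance to the nearest cloud point minus the 0.1 offset:
        # the same signed distance as the original eval.
        d, _ = self.tree.query(x)
        return d - 0.1

    def get_bounding_sphere_squared_radius(self):
        return self._squared_radius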
I'd like generate_mesh to run to completion without the machine running out of memory, so that I can visualize the results for different numbers of points.
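Would subsampling the cloud before meshing also be a legitimate workaround, in case the mesh itself (rather than eval) is what fills the memory? Something like the following, where the step of 4 is arbitrary:

verts_sub = np.asarray(verts)[::4]  # keep every 4th point; step chosen arbitrarily
custom_mesh = Custom(verts_sub)
mesh = pygalmesh.generate_mesh(custom_mesh, max_cell_circumradius=0.2)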