d95l

Reputation: 13

Neptune serverless freeable memory decreases day by day

I have a Neptune serverless instance that has been running perfectly fine, scaling up and down as expected in accordance with the clear seasonality of reads and writes to it. In the past six weeks I have noticed a gradual decrease in Neptune's freeable memory, which is leading to an increase in the number of NCUs it uses.

Neptune gets about 5% of the traffic in off-peak hours compared to peak hours. The freeable memory used to bounce back during off-peak hours, but that has not been the case recently. There have been no changes to any of the reading or writing Lambdas, and they are running as they have been for months. I am using Python Lambdas and Gremlin to query Neptune.
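For context, the read path in the Lambdas looks roughly like this (a simplified sketch; the endpoint and traversal are placeholders, not the real ones):

    # Simplified sketch of a reading Lambda (endpoint and traversal are placeholders)
    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    from gremlin_python.process.anonymous_traversal import traversal

    NEPTUNE_ENDPOINT = "wss://<cluster-endpoint>:8182/gremlin"  # placeholder

    def handler(event, context):
        conn = DriverRemoteConnection(NEPTUNE_ENDPOINT, "g")
        g = traversal().withRemote(conn)
        try:
            # Representative read: fetch a vertex by id and return its properties
            return g.V(event["vertex_id"]).valueMap(True).toList()
        finally:
            conn.close()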

The NCU utilization: [screenshot]

The freeable memory: [screenshot]

The CPU utilization: [screenshot]

I am starting to run out of ideas about what is causing this. It feels like a potential caching issue, and I do have the 'neptune_lookup_cache' parameter enabled, but the documentation around it is a bit vague, and I also had it on during November and December, which were much busier and issue-free.

I would greatly appreciate input or any direction from anyone who has had a similar issue.

Upvotes: 0

Views: 223

Answers (1)

Taylor Riggan

Reputation: 2769

The neptune_lookup_cache feature is only supported on R5d instance types and is not supported on the serverless instance types. Even if enabled, that parameter is ignored unless the instance is of a supported type. For more information on how the lookup cache works, I would suggest this blog series on Neptune's caching features: https://aws.amazon.com/blogs/database/part-1-accelerate-graph-query-performance-with-caching-in-amazon-neptune/
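If you want to double-check what instance class your writer and readers are actually running on, you can list the instances in the cluster (a sketch using boto3; the cluster identifier is a placeholder):

    # Sketch: confirm the instance classes in the cluster (cluster id is a placeholder)
    import boto3

    neptune = boto3.client("neptune")
    resp = neptune.describe_db_instances(
        Filters=[{"Name": "db-cluster-id", "Values": ["<your-cluster-id>"]}]
    )
    for db in resp["DBInstances"]:
        # Serverless instances report "db.serverless"; the lookup cache needs db.r5d.* classes
        print(db["DBInstanceIdentifier"], db["DBInstanceClass"])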

Freeable memory is the amount of RAM available for query execution. As it goes down, it means that requests to Neptune require more and more memory to process. This is generally seen when a graph begins to grow and the associated queries are not written to avoid scaling with the size of the graph (the query "frontier", i.e. the amount of data a query needs to process, grows with graph scale). I would check your query processing times (if you are tracking those elsewhere) and see whether they are also increasing over the same period. If they are, look for opportunities to tune the queries so they do not scale with the graph. That may not be possible if your queries perform larger aggregations such as grouping or sorting.
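If you aren't already capturing query times, a lightweight way to start is to time each traversal inside the Lambda and write it to the logs, so you can graph the trend over weeks (a sketch; the endpoint and traversal are placeholders):

    # Sketch: log per-query wall-clock time so the trend can be graphed from the logs
    import logging
    import time

    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    from gremlin_python.process.anonymous_traversal import traversal

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    conn = DriverRemoteConnection("wss://<cluster-endpoint>:8182/gremlin", "g")  # placeholder
    g = traversal().withRemote(conn)

    def timed_query(label, build_traversal):
        """Run a traversal and log its elapsed time and result size."""
        start = time.perf_counter()
        result = build_traversal().toList()
        elapsed_ms = (time.perf_counter() - start) * 1000
        logger.info("neptune_query=%s elapsed_ms=%.1f rows=%d", label, elapsed_ms, len(result))
        return result

    # Example usage with a placeholder traversal
    rows = timed_query("orders_by_customer",
                       lambda: g.V().hasLabel("customer").out("placed").limit(100))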

Upvotes: 0
