Reputation: 136
I have an AWS Lambda function packaged as a ZIP that uses the Datadog v65 extension as a layer. I haven't imported ddtrace anywhere in my code.
I have set the following environment variables:
DD_TRACE_ENABLED=true
DD_PROFILING_ENABLED=true
When the Lambda starts, the logs show that Datadog automatically instruments 60 integrations:
Configured ddtrace instrumentation for 60 integration(s). The following modules have been patched: aioredis,aiomysql,aredis,asyncio,avro,boto,botocore,bottle,cassandra,celery,consul,django,dramatiq,elasticsearch,algoliasearch,futures,google_generativeai,gevent,graphql,grpc,httpx,kafka,mongoengine,mysql,mysqldb,pymysql,mariadb,psycopg,pylibmc,pymemcache,pymongo,redis,rediscluster,requests,rq,sanic,sqlite3,aiohttp,aiohttp_jinja2,aiopg,vertica,molten,jinja2,mako,flask,starlette,falcon,pyramid,pynamodb,pyodbc,fastapi,dogpile_cache,yaaredis,asyncpg,aws_lambda,openai,langchain,anthropic,subprocess,unittest
This results in excessive low-level traces, which I don’t need. It feels like code profiling rather than just tracking high-level requests.
I don't want to profile the code; I just want high-level traces for the pymongo and requests modules.
Is there an easy way to remove this level of granularity?
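For example, I was hoping I could keep only the two integrations I care about with something like this (a sketch based on ddtrace's documented DD_PATCH_MODULES variable, which overrides per-module patching; I haven't confirmed the Lambda layer honors it, and the module list here is just illustrative):
DD_PATCH_MODULES=pymongo:true,requests:true,asyncio:false,sqlite3:false,subprocess:false,unittest:false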
I tried
DD_PROFILING_ENABLED=false
But it doesn't seem to do the trick: a separate resource called aws.lambda still shows up in my traces, and the low-level data is included on it.
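Failing an env-var switch, the only fallback I can think of is importing ddtrace myself and patching only what I need. A minimal sketch of what I mean, based on ddtrace's patch() API (the handler name and URL are placeholders; I haven't tested this alongside the layer's automatic wrapper):

from ddtrace import patch

# Patch only the integrations I actually want traced; everything else
# stays uninstrumented. patch() should run before the modules are imported.
patch(pymongo=True, requests=True)

import requests  # now traced at the request level only

def handler(event, context):
    requests.get("https://example.com")  # emits one high-level span
    return {"statusCode": 200}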
Upvotes: 0
Views: 26