Reputation: 303
I want to be able to debug a running entry_script.py in VSCode. This script runs in the container created through az ml deploy, which issues its own docker run command. This is a local deployment, so I'm using a deployment config that looks like this:
{
    "computeType": "LOCAL",
    "port": 32267
}
I was thinking about using ptvsd to set up a VSCode debug server, but then I also need to expose/map port 5678 in addition to the 32267 port for the endpoint itself. It's not clear to me how to map an additional exposed port (typically done with the -p or -P flags of the docker run command).
Sure, I can EXPOSE it via the extra_dockerfile_steps configuration, but that won't actually map it to a host port that I can connect/attach to in VSCode.
I tried to determine the docker run command so I could modify it, but I couldn't find out what that command is. If I knew how the image created by the AzureML local deployment is run, I could adjust those flags myself. Ultimately it felt too hacky; if there is a more supported way through az ml deploy or through the deployment configuration, that would be preferred.
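For reference, here is a rough sketch (assuming the Docker SDK for Python, the docker package, which is not part of AzureML) that inspects the container the local deployment starts and prints its existing port bindings. It only shows what was mapped at docker run time; it can't add a mapping to a running container:
import docker

# Assumes the docker package (pip install docker) and that the local deployment container is running.
client = docker.from_env()
for container in client.containers.list():
    bindings = container.attrs["HostConfig"]["PortBindings"]
    print(container.name, container.image.tags, bindings)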
This is the code I'm using at the start of the entry_script to enable attachment via ptvsd:
import ptvsd

# 5678 is the default attach port in the VS Code debug configurations
print("Waiting for debugger attach")
ptvsd.enable_attach(address=('localhost', 5678), redirect_output=True)
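(Side note: when attaching from the host into a container, the debug server presumably has to listen on 0.0.0.0 rather than localhost, otherwise the mapped port isn't reachable from outside the container. A variant sketch of the snippet above:)
import ptvsd

# Bind on all interfaces so the host can reach the debug server through a -p port mapping;
# a server listening on localhost inside the container is not reachable from outside.
ptvsd.enable_attach(address=('0.0.0.0', 5678), redirect_output=True)
ptvsd.wait_for_attach()  # optional: block until VS Code attaches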
Upvotes: 1
Views: 728
Reputation: 2730
Build a second Dockerfile:
FROM [YOUR_GENERATED_IMAGE]
EXPOSE [YOUR_PORT]
From the command line, in your folder:
docker build -t my_new_image .
docker run -p <port>:<port> my_new_image
You may need to add additional run options depending on the ports, environment variables, etc. that you need.
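Equivalently, if you would rather drive this from Python, here is a rough sketch using the Docker SDK (the docker package, assumed installed) that builds the wrapper image and publishes both the scoring port and the debug port from the question:
import docker

client = docker.from_env()

# Build the wrapper image from the Dockerfile above (assumed to be in the current folder).
image, _ = client.images.build(path=".", tag="my_new_image")

# Run it, publishing the scoring port and the ptvsd debug port to the host.
container = client.containers.run(
    "my_new_image",
    detach=True,
    ports={"32267/tcp": 32267, "5678/tcp": 5678},
)
print("started container {}".format(container.name))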
How to get your generated image name:
from azureml.core.image import Image  # AzureML SDK v1
import json
import os

# az_ws, model, image_config and resolve_image_name() come from your own setup
image = Image.create(workspace=az_ws, name=resolve_image_name(), models=[model], image_config=image_config)
image.wait_for_creation()
print("created image")
if image.creation_state != "Succeeded":
    raise Exception("Failed to create image.")
print("image location: {}".format(image.image_location))
artifacts = {"image_location": image.image_location}
if not os.path.exists("/artifacts/"):
    os.makedirs("/artifacts/")
with open("/artifacts/artifacts.json", "w") as outjson:
    json.dump(artifacts, outjson)
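The image_location saved above is what goes into the FROM line of the second Dockerfile, e.g. reading it back:
import json

with open("/artifacts/artifacts.json") as injson:
    print("FROM {}".format(json.load(injson)["image_location"]))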
Upvotes: 1
Reputation: 380
Unfortunately, az ml deploy for local deployments doesn't support binding any ports other than the port hosting the scoring server.
Upvotes: 1