user28842828

Reputation: 1

How to configure proxy settings for the Google Vertex AI Python SDK?

I'm trying to configure the Google Cloud Vertex AI Python SDK (https://github.com/googleapis/python-aiplatform) to work behind a corporate proxy in a production environment.

All network connections in my environment are required to route through this proxy.

Is there a way to configure the proxy only for the Google Vertex AI Python SDK, without interfering with other parts of my system or other applications?

Is there a built-in method or a recommended pattern to achieve per-instance or per-client proxy configuration for the Vertex AI Python SDK?
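Ideally I'm looking for something along these lines. To be clear, the proxy argument below is hypothetical; as far as I can tell it does not exist in the SDK, it just illustrates the kind of per-client configuration I'm hoping for:

# Hypothetical sketch -- a "proxy" parameter like this does not exist in
# vertexai.init(); it only shows the per-client configuration I'm after.
vertexai.init(
    project="...",
    location="...",
    credentials=credentials,
    proxy="http://my-proxy:port",  # <-- the part I don't know how to do
)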

My current Python code is structured somewhat like this:

import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

# ...
class VertexClient:

    def generate_stream(self, credentials, contents):
        vertexai.init(project="...", location="...", credentials=credentials)
        model = GenerativeModel("...")
        generation_config = GenerationConfig(max_output_tokens=8192, temperature=1, top_p=0.05)
        # Streamed generation; these are the outbound calls that need to go through the proxy.
        return model.generate_content(
            contents=contents,
            generation_config=generation_config,
            stream=True)

Setting the proxy globally using environment variables (https_proxy, http_proxy) affects other applications and Kubernetes pods, which causes disruptions and system instability.

I also tried setting and then unsetting environment variables programmatically within my Python code for each generate_stream call:

os.environ["https_proxy"] = "http://my-proxy:port"
os.environ["http_proxy"] = "http://my-proxy:port"

I'm not entirely certain, but I suspect this approach could introduce issues, since environment variables are process-wide: setting and unsetting them around each call could affect other threads or libraries running in the same process.

Any help or guidance would be appreciated!

Upvotes: 0

Views: 231

Answers (0)
