Reputation: 326
The schema generated by SchemaGen, modified by domain experts, captures the expected input data. Vertex AI allows for models trained by TFX to be pushed to endpoints. How does one get the schema attached to the models such that skew/drift detection can be done?
Upvotes: 0
Views: 54
Reputation: 155
Here is a high-level example of how the schema generated by SchemaGen flows through a TFX pipeline whose model can then be monitored by Vertex AI.
Python
from tfx import v1 as tfx

# Schema produced by SchemaGen (and optionally curated by domain experts)
schema = ...  # Your SchemaGen-generated schema artifact

# Create a TFX pipeline that passes the schema to the Trainer
pipeline = tfx.dsl.Pipeline(
    pipeline_name="my_pipeline",
    pipeline_root="pipeline_output",
    components=[
        # ... other components in your pipeline
        tfx.components.Trainer(
            # ... other trainer arguments
            schema=schema,
        ),
        # ... other components, e.g. a Pusher (or its Vertex AI variant,
        # tfx.extensions.google_cloud_ai_platform.Pusher) that uploads the
        # trained model to the Vertex AI Model Registry
    ],
    enable_cache=True,
)

# The monitoring job itself is created separately with the Vertex AI SDK,
# outside the TFX pipeline.
To create the Vertex AI Model Monitoring job, you may refer to the public documentation on Creating a Model Monitoring Job. To set up either skew detection or drift detection, create a model deployment monitoring job.
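As a concrete sketch, using the google-cloud-aiplatform SDK: the monitoring job accepts an instance schema via `analysis_instance_schema_uri`, which is where a schema derived from your SchemaGen output can be supplied (it must be converted to Vertex AI's instance-schema format). All project IDs, endpoint IDs, paths, feature names, and thresholds below are placeholders, and exact parameter names may vary by SDK version:

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="PROJECT_ID", location="us-central1")

# Skew detection compares serving data against a training baseline;
# the threshold values here are purely illustrative.
skew_config = model_monitoring.SkewDetectionConfig(
    data_source="gs://my-bucket/train/*.tfrecord",  # training baseline
    data_format="tf-record",
    target_field="label",
    skew_thresholds={"feature_a": 0.3},
)
drift_config = model_monitoring.DriftDetectionConfig(
    drift_thresholds={"feature_a": 0.3},
)
objective_config = model_monitoring.ObjectiveConfig(
    skew_detection_config=skew_config,
    drift_detection_config=drift_config,
)

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="my-monitoring-job",
    endpoint="ENDPOINT_ID",
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    objective_configs=objective_config,
    # Schema describing a single prediction instance, so the service can
    # parse incoming requests for skew/drift analysis. This is a
    # hypothetical path to a schema file you export from your pipeline.
    analysis_instance_schema_uri="gs://my-bucket/schema/instance_schema.yaml",
)
```

This runs against a live Google Cloud project, so it is not executable as-is; it is meant to show where the schema attaches to the monitoring job rather than to serve as a drop-in script.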
After the monitoring job is created, Model Monitoring logs incoming prediction requests in a generated BigQuery table named PROJECT_ID.model_deployment_monitoring_ENDPOINT_ID.serving_predict. If request-response logging is enabled, Model Monitoring logs incoming requests in the same BigQuery table that is used for request-response logging.
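Once requests are flowing, the logged instances can be inspected directly from that table. A sketch assuming the google-cloud-bigquery client, with PROJECT_ID and ENDPOINT_ID as placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="PROJECT_ID")

# Pull a small sample of the logged prediction requests from the table
# that Model Monitoring generates for the endpoint.
query = """
SELECT *
FROM `PROJECT_ID.model_deployment_monitoring_ENDPOINT_ID.serving_predict`
LIMIT 10
"""
for row in client.query(query).result():
    print(dict(row))
```

Like the job-creation snippet, this requires a live project and an endpoint that has already received traffic, so treat it as an illustration of where the logs land rather than a runnable script.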
Upvotes: -1