We recommend new projects start with resources from the AWS provider.
aws-native.sagemaker.DataQualityJobDefinition
Resource Type definition for AWS::SageMaker::DataQualityJobDefinition
Create DataQualityJobDefinition Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new DataQualityJobDefinition(name: string, args: DataQualityJobDefinitionArgs, opts?: CustomResourceOptions);
@overload
def DataQualityJobDefinition(resource_name: str,
                             args: DataQualityJobDefinitionArgs,
                             opts: Optional[ResourceOptions] = None)
@overload
def DataQualityJobDefinition(resource_name: str,
                             opts: Optional[ResourceOptions] = None,
                             data_quality_app_specification: Optional[DataQualityJobDefinitionDataQualityAppSpecificationArgs] = None,
                             data_quality_job_input: Optional[DataQualityJobDefinitionDataQualityJobInputArgs] = None,
                             data_quality_job_output_config: Optional[DataQualityJobDefinitionMonitoringOutputConfigArgs] = None,
                             job_resources: Optional[DataQualityJobDefinitionMonitoringResourcesArgs] = None,
                             role_arn: Optional[str] = None,
                             data_quality_baseline_config: Optional[DataQualityJobDefinitionDataQualityBaselineConfigArgs] = None,
                             endpoint_name: Optional[str] = None,
                             job_definition_name: Optional[str] = None,
                             network_config: Optional[DataQualityJobDefinitionNetworkConfigArgs] = None,
                             stopping_condition: Optional[DataQualityJobDefinitionStoppingConditionArgs] = None,
                             tags: Optional[Sequence[_root_inputs.CreateOnlyTagArgs]] = None)
func NewDataQualityJobDefinition(ctx *Context, name string, args DataQualityJobDefinitionArgs, opts ...ResourceOption) (*DataQualityJobDefinition, error)
public DataQualityJobDefinition(string name, DataQualityJobDefinitionArgs args, CustomResourceOptions? opts = null)
public DataQualityJobDefinition(String name, DataQualityJobDefinitionArgs args)
public DataQualityJobDefinition(String name, DataQualityJobDefinitionArgs args, CustomResourceOptions options)
type: aws-native:sagemaker:DataQualityJobDefinition
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args DataQualityJobDefinitionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args DataQualityJobDefinitionArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DataQualityJobDefinitionArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DataQualityJobDefinitionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args DataQualityJobDefinitionArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
DataQualityJobDefinition Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The DataQualityJobDefinition resource accepts the following input properties:
- DataQualityAppSpecification Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionDataQualityAppSpecification
- Specifies the container that runs the monitoring job.
- DataQualityJobInput Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionDataQualityJobInput
- A list of inputs for the monitoring job. Currently, endpoints are supported as monitoring inputs.
- DataQualityJobOutputConfig Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionMonitoringOutputConfig
- The output configuration for monitoring jobs.
- JobResources Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionMonitoringResources
- Identifies the resources to deploy for a monitoring job.
- RoleArn string
- The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- DataQualityBaselineConfig Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionDataQualityBaselineConfig
- Configures the constraints and baselines for the monitoring job.
- EndpointName string
- JobDefinitionName string
- The name for the monitoring job definition.
- NetworkConfig Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionNetworkConfig
- Specifies networking configuration for the monitoring job.
- StoppingCondition Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionStoppingCondition
- A time limit for how long the monitoring job is allowed to run before stopping.
- Tags List<Pulumi.AwsNative.Inputs.CreateOnlyTag>
- An array of key-value pairs to apply to this resource.
- DataQualityAppSpecification DataQualityJobDefinitionDataQualityAppSpecificationArgs
- Specifies the container that runs the monitoring job.
- DataQualityJobInput DataQualityJobDefinitionDataQualityJobInputArgs
- A list of inputs for the monitoring job. Currently, endpoints are supported as monitoring inputs.
- DataQualityJobOutputConfig DataQualityJobDefinitionMonitoringOutputConfigArgs
- The output configuration for monitoring jobs.
- JobResources DataQualityJobDefinitionMonitoringResourcesArgs
- Identifies the resources to deploy for a monitoring job.
- RoleArn string
- The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- DataQualityBaselineConfig DataQualityJobDefinitionDataQualityBaselineConfigArgs
- Configures the constraints and baselines for the monitoring job.
- EndpointName string
- JobDefinitionName string
- The name for the monitoring job definition.
- NetworkConfig DataQualityJobDefinitionNetworkConfigArgs
- Specifies networking configuration for the monitoring job.
- StoppingCondition DataQualityJobDefinitionStoppingConditionArgs
- A time limit for how long the monitoring job is allowed to run before stopping.
- Tags []CreateOnlyTagArgs
- An array of key-value pairs to apply to this resource.
- dataQualityAppSpecification DataQualityJobDefinitionDataQualityAppSpecification
- Specifies the container that runs the monitoring job.
- dataQualityJobInput DataQualityJobDefinitionDataQualityJobInput
- A list of inputs for the monitoring job. Currently, endpoints are supported as monitoring inputs.
- dataQualityJobOutputConfig DataQualityJobDefinitionMonitoringOutputConfig
- The output configuration for monitoring jobs.
- jobResources DataQualityJobDefinitionMonitoringResources
- Identifies the resources to deploy for a monitoring job.
- roleArn String
- The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- dataQualityBaselineConfig DataQualityJobDefinitionDataQualityBaselineConfig
- Configures the constraints and baselines for the monitoring job.
- endpointName String
- jobDefinitionName String
- The name for the monitoring job definition.
- networkConfig DataQualityJobDefinitionNetworkConfig
- Specifies networking configuration for the monitoring job.
- stoppingCondition DataQualityJobDefinitionStoppingCondition
- A time limit for how long the monitoring job is allowed to run before stopping.
- tags List<CreateOnlyTag>
- An array of key-value pairs to apply to this resource.
- dataQualityAppSpecification DataQualityJobDefinitionDataQualityAppSpecification
- Specifies the container that runs the monitoring job.
- dataQualityJobInput DataQualityJobDefinitionDataQualityJobInput
- A list of inputs for the monitoring job. Currently, endpoints are supported as monitoring inputs.
- dataQualityJobOutputConfig DataQualityJobDefinitionMonitoringOutputConfig
- The output configuration for monitoring jobs.
- jobResources DataQualityJobDefinitionMonitoringResources
- Identifies the resources to deploy for a monitoring job.
- roleArn string
- The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- dataQualityBaselineConfig DataQualityJobDefinitionDataQualityBaselineConfig
- Configures the constraints and baselines for the monitoring job.
- endpointName string
- jobDefinitionName string
- The name for the monitoring job definition.
- networkConfig DataQualityJobDefinitionNetworkConfig
- Specifies networking configuration for the monitoring job.
- stoppingCondition DataQualityJobDefinitionStoppingCondition
- A time limit for how long the monitoring job is allowed to run before stopping.
- tags CreateOnlyTag[]
- An array of key-value pairs to apply to this resource.
- data_quality_app_specification DataQualityJobDefinitionDataQualityAppSpecificationArgs
- Specifies the container that runs the monitoring job.
- data_quality_job_input DataQualityJobDefinitionDataQualityJobInputArgs
- A list of inputs for the monitoring job. Currently, endpoints are supported as monitoring inputs.
- data_quality_job_output_config DataQualityJobDefinitionMonitoringOutputConfigArgs
- The output configuration for monitoring jobs.
- job_resources DataQualityJobDefinitionMonitoringResourcesArgs
- Identifies the resources to deploy for a monitoring job.
- role_arn str
- The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- data_quality_baseline_config DataQualityJobDefinitionDataQualityBaselineConfigArgs
- Configures the constraints and baselines for the monitoring job.
- endpoint_name str
- job_definition_name str
- The name for the monitoring job definition.
- network_config DataQualityJobDefinitionNetworkConfigArgs
- Specifies networking configuration for the monitoring job.
- stopping_condition DataQualityJobDefinitionStoppingConditionArgs
- A time limit for how long the monitoring job is allowed to run before stopping.
- tags Sequence[CreateOnlyTagArgs]
- An array of key-value pairs to apply to this resource.
- dataQualityAppSpecification Property Map
- Specifies the container that runs the monitoring job.
- dataQualityJobInput Property Map
- A list of inputs for the monitoring job. Currently, endpoints are supported as monitoring inputs.
- dataQualityJobOutputConfig Property Map
- The output configuration for monitoring jobs.
- jobResources Property Map
- Identifies the resources to deploy for a monitoring job.
- roleArn String
- The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- dataQualityBaselineConfig Property Map
- Configures the constraints and baselines for the monitoring job.
- endpointName String
- jobDefinitionName String
- The name for the monitoring job definition.
- networkConfig Property Map
- Specifies networking configuration for the monitoring job.
- stoppingCondition Property Map
- A time limit for how long the monitoring job is allowed to run before stopping.
- tags List<Property Map>
- An array of key-value pairs to apply to this resource.
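Putting the required inputs together, a minimal declaration in Pulumi YAML might look like the sketch below. This is illustrative, not a tested configuration: the role ARN, image URI, endpoint name, and S3 paths are placeholders, and the nested property names (`endpointInput`, `monitoringOutputs`, `s3Output`) follow the underlying CloudFormation schema and should be checked against the supporting type listings.

```yaml
resources:
  dataQualityJob:
    type: aws-native:sagemaker:DataQualityJobDefinition
    properties:
      # IAM role SageMaker assumes to run the monitoring job (placeholder ARN).
      roleArn: arn:aws:iam::123456789012:role/SageMakerMonitoringRole
      dataQualityAppSpecification:
        # Analysis container image (placeholder URI).
        imageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-monitor-analyzer
      dataQualityJobInput:
        endpointInput:
          endpointName: my-endpoint            # placeholder endpoint name
          localPath: /opt/ml/processing/input
      dataQualityJobOutputConfig:
        monitoringOutputs:
          - s3Output:
              s3Uri: s3://my-bucket/monitoring/output   # placeholder bucket
              localPath: /opt/ml/processing/output
      jobResources:
        clusterConfig:
          instanceCount: 1
          instanceType: ml.m5.xlarge
          volumeSizeInGb: 20
```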
Outputs
All input properties are implicitly available as output properties. Additionally, the DataQualityJobDefinition resource produces the following output properties:
- CreationTime string
- The time at which the job definition was created.
- Id string
- The provider-assigned unique ID for this managed resource.
- JobDefinitionArn string
- The Amazon Resource Name (ARN) of the job definition.
- CreationTime string
- The time at which the job definition was created.
- Id string
- The provider-assigned unique ID for this managed resource.
- JobDefinitionArn string
- The Amazon Resource Name (ARN) of the job definition.
- creationTime String
- The time at which the job definition was created.
- id String
- The provider-assigned unique ID for this managed resource.
- jobDefinitionArn String
- The Amazon Resource Name (ARN) of the job definition.
- creationTime string
- The time at which the job definition was created.
- id string
- The provider-assigned unique ID for this managed resource.
- jobDefinitionArn string
- The Amazon Resource Name (ARN) of the job definition.
- creation_time str
- The time at which the job definition was created.
- id str
- The provider-assigned unique ID for this managed resource.
- job_definition_arn str
- The Amazon Resource Name (ARN) of the job definition.
- creationTime String
- The time at which the job definition was created.
- id String
- The provider-assigned unique ID for this managed resource.
- jobDefinitionArn String
- The Amazon Resource Name (ARN) of the job definition.
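Once created, these outputs can be referenced like those of any other resource. For example, in Pulumi YAML (the resource name `dataQualityJob` is a placeholder):

```yaml
outputs:
  # Export the ARN assigned to the job definition on creation.
  jobDefinitionArn: ${dataQualityJob.jobDefinitionArn}
```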
Supporting Types
CreateOnlyTag, CreateOnlyTagArgs      
DataQualityJobDefinitionBatchTransformInput, DataQualityJobDefinitionBatchTransformInputArgs              
- DataCapturedDestinationS3Uri string
- A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
- DatasetFormat Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionDatasetFormat
- The dataset format for your batch transform job.
- LocalPath string
- Path to the filesystem where the endpoint data is available to the container.
- ExcludeFeaturesAttribute string
- Indexes or names of the features to be excluded from analysis.
- S3DataDistributionType Pulumi.AwsNative.SageMaker.DataQualityJobDefinitionBatchTransformInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- S3InputMode Pulumi.AwsNative.SageMaker.DataQualityJobDefinitionBatchTransformInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- DataCapturedDestinationS3Uri string
- A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
- DatasetFormat DataQualityJobDefinitionDatasetFormat
- The dataset format for your batch transform job.
- LocalPath string
- Path to the filesystem where the endpoint data is available to the container.
- ExcludeFeaturesAttribute string
- Indexes or names of the features to be excluded from analysis.
- S3DataDistributionType DataQualityJobDefinitionBatchTransformInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- S3InputMode DataQualityJobDefinitionBatchTransformInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- dataCapturedDestinationS3Uri String
- A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
- datasetFormat DataQualityJobDefinitionDatasetFormat
- The dataset format for your batch transform job.
- localPath String
- Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute String
- Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType DataQualityJobDefinitionBatchTransformInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode DataQualityJobDefinitionBatchTransformInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- dataCapturedDestinationS3Uri string
- A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
- datasetFormat DataQualityJobDefinitionDatasetFormat
- The dataset format for your batch transform job.
- localPath string
- Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute string
- Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType DataQualityJobDefinitionBatchTransformInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode DataQualityJobDefinitionBatchTransformInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- data_captured_destination_s3_uri str
- A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
- dataset_format DataQualityJobDefinitionDatasetFormat
- The dataset format for your batch transform job.
- local_path str
- Path to the filesystem where the endpoint data is available to the container.
- exclude_features_attribute str
- Indexes or names of the features to be excluded from analysis.
- s3_data_distribution_type DataQualityJobDefinitionBatchTransformInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3_input_mode DataQualityJobDefinitionBatchTransformInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- dataCapturedDestinationS3Uri String
- A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
- datasetFormat Property Map
- The dataset format for your batch transform job.
- localPath String
- Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute String
- Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType "FullyReplicated" | "ShardedByS3Key"
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode "Pipe" | "File"
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
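As a sketch of how these fields fit together, a batch transform input over captured CSV data might be declared as follows in Pulumi YAML. The S3 URI is a placeholder, and the `csv` nesting under `datasetFormat` follows the DataQualityJobDefinitionCsv supporting type:

```yaml
dataQualityJobInput:
  batchTransformInput:
    dataCapturedDestinationS3Uri: s3://my-bucket/data-capture   # placeholder
    localPath: /opt/ml/processing/input
    datasetFormat:
      csv:
        header: true                          # captured CSV files include a header row
    s3DataDistributionType: FullyReplicated   # the default
    s3InputMode: File                         # useful for small files that fit in memory
```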
DataQualityJobDefinitionBatchTransformInputS3DataDistributionType, DataQualityJobDefinitionBatchTransformInputS3DataDistributionTypeArgs                    
- FullyReplicated
- FullyReplicated
- ShardedByS3Key
- ShardedByS3Key
- DataQualityJobDefinitionBatchTransformInputS3DataDistributionTypeFullyReplicated
- FullyReplicated
- DataQualityJobDefinitionBatchTransformInputS3DataDistributionTypeShardedByS3Key
- ShardedByS3Key
- FullyReplicated
- FullyReplicated
- ShardedByS3Key
- ShardedByS3Key
- FullyReplicated
- FullyReplicated
- ShardedByS3Key
- ShardedByS3Key
- FULLY_REPLICATED
- FullyReplicated
- SHARDED_BY_S3_KEY
- ShardedByS3Key
- "FullyReplicated"
- FullyReplicated
- "ShardedByS3Key"
- ShardedByS3Key
DataQualityJobDefinitionBatchTransformInputS3InputMode, DataQualityJobDefinitionBatchTransformInputS3InputModeArgs                  
- Pipe
- Pipe
- File
- File
- DataQualityJobDefinitionBatchTransformInputS3InputModePipe
- Pipe
- DataQualityJobDefinitionBatchTransformInputS3InputModeFile
- File
- Pipe
- Pipe
- File
- File
- Pipe
- Pipe
- File
- File
- PIPE
- Pipe
- FILE
- File
- "Pipe"
- Pipe
- "File"
- File
DataQualityJobDefinitionClusterConfig, DataQualityJobDefinitionClusterConfigArgs            
- InstanceCount int
- The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
- InstanceType string
- The ML compute instance type for the processing job.
- VolumeSizeInGb int
- The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
- VolumeKmsKeyId string
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
- InstanceCount int
- The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
- InstanceType string
- The ML compute instance type for the processing job.
- VolumeSizeInGb int
- The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
- VolumeKmsKeyId string
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
- instanceCount Integer
- The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
- instanceType String
- The ML compute instance type for the processing job.
- volumeSizeInGb Integer
- The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
- volumeKmsKeyId String
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
- instanceCount number
- The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
- instanceType string
- The ML compute instance type for the processing job.
- volumeSizeInGb number
- The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
- volumeKmsKeyId string
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
- instance_count int
- The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
- instance_type str
- The ML compute instance type for the processing job.
- volume_size_in_gb int
- The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
- volume_kms_key_id str
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
- instanceCount Number
- The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
- instanceType String
- The ML compute instance type for the processing job.
- volumeSizeInGb Number
- The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
- volumeKmsKeyId String
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
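For instance, a single-instance cluster with an encrypted 20 GB storage volume could be sketched as the following Pulumi YAML fragment; the KMS key alias is a placeholder, and volumeKmsKeyId is optional:

```yaml
jobResources:
  clusterConfig:
    instanceCount: 1                          # one ML compute instance (the default)
    instanceType: ml.m5.xlarge
    volumeSizeInGb: 20
    volumeKmsKeyId: alias/my-monitoring-key   # placeholder; optional
```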
DataQualityJobDefinitionConstraintsResource, DataQualityJobDefinitionConstraintsResourceArgs            
- S3Uri string
- The Amazon S3 URI for the baseline constraints file that the current monitoring job should be validated against.
- S3Uri string
- The Amazon S3 URI for the baseline constraints file that the current monitoring job should be validated against.
- s3Uri String
- The Amazon S3 URI for the baseline constraints file that the current monitoring job should be validated against.
- s3Uri string
- The Amazon S3 URI for the baseline constraints file that the current monitoring job should be validated against.
- s3_uri str
- The Amazon S3 URI for the baseline constraints file that the current monitoring job should be validated against.
- s3Uri String
- The Amazon S3 URI for the baseline constraints file that the current monitoring job should be validated against.
DataQualityJobDefinitionCsv, DataQualityJobDefinitionCsvArgs          
- Header bool
- A boolean flag indicating whether the given CSV file has a header.
- Header bool
- A boolean flag indicating whether the given CSV file has a header.
- header Boolean
- A boolean flag indicating whether the given CSV file has a header.
- header boolean
- A boolean flag indicating whether the given CSV file has a header.
- header bool
- A boolean flag indicating whether the given CSV file has a header.
- header Boolean
- A boolean flag indicating whether the given CSV file has a header.
DataQualityJobDefinitionDataQualityAppSpecification, DataQualityJobDefinitionDataQualityAppSpecificationArgs                
- ImageUri string
- The container image to be run by the monitoring job.
- ContainerArguments List<string>
- An array of arguments for the container used to run the monitoring job.
- ContainerEntrypoint List<string>
- Specifies the entrypoint for a container used to run the monitoring job.
- Environment object
- Sets the environment variables in the Docker container
- PostAnalyticsProcessorSourceUri string
- An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first-party) containers.
- RecordPreprocessorSourceUri string
- An Amazon S3 URI to a script that is called per row prior to running analysis. It can Base64-decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first-party) containers.
- ImageUri string
- The container image to be run by the monitoring job.
- ContainerArguments []string
- An array of arguments for the container used to run the monitoring job.
- ContainerEntrypoint []string
- Specifies the entrypoint for a container used to run the monitoring job.
- Environment interface{}
- Sets the environment variables in the Docker container
- PostAnalyticsProcessorSourceUri string
- An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first-party) containers.
- RecordPreprocessorSourceUri string
- An Amazon S3 URI to a script that is called per row prior to running analysis. It can Base64-decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first-party) containers.
- imageUri String
- The container image to be run by the monitoring job.
- containerArguments List<String>
- An array of arguments for the container used to run the monitoring job.
- containerEntrypoint List<String>
- Specifies the entrypoint for a container used to run the monitoring job.
- environment Object
- Sets the environment variables in the Docker container
- postAnalyticsProcessorSourceUri String
- An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first-party) containers.
- recordPreprocessorSourceUri String
- An Amazon S3 URI to a script that is called per row prior to running analysis. It can Base64-decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first-party) containers.
- imageUri string
- The container image to be run by the monitoring job.
- containerArguments string[]
- An array of arguments for the container used to run the monitoring job.
- containerEntrypoint string[]
- Specifies the entrypoint for a container used to run the monitoring job.
- environment any
- Sets the environment variables in the Docker container
- postAnalyticsProcessorSourceUri string
- An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first-party) containers.
- recordPreprocessorSourceUri string
- An Amazon S3 URI to a script that is called per row prior to running analysis. It can Base64-decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first-party) containers.
- image_uri str
- The container image to be run by the monitoring job.
- container_arguments Sequence[str]
- An array of arguments for the container used to run the monitoring job.
- container_entrypoint Sequence[str]
- Specifies the entrypoint for a container used to run the monitoring job.
- environment Any
- Sets the environment variables in the Docker container
- post_analytics_processor_source_uri str
- An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first-party) containers.
- record_preprocessor_source_uri str
- An Amazon S3 URI to a script that is called per row prior to running analysis. It can Base64-decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first-party) containers.
- imageUri String
- The container image to be run by the monitoring job.
- containerArguments List<String>
- An array of arguments for the container used to run the monitoring job.
- containerEntrypoint List<String>
- Specifies the entrypoint for a container used to run the monitoring job.
- environment Any
- Sets the environment variables in the Docker container
- postAnalyticsProcessorSourceUri String
- An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first-party) containers.
- recordPreprocessorSourceUri String
- An Amazon S3 URI to a script that is called per row prior to running analysis. It can Base64-decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first-party) containers.
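A hypothetical app specification fragment in Pulumi YAML, with a placeholder image URI, environment variable, and preprocessor script location:

```yaml
dataQualityAppSpecification:
  imageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-monitor-analyzer   # placeholder
  environment:
    MY_ENV_VAR: "example-value"               # placeholder environment variable
  # Optional per-row preprocessing script (placeholder S3 URI).
  recordPreprocessorSourceUri: s3://my-bucket/scripts/preprocessor.py
```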
DataQualityJobDefinitionDataQualityBaselineConfig, DataQualityJobDefinitionDataQualityBaselineConfigArgs                
- BaseliningJobName string
- The name of the job that performs baselining for the data quality monitoring job.
- ConstraintsResource Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionConstraintsResource
- The constraints resource for a monitoring job.
- StatisticsResource Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionStatisticsResource
- Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- BaseliningJobName string
- The name of the job that performs baselining for the data quality monitoring job.
- ConstraintsResource DataQualityJobDefinitionConstraintsResource
- The constraints resource for a monitoring job.
- StatisticsResource DataQualityJobDefinitionStatisticsResource
- Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- baseliningJobName String
- The name of the job that performs baselining for the data quality monitoring job.
- constraintsResource DataQualityJobDefinitionConstraintsResource
- The constraints resource for a monitoring job.
- statisticsResource DataQualityJobDefinitionStatisticsResource
- Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- baseliningJobName string
- The name of the job that performs baselining for the data quality monitoring job.
- constraintsResource DataQualityJobDefinitionConstraintsResource
- The constraints resource for a monitoring job.
- statisticsResource DataQualityJobDefinitionStatisticsResource
- Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- baselining_job_name str
- The name of the job that performs baselining for the data quality monitoring job.
- constraints_resource DataQualityJobDefinitionConstraintsResource
- The constraints resource for a monitoring job.
- statistics_resource DataQualityJobDefinitionStatisticsResource
- Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- baseliningJobName String
- The name of the job that performs baselining for the data quality monitoring job.
- constraintsResource Property Map
- The constraints resource for a monitoring job.
- statisticsResource Property Map
- Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
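Putting the baseline config fields together, here is a sketch of a `data_quality_baseline_config` input in the plain-dict form the Python SDK accepts; the job name and S3 URIs are hypothetical placeholders.

```python
# Sketch of a DataQualityBaselineConfig input (dict form).
# All names and S3 locations below are hypothetical.
data_quality_baseline_config = {
    "baselining_job_name": "my-baselining-job",
    "constraints_resource": {
        # Baseline constraints file produced by an earlier baselining job
        "s3_uri": "s3://my-bucket/baseline/constraints.json",
    },
    "statistics_resource": {
        # Baseline statistics file the current job is validated against
        "s3_uri": "s3://my-bucket/baseline/statistics.json",
    },
}
```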
DataQualityJobDefinitionDataQualityJobInput, DataQualityJobDefinitionDataQualityJobInputArgs                
- BatchTransformInput Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionBatchTransformInput
- Input object for the batch transform job.
- EndpointInput Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionEndpointInput
- Input object for the endpoint.
- BatchTransformInput DataQualityJobDefinitionBatchTransformInput
- Input object for the batch transform job.
- EndpointInput DataQualityJobDefinitionEndpointInput
- Input object for the endpoint.
- batchTransformInput DataQualityJobDefinitionBatchTransformInput
- Input object for the batch transform job.
- endpointInput DataQualityJobDefinitionEndpointInput
- Input object for the endpoint.
- batchTransformInput DataQualityJobDefinitionBatchTransformInput
- Input object for the batch transform job.
- endpointInput DataQualityJobDefinitionEndpointInput
- Input object for the endpoint.
- batch_transform_input DataQualityJobDefinitionBatchTransformInput
- Input object for the batch transform job.
- endpoint_input DataQualityJobDefinitionEndpointInput
- Input object for the endpoint.
- batchTransformInput Property Map
- Input object for the batch transform job.
- endpointInput Property Map
- Input object for the endpoint.
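A `data_quality_job_input` supplies either an endpoint input or a batch transform input. The sketch below uses the endpoint form, with a hypothetical endpoint name and a container path.

```python
# Sketch of a DataQualityJobInput (dict form) using an endpoint input.
# "my-endpoint" is a hypothetical SageMaker endpoint name.
data_quality_job_input = {
    "endpoint_input": {
        "endpoint_name": "my-endpoint",
        # Path inside the container where the captured data is mounted
        "local_path": "/opt/ml/processing/input/endpoint",
    },
}
```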
DataQualityJobDefinitionDatasetFormat, DataQualityJobDefinitionDatasetFormatArgs            
- csv Property Map
- json Property Map
- parquet Boolean
DataQualityJobDefinitionEndpointInput, DataQualityJobDefinitionEndpointInputArgs            
- EndpointName string
- An endpoint in the customer's account which has DataCaptureConfig enabled.
- LocalPath string
- Path to the filesystem where the endpoint data is available to the container.
- ExcludeFeaturesAttribute string
- Indexes or names of the features to be excluded from analysis.
- S3DataDistributionType Pulumi.AwsNative.SageMaker.DataQualityJobDefinitionEndpointInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- S3InputMode Pulumi.AwsNative.SageMaker.DataQualityJobDefinitionEndpointInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- EndpointName string
- An endpoint in the customer's account which has DataCaptureConfig enabled.
- LocalPath string
- Path to the filesystem where the endpoint data is available to the container.
- ExcludeFeaturesAttribute string
- Indexes or names of the features to be excluded from analysis.
- S3DataDistributionType DataQualityJobDefinitionEndpointInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- S3InputMode DataQualityJobDefinitionEndpointInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- endpointName String
- An endpoint in the customer's account which has DataCaptureConfig enabled.
- localPath String
- Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute String
- Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType DataQualityJobDefinitionEndpointInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode DataQualityJobDefinitionEndpointInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- endpointName string
- An endpoint in the customer's account which has DataCaptureConfig enabled.
- localPath string
- Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute string
- Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType DataQualityJobDefinitionEndpointInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode DataQualityJobDefinitionEndpointInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- endpoint_name str
- An endpoint in the customer's account which has DataCaptureConfig enabled.
- local_path str
- Path to the filesystem where the endpoint data is available to the container.
- exclude_features_attribute str
- Indexes or names of the features to be excluded from analysis.
- s3_data_distribution_type DataQualityJobDefinitionEndpointInputS3DataDistributionType
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3_input_mode DataQualityJobDefinitionEndpointInputS3InputMode
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- endpointName String
- An endpoint in the customer's account which has DataCaptureConfig enabled.
- localPath String
- Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute String
- Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType "FullyReplicated" | "ShardedByS3Key"
- Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode "Pipe" | "File"
- Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
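A fuller endpoint input sketch showing the optional distribution and input-mode fields with their documented defaults (dict form; the endpoint name is hypothetical):

```python
# Sketch of a DataQualityJobDefinitionEndpointInput (dict form).
# "my-endpoint" is a hypothetical endpoint; the last two fields
# spell out the documented defaults explicitly.
endpoint_input = {
    "endpoint_name": "my-endpoint",
    "local_path": "/opt/ml/processing/input/endpoint",
    "s3_data_distribution_type": "FullyReplicated",  # default
    "s3_input_mode": "File",  # default; "Pipe" is recommended for large datasets
}
```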
DataQualityJobDefinitionEndpointInputS3DataDistributionType, DataQualityJobDefinitionEndpointInputS3DataDistributionTypeArgs                  
- FullyReplicated
- FullyReplicated
- ShardedByS3Key
- ShardedByS3Key
- DataQualityJobDefinitionEndpointInputS3DataDistributionTypeFullyReplicated
- FullyReplicated
- DataQualityJobDefinitionEndpointInputS3DataDistributionTypeShardedByS3Key
- ShardedByS3Key
- FullyReplicated
- FullyReplicated
- ShardedByS3Key
- ShardedByS3Key
- FullyReplicated
- FullyReplicated
- ShardedByS3Key
- ShardedByS3Key
- FULLY_REPLICATED
- FullyReplicated
- SHARDED_BY_S3_KEY
- ShardedByS3Key
- "FullyReplicated"
- FullyReplicated
- "ShardedByS3Key"
- ShardedByS3Key
DataQualityJobDefinitionEndpointInputS3InputMode, DataQualityJobDefinitionEndpointInputS3InputModeArgs                
- Pipe
- Pipe
- File
- File
- DataQualityJobDefinitionEndpointInputS3InputModePipe
- Pipe
- DataQualityJobDefinitionEndpointInputS3InputModeFile
- File
- Pipe
- Pipe
- File
- File
- Pipe
- Pipe
- File
- File
- PIPE
- Pipe
- FILE
- File
- "Pipe"
- Pipe
- "File"
- File
DataQualityJobDefinitionJson, DataQualityJobDefinitionJsonArgs          
- Line bool
- A boolean flag indicating whether the data is in JSON Lines format.
- Line bool
- A boolean flag indicating whether the data is in JSON Lines format.
- line Boolean
- A boolean flag indicating whether the data is in JSON Lines format.
- line boolean
- A boolean flag indicating whether the data is in JSON Lines format.
- line bool
- A boolean flag indicating whether the data is in JSON Lines format.
- line Boolean
- A boolean flag indicating whether the data is in JSON Lines format.
DataQualityJobDefinitionMonitoringOutput, DataQualityJobDefinitionMonitoringOutputArgs            
- S3Output Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionS3Output
- The Amazon S3 storage location where the results of a monitoring job are saved.
- S3Output DataQualityJobDefinitionS3Output
- The Amazon S3 storage location where the results of a monitoring job are saved.
- s3Output DataQualityJobDefinitionS3Output
- The Amazon S3 storage location where the results of a monitoring job are saved.
- s3Output DataQualityJobDefinitionS3Output
- The Amazon S3 storage location where the results of a monitoring job are saved.
- s3_output DataQualityJobDefinitionS3Output
- The Amazon S3 storage location where the results of a monitoring job are saved.
- s3Output Property Map
- The Amazon S3 storage location where the results of a monitoring job are saved.
DataQualityJobDefinitionMonitoringOutputConfig, DataQualityJobDefinitionMonitoringOutputConfigArgs              
- MonitoringOutputs List<Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionMonitoringOutput>
- Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- KmsKeyId string
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- MonitoringOutputs []DataQualityJobDefinitionMonitoringOutput
- Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- KmsKeyId string
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoringOutputs List<DataQualityJobDefinitionMonitoringOutput>
- Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- kmsKeyId String
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoringOutputs DataQualityJobDefinitionMonitoringOutput[]
- Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- kmsKeyId string
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoring_outputs Sequence[DataQualityJobDefinitionMonitoringOutput]
- Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- kms_key_id str
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoringOutputs List<Property Map>
- Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- kmsKeyId String
- The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
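An output config combines one or more monitoring outputs with an optional KMS key. A dict-form sketch, with hypothetical bucket and key names:

```python
# Sketch of a MonitoringOutputConfig (dict form).
# The S3 URI and KMS alias are hypothetical placeholders.
data_quality_job_output_config = {
    "monitoring_outputs": [
        {
            "s3_output": {
                "s3_uri": "s3://my-bucket/monitoring/output",
                "local_path": "/opt/ml/processing/output",
                "s3_upload_mode": "EndOfJob",  # or "Continuous"
            },
        },
    ],
    # Optional: server-side encryption of results at rest
    "kms_key_id": "alias/my-kms-key",
}
```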
DataQualityJobDefinitionMonitoringResources, DataQualityJobDefinitionMonitoringResourcesArgs            
- ClusterConfig Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionClusterConfig
- The configuration for the cluster resources used to run the processing job.
- ClusterConfig DataQualityJobDefinitionClusterConfig
- The configuration for the cluster resources used to run the processing job.
- clusterConfig DataQualityJobDefinitionClusterConfig
- The configuration for the cluster resources used to run the processing job.
- clusterConfig DataQualityJobDefinitionClusterConfig
- The configuration for the cluster resources used to run the processing job.
- cluster_config DataQualityJobDefinitionClusterConfig
- The configuration for the cluster resources used to run the processing job.
- clusterConfig Property Map
- The configuration for the cluster resources used to run the processing job.
DataQualityJobDefinitionNetworkConfig, DataQualityJobDefinitionNetworkConfigArgs            
- EnableInterContainerTrafficEncryption bool
- Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- EnableNetworkIsolation bool
- Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- VpcConfig Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionVpcConfig
- Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- EnableInterContainerTrafficEncryption bool
- Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- EnableNetworkIsolation bool
- Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- VpcConfig DataQualityJobDefinitionVpcConfig
- Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- enableInterContainerTrafficEncryption Boolean
- Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enableNetworkIsolation Boolean
- Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpcConfig DataQualityJobDefinitionVpcConfig
- Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- enableInterContainerTrafficEncryption boolean
- Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enableNetworkIsolation boolean
- Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpcConfig DataQualityJobDefinitionVpcConfig
- Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- enable_inter_container_traffic_encryption bool
- Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enable_network_isolation bool
- Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpc_config DataQualityJobDefinitionVpcConfig
- Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- enableInterContainerTrafficEncryption Boolean
- Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enableNetworkIsolation Boolean
- Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpcConfig Property Map
- Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
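A network config that enables inter-container traffic encryption and pins the job to a VPC might look like this sketch (dict form; the security group and subnet IDs are hypothetical):

```python
# Sketch of a NetworkConfig (dict form) with a VPC attached.
# The sg-/subnet- IDs are hypothetical placeholders.
network_config = {
    "enable_inter_container_traffic_encryption": True,
    "enable_network_isolation": False,
    "vpc_config": {
        "security_group_ids": ["sg-0123456789abcdef0"],
        "subnets": ["subnet-0123456789abcdef0"],
    },
}
```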
DataQualityJobDefinitionS3Output, DataQualityJobDefinitionS3OutputArgs          
- LocalPath string
- The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- S3Uri string
- A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- S3UploadMode Pulumi.AwsNative.SageMaker.DataQualityJobDefinitionS3OutputS3UploadMode
- Whether to upload the results of the monitoring job continuously or after the job completes.
- LocalPath string
- The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- S3Uri string
- A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- S3UploadMode DataQualityJobDefinitionS3OutputS3UploadMode
- Whether to upload the results of the monitoring job continuously or after the job completes.
- localPath String
- The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- s3Uri String
- A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- s3UploadMode DataQualityJobDefinitionS3OutputS3UploadMode
- Whether to upload the results of the monitoring job continuously or after the job completes.
- localPath string
- The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- s3Uri string
- A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- s3UploadMode DataQualityJobDefinitionS3OutputS3UploadMode
- Whether to upload the results of the monitoring job continuously or after the job completes.
- local_path str
- The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- s3_uri str
- A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- s3_upload_mode DataQualityJobDefinitionS3OutputS3UploadMode
- Whether to upload the results of the monitoring job continuously or after the job completes.
- localPath String
- The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- s3Uri String
- A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- s3UploadMode "Continuous" | "EndOf Job" 
- Whether to upload the results of the monitoring job continuously or after the job completes.
DataQualityJobDefinitionS3OutputS3UploadMode, DataQualityJobDefinitionS3OutputS3UploadModeArgs              
- Continuous
- Continuous
- EndOfJob
- EndOfJob
- DataQualityJobDefinitionS3OutputS3UploadModeContinuous
- Continuous
- DataQualityJobDefinitionS3OutputS3UploadModeEndOfJob
- EndOfJob
- Continuous
- Continuous
- EndOfJob
- EndOfJob
- Continuous
- Continuous
- EndOfJob
- EndOfJob
- CONTINUOUS
- Continuous
- END_OF_JOB
- EndOfJob
- "Continuous"
- Continuous
- "EndOf Job" 
- EndOfJob
DataQualityJobDefinitionStatisticsResource, DataQualityJobDefinitionStatisticsResourceArgs            
- S3Uri string
- The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- S3Uri string
- The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- s3Uri String
- The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- s3Uri string
- The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- s3_uri str
- The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- s3Uri String
- The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
DataQualityJobDefinitionStoppingCondition, DataQualityJobDefinitionStoppingConditionArgs            
- MaxRuntimeInSeconds int
- The maximum runtime allowed in seconds.
- MaxRuntimeInSeconds int
- The maximum runtime allowed in seconds.
- maxRuntimeInSeconds Integer
- The maximum runtime allowed in seconds.
- maxRuntimeInSeconds number
- The maximum runtime allowed in seconds.
- max_runtime_in_seconds int
- The maximum runtime allowed in seconds.
- maxRuntimeInSeconds Number
- The maximum runtime allowed in seconds.
DataQualityJobDefinitionVpcConfig, DataQualityJobDefinitionVpcConfigArgs            
- SecurityGroupIds List<string>
- The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- Subnets List<string>
- The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
- SecurityGroupIds []string
- The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- Subnets []string
- The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
- securityGroupIds List<String>
- The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- subnets List<String>
- The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
- securityGroupIds string[]
- The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- subnets string[]
- The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
- security_group_ids Sequence[str]
- The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- subnets Sequence[str]
- The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
- securityGroupIds List<String>
- The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- subnets List<String>
- The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
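Assembled end to end, the required arguments for a DataQualityJobDefinition look roughly like the sketch below (dict form; in a real program these keyword arguments would be passed to `aws_native.sagemaker.DataQualityJobDefinition(...)`). All names, ARNs, image URIs, and S3 locations are hypothetical placeholders.

```python
# Hypothetical, minimal argument set for a DataQualityJobDefinition.
job_definition_args = {
    "role_arn": "arn:aws:iam::123456789012:role/SageMakerMonitoringRole",
    "data_quality_app_specification": {
        # Container image that runs the monitoring job (placeholder URI)
        "image_uri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-monitor-image",
    },
    "data_quality_job_input": {
        "endpoint_input": {
            "endpoint_name": "my-endpoint",
            "local_path": "/opt/ml/processing/input/endpoint",
        },
    },
    "data_quality_job_output_config": {
        "monitoring_outputs": [
            {"s3_output": {
                "s3_uri": "s3://my-bucket/monitoring/output",
                "local_path": "/opt/ml/processing/output",
            }},
        ],
    },
    "job_resources": {
        "cluster_config": {
            "instance_count": 1,
            "instance_type": "ml.m5.xlarge",
            "volume_size_in_gb": 20,
        },
    },
    # Optional but recommended: cap the job's runtime
    "stopping_condition": {"max_runtime_in_seconds": 3600},
}
```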
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0