We recommend new projects start with resources from the AWS provider.
aws-native.bedrock.ApplicationInferenceProfile
Definition of AWS::Bedrock::ApplicationInferenceProfile Resource Type
Create ApplicationInferenceProfile Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
TypeScript

new ApplicationInferenceProfile(name: string, args?: ApplicationInferenceProfileArgs, opts?: CustomResourceOptions);

Python

@overload
def ApplicationInferenceProfile(resource_name: str,
                                args: Optional[ApplicationInferenceProfileArgs] = None,
                                opts: Optional[ResourceOptions] = None)
@overload
def ApplicationInferenceProfile(resource_name: str,
                                opts: Optional[ResourceOptions] = None,
                                description: Optional[str] = None,
                                inference_profile_name: Optional[str] = None,
                                model_source: Optional[ApplicationInferenceProfileInferenceProfileModelSourcePropertiesArgs] = None,
                                tags: Optional[Sequence[_root_inputs.TagArgs]] = None)

Go

func NewApplicationInferenceProfile(ctx *Context, name string, args *ApplicationInferenceProfileArgs, opts ...ResourceOption) (*ApplicationInferenceProfile, error)

C#

public ApplicationInferenceProfile(string name, ApplicationInferenceProfileArgs? args = null, CustomResourceOptions? opts = null)

Java

public ApplicationInferenceProfile(String name, ApplicationInferenceProfileArgs args)
public ApplicationInferenceProfile(String name, ApplicationInferenceProfileArgs args, CustomResourceOptions options)

YAML

type: aws-native:bedrock:ApplicationInferenceProfile
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
TypeScript
- name string
- The unique name of the resource.
- args ApplicationInferenceProfileArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.

Python
- resource_name str
- The unique name of the resource.
- args ApplicationInferenceProfileArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.

Go
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ApplicationInferenceProfileArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.

C#
- name string
- The unique name of the resource.
- args ApplicationInferenceProfileArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.

Java
- name String
- The unique name of the resource.
- args ApplicationInferenceProfileArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
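Example

A minimal creation sketch in Python; the profile name, description, and model ARN are illustrative placeholders, not values taken from this page.

import pulumi
import pulumi_aws_native as aws_native

# Create an application inference profile that copies its regional routing
# configuration from a foundation model. The ARN is a placeholder; substitute
# a model available in your account and region.
profile = aws_native.bedrock.ApplicationInferenceProfile(
    "exampleProfile",
    inference_profile_name="example-profile",
    description="Tracks usage for the example workload",
    model_source=aws_native.bedrock.ApplicationInferenceProfileInferenceProfileModelSourcePropertiesArgs(
        copy_from="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
    ),
)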
ApplicationInferenceProfile Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
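For instance, both forms below are equivalent ways to pass model_source (the ARN is a placeholder):

import pulumi_aws_native as aws_native

model_arn = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"  # placeholder

# Argument class:
profile_a = aws_native.bedrock.ApplicationInferenceProfile(
    "profileA",
    model_source=aws_native.bedrock.ApplicationInferenceProfileInferenceProfileModelSourcePropertiesArgs(
        copy_from=model_arn,
    ),
)

# Equivalent dictionary literal:
profile_b = aws_native.bedrock.ApplicationInferenceProfile(
    "profileB",
    model_source={"copy_from": model_arn},
)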
The ApplicationInferenceProfile resource accepts the following input properties:
C#
- Description string
- Description of the inference profile
- InferenceProfileName string
- The name of the inference profile.
- ModelSource Pulumi.AwsNative.Bedrock.Inputs.ApplicationInferenceProfileInferenceProfileModelSourceProperties
- Contains configurations for the inference profile to copy as the resource.
- Tags List<Pulumi.AwsNative.Inputs.Tag>
- List of Tags
Go
- Description string
- Description of the inference profile
- InferenceProfileName string
- The name of the inference profile.
- ModelSource ApplicationInferenceProfileInferenceProfileModelSourcePropertiesArgs
- Contains configurations for the inference profile to copy as the resource.
- Tags []TagArgs
- List of Tags
Java
- description String
- Description of the inference profile
- inferenceProfileName String
- The name of the inference profile.
- modelSource ApplicationInferenceProfileInferenceProfileModelSourceProperties
- Contains configurations for the inference profile to copy as the resource.
- tags List<Tag>
- List of Tags
TypeScript
- description string
- Description of the inference profile
- inferenceProfileName string
- The name of the inference profile.
- modelSource ApplicationInferenceProfileInferenceProfileModelSourceProperties
- Contains configurations for the inference profile to copy as the resource.
- tags Tag[]
- List of Tags
Python
- description str
- Description of the inference profile
- inference_profile_name str
- The name of the inference profile.
- model_source ApplicationInferenceProfileInferenceProfileModelSourcePropertiesArgs
- Contains configurations for the inference profile to copy as the resource.
- tags Sequence[TagArgs]
- List of Tags
YAML
- description String
- Description of the inference profile
- inferenceProfileName String
- The name of the inference profile.
- modelSource Property Map
- Contains configurations for the inference profile to copy as the resource.
- tags List<Property Map>
- List of Tags
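A short Python sketch of the tags input; it assumes the root TagArgs type carries the standard key/value pair, and the tag names here are illustrative.

import pulumi_aws_native as aws_native

# Attach tags at creation time. Keys and values are illustrative, and TagArgs
# is assumed to take the standard key/value fields.
profile = aws_native.bedrock.ApplicationInferenceProfile(
    "taggedProfile",
    inference_profile_name="team-a-profile",
    model_source={"copy_from": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"},
    tags=[
        aws_native.TagArgs(key="team", value="team-a"),
        aws_native.TagArgs(key="costCenter", value="1234"),
    ],
)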
Outputs
All input properties are implicitly available as output properties. Additionally, the ApplicationInferenceProfile resource produces the following output properties:
C#
- CreatedAt string
- Timestamp
- Id string
- The provider-assigned unique ID for this managed resource.
- InferenceProfileArn string
- The Amazon Resource Name (ARN) of the inference profile.
- InferenceProfileId string
- The unique identifier of the inference profile.
- InferenceProfileIdentifier string
- Inference profile identifier. Supports both system-defined inference profile IDs and inference profile ARNs.
- Models List<Pulumi.AwsNative.Bedrock.Outputs.ApplicationInferenceProfileInferenceProfileModel>
- List of model configurations
- Status Pulumi.AwsNative.Bedrock.ApplicationInferenceProfileInferenceProfileStatus
- The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- Type Pulumi.AwsNative.Bedrock.ApplicationInferenceProfileInferenceProfileType
- The type of the inference profile. The following types are possible:
  - SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
  - APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
- UpdatedAt string
- Timestamp
Go
- CreatedAt string
- Timestamp
- Id string
- The provider-assigned unique ID for this managed resource.
- InferenceProfileArn string
- The Amazon Resource Name (ARN) of the inference profile.
- InferenceProfileId string
- The unique identifier of the inference profile.
- InferenceProfileIdentifier string
- Inference profile identifier. Supports both system-defined inference profile IDs and inference profile ARNs.
- Models []ApplicationInferenceProfileInferenceProfileModel
- List of model configurations
- Status ApplicationInferenceProfileInferenceProfileStatus
- The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- Type ApplicationInferenceProfileInferenceProfileType
- The type of the inference profile. The following types are possible:
  - SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
  - APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
- UpdatedAt string
- Timestamp
Java
- createdAt String
- Timestamp
- id String
- The provider-assigned unique ID for this managed resource.
- inferenceProfileArn String
- The Amazon Resource Name (ARN) of the inference profile.
- inferenceProfileId String
- The unique identifier of the inference profile.
- inferenceProfileIdentifier String
- Inference profile identifier. Supports both system-defined inference profile IDs and inference profile ARNs.
- models List<ApplicationInferenceProfileInferenceProfileModel>
- List of model configurations
- status ApplicationInferenceProfileInferenceProfileStatus
- The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- type ApplicationInferenceProfileInferenceProfileType
- The type of the inference profile. The following types are possible:
  - SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
  - APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
- updatedAt String
- Timestamp
TypeScript
- createdAt string
- Timestamp
- id string
- The provider-assigned unique ID for this managed resource.
- inferenceProfileArn string
- The Amazon Resource Name (ARN) of the inference profile.
- inferenceProfileId string
- The unique identifier of the inference profile.
- inferenceProfileIdentifier string
- Inference profile identifier. Supports both system-defined inference profile IDs and inference profile ARNs.
- models ApplicationInferenceProfileInferenceProfileModel[]
- List of model configurations
- status ApplicationInferenceProfileInferenceProfileStatus
- The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- type ApplicationInferenceProfileInferenceProfileType
- The type of the inference profile. The following types are possible:
  - SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
  - APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
- updatedAt string
- Timestamp
Python
- created_at str
- Timestamp
- id str
- The provider-assigned unique ID for this managed resource.
- inference_profile_arn str
- The Amazon Resource Name (ARN) of the inference profile.
- inference_profile_id str
- The unique identifier of the inference profile.
- inference_profile_identifier str
- Inference profile identifier. Supports both system-defined inference profile IDs and inference profile ARNs.
- models Sequence[ApplicationInferenceProfileInferenceProfileModel]
- List of model configurations
- status ApplicationInferenceProfileInferenceProfileStatus
- The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- type ApplicationInferenceProfileInferenceProfileType
- The type of the inference profile. The following types are possible:
  - SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
  - APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
- updated_at str
- Timestamp
YAML
- createdAt String
- Timestamp
- id String
- The provider-assigned unique ID for this managed resource.
- inferenceProfileArn String
- The Amazon Resource Name (ARN) of the inference profile.
- inferenceProfileId String
- The unique identifier of the inference profile.
- inferenceProfileIdentifier String
- Inference profile identifier. Supports both system-defined inference profile IDs and inference profile ARNs.
- models List<Property Map>
- List of model configurations
- status "ACTIVE"
- The status of the inference profile. ACTIVE means that the inference profile is ready to be used.
- type "APPLICATION" | "SYSTEM_DEFINED"
- The type of the inference profile. The following types are possible:
  - SYSTEM_DEFINED – The inference profile is defined by Amazon Bedrock. You can route inference requests across regions with these inference profiles.
  - APPLICATION – The inference profile was created by a user. This type of inference profile can track metrics and costs when invoking the model in it. The inference profile may route requests to one or multiple regions.
- updatedAt String
- Timestamp
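These outputs are Pulumi outputs. A short sketch of exporting a few of them in Python, assuming profile is the resource created in the earlier example:

import pulumi

pulumi.export("inferenceProfileArn", profile.inference_profile_arn)
pulumi.export("status", profile.status)
# models is a list of output objects; map over it to collect each model ARN.
pulumi.export("modelArns", profile.models.apply(
    lambda models: [m.model_arn for m in models] if models else []))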
Supporting Types
ApplicationInferenceProfileInferenceProfileModel, ApplicationInferenceProfileInferenceProfileModelArgs            
C#
- ModelArn string
- ARN for Foundation Models in Bedrock. These models can be used as base models for model customization jobs.
Go
- ModelArn string
- ARN for Foundation Models in Bedrock. These models can be used as base models for model customization jobs.
Java
- modelArn String
- ARN for Foundation Models in Bedrock. These models can be used as base models for model customization jobs.
TypeScript
- modelArn string
- ARN for Foundation Models in Bedrock. These models can be used as base models for model customization jobs.
Python
- model_arn str
- ARN for Foundation Models in Bedrock. These models can be used as base models for model customization jobs.
YAML
- modelArn String
- ARN for Foundation Models in Bedrock. These models can be used as base models for model customization jobs.
ApplicationInferenceProfileInferenceProfileModelSourceProperties, ApplicationInferenceProfileInferenceProfileModelSourcePropertiesArgs                
C#
- CopyFrom string
- Source ARN for a custom inference profile to copy its regional load-balancing config from. This can be either a foundation model ARN or a predefined inference profile ARN.
Go
- CopyFrom string
- Source ARN for a custom inference profile to copy its regional load-balancing config from. This can be either a foundation model ARN or a predefined inference profile ARN.
Java
- copyFrom String
- Source ARN for a custom inference profile to copy its regional load-balancing config from. This can be either a foundation model ARN or a predefined inference profile ARN.
TypeScript
- copyFrom string
- Source ARN for a custom inference profile to copy its regional load-balancing config from. This can be either a foundation model ARN or a predefined inference profile ARN.
Python
- copy_from str
- Source ARN for a custom inference profile to copy its regional load-balancing config from. This can be either a foundation model ARN or a predefined inference profile ARN.
YAML
- copyFrom String
- Source ARN for a custom inference profile to copy its regional load-balancing config from. This can be either a foundation model ARN or a predefined inference profile ARN.
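For illustration, both ARN shapes below would be accepted by copy_from; the region, account ID, and model are placeholders.

# Placeholder ARN shapes for copy_from:
# ...a foundation model ARN,
foundation_model_arn = (
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
)
# ...or a predefined (system-defined) cross-region inference profile ARN.
system_profile_arn = (
    "arn:aws:bedrock:us-east-1:123456789012:inference-profile/us.anthropic.claude-3-haiku-20240307-v1:0"
)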
ApplicationInferenceProfileInferenceProfileStatus, ApplicationInferenceProfileInferenceProfileStatusArgs            
C#
- Active
- ACTIVE
Go
- ApplicationInferenceProfileInferenceProfileStatusActive
- ACTIVE
Java
- Active
- ACTIVE
TypeScript
- Active
- ACTIVE
Python
- ACTIVE
- ACTIVE
YAML
- "ACTIVE"
- ACTIVE
ApplicationInferenceProfileInferenceProfileType, ApplicationInferenceProfileInferenceProfileTypeArgs            
C#
- Application
- APPLICATION
- SystemDefined
- SYSTEM_DEFINED
Go
- ApplicationInferenceProfileInferenceProfileTypeApplication
- APPLICATION
- ApplicationInferenceProfileInferenceProfileTypeSystemDefined
- SYSTEM_DEFINED
Java
- Application
- APPLICATION
- SystemDefined
- SYSTEM_DEFINED
TypeScript
- Application
- APPLICATION
- SystemDefined
- SYSTEM_DEFINED
Python
- APPLICATION
- APPLICATION
- SYSTEM_DEFINED
- SYSTEM_DEFINED
YAML
- "APPLICATION"
- APPLICATION
- "SYSTEM_DEFINED"
- SYSTEM_DEFINED
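A small Python sketch of checking the type output against the SDK enum (member names per the Python rows above); profile is assumed to be an existing resource:

import pulumi_aws_native as aws_native

# Resolve whether the profile is user-created. The Python enum members are
# APPLICATION and SYSTEM_DEFINED, matching the table above.
is_application = profile.type.apply(
    lambda t: t == aws_native.bedrock.ApplicationInferenceProfileInferenceProfileType.APPLICATION
)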
Tag, TagArgs
- Key string
- The key name of the tag
- Value string
- The value for the tag
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0