We recommend new projects start with resources from the AWS provider.
aws-native.kinesisfirehose.DeliveryStream
Resource Type definition for AWS::KinesisFirehose::DeliveryStream
Create DeliveryStream Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new DeliveryStream(name: string, args?: DeliveryStreamArgs, opts?: CustomResourceOptions);
@overload
def DeliveryStream(resource_name: str,
                   args: Optional[DeliveryStreamArgs] = None,
                   opts: Optional[ResourceOptions] = None)
@overload
def DeliveryStream(resource_name: str,
                   opts: Optional[ResourceOptions] = None,
                   amazon_open_search_serverless_destination_configuration: Optional[DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationArgs] = None,
                   amazonopensearchservice_destination_configuration: Optional[DeliveryStreamAmazonopensearchserviceDestinationConfigurationArgs] = None,
                   database_source_configuration: Optional[DeliveryStreamDatabaseSourceConfigurationArgs] = None,
                   delivery_stream_encryption_configuration_input: Optional[DeliveryStreamEncryptionConfigurationInputArgs] = None,
                   delivery_stream_name: Optional[str] = None,
                   delivery_stream_type: Optional[DeliveryStreamType] = None,
                   direct_put_source_configuration: Optional[DeliveryStreamDirectPutSourceConfigurationArgs] = None,
                   elasticsearch_destination_configuration: Optional[DeliveryStreamElasticsearchDestinationConfigurationArgs] = None,
                   extended_s3_destination_configuration: Optional[DeliveryStreamExtendedS3DestinationConfigurationArgs] = None,
                   http_endpoint_destination_configuration: Optional[DeliveryStreamHttpEndpointDestinationConfigurationArgs] = None,
                   iceberg_destination_configuration: Optional[DeliveryStreamIcebergDestinationConfigurationArgs] = None,
                   kinesis_stream_source_configuration: Optional[DeliveryStreamKinesisStreamSourceConfigurationArgs] = None,
                   msk_source_configuration: Optional[DeliveryStreamMskSourceConfigurationArgs] = None,
                   redshift_destination_configuration: Optional[DeliveryStreamRedshiftDestinationConfigurationArgs] = None,
                   s3_destination_configuration: Optional[DeliveryStreamS3DestinationConfigurationArgs] = None,
                   snowflake_destination_configuration: Optional[DeliveryStreamSnowflakeDestinationConfigurationArgs] = None,
                   splunk_destination_configuration: Optional[DeliveryStreamSplunkDestinationConfigurationArgs] = None,
                   tags: Optional[Sequence[_root_inputs.TagArgs]] = None)
func NewDeliveryStream(ctx *Context, name string, args *DeliveryStreamArgs, opts ...ResourceOption) (*DeliveryStream, error)
public DeliveryStream(string name, DeliveryStreamArgs? args = null, CustomResourceOptions? opts = null)
public DeliveryStream(String name, DeliveryStreamArgs args)
public DeliveryStream(String name, DeliveryStreamArgs args, CustomResourceOptions options)
type: aws-native:kinesisfirehose:DeliveryStream
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
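As an illustration of the constructor shapes above, here is a minimal Python sketch that declares a Direct PUT Firehose stream delivering to an S3 bucket. The bucket ARN and IAM role ARN are hypothetical placeholders; in a real program they would come from resources you manage, and the role must allow Firehose to write to the bucket.

```python
import pulumi
import pulumi_aws_native as aws_native

# Hypothetical placeholder ARNs -- substitute values from your own stack.
BUCKET_ARN = "arn:aws:s3:::my-firehose-bucket"
ROLE_ARN = "arn:aws:iam::123456789012:role/firehose-delivery-role"

# Declare a Direct PUT Firehose stream with a plain S3 destination.
stream = aws_native.kinesisfirehose.DeliveryStream(
    "example-stream",
    delivery_stream_type=aws_native.kinesisfirehose.DeliveryStreamType.DIRECT_PUT,
    s3_destination_configuration=aws_native.kinesisfirehose.DeliveryStreamS3DestinationConfigurationArgs(
        bucket_arn=BUCKET_ARN,
        role_arn=ROLE_ARN,
    ),
)

# Export the auto-assigned stream name as a stack output.
pulumi.export("deliveryStreamName", stream.delivery_stream_name)
```

Because `delivery_stream_name` is omitted, Pulumi auto-names the stream; exactly one destination configuration is set, matching the "specify only one destination" constraint described under Inputs below.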
Parameters
TypeScript
- name string
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
Python
- resource_name str
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
Go
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
C#
- name string
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
Java
- name String
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
DeliveryStream Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
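As a sketch of that equivalence, both forms below describe the same S3 destination configuration (the ARNs are hypothetical placeholders):

```python
import pulumi_aws_native as aws_native

# Form 1: typed argument class.
args_form = aws_native.kinesisfirehose.DeliveryStreamS3DestinationConfigurationArgs(
    bucket_arn="arn:aws:s3:::my-firehose-bucket",
    role_arn="arn:aws:iam::123456789012:role/firehose-delivery-role",
)

# Form 2: dictionary literal -- snake_case keys mirror the Args fields.
dict_form = {
    "bucket_arn": "arn:aws:s3:::my-firehose-bucket",
    "role_arn": "arn:aws:iam::123456789012:role/firehose-delivery-role",
}

# Either value can be passed as s3_destination_configuration when
# constructing a DeliveryStream.
```

The dictionary form is convenient for deeply nested configurations; the argument-class form gives better editor completion and type checking.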
The DeliveryStream resource accepts the following input properties:
C#
- AmazonOpenSearchServerlessDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonOpenSearchServerlessDestinationConfiguration
- Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- AmazonopensearchserviceDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonopensearchserviceDestinationConfiguration
- The destination in Amazon OpenSearch Service. You can specify only one destination.
- DatabaseSourceConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDatabaseSourceConfiguration
- The top-level object for configuring streams with a database as a source. Amazon Data Firehose is in preview release and is subject to change.
- DeliveryStreamEncryptionConfigurationInput Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamEncryptionConfigurationInput
- Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- DeliveryStreamName string
- The name of the Firehose stream.
- DeliveryStreamType Pulumi.AwsNative.KinesisFirehose.DeliveryStreamType
- The Firehose stream type. This can be one of the following values:
  - DirectPut: Provider applications access the Firehose stream directly.
  - KinesisStreamAsSource: The Firehose stream uses a Kinesis data stream as a source.
- DirectPutSourceConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDirectPutSourceConfiguration
- The structure that configures parameters such as ThroughputHintInMBs for a stream configured with Direct PUT as a source.
- ElasticsearchDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamElasticsearchDestinationConfiguration
- An Amazon ES destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon ES destination to an Amazon S3 or Amazon Redshift destination, the update requires some interruptions.
- ExtendedS3DestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamExtendedS3DestinationConfiguration
- An Amazon S3 destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.
- HttpEndpointDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointDestinationConfiguration
- Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
- IcebergDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamIcebergDestinationConfiguration
- Specifies the destination settings for an Apache Iceberg table.
- KinesisStreamSourceConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamKinesisStreamSourceConfiguration
- When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis stream ARN and the role ARN for the source stream.
- MskSourceConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamMskSourceConfiguration
- The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- RedshiftDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRedshiftDestinationConfiguration
- An Amazon Redshift destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.
- S3DestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- The S3DestinationConfiguration property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon S3 destination to an Amazon ES destination, the update requires some interruptions.
- SnowflakeDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeDestinationConfiguration
- Configures the Snowflake destination.
- SplunkDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSplunkDestinationConfiguration
- The configuration of a destination in Splunk for the delivery stream.
- Tags List<Pulumi.AwsNative.Inputs.Tag>
- A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. You can specify up to 50 tags when creating a Firehose stream. If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify if users have permissions to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags will fail with an AccessDeniedException such as the following: "AccessDeniedException - User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy." For an example IAM policy, see Tag example.
Go
- AmazonOpenSearchServerlessDestinationConfiguration DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationArgs
- Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- AmazonopensearchserviceDestinationConfiguration DeliveryStreamAmazonopensearchserviceDestinationConfigurationArgs
- The destination in Amazon OpenSearch Service. You can specify only one destination.
- DatabaseSourceConfiguration DeliveryStreamDatabaseSourceConfigurationArgs
- The top-level object for configuring streams with a database as a source. Amazon Data Firehose is in preview release and is subject to change.
- DeliveryStreamEncryptionConfigurationInput DeliveryStreamEncryptionConfigurationInputTypeArgs
- Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- DeliveryStreamName string
- The name of the Firehose stream.
- DeliveryStreamType DeliveryStreamType
- The Firehose stream type. This can be one of the following values:
  - DirectPut: Provider applications access the Firehose stream directly.
  - KinesisStreamAsSource: The Firehose stream uses a Kinesis data stream as a source.
- DirectPutSourceConfiguration DeliveryStreamDirectPutSourceConfigurationArgs
- The structure that configures parameters such as ThroughputHintInMBs for a stream configured with Direct PUT as a source.
- ElasticsearchDestinationConfiguration DeliveryStreamElasticsearchDestinationConfigurationArgs
- An Amazon ES destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon ES destination to an Amazon S3 or Amazon Redshift destination, the update requires some interruptions.
- ExtendedS3DestinationConfiguration DeliveryStreamExtendedS3DestinationConfigurationArgs
- An Amazon S3 destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.
- HttpEndpointDestinationConfiguration DeliveryStreamHttpEndpointDestinationConfigurationArgs
- Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
- IcebergDestinationConfiguration DeliveryStreamIcebergDestinationConfigurationArgs
- Specifies the destination settings for an Apache Iceberg table.
- KinesisStreamSourceConfiguration DeliveryStreamKinesisStreamSourceConfigurationArgs
- When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis stream ARN and the role ARN for the source stream.
- MskSourceConfiguration DeliveryStreamMskSourceConfigurationArgs
- The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- RedshiftDestinationConfiguration DeliveryStreamRedshiftDestinationConfigurationArgs
- An Amazon Redshift destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.
- S3DestinationConfiguration DeliveryStreamS3DestinationConfigurationArgs
- The S3DestinationConfiguration property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon S3 destination to an Amazon ES destination, the update requires some interruptions.
- SnowflakeDestinationConfiguration DeliveryStreamSnowflakeDestinationConfigurationArgs
- Configures the Snowflake destination.
- SplunkDestinationConfiguration DeliveryStreamSplunkDestinationConfigurationArgs
- The configuration of a destination in Splunk for the delivery stream.
- Tags []TagArgs
- A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. You can specify up to 50 tags when creating a Firehose stream. If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify if users have permissions to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags will fail with an AccessDeniedException such as the following: "AccessDeniedException - User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy." For an example IAM policy, see Tag example.
Java
- amazonOpenSearchServerlessDestinationConfiguration DeliveryStreamAmazonOpenSearchServerlessDestinationConfiguration
- Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- amazonopensearchserviceDestinationConfiguration DeliveryStreamAmazonopensearchserviceDestinationConfiguration
- The destination in Amazon OpenSearch Service. You can specify only one destination.
- databaseSourceConfiguration DeliveryStreamDatabaseSourceConfiguration
- The top-level object for configuring streams with a database as a source. Amazon Data Firehose is in preview release and is subject to change.
- deliveryStreamEncryptionConfigurationInput DeliveryStreamEncryptionConfigurationInput
- Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- deliveryStreamName String
- The name of the Firehose stream.
- deliveryStreamType DeliveryStreamType
- The Firehose stream type. This can be one of the following values:
  - DirectPut: Provider applications access the Firehose stream directly.
  - KinesisStreamAsSource: The Firehose stream uses a Kinesis data stream as a source.
- directPutSourceConfiguration DeliveryStreamDirectPutSourceConfiguration
- The structure that configures parameters such as ThroughputHintInMBs for a stream configured with Direct PUT as a source.
- elasticsearchDestinationConfiguration DeliveryStreamElasticsearchDestinationConfiguration
- An Amazon ES destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon ES destination to an Amazon S3 or Amazon Redshift destination, the update requires some interruptions.
- extendedS3DestinationConfiguration DeliveryStreamExtendedS3DestinationConfiguration
- An Amazon S3 destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.
- httpEndpointDestinationConfiguration DeliveryStreamHttpEndpointDestinationConfiguration
- Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
- icebergDestinationConfiguration DeliveryStreamIcebergDestinationConfiguration
- Specifies the destination settings for an Apache Iceberg table.
- kinesisStreamSourceConfiguration DeliveryStreamKinesisStreamSourceConfiguration
- When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis stream ARN and the role ARN for the source stream.
- mskSourceConfiguration DeliveryStreamMskSourceConfiguration
- The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- redshiftDestinationConfiguration DeliveryStreamRedshiftDestinationConfiguration
- An Amazon Redshift destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.
- s3DestinationConfiguration DeliveryStreamS3DestinationConfiguration
- The S3DestinationConfiguration property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon S3 destination to an Amazon ES destination, the update requires some interruptions.
- snowflakeDestinationConfiguration DeliveryStreamSnowflakeDestinationConfiguration
- Configures the Snowflake destination.
- splunkDestinationConfiguration DeliveryStreamSplunkDestinationConfiguration
- The configuration of a destination in Splunk for the delivery stream.
- tags List<Tag>
- A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. You can specify up to 50 tags when creating a Firehose stream. If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify if users have permissions to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags will fail with an AccessDeniedException such as the following: "AccessDeniedException - User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy." For an example IAM policy, see Tag example.
TypeScript
- amazonOpenSearchServerlessDestinationConfiguration DeliveryStreamAmazonOpenSearchServerlessDestinationConfiguration
- Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- amazonopensearchserviceDestinationConfiguration DeliveryStreamAmazonopensearchserviceDestinationConfiguration
- The destination in Amazon OpenSearch Service. You can specify only one destination.
- databaseSourceConfiguration DeliveryStreamDatabaseSourceConfiguration
- The top-level object for configuring streams with a database as a source. Amazon Data Firehose is in preview release and is subject to change.
- deliveryStreamEncryptionConfigurationInput DeliveryStreamEncryptionConfigurationInput
- Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- deliveryStreamName string
- The name of the Firehose stream.
- deliveryStreamType DeliveryStreamType
- The Firehose stream type. This can be one of the following values:
  - DirectPut: Provider applications access the Firehose stream directly.
  - KinesisStreamAsSource: The Firehose stream uses a Kinesis data stream as a source.
- directPutSourceConfiguration DeliveryStreamDirectPutSourceConfiguration
- The structure that configures parameters such as ThroughputHintInMBs for a stream configured with Direct PUT as a source.
- elasticsearchDestinationConfiguration DeliveryStreamElasticsearchDestinationConfiguration
- An Amazon ES destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon ES destination to an Amazon S3 or Amazon Redshift destination, the update requires some interruptions.
- extendedS3DestinationConfiguration DeliveryStreamExtendedS3DestinationConfiguration
- An Amazon S3 destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.
- httpEndpointDestinationConfiguration DeliveryStreamHttpEndpointDestinationConfiguration
- Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
- icebergDestinationConfiguration DeliveryStreamIcebergDestinationConfiguration
- Specifies the destination settings for an Apache Iceberg table.
- kinesisStreamSourceConfiguration DeliveryStreamKinesisStreamSourceConfiguration
- When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis stream ARN and the role ARN for the source stream.
- mskSourceConfiguration DeliveryStreamMskSourceConfiguration
- The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- redshiftDestinationConfiguration DeliveryStreamRedshiftDestinationConfiguration
- An Amazon Redshift destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.
- s3DestinationConfiguration DeliveryStreamS3DestinationConfiguration
- The S3DestinationConfiguration property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon S3 destination to an Amazon ES destination, the update requires some interruptions.
- snowflakeDestinationConfiguration DeliveryStreamSnowflakeDestinationConfiguration
- Configures the Snowflake destination.
- splunkDestinationConfiguration DeliveryStreamSplunkDestinationConfiguration
- The configuration of a destination in Splunk for the delivery stream.
- tags Tag[]
- A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. You can specify up to 50 tags when creating a Firehose stream. If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify if users have permissions to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags will fail with an AccessDeniedException such as the following: "AccessDeniedException - User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy." For an example IAM policy, see Tag example.
Python
- amazon_open_search_serverless_destination_configuration DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationArgs
- Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- amazonopensearchservice_destination_configuration DeliveryStreamAmazonopensearchserviceDestinationConfigurationArgs
- The destination in Amazon OpenSearch Service. You can specify only one destination.
- database_source_configuration DeliveryStreamDatabaseSourceConfigurationArgs
- The top-level object for configuring streams with a database as a source. Amazon Data Firehose is in preview release and is subject to change.
- delivery_stream_encryption_configuration_input DeliveryStreamEncryptionConfigurationInputArgs
- Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- delivery_stream_name str
- The name of the Firehose stream.
- delivery_stream_type DeliveryStreamType
- The Firehose stream type. This can be one of the following values:
  - DirectPut: Provider applications access the Firehose stream directly.
  - KinesisStreamAsSource: The Firehose stream uses a Kinesis data stream as a source.
- direct_put_source_configuration DeliveryStreamDirectPutSourceConfigurationArgs
- The structure that configures parameters such as ThroughputHintInMBs for a stream configured with Direct PUT as a source.
- elasticsearch_destination_configuration DeliveryStreamElasticsearchDestinationConfigurationArgs
- An Amazon ES destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon ES destination to an Amazon S3 or Amazon Redshift destination, the update requires some interruptions.
- extended_s3_destination_configuration DeliveryStreamExtendedS3DestinationConfigurationArgs
- An Amazon S3 destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.
- http_endpoint_destination_configuration DeliveryStreamHttpEndpointDestinationConfigurationArgs
- Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
- iceberg_destination_configuration DeliveryStreamIcebergDestinationConfigurationArgs
- Specifies the destination settings for an Apache Iceberg table.
- kinesis_stream_source_configuration DeliveryStreamKinesisStreamSourceConfigurationArgs
- When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis stream ARN and the role ARN for the source stream.
- msk_source_configuration DeliveryStreamMskSourceConfigurationArgs
- The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- redshift_destination_configuration DeliveryStreamRedshiftDestinationConfigurationArgs
- An Amazon Redshift destination for the delivery stream. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.
- s3_destination_configuration DeliveryStreamS3DestinationConfigurationArgs
- The S3DestinationConfiguration property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data. Conditional: you must specify only one destination configuration. If you change the delivery stream destination from an Amazon S3 destination to an Amazon ES destination, the update requires some interruptions.
- snowflake_destination_configuration DeliveryStreamSnowflakeDestinationConfigurationArgs
- Configures the Snowflake destination.
- splunk_destination_configuration DeliveryStreamSplunkDestinationConfigurationArgs
- The configuration of a destination in Splunk for the delivery stream.
- tags Sequence[TagArgs]
- A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. You can specify up to 50 tags when creating a Firehose stream. If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify if users have permissions to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags will fail with an AccessDeniedException such as the following: "AccessDeniedException - User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy." For an example IAM policy, see Tag example.
- amazonOpen Property MapSearch Serverless Destination Configuration 
- Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- amazonopensearchserviceDestination Property MapConfiguration 
- The destination in Amazon OpenSearch Service. You can specify only one destination.
- databaseSourceConfiguration Property Map
- The top level object for configuring streams with database as a source. Amazon Data Firehose is in preview release and is subject to change.
- deliveryStreamEncryptionConfigurationInput Property Map
- Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- deliveryStreamName String
- The name of the Firehose stream.
- deliveryStreamType "DatabaseAsSource" | "DirectPut" | "KinesisStreamAsSource" | "MSKAsSource"
- The Firehose stream type. This can be one of the following values:
- DirectPut: Provider applications access the Firehose stream directly.
- KinesisStreamAsSource: The Firehose stream uses a Kinesis data stream as a source.
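As a sketch of how the stream type relates to the source-configuration inputs listed on this page, the pairing below mirrors the descriptions of kinesisStreamSourceConfiguration, mskSourceConfiguration, and databaseSourceConfiguration; the helper itself is hypothetical and not part of the Pulumi API.

```python
# Hypothetical pre-flight check: each deliveryStreamType implies a
# particular source configuration block (per the property descriptions).
REQUIRED_SOURCE_CONFIG = {
    "DirectPut": None,  # producers write to the stream directly
    "KinesisStreamAsSource": "kinesis_stream_source_configuration",
    "MSKAsSource": "msk_source_configuration",
    "DatabaseAsSource": "database_source_configuration",
}

def check_source_config(stream_type: str, args: dict) -> None:
    """Raise if args lacks the source configuration the stream type implies."""
    required = REQUIRED_SOURCE_CONFIG.get(stream_type)
    if required is not None and required not in args:
        raise ValueError(f"{stream_type} requires {required}")

check_source_config("DirectPut", {})  # fine: no source config needed
check_source_config(
    "KinesisStreamAsSource",
    {"kinesis_stream_source_configuration": {"kinesis_stream_arn": "...", "role_arn": "..."}},
)
```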
 
- directPutSourceConfiguration Property Map
- The structure that configures parameters such as `ThroughputHintInMBs` for a stream configured with Direct PUT as a source.
- elasticsearchDestinationConfiguration Property Map
- An Amazon ES destination for the delivery stream. Conditional. You must specify only one destination configuration. If you change the delivery stream destination from an Amazon ES destination to an Amazon S3 or Amazon Redshift destination, update requires some interruptions.
- extendedS3DestinationConfiguration Property Map
- An Amazon S3 destination for the delivery stream. Conditional. You must specify only one destination configuration. If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, update requires some interruptions.
- httpEndpointDestinationConfiguration Property Map
- Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
- icebergDestinationConfiguration Property Map
- Specifies the destination settings for Apache Iceberg tables.
- kinesisStreamSourceConfiguration Property Map
- When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis stream ARN and the role ARN for the source stream.
- mskSourceConfiguration Property Map
- The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- redshiftDestinationConfiguration Property Map
- An Amazon Redshift destination for the delivery stream. Conditional. You must specify only one destination configuration. If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, update requires some interruptions.
- s3DestinationConfiguration Property Map
- The `S3DestinationConfiguration` property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data. Conditional. You must specify only one destination configuration. If you change the delivery stream destination from an Amazon S3 destination to an Amazon ES destination, update requires some interruptions.
- snowflakeDestinationConfiguration Property Map
- Configure Snowflake destination
- splunkDestinationConfiguration Property Map
- The configuration of a destination in Splunk for the delivery stream.
- tags List<Property Map>
- A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. You can specify up to 50 tags when creating a Firehose stream. If you specify tags in the `CreateDeliveryStream` action, Amazon Data Firehose performs an additional authorization on the `firehose:TagDeliveryStream` action to verify if users have permissions to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags will fail with an `AccessDeniedException` such as the following: AccessDeniedException - User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy. For an example IAM policy, see Tag example.
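The tag constraints above (key-value pairs, at most 50 per stream) can be sketched as a client-side pre-flight check. The helper is illustrative only; Pulumi and AWS perform their own validation and authorization.

```python
# Hypothetical validator for the tags input described above.
MAX_TAGS = 50  # per the docs: up to 50 tags per Firehose stream

def validate_tags(tags: list) -> None:
    """Raise if the tag list would be rejected outright."""
    if len(tags) > MAX_TAGS:
        raise ValueError(f"at most {MAX_TAGS} tags per stream, got {len(tags)}")
    for tag in tags:
        if not tag.get("key"):
            raise ValueError("every tag needs a non-empty 'key'")

validate_tags([{"key": "team", "value": "data-eng"}])  # passes
```

Note this does not check IAM permissions; lacking `firehose:TagDeliveryStream` still fails at create time with an AccessDeniedException, as described above.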
Outputs
All input properties are implicitly available as output properties. Additionally, the DeliveryStream resource produces the following output properties:
Supporting Types
DeliveryStreamAmazonOpenSearchServerlessBufferingHints, DeliveryStreamAmazonOpenSearchServerlessBufferingHintsArgs                
- IntervalInSeconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- SizeInMbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the Firehose stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- IntervalInSeconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- SizeInMbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the Firehose stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds Integer
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs Integer
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the Firehose stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds number
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs number
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the Firehose stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- interval_in_seconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- size_in_mbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the Firehose stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds Number
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs Number
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the Firehose stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
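The sizing guidance above (SizeInMBs should exceed roughly 10 seconds of typical ingest, with 5 MB as the service default) can be written out as a small helper. This is a sketch of the recommendation, not part of any SDK.

```python
import math

# Illustrative helper: pick a SizeInMBs value per the guidance above.
def recommended_size_in_mbs(ingest_mb_per_sec: float, default_mb: int = 5) -> int:
    """Return max(default, ceil(10 seconds of typical ingest))."""
    ten_seconds_of_data = ingest_mb_per_sec * 10
    return max(default_mb, math.ceil(ten_seconds_of_data))

recommended_size_in_mbs(1.0)  # 1 MB/sec -> 10, matching the example in the docs
recommended_size_in_mbs(0.2)  # light traffic -> the default of 5
```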
DeliveryStreamAmazonOpenSearchServerlessDestinationConfiguration, DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationArgs                
- IndexName string
- The Serverless offering for Amazon OpenSearch Service index name.
- RoleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonOpenSearchServerlessBufferingHints
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- CollectionEndpoint string
- The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonOpenSearchServerlessRetryOptions
- The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- VpcConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamVpcConfiguration
- IndexName string
- The Serverless offering for Amazon OpenSearch Service index name.
- RoleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- S3Configuration DeliveryStreamS3DestinationConfiguration
- BufferingHints DeliveryStreamAmazonOpenSearchServerlessBufferingHints
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- CollectionEndpoint string
- The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- RetryOptions DeliveryStreamAmazonOpenSearchServerlessRetryOptions
- The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- S3BackupMode DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- VpcConfiguration DeliveryStreamVpcConfiguration
- indexName String
- The Serverless offering for Amazon OpenSearch Service index name.
- roleArn String
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- bufferingHints DeliveryStreamAmazonOpenSearchServerlessBufferingHints
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- collectionEndpoint String
- The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- processingConfiguration DeliveryStreamProcessingConfiguration
- retryOptions DeliveryStreamAmazonOpenSearchServerlessRetryOptions
- The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- vpcConfiguration DeliveryStreamVpcConfiguration
- indexName string
- The Serverless offering for Amazon OpenSearch Service index name.
- roleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- bufferingHints DeliveryStreamAmazonOpenSearchServerlessBufferingHints
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- collectionEndpoint string
- The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- processingConfiguration DeliveryStreamProcessingConfiguration
- retryOptions DeliveryStreamAmazonOpenSearchServerlessRetryOptions
- The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- vpcConfiguration DeliveryStreamVpcConfiguration
- index_name str
- The Serverless offering for Amazon OpenSearch Service index name.
- role_arn str
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3_configuration DeliveryStreamS3DestinationConfiguration
- buffering_hints DeliveryStreamAmazonOpenSearchServerlessBufferingHints
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- collection_endpoint str
- The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- processing_configuration DeliveryStreamProcessingConfiguration
- retry_options DeliveryStreamAmazonOpenSearchServerlessRetryOptions
- The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3_backup_mode DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- vpc_configuration DeliveryStreamVpcConfiguration
- indexName String
- The Serverless offering for Amazon OpenSearch Service index name.
- roleArn String
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration Property Map
- bufferingHints Property Map
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions Property Map
- collectionEndpoint String
- The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- processingConfiguration Property Map
- retryOptions Property Map
- The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode "FailedDocumentsOnly" | "AllDocuments"
- Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- vpcConfiguration Property Map
DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode, DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupModeArgs                    
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments
- AllDocuments
- DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupModeFailedDocumentsOnly
- FailedDocumentsOnly
- DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupModeAllDocuments
- AllDocuments
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments
- AllDocuments
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments
- AllDocuments
- FAILED_DOCUMENTS_ONLY
- FailedDocumentsOnly
- ALL_DOCUMENTS
- AllDocuments
- "FailedDocumentsOnly"
- FailedDocumentsOnly
- "AllDocuments"
- AllDocuments
DeliveryStreamAmazonOpenSearchServerlessRetryOptions, DeliveryStreamAmazonOpenSearchServerlessRetryOptionsArgs                
- DurationInSeconds int
- After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- DurationInSeconds int
- After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds Integer
- After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds number
- After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- duration_in_seconds int
- After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds Number
- After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
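A minimal model of DurationInSeconds as described above: the window covers the whole delivery attempt sequence including the first try, and 0 disables retries. This is an illustrative sketch, not the actual Firehose retry logic.

```python
import time

# Hypothetical model of the retry window described above.
def deliver_with_retries(send, duration_in_seconds: int) -> bool:
    """Call send() until it succeeds or the window elapses.
    The window includes the first attempt; 0 means no retries."""
    deadline = time.monotonic() + duration_in_seconds
    while True:
        if send():
            return True
        if time.monotonic() >= deadline:
            return False  # Firehose would now write the failed documents to S3
        time.sleep(0.1)  # simple fixed backoff for the sketch

deliver_with_retries(lambda: True, 300)  # succeeds on the first attempt
deliver_with_retries(lambda: False, 0)   # one attempt, no retries
```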
DeliveryStreamAmazonopensearchserviceBufferingHints, DeliveryStreamAmazonopensearchserviceBufferingHintsArgs          
- IntervalInSeconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- SizeInMbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- IntervalInSeconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- SizeInMbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds Integer
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs Integer
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds number
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs number
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- interval_in_seconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- size_in_mbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds Number
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs Number
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
DeliveryStreamAmazonopensearchserviceDestinationConfiguration, DeliveryStreamAmazonopensearchserviceDestinationConfigurationArgs          
- IndexName string
- The Amazon OpenSearch Service index name.
- RoleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- Describes the configuration of a destination in Amazon S3.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonopensearchserviceBufferingHints
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- ClusterEndpoint string
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- DocumentIdOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- DomainArn string
- The ARN of the Amazon OpenSearch Service domain.
- IndexRotationPeriod Pulumi.AwsNative.KinesisFirehose.DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod
- The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- Describes a data processing configuration.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonopensearchserviceRetryOptions
- The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3.
- TypeName string
- The Amazon OpenSearch Service type name.
- VpcConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamVpcConfiguration
- The details of the VPC of the Amazon OpenSearch Service destination.
- IndexName string
- The Amazon OpenSearch Service index name.
- RoleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- S3Configuration DeliveryStreamS3DestinationConfiguration
- Describes the configuration of a destination in Amazon S3.
- BufferingHints DeliveryStreamAmazonopensearchserviceBufferingHints
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- ClusterEndpoint string
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- DocumentIdOptions DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- DomainArn string
- The ARN of the Amazon OpenSearch Service domain.
- IndexRotationPeriod DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod
- The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- Describes a data processing configuration.
- RetryOptions DeliveryStreamAmazonopensearchserviceRetryOptions
- The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- S3BackupMode DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3.
- TypeName string
- The Amazon OpenSearch Service type name.
- VpcConfiguration DeliveryStreamVpcConfiguration
- The details of the VPC of the Amazon OpenSearch Service destination.
- indexName String
- The Amazon OpenSearch Service index name.
- roleArn String
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- Describes the configuration of a destination in Amazon S3.
- bufferingHints DeliveryStreamAmazonopensearchserviceBufferingHints
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- clusterEndpoint String
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn String
- The ARN of the Amazon OpenSearch Service domain.
- indexRotationPeriod DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod
- The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processingConfiguration DeliveryStreamProcessingConfiguration
- Describes a data processing configuration.
- retryOptions DeliveryStreamAmazonopensearchserviceRetryOptions
- The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3.
- typeName String
- The Amazon OpenSearch Service type name.
- vpcConfiguration DeliveryStreamVpcConfiguration
- The details of the VPC of the Amazon OpenSearch Service destination.
- indexName string
- The Amazon OpenSearch Service index name.
- roleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration
DeliveryStream S3Destination Configuration 
- Describes the configuration of a destination in Amazon S3.
- bufferingHints DeliveryStream Amazonopensearchservice Buffering Hints 
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- clusterEndpoint string
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn string
- The ARN of the Amazon OpenSearch Service domain.
- indexRotationPeriod DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod
- The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processingConfiguration DeliveryStream Processing Configuration 
- Describes a data processing configuration.
- retryOptions DeliveryStream Amazonopensearchservice Retry Options 
- The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode DeliveryStream Amazonopensearchservice Destination Configuration S3Backup Mode 
- Defines how documents should be delivered to Amazon S3.
- typeName string
- The Amazon OpenSearch Service type name.
- vpcConfiguration DeliveryStream Vpc Configuration 
- The details of the VPC of the Amazon OpenSearch Service destination.
- index_name str
- The Amazon OpenSearch Service index name.
- role_arn str
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3_configuration DeliveryStream S3Destination Configuration 
- Describes the configuration of a destination in Amazon S3.
- buffering_hints DeliveryStream Amazonopensearchservice Buffering Hints 
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- cluster_endpoint str
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- document_id_options DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domain_arn str
- The ARN of the Amazon OpenSearch Service domain.
- index_rotation_period DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod
- The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processing_configuration DeliveryStream Processing Configuration 
- Describes a data processing configuration.
- retry_options DeliveryStream Amazonopensearchservice Retry Options 
- The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3_backup_mode DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode
- Defines how documents should be delivered to Amazon S3.
- type_name str
- The Amazon OpenSearch Service type name.
- vpc_configuration DeliveryStream Vpc Configuration 
- The details of the VPC of the Amazon OpenSearch Service destination.
- indexName String
- The Amazon OpenSearch Service index name.
- roleArn String
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration Property Map
- Describes the configuration of a destination in Amazon S3.
- bufferingHints Property Map
- The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions Property Map
- Describes the Amazon CloudWatch logging options for your delivery stream.
- clusterEndpoint String
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions Property Map
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn String
- The ARN of the Amazon OpenSearch Service domain.
- indexRotationPeriod "NoRotation" | "OneHour" | "OneDay" | "OneWeek" | "OneMonth"
- The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processingConfiguration Property Map
- Describes a data processing configuration.
- retryOptions Property Map
- The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode "FailedDocumentsOnly" | "AllDocuments"
- Defines how documents should be delivered to Amazon S3.
- typeName String
- The Amazon OpenSearch Service type name.
- vpcConfiguration Property Map
- The details of the VPC of the Amazon OpenSearch Service destination.
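Taken together, the Amazonopensearchservice destination properties above can be sketched as a Pulumi YAML program. This is an illustrative fragment only, not a configuration from this page: the ARNs, bucket, role, and index names are placeholders.

```yaml
resources:
  logsStream:
    type: aws-native:kinesisfirehose:DeliveryStream
    properties:
      deliveryStreamType: DirectPut
      amazonopensearchserviceDestinationConfiguration:
        indexName: web-logs                                            # required
        roleArn: arn:aws:iam::123456789012:role/firehose-role          # placeholder
        domainArn: arn:aws:es:us-east-1:123456789012:domain/my-domain  # placeholder; alternatively set clusterEndpoint
        indexRotationPeriod: OneDay
        s3BackupMode: FailedDocumentsOnly
        s3Configuration:                       # required S3 destination for backup/failed documents
          bucketArn: arn:aws:s3:::my-backup-bucket                     # placeholder
          roleArn: arn:aws:iam::123456789012:role/firehose-role        # placeholder
```

Note that `domainArn` and `clusterEndpoint` are alternatives: specify one or the other, not both.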
DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod, DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriodArgs                
- NoRotation 
- NoRotation
- OneHour 
- OneHour
- OneDay 
- OneDay
- OneWeek 
- OneWeek
- OneMonth 
- OneMonth
- DeliveryStream Amazonopensearchservice Destination Configuration Index Rotation Period No Rotation 
- NoRotation
- DeliveryStream Amazonopensearchservice Destination Configuration Index Rotation Period One Hour 
- OneHour
- DeliveryStream Amazonopensearchservice Destination Configuration Index Rotation Period One Day 
- OneDay
- DeliveryStream Amazonopensearchservice Destination Configuration Index Rotation Period One Week 
- OneWeek
- DeliveryStream Amazonopensearchservice Destination Configuration Index Rotation Period One Month 
- OneMonth
- NoRotation 
- NoRotation
- OneHour 
- OneHour
- OneDay 
- OneDay
- OneWeek 
- OneWeek
- OneMonth 
- OneMonth
- NoRotation 
- NoRotation
- OneHour 
- OneHour
- OneDay 
- OneDay
- OneWeek 
- OneWeek
- OneMonth 
- OneMonth
- NO_ROTATION
- NoRotation
- ONE_HOUR
- OneHour
- ONE_DAY
- OneDay
- ONE_WEEK
- OneWeek
- ONE_MONTH
- OneMonth
- "NoRotation" 
- NoRotation
- "OneHour" 
- OneHour
- "OneDay" 
- OneDay
- "OneWeek" 
- OneWeek
- "OneMonth" 
- OneMonth
DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode, DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupModeArgs              
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments 
- AllDocuments
- DeliveryStream Amazonopensearchservice Destination Configuration S3Backup Mode Failed Documents Only 
- FailedDocumentsOnly
- DeliveryStream Amazonopensearchservice Destination Configuration S3Backup Mode All Documents 
- AllDocuments
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments 
- AllDocuments
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments 
- AllDocuments
- FAILED_DOCUMENTS_ONLY
- FailedDocumentsOnly
- ALL_DOCUMENTS
- AllDocuments
- "FailedDocumentsOnly"
- FailedDocumentsOnly
- "AllDocuments" 
- AllDocuments
DeliveryStreamAmazonopensearchserviceRetryOptions, DeliveryStreamAmazonopensearchserviceRetryOptionsArgs          
- DurationInSeconds int
- After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- DurationInSeconds int
- After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds Integer
- After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds number
- After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- duration_in_seconds int
- After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds Number
- After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
DeliveryStreamAuthenticationConfiguration, DeliveryStreamAuthenticationConfigurationArgs        
- Connectivity Pulumi.AwsNative.KinesisFirehose.DeliveryStreamAuthenticationConfigurationConnectivity
- The type of connectivity used to access the Amazon MSK cluster.
- RoleArn string
- The ARN of the role used to access the Amazon MSK cluster.
- Connectivity DeliveryStreamAuthenticationConfigurationConnectivity
- The type of connectivity used to access the Amazon MSK cluster.
- RoleArn string
- The ARN of the role used to access the Amazon MSK cluster.
- connectivity DeliveryStreamAuthenticationConfigurationConnectivity
- The type of connectivity used to access the Amazon MSK cluster.
- roleArn String
- The ARN of the role used to access the Amazon MSK cluster.
- connectivity DeliveryStreamAuthenticationConfigurationConnectivity
- The type of connectivity used to access the Amazon MSK cluster.
- roleArn string
- The ARN of the role used to access the Amazon MSK cluster.
- connectivity DeliveryStreamAuthenticationConfigurationConnectivity
- The type of connectivity used to access the Amazon MSK cluster.
- role_arn str
- The ARN of the role used to access the Amazon MSK cluster.
- connectivity "PUBLIC" | "PRIVATE"
- The type of connectivity used to access the Amazon MSK cluster.
- roleArn String
- The ARN of the role used to access the Amazon MSK cluster.
DeliveryStreamAuthenticationConfigurationConnectivity, DeliveryStreamAuthenticationConfigurationConnectivityArgs          
- Public
- PUBLIC
- Private
- PRIVATE
- DeliveryStream Authentication Configuration Connectivity Public 
- PUBLIC
- DeliveryStream Authentication Configuration Connectivity Private 
- PRIVATE
- Public
- PUBLIC
- Private
- PRIVATE
- Public
- PUBLIC
- Private
- PRIVATE
- PUBLIC
- PUBLIC
- PRIVATE
- PRIVATE
- "PUBLIC"
- PUBLIC
- "PRIVATE"
- PRIVATE
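The authentication configuration above is used when the delivery stream reads from an Amazon MSK cluster. A hedged YAML sketch of how it might sit inside an MSK source configuration (the cluster ARN, topic, and role are placeholders):

```yaml
mskSourceConfiguration:
  mskClusterArn: arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abc123  # placeholder
  topicName: firehose-events                                                     # placeholder
  authenticationConfiguration:
    connectivity: PRIVATE        # or PUBLIC
    roleArn: arn:aws:iam::123456789012:role/firehose-msk-role                    # placeholder
```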
DeliveryStreamBufferingHints, DeliveryStreamBufferingHintsArgs        
- IntervalInSeconds int
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- SizeInMbs int
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- IntervalInSeconds int
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- SizeInMbs int
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds Integer
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs Integer
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds number
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs number
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- interval_in_seconds int
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- size_in_mbs int
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds Number
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs Number
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
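Buffering hints trade delivery latency against object size: Firehose flushes a batch when either threshold is reached, whichever comes first. A minimal YAML fragment (the values are illustrative; see the API Reference for valid ranges):

```yaml
bufferingHints:
  intervalInSeconds: 60   # flush at least once per minute...
  sizeInMbs: 5            # ...or as soon as 5 MB have accumulated, whichever comes first
```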
DeliveryStreamCatalogConfiguration, DeliveryStreamCatalogConfigurationArgs        
- CatalogArn string
- Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog.
- CatalogArn string
- Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog.
- catalogArn String
- Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog.
- catalogArn string
- Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog.
- catalog_arn str
- Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog.
- catalogArn String
- Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog.
DeliveryStreamCloudWatchLoggingOptions, DeliveryStreamCloudWatchLoggingOptionsArgs            
- Enabled bool
- Indicates whether CloudWatch Logs logging is enabled.
- LogGroupName string
- The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- LogStreamName string
- The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- Enabled bool
- Indicates whether CloudWatch Logs logging is enabled.
- LogGroupName string
- The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- LogStreamName string
- The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- enabled Boolean
- Indicates whether CloudWatch Logs logging is enabled.
- logGroupName String
- The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- logStreamName String
- The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- enabled boolean
- Indicates whether CloudWatch Logs logging is enabled.
- logGroupName string
- The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- logStreamName string
- The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- enabled bool
- Indicates whether CloudWatch Logs logging is enabled.
- log_group_name str
- The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- log_stream_name str
- The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- enabled Boolean
- Indicates whether CloudWatch Logs logging is enabled.
- logGroupName String
- The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- logStreamName String
- The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
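When logging is enabled, both the log group and log stream must be named. A hedged YAML fragment (the names are placeholders):

```yaml
cloudWatchLoggingOptions:
  enabled: true
  logGroupName: /aws/kinesisfirehose/web-logs   # placeholder; required when enabled is true
  logStreamName: DestinationDelivery            # placeholder; required when enabled is true
```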
DeliveryStreamCopyCommand, DeliveryStreamCopyCommandArgs        
- DataTableName string
- The name of the target table. The table must already exist in the database.
- CopyOptions string
- Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- DataTableColumns string
- A comma-separated list of column names.
- DataTableName string
- The name of the target table. The table must already exist in the database.
- CopyOptions string
- Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- DataTableColumns string
- A comma-separated list of column names.
- dataTableName String
- The name of the target table. The table must already exist in the database.
- copyOptions String
- Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- dataTableColumns String
- A comma-separated list of column names.
- dataTableName string
- The name of the target table. The table must already exist in the database.
- copyOptions string
- Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- dataTableColumns string
- A comma-separated list of column names.
- data_table_name str
- The name of the target table. The table must already exist in the database.
- copy_options str
- Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- data_table_columns str
- A comma-separated list of column names.
- dataTableName String
- The name of the target table. The table must already exist in the database.
- copyOptions String
- Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- dataTableColumns String
- A comma-separated list of column names.
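For a Redshift destination, the copy command properties map directly onto the COPY statement Firehose issues. A hedged YAML fragment (table, columns, and options are placeholders):

```yaml
copyCommand:
  dataTableName: firehose_events                 # must already exist in Redshift
  dataTableColumns: event_time,user_id,payload   # optional; placeholder column list
  copyOptions: "JSON 'auto' GZIP"                # passed through to the Redshift COPY command
```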
DeliveryStreamDataFormatConversionConfiguration, DeliveryStreamDataFormatConversionConfigurationArgs            
- Enabled bool
- Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- InputFormatConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamInputFormatConfiguration
- Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- OutputFormatConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOutputFormatConfiguration
- Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- SchemaConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSchemaConfiguration
- Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- Enabled bool
- Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- InputFormatConfiguration DeliveryStreamInputFormatConfiguration
- Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- OutputFormatConfiguration DeliveryStreamOutputFormatConfiguration
- Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- SchemaConfiguration DeliveryStreamSchemaConfiguration
- Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- enabled Boolean
- Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- inputFormatConfiguration DeliveryStreamInputFormatConfiguration
- Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- outputFormatConfiguration DeliveryStreamOutputFormatConfiguration
- Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- schemaConfiguration DeliveryStreamSchemaConfiguration
- Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- enabled boolean
- Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- inputFormatConfiguration DeliveryStreamInputFormatConfiguration
- Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- outputFormatConfiguration DeliveryStreamOutputFormatConfiguration
- Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- schemaConfiguration DeliveryStreamSchemaConfiguration
- Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- enabled bool
- Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- input_format_configuration DeliveryStreamInputFormatConfiguration
- Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- output_format_configuration DeliveryStreamOutputFormatConfiguration
- Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- schema_configuration DeliveryStreamSchemaConfiguration
- Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- enabled Boolean
- Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- inputFormatConfiguration Property Map
- Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- outputFormatConfiguration Property Map
- Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- schemaConfiguration Property Map
- Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
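A typical use of format conversion is JSON in, Parquet out, with the column schema read from a Glue table. A hedged YAML sketch (the Glue database, table, role, and region are placeholders):

```yaml
dataFormatConversionConfiguration:
  enabled: true
  inputFormatConfiguration:
    deserializer:
      openXJsonSerDe: {}          # parse incoming JSON records
  outputFormatConfiguration:
    serializer:
      parquetSerDe: {}            # emit Apache Parquet
  schemaConfiguration:            # Glue Data Catalog table holding the column schema
    databaseName: my_glue_db      # placeholder
    tableName: my_glue_table      # placeholder
    roleArn: arn:aws:iam::123456789012:role/firehose-role  # placeholder
    region: us-east-1
```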
DeliveryStreamDatabaseColumns, DeliveryStreamDatabaseColumnsArgs        
DeliveryStreamDatabaseSourceAuthenticationConfiguration, DeliveryStreamDatabaseSourceAuthenticationConfigurationArgs            
DeliveryStreamDatabaseSourceConfiguration, DeliveryStreamDatabaseSourceConfigurationArgs          
- DatabaseSourceAuthenticationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDatabaseSourceAuthenticationConfiguration
- The structure to configure the authentication methods for Firehose to connect to the source database endpoint. Amazon Data Firehose is in preview release and is subject to change.
- DatabaseSourceVpcConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDatabaseSourceVpcConfiguration
- The details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database. Amazon Data Firehose is in preview release and is subject to change.
- Databases Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDatabases
- The list of database patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- Endpoint string
- The endpoint of the database server. Amazon Data Firehose is in preview release and is subject to change.
- Port int
- The port of the database. This can be one of the following values: 3306 for the MySQL database type, or 5432 for the PostgreSQL database type. Amazon Data Firehose is in preview release and is subject to change.
- SnapshotWatermarkTable string
- The fully qualified name of the table in the source database endpoint that Firehose uses to track snapshot progress. Amazon Data Firehose is in preview release and is subject to change.
- Tables Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDatabaseTables
- The list of table patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- Type Pulumi.AwsNative.KinesisFirehose.DeliveryStreamDatabaseSourceConfigurationType
- The type of database engine. This can be one of the following values: MySQL or PostgreSQL. Amazon Data Firehose is in preview release and is subject to change.
- Columns Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDatabaseColumns
- The list of column patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- Digest string
- PublicCertificate string
- SslMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamDatabaseSourceConfigurationSslMode
- The mode to enable or disable SSL when Firehose connects to the database endpoint. Amazon Data Firehose is in preview release and is subject to change.
- SurrogateKeys List<string>
- The optional list of table and column names used as unique key columns when taking snapshot if the tables don't have primary keys configured. Amazon Data Firehose is in preview release and is subject to change.
- DatabaseSourceAuthenticationConfiguration DeliveryStreamDatabaseSourceAuthenticationConfiguration
- The structure to configure the authentication methods for Firehose to connect to the source database endpoint. Amazon Data Firehose is in preview release and is subject to change.
- DatabaseSourceVpcConfiguration DeliveryStreamDatabaseSourceVpcConfiguration
- The details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database. Amazon Data Firehose is in preview release and is subject to change.
- Databases DeliveryStreamDatabases
- The list of database patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- Endpoint string
- The endpoint of the database server. Amazon Data Firehose is in preview release and is subject to change.
- Port int
- The port of the database. This can be one of the following values: 3306 for the MySQL database type, or 5432 for the PostgreSQL database type. Amazon Data Firehose is in preview release and is subject to change.
- SnapshotWatermarkTable string
- The fully qualified name of the table in the source database endpoint that Firehose uses to track snapshot progress. Amazon Data Firehose is in preview release and is subject to change.
- Tables DeliveryStreamDatabaseTables
- The list of table patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- Type DeliveryStreamDatabaseSourceConfigurationType
- The type of database engine. This can be one of the following values: MySQL or PostgreSQL. Amazon Data Firehose is in preview release and is subject to change.
- Columns DeliveryStreamDatabaseColumns
- The list of column patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
- Digest string
- PublicCertificate string
- SslMode DeliveryStreamDatabaseSourceConfigurationSslMode
- The mode to enable or disable SSL when Firehose connects to the database endpoint. Amazon Data Firehose is in preview release and is subject to change.
- SurrogateKeys []string
- The optional list of table and column names used as unique key columns when taking snapshot if the tables don't have primary keys configured. Amazon Data Firehose is in preview release and is subject to change.
- databaseSourceAuthenticationConfiguration DeliveryStreamDatabaseSourceAuthenticationConfiguration 
- The structure to configure the authentication methods for Firehose to connect to the source database endpoint. - Amazon Data Firehose is in preview release and is subject to change. 
- databaseSourceVpcConfiguration DeliveryStreamDatabaseSourceVpcConfiguration 
- The details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database. - Amazon Data Firehose is in preview release and is subject to change. 
- databases DeliveryStreamDatabases 
- The list of database patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- endpoint String
- The endpoint of the database server. - Amazon Data Firehose is in preview release and is subject to change. 
- port Integer
- The port of the database. This can be one of the following values: 3306 for the MySQL database type, 5432 for the PostgreSQL database type. - Amazon Data Firehose is in preview release and is subject to change. 
- snapshotWatermarkTable String
- The fully qualified name of the table in the source database endpoint that Firehose uses to track snapshot progress. - Amazon Data Firehose is in preview release and is subject to change. 
- tables DeliveryStreamDatabaseTables 
- The list of table patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- type DeliveryStreamDatabaseSourceConfigurationType 
- The type of database engine. This can be one of the following values: MySQL, PostgreSQL. - Amazon Data Firehose is in preview release and is subject to change. 
- columns DeliveryStreamDatabaseColumns 
- The list of column patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- digest String
- publicCertificate String
- sslMode DeliveryStreamDatabaseSourceConfigurationSslMode 
- The mode to enable or disable SSL when Firehose connects to the database endpoint. - Amazon Data Firehose is in preview release and is subject to change. 
- surrogateKeys List<String>
- The optional list of table and column names used as unique key columns when taking a snapshot if the tables don’t have primary keys configured. - Amazon Data Firehose is in preview release and is subject to change. 
- databaseSourceAuthenticationConfiguration DeliveryStreamDatabaseSourceAuthenticationConfiguration 
- The structure to configure the authentication methods for Firehose to connect to the source database endpoint. - Amazon Data Firehose is in preview release and is subject to change. 
- databaseSourceVpcConfiguration DeliveryStreamDatabaseSourceVpcConfiguration 
- The details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database. - Amazon Data Firehose is in preview release and is subject to change. 
- databases DeliveryStreamDatabases 
- The list of database patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- endpoint string
- The endpoint of the database server. - Amazon Data Firehose is in preview release and is subject to change. 
- port number
- The port of the database. This can be one of the following values: 3306 for the MySQL database type, 5432 for the PostgreSQL database type. - Amazon Data Firehose is in preview release and is subject to change. 
- snapshotWatermarkTable string
- The fully qualified name of the table in the source database endpoint that Firehose uses to track snapshot progress. - Amazon Data Firehose is in preview release and is subject to change. 
- tables DeliveryStreamDatabaseTables 
- The list of table patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- type DeliveryStreamDatabaseSourceConfigurationType 
- The type of database engine. This can be one of the following values: MySQL, PostgreSQL. - Amazon Data Firehose is in preview release and is subject to change. 
- columns DeliveryStreamDatabaseColumns 
- The list of column patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- digest string
- publicCertificate string
- sslMode DeliveryStreamDatabaseSourceConfigurationSslMode 
- The mode to enable or disable SSL when Firehose connects to the database endpoint. - Amazon Data Firehose is in preview release and is subject to change. 
- surrogateKeys string[]
- The optional list of table and column names used as unique key columns when taking a snapshot if the tables don’t have primary keys configured. - Amazon Data Firehose is in preview release and is subject to change. 
- database_source_authentication_configuration DeliveryStreamDatabaseSourceAuthenticationConfiguration 
- The structure to configure the authentication methods for Firehose to connect to the source database endpoint. - Amazon Data Firehose is in preview release and is subject to change. 
- database_source_vpc_configuration DeliveryStreamDatabaseSourceVpcConfiguration 
- The details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database. - Amazon Data Firehose is in preview release and is subject to change. 
- databases DeliveryStreamDatabases 
- The list of database patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- endpoint str
- The endpoint of the database server. - Amazon Data Firehose is in preview release and is subject to change. 
- port int
- The port of the database. This can be one of the following values: 3306 for the MySQL database type, 5432 for the PostgreSQL database type. - Amazon Data Firehose is in preview release and is subject to change. 
- snapshot_watermark_table str
- The fully qualified name of the table in the source database endpoint that Firehose uses to track snapshot progress. - Amazon Data Firehose is in preview release and is subject to change. 
- tables DeliveryStreamDatabaseTables 
- The list of table patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- type DeliveryStreamDatabaseSourceConfigurationType 
- The type of database engine. This can be one of the following values: MySQL, PostgreSQL. - Amazon Data Firehose is in preview release and is subject to change. 
- columns DeliveryStreamDatabaseColumns 
- The list of column patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- digest str
- public_certificate str
- ssl_mode DeliveryStreamDatabaseSourceConfigurationSslMode 
- The mode to enable or disable SSL when Firehose connects to the database endpoint. - Amazon Data Firehose is in preview release and is subject to change. 
- surrogate_keys Sequence[str]
- The optional list of table and column names used as unique key columns when taking a snapshot if the tables don’t have primary keys configured. - Amazon Data Firehose is in preview release and is subject to change. 
- databaseSourceAuthenticationConfiguration Property Map
- The structure to configure the authentication methods for Firehose to connect to the source database endpoint. - Amazon Data Firehose is in preview release and is subject to change. 
- databaseSourceVpcConfiguration Property Map
- The details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database. - Amazon Data Firehose is in preview release and is subject to change. 
- databases Property Map
- The list of database patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- endpoint String
- The endpoint of the database server. - Amazon Data Firehose is in preview release and is subject to change. 
- port Number
- The port of the database. This can be one of the following values: 3306 for the MySQL database type, 5432 for the PostgreSQL database type. - Amazon Data Firehose is in preview release and is subject to change. 
- snapshotWatermarkTable String
- The fully qualified name of the table in the source database endpoint that Firehose uses to track snapshot progress. - Amazon Data Firehose is in preview release and is subject to change. 
- tables Property Map
- The list of table patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- type "MySQL" | "PostgreSQL"
- The type of database engine. This can be one of the following values: MySQL, PostgreSQL. - Amazon Data Firehose is in preview release and is subject to change. 
- columns Property Map
- The list of column patterns in the source database endpoint for Firehose to read from. - Amazon Data Firehose is in preview release and is subject to change. 
- digest String
- publicCertificate String
- sslMode "Disabled" | "Enabled"
- The mode to enable or disable SSL when Firehose connects to the database endpoint. - Amazon Data Firehose is in preview release and is subject to change. 
- surrogateKeys List<String>
- The optional list of table and column names used as unique key columns when taking a snapshot if the tables don’t have primary keys configured. - Amazon Data Firehose is in preview release and is subject to change. 
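As a plain-Python sketch of how the fields above fit together (dicts stand in for the generated Args types, and every value here is hypothetical, not taken from a real deployment):

```python
# Hypothetical values; keys mirror the Python property names listed above.
database_source_configuration = {
    "type": "MySQL",                                    # or "PostgreSQL"
    "endpoint": "mydb.example.internal",                # assumed database endpoint
    "port": 3306,                                       # 3306 for MySQL, 5432 for PostgreSQL
    "ssl_mode": "Enabled",                              # "Enabled" or "Disabled"
    "snapshot_watermark_table": "firehose.watermarks",  # tracks snapshot progress
    "surrogate_keys": ["order_id"],                     # used when tables lack primary keys
}

# The documented pairing between engine type and port value:
documented_ports = {"MySQL": 3306, "PostgreSQL": 5432}
assert database_source_configuration["port"] == documented_ports[database_source_configuration["type"]]
```

The assertion simply restates the documented rule that the port value follows the engine type.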
DeliveryStreamDatabaseSourceConfigurationSslMode, DeliveryStreamDatabaseSourceConfigurationSslModeArgs              
- Disabled
- Disabled
- Enabled
- Enabled
- DeliveryStreamDatabaseSourceConfigurationSslModeDisabled 
- Disabled
- DeliveryStreamDatabaseSourceConfigurationSslModeEnabled 
- Enabled
- Disabled
- Disabled
- Enabled
- Enabled
- Disabled
- Disabled
- Enabled
- Enabled
- DISABLED
- Disabled
- ENABLED
- Enabled
- "Disabled"
- Disabled
- "Enabled"
- Enabled
DeliveryStreamDatabaseSourceConfigurationType, DeliveryStreamDatabaseSourceConfigurationTypeArgs            
- MySql 
- MySQL
- PostgreSql 
- PostgreSQL
- DeliveryStreamDatabaseSourceConfigurationTypeMySql 
- MySQL
- DeliveryStreamDatabaseSourceConfigurationTypePostgreSql 
- PostgreSQL
- MySql 
- MySQL
- PostgreSql 
- PostgreSQL
- MySql 
- MySQL
- PostgreSql 
- PostgreSQL
- MY_SQL
- MySQL
- POSTGRE_SQL
- PostgreSQL
- "MySQL" 
- MySQL
- "PostgreSQL" 
- PostgreSQL
DeliveryStreamDatabaseSourceVpcConfiguration, DeliveryStreamDatabaseSourceVpcConfigurationArgs            
- VpcEndpointServiceName string
- The VPC endpoint service name which Firehose uses to create a PrivateLink to the database. The endpoint service must have the Firehose service principal firehose.amazonaws.com as an allowed principal on the VPC endpoint service. The VPC endpoint service name is a string that looks like com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>. - Amazon Data Firehose is in preview release and is subject to change. 
- VpcEndpointServiceName string
- The VPC endpoint service name which Firehose uses to create a PrivateLink to the database. The endpoint service must have the Firehose service principal firehose.amazonaws.com as an allowed principal on the VPC endpoint service. The VPC endpoint service name is a string that looks like com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>. - Amazon Data Firehose is in preview release and is subject to change. 
- vpcEndpointServiceName String
- The VPC endpoint service name which Firehose uses to create a PrivateLink to the database. The endpoint service must have the Firehose service principal firehose.amazonaws.com as an allowed principal on the VPC endpoint service. The VPC endpoint service name is a string that looks like com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>. - Amazon Data Firehose is in preview release and is subject to change. 
- vpcEndpointServiceName string
- The VPC endpoint service name which Firehose uses to create a PrivateLink to the database. The endpoint service must have the Firehose service principal firehose.amazonaws.com as an allowed principal on the VPC endpoint service. The VPC endpoint service name is a string that looks like com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>. - Amazon Data Firehose is in preview release and is subject to change. 
- vpc_endpoint_service_name str
- The VPC endpoint service name which Firehose uses to create a PrivateLink to the database. The endpoint service must have the Firehose service principal firehose.amazonaws.com as an allowed principal on the VPC endpoint service. The VPC endpoint service name is a string that looks like com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>. - Amazon Data Firehose is in preview release and is subject to change. 
- vpcEndpointServiceName String
- The VPC endpoint service name which Firehose uses to create a PrivateLink to the database. The endpoint service must have the Firehose service principal firehose.amazonaws.com as an allowed principal on the VPC endpoint service. The VPC endpoint service name is a string that looks like com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>. - Amazon Data Firehose is in preview release and is subject to change. 
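A small sanity check can confirm a value matches the documented `com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>` shape. The exact format of the service id (`vpce-svc-` followed by 17 hex characters) is an assumption for illustration, not something this page specifies:

```python
import re

# Assumed id shape: vpce-svc- followed by 17 hex characters.
SERVICE_NAME_RE = re.compile(r"^com\.amazonaws\.vpce\.[a-z0-9-]+\.vpce-svc-[0-9a-f]{17}$")

def looks_like_vpc_endpoint_service_name(name: str) -> bool:
    """Loose sanity check on a VpcEndpointServiceName value."""
    return SERVICE_NAME_RE.match(name) is not None

assert looks_like_vpc_endpoint_service_name(
    "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0")
assert not looks_like_vpc_endpoint_service_name("vpce-svc-0123456789abcdef0")
```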
DeliveryStreamDatabaseTables, DeliveryStreamDatabaseTablesArgs        
DeliveryStreamDatabases, DeliveryStreamDatabasesArgs      
DeliveryStreamDeserializer, DeliveryStreamDeserializerArgs      
- HiveJsonSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHiveJsonSerDe 
- The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- OpenXJsonSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOpenXJsonSerDe 
- The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- HiveJsonSerDe DeliveryStreamHiveJsonSerDe 
- The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- OpenXJsonSerDe DeliveryStreamOpenXJsonSerDe 
- The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- hiveJsonSerDe DeliveryStreamHiveJsonSerDe 
- The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- openXJsonSerDe DeliveryStreamOpenXJsonSerDe 
- The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- hiveJsonSerDe DeliveryStreamHiveJsonSerDe 
- The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- openXJsonSerDe DeliveryStreamOpenXJsonSerDe 
- The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- hive_json_ser_de DeliveryStreamHiveJsonSerDe 
- The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- open_x_json_ser_de DeliveryStreamOpenXJsonSerDe 
- The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- hiveJsonSerDe Property Map
- The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- openXJsonSerDe Property Map
- The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
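Since the two SerDes are alternatives, a configuration should set exactly one of them. A minimal sketch of that rule in plain Python (the dict keys mirror the Python property names above; the validator itself is illustrative, not part of the SDK):

```python
def validate_deserializer(cfg: dict) -> dict:
    # The two SerDes are alternatives: configure exactly one of them.
    chosen = [k for k in ("hive_json_ser_de", "open_x_json_ser_de") if k in cfg]
    if len(chosen) != 1:
        raise ValueError(
            "configure exactly one of hive_json_ser_de / open_x_json_ser_de")
    return cfg

validate_deserializer({"open_x_json_ser_de": {}})  # one SerDe set: accepted
try:
    validate_deserializer({})                      # neither set: rejected
except ValueError:
    pass
```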
DeliveryStreamDestinationTableConfiguration, DeliveryStreamDestinationTableConfigurationArgs          
- DestinationDatabaseName string
- DestinationTableName string
- S3ErrorOutputPrefix string
- UniqueKeys List<string>
- DestinationDatabaseName string
- DestinationTableName string
- S3ErrorOutputPrefix string
- UniqueKeys []string
- destinationDatabaseName String
- destinationTableName String
- s3ErrorOutputPrefix String
- uniqueKeys List<String>
- destinationDatabaseName string
- destinationTableName string
- s3ErrorOutputPrefix string
- uniqueKeys string[]
- destination_database_name str
- destination_table_name str
- s3_error_output_prefix str
- unique_keys Sequence[str]
- destinationDatabaseName String
- destinationTableName String
- s3ErrorOutputPrefix String
- uniqueKeys List<String>
DeliveryStreamDirectPutSourceConfiguration, DeliveryStreamDirectPutSourceConfigurationArgs            
- ThroughputHintInMbs int
- The value that you configure for this parameter is for informational purposes only and does not affect the Firehose delivery throughput limit. You can use the Firehose Limits form to request a throughput limit increase.
- ThroughputHintInMbs int
- The value that you configure for this parameter is for informational purposes only and does not affect the Firehose delivery throughput limit. You can use the Firehose Limits form to request a throughput limit increase.
- throughputHintInMbs Integer
- The value that you configure for this parameter is for informational purposes only and does not affect the Firehose delivery throughput limit. You can use the Firehose Limits form to request a throughput limit increase.
- throughputHintInMbs number
- The value that you configure for this parameter is for informational purposes only and does not affect the Firehose delivery throughput limit. You can use the Firehose Limits form to request a throughput limit increase.
- throughput_hint_in_mbs int
- The value that you configure for this parameter is for informational purposes only and does not affect the Firehose delivery throughput limit. You can use the Firehose Limits form to request a throughput limit increase.
- throughputHintInMbs Number
- The value that you configure for this parameter is for informational purposes only and does not affect the Firehose delivery throughput limit. You can use the Firehose Limits form to request a throughput limit increase.
DeliveryStreamDocumentIdOptions, DeliveryStreamDocumentIdOptionsArgs          
- DefaultDocumentIdFormat Pulumi.AwsNative.KinesisFirehose.DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormat 
- When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs. When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to the Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
- DefaultDocumentIdFormat DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormat 
- When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs. When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to the Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
- defaultDocumentIdFormat DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormat 
- When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs. When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to the Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
- defaultDocumentIdFormat DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormat 
- When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs. When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to the Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
- default_document_id_format DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormat 
- When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs. When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to the Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
- defaultDocumentIdFormat "FIREHOSE_DEFAULT" | "NO_DOCUMENT_ID"
- When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs. When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to the Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormat, DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormatArgs                  
- FirehoseDefault 
- FIREHOSE_DEFAULT
- NoDocumentId 
- NO_DOCUMENT_ID
- DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormatFirehoseDefault 
- FIREHOSE_DEFAULT
- DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormatNoDocumentId 
- NO_DOCUMENT_ID
- FirehoseDefault 
- FIREHOSE_DEFAULT
- NoDocumentId 
- NO_DOCUMENT_ID
- FirehoseDefault 
- FIREHOSE_DEFAULT
- NoDocumentId 
- NO_DOCUMENT_ID
- FIREHOSE_DEFAULT
- FIREHOSE_DEFAULT
- NO_DOCUMENT_ID
- NO_DOCUMENT_ID
- "FIREHOSE_DEFAULT"
- FIREHOSE_DEFAULT
- "NO_DOCUMENT_ID"
- NO_DOCUMENT_ID
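The choice between the two formats is a trade-off between idempotent retries and indexing cost. A short illustrative sketch (the helper function is hypothetical, not part of any SDK):

```python
# FIREHOSE_DEFAULT: Firehose generates a stable per-record ID, so retried
# deliveries do not index the same record twice.
# NO_DOCUMENT_ID: OpenSearch generates the ID; cheaper for write-heavy
# workloads, but a retried delivery may index a record more than once.
ALLOWED_FORMATS = ("FIREHOSE_DEFAULT", "NO_DOCUMENT_ID")

def pick_document_id_format(retries_must_be_idempotent: bool) -> str:
    return "FIREHOSE_DEFAULT" if retries_must_be_idempotent else "NO_DOCUMENT_ID"

assert pick_document_id_format(True) == "FIREHOSE_DEFAULT"
```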
DeliveryStreamDynamicPartitioningConfiguration, DeliveryStreamDynamicPartitioningConfigurationArgs          
- Enabled bool
- Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRetryOptions 
- Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
- Enabled bool
- Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
- RetryOptions DeliveryStreamRetryOptions 
- Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
- enabled Boolean
- Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
- retryOptions DeliveryStreamRetryOptions 
- Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
- enabled boolean
- Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
- retryOptions DeliveryStreamRetryOptions 
- Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
- enabled bool
- Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
- retry_options DeliveryStreamRetryOptions 
- Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
- enabled Boolean
- Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
- retryOptions Property Map
- Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
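A minimal illustrative sketch of this shape in plain Python (the nested `duration_in_seconds` field name for `DeliveryStreamRetryOptions` is an assumption for illustration; all values are hypothetical):

```python
# Hypothetical dynamic partitioning configuration; keys mirror the Python
# property names above. "duration_in_seconds" is an assumed retry-options field.
dynamic_partitioning_configuration = {
    "enabled": True,
    "retry_options": {"duration_in_seconds": 300},
}

assert dynamic_partitioning_configuration["enabled"] is True
```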
DeliveryStreamElasticsearchBufferingHints, DeliveryStreamElasticsearchBufferingHintsArgs          
- IntervalIn intSeconds 
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSecondscontent for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
- SizeIn intMbs 
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBscontent for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
- IntervalIn intSeconds 
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- SizeInMbs int
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds Integer
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs Integer
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds number
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs number
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- interval_in_seconds int
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- size_in_mbs int
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds Number
- The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs Number
- The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
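The interval/size pair works the same way in every SDK: delivery fires when either threshold is reached first. A minimal sketch, using the snake_case property names documented above (a plain Python dict stands in for the typed Args class, so the shape is illustrative rather than the SDK type itself):

```python
# Buffering hints for an Elasticsearch destination: Firehose delivers a batch
# once EITHER threshold is crossed, whichever happens first.
buffering_hints = {
    "interval_in_seconds": 300,  # flush after at most 5 minutes of buffering
    "size_in_mbs": 5,            # ...or as soon as 5 MB has accumulated
}
```

In a real program this dict's contents would be passed as a `DeliveryStreamElasticsearchBufferingHintsArgs` value inside the destination configuration.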
DeliveryStreamElasticsearchDestinationConfiguration, DeliveryStreamElasticsearchDestinationConfigurationArgs          
- IndexName string
- The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
- RoleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Controlling Access with Amazon Kinesis Data Firehose .
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- The S3 bucket where Kinesis Data Firehose backs up incoming data.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamElasticsearchBufferingHints
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch Logs logging options for the delivery stream.
- ClusterEndpoint string
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- DocumentIdOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- DomainArn string
- The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
- IndexRotationPeriod Pulumi.AwsNative.KinesisFirehose.DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriod
- The frequency of Elasticsearch index rotation. If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamElasticsearchRetryOptions
- The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamElasticsearchDestinationConfigurationS3BackupMode
- The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3). You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- TypeName string
- The Elasticsearch type name that Amazon ES adds to documents when indexing data.
- VpcConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamVpcConfiguration
- The details of the VPC of the Amazon ES destination.
- IndexName string
- The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
- RoleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Controlling Access with Amazon Kinesis Data Firehose .
- S3Configuration DeliveryStreamS3DestinationConfiguration
- The S3 bucket where Kinesis Data Firehose backs up incoming data.
- BufferingHints DeliveryStreamElasticsearchBufferingHints
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch Logs logging options for the delivery stream.
- ClusterEndpoint string
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- DocumentIdOptions DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- DomainArn string
- The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
- IndexRotationPeriod DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriod
- The frequency of Elasticsearch index rotation. If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- RetryOptions DeliveryStreamElasticsearchRetryOptions
- The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
- S3BackupMode DeliveryStreamElasticsearchDestinationConfigurationS3BackupMode
- The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3). You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- TypeName string
- The Elasticsearch type name that Amazon ES adds to documents when indexing data.
- VpcConfiguration DeliveryStreamVpcConfiguration
- The details of the VPC of the Amazon ES destination.
- indexName String
- The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
- roleArn String
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Controlling Access with Amazon Kinesis Data Firehose .
- s3Configuration DeliveryStreamS3DestinationConfiguration
- The S3 bucket where Kinesis Data Firehose backs up incoming data.
- bufferingHints DeliveryStreamElasticsearchBufferingHints
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch Logs logging options for the delivery stream.
- clusterEndpoint String
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn String
- The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
- indexRotationPeriod DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriod
- The frequency of Elasticsearch index rotation. If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide.
- processingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions DeliveryStreamElasticsearchRetryOptions
- The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
- s3BackupMode DeliveryStreamElasticsearchDestinationConfigurationS3BackupMode
- The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3). You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- typeName String
- The Elasticsearch type name that Amazon ES adds to documents when indexing data.
- vpcConfiguration DeliveryStreamVpcConfiguration
- The details of the VPC of the Amazon ES destination.
- indexName string
- The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
- roleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Controlling Access with Amazon Kinesis Data Firehose .
- s3Configuration DeliveryStreamS3DestinationConfiguration
- The S3 bucket where Kinesis Data Firehose backs up incoming data.
- bufferingHints DeliveryStreamElasticsearchBufferingHints
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch Logs logging options for the delivery stream.
- clusterEndpoint string
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn string
- The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
- indexRotationPeriod DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriod
- The frequency of Elasticsearch index rotation. If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide.
- processingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions DeliveryStreamElasticsearchRetryOptions
- The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
- s3BackupMode DeliveryStreamElasticsearchDestinationConfigurationS3BackupMode
- The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3). You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- typeName string
- The Elasticsearch type name that Amazon ES adds to documents when indexing data.
- vpcConfiguration DeliveryStreamVpcConfiguration
- The details of the VPC of the Amazon ES destination.
- index_name str
- The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
- role_arn str
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Controlling Access with Amazon Kinesis Data Firehose .
- s3_configuration DeliveryStreamS3DestinationConfiguration
- The S3 bucket where Kinesis Data Firehose backs up incoming data.
- buffering_hints DeliveryStreamElasticsearchBufferingHints
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch Logs logging options for the delivery stream.
- cluster_endpoint str
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- document_id_options DeliveryStreamDocumentIdOptions
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domain_arn str
- The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
- index_rotation_period DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriod
- The frequency of Elasticsearch index rotation. If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide.
- processing_configuration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- retry_options DeliveryStreamElasticsearchRetryOptions
- The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
- s3_backup_mode DeliveryStreamElasticsearchDestinationConfigurationS3BackupMode
- The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3). You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- type_name str
- The Elasticsearch type name that Amazon ES adds to documents when indexing data.
- vpc_configuration DeliveryStreamVpcConfiguration
- The details of the VPC of the Amazon ES destination.
- indexName String
- The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
- roleArn String
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Controlling Access with Amazon Kinesis Data Firehose .
- s3Configuration Property Map
- The S3 bucket where Kinesis Data Firehose backs up incoming data.
- bufferingHints Property Map
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
- cloudWatchLoggingOptions Property Map
- The Amazon CloudWatch Logs logging options for the delivery stream.
- clusterEndpoint String
- The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions Property Map
- Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn String
- The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
- indexRotationPeriod "NoRotation" | "OneHour" | "OneDay" | "OneWeek" | "OneMonth"
- The frequency of Elasticsearch index rotation. If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide.
- processingConfiguration Property Map
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions Property Map
- The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
- s3BackupMode "FailedDocumentsOnly" | "AllDocuments"
- The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3). You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- typeName String
- The Elasticsearch type name that Amazon ES adds to documents when indexing data.
- vpcConfiguration Property Map
- The details of the VPC of the Amazon ES destination.
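Pulling the properties above together: `IndexName`, `RoleArn`, and `S3Configuration` are the required fields, and exactly one of `ClusterEndpoint` / `DomainArn` should identify the cluster. A minimal validation sketch with hypothetical ARNs and index names (plain dicts stand in for the typed Args classes; `validate_es_destination` is our helper, not part of the API):

```python
def validate_es_destination(cfg: dict) -> None:
    # The three required properties of the Elasticsearch destination config.
    for required in ("index_name", "role_arn", "s3_configuration"):
        if required not in cfg:
            raise ValueError(f"missing required field: {required}")
    # The docs say to specify either ClusterEndpoint or DomainARN.
    endpoints = [k for k in ("cluster_endpoint", "domain_arn") if cfg.get(k)]
    if len(endpoints) != 1:
        raise ValueError("specify exactly one of cluster_endpoint or domain_arn")

es_destination = {
    "index_name": "app-logs",                                   # hypothetical
    "role_arn": "arn:aws:iam::123456789012:role/firehose-es",   # hypothetical
    "s3_configuration": {                                       # backup bucket
        "bucket_arn": "arn:aws:s3:::firehose-backup-bucket",
        "role_arn": "arn:aws:iam::123456789012:role/firehose-s3",
    },
    "domain_arn": "arn:aws:es:us-east-1:123456789012:domain/logs",
    "s3_backup_mode": "FailedDocumentsOnly",
}
validate_es_destination(es_destination)  # passes
```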
DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriod, DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriodArgs                
- NoRotation 
- NoRotation
- OneHour 
- OneHour
- OneDay 
- OneDay
- OneWeek 
- OneWeek
- OneMonth 
- OneMonth
- DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriodNoRotation
- NoRotation
- DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriodOneHour
- OneHour
- DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriodOneDay
- OneDay
- DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriodOneWeek
- OneWeek
- DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriodOneMonth
- OneMonth
- NoRotation 
- NoRotation
- OneHour 
- OneHour
- OneDay 
- OneDay
- OneWeek 
- OneWeek
- OneMonth 
- OneMonth
- NoRotation 
- NoRotation
- OneHour 
- OneHour
- OneDay 
- OneDay
- OneWeek 
- OneWeek
- OneMonth 
- OneMonth
- NO_ROTATION
- NoRotation
- ONE_HOUR
- OneHour
- ONE_DAY
- OneDay
- ONE_WEEK
- OneWeek
- ONE_MONTH
- OneMonth
- "NoRotation" 
- NoRotation
- "OneHour" 
- OneHour
- "OneDay" 
- OneDay
- "OneWeek" 
- OneWeek
- "OneMonth" 
- OneMonth
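The effect of each rotation period is to append a slice of the UTC arrival timestamp to the index name. The patterns below are an assumption based on the Firehose developer guide (OneWeek, which uses a year-week suffix, is omitted); verify against the linked docs before relying on the exact suffix shapes:

```python
from datetime import datetime, timezone

# Assumed timestamp-suffix patterns per rotation period (illustrative).
ROTATION_FORMATS = {
    "NoRotation": None,       # index name used as-is
    "OneHour": "%Y-%m-%d-%H",
    "OneDay": "%Y-%m-%d",
    "OneMonth": "%Y-%m",
}

def rotated_index_name(base: str, period: str, arrival: datetime) -> str:
    fmt = ROTATION_FORMATS[period]
    if fmt is None:
        return base
    return f"{base}-{arrival.strftime(fmt)}"

ts = datetime(2024, 3, 15, 7, tzinfo=timezone.utc)
rotated_index_name("app-logs", "OneDay", ts)  # "app-logs-2024-03-15"
```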
DeliveryStreamElasticsearchDestinationConfigurationS3BackupMode, DeliveryStreamElasticsearchDestinationConfigurationS3BackupModeArgs              
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments 
- AllDocuments
- DeliveryStreamElasticsearchDestinationConfigurationS3BackupModeFailedDocumentsOnly
- FailedDocumentsOnly
- DeliveryStreamElasticsearchDestinationConfigurationS3BackupModeAllDocuments
- AllDocuments
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments 
- AllDocuments
- FailedDocumentsOnly
- FailedDocumentsOnly
- AllDocuments 
- AllDocuments
- FAILED_DOCUMENTS_ONLY
- FailedDocumentsOnly
- ALL_DOCUMENTS
- AllDocuments
- "FailedDocumentsOnly"
- FailedDocumentsOnly
- "AllDocuments" 
- AllDocuments
DeliveryStreamElasticsearchRetryOptions, DeliveryStreamElasticsearchRetryOptionsArgs          
- DurationInSeconds int
- After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt). If Kinesis Data Firehose can't deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
- DurationInSeconds int
- After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt). If Kinesis Data Firehose can't deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
- durationInSeconds Integer
- After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt). If Kinesis Data Firehose can't deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
- durationInSeconds number
- After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt). If Kinesis Data Firehose can't deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
- duration_in_seconds int
- After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt). If Kinesis Data Firehose can't deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
- durationInSeconds Number
- After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt). If Kinesis Data Firehose can't deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
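The key point above is that `DurationInSeconds` is a total window, not a per-attempt timeout. A simplified sketch of that semantics (helper name and simplification are ours; real Firehose retry scheduling is internal):

```python
def retry_outcome(elapsed_seconds: int, duration_in_seconds: int = 300) -> str:
    # Within the total retry window (which includes the first attempt),
    # Firehose keeps retrying; once the window is exhausted, the record is
    # written to the backup S3 bucket instead of the ES destination.
    return "retry" if elapsed_seconds <= duration_in_seconds else "backup_s3"

retry_outcome(60)   # "retry": still inside the 300 s window
retry_outcome(400)  # "backup_s3": window exhausted
```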
DeliveryStreamEncryptionConfiguration, DeliveryStreamEncryptionConfigurationArgs        
- KmsEncryptionConfig Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamKmsEncryptionConfig
- The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- NoEncryptionConfig Pulumi.AwsNative.KinesisFirehose.DeliveryStreamEncryptionConfigurationNoEncryptionConfig
- Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- KmsEncryptionConfig DeliveryStreamKmsEncryptionConfig
- The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- NoEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig
- Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- kmsEncryptionConfig DeliveryStreamKmsEncryptionConfig
- The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- noEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig
- Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- kmsEncryptionConfig DeliveryStreamKmsEncryptionConfig
- The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- noEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig
- Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- kms_encryption_config DeliveryStreamKmsEncryptionConfig
- The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- no_encryption_config DeliveryStreamEncryptionConfigurationNoEncryptionConfig
- Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- kmsEncryptionConfig Property Map
- The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- noEncryptionConfig "NoEncryption"
- Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
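Since one field enables KMS encryption and the other disables encryption, the two are naturally alternatives. A plain-dict sketch of both shapes (the nested KMS key field name and the ARN are illustrative; check the DeliveryStreamKmsEncryptionConfig docs for the exact property name in your SDK version):

```python
# KMS-encrypted variant: supplies the key Amazon S3 uses to encrypt the data.
kms_encrypted = {
    "kms_encryption_config": {
        # Field name and ARN are hypothetical placeholders.
        "kms_key_arn": "arn:aws:kms:us-east-1:123456789012:key/example",
    },
}
# Unencrypted variant: the NoEncryption enum value disables encryption.
unencrypted = {"no_encryption_config": "NoEncryption"}

def check_encryption(cfg: dict) -> None:
    # Treat the two fields as mutually exclusive alternatives.
    if "kms_encryption_config" in cfg and "no_encryption_config" in cfg:
        raise ValueError("set kms_encryption_config or no_encryption_config, not both")

check_encryption(kms_encrypted)
check_encryption(unencrypted)
```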
DeliveryStreamEncryptionConfigurationInput, DeliveryStreamEncryptionConfigurationInputArgs          
- KeyType Pulumi.AwsNative.KinesisFirehose.DeliveryStreamEncryptionConfigurationInputKeyType
- Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- KeyArn string
- If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- KeyType DeliveryStreamEncryptionConfigurationInputKeyType
- Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- KeyArn string
- If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- keyType DeliveryStreamEncryptionConfigurationInputKeyType
- Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- keyArn String
- If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- keyType DeliveryStreamEncryptionConfigurationInputKeyType
- Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- keyArn string
- If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- key_type DeliveryStreamEncryptionConfigurationInputKeyType
- Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- key_arn str
- If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- keyType "AWS_OWNED_CMK" | "CUSTOMER_MANAGED_CMK"
- Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- keyArn String
- If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
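The KeyType/KeyArn coupling described above can be captured in a small validation sketch: CUSTOMER_MANAGED_CMK requires the CMK's ARN, while AWS_OWNED_CMK works without one because Firehose supplies a service-account CMK (the helper name is ours, not part of the API):

```python
from typing import Optional

def validate_sse_input(key_type: str, key_arn: Optional[str] = None) -> dict:
    # Only the two documented key types are valid.
    if key_type not in ("AWS_OWNED_CMK", "CUSTOMER_MANAGED_CMK"):
        raise ValueError(f"unknown key_type: {key_type}")
    # A customer-managed CMK must be identified by its ARN.
    if key_type == "CUSTOMER_MANAGED_CMK" and not key_arn:
        raise ValueError("CUSTOMER_MANAGED_CMK requires key_arn")
    cfg = {"key_type": key_type}
    if key_arn:
        cfg["key_arn"] = key_arn
    return cfg

validate_sse_input("AWS_OWNED_CMK")  # {'key_type': 'AWS_OWNED_CMK'}
```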
DeliveryStreamEncryptionConfigurationInputKeyType, DeliveryStreamEncryptionConfigurationInputKeyTypeArgs              
- AwsOwnedCmk
- AWS_OWNED_CMK
- CustomerManagedCmk
- CUSTOMER_MANAGED_CMK
- DeliveryStreamEncryptionConfigurationInputKeyTypeAwsOwnedCmk
- AWS_OWNED_CMK
- DeliveryStreamEncryptionConfigurationInputKeyTypeCustomerManagedCmk
- CUSTOMER_MANAGED_CMK
- AwsOwnedCmk
- AWS_OWNED_CMK
- CustomerManagedCmk
- CUSTOMER_MANAGED_CMK
- AwsOwnedCmk
- AWS_OWNED_CMK
- CustomerManagedCmk
- CUSTOMER_MANAGED_CMK
- AWS_OWNED_CMK
- AWS_OWNED_CMK
- CUSTOMER_MANAGED_CMK
- CUSTOMER_MANAGED_CMK
- "AWS_OWNED_CMK"
- AWS_OWNED_CMK
- "CUSTOMER_MANAGED_CMK"
- CUSTOMER_MANAGED_CMK
DeliveryStreamEncryptionConfigurationNoEncryptionConfig, DeliveryStreamEncryptionConfigurationNoEncryptionConfigArgs              
- NoEncryption 
- NoEncryption
- DeliveryStreamEncryptionConfigurationNoEncryptionConfigNoEncryption
- NoEncryption
- NoEncryption 
- NoEncryption
- NoEncryption 
- NoEncryption
- NO_ENCRYPTION
- NoEncryption
- "NoEncryption" 
- NoEncryption
DeliveryStreamExtendedS3DestinationConfiguration, DeliveryStreamExtendedS3DestinationConfigurationArgs          
- BucketArn string
- The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- RoleArn string
- The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints
- The buffering option.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- CompressionFormat Pulumi.AwsNative.KinesisFirehose.DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
- The compression format. If no value is specified, the default is UNCOMPRESSED.
- CustomTimeZone string
- The time zone you prefer. UTC is the default.
- DataFormatConversionConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDataFormatConversionConfiguration
- The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- DynamicPartitioningConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDynamicPartitioningConfiguration
- The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- EncryptionConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamEncryptionConfiguration
- The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- ErrorOutputPrefix string
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- FileExtension string
- Specify a file extension. It will override the default file extension.
- Prefix string
- The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- S3BackupConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- The configuration for backup in Amazon S3.
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- BucketArn string
- The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- RoleArn string
- The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- BufferingHints DeliveryStreamBufferingHints
- The buffering option.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- CompressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
- The compression format. If no value is specified, the default is UNCOMPRESSED.
- CustomTimeZone string
- The time zone you prefer. UTC is the default.
- DataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration
- The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- DynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration
- The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- EncryptionConfiguration DeliveryStreamEncryptionConfiguration
- The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- ErrorOutputPrefix string
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- FileExtension string
- Specify a file extension. It will override the default file extension.
- Prefix string
- The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- S3BackupConfiguration DeliveryStreamS3DestinationConfiguration
- The configuration for backup in Amazon S3.
- S3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- bucketArn String
- The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- roleArn String
- The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- bufferingHints DeliveryStreamBufferingHints
- The buffering option.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- compressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
- The compression format. If no value is specified, the default is UNCOMPRESSED.
- customTimeZone String
- The time zone you prefer. UTC is the default.
- dataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration
- The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration
- The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryptionConfiguration DeliveryStreamEncryptionConfiguration
- The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- errorOutputPrefix String
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- fileExtension String
- Specify a file extension. It will override the default file extension.
- prefix String
- The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- processingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- s3BackupConfiguration DeliveryStreamS3DestinationConfiguration
- The configuration for backup in Amazon S3.
- s3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- bucketArn string
- The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- roleArn string
- The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- bufferingHints DeliveryStreamBufferingHints
- The buffering option.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- compressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
- The compression format. If no value is specified, the default is UNCOMPRESSED.
- customTimeZone string
- The time zone you prefer. UTC is the default.
- dataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration
- The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration
- The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryptionConfiguration DeliveryStreamEncryptionConfiguration
- The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- errorOutputPrefix string
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- fileExtension string
- Specify a file extension. It will override the default file extension.
- prefix string
- The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- processingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- s3BackupConfiguration DeliveryStreamS3DestinationConfiguration
- The configuration for backup in Amazon S3.
- s3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- bucket_arn str
- The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- role_arn str
- The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- buffering_hints DeliveryStreamBufferingHints
- The buffering option.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- compression_format DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
- The compression format. If no value is specified, the default is UNCOMPRESSED.
- custom_time_zone str
- The time zone you prefer. UTC is the default.
- data_format_conversion_configuration DeliveryStreamDataFormatConversionConfiguration
- The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamic_partitioning_configuration DeliveryStreamDynamicPartitioningConfiguration
- The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryption_configuration DeliveryStreamEncryptionConfiguration
- The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- error_output_prefix str
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- file_extension str
- Specify a file extension. It will override the default file extension.
- prefix str
- The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- processing_configuration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- s3_backup_configuration DeliveryStreamS3DestinationConfiguration
- The configuration for backup in Amazon S3.
- s3_backup_mode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- bucketArn String
- The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- roleArn String
- The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- bufferingHints Property Map
- The buffering option.
- cloudWatchLoggingOptions Property Map
- The Amazon CloudWatch logging options for your Firehose stream.
- compressionFormat "UNCOMPRESSED" | "GZIP" | "ZIP" | "Snappy" | "HADOOP_SNAPPY"
- The compression format. If no value is specified, the default is UNCOMPRESSED.
- customTimeZone String
- The time zone you prefer. UTC is the default.
- dataFormatConversionConfiguration Property Map
- The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamicPartitioningConfiguration Property Map
- The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryptionConfiguration Property Map
- The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- errorOutputPrefix String
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- fileExtension String
- Specify a file extension. It will override the default file extension.
- prefix String
- The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- processingConfiguration Property Map
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- s3BackupConfiguration Property Map
- The configuration for backup in Amazon S3.
- s3BackupMode "Disabled" | "Enabled"
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
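As a sketch of how the properties above fit together, the hypothetical helper below (not part of the Pulumi SDK) assembles the dict-shaped `extended_s3_destination_configuration` that Pulumi's Python SDK accepts, validating the two enum-typed properties documented in this section. The bucket and role ARNs are made-up placeholders.

```python
# Hypothetical sketch (not the Pulumi SDK): build the dict-shaped
# extended_s3_destination_configuration, checking the documented enum values.
COMPRESSION_FORMATS = {"UNCOMPRESSED", "GZIP", "ZIP", "Snappy", "HADOOP_SNAPPY"}
S3_BACKUP_MODES = {"Disabled", "Enabled"}


def extended_s3_destination(
    bucket_arn: str,
    role_arn: str,
    compression_format: str = "UNCOMPRESSED",  # documented default
    s3_backup_mode: str = "Disabled",
) -> dict:
    if compression_format not in COMPRESSION_FORMATS:
        raise ValueError(f"invalid compression format: {compression_format}")
    if s3_backup_mode not in S3_BACKUP_MODES:
        raise ValueError(f"invalid S3 backup mode: {s3_backup_mode}")
    return {
        "bucket_arn": bucket_arn,
        "role_arn": role_arn,
        "compression_format": compression_format,
        "s3_backup_mode": s3_backup_mode,
    }


cfg = extended_s3_destination(
    bucket_arn="arn:aws:s3:::example-bucket",            # hypothetical bucket
    role_arn="arn:aws:iam::111122223333:role/firehose",  # hypothetical role
    compression_format="GZIP",
)
```

Note the documented asymmetry of s3BackupMode: backup can be enabled after creation, but once enabled it cannot be disabled again.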
DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat, DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatArgs              
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy 
- HADOOP_SNAPPY
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatUncompressed
- UNCOMPRESSED
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatGzip
- GZIP
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatZip
- ZIP
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatSnappy
- Snappy
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatHadoopSnappy
- HADOOP_SNAPPY
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy 
- HADOOP_SNAPPY
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy 
- HADOOP_SNAPPY
- UNCOMPRESSED
- UNCOMPRESSED
- GZIP
- GZIP
- ZIP
- ZIP
- SNAPPY
- Snappy
- HADOOP_SNAPPY
- HADOOP_SNAPPY
- "UNCOMPRESSED"
- UNCOMPRESSED
- "GZIP"
- GZIP
- "ZIP"
- ZIP
- "Snappy"
- Snappy
- "HADOOP_SNAPPY"
- HADOOP_SNAPPY
DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode, DeliveryStreamExtendedS3DestinationConfigurationS3BackupModeArgs              
- Disabled
- Disabled
- Enabled
- Enabled
- DeliveryStreamExtendedS3DestinationConfigurationS3BackupModeDisabled
- Disabled
- DeliveryStreamExtendedS3DestinationConfigurationS3BackupModeEnabled
- Enabled
- Disabled
- Disabled
- Enabled
- Enabled
- Disabled
- Disabled
- Enabled
- Enabled
- DISABLED
- Disabled
- ENABLED
- Enabled
- "Disabled"
- Disabled
- "Enabled"
- Enabled
DeliveryStreamHiveJsonSerDe, DeliveryStreamHiveJsonSerDeArgs            
- TimestampFormats List<string>
- Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- TimestampFormats []string
- Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- timestampFormats List<String>
- Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- timestampFormats string[]
- Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- timestamp_formats Sequence[str]
- Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- timestampFormats List<String>
- Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
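To make the special `millis` value concrete: Firehose performs this parsing server-side, but the hypothetical snippet below illustrates what interpreting a field as epoch milliseconds means, alongside a dict-shaped `timestamp_formats` list mixing a Joda-style pattern with `millis`. The pattern string and helper are illustrative assumptions, not SDK code.

```python
from datetime import datetime, timezone

# Dict-shaped HiveJsonSerDe input: one Joda-style pattern plus the special
# "millis" value for epoch-millisecond timestamps.
hive_json_serde = {"timestamp_formats": ["yyyy-MM-dd'T'HH:mm:ss", "millis"]}


def parse_millis(value: str) -> datetime:
    """Hypothetical illustration of the 'millis' format: interpret the raw
    field as milliseconds since the Unix epoch (Firehose does this itself)."""
    return datetime.fromtimestamp(int(value) / 1000.0, tz=timezone.utc)


ts = parse_millis("1700000000000")
```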
DeliveryStreamHttpEndpointCommonAttribute, DeliveryStreamHttpEndpointCommonAttributeArgs            
- AttributeName string
- The name of the HTTP endpoint common attribute.
- AttributeValue string
- The value of the HTTP endpoint common attribute.
- AttributeName string
- The name of the HTTP endpoint common attribute.
- AttributeValue string
- The value of the HTTP endpoint common attribute.
- attributeName String
- The name of the HTTP endpoint common attribute.
- attributeValue String
- The value of the HTTP endpoint common attribute.
- attributeName string
- The name of the HTTP endpoint common attribute.
- attributeValue string
- The value of the HTTP endpoint common attribute.
- attribute_name str
- The name of the HTTP endpoint common attribute.
- attribute_value str
- The value of the HTTP endpoint common attribute.
- attributeName String
- The name of the HTTP endpoint common attribute.
- attributeValue String
- The value of the HTTP endpoint common attribute.
DeliveryStreamHttpEndpointConfiguration, DeliveryStreamHttpEndpointConfigurationArgs          
- url str
- The URL of the HTTP endpoint selected as the destination.
- access_key str
- The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
- name str
- The name of the HTTP endpoint selected as the destination.
DeliveryStreamHttpEndpointDestinationConfiguration, DeliveryStreamHttpEndpointDestinationConfigurationArgs            
- EndpointConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointConfiguration
- The configuration of the HTTP endpoint selected as the destination.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- Describes the configuration of a destination in Amazon S3.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints
- The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- Describes the data processing configuration.
- RequestConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointRequestConfiguration
- The configuration of the request sent to the HTTP endpoint specified as the destination.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRetryOptions
- Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- RoleArn string
- Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- S3BackupMode string
- Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for HTTP Endpoint destination.
- EndpointConfiguration DeliveryStreamHttpEndpointConfiguration
- The configuration of the HTTP endpoint selected as the destination.
- S3Configuration DeliveryStreamS3DestinationConfiguration
- Describes the configuration of a destination in Amazon S3.
- BufferingHints DeliveryStreamBufferingHints
- The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- Describes the data processing configuration.
- RequestConfiguration DeliveryStreamHttpEndpointRequestConfiguration
- The configuration of the request sent to the HTTP endpoint specified as the destination.
- RetryOptions DeliveryStreamRetryOptions
- Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- RoleArn string
- Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- S3BackupMode string
- Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for HTTP Endpoint destination.
- endpointConfiguration DeliveryStreamHttpEndpointConfiguration
- The configuration of the HTTP endpoint selected as the destination.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- Describes the configuration of a destination in Amazon S3.
- bufferingHints DeliveryStreamBufferingHints
- The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- processingConfiguration DeliveryStreamProcessingConfiguration
- Describes the data processing configuration.
- requestConfiguration DeliveryStreamHttpEndpointRequestConfiguration
- The configuration of the request sent to the HTTP endpoint specified as the destination.
- retryOptions DeliveryStreamRetryOptions
- Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- roleArn String
- Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3BackupMode String
- Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for HTTP Endpoint destination.
- endpointConfiguration DeliveryStreamHttpEndpointConfiguration
- The configuration of the HTTP endpoint selected as the destination.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- Describes the configuration of a destination in Amazon S3.
- bufferingHints DeliveryStreamBufferingHints
- The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- processingConfiguration DeliveryStreamProcessingConfiguration
- Describes the data processing configuration.
- requestConfiguration DeliveryStreamHttpEndpointRequestConfiguration
- The configuration of the request sent to the HTTP endpoint specified as the destination.
- retryOptions DeliveryStreamRetryOptions
- Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- roleArn string
- Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3BackupMode string
- Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for HTTP Endpoint destination.
- endpoint_configuration DeliveryStreamHttpEndpointConfiguration
- The configuration of the HTTP endpoint selected as the destination.
- s3_configuration DeliveryStreamS3DestinationConfiguration
- Describes the configuration of a destination in Amazon S3.
- buffering_hints DeliveryStreamBufferingHints
- The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- Describes the Amazon CloudWatch logging options for your delivery stream.
- processing_configuration DeliveryStreamProcessingConfiguration
- Describes the data processing configuration.
- request_configuration DeliveryStreamHttpEndpointRequestConfiguration
- The configuration of the request sent to the HTTP endpoint specified as the destination.
- retry_options DeliveryStreamRetryOptions
- Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- role_arn str
- Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3_backup_mode str
- Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for HTTP Endpoint destination.
- endpointConfiguration Property Map
- The configuration of the HTTP endpoint selected as the destination.
- s3Configuration Property Map
- Describes the configuration of a destination in Amazon S3.
- bufferingHints Property Map
- The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- cloudWatchLoggingOptions Property Map
- Describes the Amazon CloudWatch logging options for your delivery stream.
- processingConfiguration Property Map
- Describes the data processing configuration.
- requestConfiguration Property Map
- The configuration of the request sent to the HTTP endpoint specified as the destination.
- retryOptions Property Map
- Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- roleArn String
- Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3BackupMode String
- Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- secretsManagerConfiguration Property Map
- The configuration that defines how you access secrets for HTTP Endpoint destination.
DeliveryStreamHttpEndpointRequestConfiguration, DeliveryStreamHttpEndpointRequestConfigurationArgs            
- CommonAttributes List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointCommonAttribute>
- Describes the metadata sent to the HTTP endpoint destination.
- ContentEncoding Pulumi.AwsNative.KinesisFirehose.DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
- Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
- CommonAttributes []DeliveryStreamHttpEndpointCommonAttribute
- Describes the metadata sent to the HTTP endpoint destination.
- ContentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
- Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
- commonAttributes List<DeliveryStreamHttpEndpointCommonAttribute>
- Describes the metadata sent to the HTTP endpoint destination.
- contentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
- Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
- commonAttributes DeliveryStreamHttpEndpointCommonAttribute[]
- Describes the metadata sent to the HTTP endpoint destination.
- contentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
- Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
- common_attributes Sequence[DeliveryStreamHttpEndpointCommonAttribute]
- Describes the metadata sent to the HTTP endpoint destination.
- content_encoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
- Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
- commonAttributes List<Property Map>
- Describes the metadata sent to the HTTP endpoint destination.
- contentEncoding "NONE" | "GZIP"
- Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
DeliveryStreamHttpEndpointRequestConfigurationContentEncoding, DeliveryStreamHttpEndpointRequestConfigurationContentEncodingArgs                
- None
- NONE
- Gzip
- GZIP
- DeliveryStreamHttpEndpointRequestConfigurationContentEncodingNone
- NONE
- DeliveryStreamHttpEndpointRequestConfigurationContentEncodingGzip
- GZIP
- None
- NONE
- Gzip
- GZIP
- None
- NONE
- Gzip
- GZIP
- NONE
- NONE
- GZIP
- GZIP
- "NONE"
- NONE
- "GZIP"
- GZIP
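The ContentEncoding values above control whether Firehose gzip-compresses the request body before sending it to the HTTP endpoint. As a minimal illustration of the effect (the actual compression is performed by the Firehose service; `encode_body` is a hypothetical helper, not part of any SDK):

```python
import gzip

def encode_body(payload: bytes, content_encoding: str = "NONE") -> bytes:
    """Mimic the effect of ContentEncoding: GZIP compresses the request
    body, NONE sends it unchanged. Illustrative only."""
    if content_encoding == "GZIP":
        return gzip.compress(payload)
    if content_encoding == "NONE":
        return payload
    raise ValueError(f"unsupported ContentEncoding: {content_encoding}")

body = b'{"event": "click"}' * 100
compressed = encode_body(body, "GZIP")
# Round-trips losslessly, and repetitive JSON compresses well.
assert gzip.decompress(compressed) == body
assert len(compressed) < len(body)
```

The endpoint must advertise support for the chosen encoding; when in doubt, NONE is the safe default.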
DeliveryStreamIcebergDestinationConfiguration, DeliveryStreamIcebergDestinationConfigurationArgs          
- CatalogConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCatalogConfiguration
- Configuration describing where the destination Apache Iceberg Tables are persisted.
- RoleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- AppendOnly bool
- Describes whether all incoming data for this delivery stream will be append only (inserts only, not updates and deletes) for Iceberg delivery. This feature is only applicable for Apache Iceberg Tables. The default value is false. If you set this value to true, Firehose automatically increases the throughput limit of a stream based on the throttling levels of the stream. If you set this parameter to true for a stream with updates and deletes, you will see out-of-order delivery.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- DestinationTableConfigurationList List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDestinationTableConfiguration>
- Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if table-specific configuration is not provided here.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRetryOptions
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamIcebergDestinationConfigurationS3BackupMode
- Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly.
- CatalogConfiguration DeliveryStreamCatalogConfiguration
- Configuration describing where the destination Apache Iceberg Tables are persisted.
- RoleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables.
- S3Configuration DeliveryStreamS3DestinationConfiguration
- AppendOnly bool
- Describes whether all incoming data for this delivery stream will be append only (inserts only, not updates and deletes) for Iceberg delivery. This feature is only applicable for Apache Iceberg Tables. The default value is false. If you set this value to true, Firehose automatically increases the throughput limit of a stream based on the throttling levels of the stream. If you set this parameter to true for a stream with updates and deletes, you will see out-of-order delivery.
- BufferingHints DeliveryStreamBufferingHints
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- DestinationTableConfigurationList []DeliveryStreamDestinationTableConfiguration
- Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if table-specific configuration is not provided here.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- RetryOptions DeliveryStreamRetryOptions
- S3BackupMode DeliveryStreamIcebergDestinationConfigurationS3BackupMode
- Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly.
- catalogConfiguration DeliveryStreamCatalogConfiguration
- Configuration describing where the destination Apache Iceberg Tables are persisted.
- roleArn String
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- appendOnly Boolean
- Describes whether all incoming data for this delivery stream will be append only (inserts only, not updates and deletes) for Iceberg delivery. This feature is only applicable for Apache Iceberg Tables. The default value is false. If you set this value to true, Firehose automatically increases the throughput limit of a stream based on the throttling levels of the stream. If you set this parameter to true for a stream with updates and deletes, you will see out-of-order delivery.
- bufferingHints DeliveryStreamBufferingHints
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- destinationTableConfigurationList List<DeliveryStreamDestinationTableConfiguration>
- Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if table-specific configuration is not provided here.
- processingConfiguration DeliveryStreamProcessingConfiguration
- retryOptions DeliveryStreamRetryOptions
- s3BackupMode DeliveryStreamIcebergDestinationConfigurationS3BackupMode
- Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly.
- catalogConfiguration DeliveryStreamCatalogConfiguration
- Configuration describing where the destination Apache Iceberg Tables are persisted.
- roleArn string
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- appendOnly boolean
- Describes whether all incoming data for this delivery stream will be append only (inserts only, not updates and deletes) for Iceberg delivery. This feature is only applicable for Apache Iceberg Tables. The default value is false. If you set this value to true, Firehose automatically increases the throughput limit of a stream based on the throttling levels of the stream. If you set this parameter to true for a stream with updates and deletes, you will see out-of-order delivery.
- bufferingHints DeliveryStreamBufferingHints
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- destinationTableConfigurationList DeliveryStreamDestinationTableConfiguration[]
- Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if table-specific configuration is not provided here.
- processingConfiguration DeliveryStreamProcessingConfiguration
- retryOptions DeliveryStreamRetryOptions
- s3BackupMode DeliveryStreamIcebergDestinationConfigurationS3BackupMode
- Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly.
- catalog_configuration DeliveryStreamCatalogConfiguration
- Configuration describing where the destination Apache Iceberg Tables are persisted.
- role_arn str
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables.
- s3_configuration DeliveryStreamS3DestinationConfiguration
- append_only bool
- Describes whether all incoming data for this delivery stream will be append only (inserts only, not updates and deletes) for Iceberg delivery. This feature is only applicable for Apache Iceberg Tables. The default value is false. If you set this value to true, Firehose automatically increases the throughput limit of a stream based on the throttling levels of the stream. If you set this parameter to true for a stream with updates and deletes, you will see out-of-order delivery.
- buffering_hints DeliveryStreamBufferingHints
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- destination_table_configuration_list Sequence[DeliveryStreamDestinationTableConfiguration]
- Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if table-specific configuration is not provided here.
- processing_configuration DeliveryStreamProcessingConfiguration
- retry_options DeliveryStreamRetryOptions
- s3_backup_mode DeliveryStreamIcebergDestinationConfigurationS3BackupMode
- Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly.
- catalogConfiguration Property Map
- Configuration describing where the destination Apache Iceberg Tables are persisted.
- roleArn String
- The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables.
- s3Configuration Property Map
- appendOnly Boolean
- Describes whether all incoming data for this delivery stream will be append only (inserts only, not updates and deletes) for Iceberg delivery. This feature is only applicable for Apache Iceberg Tables. The default value is false. If you set this value to true, Firehose automatically increases the throughput limit of a stream based on the throttling levels of the stream. If you set this parameter to true for a stream with updates and deletes, you will see out-of-order delivery.
- bufferingHints Property Map
- cloudWatchLoggingOptions Property Map
- destinationTableConfigurationList List<Property Map>
- Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if table-specific configuration is not provided here.
- processingConfiguration Property Map
- retryOptions Property Map
- s3BackupMode "AllData" | "FailedDataOnly"
- Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly.
DeliveryStreamIcebergDestinationConfigurationS3BackupMode, DeliveryStreamIcebergDestinationConfigurationS3BackupModeArgs
- AllData
- AllData
- FailedDataOnly
- FailedDataOnly
- DeliveryStreamIcebergDestinationConfigurationS3BackupModeAllData
- AllData
- DeliveryStreamIcebergDestinationConfigurationS3BackupModeFailedDataOnly
- FailedDataOnly
- AllData
- AllData
- FailedDataOnly
- FailedDataOnly
- AllData
- AllData
- FailedDataOnly
- FailedDataOnly
- ALL_DATA
- AllData
- FAILED_DATA_ONLY
- FailedDataOnly
- "AllData"
- AllData
- "FailedDataOnly"
- FailedDataOnly
DeliveryStreamInputFormatConfiguration, DeliveryStreamInputFormatConfigurationArgs          
- Deserializer Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDeserializer
- Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
- Deserializer DeliveryStreamDeserializer
- Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
- deserializer DeliveryStreamDeserializer
- Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
- deserializer DeliveryStreamDeserializer
- Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
- deserializer DeliveryStreamDeserializer
- Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
- deserializer Property Map
- Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
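The either/or rule above (exactly one deserializer, never both) can be checked client-side before the service rejects the request. A minimal sketch; `validate_deserializer` is a hypothetical helper, not part of the Pulumi SDK:

```python
from typing import Optional

def validate_deserializer(hive_json_ser_de: Optional[dict],
                          open_x_json_ser_de: Optional[dict]) -> dict:
    """Enforce the rule stated above: the server rejects an
    InputFormatConfiguration where both deserializers are non-null."""
    if hive_json_ser_de is not None and open_x_json_ser_de is not None:
        raise ValueError(
            "specify either HiveJsonSerDe or OpenXJsonSerDe, not both")
    return {"HiveJsonSerDe": hive_json_ser_de,
            "OpenXJsonSerDe": open_x_json_ser_de}

cfg = validate_deserializer(None, {"CaseInsensitive": True})
```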
DeliveryStreamKinesisStreamSourceConfiguration, DeliveryStreamKinesisStreamSourceConfigurationArgs            
- KinesisStreamArn string
- The ARN of the source Kinesis data stream.
- RoleArn string
- The ARN of the role that provides access to the source Kinesis data stream.
- KinesisStreamArn string
- The ARN of the source Kinesis data stream.
- RoleArn string
- The ARN of the role that provides access to the source Kinesis data stream.
- kinesisStreamArn String
- The ARN of the source Kinesis data stream.
- roleArn String
- The ARN of the role that provides access to the source Kinesis data stream.
- kinesisStreamArn string
- The ARN of the source Kinesis data stream.
- roleArn string
- The ARN of the role that provides access to the source Kinesis data stream.
- kinesis_stream_arn str
- The ARN of the source Kinesis data stream.
- role_arn str
- The ARN of the role that provides access to the source Kinesis data stream.
- kinesisStreamArn String
- The ARN of the source Kinesis data stream.
- roleArn String
- The ARN of the role that provides access to the source Kinesis data stream.
DeliveryStreamKmsEncryptionConfig, DeliveryStreamKmsEncryptionConfigArgs          
- AwskmsKeyArn string
- The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
- AwskmsKeyArn string
- The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
- awskmsKeyArn String
- The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
- awskmsKeyArn string
- The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
- awskms_key_arn str
- The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
- awskmsKeyArn String
- The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
- The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
DeliveryStreamMskSourceConfiguration, DeliveryStreamMskSourceConfigurationArgs          
- AuthenticationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAuthenticationConfiguration
- The authentication configuration of the Amazon MSK cluster.
- MskClusterArn string
- The ARN of the Amazon MSK cluster.
- TopicName string
- The topic name within the Amazon MSK cluster.
- ReadFromTimestamp string
- The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp when Firehose becomes Active. If you want to create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).
- AuthenticationConfiguration DeliveryStreamAuthenticationConfiguration
- The authentication configuration of the Amazon MSK cluster.
- MskClusterArn string
- The ARN of the Amazon MSK cluster.
- TopicName string
- The topic name within the Amazon MSK cluster.
- ReadFromTimestamp string
- The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp when Firehose becomes Active. If you want to create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).
- authenticationConfiguration DeliveryStreamAuthenticationConfiguration
- The authentication configuration of the Amazon MSK cluster.
- mskClusterArn String
- The ARN of the Amazon MSK cluster.
- topicName String
- The topic name within the Amazon MSK cluster.
- readFromTimestamp String
- The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp when Firehose becomes Active. If you want to create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).
- authenticationConfiguration DeliveryStreamAuthenticationConfiguration
- The authentication configuration of the Amazon MSK cluster.
- mskClusterArn string
- The ARN of the Amazon MSK cluster.
- topicName string
- The topic name within the Amazon MSK cluster.
- readFromTimestamp string
- The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp when Firehose becomes Active. If you want to create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).
- authentication_configuration DeliveryStreamAuthenticationConfiguration
- The authentication configuration of the Amazon MSK cluster.
- msk_cluster_arn str
- The ARN of the Amazon MSK cluster.
- topic_name str
- The topic name within the Amazon MSK cluster.
- read_from_timestamp str
- The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp when Firehose becomes Active. If you want to create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).
- authenticationConfiguration Property Map
- The authentication configuration of the Amazon MSK cluster.
- mskClusterArn String
- The ARN of the Amazon MSK cluster.
- topicName String
- The topic name within the Amazon MSK cluster.
- readFromTimestamp String
- The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp when Firehose becomes Active. If you want to create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).
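ReadFromTimestamp is a UTC timestamp string, with the Unix epoch serving as the sentinel for the Earliest position. A small sketch of producing that string; `read_from_timestamp` is a hypothetical helper for illustration only:

```python
from datetime import datetime, timezone

def read_from_timestamp(start: datetime) -> str:
    """Format a UTC start position in the ISO-8601 form the
    ReadFromTimestamp parameter expects. Passing the Unix epoch
    requests the Earliest offset, per the description above."""
    return start.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

earliest = read_from_timestamp(datetime(1970, 1, 1, tzinfo=timezone.utc))
# earliest == "1970-01-01T00:00:00Z"
```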
DeliveryStreamOpenXJsonSerDe, DeliveryStreamOpenXJsonSerDeArgs            
- CaseInsensitive bool
- When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- ColumnToJsonKeyMappings Dictionary<string, string>
- Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- ConvertDotsInJsonKeysToUnderscores bool
- When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
- CaseInsensitive bool
- When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- ColumnToJsonKeyMappings map[string]string
- Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- ConvertDotsInJsonKeysToUnderscores bool
- When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
- caseInsensitive Boolean
- When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- columnToJsonKeyMappings Map<String,String>
- Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- convertDotsInJsonKeysToUnderscores Boolean
- When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
- caseInsensitive boolean
- When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- columnToJsonKeyMappings {[key: string]: string}
- Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- convertDotsInJsonKeysToUnderscores boolean
- When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
- case_insensitive bool
- When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- column_to_json_key_mappings Mapping[str, str]
- Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- convert_dots_in_json_keys_to_underscores bool
- When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
- caseInsensitive Boolean
- When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- columnToJsonKeyMappings Map<String>
- Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- convertDotsInJsonKeysToUnderscores Boolean
- When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
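The three OpenX JSON SerDe options above compose in a predictable way: an explicit column mapping wins, otherwise keys are lowercased and dots replaced. A hypothetical emulation for intuition (the real transformation happens inside Firehose; `normalize_keys` is not an SDK function):

```python
from typing import Dict, Optional

def normalize_keys(record: Dict[str, object],
                   case_insensitive: bool = True,
                   convert_dots_to_underscores: bool = False,
                   column_to_json_key_mappings: Optional[Dict[str, str]] = None
                   ) -> Dict[str, object]:
    """Sketch of how the OpenX JSON SerDe options map JSON keys to
    Hive column names: an explicit column->key mapping takes
    precedence; other keys are lowercased (the default) and may have
    dots replaced with underscores."""
    mappings = column_to_json_key_mappings or {}
    key_to_column = {v: k for k, v in mappings.items()}  # json key -> column
    out: Dict[str, object] = {}
    for key, value in record.items():
        if key in key_to_column:
            key = key_to_column[key]      # e.g. "timestamp" -> "ts"
        else:
            if case_insensitive:
                key = key.lower()
            if convert_dots_to_underscores:
                key = key.replace(".", "_")   # e.g. "a.b" -> "a_b"
        out[key] = value
    return out

row = normalize_keys({"timestamp": 1, "A.B": 2},
                     convert_dots_to_underscores=True,
                     column_to_json_key_mappings={"ts": "timestamp"})
# row == {"ts": 1, "a_b": 2}
```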
DeliveryStreamOrcSerDe, DeliveryStreamOrcSerDeArgs          
- BlockSize intBytes 
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- BloomFilter List<string>Columns 
- The column names for which you want Firehose to create bloom filters. The default is null.
- BloomFilter doubleFalse Positive Probability 
- The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- Compression string
- The compression code to use over data blocks. The default is SNAPPY.
- DictionaryKey doubleThreshold 
- Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- EnablePadding bool
- Set this to trueto indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default isfalse.
- FormatVersion string
- The version of the file to write. The possible values are V0_11andV0_12. The default isV0_12.
- PaddingTolerance double
- A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- RowIndexStride int
- The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- StripeSizeBytes int
- The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- BlockSizeBytes int
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- BloomFilterColumns []string
- The column names for which you want Firehose to create bloom filters. The default is null.
- BloomFilterFalsePositiveProbability float64
- The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- Compression string
- The compression code to use over data blocks. The default is SNAPPY.
- DictionaryKeyThreshold float64
- Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- EnablePadding bool
- Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- FormatVersion string
- The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- PaddingTolerance float64
- A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- RowIndexStride int
- The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- StripeSizeBytes int
- The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- blockSizeBytes Integer
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloomFilterColumns List<String>
- The column names for which you want Firehose to create bloom filters. The default is null.
- bloomFilterFalsePositiveProbability Double
- The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression String
- The compression code to use over data blocks. The default is SNAPPY.
- dictionaryKeyThreshold Double
- Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enablePadding Boolean
- Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- formatVersion String
- The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- paddingTolerance Double
- A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- rowIndexStride Integer
- The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- stripeSizeBytes Integer
- The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- blockSizeBytes number
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloomFilterColumns string[]
- The column names for which you want Firehose to create bloom filters. The default is null.
- bloomFilterFalsePositiveProbability number
- The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression string
- The compression code to use over data blocks. The default is SNAPPY.
- dictionaryKeyThreshold number
- Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enablePadding boolean
- Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- formatVersion string
- The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- paddingTolerance number
- A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- rowIndexStride number
- The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- stripeSizeBytes number
- The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- block_size_bytes int
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloom_filter_columns Sequence[str]
- The column names for which you want Firehose to create bloom filters. The default is null.
- bloom_filter_false_positive_probability float
- The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression str
- The compression code to use over data blocks. The default is SNAPPY.
- dictionary_key_threshold float
- Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enable_padding bool
- Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- format_version str
- The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- padding_tolerance float
- A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- row_index_stride int
- The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- stripe_size_bytes int
- The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- blockSizeBytes Number
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloomFilterColumns List<String>
- The column names for which you want Firehose to create bloom filters. The default is null.
- bloomFilterFalsePositiveProbability Number
- The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression String
- The compression code to use over data blocks. The default is SNAPPY.
- dictionaryKeyThreshold Number
- Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enablePadding Boolean
- Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- formatVersion String
- The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- paddingTolerance Number
- A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- rowIndexStride Number
- The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- stripeSizeBytes Number
- The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
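The padding arithmetic in the paddingTolerance description can be checked directly. The helper below is purely illustrative (it is not part of any Pulumi SDK): it computes the maximum bytes reserved for block padding as a fraction of the stripe size.

```python
MIB = 1024 * 1024

def max_padding_bytes(stripe_size_bytes: int, padding_tolerance: float) -> int:
    # Padding tolerance is a decimal fraction of the ORC stripe size.
    return int(stripe_size_bytes * padding_tolerance)

# With the defaults (64 MiB stripes, 0.05 tolerance), roughly 3.2 MiB of
# padding may be reserved inside each 256 MiB HDFS block.
reserved = max_padding_bytes(64 * MIB, 0.05)
```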
DeliveryStreamOutputFormatConfiguration, DeliveryStreamOutputFormatConfigurationArgs          
- Serializer
Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSerializer
- Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- Serializer
DeliveryStreamSerializer
- Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- serializer
DeliveryStreamSerializer
- Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- serializer
DeliveryStreamSerializer
- Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- serializer
DeliveryStreamSerializer
- Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- serializer Property Map
- Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
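The either-or rule above can also be mirrored client-side before a deployment, so a misconfigured serializer fails fast locally instead of at the service. This is a minimal sketch, not SDK code:

```python
def choose_serializer(orc_ser_de=None, parquet_ser_de=None):
    # The service rejects requests where both serializers are non-null,
    # so enforce the same constraint locally.
    if orc_ser_de is not None and parquet_ser_de is not None:
        raise ValueError("specify either the ORC SerDe or the Parquet SerDe, not both")
    if orc_ser_de is not None:
        return {"orc_ser_de": orc_ser_de}
    return {"parquet_ser_de": parquet_ser_de}
```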
DeliveryStreamParquetSerDe, DeliveryStreamParquetSerDeArgs          
- BlockSizeBytes int
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- Compression string
- The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- EnableDictionaryCompression bool
- Indicates whether to enable dictionary compression.
- MaxPaddingBytes int
- The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- PageSizeBytes int
- The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- WriterVersion string
- Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- BlockSizeBytes int
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- Compression string
- The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- EnableDictionaryCompression bool
- Indicates whether to enable dictionary compression.
- MaxPaddingBytes int
- The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- PageSizeBytes int
- The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- WriterVersion string
- Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- blockSizeBytes Integer
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression String
- The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enableDictionaryCompression Boolean
- Indicates whether to enable dictionary compression.
- maxPaddingBytes Integer
- The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- pageSizeBytes Integer
- The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writerVersion String
- Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- blockSizeBytes number
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression string
- The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enableDictionaryCompression boolean
- Indicates whether to enable dictionary compression.
- maxPaddingBytes number
- The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- pageSizeBytes number
- The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writerVersion string
- Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- block_size_bytes int
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression str
- The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enable_dictionary_compression bool
- Indicates whether to enable dictionary compression.
- max_padding_bytes int
- The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- page_size_bytes int
- The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writer_version str
- Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- blockSizeBytes Number
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression String
- The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enableDictionaryCompression Boolean
- Indicates whether to enable dictionary compression.
- maxPaddingBytes Number
- The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- pageSizeBytes Number
- The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writerVersion String
- Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
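A small pre-flight check against the documented ParquetSerDe limits can catch bad values before the provider rejects them. This helper is a sketch for illustration only; it simply restates the minimums and value sets from the property descriptions above:

```python
MIB = 1024 * 1024
KIB = 1024

def check_parquet_ser_de(block_size_bytes=256 * MIB,
                         page_size_bytes=1 * MIB,
                         compression="SNAPPY"):
    # Defaults and limits are taken from the ParquetSerDe descriptions above.
    if block_size_bytes < 64 * MIB:
        raise ValueError("blockSizeBytes minimum is 64 MiB")
    if page_size_bytes < 64 * KIB:
        raise ValueError("pageSizeBytes minimum is 64 KiB")
    if compression not in {"UNCOMPRESSED", "SNAPPY", "GZIP"}:
        raise ValueError("compression must be UNCOMPRESSED, SNAPPY, or GZIP")
    return True
```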
DeliveryStreamProcessingConfiguration, DeliveryStreamProcessingConfigurationArgs        
- Enabled bool
- Indicates whether data processing is enabled (true) or disabled (false).
- Processors
List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessor>
- The data processors.
- Enabled bool
- Indicates whether data processing is enabled (true) or disabled (false).
- Processors
[]DeliveryStreamProcessor
- The data processors.
- enabled Boolean
- Indicates whether data processing is enabled (true) or disabled (false).
- processors
List<DeliveryStreamProcessor>
- The data processors.
- enabled boolean
- Indicates whether data processing is enabled (true) or disabled (false).
- processors
DeliveryStreamProcessor[]
- The data processors.
- enabled bool
- Indicates whether data processing is enabled (true) or disabled (false).
- processors
Sequence[DeliveryStreamProcessor]
- The data processors.
- enabled Boolean
- Indicates whether data processing is enabled (true) or disabled (false).
- processors List<Property Map>
- The data processors.
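Putting the shapes together, a processing configuration with one Lambda processor looks like this when written as plain data (the function ARN below is a placeholder, not a real resource):

```python
# Shape of a ProcessingConfiguration with a single Lambda processor.
# LambdaArn is a documented Firehose processor parameter name; the
# ARN value here is a placeholder for illustration.
processing_configuration = {
    "enabled": True,
    "processors": [
        {
            "type": "Lambda",
            "parameters": [
                {"parameter_name": "LambdaArn",
                 "parameter_value": "arn:aws:lambda:us-east-1:111122223333:function:my-transform"},
            ],
        },
    ],
}
```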
DeliveryStreamProcessor, DeliveryStreamProcessorArgs      
- Type
Pulumi.AwsNative.KinesisFirehose.DeliveryStreamProcessorType
- The type of processor. Valid values: Lambda.
- Parameters
List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessorParameter>
- The processor parameters.
- Type
DeliveryStreamProcessorType
- The type of processor. Valid values: Lambda.
- Parameters
[]DeliveryStreamProcessorParameter
- The processor parameters.
- type
DeliveryStreamProcessorType
- The type of processor. Valid values: Lambda.
- parameters
List<DeliveryStreamProcessorParameter>
- The processor parameters.
- type
DeliveryStreamProcessorType
- The type of processor. Valid values: Lambda.
- parameters
DeliveryStreamProcessorParameter[]
- The processor parameters.
- type
DeliveryStreamProcessorType
- The type of processor. Valid values: Lambda.
- parameters
Sequence[DeliveryStreamProcessorParameter]
- The processor parameters.
- type
"RecordDeAggregation" | "Decompression" | "CloudWatchLogProcessing" | "Lambda" | "MetadataExtraction" | "AppendDelimiterToRecord"
- The type of processor. Valid values: Lambda.
- parameters List<Property Map>
- The processor parameters.
DeliveryStreamProcessorParameter, DeliveryStreamProcessorParameterArgs        
- ParameterName string
- The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for the BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and up to 3 MB. The default buffering hint is 1 MB for all destinations, except Splunk. For Splunk, the default buffering hint is 256 KB.
- ParameterValue string
- The parameter value.
- ParameterName string
- The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for the BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and up to 3 MB. The default buffering hint is 1 MB for all destinations, except Splunk. For Splunk, the default buffering hint is 256 KB.
- ParameterValue string
- The parameter value.
- parameterName String
- The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for the BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and up to 3 MB. The default buffering hint is 1 MB for all destinations, except Splunk. For Splunk, the default buffering hint is 256 KB.
- parameterValue String
- The parameter value.
- parameterName string
- The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for the BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and up to 3 MB. The default buffering hint is 1 MB for all destinations, except Splunk. For Splunk, the default buffering hint is 256 KB.
- parameterValue string
- The parameter value.
- parameter_name str
- The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for the BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and up to 3 MB. The default buffering hint is 1 MB for all destinations, except Splunk. For Splunk, the default buffering hint is 256 KB.
- parameter_value str
- The parameter value.
- parameterName String
- The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for the BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and up to 3 MB. The default buffering hint is 1 MB for all destinations, except Splunk. For Splunk, the default buffering hint is 256 KB.
- parameterValue String
- The parameter value.
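The buffering defaults and limits described above translate naturally into a parameter list. The helper below is hypothetical and only illustrates the documented names (NumberOfRetries, BufferIntervalInSeconds, BufferSizeInMBs) and their ranges:

```python
def buffering_parameters(interval_seconds=60, size_mbs=1.0, retries=3):
    # Defaults mirror the description above: 60 s interval, 1 MB size,
    # 3 retries; BufferSizeInMBs must stay between 0.2 and 3.
    if not 0.2 <= size_mbs <= 3:
        raise ValueError("BufferSizeInMBs ranges between 0.2 MB and 3 MB")
    return [
        {"parameter_name": "BufferIntervalInSeconds", "parameter_value": str(interval_seconds)},
        {"parameter_name": "BufferSizeInMBs", "parameter_value": str(size_mbs)},
        {"parameter_name": "NumberOfRetries", "parameter_value": str(retries)},
    ]
```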
DeliveryStreamProcessorType, DeliveryStreamProcessorTypeArgs        
- RecordDeAggregation
- RecordDeAggregation
- Decompression
- Decompression
- CloudWatchLogProcessing
- CloudWatchLogProcessing
- Lambda
- Lambda
- MetadataExtraction
- MetadataExtraction
- AppendDelimiterToRecord
- AppendDelimiterToRecord
- DeliveryStreamProcessorTypeRecordDeAggregation
- RecordDeAggregation
- DeliveryStreamProcessorTypeDecompression
- Decompression
- DeliveryStreamProcessorTypeCloudWatchLogProcessing
- CloudWatchLogProcessing
- DeliveryStreamProcessorTypeLambda
- Lambda
- DeliveryStreamProcessorTypeMetadataExtraction
- MetadataExtraction
- DeliveryStreamProcessorTypeAppendDelimiterToRecord
- AppendDelimiterToRecord
- RecordDeAggregation
- RecordDeAggregation
- Decompression
- Decompression
- CloudWatchLogProcessing
- CloudWatchLogProcessing
- Lambda
- Lambda
- MetadataExtraction
- MetadataExtraction
- AppendDelimiterToRecord
- AppendDelimiterToRecord
- RecordDeAggregation
- RecordDeAggregation
- Decompression
- Decompression
- CloudWatchLogProcessing
- CloudWatchLogProcessing
- Lambda
- Lambda
- MetadataExtraction
- MetadataExtraction
- AppendDelimiterToRecord
- AppendDelimiterToRecord
- RECORD_DE_AGGREGATION
- RecordDeAggregation
- DECOMPRESSION
- Decompression
- CLOUD_WATCH_LOG_PROCESSING
- CloudWatchLogProcessing
- LAMBDA_
- Lambda
- METADATA_EXTRACTION
- MetadataExtraction
- APPEND_DELIMITER_TO_RECORD
- AppendDelimiterToRecord
- "RecordDeAggregation"
- RecordDeAggregation
- "Decompression"
- Decompression
- "CloudWatchLogProcessing"
- CloudWatchLogProcessing
- "Lambda"
- Lambda
- "MetadataExtraction"
- MetadataExtraction
- "AppendDelimiterToRecord"
- AppendDelimiterToRecord
DeliveryStreamRedshiftDestinationConfiguration, DeliveryStreamRedshiftDestinationConfigurationArgs          
- ClusterJdbcurl string
- The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- CopyCommand Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCopyCommand
- Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- RoleArn string
- The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
- S3Configuration
Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- The CloudWatch logging options for your Firehose stream.
- Password string
- The password for the Amazon Redshift user that you specified in the Username property.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRedshiftRetryOptions
- The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- S3BackupConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- The configuration for backup in Amazon S3.
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamRedshiftDestinationConfigurationS3BackupMode
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Amazon Redshift.
- Username string
- The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- ClusterJdbcurl string
- The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- CopyCommand DeliveryStreamCopyCommand
- Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- RoleArn string
- The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
- S3Configuration
DeliveryStreamS3DestinationConfiguration
- The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The CloudWatch logging options for your Firehose stream.
- Password string
- The password for the Amazon Redshift user that you specified in the Username property.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- RetryOptions DeliveryStreamRedshiftRetryOptions
- The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- S3BackupConfiguration DeliveryStreamS3DestinationConfiguration
- The configuration for backup in Amazon S3.
- S3BackupMode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Amazon Redshift.
- Username string
- The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- clusterJdbcurl String
- The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copyCommand DeliveryStream Copy Command 
- Configures the Amazon Redshift COPYcommand that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- roleArn String
- The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide .
- s3Configuration
DeliveryStream S3Destination Configuration 
- The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPYcommand to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specifySNAPPYorZIPbecause the Amazon RedshiftCOPYcommand doesn't support them.
- cloudWatch DeliveryLogging Options Stream Cloud Watch Logging Options 
- The CloudWatch logging options for your Firehose stream.
- password String
- The password for the Amazon Redshift user that you specified in the Usernameproperty.
- processingConfiguration DeliveryStream Processing Configuration 
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions DeliveryStream Redshift Retry Options 
- The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- s3BackupConfiguration DeliveryStream S3Destination Configuration 
- The configuration for backup in Amazon S3.
- s3BackupMode DeliveryStream Redshift Destination Configuration S3Backup Mode 
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- secretsManager DeliveryConfiguration Stream Secrets Manager Configuration 
- The configuration that defines how you access secrets for Amazon Redshift.
- username String
- The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- clusterJdbcurl string
- The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copyCommand DeliveryStream Copy Command 
- Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- roleArn string
- The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide .
- s3Configuration DeliveryStream S3Destination Configuration 
- The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- cloudWatchLoggingOptions DeliveryStream Cloud Watch Logging Options 
- The CloudWatch logging options for your Firehose stream.
- password string
- The password for the Amazon Redshift user that you specified in the Username property.
- processingConfiguration DeliveryStream Processing Configuration 
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions DeliveryStream Redshift Retry Options 
- The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- s3BackupConfiguration DeliveryStream S3Destination Configuration 
- The configuration for backup in Amazon S3.
- s3BackupMode DeliveryStream Redshift Destination Configuration S3Backup Mode 
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- secretsManagerConfiguration DeliveryStream Secrets Manager Configuration 
- The configuration that defines how you access secrets for Amazon Redshift.
- username string
- The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- cluster_jdbcurl str
- The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copy_command DeliveryStream Copy Command 
- Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- role_arn str
- The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide .
- s3_configuration DeliveryStream S3Destination Configuration 
- The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- cloud_watch_logging_options DeliveryStream Cloud Watch Logging Options 
- The CloudWatch logging options for your Firehose stream.
- password str
- The password for the Amazon Redshift user that you specified in the Username property.
- processing_configuration DeliveryStream Processing Configuration 
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- retry_options DeliveryStream Redshift Retry Options 
- The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- s3_backup_configuration DeliveryStream S3Destination Configuration 
- The configuration for backup in Amazon S3.
- s3_backup_mode DeliveryStream Redshift Destination Configuration S3Backup Mode 
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- secrets_manager_configuration DeliveryStream Secrets Manager Configuration 
- The configuration that defines how you access secrets for Amazon Redshift.
- username str
- The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- clusterJdbcurl String
- The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copyCommand Property Map
- Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- roleArn String
- The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide .
- s3Configuration Property Map
- The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- cloudWatchLoggingOptions Property Map
- The CloudWatch logging options for your Firehose stream.
- password String
- The password for the Amazon Redshift user that you specified in the Username property.
- processingConfiguration Property Map
- The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions Property Map
- The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- s3BackupConfiguration Property Map
- The configuration for backup in Amazon S3.
- s3BackupMode "Disabled" | "Enabled"
- The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
- secretsManagerConfiguration Property Map
- The configuration that defines how you access secrets for Amazon Redshift.
- username String
- The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
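Putting the Redshift destination properties above together: a minimal sketch of the property-map shape, written as a plain Python dict for illustration (not a runnable Pulumi program; all ARNs, names, and credentials below are hypothetical placeholders):

```python
# Sketch of a RedshiftDestinationConfiguration property map.
# Every ARN, name, and credential here is a hypothetical placeholder.
redshift_destination_configuration = {
    "clusterJdbcurl": "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev",
    "roleArn": "arn:aws:iam::111122223333:role/firehose-delivery-role",
    "username": "firehose_user",     # must have INSERT privileges on the target table
    "password": "example-password",  # alternatively, use secretsManagerConfiguration
    "copyCommand": {
        "dataTableName": "events",
        "copyOptions": "json 'auto'",
    },
    # Firehose stages data in this bucket first, then issues COPY from it.
    "s3Configuration": {
        "bucketArn": "arn:aws:s3:::example-staging-bucket",
        "roleArn": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        # GZIP is allowed; SNAPPY and ZIP are not, because the Redshift
        # COPY command can't read them from the intermediate bucket.
        "compressionFormat": "GZIP",
    },
}
```

The same shape maps directly onto the typed args classes in each SDK language shown above.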
DeliveryStreamRedshiftDestinationConfigurationS3BackupMode, DeliveryStreamRedshiftDestinationConfigurationS3BackupModeArgs              
- Disabled
- Disabled
- Enabled
- Enabled
- DeliveryStream Redshift Destination Configuration S3Backup Mode Disabled 
- Disabled
- DeliveryStream Redshift Destination Configuration S3Backup Mode Enabled 
- Enabled
- Disabled
- Disabled
- Enabled
- Enabled
- Disabled
- Disabled
- Enabled
- Enabled
- DISABLED
- Disabled
- ENABLED
- Enabled
- "Disabled"
- Disabled
- "Enabled"
- Enabled
DeliveryStreamRedshiftRetryOptions, DeliveryStreamRedshiftRetryOptionsArgs          
- DurationInSeconds int
- The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- DurationInSeconds int
- The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- durationInSeconds Integer
- The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- durationInSeconds number
- The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- duration_in_seconds int
- The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- durationInSeconds Number
- The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
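The retry rule described above (no retries when DurationInSeconds is 0, a 3600-second default otherwise, and no retry when the first attempt already exceeds the window) can be restated as a small helper. This is our own illustrative function, not part of the Firehose API:

```python
DEFAULT_RETRY_DURATION_SECONDS = 3600  # the documented default: 60 minutes

def should_retry(duration_in_seconds=None, first_attempt_elapsed_seconds=0):
    """Sketch of the documented retry rule for Redshift delivery.

    Returns False when retries are disabled (duration 0) or when the
    first delivery attempt already took longer than the retry window.
    """
    window = (DEFAULT_RETRY_DURATION_SECONDS
              if duration_in_seconds is None else duration_in_seconds)
    if window == 0:
        return False  # DurationInSeconds of 0 disables retries entirely
    return first_attempt_elapsed_seconds < window
```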
DeliveryStreamRetryOptions, DeliveryStreamRetryOptionsArgs        
- DurationInSeconds int
- The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- DurationInSeconds int
- The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- durationInSeconds Integer
- The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- durationInSeconds number
- The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- duration_in_seconds int
- The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- durationInSeconds Number
- The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
DeliveryStreamS3DestinationConfiguration, DeliveryStreamS3DestinationConfigurationArgs        
- BucketArn string
- The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- RoleArn string
- The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
- BufferingHints Pulumi.Aws Native. Kinesis Firehose. Inputs. Delivery Stream Buffering Hints 
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- CloudWatchLoggingOptions Pulumi.Aws Native. Kinesis Firehose. Inputs. Delivery Stream Cloud Watch Logging Options 
- The CloudWatch logging options for your Firehose stream.
- CompressionFormat Pulumi.Aws Native. Kinesis Firehose. Delivery Stream S3Destination Configuration Compression Format 
- The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
- EncryptionConfiguration Pulumi.Aws Native. Kinesis Firehose. Inputs. Delivery Stream Encryption Configuration 
- Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- ErrorOutputPrefix string
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
- Prefix string
- A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- BucketArn string
- The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- RoleArn string
- The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
- BufferingHints DeliveryStream Buffering Hints 
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- CloudWatchLoggingOptions DeliveryStream Cloud Watch Logging Options 
- The CloudWatch logging options for your Firehose stream.
- CompressionFormat DeliveryStream S3Destination Configuration Compression Format 
- The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
- EncryptionConfiguration DeliveryStream Encryption Configuration 
- Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- ErrorOutputPrefix string
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
- Prefix string
- A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- bucketArn String
- The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- roleArn String
- The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
- bufferingHints DeliveryStream Buffering Hints 
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloudWatchLoggingOptions DeliveryStream Cloud Watch Logging Options 
- The CloudWatch logging options for your Firehose stream.
- compressionFormat DeliveryStream S3Destination Configuration Compression Format 
- The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
- encryptionConfiguration DeliveryStream Encryption Configuration 
- Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- errorOutputPrefix String
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
- prefix String
- A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- bucketArn string
- The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- roleArn string
- The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
- bufferingHints DeliveryStream Buffering Hints 
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloudWatchLoggingOptions DeliveryStream Cloud Watch Logging Options 
- The CloudWatch logging options for your Firehose stream.
- compressionFormat DeliveryStream S3Destination Configuration Compression Format 
- The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
- encryptionConfiguration DeliveryStream Encryption Configuration 
- Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- errorOutputPrefix string
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
- prefix string
- A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- bucket_arn str
- The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- role_arn str
- The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
- buffering_hints DeliveryStream Buffering Hints 
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloud_watch_logging_options DeliveryStream Cloud Watch Logging Options 
- The CloudWatch logging options for your Firehose stream.
- compression_format DeliveryStream S3Destination Configuration Compression Format 
- The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
- encryption_configuration DeliveryStream Encryption Configuration 
- Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- error_output_prefix str
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
- prefix str
- A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- bucketArn String
- The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- roleArn String
- The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
- bufferingHints Property Map
- Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloudWatchLoggingOptions Property Map
- The CloudWatch logging options for your Firehose stream.
- compressionFormat "UNCOMPRESSED" | "GZIP" | "ZIP" | "Snappy" | "HADOOP_SNAPPY"
- The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
- encryptionConfiguration Property Map
- Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- errorOutputPrefix String
- A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
- prefix String
- A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
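As with the Redshift destination, the S3 destination properties can be sketched as a plain property map. The dict below is illustrative Python with hypothetical ARNs, not a runnable Pulumi program; prefix and errorOutputPrefix route successful and failed records to separate key prefixes in the same bucket:

```python
# Sketch of an S3DestinationConfiguration property map.
# Bucket and role ARNs are hypothetical placeholders.
s3_destination_configuration = {
    "bucketArn": "arn:aws:s3:::example-delivery-bucket",
    "roleArn": "arn:aws:iam::111122223333:role/firehose-delivery-role",
    # Buffer up to 5 MiB or 300 seconds, whichever comes first.
    "bufferingHints": {"intervalInSeconds": 300, "sizeInMBs": 5},
    "compressionFormat": "GZIP",
    # Successful records land under "prefix"; failed records under
    # "errorOutputPrefix". Both appear immediately after the bucket name.
    "prefix": "events/",
    "errorOutputPrefix": "errors/",
}
```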
DeliveryStreamS3DestinationConfigurationCompressionFormat, DeliveryStreamS3DestinationConfigurationCompressionFormatArgs            
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy 
- HADOOP_SNAPPY
- DeliveryStream S3Destination Configuration Compression Format Uncompressed 
- UNCOMPRESSED
- DeliveryStream S3Destination Configuration Compression Format Gzip 
- GZIP
- DeliveryStream S3Destination Configuration Compression Format Zip 
- ZIP
- DeliveryStream S3Destination Configuration Compression Format Snappy 
- Snappy
- DeliveryStream S3Destination Configuration Compression Format Hadoop Snappy 
- HADOOP_SNAPPY
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy 
- HADOOP_SNAPPY
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy 
- HADOOP_SNAPPY
- UNCOMPRESSED
- UNCOMPRESSED
- GZIP
- GZIP
- ZIP
- ZIP
- SNAPPY
- Snappy
- HADOOP_SNAPPY
- HADOOP_SNAPPY
- "UNCOMPRESSED"
- UNCOMPRESSED
- "GZIP"
- GZIP
- "ZIP"
- ZIP
- "Snappy"
- Snappy
- "HADOOP_SNAPPY"
- HADOOP_SNAPPY
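Since only the five enum values above are accepted, and two of them (Snappy and ZIP) are rejected for the intermediate bucket of a Redshift destination, a small validation helper can make the constraint concrete. The helper name is ours, not part of any SDK:

```python
# The five documented values of
# DeliveryStreamS3DestinationConfigurationCompressionFormat.
COMPRESSION_FORMATS = {"UNCOMPRESSED", "GZIP", "ZIP", "Snappy", "HADOOP_SNAPPY"}

# Formats the Redshift COPY command cannot read from the intermediate
# S3 bucket, per the Redshift destination notes above.
REDSHIFT_UNSUPPORTED = {"Snappy", "ZIP"}

def check_compression_format(value, redshift_destination=False):
    """Validate a compressionFormat value (illustrative helper, not an API)."""
    if value not in COMPRESSION_FORMATS:
        raise ValueError(f"unknown compression format: {value!r}")
    if redshift_destination and value in REDSHIFT_UNSUPPORTED:
        raise ValueError(f"{value} is not supported by the Redshift COPY command")
    return value
```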
DeliveryStreamSchemaConfiguration, DeliveryStreamSchemaConfigurationArgs        
- CatalogId string
- The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- DatabaseName string
- Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- Region string
- If you don't specify an AWS Region, the default is the current Region.
- RoleArn string
- The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- TableName string
- Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- VersionId string
- Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- CatalogId string
- The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- DatabaseName string
- Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- Region string
- If you don't specify an AWS Region, the default is the current Region.
- RoleArn string
- The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- TableName string
- Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- VersionId string
- Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- catalogId String
- The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- databaseName String
- Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- region String
- If you don't specify an AWS Region, the default is the current Region.
- roleArn String
- The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- tableName String
- Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- versionId String
- Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- catalogId string
- The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- databaseName string
- Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- region string
- If you don't specify an AWS Region, the default is the current Region.
- roleArn string
- The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- tableName string
- Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- versionId string
- Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- catalog_id str
- The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- database_name str
- Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- region str
- If you don't specify an AWS Region, the default is the current Region.
- role_arn str
- The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- table_name str
- Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- version_id str
- Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- catalogId String
- The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- databaseName String
- Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- region String
- If you don't specify an AWS Region, the default is the current Region.
- roleArn String
- The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- tableName String
- Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- versionId String
- Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
DeliveryStreamSecretsManagerConfiguration, DeliveryStreamSecretsManagerConfigurationArgs          
- Enabled bool
- Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- RoleArn string
- Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- SecretArn string
- The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- Enabled bool
- Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- RoleArn string
- Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- SecretArn string
- The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- enabled Boolean
- Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- roleArn String
- Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- secretArn String
- The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- enabled boolean
- Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- roleArn string
- Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- secretArn string
- The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- enabled bool
- Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- role_arn str
- Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- secret_arn str
- The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- enabled Boolean
- Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- roleArn String
- Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- secretArn String
- The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
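The interaction between Enabled, RoleArn, and SecretArn described above can be illustrated with a small sketch (plain Python, not Pulumi code; the helper function and its name are hypothetical):

```python
def resolve_credentials(secrets_manager_config, destination_credentials):
    """Illustrates the documented precedence: when the secrets manager
    feature is enabled, the secret (and optional role) override the
    destination-specific credentials; otherwise Firehose falls back to
    the credentials in the destination configuration."""
    if secrets_manager_config.get("enabled"):
        return {
            "source": "secrets-manager",
            # SecretArn is required when Enabled is True
            "secret_arn": secrets_manager_config["secret_arn"],
            # a provided role overrides the destination-specific role
            "role_arn": secrets_manager_config.get("role_arn")
                        or destination_credentials.get("role_arn"),
        }
    return {"source": "destination-configuration", **destination_credentials}

# Placeholder ARNs for illustration only
config = {"enabled": True,
          "secret_arn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:firehose-demo",
          "role_arn": None}
creds = resolve_credentials(config, {"role_arn": "arn:aws:iam::111122223333:role/dest-role"})
```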
DeliveryStreamSerializer, DeliveryStreamSerializerArgs      
- OrcSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOrcSerDe
- A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- ParquetSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamParquetSerDe
- A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- OrcSerDe DeliveryStreamOrcSerDe
- A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- ParquetSerDe DeliveryStreamParquetSerDe
- A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- orcSerDe DeliveryStreamOrcSerDe
- A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- parquetSerDe DeliveryStreamParquetSerDe
- A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- orcSerDe DeliveryStreamOrcSerDe
- A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- parquetSerDe DeliveryStreamParquetSerDe
- A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- orc_ser_de DeliveryStreamOrcSerDe
- A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- parquet_ser_de DeliveryStreamParquetSerDe
- A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- orcSerDe Property Map
- A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- parquetSerDe Property Map
- A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet .
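A serializer block sets exactly one of the two properties above. A minimal sketch (plain Python dicts mirroring the documented property names; the helper function is hypothetical):

```python
def make_serializer(fmt):
    """Build a serializer block in the documented shape: exactly one of
    orcSerDe or parquetSerDe is set (property names per the reference above)."""
    if fmt == "orc":
        return {"orcSerDe": {}}       # empty dict -> ORC with default settings
    if fmt == "parquet":
        return {"parquetSerDe": {}}   # empty dict -> Parquet with default settings
    raise ValueError("serializer must be 'orc' or 'parquet'")
```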
DeliveryStreamSnowflakeBufferingHints, DeliveryStreamSnowflakeBufferingHintsArgs          
- IntervalInSeconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- SizeInMbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- IntervalInSeconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- SizeInMbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- intervalInSeconds Integer
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- sizeInMbs Integer
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- intervalInSeconds number
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- sizeInMbs number
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- interval_in_seconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- size_in_mbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- intervalInSeconds Number
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- sizeInMbs Number
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
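The documented defaults can be sketched as follows (plain Python; the helper name is hypothetical, and the property spellings follow the reference above):

```python
def snowflake_buffering_hints(interval_in_seconds=None, size_in_mbs=None):
    """Mirror the documented Snowflake buffering-hint defaults:
    0 seconds and 128 MB when the caller does not specify values."""
    return {
        "intervalInSeconds": 0 if interval_in_seconds is None else interval_in_seconds,
        "sizeInMbs": 128 if size_in_mbs is None else size_in_mbs,
    }
```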
DeliveryStreamSnowflakeDestinationConfiguration, DeliveryStreamSnowflakeDestinationConfigurationArgs          
- AccountUrl string
- URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- Database string
- All data in Snowflake is maintained in databases.
- RoleArn string
- The Amazon Resource Name (ARN) of the Snowflake role
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- Schema string
- Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- Table string
- All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeBufferingHints
- Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- ContentColumnName string
- The name of the record content column.
- DataLoadingOption Pulumi.AwsNative.KinesisFirehose.DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption
- Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- KeyPassphrase string
- Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- MetaDataColumnName string
- Specify a column name in the table where the metadata information has to be loaded. When you enable this field, you will see the following column in the Snowflake table, which differs based on the source type. For Direct PUT as source: { "firehoseDeliveryStreamName" : "streamname", "IngestionTime" : "timestamp" }. For Kinesis Data Stream as source: { "kinesisStreamName" : "streamname", "kinesisShardId" : "Id", "kinesisPartitionKey" : "key", "kinesisSequenceNumber" : "1234", "subsequenceNumber" : "2334", "IngestionTime" : "timestamp" }.
- PrivateKey string
- The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- Specifies configuration for Snowflake.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeRetryOptions
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode
- Choose an S3 backup mode.
- SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Snowflake.
- SnowflakeRoleConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeRoleConfiguration
- Optionally configure a Snowflake role. Otherwise the default user role will be used.
- SnowflakeVpcConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeVpcConfiguration
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- User string
- User login name for the Snowflake account.
- AccountUrl string
- URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- Database string
- All data in Snowflake is maintained in databases.
- RoleArn string
- The Amazon Resource Name (ARN) of the Snowflake role
- S3Configuration DeliveryStreamS3DestinationConfiguration
- Schema string
- Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- Table string
- All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- BufferingHints DeliveryStreamSnowflakeBufferingHints
- Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- ContentColumnName string
- The name of the record content column.
- DataLoadingOption DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption
- Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- KeyPassphrase string
- Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- MetaDataColumnName string
- Specify a column name in the table where the metadata information has to be loaded. When you enable this field, you will see the following column in the Snowflake table, which differs based on the source type. For Direct PUT as source: { "firehoseDeliveryStreamName" : "streamname", "IngestionTime" : "timestamp" }. For Kinesis Data Stream as source: { "kinesisStreamName" : "streamname", "kinesisShardId" : "Id", "kinesisPartitionKey" : "key", "kinesisSequenceNumber" : "1234", "subsequenceNumber" : "2334", "IngestionTime" : "timestamp" }.
- PrivateKey string
- The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- Specifies configuration for Snowflake.
- RetryOptions DeliveryStreamSnowflakeRetryOptions
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- S3BackupMode DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode
- Choose an S3 backup mode.
- SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Snowflake.
- SnowflakeRoleConfiguration DeliveryStreamSnowflakeRoleConfiguration
- Optionally configure a Snowflake role. Otherwise the default user role will be used.
- SnowflakeVpcConfiguration DeliveryStreamSnowflakeVpcConfiguration
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- User string
- User login name for the Snowflake account.
- accountUrl String
- URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- database String
- All data in Snowflake is maintained in databases.
- roleArn String
- The Amazon Resource Name (ARN) of the Snowflake role
- s3Configuration DeliveryStreamS3DestinationConfiguration
- schema String
- Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- table String
- All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- bufferingHints DeliveryStreamSnowflakeBufferingHints
- Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- contentColumnName String
- The name of the record content column.
- dataLoadingOption DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption
- Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- keyPassphrase String
- Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- metaDataColumnName String
- Specify a column name in the table where the metadata information has to be loaded. When you enable this field, you will see the following column in the Snowflake table, which differs based on the source type. For Direct PUT as source: { "firehoseDeliveryStreamName" : "streamname", "IngestionTime" : "timestamp" }. For Kinesis Data Stream as source: { "kinesisStreamName" : "streamname", "kinesisShardId" : "Id", "kinesisPartitionKey" : "key", "kinesisSequenceNumber" : "1234", "subsequenceNumber" : "2334", "IngestionTime" : "timestamp" }.
- privateKey String
- The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- processingConfiguration DeliveryStreamProcessingConfiguration
- Specifies configuration for Snowflake.
- retryOptions DeliveryStreamSnowflakeRetryOptions
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- s3BackupMode DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode
- Choose an S3 backup mode.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Snowflake.
- snowflakeRoleConfiguration DeliveryStreamSnowflakeRoleConfiguration
- Optionally configure a Snowflake role. Otherwise the default user role will be used.
- snowflakeVpcConfiguration DeliveryStreamSnowflakeVpcConfiguration
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- user String
- User login name for the Snowflake account.
- accountUrl string
- URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- database string
- All data in Snowflake is maintained in databases.
- roleArn string
- The Amazon Resource Name (ARN) of the Snowflake role
- s3Configuration DeliveryStreamS3DestinationConfiguration
- schema string
- Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- table string
- All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- bufferingHints DeliveryStreamSnowflakeBufferingHints
- Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- contentColumnName string
- The name of the record content column.
- dataLoadingOption DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption
- Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- keyPassphrase string
- Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- metaDataColumnName string
- Specify a column name in the table where the metadata information has to be loaded. When you enable this field, you will see the following column in the Snowflake table, which differs based on the source type. For Direct PUT as source: { "firehoseDeliveryStreamName" : "streamname", "IngestionTime" : "timestamp" }. For Kinesis Data Stream as source: { "kinesisStreamName" : "streamname", "kinesisShardId" : "Id", "kinesisPartitionKey" : "key", "kinesisSequenceNumber" : "1234", "subsequenceNumber" : "2334", "IngestionTime" : "timestamp" }.
- privateKey string
- The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- processingConfiguration DeliveryStreamProcessingConfiguration
- Specifies configuration for Snowflake.
- retryOptions DeliveryStreamSnowflakeRetryOptions
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- s3BackupMode DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode
- Choose an S3 backup mode.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Snowflake.
- snowflakeRoleConfiguration DeliveryStreamSnowflakeRoleConfiguration
- Optionally configure a Snowflake role. Otherwise the default user role will be used.
- snowflakeVpcConfiguration DeliveryStreamSnowflakeVpcConfiguration
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- user string
- User login name for the Snowflake account.
- account_url str
- URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- database str
- All data in Snowflake is maintained in databases.
- role_arn str
- The Amazon Resource Name (ARN) of the Snowflake role
- s3_configuration DeliveryStreamS3DestinationConfiguration
- schema str
- Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- table str
- All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- buffering_hints DeliveryStreamSnowflakeBufferingHints
- Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- content_column_name str
- The name of the record content column.
- data_loading_option DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption
- Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- key_passphrase str
- Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- meta_data_column_name str
- Specify a column name in the table where the metadata information has to be loaded. When you enable this field, you will see the following column in the Snowflake table, which differs based on the source type. For Direct PUT as source: { "firehoseDeliveryStreamName" : "streamname", "IngestionTime" : "timestamp" }. For Kinesis Data Stream as source: { "kinesisStreamName" : "streamname", "kinesisShardId" : "Id", "kinesisPartitionKey" : "key", "kinesisSequenceNumber" : "1234", "subsequenceNumber" : "2334", "IngestionTime" : "timestamp" }.
- private_key str
- The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- processing_configuration DeliveryStreamProcessingConfiguration
- Specifies configuration for Snowflake.
- retry_options DeliveryStreamSnowflakeRetryOptions
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- s3_backup_mode DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode
- Choose an S3 backup mode.
- secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Snowflake.
- snowflake_role_configuration DeliveryStreamSnowflakeRoleConfiguration
- Optionally configure a Snowflake role. Otherwise the default user role will be used.
- snowflake_vpc_configuration DeliveryStreamSnowflakeVpcConfiguration
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- user str
- User login name for the Snowflake account.
- accountUrl String
- URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- database String
- All data in Snowflake is maintained in databases.
- roleArn String
- The Amazon Resource Name (ARN) of the Snowflake role
- s3Configuration Property Map
- schema String
- Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views
- table String
- All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- bufferingHints Property Map
- Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- cloudWatchLoggingOptions Property Map
- contentColumnName String
- The name of the record content column.
- dataLoadingOption "JSON_MAPPING" | "VARIANT_CONTENT_MAPPING" | "VARIANT_CONTENT_AND_METADATA_MAPPING"
- Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- keyPassphrase String
- Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- metaDataColumnName String
- Specify a column name in the table where the metadata information has to be loaded. When you enable this field, you will see the following column in the Snowflake table, which differs based on the source type. For Direct PUT as source: { "firehoseDeliveryStreamName" : "streamname", "IngestionTime" : "timestamp" }. For Kinesis Data Stream as source: { "kinesisStreamName" : "streamname", "kinesisShardId" : "Id", "kinesisPartitionKey" : "key", "kinesisSequenceNumber" : "1234", "subsequenceNumber" : "2334", "IngestionTime" : "timestamp" }.
- privateKey String
- The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- processingConfiguration Property Map
- Specifies configuration for Snowflake.
- retryOptions Property Map
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- s3BackupMode "FailedDataOnly" | "AllData"
- Choose an S3 backup mode.
- secretsManagerConfiguration Property Map
- The configuration that defines how you access secrets for Snowflake.
- snowflakeRoleConfiguration Property Map
- Optionally configure a Snowflake role. Otherwise the default user role will be used.
- snowflakeVpcConfiguration Property Map
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- user String
- User login name for the Snowflake account.
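Putting the required properties together, a minimal Snowflake destination configuration can be sketched as plain Python dicts mirroring the documented shape (not actual Pulumi calls; all ARNs, names, and URLs below are placeholders, and the validation helper is hypothetical):

```python
# Required properties per the reference above; the remaining properties
# (bufferingHints, dataLoadingOption, s3BackupMode, ...) are optional.
REQUIRED = {"accountUrl", "database", "roleArn", "s3Configuration",
            "schema", "table", "user"}

def snowflake_destination(**props):
    """Return the configuration dict, rejecting it if any documented
    required property is missing."""
    missing = REQUIRED - props.keys()
    if missing:
        raise ValueError(f"missing required properties: {sorted(missing)}")
    return props

cfg = snowflake_destination(
    accountUrl="https://example-account.snowflakecomputing.com",  # placeholder
    database="ANALYTICS",
    roleArn="arn:aws:iam::111122223333:role/firehose-snowflake",  # placeholder
    s3Configuration={"bucketArn": "arn:aws:s3:::backup-bucket",   # placeholder
                     "roleArn": "arn:aws:iam::111122223333:role/firehose-s3"},
    schema="PUBLIC",
    table="EVENTS",
    user="FIREHOSE_USER",
    dataLoadingOption="JSON_MAPPING",   # optional; documented enum value
    s3BackupMode="FailedDataOnly",      # optional; documented enum value
)
```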
DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption, DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOptionArgs                
- JsonMapping 
- JSON_MAPPING
- VariantContentMapping
- VARIANT_CONTENT_MAPPING
- VariantContentAndMetadataMapping
- VARIANT_CONTENT_AND_METADATA_MAPPING
- DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOptionJsonMapping
- JSON_MAPPING
- DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOptionVariantContentMapping
- VARIANT_CONTENT_MAPPING
- DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOptionVariantContentAndMetadataMapping
- VARIANT_CONTENT_AND_METADATA_MAPPING
- JsonMapping
- JSON_MAPPING
- VariantContentMapping
- VARIANT_CONTENT_MAPPING
- VariantContentAndMetadataMapping
- VARIANT_CONTENT_AND_METADATA_MAPPING
- JsonMapping
- JSON_MAPPING
- VariantContentMapping
- VARIANT_CONTENT_MAPPING
- VariantContentAndMetadataMapping
- VARIANT_CONTENT_AND_METADATA_MAPPING
- JSON_MAPPING
- JSON_MAPPING
- VARIANT_CONTENT_MAPPING
- VARIANT_CONTENT_MAPPING
- VARIANT_CONTENT_AND_METADATA_MAPPING
- VARIANT_CONTENT_AND_METADATA_MAPPING
- "JSON_MAPPING"
- JSON_MAPPING
- "VARIANT_CONTENT_MAPPING"
- VARIANT_CONTENT_MAPPING
- "VARIANT_CONTENT_AND_METADATA_MAPPING"
- VARIANT_CONTENT_AND_METADATA_MAPPING
DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode, DeliveryStreamSnowflakeDestinationConfigurationS3BackupModeArgs              
- FailedDataOnly
- FailedDataOnly
- AllData
- AllData
- DeliveryStreamSnowflakeDestinationConfigurationS3BackupModeFailedDataOnly
- FailedDataOnly
- DeliveryStreamSnowflakeDestinationConfigurationS3BackupModeAllData
- AllData
- FailedDataOnly
- FailedDataOnly
- AllData
- AllData
- FailedDataOnly
- FailedDataOnly
- AllData
- AllData
- FAILED_DATA_ONLY
- FailedDataOnly
- ALL_DATA
- AllData
- "FailedDataOnly"
- FailedDataOnly
- "AllData" 
- AllData
DeliveryStreamSnowflakeRetryOptions, DeliveryStreamSnowflakeRetryOptionsArgs          
- DurationInSeconds int
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- DurationInSeconds int
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- durationInSeconds Integer
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- durationInSeconds number
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- duration_in_seconds int
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- durationInSeconds Number
- The time period where Firehose will retry sending data to the chosen HTTP endpoint.
DeliveryStreamSnowflakeRoleConfiguration, DeliveryStreamSnowflakeRoleConfigurationArgs          
- Enabled bool
- Enable Snowflake role
- SnowflakeRole string
- The Snowflake role you wish to configure
- Enabled bool
- Enable Snowflake role
- SnowflakeRole string
- The Snowflake role you wish to configure
- enabled Boolean
- Enable Snowflake role
- snowflakeRole String
- The Snowflake role you wish to configure
- enabled boolean
- Enable Snowflake role
- snowflakeRole string
- The Snowflake role you wish to configure
- enabled bool
- Enable Snowflake role
- snowflake_role str
- The Snowflake role you wish to configure
- enabled Boolean
- Enable Snowflake role
- snowflakeRole String
- The Snowflake role you wish to configure
DeliveryStreamSnowflakeVpcConfiguration, DeliveryStreamSnowflakeVpcConfigurationArgs          
- PrivateLinkVpceId string
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- PrivateLinkVpceId string
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- privateLinkVpceId String
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- privateLinkVpceId string
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- private_link_vpce_id str
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- privateLinkVpceId String
- The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
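As a sketch of how the two Snowflake helper types above fit together in Python: the role name and VPCE ID below are placeholder values, and a real `DeliveryStreamSnowflakeDestinationConfigurationArgs` requires additional fields (account URL, database, schema, table, role ARN, S3 configuration) that are elided here.

```python
import pulumi_aws_native as aws_native

# Illustrative fragment only; all identifiers below are placeholders.
snowflake_role = aws_native.kinesisfirehose.DeliveryStreamSnowflakeRoleConfigurationArgs(
    enabled=True,                      # enable the Snowflake role
    snowflake_role="FIREHOSE_LOADER",  # placeholder: the Snowflake role to configure
)

snowflake_vpc = aws_native.kinesisfirehose.DeliveryStreamSnowflakeVpcConfigurationArgs(
    # VPCE ID format: com.amazonaws.vpce.[region].vpce-svc-<[id]> (placeholder below)
    private_link_vpce_id="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
)
```

Both objects would then be passed as the `snowflake_role_configuration` and `snowflake_vpc_configuration` properties of the Snowflake destination configuration.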
DeliveryStreamSplunkBufferingHints, DeliveryStreamSplunkBufferingHintsArgs          
- IntervalInSeconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- SizeInMbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- IntervalInSeconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- SizeInMbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- intervalInSeconds Integer
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- sizeInMbs Integer
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- intervalInSeconds number
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- sizeInMbs number
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- interval_in_seconds int
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- size_in_mbs int
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- intervalInSeconds Number
- Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- sizeInMbs Number
- Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
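For reference, the Splunk buffering hints above map to a small args object in Python; the values shown are the documented defaults.

```python
import pulumi_aws_native as aws_native

# Buffer for up to 60 seconds or 5 MB, whichever comes first --
# these are the default values listed above.
hints = aws_native.kinesisfirehose.DeliveryStreamSplunkBufferingHintsArgs(
    interval_in_seconds=60,
    size_in_mbs=5,
)
```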
DeliveryStreamSplunkDestinationConfiguration, DeliveryStreamSplunkDestinationConfigurationArgs          
- HecEndpoint string
- The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- HecEndpointType Pulumi.AwsNative.KinesisFirehose.DeliveryStreamSplunkDestinationConfigurationHecEndpointType
- This type can be either Raw or Event.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- The configuration for the backup Amazon S3 location.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSplunkBufferingHints
- The buffering options. If no value is specified, the default values for Splunk are used.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- HecAcknowledgmentTimeoutInSeconds int
- The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- HecToken string
- This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- The data processing configuration.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSplunkRetryOptions
- The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- S3BackupMode string
- Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Splunk.
- HecEndpoint string
- The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- HecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType
- This type can be either Raw or Event.
- S3Configuration DeliveryStreamS3DestinationConfiguration
- The configuration for the backup Amazon S3 location.
- BufferingHints DeliveryStreamSplunkBufferingHints
- The buffering options. If no value is specified, the default values for Splunk are used.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- HecAcknowledgmentTimeoutInSeconds int
- The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- HecToken string
- This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration.
- RetryOptions DeliveryStreamSplunkRetryOptions
- The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- S3BackupMode string
- Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Splunk.
- hecEndpoint String
- The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType
- This type can be either Raw or Event.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- The configuration for the backup Amazon S3 location.
- bufferingHints DeliveryStreamSplunkBufferingHints
- The buffering options. If no value is specified, the default values for Splunk are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- hecAcknowledgmentTimeoutInSeconds Integer
- The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hecToken String
- This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration.
- retryOptions DeliveryStreamSplunkRetryOptions
- The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- s3BackupMode String
- Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Splunk.
- hecEndpoint string
- The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType
- This type can be either Raw or Event.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- The configuration for the backup Amazon S3 location.
- bufferingHints DeliveryStreamSplunkBufferingHints
- The buffering options. If no value is specified, the default values for Splunk are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- hecAcknowledgmentTimeoutInSeconds number
- The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hecToken string
- This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processingConfiguration DeliveryStreamProcessingConfiguration
- The data processing configuration.
- retryOptions DeliveryStreamSplunkRetryOptions
- The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- s3BackupMode string
- Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Splunk.
- hec_endpoint str
- The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hec_endpoint_type DeliveryStreamSplunkDestinationConfigurationHecEndpointType
- This type can be either Raw or Event.
- s3_configuration DeliveryStreamS3DestinationConfiguration
- The configuration for the backup Amazon S3 location.
- buffering_hints DeliveryStreamSplunkBufferingHints
- The buffering options. If no value is specified, the default values for Splunk are used.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- The Amazon CloudWatch logging options for your Firehose stream.
- hec_acknowledgment_timeout_in_seconds int
- The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hec_token str
- This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processing_configuration DeliveryStreamProcessingConfiguration
- The data processing configuration.
- retry_options DeliveryStreamSplunkRetryOptions
- The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- s3_backup_mode str
- Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration
- The configuration that defines how you access secrets for Splunk.
- hecEndpoint String
- The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hecEndpointType "Raw" | "Event"
- This type can be either Raw or Event.
- s3Configuration Property Map
- The configuration for the backup Amazon S3 location.
- bufferingHints Property Map
- The buffering options. If no value is specified, the default values for Splunk are used.
- cloudWatchLoggingOptions Property Map
- The Amazon CloudWatch logging options for your Firehose stream.
- hecAcknowledgmentTimeoutInSeconds Number
- The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hecToken String
- This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processingConfiguration Property Map
- The data processing configuration.
- retryOptions Property Map
- The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- s3BackupMode String
- Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- secretsManagerConfiguration Property Map
- The configuration that defines how you access secrets for Splunk.
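Putting the Splunk destination properties above together, here is a hedged Python sketch of a Firehose stream that delivers to a Splunk HEC endpoint. The endpoint URL, token, and ARNs are placeholders, not values from this page; the enum members used (`DIRECT_PUT`, `RAW`) are the Python names listed in this document.

```python
import pulumi_aws_native as aws_native

# Illustrative sketch only: endpoint, token, and ARNs are placeholders.
stream = aws_native.kinesisfirehose.DeliveryStream(
    "splunkStream",
    delivery_stream_type=aws_native.kinesisfirehose.DeliveryStreamType.DIRECT_PUT,
    splunk_destination_configuration=aws_native.kinesisfirehose.DeliveryStreamSplunkDestinationConfigurationArgs(
        hec_endpoint="https://http-inputs-example.splunkcloud.com:443",
        hec_endpoint_type=aws_native.kinesisfirehose.DeliveryStreamSplunkDestinationConfigurationHecEndpointType.RAW,
        hec_token="00000000-0000-0000-0000-000000000000",  # placeholder HEC token (GUID)
        # Retry for up to 5 minutes before treating delivery as failed.
        retry_options=aws_native.kinesisfirehose.DeliveryStreamSplunkRetryOptionsArgs(
            duration_in_seconds=300,
        ),
        # Back up only records that could not be delivered (the default).
        s3_backup_mode="FailedEventsOnly",
        s3_configuration=aws_native.kinesisfirehose.DeliveryStreamS3DestinationConfigurationArgs(
            bucket_arn="arn:aws:s3:::my-backup-bucket",                  # placeholder
            role_arn="arn:aws:iam::123456789012:role/firehose-role",    # placeholder
        ),
    ),
)
```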
DeliveryStreamSplunkDestinationConfigurationHecEndpointType, DeliveryStreamSplunkDestinationConfigurationHecEndpointTypeArgs                
- Raw
- Raw
- Event
- Event
- DeliveryStream Splunk Destination Configuration Hec Endpoint Type Raw 
- Raw
- DeliveryStream Splunk Destination Configuration Hec Endpoint Type Event 
- Event
- Raw
- Raw
- Event
- Event
- Raw
- Raw
- Event
- Event
- RAW
- Raw
- EVENT
- Event
- "Raw"
- Raw
- "Event"
- Event
DeliveryStreamSplunkRetryOptions, DeliveryStreamSplunkRetryOptionsArgs          
- DurationInSeconds int
- The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- DurationInSeconds int
- The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- durationInSeconds Integer
- The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- durationInSeconds number
- The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- duration_in_seconds int
- The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- durationInSeconds Number
- The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
DeliveryStreamType, DeliveryStreamTypeArgs      
- DatabaseAsSource
- DatabaseAsSource
- DirectPut
- DirectPut
- KinesisStreamAsSource
- KinesisStreamAsSource
- MskasSource
- MSKAsSource
- DeliveryStreamTypeDatabaseAsSource
- DatabaseAsSource
- DeliveryStreamTypeDirectPut
- DirectPut
- DeliveryStreamTypeKinesisStreamAsSource
- KinesisStreamAsSource
- DeliveryStreamTypeMskasSource
- MSKAsSource
- DatabaseAsSource
- DatabaseAsSource
- DirectPut
- DirectPut
- KinesisStreamAsSource
- KinesisStreamAsSource
- MskasSource
- MSKAsSource
- DatabaseAsSource
- DatabaseAsSource
- DirectPut
- DirectPut
- KinesisStreamAsSource
- KinesisStreamAsSource
- MskasSource
- MSKAsSource
- DATABASE_AS_SOURCE
- DatabaseAsSource
- DIRECT_PUT
- DirectPut
- KINESIS_STREAM_AS_SOURCE
- KinesisStreamAsSource
- MSKAS_SOURCE
- MSKAsSource
- "DatabaseAsSource"
- DatabaseAsSource
- "DirectPut"
- DirectPut
- "KinesisStreamAsSource"
- KinesisStreamAsSource
- "MSKAsSource"
- MSKAsSource
DeliveryStreamVpcConfiguration, DeliveryStreamVpcConfigurationArgs        
- RoleArn string
- The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions: - ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
- ec2:DescribeNetworkInterfaces
- ec2:CreateNetworkInterface
- ec2:CreateNetworkInterfacePermission
- ec2:DeleteNetworkInterface
 - If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance. 
- SecurityGroupIds List<string>
- The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- SubnetIds List<string>
- The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs. - The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here. 
- RoleArn string
- The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions: - ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
- ec2:DescribeNetworkInterfaces
- ec2:CreateNetworkInterface
- ec2:CreateNetworkInterfacePermission
- ec2:DeleteNetworkInterface
 - If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance. 
- SecurityGroupIds []string
- The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- SubnetIds []string
- The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs. - The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here. 
- roleArn String
- The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions: - ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
- ec2:DescribeNetworkInterfaces
- ec2:CreateNetworkInterface
- ec2:CreateNetworkInterfacePermission
- ec2:DeleteNetworkInterface
 - If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance. 
- securityGroupIds List<String>
- The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnetIds List<String>
- The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs. - The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here. 
- roleArn string
- The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions: - ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
- ec2:DescribeNetworkInterfaces
- ec2:CreateNetworkInterface
- ec2:CreateNetworkInterfacePermission
- ec2:DeleteNetworkInterface
 - If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance. 
- securityGroupIds string[]
- The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnetIds string[]
- The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs. - The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here. 
- role_arn str
- The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions: - ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
- ec2:DescribeNetworkInterfaces
- ec2:CreateNetworkInterface
- ec2:CreateNetworkInterfacePermission
- ec2:DeleteNetworkInterface
 - If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance. 
- security_group_ids Sequence[str]
- The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnet_ids Sequence[str]
- The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs. - The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here. 
- roleArn String
- The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions: - ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
- ec2:DescribeNetworkInterfaces
- ec2:CreateNetworkInterface
- ec2:CreateNetworkInterfacePermission
- ec2:DeleteNetworkInterface
 - If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance. 
- securityGroupIds List<String>
- The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnetIds List<String>
- The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs. - The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here. 
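The VPC configuration above can be sketched in Python as follows. The role ARN, subnet IDs, and security-group IDs are placeholders; the role they name must trust the Firehose service principal and grant the ec2 permissions listed above.

```python
import pulumi_aws_native as aws_native

# Illustrative fragment: all IDs and ARNs below are placeholders.
vpc_config = aws_native.kinesisfirehose.DeliveryStreamVpcConfigurationArgs(
    role_arn="arn:aws:iam::123456789012:role/firehose-vpc-role",
    # Subnets in which Firehose creates ENIs; up to three ENIs per subnet,
    # so size your ENI quota accordingly.
    subnet_ids=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    # Must allow outbound HTTPS to (and be allowed inbound by) the
    # destination domain's security group.
    security_group_ids=["sg-0123456789abcdef0"],
)
```

This object is passed as the `vpc_configuration` property of an Elasticsearch or OpenSearch destination configuration.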
Tag, TagArgs  
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0