ServiceIntegration
Usage examples¶
autoscaler

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: autoscaler
  sourceServiceName: my-pg
  # Look up autoscaler integration endpoint ID via Console
  destinationEndpointId: my-destination-endpoint-id
---
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-pg
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
```
clickhouse_postgresql

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: clickhouse_postgresql
  sourceServiceName: my-pg
  destinationServiceName: my-clickhouse
  clickhousePostgresql:
    databases:
      - database: defaultdb
        schema: public
---
apiVersion: aiven.io/v1alpha1
kind: Clickhouse
metadata:
  name: my-clickhouse
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-16
  maintenanceWindowDow: friday
  maintenanceWindowTime: "23:00:00"
---
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-pg
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
  maintenanceWindowDow: friday
  maintenanceWindowTime: "23:00:00"
```
datadog

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: datadog
  sourceServiceName: my-pg
  destinationEndpointId: destination-endpoint-id
  datadog:
    datadog_dbm_enabled: true
    datadog_tags:
      - tag: env
        comment: test
---
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-pg
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
```
kafka_connect

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: kafka_connect
  sourceServiceName: my-kafka
  destinationServiceName: my-kafka-connect
  kafkaConnect:
    kafka_connect:
      group_id: connect
      status_storage_topic: __connect_status
      offset_storage_topic: __connect_offsets
---
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: business-4
---
apiVersion: aiven.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-kafka-connect
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: business-4
  userConfig:
    kafka_connect:
      consumer_isolation_level: read_committed
    public_access:
      kafka_connect: true
```
kafka_logs

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: kafka_logs
  sourceServiceName: my-kafka
  destinationServiceName: my-kafka
  kafkaLogs:
    kafka_topic: my-kafka-topic
---
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: business-4
---
apiVersion: aiven.io/v1alpha1
kind: KafkaTopic
metadata:
  name: my-kafka-topic
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  serviceName: my-kafka
  replication: 2
  partitions: 1
```
Info
To create this resource, a Secret containing an Aiven token must be created first.

Apply the resource with `kubectl apply -f <file>.yaml`.

Verify the newly created ServiceIntegration with `kubectl get serviceintegrations my-service-integration`. The output is similar to the following:

Name                      Project               Type          Source Service Name    Destination Endpoint ID
my-service-integration    aiven-project-name    autoscaler    my-pg                   my-destination-endpoint-id
ServiceIntegration¶
ServiceIntegration is the Schema for the serviceintegrations API.
Required
- `apiVersion` (string). Value `aiven.io/v1alpha1`.
- `kind` (string). Value `ServiceIntegration`.
- `metadata` (object). Data that identifies the object, including a `name` string and optional `namespace`.
- `spec` (object). ServiceIntegrationSpec defines the desired state of ServiceIntegration. See below for nested schema.
spec¶
Appears on `ServiceIntegration`.
ServiceIntegrationSpec defines the desired state of ServiceIntegration.
Required
- `integrationType` (string, Enum: `alertmanager`, `autoscaler`, `caching`, `cassandra_cross_service_cluster`, `clickhouse_kafka`, `clickhouse_postgresql`, `dashboard`, `datadog`, `datasource`, `external_aws_cloudwatch_logs`, `external_aws_cloudwatch_metrics`, `external_elasticsearch_logs`, `external_google_cloud_logging`, `external_opensearch_logs`, `flink`, `flink_external_kafka`, `flink_external_postgresql`, `internal_connectivity`, `jolokia`, `kafka_connect`, `kafka_logs`, `kafka_mirrormaker`, `logs`, `m3aggregator`, `m3coordinator`, `metrics`, `opensearch_cross_cluster_replication`, `opensearch_cross_cluster_search`, `prometheus`, `read_replica`, `rsyslog`, `schema_registry_proxy`, `stresstester`, `thanosquery`, `thanosstore`, `vmalert`, Immutable). Type of the service integration accepted by Aiven API. Some values may not be supported by the operator.
- `project` (string, Immutable, Pattern: `^[a-zA-Z0-9_-]+$`, MaxLength: 63). Identifies the project this resource belongs to.
Optional
- `authSecretRef` (object). Authentication reference to Aiven token in a secret. See below for nested schema.
- `autoscaler` (object). Autoscaler specific user configuration options.
- `clickhouseKafka` (object). Clickhouse Kafka configuration values. See below for nested schema.
- `clickhousePostgresql` (object). Clickhouse PostgreSQL configuration values. See below for nested schema.
- `datadog` (object). Datadog specific user configuration options. See below for nested schema.
- `destinationEndpointId` (string, Immutable, MaxLength: 36). Destination endpoint for the integration (if any).
- `destinationProjectName` (string, Immutable, MaxLength: 63). Destination project for the integration (if any).
- `destinationServiceName` (string, Immutable, MaxLength: 64). Destination service for the integration (if any).
- `externalAWSCloudwatchMetrics` (object). External AWS CloudWatch Metrics integration Logs configuration values. See below for nested schema.
- `kafkaConnect` (object). Kafka Connect service configuration values. See below for nested schema.
- `kafkaLogs` (object). Kafka logs configuration values. See below for nested schema.
- `kafkaMirrormaker` (object). Kafka MirrorMaker configuration values. See below for nested schema.
- `logs` (object). Logs configuration values. See below for nested schema.
- `metrics` (object). Metrics configuration values. See below for nested schema.
- `sourceEndpointID` (string, Immutable, MaxLength: 36). Source endpoint for the integration (if any).
- `sourceProjectName` (string, Immutable, MaxLength: 63). Source project for the integration (if any).
- `sourceServiceName` (string, Immutable, MaxLength: 64). Source service for the integration (if any).
authSecretRef¶
Appears on `spec`.
Authentication reference to Aiven token in a secret.
Required
clickhouseKafka¶
Appears on `spec`.
Clickhouse Kafka configuration values.
Required
- `tables` (array of objects, MaxItems: 100). Tables to create. See below for nested schema.
tables¶
Appears on `spec.clickhouseKafka`.
Table to create.
Required
- `columns` (array of objects, MaxItems: 100). Table columns. See below for nested schema.
- `data_format` (string, Enum: `Avro`, `AvroConfluent`, `CSV`, `JSONAsString`, `JSONCompactEachRow`, `JSONCompactStringsEachRow`, `JSONEachRow`, `JSONStringsEachRow`, `MsgPack`, `Parquet`, `RawBLOB`, `TSKV`, `TSV`, `TabSeparated`). Message data format.
- `group_name` (string, MinLength: 1, MaxLength: 249). Kafka consumers group.
- `name` (string, MinLength: 1, MaxLength: 40). Name of the table.
- `topics` (array of objects, MaxItems: 100). Kafka topics. See below for nested schema.
Optional
- `auto_offset_reset` (string, Enum: `beginning`, `earliest`, `end`, `largest`, `latest`, `smallest`). Action to take when there is no initial offset in offset store or the desired offset is out of range.
- `date_time_input_format` (string, Enum: `basic`, `best_effort`, `best_effort_us`). Method to read DateTime from text input formats.
- `handle_error_mode` (string, Enum: `default`, `stream`). How to handle errors for Kafka engine.
- `max_block_size` (integer, Minimum: 0, Maximum: 1000000000). Number of rows collected by poll(s) for flushing data from Kafka.
- `max_rows_per_message` (integer, Minimum: 1, Maximum: 1000000000). The maximum number of rows produced in one Kafka message for row-based formats.
- `num_consumers` (integer, Minimum: 1, Maximum: 10). The number of consumers per table per replica.
- `poll_max_batch_size` (integer, Minimum: 0, Maximum: 1000000000). Maximum amount of messages to be polled in a single Kafka poll.
- `poll_max_timeout_ms` (integer, Minimum: 0, Maximum: 30000). Timeout in milliseconds for a single poll from Kafka. Takes the value of the stream_flush_interval_ms server setting by default (500ms).
- `skip_broken_messages` (integer, Minimum: 0, Maximum: 1000000000). Skip at least this number of broken messages from Kafka topic per block.
- `thread_per_consumer` (boolean). Provide an independent thread for each consumer. All consumers run in the same thread by default.
columns¶
Appears on `spec.clickhouseKafka.tables`.
Table column.
Required
- `name` (string, MinLength: 1, MaxLength: 40). Column name.
- `type` (string, MinLength: 1, MaxLength: 1000). Column type.
topics¶
Appears on `spec.clickhouseKafka.tables`.
Kafka topic.
Required
- `name` (string, MinLength: 1, MaxLength: 249). Name of the topic.
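This page has no usage example for `clickhouse_kafka`, so the following is only a minimal sketch of how the fields above fit together. The service names, group name, table name, column definitions, and topic name are assumptions for illustration, not values from this page.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-clickhouse-kafka-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: clickhouse_kafka
  # Hypothetical existing Kafka and ClickHouse services
  sourceServiceName: my-kafka
  destinationServiceName: my-clickhouse
  clickhouseKafka:
    tables:
      - name: my_table                # table to create in ClickHouse
        group_name: clickhouse-ingest # Kafka consumer group
        data_format: JSONEachRow      # one of the supported message data formats
        columns:
          - name: id
            type: UInt64
          - name: payload
            type: String
        topics:
          - name: my-kafka-topic      # Kafka topic to consume from
```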
clickhousePostgresql¶
Appears on `spec`.
Clickhouse PostgreSQL configuration values.
Required
- `databases` (array of objects, MaxItems: 10). Databases to expose. See below for nested schema.
databases¶
Appears on `spec.clickhousePostgresql`.
Database to expose.
Optional
- `database` (string, MinLength: 1, MaxLength: 63). PostgreSQL database to expose.
- `schema` (string, MinLength: 1, MaxLength: 63). PostgreSQL schema to expose.
datadog¶
Appears on `spec`.
Datadog specific user configuration options.
Optional
- `datadog_dbm_enabled` (boolean). Enable Datadog Database Monitoring.
- `datadog_pgbouncer_enabled` (boolean). Enable Datadog PgBouncer Metric Tracking.
- `datadog_tags` (array of objects, MaxItems: 32). Custom tags provided by user. See below for nested schema.
- `exclude_consumer_groups` (array of strings, MaxItems: 1024). List of custom metrics.
- `exclude_topics` (array of strings, MaxItems: 1024). List of topics to exclude.
- `include_consumer_groups` (array of strings, MaxItems: 1024). List of custom metrics.
- `include_topics` (array of strings, MaxItems: 1024). List of topics to include.
- `kafka_custom_metrics` (array of strings, MaxItems: 1024). List of custom metrics.
- `max_jmx_metrics` (integer, Minimum: 10, Maximum: 100000). Maximum number of JMX metrics to send.
- `mirrormaker_custom_metrics` (array of strings, MaxItems: 1024). List of custom metrics.
- `opensearch` (object). Datadog Opensearch Options. See below for nested schema.
- `redis` (object). Datadog Redis Options. See below for nested schema.
datadog_tags¶
Appears on `spec.datadog`.
Datadog tag defined by user.
Required
- `tag` (string, MinLength: 1, MaxLength: 200). Tag format and usage are described here: https://docs.datadoghq.com/getting_started/tagging. Tags with prefix `aiven-` are reserved for Aiven.
Optional
- `comment` (string, MaxLength: 1024). Optional tag explanation.
opensearch¶
Appears on `spec.datadog`.
Datadog Opensearch Options.
Optional
- `cluster_stats_enabled` (boolean). Enable Datadog Opensearch Cluster Monitoring.
- `index_stats_enabled` (boolean). Enable Datadog Opensearch Index Monitoring.
- `pending_task_stats_enabled` (boolean). Enable Datadog Opensearch Pending Task Monitoring.
- `pshard_stats_enabled` (boolean). Enable Datadog Opensearch Primary Shard Monitoring.
redis¶
Appears on `spec.datadog`.
Datadog Redis Options.
Required
- `command_stats_enabled` (boolean). Enable command_stats option in the agent's configuration.
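The datadog usage example above only sets `datadog_dbm_enabled` and `datadog_tags`. As a sketch of the nested Opensearch options, assuming a hypothetical OpenSearch source service and Datadog endpoint ID not taken from this page:

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-datadog-opensearch
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: datadog
  # Hypothetical OpenSearch service and Datadog endpoint ID
  sourceServiceName: my-opensearch
  destinationEndpointId: destination-endpoint-id
  datadog:
    opensearch:
      cluster_stats_enabled: true
      index_stats_enabled: true
      pending_task_stats_enabled: true
      pshard_stats_enabled: true
```

For a Redis source service, `datadog.redis.command_stats_enabled: true` would enable the agent's command_stats option in the same way.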
externalAWSCloudwatchMetrics¶
Appears on `spec`.
External AWS CloudWatch Metrics integration Logs configuration values.
Optional
- `dropped_metrics` (array of objects, MaxItems: 1024). Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics). See below for nested schema.
- `extra_metrics` (array of objects, MaxItems: 1024). Metrics to allow through to AWS CloudWatch (in addition to default metrics). See below for nested schema.
dropped_metrics¶
Appears on `spec.externalAWSCloudwatchMetrics`.
Metric name and subfield.
Required
- `field` (string, MaxLength: 1000). Identifier of a value in the metric.
- `metric` (string, MaxLength: 1000). Identifier of the metric.
extra_metrics¶
Appears on `spec.externalAWSCloudwatchMetrics`.
Metric name and subfield.
Required
- `field` (string, MaxLength: 1000). Identifier of a value in the metric.
- `metric` (string, MaxLength: 1000). Identifier of the metric.
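There is no usage example for `external_aws_cloudwatch_metrics` on this page, so this is only a sketch of how `extra_metrics` and `dropped_metrics` combine. The endpoint ID and the metric/field identifiers are assumptions for illustration.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-cloudwatch-metrics
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: external_aws_cloudwatch_metrics
  sourceServiceName: my-pg
  # Hypothetical external AWS CloudWatch endpoint configured in the Aiven project
  destinationEndpointId: my-cloudwatch-endpoint-id
  externalAWSCloudwatchMetrics:
    extra_metrics:
      - metric: cpu_usage     # illustrative metric and field identifiers
        field: usage_active
    dropped_metrics:
      - metric: disk          # dropped_metrics takes precedence over extra_metrics
        field: free
```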
kafkaConnect¶
Appears on `spec`.
Kafka Connect service configuration values.
Required
- `kafka_connect` (object). Kafka Connect service configuration values. See below for nested schema.
kafka_connect¶
Appears on `spec.kafkaConnect`.
Kafka Connect service configuration values.
Optional
- `config_storage_topic` (string, MaxLength: 249). The name of the topic where connector and task configuration data are stored. This must be the same for all workers with the same group_id.
- `group_id` (string, MaxLength: 249). A unique string that identifies the Connect cluster group this worker belongs to.
- `offset_storage_topic` (string, MaxLength: 249). The name of the topic where connector and task configuration offsets are stored. This must be the same for all workers with the same group_id.
- `status_storage_topic` (string, MaxLength: 249). The name of the topic where connector and task configuration status updates are stored. This must be the same for all workers with the same group_id.
kafkaLogs¶
Appears on `spec`.
Kafka logs configuration values.
Required
- `kafka_topic` (string, MinLength: 1, MaxLength: 249). Topic name.
Optional
- `selected_log_fields` (array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.
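Building on the kafka_logs usage example above, the optional `selected_log_fields` limits which fields are shipped. A sketch only; the listed field names are assumptions about typical syslog-style fields, not values confirmed by this page.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-kafka-logs-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: kafka_logs
  sourceServiceName: my-kafka
  destinationServiceName: my-kafka
  kafkaLogs:
    kafka_topic: my-kafka-topic
    # Assumed field names for illustration; MESSAGE and the timestamp are always sent
    selected_log_fields:
      - HOSTNAME
      - PRIORITY
```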
kafkaMirrormaker¶
Appears on `spec`.
Kafka MirrorMaker configuration values.
Optional
- `cluster_alias` (string, Pattern: `^[a-zA-Z0-9_.-]+$`, MaxLength: 128). The alias under which the Kafka cluster is known to MirrorMaker. Can contain the following symbols: ASCII alphanumerics, `.`, `_`, and `-`.
- `kafka_mirrormaker` (object). Kafka MirrorMaker configuration values. See below for nested schema.
kafka_mirrormaker¶
Appears on `spec.kafkaMirrormaker`.
Kafka MirrorMaker configuration values.
Optional
- `consumer_auto_offset_reset` (string, Enum: `earliest`, `latest`). Set where consumer starts to consume data. Value `earliest`: Start replication from the earliest offset. Value `latest`: Start replication from the latest offset. Default is `earliest`.
- `consumer_fetch_min_bytes` (integer, Minimum: 1, Maximum: 5242880). The minimum amount of data the server should return for a fetch request.
- `consumer_max_poll_records` (integer, Minimum: 100, Maximum: 20000). Set consumer max.poll.records. The default is 500.
- `producer_batch_size` (integer, Minimum: 0, Maximum: 5242880). The batch size in bytes producer will attempt to collect before publishing to broker.
- `producer_buffer_memory` (integer, Minimum: 5242880, Maximum: 134217728). The amount of bytes producer can use for buffering data before publishing to broker.
- `producer_compression_type` (string, Enum: `gzip`, `lz4`, `none`, `snappy`, `zstd`). Specify the default compression type for producers. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `none` which is the default and equivalent to no compression.
- `producer_linger_ms` (integer, Minimum: 0, Maximum: 5000). The linger time (ms) for waiting new data to arrive for publishing.
- `producer_max_request_size` (integer, Minimum: 0, Maximum: 268435456). The maximum request size in bytes.
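There is no usage example for `kafka_mirrormaker` on this page; the following sketch shows how the two levels of MirrorMaker options nest. The Kafka and MirrorMaker service names, cluster alias, and tuning values are assumptions for illustration.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-mirrormaker-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: kafka_mirrormaker
  # Hypothetical source Kafka and destination Kafka MirrorMaker services
  sourceServiceName: my-kafka
  destinationServiceName: my-kafka-mirrormaker
  kafkaMirrormaker:
    cluster_alias: source-kafka   # alias MirrorMaker uses for this cluster
    kafka_mirrormaker:
      consumer_auto_offset_reset: earliest
      consumer_fetch_min_bytes: 1024
      producer_compression_type: lz4
      producer_linger_ms: 100
```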
logs¶
Appears on `spec`.
Logs configuration values.
Optional
- `elasticsearch_index_days_max` (integer, Minimum: 1, Maximum: 10000). Elasticsearch index retention limit.
- `elasticsearch_index_prefix` (string, MinLength: 1, MaxLength: 1024). Elasticsearch index prefix.
- `selected_log_fields` (array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.
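As a sketch of a `logs` integration, for example shipping service logs to an OpenSearch service, the index options above could be set as follows. The source and destination service names and the index values are assumptions for illustration.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-logs-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: logs
  # Hypothetical source service and destination OpenSearch service
  sourceServiceName: my-pg
  destinationServiceName: my-opensearch
  logs:
    elasticsearch_index_prefix: logs-my-pg  # prefix for the indices created by the integration
    elasticsearch_index_days_max: 7         # retention limit in days
```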
metrics¶
Appears on `spec`.
Metrics configuration values.
Optional
- `database` (string, Pattern: `^[_A-Za-z0-9][-_A-Za-z0-9]{0,39}$`, MaxLength: 40). Name of the database where to store metric datapoints. Only affects PostgreSQL destinations. Defaults to `metrics`. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
- `retention_days` (integer, Minimum: 0, Maximum: 10000). Number of days to keep old metrics. Only affects PostgreSQL destinations. Set to 0 for no automatic cleanup. Defaults to 30 days.
- `ro_username` (string, Pattern: `^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$`, MaxLength: 40). Name of a user that can be used to read metrics. This will be used for Grafana integration (if enabled) to prevent Grafana users from making undesired changes. Only affects PostgreSQL destinations. Defaults to `metrics_reader`. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
- `source_mysql` (object). Configuration options for metrics where source service is MySQL. See below for nested schema.
- `username` (string, Pattern: `^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$`, MaxLength: 40). Name of the user used to write metrics. Only affects PostgreSQL destinations. Defaults to `metrics_writer`. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
source_mysql¶
Appears on `spec.metrics`.
Configuration options for metrics where source service is MySQL.
Required
- `telegraf` (object). Configuration options for Telegraf MySQL input plugin. See below for nested schema.
telegraf¶
Appears on `spec.metrics.source_mysql`.
Configuration options for Telegraf MySQL input plugin.
Optional
- `gather_event_waits` (boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS.
- `gather_file_events_stats` (boolean). Gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME.
- `gather_index_io_waits` (boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE.
- `gather_info_schema_auto_inc` (boolean). Gather auto_increment columns and max values from information schema.
- `gather_innodb_metrics` (boolean). Gather metrics from INFORMATION_SCHEMA.INNODB_METRICS.
- `gather_perf_events_statements` (boolean). Gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST.
- `gather_process_list` (boolean). Gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST.
- `gather_slave_status` (boolean). Gather metrics from SHOW SLAVE STATUS command output.
- `gather_table_io_waits` (boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE.
- `gather_table_lock_waits` (boolean). Gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS.
- `gather_table_schema` (boolean). Gather metrics from INFORMATION_SCHEMA.TABLES.
- `perf_events_statements_digest_text_limit` (integer, Minimum: 1, Maximum: 2048). Truncates digest text from perf_events_statements into this many characters.
- `perf_events_statements_limit` (integer, Minimum: 1, Maximum: 4000). Limits metrics from perf_events_statements.
- `perf_events_statements_time_limit` (integer, Minimum: 1, Maximum: 2592000). Only include perf_events_statements whose last seen is less than this many seconds.
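Finally, a hedged sketch of a `metrics` integration writing to a PostgreSQL destination, including the MySQL-specific Telegraf options above. All service names and values are assumptions for illustration, not values from this page.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-metrics-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  integrationType: metrics
  # Hypothetical MySQL source and PostgreSQL destination services
  sourceServiceName: my-mysql
  destinationServiceName: my-metrics-pg
  metrics:
    database: metrics            # only affects PostgreSQL destinations
    retention_days: 30
    username: metrics_writer
    ro_username: metrics_reader
    source_mysql:
      telegraf:
        gather_process_list: true
        gather_innodb_metrics: true
        perf_events_statements_limit: 250
```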