ServiceIntegration

Usage example

apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-service-integration
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name

  integrationType: kafka_logs
  sourceServiceName: my-source-service-name
  destinationServiceName: my-destination-service-name

  kafkaLogs:
    kafka_topic: my-kafka-topic

ServiceIntegration

ServiceIntegration is the Schema for the serviceintegrations API.

Required

  • apiVersion (string). Value aiven.io/v1alpha1.
  • kind (string). Value ServiceIntegration.
  • metadata (object). Data that identifies the object, including a name string and optional namespace.
  • spec (object). ServiceIntegrationSpec defines the desired state of ServiceIntegration. See below for nested schema.

spec

Appears on ServiceIntegration.

ServiceIntegrationSpec defines the desired state of ServiceIntegration.

Required

  • integrationType (string, Enum: alertmanager, autoscaler, caching, cassandra_cross_service_cluster, clickhouse_kafka, clickhouse_postgresql, dashboard, datadog, datasource, external_aws_cloudwatch_logs, external_aws_cloudwatch_metrics, external_elasticsearch_logs, external_google_cloud_logging, external_opensearch_logs, flink, flink_external_kafka, internal_connectivity, jolokia, kafka_connect, kafka_logs, kafka_mirrormaker, logs, m3aggregator, m3coordinator, metrics, opensearch_cross_cluster_replication, opensearch_cross_cluster_search, prometheus, read_replica, rsyslog, schema_registry_proxy, stresstester, thanosquery, thanosstore, vmalert, Immutable). Type of the service integration accepted by Aiven API. Some values may not be supported by the operator.
  • project (string, Immutable, MaxLength: 63, Format: ^[a-zA-Z0-9_-]+$). Identifies the project this resource belongs to.

Optional

  • authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
  • clickhouseKafka (object). Clickhouse Kafka configuration values. See below for nested schema.
  • clickhousePostgresql (object). Clickhouse PostgreSQL configuration values. See below for nested schema.
  • datadog (object). Datadog specific user configuration options. See below for nested schema.
  • destinationServiceName (string). Destination service for the integration.
  • externalAWSCloudwatchMetrics (object). External AWS CloudWatch Metrics integration configuration values. See below for nested schema.
  • kafkaConnect (object). Kafka Connect service configuration values. See below for nested schema.
  • kafkaLogs (object). Kafka logs configuration values. See below for nested schema.
  • kafkaMirrormaker (object). Kafka MirrorMaker configuration values. See below for nested schema.
  • logs (object). Logs configuration values. See below for nested schema.
  • metrics (object). Metrics configuration values. See below for nested schema.
  • sourceServiceName (string). Source service for the integration.

authSecretRef

Appears on spec.

Authentication reference to Aiven token in a secret.

Required

  • key (string, MinLength: 1).
  • name (string, MinLength: 1).

clickhouseKafka

Appears on spec.

Clickhouse Kafka configuration values.

Required

  • tables (array of objects). Tables to create. See below for nested schema.

tables

Appears on spec.clickhouseKafka.

Tables to create.

Required

  • columns (array of objects, MaxItems: 100). Table columns. See below for nested schema.
  • data_format (string, Enum: Avro, CSV, JSONAsString, JSONCompactEachRow, JSONCompactStringsEachRow, JSONEachRow, JSONStringsEachRow, MsgPack, TSKV, TSV, TabSeparated, RawBLOB, AvroConfluent). Message data format.
  • group_name (string, MinLength: 1, MaxLength: 249). Kafka consumers group.
  • name (string, MinLength: 1, MaxLength: 40). Name of the table.
  • topics (array of objects, MaxItems: 100). Kafka topics. See below for nested schema.

Optional

  • auto_offset_reset (string, Enum: smallest, earliest, beginning, largest, latest, end). Action to take when there is no initial offset in offset store or the desired offset is out of range.
  • date_time_input_format (string, Enum: basic, best_effort, best_effort_us). Method to read DateTime from text input formats.
  • handle_error_mode (string, Enum: default, stream). How to handle errors for Kafka engine.
  • max_block_size (integer, Minimum: 0, Maximum: 1000000000). Number of rows collected by poll(s) for flushing data from Kafka.
  • max_rows_per_message (integer, Minimum: 1, Maximum: 1000000000). The maximum number of rows produced in one Kafka message for row-based formats.
  • num_consumers (integer, Minimum: 1, Maximum: 10). The number of consumers per table per replica.
  • poll_max_batch_size (integer, Minimum: 0, Maximum: 1000000000). Maximum amount of messages to be polled in a single Kafka poll.
  • skip_broken_messages (integer, Minimum: 0, Maximum: 1000000000). Skip at least this number of broken messages from Kafka topic per block.

columns

Appears on spec.clickhouseKafka.tables.

Table columns.

Required

  • name (string, MinLength: 1, MaxLength: 40). Column name.
  • type (string, MinLength: 1, MaxLength: 1000). Column type.

topics

Appears on spec.clickhouseKafka.tables.

Kafka topics.

Required

  • name (string, MinLength: 1, MaxLength: 249). Name of the topic.
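
Putting the tables, columns, and topics schemas together, a minimal sketch of a clickhouse_kafka integration spec fragment; the service, table, column, and topic names are illustrative:

spec:
  integrationType: clickhouse_kafka
  sourceServiceName: my-kafka
  destinationServiceName: my-clickhouse
  clickhouseKafka:
    tables:
      - name: events
        group_name: events-consumer-group
        data_format: JSONEachRow
        columns:
          - name: id
            type: UInt64
          - name: payload
            type: String
        topics:
          - name: events-topic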

clickhousePostgresql

Appears on spec.

Clickhouse PostgreSQL configuration values.

Required

  • databases (array of objects). Databases to expose. See below for nested schema.

databases

Appears on spec.clickhousePostgresql.

Databases to expose.

Optional

  • database (string, MinLength: 1, MaxLength: 63). PostgreSQL database to expose.
  • schema (string, MinLength: 1, MaxLength: 63). PostgreSQL schema to expose.
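
For example, a hedged spec fragment for a clickhouse_postgresql integration exposing one PostgreSQL schema; the service, database, and schema names are illustrative:

spec:
  integrationType: clickhouse_postgresql
  sourceServiceName: my-postgresql
  destinationServiceName: my-clickhouse
  clickhousePostgresql:
    databases:
      - database: defaultdb
        schema: public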

datadog

Appears on spec.

Datadog specific user configuration options.

Optional

  • datadog_tags (array of objects). Custom tags provided by user. See below for nested schema.
  • opensearch (object). Datadog Opensearch Options. See below for nested schema.
  • redis (object). Datadog Redis Options. See below for nested schema.

datadog_tags

Appears on spec.datadog.

Custom tags provided by user.

Required

  • tag (string, MinLength: 1, MaxLength: 200). Tag format and usage are described here: https://docs.datadoghq.com/getting_started/tagging. Tags with the prefix aiven- are reserved for Aiven.

Optional

  • comment (string, MaxLength: 1024). Optional tag explanation.
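
A minimal sketch of the datadog block with one custom tag; the source service name and tag value are illustrative, and the Datadog destination endpoint configuration is omitted:

spec:
  integrationType: datadog
  sourceServiceName: my-kafka
  datadog:
    datadog_tags:
      - tag: env:production
        comment: Primary production deployment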

opensearch

Appears on spec.datadog.

Datadog Opensearch Options.

Optional

redis

Appears on spec.datadog.

Datadog Redis Options.

Required

externalAWSCloudwatchMetrics

Appears on spec.

External AWS CloudWatch Metrics integration configuration values.

Optional

  • dropped_metrics (array of objects, MaxItems: 1024). Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics). See below for nested schema.
  • extra_metrics (array of objects, MaxItems: 1024). Metrics to allow through to AWS CloudWatch (in addition to default metrics). See below for nested schema.

dropped_metrics

Appears on spec.externalAWSCloudwatchMetrics.

Metrics to not send to AWS CloudWatch (takes precedence over extra_metrics).

Required

  • field (string, MaxLength: 1000). Identifier of a value in the metric.
  • metric (string, MaxLength: 1000). Identifier of the metric.

extra_metrics

Appears on spec.externalAWSCloudwatchMetrics.

Metrics to allow through to AWS CloudWatch (in addition to default metrics).

Required

  • field (string, MaxLength: 1000). Identifier of a value in the metric.
  • metric (string, MaxLength: 1000). Identifier of the metric.
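
A hedged spec fragment combining both lists; the metric and field identifiers are illustrative, and the AWS CloudWatch destination endpoint configuration is omitted:

spec:
  integrationType: external_aws_cloudwatch_metrics
  sourceServiceName: my-redis
  externalAWSCloudwatchMetrics:
    dropped_metrics:
      - metric: redis
        field: evicted_keys
    extra_metrics:
      - metric: redis
        field: used_memory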

kafkaConnect

Appears on spec.

Kafka Connect service configuration values.

Required

  • kafka_connect (object). Kafka Connect service configuration values. See below for nested schema.

kafka_connect

Appears on spec.kafkaConnect.

Kafka Connect service configuration values.

Optional

  • config_storage_topic (string, MaxLength: 249). The name of the topic where connector and task configuration data are stored. This must be the same for all workers with the same group_id.
  • group_id (string, MaxLength: 249). A unique string that identifies the Connect cluster group this worker belongs to.
  • offset_storage_topic (string, MaxLength: 249). The name of the topic where connector and task configuration offsets are stored. This must be the same for all workers with the same group_id.
  • status_storage_topic (string, MaxLength: 249). The name of the topic where connector and task configuration status updates are stored. This must be the same for all workers with the same group_id.
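
A minimal sketch of a kafka_connect integration wiring a Kafka service to a Kafka Connect service; the topic and group names are illustrative:

spec:
  integrationType: kafka_connect
  sourceServiceName: my-kafka
  destinationServiceName: my-kafka-connect
  kafkaConnect:
    kafka_connect:
      group_id: connect
      config_storage_topic: __connect_configs
      offset_storage_topic: __connect_offsets
      status_storage_topic: __connect_status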

kafkaLogs

Appears on spec.

Kafka logs configuration values.

Required

  • kafka_topic (string, MinLength: 1, MaxLength: 249). Topic name.

Optional

  • selected_log_fields (array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.
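
Extending the usage example at the top, a hedged fragment that restricts which fields are shipped; the field names shown are assumptions and must match the log fields accepted by the Aiven API:

spec:
  integrationType: kafka_logs
  sourceServiceName: my-source-service-name
  destinationServiceName: my-destination-service-name
  kafkaLogs:
    kafka_topic: my-kafka-topic
    selected_log_fields:
      - HOSTNAME
      - SERVICE_NAME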

kafkaMirrormaker

Appears on spec.

Kafka MirrorMaker configuration values.

Optional

  • cluster_alias (string, Pattern: ^[a-zA-Z0-9_.-]+$, MaxLength: 128). The alias under which the Kafka cluster is known to MirrorMaker. Can contain the following symbols: ASCII alphanumerics, ., _, and -.
  • kafka_mirrormaker (object). Kafka MirrorMaker configuration values. See below for nested schema.

kafka_mirrormaker

Appears on spec.kafkaMirrormaker.

Kafka MirrorMaker configuration values.

Optional

  • consumer_fetch_min_bytes (integer, Minimum: 1, Maximum: 5242880). The minimum amount of data the server should return for a fetch request.
  • producer_batch_size (integer, Minimum: 0, Maximum: 5242880). The batch size in bytes the producer will attempt to collect before publishing to the broker.
  • producer_buffer_memory (integer, Minimum: 5242880, Maximum: 134217728). The number of bytes the producer can use for buffering data before publishing to the broker.
  • producer_compression_type (string, Enum: gzip, snappy, lz4, zstd, none). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none, which is the default and equivalent to no compression.
  • producer_linger_ms (integer, Minimum: 0, Maximum: 5000). The linger time (ms) to wait for new data to arrive before publishing.
  • producer_max_request_size (integer, Minimum: 0, Maximum: 268435456). The maximum request size in bytes.
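
A hedged spec fragment for a kafka_mirrormaker integration; the service names, cluster alias, and tuning values are illustrative:

spec:
  integrationType: kafka_mirrormaker
  sourceServiceName: my-source-kafka
  destinationServiceName: my-mirrormaker
  kafkaMirrormaker:
    cluster_alias: source-kafka
    kafka_mirrormaker:
      consumer_fetch_min_bytes: 1024
      producer_batch_size: 16384
      producer_compression_type: snappy
      producer_linger_ms: 100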

logs

Appears on spec.

Logs configuration values.

Optional

  • elasticsearch_index_days_max (integer, Minimum: 1, Maximum: 10000). Elasticsearch index retention limit.
  • elasticsearch_index_prefix (string, MinLength: 1, MaxLength: 1024). Elasticsearch index prefix.
  • selected_log_fields (array of strings, MaxItems: 5). The list of logging fields that will be sent to the integration logging service. The MESSAGE and timestamp fields are always sent.
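
For example, a hedged fragment shipping service logs with a five-day index retention; the service names and values are illustrative:

spec:
  integrationType: logs
  sourceServiceName: my-source-service
  destinationServiceName: my-opensearch
  logs:
    elasticsearch_index_days_max: 5
    elasticsearch_index_prefix: logs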

metrics

Appears on spec.

Metrics configuration values.

Optional

  • database (string, Pattern: ^[_A-Za-z0-9][-_A-Za-z0-9]{0,39}$, MaxLength: 40). Name of the database where metric datapoints are stored. Only affects PostgreSQL destinations. Defaults to metrics. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
  • retention_days (integer, Minimum: 0, Maximum: 10000). Number of days to keep old metrics. Only affects PostgreSQL destinations. Set to 0 for no automatic cleanup. Defaults to 30 days.
  • ro_username (string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$, MaxLength: 40). Name of a user that can be used to read metrics. This will be used for Grafana integration (if enabled) to prevent Grafana users from making undesired changes. Only affects PostgreSQL destinations. Defaults to metrics_reader. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
  • source_mysql (object). Configuration options for metrics where source service is MySQL. See below for nested schema.
  • username (string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,39}$, MaxLength: 40). Name of the user used to write metrics. Only affects PostgreSQL destinations. Defaults to metrics_writer. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
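
A hedged spec fragment sending metrics to a PostgreSQL destination, spelling out the documented defaults; the service names are illustrative:

spec:
  integrationType: metrics
  sourceServiceName: my-mysql
  destinationServiceName: my-postgresql
  metrics:
    database: metrics
    retention_days: 30
    ro_username: metrics_reader
    username: metrics_writer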

source_mysql

Appears on spec.metrics.

Configuration options for metrics where source service is MySQL.

Required

  • telegraf (object). Configuration options for Telegraf MySQL input plugin. See below for nested schema.

telegraf

Appears on spec.metrics.source_mysql.

Configuration options for Telegraf MySQL input plugin.

Optional