API Reference

aiven.io/v1alpha1

AuthSecretReference

AuthSecretReference references a Secret containing an Aiven authentication token

Field Description
name
string
N/A
key
string
N/A
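
For illustration, the sketch below shows a Kubernetes Secret holding an Aiven API token and how a resource points to it via authSecretRef; the Secret name and key are placeholders.

```yaml
# Illustrative sketch: a Secret holding an Aiven API token (placeholder name and key).
apiVersion: v1
kind: Secret
metadata:
  name: aiven-token
stringData:
  token: <your-aiven-api-token>
---
# Any aiven.io/v1alpha1 resource can then reference it:
# spec:
#   authSecretRef:
#     name: aiven-token
#     key: token
```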

ConnInfoSecretTarget

ConnInfoSecretTarget contains the name of the Secret that will hold the connection information

Field Description
name
string
Name of the Secret resource to be created

ConnectionPool

ConnectionPool is the Schema for the connectionpools API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
ConnectionPoolSpec
N/A
status
ConnectionPoolStatus
N/A

ConnectionPoolSpec

ConnectionPoolSpec defines the desired state of ConnectionPool

Field Description
project
string
Target project.
serviceName
string
Service name.
databaseName
string
Name of the database the pool connects to
username
string
Name of the service user used to connect to the database
poolSize
int
Number of connections the pool may create towards the backend server
poolMode
string
Mode the pool operates in (session, transaction, statement)
connInfoSecretTarget
ConnInfoSecretTarget
Information regarding secret creation
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
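
Putting the fields above together, a ConnectionPool manifest might look like the following sketch; project, service, database, user, and Secret names are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ConnectionPool
metadata:
  name: example-pool
spec:
  project: my-project              # target Aiven project (placeholder)
  serviceName: my-postgresql       # existing service (placeholder)
  databaseName: my-database        # database the pool connects to
  username: my-service-user        # service user used for the connection
  poolSize: 25                     # connections the pool may open to the backend
  poolMode: transaction            # session, transaction or statement
  connInfoSecretTarget:
    name: example-pool-connection  # Secret to create with connection info
  authSecretRef:
    name: aiven-token              # Secret holding the Aiven token
    key: token
```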

ConnectionPoolStatus

ConnectionPoolStatus defines the observed state of ConnectionPool

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a ConnectionPool state

Database

Database is the Schema for the databases API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
DatabaseSpec
N/A
status
DatabaseStatus
N/A

DatabaseSpec

DatabaseSpec defines the desired state of Database

Field Description
project
string
Project to link the database to
serviceName
string
PostgreSQL service to link the database to
lcCollate
string
Default string sort order (LC_COLLATE) of the database. Default value: en_US.UTF-8
lcCtype
string
Default character classification (LC_CTYPE) of the database. Default value: en_US.UTF-8
terminationProtection
bool
Kubernetes-side deletion protection that prevents the database from being deleted by Kubernetes. It is recommended to enable this for any production database containing critical data.
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
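
A minimal Database manifest assembled from the fields above might look like this sketch; all names are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Database
metadata:
  name: my-database
spec:
  project: my-project            # placeholder project name
  serviceName: my-postgresql     # placeholder PostgreSQL service
  lcCollate: en_US.UTF-8         # default string sort order
  lcCtype: en_US.UTF-8           # default character classification
  terminationProtection: true    # protect against accidental deletion
  authSecretRef:
    name: aiven-token
    key: token
```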

DatabaseStatus

DatabaseStatus defines the observed state of Database

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a Database state

KafkServiceKafkaConnectUserConfig

Field Description
consumer_max_poll_records
int64
The maximum number of records returned by a single poll The maximum number of records returned in a single call to poll() (defaults to 500).
offset_flush_timeout_ms
int64
Offset flush timeout Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
connector_client_config_override_policy
string
Client config override policy Defines what client configurations can be overridden by the connector. Default is None
consumer_fetch_max_bytes
int64
The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.
consumer_max_poll_interval_ms
int64
The maximum delay between polls when using consumer group management The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
offset_flush_interval_ms
int64
The interval at which to try committing offsets for tasks The interval at which to try committing offsets for tasks (defaults to 60000).
producer_max_request_size
int64
The maximum size of a request in bytes This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.
session_timeout_ms
int64
The timeout used to detect failures when using Kafka’s group management facilities The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
consumer_auto_offset_reset
string
Consumer auto offset reset What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest
consumer_isolation_level
string
Consumer isolation level Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
consumer_max_partition_fetch_bytes
int64
The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

Kafka

Kafka is the Schema for the kafkas API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
KafkaSpec
N/A
status
ServiceStatus
N/A

KafkaACL

KafkaACL is the Schema for the kafkaacls API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
KafkaACLSpec
N/A
status
KafkaACLStatus
N/A

KafkaACLSpec

KafkaACLSpec defines the desired state of KafkaACL

Field Description
project
string
Project to link the Kafka ACL to
serviceName
string
Service to link the Kafka ACL to
permission
string
Kafka permission to grant (admin, read, readwrite, write)
topic
string
Topic name pattern for the ACL entry
username
string
Username pattern for the ACL entry
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
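
As a sketch, a KafkaACL manifest using these fields could look like the following; service, topic, and user names are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaACL
metadata:
  name: example-kafka-acl
spec:
  project: my-project          # placeholder
  serviceName: my-kafka        # placeholder
  permission: readwrite        # admin, read, readwrite or write
  topic: example-topic         # topic name pattern
  username: example-user       # username pattern
  authSecretRef:
    name: aiven-token
    key: token
```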

KafkaACLStatus

KafkaACLStatus defines the observed state of KafkaACL

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a KafkaACL state
id
string
Kafka ACL ID

KafkaAuthenticationMethodsUserConfig

Field Description
certificate
bool
Enable certificate/SSL authentication
sasl
bool
Enable SASL authentication

KafkaConnect

KafkaConnect is the Schema for the kafkaconnects API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
KafkaConnectSpec
N/A
status
ServiceStatus
N/A

KafkaConnectPrivateAccessUserConfig

Field Description
kafka_connect
bool
Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations
prometheus
bool
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

KafkaConnectPublicAccessUserConfig

Field Description
kafka_connect
bool
Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network
prometheus
bool
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

KafkaConnectSpec

KafkaConnectSpec defines the desired state of KafkaConnect

Field Description
ServiceCommonSpec
ServiceCommonSpec
(Members of ServiceCommonSpec are embedded into this type.)
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
userConfig
KafkaConnectUserConfig
Kafka Connect specific user configuration options
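
For illustration, a KafkaConnect manifest combining the embedded ServiceCommonSpec fields with a userConfig might look like the sketch below; the plan and cloud names are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnect
metadata:
  name: my-kafka-connect
spec:
  project: my-project                  # ServiceCommonSpec fields (placeholders)
  plan: business-4
  cloudName: google-europe-west1
  userConfig:
    consumer_isolation_level: read_committed
    public_access:
      kafka_connect: true
      prometheus: false
  authSecretRef:
    name: aiven-token
    key: token
```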

KafkaConnectUserConfig

Field Description
connector_client_config_override_policy
string
Defines what client configurations can be overridden by the connector. Default is None
consumer_auto_offset_reset
string
What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest
consumer_fetch_max_bytes
int64
Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.
consumer_isolation_level
string
Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
consumer_max_partition_fetch_bytes
int64
Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.
consumer_max_poll_interval_ms
int64
The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
consumer_max_poll_records
int64
The maximum number of records returned in a single call to poll() (defaults to 500).
offset_flush_interval_ms
int64
The interval at which to try committing offsets for tasks (defaults to 60000).
producer_max_request_size
int64
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.
session_timeout_ms
int64
The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
private_access
KafkaConnectPrivateAccessUserConfig
Allow access to selected service ports from private networks
public_access
KafkaConnectPublicAccessUserConfig
Allow access to selected service ports from the public Internet

KafkaConnector

KafkaConnector is the Schema for the kafkaconnectors API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
KafkaConnectorSpec
N/A
status
KafkaConnectorStatus
N/A

KafkaConnectorPluginStatus

KafkaConnectorPluginStatus describes the observed state of a Kafka Connector Plugin

Field Description
author
string
N/A
class
string
N/A
docUrl
string
N/A
title
string
N/A
type
string
N/A
version
string
N/A

KafkaConnectorSpec

KafkaConnectorSpec defines the desired state of KafkaConnector

Field Description
project
string
Target project.
serviceName
string
Service name.
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
connectorClass
string
The Java class of the connector.
userConfig
map[string]string
The connector-specific configuration. To build config values from a Secret, the template function {{ fromSecret "name" "key" }} is provided when interpreting the keys
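
As a sketch, a KafkaConnector manifest could look like the following; the connector class and userConfig keys are placeholders that depend on the connector being deployed, and the fromSecret template pulls a value from a hypothetical Secret named connector-credentials.

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaConnector
metadata:
  name: example-connector
spec:
  project: my-project                               # placeholder
  serviceName: my-kafka-connect                     # placeholder
  connectorClass: com.example.ExampleSinkConnector  # placeholder Java class
  userConfig:                                       # connector-specific keys (placeholders)
    topics: example-topic
    connection.password: '{{ fromSecret "connector-credentials" "password" }}'
  authSecretRef:
    name: aiven-token
    key: token
```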

KafkaConnectorStatus

KafkaConnectorStatus defines the observed state of KafkaConnector

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a KafkaConnector state
state
string
Connector state
pluginStatus
KafkaConnectorPluginStatus
PluginStatus contains metadata about the configured connector plugin
tasksStatus
KafkaConnectorTasksStatus
TasksStatus contains metadata about the running tasks

KafkaConnectorTasksStatus

KafkaConnectorTasksStatus describes the observed state of the Kafka Connector Tasks

Field Description
total
uint
N/A
running
uint
N/A
failed
uint
N/A
paused
uint
N/A
unassigned
uint
N/A
unknown
uint
N/A
stackTrace
string
N/A

KafkaPrivateAccessUserConfig

Field Description
prometheus
bool
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

KafkaPublicAccessUserConfig

Field Description
prometheus
bool
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network
schema_registry
bool
Allow clients to connect to schema_registry from the public internet for service nodes that are in a project VPC or another type of private network
kafka
bool
Allow clients to connect to kafka from the public internet for service nodes that are in a project VPC or another type of private network
kafka_connect
bool
Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network
kafka_rest
bool
Allow clients to connect to kafka_rest from the public internet for service nodes that are in a project VPC or another type of private network

KafkaRestUserConfig

Field Description
consumer_request_max_bytes
int64
consumer.request.max.bytes Maximum number of bytes in unencoded message keys and values by a single request
consumer_request_timeout_ms
int64
consumer.request.timeout.ms The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached
producer_acks
string
producer.acks The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to 'all' or '-1', the leader will wait for the full set of in-sync replicas to acknowledge the record.
producer_linger_ms
int64
producer.linger.ms Wait for up to the given delay to allow batching records together
simpleconsumer_pool_size_max
int64
simpleconsumer.pool.size.max Maximum number of SimpleConsumers that can be instantiated per broker
consumer_enable_auto_commit
bool
consumer.enable.auto.commit If true the consumer's offset will be periodically committed to Kafka in the background
public_access
KafkaPublicAccessUserConfig
Allow access to selected service ports from the public Internet
custom_domain
string
Custom domain Serve the web frontend using a custom CNAME pointing to the Aiven DNS name

KafkaSchema

KafkaSchema is the Schema for the kafkaschemas API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
KafkaSchemaSpec
N/A
status
KafkaSchemaStatus
N/A

KafkaSchemaRegistryConfig

Field Description
leader_eligibility
bool
leader_eligibility If true, Karapace / Schema Registry on the service nodes can participate in leader election. It might be needed to disable this when the schemas topic is replicated to a secondary cluster and Karapace / Schema Registry there must not participate in leader election. Defaults to 'true'.
topic_name
string
topic_name The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas being inaccessible, data encoded with them potentially unreadable and schema ID sequence put out of order. It's only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to '_schemas'.

KafkaSchemaSpec

KafkaSchemaSpec defines the desired state of KafkaSchema

Field Description
project
string
Project to link the Kafka Schema to
serviceName
string
Service to link the Kafka Schema to
subjectName
string
Kafka Schema Subject name
schema
string
Kafka Schema configuration; must be a valid Avro schema in JSON format
compatibilityLevel
string
Kafka Schemas compatibility level
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
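
A KafkaSchema manifest built from these fields might look like this sketch; the subject, compatibility level, and inline Avro schema are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaSchema
metadata:
  name: example-schema
spec:
  project: my-project           # placeholder
  serviceName: my-kafka         # placeholder
  subjectName: example-subject
  compatibilityLevel: BACKWARD  # placeholder compatibility level
  schema: |
    {
      "type": "record",
      "name": "Example",
      "fields": [
        { "name": "id", "type": "int" }
      ]
    }
  authSecretRef:
    name: aiven-token
    key: token
```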

KafkaSchemaStatus

KafkaSchemaStatus defines the observed state of KafkaSchema

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a KafkaSchema state
version
int
Kafka Schema configuration version

KafkaSpec

KafkaSpec defines the desired state of Kafka

Field Description
ServiceCommonSpec
ServiceCommonSpec
(Members of ServiceCommonSpec are embedded into this type.)
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
connInfoSecretTarget
ConnInfoSecretTarget
Information regarding secret creation
userConfig
KafkaUserConfig
Kafka specific user configuration options
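
For illustration, a Kafka manifest with the embedded ServiceCommonSpec fields and a small userConfig might look like the sketch below; plan, cloud, and version values are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  project: my-project              # ServiceCommonSpec fields (placeholders)
  plan: business-4
  cloudName: google-europe-west1
  connInfoSecretTarget:
    name: my-kafka-connection      # Secret to create with connection info
  userConfig:
    kafka_version: "3.2"           # placeholder; use a version supported by Aiven
    kafka_rest: true
    schema_registry: true
    kafka:                         # broker-level settings
      auto_create_topics_enable: false
      num_partitions: 3
  authSecretRef:
    name: aiven-token
    key: token
```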

KafkaSubKafkaUserConfig

Field Description
message_max_bytes
int64
message.max.bytes The maximum size of message that the server can receive.
default_replication_factor
int64
default.replication.factor Replication factor for autocreated topics
log_cleaner_min_cleanable_ratio
int64
log.cleaner.min.cleanable.ratio Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.
log_index_interval_bytes
int64
log.index.interval.bytes The interval with which Kafka adds an entry to the offset index
log_segment_delete_delay_ms
int64
log.segment.delete.delay.ms The amount of time to wait before deleting a file from the filesystem
max_incremental_fetch_session_cache_slots
int64
max.incremental.fetch.session.cache.slots The maximum number of incremental fetch sessions that the broker will maintain.
socket_request_max_bytes
int64
socket.request.max.bytes The maximum number of bytes in a socket request (defaults to 104857600).
log_cleaner_delete_retention_ms
int64
log.cleaner.delete.retention.ms How long are delete records retained?
log_index_size_max_bytes
int64
log.index.size.max.bytes The maximum size in bytes of the offset index
log_roll_jitter_ms
int64
log.roll.jitter.ms The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used
max_connections_per_ip
int64
max.connections.per.ip The maximum number of connections allowed from each ip address (defaults to 2147483647).
replica_fetch_response_max_bytes
int64
replica.fetch.response.max.bytes Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.
auto_create_topics_enable
bool
auto.create.topics.enable Enable auto creation of topics
log_flush_interval_ms
int64
log.flush.interval.ms The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used
log_message_downconversion_enable
bool
log.message.downconversion.enable This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.
log_roll_ms
int64
log.roll.ms The maximum time before a new log segment is rolled out (in milliseconds).
log_cleaner_min_compaction_lag_ms
int64
log.cleaner.min.compaction.lag.ms The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
log_message_timestamp_difference_max_ms
int64
log.message.timestamp.difference.max.ms The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message
log_message_timestamp_type
string
log.message.timestamp.type Define whether the timestamp in the message is message create time or log append time.
log_retention_ms
int64
log.retention.ms The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.
group_min_session_timeout_ms
int64
group.min.session.timeout.ms The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.
log_segment_bytes
int64
log.segment.bytes The maximum size of a single log file
compression_type
string
compression.type Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.
group_max_session_timeout_ms
int64
group.max.session.timeout.ms The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.
log_flush_interval_messages
int64
log.flush.interval.messages The number of messages accumulated on a log partition before messages are flushed to disk
log_preallocate
bool
log.preallocate Whether to preallocate a file when creating a new segment
log_retention_bytes
int64
log.retention.bytes The maximum size of the log before deleting messages
log_cleaner_max_compaction_lag_ms
int64
log.cleaner.max.compaction.lag.ms The maximum amount of time message will remain uncompacted. Only applicable for logs that are being compacted
log_retention_hours
int64
log.retention.hours The number of hours to keep a log file before deleting it
min_insync_replicas
int64
min.insync.replicas When a producer sets acks to 'all' (or '-1'), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
num_partitions
int64
num.partitions Number of partitions for autocreated topics
offsets_retention_minutes
int64
offsets.retention.minutes Log retention window in minutes for offsets topic
connections_max_idle_ms
int64
connections.max.idle.ms Idle connections timeout: the server socket processor threads close the connections that idle for longer than this.
log_cleanup_policy
string
log.cleanup.policy The default cleanup policy for segments beyond the retention window
producer_purgatory_purge_interval_requests
int64
producer.purgatory.purge.interval.requests The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).
replica_fetch_max_bytes
int64
replica.fetch.max.bytes The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.

KafkaTopic

KafkaTopic is the Schema for the kafkatopics API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
KafkaTopicSpec
N/A
status
KafkaTopicStatus
N/A

KafkaTopicConfig

Field Description
cleanup_policy
string
cleanup.policy value
compression_type
string
compression.type value
delete_retention_ms
int64
delete.retention.ms value
file_delete_delay_ms
int64
file.delete.delay.ms value
flush_messages
int64
flush.messages value
flush_ms
int64
flush.ms value
index_interval_bytes
int64
index.interval.bytes value
max_compaction_lag_ms
int64
max.compaction.lag.ms value
max_message_bytes
int64
max.message.bytes value
message_downconversion_enable
bool
message.downconversion.enable value
message_format_version
string
message.format.version value
message_timestamp_difference_max_ms
int64
message.timestamp.difference.max.ms value
message_timestamp_type
string
message.timestamp.type value
min_compaction_lag_ms
int64
min.compaction.lag.ms value
min_insync_replicas
int64
min.insync.replicas value
preallocate
bool
preallocate value
retention_bytes
int64
retention.bytes value
retention_ms
int64
retention.ms value
segment_bytes
int64
segment.bytes value
segment_index_bytes
int64
segment.index.bytes value
segment_jitter_ms
int64
segment.jitter.ms value
segment_ms
int64
segment.ms value
unclean_leader_election_enable
bool
unclean.leader.election.enable value

KafkaTopicSpec

KafkaTopicSpec defines the desired state of KafkaTopic

Field Description
project
string
Target project.
serviceName
string
Service name.
partitions
int
Number of partitions to create in the topic
replication
int
Replication factor for the topic
tags
[]KafkaTopicTag
Kafka topic tags
config
KafkaTopicConfig
Kafka topic configuration
termination_protection
bool
Kubernetes-side deletion protection that prevents the Kafka topic from being deleted by Kubernetes. It is recommended to enable this for any production topics containing critical data.
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
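
Putting these fields together, a KafkaTopic manifest might look like the following sketch; tag and config values are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: KafkaTopic
metadata:
  name: example-topic
spec:
  project: my-project            # placeholder
  serviceName: my-kafka          # placeholder
  partitions: 3
  replication: 2
  tags:
    - key: env                   # free-form tag key/value
      value: demo
  config:
    cleanup_policy: delete       # cleanup.policy (placeholder values)
    retention_ms: 604800000      # retention.ms, 7 days
    min_insync_replicas: 2       # min.insync.replicas
  termination_protection: false
  authSecretRef:
    name: aiven-token
    key: token
```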

KafkaTopicStatus

KafkaTopicStatus defines the observed state of KafkaTopic

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a KafkaTopic state
state
string
State represents the state of the kafka topic

KafkaTopicTag

Field Description
key
string
N/A
value
string
N/A

KafkaUserConfig

Field Description
kafka_version
string
Kafka major version
schema_registry
bool
Enable Schema-Registry service
kafka
KafkaSubKafkaUserConfig
Kafka broker configuration values
kafka_connect_user_config
KafkServiceKafkaConnectUserConfig
Kafka Connect configuration values
private_access
KafkaPrivateAccessUserConfig
Allow access to selected service ports from private networks
schema_registry_config
KafkaSchemaRegistryConfig
Schema Registry configuration
ip_filter
[]string
IP filter Allow incoming connections from CIDR address block, e.g. '10.20.0.0/16'
kafka_authentication_methods
KafkaAuthenticationMethodsUserConfig
Kafka authentication methods
kafka_connect
bool
Enable Kafka Connect service
kafka_rest
bool
Enable Kafka-REST service
kafka_rest_config
KafkaRestUserConfig
Kafka REST configuration

MigrationUserConfig

Field Description
host
string
Hostname or IP address of the server where to migrate data from
password
string
Password for authentication with the server where to migrate data from
port
int64
Port number of the server where to migrate data from
ssl
bool
The server where to migrate data from is secured with SSL
username
string
User name for authentication with the server where to migrate data from
dbname
string
Database name for bootstrapping the initial connection

PgLookoutUserConfig

Field Description
max_failover_replication_time_lag
int64
max_failover_replication_time_lag Number of seconds of master unavailability before triggering database failover to standby

PgbouncerUserConfig

Field Description
ignore_startup_parameters
[]string
List of parameters to ignore when given in startup packet
server_reset_query_always
bool
Run server_reset_query (DISCARD ALL) in all pooling modes

PostgreSQL

PostgreSQL is the Schema for the postgresql API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
PostgreSQLSpec
N/A
status
ServiceStatus
N/A

PostgreSQLSpec

PostgreSQLSpec defines the desired state of postgres instance

Field Description
ServiceCommonSpec
ServiceCommonSpec
(Members of ServiceCommonSpec are embedded into this type.)
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
connInfoSecretTarget
ConnInfoSecretTarget
Information regarding secret creation
userConfig
PostgreSQLUserconfig
PostgreSQL specific user configuration options
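
As a sketch, a PostgreSQL manifest combining the embedded ServiceCommonSpec fields with a userConfig could look like the following; plan, cloud, version, and timeout values are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-postgresql
spec:
  project: my-project                  # ServiceCommonSpec fields (placeholders)
  plan: startup-4
  cloudName: google-europe-west1
  maintenanceWindowDow: sunday
  maintenanceWindowTime: "03:00:00"
  terminationProtection: true
  connInfoSecretTarget:
    name: my-postgresql-connection
  userConfig:
    pg_version: "13"                   # placeholder major version
    pg:                                # postgresql.conf values
      idle_in_transaction_session_timeout: 900000
    public_access:
      pg: false
      prometheus: false
  authSecretRef:
    name: aiven-token
    key: token
```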

PostgreSQLSubUserConfig

Field Description
log_min_duration_statement
int64
log_min_duration_statement Log statements that take more than this number of milliseconds to run, -1 disables
max_replication_slots
int64
max_replication_slots PostgreSQL maximum replication slots
max_standby_streaming_delay
int64
max_standby_streaming_delay Max standby streaming delay in milliseconds
pg_partman_bgw.interval
int64
pg_partman_bgw.interval Sets the time interval to run pg_partman's scheduled tasks
pg_stat_statements.track
string
pg_stat_statements.track Controls which statements are counted. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top.
autovacuum_vacuum_threshold
int64
autovacuum_vacuum_threshold Specifies the minimum number of updated or deleted tuples needed to trigger a VACUUM in any one table. The default is 50 tuples
jit
bool
jit Controls system-wide use of Just-in-Time Compilation (JIT).
max_prepared_transactions
int64
max_prepared_transactions PostgreSQL maximum prepared transactions
autovacuum_freeze_max_age
int64
autovacuum_freeze_max_age Specifies the maximum age (in transactions) that a table's pg_class.relfrozenxid field can attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. This parameter will cause the server to be restarted.
idle_in_transaction_session_timeout
int64
idle_in_transaction_session_timeout Time out sessions with open transactions after this number of milliseconds
wal_sender_timeout
int64
wal_sender_timeout Terminate replication connections that are inactive for longer than this amount of time, in milliseconds.
max_pred_locks_per_transaction
int64
max_pred_locks_per_transaction PostgreSQL maximum predicate locks per transaction
timezone
string
timezone PostgreSQL service timezone
max_wal_senders
int64
max_wal_senders PostgreSQL maximum WAL senders
track_activity_query_size
int64
track_activity_query_size Specifies the number of bytes reserved to track the currently executing command for each active session.
max_files_per_process
int64
max_files_per_process PostgreSQL maximum number of files that can be open per process
max_parallel_workers_per_gather
int64
max_parallel_workers_per_gather Sets the maximum number of workers that can be started by a single Gather or Gather Merge node
autovacuum_vacuum_scale_factor
int64
autovacuum_vacuum_scale_factor Specifies a fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a VACUUM. The default is 0.2 (20% of table size)
log_autovacuum_min_duration
int64
log_autovacuum_min_duration Causes each action executed by autovacuum to be logged if it ran for at least the specified number of milliseconds. Setting this to zero logs all autovacuum actions. Minus-one (the default) disables logging autovacuum actions.
max_locks_per_transaction
int64
max_locks_per_transaction PostgreSQL maximum locks per transaction
max_stack_depth
int64
max_stack_depth Maximum depth of the stack in bytes
max_worker_processes
int64
max_worker_processes Sets the maximum number of background processes that the system can support
pg_partman_bgw.role
string
pg_partman_bgw.role Controls which role to use for pg_partman's scheduled background tasks.
autovacuum_analyze_scale_factor
int64
autovacuum_analyze_scale_factor Specifies a fraction of the table size to add to autovacuum_analyze_threshold when deciding whether to trigger an ANALYZE. The default is 0.2 (20% of table size)
autovacuum_vacuum_cost_limit
int64
autovacuum_vacuum_cost_limit Specifies the cost limit value that will be used in automatic VACUUM operations. If -1 is specified (which is the default), the regular vacuum_cost_limit value will be used.
temp_file_limit
int64
temp_file_limit PostgreSQL temporary file limit in KiB, -1 for unlimited
track_functions
string
track_functions Enables tracking of function call counts and time used.
max_parallel_workers
int64
max_parallel_workers Sets the maximum number of workers that the system can support for parallel queries
track_commit_timestamp
string
track_commit_timestamp Record commit time of transactions.
max_standby_archive_delay
int64
max_standby_archive_delay Max standby archive delay in milliseconds
wal_writer_delay
int64
wal_writer_delay WAL flush interval in milliseconds. Note that setting this value to lower than the default 200ms may negatively impact performance
autovacuum_analyze_threshold
int64
autovacuum_analyze_threshold Specifies the minimum number of inserted, updated or deleted tuples needed to trigger an ANALYZE in any one table. The default is 50 tuples.
autovacuum_naptime
int64
autovacuum_naptime Specifies the minimum delay between autovacuum runs on any given database. The delay is measured in seconds, and the default is one minute
deadlock_timeout
int64
deadlock_timeout This is the amount of time, in milliseconds, to wait on a lock before checking to see if there is a deadlock condition.
log_error_verbosity
string
log_error_verbosity Controls the amount of detail written in the server log for each message that is logged.
max_logical_replication_workers
int64
max_logical_replication_workers PostgreSQL maximum logical replication workers (taken from the pool of max_parallel_workers)
autovacuum_max_workers
int64
autovacuum_max_workers Specifies the maximum number of autovacuum processes (other than the autovacuum launcher) that may be running at any one time. The default is three. This parameter can only be set at server start.
autovacuum_vacuum_cost_delay
int64
autovacuum_vacuum_cost_delay Specifies the cost delay value that will be used in automatic VACUUM operations. If -1 is specified, the regular vacuum_cost_delay value will be used. The default value is 20 milliseconds

PostgreSQLUserconfig

Field Description
pg_version
string
PostgreSQL major version
backup_minute
int64
The minute of an hour when backup for the service is started. New backup is only started if previous backup has already completed.
pg_service_to_fork_from
string
Name of the PostgreSQL Service from which to fork (deprecated, use service_to_fork_from). This has effect only when a new service is being created.
backup_hour
int64
The hour of day (in UTC) when backup for the service is started. New backup is only started if previous backup has already completed.
pglookout
PgLookoutUserConfig
PGLookout settings
shared_buffers_percentage
int64
shared_buffers_percentage Percentage of total RAM that the database server uses for shared memory buffers. Valid range is 20-60 (float), which corresponds to 20% - 60%. This setting adjusts the shared_buffers configuration value. The absolute maximum is 12 GB.
synchronous_replication
string
Synchronous replication type. Note that the service plan also needs to support synchronous replication.
timescaledb
TimescaledbUserConfig
TimescaleDB extension configuration values
admin_password
string
Custom password for admin user. Defaults to random string. This must be set only when a new service is being created.
ip_filter
[]string
IP filter Allow incoming connections from CIDR address block, e.g. '10.20.0.0/16'
pgbouncer
PgbouncerUserConfig
PGBouncer connection pooling settings
recovery_target_time
string
Recovery target time when forking a service. This has effect only when a new service is being created.
admin_username
string
Custom username for admin user. This must be set only when a new service is being created.
migration
MigrationUserConfig
Migrate data from existing server
private_access
PrivateAccessUserConfig
Allow access to selected service ports from private networks
public_access
PublicAccessUserConfig
Allow access to selected service ports from the public Internet
service_to_fork_from
string
Name of another service to fork from. This has effect only when a new service is being created.
variant
string
Variant of the PostgreSQL service, may affect the features that are exposed by default
work_mem
int64
work_mem Sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files, in MB. Default is 1MB + 0.075% of total RAM (up to 32MB).
pg
PostgreSQLSubUserConfig
postgresql.conf configuration values

PrivateAccessUserConfig

Field Description
pg
bool
Allow clients to connect to pg with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations
pgbouncer
bool
Allow clients to connect to pgbouncer with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations
prometheus
bool
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

Project

Project is the Schema for the projects API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
ProjectSpec
N/A
status
ProjectStatus
N/A

ProjectSpec

ProjectSpec defines the desired state of Project

Field Description
cardId
string
Credit card ID; the ID may be either the last 4 digits of the card or the actual ID
accountId
string
Account ID
billingAddress
string
Billing name and address of the project
billingEmails
[]string
Billing contact emails of the project
billingCurrency
string
Billing currency
billingExtraText
string
Extra text to be included in all project invoices, e.g. purchase order or cost center number
billingGroupId
string
BillingGroup ID
countryCode
string
Billing country code of the project
cloud
string
Target cloud, example: aws-eu-central-1
copyFromProject
string
Project name from which to copy settings to the new project
technicalEmails
[]string
Technical contact emails of the project
connInfoSecretTarget
ConnInfoSecretTarget
Information regarding secret creation
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
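
For illustration, a Project manifest might look like this sketch; the cloud value reuses the example from the table above and the contact details are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Project
metadata:
  name: my-project
spec:
  cloud: aws-eu-central-1        # target cloud
  countryCode: FI                # placeholder billing country
  billingEmails:
    - billing@example.com        # placeholder contacts
  technicalEmails:
    - ops@example.com
  authSecretRef:
    name: aiven-token
    key: token
```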

ProjectStatus

ProjectStatus defines the observed state of Project

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a Project state
vatId
string
EU VAT Identification Number
availableCredits
string
Available credits
country
string
Country name
estimatedBalance
string
Estimated balance
paymentMethod
string
Payment method name

ProjectVPC

ProjectVPC is the Schema for the projectvpcs API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
ProjectVPCSpec
N/A
status
ProjectVPCStatus
N/A

ProjectVPCSpec

ProjectVPCSpec defines the desired state of ProjectVPC

Field Description
project
string
The project the VPC belongs to
cloudName
string
Cloud the VPC is in
networkCidr
string
Network address range used by the VPC like 192.168.0.0/24
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
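
A ProjectVPC manifest built from these fields might look like the sketch below; the cloud name and CIDR are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ProjectVPC
metadata:
  name: my-vpc
spec:
  project: my-project              # placeholder
  cloudName: google-europe-west1   # placeholder cloud
  networkCidr: 192.168.0.0/24      # network range used by the VPC
  authSecretRef:
    name: aiven-token
    key: token
```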

ProjectVPCStatus

ProjectVPCStatus defines the observed state of ProjectVPC

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a ProjectVPC state
state
string
State of VPC
id
string
Project VPC id

PublicAccessUserConfig

Field Description
pg
bool
Allow clients to connect to pg from the public internet for service nodes that are in a project VPC or another type of private network
pgbouncer
bool
Allow clients to connect to pgbouncer from the public internet for service nodes that are in a project VPC or another type of private network
prometheus
bool
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

Redis

Redis is the Schema for the redis API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
RedisSpec
N/A
status
ServiceStatus
N/A

RedisMigration

Field Description
ignore_dbs
string
Comma-separated list of databases, which should be ignored during migration (supported by MySQL only at the moment)
password
string
Password for authentication with the server where to migrate data from
port
int64
Port number of the server where to migrate data from
ssl
bool
The server where to migrate data from is secured with SSL
username
string
User name for authentication with the server where to migrate data from
dbname
string
Database name for bootstrapping the initial connection
host
string
Hostname or IP address of the server where to migrate data from

RedisPrivateAccess

Field Description
prometheus
bool
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations
redis
bool
Allow clients to connect to redis with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations

RedisPrivatelinkAccess

Field Description
redis
bool
Enable redis

RedisPublicAccess

Field Description
prometheus
bool
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network
redis
bool
Allow clients to connect to redis from the public internet for service nodes that are in a project VPC or another type of private network

RedisSpec

RedisSpec defines the desired state of Redis

Field Description
ServiceCommonSpec
ServiceCommonSpec
(Members of ServiceCommonSpec are embedded into this type.)
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
connInfoSecretTarget
ConnInfoSecretTarget
Information regarding secret creation
userConfig
RedisUserConfig
Redis specific user configuration options
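
For illustration, a Redis manifest with the embedded ServiceCommonSpec fields and a small userConfig might look like this sketch; plan, cloud, and policy values are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Redis
metadata:
  name: my-redis
spec:
  project: my-project                    # ServiceCommonSpec fields (placeholders)
  plan: hobbyist
  cloudName: google-europe-west1
  connInfoSecretTarget:
    name: my-redis-connection
  userConfig:
    redis_maxmemory_policy: allkeys-lru  # placeholder eviction policy
    redis_ssl: true
  authSecretRef:
    name: aiven-token
    key: token
```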

RedisUserConfig

Field Description
migration
RedisMigration
Migrate data from existing server
public_access
RedisPublicAccess
Allow access to selected service ports from the public internet
private_access
RedisPrivateAccess
Allow access to selected service ports from private networks
privatelink_access
RedisPrivatelinkAccess
Allow access to selected service components through Privatelink
service_to_fork_from
string
Name of another service to fork from. This has effect only when a new service is being created.
ip_filter
[]string
IP filter Allow incoming connections from CIDR address block, e.g. '10.20.0.0/16'
project_to_fork_from
string
Name of another project to fork a service from. This has effect only when a new service is being created.
redis_acl_channels_default
string
Default ACL for pub/sub channels used when Redis user is created Determines default pub/sub channels' ACL for new users if ACL is not supplied. When this option is not defined, all_channels is assumed to keep backward compatibility. This option doesn't affect Redis configuration acl-pubsub-default.
redis_lfu_decay_time
int64
LFU maxmemory-policy counter decay time in minutes
redis_lfu_log_factor
int64
Counter logarithm factor for volatile-lfu and allkeys-lfu maxmemory-policies
redis_persistence
string
Redis persistence When persistence is 'rdb', Redis does RDB dumps every 10 minutes if any key is changed. Also RDB dumps are done according to backup schedule for backup purposes. When persistence is 'off', no RDB dumps and backups are done, so data can be lost at any moment if service is restarted for any reason, or if service is powered off. Also service can't be forked.
redis_pubsub_client_output_buffer_limit
int64
Pub/sub client output buffer hard limit in MB Set output buffer limit for pub / sub clients in MB. The value is the hard limit, the soft limit is 1/4 of the hard limit. When setting the limit, be mindful of the available memory in the selected service plan.
static_ips
bool
Static IP addresses Use static public IP addresses
redis_io_threads
int64
Redis IO thread count
redis_timeout
int64
Redis idle connection timeout
recovery_basebackup_name
string
Name of the basebackup to restore in forked service
redis_maxmemory_policy
string
Redis maxmemory-policy
redis_notify_keyspace_events
string
Set notify-keyspace-events option
redis_number_of_databases
int64
Number of redis databases Set number of redis databases. Changing this will cause a restart of redis service.
redis_ssl
bool
Require SSL to access Redis

ServiceCommonSpec

Field Description
project
string
Target project.
plan
string
Subscription plan.
cloudName
string
Cloud the service runs in.
projectVpcId
string
Identifier of the VPC the service should be in, if any.
maintenanceWindowDow
string
Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
maintenanceWindowTime
string
Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
terminationProtection
bool
Prevent service from being deleted. It is recommended to have this enabled for all services.

ServiceIntegration

ServiceIntegration is the Schema for the serviceintegrations API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
ServiceIntegrationSpec
N/A
status
ServiceIntegrationStatus
N/A

ServiceIntegrationDatadogUserConfig

Field Description
exclude_consumer_groups
[]string
Consumer groups to exclude
exclude_topics
[]string
List of topics to exclude
include_consumer_groups
[]string
Consumer groups to include
include_topics
[]string
Topics to include
kafka_custom_metrics
[]string
List of custom metrics

ServiceIntegrationKafkaConnect

Field Description
config_storage_topic
string
The name of the topic where connector and task configuration data are stored. This must be the same for all workers with the same group_id.
group_id
string
A unique string that identifies the Connect cluster group this worker belongs to.
offset_storage_topic
string
The name of the topic where connector and task configuration offsets are stored. This must be the same for all workers with the same group_id.
status_storage_topic
string
The name of the topic where connector and task configuration status updates are stored. This must be the same for all workers with the same group_id.

ServiceIntegrationKafkaConnectUserConfig

Field Description
kafka_connect
ServiceIntegrationKafkaConnect
N/A

ServiceIntegrationKafkaLogsUserConfig

Field Description
kafka_topic
string
Topic name

ServiceIntegrationMetricsUserConfig

Field Description
database
string
Name of the database where to store metric datapoints. Only affects PostgreSQL destinations
retention_days
int
Number of days to keep old metrics. Only affects PostgreSQL destinations. Set to 0 for no automatic cleanup. Defaults to 30 days.
ro_username
string
Name of a user that can be used to read metrics. This will be used for Grafana integration (if enabled) to prevent Grafana users from making undesired changes. Only affects PostgreSQL destinations. Defaults to 'metrics_reader'. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.
username
string
Name of the user used to write metrics. Only affects PostgreSQL destinations. Defaults to 'metrics_writer'. Note that this must be the same for all metrics integrations that write data to the same PostgreSQL service.

ServiceIntegrationSpec

ServiceIntegrationSpec defines the desired state of ServiceIntegration

Field Description
project
string
Project the integration belongs to
integrationType
string
Type of the service integration
sourceEndpointID
string
Source endpoint for the integration (if any)
sourceServiceName
string
Source service for the integration (if any)
destinationEndpointId
string
Destination endpoint for the integration (if any)
destinationServiceName
string
Destination service for the integration (if any)
datadog
ServiceIntegrationDatadogUserConfig
Datadog specific user configuration options
kafkaConnect
ServiceIntegrationKafkaConnectUserConfig
Kafka Connect service configuration values
kafkaLogs
ServiceIntegrationKafkaLogsUserConfig
Kafka logs configuration values
metrics
ServiceIntegrationMetricsUserConfig
Metrics configuration values
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
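
As a sketch, a ServiceIntegration manifest wiring a Kafka service to a Kafka Connect service could look like the following; the integration type, service names, and topic names are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-kafka-connect-integration
spec:
  project: my-project                        # placeholder
  integrationType: kafka_connect             # placeholder integration type
  sourceServiceName: my-kafka                # placeholder source service
  destinationServiceName: my-kafka-connect   # placeholder destination service
  kafkaConnect:
    kafka_connect:
      group_id: connect
      config_storage_topic: __connect_configs
      offset_storage_topic: __connect_offsets
      status_storage_topic: __connect_status
  authSecretRef:
    name: aiven-token
    key: token
```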

ServiceIntegrationStatus

ServiceIntegrationStatus defines the observed state of ServiceIntegration

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a ServiceIntegration state
id
string
Service integration ID

ServiceStatus

ServiceStatus defines the observed state of service

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a service state
state
string
Service state

ServiceUser

ServiceUser is the Schema for the serviceusers API

Field Description
metadata
Kubernetes meta/v1.ObjectMeta
Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
ServiceUserSpec
N/A
status
ServiceUserStatus
N/A

ServiceUserSpec

ServiceUserSpec defines the desired state of ServiceUser

Field Description
project
string
Project to link the user to
serviceName
string
Service to link the user to
authentication
string
Authentication details
connInfoSecretTarget
ConnInfoSecretTarget
Information regarding secret creation
authSecretRef
AuthSecretReference
Authentication reference to Aiven token in a secret
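
A ServiceUser manifest built from these fields might look like the sketch below; service and Secret names are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceUser
metadata:
  name: my-service-user
spec:
  project: my-project                   # placeholder
  serviceName: my-postgresql            # placeholder
  connInfoSecretTarget:
    name: my-service-user-credentials   # Secret to create with the user's credentials
  authSecretRef:
    name: aiven-token
    key: token
```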

ServiceUserStatus

ServiceUserStatus defines the observed state of ServiceUser

Field Description
conditions
[]Kubernetes meta/v1.Condition
Conditions represent the latest available observations of a ServiceUser state
type
string
Type of the user account

TimescaledbUserConfig

Field Description
max_background_workers
int64
timescaledb.max_background_workers The number of background workers for timescaledb operations. You should configure this setting to the sum of your number of databases and the total number of concurrent background workers you want running at any given point in time.

References

Generated with gen-crd-api-reference-docs on git commit 7f93721.