Kafka - 6. Configuration - Kafka Connect Configs

Author: 悠扬前奏 | Published 2019-06-09 22:10

3.5 Kafka Connect Configs

Below are the configurations for the Kafka Connect framework:

NAME DESCRIPTION TYPE DEFAULT VALID VALUES IMPORTANCE
config.storage.topic The name of the Kafka topic where connector configurations are stored string high
group.id A unique string that identifies the Connect cluster group this worker belongs to. string high
key.converter Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. class high
offset.storage.topic The name of the Kafka topic where connector offsets are stored string high
status.storage.topic The name of the Kafka topic where connector and task status are stored string high
value.converter Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. class high
bootstrap.servers A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). list localhost:9092 high
heartbeat.interval.ms The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. int 3000 high
rebalance.timeout.ms The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures. int 60000 high
session.timeout.ms The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms. int 10000 high
ssl.key.password The password of the private key in the key store file. This is optional for client. password null high
ssl.keystore.location The location of the key store file. This is optional for client and can be used for two-way authentication for client. string null high
ssl.keystore.password The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. password null high
ssl.truststore.location The location of the trust store file. string null high
ssl.truststore.password The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled. password null high
client.dns.lookup Controls how the client uses DNS lookups.

If set to use_all_dns_ips then, when the lookup returns multiple IP addresses for a hostname, they will all be attempted to connect to before failing the connection. Applies to both bootstrap and advertised servers.

If the value is resolve_canonical_bootstrap_servers_only, each entry will be resolved and expanded into a list of canonical names.
string default [default, use_all_dns_ips, resolve_canonical_bootstrap_servers_only] medium
connections.max.idle.ms Close idle connections after the number of milliseconds specified by this config. long 540000 medium
receive.buffer.bytes The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. int 32768 [0,...] medium
request.timeout.ms The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. int 40000 [0,...] medium
sasl.client.callback.handler.class The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. class null medium
sasl.jaas.config JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: 'loginModuleClass controlFlag (optionName=optionValue)*;'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required; password null medium
sasl.kerberos.service.name The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. string null medium
sasl.login.callback.handler.class The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler class null medium
sasl.login.class The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin class null medium
sasl.mechanism SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. string GSSAPI medium
security.protocol Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. string PLAINTEXT medium
send.buffer.bytes The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. int 131072 [0,...] medium
ssl.enabled.protocols The list of protocols enabled for SSL connections. list TLSv1.2,TLSv1.1,TLSv1 medium
ssl.keystore.type The file format of the key store file. This is optional for client. string JKS medium
ssl.protocol The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. string TLS medium
ssl.provider The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. string null medium
ssl.truststore.type The file format of the trust store file. string JKS medium
worker.sync.timeout.ms When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining. int 3000 medium
worker.unsync.backoff.ms When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining. int 300000 medium
access.control.allow.methods Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD. string "" low
access.control.allow.origin Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API. string "" low
client.id An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. string "" low
config.providers Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvider allows you to replace variable references in connector configurations, such as for externalized secrets. list "" low
config.storage.replication.factor Replication factor used when creating the configuration storage topic short 3 [1,...] low
header.converter HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas. class org.apache.kafka.connect.storage.SimpleHeaderConverter low
internal.key.converter Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version. class org.apache.kafka.connect.json.JsonConverter low
internal.value.converter Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version. class org.apache.kafka.connect.json.JsonConverter low
listeners List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084 list null low
metadata.max.age.ms The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. long 300000 [0,...] low
metric.reporters A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. list "" low
metrics.num.samples The number of samples maintained to compute metrics. int 2 [1,...] low
metrics.recording.level The highest recording level for metrics. string INFO [INFO, DEBUG] low
metrics.sample.window.ms The window of time a metrics sample is computed over. long 30000 [0,...] low
offset.flush.interval.ms Interval at which to try committing offsets for tasks. long 60000 low
offset.flush.timeout.ms Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. long 5000 low
offset.storage.partitions The number of partitions used when creating the offset storage topic int 25 [1,...] low
offset.storage.replication.factor Replication factor used when creating the offset storage topic short 3 [1,...] low
plugin.path List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of: a) directories immediately containing jars with plugins and their dependencies b) uber-jars with plugins and their dependencies c) directories immediately containing the package directory structure of classes of plugins and their dependencies Note: symlinks will be followed to discover dependencies or plugins. Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors list null low
reconnect.backoff.max.ms The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. long 1000 [0,...] low
reconnect.backoff.ms The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. long 50 [0,...] low
rest.advertised.host.name If this is set, this is the hostname that will be given out to other workers to connect to. string null low
rest.advertised.listener Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use. string null low
rest.advertised.port If this is set, this is the port that will be given out to other workers to connect to. int null low
rest.extension.classes Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the interface ConnectRestExtension allows you to inject into Connect's REST API user defined resources like filters. Typically used to add custom capability like logging, security, etc. list "" low
rest.host.name Hostname for the REST API. If this is set, it will only bind to this interface. string null low
rest.port Port for the REST API to listen on. int 8083 low
retry.backoff.ms The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. long 100 [0,...] low
sasl.kerberos.kinit.cmd Kerberos kinit command path. string /usr/bin/kinit low
sasl.kerberos.min.time.before.relogin Login thread sleep time between refresh attempts. long 60000 low
sasl.kerberos.ticket.renew.jitter Percentage of random jitter added to the renewal time. double 0.05 low
sasl.kerberos.ticket.renew.window.factor Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. double 0.8 low
sasl.login.refresh.buffer.seconds The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. short 300 [0,...,3600] low
sasl.login.refresh.min.period.seconds The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. short 60 [0,...,900] low
sasl.login.refresh.window.factor Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. double 0.8 [0.5,...,1.0] low
sasl.login.refresh.window.jitter The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. double 0.05 [0.0,...,0.25] low
ssl.cipher.suites A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. list null low
ssl.client.auth Configures the Kafka broker to request client authentication. The following settings are common:

* ssl.client.auth=required If set to required, client authentication is required.
* ssl.client.auth=requested This means client authentication is optional. Unlike required, with this option the client can choose not to provide authentication information about itself.
* ssl.client.auth=none This means client authentication is not needed. string none low
ssl.endpoint.identification.algorithm The endpoint identification algorithm to validate server hostname using server certificate. string https low
ssl.keymanager.algorithm The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. string SunX509 low
ssl.secure.random.implementation The SecureRandom PRNG implementation to use for SSL cryptography operations. string null low
ssl.trustmanager.algorithm The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. string PKIX low
status.storage.partitions The number of partitions used when creating the status storage topic int 5 [1,...] low
status.storage.replication.factor Replication factor used when creating the status storage topic short 3 [1,...] low
task.shutdown.graceful.timeout.ms Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered, then they are waited on sequentially. long 5000 low
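
As a concrete reference, here is a minimal distributed-mode worker configuration sketch assembled only from the properties listed above. The topic names (connect-configs, connect-offsets, connect-status) and the group.id value are illustrative placeholders, not required names; the remaining values are the defaults or examples shown in the table.

    # connect-distributed.properties (sketch; values are examples)
    bootstrap.servers=localhost:9092
    group.id=connect-cluster                      # unique per Connect cluster (illustrative value)
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    config.storage.topic=connect-configs          # illustrative topic name
    offset.storage.topic=connect-offsets          # illustrative topic name
    status.storage.topic=connect-status           # illustrative topic name
    config.storage.replication.factor=3
    offset.storage.replication.factor=3
    status.storage.replication.factor=3
    offset.flush.interval.ms=60000
    listeners=HTTP://0.0.0.0:8083                 # bind the REST API to all interfaces
    plugin.path=/usr/local/share/kafka/plugins    # illustrative plugin directory

The converters here reuse org.apache.kafka.connect.json.JsonConverter, the class already referenced by the internal.key.converter and internal.value.converter defaults above; since converters are independent of connectors, any other Converter implementation could be substituted.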

3.5.1 Source Connector Configs
