@Experimental
public class DynamicKafkaSource<T>
extends Object
implements org.apache.flink.api.connector.source.Source<T,DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>, org.apache.flink.api.java.typeutils.ResultTypeQueryable<T>

Type Parameters:
T - Record type
The key difference between this source and KafkaSource is that it can read dynamically, without a job restart, from streams (topics that belong to one or more clusters). With KafkaSource, users must delete the job and reconfigure the topics and clusters, then restart it.
This example shows how to configure a DynamicKafkaSource that emits Integer records:

```java
DynamicKafkaSource<Integer> dynamicKafkaSource =
        DynamicKafkaSource.<Integer>builder()
                .setStreamIds(Collections.singleton("MY_STREAM_ID"))
                // custom metadata service that resolves `MY_STREAM_ID` to the
                // associated clusters and topics
                .setKafkaMetadataService(kafkaMetadataService)
                .setDeserializer(
                        KafkaRecordDeserializationSchema.valueOnly(
                                IntegerDeserializer.class))
                .setStartingOffsets(OffsetsInitializer.earliest())
                // common properties for all Kafka clusters
                .setProperties(properties)
                .build();
```
See more configuration options in DynamicKafkaSourceBuilder and DynamicKafkaSourceOptions.
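The `kafkaMetadataService` passed to the builder is the component that resolves stream ids to clusters and topics. For the common single-cluster case, the connector ships a ready-made implementation; the following is a minimal sketch assuming a `SingleClusterTopicMetadataService` whose constructor takes a cluster id and that cluster's connection properties (names and constructor shape should be verified against your flink-connector-kafka version):

```java
// Sketch only: SingleClusterTopicMetadataService resolves each stream id to a
// topic of the same name on a single Kafka cluster. Cluster id and broker
// address below are placeholders.
Properties clusterProperties = new Properties();
clusterProperties.setProperty("bootstrap.servers", "broker-1:9092");

KafkaMetadataService kafkaMetadataService =
        new SingleClusterTopicMetadataService("my-cluster", clusterProperties);
```

For multi-cluster setups, users supply their own KafkaMetadataService implementation; the source periodically polls it, so topic and cluster changes are picked up without restarting the job.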
| Modifier and Type | Method and Description |
|---|---|
| `static <T> DynamicKafkaSourceBuilder<T>` | `builder()` Get a builder for this source. |
| `org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>` | `createEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<DynamicKafkaSourceSplit> enumContext)` Create the DynamicKafkaSourceEnumerator. |
| `org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>` | `createReader(org.apache.flink.api.connector.source.SourceReaderContext readerContext)` Create the DynamicKafkaSourceReader. |
| `org.apache.flink.api.connector.source.Boundedness` | `getBoundedness()` Get the Boundedness. |
| `org.apache.flink.core.io.SimpleVersionedSerializer<DynamicKafkaSourceEnumState>` | `getEnumeratorCheckpointSerializer()` |
| `KafkaStreamSubscriber` | `getKafkaStreamSubscriber()` |
| `org.apache.flink.api.common.typeinfo.TypeInformation<T>` | `getProducedType()` Get the TypeInformation of the source. |
| `org.apache.flink.core.io.SimpleVersionedSerializer<DynamicKafkaSourceSplit>` | `getSplitSerializer()` Get the DynamicKafkaSourceSplitSerializer. |
| `org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>` | `restoreEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<DynamicKafkaSourceSplit> enumContext, DynamicKafkaSourceEnumState checkpoint)` Restore the DynamicKafkaSourceEnumerator. |
public static <T> DynamicKafkaSourceBuilder<T> builder()
Get a builder for this source.
Returns: a DynamicKafkaSourceBuilder.

public org.apache.flink.api.connector.source.Boundedness getBoundedness()
Get the Boundedness.
Specified by: getBoundedness in interface org.apache.flink.api.connector.source.Source<T,DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
Returns: the Boundedness.

@Internal
public org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit> createReader(org.apache.flink.api.connector.source.SourceReaderContext readerContext)
Create the DynamicKafkaSourceReader.
Specified by: createReader in interface org.apache.flink.api.connector.source.SourceReaderFactory<T,DynamicKafkaSourceSplit>
Parameters: readerContext - The context for the source reader.
Returns: the DynamicKafkaSourceReader.

@Internal
public org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState> createEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<DynamicKafkaSourceSplit> enumContext)
Create the DynamicKafkaSourceEnumerator.
Specified by: createEnumerator in interface org.apache.flink.api.connector.source.Source<T,DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
Parameters: enumContext - The context for the split enumerator.
Returns: the DynamicKafkaSourceEnumerator.

@Internal
public org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState> restoreEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<DynamicKafkaSourceSplit> enumContext, DynamicKafkaSourceEnumState checkpoint)
Restore the DynamicKafkaSourceEnumerator.
Specified by: restoreEnumerator in interface org.apache.flink.api.connector.source.Source<T,DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
Parameters: enumContext - The context for the restored split enumerator. checkpoint - The checkpoint to restore the SplitEnumerator from.
Returns: the DynamicKafkaSourceEnumerator.

@Internal
public org.apache.flink.core.io.SimpleVersionedSerializer<DynamicKafkaSourceSplit> getSplitSerializer()
Get the DynamicKafkaSourceSplitSerializer.
Specified by: getSplitSerializer in interface org.apache.flink.api.connector.source.Source<T,DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
Returns: the DynamicKafkaSourceSplitSerializer.

@Internal
public org.apache.flink.core.io.SimpleVersionedSerializer<DynamicKafkaSourceEnumState> getEnumeratorCheckpointSerializer()
Specified by: getEnumeratorCheckpointSerializer in interface org.apache.flink.api.connector.source.Source<T,DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
Returns: the DynamicKafkaSourceEnumStateSerializer.

public org.apache.flink.api.common.typeinfo.TypeInformation<T> getProducedType()
Get the TypeInformation of the source.
Specified by: getProducedType in interface org.apache.flink.api.java.typeutils.ResultTypeQueryable<T>
Returns: the TypeInformation.

@VisibleForTesting
public KafkaStreamSubscriber getKafkaStreamSubscriber()
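Like any implementation of the Source interface, the built instance is handed to the execution environment rather than called directly; createReader, createEnumerator, and the serializer accessors above are invoked by the Flink runtime. A minimal usage sketch (the source variable is the one built in the earlier example; the source and job names are placeholders):

```java
// Sketch: attach the DynamicKafkaSource to a streaming job. The runtime, not
// user code, drives the SplitEnumerator and SourceReader lifecycle.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<Integer> stream =
        env.fromSource(
                dynamicKafkaSource,
                WatermarkStrategy.noWatermarks(),
                "dynamic-kafka-source");

stream.print();
env.execute("dynamic-kafka-example");
```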
Copyright © 2022–2024 The Apache Software Foundation. All rights reserved.