Is there a way in the S3 Kafka sink connector to ensure all records are consumed?
I have a problem with the S3 Kafka sink connector, and I have also seen the same thing with the JDBC sink connector.
I'm trying to see how I can ensure that my connectors are actually consuming all the data in a given topic.
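The only way I can think of to check this is via consumer-group lag. A sketch of what I mean (assuming the default naming, where a sink connector consumes under the group `connect-<connector-name>`; the connector name here is hypothetical):

```
# Describe the sink connector's consumer group with the stock Kafka CLI.
# LAG = log-end-offset minus the last offset the sink task has committed,
# i.e. records produced to the topic but not yet delivered by the connector.
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group connect-my-s3-sink
```

A non-zero LAG would presumably mean records that have been fetched or buffered but not yet flushed to the sink.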
I expect that, because of the flush sizes, there could be a certain number of records still buffered by the connector and not yet written out to the sink.
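For reference, these are the properties I mean by "flush sizes" in the Confluent S3 sink connector (a minimal sketch, not my real config; connector, topic, and bucket names are hypothetical). `flush.size` caps how many records are buffered per topic partition before a file is committed to S3, and `rotate.schedule.interval.ms` (which requires `timezone` to be set) forces a commit on a wall-clock schedule even when the flush size hasn't been reached:

```
{
  "name": "my-s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "my-topic",
    "s3.bucket.name": "my-bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1000",
    "rotate.schedule.interval.ms": "60000",
    "timezone": "UTC"
  }
}
```

With a config like this, any records beyond the last committed file but below the flush threshold sit in the connector until either limit is hit, which is exactly the gap I'm asking how to account for.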