This article describes how to integrate Kafka into a Spring Boot project to send and receive messages.
POM dependencies
We will skip the Spring Boot dependencies themselves; the only Kafka-related dependency needed is the spring-kafka integration package:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
Kafka-related YAML configuration
spring:
  kafka:
    bootstrap-servers: 30.46.35.29:9092
    producer:
      retries: 3
      acks: -1
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      compression-type: lz4
      properties:
        linger.ms: 1
        'interceptor.classes': com.tencent.qidian.ma.commontools.trace.kafka.TracingProducerInterceptor
    consumer:
      heartbeat-interval: 3000
      max-poll-records: 100
      enable-auto-commit: false
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        session.timeout.ms: 30000
    listener:
      concurrency: 3
      type: batch
      ack-mode: manual_immediate
The relevant configuration options are explained below:
spring.kafka
- spring.kafka.bootstrap-servers: the list of Kafka broker addresses, in host:port format; separate multiple addresses with commas.
spring.kafka.producer
Configures producer-related Kafka properties.
spring.kafka.consumer
Configures consumer-related Kafka properties.
- spring.kafka.consumer.enable-auto-commit: false disables the Kafka client's automatic offset commits.
- spring.kafka.consumer.max-poll-records: 100 means that with batch consumption enabled, each poll fetches at most 100 records from the Kafka broker.
In Spring, messages are consumed through Kafka listeners; spring.kafka.listener configures those listeners.
- spring.kafka.listener.type: batch enables batch consumption; the default is single (one record at a time).
- spring.kafka.listener.ack-mode: manual_immediate turns off Spring's automatic offset commits, so we must commit manually in code. The two most common values are MANUAL and MANUAL_IMMEDIATE. With MANUAL, calling Acknowledgment.acknowledge() after processing first stores the offset in a local map cache, and the cached offsets are committed in a batch before the next poll. With MANUAL_IMMEDIATE, each call to Acknowledgment.acknowledge() commits the offset immediately. The sketch below makes the difference concrete.
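A minimal sketch (the demo topic and group names are hypothetical): the listener body is identical under both modes; only the timing of the offset commit behind acknowledge() differs.

@KafkaListener(topics = "demo", groupId = "demo_group") // hypothetical topic and group
public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
    // ... business logic ...
    ack.acknowledge(); // MANUAL: offset is cached and committed in a batch before the next poll
                       // MANUAL_IMMEDIATE: offset is committed to the broker right away
}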
Producer configuration
1) Annotate the config class with @Configuration and @EnableKafka to declare it and enable KafkaTemplate support.
2) Expose the beans with @Bean.
A typical configuration for reference:
package com.somnus.config.kafka;

import java.util.HashMap;
import java.util.Map;

import jakarta.annotation.Resource;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
@EnableKafka
public class KafkaProducerConfig {

    @Resource
    private KafkaProperties kafkaProperties;

    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
        props.put(ProducerConfig.RETRIES_CONFIG, kafkaProperties.getProducer().getRetries());
        // batch-size and buffer-memory are bound as DataSize; Kafka expects plain byte counts
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, (int) kafkaProperties.getProducer().getBatchSize().toBytes());
        props.put(ProducerConfig.LINGER_MS_CONFIG, kafkaProperties.getProducer().getProperties().get("linger.ms"));
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, kafkaProperties.getProducer().getBufferMemory().toBytes());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, kafkaProperties.getProducer().getKeySerializer());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, kafkaProperties.getProducer().getValueSerializer());
        return props;
    }

    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Consumer configuration
1) Annotate the config class with @Configuration and @EnableKafka to declare it and enable Kafka listener support.
2) Expose the beans with @Bean.
A typical configuration for reference:
package com.tencent.qidian.ma.maaction.web.config.kafka;

import java.util.HashMap;
import java.util.Map;

import jakarta.annotation.Resource;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties.Listener.Type;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

/**
 * KafkaBeanConfiguration
 */
@Configuration
@EnableKafka
public class KafkaBeanConfiguration {

    @Resource
    private KafkaProperties kafkaProperties;

    @Bean(name = "kafkaListenerContainerFactory")
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckMode(kafkaProperties.getListener().getAckMode());
        factory.setConcurrency(kafkaProperties.getListener().getConcurrency());
        if (kafkaProperties.getListener().getType().equals(Type.BATCH)) {
            factory.setBatchListener(true);
        }
        return factory;
    }

    // This bean exists for the later demonstration; see the containerFactory attribute
    // in the consumption example.
    @Bean(name = "tenThreadsKafkaListenerContainerFactory1")
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>>
            tenThreadsKafkaListenerContainerFactory1() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckMode(kafkaProperties.getListener().getAckMode());
        factory.setConcurrency(10);
        if (kafkaProperties.getListener().getType().equals(Type.BATCH)) {
            factory.setBatchListener(true);
        }
        return factory;
    }

    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
        propsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG,
                kafkaProperties.getConsumer().getMaxPollRecords());
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,
                kafkaProperties.getConsumer().getEnableAutoCommit());
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG,
                kafkaProperties.getConsumer().getProperties().get("session.timeout.ms"));
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                kafkaProperties.getConsumer().getKeyDeserializer());
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                kafkaProperties.getConsumer().getValueDeserializer());
        return propsMap;
    }
}
Sending Kafka messages with KafkaTemplate in Spring Boot
@Resource
private ObjectMapper mapper;

@Resource
private KafkaTemplate<String, String> kafkaTemplate;

public void sendOrder() {
    String topic = "order";
    try {
        Order order = new Order();
        String message = mapper.writeValueAsString(order);
        CompletableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message);
        future.thenAccept(result -> {
            if (result.getRecordMetadata() != null) {
                log.debug("send message:{} with offset:{}", message, result.getRecordMetadata().offset());
            }
        }).exceptionally(exception -> {
            log.error("KafkaProducer send message failure,topic={},data={}", topic, message, exception);
            return null;
        });
    } catch (Exception e) {
        log.error("KafkaProducer send message exception,topic={}", topic, e);
    }
}
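If the caller must not proceed until the broker has acknowledged the write, you can also block on the returned future. A minimal sketch reusing the kafkaTemplate above; the sendOrderSync method name and the 10-second timeout are illustrative, and passing orderId as the record key keeps all messages for one order in the same partition:

public void sendOrderSync(String orderId, String message) throws Exception {
    // send(topic, key, value): records with the same key always land in the same partition
    SendResult<String, String> result =
            kafkaTemplate.send("order", orderId, message).get(10, TimeUnit.SECONDS);
    log.info("sent to partition:{} offset:{}",
            result.getRecordMetadata().partition(), result.getRecordMetadata().offset());
}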
Consuming Kafka messages with @KafkaListener in Spring Boot
max.poll.interval.ms
Defaults to 5 minutes.
If the gap between two poll calls exceeds this interval, the broker assumes the consumer is too slow to keep up, kicks it out of the consumer group, and reassigns its partitions to other consumers, triggering a rebalance.
If your consumer nodes keep stopping consumption shortly after every restart, consider checking this setting or speeding up your consumer's processing. The limit can also be raised globally in the YAML configuration, as shown below.
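A sketch of raising the interval for all consumers through spring.kafka.consumer.properties; the 600000 ms value is purely illustrative, not a recommendation:

spring:
  kafka:
    consumer:
      properties:
        max.poll.interval.ms: 600000  # allow up to 10 minutes between polls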
max.poll.records
max-poll-records is a Kafka consumer configuration parameter that caps how many records the consumer pulls from the broker in a single poll; the default is 500. In Kafka, a consumer group can contain multiple consumer instances, each responsible for consuming one or more partitions, and each instance may fetch one or more records per poll.
The purpose of max-poll-records is to control the maximum number of records fetched per poll, so you can throttle consumption and bound memory usage.
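With batch listening enabled (spring.kafka.listener.type=batch), the whole poll is handed to one listener invocation, so the batch size is bounded by max-poll-records (100 in the YAML above). A minimal sketch, assuming the order topic from this article:

@KafkaListener(topics = "order", groupId = "g_order_consumer_group")
public void onBatch(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
    log.info("polled {} records", records.size()); // never more than max.poll.records
    ack.acknowledge();
}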
On the max-poll-records and listener.concurrency settings in Kafka
Note: the concurrency attribute on the @KafkaListener annotation overrides the concurrency set on the consumer container factory. Taking the code below as an example: even though the factory's concurrency is 3, uncommenting concurrency = "2" means only 2 listeners are ultimately created.
@Resource
private ObjectMapper mapper;

@KafkaListener(id = "order_consumer",
        topics = "order",
        groupId = "g_order_consumer_group",
        // containerFactory selects a specific factory bean; if omitted, the bean named
        // kafkaListenerContainerFactory is used by default
        //containerFactory = "tenThreadsKafkaListenerContainerFactory1",
        //concurrency = "2",
        properties = {"max.poll.interval.ms:300000", "max.poll.records:1"})
// The method may take only the ConsumerRecords<String, String> parameter; the Acknowledgment
// parameter is optional, and ack.acknowledge() guards against message loss
public void consume(ConsumerRecords<String, String> records, Acknowledgment ack) {
    for (ConsumerRecord<String, String> record : records) {
        String msg = record.value();
        log.info("Consume msg:{}", msg);
        try {
            Order order = mapper.readValue(msg, Order.class);
            // handle business logic here
        } catch (Exception e) {
            log.error("Consume failed, msg:{}", msg, e);
        }
    }
    ack.acknowledge();
}