1. Kafka as a Source (data goes into Kafka and is pulled back out)
Under Flume's conf directory there is a flumeconf folder; this folder was created manually to hold agent configurations.
Create a Flume configuration file in it named kafka-memory-logger.conf:
# Name the components of this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Kafka source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 100
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = bigdata01:9092,bigdata02:9092,bigdata03:9092
a1.sources.r1.kafka.topics = five
a1.sources.r1.kafka.consumer.group.id = qiaodaohu

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Logger sink
a1.sinks.k1.type = logger
a1.sinks.k1.maxBytesToLog = 128

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Next, create a topic named kafka-flume, or just reuse the existing five topic. Note that if you create kafka-flume, you must also change a1.sources.r1.kafka.topics in the configuration above to match.
Command to create the topic:
kafka-topics.sh --create --topic kafka-flume --bootstrap-server bigdata01:9092 --partitions 3 --replication-factor 1
Test:
Start a console producer and send messages to the topic, then start the Flume agent and watch it log the messages it receives.
kafka-console-producer.sh --topic kafka-flume --bootstrap-server bigdata01:9092
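The "start the Flume agent" step above can be done with the flume-ng launcher. A minimal sketch, assuming flume-ng is on the PATH, you run it from the Flume home directory, and kafka-memory-logger.conf sits in conf/flumeconf/ as described earlier:

```shell
# Start agent a1 defined in kafka-memory-logger.conf and print the
# logger sink's output to the console so received events are visible.
flume-ng agent \
  --name a1 \
  --conf conf \
  --conf-file conf/flumeconf/kafka-memory-logger.conf \
  -Dflume.root.logger=INFO,console
```

With the agent running, each line typed into the kafka-console-producer.sh session should appear in the agent's console output as a logged Flume event (truncated to maxBytesToLog = 128 bytes).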