Aman

Reputation: 495

What is the difference between broker-list and bootstrap servers?

What is the difference between Kafka's broker-list and bootstrap-server options?

Upvotes: 27

Views: 22266

Answers (3)

Jing Li

Reputation: 15116

This was already well answered by others; I just want to share some additional information here.

The command-line tools under the bin directory have no detailed usage documentation.

Of course you can call for --help to print a description of the given command's supported syntax and options.

For example: bin/kafka-console-producer.sh --help

--bootstrap-server <String: server to    REQUIRED unless --broker-list
  connect to>                              (deprecated) is specified. The server
                                           (s) to connect to. The broker list
                                           string in the form HOST1:PORT1,HOST2:
                                           PORT2.
--broker-list <String: broker-list>      DEPRECATED, use --bootstrap-server
                                           instead; ignored if --bootstrap-
                                           server is specified.  The broker
                                           list string in the form HOST1:PORT1,
                                           HOST2:PORT2.
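Note from the help text above that both options take the same value format, a comma-separated HOST:PORT list; only the option name changed. A quick shell illustration of that format (the broker hostnames here are made up):

```shell
# Both --broker-list and --bootstrap-server accept the same value:
# a comma-separated list of HOST:PORT endpoints.
SERVERS="broker1:9092,broker2:9092,broker3:9092"

# Split on commas to list each endpoint on its own line:
echo "$SERVERS" | tr ',' '\n'
```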

But instead of running the commands, you can always find the latest information directly in the source code, inside the core/src/main/scala/kafka directory; the corresponding Scala class is under either the tools directory or the admin directory.

For instance, the kafka-console-producer.sh script actually invokes the ConsoleProducer.scala class. There you can easily see that broker-list is DEPRECATED.

Have fun reading the source code :)

Upvotes: 0

Atishay Jain

Reputation: 117

This answer is for information purposes only: I was not using --broker-list, which confused me until I realised that it is deprecated.

Currently I am using Kafka version 2.6.0.

Now, for both the producer and the consumer, we have to use --bootstrap-server instead of --broker-list, since the latter is deprecated.

You can check this in the Kafka console scripts.

bin/kafka-console-producer.sh

[screenshot: usage output with --broker-list marked DEPRECATED]

As you can see, --broker-list is deprecated for kafka-console-producer.sh.

bin/kafka-console-consumer.sh

[screenshot: kafka-console-consumer.sh usage output]
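As a sketch of the 2.6-era usage, both tools now take --bootstrap-server (localhost:9092 and the topic name are assumptions, not from the original post):

```shell
# Produce: type messages, one per line, Ctrl+D to finish.
# --broker-list still works here but prints a deprecation warning.
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic demo

# Consume the same topic from the beginning:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic demo --from-beginning
```

These commands require a running broker, so treat them as a template rather than something to paste blindly.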

Upvotes: 5

Lakitu Lakitutu

Reputation: 278

I also hate reading the "wall of text"-like Kafka documentation :P
As far as I understand:

  • broker-list

    • a full list of servers; if any is missing, the producer may not work
    • related to producer commands
  • bootstrap-servers

    • one server is enough to discover all the others
    • related to consumer commands
    • Zookeeper involved

Sorry for being so... brief. Next time I will give more detail to be clearer. To explain my point of view I will use the Kafka 1.0.1 console scripts.

kafka-console-consumer.sh

The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option                                   Description
------                                   -----------
--blacklist <String: blacklist>          Blacklist of topics to exclude from
                                           consumption.
--bootstrap-server <String: server to    REQUIRED (unless old consumer is
  connect to>                              used): The server to connect to.
--consumer-property <String:             A mechanism to pass user-defined
  consumer_prop>                           properties in the form key=value to
                                           the consumer.
--consumer.config <String: config file>  Consumer config properties file. Note
                                           that [consumer-property] takes
                                           precedence over this config.
--csv-reporter-enabled                   If set, the CSV metrics reporter will
                                           be enabled
--delete-consumer-offsets                If specified, the consumer path in
                                           zookeeper is deleted when starting up
--enable-systest-events                  Log lifecycle events of the consumer
                                           in addition to logging consumed
                                           messages. (This is specific for
                                           system tests.)
--formatter <String: class>              The name of a class to use for
                                           formatting kafka messages for
                                           display. (default: kafka.tools.
                                           DefaultMessageFormatter)
--from-beginning                         If the consumer does not already have
                                           an established offset to consume
                                           from, start with the earliest
                                           message present in the log rather
                                           than the latest message.
--group <String: consumer group id>      The consumer group id of the consumer.
--isolation-level <String>               Set to read_committed in order to
                                           filter out transactional messages
                                           which are not committed. Set to
                                           read_uncommittedto read all
                                           messages. (default: read_uncommitted)
--key-deserializer <String:
  deserializer for key>
--max-messages <Integer: num_messages>   The maximum number of messages to
                                           consume before exiting. If not set,
                                           consumption is continual.
--metrics-dir <String: metrics           If csv-reporter-enable is set, and
  directory>                               this parameter isset, the csv
                                           metrics will be output here
--new-consumer                           Use the new consumer implementation.
                                           This is the default, so this option
                                           is deprecated and will be removed in
                                           a future release.
--offset <String: consume offset>        The offset id to consume from (a non-
                                           negative number), or 'earliest'
                                           which means from beginning, or
                                           'latest' which means from end
                                           (default: latest)
--partition <Integer: partition>         The partition to consume from.
                                           Consumption starts from the end of
                                           the partition unless '--offset' is
                                           specified.
--property <String: prop>                The properties to initialize the
                                           message formatter.
--skip-message-on-error                  If there is an error when processing a
                                           message, skip it instead of halt.
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is
                                           available for consumption for the
                                           specified interval.
--topic <String: topic>                  The topic id to consume on.
--value-deserializer <String:
  deserializer for values>
--whitelist <String: whitelist>          Whitelist of topics to include for
                                           consumption.
--zookeeper <String: urls>               REQUIRED (only when using old
                                           consumer): The connection string for
                                           the zookeeper connection in the form
                                           host:port. Multiple URLS can be
                                           given to allow fail-over.

kafka-console-producer.sh
Read data from standard input and publish it to Kafka.
Option                                   Description
------                                   -----------
--batch-size <Integer: size>             Number of messages to send in a single
                                           batch if they are not being sent
                                           synchronously. (default: 200)
--broker-list <String: broker-list>      REQUIRED: The broker list string in
                                           the form HOST1:PORT1,HOST2:PORT2.
--compression-codec [String:             The compression codec: either 'none',
  compression-codec]                       'gzip', 'snappy', or 'lz4'.If
                                           specified without value, then it
                                           defaults to 'gzip'
--key-serializer <String:                The class name of the message encoder
  encoder_class>                           implementation to use for
                                           serializing keys. (default: kafka.
                                           serializer.DefaultEncoder)
--line-reader <String: reader_class>     The class name of the class to use for
                                           reading lines from standard in. By
                                           default each line is read as a
                                           separate message. (default: kafka.
                                           tools.
                                           ConsoleProducer$LineMessageReader)
--max-block-ms <Long: max block on       The max time that the producer will
  send>                                    block for during a send request
                                           (default: 600

As you can see, the bootstrap-server parameter occurs only for the consumer, while broker-list appears only in the producer's parameter list.

Moreover:

kafka-console-consumer.sh --zookeeper localost:2181 --topic bets
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
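The new-consumer equivalent of the command above would point at a Kafka broker instead of Zookeeper (localhost:9092 is an assumed broker address, not from the original output):

```shell
# Old consumer (deprecated in 1.0.1): connects through Zookeeper
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic bets

# New consumer: connects directly to a Kafka broker
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic bets
```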

So, as cricket-007 noticed, bootstrap-server and zookeeper look to have a similar purpose. The difference is that --zookeeper should point to Zookeeper nodes, while --bootstrap-server points to Kafka nodes and ports.

To sum up: bootstrap-server is used as a consumer parameter and broker-list as a producer parameter.

Upvotes: 17
