Monitoring data is essential to a monitoring system for subsequent analysis, graphing, and alerting. How does Kafka Eagle deal with the problem of data collection?
For Kafka, we can collect the following data:
That's quite a lot of data, and it comes from different interfaces, such as JMX, the Kafka API, internal topics, etc.
Let's focus on the three representative categories above.
Collection in this category is universal: we can gather the data through the Kafka broker's JMX interface, the Kafka API, etc.
It should be noted that if data collection fails, you need to check whether the network is restricted, for example by a firewall policy.
We can test whether the corresponding port is available on the server where Kafka Eagle is deployed. The command is as follows:
```shell
# Test Kafka Broker Server
telnet kafka01 9092

# Test Kafka Broker JMX
telnet kafka01 9999

# Test Zookeeper Server
telnet zk01 2181
```
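If `telnet` is not installed on the server, the same reachability check can be done with a few lines of Python. This is a sketch, not part of Kafka Eagle; the hostnames and ports are the same examples used above.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example hosts/ports mirroring the telnet commands above:
for host, port, label in [
    ("kafka01", 9092, "Kafka Broker Server"),
    ("kafka01", 9999, "Kafka Broker JMX"),
    ("zk01", 2181, "Zookeeper Server"),
]:
    status = "reachable" if port_open(host, port) else "unreachable"
    print(f"{label} ({host}:{port}): {status}")
```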
For an introduction to JMX, please visit here.
The Kafka broker JMX port can be set to any available port on the server, and Kafka Eagle will identify it automatically.
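One common way to expose that JMX port is to set the `JMX_PORT` environment variable before starting the broker, which Kafka's startup scripts pick up. The port value and paths below are examples only:

```shell
# Enable JMX on the broker before starting it (9999 is an example port;
# Kafka's kafka-run-class.sh reads JMX_PORT and configures the JVM).
export JMX_PORT=9999
bin/kafka-server-start.sh -daemon config/server.properties
```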
Service indicators such as QPS, TPS, and RT reflect the performance of the Kafka broker services. These indicators are collected by different timers. After collection finishes, the collected data (from JMX or the API) is stored in a database (such as MySQL or SQLite). Finally, the data is rendered in the dashboard as a friendly graph on the web page.
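The timer-plus-database pattern described above can be sketched as follows. This is an illustrative model only, not Kafka Eagle's actual (Java) implementation; the table schema, metric name, and `fetch_qps` stand-in for a JMX read are all assumptions.

```python
import sqlite3
import threading
import time

def fetch_qps() -> float:
    # Placeholder for a real JMX MBean or Kafka API read.
    return 42.0

class MetricCollector:
    """Polls a metric source on a fixed interval and persists samples to SQLite."""

    def __init__(self, db_path: str, interval: float = 5.0):
        self.db = sqlite3.connect(db_path, check_same_thread=False)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS metrics (ts REAL, name TEXT, value REAL)"
        )
        self.interval = interval
        self._stop = threading.Event()

    def collect_once(self) -> None:
        # One timer tick: read the metric and store a timestamped sample.
        self.db.execute(
            "INSERT INTO metrics VALUES (?, ?, ?)",
            (time.time(), "qps", fetch_qps()),
        )
        self.db.commit()

    def run(self) -> None:
        # Timer loop: one sample per interval until stopped.
        while not self._stop.wait(self.interval):
            self.collect_once()

    def stop(self) -> None:
        self._stop.set()
```

A dashboard would then query the `metrics` table by name and time range to render the graphs.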
If the graphs do not render, you can check whether static resources are blocked:
Open the browser's developer tools, switch to the Network tab, and check whether any static resources fail to load.
Switch to the Console tab, refresh the browser, and observe whether the console reports any errors. If there is an exception, you can search for the error message to find a solution.
Indicators such as consumer groups, consumers, producers, and topics reflect the health of client programs. We can analyze whether our application is normal by observing these indicators on the Kafka Eagle web page.
We don't need to configure anything for these indicators. After the client program starts, Kafka Eagle automatically identifies, collects, and stores the data.
If the Kafka Eagle log throws an exception while collecting data, check whether Kafka Eagle's configuration file is set correctly.
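To cross-check what Kafka Eagle displays, you can list consumer groups directly with Kafka's own CLI (hostnames, paths, and the group name are examples; the `--bootstrap-server` form applies to Kafka 0.10.x and later):

```shell
# List the consumer groups known to the broker
bin/kafka-consumer-groups.sh --bootstrap-server kafka01:9092 --list

# Describe one group's offsets and lag per partition
bin/kafka-consumer-groups.sh --bootstrap-server kafka01:9092 \
  --describe --group my-group
```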
If your Kafka version is older than 0.10.x (0.8.x, 0.9.x, etc.), set the following:

```properties
# Set Kafka Offsets Storage
cluster1.kafka.eagle.offset.storage=zk
```
If your Kafka version is 0.10.x or later (1.x, 2.x, etc.), set the following:

```properties
# Set Kafka Offsets Storage
cluster1.kafka.eagle.offset.storage=kafka
```
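If you are unsure which branch applies, one simple way to find the broker's version is to inspect the Kafka jar name in its libs directory (the path and jar name below are examples):

```shell
# The jar name embeds the Scala and Kafka versions,
# e.g. kafka_2.12-2.4.0.jar means Kafka 2.4.0, so offset.storage=kafka applies.
ls $KAFKA_HOME/libs | grep '^kafka_'
```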