I'm a software tester, and I like backend testing.
If something breaks in the system, logs always come to the rescue. Thanks to them, we can clarify puzzling moments in critical situations and put together a detailed analysis of what happened. Many companies use a tool like Kibana to collect and analyze logs. The problem, however, is that this tool is rarely, if ever, actually used.
Why does this happen? The truth is that people are used to analyzing logs directly on the instance, reading them from the source. Indeed, this is often the best way, but many people don't like to change their habits. Not everyone is ready to step out of their comfort zone.
There are also situations when you cannot go to the instance directly. For example, you need to analyze an incident in production, and you have no access to that environment for well-known reasons.
Another situation is when a service runs on Windows and three or more employees need access at the same time. As we all know, Windows allows no more than two simultaneous sessions over RDP (Remote Desktop Protocol).

If you want simultaneous RDP access for a larger number of employees, you have to buy a license, and not every company is ready to do that.
Kibana is part of the ELK stack, which also includes Elasticsearch and Logstash. Kibana is used to visualize data in various formats and to search and analyze logs quickly. Today, we will focus on how to navigate this tool comfortably and on its hidden possibilities.
To begin with, you can enter an operation number in the "Search" field and select the period you want to search within. As a result, you will see a timeline with the number of hits for that operation. A time chart makes it very convenient to track trends and spot sudden bursts of errors.
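The timeline Kibana draws is essentially a histogram of matching documents per time bucket. As a minimal sketch of the same idea (the log format and the operation number here are hypothetical, not from any real system):

```python
from collections import Counter
from datetime import datetime

# Hypothetical log lines: "<ISO timestamp> <level> operation=<id> <message>"
logs = [
    "2023-05-01T10:01:12 ERROR operation=4711 payment failed",
    "2023-05-01T10:03:45 INFO  operation=4711 retry scheduled",
    "2023-05-01T10:04:02 ERROR operation=4711 payment failed",
    "2023-05-01T11:15:30 INFO  operation=9999 unrelated entry",
]

def hits_per_hour(lines, operation_id):
    """Count matching lines per hour bucket, like Kibana's timeline."""
    buckets = Counter()
    for line in lines:
        if f"operation={operation_id}" in line:
            ts = datetime.fromisoformat(line.split()[0])
            buckets[ts.strftime("%Y-%m-%d %H:00")] += 1
    return dict(buckets)

print(hits_per_hour(logs, 4711))  # {'2023-05-01 10:00': 3}
```

A sudden spike in one bucket is exactly the kind of error burst the chart in Kibana makes visible at a glance.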
You can also build complex search queries. Kibana has its own query language called KQL (Kibana Query Language). With it, you can create multi-level queries that filter out exactly the information you want.
For example, you can filter by a specific test environment by its name. If you need to find an exact phrase, double quotes help: enclose two or more words in double quotes, and the whole phrase will be searched.
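The difference between quoted and unquoted queries can be sketched in a few lines of Python (this is a crude illustration of the idea, not how KQL is actually implemented):

```python
def matches(line, query):
    """Sketch of the quoting rule: a double-quoted query must appear
    as a whole phrase; an unquoted query matches if every word occurs
    somewhere in the line, in any order."""
    if query.startswith('"') and query.endswith('"'):
        return query.strip('"') in line
    return all(word in line for word in query.split())

line = "connection to payment gateway timed out"
print(matches(line, "timed gateway"))        # words occur separately -> True
print(matches(line, '"gateway timed out"'))  # exact phrase present  -> True
print(matches(line, '"timed gateway"'))      # phrase not present    -> False
```

In real KQL the same distinction applies: unquoted terms match independently, while a quoted string is searched as one phrase.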
You can find more information about composing complex queries on the official Elastic website.
For those who like studying logs in chronological order, Kibana has this option as well. When we find a specific error in our log, we usually want to see what happened before and after it. Click on the discovered log entry and then click "View surrounding documents".
You'll get a list of log entries with your selected document highlighted in grey. Several lines that were written before the error are loaded below it, and the lines that came after the error appear above it.
In Kibana, you read logs from bottom to top, unlike logs on the instance, which are read the other way around. By default, five lines are loaded before the error and five after, but you can change this value and click the "Load" button; the specified number of log lines will then be loaded above the error. You can do the same with the preceding lines.
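The surrounding-documents view is conceptually the same as pulling a configurable number of context lines around a hit, similar to grep's -B/-A options. A minimal sketch (the log lines are placeholders):

```python
def surrounding(lines, index, before=5, after=5):
    """Return context around lines[index], mimicking Kibana's
    'View surrounding documents' (default: five before, five after)."""
    start = max(0, index - before)  # don't run past the start of the log
    preceding = lines[start:index]
    following = lines[index + 1:index + 1 + after]
    return preceding, lines[index], following

logs = [f"line {i}" for i in range(20)]
prev, hit, nxt = surrounding(logs, 10, before=2, after=2)
print(prev)  # ['line 8', 'line 9']
print(hit)   # 'line 10'
print(nxt)   # ['line 11', 'line 12']
```

Raising `before` or `after` corresponds to changing the number in Kibana and pressing "Load" to fetch more context.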
The default value of five can be changed in the system settings under Stack Management / Advanced Settings.
In this article, I didn't intend to teach you how to use Kibana. You can find many detailed talks and videos about it on the Internet, and the official website also has plenty of information. I wanted to show you how conveniently and effortlessly you can start using a different way of reading logs.
It solves several problems at once, from stepping out of the comfort zone to running the most complex queries and getting results quickly.