Fetching Large Logs from Loki in Kubernetes by @dmitriikhalezhin
416 reads


by Dmitrii Khalezhin, October 6th, 2024

Too Long; Didn't Read

When idle, the application writes about 60 log lines per minute; when someone interacts with it, it can write 2000-5000 lines per minute. Our project setup did not include a configured log export, and our primary log-viewing tool was Grafana. Accessing logs directly from the Kubernetes pod was not an option due to storage limitations within the pod itself.


Recently in my practice, I faced a significant challenge: extracting a full day of application logs from Loki in a Kubernetes environment. When idle, the application writes about 60 lines per minute, and when someone interacts with it, it can write 2000-5000 lines of logs per minute, so in total I needed to retrieve more than 300,000 lines of logs. The setup did not include a configured log export, and the primary log-viewing tool was Grafana, which imposes a 5000-line limit on log retrieval. Increasing this limit was not feasible: it would significantly strain our resources, and it was unnecessary for a one-time task. Additionally, accessing logs directly from the Kubernetes pod was not an option due to storage limitations within the pod itself.


So, I needed to download the logs directly from Loki without changing any configuration.


Preparation

Tools used

  • Grafana (Explore)
  • LogCli
  • kubectl

Additional steps

To ensure that the query you will use to search for logs is correct, follow these steps:


  1. Navigate to Grafana explore:
    • Go to Grafana > Explore
  2. Set the required label:
    • Apply the necessary label to filter logs by the service.
  3. Filter by date:
    • Add a line filter operation to show only lines containing the desired date.


Example query:

{instance="our-service"} |= `2024-07-12`
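In LogQL, `|=` is a case-sensitive substring filter: only lines containing the literal text pass through. Conceptually, it behaves like a fixed-string grep over each log line (the sample lines below are made up for illustration):

```shell
# LogQL's |= keeps lines containing the literal string, much like grep -F.
# The two printf lines stand in for real log output.
printf '2024-07-12 10:00 request handled\n2024-07-13 09:00 other day\n' \
  | grep -F '2024-07-12'
# -> 2024-07-12 10:00 request handled
```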

Execution

  1. Install LogCli (Loki's command-line query client):

  2. Set Loki address:

    • Configure the Loki address for LogCli using an environment variable:
    export LOKI_ADDR=http://localhost:8000
    
  3. Port forwarding:

    • Forward local ports to the Loki pod to allow local access:
    kubectl --namespace loki port-forward svc/loki-stack 8000:3100
    
  4. Extract logs:

    • Use LogCli to query and save the logs to a file:
    logcli query '{instance="our-service"} |= `2024-07-12`' --limit=5000000 --since=72h -o raw > our-service-2024-07-12.log
    
    • In this command:
      • --limit is set to a deliberately high value so that no lines are dropped.

      • --since is set to 72 hours to cover a sufficient time range.
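If a single query ever proves too heavy, the same extraction can be split into hourly windows using logcli's --from and --to flags instead of one large --since. A minimal sketch, assuming GNU date and reusing the service name and date from the example above; it is a dry run by default (it prints the commands), and setting LOGCLI=logcli makes it actually fetch:

```shell
#!/usr/bin/env bash
# Sketch: fetch one day of logs in hourly chunks so no single query
# has to return hundreds of thousands of lines at once.
# Assumes GNU date. Dry-run by default: set LOGCLI=logcli to really fetch.
set -euo pipefail

SERVICE="our-service"
DAY="2024-07-12"
OUT="${SERVICE}-${DAY}.log"
LOGCLI=${LOGCLI:-"echo logcli"}             # print commands unless overridden

: > "$OUT"                                  # start from an empty output file
start=$(date -u -d "${DAY} 00:00:00" +%s)   # midnight UTC, epoch seconds
for hour in $(seq 0 23); do
  from=$(date -u -d "@$((start + hour * 3600))" +%Y-%m-%dT%H:%M:%SZ)
  to=$(date -u -d "@$((start + (hour + 1) * 3600))" +%Y-%m-%dT%H:%M:%SZ)
  $LOGCLI query "{instance=\"${SERVICE}\"} |= \`${DAY}\`" \
    --from="$from" --to="$to" --limit=5000000 -o raw >> "$OUT"
done
```

Each window covers one hour, so even at the peak rate of 5000 lines per minute a chunk stays around 300,000 lines, and a failed chunk can be re-fetched on its own.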



Conclusion

This entire process took approximately 10 minutes, resulting in a file with the complete application logs for the specified date. If needed, this process can be further optimized or automated.