Application Logging on OKD with Elasticsearch, FluentD, and Kibana

Pod processes running in Kubernetes frequently produce logs. To manage this log data effectively and to ensure that no log data is lost when a pod terminates, deploy a log aggregation tool on your OKD cluster. Log aggregation tools help you persist, search, and visualize the log data that is gathered from the pods across the OKD cluster. Several log aggregation tools are on the market. This guide describes the process of deploying one of them, EFK (Elasticsearch, FluentD, and Kibana), using the ansible-playbook script provided by OKD.

Use the ansible-playbook that resides in the openshift-ansible repository to install EFK stacks on an OKD cluster. Use this preconfigured EFK stack to aggregate all container logs. After a successful installation, the EFK deployments reside inside the openshift-logging namespace of the OKD cluster.

Using the steps below, you will set up two separate EFK stacks. One stack is for logs from Kubernetes and OKD components. The other EFK stack is for user application logs.

There are several advantages to having an EFK stack for user applications.

  • It is much easier to find application logs in Kibana, as logs are not intermixed with verbose Kubernetes logs.

  • It provides more flexibility for memory allocation, because users can independently assign system memory to each of the EFK deployments.

Install openshift-logging

Installing EFK on OKD requires an inventory host file and the openshift-ansible playbook for the logging installation. After you create the inventory file, add the following Ansible variables to the [OSEv3:vars] section of the inventory file.

openshift_logging_use_ops=True
openshift_logging_es_ops_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_ops_memory_limit=5G
openshift_logging_es_memory_limit=3G
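
For reference, a complete inventory file might look like the following sketch. The host names, the node group names, and the ansible_ssh_user value are placeholders that you must adapt to your own cluster.

[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_logging_use_ops=True
openshift_logging_es_ops_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_ops_memory_limit=5G
openshift_logging_es_memory_limit=3G

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_group_name='node-config-master'
infra.example.com openshift_node_group_name='node-config-infra'
compute.example.com openshift_node_group_name='node-config-compute'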

There are many other Ansible variables provided by OKD for advanced fine-tuning of the EFK stack. Detailed information about the installation and configuration can be found in the OKD documentation under Aggregating Container Logs.

Let’s examine each of the variables in detail.

  • Setting openshift_logging_use_ops=True instructs Ansible to install two EFKs with a dedicated ops deployment.

  • openshift_logging_es_nodeselector and openshift_logging_es_ops_nodeselector are two variables required by the openshift-logging ansible-playbook to install Elasticsearch. You usually set these variables to the infra nodes.

  • openshift_logging_es_memory_limit and openshift_logging_es_ops_memory_limit are self-explanatory and can be set according to your preference. For stable operation, allocate at least 2 GB of memory for each of the Elasticsearch deployments. If memory is not specified explicitly, OKD allocates 16 GB of memory to each Elasticsearch deployment by default. It is highly recommended to install openshift-logging on systems with at least 32 GB of RAM when openshift_logging_use_ops is set to true. A sketch that verifies the applied limits follows this list.
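
The following sketch verifies the applied memory limits after the installation. The component=es label selector is an assumption based on the labels that the logging playbook typically applies to the Elasticsearch pods; check with oc get pods --show-labels -n openshift-logging if it does not match.

# Print the memory limits of all containers in the main Elasticsearch pods
oc get pods -n openshift-logging -l component=es \
  -o jsonpath='{.items[*].spec.containers[*].resources.limits.memory}'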

After all the variables are set in the inventory file, run the ansible-playbook command to install the EFK stack onto the current OKD cluster.

ansible-playbook -i <inventory_file> openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=true
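
The command above assumes that the openshift-ansible repository is already cloned next to the inventory file. If it is not, clone it first and check out the branch that matches your OKD release; the release-3.11 branch below is only an example.

git clone https://github.com/openshift/openshift-ansible.git
cd openshift-ansible
# Check out the branch that corresponds to your OKD version
git checkout release-3.11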

The installation process can take a few minutes to complete. After the installation completes without errors, you can see pods like the following running in the openshift-logging namespace.

[root@rhel7-okd ~]# oc get pods -n openshift-logging

NAME                                          READY     STATUS      RESTARTS   AGE
logging-curator-1565163000-9fvpf              0/1       Completed   0          20h
logging-curator-ops-1565163000-5l5tx          0/1       Completed   0          20h
logging-es-data-master-iay9qoim-4-cbtjg       2/2       Running     0          3d
logging-es-ops-data-master-hsmsi5l8-3-vlrgs   2/2       Running     0          3d
logging-fluentd-vssj2                         1/1       Running     1          3d
logging-kibana-2-tplkv                        2/2       Running     6          4d
logging-kibana-ops-1-bgl8k                    2/2       Running     2          3d

You can see that Elasticsearch, Kibana, and Curator each have two sets of pods: a main set and a secondary set with the -ops postfix. Fluentd is the exception because the split between application logs and OKD operations logs occurs inside the single Fluentd instance. All of the node system logs and the logs from the default, openshift, and openshift-infra projects are considered operations (or ops) logs and are aggregated to the ops Elasticsearch server. The logs from any other namespace are aggregated to the main Elasticsearch server.
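
You can list the two Elasticsearch deployments separately to see this split in practice. The component=es and component=es-ops label selectors are assumptions based on the labels that the logging playbook typically applies; verify them with oc get pods --show-labels if they do not match.

# Main Elasticsearch pods (application logs)
oc get pods -n openshift-logging -l component=es
# Ops Elasticsearch pods (node system logs and the default, openshift, and openshift-infra projects)
oc get pods -n openshift-logging -l component=es-ops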

The openshift-logging ansible-playbook also exposes two routes for external access to the Kibana and ops Kibana web consoles.

[root@rhel7-okd ~]# oc get routes -n openshift-logging

NAME                 HOST/PORT                             PATH      SERVICES             PORT      TERMINATION          WILDCARD
logging-kibana       kibana.apps.9.37.135.153.nip.io                 logging-kibana       <all>     reencrypt/Redirect   None
logging-kibana-ops   kibana-ops.apps.9.37.135.153.nip.io             logging-kibana-ops   <all>     reencrypt/Redirect   None
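
You can also read the route host names directly, which is handy for scripting:

oc get route logging-kibana -n openshift-logging -o jsonpath='{.spec.host}{"\n"}'
oc get route logging-kibana-ops -n openshift-logging -o jsonpath='{.spec.host}{"\n"}'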

If you open the logging-kibana-ops URL, all the operation logs generated by OKD and Kubernetes are visible on Kibana’s Discover page.

Figure 1: Kibana ops page with operation log entries

View application logs on Kibana

Before you use logging-kibana for application logs, make sure that the application is deployed in a namespace that is not one of the ops namespaces. The ops namespaces are default, openshift, and openshift-infra.

If the application server provides the option, output application logs in JSON format. This lets you take full advantage of Kibana’s dashboard functions, because Kibana can then process each individual field of the JSON object to create customized visualizations for that field.
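
For illustration, a JSON-formatted log record might look like the following line. The field names here are only an example, not a fixed schema; they depend on the application server and its logging configuration.

{"timestamp":"2019-08-07T14:02:31.123Z","level":"INFO","logger":"com.example.app.OrderService","message":"Order 1042 accepted"}

Because each field arrives as its own JSON key, Kibana can index timestamp, level, logger, and message separately and build visualizations such as a pie chart of log levels or a table of the busiest loggers.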

Open the Kibana web console by using the route URL https://kibana.apps.9.37.135.153.nip.io. Log in with your OKD user name and password; the browser then redirects you to Kibana’s Discover page, where the newest logs of the selected index are streamed. Select the project.* index to view the application logs generated by the deployed application.

Figure 2: Kibana page with the application log entries

The project.* index contains only a set of default fields at the start, which does not include all of the fields from the deployed application’s JSON log object. Therefore, the index needs to be refreshed so that all the fields from the application’s log object are available to Kibana.

To refresh the index, click the Management option on the left pane.

Click Index Pattern, and find the project.* index. Then, click the refresh fields button on the right. After Kibana is updated with all the available fields in the project.* index, import the preconfigured dashboards to view the application logs.

Figure 3: Index refresh button on Kibana

To import the dashboard and its associated objects, navigate back to the Management page and click Saved Objects. Click Import and select the dashboard file. When prompted, click the Yes, overwrite all option.

Head back to the Dashboard page and enjoy navigating logs on the newly imported dashboard.

Figure 4: Kibana dashboard for Open Liberty application logs

Reinstalling and uninstalling openshift-logging

If you need to change the installed EFK stack, rerun the ansible-playbook installation command with updated Ansible variable values in the inventory file. If the aggregated container logging stack is no longer needed on the current cluster, you can use the same ansible-playbook command to uninstall the openshift-logging feature by setting the openshift_logging_install_logging variable to False.
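
For example, assuming the same inventory file that was used for the installation, the uninstall command looks like this:

ansible-playbook -i <inventory_file> openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False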
