What is better: Logstash agents on the app server or on the remote Kibana server?


I have 12 log4j log files that I want indexed in Logstash/Kibana.

Is it best to change each log4j.xml file to append to its own Logstash agent on the local host (a 1-to-1 mapping), which in turn pushes to Elasticsearch on the remote host where Kibana is running?

So, 12 Logstash agents on the app server.

OR

Is it best to change each log4j.xml file to append to a matching Logstash agent on the remote host (a 1-to-1 mapping), which pushes into Elasticsearch (running on the same host as Kibana)?

So, 12 Logstash agents on the remote server that is running Elasticsearch and Kibana?

If I take option 2, I need 12 ports opened on the Kibana server.

If I take option 1, I only need one port opened: 9200 (the Elasticsearch port).

What is the suggested approach I should take? Have I missed a trick with Logstash?

Note: it's 12 log files from one QA server; if I include the other 3 QA servers, I will need to open 12 ports for each of them. Not something I'm keen to do.

I'm using the log4j appender since the servers are Windows and I've yet to get the Logstash agent to read the application logs directly (file locking issues).
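
For reference, each log4j.xml points at a Logstash agent with something along these lines (a minimal sketch using the standard log4j 1.x SocketAppender; the host and port values are placeholders, not my actual config):

    <!-- Ships events to a Logstash agent over TCP; RemoteHost/Port are placeholders -->
    <appender name="LOGSTASH" class="org.apache.log4j.net.SocketAppender">
      <param name="RemoteHost" value="localhost"/>
      <param name="Port" value="4560"/>
      <param name="ReconnectionDelay" value="10000"/>
    </appender>

    <root>
      <priority value="info"/>
      <appender-ref ref="LOGSTASH"/>
    </root>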

  • Logstash: 1.5.0
  • Elasticsearch: 1.6.0
  • Kibana: 4.1.0-windows

1 answer

Alain Collins:

Logstash can read multiple files or receive from multiple sources with just one instance, so you shouldn't need to set up 12 Logstash agents or do anything that complicated.
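
For example, one agent on the app server could accept all 12 log4j SocketAppenders on a single port and forward everything to the remote Elasticsearch. A rough sketch, assuming the log4j input plugin for Logstash 1.5 (hostname and ports are placeholders):

    # Single Logstash agent: every log4j.xml on this box appends to port 4560.
    input {
      log4j {
        mode => "server"
        host => "0.0.0.0"
        port => 4560        # one port shared by all 12 appenders
        type => "log4j"
      }
    }

    output {
      elasticsearch {
        host     => "eskibana01"   # placeholder for the remote Elasticsearch/Kibana box
        port     => 9200
        protocol => "http"         # only 9200 needs to be open on that box
      }
    }

That keeps you at one open port per QA server on the app side, and only 9200 on the Elasticsearch/Kibana side.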

The main issue with your question is: what happens when Logstash or Elasticsearch is down? Will log4j queue the messages until the services come back up? If not, you'll lose messages.

While it's old school, this is one reason I like writing to local log files: they act as a free distributed cache when Logstash is down. You can also set up a broker (Redis, RabbitMQ), but then that's another service to support.
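
If you do eventually get past the Windows file-locking problem, the same single agent can tail the files directly, and the files themselves become that buffer: Logstash remembers its position and catches up once it or Elasticsearch is back. Roughly, with a placeholder path glob:

    # One agent tailing all 12 local log4j files; the files buffer events while
    # Logstash or Elasticsearch is unavailable.
    input {
      file {
        path => "C:/apps/qa1/logs/*.log"   # placeholder glob for the 12 files
        start_position => "beginning"
      }
    }

    output {
      elasticsearch {
        host     => "eskibana01"   # placeholder for the remote Elasticsearch/Kibana box
        port     => 9200
        protocol => "http"
      }
    }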