
Elasticsearch

Installation from RPM

[root@NLRTM1-S0503 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@NLRTM1-S0503 ~]# vi /etc/yum.repos.d/elastic.repo

All three packages (Elasticsearch, Logstash, Kibana) are served from the same 6.x baseurl, so a single repo stanza is enough:

[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[root@NLRTM1-S0503 ~]# yum install elasticsearch
[root@NLRTM1-S0503 ~]# yum install kibana
[root@NLRTM1-S0503 ~]# yum install logstash

User preparation

Logstash


[root@NLRTM1-S0503 logstash]# ./bin/logstash -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-12-06 00:08:16.015 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2018-12-06 00:08:16.024 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
ERROR: Failed to read pipelines yaml file. Location: /usr/share/logstash/config/pipelines.yml

Remedy: point Logstash at its settings directory explicitly:

[support@NLRTM1-S0503 logstash]$ ./bin/logstash --path.settings /etc/logstash -t

Start logstash as a service

Check logstash service user and correct permissions

[root@NLRTM1-S0503 logstash]# vi /etc/systemd/system/logstash.service 
[root@NLRTM1-S0503 etc]# chown -R logstash:logstash /etc/logstash
[root@NLRTM1-S0503 etc]# chown -R logstash:logstash /usr/share/logstash

# chmod -R g+rwx /usr/share/logstash/
# chown -R logstash:logstash /var/log/logstash
[root@NLRTM1-S0503 logstash]# /bin/systemctl daemon-reload
[root@NLRTM1-S0503 logstash]# systemctl enable logstash.service
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@NLRTM1-S0503 logstash]# systemctl start logstash.service

Setting up ELK components as services

[root@cilacap etc]# chown -R logstash:logstash logstash
[root@cilacap etc]# chown -R elasticsearch:elasticsearch elasticsearch
[root@cilacap etc]# chown -R kibana:kibana kibana

[root@cilacap etc]# usermod -aG logstash elastic
[root@cilacap etc]# usermod -aG elasticsearch elastic
[root@cilacap etc]# usermod -aG kibana elastic
[root@cilacap etc]# groups elastic
elastic : elastic wheel logstash elasticsearch kibana

[root@cilacap etc]# sudo /bin/systemctl daemon-reload
[root@cilacap etc]# sudo /bin/systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@cilacap etc]# systemctl enable logstash.service
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@cilacap etc]# systemctl enable kibana.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[root@cilacap etc]#


# cp /etc/systemd/system/logstash-ecn4.service /etc/systemd/system/logstash-apex.service
# vi /etc/systemd/system/logstash-apex.service
# cd /etc/logstash/
# cp -R ecn4 apex
# cd apex
# vi logstash.yml 
[root@NLRTM1-S0503 logstash]# vi /etc/logstash/apex/pipelines.yml
[root@NLRTM1-S0503 logstash]# chown -R logstash:logstash apex
[root@NLRTM1-S0503 logstash]# systemctl enable logstash-apex.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash-apex.service to /etc/systemd/system/logstash-apex.service. 
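The contents of the copied unit file are not shown above; a second-instance unit typically differs from the stock one only in the settings path it passes to Logstash. A minimal sketch, assuming the RPM layout used above (ExecStart options and the unit body are illustrative, not the author's actual file):

```ini
# /etc/systemd/system/logstash-apex.service (sketch)
[Unit]
Description=Logstash apex instance

[Service]
Type=simple
User=logstash
Group=logstash
ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash/apex
Restart=always

[Install]
WantedBy=multi-user.target
```

Each instance needs its own path.data in its logstash.yml, since two Logstash processes cannot share a data directory.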

Filebeat module APACHE2

[root@one filebeat]# ./filebeat modules enable apache2

[root@one filebeat]# nohup ./filebeat >/dev/null 2>&1 &

[root@one filebeat]# tail logs/filebeat
2018-12-08T21:06:44.246+0100	ERROR	pipeline/output.go:100	Failed to connect to backoff(elasticsearch(http://www.atikin.nl:49200)): Connection marked as failed because the onConnect callback failed: Error loading pipeline for fileset apache2/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
    sudo bin/elasticsearch-plugin install ingest-user-agent
    sudo bin/elasticsearch-plugin install ingest-geoip

Install Elasticsearch plugins

[root@cilacap ~]# cd /usr/share/elasticsearch/
[root@cilacap elasticsearch]# ./bin/elasticsearch-plugin install ingest-user-agent
-> Downloading ingest-user-agent from elastic
[=================================================] 100%   
-> Installed ingest-user-agent
[root@cilacap elasticsearch]# ./bin/elasticsearch-plugin install ingest-geoip
-> Downloading ingest-geoip from elastic
[=================================================] 100%   
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission accessDeclaredMembers
* java.lang.reflect.ReflectPermission suppressAccessChecks
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y
-> Installed ingest-geoip

[root@cilacap elasticsearch]# systemctl restart elasticsearch

Apache module configuration

[root@one filebeat]# vi /opt/elastic/filebeat/modules.d/apache2.yml
- module: apache2
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/httpd/paulowna_site.com-access_log*","/var/log/httpd/paulowna_shop.com-access_log*" ]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/httpd/paulowna_site.com_error_log", "/var/log/httpd/paulowna_shop.com-error_log"]

Run filebeat

[root@one filebeat]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat -d "publish" 

Run filebeat in background

Note the redirection order: >/dev/null 2>&1 discards both streams, while 2>&1 >/dev/null would leave stderr attached to the terminal.

nohup /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat -d "publish" >/dev/null 2>&1 &
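Instead of nohup, a systemd unit keeps Filebeat running across reboots. A sketch using the same paths as the command above (the unit name and body are assumptions; an RPM-installed Filebeat already ships its own unit, in which case systemctl enable filebeat is simpler):

```ini
# /etc/systemd/system/filebeat.service (sketch)
[Unit]
Description=Filebeat log shipper
After=network.target

[Service]
ExecStart=/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
Restart=always

[Install]
WantedBy=multi-user.target
```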

Journalctl

[root@NLRTM1-S0503 es00]# journalctl -u elasticsearch.service

Webalizer on Red Hat and CentOS

https://blog.100tb.com/analyze-your-website-statistics-with-webalizer-on-red-hat-and-centos

Analyze Your Website Statistics with Webalizer on Red Hat and CentOS

When you run a webserver, your log files rapidly fill with information about the visitors to your site. Webalizer can help you make sense of it.

Your webserver’s log files can be a mine of useful information with regards to the users visiting your website. Unfortunately, reading this information from the logs isn’t the simplest of tasks. To make this resource more useful there are tools available that look through the log files and generate statistics from them. Webalizer is one of these tools: it runs at regular intervals and creates statistics from your website logs as well as charts of usage. It is free and open source, being licensed under the GNU GPL.

How do I install Webalizer?

***For information on installing Webalizer on Debian & Ubuntu, read yesterday's post on the 100TB blog

Red Hat & CentOS

Installing Webalizer on Red Hat and CentOS is straightforward, as it is in the base repositories. The install is as simple as the following command:

yum install webalizer

If you are using Apache in its default configuration, then your task of installing Webalizer is complete. Webalizer comes pre-configured to use Apache's default log file as its data source and to output its information to /var/www/usage, with Apache configured to serve that directory as a subdirectory of the main website under /usage. To test this, simply run the following command:

webalizer

If all has worked correctly, Webalizer will have placed the various files it creates in the /var/www/usage directory. If so, you are done, and the default cron task created by the installation will keep the statistics up to date.

 


 

Apache with Virtualhost

If, on the other hand, you are using Apache with VirtualHosts, then you have some work ahead of you. The first thing to do is create a configuration file for each of your VirtualHosts. For this I'd suggest making a directory for these files, then making a copy of the webalizer.conf file there for each VirtualHost domain you are running:

mkdir /etc/webalizer

cp /etc/webalizer.conf /etc/webalizer/webalizer.yourdomain.com.conf

The above commands create the Webalizer config directory and then add a config file. Note that you need to change yourdomain.com to the domain you are using Webalizer on. The next thing to do is edit the new configuration file to fit your setup. For the following example we will use a server configured to store log files in /var/log/httpd/yourdomain.com_access.log and the website files in the /var/www/yourdomain.com directory. The configuration file will need editing – I'm going to use nano in this example, but other text editors are available.

nano /etc/webalizer/webalizer.yourdomain.com.conf

The main lines to change are the LogFile line and the OutputDir line, so find those and edit them to match your configuration.

LogFile /var/log/httpd/yourdomain.com_access.log

OutputDir /var/www/yourdomain.com/webalizer

You can now save and exit this file. To avoid having to create a lot of extra configuration files for Apache, I’m using a subdirectory within the website directory for the Webalizer output. This means that it would be accessible from the web as below:

http://yourdomain.com/webalizer
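The statistics will be publicly visible at that URL. If that is not desirable, a basic-auth guard can be added to the VirtualHost configuration. A sketch (the htpasswd file path is hypothetical; create it with the htpasswd utility):

```apache
<Directory "/var/www/yourdomain.com/webalizer">
    AuthType Basic
    AuthName "Webalizer statistics"
    AuthUserFile /etc/httpd/conf/webalizer.htpasswd
    Require valid-user
</Directory>
```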

The next step is to populate the directory for which we’ll need to run Webalizer:

webalizer -c /etc/webalizer/webalizer.yourdomain.com.conf

The -c flag tells Webalizer to use the specified configuration file rather than its default, so it should process the new configuration file and create the correct output. If this has worked properly, you should see the files in the directory you used for OutputDir.

 

Finalizing Webalizer

The last step is to create the cron task that generates the Webalizer output. This is where putting all the configuration in one directory comes in handy, as we can use a simple Bash script to process the configuration files. Edit the cron task that was created when Webalizer was installed:

nano /etc/cron.daily/00webalizer

Remove all the content of this and then paste in the following code:

#!/bin/bash

# Update website statistics for VirtualHosts using the /etc/webalizer directory

for i in /etc/webalizer/*.conf; do
  [ -f "$i" ] || continue
  /usr/bin/webalizer -c "$i" -Q
done
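To sanity-check the glob-and-skip pattern before relying on cron, it can be dry-run against a scratch directory, substituting echo for the real webalizer call (the directory and file names here are made up for the demonstration):

```shell
#!/bin/bash
# Dry run of the cron loop: echo instead of the real webalizer call
tmpdir=$(mktemp -d)
touch "$tmpdir/site-a.conf" "$tmpdir/site-b.conf"

ran=0
for i in "$tmpdir"/*.conf; do
  [ -f "$i" ] || continue        # skip if the glob matched nothing
  echo "would run: /usr/bin/webalizer -c $i -Q"
  ran=$((ran + 1))
done

echo "$ran configs processed"
rm -rf "$tmpdir"
```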

log4j2

Eclipse: put log4j2.xml on the classpath.
Open Run Configurations -> Arguments -> VM Arguments and enter: -Dlog4j.configurationFile=log4j2.xml
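A minimal log4j2.xml to put on the classpath could look like this (console-only; the pattern and level are just sensible defaults, not anything the original setup prescribes):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```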

Apache: (13) Permission denied

SE Linux issue

# setenforce Permissive

# system-config-selinux

(13)Permission denied: proxy: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed

This error is not really about file permissions or anything like that. What it actually means is that httpd has been denied permission to connect to that IP address and port.

The most common cause of this is SELinux not permitting httpd to make network connections.

To resolve it, you need to change an SELinux boolean value (which will automatically persist across reboots). You may also want to restart httpd to reset the proxy worker, although this isn’t strictly required.

# setsebool -P httpd_can_network_connect 1

For more information on how SELinux can affect httpd, read the httpd_selinux man page.

Subversion Repository Quickguide

On the Subversion server

su root

Create a template

mkdir svn.repos
cd svn.repos
mkdir trunk
mkdir tags
mkdir branches
echo "SVN Repository" > readme.txt
echo "Trunk" > trunk/readme.txt
echo "Tags" > tags/readme.txt
echo "Branches" > branches/readme.txt
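The same template layout can be built in fewer commands with brace expansion (bash 4+ is assumed, for the ${d^} capitalization; a scratch directory is used so the snippet can be run anywhere):

```shell
#!/bin/bash
# Same template layout in fewer commands, using brace expansion
cd "$(mktemp -d)"
mkdir -p svn.repos/{trunk,tags,branches}
echo "SVN Repository" > svn.repos/readme.txt
for d in trunk tags branches; do
  echo "${d^}" > "svn.repos/$d/readme.txt"   # ${d^} upper-cases the first letter
done
ls svn.repos
```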

Create the repository

svnadmin create /var/www/svn/samplerepos
svn import svn.repos/ file:///var/www/svn/samplerepos
chown -R apache:apache /var/www/svn/samplerepos

## If you have SELinux enabled (you can check it with "sestatus" command) ##
## then change SELinux security context with chcon command ##

chcon -R -t httpd_sys_content_t /var/www/svn/samplerepos

## Following enables commits over http ##
chcon -R -t httpd_sys_rw_content_t /var/www/svn/samplerepos
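Since the checkout below goes over http://, the repository has to be exported through Apache's mod_dav_svn. A typical Location block matching the layout above (SVNParentPath serves every repository under /var/www/svn; authentication is omitted here and would normally be added for commits):

```apache
# Requires the mod_dav_svn package
<Location /svn>
    DAV svn
    SVNParentPath /var/www/svn
    # For commit access, add e.g. AuthType Basic,
    # AuthUserFile and Require valid-user here.
</Location>
```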

Initial checkout, to create a working copy of the repository.

svn checkout http://shunyo.xs4all.nl/svn/samplerepos/trunk projectname

Check in your work.

cd projectname
svn add filename
svn commit -m "Commit Message"