
Install OpenVAS on CentOS 7 / RHEL 7


OpenVAS, the Open Vulnerability Assessment System, is an open source vulnerability scanning suite that tests servers for known vulnerabilities using a database of Network Vulnerability Tests (NVTs). OpenVAS is free software; its components are licensed under the GNU General Public License (GNU GPL). Here is a short guide to setting up OpenVAS on CentOS 7 / RHEL 7.

Setup Repository:

Issue the following command in the terminal to install the Atomic repository.

# wget -q -O - http://www.atomicorp.com/installers/atomic |sh

Accept the license Agreement.

Atomic Free Unsupported Archive installer, version 2.0.12

BY INSTALLING THIS SOFTWARE AND BY USING ANY AND ALL SOFTWARE
PROVIDED BY ATOMICORP LIMITED YOU ACKNOWLEDGE AND AGREE:

THIS SOFTWARE AND ALL SOFTWARE PROVIDED IN THIS REPOSITORY IS
PROVIDED BY ATOMICORP LIMITED AS IS, IS UNSUPPORTED AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL ATOMICORP LIMITED, THE
COPYRIGHT OWNER OR ANY CONTRIBUTOR TO ANY AND ALL SOFTWARE PROVIDED
BY OR PUBLISHED IN THIS REPOSITORY BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
OF THE POSSIBILITY OF SUCH DAMAGE.

====================================================================
THIS SOFTWARE IS UNSUPPORTED.  IF YOU REQUIRE SUPPORTED SOFWARE
PLEASE SEE THE URL BELOW TO PURCHASE A NUCLEUS LICENSE AND DO NOT
PROCEED WITH INSTALLING THIS PACKAGE.
====================================================================

For supported software packages please purchase a Nucleus license:
https://www.atomicorp.com/products/nucleus.html

All atomic repository rpms are UNSUPPORTED.

Do you agree to these terms? (yes/no) [Default: yes] yes

Configuring the [atomic] yum archive for this system

Installing the Atomic GPG key: OK
Downloading atomic-release-1.0-19.el7.art.noarch.rpm: OK

The Atomic Rocket Turtle archive has now been installed and configured for your system
The following channels are available:
atomic          – [ACTIVATED] – contains the stable tree of ART packages
atomic-testing  – [DISABLED]  – contains the testing tree of ART packages
atomic-bleeding – [DISABLED]  – contains the development tree of ART packages

System Repo (Only for RHEL):

The OpenVAS installation requires additional packages to be downloaded from the internet. If your system does not have a Red Hat subscription, you need to set up the CentOS repository.

# vi /etc/yum.repos.d/centos.repo

Add the following lines.

[CentOS]
name=centos
baseurl=http://mirror.centos.org/centos/7/os/x86_64/
enabled=1
gpgcheck=0

PS: CentOS machines do not require the above repository setup; the system creates it automatically during installation.
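
To confirm that the Atomic and CentOS base repositories are enabled before you install anything, you can optionally list them:

# yum repolist enabled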

Install & Setup OpenVAS:

Issue the following command to install OpenVAS.

# yum -y install openvas

Yum will resolve the dependencies and begin the package installation; the output below is truncated.

 texlive-tipa                                     noarch                     2:svn29349.1.3-32.el7                                    base                        2.8 M
 texlive-tools                                    noarch                     2:svn26263.0-32.el7                                      base                         61 k
 texlive-underscore                               noarch                     2:svn18261.0-32.el7                                      base                         21 k
 texlive-unicode-math                             noarch                     2:svn29413.0.7d-32.el7                                   base                         60 k
 texlive-url                                      noarch                     2:svn16864.3.2-32.el7                                    base                         25 k
 texlive-varwidth                                 noarch                     2:svn24104.0.92-32.el7                                   base                         20 k
 texlive-xcolor                                   noarch                     2:svn15878.2.11-32.el7                                   base                         34 k
 texlive-xkeyval                                  noarch                     2:svn27995.2.6a-32.el7                                   base                         26 k
 texlive-xunicode                                 noarch                     2:svn23897.0.981-32.el7                                  base                         43 k
 unzip                                            x86_64                     6.0-13.el7                                               base                        165 k
 wapiti                                           noarch                     2.3.0-5.el7.art                                          atomic                      290 k
 which                                            x86_64                     2.20-7.el7                                               base                         41 k
 wmi                                              x86_64                     1.3.14-4.el7.art                                         atomic                      7.7 M
 zip                                              x86_64                     3.0-10.el7                                               base                        260 k
 zziplib                                          x86_64                     0.13.62-5.el7                                            base                         81 k
 
Transaction Summary
========================================================================================================================================================================
Install  1 Package (+262 Dependent packages)
 
Total download size: 84 M
Installed size: 280 M
Is this ok [y/d/N]: y
(1/263): bzip2-1.0.6-12.el7.x86_64.rpm                                                                                                           |  52 kB  00:00:00
warning: /var/cache/yum/x86_64/7/atomic/packages/alien-8.90-2.el7.art.noarch.rpm: Header V3 RSA/SHA1 Signature, key ID 4520afa9: NOKEY
Public key for alien-8.90-2.el7.art.noarch.rpm is not installed
(2/263): alien-8.90-2.el7.art.noarch.rpm                                                                                                         |  90 kB  00:00:00
(3/263): automake-1.13.4-3.el7.noarch.rpm                                                                                                        | 679 kB  00:00:00
(4/263): autoconf-2.69-11.el7.noarch.rpm                                                                                                         | 701 kB  00:00:00
(5/263): debconf-1.5.52-2.el7.art.noarch.rpm                                                                                                     | 186 kB  00:00:00
(6/263): dirb-221-2.el7.art.x86_64.rpm                                                                                                           |  46 kB  00:00:00
(7/263): dpkg-perl-1.16.15-1.el7.art.noarch.rpm                                                                                                  | 292 kB  00:00:00
(8/263): debhelper-9.20140228-1.el7.art.noarch.rpm                                                                                               | 750 kB  00:00:00
(9/263): doxygen-1.8.5-3.el7.x86_64.rpm                                                                                                          | 3.6 MB  00:00:00
(10/263): dpkg-1.16.15-1.el7.art.x86_64.rpm                                                                                                      | 1.2 MB  00:00:00
 texlive-tetex-bin.noarch 2:svn27344.0-32.20130427_r30134.el7                       texlive-thumbpdf.noarch 2:svn26689.3.15-32.el7
 texlive-thumbpdf-bin.noarch 2:svn6898.0-32.20130427_r30134.el7                     texlive-tipa.noarch 2:svn29349.1.3-32.el7
 texlive-tools.noarch 2:svn26263.0-32.el7                                           texlive-underscore.noarch 2:svn18261.0-32.el7
 texlive-unicode-math.noarch 2:svn29413.0.7d-32.el7                                 texlive-url.noarch 2:svn16864.3.2-32.el7
 texlive-varwidth.noarch 2:svn24104.0.92-32.el7                                     texlive-xcolor.noarch 2:svn15878.2.11-32.el7
 texlive-xkeyval.noarch 2:svn27995.2.6a-32.el7                                      texlive-xunicode.noarch 2:svn23897.0.981-32.el7
 unzip.x86_64 0:6.0-13.el7                                                          wapiti.noarch 0:2.3.0-5.el7.art
 which.x86_64 0:2.20-7.el7                                                          wmi.x86_64 0:1.3.14-4.el7.art
 zip.x86_64 0:3.0-10.el7                                                            zziplib.x86_64 0:0.13.62-5.el7
 
 Complete!

Once the installation is completed, start the OpenVAS setup.

# openvas-setup

The setup will start by downloading the latest NVT database from the internet. Upon completion, it will ask you to configure the listening IP address.

Step 2: Configure GSAD
The Greenbone Security Assistant is a Web Based front end
for managing scans. By default it is configured to only allow
connections from localhost.
Allow connections from any IP? [Default: yes]
Restarting gsad (via systemctl):                           [  OK  ]

Configure admin user.

Step 3: Choose the GSAD admin users password.
The admin user is used to configure accounts,
Update NVT's manually, and manage roles.
Enter administrator username [Default: admin] : admin
Enter Administrator Password:
Verify Administrator Password:

Once completed, you will see the following message.

Setup complete, you can now access GSAD at:
https://<IP>:9392

Disable Iptables.

# systemctl stop iptables.service
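
Alternatively, if your system runs firewalld rather than iptables, you can leave the firewall up and open only the GSAD port (a minimal sketch; adjust the zone if you use a non-default one):

# firewall-cmd --permanent --add-port=9392/tcp
# firewall-cmd --reload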

Create a certificate for the OpenVAS manager.

# openvas-mkcert-client -n om -i

You do not need to enter any information; the script creates everything automatically.

Generating RSA private key, 1024 bit long modulus
.....................++++++
...........................++++++
e is 65537 (0x10001)
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [DE]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:
Using configuration from /tmp/openvas-mkcert-client.2827/stdC.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName           :PRINTABLE:'DE'
localityName          :PRINTABLE:'Berlin'
commonName            :PRINTABLE:'om'
Certificate is to be certified until Aug  5 19:43:32 2015 GMT (365 days)
Write out database with 1 new entries
Data Base Updated
Your client certificates are in /tmp/openvas-mkcert-client.2827 . You will have to copy them by hand.

Now rebuild the OpenVAS database (if required).

# openvasmd --rebuild

Once completed, start the OpenVAS manager.

# openvasmd

Open your browser and point it to https://your-ip-address:9392. Log in as admin using the password you created.

CentOS 7 – OpenVAS Login

You can start a quick scan by entering an IP address in the quick scan field.

CentOS 7 – Scan Machine

After that, you will see the task appear immediately, as below; in this screenshot the scan is 98% complete.

CentOS 7 – Scanning Status

Click on the task to view the details of the scan; they will look like the screenshot below. Once the scan is completed, click on "Date" to see the report.

CentOS 7 – VA Scanning Completed

On the report page you have the option to download the report in multiple formats such as PDF, HTML, XML, etc., or you can click on each vulnerability to see the full information.

CentOS 7 – OpenVAS Report Page

The actual report will look like the one below.

CentOS 7 – OpenVAS Report

That's all! Please place your valuable comments below.

Monitor remote machine with Icinga on CentOS 7


Icinga Logo

Once you have installed Icinga, you can monitor systems via its web interface; by default it is limited to monitoring the local machine where Icinga is installed. If you would like to monitor a remote Linux or Windows box, you need the Nagios plugins and the NRPE add-on installed on the remote box. Once the plugins and add-on are installed, you configure the Icinga server to collect the information from the remote machine and display it on the web interface.

If you are yet to install Icinga, visit the post on installing Icinga on CentOS 7. Monitoring a remote Linux system involves the following six steps.

Icinga Remote Host:

  1. Add User Account
  2. Download & Install Nagios Plugin
  3. Install NRPE Add-on
  4. Configure NRPE Add-On

Icinga Server Host:

  1. Configure Icinga Server
  2. Monitor remote machine.

Icinga Remote Host:

Install the required packages.

yum install gcc cpp glibc-devel glibc-headers kernel-headers libgomp libmpc mpfr make openssl* xinetd

Add User account:

Before proceeding with the installation, create a new user named "icinga" and give it a password.

useradd icinga

Set the password.

passwd icinga

Download and Install Nagios Plugin:

Download the Nagios plugins on the remote host using the following commands (for the latest version, visit the Nagios website).

cd /tmp
wget http://nagios-plugins.org/download/nagios-plugins-2.0.3.tar.gz

Extract the Nagios plugins tarball.

tar -zxvf /tmp/nagios-plugins-2.0.3.tar.gz
cd /tmp/nagios-plugins-2.0.3

Compile and install the plugin.

./configure --prefix=/usr/local/icinga --with-cgiurl=/icinga/cgi-bin --with-nagios-user=icinga --with-nagios-group=icinga
make
make install

Change the ownership of the Nagios plugin directories on the remote host.

chown icinga:icinga /usr/local/icinga/
chown -R icinga:icinga /usr/local/icinga/libexec/

Download and install NRPE Add-on:

Visit the Nagios download page and download the NRPE Add-on.

cd /tmp
wget http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz

Once downloaded, extract the tarball.

tar -zxvf /tmp/nrpe-2.15.tar.gz
cd /tmp/nrpe-2.15

Compile it.

./configure --with-nagios-user=icinga --with-nagios-group=icinga --with-nrpe-group=icinga --with-nrpe-user=icinga --prefix=/usr/local/icinga
make all
make install-plugin
make install-daemon
make install-daemon-config
make install-xinetd

Configure NRPE Add-on:

NRPE runs under the xinetd daemon. Modify the NRPE configuration to accept connections from the Icinga server by editing /etc/xinetd.d/nrpe.

vi /etc/xinetd.d/nrpe

Add the Icinga server's IP address to the only_from line, as shown below.

only_from = 127.0.0.1 192.168.12.151

Add NRPE port at the end of the /etc/services file.

nrpe 5666/tcp # NRPE

Restart the xinetd service.

systemctl restart xinetd.service

Confirm that NRPE is listening.

netstat -at | grep 5666
tcp6       0      0 :::5666                 :::*                    LISTEN      26780/xinetd

Confirm that NRPE is functioning (the plugins were installed under the /usr/local/icinga prefix).

/usr/local/icinga/libexec/check_nrpe -H 127.0.0.1
NRPE v2.15

Modify NRPE Config file:

Modify /usr/local/icinga/etc/nrpe.cfg on the remote host; it contains the command definitions used to check services on the remote host. The nrpe.cfg file ships with basic commands to check remote services; below are the command lines that check the CPU load and the number of running processes. The check_load and check_total_procs commands have to be referenced in the template file on the server host to enable the monitoring.

command[check_load]=/usr/local/icinga/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_total_procs]=/usr/local/icinga/libexec/check_procs -w 150 -c 200

In the above commands, -w sets the warning threshold and -c sets the critical threshold. For example, if you execute the second command in the terminal, it checks the running processes: it warns when there are more than 150 processes, reports critical when there are more than 200, and reports OK when the count is below 150.

/usr/local/icinga/libexec/check_procs -w 150 -c 200
PROCS OK: 17 processes | procs=17;150;200;0;

For testing purposes, change the warning threshold to 15, since only a few processes are running on the server. Now you can see the warning message; adjust the thresholds according to your requirements.

/usr/local/icinga/libexec/check_procs -w 15 -c 200
PROCS WARNING: 17 processes | procs=17;15;200;0;

Icinga Server Host:

On the Icinga server, we must have the NRPE add-on installed and a template for the remote host.

Install NRPE Add-on:

Visit the Nagios download page and download the NRPE Add-on.

cd /tmp
wget http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz

Once downloaded, extract the tarball.

tar -zxvf /tmp/nrpe-2.15.tar.gz
cd /tmp/nrpe-2.15

Compile it.

./configure --with-nagios-user=icinga --with-nagios-group=icinga --with-nrpe-group=icinga --with-nrpe-user=icinga --prefix=/usr/local/icinga
make all
make install-plugin
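
Before configuring the server, it is worth verifying that it can reach the NRPE daemon on the remote host; a quick check, using the remote host's address from the template below (192.168.12.102):

# /usr/local/icinga/libexec/check_nrpe -H 192.168.12.102

If everything is wired up, this prints the NRPE version (NRPE v2.15), just like the local test on the remote host.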

Configure Icinga Server:

Now it is time to configure the Icinga server to monitor the remote client. You'll need to create a command definition in one of your Icinga object configuration files in order to use the check_nrpe plugin. Edit the commands.cfg file.

vi /usr/local/icinga/etc/objects/commands.cfg

Add the following command definition to the file.

# 'check_nrpe' command definition
define command{
command_name check_nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -t 30 -c $ARG1$
}

Create a new configuration file (/usr/local/icinga/etc/objects/remote.cfg) to define the host and service definitions. You can use the following template and modify it according to your requirements. The following is configured to display the CPU load, disk space, current users, and so on; note that every check_command referenced here must have a matching command definition in nrpe.cfg on the remote host (see the note after the template).

define host{
use linux-server
host_name remote.itzgeek.com
alias Client 1
address 192.168.12.102
}
define hostgroup{
hostgroup_name Linux Client
alias Linux Client 1
members remote.itzgeek.com
}
define service{
use local-service
host_name remote.itzgeek.com
service_description Root Partition
check_command check_nrpe!check_hda1
}
define service{
use local-service
host_name remote.itzgeek.com
service_description Current Users
check_command check_nrpe!check_users
}
define service{
use local-service
host_name remote.itzgeek.com
service_description Total Processes
check_command check_nrpe!check_total_procs
}
define service{
use local-service
host_name remote.itzgeek.com
service_description Current Load
check_command check_nrpe!check_load
}
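
Note that the template above also references check_hda1 and check_users, which are not among the nrpe.cfg lines shown earlier. The stock nrpe.cfg ships with sample definitions along the lines of the following; the device path /dev/hda1 is an assumption, so adjust it to your actual root disk (e.g. /dev/sda1):

command[check_users]=/usr/local/icinga/libexec/check_users -w 5 -c 10
command[check_hda1]=/usr/local/icinga/libexec/check_disk -w 20% -c 10% -p /dev/hda1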

Add the new template to the icinga.cfg configuration file so that Icinga reads the new entries.

vi /usr/local/icinga/etc/icinga.cfg

Add below line.

# Definitions for monitoring the Remote (Linux) host
cfg_file=/usr/local/icinga/etc/objects/remote.cfg

Restart the icinga server.

/etc/init.d/icinga restart

Monitor the remote machine:

Now log in to the web interface and start monitoring. The following screenshot shows the remote Linux server with the default services available.

CentOS 7 – Icinga With Remote Monitoring

That’s All. Now you can easily monitor the remote machine with Icinga.

Setup Icinga Monitoring Tool on CentOS 7 / RHEL 7


Icinga Logo

Icinga is a fork of the famous Nagios monitoring tool; it is highly compatible with Nagios and can be integrated with Nagios plugins. Icinga is very similar to Nagios, so you won't find it difficult to move to Icinga. Icinga is one step ahead on multiple factors: the important ones are advanced reporting via web-based Jasper reports, a much-improved web interface, and availability as a virtual appliance.

This post will help you to setup Icinga on CentOS 7 / RHEL 7.

Prerequisites:

Before we go ahead, let's install the required packages for Icinga.

# yum -y install wget httpd mod_ssl gd gd-devel mariadb-server php-mysql php-xmlrpc gcc mariadb libdbi libdbi-devel libdbi-drivers libdbi-dbd-mysql

Disable SELinux.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
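
The sed command above only takes effect after a reboot; to also turn SELinux off for the current session, you can run:

# setenforce 0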

Reboot once done.

Create the icinga user and the icinga-cmd group (which allows external commands to be executed through the web interface), then add the icinga and apache users to the icinga-cmd group.

# useradd icinga
# groupadd icinga-cmd
# usermod -a -G icinga-cmd icinga
# usermod -a -G icinga-cmd apache

Download latest Icinga source tarball.

# cd /tmp/
# wget http://downloads.sourceforge.net/project/icinga/icinga/1.10.1/icinga-1.10.1.tar.gz
# tar -zxvf /tmp/icinga-1.10.1.tar.gz
# cd /tmp/icinga-1.10.1

Compile and Install Icinga:

# ./configure --with-command-group=icinga-cmd --enable-idoutils
# make all
# make install
# make install-init
# make install-config
# make install-commandmode
# make install-webconf
# make install-idoutils

Configure Icinga:

Sample configuration files have now been installed in the /usr/local/icinga/etc/ directory. These sample files should work fine for getting started with Icinga. You'll need to make just one change before you proceed: edit the /usr/local/icinga/etc/objects/contacts.cfg config file with your favorite editor and change the email address associated with the icingaadmin contact definition to the address you'd like to use for receiving alerts.

# vi /usr/local/icinga/etc/objects/contacts.cfg

Change the Email address field to receive the notification.

email                           icinga@localhost

to

email                           [email protected]

Move the sample IDOUtils configuration files into the Icinga base directory.

# cd /usr/local/icinga/etc/
# mv idomod.cfg-sample idomod.cfg
# mv ido2db.cfg-sample ido2db.cfg
# cd modules/
# mv idoutils.cfg-sample idoutils.cfg

Create database for idoutils:

# systemctl start mariadb.service
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE icinga;
MariaDB [(none)]> GRANT USAGE ON icinga.* TO 'icinga'@'localhost' IDENTIFIED BY 'icinga' WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> quit
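
Note that GRANT USAGE on its own does not give the icinga user any privileges on the schema. The Icinga IDOUtils documentation grants working privileges roughly as follows (a sketch; adjust the 'icinga' password to your own):

MariaDB [(none)]> GRANT SELECT, INSERT, UPDATE, DELETE, DROP, CREATE VIEW, INDEX, EXECUTE ON icinga.* TO 'icinga'@'localhost' IDENTIFIED BY 'icinga';
MariaDB [(none)]> FLUSH PRIVILEGES;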

Import Database.

# mysql -u root -p icinga < /tmp/icinga-1.10.1/module/idoutils/db/mysql/mysql.sql

Configure Web Interface:

Create an icingaadmin account for logging into the Icinga web interface. Remember the password that you assign to this user; you'll need it later.

# htpasswd -c /usr/local/icinga/etc/htpasswd.users icingaadmin

Restart Apache to make the new settings take effect.

systemctl restart httpd.service

Download and Install Nagios Plugins:

Download Nagios Plugins to /tmp directory.

# cd /tmp
# wget http://nagios-plugins.org/download/nagios-plugins-2.0.3.tar.gz
# tar -zxvf /tmp/nagios-plugins-2.0.3.tar.gz
# cd /tmp/nagios-plugins-2.0.3/

Compile and install the plugins.

# ./configure --prefix=/usr/local/icinga --with-cgiurl=/icinga/cgi-bin --with-nagios-user=icinga --with-nagios-group=icinga
# make
# make install

Starting Icinga:

Verify the sample Icinga configuration files.

# /usr/local/icinga/bin/icinga -v /usr/local/icinga/etc/icinga.cfg

If there are no errors, start Icinga and IDOUtils.

# /etc/init.d/icinga start
# /etc/init.d/ido2db start

Enable Icinga and IDOUtils to start on system startup.

# chkconfig ido2db on
# chkconfig icinga on
# systemctl enable httpd.service
# systemctl enable mariadb.service

Access Web Interface:

Now access the Icinga web interface using the following URL. You'll be prompted for the username (icingaadmin) and the password you specified earlier.

http://ip-address/icinga/
CentOS 7 – Icinga Dashboard

Click on service details to check the status.

CentOS 7 – Icinga localhost Service details

Troubleshooting:

If you get an unknown warning for the ping check, as shown below:

CentOS 7 – Icinga Ping Unknown Warning

Execute the following command in the terminal to resolve the issue.

# chmod u+s /bin/ping

If you get a warning for the httpd check, as shown below:

CentOS 7 – Icinga Httpd Warning

Place an index.html in the document root.

# echo "Home Page" > /var/www/html/index.html

That's all! You have successfully installed Icinga on CentOS 7 / RHEL 7.

Install Jetty web server on CentOS 7 / RHEL 7


Jetty_logo

Jetty is a Java-based HTTP server and servlet container. Web servers are normally used for serving static content to clients; nowadays Jetty is also used for server-to-server communication within large frameworks. Jetty is developed under an open source license as part of the Eclipse Foundation, and it is used in multiple active products such as Apache ActiveMQ, Alfresco, Apache Geronimo, Apache Maven, and Apache Spark, as well as in open source projects such as Hadoop, Eucalyptus, and Red5.
Jetty supports the latest Java Servlet API as well as the SPDY and WebSocket protocols. This guide will help you to set up Jetty on CentOS 7 / RHEL 7.

Jetty requires a Java JDK; go ahead and install it.

#  yum -y install java-1.7.0-openjdk wget

Download the latest version of Jetty.

# wget http://download.eclipse.org/jetty/stable-9/dist/jetty-distribution-9.2.5.v20141112.tar.gz

Extract the downloaded archive file to /opt

# tar zxvf jetty-distribution-9.2.5.v20141112.tar.gz -C /opt/

Rename it to jetty

# mv /opt/jetty-distribution-9.2.5.v20141112/ /opt/jetty

Create a user called jetty to run the Jetty web server on system start-up.

# useradd -m jetty

Change ownership of extracted jetty directory.

# chown -R jetty:jetty /opt/jetty/

Copy or symlink jetty.sh into the /etc/init.d directory to create a startup script for the Jetty web server.

# ln -s /opt/jetty/bin/jetty.sh /etc/init.d/jetty

Add the script to chkconfig.

# chkconfig --add jetty

Enable auto-start at run levels 3, 4, and 5.

chkconfig --level 345 jetty on

Add the following information to /etc/default/jetty, replacing the port and listening address with your values.

vi /etc/default/jetty
 
JETTY_HOME=/opt/jetty
JETTY_USER=jetty
JETTY_PORT=8080
JETTY_HOST=192.168.12.10
JETTY_LOGS=/opt/jetty/logs/

Now start the jetty service.

service jetty start

Jetty can now be accessed in a web browser at http://your-ip-address:8080.
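
To quickly verify from the command line that Jetty is serving requests, you can fetch the response headers with curl, using the JETTY_HOST and JETTY_PORT values configured above:

# curl -I http://192.168.12.10:8080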

How to install Graylog2 on CentOS 7 / RHEL 7

Graylog Logo

Graylog (formerly known as Graylog2) is an open source log management platform that helps you to collect, index, and analyze machine logs in a centralized location. This guide helps you to install Graylog2 on CentOS 7 / RHEL 7 and also covers the installation of the four components that make Graylog2 a powerful log management tool.

1. MongoDB – stores the configuration and meta information.

2. Elasticsearch – stores the log messages and offers a search facility; nodes should have plenty of memory, as all the I/O operations happen here.

3. Graylog server – the log parser; it collects the logs from various inputs.

4. Graylog web interface – provides the web-based portal for managing the logs.

Pre-requisites:

1. Since Elasticsearch is based on Java, we need to install either OpenJDK or Oracle JDK. It is recommended to install Oracle JDK. Verify the Java version using the following command.

# java -version

java version "1.8.0_11"
Java(TM) SE Runtime Environment (build 1.8.0_11-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.11-b03, mixed mode)

2. Configure EPEL repository on CentOS 7 / RHEL 7.

Install Elasticsearch:

Elasticsearch is an open source search server that offers realtime distributed search and analytics with a RESTful web interface. Elasticsearch stores all the logs sent by the Graylog server and displays the messages whenever the Graylog web interface requests them to fulfill user requests. This topic covers the configuration settings required for Graylog; you can also take a look at Install Elasticsearch on CentOS 7 / Ubuntu 14.10 / Linux Mint 17.1 for detailed instructions.

Let's install Elasticsearch; it can be downloaded from the official website. You can use the following commands to download it via the terminal and install it.

# wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.4.noarch.rpm

# rpm -Uvh elasticsearch-1.4.4.noarch.rpm

Configure Elasticsearch to start during system startup.

# systemctl daemon-reload
# systemctl enable elasticsearch.service

The only important thing is to set the cluster name to "graylog2", as that is what Graylog uses. Now edit the configuration file of Elasticsearch.

# vi /etc/elasticsearch/elasticsearch.yml

cluster.name: graylog2

Disable dynamic scripting to avoid remote code execution; this can be done by adding the following line at the end of the above file.

script.disable_dynamic: true

Once that is done, we are good to go. Restart the Elasticsearch service to load the modified configuration.

# systemctl restart elasticsearch.service

Wait for at least a minute to let Elasticsearch fully restart, otherwise the test will fail. Elasticsearch should now be listening on port 9200 to process HTTP requests; we can use curl to get the response. Ensure that it returns the cluster name "graylog2".

# curl -X GET http://localhost:9200

{
"status" : 200,
"name" : "Sinister",
"cluster_name" : "graylog2",
"version" : {
"number" : "1.4.4",
"build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512",
"build_timestamp" : "2015-02-19T13:05:36Z",
"build_snapshot" : false,
"lucene_version" : "4.10.3"
},
"tagline" : "You Know, for Search"

Optional: use the following command to check the Elasticsearch cluster health; the cluster status must be "green" for Graylog to work.

# curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

{
"cluster_name" : "graylog2",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}

Install MongoDB:

MongoDB is available in RPM format and can be downloaded from the official website. Add the following repository information to the system to install MongoDB using yum.

# vi /etc/yum.repos.d/mongodb-org-3.0.repo

[mongodb-org-3.0]
name=MongoDB Repository
baseurl=http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1

Install MongoDB using the following command.

# yum install -y mongodb-org

Run the following command to configure SELinux to allow MongoDB to start.

# semanage port -a -t mongod_port_t -p tcp 27017

Or, if you do not use SELinux on the system, consider disabling it.

Start the MongoDB service and enable it to start automatically during the system start-up.

# service mongod start

# chkconfig mongod on
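
Optionally, verify that MongoDB is up and reachable by asking the mongo shell for the server version:

# mongo --eval 'db.version()'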

The above steps are enough for configuring Graylog2; you can find the detailed configuration here.

Install Graylog2:

The Graylog server accepts and processes log messages and also spawns the REST API for requests that come from the Graylog web interface. Download the latest version of Graylog from graylog.org; use the following command to download it in the terminal.

# wget https://packages.graylog2.org/releases/graylog2-server/graylog-1.0.1.tgz

Extract and move it to /opt.

# tar -zxvf graylog-1.0.1.tgz

# mv graylog-1.0.1 /opt/graylog

Copy the sample configuration file to /etc/graylog/server, creating the directory if it does not exist.

# mkdir -p /etc/graylog/server

# cp /opt/graylog/graylog.conf.example /etc/graylog/server/server.conf

Edit the server.conf file.

# vi /etc/graylog/server/server.conf

Configure the following variables in the above file.

Set a secret to secure the user passwords; use the following command to generate one of at least 64 characters.

# pwgen -N 1 -s 96

5uxJaeL4vgP9uKQ1VFdbS5hpAXMXLq0KDvRgARmlI7oxKWQbH9tElSSKTzxmj4PUGlHIpOkoMMwjICYZubUGc9we5tY1FjLB

If you get a “pwgen: command not found“, use the following command to install pwgen.

# yum -y install pwgen

Place the secret.

password_secret = 5uxJaeL4vgP9uKQ1VFdbS5hpAXMXLq0KDvRgARmlI7oxKWQbH9tElSSKTzxmj4PUGlHIpOkoMMwjICYZubUGc9we5tY1FjLB

Next, set a hashed password for the root user (not to be confused with the system user; the root user of Graylog is admin). You will use this password to log in to the web interface. The admin's password cannot be changed through the web interface, so this variable must be edited to set it.

Replace "yourpassword" with a password of your choice.

# echo -n yourpassword | sha256sum

e3c652f0ba0b4801205814f8b6bc49672c4c74e25b497770bb89b22cdeb4e951

Place the hashed password.

root_password_sha2 = e3c652f0ba0b4801205814f8b6bc49672c4c74e25b497770bb89b22cdeb4e951

Graylog will try to find the Elasticsearch nodes automatically using multicast. However, on larger networks it is recommended to use unicast mode, which is best suited for production setups. Add the following two entries to the Graylog server.conf file, replacing ipaddress with a live hostname or IP address; multiple hosts can be added, separated by commas.

elasticsearch_http_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = ipaddress:9300

Set only one master node by defining the variable below; the default setting is true, and you must set it to false to make a particular node a slave. The master node performs some periodic tasks that slaves won't perform.

is_master = true

The following variable sets the number of log messages to keep per index; it is recommended to have several smaller indices instead of larger ones.

elasticsearch_max_docs_per_index = 20000000

The following parameter defines the total number of indices; if this number is reached, the oldest index will be deleted.

elasticsearch_max_number_of_indices = 20

The shards setting depends on the number of nodes in the Elasticsearch cluster; if you have only one node, set it to 1.

elasticsearch_shards = 1

The number of replicas for your indices; if you have only one node in the Elasticsearch cluster, set it to 0.

elasticsearch_replicas = 0

Enter your MongoDB authentication information.

# MongoDB Configuration
mongodb_useauth = false  #If this is set to false, you do not need to enter the authentication information
#mongodb_user = grayloguser
#mongodb_password = 123
mongodb_host = 127.0.0.1
#mongodb_replica_set = localhost:27017,localhost:27018,localhost:27019
mongodb_database = graylog2
mongodb_port = 27017

Start the graylog server using the following command.

# /opt/graylog/bin/graylogctl start

You can check the server startup logs; they will be useful for troubleshooting Graylog in case of any issue.

# tailf /opt/graylog/log/graylog-server.log

On successful start of graylog-server, you should get the following message in the log file.

2015-03-23 16:28:15,825 INFO : org.graylog2.shared.initializers.RestApiService - Started REST API at <http://127.0.0.1:12900/>

You may also want to configure an init script for the Graylog server.

Install Graylog web interface:

To configure the Graylog web interface, you must have at least one graylog-server node. Download the same version number as the server to make sure it is compatible.

# wget https://packages.graylog2.org/releases/graylog2-web-interface/graylog-web-interface-1.0.1.tgz

Extract the archive and move it to /opt.

# tar -zxvf graylog-web-interface-1.0.1.tgz
# mv graylog-web-interface-1.0.1 /opt/graylog-web-interface

Edit the configuration file and set the following parameters.

# vi /opt/graylog-web-interface/conf/graylog-web-interface.conf

This is the list of graylog-server nodes; you can add multiple nodes, separated by commas.

graylog2-server.uris="http://127.0.0.1:12900/"

Set the application secret; it can be generated using pwgen -N 1 -s 96.

application.secret="sNXyFf6B4Au3GqSlZwq7En86xp10JimdxxYiLtpptOejX6tIUpUE4DGRJOrcMj07wcK0wugPaapvzEzCYinEWj7BOtHXVl5Z"

Start the graylog-web-interface in the background using the following command.

# nohup /opt/graylog-web-interface/bin/graylog-web-interface &

You may also want to configure an init script for the Graylog web interface.

The web interface will listen on port 9000. Point your browser to it and log in with the username admin and the password you configured at root_password_sha2 in server.conf.

Configure the firewall to allow traffic on port 9000.

firewall-cmd --permanent --zone=public --add-port=9000/tcp
firewall-cmd --reload
Install Graylog2 – Login page

Once you have logged in, you will see the following search page.

Install Graylog2 – Search Page

That's all! You have successfully installed Graylog2 on CentOS 7 / RHEL 7.

Install Apache Hadoop on Ubuntu 14.10 / CentOS 7 (Single Node Cluster)

Hadoop Logo

Apache Hadoop is an open-source software framework written in Java for distributed storage and distributed processing; it handles very large data sets by distributing them across computer clusters. Rather than relying on hardware for high availability, Hadoop modules are designed to detect and handle failures at the application layer, giving you a highly available service.

The Hadoop framework consists of the following modules:

  •  Hadoop Common – contains the common set of libraries and utilities that support the other Hadoop modules.
  •  Hadoop Distributed File System (HDFS) – a Java-based distributed file system that stores data, providing very high throughput to applications.
  •  Hadoop YARN – manages resources on compute clusters and uses them for scheduling users' applications.
  •  Hadoop MapReduce – a framework for large-scale data processing.

This guide will help you to get Apache Hadoop installed on Ubuntu 14.10 / CentOS 7.

Prerequisites:

Since Hadoop is based on Java, make sure you have the Java JDK installed on the system. In case your machine does not have Java, follow the steps below; you may skip them if it is already installed.

Download Oracle Java using the following command, assuming a 64-bit operating system.

# wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u5-b13/jdk-8u5-linux-x64.tar.gz

Extract the downloaded archive and move it to /usr.

# tar -zxvf jdk-8u5-linux-x64.tar.gz
# mv jdk1.8.0_05/ /usr/

Create Hadoop user:

It is recommended to create a normal user to configure Apache Hadoop; create one using the following commands.

# useradd -m -d /home/hadoop hadoop

# passwd hadoop

Once you have created the user, configure passwordless SSH to the local system. Create an SSH key using the following commands.

# su - hadoop

$ ssh-keygen

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Verify the passwordless communication to the local system; if you are doing SSH for the first time, type "yes" to add the RSA key to the known hosts.

$ ssh 127.0.0.1
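
If the login still prompts for a password, the usual cause is loose permissions on the key files; tightening them is a safe first step:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys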

Download Hadoop:

You can visit the Apache Hadoop page to download the latest Hadoop package, or simply issue the following commands in the terminal to download Hadoop 2.6.0.

$ wget http://apache.bytenet.in/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz

$ tar -zxvf hadoop-2.6.0.tar.gz

$ mv hadoop-2.6.0 hadoop

Install apache Hadoop:

Hadoop supports three cluster modes:

  1.     Local (Standalone) Mode – It runs as single java process.
  2.     Pseudo-Distributed Mode – Each hadoop daemon runs in a separate process.
  3.     Fully Distributed Mode – Actual multinode cluster ranging from few nodes to extremely large cluster.

Setup environmental variables:

Here we will be configuring Hadoop in Pseudo-Distributed mode; configure the environment variables in the ~/.bashrc file.

$ vi ~/.bashrc

export JAVA_HOME=/usr/jdk1.8.0_05/
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

Apply the environment variables to the current session.

$ source ~/.bashrc
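
You can confirm that the new environment is picked up by asking Hadoop for its version:

$ hadoop version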

Modify Configuration Files:

Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and set JAVA_HOME environment variable.

export JAVA_HOME=/usr/jdk1.8.0_05/

Hadoop has many configuration files, depending on the cluster mode; since we are setting up a Pseudo-Distributed cluster, edit the following files.

$ cd $HADOOP_HOME/etc/hadoop

Edit core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

Edit hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>

<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>

Edit mapred-site.xml

$ cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

Edit yarn-site.xml

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

Now format the NameNode using the following command; do not forget to check the storage directory.

$ hdfs namenode -format

Start the NameNode and DataNode daemons using the scripts provided by Hadoop; make sure you are in Hadoop's sbin directory.

$ cd $HADOOP_HOME/sbin/
$ start-dfs.sh
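
You can confirm the HDFS daemons are running with the jps tool that ships with the JDK; the output should list NameNode, DataNode, and SecondaryNameNode:

$ jps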

Browse the web interface for the NameNode; by default it is available at: http://your-ip-address:50070/

Hadoop NameNode Information

Start ResourceManager daemon and NodeManager daemon:

$ start-yarn.sh

Browse the web interface for the ResourceManager; by default it is available at: http://your-ip-address:8088/

Hadoop YARN – Cluster Information

Testing Hadoop single node cluster:

Before carrying out the upload, let's create a directory in HDFS in order to upload files.

$ hdfs dfs -mkdir /raj

Let's upload the messages file into the HDFS directory called "raj".

$ hdfs dfs -put /var/log/messages /raj
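
You can also list the uploaded file from the command line:

$ hdfs dfs -ls /raj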

Uploaded files can be viewed by visiting the following URL: http://your-ip-address:50070/explorer.html#/raj

Hadoop Directory browsing

Copy files from HDFS to your local file system.

$ hdfs dfs -get /raj /tmp/

You can delete the files and directories using the following commands.

hdfs dfs -rm /raj/messages
hdfs dfs -rm -r -f /raj

That's all! You have successfully configured a single-node Hadoop cluster.

OpenStack Kilo on Ubuntu 14.04.2 – Configure Swift #1


OpenStack Logo

Swift, AKA OpenStack Object Storage, is a multi-tenant object storage system that provides a distributed, scale-out object store across the storage nodes in the cluster. This guide helps you to configure Swift on Ubuntu 14.04.2.

There are two main components in Swift:

Swift proxy:

It accepts API and raw HTTP requests to upload files, modify metadata, and create containers. Since the requests are made through a REST API, it uses HTTP verbs with simple commands such as PUT and GET. When a user sends data to be written, the request goes to the proxy server, which chooses the right storage node to store the data. You can have multiple proxy servers for performance and redundancy. In our case, we will use the controller node as the Swift proxy server.

Storage node:

This is where the user data gets stored; you can have multiple storage nodes in your environment. Swift is a replication-based system: all the data stored inside it is stored multiple times (replicas) to ensure high availability of the data.

Prerequisites:

The following is the network configuration of the proxy and storage nodes; each storage node has one network interface, on the management network.

ROLE                             NW CARD 1
PROXY SERVER (CONTROLLER NODE)   192.168.12.21 / 24, GW=192.168.12.2 (MANAGEMENT NETWORK)
OBJECT STORAGE NODE 1            192.168.12.25 / 24, GW=192.168.12.2 (MANAGEMENT NETWORK)
OBJECT STORAGE NODE 2            192.168.12.26 / 24, GW=192.168.12.2 (MANAGEMENT NETWORK)
OBJECT STORAGE NODE 3            192.168.12.27 / 24, GW=192.168.12.2 (MANAGEMENT NETWORK)

Install and configure swift proxy on the controller node:

Load your admin credentials from the environment script.

# source admin-openrc.sh

Create the swift user for creating service credentials.

# openstack user create --password-prompt swift
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | 023c019a62f3476d986627e8615b034f |
| name     | swift                            |
| username | swift                            |
+----------+----------------------------------+

Add the admin role to the swift user.

# openstack role add --project service --user swift admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 33af4f957aa34cc79451c23bf014af6f |
| name  | admin                            |
+-------+----------------------------------+

Create the swift service entity.

# openstack service create --name swift --description "OpenStack Object Storage" object-store
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Object Storage         |
| enabled     | True                             |
| id          | b835a5fbfe3d4a9592f6dbd69ddb148d |
| name        | swift                            |
| type        | object-store                     |
+-------------+----------------------------------+

Create the Object Storage service API endpoint.

# openstack endpoint create --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl http://controller:8080 --region RegionOne  object-store

+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| adminurl     | http://controller:8080                       |
| id           | d250217af148491abc611e2b72a227b8             |
| internalurl  | http://controller:8080/v1/AUTH_%(tenant_id)s |
| publicurl    | http://controller:8080/v1/AUTH_%(tenant_id)s |
| region       | RegionOne                                    |
| service_id   | b835a5fbfe3d4a9592f6dbd69ddb148d             |
| service_name | swift                                        |
| service_type | object-store                                 |
+--------------+----------------------------------------------+

Install the packages on the Controller node.

# apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached

Create the /etc/swift directory.

# mkdir /etc/swift

Get the proxy configuration file from the source repository.

# curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/kilo

Edit the /etc/swift/proxy-server.conf file.

# nano /etc/swift/proxy-server.conf

Modify the settings below, making sure to place the entries in the proper sections. Sometimes you may need to add a section if it does not exist, and you may also need to add some entries that are missing from the file, but not all of them.

[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo proxy-logging proxy-server

[app:proxy-server]
...
account_autocreate = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = password
## Replace "password" with the password you chose for swift user in the identity service
delay_auth_decision = true
## Comment out or remove any other options in the [filter:authtoken] section

[filter:cache]
...
memcache_servers = 127.0.0.1:11211

That's all! In our next tutorial we will configure the storage nodes.

OpenStack Kilo on Ubuntu 14.04.2 – Configure Cinder #2


OpenStack Logo

This is the second part of OpenStack Kilo on Ubuntu 14.04.2 – Configure Cinder; in this tutorial we will install and configure the storage node for the Cinder service. For demo purposes, we will configure this storage node with a block storage device /dev/sdb that contains a partition /dev/sdb1 occupying the entire disk.

Prerequisites:

The following is the network configuration of the storage node; it has one network interface, on the management network.

ROLE           NW CARD 1                                                  NW CARD 2   NW CARD 3
STORAGE NODE   192.168.12.24 / 24, GW=192.168.12.2 (MANAGEMENT NETWORK)   NA          NA

Set the hostname of the node to block.

Copy the host entries from the controller node to the storage node and add an entry for the block node. The final output will look like below.

192.168.12.21 controller
192.168.12.22 network
192.168.12.23 compute
192.168.12.24 block

Install NTP package on Storage Node.

# apt-get install ntp

Edit the below configuration file.

# nano /etc/ntp.conf

Remove the other NTP servers from the file by commenting out the lines that start with the word "server". Then add the entry below so that this node syncs with the controller node.

server controller

Restart the NTP service.

# service ntp restart

OpenStack packages:

Install the Ubuntu Cloud archive keyring and repository.

# apt-get install ubuntu-cloud-keyring

# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list

Update the repositories on your system.

# apt-get update

Install lvm2 packages, if required.

#  apt-get install lvm2

Create the physical volume /dev/sdb1

# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

Create the volume group vg_cinder.

# vgcreate vg_cinder /dev/sdb1
Volume group "vg_cinder" successfully created
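
You can optionally confirm the physical volume and the volume group with LVM's reporting commands:

# pvs
# vgs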

Edit the /etc/lvm/lvm.conf file and add a filter that accepts the /dev/sdb device and rejects all other devices.

# nano /etc/lvm/lvm.conf

In the devices section, change

From

filter = [ "a/.*/ " ]

To

filter = [ "a/sdb/", "r/.*/" ]

Install and configure Cinder components:

Install the packages on the storage node.

# apt-get install cinder-volume python-mysqldb

Edit the /etc/cinder/cinder.conf file.

# nano /etc/cinder/cinder.conf

Modify the settings below, making sure to place the entries in the proper sections. Sometimes you may need to add a section if it does not exist, and you may also need to add some entries that are missing from the file, but not all of them.

[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.12.24
## Management IP of Storage Node
enabled_backends = lvm
glance_host = controller
verbose = True

[database]
connection = mysql://cinder:password@controller/cinder
## Replace "password" with the password you chose for cinder database

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password
## Replace "password" with the password you chose for the openstack account in RabbitMQ.
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = password
## Replace "password" with the password you chose for cinder user in the identity service
## Comment out or remove any other options in the [keystone_authtoken] section

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = vg_cinder
iscsi_protocol = iscsi
iscsi_helper = tgtadm

## Replace vg_cinder with your volume group.

[oslo_concurrency]
lock_path = /var/lock/cinder

## Comment out the lock_path in the [DEFAULT] section.

Restart the block storage service.

# service tgt restart
# service cinder-volume restart

Remove the SQLite database file.

# rm -f /var/lib/cinder/cinder.sqlite

Troubleshooting:

Go through the log for any errors.

# cat /var/log/cinder/cinder-volume.log

For errors like below.

"Unknown column 'volumes.instance_uuid' in 'field list'")

"Unknown column 'volumes.attach_time' in 'field list

"Unknown column 'volumes.mountpoint' in 'field list'"

"Unknown column 'volumes.attached_host' in 'field list'")

Visit: Unknown Column

For errors like below.

AMQP server on controller:5672 is unreachable: Too many heartbeats missed. Trying again in 1 seconds.

Visit: Too many heartbeats missed.

Verification:

Run the following command to configure the Block Storage client to use API version 2.0.

# echo "export OS_VOLUME_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh

Load the credentials.

# source admin-openrc.sh

List the service components.

# cinder service-list

+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2015-07-07T20:11:21.000000 |       None      |
|  cinder-volume   | block@lvm  | nova | enabled |   up  | 2015-07-07T20:11:18.000000 |       None      |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

Attach a volume to an instance:

Create a 5 GB virtual disk "disk01" by running the following command on the controller node.

# cinder create --name disk01 5
+---------------------------------------+--------------------------------------+
|                Property               |                Value                 |
+---------------------------------------+--------------------------------------+
|              attachments              |                  []                  |
|           availability_zone           |                 nova                 |
|                bootable               |                false                 |
|          consistencygroup_id          |                 None                 |
|               created_at              |      2015-07-07T20:18:34.000000      |
|              description              |                 None                 |
|               encrypted               |                False                 |
|                   id                  | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |
|                metadata               |                  {}                  |
|              multiattach              |                False                 |
|                  name                 |                disk01                |
|         os-vol-host-attr:host         |                 None                 |
|     os-vol-mig-status-attr:migstat    |                 None                 |
|     os-vol-mig-status-attr:name_id    |                 None                 |
|      os-vol-tenant-attr:tenant_id     |   9b05e6bffdb94c8081d665561d05e31e   |
|   os-volume-replication:driver_data   |                 None                 |
| os-volume-replication:extended_status |                 None                 |
|           replication_status          |               disabled               |
|                  size                 |                  5                   |
|              snapshot_id              |                 None                 |
|              source_volid             |                 None                 |
|                 status                |               creating               |
|                user_id                |   127a9a6b822a4e3eba69fa54128873cd   |
|              volume_type              |                 None                 |
+---------------------------------------+--------------------------------------+

List the available volumes; the status should be "available".

# cinder list
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 | available | disk01 |  5   |     None    |  false   |             |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+

Attach the disk01 volume to our running instance "MY-Fedora":

# nova volume-attach MY-Fedora dbd9afb1-48fd-46d1-8f66-1ef5195b6a94
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |
| serverId | 7432030a-3cbe-49c6-956a-3e725e22196d |
| volumeId | dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 |
+----------+--------------------------------------+

List the volumes again; the status is now "in-use", and the volume is attached to MY-Fedora's instance ID.

# cinder list
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
| dbd9afb1-48fd-46d1-8f66-1ef5195b6a94 | in-use | disk01 |  5   |     None    |  false   | 7432030a-3cbe-49c6-956a-3e725e22196d |
+--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+

Log in to the MY-Fedora instance over SSH and run the fdisk -l command to list the disks.

 # ssh -i mykey [email protected]

Last login: Mon Jul  6 17:59:46 2015 from 192.168.0.103
[fedora@my-fedora ~]$ sudo su -
[root@my-fedora ~]# fdisk -l
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf1cc8d9d

Device     Boot Start      End  Sectors Size Id Type
/dev/vda1  *     2048 41943039 41940992  20G 83 Linux

Disk /dev/vdb: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

From the output above, you can see the new 5GB disk /dev/vdb. This is the volume we attached earlier, and it is now visible to the guest OS.
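As a possible next step (not covered above), the volume can be formatted and mounted inside the guest like any other disk:

[root@my-fedora ~]# mkfs.ext4 /dev/vdb
[root@my-fedora ~]# mkdir /mnt/disk01
[root@my-fedora ~]# mount /dev/vdb /mnt/disk01
[root@my-fedora ~]# df -h /mnt/disk01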

That's all! You have successfully configured the Block Storage service (Cinder) on Ubuntu 14.04.2.

Install and Configure Zabbix

 Linux  Comments Off on Install and Configure Zabbix
Sep 072015
 

1. System Requirements

a. Required software packages (an install one-liner is given at the end of this section): GCC, Automake, MySQL (http://www.mysql.com/)

·        zlib-devel
·        mysql-devel (for MySQL support)
·        glibc-devel
·        curl-devel (for web monitoring)
·        libidn-devel (curl-devel might depend on it)
·        openssl-devel (curl-devel might depend on it)
·        net-snmp-devel (for SNMP support)
·        popt-devel (net-snmp-devel might depend on it)
·        rpm-devel (net-snmp-devel might depend on it)
·        OpenIPMI-devel (for IPMI support)
·        libssh2-devel (for direct SSH checks)

b. System hardware requirements:
         RAM: 128MB
         CPU: Pentium II or higher.

The components of a Zabbix system: zabbix-server, zabbix-agent, zabbix-proxy
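All of the build prerequisites listed in section 1a can be pulled in with a single yum command on CentOS/RHEL (a convenience one-liner derived from the list above):

# yum -y install gcc automake zlib-devel mysql-devel glibc-devel curl-devel libidn-devel openssl-devel net-snmp-devel popt-devel rpm-devel OpenIPMI-devel libssh2-devel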

2. Installation:
a. Install the related components (a sketch of building Zabbix itself follows this section):
– Install Apache and PHP; you can use the command (note: on CentOS/RHEL the Apache package is named httpd):

yum install httpd php

– Start Apache:

service httpd restart
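With Apache running, the Zabbix server itself still needs to be built and pointed at a database. A minimal sketch, assuming the Zabbix source tarball has already been downloaded and unpacked and that MySQL is running (configure flags may vary by Zabbix version):

# mysql -uroot -p -e "create database zabbix character set utf8;"
# ./configure --enable-server --enable-agent --with-mysql --with-net-snmp --with-libcurl
# make install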

 

Configure a Certificate Authority on Linux

 Linux  Comments Off on Configure a Certificate Authority on Linux
Sep 072015
 

I. Topology

Prepare the systems as shown in the diagram:

1.      Configure the network card in host-only mode
2.      Configure the IP addresses as shown in the diagram and verify connectivity
3.      Configure DNS according to the following table (a hosts-file alternative is sketched after step 4):

Server          Server Name            IP
CA server       ca.lablinux.vn         192.168.1.3/24
Web Server      web.lablinux.vn        192.168.1.2/24
client          client.lablinux.vn     192.168.1.1/24

4.      Verify the configuration
[root@client tmp]# ping ca.lablinux.vn
[root@client tmp]# ping web.lablinux.vn
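If a dedicated DNS server is not available, the same name resolution can be achieved by adding these entries to /etc/hosts on each of the three machines (a simple alternative sketch):

192.168.1.3    ca.lablinux.vn       ca
192.168.1.2    web.lablinux.vn      web
192.168.1.1    client.lablinux.vn   client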

Initialize the Certificate Authority server

5.      Verify that OpenSSL is installed
[root@ca tmp]# rpm -q openssl
6.      Create the working directory
[root@ca tmp]# mkdir -m 0755 /etc/pki
7.      Create the directories that hold the CA files; index.txt is the CA's certificate database, and serial seeds the certificate serial numbers
[root@ca tmp]# mkdir -m 0755 /etc/pki/myCA /etc/pki/myCA/private /etc/pki/myCA/certs /etc/pki/myCA/newcerts /etc/pki/myCA/crl
[root@ca tmp]# cd /etc/pki/myCA
[root@ca myCA]# touch index.txt
[root@ca myCA]# echo '01' > serial
8.      Create the configuration file:
[root@ca myCA]# vi testssl.conf
[ ca ]
default_ca      = CA_default            # The default ca section

[ CA_default ]
dir             = ./                    # top dir
certs           = $dir/certs
crl_dir         = $dir/crl
database        = $dir/index.txt        # index file
new_certs_dir   = $dir/newcerts         # new certs dir
certificate     = $dir/certs/ca.crt     # The CA cert
serial          = $dir/serial           # serial number file
private_key     = $dir/private/ca.key   # CA private key (must match the key created in step 9)
RANDFILE        = $dir/private/.rand    # random number file
default_days    = 365                   # how long to certify for
default_crl_days= 30                    # how long before next CRL
default_md      = md5                   # message digest to use
policy          = policy_any            # default policy
email_in_dn     = no                    # Don't add the email into cert DN
name_opt        = ca_default            # Subject name display option
cert_opt        = ca_default            # Certificate display option
copy_extensions = none                  # Don't copy extensions from request

[ policy_any ]
countryName            = supplied
stateOrProvinceName    = optional
organizationName       = optional
organizationalUnitName = optional
commonName             = supplied
emailAddress           = optional
9.      Create a self-signed certificate for the CA itself
[root@ca myCA]# cd /etc/pki/myCA
[root@ca myCA]# openssl req -new -x509 -keyout private/ca.key -out certs/ca.crt -days 1825
10.  Restrict permissions to protect the private key (a quick sanity check of the new certificate follows)
[root@ca myCA]# chmod 0400 /etc/pki/myCA/private/ca.key
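As an optional sanity check (not in the original steps), the new CA certificate can be inspected before it is used to sign anything:

[root@ca myCA]# openssl x509 -in certs/ca.crt -noout -subject -issuer -dates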

Create a certificate request on the Web Server

1.      Verify that OpenSSL is installed
[root@web tmp]# rpm -q openssl
2.      Create the working directory (this lab does not use the distribution's default /etc/pki content)
[root@web tmp]# mkdir -m 0755 /etc/pki
3.      Create the directories that hold the certificates
[root@web tmp]# mkdir -m 0755 /etc/pki/myCA /etc/pki/myCA/private /etc/pki/myCA/certs /etc/pki/myCA/newcerts /etc/pki/myCA/crl
4.      Create a certificate request:
[root@web tmp]# cd /etc/pki/myCA
[root@web myCA]# openssl req -new -nodes -keyout private/server.key -out server.csr -days 365
Note: the Common Name (CN) must be the name of your service (here, web.lablinux.vn)
5.      Restrict access to the private key file
[root@web myCA]# chown root.apache /etc/pki/myCA/private/server.key
[root@web myCA]# chmod 0440 /etc/pki/myCA/private/server.key
6.      Send the request to the CA server (an optional inspection of the request is shown after this list)
[root@web myCA]# scp server.csr [email protected]:/etc/pki/myCA/
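As a quick optional check, the request can be inspected to confirm the CN and verify its signature before it is sent:

[root@web myCA]# openssl req -in server.csr -noout -subject -verify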

Issue the certificate to the Web Server

1.      Sign the certificate request
[root@ca ~]# cd /etc/pki/myCA/
[root@ca myCA]# openssl ca -config testssl.conf -out certs/server.crt -infiles server.csr
2.      Delete the certificate request
[root@ca myCA]# rm -f /etc/pki/myCA/server.csr
3.      Inspect the certificate
[root@ca myCA]# openssl x509 -subject -issuer -enddate -noout -in /etc/pki/myCA/certs/server.crt
Or
[root@ca myCA]# openssl x509 -in certs/server.crt -noout -text
4.      Verify the certificate against the CA certificate
[root@ca myCA]# openssl verify -purpose sslserver -CAfile /etc/pki/myCA/certs/ca.crt /etc/pki/myCA/certs/server.crt
5.      Generate a new CRL (Certificate Revocation List):
[root@ca myCA]# openssl ca -config testssl.conf -gencrl -out crl/myca.crl
6.      Send the signed certificate back to the Web server (a revocation example follows this list)
[root@ca myCA]# scp /etc/pki/myCA/certs/server.crt root@web.lablinux.vn:/etc/pki/myCA
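If an issued certificate later has to be withdrawn, revoke it and regenerate the CRL (a sketch, assuming the signed copy was recorded as newcerts/01.pem, the first serial issued by this CA):

[root@ca myCA]# openssl ca -config testssl.conf -revoke newcerts/01.pem
[root@ca myCA]# openssl ca -config testssl.conf -gencrl -out crl/myca.crl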

Configure the Web server to use the Certificate

1.      Copy the certificate and key to the locations Apache expects, backing up the existing files first (the mod_ssl directives that reference these files are sketched below)
[root@web myCA]# mv /etc/httpd/conf/ssl.crt/server.crt /etc/httpd/conf/ssl.crt/server1.crt
[root@web myCA]# cp /etc/pki/myCA/server.crt /etc/httpd/conf/ssl.crt/
[root@web myCA]# mv /etc/httpd/conf/ssl.key/server.key /etc/httpd/conf/ssl.key/server1.key
[root@web myCA]# cp /etc/pki/myCA/private/server.key /etc/httpd/conf/ssl.key/server.key
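For Apache to serve HTTPS with these files, the mod_ssl configuration must reference them. A minimal sketch of the relevant directives, assuming mod_ssl is installed and configured in /etc/httpd/conf.d/ssl.conf (or the SSL virtual host):

SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl.crt/server.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl.key/server.key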
2.      Create a test web page
[root@web myCA]# cd /var/www/html/
[root@web html]# vi index.html
<html>
<head>
</head>
<body>
  This is a test
</body>
</html>
3.      Start the web server and enable it at boot (a command-line TLS check follows)
[root@web html]# chkconfig httpd on
[root@web html]# service httpd start
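The TLS setup can also be verified from the command line, without a browser (an optional check, assuming the CA certificate has been copied to the client as ca.crt):

[root@client tmp]# openssl s_client -connect web.lablinux.vn:443 -CAfile ca.crt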

Configure the client to use the Certificate

On the client machine, open Internet Explorer and browse to: http://web.lablinux.vn

Then browse to the HTTPS address: https://web.lablinux.vn

Import the CA's certificate:
a.       Choose Options