Thursday, April 5, 2018

Kerberos Server Setup Steps


Here are the steps for setting up a Kerberos (KDC) server; a Hadoop cluster can then be secured by pointing it at this Kerberos server.

Steps:
Step 1: Install a new version of the KDC server:
#yum install krb5-server krb5-libs krb5-workstation

Step 2: Change the [realms] section
[root@bkumar3 hdp]# cat /etc/krb5.conf
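A minimal sketch of the relevant sections is shown below, assuming the EXAMPLE.COM realm used in the later steps and bkumar3.example.com as a hypothetical FQDN for the KDC host; adjust both to match your environment.

[libdefaults]
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = bkumar3.example.com
  admin_server = bkumar3.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM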




Note :
#Here "admin_server" and "kdc" are the host FQDN's of the machine where we installed Kerberos.

Step 3: Use the utility kdb5_util to create the Kerberos database.
[root@bkumar3 ~]# kdb5_util  create -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.COM',
master key name 'K/M@EXAMPLE.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: <hadoop>
Re-enter KDC database master key to verify: <hadoop>

Step 4: Start the KDC server and the KDC admin server.
[root@bkumar3 ~]# systemctl start krb5kdc
[root@bkumar3 ~]#
[root@bkumar3 ~]# systemctl start kadmin
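Optionally, both services can also be enabled so they come back up after a reboot (a standard systemd step, assumed rather than taken from the original output):

[root@bkumar3 ~]# systemctl enable krb5kdc
[root@bkumar3 ~]# systemctl enable kadmin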

Step 5: Create a KDC admin by creating an admin principal.
[root@bkumar3 ~]# kadmin.local  -q "addprinc root/admin@EXAMPLE.COM"
Authenticating as principal root/admin@EXAMPLE.COM with password.
WARNING: no policy specified for root/admin@EXAMPLE.COM; defaulting to no policy
Enter password for principal "root/admin@EXAMPLE.COM":
Re-enter password for principal "root/admin@EXAMPLE.COM":
Principal "root/admin@EXAMPLE.COM" created.


Reference link:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-security/content/optional_install_a_new_mit_kdc.html

Thursday, March 1, 2018

Ansible Automation Script Configuration Setup



Environment:
# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)


Step 1:
Install the Ansible software.
[root@psnode140 home]# yum install ansible -y


# ansible --version
ansible 2.4.2.0

Step 2:
Add the host information (hostnames) to the hosts inventory file.
# vim  hosts
[all]
testNode140
testNode141
testNode142
testNode181
testNode182

Step 3:
Run a sample command to verify the installation:
# ansible -i hosts  -m command  -a "hostname -f " all
Error:
The authenticity of host 'testNode140 (testNode140)' can't be established.
ECDSA key fingerprint is 44:3a:e5:e7:07:fb:5d:d0:d4:29:31:33:b8:7e:e3:9a.
Are you sure you want to continue connecting (yes/no)? yes
testNode140 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'testNode140' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n",
    "unreachable": true
}


Step 4:
Set up password-less SSH authentication to the nodes using ssh-keygen and ssh-copy-id.
# ssh-keygen -t rsa

#ssh-copy-id root@testNode140
#ssh-copy-id root@testNode141
#ssh-copy-id root@testNode142
#ssh-copy-id root@testNode181
#ssh-copy-id root@testNode182
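With a longer node list, the same copies can be done in a small shell loop (a convenience sketch using the hostnames above):

# for h in testNode140 testNode141 testNode142 testNode181 testNode182; do ssh-copy-id root@$h; done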

Step 5:
Re-run the sample ansible command:

# ansible -i hosts  -m command  -a "hostname -f " all

On success, we will see output like the following in the terminal:
testNode140.ps.lab
testNode141.ps.lab
testNode142.ps.lab
testNode181.ps.lab
testNode182.ps.lab
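Beyond the command module, a couple of other ad-hoc invocations are useful for quick checks (standard Ansible modules; the src/dest paths below are only examples):

# ansible -i hosts -m ping all
# ansible -i hosts -m copy -a "src=/etc/hosts dest=/tmp/hosts" all

The ping module returns "pong" for every reachable node, and the copy module pushes a file to all hosts in the inventory.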


Sunday, February 4, 2018

Collecting jstat data





The jstat utility uses the built-in instrumentation in the Java HotSpot VM to provide information about performance and resource consumption of running applications.

The tool can be used when diagnosing performance issues, and in particular issues related to heap sizing and garbage collection.

Syntax:
```
$ jstat -gc   <PID>
```

Example:
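To collect data over time, jstat also accepts a sampling interval (in milliseconds) and a sample count; the PID placeholder and output file name below are only illustrations:

```
# Sample GC statistics every 5 seconds, 12 samples
$ jstat -gc <PID> 5000 12

# Redirect the samples to a file for later analysis
$ jstat -gc <PID> 5000 12 > jstat_gc.out
```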


The column names are defined as follows.
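For the -gc option on a Java 8 VM, the standard columns (per the JDK jstat documentation) are:

S0C / S1C, S0U / S1U : capacity and utilization of survivor spaces 0 and 1 (KB)
EC / EU : Eden space capacity and utilization (KB)
OC / OU : old generation capacity and utilization (KB)
MC / MU : Metaspace capacity and utilization (KB)
CCSC / CCSU : compressed class space capacity and usage (KB)
YGC / YGCT : young generation GC count and total young GC time (seconds)
FGC / FGCT : full GC count and total full GC time (seconds)
GCT : total garbage collection time (seconds)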

Friday, June 9, 2017

Multi-mfs Enablement



For DB workloads on high-end servers, MapR has made several performance enhancements. For MapR-DB deployments on clusters with SSDs, two fileserver instances are configured on nodes with at least two SPs.
On servers with SSDs, this feature is automatically enabled with a fresh install or upgrade.

List of SPs:

[root@VM202 ~]# /opt/mapr/server/mrconfig  sp list
ListSPs resp: status 0:2
No. of SPs (2), totalsize 47539 MB, totalfree 46644 MB

SP 0: name SP1, Online, size 24179 MB, free 23686 MB, path /dev/sdb
SP 1: name SP2, Online, size 23360 MB, free 22958 MB, path /dev/sdd

Currently running MFS instances:


[root@VM202 ~]# /opt/mapr/server/mrconfig  info instances
1
5660

For now I have 2 SPs, and I am enabling one MFS instance per SP. So each SP gets its own dedicated MFS process: one on port 5660 and the other on port 5661.

Here is how to enable multi-MFS:
[root@VM202 ~]#   maprcli config save -values {multimfs.numinstances.pernode:2}
[root@VM202 ~]#   maprcli config save -values {multimfs.numsps.perinstance:1}
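Before restarting Warden, the saved values can be read back to confirm they took effect (a hedged check using maprcli config load):

[root@VM202 ~]# maprcli config load -keys multimfs.numinstances.pernode,multimfs.numsps.perinstance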

Restart Warden after changing these settings.
[root@VM202 ~]# service mapr-warden restart

After restarting Warden, check the MFS instances.
[root@VM202 ~]# /opt/mapr/server/mrconfig  info instances
2
5660 5661
Here we can see 2 MFS instances running on ports 5660 and 5661.

And in the logs directory, we can see the log files generated for the MFS processes.
[root@VM202 ~]# ll /opt/mapr/logs/
mfs.log-0
mfs.log.1-0

mfs.log-1
mfs.log.1-1

mfs.log.1-2
mfs.log-2

mfs.log-3
mfs.log.1-3

mfs.log-4
mfs.log.1-4

Thursday, June 8, 2017

MySQL Installation, Configuration, and Creating a Database and User


In this blog, I would like to cover the following MySQL steps:
  • Installation 
  • Configuration 
  • Creating Database 
  • Creating  User
  • Logging in to MySQL using the newly created user
Installation

1. Install the MySQL packages below on the node.

[root@VM200 ~]# rpm -qa| grep mysql
mysql-5.1.73-8.el6_8.x86_64
mysql-connector-java-5.1.17-6.el6.noarch
mysql-libs-5.1.73-8.el6_8.x86_64
mysql-devel-5.1.73-8.el6_8.x86_64
mysql-server-5.1.73-8.el6_8.x86_64

#yum install mysql mysql-connector-java mysql-devel mysql-server  -y

Configuration
2. Check whether the mysqld service is running using the command below.

[root@VM200 ~]# service mysqld status
mysqld is stopped

Start the mysql service.
[root@VM200 ~]# service mysqld start
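Optionally, to have MySQL start automatically at boot on this SysV-init system, it can also be enabled (a standard step, not shown in the original output):

[root@VM200 ~]# chkconfig mysqld on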

Alternatively, install MySQL from the MySQL community repository. All packages are available at the location below:
https://dev.mysql.com/downloads/mysql/5.6.html#downloads

[root@psnode142 mapr]# rpm -Uvh mysql-community-release-el7-5.noarch.rpm
[root@psnode142 mapr]#  yum install mysql-server

[root@psnode142 mapr]# service  mysqld status
Redirecting to /bin/systemctl status mysqld.service
● mysqld.service - MySQL Community Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-02-27 09:12:20 PST; 2s ago

[root@psnode142 mapr]# netstat -plant| grep 3306
tcp6       0      0 :::3306                 :::*                    LISTEN      19006/mysqld
[root@psnode142 mapr]#


3. Run the secure installation: provide the current root password when prompted and set a new password for MySQL logins.
[root@VM200 ~]# /usr/bin/mysql_secure_installation


Enter current password for root (enter for none):  (linux VM PWD: yy)

Change the root password? [Y/n] y
New password: (MySQL root PWD: x)
Re-enter new password: (x)

Remove anonymous users? [Y/n] y

Disallow root login remotely? [Y/n] n

Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y


Creating Database & User
4. Logging into MySQL


[root@tVM200 mapr]# mysql -u root -p
Enter password: <mapr>
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.6.39 MySQL Community Server (GPL)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

mysql> create database sdb;
Query OK, 1 row affected (0.00 sec)

mysql> grant all on sdb.* to 'suser'@'%' identified by 'mapr';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye
[root@VM200 mapr]# mysql -u suser -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 5.6.39 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| sdb                |
+--------------------+
2 rows in set (0.00 sec)

mysql> use sdb;
Database changed
mysql> show tables;
Empty set (0.00 sec)
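Because the grant was made to 'suser'@'%', the same user can also connect from another node by pointing the client at the MySQL host (a hedged example, assuming the server VM200 is reachable on the default port 3306):

# mysql -u suser -p -h VM200 sdb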

Wednesday, June 7, 2017

Install, Configure, and Use Clush



Clush is an open source tool that allows you to execute commands in parallel across the nodes in the cluster. This blog describes how to install clush, configure and use clush to run commands on multiple nodes in parallel.

The clush utility only needs to be installed on one node, usually the primary node (10.10.72.200 here), from which we run commands in parallel and gather stats.


Step 1: Install clustershell package
[root@VM200 ~]# yum --enablerepo=epel install clustershell

To list all cluster nodes and their services on the MapR cluster:

[root@VM200 ~]# maprcli node list -columns svc
service                                                                       hostname  ip
fileserver,historyserver,webserver,nodemanager,hoststats                      VM200     10.10.72.200
fileserver,hivemeta,webserver,nodemanager,hs2,hoststats                       VM201     10.10.72.201
nodemanager,spark-historyserver,cldb,fileserver,hoststats,hue                 VM202     10.10.72.202
tasktracker,nodemanager,cldb,fileserver,resourcemanager,hoststats,jobtracker  VM203     10.10.72.203


Step 2: Create a "groups" file under "/etc/clustershell/" and add all cluster nodes.
[root@VM200 ~]# vi /etc/clustershell/groups
all:10.10.72.200,10.10.72.201,10.10.72.202,10.10.72.203
[root@VM200 ~]#



Now try running a command on all the nodes:
[root@VM204 ~]# clush -a date
10.10.72.200: Host key verification failed.
clush: 10.10.72.200: exited with exit code 255
10.10.72.201: Host key verification failed.
clush: 10.10.72.201: exited with exit code 255
10.10.72.202: Host key verification failed.
clush: 10.10.72.202: exited with exit code 255
10.10.72.203: Host key verification failed.
clush: 10.10.72.203: exited with exit code 255

To avoid this "Host key verification failed." message , follow below host key verification steps.
########## Gnerate the key dont give any password on prompt

# ssh-keygen 


Example:

[root@VM204 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4a:a0:74:96:51:e2:fa:4f:9d:4d:80:20:1a:96:11:5a root@VM204.mapr.com
The key's randomart image is:
+--[ RSA 2048]----+
....
...


-----------------------------------------------------------------
#### Copy the key to all the nodes to enable password-less SSH

# ssh-copy-id 10.10.72.200
# ssh-copy-id 10.10.72.201
# ssh-copy-id 10.10.72.202
# ssh-copy-id 10.10.72.203
# ssh-copy-id 10.10.72.204

Example:

[root@VM200 ~]# ssh-copy-id 10.10.72.200
The authenticity of host '10.10.72.200 (10.10.72.200)' can't be established.
RSA key fingerprint is 20:d4:f6:6e:d5:4b:af:80:bc:21:2b:f6:21:51:19:65.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.72.200' (RSA) to the list of known hosts.
root@10.10.72.200's password:
Now try logging into the machine, with "ssh '10.10.72.200'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@VM200 ~]# ssh-copy-id 10.10.72.201
[root@VM200 ~]# ssh-copy-id 10.10.72.202
[root@VM200 ~]# ssh-copy-id 10.10.72.203



[root@VM200 ~]# clush -a date
10.10.72.200: Wed Jun  7 17:16:58 IST 2017
10.10.72.201: Wed Jun  7 17:16:43 IST 2017
10.10.72.203: Wed Jun  7 17:18:07 IST 2017
10.10.72.202: Wed Jun  7 17:16:59 IST 2017
[root@VM200 ~]#
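A few other clush invocations are handy day to day (flags are from the standard clustershell documentation; the file paths are only examples):

# Gather identical output from multiple nodes into one block
[root@VM200 ~]# clush -a -b date

# Run a command on a subset of nodes
[root@VM200 ~]# clush -w 10.10.72.201,10.10.72.202 uptime

# Copy a file to every node in the "all" group
[root@VM200 ~]# clush -a --copy /etc/hosts --dest /etc/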

Tuesday, February 7, 2017

Hue-MariaDB integration on CentOS 7.x



Issue:
If you face the following issue while launching the Hue web UI on CentOS 7/RHEL 7 on the MapR 5.2 platform, follow the steps below.
raise errorclass, errorvalue
ProgrammingError: (1146, "Table 'hue.auth_user' doesn't exist")

To overcome this issue, use the steps below:

Environment:
[root@VM206 mapr]# rpm -qa| grep mapr
mapr-core-internal-5.2
mapr-core-5.2
mapr-hue-base-3.9.0

Steps:
Step 1:
Run the following commands to install MariaDB and the Red Hat 6 compatibility library:
#yum install mariadb
#ver=$(rpm -qa mariadb|cut -d- -f2)
#rpm -ivh --nodeps http://yum.mariadb.org/$ver/rhel7-amd64/rpms/MariaDB-$ver-centos7-x86_64-compat.rpm

Logging in to MariaDB:
[root@VM206 mapr]# mysql -u root -p
Enter password:<pwd> (= <puli> )
MariaDB [(none)]> create database hue;
MariaDB [(none)]> grant all on hue.* to 'hue'@10.10.72.206 identified by 'puli';

MariaDB [(none)]> flush privileges;
MariaDB [(none)]> use hue;
MariaDB [hue]> show tables;
Empty set (0.00 sec)
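Hue reads its database settings from the [[database]] section of hue.ini; with the database created above, that section would look roughly like the following (a hedged sketch: the file path is the usual MapR Hue 3.9.0 location, and the password matches this example only):

# /opt/mapr/hue/hue-3.9.0/desktop/conf/hue.ini (excerpt)
[desktop]
  [[database]]
    engine=mysql
    host=10.10.72.206
    port=3306
    user=hue
    password=puli
    name=hue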

Step 2:
Run the following command to create a symlink for the Cyrus SASL library:

#ln -s /lib64/libsasl2.so.3.0.0 /lib64/libsasl2.so.2

Step 3:
Run the following command to reconfigure Hue:
#bash -c "source /opt/mapr/hue/hue-3.9.0/build/env/bin/activate;
      /opt/mapr/hue/hue-3.9.0/build/env/bin/hue syncdb --noinput;
      /opt/mapr/hue/hue-3.9.0/build/env/bin/hue migrate"
    
Step 4:
Run the following command to restart Hue:
      # maprcli node services -name hue -action restart -nodes `hostname`
      
Verification:

Log in to Hue host:
http://10.10.72.206:8888/

You should now be able to launch the Hue web UI without any issues.

Check in MariaDB:

[root@VM206 mapr]# mysql -u root -p
Enter password:<pwd>

MariaDB [(none)]> use hue;

MariaDB [hue]> show tables;
+--------------------------------+
| Tables_in_hue                  |
+--------------------------------+
| auth_group                     |
| auth_group_permissions         |
| auth_permission                |
| auth_user                      |
| auth_user_groups               |
...
(You will see many tables here, around 74 in total.)