Friday, June 9, 2017

Multi-mfs Enablement



For DB workloads on high-end servers, MapR has made several performance enhancements. For MapR-DB deployments on clusters with SSDs, two fileserver (mfs) instances are configured on nodes with at least two SPs (storage pools).
On servers with SSDs, this feature is automatically enabled with a fresh install or upgrade.
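To check whether multi-mfs has already kicked in on such a node, the relevant configuration keys can be read back with maprcli config load; a quick sketch (these are the same keys that are set explicitly later in this post):

[root@VM202 ~]# maprcli config load -keys multimfs.numinstances.pernode
[root@VM202 ~]# maprcli config load -keys multimfs.numsps.perinstance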

List of SPs

[root@VM202 ~]# /opt/mapr/server/mrconfig  sp list
ListSPs resp: status 0:2
No. of SPs (2), totalsize 47539 MB, totalfree 46644 MB

SP 0: name SP1, Online, size 24179 MB, free 23686 MB, path /dev/sdb
SP 1: name SP2, Online, size 23360 MB, free 22958 MB, path /dev/sdd

Currently Running mfs instances


[root@VM202 ~]# /opt/mapr/server/mrconfig  info instances
1
5660

For now, I have 2 SPs, and for each SP I am enabling one mfs instance. So one SP gets a dedicated mfs process (5660) and the other SP gets another dedicated mfs process (5661).

Here is how to enable multi-mfs:
[root@VM202 ~]#   maprcli config save -values {multimfs.numinstances.pernode:2}
[root@VM202 ~]#   maprcli config save -values {multimfs.numsps.perinstance:1}

Restart Warden after changing these settings.
[root@VM202 ~]# service mapr-warden restart

After restarting Warden, check the mfs instances.
[root@VM202 ~]# /opt/mapr/server/mrconfig  info instances
2
5660 5661
Here we can see two mfs processes running with IDs 5660 and 5661.

And in the logs directory, we can see a separate set of logs generated for each instance.
[root@VM202 ~]# ll /opt/mapr/logs/
mfs.log-0
mfs.log.1-0

mfs.log-1
mfs.log.1-1

mfs.log.1-2
mfs.log-2

mfs.log-3
mfs.log.1-3

mfs.log-4
mfs.log.1-4
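To confirm which SPs ended up on which instance, mrconfig can be pointed at a specific instance port with -p; a sketch, assuming the default instance ports 5660 and 5661 seen above:

[root@VM202 ~]# /opt/mapr/server/mrconfig -p 5660 sp list
[root@VM202 ~]# /opt/mapr/server/mrconfig -p 5661 sp list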

Thursday, June 8, 2017

MySQL Installation, Configuration, and Creating a Database and User


In this blog, I would like to cover the following MySQL steps:
  • Installation 
  • Configuration 
  • Creating a database 
  • Creating a user 
  • Logging in to MySQL using the newly created user 
Installation

1. Install the MySQL packages below on the node (rpm -qa shows them once installed):

[root@VM200 ~]# rpm -qa| grep mysql
mysql-5.1.73-8.el6_8.x86_64
mysql-connector-java-5.1.17-6.el6.noarch
mysql-libs-5.1.73-8.el6_8.x86_64
mysql-devel-5.1.73-8.el6_8.x86_64
mysql-server-5.1.73-8.el6_8.x86_64

#yum install mysql mysql-connector-java mysql-devel mysql-server  -y

Configuration
2. Check whether the mysqld service is running using the command below.

[root@VM200 ~]# service mysqld status
mysqld is stopped

Start the mysql service.
[root@VM200 ~]# service mysqld start
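Optionally, enable the service at boot as well; a small sketch, assuming SysV init on this EL6 node:

[root@VM200 ~]# chkconfig mysqld on
[root@VM200 ~]# chkconfig --list mysqld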

Alternatively, install MySQL from the MySQL community repository. All packages are available at the location below:
https://dev.mysql.com/downloads/mysql/5.6.html#downloads

[root@psnode142 mapr]# rpm -Uvh mysql-community-release-el7-5.noarch.rpm
[root@psnode142 mapr]#  yum install mysql-server

[root@psnode142 mapr]# service  mysqld status
Redirecting to /bin/systemctl status mysqld.service
● mysqld.service - MySQL Community Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-02-27 09:12:20 PST; 2s ago

[root@psnode142 mapr]# netstat -plant| grep 3306
tcp6       0      0 :::3306                 :::*                    LISTEN      19006/mysqld
[root@psnode142 mapr]#
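On this systemd-based node the unit already shows as enabled in the status output above; if it did not, the systemd equivalents of the chkconfig step would be, roughly:

[root@psnode142 mapr]# systemctl enable mysqld.service
[root@psnode142 mapr]# systemctl is-enabled mysqld.service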


3. Run the secure installation script. It prompts for the current MySQL root password (just press Enter if none is set yet) and lets you set a new root password for logging in to MySQL.
[root@VM200 ~]# /usr/bin/mysql_secure_installation


Enter current password for root (enter for none):  (linux VM PWD: yy)

Change the root password? [Y/n] y
New password: (MySQL root PWD: x)
Re-enter new password: (x)

Remove anonymous users? [Y/n] y

Disallow root login remotely? [Y/n] n

Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y


Creating Database & User
4. Logging into MySQL


[root@VM200 mapr]# mysql -u root -p
Enter password: <mapr>
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.6.39 MySQL Community Server (GPL)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

mysql> create database sdb;
Query OK, 1 row affected (0.00 sec)

mysql> grant all on sdb.* to 'suser'@'%' identified by 'mapr';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye
[root@VM200 mapr]# mysql -u suser -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 5.6.39 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| sdb                |
+--------------------+
2 rows in set (0.00 sec)

mysql> use sdb;
Database changed
mysql> show tables;
Empty set (0.00 sec)
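As a quick sanity check that suser can actually create objects in sdb, a throwaway table can be created and dropped; a minimal sketch (the table name t1 is arbitrary):

mysql> create table t1 (id int, name varchar(32));
mysql> insert into t1 values (1, 'test');
mysql> select * from t1;
mysql> drop table t1;

Note that 'suser'@'%' in the grant above allows connections from any host; granting to 'suser'@'<specific host>' instead would restrict where the user can log in from.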

Wednesday, June 7, 2017

Install , Configure and Use Clush



Clush, part of the open source ClusterShell project, lets you execute commands in parallel across the nodes of a cluster. This blog describes how to install, configure, and use clush to run commands on multiple nodes in parallel.

The clush utility needs to be installed on only one node, usually the primary node (10.10.72.200) of the cluster, from which we will run commands in parallel and gather stats.


Step 1: Install clustershell package
[root@VM200 ~]# yum --enablerepo=epel install clustershell
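A quick way to confirm the package went in (the exact version will vary):

[root@VM200 ~]# clush --version
[root@VM200 ~]# rpm -qa | grep clustershell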

To list all cluster nodes and their services on the MapR cluster:

[root@VM200 ~]# maprcli node list -columns svc
service                                                                       hostname  ip
fileserver,historyserver,webserver,nodemanager,hoststats                      VM200     10.10.72.200
fileserver,hivemeta,webserver,nodemanager,hs2,hoststats                       VM201     10.10.72.201
nodemanager,spark-historyserver,cldb,fileserver,hoststats,hue                 VM202     10.10.72.202
tasktracker,nodemanager,cldb,fileserver,resourcemanager,hoststats,jobtracker  VM203     10.10.72.203


Step 2: Create a "groups" file under "/etc/clustershell/" and add all cluster nodes.
[root@VM200 ~]# vi /etc/clustershell/groups
all:10.10.72.200,10.10.72.201,10.10.72.202,10.10.72.203
[root@VM200 ~]#
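Besides the all group, the same file can hold additional named groups, which can then be targeted with clush -g once passwordless SSH is in place (see below); a sketch, assuming a separate group for the two CLDB nodes listed earlier:

[root@VM200 ~]# cat /etc/clustershell/groups
all:10.10.72.200,10.10.72.201,10.10.72.202,10.10.72.203
cldb:10.10.72.202,10.10.72.203
[root@VM200 ~]# clush -g cldb date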



Now try running a command on all the nodes in parallel:
[root@VM204 ~]# clush -a date
10.10.72.200: Host key verification failed.
clush: 10.10.72.200: exited with exit code 255
10.10.72.201: Host key verification failed.
clush: 10.10.72.201: exited with exit code 255
10.10.72.202: Host key verification failed.
clush: 10.10.72.202: exited with exit code 255
10.10.72.203: Host key verification failed.
clush: 10.10.72.203: exited with exit code 255

To avoid this "Host key verification failed." error, set up passwordless SSH between the nodes as described below.
########## Generate the key; don't enter a passphrase at the prompt

# ssh-keygen 


Example:

[root@VM204 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4a:a0:74:96:51:e2:fa:4f:9d:4d:80:20:1a:96:11:5a root@VM204.mapr.com
The key's randomart image is:
+--[ RSA 2048]----+
....
...


-----------------------------------------------------------------
#### Copy the key to all the nodes to enable passwordless SSH

# ssh-copy-id 10.10.72.200
# ssh-copy-id 10.10.72.201
# ssh-copy-id 10.10.72.202
# ssh-copy-id 10.10.72.203
# ssh-copy-id 10.10.72.204

Example:

[root@VM200 ~]# ssh-copy-id 10.10.72.200
The authenticity of host '10.10.72.200 (10.10.72.200)' can't be established.
RSA key fingerprint is 20:d4:f6:6e:d5:4b:af:80:bc:21:2b:f6:21:51:19:65.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.72.200' (RSA) to the list of known hosts.
root@10.10.72.200's password:
Now try logging into the machine, with "ssh '10.10.72.200'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@VM200 ~]# ssh-copy-id 10.10.72.201
[root@VM200 ~]# ssh-copy-id 10.10.72.202
[root@VM200 ~]# ssh-copy-id 10.10.72.203



[root@VM200 ~]# clush -a date
10.10.72.200: Wed Jun  7 17:16:58 IST 2017
10.10.72.201: Wed Jun  7 17:16:43 IST 2017
10.10.72.203: Wed Jun  7 17:18:07 IST 2017
10.10.72.202: Wed Jun  7 17:16:59 IST 2017
[root@VM200 ~]#
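For reference, a few everyday clush invocations that build on this setup; a sketch, where -b folds identical output from different nodes together, -w targets an explicit node list, and --copy pushes a local file out to all nodes:

[root@VM200 ~]# clush -a -b cat /etc/redhat-release
[root@VM200 ~]# clush -w 10.10.72.201,10.10.72.202 uptime
[root@VM200 ~]# clush -a --copy /etc/hosts --dest /etc/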