Wednesday, 27 February 2013

Basics of MySQL
8. Create Table
mysql> CREATE TABLE authors (id INT, name VARCHAR(20), email VARCHAR(20));

9. Insert to Table
mysql> INSERT INTO authors (id,name,email) VALUES(1,"Vivek","xuz@abc.com");

10. Update Table.
mysql> update authors set name="test" where id=1;

OpenVZ Virtualization

OpenVZ is an open-source operating-system-level virtualization technology based on the Linux kernel. OpenVZ allows a physical server to run multiple isolated operating system instances, known as Virtual Private Servers (VPSs). The main difference between OpenVZ and other virtualization software such as VMware and KVM is that under OpenVZ all virtualized operating systems share the kernel of the host operating system, whereas with VMware and KVM each VPS uses its own separate kernel, which adds extra overhead. OpenVZ is the open-source branch of Virtuozzo, a commercial virtualization solution used by many providers that offer VPS hosting.

Virtualization and Isolation Levels Provided by OpenVZ

Users and groups → Each VPS has its own root user, as well as other users and groups.
Process tree → A VPS sees only its own processes (starting from init). PIDs are virtualized, so the init PID is 1.
Network → A virtual network device allows a VPS to have its own IP addresses, as well as its own set of netfilter (iptables) and routing rules.
Storage → Each VPS has its own hard disk, which is dynamically resizable.
Files → Each VPS has its own system libraries and a virtualized /proc.
Devices → Any VPS can be granted access to real hardware on the host operating system, such as network interfaces and storage devices.


HN (Host Node) → The computer on which you run all the VPSs. The Host Node has access to all the available hardware resources and can control both the processes running on itself and those running inside any VPS. Other names for the Host Node are CT0 and VE0.
CT/VE/VPS (ConTainer / Virtual Environment / Virtual Private Server) → Other names for the VPS, i.e. the OS instance installed inside the Host Node. A VPS is nothing but an isolated program execution environment that looks and feels like a separate physical server. Each VPS has its own file system, root user, normal users, firewall, system quotas and much more. You can set up multiple VPSs within a single Host Node; different VPSs can run different Linux distributions, but all VPSs operate under the same kernel.
CTID (ConTainer IDentifier) → Each VPS has a unique number called the CTID. A CTID is just a number, such as 101 or 102, that an administrator can use to identify the VPS. We assign this number when we create the VPS; after that, the admin uses it to start, stop, restart or delete the VPS and for other administrative jobs related to it.
VPS Templates → Templates are images of Linux distributions used to create a VPS. A template contains only a minimal set of packages and comes in a compressed format (gzip, bzip2). When we create a VPS using a template, it has to be extracted into a folder.


VPS commands

"vzctl is the management tool provided by OpenVZ to create, start and stop a VPS, and to set various resource parameters for it, such as memory, disk quota, swap space and so on."

1. Creating a VPS -
 
syntax :: vzctl   create  CTID    --ostemplate    TEMPLATE_NAME
   
[root@openvz] # vzctl   create  101   --ostemplate   centos-6-x86_64 

2. Setting IP Address - 

syntax :: vzctl   set   CTID    --ipadd    IP_ADDRESS    --save
   
[root@openvz] # vzctl   set   101  --ipadd    192.168.0.101      --save 

"The --save switch saves the configuration permanently."

3. Setting the Host Name - 

syntax ::  vzctl  set  CTID   --hostname    HOST_NAME   --save
[root@openvz] # vzctl   set  101   --hostname    virtual101.virtual.com    --save 

4. Setting the Name server - 

syntax :: vzctl   set   CTID  --nameserver    NAME_SERVER   --save
[root@openvz] # vzctl   set  101   --nameserver   192.168.0.254    --save 

5. Setting the Hard Disk Size - 

syntax :: vzctl  set  CTID  --diskspace  5G:5G  --save
[root@openvz] # vzctl   set   101   --diskspace   5G:5G  --save 

6. Setting the root Password -

Syntax :: vzctl  set  CTID  --userpasswd   root:PASSWORD
[root@openvz] # vzctl  set  101   --userpasswd    root:linuxway

"The vzctl command is used to create and manage a VPS: the create option creates a VPS, and the set option sets various parameters of the VPS, such as hard disk space, name server IP, root password and IP address."
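The create/set steps above can be combined into one small provisioning script. This is a sketch, not part of the original post: the CTID, template, addresses and hostname are the example values used in the steps, and the run helper prints each vzctl command instead of executing it when vzctl is not installed (or when DRY_RUN=1), so the sequence can be reviewed first.

```shell
#!/bin/sh
# Sketch: provision an OpenVZ container in one pass, combining the
# create/set steps above. CTID, template, addresses and hostname are
# the example values from this post.
CTID=101
TEMPLATE=centos-6-x86_64
IP=192.168.0.101
NS=192.168.0.254
HOST=virtual101.virtual.com
DISK=5G:5G

# Execute vzctl when it is installed; otherwise (or with DRY_RUN=1)
# just print each command so the sequence can be reviewed first.
run() {
    if [ "${DRY_RUN:-0}" = "1" ] || ! command -v vzctl >/dev/null 2>&1; then
        echo "$@"
    else
        "$@"
    fi
}

run vzctl create "$CTID" --ostemplate "$TEMPLATE"
run vzctl set "$CTID" --ipadd "$IP" --save
run vzctl set "$CTID" --nameserver "$NS" --save
run vzctl set "$CTID" --hostname "$HOST" --save
run vzctl set "$CTID" --diskspace "$DISK" --save
```

Running it with DRY_RUN=1 prints the five vzctl commands exactly as they appear in steps 1-5 above.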

7. Listing all the Containers/VPSs using vzlist -

[root@openvz ~]# vzlist -a
CTID NPROC STATUS IP_ADDR HOSTNAME
2034 31 running 172.16.9.208 ranjith.net
2035 - stopped 192.168.2.102 ranjith-test1.net
2037 29 running 172.16.9.209 ranjith-test2.net
2038 16 running 192.168.2.101 ranjith-test4.net
2039 16 running 192.168.2.100 ranjith-test3.net
[root@openvz ~]#
"vzlist -a shows that container 2035 is stopped, which is why its number of processes (NPROC) is empty."
The vzlist command without any argument shows only the running VPSs, whereas vzlist -a shows all VPSs (both running and stopped).
[root@openvz ~]# vzlist
CTID NPROC STATUS IP_ADDR HOSTNAME
2034 31 running 172.16.9.208 ranjith.net
2037 29 running 172.16.9.209 ranjith-test2.net
2038 16 running 192.168.2.101 ranjith-test4.net
2039 16 running 192.168.2.100 ranjith-test3.net
[root@openvz ~]# 
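The vzlist output is easy to post-process with standard tools. A small sketch, using awk on the listing shown above (the sample is embedded here so the snippet is self-contained; in practice you would pipe `vzlist -a -H`, which suppresses the header, straight into awk):

```shell
#!/bin/sh
# Sketch: filter vzlist-style output for stopped containers.
# The sample below is the `vzlist -a` output shown above; in practice
# pipe `vzlist -a -H` (no header row) directly into awk.
vzlist_output='2034 31 running 172.16.9.208 ranjith.net
2035 - stopped 192.168.2.102 ranjith-test1.net
2037 29 running 172.16.9.209 ranjith-test2.net
2038 16 running 192.168.2.101 ranjith-test4.net
2039 16 running 192.168.2.100 ranjith-test3.net'

# Print the CTID and hostname of every container whose STATUS is "stopped".
stopped=$(printf '%s\n' "$vzlist_output" | awk '$3 == "stopped" { print $1, $5 }')
echo "$stopped"
# prints: 2035 ranjith-test1.net
```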

8. Starting VPS - 

Syntax :: vzctl  start  CTID
[root@openvz ~]# vzctl start 2035
Starting container ...
Running: /usr/sbin/vzquota show 2035
Running: /usr/sbin/vzquota on 2035 -r 0 -b 5242980 -B 6291556 -i 200100 -I 220100 -e 0 -n 0 -s 0
Mounting root: /vz/root/2035 /vz/private/2035
Container is mounted
Set iptables mask 0x000017bf
Set features mask 0000000000000003/0000000000000003
Adding IP address(es): 192.168.2.102
Running: /usr/lib64/vzctl/scripts/vps-net_add
Running container script: /etc/vz/dists/scripts/redhat-add_ip.sh
Setting CPU units: 1000
Configuring meminfo: 262144
Set hostname: ranjith-test1.admod.net
Running container script: /etc/vz/dists/scripts/redhat-set_hostname.sh
Running: /usr/sbin/vzquota stat 2035 -f
Running: vzquota setlimit 2035 -b 5242880 -B 6291456 -i 200000 -I 220000 -e 0 -n 0
Container start in progress...
[root@openvz ~]# vzlist -a
CTID NPROC STATUS IP_ADDR HOSTNAME
2034 31 running 172.16.9.208 ezranjith.net
2035 29 running 192.168.2.102 ranjith-test1.net
2037 29 running 172.16.9.209 ranjith-test2.net
2038 16 running 192.168.2.101 ranjith-test4.net
2039 16 running 192.168.2.100 ranjith-test3.net
[root@openvz ~]#
"Now the VPS is up and running, and we can connect to it from the Host Node using SSH."

9. Connecting to the Container from outside-

~$ ssh root@172.16.9.209
The authenticity of host '172.16.9.209 (172.16.9.209)' can't be established.
RSA key fingerprint is b9:96:17:7e:63:5d:16:f3:64:e8:6b:a9:79:d3:54:dd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.9.209' (RSA) to the list of known hosts.
root@172.16.9.209's password:
Last login: Wed Feb 27 17:09:14 2013 from 172.16.9.208
[root@ranjith-test2 ~]# 

10. Checking the Disk Usage inside the VPS -

[root@openvz ~]# vzctl enter 2037
entered into CT 2037
Open /dev/pts/0
[root@ranjith-test2 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/simfs 4.7G 1.4G 2.4G 37% /
[root@ranjith-test2 /]#

11. Checking the Memory Usage inside the VPS -

[root@ranjith-test2 /]# free -m
total used free shared buffers cached
Mem: 1024 165 858 0 0 0
-/+ buffers/cache: 165 858
Swap: 0 0 0
[root@ranjith-test2 /]#

12. Checking the memory usage of the Host Node while the VPS is running -

[root@openvz ~]# free -m
total used free shared buffers cached
Mem: 2006 620 1386 0 22 314
-/+ buffers/cache: 283 1723
Swap: 1027 0 1027
[root@openvz ~]# 

13. Stopping the VPS - 

Syntax :: vzctl   stop   CTID
[root@openvz ~]# vzctl stop 2035
Stopping container ...
Container was stopped
Running: /usr/lib64/vzctl/scripts/vps-net_del
Running: /usr/sbin/vzquota stat 2035 -f
Running: /usr/sbin/vzquota off 2035
Container is unmounted
[root@openvz ~]# vzlist -a
CTID NPROC STATUS IP_ADDR HOSTNAME
2034 34 running 172.16.9.208 ranjith.net
2035 - stopped 192.168.2.102 ranjith-test1.net
2037 29 running 172.16.9.209 ranjith-test2.net
2038 16 running 192.168.2.101 ranjith-test4.net
2039 16 running 192.168.2.100 ranjith-test3.net
[root@openvz ~]#
 
14. Restarting the VPS - 

Syntax :: vzctl   restart   CTID
[root@openvz ~]# vzctl restart 2035
Restarting container
Starting container ...
Running: /usr/sbin/vzquota show 2035
Running: /usr/sbin/vzquota on 2035 -r 0 -b 5242980 -B 6291556 -i 200100 -I 220100 -e 0 -n 0 -s 0
Mounting root: /vz/root/2035 /vz/private/2035
Container is mounted
Set iptables mask 0x000017bf
Set features mask 0000000000000003/0000000000000003
Adding IP address(es): 192.168.2.102
Running: /usr/lib64/vzctl/scripts/vps-net_add
Running container script: /etc/vz/dists/scripts/redhat-add_ip.sh
Setting CPU units: 1000
Configuring meminfo: 262144
Set hostname: ranjith-test1.admod.net
Running container script: /etc/vz/dists/scripts/redhat-set_hostname.sh
Running: /usr/sbin/vzquota stat 2035 -f
Running: vzquota setlimit 2035 -b 5242880 -B 6291456 -i 200000 -I 220000 -e 0 -n 0
Container start in progress...
[root@openvz ~]# vzlist -a
CTID NPROC STATUS IP_ADDR HOSTNAME
2034 34 running 172.16.9.208 ezranjith.net
2035 29 running 192.168.2.102 ranjith-test1.net
2037 29 running 172.16.9.209 ranjith-test2.net
2038 16 running 192.168.2.101 ranjith-test4.net
2039 16 running 192.168.2.100 ranjith-test3.net
[root@openvz ~]#

Each VPS has its own configuration file where all of its information, such as IP address, disk space and domain name, is saved. vzctl uses the set option to set these values and saves them permanently when the --save option is used. At VPS creation time, vzctl creates a folder under /vz/private/; this is where OpenVZ permanently saves the container's file system and data. When we start the VPS, OpenVZ uses the directory /vz/root/ as a mount point for the file system extracted into /vz/private/. When we stop the VPS, OpenVZ unmounts the file system from /vz/root/ and saves all changes permanently to /vz/private/.

Private directory of VPS 2034: /vz/private/2034
Root directory of VPS 2034: /vz/root/2034
Configuration file of VPS 2034: /etc/vz/conf/2034.conf
OpenVZ configuration file: /etc/vz/vz.conf
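Since a container's settings live in a plain KEY="value" file, they can be read back with standard shell tools. A sketch, assuming the usual /etc/vz/conf/CTID.conf format; the sample fragment and its values are illustrative, embedded here so the snippet is self-contained:

```shell
#!/bin/sh
# Sketch: read values out of an OpenVZ container config file.
# Sample fragment in the KEY="value" format of /etc/vz/conf/CTID.conf;
# the values below are hypothetical examples.
conf='VE_ROOT="/vz/root/2034"
VE_PRIVATE="/vz/private/2034"
OSTEMPLATE="centos-6-x86_64"
IP_ADDRESS="172.16.9.208"
HOSTNAME="ranjith.net"'

# get KEY -> prints the unquoted value of KEY
get() {
    printf '%s\n' "$conf" | sed -n "s/^$1=\"\(.*\)\"$/\1/p"
}

echo "IP:       $(get IP_ADDRESS)"
echo "Hostname: $(get HOSTNAME)"
```

In practice you would read the real file, e.g. `conf=$(cat /etc/vz/conf/2034.conf)`.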

15. Executing commands inside a VPS from the Host Node -
vzctl provides a method for executing a command inside the VPS without entering it. This is very useful when we want to run only a single command.

     Eg. run  df -h  inside VPS 2034
[root@openvz ~]# vzctl exec 2034 df -h
Filesystem Size Used Avail Use% Mounted on
/dev/simfs 4.7G 1.3G 2.4G 36% /
[root@openvz ~]#
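vzctl exec combines nicely with vzlist when the same command has to run in every container. A sketch: in practice the CTID list would come from `vzlist -H -o ctid`, but here a literal list (the running CTIDs from the listing above) keeps the loop self-contained, and vzctl is only invoked when it is actually installed.

```shell
#!/bin/sh
# Sketch: run one command inside every running container.
# In practice obtain the list with:  ctids=$(vzlist -H -o ctid)
# Here a literal list (the running CTIDs shown above) is used so the
# loop is self-contained; vzctl is only invoked if installed.
ctids="2034 2037 2038 2039"

for ctid in $ctids; do
    if command -v vzctl >/dev/null 2>&1; then
        vzctl exec "$ctid" df -h
    else
        echo "would run: vzctl exec $ctid df -h"
    fi
done
```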




Tuesday, 26 February 2013

MySQL Replication Error with Relay Log Read Failure (Last_Errno: 1594)

Last_Errno: 1594
Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

Recently I found this error with MySQL replication on the slave server, and replication was broken. To view the slave status, run SHOW SLAVE STATUS.

mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 10.37.6.138
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000428
Read_Master_Log_Pos: 321127513
Relay_Log_File: relay-bin.001203
Relay_Log_Pos: 900979739
Relay_Master_Log_File: mysql-bin.000426
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1594
Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
Skip_Counter: 0
Exec_Master_Log_Pos: 900979593
Relay_Log_Space: 2468708504
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1594
Last_SQL_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
1 row in set (0.00 sec)

To fix this error, the slave's current relay logs must be discarded and the replication position reset. After stopping the slave, note down the Relay_Master_Log_File and Exec_Master_Log_Pos values before resetting anything.

Steps: 
mysql> stop slave;
mysql> show slave status \G
Note down Relay_Master_Log_File: mysql-bin.000426 and Exec_Master_Log_Pos: 900979593.
Reset the slave so that it forgets its replication position in the master's binary log:
mysql> reset slave;
Point the slave at the position where it stopped:
mysql> change master to master_log_file='mysql-bin.000426', master_log_pos=900979593;
Then start the slave:
mysql> start slave;
Check the slave status:
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.37.6.138
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000428
Read_Master_Log_Pos: 401489223
Relay_Log_File: relay-bin.000004
Relay_Log_Pos: 556888345
Relay_Master_Log_File: mysql-bin.000427
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 556888199
Relay_Log_Space: 1475232050
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 7231
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
1 row in set (0.00 sec)
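The recovery steps above can also be scripted. This sketch parses SHOW SLAVE STATUS \G output for the two values that matter and builds the recovery SQL; the status text is embedded here (taken from the broken-slave output above) so the parsing is self-contained, and the final mysql invocation is only indicated in a comment rather than executed.

```shell
#!/bin/sh
# Sketch: rebuild the recovery statements for a relay-log-read-failure
# (Last_Errno: 1594) from `SHOW SLAVE STATUS \G` output. The two lines
# below are from the broken-slave status shown above; in practice you
# would capture them with:  mysql -e 'show slave status \G'
status='Relay_Master_Log_File: mysql-bin.000426
Exec_Master_Log_Pos: 900979593'

# Extract the binlog file the SQL thread was executing, and its position.
log_file=$(printf '%s\n' "$status" | awk -F': ' '/Relay_Master_Log_File:/ { print $2 }')
log_pos=$(printf '%s\n' "$status" | awk -F': ' '/Exec_Master_Log_Pos:/ { print $2 }')

# Same sequence as the manual steps: stop, reset, repoint, start.
sql="STOP SLAVE; RESET SLAVE; CHANGE MASTER TO master_log_file='$log_file', master_log_pos=$log_pos; START SLAVE;"
echo "$sql"
# To apply for real:  mysql -u root -p -e "$sql"
```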