Friday 25 November 2016

Red Hat Enterprise Linux Release Dates



The tables below list the major and minor Red Hat Enterprise Linux updates, their release dates, and the kernel versions that shipped with them.
Red Hat does not generally disclose future release schedules.

Refer to the Red Hat Enterprise Linux Life Cycle Policy for details on the life cycle of Red Hat Enterprise Linux releases.
To find your Red Hat Enterprise Linux release, run:
cat /etc/redhat-release
To find your kernel version, run:
uname -a
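For example, on a RHEL 7.3 system the output would look roughly like the following (illustrative only; the exact strings depend on your release, variant, and architecture, and uname -r prints just the kernel release):
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
# uname -r
3.10.0-514.el7.x86_64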

Red Hat Enterprise Linux 7

Release          General Availability Date   redhat-release Errata Date*    Kernel Version
RHEL 7.3         2016-11-03                  2016-11-03 RHSA-2016:2574-1    3.10.0-514
RHEL 7.2         2015-11-19                  2015-11-19 RHEA-2015:2461      3.10.0-327
RHEL 7.1         2015-03-05                  2015-03-05 RHEA-2015:0524      3.10.0-229
RHEL 7.0 GA      2014-06-09                  -                              3.10.0-123
RHEL 7.0 Beta    2013-12-11                  -                              3.10.0-54.0.1
Codename: Maipo (based on a mix of Fedora 19, Fedora 20, and several modifications)

Red Hat Enterprise Linux 6

Release     General Availability Date   redhat-release Errata Date*    Kernel Version
RHEL 6.8    2016-05-10                  2016-05-10 RHSA-2016:0855-1    2.6.32-642
RHEL 6.7    2015-07-22                  2015-07-22 RHEA-2015:1423      2.6.32-573
RHEL 6.6    2014-10-14                  2014-10-13 RHEA-2014:1608      2.6.32-504
RHEL 6.5    2013-11-21                  2013-11-20 RHSA-2013:1645-2    2.6.32-431
RHEL 6.4    2013-02-21                  2013-02-21 RHSA-2013-0496      2.6.32-358
RHEL 6.3    2012-06-20                  2012-06-19 RHSA-2012-0862      2.6.32-279
RHEL 6.2    2011-12-06                  2011-12-06 RHEA-2011:1743      2.6.32-220
RHEL 6.1    2011-05-19                  2011-05-19 RHEA-2011:0540      2.6.32-131.0.15
RHEL 6.0    2010-11-09                  -                              2.6.32-71
Codename: Santiago (based on a mix of Fedora 12, Fedora 13, and several modifications)

Red Hat Enterprise Linux 5

Release      General Availability Date   redhat-release Errata Date*    Kernel Version
RHEL 5.11    2014-09-16                  2014-09-16 RHEA-2014-1238      2.6.18-398
RHEL 5.10    2013-10-01                  2013-09-30 RHEA-2013-1311      2.6.18-371
RHEL 5.9     2013-01-07                  2013-01-07 RHEA-2013-0021      2.6.18-348
RHEL 5.8     2012-02-20                  2012-02-20 RHEA-2012:0315      2.6.18-308
RHEL 5.7     2011-07-21                  2011-07-20 RHEA-2011:0977      2.6.18-274
RHEL 5.6     2011-01-13                  2011-01-12 RHEA-2011:0020      2.6.18-238
RHEL 5.5     2010-03-30                  2010-03-30 RHEA-2010:0207      2.6.18-194
RHEL 5.4     2009-09-02                  2009-09-02 RHEA-2009:1400      2.6.18-164
RHEL 5.3     2009-01-20                  2009-01-20 RHEA-2009:0133      2.6.18-128
RHEL 5.2     2008-05-21                  2008-05-20 RHEA-2008:0436      2.6.18-92
RHEL 5.1     2007-11-07                  2007-11-07 RHEA-2007:0854      2.6.18-53
RHEL 5.0     2007-03-15                  -                              2.6.18-8
Codename: Tikanga (based on Fedora Core 6)

Red Hat Enterprise Linux 4

Release/Update     General Availability Date   redhat-release Errata Date*    Kernel Version
RHEL 4 Update 9    2011-02-16                  2011-02-16 RHEA-2011:0251      2.6.9-100
RHEL 4 Update 8    2009-05-19                  2009-05-18 RHEA-2009:1002      2.6.9-89
RHEL 4 Update 7    2008-07-29                  2008-07-24 RHEA-2008:0769      2.6.9-78
RHEL 4 Update 6    2007-11-15                  2007-11-15 RHBA-2007:0897      2.6.9-67
RHEL 4 Update 5    2007-05-01                  2007-04-27 RHBA-2007:0196      2.6.9-55
RHEL 4 Update 4    2006-08-10                  2006-08-10 RHBA-2006:0601      2.6.9-42
RHEL 4 Update 3    2006-03-12                  2006-03-07 RHBA-2006:0149      2.6.9-34
RHEL 4 Update 2    2005-10-05                  2005-10-05 RHEA-2005:786       2.6.9-22
RHEL 4 Update 1    2005-06-08                  2005-06-08 RHEA-2005:318       2.6.9-11
RHEL 4 GA          2005-02-15                  -                              2.6.9-5
Codename: Nahant (based on Fedora Core 3)

Red Hat Enterprise Linux 3

Release/Update     General Availability Date   Kernel Version
RHEL 3 Update 9    2007-06-20                  2.4.21-50
RHEL 3 Update 8    2006-07-20                  2.4.21-47
RHEL 3 Update 7    2006-03-17                  2.4.21-40
RHEL 3 Update 6    2005-09-28                  2.4.21-37
RHEL 3 Update 5    2005-05-18                  2.4.21-32
RHEL 3 Update 4    2004-12-12                  2.4.21-27
RHEL 3 Update 3    2004-09-03                  2.4.21-20
RHEL 3 Update 2    2004-05-12                  2.4.21-15
RHEL 3 Update 1    2004-01-16                  2.4.21-9
RHEL 3 GA          2003-10-22                  2.4.21-4
Codename: Taroon (based on Red Hat Linux 9)

Red Hat Enterprise Linux 2.1

Release/Update       General Availability Date   Kernel Version
RHEL 2.1 Update 7    2005-04-28                  -
RHEL 2.1 Update 6    2004-12-13                  2.4.9-e.57
RHEL 2.1 Update 5    2004-08-18                  2.4.9-e.49
RHEL 2.1 Update 4    2004-04-21                  2.4.9-e.40
RHEL 2.1 Update 3    2003-12-19                  2.4.9-e.34
RHEL 2.1 Update 2    2003-03-29                  2.4.9-e.24
RHEL 2.1 Update 1    2003-02-14                  2.4.9-e.12
RHEL 2.1 GA          2002-03-23                  2.4.9-e.3
Codename: Pensacola (AS) / Panama (ES) (based on Red Hat Linux 7.2)
* The errata date is helpful when cloning channels in Satellite for a minor version plus all errata released prior to the next minor release, using spacewalk-clone-by-date or the web UI.
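As a rough sketch only, cloning a RHEL 6 base channel up to (but not including) the RHEL 6.7 errata date might look like the following. The channel labels, credentials, and server name are placeholders, and option spellings can differ between Satellite/Spacewalk versions, so check spacewalk-clone-by-date --help on your system first:
# spacewalk-clone-by-date --username=admin --password=secret \
    --server=satellite.example.com \
    --channels=rhel-x86_64-server-6 clone-rhel-6-u6 \
    --to_date=2015-07-21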

What’s New in Ubuntu 17.04 (Zesty Zapus) – Overview



Ubuntu 17.04, codenamed Zesty Zapus, is the upcoming release that will succeed Ubuntu 16.10. Even though its end-of-life date has already been scheduled for January 2018, the development team aims to bring a lot of upgrades, fixes, and additions in this release.
Its official release has been scheduled for April 2017.
Its codename combines Zesty, an adjective meaning 'great enthusiasm and energy', with Zapus, the genus name of a North American mouse said to be the only mammal on Earth with up to 18 teeth in total.

Regarding the codename, Mark Shuttleworth wrote on his blog:
Ubuntu is moving even faster to the center of the cloud and edge operations. From AWS to the zaniest new devices, Ubuntu helps people get things done faster, cleaner, and more efficiently, thanks to you. We love the pace of change and we change the face of love

What’s New in Ubuntu 17.04 (Zesty Zapus)

Zesty Zapus currently ships with an updated Linux kernel, version 4.8, the same kernel series that Ubuntu 16.10 is based on.

Future Changes

Canonical has yet to release details about the changes planned for this release, but more information should be available as the release date approaches.

Upgrade to Ubuntu 17.04 (Zesty Zapus)

With Ubuntu, you can trust that upgrading to later distro releases is easy.
To upgrade to the latest stable release, run:
$ sudo do-release-upgrade

To upgrade to a development release such as 17.04, add the -d flag:
$ sudo do-release-upgrade -d
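Note that do-release-upgrade honours the Prompt setting in /etc/update-manager/release-upgrades; a quick sketch for checking it and making sure non-LTS releases are offered (the value already present on your system may differ):
$ grep Prompt /etc/update-manager/release-upgrades
Prompt=normal
$ sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades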
You can also check the article on upgrading to 17.04 from 16.10.

Download and Install Ubuntu 17.04 (Zesty Zapus)

To avoid potential bugs and data loss, I advise performing a clean installation on a virtual machine, since this release is not official yet.
You can download the disk image for either 32-bit or 64-bit architecture, including the images for the official Ubuntu 17.04 flavors, and then install it like you would any other Ubuntu image.
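As an illustrative sketch only (daily-build URLs and file names are assumptions and change as new images are spun), downloading the 64-bit daily image and verifying its checksum could look like this:
$ wget http://cdimage.ubuntu.com/daily-live/current/zesty-desktop-amd64.iso
$ wget http://cdimage.ubuntu.com/daily-live/current/SHA256SUMS
$ sha256sum -c SHA256SUMS 2>/dev/null | grep zesty-desktop-amd64.iso
zesty-desktop-amd64.iso: OK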
Do you plan on trying out Ubuntu 17.04? Or are you already a steady user? Share your thoughts in the comments section.

Tuesday 22 November 2016

Introduction to Gluster File System: Install and Configure Gluster HA




GlusterFS is a scalable network file system. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software. In this article we will introduce the Gluster file system and then install and configure GlusterFS for high availability (HA).

Why GlusterFS?

  • GlusterFS is a distributed file system that runs in user space and can scale out in building-block fashion to store multiple petabytes of data under a single mount point.
  • It is a software-based file system, which accounts for much of its flexibility.
  • It uses existing disk file systems such as ext3, ext4, and xfs to store data, and clients can access the storage as a local file system.
  • A GlusterFS cluster aggregates storage bricks over InfiniBand RDMA (Remote Direct Memory Access) and/or TCP/IP interconnects into a single global namespace.

Advantages of GlusterFS

  • The Gluster file system uses replication to survive hardware failures and automatically performs self-healing to restore performance. It aggregates on top of existing file systems.
  • GlusterFS has no single point of failure: it is completely distributed, with no centralised metadata server (unlike Lustre).
  • It offers an extensible scheduling interface, with modules loaded based on the user's storage I/O access pattern.
  • It supports InfiniBand RDMA and TCP/IP.
  • It is implemented entirely in user space, which makes it easy to port, debug, and maintain; it scales on demand and is easy to deploy.

Targeted Audience

Anyone with basic Linux/Unix knowledge and an understanding of file system concepts.

How to Install and Configure Gluster for HA

Before implementing the Gluster setup, let's go through the storage concepts used in Gluster.
  • Brick – A directory that can be shared within the trusted storage pool.
  • Trusted Storage Pool – A collection of shared files or directories.
  • Block Storage – Devices through which data is moved across systems in the form of blocks.
  • Cluster – A collaboration of storage servers based on a defined protocol.
  • Distributed File System – A file system in which data is spread across multiple nodes, and users can access the files without knowing the actual location of the server.
  • FUSE – A loadable kernel module that allows users to create a file system without writing any kernel code.
  • Glusterd – The GlusterFS management daemon, the backbone of the file system, which must be running at all times.
  • Volume – A logical collection of bricks.

Ports required

  • 24007/TCP for the Gluster daemon
  • 24008/TCP for InfiniBand management (optional, only needed if you use InfiniBand)
  • One TCP port for each brick in a volume. For example, with 4 bricks in a volume, ports 24009-24012 are used on GlusterFS 3.3 and below, and 49152-49155 on GlusterFS 3.4 and later.
Note: by default Gluster/NFS provides services over TCP only. You need to enable the nfs.mount-udp option if you want to add UDP support for the MOUNT protocol; that is entirely optional and depends on your requirements.
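To open these ports with firewalld on CentOS 7, a minimal sketch would be the following (adjust the brick port range to match your Gluster version and number of bricks):
# firewall-cmd --permanent --add-port=24007-24008/tcp
# firewall-cmd --permanent --add-port=49152-49155/tcp
# firewall-cmd --reload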

Installation and Configuration Gluster HA

Now let us start installing and configuring GlusterFS. Take two CentOS 7 servers and name them as you like, say glusterfs1 and glusterfs2.
[root@ip-172-31-31-246 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

[root@ip-172-31-31-247 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
Note: make sure that SELinux and the firewall are not blocking the above ports between the two servers.
Create entries like the ones below in /etc/hosts on both servers, and make sure each server can reach the other by name.

Glusterfs1

[root@ip-172-31-31-246 ~]# cat /etc/hosts
 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
 172.31.31.246 glusterfs1
 172.31.31.247 glusterfs2

[root@ip-172-31-31-246 ~]# ping -c 2 glusterfs1
 PING glusterfs1 (172.31.31.246) 56(84) bytes of data.
 64 bytes from glusterfs1 (172.31.31.246): icmp_seq=1 ttl=64 time=0.016 ms

[root@ip-172-31-31-246 ~]# ping -c 2 glusterfs2
 PING glusterfs2 (172.31.31.247) 56(84) bytes of data.
 64 bytes from glusterfs2 (172.31.31.247): icmp_seq=1 ttl=64 time=0.992 ms

Glusterfs2

[root@ip-172-31-31-247 ~]# cat /etc/hosts
 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
 172.31.31.247 glusterfs2
 172.31.31.246 glusterfs1
 [root@ip-172-31-31-247 ~]# ping -c 2 glusterfs2
 PING glusterfs2 (172.31.31.247) 56(84) bytes of data.
 64 bytes from glusterfs2 (172.31.31.247): icmp_seq=1 ttl=64 time=0.016 ms
 64 bytes from glusterfs2 (172.31.31.247): icmp_seq=2 ttl=64 time=0.023 ms

[root@ip-172-31-31-247 ~]# ping -c 2 glusterfs1
 PING glusterfs1 (172.31.31.246) 56(84) bytes of data.
 64 bytes from glusterfs1 (172.31.31.246): icmp_seq=1 ttl=64 time=1.36 ms
 64 bytes from glusterfs1 (172.31.31.246): icmp_seq=2 ttl=64 time=0.619 ms
Enable the Gluster repo on both servers. Create a repo file and add the contents shown below.
[root@ip-172-31-31-247 ~]# cat /etc/yum.repos.d/gluster.repo
 [glusterfs]
 name=GlusterFS – Distributed File System
 baseurl=http://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.7/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0
[root@ip-172-31-31-247 ~]# yum clean all
[root@ip-172-31-31-246 ~]# cat /etc/yum.repos.d/gluster.repo
 [glusterfs]
 name=GlusterFS – Distributed File System
 baseurl=http://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.7/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0

[root@ip-172-31-31-246 ~]# yum clean all
Install the GlusterFS server package on both servers and start the service.
[root@ip-172-31-31-246 ~]# yum install glusterfs-server -y

[root@ip-172-31-31-246 ~]# service glusterd start
 Redirecting to /bin/systemctl start glusterd.service

[root@ip-172-31-31-246 ~]# service glusterd status
 Redirecting to /bin/systemctl status glusterd.service
 ● glusterd.service - GlusterFS, a clustered file-system server
 Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
 Active: active (running) since Fri 2016-09-30 07:10:44 UTC; 29min ago
 Process: 11956 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 11957 (glusterd)
 CGroup: /system.slice/glusterd.service
 └─11957 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

[root@ip-172-31-31-246 ~]# chkconfig glusterd on
 Note: Forwarding request to 'systemctl enable glusterd.service'.
 Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.

[root@ip-172-31-31-247 ~]# yum install glusterfs-server -y

[root@ip-172-31-31-247 ~]# service glusterd start
 Redirecting to /bin/systemctl start glusterd.service

[root@ip-172-31-31-247 ~]# service glusterd status
 Redirecting to /bin/systemctl status glusterd.service
 ● glusterd.service - GlusterFS, a clustered file-system server
 Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
 Active: active (running) since Fri 2016-09-30 07:10:44 UTC; 29min ago
 Process: 11956 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 11957 (glusterd)
 CGroup: /system.slice/glusterd.service
 └─11957 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

[root@ip-172-31-31-247 ~]# chkconfig glusterd on
 Note: Forwarding request to 'systemctl enable glusterd.service'.
 Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
Now that the installation is complete, let's create the trusted storage pool.

From glusterfs1 Host

[root@ip-172-31-31-246 ~]# gluster peer probe glusterfs2
 peer probe: success.

From Glusterfs2 Host

[root@ip-172-31-31-247 ~]# gluster peer probe glusterfs1
 peer probe: success. Host glusterfs1 port 24007 already in peer list
Now verify whether both the servers are in the pool or not with the below command, from any of the two servers.
[root@ip-172-31-31-246 ~]# gluster pool list
 UUID                                  Hostname    State
 2d7c96c0-31d3-48bc-8d61-a9c97d01f07d  glusterfs2  Connected
 af8c77f4-f10a-4b28-915e-ae7bea2af4c6  localhost   Connected
From the above output, it is clear that both servers have been added to the trusted storage pool.
To add extra servers to the pool, follow the same process as above.
Attach a new disk to each server and mount it on a directory of your choice, as in the sketch below. We will use this mounted directory as a brick when creating the volumes.
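For example, assuming the new disk shows up as /dev/xvdf (the device name and mount point are illustrative and will differ in your environment), preparing the brick on the first server could look like this:
# mkfs.xfs /dev/xvdf                                            # create a file system on the new disk
# mkdir -p /mnt/brick1                                          # mount point that will act as the brick
# mount /dev/xvdf /mnt/brick1
# echo '/dev/xvdf /mnt/brick1 xfs defaults 0 0' >> /etc/fstab   # make the mount persistent across reboots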
Now that the GlusterFS servers are up and running, let's create volumes and start accessing the data.
I have attached a new disk to each of the two servers; these will be used as bricks.
[root@ip-172-31-31-246 ~]# df -h | grep brick
 /dev/xvdf 976M 2.6M 907M 1% /mnt/brick1

[root@ip-172-31-31-247 ~]# df -h | grep brick
 /dev/xvdf 976M 2.6M 907M 1% /mnt/brick2
From either Gluster server, run the following command to create a Gluster volume.
[root@ip-172-31-31-246 ~]# gluster volume create gvol glusterfs1:/mnt/brick1 glusterfs2:/mnt/brick2 force
 volume create: gvol: success: please start the volume to access data
Gluster volume has been created successfully. Now let us start the Gluster volume.
[root@ip-172-31-31-246 ~]# gluster volume start gvol
 volume start: gvol: success
Gluster volume has been started successfully and is now ready to serve data. Before mounting it on the client side, let's first check the Gluster volume information.
[root@ip-172-31-31-246 ~]# gluster volume info gvol
 Volume Name: gvol
 Type: Distribute
 Volume ID: 74b01cf1-b1ac-4524-b2ad-ca465c02a888
 Status: Started
 Number of Bricks: 2
 Transport-type: tcp
 Bricks:
 Brick1: glusterfs1:/mnt/brick1
 Brick2: glusterfs2:/mnt/brick2
 Options Reconfigured:
 performance.readdir-ahead: on
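Note that the volume above is of type Distribute, which spreads files across the bricks but does not duplicate them. For actual high availability you would normally create a replicated volume instead; a minimal sketch, using hypothetical brick directories that are not already part of another volume, would be:
# gluster volume create gvol-ha replica 2 glusterfs1:/mnt/brick1/ha glusterfs2:/mnt/brick2/ha force
# gluster volume start gvol-ha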

Mount the Gluster in clients (Windows and Linux)

Linux
Let us mount the Gluster volume gvol on the client machine to access the data. Add the Gluster servers' IP addresses to /etc/hosts on the client machine.
[root@ip-172-31-18-13 ~]# cat /etc/hosts
 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
 172.31.31.247 glusterfs2
 172.31.31.246 glusterfs1
Enable the Gluster Repo as we did on the servers. Create a repo file and add the contents as shown below.
[root@ip-172-31-18-13 ~]# cat /etc/yum.repos.d/gluster.repo
 [glusterfs]
 name=GlusterFS – Distributed File System
 baseurl=http://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.7/
 enabled=1
 skip_if_unavailable=1
 gpgcheck=0

[root@ip-172-31-18-13 ~]# yum clean all
Install the client packages with the following command
[root@ip-172-31-18-13 yum.repos.d]# yum install glusterfs glusterfs-fuse

Create a directory.

[root@ip-172-31-18-13 ~]# mkdir /mnt/glusterclient1
Mount it with the following command.
[root@ip-172-31-18-13 ~]# mount.glusterfs glusterfs1:/gvol /mnt/glusterclient1/
If all goes well, you will be able to see the gluster volume mounted successfully.
[root@ip-172-31-18-13 ~]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/xvda1 8.0G 1.1G 7.0G 14% /
 devtmpfs 478M 0 478M 0% /dev
 tmpfs 496M 0 496M 0% /dev/shm
 tmpfs 496M 13M 484M 3% /run
 tmpfs 496M 0 496M 0% /sys/fs/cgroup
 tmpfs 100M 0 100M 0% /run/user/1000
 /dev/xvdf 976M 2.7M 907M 1% /mnt/brick2
 glusterfs1:/gvol 9.0G 1.1G 7.9G 12% /mnt/glusterclient1
Create files from the client and you will be able to see them on the servers.
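As a quick sanity check, and to make the client mount survive reboots (the mount options shown are a common baseline and may need adjusting for your environment), you could do something like:
# touch /mnt/glusterclient1/testfile{1..5}      # create a few test files from the client
# ls /mnt/brick1                                # on glusterfs1: some of the files land here
# ls /mnt/brick2                                # on glusterfs2: the rest land here
# echo 'glusterfs1:/gvol /mnt/glusterclient1 glusterfs defaults,_netdev 0 0' >> /etc/fstab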

Mount GlusterFS in Windows

For Windows machines we need to set up a Samba configuration to export the mount point of the Gluster volume. For example, if a Gluster volume is mounted on /mnt/glusterclient1, you must edit the smb.conf file to export it through CIFS.
Open smb.conf file in an editor and add the following lines for a simple configuration:
[gvol]
 comment = Gluster Stuff
 path = /mnt/glusterclient1
 public = yes
 writable = yes
 printable = no

Restart the Samba service:
 service smb restart
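The Windows client will authenticate as a Samba user; a minimal sketch for creating one and validating the configuration (the user name smbuser is only an example) is:
# useradd -s /sbin/nologin smbuser     # local account used only for Samba access
# smbpasswd -a smbuser                 # set the Samba password for this user
# testparm                             # verify that smb.conf parses cleanly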
Go to the Windows machine and follow the steps below.
  • Open "Computer", then "Network".
  • Browse to the server by its UNC path, \\<gluster server IP>.
  • The CIFS-exported volume will appear.
  • Double-click on it.
  • Enter the username and password specified during Samba user creation, then connect and access the share.
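Before testing from Windows, you can confirm that the share is actually exported by listing it from any Linux machine with the samba-client package installed (assuming the smbuser account from the sketch above):
# smbclient -L //glusterfs1 -U smbuser
The gvol share should appear in the listing.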
GlusterFS supports different volume types based on requirements: some are good for scaling storage size, some for improving performance, and some for both.
Source: https://arkit.co.in