2012년 9월 21일 금요일

OpenStack Foundation launches

Last Wednesday (September 19th), I attended the Openstack Foundation meetup in Korea.

The leader, Jaesuk Ahn, introduced the background of the foundation
and the new features in the next version of Openstack, Folsom. 







This means that Openstack will remain neutral and cannot be owned by any one company or group of companies. 

Well, I am not currently working with Openstack; instead, I've been involved in a project associated with VMware.
It's been a long time since I posted anything related to Openstack. 

Ah, right: VMware has recently become a member of Openstack, so the subject is not so far off after all. 

Openstack Foundation members are as follows: 
- Platinum members: AT&T, Canonical, HP, IBM, Nebula, Rackspace, Red Hat and SUSE. 
- Gold members: Cisco, Dell, NetApp, Piston Cloud Computing, Intel, NEC and VMware.

Click here to learn more about the Openstack Foundation.

Nagios: check_logfiles

Altibase is often called a hybrid database, which means it stores data both in tablespaces on a file system and in memory. A traditional database creates tablespaces on disk for storing data, but Altibase can also place tablespaces in memory. 
Memory is faster than disk, which is why we often cache data there. 

Altimon is a monitoring daemon for this DB. It periodically executes system queries defined in a configuration file, and if one of the results exceeds a threshold, it writes an error to a separate log file. This post describes how I connected that log file to Nagios. 


1. What is check_logfiles? 
check_logfiles is used to scan the lines of a file for regular expressions.
The plugin check_logfiles was designed to operate in mission critical environments where missing log lines after a logfile rotation could not be tolerated. 
When such a logfile rotation takes place, check_logfiles detects this and analyses the lines of the archived logfile, even if it's compressed.

It normally scans only the lines of a logfile which were added since the last run of the plugin.
The main features are:
- multiple regular expressions can be given


- expressions can be categorized as warning or critical
- it can handle any logfile rotation strategy
- hook scripts (either external scripts or a piece of perl-code in the configuration file) are possible, taking actions when a line matches a pattern. (for example, whenever a critical pattern is found, a nsca message is sent to the nagios server) 

2. Install check_logfiles & Test 
The check_logfiles plugin cannot be installed via yum on CentOS; it is not included in the repositories. You need to download the source code and install it manually. 

# Downloads check_logfiles
$ cd /downloads/
$ wget http://labs.consol.de/download/shinken-nagios-plugins/check_logfiles-3.5.1.tar.gz
$ ls -al
total 102490364
-rw-r--r--   1 root   root         138465 Dec 28  2007 check_logfiles-3.5.1.tar.gz
...



$ tar xzvf check_logfiles-3.5.1.tar.gz
$ cd check_logfiles-3.5.1

# The default directory of my nagios is  /usr/lib64/nagios/plugins/ 
$ ./configure --prefix=/usr/lib64/nagios/plugins/
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether make sets $(MAKE)... (cached) yes
variable with_seekfiles_dir is /var/tmp/check_logfiles
checking for sh... /bin/sh
checking for perl... /usr/bin/perl
checking for gzip... /bin/gzip
checking for gawk... /bin/gawk
checking for echo... /bin/echo
checking for sed... /bin/sed
checking for cat... /bin/cat
configure: creating ./config.status
config.status: creating Makefile
config.status: creating plugins-scripts/Makefile
config.status: creating plugins-scripts/subst
config.status: creating t/Makefile
                       --with-perl: /usr/bin/perl
                       --with-gzip: /bin/gzip
              --with-seekfiles-dir: /var/tmp/check_logfiles
              --with-protocols-dir: /tmp
               --with-trusted-path: /bin:/sbin:/usr/bin:/usr/sbin
                --with-nagios-user: nagios
               --with-nagios-group: nagios
                           
$ make
Making all in plugins-scripts
make[1]: Entering directory `/downloads/check_logfiles-3.5.1/plugins-scripts'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/downloads/check_logfiles-3.5.1/plugins-scripts'
Making all in t
make[1]: Entering directory `/downloads/check_logfiles-3.5.1/t'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/downloads/check_logfiles-3.5.1/t'
make[1]: Entering directory `/downloads/check_logfiles-3.5.1'
make[1]: Nothing to be done for `all-am'.
make[1]: Leaving directory `/downloads/check_logfiles-3.5.1'

$ make install
Making install in plugins-scripts
make[1]: Entering directory `/downloads/check_logfiles-3.5.1/plugins-scripts'
make[2]: Entering directory `/downloads/check_logfiles-3.5.1/plugins-scripts'
test -z "/usr/lib64/nagios/plugins/libexec" || mkdir -p -- "/usr/lib64/nagios/plugins/libexec"
/usr/bin/install -c 'check_logfiles' '/usr/lib64/nagios/plugins/libexec/check_logfiles'
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/downloads/check_logfiles-3.5.1/plugins-scripts'
make[1]: Leaving directory `/downloads/check_logfiles-3.5.1/plugins-scripts'
Making install in t
make[1]: Entering directory `/downloads/check_logfiles-3.5.1/t'
make[2]: Entering directory `/downloads/check_logfiles-3.5.1/t'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/downloads/check_logfiles-3.5.1/t'
make[1]: Leaving directory `/downloads/check_logfiles-3.5.1/t'
make[1]: Entering directory `/downloads/check_logfiles-3.5.1'
make[2]: Entering directory `/downloads/check_logfiles-3.5.1'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/downloads/check_logfiles-3.5.1'
make[1]: Leaving directory `/downloads/check_logfiles-3.5.1'

$ ls -al /usr/lib64/nagios/plugins/libexec/
total 192
-rwxr-xr-x 1 root root 194274 Sep 20 15:26 check_logfiles

# Test check_logfiles: I created a file called test.log in /downloads and added some lines
# containing "ALARM". 
$ /usr/lib64/nagios/plugins/libexec/check_logfiles --tag=altibase --logfile=/downloads/test.log --criticalpattern="ALARM"
CRITICAL - (4 errors in check_logfiles.protocol-2012-09-20-15-31-02) - [2012/06/12 00:00:12] [ALARM]:: [SESSION_COUNT.SID_COUNT] current [129] > checkValue [1] ...|altibase_lines=4 altibase_warnings=0 altibase_criticals=4 altibase_unknowns=0

# Next, I added the parameter --report=long because I wanted to see all matching error lines.
/usr/lib64/nagios/plugins/libexec/check_logfiles --tag=altibase --logfile=/downloads/test.log --criticalpattern="ALARM" --report=long
CRITICAL - (6 errors in check_logfiles.protocol-2012-09-20-15-52-55) - [2012/06/12 11:38:22] [ALARM]:: [MEM_DATABASE_USE.ALLOC_MEM_MB] current [11296.09] > checkValue [7000] ...|altibase_lines=6 altibase_warnings=0 altibase_criticals=6 altibase_unknowns=0
tag altibase CRITICAL
[2012/06/12 00:00:11] [ALARM]:: [PROCESS.MEM_USAGE(KB)] Current (15456872) >= Limit (10240000)
[2012/06/12 00:00:12] [ALARM]:: [MEMSTAT_SUM.MAX_TOTAL_MB] current [17818.98] > checkValue [10240]
[2012/06/12 00:00:12] [ALARM]:: [MEM_DATABASE_USE.ALLOC_MEM_MB] current [11296.09] > checkValue [7000]
[2012/06/12 00:00:12] [ALARM]:: [SESSION_COUNT.SID_COUNT] current [129] > checkValue [1]
[2012/06/12 11:38:21] [ALARM]:: [MEMSTAT_SUM.MAX_TOTAL_MB] current [17866.01] > checkValue [10240]
[2012/06/12 11:38:22] [ALARM]:: [MEM_DATABASE_USE.ALLOC_MEM_MB] current [11296.09] > checkValue [7000]

The main parameters are as follows: 
--tag= Identification tag. If you check more than one logfile/pattern combination, you should use this. (Optional) 
--logfile= The log file to scan 
--criticalpattern= Regular expression for critical matches 
--warningpattern= Regular expression for warning matches (Optional) 
--noprotocol= Switch off logging of match results to a separate file (by default, protocol files are created in /tmp)
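To wire the plugin into Nagios itself, object definitions along these lines would work (the command name and host name below are assumptions for illustration, not from my actual setup):

```
define command {
    command_name  check_altibase_log
    command_line  /usr/lib64/nagios/plugins/libexec/check_logfiles --tag=altibase --logfile=/downloads/test.log --criticalpattern="ALARM"
}

define service {
    use                  generic-service
    host_name            altibase-db
    service_description  Altibase alarm log
    check_command        check_altibase_log
}
```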



** Protocol: the matching lines can be written to a protocol file, whose name is included in the plugin's output. 
The protocol file location can be set in a configuration file; the definitions in that file are written in Perl syntax.
$protocolsdir: The default is /tmp, or the directory specified with --with-protocols-dir at ./configure time.
$protocolretention: The lifetime of protocol files in days; after that, the files are deleted automatically (default 7 days).
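Putting these pieces together, a minimal configuration file might look like the sketch below (the file name is an assumption; the log path reuses the test file from above):

```
# /etc/nagios/check_altibase.cfg -- hypothetical check_logfiles config (Perl syntax)
$protocolsdir = '/tmp';                      # where protocol files go
$seekfilesdir = '/var/tmp/check_logfiles';   # where the last-read offsets are remembered
@searches = (
  {
    tag              => 'altibase',
    logfile          => '/downloads/test.log',
    criticalpatterns => ['ALARM'],
  },
);
```

It would then be run as `check_logfiles -f /etc/nagios/check_altibase.cfg`.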


References: 
1. http://labs.consol.de/lang/en/nagios/check_logfiles/
2. http://exchange.nagios.org/directory/Plugins/Operating-Systems/Linux/check_logfiles/details

2012년 9월 12일 수요일

VMware error: Device eth0 does not seem to be present

After I successfully deployed a VM from a template via the VI Client, I got a network-related error while I was configuring the database through the VM's console.

First, I checked the NIC configuration:
$ ping www.google.com
ping: unknown host www.google.com 
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
HWADDR="00:50:56:B0:7F:24"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.20.174
NETMASK=255.255.255.0
DNS=168.126.63.1
DNS=168.126.63.2
GATEWAY=192.168.20.1
IPV6INIT=no

Then, I restarted the VM's network service.
$ service network restart
....
Bringing up interface eth0: Device eth0 does not seem to be present, delaying initialization                                               [FAILED]



I found a post that explains how to solve it: http://aaronwalrath.wordpress.com/2011/02/26/cloned-red-hatcentosscientific-linux-virtual-machines-and-device-eth0-does-not-seem-to-be-present-message/

It says the following: 
"As it turns out there is a device manager for the Linux kernel named 'udev' which remembers the settings from the NIC of the virtual machine before it was cloned.  I was not familiar with udev because it was not installed in my previous Linux VM install, which were mainly CentOS 5."

I checked network device again.
$ ls /sys/class/net/
eth1   lo

This was wrong information; I had intended to add a NIC named "eth0". 

To solve this, I had to change some information in the /etc/udev/rules.d/70-persistent-net.rules file and in /etc/sysconfig/network-scripts/ifcfg-eth0.

1) Replace the ATTR{address} value of "eth0" with the value from "eth1"
$ vi /etc/udev/rules.d/70-persistent-net.rules
.......
# PCI device 0x15ad:0x07b0 (vmxnet3) 
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:B0:7F:24", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
     -> wrong info.

# PCI device 0x15ad:0x07b0 (vmxnet3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:27:fe", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
     -> right info. 


My /etc/udev/rules.d/70-persistent-net.rules file now looked like this: 
$ cat /etc/udev/rules.d/70-persistent-net.rules
....
# PCI device 0x15ad:0x07b0 (vmxnet3) 
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:27:fe", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

2) Change the NIC's MAC address in ifcfg-eth0 to the same value as above.
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
HWADDR="00:50:56:8e:27:fe"
NM_CONTROLLED="no"
.....

3) Restart the VM.

After the reboot, the VM's network came up as "eth0".
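The key to the whole fix is keeping the MAC address in 70-persistent-net.rules in sync with the HWADDR in ifcfg-eth0. As a small illustration (the rules line below is the one from this post), the MAC can be pulled out of a rules line with sed:

```shell
# Extract the MAC address from a udev persistent-net rules line,
# so it can be reused as HWADDR in ifcfg-eth0.
rules_line='SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:27:fe", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"'
mac=$(printf '%s\n' "$rules_line" | sed -n 's/.*ATTR{address}=="\([^"]*\)".*/\1/p')
echo "$mac"    # -> 00:50:56:8e:27:fe
```

(On a real system the line would come from `grep eth1 /etc/udev/rules.d/70-persistent-net.rules`.) Note that on cloned VMs simply deleting 70-persistent-net.rules and rebooting also works, since udev regenerates the file.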

2012년 9월 9일 일요일

Deploying VM from Template VMware 5

I'm going to describe how to deploy a Linux virtual machine (CentOS 6.2) from a template on VMware 5. Deploying from a template is also the preferred way to create VMs on other hypervisors like Hyper-V and Xen. 

I installed vCenter Server and the Infrastructure Client on a Windows machine to centrally manage the ESXi host, and then I converted a running VM into a template, ready for deploying new VMs.

I used the "Deploy virtual machine from template" menu in the client. (You can see this menu by right-clicking on a template.)

This menu has the following four sections: 

  • Name and Location: the name of the VM and the location to deploy it to.
  • Host / Cluster: the host and cluster where the VM will run.
  • Storage: select the provisioning type and datastore.
  • Guest Customization

I'd like to focus on explaining the Guest Customization section.

Guest Customization


Guest Customization: Computer name


Guest Customization: Time Zone


Guest Customization: Network 


Guest Customization: Network -> Custom Setting


Guest Customization: Network -> Custom Setting


Guest Customization: DNS and Domain Setting


Guest Customization: Save spec.


Guest Customization: Save spec.



As a result: 
1) Deploying a Linux VM from a template was easier to configure on VMware than on Xen. When I configured networking on Xen, it required more complex steps.

2) I couldn't find where to change the amount of CPU and memory allocated to the VM. This needs further testing.

3) The network didn't work after the deployment finished.
I checked the network configuration file in the VM:
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
USECTL=no
NETMASK=255.255.255.0
IPADDR=192.168.20.172
GATEWAY=192.168.20.1
PEERDNS=no

check_link_down() {
    return 1;
}

DNS information had been added neither to /etc/sysconfig/network-scripts/ifcfg-eth1 nor to /etc/resolv.conf.

I manually edited the ifcfg-eth1 file:


After that I was able to ping the outside, so the network problem was resolved. But I wondered what PEERDNS in ifcfg-eth1 meant (I'd never used this option).
The meaning of PEERDNS (yes or no):
  • yes (default): this interface will modify your system's /etc/resolv.conf entries to use the DNS servers provided by the remote system when a connection is established. That is, if you define DNS settings in ifcfg-eth*, the system automatically changes the entries in /etc/resolv.conf whenever a connection is established.
  • no: do not modify /etc/resolv.conf. 
To make name resolution work with PEERDNS=no, you should put your DNS server addresses in /etc/resolv.conf yourself. 
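With PEERDNS=no, that means adding the name servers by hand; reusing the DNS addresses from earlier in this post, /etc/resolv.conf would look like:

```
# /etc/resolv.conf -- maintained manually because PEERDNS=no
nameserver 168.126.63.1
nameserver 168.126.63.2
```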

2012년 8월 26일 일요일

Ubuntu Korean community meeting (August, 2012)

I've been involved in the Linux world since September 2011; it's been a year. I started with Redhat and CentOS, the two Linux OSes I was at least somewhat familiar with, and time passed.


As time went by, I had to examine new technologies, Openstack for example. I don't intend to blame Redhat and CentOS; they are fantastic systems. However, it was true that deploying and testing new technology on those OSes was hard work. In the middle of trial and error, I discovered Ubuntu.

I've joined some open source communities. The Ubuntu Korean community is one of them, and it held its monthly meeting yesterday. This post is a review of it.


The agenda of the meeting was as follows.
1. To survive as a software engineer in IT era.
2. How to use KDE and its benefits.
3. Introducing commandline utilities
4. CUBRID (Open Database)


1. To survive as a software engineer in IT era.

  • Have fun with development: development is hard work, so stay curious -> learning, studying
  • Sharing and being responsive: share your knowledge, communicate and improve.
  • Change: the preferred technology changes over time; know the trends.            
2. How to use KDE and its benefits
  • Kubuntu (KDE on Ubuntu) and some useful programs: Kate (editor), Dolphin (file manager), Konqueror (file manager), KRunner (search), etc.
3. Introducing command-line utilities

  • The command-line interface is for advanced users
  • eBuntu is an operating system for embedded systems.
  • Command-line utilities: mocp / mpg321 / mplayer (multimedia), w3m / ircii / irssi / rtorrent (Internet)
4. CUBRID

CUBRID is fully open source (GPL); there is no commercial license, but customers have to pay for technical support.
- DB engine: GPL
- UI: BSD
- Technical Support: Payable

MySQL history        
1994    MySQL development began
1999    Became open source
2008    Acquired by Sun (~$1B)
2010    Acquired by Oracle (with Sun)
2011    Shipped 5.5 and changed the license policy

Current MySQL status
- Alternatives: MariaDB, PostgreSQL

Cubrid Global
- Upload source code to SourceForge(http://sourceforge.net/projects/cubrid)
- Cubrid.org (http://www.cubrid.org)

Performance, Scalability, Usability, Availability, Reliability
- Performance: higher read speed via caching
- HA: high availability with replication (shared nothing)
- Sharding: similar to NoSQL
- Usability: Manager for DBAs, Query browser for developers, Web manager, Migration toolkit (Oracle, MySQL)

Q&A
- The SQL syntax is nearly the same as MySQL's (about 90%)
- A seminar is held once a year (in September)
- Facebook page: http://www.facebook.com/cubrid 



2012년 8월 15일 수요일

Redhat becomes a full cloud provider

This post is based on an article from eWeek (http://t.eweek.com/eweek/#!/entry/red-hat-openstack-distribution-released-in-preview,5029295f6fab9e43ee6f3317)

Redhat announced the release of a preview of Openstack. They plan to introduce the Essex version first and then upgrade to the Folsom version.

Like other cloud vendors, Redhat has prepared a portfolio of components as a cloud provider.


Cloud Type | Component
Storage | Redhat Storage (formerly known as Gluster)
Platform as a Service | OpenShift
Virtualization | KVM or Redhat Enterprise Virtualization
Infrastructure as a Service | Redhat hybrid IaaS or Openstack
Hybrid cloud management framework | Redhat CloudForms

The benefit of releasing Openstack is that Redhat can offer the commercial support needed for fast enterprise cloud adoption. Redhat has been the third-largest contributor to Openstack. The preview will continue through this year, and a fully supported version will be introduced in 2013.

Similarly, Rackspace also has a plan to release its Openstack-based private cloud software for free. It installs on Ubuntu 12.04 LTS with the KVM hypervisor. Its components are the Essex versions of compute, image, authentication and dashboard. Customers can use a free support forum or purchase an escalation support service to solve problems.

2012년 8월 4일 토요일

Comparing Open Source Private Cloud (IaaS) Platforms


This excellent document was written by Lance Albertson at the OSU (Oregon State University) Open Source Lab.

http://assets.en.oreilly.com/1/event/80/Comparing%20Open%20Source%20Private%20Cloud%20Platforms%20Presentation.pdf

Its purpose is to compare Openstack, Cloudstack, Eucalyptus and Ganeti from a private cloud perspective.

Storage Comparison
Type | OpenStack | Eucalyptus | CloudStack | Ganeti
Disk Image | yes | yes | yes | yes [1]
Block devices | yes [2] | yes [2] | yes [3] | yes [4]
Fault Tolerance | yes [5] | yes [6] | yes [7] | yes
1. Disk Image support has limitations
2. Via an elastic block storage service
3. iSCSI, OCFS2, CLVM (depends on hypervisor)
4. Primary storage method, also has sharedfs support
5. Uses rsync in the backend
6. Not added until version 3.0, uses DRBD (Distributed Replicated Block Device)
7. Parts are built-in, storage is on your own

VM Image Comparison
Type | OpenStack | Eucalyptus | CloudStack | Ganeti
Image Service | yes | yes | yes | no
Self Service [1] | yes | yes | yes | no [2]
Amazon API | yes [3] | yes | yes | no
1. Ability for users to create and manage their own VM images
2. Third-party applications can offer this
3. Some support

Self Service Comparison
Type | OpenStack | Eucalyptus | CloudStack | Ganeti
Web Interface | yes | yes [2] | yes | yes [1]
Users & Quotas | yes | yes | yes | yes [1]
Console access | yes | yes | yes | yes [1]
User management | yes | yes | yes | yes [1]

1. Available via the third-party application Ganeti Web Manager
2. Although it has its own web UI, it is not very powerful; use ElasticFox or HybridFox instead.


Networking Management
Type | OpenStack | Eucalyptus | CloudStack | Ganeti
Auto-allocation | yes | yes | yes | no [1]
Floating IPs | yes | yes | yes | no
User defined | yes | yes | yes | no
Layer 2 | yes | yes | yes | no
1. Proposal submitted but not yet implemented


Strengths & Weaknesses

Openstack
- Weaknesses: young codebase; uncertain future; initial configuration
- Strengths: single codebase; growing community; corporate support

Eucalyptus
- Weaknesses: install requirements; configurable but not very customizable; community inclusion
- Strengths: excellent commercial support; fault tolerance; offers a hybrid-cloud solution with AWS

CloudStack
- Weaknesses: very GUI centric; single Java core; weak AWS integration
- Strengths: well-rounded GUI; stack is fairly simple; customization of the storage backend

Ganeti
- Weaknesses: admin centric; VM deployment; no AWS integration
- Strengths: fault tolerance built in; customizable; very simple to manage and maintain