Articles

Adventures in Red Hat Enterprise Linux, CentOS, Fedora, OpenBSD and other open source solutions.

Zabbix LLD (low level discovery) SNMP examples

In my opinion, the low level discovery mechanism that Zabbix now offers is not easy to understand. It is, however, a very useful tool to set up a simple template that monitors hundreds of items at once.

The Zabbix documentation about low level discovery covers one type of discovery well: network interfaces.

Although that's a pretty important discovery, there are more tricks to use. I ran into a problem where a Juniper SRX ran out of disk space. This was not monitored, so I added a discovery rule to find all storage devices and see how full they are. I added this discovery rule to a template called "SNMP devices". This means all devices that have that template applied will be "discovered". Many of these devices will not have local storage, though. That is not an issue: the discovery will simply fail for those devices.

I added this discovery rule:

  • Name: Available storage devices
  • Type: SNMPv2 Agent
  • Key: snmp.discovery.storage
  • SNMP OID: hrStorageDescr
  • SNMP community: {$SNMPCOMMUNITY} (This variable is set on host level and referred to here.)
  • Port: 161
  • Update interval (in sec): 3600 (Set this to 60 temporarily, to speed up the discovery process, but remember to set it back.)
  • Keep lost resources period (in days): 1
  • Filter: Macro: {#SNMPVALUE} Regexp: ^/dev/da|^/dev/bo (This ensures only mounts that have a physical underlying storage device are found, the rest will be ignored.)
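You can check beforehand what such a rule will return by walking the OID with snmpwalk; a quick sanity check, assuming net-snmp-utils is installed and using a placeholder hostname and community:

# Walk hrStorageDescr (HOST-RESOURCES-MIB, .1.3.6.1.2.1.25.2.3.1.3) on the device.
# The index behind "hrStorageDescr." becomes {#SNMPINDEX}, the value becomes {#SNMPVALUE}.
snmpwalk -v2c -c public srx.example.com 1.3.6.1.2.1.25.2.3.1.3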

That rule will discover devices such as these:

  1. /dev/da0s1a
  2. /dev/bo0s1e
  3. /dev/bo0s1f

Now that these devices have been discovered, you can get all kinds of information about them. This is done using the item prototypes. I created two; one to get the size of the device, the other to get the usage of the device. Those two can be used to calculate a percentage later, with a trigger prototype. Here is one of the two item prototypes:

  • Name: hrStorageSize {#SNMPVALUE}
  • Type: SNMPv2 Agent
  • Key: hrStorageSize.["{#SNMPINDEX}"]
  • SNMP OID: hrStorageSize.{#SNMPINDEX}
  • SNMP community: {$SNMPCOMMUNITY}
  • Port: 161
  • Type of information: Numeric (unsigned)
  • Data type: Decimal
  • Units: bytes
  • Use custom multiplier: 2048 (SNMP reports this value in allocation units rather than bytes; the multiplier converts it to bytes, which I find easier to understand.)
  • Update interval (in sec): 1800 (Pretty long, but the size of a device will not change quickly.)

The other item prototype shows how much of the device is used. I cloned the previous one and changed only these values:

  • Name: hrStorageUsed {#SNMPVALUE}
  • Key: hrStorageUsed.["{#SNMPINDEX}"]
  • SNMP OID: hrStorageUsed.{#SNMPINDEX}
  • Update interval (in sec): 60 (Shorter, this will change.)

Now check whether these items are being found by looking at the "latest data" for the host. You should start to see a few items appear. Once they do, you can set up the trigger prototype. This is a bit complex, because I want an alert when a device is 95% full.

  • Name: Disk space available on {#SNMPVALUE} ({ITEM.LASTVALUE1}/{ITEM.LASTVALUE2})
  • Expression: 100*{Template_SNMP_Devices:hrStorageUsed.["{#SNMPINDEX}"].last(0)}/{Template_SNMP_Devices:hrStorageSize.["{#SNMPINDEX}"].last(0)}>95

That should start to alarm when the disk is 95% full or more.

I hope this article helps you understand the capabilities of Zabbix LLD. It's a great feature which I use to monitor blades, power supplies in chassis, network interfaces, disks and TCP ports. It makes templates much simpler, which I really like.

User authentication on CentOS 6 with Active Directory based on hosts and groups

Follow this article when you would like users to be able to log in to a CentOS 6 host, authenticating against Active Directory based on:

  1. Group membership of a user (a group like "Linux Administrators") (or)
  2. A "host" attribute set per user to allow fine grained host-based permissions

This has a major benefit: you can add users to an administrative group, and besides that you can grant individual users login permission on specific hosts. Once you have set this up, you can manage permissions entirely through Active Directory.

Install required packages

You need to install a single package:

yum install nss-pam-ldapd

Configuration

There are quite a few files to configure. I know that system-config-auth exists, but I don't know if it gives the right results, so here are the files one by one:

/etc/nslcd.conf

# This program runs under this user and group, these are local/system (/etc/passwd) users.
uid nslcd
gid ldap
# The base is where to start looking for users. Your Windows colleagues will know this value.
base dc=nl,dc=example,dc=com
# This is the URI that describes how to connect to the LDAP server/active directory server. You may use a DNS round-robin name here to point to multiple Domain Controllers.
uri ldaps://ldap.nl.example.com:636/
# This is a user that can authenticate to Active Directory. It's used to connect to AD and query stuff.
binddn [email protected]
bindpw SoMePaSsWoRd
# I don't know exactly where these settings come from; the man page has more information.
scope  group  sub
scope  hosts  sub
# If there are many results, paging is used.
pagesize 1000
# LDAP servers can refer you to another location; in my experience this slows down authentication dramatically.
referrals off
# This is the trick to match users from a certain group and users that have a host-attribute filled in.
# Note that the value of the "host" attribute should match the hostname of the machine where this file is installed.
filter passwd (&(objectClass=user)(!(objectClass=computer))(unixHomeDirectory=*)(|(host=mylinuxhost.nl.example.com)(memberOf=CN=Linux Administrators,OU=Groups,DC=nl,DC=example,DC=com)))
# Active Directory may store some values in attributes that need to be mapped.
map    passwd homeDirectory    unixHomeDirectory
filter shadow (&(objectClass=user)(!(objectClass=computer))(unixHomeDirectory=*))
map    shadow shadowLastChange pwdLastSet
# This filters out groups that have a "gidNumber" set. This typically only happens for groups that need to be available on Linux.
filter group  (&(objectClass=group)(gidNumber=*))
map    group  uniqueMember     member
# Some time limits.
bind_timelimit 3
timelimit 3
scope sub
# Secure Socket Layer, yes we do!
ssl on
tls_reqcert never

/etc/pam_ldap.conf

This file looks very much like /etc/nslcd.conf; I don't know why there are two, which is confusing.

bind_timelimit 3
timelimit 3
network_timeout 3
bind_policy hard
scope sub
nss_base_passwd dc=nl,dc=example,dc=com
nss_base_shadow dc=nl,dc=example,dc=com
nss_base_group dc=nl,dc=example,dc=com
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup Group
nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute uniqueMember member
nss_map_attribute shadowLastChange pwdLastSet
pam_login_attribute uid
pam_filter objectClass=user
pam_password ad
pam_member_attribute member
pam_min_uid 10000
pam_groupdn CN=Linux Administrators,OU=Groups,DC=nl,DC=example,DC=com
base dc=nl,dc=example,dc=com
uri ldaps://ldap.nl.example.com:636/
binddn [email protected]
bindpw SoMePaSsWoRd
bind_timelimit 3
timelimit 3
scope sub
ssl on
tls_reqcert never

/etc/pam.d/system-auth-ac and /etc/pam.d/password-auth-ac

These two files have identical content.

auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_krb5.so
auth        required      pam_deny.so

account     [default=bad user_unknown=ignore success=ok authinfo_unavail=ignore] pam_krb5.so
account     required      pam_unix.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     required      pam_mkhomedir.so skel=/etc/skel umask=0077

/etc/nsswitch.conf

This determines which facility handles which name-resolution queries. Make sure these lines are present:

passwd:     files ldap [NOTFOUND=return UNAVAIL=return] db
shadow:     files ldap [NOTFOUND=return UNAVAIL=return] db
group:      files ldap [NOTFOUND=return UNAVAIL=return] db
sudoers:    files ldap [NOTFOUND=return UNAVAIL=return] db

Starting of daemons

When all configuration changes are done, start nslcd and enable it at boot:

service nslcd start
chkconfig nslcd on
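To verify that lookups work before digging into PAM, query NSS directly; a quick check, assuming an AD account "jdoe" that matches the filters above:

# Resolve a user and a group through nslcd; both should return an entry.
getent passwd jdoe
getent group "Linux Administrators"
# Show the uid and group memberships as the system sees them.
id jdoe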

Troubleshooting

There is a caching mechanism in nslcd. I don't know how to flush that cache, and it caches negative hits too (so when a user is not found, it will keep on saying that the user is not found). Waiting (overnight) clears that cache, but that does not help you solve the problem today.

You may stop nslcd and run it in debug mode:

service nslcd stop
nslcd -d

This will show you all queries sent to the LDAP server.

Add a Zabbix proxy to an existing Zabbix server

So you have an existing, working Zabbix server and would like to add a Zabbix proxy? Here are the steps:

Install zabbix-proxy

On a new host (likely in some remote network) install the software package zabbix-proxy:

yum install zabbix-proxy
chkconfig zabbix-proxy on

We'll need to configure it, but that's a later step.

Create a database

Another easy step. Maybe you already have database infrastructure in the remote network; otherwise you can always install a database server locally:

yum install mysql-server
chkconfig mysqld on
service mysqld start
/usr/bin/mysqladmin -u root password 'MyPassword'

No matter where the database server is located, zabbix-proxy needs its own database:

CREATE DATABASE zabbix CHARACTER SET utf8;
GRANT SELECT,INSERT,UPDATE,DELETE ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY 'MyZabbixPassword';

The database schema also needs to be populated. The schema file can be found in the Zabbix source code package, under database/mysql/.

mysql -u root -p zabbix < schema.sql

Configure zabbix-proxy

There are a few items to configure in /etc/zabbix/zabbix_proxy.conf.

The Server should point to your existing Zabbix server.

Server=existing-zabbix-server.example.com

The Hostname should be set, and should exactly match what you configure later in the web interface of the existing Zabbix server.

Hostname=zabbix-proxy-01.example.com

You also need to configure the Zabbix proxy to be able to connect to the database:

DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=MyZabbixPassword
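Before starting the proxy, you can confirm that these credentials and the imported schema are in place; a quick sanity check, assuming the database runs on the proxy host itself:

# Connect with the proxy's credentials and list a few tables from the schema.
mysql -h localhost -u zabbix -pMyZabbixPassword zabbix -e 'SHOW TABLES;' | head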

Configure zabbix-server

This is a very easy step; go to the web interface and navigate to:
Administration - DM
Click on "Create Proxy".

Fill in the name of the proxy exactly as you set it on the proxy in /etc/zabbix/zabbix_proxy.conf under Hostname.

Start zabbix-proxy

service zabbix-proxy start

On the Zabbix server you should now see under Administration - DM that the "last seen" field is updated. (It might take a minute or so.)

Monitor a node with the zabbix-proxy

Now you may add hosts to Zabbix that are monitored by that proxy. In the configuration of such a host, select the newly configured proxy under "Monitored by proxy".

If you can't get the Zabbix proxy to be seen by the server, make sure that these ports are open:

  • From the Zabbix proxy (any source port) to the Zabbix server, port 10051/tcp: the Zabbix proxy sends its collected data to the Zabbix server over this port.
  • From the Zabbix proxy (any source port) to the Zabbix hosts (agents), port 10050/tcp: the Zabbix proxy connects to monitored hosts on this port for "passive" items.
  • From the Zabbix hosts (agents) (any source port) to the Zabbix proxy, port 10051/tcp: the Zabbix hosts connect to the Zabbix proxy on this port for "active" items.

Also check the logfile /var/log/zabbix_proxy.log on the Zabbix proxy.
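A rough way to test the connection from the proxy itself (assuming nc from the nc/nmap-ncat package is installed; replace the hostname with your own server):

# Check that the Zabbix server trapper port is reachable from the proxy.
nc -vz existing-zabbix-server.example.com 10051
# Follow the proxy log while it connects.
tail -f /var/log/zabbix_proxy.log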

Apache Tomcat and Apache HTTP in combination with LDAP authentication

Apache Tomcat is a web application server, and it's rather logical to place it behind Apache HTTP, the well-known web server.

Once you have Apache Tomcat running and a web application installed, install Apache HTTP:

yum install httpd

Add a file in /etc/httpd/conf.d/apache-tomcat.conf:

<Location />
  ProxyPass http://localhost:8080/my-app/
  ProxyPassReverse http://localhost:8080/my-app/
  AuthBasicProvider ldap
  AuthType Basic
  AuthzLDAPAuthoritative on
  AuthName "My App Authentication"
  AuthLDAPURL "ldap://your.ldap-or-ad-server.com:3268/DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" STARTTLS
  AuthLDAPBindDN "[email protected]"
  AuthLDAPBindPassword MySuperSecurePassword
  AuthLDAPRemoteUserIsDN off
  Require valid-user
  Require ldap-group CN=My App Group,DC=company,DC=com
</Location>

(Be sure to have some users in that "My App Group"; only those are allowed to authenticate.)

Edit the Apache Tomcat configuration to only allow connections from localhost. This is done in /opt/apache-tomcat/conf/server.xml. Find the port 8080 connector and add the 127.0.0.1 address:

    <Connector address="127.0.0.1" port="8080" protocol="HTTP/1.1"
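To wrap up, check the configuration and restart Apache; a rough sketch, assuming the stock CentOS httpd, which loads mod_proxy, mod_ldap and mod_authnz_ldap by default:

# Verify the Apache configuration syntax, restart httpd and test with a user
# that is a member of "My App Group" (curl prompts for the password).
apachectl configtest
service httpd restart
chkconfig httpd on
curl -u someuser http://localhost/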

CloudFlare in front of Amazon Elastic Compute Cloud (EC2) webserver(s).

So you have set up one or more websites on one or more Amazon Elastic Compute Cloud (EC2) instances. There are a few drawbacks that can easily be mitigated:

  • Amazon EC2 instances might not be very close to the location where most visitors come from, causing increased latency.
  • Having a single Amazon EC2 node is a bit fragile: configuration errors, webserver reloads and reboots will cause downtime.
  • Amazon EC2 machines (just as any other (Linux) machine) might be a bit vulnerable to attacks.
  • Traffic spikes might cause slow loading of pages.

The solution to these and other problems is free and easy to implement: it's called CloudFlare. CloudFlare describes the product like this:
CloudFlare protects and accelerates any website online. Once your website is a part of the CloudFlare community, its web traffic is routed through our intelligent global network.

So far my experience is very good.

Implementation takes 15 minutes or so. It's easy: simply register your site at CloudFlare, let it pick up all existing DNS records, and change the nameserver (NS) records for the domain to CloudFlare's nameservers.

All (web) traffic is routed through CloudFlare from that moment onward. This helps to:

  • Save traffic to a webserver/loadbalancer. CloudFlare gives away this bandwidth for free, thank you!
  • Speed up websites by implementing caching and compression.
  • Reduce the number of hops from a visitor to the website. The website is actually served from any of the world-wide locations hosted by CloudFlare.
  • Show a cached copy of the website (with a warning) if the webserver is down.
  • Reduce comment-spam by filtering out or challenging potential spammers with a Captcha.

All in all, a great service that's very easy to implement and maintain.

Visit thepiratebay.org from the Netherlands

Long story short: visit The Pirate Bay via Me in IT Consultancy instead of through the direct URL, so you can download torrents again. If you are a Ziggo or XS4All customer, a court has decided that you may no longer visit thepiratebay.org. Right, as if that helps...

Short story long: with Apache, mod_proxy and mod_proxy_html you can make other websites available under a Location on a website. If the webserver is located elsewhere, chances are good that you can visit the "blocked" website again.

Technically, you need these ingredients to get it working:

  1. CentOS - I use 5, but 6 should work too.
  2. Apache - Simply install it with "yum install httpd".
  3. mod_proxy_html - I have created a SRC RPM for mod_proxy_html.
  4. Configuration files - See below.

The configuration looks like this:

<VirtualHost *:80>
...
  ProxyRequests off

ProxyPass /thepiratebay.org/ http://thepiratebay.org/
ProxyPass /static.thepiratebay.org/ http://static.thepiratebay.org/
ProxyPass /rss.thepiratebay.org/ http://rss.thepiratebay.org/
ProxyPass /torrents.thepiratebay.org http://torrents.thepiratebay.org

<Location /thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /static.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /static.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /rss.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /rss.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /torrents.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap / /torrents.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

...
</VirtualHost>

Zabbix triggers with "flap-detection" and a grace period.

Monitoring an environment gives control, so it's pretty important. But setting up a monitoring system can be a challenge; it should not alert too quickly, but also not too slowly.

Nagios uses "flap detection" to prevent many ERROR's and OK's being sent right after each other. Zabbix calls this "hysteresis". Zabbix's hysteresis is rather difficult to understand, so I'd like to share some triggers that I have setup for Zabbix that implement both flap detection/hysteresis and grace.

Grace can be defined like this: "When a value has crossed a threshold and caused an alert, require it to move a little past that threshold in the other direction before the trigger recovers." I know, it's not easy to grasp... Let's look at some examples.

Values that should stay below a threshold

With values that need to stay below a threshold, like CPU load, the number of users logged in or the number of processes running:

({TRIGGER.VALUE}=0&{TEMPLATE:CHECK[ITEM].min(300)}>ALERTVALUE)|({TRIGGER.VALUE}=1&{TEMPLATE:CHECK[ITEM].max(300)}<RECOVERYVALUE)

Just to clarify the different parts of the trigger:

  1. {TRIGGER.VALUE} makes sure the first part (before the |) is evaluated while the trigger is not in alert; the part after the | is evaluated while the trigger is in alert.
  2. .min(300) makes sure the value has been above ALERTVALUE for at least 300 seconds before the trigger fires.
  3. The last part (after the |) makes sure the trigger only recovers when the measured value has been below RECOVERYVALUE for 300 seconds.

For example CPU load with an ALERTVALUE of 5 and a RECOVERYVALUE of 4:

({TRIGGER.VALUE}=0&{Template_Linux:system.cpu.load[,avg1].min(300)}>5)|({TRIGGER.VALUE}=1&{Template_Linux:system.cpu.load[,avg1].max(300)}<4)

Values that should stay above a threshold

With values that need to stay above a threshold, like the percentage of disk space free, the number of free inodes or the number of httpd processes running:

({TRIGGER.VALUE}=0&{TEMPLATE:CHECK[ITEM].max(300)}<ALERTVALUE)|({TRIGGER.VALUE}=1&{TEMPLATE:CHECK[ITEM].min(300)}>RECOVERYVALUE)

For example, free disk space on /var in percent, with an ALERTVALUE of 10 and a RECOVERYVALUE of 11:

({TRIGGER.VALUE}=0&{Template_Linux:vfs.fs.size[/var,pfree].max(300)}<10)|({TRIGGER.VALUE}=1&{Template_Linux:vfs.fs.size[/var,pfree].min(300)}>11)

These rather complex triggers prevent short spikes of load or disk usage from causing an alert, but the drawback is that you might miss certain interesting spikes too. Overall, my opinion is that a monitoring system should not drive people crazy, because alerts will be ignored when too many are received.

Examples for batch on linux

Linux has a few ways to schedule jobs for execution. I am sure most are familiar with crontab and at, but batch is less well known.

"batch" can be used to: (from the man-page on batch)
executes commands when system load levels permit; in other words, when the load average drops below 0.8, or the value specified in the invocation of atrun.

So:

  • crontab is used for periodic scheduling.
  • at is used for executing something once at a specific time.
  • batch can be used to execute commands when your system has resources to spare (see the sketch below).
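batch reads the commands to run from standard input, so the simplest invocation looks like this (a minimal sketch; the script name is just an example):

# Queue a command; it runs as soon as the load average permits.
echo "/usr/local/bin/heavy-report.sh" | batch
# Output (if any) is mailed to the user, unless redirected inside the job.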

You can also combine crontab and batch. Imagine you need to run a sequence of commands in a specific order every hour; crontab does not guarantee that one command has finished before it starts the next one.
batch can be used from crontab in the same way:

crontab -l
0 * * * * echo /usr/local/bin/prepare-something.sh | /usr/bin/batch
1 * * * * echo /usr/local/bin/process-something.sh | /usr/bin/batch
2 * * * * echo /usr/local/bin/report-something.sh | /usr/bin/batch

This queues the three commands in a specific order, to be executed one after the other when the system load is not too high.

One specific situation where I use this: Drupal needs to run a program (cron.php) every hour. crontab would be perfect for that, but when the load is too high, it's not a problem if this program is executed a little later. This is what I have set up:

0 * * * * echo "/usr/bin/wget -o /dev/null -O /dev/null http://1.example.com/cron.php" | /usr/bin/batch
1 * * * * echo "/usr/bin/wget -o /dev/null -O /dev/null http://2.example.com/cron.php" | /usr/bin/batch
2 * * * * echo "/usr/bin/wget -o /dev/null -O /dev/null http://3.example.com/cron.php" | /usr/bin/batch

This ensures that cron.php is run every hour, but not while the system load is too high (0.8 or more). One disadvantage of this solution: when your system is overloaded for a long period of time, these batch jobs pile up, and once the load drops below 0.8 all the batched commands are executed. Fortunately Drupal's cron.php does not consume that many resources when it's run twice.
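If you suspect jobs are piling up, the at tooling can show what is still queued (the job number 42 below is just an example taken from atq output):

# List queued at/batch jobs (batch jobs sit in queue "b").
atq
# Show the environment and command a queued job will run.
at -c 42
# Remove a job that is no longer needed.
atrm 42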

Release scheme for RPM based Linux distributions

It can be rather confusing what the differences and similarities between Fedora, Red Hat Enterprise Linux and CentOS are, especially across versions. This article explains the release schedules and relationships of the various RPM based Linux distributions.

Fedora is a Red Hat sponsored community project. Fedora is released approximately every 6 months, and each release is "supported" (supplied with updates) for only about 13 months. Clearly this is a development distribution.

Red Hat picks up a Fedora version, adds a number of patches, and calls the result "Red Hat Enterprise Linux". Red Hat releases more conservatively, roughly every 2 years, and supports a release for about 5 years after it comes out, making this distribution much more "enterprise".

Fedora - Red Hat release relationship:

  • Fedora Core 3 -> Red Hat Enterprise Linux 4
  • Fedora Core 6 -> Red Hat Enterprise Linux 5
  • Fedora 13 -> Red Hat Enterprise Linux 6

CentOS picks up the source code that Red Hat publishes for Red Hat Enterprise Linux. The CentOS community replaces the artwork and changes very few other things. CentOS "supports" (provides updates) a release for as long as Red Hat supplies updates for Red Hat Enterprise Linux.
Interesting to know: once you have chosen a certain major version of CentOS, "yum update" will automatically bring you to the most recent minor release of that version. So if you install "CentOS 5.0" and run "yum update", you will automatically end up on "CentOS 5.7" (at the time of this writing).

Putting it together: Fedora feeds Red Hat Enterprise Linux, which in turn is rebuilt as CentOS.

Apache Tomcat 7 spec file RPM

I tried to find an RPM for Apache Tomcat version 7, but could not find one. You can use this one; it requires the source code, downloadable from the Apache Tomcat website under "Source Code Distributions".

This SPEC file creates an RPM "apache-tomcat" that installs into /opt/apache-tomcat, plus sub-packages for the default web applications (apache-tomcat-manager, apache-tomcat-ROOT, apache-tomcat-docs, apache-tomcat-examples, apache-tomcat-host-manager). An init script, included at the bottom, needs to be available in the SOURCES directory under the name "apache-tomcat-initscript".
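For reference, building it goes roughly like this; a sketch assuming an ~/rpmbuild tree (rpmdev-setuptree from rpmdevtools creates one) and the file names used in this spec:

# Install rpm-build and the build dependencies.
yum install rpm-build ant ant-trax java-1.6.0-openjdk-devel
# Put the Tomcat source tarball and the init script where the spec expects them.
cp apache-tomcat-7.0.20-src.tar.gz apache-tomcat-initscript ~/rpmbuild/SOURCES/
cp apache-tomcat.spec ~/rpmbuild/SPECS/
rpmbuild -ba ~/rpmbuild/SPECS/apache-tomcat.spec
# The resulting packages end up in ~/rpmbuild/RPMS/x86_64/ and ~/rpmbuild/RPMS/noarch/.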

Downloads:
SOURCE:
Apache Tomcat SRC rpm

x86_64:
Apache Tomcat x86_64 rpm
Apache Tomcat ROOT application x86_64 rpm
Apache Tomcat docs x86_64 rpm
Apache Tomcat example application x86_64 rpm
Apache Tomcat host manager application x86_64 rpm
Apache Tomcat manager application x86_64 rpm

So far it's been fine, but any comments would be appreciated.

apache-tomcat.spec:

Name: apache-tomcat
Version: 7.0.20
Release: 1
Summary: Open source software implementation of the Java Servlet and JavaServer Pages technologies.
Group: Productivity/Networking/Web/Servers
License: Apache Software License.
Url: http://tomcat.apache.org
Source: %{name}-%{version}-src.tar.gz

BuildRoot: %{_tmppath}/%{name}-%{version}-build
BuildRequires: ant
BuildRequires: ant-trax
Requires: java-1.6.0-openjdk
BuildArch: x86_64

%description
Apache Tomcat is an open source software implementation of the Java Servlet and JavaServer Pages technologies. The Java Servlet and JavaServer Pages specifications are developed under the Java Community Process.

%package manager
Summary: The management web application of Apache Tomcat.
Group: System Environment/Applications
Requires: %{name} = %{version}-%{release}
BuildArch: noarch

%description manager
The management web application of Apache Tomcat.

%package ROOT
Summary: The ROOT web application of Apache Tomcat.
Group: System Environment/Applications
Requires: %{name} = %{version}-%{release}
BuildArch: noarch

%description ROOT
The ROOT web application of Apache Tomcat.

%package docs
Summary: The docs web application of Apache Tomcat.
Group: System Environment/Applications
Requires: %{name} = %{version}-%{release}
BuildArch: noarch

%description docs
The docs web application of Apache Tomcat.

%package examples
Summary: The examples web application of Apache Tomcat.
Group: System Environment/Applications
Requires: %{name} = %{version}-%{release}
BuildArch: noarch

%description examples
The examples web application of Apache Tomcat.

%package host-manager
Summary: The host-manager web application of Apache Tomcat.
Group: System Environment/Applications
Requires: %{name} = %{version}-%{release}
BuildArch: noarch

%description host-manager
The host-manager web application of Apache Tomcat.

%prep

%setup -q -n %{name}-%{version}-src
# This tells ant to install software in a specific directory.
cat << EOF >> build.properties
base.path=%{buildroot}/opt/apache-tomcat
EOF

%build
ant

%install
rm -Rf %{buildroot}
mkdir -p %{buildroot}/opt/apache-tomcat
mkdir -p %{buildroot}/opt/apache-tomcat/pid
mkdir -p %{buildroot}/opt/apache-tomcat/webapps
mkdir -p %{buildroot}/etc/init.d/
mkdir -p %{buildroot}/var/run/apache-tomcat
%{__cp} -Rip ./output/build/{bin,conf,lib,logs,temp,webapps} %{buildroot}/opt/apache-tomcat
%{__cp} %{_sourcedir}/apache-tomcat-initscript %{buildroot}/etc/init.d/apache-tomcat

%clean
rm -rf %{buildroot}

%pre
getent group tomcat > /dev/null || groupadd -r tomcat
getent passwd tomcat > /dev/null || useradd -r -g tomcat tomcat

%post
chkconfig --add %{name}

%preun
if [ "$1" = "0" ] ; then
service %{name} stop > /dev/null 2>&1
chkconfig --del %{name}
fi

%files
%defattr(-,tomcat,tomcat,-)
%dir /opt/apache-tomcat
%config /opt/apache-tomcat/conf/*
/opt/apache-tomcat/bin
/opt/apache-tomcat/lib
/opt/apache-tomcat/logs
/opt/apache-tomcat/temp
/opt/apache-tomcat/pid
%dir /opt/apache-tomcat/webapps
/var/run/apache-tomcat
%attr(0755,root,root) /etc/init.d/apache-tomcat

%files manager
/opt/apache-tomcat/webapps/manager

%files ROOT
/opt/apache-tomcat/webapps/ROOT

%files docs
/opt/apache-tomcat/webapps/docs

%files examples
/opt/apache-tomcat/webapps/examples

%files host-manager
/opt/apache-tomcat/webapps/host-manager

%changelog
* Fri Aug 19 2011 - robert (at) meinit.nl
- Updated to apache tomcat 7.0.20
- Split (example) applications into their own RPM.
* Mon Jul 4 2011 - robert (at) meinit.nl
- Initial release.

apache-tomcat-initscript:

#!/bin/sh
#
# apache-tomcat
#
# chkconfig: - 85 15
# description: Jakarta Tomcat Java Servlets and JSP server
# processname: java
# pidfile: /var/run/apache-tomcat/pid

. /etc/rc.d/init.d/functions

# Set Tomcat environment.
USER=tomcat
LOCKFILE=/var/lock/apache-tomcat
export BASEDIR=/opt/apache-tomcat
export TOMCAT_HOME=$BASEDIR
export CATALINA_PID=/var/run/apache-tomcat/pid
export CATALINA_OPTS="-DHOME=$BASEDIR/home -Xmx512m -Djava.awt.headless=true"

case "$1" in
  start)
        echo -n "Starting apache-tomcat: "
        status -p $CATALINA_PID apache-tomcat > /dev/null && failure || (su -p -s /bin/sh $USER -c "$TOMCAT_HOME/bin/catalina.sh start" > /dev/null && (touch $LOCKFILE ; success))
        echo
        ;;
  stop)
        echo -n "Shutting down apache-tomcat: "
        status -p $CATALINA_PID apache-tomcat > /dev/null && su -p -s /bin/sh $USER -c "$TOMCAT_HOME/bin/catalina.sh stop" > /dev/null && (rm -f $LOCKFILE ; success) || failure
        echo
        ;;
  restart)
        $0 stop
        $0 start
        ;;
  condrestart)
       [ -e $LOCKFILE ] && $0 restart
       ;;
  status)
        status -p $CATALINA_PID apache-tomcat
        ;;
  *)
        echo "Usage: $0 {start|stop|restart|condrestart|status}"
        exit 1
        ;;
esac
