Me in IT UNIX/Linux Consultancy is based in Utrecht, The Netherlands, and specializes in UNIX and Linux consultancy. Experience with Red Hat Enterprise Linux (Red Hat Certified Architect), the Fedora Project, CentOS, OpenBSD and related Open Source products makes Me in IT UNIX/Linux Consultancy a great partner in implementing, maintaining and upgrading your environment.

Open Source software is an important aspect of any Linux distribution. Me in IT UNIX/Linux Consultancy uses Open Source software where possible and actively shares its experiences. In the articles section you will find many UNIX/Linux adventures, shared for others to benefit from.

Deploying web applications (war) using RPM packages

I'm actually not sure if this is the most logical approach, but you can use RPM packages to deploy web archives (WARs) into an application server like Apache Tomcat.

There are benefits:

  • Deployments are very similar all the time.
  • You can check the version installed. (rpm -q APPLICATION)
  • You can verify if the installation is still valid. (rpm -qV APPLICATION)
  • You can use Puppet to deploy these applications.

And there are drawbacks:

  • Apache Tomcat has to be stopped to deploy. This makes all installed web applications unavailable for a moment.
  • RPM is a package, WAR is also a package. A package in a package is not very logical.
  • Apache Tomcat "explodes" (unpacks) the WAR. That exploded directory is no managed by the RPM.

I've been using this method for a few years now. My conclusion: the benefits outweigh the drawbacks.

Here is what a SPEC file looks like:

Name: APPLICATION
Version: 1.2.3
Release: 1
Summary: The package for APPLICATION.
Group: Applications/Productivity
License: internal
Source: %{name}-%{version}.tar.gz
Requires: httpd
Requires: apache-tomcat
Requires: apache-tomcat-ojdbc5
Requires: apache-tomcat-jt400

BuildRoot: %{_tmppath}/%{name}-%{version}-build
BuildArch: noarch

%description
The package for APPLICATION

%prep
#%setup -n %{name}-dist-%{version}

%{__cat} << 'EOF' > %{name}.conf
<Location "/%{name}">
ProxyPass http://localhost:8080/%{name}
ProxyPassReverse http://localhost:8080/%{name}
</Location>
EOF

%{__cat} <<'EOF' > %{name}.xml
<?xml version="1.0" encoding="UTF-8"?>
<Context>
<Resource name="jdbc/DATABASE"
    auth="Container"
    type="javax.sql.DataSource"
    validationQuery="select sysdate from dual"
    validationInterval="30000"
    timeBetweenEvictionRunsMillis="30000"
    maxActive="100"
    minIdle="10"
    maxWait="10000"
    initialSize="10"
    removeAbandonedTimeout="60"
    removeAbandoned="true"
    minEvictableIdleTimeMillis="30000"
    jmxEnabled="true"
    username="USERNAME"
    password="PASSWORD"
    driverClassName="oracle.jdbc.driver.OracleDriver"
    url="DATABASEURL"/>
</Context>
EOF

%install
rm -Rf %{buildroot}
mkdir -p %{buildroot}/opt/apache-tomcat/webapps/
cp ../SOURCES/%{name}-%{version}.war %{buildroot}/opt/apache-tomcat/webapps/%{name}.war
mkdir -p %{buildroot}/opt/apache-tomcat/conf/Catalina/localhost
cp %{name}.xml %{buildroot}/opt/apache-tomcat/conf/Catalina/localhost/%{name}.xml
mkdir -p %{buildroot}/etc/httpd/conf.d/
cp %{name}.conf %{buildroot}/etc/httpd/conf.d/

%clean
rm -rf %{buildroot}

%files
%defattr(-,tomcat,tomcat,-)
/opt/apache-tomcat/webapps/%{name}.war
%config /etc/httpd/conf.d/%{name}.conf
%config /opt/apache-tomcat/conf/Catalina/localhost/%{name}.xml

%changelog
* Tue Sep 9 2014 - robert (at) meinit.nl
- Initial build.
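
To build the RPM, the SPEC file goes into the SPECS directory and the WAR (plus the tarball named in the Source: line) into SOURCES; then rpmbuild is run. A minimal sketch, assuming the standard ~/rpmbuild layout and that the SPEC is saved as APPLICATION.spec:

cp APPLICATION-1.2.3.war APPLICATION-1.2.3.tar.gz ~/rpmbuild/SOURCES/
rpmbuild -bb ~/rpmbuild/SPECS/APPLICATION.spec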

Puppet manifests for DTAP environments

Here is how I implement a manifest to install an application in different environments.

1. I package the application into an RPM.

2. I build a manifest (init.pp) that holds the shared properties:

# mkdir -p /etc/puppet/modules/APPLICATION/{manifest,file,template}s

# cat /etc/puppet/modules/APPLICATION/manifests/init.pp
class APPLICATION {
  package { "APPLICATION":
    ensure => present,
  }

  file { "/opt/apache-tomcat/conf/Catalina/localhost/APPLICATION.xml":
    content => template("/etc/puppet/modules/APPLICATION/templates/APPLICATION.xml.erb"),
    notify  => Service["apache-tomcat"],
    require => Package["APPLICATION"],
  }
}

Add the template to the module:

# cat /etc/puppet/modules/APPLICATION/templates/APPLICATION.xml.erb
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Context>
<Context>
<Resource name="jdbc/APPLICATION"
    auth="Container"
    type="javax.sql.DataSource"
    testWhileIdle="true"
    testOnBorrow="true"
    testOnReturn="false"
    validationQuery="select sysdate from dual"
    validationInterval="30000"
    timeBetweenEvictionRunsMillis="30000"
    maxActive="100"
    minIdle="10"
    maxWait="10000"
    initialSize="10"
    removeAbandonedTimeout="60"
    removeAbandoned="true"
    logAbandoned="true"
    minEvictableIdleTimeMillis="30000"
    jmxEnabled="true"
    username="<%= APPLICATIONUSERNAME %>"
    password="<%= APPLICATIONPASSWORD %>"
    driverClassName="oracle.jdbc.driver.OracleDriver"
    url="<%= APPLICATIONDBURL %>"/>
</Context>

Now I make a manifest for each environment:

# cat /etc/puppet/modules/APPLICATION/manifests/development.pp
class APPLICATION::development {
  $APPLICATIONDBURL = "jdbc:oracle:thin:@HOSTNAME:1521/SID"
  $APPLICATIONUSERNAME = "USERNAME"
  $APPLICATIONPASSWORD = "PASSWORD"

  include APPLICATION
}

# cat /etc/puppet/modules/APPLICATION/manifests/test.pp
class APPLICATION::test {
  $APPLICATIONDBURL = "jdbc:oracle:thin:@HOSTNAME:1521/SID"
  $APPLICATIONUSERNAME = "USERNAME"
  $APPLICATIONPASSWORD = "PASSWORD"

  include APPLICATION
}

# cat /etc/puppet/modules/APPLICATION/manifests/acceptance.pp
class APPLICATION::acceptance {
  $APPLICATIONDBURL = "jdbc:oracle:thin:@HOSTNAME:1521/SID"
  $APPLICATIONUSERNAME = "USERNAME"
  $APPLICATIONPASSWORD = "PASSWORD"

  include APPLICATION
}

# cat /etc/puppet/modules/APPLICATION/manifests/production.pp
class APPLICATION::production {
  $APPLICATIONDBURL = "jdbc:oracle:thin:@HOSTNAME:1521/SID"
  $APPLICATIONUSERNAME = "USERNAME"
  $APPLICATIONPASSWORD = "PASSWORD"

  include APPLICATION
}

On a machine, simply include:

include APPLICATION::development
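
For example, in site.pp you could map nodes to environments (the hostnames are hypothetical):

# cat /etc/puppet/manifests/site.pp
node 'app01.development.example.com' {
  include APPLICATION::development
}
node 'app01.production.example.com' {
  include APPLICATION::production
}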

Doing HTTPS requests from the command line with (basic) password authentication

Imagine you want to test a web service or site secured by SSL and a password. Here is how to do that from the command line.

In this example these values are used:

  • username: username
  • password: password
  • hostname: example.com

Generate the base64 encoded username and password combination:

echo -n "username:password"  | openssl base64 -base64

The output will be something like dXNlcm5hbWU6cGFzc3dvcmQ=. Use that string in a request like the one below and save it to a file called "input.txt":

GET /some/directory/some-file.html HTTP/1.1
Host: example.com
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=

N.B. The two empty lines at the end of the file are required; they terminate the HTTP request.

Next, throw that file into openssl:

(cat input.txt ; sleep 3) | openssl s_client -connect example.com:443

The output will show all headers and HTML content so you can grep all you want.
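
For comparison, curl can do the same request in one line and takes care of the base64 encoding itself (assuming curl is installed):

curl -i -u username:password https://example.com/some/directory/some-file.html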

CloudFlare and F5 LTM X-Forwarded-For and X-Forwarded-Proto

If you want an application (such as Hippo) to be able to determine which page is served over which protocol (http/https), you must insert an HTTP header when using an Apache ProxyPass.

When you use CloudFlare, the correct headers are inserted by default.

When you use an F5 loadbalancer, or in fact any loadbalancer or proxy, you must tell the loadbalancer to insert these two headers: X-Forwarded-For and X-Forwarded-Proto.

When you use a combination of the two, you have to make the loadbalancer a little smarter; it must detect whether a header is already present and only add it when it is not. That can be done with iRules.

The first iRule is to add "X-Forwarded-For" to the header:

when HTTP_REQUEST {
    if {![HTTP::header exists X-Forwarded-For]}{
        HTTP::header insert X-Forwarded-For [IP::remote_addr]
    }
}

The second one is a bit more complex; it needs to verify whether X-Forwarded-Proto is present and, if not, add it based on whether the original request came in on port 80 (http) or port 443 (https):

when HTTP_REQUEST {
    if {![HTTP::header exists X-Forwarded-Proto]}{
        if {[TCP::local_port] equals 80}{
            HTTP::header insert X-Forwarded-Proto "http"
        } elseif {[TCP::local_port] equals 443}{
            HTTP::header insert X-Forwarded-Proto "https"
        }
    }
}

Add these two iRules to your Virtual Server and, with or without CloudFlare (or any other CDN) in front, your application can find the two headers and decide how to rewrite traffic.
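
If the backend runs Apache, here is a minimal sketch of acting on the X-Forwarded-Proto header with mod_rewrite (the /secure/ path is hypothetical; this assumes mod_rewrite is enabled):

RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^/secure/(.*)$ https://%{HTTP_HOST}/secure/$1 [R=301,L]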

Zabbix Low Level Discovery for TCP ports on a host

You can let Zabbix do a portscan of a host and monitor the ports that are reported as open. I really like this feature; it lets you quickly add a host and monitor changes on its TCP ports.

You'd need to:

  1. Place a script on the Zabbix server and all Zabbix proxies.
  2. Be sure "nmap" is installed. That's a port scanning tool.
  3. Create a Discovery rule on a template.

Place a script

Place this script in /etc/zabbix/externalscripts/zabbix_tcpport_lld.sh and change the owner to the user that runs the Zabbix server (I presume zabbix:zabbix). Also change the mode to 750.

#!/bin/sh

# Print the Zabbix low level discovery JSON header.
echo '{'
echo ' "data":['

# Scan the host given as the first argument and print one entry per open port.
# The sed at the end strips the trailing comma so the output is valid JSON.
nmap -T4 -F ${1} | grep 'open' | while read portproto state protocol ; do
port=$(echo ${portproto} | cut -d/ -f1)
proto=$(echo ${portproto} | cut -d/ -f2)
echo '  { "{#PORT}":"'${port}'", "{#PROTO}":"'${proto}'" },'
done | sed '$ s/,$//'

echo ' ]'
echo '}'
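
Make the script owned by the Zabbix user and give it a test run (the target address is just an example):

chown zabbix:zabbix /etc/zabbix/externalscripts/zabbix_tcpport_lld.sh
chmod 750 /etc/zabbix/externalscripts/zabbix_tcpport_lld.sh
/etc/zabbix/externalscripts/zabbix_tcpport_lld.sh 192.0.2.10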

Install NMAP

Depending on your distribution:

RHEL/CentOS/Fedora: sudo yum install nmap
Debian: sudo apt-get install nmap

Configure a Discovery rule Zabbix

Select a template that you would like to add this discovery rule to. I've created a "Network" template that does a few pings and has this discovery rule.

I've listed the parameters that are required, the rest can be filled in however you like to use Zabbix.

Discovery

  • Name: Open TCP ports
  • Type: External check
  • Key: zabbix_tcpport_lld.sh[{HOST.CONN}]

This makes the variables {#PORT} and {#PROTO} available for use in the items and triggers.

Item Prototypes

  • Name: Status of port {#PORT}/{#PROTO}
  • Type: Simple check
  • Key: net.tcp.service[{#PROTO},,{#PORT}]
  • Type of information: Numeric (unsigned)
  • Data type: Boolean

Trigger Prototypes

  • Name: {#PROTO} port {#PORT}
  • Expression: {Template_network:net.tcp.service[{#PROTO},,{#PORT}].last(0)}=0

Now simply attach a host to this template to have it port-scanned and the open TCP ports that are found monitored.

Automounting Windows CIFS Shares

It can be very useful to mount a Windows (CIFS) share on a Linux system. It's super easy to set up automount to reach multiple servers, and multiple shares on those servers.

The goal is to tell automount to pick up the hostname and share from the path, so that a user can simply do:

cd /mnt/hostname/share

Follow these steps to set this up:

Install autofs:

yum install autofs

Add a few lines to auto.master:

echo "/mnt /etc/auto.smb-root.top" >> /etc/auto.master

This tells autofs that "/mnt" is managed by autofs.

Create /etc/auto.smb-root.top:

echo "* -fstype=autofs,rw,-Dhost=& file:/etc/auto.smb.sub" > /etc/auto.smb-root.top

Create /etc/auto.smb.sub (note the single quotes, so the shell does not expand ${host} and autofs can):

echo '* -fstype=cifs,rw,credentials=/etc/${host:-default}.cred ://${host}/&' > /etc/auto.smb.sub

Create a credentials file for each server, named after that server (here "hostname"):

cat << EOF > /etc/hostname.cred
username=WindowsUsername
password=WindowsPassword
domain=WindowsDomain
EOF

And create a file with default credentials:

cat << EOF > /etc/default.cred
username=WindowsUsername
password=WindowsPassword
domain=WindowsDomain
EOF

Restart autofs:

service autofs restart

Now you should be ready to cd into /mnt/hostname/share. You will notice this takes a second or so to complete; that second is used to mount the share before presenting you with the data.
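
For example, with a (hypothetical) server called fileserver01 offering a share called public:

cd /mnt/fileserver01/public
mount | grep cifs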

One drawback of this solution: the username/password is tied to the hostname, so if a share on that host requires a different username/password, that's a problem.

Popularity of Fedora Spins

Fedora has introduced Spins. These spins are ISOs that allow a user to quickly try a Live DVD of Fedora tailored to their needs.

Ordered by popularity, as measured by me using BitTorrent to upload these DVDs to the rest of the world. The Ratio column is the number of times the data has been uploaded.

Spin Ratio
Desktop i686 14.00
Desktop x86_64 13.80
MATE Compiz x86_64 11.50
LXDE i686 11.40
Design suite x86_64 10.30
Security x86_64 9.14
Xfce i686 9.03
MATE Compiz i686 8.89
Scientific KDE x86_64 8.54
Electronic Lab x86_64 8.24
Xfce x86_64 7.97
KDE i686 7.52
Design suite i686 7.50
KDE x86_64 7.48
Games x86_64 7.31
Electronic lab i686 6.69
LXDE x86_64 6.68
Security i686 6.63
Jam KDE x86_64 5.72
Games i686 5.64
SoaS x86_64 4.78
Scientific KDE i686 4.64
Robotics x86_64 4.11
SoaS i686 3.98
Original (no spin) x86_64 3.91
Jam KDE i686 3.58
Robotics i686 3.28
Original (no spin) i686 3.04
Original (no spin) source 2.54

Without taking the architecture (x86_64 or i686) into consideration, this table shows the most popular spins:

Spin x86_64 i686 Total
Desktop 14 13.8 27.80
MATE Compiz 11.5 8.89 20.39
LXDE 6.68 11.4 18.08
Design suite 10.4 7.50 17.9
Xfce 9.03 7.79 16.82
Security 9.14 6.63 15.77
KDE 7.48 7.52 15.00
Electronic lab 8.24 6.69 14.93
Scientific KDE 8.54 4.64 13.18
Games 7.31 5.64 12.94
Jam KDE 5.72 3.58 9.30
SoaS 4.78 3.98 8.76
Robotics 4.11 3.28 7.39
Original (no spin) 3.91 3.04 6.95
Original (no spin) source - - 2.54

And just to complete the overview, the popularity of the architectures:

Architecture Ratio
x86_64 110.84
i686 94.29

So, I'm sure some spins are here to stay.

It is interesting that the non-branded (no-spin) DVD is not that popular; most people choose a specific spin.

Some spins see more popularity on the i686 architecture:

  • LXDE
  • KDE

Zabbix LLD (low level discovery) SNMP examples

In my opinion it's not easy to understand the low level discovery mechanism that Zabbix now offers. It is, however, a very useful tool for setting up a simple template that monitors hundreds of items at once.

The Zabbix documentation about low level discovery is good for setting up one type of discovery: network interfaces.

Although that's a pretty important discovery, there are more tricks to use. I ran into a problem where a Juniper SRX ran out of disk space. This was not monitored, so I added a discovery rule to find all storage devices and see how full they are. I added this discovery rule to a template called "SNMP devices". This means all devices that have that template applied will be "discovered". Many of these devices will not have local storage, though. Not an issue; the discovery will simply fail for those devices.

I added this discovery rule:

  • Name: Available storage devices
  • Type: SNMPv2 Agent
  • Key: snmp.discovery.storage
  • SNMP OID: hrStorageDescr
  • SNMP community: {$SNMPCOMMUNITY} (This variable is set on host level and referred to here.)
  • Port: 161
  • Update interval (in sec): 3600 (Set this to 60 temporarily, to speed up the discovery process, but remember to set it back.)
  • Keep lost resources period (in days): 1
  • Filter: Macro: {#SNMPVALUE} Regexp: ^/dev/da|^/dev/bo (This ensures only mounts that have a physical underlying storage device are found, the rest will be ignored.)

That rule will discover devices such as these:

  1. /dev/da0s1a
  2. /dev/bo0s1e
  3. /dev/bo0s1f
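
To see beforehand what hrStorageDescr returns on a device, you can walk it with snmpwalk from net-snmp (the community and address are examples; the numeric OID is .1.3.6.1.2.1.25.2.3.1.3):

snmpwalk -v2c -c public 192.0.2.1 HOST-RESOURCES-MIB::hrStorageDescr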

Now that these devices have been discovered, you can get all kinds of information about them. This is done using the item prototypes. I created two; one to get the size of the device, the other to get the usage of the device. Those two can be used to calculate a percentage later, with a trigger prototype. Here is one of the two item prototypes:

  • Name: hrStorageSize {#SNMPVALUE}
  • Type: SNMPv2 Agent
  • Key: hrStorageSize.["{#SNMPINDEX}"]
  • SNMP OID: hrStorageSize.{#SNMPINDEX}
  • SNMP community: {$SNMPCOMMUNITY}
  • Port: 161
  • Type of information: Numeric (unsigned)
  • Data type: Decimal
  • Units: bytes
  • Use custom multiplier: 2048 (Because SNMP reports sectors here, which is less logical to understand in my opinion.)
  • Update interval (in sec): 1800 (Pretty long, but the size of a device will not change quickly.)

And this other item prototype is to see how many bytes (sectors) are used: (I cloned the previous one and changed only these values:)

  • Name: hrStorageUsed {#SNMPVALUE}
  • Key: hrStorageUsed.["{#SNMPINDEX}"]
  • SNMP OID: hrStorageUsed.{#SNMPINDEX}
  • Update interval (in sec): 60 (Shorter, this will change.)

Now check if these items are being found by looking at the "latest data" for the host. You should start to see a few items appear. In that case you can set up the trigger prototype. This is a bit complex, because I want to report on 95% full.

  • Name: Disk space available on {#SNMPVALUE} ({ITEM.LASTVALUE1}/{ITEM.LASTVALUE2})
  • Expression: 100*{Template_SNMP_Devices:hrStorageUsed.["{#SNMPINDEX}"].last(0)}/{Template_SNMP_Devices:hrStorageSize.["{#SNMPINDEX}"].last(0)}>95

That should start to alarm when the disk is 95% full or more.

I hope this article helps to understand the capabilities of Zabbix LLD. It's a great feature which I use to monitor blades, power supplies in chassis, network interfaces, disks and TCP ports. It makes templates much simpler which I really like.

Connect Rundeck to Active Directory

The Rundeck authentication documentation contains some errors and is not very explicit. That's why this information could help you connect your Rundeck installation to Active Directory.

Reconfigure Rundeck

Firstly, tell Rundeck to look at a different file for authentication information. Here is the relevant part of /etc/rundeck/profile:

export RDECK_JVM="-Djava.security.auth.login.config=/etc/rundeck/jaas-activedirectory.conf \
        -Dloginmodule.name=activedirectory \

As you can see "loginmodule.name" refers to "activedirectory". So, create a file /etc/rundeck/jaas-activedirectory.conf:

activedirectory {
    com.dtolabs.rundeck.jetty.jaas.JettyCachingLdapLoginModule required
    debug="true"
    contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
    providerUrl="ldaps://ldap.eu.company.com:636"
    bindDn="CN=Some User,CN=Users,DC=eu,DC=company,DC=com"
    bindPassword="MyPaSsWoRd"
    authenticationMethod="simple"
    forceBindingLogin="true"
    userBaseDn="dc=eu,dc=company,dc=com"
    userRdnAttribute="sAMAccountName"
    userIdAttribute="sAMAccountName"
    userPasswordAttribute="unicodePwd"
    userObjectClass="user"
    roleBaseDn="dc=eu,dc=company,dc=com"
    roleNameAttribute="cn"
    roleMemberAttribute="member"
    roleObjectClass="group"
    cacheDurationMillis="300000"
    reportStatistics="true";
};
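
To verify the bind DN, password and base DN before restarting Rundeck, a quick search with ldapsearch (from openldap-clients) helps; the account name in the filter is an example:

ldapsearch -x -H ldaps://ldap.eu.company.com:636 \
  -D "CN=Some User,CN=Users,DC=eu,DC=company,DC=com" -w MyPaSsWoRd \
  -b "dc=eu,dc=company,dc=com" "(sAMAccountName=someuser)" cn memberOf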

Import the Active Directory Certificate Authority certificates

Obtain (all) the certificates:

$ openssl s_client -showcerts -connect ldap.eu.company.com:636

In my case I got a chain of 3 certificates and had to import them all. I did this by saving each certificate in a file and running:
keytool -import -alias company1 -file company1 -keystore /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/security/cacerts -storepass changeit
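
You can check that the certificates ended up in the keystore (the alias to grep for follows the example above):

keytool -list -keystore /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/security/cacerts -storepass changeit | grep company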

Authorization

After connecting to Active Directory, I could authenticate but was missing authorizations, so edit /etc/rundeck/admin.aclpolicy:

description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
by:
  group: admin

---

description: Full access.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '*' # allow read/create all kinds
  adhoc:
    - allow: '*' # allow read/running/killing adhoc jobs
  job:
    - allow: '*' # allow read/write/delete/run/kill of all jobs
  node:
    - allow: '*' # allow read/run for all nodes
by:
  group: [MyWindowsGroupName]

---

description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
by:
  group: [MyWindowsGroupName]

There is an open bug: for now you have to edit web.xml to point to a group that every user who needs to log in to Rundeck is a member of.

...
        <security-role>
                <role-name>Some-AD-Group</role-name>
        </security-role>
...

User authentication on CentOS 6 with Active Directory based on hosts and groups

Follow this article when you would like users to be able to log in to a CentOS 6 host, authenticating to Active Directory based on:

  1. Group membership of a user (a group like "Linux Administrators") (or)
  2. A "host" attribute set per user to allow fine grained host-based permissions

This has a major benefit: you can add users to an administrative group, and besides that you can grant individual users permission to log in per host. Once you have set this up, you can manage permissions fully through Active Directory.

Install required packages

You need to install a single package:

yum install nss-pam-ldapd

Configuration

There are quite a few files to configure. I know that system-config-auth exists, but I don't know whether it gives the right results, so here are the files one by one:

/etc/nslcd.conf

# This program runs under this user and group, these are local/system (/etc/passwd) users.
uid nslcd
gid ldap
# The base is where to start looking for users. Your Windows colleagues will know this value.
base dc=nl,dc=example,dc=com
# This is the URI that describes how to connect to the LDAP server/active directory server. You may use a DNS round-robin name here to point to multiple Domain Controllers.
uri ldaps://ldap.nl.example.com:636/
# This is a user that can authenticate to Active Directory. It's used to connect to AD and query stuff.
binddn [email protected]
bindpw SoMePaSsWoRd
# Don't exactly know where I got these settings from, man-page has more information.
scope  group  sub
scope  hosts  sub
# If there are many results, paging is used.
pagesize 1000
# LDAP servers can refer you to another location; in my experience this slows down authentication dramatically.
referrals off
# This is the trick to match users from a certain group and users that have a host attribute filled in.
# Note that the value of the variable "host" should be set to the hostname where this file is installed.
filter passwd (&(objectClass=user)(!(objectClass=computer))(unixHomeDirectory=*)(|(host=mylinuxhost.nl.example.com)(memberOf=CN=Linux Administrators,OU=Groups,DC=nl,DC=example,DC=com)))
# Active Directory may store some values in attributes that need to be mapped.
map    passwd homeDirectory    unixHomeDirectory
filter shadow (&(objectClass=user)(!(objectClass=computer))(unixHomeDirectory=*))
map    shadow shadowLastChange pwdLastSet
# This filters out groups that have a "gidNumber" set. This typically only happens for groups that need to be available on Linux.
filter group  (&(objectClass=group)(gidNumber=*))
map    group  uniqueMember     member
# Some time limits.
bind_timelimit 3
timelimit 3
scope sub
# Secure Socket Layer, yes we do!
ssl on
tls_reqcert never

/etc/pam_ldap.conf

This file looks very much like /etc/nslcd.conf; I don't know why there are two, actually. It confuses people.

bind_timelimit 3
timelimit 3
network_timeout 3
bind_policy hard
scope sub
nss_base_passwd dc=nl,dc=example,dc=com
nss_base_shadow dc=nl,dc=example,dc=com
nss_base_group dc=nl,dc=example,dc=com
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup Group
nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute uniqueMember member
nss_map_attribute shadowLastChange pwdLastSet
pam_login_attribute uid
pam_filter objectClass=user
pam_password ad
pam_member_attribute member
pam_min_uid 10000
pam_groupdn CN=Linux Administrators,OU=Groups,DC=nl,DC=example,DC=com
base dc=nl,dc=example,dc=com
uri ldaps://ldap.nl.example.com:636/
binddn [email protected]
bindpw SoMePaSsWoRd
bind_timelimit 3
timelimit 3
scope sub
ssl on
tls_reqcert never

/etc/pam.d/system-auth-ac and /etc/pam.d/password-auth-ac

These two files have the same content.

auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_krb5.so
auth        required      pam_deny.so

account     [default=bad user_unknown=ignore success=ok authinfo_unavail=ignore] pam_krb5.so
account     required      pam_unix.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     required      pam_mkhomedir.so skel=/etc/skel umask=0077

/etc/nsswitch.conf

This determines which facility certain resolving queries are sent to. Make sure these lines are present:

passwd:     files ldap [NOTFOUND=return UNAVAIL=return] db
shadow:     files ldap [NOTFOUND=return UNAVAIL=return] db
group:      files ldap [NOTFOUND=return UNAVAIL=return] db
sudoers:    files ldap [NOTFOUND=return UNAVAIL=return] db

Starting of daemons

When all configuration changes are done, make sure to start nslcd and enable it at boot:

service nslcd start
chkconfig nslcd on
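
To verify that Active Directory users are now resolved through nslcd, query one with getent and id (the username is an example):

getent passwd someaduser
id someaduser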

Troubleshooting

There is a caching mechanism in nslcd. I don't know how to flush that cache, but it caches negative hits too. (So when a user is not found, it will keep on saying that the user is not found.) Waiting (a night) clears that cache, but that does not help you solve the problem today.

You may stop nslcd and run it in debug mode:

service nslcd stop
nslcd -d

This will show you all queries sent to the LDAP server.
