Me in IT UNIX/Linux Consultancy is based in Utrecht, The Netherlands, and specializes in UNIX and Linux consultancy. Experience with Red Hat Enterprise Linux (Red Hat Certified Architect), the Fedora Project, CentOS, OpenBSD and related Open Source products makes Me in IT UNIX/Linux Consultancy a great partner for implementing, maintaining and upgrading your environment.

Open Source software is an important aspect of any Linux distribution. Me in IT UNIX/Linux Consultancy tries to use Open Source software where possible and actively shares its experiences. In the articles section you will find many UNIX/Linux adventures shared for others to benefit from.

Connect Rundeck to Active Directory

The Rundeck authentication documentation contains some errors and is not very explicit, so this information may help you connect your Rundeck installation to Active Directory.

Reconfigure Rundeck

First, tell Rundeck to look at a different file for authentication information. Here is the relevant part of /etc/rundeck/profile:

export RDECK_JVM="-Djava.security.auth.login.config=/etc/rundeck/jaas-activedirectory.conf \
        -Dloginmodule.name=activedirectory \

As you can see, "loginmodule.name" refers to "activedirectory", so create a file /etc/rundeck/jaas-activedirectory.conf:

activedirectory {
  com.dtolabs.rundeck.jetty.jaas.JettyCachingLdapLoginModule required
  debug="true"
  contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
  providerUrl="ldaps://ldap.eu.company.com:636"
  bindDn="CN=Some User,CN=Users,DC=eu,DC=company,DC=com"
  bindPassword="MyPaSsWoRd"
  authenticationMethod="simple"
  forceBindingLogin="true"
  userBaseDn="dc=eu,dc=company,dc=com"
  userRdnAttribute="sAMAccountName"
  userIdAttribute="sAMAccountName"
  userPasswordAttribute="unicodePwd"
  userObjectClass="user"
  roleBaseDn="dc=eu,dc=company,dc=com"
  roleNameAttribute="cn"
  roleMemberAttribute="member"
  roleObjectClass="group"
  cacheDurationMillis="300000"
  reportStatistics="true";
};

Import the Active Directory Certificate Authority certificates

Obtain (all) the certificates:

$ openssl s_client -showcerts -connect ldap.eu.company.com:636

In my case I got a chain of three certificates and had to import them all. I did this by saving each certificate to a file and running:
keytool -import -alias company1 -file company1 -keystore /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/security/cacerts -storepass changeit
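If you don't want to copy the certificates out of the openssl output by hand, the chain can be split into files with a short awk script. This is a sketch using the same example hostname and Java keystore path as above; adjust both to your environment:

```shell
# Split each certificate from the chain into chain-0.pem, chain-1.pem, ...
# (ldap.eu.company.com is the example Domain Controller from above.)
openssl s_client -showcerts -connect ldap.eu.company.com:636 </dev/null 2>/dev/null |
  awk 'BEGIN { n = 0 }
       /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print > ("chain-" n ".pem") }
       /END CERTIFICATE/ { n++ }'

# Import every part of the chain into the JVM trust store.
for cert in chain-*.pem; do
  [ -f "$cert" ] || continue
  keytool -import -noprompt -alias "${cert%.pem}" -file "$cert" \
    -keystore /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/security/cacerts \
    -storepass changeit
done
```
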

Authorization

After connecting to Active Directory I could authenticate, but had no authorizations, so edit /etc/rundeck/admin.aclpolicy:

description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
by:
  group: admin

---

description: Full access.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '*' # allow read/create all kinds
  adhoc:
    - allow: '*' # allow read/running/killing adhoc jobs
  job:
    - allow: '*' # allow read/write/delete/run/kill of all jobs
  node:
    - allow: '*' # allow read/run for all nodes
by:
  group: [MyWindowsGroupName]

---

description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
by:
  group: [MyWindowsGroupName]

There is an open bug: you also have to edit web.xml so it points to a group that every user who needs to log in to Rundeck is a member of.

...
        <security-role>
                <role-name>Some-AD-Group</role-name>
        </security-role>
...

User authentication on CentOS 6 with Active Directory based on hosts and groups

Follow this article if you would like users to be able to log in to a CentOS 6 host, authenticating against Active Directory based on:

  1. Group membership of the user (a group like "Linux Administrators"), or
  2. A "host" attribute set per user, to allow fine-grained host-based permissions

This has a major benefit: you can add users to an administrative group, and besides that you can assign login permissions to a user per host. Once you have set this up, you can manage permissions entirely through Active Directory.

Install required packages

You need to install a single package:

yum install nss-pam-ldapd

Configuration

There are quite a few files to configure. I know that system-config-auth exists, but I don't know whether it gives the right results. So here are the files, one by one:

/etc/nslcd.conf

# This program runs under this user and group, these are local/system (/etc/passwd) users.
uid nslcd
gid ldap
# The base is where to start looking for users. Your Windows colleagues will know this value.
base dc=nl,dc=example,dc=com
# This is the URI that describes how to connect to the LDAP server/active directory server. You may use a DNS round-robin name here to point to multiple Domain Controllers.
uri ldaps://ldap.nl.example.com:636/
# This is a user that can authenticate to Active Directory. It's used to connect to AD and query stuff.
binddn [email protected]
bindpw SoMePaSsWoRd
# I don't remember exactly where I got these settings from; the man page has more information.
scope  group  sub
scope  hosts  sub
# If there are many results, paging is used.
pagesize 1000
# LDAP servers can refer you to another location; in my experience this slows down authentication dramatically.
referrals off
# This is the trick that matches users from a certain group, plus users that have a host attribute filled in.
# Note that the value of "host" should be set to the hostname where this file is installed.
filter passwd (&(objectClass=user)(!(objectClass=computer))(unixHomeDirectory=*)(|(host=mylinuxhost.nl.example.com)(memberOf=CN=Linux Administrators,OU=Groups,DC=nl,DC=example,DC=com)))
# Active Directory may store some values in attributes that need to be mapped.
map    passwd homeDirectory    unixHomeDirectory
filter shadow (&(objectClass=user)(!(objectClass=computer))(unixHomeDirectory=*))
map    shadow shadowLastChange pwdLastSet
# This selects only groups that have a "gidNumber" set. Typically only groups that need to be available on Linux have one.
filter group  (&(objectClass=group)(gidNumber=*))
map    group  uniqueMember     member
# Some time limits.
bind_timelimit 3
timelimit 3
scope sub
# Secure Socket Layer, yes we do!
ssl on
tls_reqcert never

/etc/pam_ldap.conf

This file looks very much like /etc/nslcd.conf; I don't know why there are two, actually. It confuses people.

bind_timelimit 3
timelimit 3
network_timeout 3
bind_policy hard
scope sub
nss_base_passwd dc=nl,dc=example,dc=com
nss_base_shadow dc=nl,dc=example,dc=com
nss_base_group dc=nl,dc=example,dc=com
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup Group
nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute uniqueMember member
nss_map_attribute shadowLastChange pwdLastSet
pam_login_attribute uid
pam_filter objectClass=user
pam_password ad
pam_member_attribute member
pam_min_uid 10000
pam_groupdn CN=Linux Administrators,OU=Groups,DC=nl,DC=example,DC=com
base dc=nl,dc=example,dc=com
uri ldaps://ldap.nl.example.com:636/
binddn [email protected]
bindpw SoMePaSsWoRd
bind_timelimit 3
timelimit 3
scope sub
ssl on
tls_reqcert never

/etc/pam.d/system-auth-ac and /etc/pam.d/password-auth-ac

These two files have the same content.

auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_krb5.so
auth        required      pam_deny.so

account     [default=bad user_unknown=ignore success=ok authinfo_unavail=ignore] pam_krb5.so
account     required      pam_unix.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     required      pam_mkhomedir.so skel=/etc/skel umask=0077

/etc/nsswitch.conf

This file determines which facility handles which resolver queries. Make sure these lines are present:

passwd:     files ldap [NOTFOUND=return UNAVAIL=return] db
shadow:     files ldap [NOTFOUND=return UNAVAIL=return] db
group:      files ldap [NOTFOUND=return UNAVAIL=return] db
sudoers:    files ldap [NOTFOUND=return UNAVAIL=return] db

Starting the daemon

When all configuration changes are done, make sure to start nslcd and enable it at boot:

service nslcd start
chkconfig nslcd on
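Once nslcd runs, name service lookups should return Active Directory users. A quick sanity check; "jdoe" is a made-up example account, replace it with one of your own AD users:

```shell
# An AD user should now resolve through NSS ("jdoe" is a hypothetical example):
getent passwd jdoe || echo "jdoe not found; check nslcd and the filters"
id jdoe 2>/dev/null

# Local accounts from /etc/passwd must of course keep working too:
getent passwd root
```
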

Troubleshooting

There is a caching mechanism in nslcd. I don't know how to flush that cache, and it caches negative hits too (so when a user is not found, it will keep saying the user is not found). Waiting (a night) clears the cache, but that does not help you solve the problem today.

You may stop nslcd and run it in debug mode:

service nslcd stop
nslcd -d

This will show you all queries sent to the LDAP server.

Add a Zabbix proxy to an existing Zabbix server

So, you have an existing, working Zabbix server and would like to add a Zabbix proxy? Here are the steps:

Install zabbix-proxy

On a new host (likely in some remote network), install the software package zabbix-proxy:

yum install zabbix-proxy
chkconfig zabbix-proxy on

We'll need to configure it, but that's a later step.

Create a database

Another easy step. Maybe you already have a database infrastructure in the remote network; otherwise you can always install a database server locally:

yum install mysql-server
chkconfig mysqld on
service mysqld start
/usr/bin/mysqladmin -u root password 'MyPassword'

No matter where the database server is located, zabbix-proxy needs its own database:

CREATE DATABASE zabbix;
GRANT SELECT,INSERT,UPDATE,DELETE ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY 'MyZabbixPassword';

The database schema also needs to be populated. The schema can be found in the Zabbix source code package, in database/mysql/:

mysql -u root -p zabbix < data.sql

Configure zabbix-proxy

There are a few items to configure in /etc/zabbix/zabbix_proxy.conf.

The Server should point to your existing Zabbix server.

Server=existing-zabbix-server.example.com

The Hostname should be set, and must exactly match what you configure later in the existing Zabbix server's web interface.

Hostname=zabbix-proxy-01.example.com

You also need to configure the Zabbix proxy to be able to connect to the database:

DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=MyZabbixPassword

Configure zabbix-server

This is a very easy step: go to the web interface, navigate to Administration - DM and click Create Proxy.

Fill in the name of the proxy exactly as you set it on the proxy in /etc/zabbix/zabbix_proxy.conf under Hostname.

Start zabbix-proxy

service zabbix-proxy start

On the Zabbix server you should now see under Administration - DM that the "last seen" field is updated. (It might take a minute or so.)

Monitor a node with the zabbix-proxy

Now you may add hosts to zabbix that are monitored by that proxy. On the configuration of that host, select the newly configured proxy at "Monitored by proxy".

If you can't get the Zabbix proxy to be seen by the server, make sure that these ports are open:

Source                 Source port  Destination            Destination port  Description
Zabbix proxy           any          Zabbix server          10051/tcp         The Zabbix proxy sends its traffic to the Zabbix server over this port.
Zabbix proxy           any          Zabbix hosts (agents)  10050/tcp         The Zabbix proxy connects to monitored hosts on this port for "passive" items.
Zabbix hosts (agents)  any          Zabbix proxy           10051/tcp         The Zabbix hosts connect to the Zabbix proxy for "active" items.

Also check the logfile /var/log/zabbix_proxy.log on the Zabbix proxy.
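The firewall openings listed above can be checked from the proxy without installing extra tools, using bash's /dev/tcp pseudo-device. A rough sketch, with the example server name from earlier:

```shell
# From the Zabbix proxy: can we reach the server on 10051/tcp?
if timeout 3 bash -c 'cat < /dev/null > /dev/tcp/existing-zabbix-server.example.com/10051' 2>/dev/null
then
  echo "10051/tcp open"
else
  echo "10051/tcp closed or filtered"
fi
```
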

Apache Tomcat and Apache HTTP in combination with LDAP authentication

Apache Tomcat is a web application server, and it's rather logical to place Apache Tomcat behind Apache HTTP, the well-known webserver.

Once you have Apache Tomcat running and a web application installed, install Apache HTTP:

yum install httpd

Add a file in /etc/httpd/conf.d/apache-tomcat.conf:

<Location />
ProxyPass http://localhost:8080/my-app/
ProxyPassReverse http://localhost:8080/my-app/
AuthBasicProvider ldap
AuthType Basic
AuthzLDAPAuthoritative on
AuthName "My App Authentication"
AuthLDAPURL "ldap://your.ldap-or-ad-server.com:3268/DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" STARTTLS
AuthLDAPBindDN "[email protected]"
AuthLDAPBindPassword MySuperSecurePassword
AuthLDAPRemoteUserIsDN off
Require valid-user
Require ldap-group CN=My App Group,DC=company,DC=com
</Location>

(Be sure to have some users in that "My App Group", only those are allowed to authenticate.)

Edit the Apache Tomcat configuration to only allow connections from localhost. This is done in /opt/apache-tomcat/conf/server.xml. Find the port 8080 connector and add the 127.0.0.1 address:

    <Connector address="127.0.0.1" port="8080" protocol="HTTP/1.1"
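After restarting Tomcat you can verify that port 8080 only answers on the loopback address. A hedged example: 192.0.2.10 stands in for the host's public address and /my-app/ for your application path; both are assumptions, replace them with your own:

```shell
# Through the loopback address the application should respond:
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/my-app/ \
  || echo "no listener on 127.0.0.1:8080"

# On the public address the connection should now be refused:
curl -s --connect-timeout 3 http://192.0.2.10:8080/ || echo "refused, as intended"
```
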

CloudFlare in front of Amazon Elastic Compute Cloud (EC2) webserver(s).

So you have set up one or more websites on one or more Amazon Elastic Compute Cloud instances. There might be a few drawbacks, which can easily be mitigated:

  • Amazon EC2 instances might not be very close to the location where most visitors come from, causing increased latency.
  • Having a single Amazon EC2 node is a bit fragile; configuration errors, webserver reloads and reboots will cause downtime.
  • Amazon EC2 machines (just as any other (Linux) machine) might be a bit vulnerable to attacks.
  • Traffic spikes might cause slow loading of pages.

The solution to these and other problems is free and easy to implement: CloudFlare. CloudFlare describes the product like this:
CloudFlare protects and accelerates any website online. Once your website is a part of the CloudFlare community, its web traffic is routed through our intelligent global network.

So far my experience is very good.

Implementation takes about 15 minutes. It's easy: simply register your site at CloudFlare, let it pick up all existing DNS records and change the nameserver (NS) records for the domain to nameservers at CloudFlare.

All (web) traffic is routed through CloudFlare from that moment onward. This helps to:

  • Save traffic to a webserver/loadbalancer. CloudFlare gives away this bandwidth for free, thank you!
  • Speed up websites by implementing caching and compression.
  • Reduce the number of hops from a visitor to the website. The website is actually served from any of the world-wide locations hosted by CloudFlare.
  • Show the original website (with a warning) if the webserver is down.
  • Reduce comment-spam by filtering out or challenging potential spammers with a Captcha.

All in all, a great service that's very easy to implement and maintain.

Quickly get an idea of the (least) busy time on an Apache webserver

If you ever need to determine the least or most busy time on an Apache webserver, you can use this set of Linux commands to get a report:

cut -d: -f 2 /var/log/httpd/*access_log* | sort | uniq -c

The output will be something like this:

290873 00
184948 01
115479 02
84129 03
71059 04
67632 05
88071 06
149285 07
275537 08
431069 09
529708 10
586744 11
599993 12
591466 13
565942 14
585796 15
611814 16
639781 17
625244 18
622163 19
574962 20
558504 21
503386 22
412359 23

The first column is the number of hits on the webserver; the second column is the hour of the day. In this example the 5th hour (05:00 - 05:59) is the least busy and the 17th hour (17:00 - 17:59) is the busiest.
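To avoid scanning the table by eye, the same report can be sorted by hit count; the first and last lines of the output are then the least and most busy hours:

```shell
# Count hits per hour, sort numerically, keep only the extremes.
cut -d: -f 2 /var/log/httpd/*access_log* | sort | uniq -c | sort -n | sed -n '1p;$p'
```
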

Access thepiratebay.org from The Netherlands

Long story short: visit The Pirate Bay through Me in IT Consultancy instead of typing the URL directly and you'll be able to download torrents again, because from The Netherlands, using Ziggo or XS4All, it's difficult to access thepiratebay.org.

Short story long: with Apache, mod_proxy and mod_proxy_html you can make another website available through a Location on your own website. If your web server is in a different region, chances are you'll be able to visit "blocked" websites.

To technically make this work, I used these ingredients:

  1. CentOS - I used 5, but 6 should work too.
  2. Apache - Just install it with "yum install httpd".
  3. mod_proxy_html - I created an SRC RPM for mod_proxy_html.
  4. configuration files - See below.

The configuration looks like this:

<VirtualHost *:80>
...
ProxyRequests off

ProxyPass /thepiratebay.org/ http://thepiratebay.org/
ProxyPass /static.thepiratebay.org/ http://static.thepiratebay.org/
ProxyPass /rss.thepiratebay.org/ http://rss.thepiratebay.org/
ProxyPass /torrents.thepiratebay.org http://torrents.thepiratebay.org

<Location /thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /static.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /static.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /rss.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /rss.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /torrents.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap / /torrents.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>
...
</VirtualHost>

Zabbix triggers with "flap-detection" and a grace period.

Monitoring an environment gives control, so it's pretty important. But setting up a monitoring system can be a challenge: it should not alert too quickly, but also not too slowly.

Nagios uses "flap detection" to prevent many ERRORs and OKs from being sent right after each other. Zabbix calls this "hysteresis". Zabbix's hysteresis is rather difficult to understand, so I'd like to share some triggers I have set up that implement both flap detection/hysteresis and grace.

Grace can be defined like this: once a value has crossed a threshold and triggered an alert, it must move a bit past the threshold in the opposite direction before the trigger recovers. I know, it's not easy to understand... Let's look at some examples.

Values that should stay below a threshold

With values that need to stay below a threshold, like CPU load, the number of users logged in or the number of processes running:

({TRIGGER.VALUE}=0&{TEMPLATE:CHECK[ITEM].min(300)}>ALERTVALUE)|({TRIGGER.VALUE}=1&{TEMPLATE:CHECK[ITEM].max(300)}<RECOVERYVALUE)

Just to clarify the different parts of the trigger:

  1. {TRIGGER.VALUE} makes sure the first part (before the |) is evaluated when there is no alert; the part after the | is evaluated when the trigger is in alert state.
  2. .min(300) makes sure the value has been higher than ALERTVALUE for 300 seconds before alerting.
  3. The last part (after the |) makes the trigger recover once the measured value has been lower than RECOVERYVALUE for 300 seconds.

For example, CPU load with an ALERTVALUE of 5 and a RECOVERYVALUE of 4:

({TRIGGER.VALUE}=0&{Template_Linux:system.cpu.load[,avg1].min(300)}>5)|({TRIGGER.VALUE}=1&{Template_Linux:system.cpu.load[,avg1].max(300)}<4)

Values that should stay above a threshold

With values that need to stay above a threshold, like the percentage of disk space free, the number of inodes free or the number of httpd processes running:

({TRIGGER.VALUE}=0&{TEMPLATE:CHECK[ITEM].max(300)}<ALERTVALUE)|({TRIGGER.VALUE}=1&{TEMPLATE:CHECK[ITEM].min(300)}>RECOVERYVALUE)

For example, free disk space on /var in percent with an ALERTVALUE of 10 and a RECOVERYVALUE of 11:

({TRIGGER.VALUE}=0&{Template_Linux:vfs.fs.size[/var,pfree].max(300)}<10)|({TRIGGER.VALUE}=1&{Template_Linux:vfs.fs.size[/var,pfree].min(300)}>11)

These rather complex triggers prevent spikes of load or disk usage from causing an alert, but the drawback is that you might miss certain interesting spikes too. Overall, my opinion is that a monitoring system should not drive people crazy, because alerts get ignored when too many are received.

Examples for batch on linux

Linux has a few ways to schedule jobs for execution. I am sure most people are familiar with crontab and at, but batch is lesser known.

"batch" can be used to: (from the man-page on batch)
executes commands when system load levels permit; in other words, when the load average drops below 0.8, or the value specified in the invocation of atrun.

So:

  • crontab is used for periodic scheduling.
  • at is used for executing something once at a specific time.
  • batch can be used to execute commands when your system has resources.
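A small illustration of the mechanism: batch compares the system load average, which you can inspect yourself in /proc/loadavg, against its threshold. The script path below is a made-up example:

```shell
# The 1-minute load average that batch compares against its threshold (0.8 by default):
awk '{ print "1-minute load average:", $1 }' /proc/loadavg

# Queue a job; it will run as soon as the load permits
# (/usr/local/bin/heavy-report.sh is a hypothetical script):
echo "/usr/local/bin/heavy-report.sh" | batch 2>/dev/null || echo "batch/atd not available"
```
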

You can also combine crontab and batch. Imagine you need to run a sequence of commands in a specific order every hour. crontab does not guarantee that one command has finished before it executes the next.
batch can be used from crontab like so:

crontab -l
0 * * * * echo /usr/local/bin/prepare-something.sh | /usr/bin/batch
1 * * * * echo /usr/local/bin/process-something.sh | /usr/bin/batch
2 * * * * echo /usr/local/bin/report-something.sh | /usr/bin/batch

This batches the three commands in a specific order, one after the other, when the system load is not too high.

One specific situation where I use this: Drupal needs to run a program (cron.php) every hour. crontab would be perfect for that, but when the load is too high, it's no problem if this program runs a little later. This is what I have set up:

0 * * * * echo '/usr/bin/wget -o /dev/null -O /dev/null http://1.example.com/cron.php' | /usr/bin/batch
1 * * * * echo '/usr/bin/wget -o /dev/null -O /dev/null http://2.example.com/cron.php' | /usr/bin/batch
2 * * * * echo '/usr/bin/wget -o /dev/null -O /dev/null http://3.example.com/cron.php' | /usr/bin/batch

This ensures that cron.php is run every hour, but not when the system load is too high (0.8 or more). One disadvantage of this solution: when your system is overloaded for a long period of time, these batch jobs pile up, and once the load drops below 0.8, all batched commands are executed. Fortunately Drupal's cron.php does not consume that many resources when it's run twice.

Release scheme for RPM based Linux distributions

It can be rather confusing what the differences and similarities between Fedora, Red Hat Enterprise Linux and CentOS are, especially across versions. This article explains the release schedules of, and the relations between, the various RPM based Linux distributions.

Fedora is a Red Hat sponsored community project. Fedora is released approximately every 6 months, and each release is "supported" (supplied with updates) for only about 13 months. Clearly this is a development distribution.

Red Hat picks up a Fedora version, adds a few patches and calls the result "Red Hat Enterprise Linux". Red Hat releases more conservatively, every 2 years or so, and supports a release for about 5 years after releasing it, making this distribution much more "enterprise".

Fedora - Red Hat release relationship

Fedora release   Red Hat release
Fedora Core 3    Red Hat Enterprise Linux 4
Fedora Core 6    Red Hat Enterprise Linux 5
Fedora 13        Red Hat Enterprise Linux 6

CentOS picks up the source code that Red Hat publishes for Red Hat Enterprise Linux. The CentOS community patches the artwork and very few other things. CentOS "supports" (provides updates for) a release for as long as Red Hat supplies updates to Red Hat Enterprise Linux.
Interesting to know: once you have chosen a certain major version of CentOS, "yum update" automatically brings you to the most recent minor version of that release. So if you install "CentOS 5.0" and run "yum update", you will automatically end up with "CentOS 5.7" (at the time of this writing).

