Articles

Adventures in Red Hat Enterprise Linux, CentOS, Fedora, OpenBSD and other open source solutions.

CloudFlare and F5 LTM X-Forwarded-For and X-Forwarded-Proto

If you want an application (such as Hippo) to be able to determine which page is served over which protocol (http/https), you must insert an HTTP header when using an Apache ProxyPass.
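
For example, a minimal sketch of an Apache virtual host that terminates HTTPS in front of the application could look like this (assuming mod_headers, mod_proxy and mod_proxy_http are loaded; the backend address is only an illustration):

<VirtualHost *:443>
  ...
  # mod_proxy adds X-Forwarded-For by itself; X-Forwarded-Proto has to be set explicitly.
  RequestHeader set X-Forwarded-Proto "https"
  ProxyPass / http://localhost:8080/
  ProxyPassReverse / http://localhost:8080/
</VirtualHost>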

When you use CloudFlare, the correct headers are inserted by default.

When you use an F5 loadbalancer, or in fact any loadbalancer or proxy, you must tell the loadbalancer to insert these two headers: X-Forwarded-For and X-Forwarded-Proto.

When you use a combination of the two, you have to make the loadbalancer a little smarter; it must detect whether the header is already present and add it only if it is not. That can be done with iRules.

The first iRule is to add "X-Forwarded-For" to the header:

when HTTP_REQUEST {
  if { ![HTTP::header exists X-Forwarded-For] } {
    HTTP::header insert X-Forwarded-For [IP::remote_addr]
  }
}

The second one is a bit more complex; it needs to verify whether X-Forwarded-Proto is present and, if not, add it based on whether the original request came in on port 80 (http) or port 443 (https):

when HTTP_REQUEST {
  if { ![HTTP::header exists X-Forwarded-Proto] } {
    if { [TCP::local_port] equals 80 } {
      HTTP::header insert X-Forwarded-Proto "http"
    } elseif { [TCP::local_port] equals 443 } {
      HTTP::header insert X-Forwarded-Proto "https"
    }
  }
}

Add these two iRules to your virtual server and, with or without CloudFlare (or any other CDN) in front, your application can find the two headers and decide how to rewrite traffic.

Zabbix Low Level Discovery for TCP ports on a host

You can let Zabbix do a portscan of a host and monitor the ports that are reported as open. I really like this approach; it lets you quickly add a host and monitor changes on its TCP ports.

You'd need to:

  1. Place a script on the Zabbix server and all Zabbix proxies.
  2. Be sure "nmap" is installed. That's a port scanning tool.
  3. Create a Discovery rule on a template.

Place a script

Place this script in /etc/zabbix/externalscripts/zabbix_tcpport_lld.sh and change the owner to the user that runs the Zabbix server (presumably zabbix:zabbix). Also change the mode to 750.

#!/bin/sh
# Zabbix low level discovery of open TCP ports, based on a quick nmap scan.

echo '{'
echo ' "data":['

# Scan the host given as the first argument and print one JSON object per open port.
nmap -T4 -F ${1} | grep 'open' | while read portproto state service ; do
  port=$(echo ${portproto} | cut -d/ -f1)
  proto=$(echo ${portproto} | cut -d/ -f2)
  echo '  { "{#PORT}":"'${port}'", "{#PROTO}":"'${proto}'" },'
done | sed '$ s/,$//'  # Strip the trailing comma to keep the JSON valid.

echo ' ]'
echo '}'
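
To set the ownership and mode mentioned above, and to verify the JSON output by hand (the IP address is just an example):

chown zabbix:zabbix /etc/zabbix/externalscripts/zabbix_tcpport_lld.sh
chmod 750 /etc/zabbix/externalscripts/zabbix_tcpport_lld.sh
/etc/zabbix/externalscripts/zabbix_tcpport_lld.sh 192.0.2.10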

Install NMAP

Depending on your distribution:

RHEL/CentOS/Fedora: sudo yum install nmap
Debian:             sudo apt-get install nmap

Configure a Discovery rule Zabbix

Select the template that you would like to add this discovery rule to. I've created a "Network" template that does a few pings and has this discovery rule.

I've listed the parameters that are required; the rest can be filled in however you like to use Zabbix.

Discovery

  • Name: Open TCP ports
  • Type: External check
  • Key: zabbix_tcpport_lld.sh[{HOST.CONN}]

This makes the variables {#PORT} and {#PROTO} available for use in the item and trigger prototypes.

Item Prototypes

  • Name: Status of port {#PORT}/{#PROTO}
  • Type: Simple check
  • Key: net.tcp.service[{#PROTO},,{#PORT}]
  • Type of information: Numeric (unsigned)
  • Data type: Boolean

Trigger Prototypes

  • Name: {#PROTO} port {#PORT}
  • Expression: {Template_network:net.tcp.service[{#PROTO},,{#PORT}].last(0)}=0

Now simply attach a host to this template to have it port-scanned and the discovered open TCP ports monitored.

Automounting Windows CIFS Shares

It can be very useful to mount a Windows (CIFS) share on a Linux system. Using automount, it's super easy to reach multiple servers and multiple shares on those servers.

The goal is to tell automount to pick up the hostname and share from the path, so that a user can simply do:

cd /mnt/hostname/share

Use these steps to set this up:

Install autofs:

yum install autofs

Add a few lines to auto.master:

echo "/mnt /etc/auto.smb-root.top" >> /etc/auto.master

This tells autofs that "/mnt" is managed by autofs.

Create /etc/auto.smb-root.top:

echo "* -fstype=autofs,rw,-Dhost=& file:/etc/auto.smb.sub" > /etc/auto.smb-root.top

Create /etc/auto.smb.sub:

echo "* -fstype=cifs,rw,credentials=/etc/${host:-default}.cred ://${host}/&" > /etc/auto.smb.sub

Create a credentials file for each server:

cat << EOF > /etc/hostname.cred
username=WindowsUsername
password=WindowsPassword
domain=WindowsDomain
EOF

And create a file with default credentials:

cat << EOF > /etc/default.cred
username=WindowsUsername
password=WindowsPassword
domain=WindowsDomain
EOF

Restart autofs:

service autofs restart

Now you should be ready to cd into /mnt/hostname/share. You will notice this takes a second or so to complete; that second is used to mount the share before presenting you with the data.
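
For example, with a hypothetical file server called fileserver01 that exports a share named public (using /etc/fileserver01.cred, or the default credentials):

cd /mnt/fileserver01/public
ls
mount | grep cifs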

One drawback of this solution: the username/password is tied to the hostname, so if a share on that host requires a different username/password, that's a problem.

Popularity of Fedora Spins

Fedora has introduced Spins. These spins are ISOs that allow a user to quickly try a Live DVD of Fedora tailored to their needs.

Ordered by popularity, as measured by me using BitTorrent to upload these DVDs to the rest of the world. The Ratio column shows the number of times the data has been uploaded.

Spin Ratio
Desktop i686 14.00
Desktop x86_64 13.80
MATE Compiz x86_64 11.50
LXDE i686 11.40
Design suite x86_64 10.30
Security x86_64 9.14
Xfce i686 9.03
MATE Compiz i686 8.89
Scientific KDE x86_64 8.54
Electronic Lab x86_64 8.24
Xfce x86_64 7.97
KDE i686 7.52
Design suite i686 7.50
KDE x86_64 7.48
Games x86_64 7.31
Electronic lab i686 6.69
LXDE x86_64 6.68
Security i686 6.63
Jam KDE x86_64 5.72
Games i686 5.64
SoaS x86_64 4.78
Scientific KDE i686 4.64
Robotics x86_64 4.11
SoaS i686 3.98
Original (no spin) x86_64 3.91
Jam KDE i686 3.58
Robotics i686 3.28
Original (no spin) i686 3.04
Original (no spin) source 2.54

Without taking the architecture (x86_64 or i686) into consideration, this table shows the most popular spins:

Spin x86_64 i686 Total
Desktop 14 13.8 27.80
MATE Compiz 11.5 8.89 20.39
LXDE 6.68 11.4 18.08
Design suite 10.4 7.50 17.9
Xfce 9.03 7.79 16.82
Security 9.14 6.63 15.77
KDE 7.48 7.52 15.00
Electronic lab 8.24 6.69 14.93
Scientific KDE 8.54 4.64 13.18
Games 7.31 5.64 12.94
Jam KDE 5.72 3.58 9.30
SoaS 4.78 3.98 8.76
Robotics 4.11 3.28 7.39
Original (no spin) 3.91 3.04 6.95
Original (no spin, source) - - 2.54

And just to complete the overview, the popularity of the architectures:

Architecture Ratio
x86_64 110.84
i686 94.29

So, I'm sure some spins are here to stay.

Interestingly, the non-branded (no-spin) DVD is not that popular; most people choose a specific spin.

Some spins see more popularity on the i686 architecture:

  • LXDE
  • KDE

Zabbix LLD (low level discovery) SNMP examples

In my opinion it's not easy to understand the low level discovery mechanism that Zabbix now offers. It is, however, a very useful tool to set up a simple template that monitors hundreds of items at once.

The Zabbix documentation about low level discovery covers one type of discovery well: network interfaces.

Although that's a pretty important discovery, there are more tricks to use. I ran into a problem where a Juniper SRX ran out of disk space. This was not monitored, so I added a discovery rule to find all storage devices and see how full they are. I added this discovery rule to a template called "SNMP devices". This means all devices that have that template applied will be "discovered". Many of these devices will not have local storage, though; that is not an issue, the discovery will simply fail for those devices.

I added this discovery rule:

  • Name: Available storage devices
  • Type: SNMPv2 Agent
  • Key: snmp.discovery.storage
  • SNMP OID: hrStorageDescr
  • SNMP community: {$SNMPCOMMUNITY} (This variable is set on host level and referred to here.)
  • Port: 161
  • Update interval (in sec): 3600 (Set this to 60 temporarily, to speed up the discovery process, but remember to set it back.)
  • Keep lost resources period (in days): 1
  • Filter: Macro: {#SNMPVALUE} Regexp: ^/dev/da|^/dev/bo (This ensures only mounts that have a physical underlying storage device are found; the rest will be ignored.)

That rule will discover devices such as these:

  1. /dev/da0s1a
  2. /dev/bo0s1e
  3. /dev/bo0s1f
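
To check in advance what such a discovery would find, you can walk hrStorageDescr by hand with snmpwalk (from net-snmp-utils); the community string and address below are placeholders:

snmpwalk -v2c -c public 192.0.2.1 HOST-RESOURCES-MIB::hrStorageDescr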

Now that these devices have been discovered, you can get all kinds of information about them. This is done using item prototypes. I created two: one to get the size of the device, the other to get the usage of the device. Those two can be used to calculate a percentage later, with a trigger prototype. Here is the first of the two item prototypes:

  • Name: hrStorageSize {#SNMPVALUE}
  • Type: SNMPv2 Agent
  • Key: hrStorageSize.["{#SNMPINDEX}"]
  • SNMP OID: hrStorageSize.{#SNMPINDEX}
  • SNMP community: {$SNMPCOMMUNITY}
  • Port: 161
  • Type of information: Numeric (unsigned)
  • Data type: Decimal
  • Units: bytes
  • Use custom multiplier: 2048 (Because SNMP reports in sectors here, which I find less intuitive.)
  • Update interval (in sec): 1800 (Pretty long, but the size of a device will not change quickly.)

The other item prototype shows how many bytes (sectors) are used. I cloned the previous one and changed only these values:

  • Name: hrStorageUsed {#SNMPVALUE}
  • Key: hrStorageUsed.["{#SNMPINDEX}"]
  • SNMP OID: hrStorageUsed.{#SNMPINDEX}
  • Update interval (in sec): 60 (Shorter, this will change.)

Now check whether these items are being found by looking at the "latest data" for the host. You should start to see a few items appear. Once they do, you can set up the trigger prototype. This is a bit more complex, because I want to alert at 95% full.

  • Name: Disk space available on {#SNMPVALUE} ({ITEM.LASTVALUE1}/{ITEM.LASTVALUE2})
  • Expression: 100*{Template_SNMP_Devices:hrStorageUsed.["{#SNMPINDEX}"].last(0)}/{Template_SNMP_Devices:hrStorageSize.["{#SNMPINDEX}"].last(0)}>95

That should start to alarm when the disk is 95% full or more.

I hope this article helps you understand the capabilities of Zabbix LLD. It's a great feature which I use to monitor blades, power supplies in chassis, network interfaces, disks and TCP ports. It makes templates much simpler, which I really like.

User authentication on CentOS 6 with Active Directory based on hosts and groups

Follow this article when you would like users to be able to log in to a CentOS 6 host, authenticating to Active Directory based on:

  1. Group membership of a user (a group like "Linux Administrators"), or
  2. A "host" attribute set per user to allow fine-grained host-based permissions

This has a major benefit: you can add users to an administrative group and, besides that, grant individual users permission to log in per host. Once you have set this up, you can manage permissions fully through Active Directory.

Install required packages

You need to install one single package:

yum install nss-pam-ldapd

Configuration

There are quite a few files to configure. I know that system-config-auth exists, but I don't know if it gives the right results, so here are the files one by one:

/etc/nslcd.conf

# This program runs under this user and group; these are local/system (/etc/passwd) users.
uid nslcd
gid ldap
# The base is where to start looking for users. Your Windows colleagues will know this value.
base dc=nl,dc=example,dc=com
# This is the URI that describes how to connect to the LDAP server/active directory server. You may use a DNS round-robin name here to point to multiple Domain Controllers.
uri ldaps://ldap.nl.example.com:636/
# This is a user that can authenticate to Active Directory. It's used to connect to AD and query stuff.
binddn [email protected]
bindpw SoMePaSsWoRd
# I don't know exactly where I got these settings from; the man page has more information.
scope  group  sub
scope  hosts  sub
# If there are many results, paging is used.
pagesize 1000
# LDAP servers can refer you to another location; in my experience this slows down authentication dramatically.
referrals off
# This is the trick to match users from a certain group and users that have a host-attribute filled in.
# Note that the value of the variable "host" should be set to the hostname where this file is installed.
filter passwd (&(objectClass=user)(!(objectClass=computer))(unixHomeDirectory=*)(|(host=mylinuxhost.nl.example.com)(memberOf=CN=Linux Administrators,OU=Groups,DC=nl,DC=example,DC=com)))
# Active Directory may store some values in attributes that need to be mapped.
map    passwd homeDirectory    unixHomeDirectory
filter shadow (&(objectClass=user)(!(objectClass=computer))(unixHomeDirectory=*))
map    shadow shadowLastChange pwdLastSet
# This selects only groups that have a "gidNumber" set. This typically only happens for groups that need to be available on Linux.
filter group  (&(objectClass=group)(gidNumber=*))
map    group  uniqueMember     member
# Some time limits.
bind_timelimit 3
timelimit 3
scope sub
# Secure Socket Layer, yes we do!
ssl on
tls_reqcert never
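
To verify the base DN, bind DN and password by hand before starting nslcd, you can use ldapsearch from openldap-clients; the bind account and user below are placeholders, and LDAPTLS_REQCERT=never mirrors the "tls_reqcert never" setting above:

yum install openldap-clients
LDAPTLS_REQCERT=never ldapsearch -x -H ldaps://ldap.nl.example.com:636 \
  -D "svc_ldapbind@nl.example.com" -w SoMePaSsWoRd \
  -b "dc=nl,dc=example,dc=com" "(sAMAccountName=jdoe)" dn memberOf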

/etc/pam_ldap.conf

This file looks very much like /etc/nslcd.conf; I don't know why there are two, actually. It confuses people.

bind_timelimit 3
timelimit 3
network_timeout 3
bind_policy hard
scope sub
nss_base_passwd dc=nl,dc=example,dc=com
nss_base_shadow dc=nl,dc=example,dc=com
nss_base_group dc=nl,dc=example,dc=com
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup Group
nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute uniqueMember member
nss_map_attribute shadowLastChange pwdLastSet
pam_login_attribute uid
pam_filter objectClass=user
pam_password ad
pam_member_attribute member
pam_min_uid 10000
pam_groupdn CN=Linux Administrators,OU=Groups,DC=nl,DC=example,DC=com
base dc=nl,dc=example,dc=com
uri ldaps://ldap.nl.example.com:636/
binddn [email protected]
bindpw SoMePaSsWoRd
bind_timelimit 3
timelimit 3
scope sub
ssl on
tls_reqcert never

/etc/pam.d/system-auth-ac and /etc/pam.d/password-auth-ac

These two files have the same content.

auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_krb5.so
auth        required      pam_deny.so

account     [default=bad user_unknown=ignore success=ok authinfo_unavail=ignore] pam_krb5.so
account     required      pam_unix.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     required      pam_mkhomedir.so skel=/etc/skel umask=0077

/etc/nsswitch.conf

This file determines which facility certain resolving queries are sent to. Make sure these lines are in place:

passwd:     files ldap [NOTFOUND=return UNAVAIL=return] db
shadow:     files ldap [NOTFOUND=return UNAVAIL=return] db
group:      files ldap [NOTFOUND=return UNAVAIL=return] db
sudoers:    files ldap [NOTFOUND=return UNAVAIL=return] db

Starting of daemons

When all configuration changes are done, make sure to start nslcd:

service nslcd start
chkconfig nslcd on
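
Once nslcd is running, a quick way to verify that lookups work is to resolve an Active Directory user through NSS (the account name is a placeholder):

getent passwd jdoe
id jdoe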

Troubleshooting

There is a caching mechanism in nslcd. I don't know how to flush that cache, and it caches negative hits too. (So when a user is not found, it will keep on saying that the user is not found.) Waiting (a night) clears that cache, but that does not help you solve the problem today.

You may stop nslcd and run it in debug mode:

service nslcd stop
nslcd -d

This will show you all queries sent to the LDAP server.

Add a Zabbix proxy to an existing Zabbix server

So, you have an existing and working Zabbix server and would like to add a Zabbix proxy? Here are the steps:

Install zabbix-proxy

On a new host (likely in some remote network) install the software package zabbix-proxy:

yum install zabbix-proxy
chkconfig zabbix-proxy on

We'll need to configure it, but that's a later step.

Create a database

Another easy step. Maybe you already have database infrastructure on the remote network; otherwise you can always install a database server locally:

yum install mysql-server
chkconfig mysqld on
service mysqld start
/usr/bin/mysqladmin -u root password 'MyPassword'

No matter where the database server is located, zabbix-proxy needs its own database:

CREATE DATABASE zabbix;
GRANT SELECT,INSERT,UPDATE,DELETE ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY 'MyZabbixPassword';

The database schema also needs to be populated. That schema can be found in the Zabbix source code package, in database/mysql/.

mysql -u root -p zabbix < data.sql
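
A quick sanity check that the zabbix user can reach its database and that the schema is in place:

mysql -u zabbix -pMyZabbixPassword zabbix -e 'SHOW TABLES;' | head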

Configure zabbix-proxy

There are a few items to configure in /etc/zabbix/zabbix_proxy.conf.

The Server should point to your existing Zabbix server.

Server=existing-zabbix-server.example.com

The Hostname should be set, and should match exactly what you configure later in the web interface of the existing Zabbix server.

Hostname=zabbix-proxy-01.example.com

You also need to configure the Zabbix proxy to be able to connect to the database:

DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=MyZabbixPassword

Configure zabbix-server

This is a very easy step; go to the web interface and click to:
Administration - DM
Click on Create Proxy

Fill in the name of the proxy, exactly as you set it on the proxy in /etc/zabbix/zabbix_proxy.conf under Hostname.

Start zabbix-proxy

service zabbix-proxy start

On the Zabbix server you should now see under Administration - DM that the "last seen" field is updated. (It might take a minute or so.)

Monitor a node with the zabbix-proxy

Now you may add hosts to Zabbix that are monitored by that proxy. In the configuration of such a host, select the newly configured proxy at "Monitored by proxy".

If you can't get the Zabbix proxy to be seen by the server, make sure that these ports are open:

Source                 Source port  Destination            Destination port  Description
Zabbix proxy           any          Zabbix server          10051/tcp         The Zabbix proxy sends the traffic to the Zabbix server over this port.
Zabbix proxy           any          Zabbix hosts (agents)  10050/tcp         The Zabbix proxy connects to monitored hosts on this port for "passive" items.
Zabbix hosts (agents)  any          Zabbix proxy           10051/tcp         The Zabbix hosts connect to the Zabbix proxy for "active" items.

Also check the logfile /var/log/zabbix_proxy.log on the Zabbix proxy.
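
A simple way to check the first of those flows from the Zabbix proxy itself (assuming telnet is installed) is:

telnet existing-zabbix-server.example.com 10051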

Apache Tomcat and Apache HTTP in combination with LDAP authentication

Apache Tomcat is a web application server, and it's rather logical to place Apache Tomcat behind Apache HTTP, the well-known web server.

Once you have Apache Tomcat running and a web application installed, install Apache HTTP:

yum install httpd

Add a file in /etc/httpd/conf.d/apache-tomcat.conf:

<Location />
ProxyPass http://localhost:8080/my-app/
ProxyPassReverse http://localhost:8080/my-app/
AuthBasicProvider ldap
AuthType Basic
AuthzLDAPAuthoritative on
AuthName "My App Authentication"
AuthLDAPURL "ldap://your.ldap-or-ad-server.com:3268/DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" STARTTLS
AuthLDAPBindDN "[email protected]"
AuthLDAPBindPassword MySuperSecurePassword
AuthLDAPRemoteUserIsDN off
Require valid-user
Require ldap-group CN=My App Group,DC=company,DC=com
</Location>

(Be sure to have some users in that "My App Group"; only those are allowed to authenticate.)

Edit the Apache Tomcat configuration to only allow connections from localhost. This is done in /opt/apache-tomcat/conf/server.xml. Find the port 8080 connector and add the 127.0.0.1 address:

    <Connector address="127.0.0.1" port="8080" protocol="HTTP/1.1"
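
After these changes, restart Tomcat and (re)start Apache HTTP. The Tomcat path below matches the server.xml location used above:

/opt/apache-tomcat/bin/shutdown.sh
/opt/apache-tomcat/bin/startup.sh
service httpd restart
chkconfig httpd on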

CloudFlare in front of Amazon Elastic Compute Cloud (EC2) webserver(s).

So you have set up one or more websites on one or more Amazon Elastic Compute Cloud (EC2) instances. There might be a few drawbacks that can easily be mitigated:

  • Amazon EC2 instances might not be very close to the location where most visitors come from, causing increased latency.
  • Having a single Amazon EC2 node is a bit fragile, configuration errors, webserver reloads and reboots will cause downtime.
  • Amazon EC2 machines (just as any other (Linux) machine) might be a bit vulnerable to attacks.
  • Traffic spikes might cause slow loading of pages.

The solution to these and other problems is free and easy to implement: it's called CloudFlare. CloudFlare describes the product like this:
CloudFlare protects and accelerates any website online. Once your website is a part of the CloudFlare community, its web traffic is routed through our intelligent global network.

So far my experience is very good.

Implementation takes 15 minutes or so. It's easy: simply register your site at CloudFlare, let it pick up all existing DNS records and change the nameserver (NS) records for the domain to the nameservers at CloudFlare.

All (web) traffic is routed through CloudFlare from that moment onward. This helps to:

  • Save traffic to a webserver/loadbalancer. CloudFlare gives away this bandwidth for free, thank you!
  • Speed up websites by implementing caching and compression.
  • Reduce the number of hops from a visitor to the website. The website is actually served from any of the world-wide locations hosted by CloudFlare.
  • Show the original website (with a warning) if the webserver is down.
  • Reduce comment-spam by filtering out or challenging potential spammers with a Captcha.

All in all, a great service that's very easy to implement and maintain.

Access thepiratebay.org from The Netherlands

Long story short: visit The Pirate Bay through Me in IT Consultancy instead of typing the URL directly and you'll be able to download torrents again, because from The Netherlands, using Ziggo or XS4All, it's going to be difficult to access thepiratebay.org.

Short story long: with Apache, mod_proxy and mod_proxy_html you can make another website available through a Location on your own website. If the web server is in a different region, chances are you'll be able to visit "blocked" websites.

To technically make this work, I used these ingredients:

  1. CentOS - I used 5, but 6 should work too.
  2. Apache - Just install it with "yum install httpd".
  3. mod_proxy_html - I created an SRC RPM for mod_proxy_html. (See the LoadModule note after this list.)
  4. configuration files - See below.
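
Depending on how the mod_proxy_html RPM installs itself, the module may still need to be loaded explicitly in the httpd configuration; the line would look roughly like this:

LoadModule proxy_html_module modules/mod_proxy_html.so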

The configuration looks like this:

<VirtualHost *:80>
...
ProxyRequests off

ProxyPass /thepiratebay.org/ http://thepiratebay.org/
ProxyPass /static.thepiratebay.org/ http://static.thepiratebay.org/
ProxyPass /rss.thepiratebay.org/ http://rss.thepiratebay.org/
ProxyPass /torrents.thepiratebay.org http://torrents.thepiratebay.org

<Location /thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /static.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /static.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /rss.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap / /rss.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>

<Location /torrents.thepiratebay.org/>
  ProxyPassReverse /
  ProxyHTMLEnable On
  ProxyHTMLURLMap http://thepiratebay.org /thepiratebay.org
  ProxyHTMLURLMap http://static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap http://rss.thepiratebay.org /rss.thepiratebay.org
  ProxyHTMLURLMap http://torrents.thepiratebay.org /torrents.thepiratebay.org
  ProxyHTMLURLMap //static.thepiratebay.org /static.thepiratebay.org
  ProxyHTMLURLMap / /torrents.thepiratebay.org/
  RequestHeader unset Accept-Encoding
</Location>
...
</VirtualHost>
