Simple iptables Rules for Ubuntu/Debian VPS

The following iptables rules are a starting point for adding basic firewall security to a public-facing server, such as a public VPS. The primary focus is to stop all inbound traffic other than SSH, which is required for shell access.

The biggest issue with public VPS providers is that some iptables features are often disabled – many OpenVZ container providers don’t allow state checking in iptables, for example. If you have one of these VPSes you’ll likely see the following error:

iptables: No chain/target/match by that name.

These rules are engineered to work on most VPSes where iptables is installed.

The following rules block all incoming connections except SSH, including ping requests. The default OUTPUT policy below is ACCEPT, so all outbound traffic is allowed; the explicit outbound rules for HTTP, HTTPS and DNS ensure this essential traffic keeps working even if you later tighten the OUTPUT policy to DROP.

See the links at the bottom of the page for a more in depth look at iptables rules.

# Loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Inbound SSH
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT

# Outbound HTTP/S
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --sport 80 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --sport 443 -j ACCEPT

# Outbound DNS
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT -p udp --sport 53 -j ACCEPT

# default policy
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -P FORWARD DROP
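
Once the rules are in place, you can verify them against the running configuration – this only inspects the rules, counters and policies and makes no changes:

iptables -L -n -v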

If you’re using Ubuntu or Debian, you can easily make the rules persist across reboots:

apt-get install iptables-persistent
netfilter-persistent save
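
iptables-persistent restores the saved rules at boot from /etc/iptables/rules.v4 (and rules.v6 for IPv6). If you prefer to manage the file yourself, iptables-save writes the same format:

iptables-save > /etc/iptables/rules.v4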


LXC 2.x/LXD Cheat Sheet

Category : Cheat Sheets

Here are some of the most used commands for creating and managing an LXC/LXD host and containers. It’s assumed that you have a working environment and a privileged SSH connection to the LXC server for issuing the commands.

Basics

Start and Stop a LXC Container

Starting out with the basics here – starting and stopping an LXC container.

lxc start [CONTAINER]
lxc stop [CONTAINER]

List Containers

Display a list of container details for started and stopped containers. The name field is what’s usually used in other commands to reference the specific container.

lxc list

Create Container from Image

There are further details below on managing images and remote image repositories, which you’ll need when creating a new container.

This example will create a new container and start it using the Ubuntu 16.04 image. Change [CONTAINER] to be the name of the new container.

lxc launch ubuntu:16.04 [CONTAINER]
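
For example, to create and start a container named web01 (an illustrative name):

lxc launch ubuntu:16.04 web01
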
Delete Container

Removing a container cannot be undone – beware!

lxc delete [CONTAINER]

Images

Linux Containers are created from templates or images that are stored locally or downloaded from remote servers.

List Image Repositories

Remote LXC servers and remote image servers can be added to your LXC installation and used as sources to download images from when required. Run the below command to see what sources you have.

lxc remote list

List available images

Images that have been downloaded, imported or cached are stored locally in the image repository. The output will list the image name, size and various other details.

lxc image list

Remote images that reside on an image repository or remote LXC server can also be listed. This is great for seeing what images are available when creating new containers. Change [REMOTE_NAME] to the name of the image repository shown by the remote list command. Note: you’ll need to keep the : symbol at the end.

lxc image list [REMOTE_NAME]:
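
For example, assuming the default ubuntu: remote is present on your installation:

lxc image list ubuntu:
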
Get image details

More details are available for an image than what’s displayed by image list. The below command will show all information known about the image. Replace [IMAGE_NAME] with a valid image alias or fingerprint from the image list command, such as ubuntu-xenial.

lxc image info [IMAGE_NAME]

Add a new Image Repository

There are various public image repositories that can be added to your LXC installation. LinuxContainers.org is a common one and hosts several distribution types. Replace [NAME] with the text name you’d like to give to the repository (it’s just an alias) and [HOST] with the address of the repository.

lxc remote add [NAME] [HOST]

For example

lxc remote add lxc-org images.linuxcontainers.org

Delete a local image

Replace [IMAGE_NAME] with the alias or fingerprint of the image.

lxc image delete [IMAGE_NAME]

Create new Image from Running Container

You can create a new image from an existing container with a simple command. It’s important to remember that the created image will contain everything the running container held – SSH keys, data and so on – so make sure you clean up anything sensitive before running this command.

lxc publish [CONTAINER] --alias [ALIAS]

You’ll need to change [CONTAINER] to your Linux container name and [ALIAS] to the name you’d like to use for your new image.
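
For example, with a hypothetical alias nginx-proxy-image (note that you may need to stop the container first, or pass --force to publish a running one):

lxc publish nginx-proxy --alias nginx-proxy-image
lxc launch nginx-proxy-image nginx-proxy-2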

Configuration

All the below instructions will assume you’re referring to a container alias called [CONTAINER]. You’ll need to replace this, wherever it’s seen, with the name of the Linux Container you’re acting on.

Any config command using set can be altered to use get to retrieve the current setting. If get returns nothing, the value has not been manually set and the default value will be used.
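
As a quick illustration with a hypothetical container named web01:

# Read the current value (empty output means the default is in effect)
lxc config get web01 boot.autostart

# Clear a manually set value, reverting to the default
lxc config unset web01 boot.autostart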

Auto Start Container

Set the container to start automatically when the LXC service starts – usually at host boot time. Use 1 to enable and 0 to disable.

lxc config set [CONTAINER] boot.autostart 1

You can also use boot.autostart.delay to set a delay in seconds after starting this container, before starting the next.

lxc config set [CONTAINER] boot.autostart.delay 30

Start-up can be ordered using boot.autostart.order to prioritise which containers are started first. Higher numbers are started first.

lxc config set [CONTAINER1] boot.autostart.order 10
lxc config set [CONTAINER2] boot.autostart.order 8

CPU Limits

See CPU Resource Limits for more information on constraining CPU resources.


Setting CPU Resource Limits With LXC

Category : How-to

Linux Container (LXC) management is now often handled by LXD, the Canonical-led project built on top of LXC.

LXD offers a suite of options for controlling Linux Container resources and setting limits where appropriate. This post covers setting constraints on CPU; similar options are available for limiting almost any other sort of resource, such as network, disk I/O and memory.

Available Limits

CPU management is done in one of four ways, depending on your expected workload and host CPU management regime.

  1. Number of CPUs – set the number of CPU cores that this container can use and automatically distribute CPU time amongst guests when there is competition for CPU time. The value used is an integer, for example 2.
  2. Specific cores – pin the container to specific physical core(s) and distribute available CPU time between containers when multiple containers use the same cores. The value used is an integer or a range and can be comma separated, for example 2, 0-1 or 0-1,3,5-9.
  3. Capped share – allow a specified percentage of CPU time for the container, or more if it’s available. When the host is not under load a container can use any available CPU; when there is contention for CPU the container is limited to the specified amount. The container will see all host CPU cores (in top, for example).
  4. Limited time share – limit the container’s CPU time to whatever is specified out of each 200ms slice. Even if more CPU is available, only the specified share of each 200ms slice is allowed. The container will see all host CPU cores (in top, for example).

Setting Limits

Setting limits is done with the lxc command. There are two options: limits.cpu for points 1 and 2 above, or limits.cpu.allowance for points 3 and 4.

lxc config set [CONTAINER] limits.cpu [VALUE]
  • [CONTAINER] is the name of the container – can be obtained from lxc list if you’re unsure.
  • [VALUE] is a valid value from point 1 or 2 above.

OR

lxc config set [CONTAINER] limits.cpu.allowance [VALUE]
  • [CONTAINER] is the name of the container – can be obtained from lxc list if you’re unsure.
  • [VALUE] is a valid value from point 3 or 4 above.

CPU Limit Examples

Set the container nginx-proxy to use any 2 CPUs on the host.

lxc config set nginx-proxy limits.cpu 2

Set the container nginx-proxy to use physical CPUs 0, 3, 7, 8 and 9 on the host.

lxc config set nginx-proxy limits.cpu 0,3,7-9

Set the container nginx-proxy to use 20% of the available CPU on the host or more if it’s available.

lxc config set nginx-proxy limits.cpu.allowance 20%

Set the container nginx-proxy to use no more than 50% of the available CPU on the host, or 100ms for every 200ms of CPU time available.

lxc config set nginx-proxy limits.cpu.allowance 100ms/200ms

You can view /proc/cpuinfo to see the cores available inside the container; however, it will not reflect any additional scheduling limits or priorities.

grep processor /proc/cpuinfo
processor: 0
processor: 1
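
Alternatively, nproc reads the scheduler’s CPU affinity and should reflect the same restriction; with the two-core example above it would print 2:

nproc
2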

CPU Priority

The last option around CPU limiting is the priority of CPU time. This option only kicks in when the host is overcommitted on CPU resource and containers are fighting for CPU time. This can either be on a single core (if using above points 1 or 2) or system wide (if no CPU limiting is in place or using above points 3 or 4).

Available values are 0 to 10 inclusive; lower numbers mean a lower priority, and a container with a higher number will get CPU time before one with a lower number.

The below command sets the container nginx-proxy to have a CPU priority of 5.

lxc config set nginx-proxy limits.cpu.priority 5

The below command sets the container php-backend to have a CPU priority of 2 and therefore would get less CPU time than container nginx-proxy when CPU is under contention.

lxc config set php-backend limits.cpu.priority 2
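
To review all of the limits currently set on a container, show its full configuration:

lxc config show nginx-proxy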

Add systemd Startup Script For CouchDB

Currently, version 2.0 of CouchDB doesn’t come with any form of startup script. I’m sure that as the CouchDB 2 branch becomes more mature and is added to the various software repositories, startup scripts will be shipped as standard, but until then we have to make do.

The below script uses a cat heredoc to create a systemd unit file with the required content in the systemd config directory. Run it to create the startup file. You’ll need to change /usr/bin/couchdb to be the location of your couchdb executable. Note that > is used rather than >> so that re-running the script doesn’t append a duplicate unit.

cat <<EOT > /etc/systemd/system/couchdb.service
[Unit]
Description=CouchDB service
After=network.target

[Service]
Type=simple
User=couchdb
ExecStart=/usr/bin/couchdb -o /dev/stdout -e /dev/stderr
Restart=always

# Without this section, systemctl enable has no target to link against
[Install]
WantedBy=multi-user.target
EOT

You’ll then need to reload the systemd daemon and add the couchdb service to the startup routine. Run the below commands to enable CouchDB at machine startup.

systemctl daemon-reload
systemctl start couchdb.service
systemctl enable couchdb.service
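
You can then confirm the service started cleanly and follow its log output:

systemctl status couchdb.service
journalctl -u couchdb.service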

Basic iptables Rules

Category : How-to

Here are some basic iptables rules to enable essential connectivity from the host. Outbound connectivity such as ping, DNS and HTTP is enabled, along with inbound SSH.

All external sources are allowed for SSH, so it’s advisable to restrict this further once you’re up and running. This iptables script is intended as a starting point and may need to be tailored to your security requirements.

Paste the below script in order to get started.

Optionally, run iptables -F first to flush any existing rules.

iptables -F

# Loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Established
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT

# Drop invalid
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

# Incoming SSH
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT

# Outgoing HTTPS
iptables -A OUTPUT -o eth0 -p tcp --dport 443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 443 -m conntrack --ctstate ESTABLISHED -j ACCEPT

# Outgoing HTTP
iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 80 -m conntrack --ctstate ESTABLISHED -j ACCEPT

# Outgoing DNS
iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT

# Outgoing Ping
iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT

# Default chain
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
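
Once applied, you can review the running rule set in the same format accepted by iptables-restore:

iptables -S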

See the cheat sheet for more information.

