Category Archives: How-to

Change Listening Port of MySQL or MariaDB Server

Category : How-to

Both the MySQL and MariaDB servers use a file called my.cnf for the parameters that configure the server. This is where the port number and, if you use it, the local socket are configured. The default port number for both MySQL and MariaDB is 3306 but you can change it as required.

A local socket is the preferred method of connecting to a database as it removes much of the overhead of creating a TCP connection and transferring data. The limitation is that it can only be used when the application accessing the database is on the same machine. In larger or highly available systems this may not be possible.

A TCP connection is the only way of connecting to your MySQL or MariaDB database from a remote machine. It incurs a small overhead compared to a local socket and therefore slightly higher latency. Both MySQL and MariaDB can be configured to use a local socket, TCP connections or both.
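
As a quick illustration of the difference, the mysql command line client can connect either way; the socket path, host address and port below are just placeholders for your own values.

mysql --socket=/var/run/mysqld/mysqld.sock -u root -p
mysql --host=10.10.10.10 --port=3306 --protocol=TCP -u root -p

The first command talks to the local socket file directly; the second opens a TCP connection, which is what you'd use from a remote machine.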

We’ll be editing the my.cnf file in the following sections. Open the file in your favourite editor.

vi /etc/mysql/my.cnf

Configuring local socket use

The socket option indicates the filesystem path of the socket you’d like to use. Specify a path, usually /var/run/mysqld/mysqld.sock, and the socket will be created when the server next starts. Remove or comment out (#) the line to disable socket access.

socket = /var/run/mysqld/mysqld.sock

Restart the server for the changes to take effect.

service mysql restart

Setting or changing the TCP port

The port option sets the port number that the MySQL or MariaDB server will listen on for TCP/IP connections. The default port number is 3306 but you can change it as required. Use the port option together with the bind-address option to control the interface where the server will be listening. Use 0.0.0.0 to listen on all IP addresses on the host, or specify a single address to listen on just one interface. To disable TCP/IP connections entirely, use the skip-networking option.

port = 1234
bind-address = 10.10.10.10
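
If you want a socket-only setup with no TCP listener at all, the relevant my.cnf lines would look something like the below – a sketch, assuming the skip-networking option is acceptable in your environment.

socket = /var/run/mysqld/mysqld.sock
skip-networking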

Restart the server for the changes to take effect.

service mysql restart

Reset The root MySQL/MariaDB Password

Category : How-to

If you’ve lost or forgotten the root user password on a MySQL or MariaDB server you’ll want to reset it and leave all the other accounts and data intact. Fortunately it’s possible, but you’ll need SSH access to the machine hosting the instance and the ability to stop and start the database service.

Before going any further, make sure your instance of MySQL or MariaDB is shut down.

service mysql stop

Start the server in safe mode, without loading the grant tables that hold the user permissions.

mysqld_safe --skip-grant-tables &

Log into the local instance with the root user.

mysql -u root mysql

Run the below SQL commands once connected to reset your password. Be sure to substitute new-password with the new password for your root account.

use mysql;
UPDATE mysql.user SET Password=PASSWORD('new-password') WHERE User='root';
FLUSH PRIVILEGES;
exit;

Finally, start the SQL server instance and use your new root account password.

service mysql restart
mysql -u root -p

 


Persistent Ceph Mount Point

Category : How-to

Once you’ve got a Ceph cluster up and running you’re going to want to mount it somewhere. This guide assumes that the mount point will be on a machine that isn’t running Ceph; if you’re mounting the storage on one of the Ceph server nodes then you can skip the package installation steps.

Install the Ceph Client

Before we start mounting anything, we’re going to need the required software installed. Assuming you’re on Debian, run the below commands to add the key and the software repository for the Ceph binaries.

wget --no-check-certificate -q -O- 'https://git.ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list

Then run the apt-get commands to update your software index and install the Ceph binaries for the client.

apt-get update && apt-get install -y ceph-fs-common

Mount a Ceph device as a folder

Here we’re going to use /mnt/ha-pool as the mount point but you can change that to whatever you’d like. Run this command on any machine that you’d like to mount the Ceph volume on.

mkdir /mnt/ha-pool

Then we need to export the key so that the ceph-client can authenticate with the Ceph daemon. You could turn authentication off, or even create a non-admin user secret, but for this tutorial we’ll just use the admin user. Run this command on your admin machine for your Ceph cluster (NOT on the client you’re setting up the mount point on).

ceph-authtool --name client.admin /etc/ceph/ceph.client.admin.keyring --print-key

You’ll be presented with a string of letters and numbers. Copy this and add it to a file stored on your Ceph client machine. This is the ‘password’ or secret that the Ceph client will use to authenticate with the Ceph server. Paste the string into a file – you can store this anywhere but we’ll use /etc/ceph/admin.secret.

mkdir /etc/ceph/ 
vi /etc/ceph/admin.secret
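
Alternatively, a one-line sketch that writes the key straight into the file – replace the placeholder with the key string you copied from the admin machine.

echo "PASTE-YOUR-KEY-HERE" > /etc/ceph/admin.secret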

Automatic mount

If you’d like the Ceph mount point to persist across client machine reboots then you’ll need to add an entry to /etc/fstab. Run the below command to add an entry to your fstab file so that the Ceph volume will be automatically mounted on machine start. This will mount the Ceph volume at /mnt/ha-pool and references the Ceph monitor server nodes ceph1, ceph2 and ceph3 – make sure you change these values for your environment. You don’t have to specify more than one Ceph monitor server node, but it makes sense, just in case one of your nodes fails.

echo "cehp1,ceph2,ceph3:/ /mnt/ha-pool/ ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2" >> /etc/fstab

Then to mount the volume, run the below mount command

mount /mnt/ha-pool

Manually mount filesystem

If you don’t need the mount to persist you can simply use the mount command. The parameters are very similar to the above section, with the Ceph monitor servers, secret file and mount point all specified. This will mount the Ceph volume at /mnt/ha-pool and references the Ceph monitor server nodes ceph1, ceph2 and ceph3 – make sure you change these values for your environment.

mount -t ceph ceph1,ceph2,ceph3:/ /mnt/ha-pool -o name=admin,secretfile=/etc/ceph/admin.secret

Ceph mount ports and additional options

By default, and if left unspecified like the above examples, the Ceph client will use port 6789 to connect to your monitor server daemons. If you’ve specified a different port for your monitor daemon then you can specify it in the mount command. The same syntax can be used in your fstab.

mount -t ceph ceph1:1234,ceph2:4567,ceph3:8910:/ /mnt/ha-pool -o name=admin,secretfile=/etc/ceph/admin.secret

You can also specify your secret key directly, rather than a file that contains it. I won’t go into the security implications of this here, but I’m sure you can imagine one or two. Again, the same syntax can be used in your fstab.

mount -t ceph ceph1,ceph2,ceph3:/ /mnt/ha-pool -o name=admin,secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==

Small Scale Ceph Replicated Storage

Category : How-to

I’ve written a few posts about Ceph, how it works and how it’s set up, and they mostly revolve around large scale storage for storing things like virtual machines. This post will focus on using Ceph to provide fault tolerant storage for a small amount of data in a low resource environment. Because of this, the main focus has moved away from performance and towards:

  • availability – the storage should always be available and recoverable in the event of disaster.
  • portability – the storage isn’t tied to a machine and can be moved with relative ease.
  • scalability – more machines can use the storage as required.

This tutorial will focus on a small scale Ceph setup, fit for something like a Raspberry Pi or low resource VPS. We’ll use 3 machines but you could easily add more machines if your scenario requires it.

If you are looking for a larger setup, then see this blog post on installing Ceph.

(Diagram: ceph-local – small scale Ceph topology)

The above diagram shows the topology of the layout. Each machine will have a file /ceph-file that will be mounted as a block device on /dev/loop0 and that’s the space that will be assigned to Ceph. Ceph will replicate any data stored to the file and ensure the data is available to all Ceph clients. The Ceph storage will be accessed from a mountpoint at /mnt/ha-pool.

Ceph block device

The first step in creating a Ceph storage pool is to set aside some storage that can be used by Ceph. Ceph stores everything twice, by default, so whatever storage you provision will be halved. For this example we’re going to use a file created with dd as the Ceph storage device, however you could use a drive mounted in /dev/ if you have one. A whole drive is by far the preferred solution, however as I’ve stated, the main goal of this post isn’t just performance.

If you’re going to use a file for storage, follow my post on creating a block device from a file and mount it on loop0. Otherwise you can continue to the next step.
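
If you just want a rough sketch of that step, something like the below creates a 4GB backing file at /ceph-file and attaches it to /dev/loop0 – the size here is an assumption, so adjust it to whatever you want to give to Ceph (the linked post covers the detail).

dd if=/dev/zero of=/ceph-file bs=1M count=4096
losetup /dev/loop0 /ceph-file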

OpenVZ: if you’re using Ceph inside of an OpenVZ container, make sure you pass the loop device through to the container.

Installing Ceph

At this point it’s worth noting that Ceph, in addition to the application requirements, will use approximately 1MB of RAM for each GB of storage provisioned. This means that 1TB of provisioned storage (which in today’s world is rather small) would take 1GB of RAM plus the requirements of running the Ceph daemons. To keep a low memory footprint, only provision the storage that you’ll need.

Before starting the install, you’ll need a couple of things in place:

  • SSH Keys are set up between all nodes in your cluster – see this post for information on how to set up SSH Keys. For security it’s good practice to set up a new user on all machines you’re going to install Ceph onto and use it to run Ceph. The key should also be copied to all machines using the ssh-copy-id command (see the sketch after this list).
  • NTP is set up on all nodes in your cluster to keep the time in sync. You can install it with: apt-get install ntp
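
As a rough sketch of the SSH key step, run something like the below as the user that will run Ceph – the ceph user and hostnames are assumptions, so substitute your own.

ssh-keygen -t rsa
ssh-copy-id ceph@ceph1
ssh-copy-id ceph@ceph2
ssh-copy-id ceph@ceph3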

The following commands are for installing Ceph on Debian (wheezy) and should be executed on all machines that need to run Ceph. In our example, these commands will be executed on Server 1, Server 2 and Server 3.

First let’s add the release key and repositories to the apt package manager. Run the following as root:

wget --no-check-certificate -q -O- 'https://git.ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list

Next let’s update our apt cache and install Ceph and a few other bits.

apt-get update && apt-get install ceph-deploy ceph ceph-common

Set up and configure for minimal resource requirements

The next step should be done on just one of your Ceph machines. This will create the monitor service and make each machine aware of the other machines running Ceph.

The command references each machine you’re going to be running Ceph on by hostname or DNS entry. Before running the command, make sure that all of your machines resolve via DNS or a hosts file. Because I’m only running this in a lab, I’ve used the hosts file route and added an entry for each machine to the hosts file of all Ceph machines.

vi /etc/hosts

Add your Ceph machine IP and hostnames.

10.10.10.1 ceph1
10.10.10.2 ceph2
10.10.10.3 ceph3

You can test that each machine can see the others by using the ping command. If it works then you should be in business!

ping ceph2
ping ceph3

Once you’re happy that all machines can reach each other, run the ceph-deploy command:

ceph-deploy new ceph1 ceph2 ceph3

If you haven’t used your ssh keys since setting them up you may be presented with the following warning. Just type yes to continue.

The authenticity of host 'ceph1 (10.10.10.1)' can't be established.
ECDSA key fingerprint is 66:44:a8:90:e2:8e:12:0e:05:4a:c4:93:a1:43:d1:fd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph1' (ECDSA) to the list of known hosts.

We now need to configure Ceph with our low resource settings. These settings are not performance driven, but instead set to minimise system resources.

See the Ceph Minimal Resource ceph.conf post below for the settings and add the content to the ceph.conf file.

vi ~/ceph.conf

Create the initial monitor daemons, push out the configuration and admin key, create the mds daemons and set the proper permissions on the keyring file.

ceph-deploy mon create-initial
ceph-deploy admin ceph1 ceph2 ceph3
ceph-deploy mds create ceph1 ceph2 ceph3

ssh ceph1 "chmod 644 /etc/ceph/ceph.client.admin.keyring"
ssh ceph2 "chmod 644 /etc/ceph/ceph.client.admin.keyring"
ssh ceph3 "chmod 644 /etc/ceph/ceph.client.admin.keyring"

Test Ceph is deployed and monitors are running

At this point it’s good to take a step back and check everything is up and running. We’ve still not assigned any storage to our Ceph cluster so we can’t use it yet, but we should have the monitor daemons running and the cluster configuration deployed on all servers.

Run the below command and take a look at the output.

ceph -s

The output should look something like the below:

cluster 51e1ddff-ff28-4f58-af7e-e94448e5324b
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 3 mons at {ceph1=10.10.10.1:6789/0,ceph2=10.10.10.2:6789/0,ceph3=10.10.10.3:6789/0}, election epoch 6, quorum 0,1,2 ceph1,ceph2,ceph3
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e8: 1/1/1 up {0=web1=up:active}, 2 up:standby

As you can see, three Ceph servers are referenced on port 6789 which is the monitor daemon port number.

Add storage to the Ceph cluster

We’ve got our Ceph cluster, and we’ve got the storage device that we created in the first step; it’s time to put the two together. Run the below commands on the same machine that you ran the above steps on. You’ll need to replace /dev/sda with the block device on each Ceph machine that you’d like to use. Note that the block device (sda) does not need to be the same on all machines.

ceph-deploy osd create --fs-type ext4 ceph1:/dev/sda
ceph-deploy osd create --fs-type ext4 ceph2:/dev/sda
ceph-deploy osd create --fs-type ext4 ceph3:/dev/sda

Or…

You can use a directory as storage for Ceph, rather than a block device.

If you’re following this tutorial and creating a loop device to use with Ceph then you’ll need to ensure there is a filesystem on the loop0 device and that it’s mounted. You can skip this next step if you are just using an existing directory.

Run the below commands (if you’re using a loop device) on each of the machines that has a loop device you’d like to use. We’re assuming that your loop device is loop0. For this example we’ll run it on each of the three machines: ceph1, ceph2 and ceph3.

mkfs.ext4 /dev/loop0
mkdir /mnt/ceph-backing0
echo "/dev/loop0 /mnt/ceph-backing0 ext4 defaults 1 1" >> /etc/fstab
mount /mnt/ceph-backing0

You can use a directory path on the Ceph machine as the OSD device. This may be an option if you’re in an OpenVZ or Docker container that doesn’t allow you to pass through block devices.

ceph-deploy osd prepare ceph1:/mnt/ceph-backing0
ceph-deploy osd prepare ceph2:/mnt/ceph-backing0
ceph-deploy osd prepare ceph3:/mnt/ceph-backing0

And then activate the storage:

ceph-deploy osd activate ceph1:/mnt/ceph-backing0
ceph-deploy osd activate ceph2:/mnt/ceph-backing0
ceph-deploy osd activate ceph3:/mnt/ceph-backing0

Mount a Ceph device as a folder

That’s the server side done! The last step to using our Ceph storage cluster is to mount the cluster to a mountpoint on the local filesystem. Here we’re going to use /mnt/ha-pool as the mount point but you can change that to whatever you’d like. Run these commands on any machines that you’d like to mount the Ceph volume on.

First create the mount point where the Ceph storage will be accessible from.

mkdir /mnt/ha-pool

Then we need to export the key so that the ceph-client can authenticate with the Ceph daemon. You could turn authentication off, or even create a non-admin user secret but for this tutorial we’ll just use the admin user.

ceph-authtool --name client.admin /etc/ceph/ceph.client.admin.keyring --print-key >> /etc/ceph/admin.secret

Then run the below command to add an entry to your fstab file so that the Ceph volume will be automatically mounted on machine start. This will mount the Ceph volume at /mnt/ha-pool.

echo "ceph1,ceph2,ceph3:/ /mnt/ha-pool/ ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2" >> /etc/fstab

Finally run the mount command

mount /mnt/ha-pool

One last check to make sure you’re up and running:

df -h | grep ha-pool
10.10.10.1,10.10.10.2,10.10.10.3:/                    6G   3G   3G  54% /mnt/ha-pool

And that’s it! You have a working Ceph cluster up and running!


Mount A Loop Device In An OpenVZ Container

Category : How-to

You can pass a loop device through to an OpenVZ container with the vzctl command. You’ll need to set up the loop device on your host, as there is no support within an OpenVZ container for attaching the device inside the container. This means that the file backing the loop device will also need to be available on the host.
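
For example, assuming the loop device is backed by a file at /ceph-file (any path will do), you’d attach it on the host first with something like:

losetup /dev/loop0 /ceph-file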

Note: This has been disabled in the latest versions of vzctl.

The device will then be passed through to the container so that it can be used from within it.

Run the vzctl command with the Container ID and loop device name that you’d like to use.

vzctl set [CTID] --devnodes [LOOP]:rw

For example, to pass loop0 through to Container ID 100 use:

vzctl set 100 --devnodes loop0:rw

Ceph Minimal Resource ceph.conf

Category : Supporting Scripts

The below file content should be added to your ceph.conf file to reduce the resource footprint for low-powered machines.

The file may need to be tweaked and tested, as with any configuration, but pay particular attention to osd journal size. As with many data storage systems, Ceph creates a journal file of content that’s waiting to be committed to ‘proper’ storage. The osd journal size sets the maximum amount of data that can be stored in the journal.

It should be calculated as follows:

2 * (T * filestore max sync interval)

T in this scenario is the lowest maximum throughput that’s expected through the network or on the disk. For example, a standard mechanical hard disk writes at roughly 100 MB/s. A 1 Gbps network has a maximum throughput of 125 MB/s, and therefore 100 MB/s is the value of T. The parameter filestore max sync interval is 5 by default.

Therefore, 2 * (100 * 5) = 1000, giving a journal size of roughly 1000 MB.
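
In ceph.conf terms that works out to something like the line below – note that the sample config further down uses a smaller value of 512, which keeps the journal footprint down on low resource machines.

osd journal size = 1000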

[global]
  # Disable in-memory logs
  debug_lockdep = 0/0
  debug_context = 0/0
  debug_crush = 0/0
  debug_buffer = 0/0
  debug_timer = 0/0
  debug_filer = 0/0
  debug_objecter = 0/0
  debug_rados = 0/0
  debug_rbd = 0/0
  debug_journaler = 0/0
  debug_objectcacher = 0/0
  debug_client = 0/0
  debug_osd = 0/0
  debug_optracker = 0/0
  debug_objclass = 0/0
  debug_filestore = 0/0
  debug_journal = 0/0
  debug_ms = 0/0
  debug_monc = 0/0
  debug_tp = 0/0
  debug_auth = 0/0
  debug_finisher = 0/0
  debug_heartbeatmap = 0/0
  debug_perfcounter = 0/0
  debug_asok = 0/0
  debug_throttle = 0/0
  debug_mon = 0/0
  debug_paxos = 0/0
  debug_rgw = 0/0
  osd heartbeat grace = 8

[mon]
  mon compact on start = true
  mon osd down out subtree limit = host

[osd]
  # Filesystem Optimizations
  osd mkfs type = btrfs
  osd journal size = 512

  # Performance tuning
  max open files = 327680
  osd op threads = 2
  filestore op threads = 2
  
  #Capacity Tuning
  osd backfill full ratio = 0.95
  mon osd nearfull ratio = 0.90
  mon osd full ratio = 0.95

  # Recovery tuning
  osd recovery max active = 1
  osd recovery max single start = 1
  osd max backfills = 1
  osd recovery op priority = 1

  # Optimize Filestore Merge and Split
  filestore merge threshold = 40
  filestore split multiple = 8

With thanks to Bryan Apperson for the config.

