GlusterFS Error cannot open /dev/fuse

Category : How-to

After installing glusterfs-client on my Debian server I received the below error when trying to mount a remote GlusterFS volume. The error indicates that the device at /dev/fuse cannot be found; however, ls showed that it was available.
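
You can confirm the device node exists with ls (the exact permissions and timestamps will vary from system to system):

ls -l /dev/fuse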

This was the error displayed in the Gluster log after running the mount command:

[2016-04-12 17:39:58.948364] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.10 (args: /usr/sbin/glusterfs --volfile-server=glustercluster1 --volfile-id=/data-volume /mnt/data-volume)
[2016-04-12 17:39:59.030349] E [mount.c:341:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
[2016-04-12 17:39:59.030385] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
[2016-04-12 17:43:29.644266] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.10 (args: /usr/sbin/glusterfs --volfile-server=glustercluster1 --volfile-id=/data-volume /mnt/data-volume)
[2016-04-12 17:43:29.661947] E [mount.c:341:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
[2016-04-12 17:43:29.662014] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

A quick check of the kernel fuse module using modprobe gave an error:

modprobe fuse
ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg)
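
As the error suggests, dmesg holds more detail on why the module failed to load. A quick way to see the most recent kernel messages:

dmesg | tail -n 20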

Some Googling indicated that this happens when fuse-utils is missing, which can be fixed with the command below. In my case, however, fuse-utils was already installed.

apt-get install fuse-utils

Further investigation showed that the kernel had recently been updated, but the machine hadn’t been restarted, so the running kernel wasn’t the latest installed kernel. This left a mismatch between the running kernel and the fuse module on disk.
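
A quick way to spot this kind of mismatch is to compare the running kernel with the module trees installed on disk; if the newest directory under /lib/modules doesn’t match the output of uname -r, the machine is running an older kernel than the one most recently installed:

uname -r
ls /lib/modules/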

A reboot of the machine fixed the issue – the fuse module loaded correctly and the Gluster mount executed without error.

reboot
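
After the reboot you can confirm that the module loads and retry the mount. The server, volume and mount point below are the ones from the log output above; substitute your own:

modprobe fuse
mount -t glusterfs glustercluster1:/data-volume /mnt/data-volume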

GlusterFS Error volume add-brick: failed: Pre Validation failed on BRICK is already part of a volume

Category : How-to

I received the below error today after I tried to add a ‘new’ brick to a GlusterFS volume. I’ve put the word ‘new’ into quotes because, although the brick was new to the GlusterFS volume, the disk being added had been used as a brick before. The disk had all data removed from it; however, GlusterFS somehow knew that the disk still held remnants of the previous brick.

The following command failed, trying to add the new brick:

gluster v add-brick data-volume replica 3 gluster3:/mnt/brick1/data

The error received:

volume add-brick: failed: Pre Validation failed on gluster3. /mnt/brick1/data is already part of a volume
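
You can see the leftover GlusterFS metadata for yourself by listing the extended attributes on the brick directory (getfattr is provided by the attr package on Debian and Ubuntu). A directory that has previously been used as a brick will show attributes such as trusted.glusterfs.volume-id:

getfattr -d -m . -e hex /mnt/brick1/data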

Solution!

The solution is to use setfattr to clear the hidden filesystem attributes containing GlusterFS information about the brick’s previous life. Run the following commands on the server that has the drive that you’re trying to add as a new brick.

setfattr -x trusted.glusterfs.volume-id /mnt/brick1/data
setfattr -x trusted.gfid /mnt/brick1/data

Of course, you should also ensure that the filesystem you’re adding is cleared, especially the .glusterfs hidden directory.

rm -rf /mnt/brick1/data/.glusterfs

And that’s it! Try running the add-brick command again, and you should be in business.

gluster v add-brick data-volume replica 3 gluster3:/mnt/brick1/data
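
If the command succeeds, you can confirm that the new brick is now part of the volume and that its brick process is online:

gluster volume info data-volume
gluster volume status data-volume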

GlusterFS Mount failed. Please check the log file for more details.

Category : How-to

You may get the following error when trying to mount a GlusterFS volume locally. The error displayed gives no indication of why the volume failed to mount, but it does hint at where you can get more information about the error.

This is the error presented when running the mount command:

Mount failed. Please check the log file for more details.

The log file could be in numerous places, depending on your Linux distribution and Gluster settings; however, it will generally be in /var/log/glusterfs.
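
The client log file is usually named after the mount point, with slashes replaced by dashes (so /mnt/data-volume logs to mnt-data-volume.log). If you’re not sure which file to look at, listing the directory by modification time shortly after a failed mount is a good bet:

ls -lt /var/log/glusterfs/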

Take a look at the log file for further information on why the volume cannot be mounted. An example is included below, showing an issue with the fuse kernel module.

vi /var/log/glusterfs/mnt-data-volume.log
[2016-04-12 17:39:58.948364] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.10 (args: /usr/sbin/glusterfs --volfile-server=glustercluster1 --volfile-id=/data-volume /mnt/data-volume)
[2016-04-12 17:39:59.030349] E [mount.c:341:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
[2016-04-12 17:39:59.030385] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
[2016-04-12 17:43:29.644266] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.10 (args: /usr/sbin/glusterfs --volfile-server=glustercluster1 --volfile-id=/data-volume /mnt/data-volume)
[2016-04-12 17:43:29.661947] E [mount.c:341:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
[2016-04-12 17:43:29.662014] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

Your issue could vary, and as such we can’t cover every eventuality here. At least you now know how to get more details around your specific issue.


Synchronise a GlusterFS volume to a remote site using geo replication

GlusterFS can be used to synchronise a directory to a remote server on a local network for data redundancy or load balancing, providing a highly scalable and available file system.

The problem is that when the storage you would like to replicate to is on a remote network, possibly in a different location, GlusterFS does not work very well. This is because GlusterFS is not designed to cope with high latency between replication nodes.

GlusterFS provides a feature called geo-replication to perform batch-based replication of a local volume to a remote machine over SSH.

The below example will use three servers:

  • gfs1.jamescoyle.net is one of the two running GlusterFS volume servers.
  • gfs2.jamescoyle.net is the second of the two running GlusterFS volume servers. gfs1 and gfs2 both serve a single GlusterFS replicated volume called datastore.
  • remote.jamescoyle.net is the remote file server which the GlusterFS volume will be replicated to.
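
Before configuring geo-replication it’s worth confirming that the source volume is healthy on gfs1 or gfs2 (datastore is the volume name used throughout this example):

gluster volume info datastore
gluster volume status datastore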

GlusterFS uses an SSH connection to the remote host, authenticated with SSH keys instead of passwords. We’ll need to create an SSH key using ssh-keygen to use for our connection. Run the below command and press return when asked for a passphrase to create a key without one.

ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem

The output will look like the below:

Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/glusterd/geo-replication/secret.pem.
Your public key has been saved in /var/lib/glusterd/geo-replication/secret.pem.pub.
The key fingerprint is:
46:ba:02:fd:2f:9c:b9:39:ec:6c:90:50:d8:ec:7b:00 root@gfs1
The key's randomart image is:
+--[ RSA 2048]----+
|   +             |
|  E +            |
|   +    .        |
|  ..o  o         |
|  ...+. S        |
|   .+..o         |
|    .=oo         |
|     oOo         |
|     o=+.        |
+-----------------+

Now you need to copy the public key to the authorized_keys file on your remote server. The remote user must be a super user (currently a limitation of GlusterFS), which is root in the below example. If you have multiple GlusterFS servers in a cluster then you will need to copy the key to all of the GlusterFS servers.

cat /var/lib/glusterd/geo-replication/secret.pem.pub | ssh root@remote.jamescoyle.net "cat >> ~/.ssh/authorized_keys"
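
It’s worth testing the key before going any further. The command below should log you straight in to the remote server without prompting for a password (adjust the user and host if yours differ):

ssh -i /var/lib/glusterd/geo-replication/secret.pem root@remote.jamescoyle.net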

Make sure the remote server has glusterfs-server installed. Run the below command to install glusterfs-server on remote.jamescoyle.net. You may need to use yum instead of apt-get for Red Hat versions of Linux.

apt-get install glusterfs-server

Create a folder on remote.jamescoyle.net which will be used for the remote replication. All data which transfers to this machine will be stored in this folder.

mkdir /gluster
mkdir /gluster/geo-replication

Create and start the geo-replication session with Gluster, replacing the below values with your own:

  • [SOURCE_DATASTORE] – is the local Gluster data volume which will be replicated to the remote server.
  • [REMOTE_SERVER] – is the remote server to receive all the replication data.
  • [REMOTE_PATH] – is the path on the remote server to store the files.

gluster volume geo-replication [SOURCE_DATASTORE] [REMOTE_SERVER]:[REMOTE_PATH] start

Example:

gluster volume geo-replication datastore remote.jamescoyle.net:/gluster/geo-replication/ start

Starting geo-replication session between datastore & remote.jamescoyle.net:/gluster/geo-replication/ has been successful

Sometimes on the remote machine, gsyncd (part of the GlusterFS package) may be installed in a different location to the local GlusterFS nodes.

Your log file may show a message similar to below:

Popen: ssh> bash: /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd: No such file or directory

In this scenario you can use the config command to specify the remote gsyncd location.

gluster volume geo-replication datastore remote.jamescoyle.net:/gluster/geo-replication config remote-gsyncd /usr/lib/glusterfs/glusterfs/gsyncd
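
Running the config command without any additional arguments should print the current geo-replication settings, which is a handy way to confirm the change was applied (assuming your GlusterFS version supports listing the configuration this way):

gluster volume geo-replication datastore remote.jamescoyle.net:/gluster/geo-replication config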

You will then need to run the start command to start the volume synchronisation.

gluster volume geo-replication datastore remote.jamescoyle.net:/gluster/geo-replication/ start

You can view the status of your replication task by running the status command.

gluster volume geo-replication datastore remote.jamescoyle.net:/gluster/geo-replication/ status

You can stop your volume replication at any time by running the stop command.

gluster volume geo-replication datastore remote.jamescoyle.net:/gluster/geo-replication/ stop

Advanced GlusterFS Log Rotation

Category : How-to

If you install GlusterFS on Debian using the Launchpad repository then a logrotate entry will be set up automatically. This entry handles all the logs created by the GlusterFS application, rotating them daily and keeping 7 days of old log files.

Other packages or install methods may vary, and the log and configuration paths may be different.

On Debian and Ubuntu the log files are kept in the following location:

/var/log/glusterfs/

And the logrotate configuration file is found here:

/etc/logrotate.d/glusterfs-common

The default configuration is to rotate the logs each day, compress old logs and keep them for 7 days. Here is the default configuration file:

/var/log/glusterfs/*.log {
 daily
 rotate 7
 delaycompress
 compress
 notifempty
 missingok
 postrotate
 [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`
 endscript
}

You can edit this file directly to make any changes you may need in your GlusterFS environment. See my cheat sheet for details on the logrotate commands.
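
As an example, the snippet below is a minimal sketch of a modified configuration that rotates weekly and keeps four compressed logs instead of seven daily ones; adjust the values to suit your environment:

/var/log/glusterfs/*.log {
 weekly
 rotate 4
 delaycompress
 compress
 notifempty
 missingok
 postrotate
 [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`
 endscript
}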


Simple GlusterFS log rotation

Category : How-to

You’ll be glad to know that GlusterFS has built-in log rotation! This means that you can use a simple gluster command to rotate the log for a specific volume. This is very helpful when troubleshooting, where it is often easiest to start with a fresh log before reproducing the issue.

For ongoing log rotation I recommend using logrotate, which I cover in a separate blog post.

Logs are rotated per volume, so you will need to know the volume name before issuing the command to rotate the log. Use the below command to list all the volumes available on your server:

gluster volume info all | grep "Volume Name"

Use the gluster command below, replacing [VOL_NAME] with the name of the volume whose log file you would like to rotate.

gluster volume log rotate [VOL_NAME]

In the below example, the logs for volume datastore will be rotated.

gluster volume log rotate datastore

Below is the result on the file system displayed with the ls command.

ls -l /var/log/glusterfs/bricks/
total 28
-rw------- 1 root root 119 Nov 1 19:06 mnt-gfs_block.log
-rw------- 1 root root 9085 Nov 1 13:46 mnt-gfs_block.log.1383313581
-rw------- 1 root root 236 Nov 1 13:48 mnt-gfs_block.log.1383313684
-rw------- 1 root root 236 Nov 1 13:49 mnt-gfs_block.log.1383313742
-rw------- 1 root root 236 Nov 1 19:06 mnt-gfs_block.log.1383332808

And that’s it! Your log file will be moved and have a time stamp appended, and a new log will be started for the volume.
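
If you have several volumes and want to rotate all of their logs in one go, you can loop over the volume names (a quick sketch, assuming your GlusterFS version provides the gluster volume list command):

for vol in $(gluster volume list); do
  gluster volume log rotate "$vol"
done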

See my other post on using logrotate for more advanced log rotation.

