Mount a GlusterFS volume
Category: How-to
GlusterFS is an open source distributed file system which provides easy replication over multiple storage nodes. Storage from these nodes is combined into volumes which you can easily mount through fstab on Ubuntu/Debian and Red Hat/CentOS. To see how to set up a GlusterFS volume, see this blog post.
Before we can mount the volume, we need to install the GlusterFS client. On Ubuntu/Debian we can simply apt-get the required package; on Red Hat/CentOS we use yum. For Ubuntu/Debian:
apt-get install glusterfs-client
For Red Hat, OEL and CentOS:
yum install glusterfs-client
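Once installed, a quick sanity check is to print the client version (the exact version string will of course differ on your system):
glusterfs --version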
Once the install is complete, open /etc/fstab and add a new line pointing to your server. The server named here only provides the information on where to find the volume; it is not necessarily where the data lives. Once the client has that information, it connects directly to the servers holding the data. The following steps are the same on both Debian and Red Hat based Linux distributions.
Easy way to mount
vi /etc/fstab
Replace [HOST] with your GlusterFS server, [VOLUME] with the GlusterFS volume to mount and [MOUNT] with the location to mount the storage to.
[HOST]:/[VOLUME] [MOUNT] glusterfs defaults,_netdev 0 0
Example:
gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev 0 0
Finally, reboot your machine to make the volume appear in df.
df -h
Filesystem                     Size  Used Avail Use% Mounted on
gfs1.jamescoyle.net:/testvol    30G  1.2G   27G   5% /mnt/volume
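If you would rather not reboot, the new fstab entry can usually be mounted straight away, assuming the mount point directory from the example above already exists:
mkdir -p /mnt/datastore
mount -a
df -h /mnt/datastore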
More redundant mount
The trouble with the above method is that there is a single point of failure: the client only has one GlusterFS server to connect to. To set up a more resilient mount we have two options: create a volume config file, or use backupvolfile-server in the fstab mount. Remember, this is not to list every server holding the data; it is to specify an additional server the client can query for the volume information.
fstab method
We can use the parameter backupvolfile-server to point to our secondary server. The example below shows how it can be used. (As pointed out in the comments, newer GlusterFS releases renamed this option to backup-volfile-servers.)
gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev,backupvolfile-server=gfs2.jamescoyle.net 0 0
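If you want to try the same options without editing fstab, the mount can also be made by hand; a quick sketch using the example hostnames above, so adjust the names and paths for your environment:
mount -t glusterfs -o backupvolfile-server=gfs2.jamescoyle.net gfs1.jamescoyle.net:/datastore /mnt/datastore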
Using a volume config file
Create a volume config file for your GlusterFS client.
vi /etc/glusterfs/datastore.vol
Create the file and replace [HOST1] with your first GlusterFS server, [HOST2] with your second GlusterFS server and [VOLNAME] with the GlusterFS volume to mount. (Note Sebastian's comment below: depending on your setup, remote-subvolume may need to be the path to the brick rather than the volume name.)
volume remote1
type protocol/client
option transport-type tcp
option remote-host [HOST1]
option remote-subvolume [VOLNAME]
end-volume

volume remote2
type protocol/client
option transport-type tcp
option remote-host [HOST2]
option remote-subvolume [VOLNAME]
end-volume

volume replicate
type cluster/replicate
subvolumes remote1 remote2
end-volume

volume writebehind
type performance/write-behind
option window-size 1MB
subvolumes replicate
end-volume

volume cache
type performance/io-cache
option cache-size 512MB
subvolumes writebehind
end-volume
Example:
volume remote1
type protocol/client
option transport-type tcp
option remote-host gfs1.jamescoyle.net
option remote-subvolume /mnt/datastore
end-volume

volume remote2
type protocol/client
option transport-type tcp
option remote-host gfs2.jamescoyle.net
option remote-subvolume /mnt/datastore
end-volume

volume replicate
type cluster/replicate
subvolumes remote1 remote2
end-volume

volume writebehind
type performance/write-behind
option window-size 1MB
subvolumes replicate
end-volume

volume cache
type performance/io-cache
option cache-size 512MB
subvolumes writebehind
end-volume
Finally, edit /etc/fstab to add this config file and its mount point. Replace [MOUNT] with the location to mount the storage to.
/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
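As with the fstab entry, the volume file can be tested by mounting it manually first; a sketch assuming the mount point used earlier (note that several comments below report this volfile method failing on newer GlusterFS releases):
mount -t glusterfs /etc/glusterfs/datastore.vol /mnt/datastore
df -h /mnt/datastore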
28 Comments
Steffan
22-Aug-2013 at 1:12 pm
Question:
I want to make a glusterFS with replication on 2 hosts for my webserver cluster to share.
If i on webserver1 and webserver2 mount the GlusterFS using the command you specified above:
[HOST]:/[VOLUME] /[MOUNT] glusterfs defaults,_netdev 0 0
WHAT IF, [HOST] goes down ? how does it know to connect to the other host instead?? so i have high availability? is this even possible?
james.coyle
22-Aug-2013 at 2:08 pm
Hi Steffan.
I have updated the article as you have a good point – for this type of replication, a single server address is a critical single point of failure. Please see the revised article on how to do what you require.
Regards
James.
LD
11-Nov-2014 at 3:47 pm
Hi. I have tried your high-availability configuration with the config file and then an entry for it in etc/fstab but it wouldn't mount: it says:
ERROR: Server name/volume name unspecified cannot proceed further..
Please specify correct format
Usage:
man 8 /sbin/mount.glusterfs
I do not want to mount to a single server, i really desire the HA but there is no info anywhere else about this issue really (?) Do you have any suggestions? Thanks.
Dizzystreak
15-Dec-2014 at 7:02 pm
I got the exact same error message on CentOS 6 and 7 clients.
I would also like to know the solution to this.
Steffan
23-Aug-2013 at 9:06 am
THANKS!
I really like your guide, you seem to know what you are doing!.
However i do miss some more details and explanations on what the different config variables are there for, and what they do..
The stuff i’m curious about are marked in bold below:
/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
“max_read=…” in the above, why that exact number? what happens if i turn it up? do i get more performance or what does it do?
volume writebehind
type performance/write-behind
option window-size 1MB
subvolumes replicate
end-volume
volume cache
type performance/io-cache
option cache-size 512MB
subvolumes writebehind
end-volume
The above is from your /etc/glusterfs/datastore.vol example, i'm curious about the two lines i marked with bold…
Windows-size: What is this, do i get more performance with a bigger window?
cache-size: Where does the cache get saved? on disk, or in memory(RAM)?
I hope you can answer my questions, i really like your blog. Is there any way i can subscribe so i get an email when you post something new?
james.coyle
2-Sep-2013 at 10:54 am
Hi Stephen,
I am working on a couple of performance related posts now, but they won’t be ready for publishing for a few more days.
To answer your above questions…
Firstly, it’s important to note that there is no hard rule when it comes to performance attributes. On different systems, and under different workloads you will notice different results and therefore you must tune each attribute to your environment.
max-read – This is taken from NFS and is often used when mounting remote filesystems. It's the maximum amount of data which will be read in one operation. There is also a max_write which can help you tune write performance. Generally, set this larger for large files, smaller for small files. As I say above, try different values under different workloads to see what works best.
window-size – this is the size of the write behind, per file buffer. This holds data of a changed file which has not yet been committed to disk. Use this with care!
cache-size – This is the size in bytes of the local file cache. The GlusterFS client will cache read operations on the client to reduce the amount of network IO traffic. I am not sure where this data resides, however even if it is not explicitly committed to RAM, it's likely the kernel would cache it.
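If you would rather not maintain these in a client volfile, similar tuning can (if I remember correctly) be applied on the servers with gluster volume set; for example, assuming a volume named datastore:
gluster volume set datastore performance.cache-size 512MB
gluster volume set datastore performance.write-behind-window-size 1MB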
Ivan Rossi
23-Sep-2013 at 2:12 pm
To have mount redundancy you write that it is necessary to create a volume config file.
By reading gluster admin manual at page 25, it seems that the backupvolfile=myredundanteserver option in /etc/fstab should get the same result.
Do you imply that this piece of documentation is not correct?
james.coyle
23-Sep-2013 at 3:08 pm
Thank you for the info, Ivan. I have updated the article.
junaid
27-Dec-2013 at 6:34 am
Hi James,
Does glusterfs work on the disks created with openstack as well. I am trying to back up my openstack disk with glusterfs. Not sure if this will work. Example this is what I have right now,
—–
-rw-rw-rw- 2 root root 5368709120 Dec 24 08:35 volume-10891405-569b-4630-aa62-9deb9a6f1087
—-
Thats the disk I am trying to back up with glusterfs.
james.coyle
27-Dec-2013 at 11:03 am
Hi Junaid,
Great question – unfortunately I have never used OpenStack so I cannot say for sure. Something I can tell you is GlusterFS will work on anything which can be accessed as a folder on the OS. If you can mount your volume as a folder, you will be able to use GlusterFS on it.
Cheers,
James.
cheis
10-Feb-2014 at 9:44 am
Hi,
I’ve configured a redundant datashare with a volume config file. The problem is the automounting at boot. In a debian Wheezy I get the error:
[2014-02-10 09:08:35.156119] I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.1 (/usr/sbin/glusterfs --log-file=/var/log/gluster.log --fuse-mountopts=allow_other,default_permissions,max_read=131072,loglevel=WARNING --volfile=/etc/glusterfs/datashare.vol --fuse-mountopts=allow_other,default_permissions,max_read=131072,loglevel=WARNING /mnt/share)
[2014-02-10 09:08:35.195355] E [mount.c:267:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
[2014-02-10 09:08:35.195369] E [xlator.c:390:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
It seems a problem with the order of fuse during boot. I’ve found this link:
Florian
19-Jun-2014 at 2:38 pm
Thanks for all your guides!
I’m trying to mount two gluster servers (with a replicated volume) for use in proxmox. Since it is only possible to specify one server through the GUI, I must do it manually, using your guide here…
This all works fine, but I’m unsure how to add the mounted volume as a “shared” storage to proxmox (so that CTs can be migrated away from *dead* nodes)? If I add it as “directory” in the proxmox storage GUI, the storage is still considered local and I can’t migrate a CT that was running on a just-failed node away from that node (I get the usual 595 no route to host error, similar to the case when using “regular” local storage…)
Thanks for any pointers!
Florian
19-Jun-2014 at 3:05 pm
I should have read your other article (/how-to/533-glusterfs-storage-mount-in-proxmox) more closely: There's no need to add all server IPs, one's enough as it communicates the other redundant server IPs automatically…
That said, in case of a particular gluster server failure, there’s a 42s delay before things continue to work, so if you are testing this – give it a minute ;-)
james.coyle
19-Jun-2014 at 4:06 pm
I'm glad you got it all working :)
semiosis
2-Sep-2014 at 3:50 pm
Use dns round-robin for a more redundant mount, as described here: http://goo.gl/ktI6p
Joe Julian
2-Sep-2014 at 4:24 pm
Besides the "backupvolfile=myredundanteserver" option which is fine in very small installations, for larger installations the correct method for having redundant mount servers is to use round robin dns (rrdns). http://edwyseguru.wordpress.com/2012/01/09/using-rrdns-to-allow-mount-failover-with-glusterfs/
Mounting using volfiles precludes the ability to make live changes to your volumes and is much more open to user-error that can have catastrophic results (ie, loss of access, loss of data, etc.)
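For reference, round-robin DNS here just means publishing several A records under one name; a minimal BIND-style sketch with made-up addresses:
gluster IN A 192.168.1.11
gluster IN A 192.168.1.12
Clients then mount the volume using that shared name and the resolver rotates through the addresses.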
Peter H
3-Oct-2014 at 12:39 pm
I followed your post, trying to mount using a volfile. However, this does not work?
Mounting from any one of the 2 storage servers works fine, every change on that volume immediately replicates to second server, so the cluster in itself works.
It’s just mounting using your volfile setup that throws an error:
E [client-handshake.c:1778:client_query_portmap_cbk] 0-remote1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
Any ideas what’s going wrong?
The cluster works, gluster volume status & gluster volume info shows everything is ok. No firewall, selinux permissive, Centos 6.5 on all servers.
Andreas
6-Oct-2014 at 1:34 pm
hm, i do experience the same problem like Peter H on a debian wheezy, gluster version 3.5.2. Mounting the volume on the client manually works, but it doesn't work during reboot. Either fuse isn't loaded on time or before network is up, or something else is…
Andreas
6-Oct-2014 at 2:03 pm
hm, the problem seems to be quite … interesting:
mount -t glusterfs /etc/glusterfs/datastore.vol /mnt/glusterdata
but the glusterfs servers aren’t listening on port 24007 – they listen on port 49152. wtf?
Andreas
6-Oct-2014 at 2:16 pm
no good day today, should just start thinking before posting a comment… forget the comment above and this one, only the first one is relevant.
Yiorgos Stamoulis
18-Dec-2014 at 1:31 pmUsing Centos7,
I also got the “ERROR: Server name/volume name unspecified cannot proceed further..” error reported by others above while trying with a datastore.vol configuration file .
Looking into /sbin/mount.glusterfs (bash script), it does not seem capable of handling .vol type config files (as it's sed'ing $1 for hostname:/nfs-type-export and bailing out with the above when it does not find them).
Not much lost though as “-o backupvolfile-server=xx” works just fine. Will have to check however how important are all the other options that appear in the .vol file above and whether they can also be supplied as options.
Hope this was helpful.
Roger Marcus
11-Feb-2015 at 8:28 pm
Question: I get different results depending if I mount a glusterfs with mount "-t glusterfs" or "-t nfs". Some files are missing from the glusterfs mount.
Details: I have two bananapi’s with a volume setup following your two bananapi how to sheet. The client to the volume is an Ubuntu server running 14.04 LTS.
From the two bananas, everything is intact. Also when the server is mounted with nfs, the missing files are accessible and show up with an “ls”. It’s just the -t glusterfs which doesn’t seem to see 50% of the files.
Any thoughts?
Tam
19-Mar-2015 at 5:24 am
Thanks for the wonderful info. However I could not make the following scenario work.
Our setup is as follows,
– 2 gluster server with replication.
– from another server, I am mounting the gluster volume as redundant mount point.
Query,
Everything works fine as expected. If any one of the gluster server goes down, within 60 seconds from the third server I am able to access the gluster volume seamlessly. Now consider the case when the first server (mentioned in /etc/fstab) is brought down. The third server can access the gluster volume from the second gluster server. Good and it worked as expected. Now if I umount the mount point and try to re-mount, then it did not work when the first gluster server being down. Any solution to this ?
Sebastian Krätzig
12-Aug-2016 at 3:59 pm
Hey,
thanks for this guide, but you should update these lines:
> option remote-subvolume [VOLNAME]
…to something like this:
> option remote-subvolume [/path/to/brick]
VOLNAME would be the name of a volume, which will be shown by the command “gluster volume info/status”:
> :~# gluster volume info
>
> Volume Name: gfsvbackup
> Type: Distribute
> Volume ID: 0b7b6027-0b35-497d-bddb-4d64b34828b0
> Status: Started
> Number of Bricks: 3
> Transport-type: tcp
> Bricks:
> Brick1: server01:/export/vdc1/brick
> Brick2: server02:/export/vdd1/brick
> Brick3: server03:/export/vde1/brick
But in this case, you really need the path to the brick. For example “/export/vdc1/brick”.
Thanks again,
Sebastian
fdir
13-Jun-2017 at 9:37 pm
lifesaver, thanks seb!
fdir
14-Jun-2017 at 3:10 pm
This gave me errors when trying to mount
gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev,backupvolfile-server=gfs2.jamescoyle.net 0 0
needed an extra dash (-) and an “s” to work on 3.10 for me, “backup-volfile-servers” instead of “backupvolfile-server”
gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev,backup-volfile-servers=gfs2.jamescoyle.net 0 0
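If I read the mount.glusterfs documentation correctly, the newer option also accepts more than one fallback server as a colon-separated list; a sketch with a hypothetical third host:
gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev,backup-volfile-servers=gfs2.jamescoyle.net:gfs3.jamescoyle.net 0 0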
m4rcc
24-Nov-2017 at 7:07 am
Hi,
There is something i don’t get, this “backupvolfile-server” described here it is used just first time we mount the GlusterFS volume, right? I mean once we established the connectivity with that server FIRST TIME, from the GlusterFS Docs pasted below i understand we won’t need any further “backupvolfile-server”, is this right?
######The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount).#######
Dragomir
1-Mar-2021 at 11:34 pm
The volume config file doesn't work anymore on Debian 10; the fstab method should be used instead.