Kadalu Storage is now integrated with NFS Ganesha!

Users can now mount Kadalu Storage volumes on NFS protocol client machines with NFS Ganesha!

Challenges faced during the integration:

During the integration of NFS Ganesha with Kadalu Storage, we encountered several challenges. Kadalu Storage works with NFS Ganesha's Gluster FSAL without needing a separate Kadalu FSAL, since Kadalu Storage uses the core Gluster file system layer. However, NFS Ganesha lacked support for multiple backup volfile servers and for custom ports.

  • Previously, GlusterFS with GlusterD served volfiles through the daemon actively running on the server node. This presented a challenge for Kadalu Storage, which does not use the GlusterD management system and instead uses a more modern REST API based management layer. Read here for their differences & advantages.
    To overcome this, we explored alternative ways to serve volfiles through the glusterfsd (brick) processes running on all storage nodes. This approach eliminated the need for additional servers or API requests to the Kadalu Manager (mgr) to serve the client volfiles required by applications, resulting in improved performance and reduced overhead.

  • NFS Ganesha has a modular design that includes various FSALs (File System Abstraction Layers) to establish connections with different backend file systems and perform file-system-specific operations. However, the Gluster FSAL, which connects to GlusterFS via libgfapi, depended on the Gluster Daemon to obtain the volume ID, which posed a challenge. To overcome this problem, we added the volume ID key at the io-stats xlator level in the Gluster client volfile. This allowed NFS Ganesha, through libgfapi, to retrieve the volume ID without relying on the Gluster Daemon, provided that the volume ID was present in the client volfile.

  • The next hindrance was a side effect of the improved volume management in Kadalu Storage.

    Previously, it was only possible to create GlusterFS volumes at the default port of 24007 without any option for custom ports. However, with Kadalu Storage Manager, it is now possible to create volumes with custom ports and have the volume start at that specific port upon node reboot.

    The NFS Ganesha FSAL for GlusterFS was designed with a hardcoded connection to glusterd on port 24007 to fetch volfiles and set that daemon as the backup volfile server.
    rc = glfs_set_volfile_server(fs, "rdma", params.glhostname, 24007);

    Since Kadalu Storage now supported serving of volfiles through the brick process (glusterfsd), there was no need to contact glusterd on port 24007. Instead, the node running the brick process and its port could be added as a backup_volfile_server to fetch volfiles.
    rc = glfs_set_volfile_server(fs, gtransport, ghostname, gport);

    We added support for multiple backup volfile servers with custom ports and ensured backward compatibility. This way, if one of the volfile server nodes went down, the NFS Ganesha server could still contact another backup volfile server for volfiles (see the libgfapi sketch below).

These changes were made in GlusterFS upstream and NFS Ganesha upstream, which were enough to complete the integration of Kadalu Storage with NFS Ganesha.
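
To make these changes concrete, below is a minimal libgfapi sketch (not the actual NFS Ganesha FSAL code) of what they enable: registering every brick node and its brick process port as a volfile server, then reading the volume ID without contacting glusterd on port 24007. The hostnames and ports are the ones used in the export example later in this post, and the log path is arbitrary.

      /* Minimal sketch: connect to the Kadalu Storage volume "vol1" via
       * libgfapi using brick-process ports as volfile servers, then read
       * the volume UUID without a glusterd round trip. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <glusterfs/api/glfs.h>

      int main(void)
      {
          glfs_t *fs = glfs_new("vol1");
          if (!fs)
              return EXIT_FAILURE;

          /* Register every brick node as a volfile server, using the brick
           * process (glusterfsd) port instead of glusterd's 24007. Each call
           * adds one more backup volfile server to the list. */
          glfs_set_volfile_server(fs, "tcp", "server1", 49252);
          glfs_set_volfile_server(fs, "tcp", "server2", 49253);
          glfs_set_volfile_server(fs, "tcp", "server3", 49254);

          glfs_set_logging(fs, "/tmp/vol1-gfapi.log", 7);

          if (glfs_init(fs) != 0) {
              fprintf(stderr, "glfs_init failed\n");
              glfs_fini(fs);
              return EXIT_FAILURE;
          }

          /* With the volume ID present in the client volfile, libgfapi can
           * return the 16-byte volume UUID without contacting glusterd. */
          char volid[16];
          int ret = glfs_get_volumeid(fs, volid, sizeof(volid));
          if (ret < 0) {
              fprintf(stderr, "glfs_get_volumeid failed\n");
          } else {
              printf("volume uuid: ");
              for (int i = 0; i < ret && i < (int)sizeof(volid); i++)
                  printf("%02x", (unsigned char)volid[i]);
              printf("\n");
          }

          glfs_fini(fs);
          return EXIT_SUCCESS;
      }

Assuming the glusterfs-api development package is installed, this can be compiled with gcc and the flags reported by pkg-config for glusterfs-api.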

Mounting Kadalu Storage Volume with NFS Ganesha:

Please go through the quick start guide to learn how to set up Kadalu Storage.

  • Create a Kadalu Storage Volume vol1 :
    Here the vol1 volume is distributed across 3 nodes:

      kadalu volume create dev/vol1 server1:/exports/vol1/s1 server2:/exports/vol1/s2 server3:/exports/vol1/s3
    
  • Create NFS Ganesha export configuration:

    Previously, only one hostname could be given as the backup volfile server. Now users can give multiple comma-separated hostnames, "host1:port1, host2:port2, host3:port3, ....", where host is the hostname of the node and port is the brick process port.

      EXPORT
      {
             # Export Id (mandatory, each EXPORT must have a unique Export_Id)
             Export_Id = 1;
    
             # Exported path (mandatory)
             Path = "/vol1";
    
             # Pseudo Path (required for NFS v4)
             Pseudo = "/vol1";
    
             # Required for access (default is None)
             # Could use CLIENT blocks instead
             Access_Type = RW;
    
             # Allow root access
             Squash = No_Root_Squash;
    
             # Security flavor supported
             SecType = "sys";
    
             # Exporting FSAL
             FSAL {
                     Name = "GLUSTER";
                     Hostname = "server1:49252, server2:49253, server3:49254";
                     Volume = "vol1";
                     enable_upcall = true;
                     Transport = tcp; # tcp or rdma
             }
      }
    

    The rest of the steps are similar to exporting any other volume with NFS Ganesha.

  • Start the NFS-GANESHA daemon and mount the volume vol1:

      systemctl restart nfs-ganesha
      systemctl enable nfs-ganesha
      systemctl status nfs-ganesha
      # if failed, try to check log at /var/log/ganesha/ganesha.log
    
      showmount -e server1
    
      mount -t nfs server1:/vol1 /mnt/vol1_nfs
    

    Verify that the volume is mounted with df -h.

Soon we will be creating a client package for nfs-ganesha-kadalu for easy installation and use with Kadalu Storage. The next version of Kadalu Storage, 1.1, will be released by the end of February with very exciting new features.

Also, NFS Ganesha mount will be used as the default in the Kadalu Kubernetes Native Storage CSI, with optional support for the FUSE mount that is currently used for nodeservers.

For any queries please feel free to contact:
Mail: vatsa@kadalu.tech
Or raise any issues at Kadalu Storage Manager Upstream.