This blog post is mainly aimed at individuals who are interested in the upstream world and would like to work on NFS-Ganesha. Plenty of articles already cover Ceph and NFS-Ganesha themselves, so this post will not explain either of them again. Instead, it is a complete manual on how to build NFS-Ganesha from the upstream community code and connect it to the Ceph cluster you have (upstream or downstream).
First things first, if you are unaware of the upstream GitHub repository for NFS-Ganesha development, here it is: https://github.com/nfs-ganesha/nfs-ganesha. The latest code lives in the "next" branch.
Let's walk through the process of building and bringing up the Ganesha service from the upstream code, step by step.
The steps below are for CentOS machines.
Pre-requisites
- Install the dependencies
dnf update -y
dnf config-manager --set-enabled crb
dnf install -y epel-release
dnf -y install centos-release-ceph epel-release
dnf install -y git bison cmake dbus-devel flex gcc-c++ krb5-devel libacl-devel libblkid-devel libcap-devel redhat-rpm-config rpm-build xfsprogs-devel
dnf install -y libnsl2-devel libnfsidmap-devel libwbclient-devel userspace-rcu-devel
dnf install -y libcephfs-devel
- Clone the NFS-Ganesha Git repo
git clone https://github.com/nfs-ganesha/nfs-ganesha.git
- Initialize and update the git submodules recursively (prometheus-cpp-lite moved from being a submodule of ganesha to a submodule of ntirpc, so a recursive update is needed)
cd nfs-ganesha
git submodule sync --recursive
git submodule update --init --recursive
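- Build Ganesha. The exact CMake options depend on which FSALs you want; the sketch below assumes the defaults, with FSAL_CEPH picked up automatically because libcephfs-devel is installed.
mkdir build && cd build    # out-of-tree build directory; CMakeLists.txt lives under src/
cmake ../src -DCMAKE_BUILD_TYPE=Release
make -j"$(nproc)"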
Once the build completes, you will find the ganesha.nfsd binary in the build folder.
Setup Upstream Ceph Cluster
Note: For reference, a single-node cluster is used.
sudo dnf install -y cephadm
sudo cephadm add-repo --release squid
sudo dnf install -y ceph
sudo cephadm bootstrap --mon-ip $(hostname -I | awk '{print $1}')
sudo ceph orch apply osd --all-available-devices
sudo ceph fs volume create cephfs
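Before wiring up Ganesha, it is worth confirming that the cluster is healthy, the OSDs are up, and the CephFS volume exists. These are plain status commands; nothing cluster-specific is assumed.
sudo ceph -s        # overall cluster health
sudo ceph fs ls     # the cephfs volume created above should be listed
sudo ceph orch ls   # deployed services, including the OSD spec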
Bring up the Ganesha service
- Create the ganesha.conf file (refer to the sample conf file below).
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/nfs/cephfs";
    Protocols = 4;
    Transports = TCP;
    Access_Type = RW;
    Squash = None;
    FSAL {
        Name = "CEPH";
        User_Id = "admin";
        Secret_Access_Key = "";
    }
}
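Fill Secret_Access_Key with the key of the Ceph user named in the FSAL block (client.admin in this sample); you can fetch it with ceph auth get-key. Below is a minimal sketch of starting the freshly built daemon in the foreground with this conf, assuming you saved it as /etc/ganesha/ganesha.conf and are running from the repo root; the paths are illustrative, adjust them to your layout.
# Key for the Ceph user referenced in the FSAL block
sudo ceph auth get-key client.admin
# Run the freshly built daemon in the foreground with the sample conf
sudo ./build/MainNFSD/ganesha.nfsd -F -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_EVENT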
Mount