Network File Systems - The journey to the setup



For the last 4-6 months or so, I have been using SMB to share files from a local server of mine that also acts as a Pi-hole, but I always wondered about the hoops I have to jump through to make the drive accessible in different situations; the most recent was enabling SMB1 compatibility so a Windows XP VM could reach some PIC programmer files. At this point, getting native speed and compatibility out of it is a mess.

While watching videos before going to bed, I saw a video by Chris Titus Tech that showcased NFS functionality on Windows, describing the inconveniences of SMB along the way. Up to that point I had never heard of this protocol, despite it being the more native option on Linux/Unix systems.

This could be a major upgrade for my server, considering I don't have any actual Windows computer, only an M1 MacBook Air and a MacBook Pro running Debian, so NFS would fit right in.


While researching this, I started to notice that the initial steps of the setup were similar to SMB:

  • Create the folders that will be the mounting points that other computers will connect to.
  • Give the appropriate permissions to the folder for the specific user/group.
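Those two steps boil down to a couple of commands; a sketch, assuming the /TheDrive path and the uid/gid 1000 that show up later in the mount step:

```shell
# Create the directory that will be exported (the mount point clients connect to)
sudo mkdir -p /TheDrive

# Hand it to the sharing user/group (uid/gid 1000 here, matching the later mount options)
sudo chown 1000:1000 /TheDrive
sudo chmod 775 /TheDrive
```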

I initially followed the steps from a guide, then transitioned to the Red Hat Linux Network Configuration section.

The Setup

After that, it was time to set up the exports file (/etc/exports), a simple file that lists the directories (or mount points) that will be shared on the network. It only needs a couple of settings.

/TheDrive       *(rw,sync,no_subtree_check,root_squash)
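Whenever /etc/exports changes, the new entries can be applied without restarting the whole service; a sketch, assuming the usual nfs-kernel-server tooling on Debian:

```shell
# Re-export everything currently listed in /etc/exports
sudo exportfs -ra

# Show what is actually being exported, with the active options
sudo exportfs -v
```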

And this is where the problems began. Neither the Debian machine nor the Mac would connect to the drive: the Debian machine kept reporting "Invalid Protocol" in Dolphin, while macOS just did not connect at all. So the next step was figuring out whether mounting the drives requires a different setup. Then I noticed that in every tutorial, everyone mounted the drives with the 'mount' command, the same command I use to mount the external drives on the server. So the next command came along to mount it, first on the Debian machine:

mount -t nfs <server-ip>:/TheDrive /mnt/nfsdrive

And voila! I got a file listing for the folder. However, I could not write to it at all. This is because NFS authenticates by host rather than by user: if I am user "jose" but the mounts and folders were created by "root" and tagged as such, I can read them as "jose", but to write anything to them I must be "root" (this is best explained here). That was a simple change to perform, but then I hit another snag with the external drive, since the command always mounts it as the root user. It was quickly fixable with the -o flag, which lets you assign the user and group ID for the mount point, so I just added those.

mount -o uid=1000,gid=1000,umask=0000 /dev/sda2 /HDDDrive

And boom, the hard drive folder mounted as "jose", allowing me to write from computers that share the same user.
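To make both mounts survive a reboot, they can also live in /etc/fstab; a sketch, where the server address 192.168.1.10 is a placeholder and the uid/gid options assume a filesystem (exFAT/NTFS-style) that accepts them:

```
# External drive on the server, owned by uid/gid 1000 at mount time
/dev/sda2               /HDDDrive      auto  uid=1000,gid=1000,umask=0000  0  0

# NFS share on a client machine (replace 192.168.1.10 with the server's address)
192.168.1.10:/TheDrive  /mnt/nfsdrive  nfs   defaults                      0  2
```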

Some security

Of course, a little bit of security shouldn't harm when dealing with this network drive. If you noticed above, I used *, which points to everyone. So everyone can access this drive, which is actually OK since it is also shared with family members, but I don't want outside actors, so it's time for a firewall rule.

ufw allow from 192.168.1.0/24 to any port nfs

(with 192.168.1.0/24 standing in for your LAN subnet)

And back on the exports file:
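The same restriction applies here: the * wildcard gets replaced with the LAN subnet, so only local hosts can mount the share (using the same placeholder range as the firewall rule):

```
/TheDrive       192.168.1.0/24(rw,sync,no_subtree_check,root_squash)
```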


The results

Immediately, I saw a major improvement in read speed across smaller files; it was night and day. I could finally explore my SM Archive local backup without waiting 4-5 seconds for the file listing to finish fetching, as happened on SMB. Write speeds are relatively the same once you start dealing with heavier files, but NFS really shines on smaller ones. I still have my SMB instance running alongside the NFS one for compatibility purposes, given that XP VM I mentioned previously and many other devices that aren't fully UNIX compliant, like iPhones or TVs.

An additional note I found out while trying to get NFS running on the macOS client: if you mount the volume with sudo, the mount is created as root, which obviously means root is the user that accesses the NFS drive. If root isn't mapped in the exports file, it won't be able to write anything because of permissions, just like how the mount on the server side was owned by root and wouldn't allow writes from any device. For this I swapped the root_squash flag for all_squash, which squashes root as well, mapping every client user to a specific UID and GID provided in the export file. This was needed because the MacBook has a different username; I can just assign the UID and GID from the mounting step here, and when it connects, it already becomes jose and is ready to write!
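Putting that together, the export line and the macOS mount might look like this; a sketch, where the anonuid/anongid of 1000, the subnet, the server address, and the client-side path are assumptions:

```shell
# /etc/exports on the server: squash every client user, including root,
# down to uid/gid 1000 ("jose"):
#   /TheDrive  192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)

# On the macOS client, -o resvport is usually needed, since many NFS servers
# only accept requests coming from privileged source ports
sudo mount -t nfs -o resvport 192.168.1.10:/TheDrive /Users/jose/nfsdrive
```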