NFS

Last updated: 2023-09-05

Contents

  • Installation
    Debian 5
    CentOS 6
    CentOS 7
  • NFS Version
  • Server: Configuring Shared Folders
     - /etc/exports
     - exportfs

Introduction


NFS has four major versions:

  1. NFS v4.1
  2. NFS v4 (RFC 3010)
  3. NFS v3 (RFC 1813)
  4. NFS v2 (RFC 1094)

Naturally, the newest version has the most features.

The NFS server and client communicate via RPC, with different daemons handling different tasks:

Daemon

  • portmap:                   # port mapping (RPC)
  • rpc.nfsd:                  # handles client access
  • rpc.mountd:                # handles NFS file system mount requests
  • rpc.lockd: (optional)      # started together with "service nfs start"
  • rpc.statd: (optional)      # /etc/init.d/nfslock
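
A quick way to see which of these daemons are currently registered with the portmapper:

rpcinfo -p localhost | grep -E 'portmapper|nfs|mountd|nlockmgr|status'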

 


Installation

 

Debian 5

nfs-kernel-server - support for NFS kernel server

apt-get install nfs-kernel-server

CentOS 6

yum -y install nfs-utils

Programs provided:

/usr/sbin/rpc.svcgssd
/usr/sbin/rpc.nfsd
/usr/sbin/exportfs
/usr/sbin/rpc.mountd

Setting

/etc/sysconfig/nfs

# rpc.mountd
# Define which MOUNT protocol versions rpc.mountd serves
MOUNTD_NFS_V1="no"
MOUNTD_NFS_V2="no"
MOUNTD_NFS_V3="yes"

# rpc.nfsd
# Turn off v2 and v3 protocol support
RPCNFSDARGS="-N 2 -N 3"
# Total number of nfsd processes (how many clients can be served at the same time)
RPCNFSDCOUNT=8


# Pin each service to a fixed port
LOCKD_TCPPORT=32803
MOUNTD_PORT=892
STATD_PORT=662

chkconfig nfs on

service nfs start

service nfs status

rpc.svcgssd is stopped
rpc.mountd (pid 4649) is running...
nfsd (pid 4716 4715 4714 4713 4712 4711 4710 4709) is running...
rpc.rquotad (pid 4645) is running...
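
To confirm the pinned ports from /etc/sysconfig/nfs took effect, ask the portmapper (the ports shown should match the config above):

rpcinfo -p | grep -E 'mountd|status|nlockmgr'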

rpc.mountd (892)

The rpc.mountd daemon implements the server side of the NFS MOUNT protocol.
rpc.mountd uses the ACLs in /etc/exports to determine whether an NFS client is permitted to access a given file system.
It provides an ancillary service needed to satisfy mount requests by NFS clients.
rpc.mountd returns an NFS file handle for the export's root directory to the client.
The rpc.mountd daemon registers every successful MNT request by adding an entry to /var/lib/nfs/rmtab.

NFS MOUNT protocol

MNT (mount an export)
     - pathname of the root directory of the export
     - sender's IP address
UMNT (unmount an export)

rpc.nfsd (TCP/UDP port: 2049)

    Implements the user-level part of the NFS service.
    The user-space program merely specifies:

       * what sort of sockets the kernel service should listen on,
       * what NFS versions it should support,
       * how many kernel threads it should use.

The main functionality is handled by the nfsd kernel module

lsmod | grep nfsd

CentOS 7

yum -y install nfs-utils

# nfs depends on rpcbind

When NFS is not working, check rpcbind with:

systemctl status rpcbind

# Re-generate the config file

# Generated file: /run/sysconfig/nfs-utils

systemctl restart nfs-config

cat  /run/sysconfig/nfs-utils

RPCNFSDARGS=" 8"
RPCMOUNTDARGS=""
STATDARGS=""
SMNOTIFYARGS=""
RPCIDMAPDARGS=""
GSSDARGS=""
BLKMAPDARGS=""
GSS_USE_PROXY="yes"

# status

systemctl status nfs
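
To also start it at boot (on CentOS 7 the service unit is nfs-server; nfs is an alias for it):

systemctl enable nfs-server
systemctl start nfs-server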

 


NFS Version

 

NFSv3

The NFS version 2 and 3 protocols are stateless. (The NFSv4 protocol introduces state.)

NFSv4

NFS Version 4 combines the disparate NFS protocols (stat, NLM, mount, ACL, and NFS)
into a single protocol specification, allowing better compatibility with network firewalls.

 * mountd, statd, and lockd are not required in a pure NFSv4 environment.

 * single target port (2049/tcp)

 * NFSv4 clients are required to renew leases on files and filesystems on a regular basis.
   => This activity keeps the TCP session active.

Stateful

Opening, locking, reading, and writing carry state information that notifies the server of the client's intentions on the object.

The server can then return information to a client about other clients that also have intentions on the same object.

Locking

  • Support for byte-range locking
  • Locking in NFSv4 is lease-based

String data

All "string data" used by the protocol is represented in UTF-8 as it crosses the network.
User and group information is passed between the client and server in string form, not as numeric values as in previous versions.
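
This is why NFSv4 relies on rpc.idmapd with the same NFSv4 domain configured on both client and server; a minimal sketch of /etc/idmapd.conf (the domain name is illustrative):

[General]
Domain = example.com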

Remark

NFS version 4 performance might be slower than with NFS version 3 for many applications.

The performance impact varies significantly depending on which new functions you use.

NFSv4.1

Adds the pNFS (parallel NFS) extension.

 


Server: Configuring Shared Folders

 

/etc/exports

 

[shared directory] [Client1_IP(option1,option2)]  Client2_IP *(ro)

When no options are given, the defaults are used: ro,sync,root_squash,wdelay

IP

address/netmask

netmask: /255.255.252.0 or /22

Opts

ro / rw                   read-only (Default) / read-write

sync / async              sync (Default)
                          async: reply to client requests as soon as the request has been
                          processed and handed off to the local file system

secure / insecure         require requests to originate from TCP/IP ports below 1024 (Default)
                          / allow source ports above 1024

wdelay (Default)          causes the NFS server to delay writing to the disk if it suspects
                          another write request is imminent
no_wdelay                 write immediately even when several clients are writing to the NFS
                          directory; unnecessary when async is used

hide                      do not separately expose subdirectories of an exported directory (Default)
nohide                    expose subdirectories of the exported NFS directory

subtree_check             when exporting a subdirectory such as /usr/bin, force NFS to check
                          permissions on the parent directories (Default)
no_subtree_check          the opposite of subtree_check: skip the parent-directory check

all_squash                # map the UID and GID of all requests to nobody and nogroup
no_all_squash             # preserve the UID and GID supplied by the client (Default)
root_squash               # map all requests from root to the anonymous user (Default)
no_root_squash            # root gets full administrative access to the exported directory

anonuid=xxx               # UID of the anonymous user (nobody, 65534)
anongid=xxx               # GID of the anonymous user (nogroup, 65534)

pnfs                      # pNFS extension; requires NFSv4.1 or higher and a file system
                          # that supports pNFS
                          # pNFS clients can bypass the server and perform I/O directly
                          # to storage devices
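
A minimal /etc/exports combining these options (the paths, subnet and world-readable export are illustrative):

# one export per line: directory, then clients with their options
/home/share    192.168.123.0/24(rw,sync,root_squash)
/srv/public    *(ro,all_squash,anonuid=65534,anongid=65534)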

Help

man exports

 

exportfs

exportfs can add or remove exports without modifying /etc/exports

Available options:

-a    export (or unexport) all directories
-u    unexport
-r    re-export all directories (synchronize /var/lib/nfs/etab with /etc/exports)
-v    verbose

Initialize shares

exportfs -a

Reads /etc/exports and generates the /var/lib/nfs/etab file.

mountd reads etab to decide what access each client gets.

etab contains the complete, expanded options:

/home/share     192.168.123.112(rw,sync,wdelay,hide,nocrossmnt,secure,
                                root_squash,no_all_squash,no_subtree_check,secure_locks,acl,
                                anonuid=65534,anongid=65534)

address/netmask is supported

List current exports

exportfs

/home/share     192.168.123.112

Show export options

exportfs -v

/home/vhosts/X/api/public_html/uploads
                172.16.11.11(rw,sync,wdelay,hide,no_subtree_check,sec=sys,secure,no_root_squash,no_all_squash)

Re-read /etc/exports

exportfs -r

Unexport

Unexport all shares

exportfs -au

Unexport only IP:/usr/tmp

exportfs -u IP:/usr/tmp
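
exportfs can also publish a share on the fly, without touching /etc/exports (path and subnet are illustrative):

# temporary export; it disappears after exportfs -r or a reboot
exportfs -o rw,sync 192.168.123.0/24:/srv/scratch

# remove it again
exportfs -u 192.168.123.0/24:/srv/scratch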

 


Checking the Network and Statistics

 

lsof -i|grep rpc

nfsstat [-l]

-l, --list              # Print information in list form

-c, --client            # Print only client-side statistics

-s, --server            # Print only server-side statistics

Server rpc stats:
calls      badcalls   badclnt    badauth    xdrcall
0          0          0          0          0

-l

nfs v3 client        total:      604
------------- ------------- --------
nfs v3 client      getattr:       66
nfs v3 client      setattr:        4
nfs v3 client       lookup:       16
nfs v3 client       access:       11
nfs v3 client        write:      481
nfs v3 client       create:        8
nfs v3 client       remove:        3
nfs v3 client  readdirplus:        4
nfs v3 client       fsinfo:        6
nfs v3 client     pathconf:        1
nfs v3 client       commit:        4

* Writes are followed by a commit, which flushes them to stable storage

Related files under /proc

/proc/fs/nfsd/ contains:

export_features
exports
filehandle
max_block_size
nfsv4gracetime
nfsv4leasetime
nfsv4recoverydir
pool_stats
pool_threads
portlist
reply_cache_stats
supported_krb5_enctypes
threads
unlock_filesystem
unlock_ip
versions

/proc/fs/nfs/exports

# Version 1.2
# Path Client(Flags) # IPs

 


Remote Procedure Call

 

Tells the client which port number each NFS service uses
(because the auxiliary NFS daemons listen on randomly assigned ports)

NFS takes a random port  --registers-->  RPC (portmapper)

Notes

Portmapper is a service that is utilized for mapping network service ports
to RPC (Remote Procedure Call) program numbers.

Default port: 111/TCP/UDP

rpcinfo shows which programs are registered with RPC

  • -p      probe the portmapper on the host

Example:

rpcinfo -p localhost

Default:

[root@hyps2 root]# rpcinfo -p localhost
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    826  status
    100024    1   tcp    829  status

After starting nfs:

   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    826  status
    100024    1   tcp    829  status
    100011    1   udp    895  rquotad
    100011    2   udp    895  rquotad
    100011    1   tcp    898  rquotad
    100011    2   tcp    898  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   udp  57975  nlockmgr
    100021    3   udp  57975  nlockmgr
    100021    4   udp  57975  nlockmgr
    100021    1   tcp  32779  nlockmgr
    100021    3   tcp  32779  nlockmgr
    100021    4   tcp  32779  nlockmgr
    100005    1   udp    911  mountd
    100005    1   tcp    914  mountd
    100005    2   udp    911  mountd
    100005    2   tcp    914  mountd
    100005    3   udp    911  mountd
    100005    3   tcp    914  mountd

 

Restarting the service

/etc/init.d/portmap restart

 

Default:

portmap         0:off   1:off   2:off   3:off   4:off   5:off   6:off

P.S.

If the portmapper is restarted, everything previously registered with it is lost (services must re-register).

rpcgssd, rpcidmapd and rpcsvcgssd are unrelated to this.

rpcgssd

The rpcsec_gss protocol provides a means of using the GSS-API generic security API to secure protocols that use RPC (in particular, NFS). Before exchanging any RPC requests using rpcsec_gss, the RPC client must first establish a security context. The Linux kernel's implementation of rpcsec_gss depends on the user-space daemon rpc.gssd to establish security contexts. The rpc.gssd daemon uses files in the rpc_pipefs file system to communicate with the kernel.

-f Runs rpc.gssd in the foreground and sends output to stderr (as opposed to syslogd)

rpcidmapd

rpc.idmapd is the NFSv4 ID <-> name mapping daemon. It provides functionality to the NFSv4 kernel client and server, to which it communicates via upcalls, by translating user and group IDs to names, and vice versa.

rpcsvcgssd

The rpcsec_gss protocol provides a means of using the GSS-API generic security API to secure protocols that use RPC (in particular, NFS). Before exchanging any RPC requests using rpcsec_gss, the RPC client must first establish a security context with the RPC server. The Linux kernel's implementation of rpcsec_gss depends on the user-space daemon rpc.svcgssd to handle context establishment on the RPC server. The daemon uses files in the proc file system to communicate with the kernel.

 


User Identity

 

The server authenticates requests using the UID and GID supplied by the client.

The catch: when the server has no account for the client-supplied UID, the system falls back to nobody.

Using NIS to keep identities consistent across machines is therefore a good choice.
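
A quick sanity check that a user resolves to the same UID/GID on both sides (the user name and host are illustrative):

id alice                  # on the client
ssh nfs-server id alice   # should print the same uid/gid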

 


Client Side

 

Installation:

apt-get install nfs-common

提供工具:

  • /sbin/showmount
  • /sbin/mount.nfs
  • /sbin/umount.nfs
  • /usr/sbin/nfsstat

Query resources:

showmount  [-e [Server_IP]]

  • -a                   show client IPs and the directories they have mounted
  • -e                   show the export list
  • no options           show which clients are connected

e.g. on localhost

showmount

Hosts on debianA:
192.168.123.112

e.g. against server 192.168.123.111

showmount -e 192.168.123.111

Export list for 192.168.123.111:
/home/share (everyone)

Mount:

mount Server_IP:/home/share /mnt/share

Available options:

intr                # allow signals to interrupt an NFS call

udp/tcp

TCP advantage:

On lossy networks, a single dropped packet can be retransmitted, without the retransmission of the entire RPC request
 

noac                # disable attribute caching

retry=n             # how many minutes mount keeps retrying before giving up

hard / soft         # Default: hard

*hard: keep retrying indefinitely

*soft: give up after an error and terminate the mount attempt

With soft, an NFS request fails after retrans retransmissions have been sent, causing the NFS client to return an error to the calling application.

Example:

mount -t nfs -o soft,udp,noatime  Server_IP:/DIR      /Local_DIR

Mounting automatically at boot

Add to /etc/fstab:

Server_IP:/home/share      /mnt/share    nfs     rsize=8192,wsize=8192,timeo=14    0  0
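
Test the new entry without rebooting:

mount -a
df -h /mnt/share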

 


Running NFS Behind a Firewall

 

Allow ACL

# Allow TCP and UDP port 2049 for NFS.
# Allow TCP and UDP port 111 (rpcbind(portmap))
-A INPUT -p tcp -m state --state NEW -m multiport --dports 111,2049 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m multiport --dports 111,2049 -j ACCEPT

 * NFS requires rpcbind, which dynamically assigns ports for RPC services

 * For NFSv4, opening these 2 ports is enough (111, 2049)

Port:

1-1024: mountd (randomly assigned unless pinned; see below)

Blocking a specific IP

This can be done via the hosts.deny file or via iptables.

Via hosts.deny:

Add the banned host to /etc/hosts.deny

portmap:IP

Via iptables:

iptables -A INPUT -i eth0 -p TCP -s IP --dport 111 -j DROP

 

Dynamic Ports to Static Ports

CentOS 7

# control which ports the required RPC services run on

/etc/sysconfig/nfs

# mountd (rpc.mountd) [TCP/UDP]
MOUNTD_PORT=892

# status (rpc.statd) [TCP/UDP]
STATD_PORT=662

# Restart service

systemctl restart nfs

systemctl restart rpc-statd

lockd

# Method 1

/etc/modprobe.d/lockd.conf
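
A sketch of its contents (ports chosen to match method 2 below):

# /etc/modprobe.d/lockd.conf
options lockd nlm_tcpport=57269
options lockd nlm_udpport=57269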

# Method 2

vim /etc/sysctl.conf

fs.nfs.nlm_tcpport=57269
fs.nfs.nlm_udpport=57269

# Apply the settings

sysctl -p

Checking

cat /proc/sys/fs/nfs/nlm_*port

Checking

# report RPC information

rpcinfo -p

OR

rpcinfo -p | awk '{print $3 "\t" $4 "\t" $5}' | sort -u

# by log

tail -f /var/log/messages

firewalld

rich-rule

firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="tcp" port="111" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="udp" port="111" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="tcp" port="2049" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="udp" port="2049" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="tcp" port="892" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="udp" port="892" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="tcp" port="662" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="udp" port="662" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="tcp" port="57269" accept'
firewall-cmd --permanent \
--add-rich-rule='rule family="ipv4" source address="172.16.11.11" port protocol="udp" port="57269" accept'

firewall-cmd --reload
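
The same ten rules can be generated with a loop (same source IP and ports as above):

for port in 111 2049 892 662 57269; do
  for proto in tcp udp; do
    firewall-cmd --permanent \
    --add-rich-rule="rule family=\"ipv4\" source address=\"172.16.11.11\" port protocol=\"$proto\" port=\"$port\" accept"
  done
done
firewall-cmd --reload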

service.xml

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpcbind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload

nfs

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>NFS4</short>
  <description>...</description>
  <port protocol="tcp" port="2049"/>
</service>

mountd

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>mountd</short>
  <description>NFS Mount Lock Daemon</description>
  <port protocol="tcp" port="20048"/>
  <port protocol="udp" port="20048"/>
</service>
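
Custom service definitions like these belong in /etc/firewalld/services/ (the stock ones ship in /usr/lib/firewalld/services/); reload firewalld after adding or editing one:

firewall-cmd --reload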

 


NFS Version

 

# Default: NFS uses the highest version supported by both the kernel and the mount command.

Checking

nfsstat -m

... vers=4 ...

Setting

vers=3|4

 


Performance

 

Client

vers

vers=3

retrans / timeo

retrans=3,timeo=7

timeo is in tenths of a second: the client waits 7/10 s for a reply, then retransmits the RPC packet, up to 3 times.

# statistics for retransmission of packets

nfsstat

Client rpc stats:
calls      retrans    authrefrsh
133        2          133

Client nfs v4:
null         read         write        commit       open         open_conf
0         0% 0         0% 0         0% 0         0% 0         0% 0         0%
...

wsize / rsize

wsize and rsize set the size, in bytes, of the chunks of data exchanged between the SERVER and the CLIENT.

NFS v2 maximum: 8K (8192 bytes)

NFS v3 maximum: 64K (65536 bytes)

 * The optimal value differs between operating systems.

Example:

mount -t nfs -o rsize=1024,wsize=1024,noatime,nodiratime  Server_IP:/DIR /Local_DIR

Server

number of nfs threads

When a request is made for an NFS service, the NFS server generates a thread to handle the request.

Each thread can process one NFS request. A large pool of threads allows a server to handle more NFS requests in parallel.

A dual-processor server typically uses about 20 threads.

CentOS Setting

/etc/sysconfig/nfs

# the number of threads that will be started.
RPCNFSDCOUNT=8

# Checking

grep th /proc/net/rpc/nfsd

# first number:      total number of NFS server threads started
# second number:     indicates whether at any time all of the threads were running at once
# remaining numbers: a thread-count time histogram
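
The thread count can also be changed on the fly, without restarting the service:

rpc.nfsd 16        # switch to 16 nfsd threads immediately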

 


mount

 

# tcp with nfsv4

mount -t nfs -o proto=tcp,vers=4 192.168.88.210:/home/data /home/data

 


Statistics - /proc/net/rpc/nfsd

 

 

 


Related Packages

 

nfs-common - NFS support files common to client and server

Adding new user `statd' (UID 108) with group `nogroup'

showmount

nfsboot - Allow clients to boot over the network

Linux kernel's "nfsroot=" boot option
nfsroot=192.168.2.72:/tftpboot/nfsroot,v3

/etc/dhcpd.conf settings for the client machine:

option root-path "/tftpboot/nfsroot,v3";

nfsbooted - Prepares your image for nfs boot

unfs3 - User-space NFSv3 Server

nfswatch

nfs4-acl-tools

 


Disable Ubuntu nfs and portmap services

 

# service portmap stop

tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1583/rpcbind

# service nfs-kernel-server stop

tcp        0      0 0.0.0.0:52662           0.0.0.0:*               LISTEN      2891/rpc.mountd
tcp        0      0 0.0.0.0:35834           0.0.0.0:*               LISTEN      2891/rpc.mountd
tcp        0      0 0.0.0.0:34522           0.0.0.0:*               LISTEN      1695/rpc.statd
tcp        0      0 0.0.0.0:55237           0.0.0.0:*               LISTEN      2891/rpc.mountd

 


Troubleshoot

 

Error 1:

Starting NFS daemon: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
rpc.nfsd: unable to set any sockets for nfsd

Fix 1:

service rpcbind restart

chkconfig rpcbind on

Error 2:

rsync: chown "PATH" failed: Operation not permitted (1)

Fix 2:

Switch to NFS v3

Error 3:

exportfs: internal: no supported addresses in nfs_client
exportfs: 192.168.1.252:/home/vm/migrate: No such file or directory

Fix 3:

/etc/init.d/nfs start

 


Remark

 

rpc.statd

A daemon that listens for reboot notifications from other hosts, and manages the list of hosts to be notified when the local system reboots.

After an NFS client reboots, an NFS server must release all file locks held by applications that were running on that client.

After a server reboots, a client must remind the server of file locks held by applications running on that client.

NSM state number for this host

# The NSM (Network Status Monitor) protocol is used to notify NFS peers of reboots.

/var/lib/nfs/statd/state

Directory containing the monitor list:

ls /var/lib/nfs/statd/sm

 

 

 

 
