
Fundamentals Of Linux

Configuration of autofs, NFS security

Configuration of autofs
The essential configuration file for the automounter is /etc/auto.master, also referred to as the master map. The master map lists the autofs-controlled mount points on the system and their corresponding configuration files or network sources, called automount maps. The format of the master map is as follows:

mount-point map-name options

where:
mount-point is the autofs mount point, for example /home.
map-name is the name of a map source that contains a list of mount points and the filesystem locations from which those mount points should be mounted. The syntax for a map entry is given below.
options, if supplied, apply to all entries in the given map, provided they do not themselves have options specified. This behavior differs from autofs version 4, where options were cumulative. This was changed to meet the primary goal of mixed-environment compatibility.
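For example, a master map entry for an indirect /home mount might look like the following line; the map file name /etc/auto.home and the timeout value are illustrative assumptions rather than required values:

/home /etc/auto.home --timeout=60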
The general format of maps is similar to the master map; however, the options appear between the mount point and the location, rather than at the end of the entry as in the master map:

mount-point [options] location

where:
mount-point is the autofs mount point. This can be a single directory name for an indirect mount, or the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-point above) may be followed by a space-separated list of offset directories (sub-directory names each beginning with a "/"), making them what is known as a multi-mount entry.
options, if supplied, are the mount options for the map entries that do not specify options of their own.
location is the filesystem location, such as a local filesystem path (preceded with the Sun map format escape character ":" for map names beginning with "/"), an NFS filesystem, or another valid filesystem location.
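A minimal sketch of a corresponding map file, for instance /etc/auto.home, could look like the following; the server name, export path, and device name are purely illustrative assumptions:

payroll -fstype=nfs personnel:/exports/payroll
sales -fstype=ext4 :/dev/sdb1

The second entry shows the leading ":" used for a local filesystem location, as described above.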


The autofs mount point is shown in the first column of a map file. The second column shows the options for the autofs mount, while the third column shows the source of the mount. The automounter creates the directories if they do not exist. If the directories existed before the automounter was started, the automounter will not remove them when it exits. You can start or restart the automount daemon by executing the following commands:

$ /sbin/service autofs start

or

$ /sbin/service autofs restart

Using the above configuration, if a process requires access to an unmounted autofs directory such as /home/payroll/2006/July.sxc, the automount daemon mounts the directory automatically. If a timeout is specified, the directory is unmounted automatically when it has not been accessed for the timeout period. You can check the status of the automount daemon by executing the following command in the terminal:
$ /sbin/service autofs status

Configuration of NFS security
NFS relies on two similar but distinct protocols: NFS and MOUNT. Both use a data object called a file handle. There is also a distributed protocol for file locking, which is not strictly part of NFS. While there are no obvious security implications in the lock protocol itself (other than the obvious denial-of-service issues), the lockd daemon that implements the protocol was the subject of several buffer overflow vulnerabilities discovered in the late 1990s.
The following are some basic security issues with NFS:
NFS is built on Sun's RPC (Remote Procedure Call) and generally uses RPC for user authentication. Unless a secure form of RPC is used, NFS can be easily spoofed.
Even when Secure RPC is used, data sent by NFS over the network is not encrypted and is therefore subject to eavesdropping and monitoring. The data can be intercepted and replaced (thereby Trojaning or corrupting files being imported via NFS).
For access control, NFS uses the standard Unix filesystem permissions. This exposes the networked filesystem to many of the same issues that arise when using a local filesystem.


Fundamental Security for NFS

Access to NFS is granted by a mount to a specific directory listed in the /etc/exports file. IP addresses or hostnames are set up for the machines that are allowed access. If a client's IP address matches a share, it can mount it, regardless of whether it really is the host it claims to be. This makes it insecure: someone spoofing IP addresses, or a compromised machine, can mount your exports. Normal file access controls are used for file access, as access control is not a function of NFS itself. This means that once a drive is mounted, user and group permissions take over access control of the files.
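For instance, the list of directories a server exports, and the hosts or networks allowed to mount them, can be inspected from any client with the showmount utility; the server name below is only an illustrative placeholder:

$ showmount -e nfs-server.example.com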

These steps will help you reduce some of the security risks, but not all of them.

Edit /etc/hosts.deny with "ALL:ALL" to deny everyone
Edit /etc/hosts.allow with "portmap: your_subnet/subnet_mask"
Do the same for mountd, lockd, statd, and rquotad
This allows everyone in your subnet to access portmap and the other NFS daemons, but nobody else; these settings can be adjusted if you need more specific ranges (sample entries are shown after this list)
Use IP numbers in these fields, not hostnames
On the server machine, make sure you explicitly add the "root_squash" option to your shares in the /etc/exports file
This tells the server that we do not want to trust root on the client machines
"root_squash" will look something like this in /etc/exports:
/home/share host_ip(rw,root_squash)
This entry states that the host at "host_ip" may mount "/home/share" with read/write access and with root mapped to an unprivileged UID
Even with this option, someone could still use "su" to switch to another user. That cannot be prevented; however, you can make all files on the shared drive owned by root, and since root is the only account a client cannot become, the files cannot be altered.
Another good option is to make everything read-only with "ro"
Some good options for the client machine are "nosuid" and "noexec". nosuid disables set-UID programs on the mount, and noexec prevents execution of any binaries on the mount, although this may be impractical
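A sketch of what the TCP wrapper entries and a hardened client mount might look like follows; the 192.168.1.0/255.255.255.0 subnet, the share, and the client mount point are illustrative assumptions:

In /etc/hosts.deny:
ALL:ALL

In /etc/hosts.allow:
portmap: 192.168.1.0/255.255.255.0
mountd: 192.168.1.0/255.255.255.0
lockd: 192.168.1.0/255.255.255.0
statd: 192.168.1.0/255.255.255.0
rquotad: 192.168.1.0/255.255.255.0

On the client, the share can then be mounted read-only with nosuid and noexec:

# mount -o ro,nosuid,noexec host_ip:/home/share /mnt/share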

The firewalld services that must be active on the NFS server are as follows:

For the NFS server to work, enable the nfs, mountd, and rpc-bind services in the relevant zone using the firewall-config application or the firewall-cmd tool:

    # firewall-cmd --add-service=nfs --zone=internal --permanent
    # firewall-cmd --add-service=mountd --zone=internal --permanent
    # firewall-cmd --add-service=rpc-bind --zone=internal --permanent
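Because the commands above use --permanent, they change only the saved configuration; to apply them to the running firewall, a reload is normally required:

    # firewall-cmd --reload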


File Permissions

When an NFS filesystem is mounted read/write by a remote host, the only protection each shared file has is its permissions. If two users sharing the same user ID value mount the same NFS filesystem, they can modify each other's files. Additionally, anyone logged in as root on the client computer can use the su command to become a user who can access particular files via the NFS share. By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. It is not recommended that this feature be disabled.
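As a small illustration, ACLs on files reached through such an NFS mount can be examined and adjusted with the standard getfacl and setfacl tools from the acl package; the mount point, file name, and user name below are hypothetical:

$ getfacl /mnt/share/report.ods
$ setfacl -m u:alice:r /mnt/share/report.ods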

The default behavior when exporting a filesystem via NFS is to use root squashing. This maps the user ID of anyone accessing the NFS share as the root user on their local machine to a value of the server's nfsnobody account. Root squashing should never be turned off. When exporting an NFS share as read-only, consider using the all_squash option, which makes every user accessing the exported filesystem take the user ID of the nfsnobody user.
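A read-only export using all_squash might be written along these lines in /etc/exports; the directory and network are illustrative assumptions:

/exports/public 192.168.1.0/24(ro,all_squash)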