dect / linux-2.6

Merge nfs containerization work from Trond's tree

The nfs containerization work is a prerequisite for Jeff Layton's reboot
recovery rework.
J. Bruce Fields 2012-03-21 16:42:14 -04:00
commit 1df00640c9
229 changed files with 6077 additions and 3826 deletions


@ -102,9 +102,12 @@ X!Iinclude/linux/kobject.h
!Iinclude/linux/device.h
</sect1>
<sect1><title>Device Drivers Base</title>
!Idrivers/base/init.c
!Edrivers/base/driver.c
!Edrivers/base/core.c
!Edrivers/base/syscore.c
!Edrivers/base/class.c
!Idrivers/base/node.c
!Edrivers/base/firmware_class.c
!Edrivers/base/transport_class.c
<!-- Cannot be included, because
@ -113,13 +116,18 @@ X!Iinclude/linux/kobject.h
exceed allowed 44 characters maximum
X!Edrivers/base/attribute_container.c
-->
!Edrivers/base/sys.c
!Edrivers/base/dd.c
<!--
X!Edrivers/base/interface.c
-->
!Iinclude/linux/platform_device.h
!Edrivers/base/platform.c
!Edrivers/base/bus.c
</sect1>
<sect1><title>Device Drivers DMA Management</title>
!Edrivers/base/dma-buf.c
!Edrivers/base/dma-coherent.c
!Edrivers/base/dma-mapping.c
</sect1>
<sect1><title>Device Drivers Power Management</title>
!Edrivers/base/power/main.c
@ -219,7 +227,7 @@ X!Isound/sound_firmware.c
<chapter id="uart16x50">
<title>16x50 UART Driver</title>
!Edrivers/tty/serial/serial_core.c
!Edrivers/tty/serial/8250.c
!Edrivers/tty/serial/8250/8250.c
</chapter>
<chapter id="fbdev">


@ -4,13 +4,21 @@ ID Mapper
=========
Id mapper is used by NFS to translate user and group ids into names, and to
translate user and group names into ids. Part of this translation involves
performing an upcall to userspace to request the information. Id mapper will
use request-key to perform this upcall and cache the result. The program
/usr/sbin/nfs.idmap should be called by request-key, and will perform the
translation and initialize a key with the resulting information.
performing an upcall to userspace to request the information. There are two
ways NFS could obtain this information: by calling /sbin/request-key
or by calling the rpc.idmap daemon.
NFS will attempt to call /sbin/request-key first. If this succeeds, the
result will be cached using the generic request-key cache. This call should
only fail if /etc/request-key.conf is not configured for the id_resolver key
type; see the "Configuring" section below if you wish to use the request-key
method.
If the call to /sbin/request-key fails (if /etc/request-key.conf is not
configured with the id_resolver key type), then the idmapper will ask the
legacy rpc.idmap daemon for the id mapping. This result will be stored
in a custom NFS idmap cache.
NFS_USE_NEW_IDMAPPER must be selected when configuring the kernel to use this
feature.
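
For illustration only, the request-key path described above can be exercised from
userspace with the keyutils library. In the hedged sketch below the "id_resolver"
key type matches this document, but the description format ("uid:<name>@<domain>"),
the example principal and the destination keyring are assumptions, not taken from
this file:

/* Hedged sketch: probe the id_resolver key type via keyutils.
 * Build with: gcc probe_idmap.c -lkeyutils
 */
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>

int main(void)
{
	void *payload = NULL;
	long len;
	/* The "uid:<name>@<domain>" description format is an assumption made
	 * for this example; see the "Configuring" section for the
	 * authoritative request-key.conf setup. */
	key_serial_t key = request_key("id_resolver", "uid:bob@example.com",
				       NULL, KEY_SPEC_USER_KEYRING);

	if (key < 0) {
		/* likely cause: no id_resolver line in /etc/request-key.conf */
		perror("request_key");
		return 1;
	}
	len = keyctl_read_alloc(key, &payload);
	if (len < 0) {
		perror("keyctl_read_alloc");
		return 1;
	}
	printf("resolved: %.*s\n", (int)len, (char *)payload);
	free(payload);
	return 0;
}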
===========
Configuring


@ -53,3 +53,57 @@ lseg maintains an extra reference corresponding to the NFS_LSEG_VALID
bit which holds it in the pnfs_layout_hdr's list. When the final lseg
is removed from the pnfs_layout_hdr's list, the NFS_LAYOUT_DESTROYED
bit is set, preventing any new lsegs from being added.
layout drivers
--------------
pNFS utilizes what are called layout drivers. The STD defines 3 basic
layout types: "files", "objects" and "blocks". For each of these types
there is a layout driver with a common function-vector table which
is called by the nfs-client pnfs-core to implement the different layout
types (a sketch of such a table follows below).
Files-layout-driver code is in: fs/nfs/nfs4filelayout.c && nfs4filelayoutdev.c
Objects-layout-driver code is in: fs/nfs/objlayout/.. directory
Blocks-layout-driver code is in: fs/nfs/blocklayout/.. directory
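
A hypothetical, abridged sketch of such a function-vector table is shown below;
the struct name and field signatures are illustrative stand-ins, the authoritative
definition is struct pnfs_layoutdriver_type in fs/nfs/pnfs.h:

/* Illustrative only: an abridged layout-driver operations table with assumed
 * signatures; real drivers register a struct pnfs_layoutdriver_type.
 */
struct example_layoutdriver_ops {
	u32		id;	/* files, objects or blocks layout type */
	const char	*name;

	/* allocate/free a layout segment from a LAYOUTGET reply */
	struct pnfs_layout_segment *(*alloc_lseg)(struct pnfs_layout_hdr *lo,
						  struct nfs4_layoutget_res *lgr,
						  gfp_t gfp_flags);
	void (*free_lseg)(struct pnfs_layout_segment *lseg);

	/* issue I/O through the layout instead of through the MDS */
	enum pnfs_try_status (*read_pagelist)(struct nfs_read_data *data);
	enum pnfs_try_status (*write_pagelist)(struct nfs_write_data *data,
					       int how);
};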
objects-layout setup
--------------------
As part of the full STD implementation the objlayoutdriver.ko needs, at times,
to automatically log in to yet undiscovered iscsi/osd devices. For this the
driver makes up-calls to a user-mode script called *osd_login*.
The path_name of the script to use is by default:
/sbin/osd_login.
This name can be overridden by the Kernel module parameter:
objlayoutdriver.osd_login_prog
If the Kernel does not find the osd_login_prog path it will zero it out
and will not attempt further logins. An admin can then write a new value
to the objlayoutdriver.osd_login_prog Kernel parameter to re-enable it.
The /sbin/osd_login script is part of the nfs-utils package, and should usually
be installed on distributions that support this Kernel version.
The API to the login script is as follows:
Usage: $0 -u <URI> -o <OSDNAME> -s <SYSTEMID>
Options:
-u target uri e.g. iscsi://<ip>:<port>
(always exists)
(More protocols can be defined in the future.
The client does not interpret this string; it is
passed unchanged as received from the Server)
-o osdname of the requested target OSD
(Might be empty)
(A string which denotes the OSD name; there is a
limit of 64 chars on this string)
-s systemid of the requested target OSD
(Might be empty)
(This string, if not empty, is always a hex
representation of the 20-byte osd_system_id)
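
For illustration, a hedged sketch of how a kernel driver can hand these arguments
to the login helper through call_usermodehelper; the function name, environment
and error handling below are assumptions and do not reproduce the actual
objlayoutdriver code:

/* Hedged sketch only: invoke the userspace login helper with the
 * documented -u/-o/-s arguments and wait for it to complete.
 */
#include <linux/kmod.h>

static int example_osd_login_upcall(const char *prog, const char *uri,
				    const char *osdname, const char *systemid)
{
	char *argv[] = {
		(char *)prog,
		"-u", (char *)uri,
		"-o", (char *)osdname,
		"-s", (char *)systemid,
		NULL
	};
	char *envp[] = {
		"HOME=/",
		"PATH=/sbin:/usr/sbin:/bin:/usr/bin",
		NULL
	};

	return call_usermodehelper((char *)prog, argv, envp, UMH_WAIT_PROC);
}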
blocks-layout setup
-------------------
TODO: Document the setup needs of the blocks layout driver


@ -1657,6 +1657,14 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
of returning the full 64-bit number.
The default is to return 64-bit inode numbers.
nfs.max_session_slots=
[NFSv4.1] Sets the maximum number of session slots
the client will attempt to negotiate with the server.
This limits the number of simultaneous RPC requests
that the client can send to the NFSv4.1 server.
Note that there is little point in setting this
value higher than the max_tcp_slot_table_limit.
nfs.nfs4_disable_idmapping=
[NFSv4] When set to the default of '1', this option
ensures that both the RPC level authentication
@ -1670,6 +1678,21 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
back to using the idmapper.
To turn off this behaviour, set the value to '0'.
nfs.send_implementation_id =
[NFSv4.1] Send client implementation identification
information in exchange_id requests.
If zero, no implementation identification information
will be sent.
The default is to send the implementation identification
information.
objlayoutdriver.osd_login_prog=
[NFS] [OBJLAYOUT] Sets the pathname of the program which
is used to automatically discover and log in to new
osd-targets. Please see:
Documentation/filesystems/pnfs.txt for more information
nmi_debug= [KNL,AVR32,SH] Specify one or more actions to take
when an NMI is triggered.
Format: [state][,regs][,debounce][,die]


@ -159,7 +159,7 @@ S: Maintained
F: drivers/net/ethernet/realtek/r8169.c
8250/16?50 (AND CLONE UARTS) SERIAL DRIVER
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org
W: http://serial.sourceforge.net
S: Maintained
@ -1783,9 +1783,9 @@ X: net/wireless/wext*
CHAR and MISC DRIVERS
M: Arnd Bergmann <arnd@arndb.de>
M: Greg Kroah-Hartman <greg@kroah.com>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
S: Maintained
S: Supported
F: drivers/char/*
F: drivers/misc/*
@ -2320,7 +2320,7 @@ F: lib/lru_cache.c
F: Documentation/blockdev/drbd/
DRIVER CORE, KOBJECTS, DEBUGFS AND SYSFS
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core-2.6.git
S: Supported
F: Documentation/kobject.txt
@ -6276,15 +6276,15 @@ S: Maintained
F: arch/alpha/kernel/srm_env.c
STABLE BRANCH
M: Greg Kroah-Hartman <greg@kroah.com>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: stable@vger.kernel.org
S: Maintained
S: Supported
STAGING SUBSYSTEM
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
L: devel@driverdev.osuosl.org
S: Maintained
S: Supported
F: drivers/staging/
STAGING - AGERE HERMES II and II.5 WIRELESS DRIVERS
@ -6669,8 +6669,8 @@ S: Maintained
K: ^Subject:.*(?i)trivial
TTY LAYER
M: Greg Kroah-Hartman <gregkh@suse.de>
S: Maintained
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty-2.6.git
F: drivers/tty/
F: drivers/tty/serial/serial_core.c
@ -6958,7 +6958,7 @@ S: Maintained
F: drivers/usb/serial/digi_acceleport.c
USB SERIAL DRIVER
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-usb@vger.kernel.org
S: Supported
F: Documentation/usb/usb-serial.txt
@ -6973,9 +6973,8 @@ S: Maintained
F: drivers/usb/serial/empeg.c
USB SERIAL KEYSPAN DRIVER
M: Greg Kroah-Hartman <greg@kroah.com>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-usb@vger.kernel.org
W: http://www.kroah.com/linux/
S: Maintained
F: drivers/usb/serial/*keyspan*
@ -7003,7 +7002,7 @@ F: Documentation/video4linux/sn9c102.txt
F: drivers/media/video/sn9c102/
USB SUBSYSTEM
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-usb@vger.kernel.org
W: http://www.linux-usb.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb-2.6.git
@ -7090,7 +7089,7 @@ F: fs/hppfs/
USERSPACE I/O (UIO)
M: "Hans J. Koch" <hjk@hansjkoch.de>
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
S: Maintained
F: Documentation/DocBook/uio-howto.tmpl
F: drivers/uio/


@ -26,7 +26,6 @@
#include <linux/cache.h>
#include <linux/of_platform.h>
#include <linux/dma-mapping.h>
#include <linux/cpu.h>
#include <asm/cacheflush.h>
#include <asm/entry.h>
#include <asm/cpuinfo.h>
@ -227,23 +226,5 @@ static int __init setup_bus_notifier(void)
return 0;
}
arch_initcall(setup_bus_notifier);
static DEFINE_PER_CPU(struct cpu, cpu_devices);
static int __init topology_init(void)
{
int i, ret;
for_each_present_cpu(i) {
struct cpu *c = &per_cpu(cpu_devices, i);
ret = register_cpu(c, i);
if (ret)
printk(KERN_WARNING "topology_init: register_cpu %d "
"failed (%d)\n", i, ret);
}
return 0;
}
subsys_initcall(topology_init);


@ -33,6 +33,7 @@ config SPARC
config SPARC32
def_bool !64BIT
select GENERIC_ATOMIC64
select CLZ_TAB
config SPARC64
def_bool 64BIT


@ -17,23 +17,9 @@ along with GNU CC; see the file COPYING. If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA. */
.data
.align 8
.globl __clz_tab
__clz_tab:
.byte 0,1,2,2,3,3,3,3,4,4,4,4,4,4,4,4,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5
.byte 6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6
.byte 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7
.byte 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7
.byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8
.byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8
.byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8
.byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8
.size __clz_tab,256
.global .udiv
.text
.align 4
.global .udiv
.globl __divdi3
__divdi3:
save %sp,-104,%sp


@ -145,13 +145,13 @@ extern void __add_wrong_size(void)
#ifdef __HAVE_ARCH_CMPXCHG
#define cmpxchg(ptr, old, new) \
__cmpxchg((ptr), (old), (new), sizeof(*ptr))
__cmpxchg(ptr, old, new, sizeof(*(ptr)))
#define sync_cmpxchg(ptr, old, new) \
__sync_cmpxchg((ptr), (old), (new), sizeof(*ptr))
__sync_cmpxchg(ptr, old, new, sizeof(*(ptr)))
#define cmpxchg_local(ptr, old, new) \
__cmpxchg_local((ptr), (old), (new), sizeof(*ptr))
__cmpxchg_local(ptr, old, new, sizeof(*(ptr)))
#endif
/*


@ -252,7 +252,8 @@ int __kprobes __die(const char *str, struct pt_regs *regs, long err)
unsigned short ss;
unsigned long sp;
#endif
printk(KERN_EMERG "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
printk(KERN_DEFAULT
"%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
#ifdef CONFIG_PREEMPT
printk("PREEMPT ");
#endif


@ -129,7 +129,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
if (!stack) {
if (regs)
stack = (unsigned long *)regs->sp;
else if (task && task != current)
else if (task != current)
stack = (unsigned long *)task->thread.sp;
else
stack = &dummy;
@ -269,11 +269,11 @@ void show_registers(struct pt_regs *regs)
unsigned char c;
u8 *ip;
printk(KERN_EMERG "Stack:\n");
printk(KERN_DEFAULT "Stack:\n");
show_stack_log_lvl(NULL, regs, (unsigned long *)sp,
0, KERN_EMERG);
0, KERN_DEFAULT);
printk(KERN_EMERG "Code: ");
printk(KERN_DEFAULT "Code: ");
ip = (u8 *)regs->ip - code_prologue;
if (ip < (u8 *)PAGE_OFFSET || probe_kernel_address(ip, c)) {


@ -39,6 +39,14 @@ static int reboot_mode;
enum reboot_type reboot_type = BOOT_ACPI;
int reboot_force;
/* This variable is used privately to keep track of whether or not
* reboot_type is still set to its default value (i.e., reboot= hasn't
* been set on the command line). This is needed so that we can
* suppress DMI scanning for reboot quirks. Without it, it's
* impossible to override a faulty reboot quirk without recompiling.
*/
static int reboot_default = 1;
#if defined(CONFIG_X86_32) && defined(CONFIG_SMP)
static int reboot_cpu = -1;
#endif
@ -67,6 +75,12 @@ bool port_cf9_safe = false;
static int __init reboot_setup(char *str)
{
for (;;) {
/* Having anything passed on the command line via
* reboot= will cause us to disable DMI checking
* below.
*/
reboot_default = 0;
switch (*str) {
case 'w':
reboot_mode = 0x1234;
@ -295,14 +309,6 @@ static struct dmi_system_id __initdata reboot_dmi_table[] = {
DMI_MATCH(DMI_BOARD_NAME, "P4S800"),
},
},
{ /* Handle problems with rebooting on VersaLogic Menlow boards */
.callback = set_bios_reboot,
.ident = "VersaLogic Menlow based board",
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "VersaLogic Corporation"),
DMI_MATCH(DMI_BOARD_NAME, "VersaLogic Menlow board"),
},
},
{ /* Handle reboot issue on Acer Aspire one */
.callback = set_kbd_reboot,
.ident = "Acer Aspire One A110",
@ -316,7 +322,12 @@ static struct dmi_system_id __initdata reboot_dmi_table[] = {
static int __init reboot_init(void)
{
dmi_check_system(reboot_dmi_table);
/* Only do the DMI check if reboot_type hasn't been overridden
* on the command line
*/
if (reboot_default) {
dmi_check_system(reboot_dmi_table);
}
return 0;
}
core_initcall(reboot_init);
@ -465,7 +476,12 @@ static struct dmi_system_id __initdata pci_reboot_dmi_table[] = {
static int __init pci_reboot_init(void)
{
dmi_check_system(pci_reboot_dmi_table);
/* Only do the DMI check if reboot_type hasn't been overridden
* on the command line
*/
if (reboot_default) {
dmi_check_system(pci_reboot_dmi_table);
}
return 0;
}
core_initcall(pci_reboot_init);


@ -673,7 +673,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
stackend = end_of_stack(tsk);
if (tsk != &init_task && *stackend != STACK_END_MAGIC)
printk(KERN_ALERT "Thread overran stack, or stack corrupted\n");
printk(KERN_EMERG "Thread overran stack, or stack corrupted\n");
tsk->thread.cr2 = address;
tsk->thread.trap_no = 14;
@ -684,7 +684,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
sig = 0;
/* Executive summary in case the body of the oops scrolled away */
printk(KERN_EMERG "CR2: %016lx\n", address);
printk(KERN_DEFAULT "CR2: %016lx\n", address);
oops_end(flags, regs, sig);
}


@ -380,6 +380,7 @@ static int rbd_get_client(struct rbd_device *rbd_dev, const char *mon_addr,
rbdc = __rbd_client_find(opt);
if (rbdc) {
ceph_destroy_options(opt);
kfree(rbd_opts);
/* using an existing client */
kref_get(&rbdc->kref);
@ -406,15 +407,15 @@ done_err:
/*
* Destroy ceph client
*
* Caller must hold node_lock.
*/
static void rbd_client_release(struct kref *kref)
{
struct rbd_client *rbdc = container_of(kref, struct rbd_client, kref);
dout("rbd_release_client %p\n", rbdc);
spin_lock(&node_lock);
list_del(&rbdc->node);
spin_unlock(&node_lock);
ceph_destroy_client(rbdc->client);
kfree(rbdc->rbd_opts);
@ -427,7 +428,9 @@ static void rbd_client_release(struct kref *kref)
*/
static void rbd_put_client(struct rbd_device *rbd_dev)
{
spin_lock(&node_lock);
kref_put(&rbd_dev->rbd_client->kref, rbd_client_release);
spin_unlock(&node_lock);
rbd_dev->rbd_client = NULL;
rbd_dev->client = NULL;
}


@ -263,6 +263,7 @@ static inline struct fw_ohci *fw_ohci(struct fw_card *card)
static char ohci_driver_name[] = KBUILD_MODNAME;
#define PCI_DEVICE_ID_AGERE_FW643 0x5901
#define PCI_DEVICE_ID_CREATIVE_SB1394 0x4001
#define PCI_DEVICE_ID_JMICRON_JMB38X_FW 0x2380
#define PCI_DEVICE_ID_TI_TSB12LV22 0x8009
#define PCI_DEVICE_ID_TI_TSB12LV26 0x8020
@ -289,6 +290,9 @@ static const struct {
{PCI_VENDOR_ID_ATT, PCI_DEVICE_ID_AGERE_FW643, 6,
QUIRK_NO_MSI},
{PCI_VENDOR_ID_CREATIVE, PCI_DEVICE_ID_CREATIVE_SB1394, PCI_ANY_ID,
QUIRK_RESET_PACKET},
{PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB38X_FW, PCI_ANY_ID,
QUIRK_NO_MSI},
@ -299,7 +303,7 @@ static const struct {
QUIRK_NO_MSI},
{PCI_VENDOR_ID_RICOH, PCI_ANY_ID, PCI_ANY_ID,
QUIRK_CYCLE_TIMER},
QUIRK_CYCLE_TIMER | QUIRK_NO_MSI},
{PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_TSB12LV22, PCI_ANY_ID,
QUIRK_CYCLE_TIMER | QUIRK_RESET_PACKET | QUIRK_NO_1394A},


@ -54,9 +54,10 @@ struct bit_entry {
int bit_table(struct drm_device *, u8 id, struct bit_entry *);
enum dcb_gpio_tag {
DCB_GPIO_TVDAC0 = 0xc,
DCB_GPIO_PANEL_POWER = 0x01,
DCB_GPIO_TVDAC0 = 0x0c,
DCB_GPIO_TVDAC1 = 0x2d,
DCB_GPIO_PWM_FAN = 0x9,
DCB_GPIO_PWM_FAN = 0x09,
DCB_GPIO_FAN_SENSE = 0x3d,
DCB_GPIO_UNUSED = 0xff
};


@ -219,6 +219,16 @@ nouveau_display_init(struct drm_device *dev)
if (ret)
return ret;
/* power on internal panel if it's not already. the init tables of
* some vbios default this to off for some reason, causing the
* panel to not work after resume
*/
if (nouveau_gpio_func_get(dev, DCB_GPIO_PANEL_POWER) == 0) {
nouveau_gpio_func_set(dev, DCB_GPIO_PANEL_POWER, true);
msleep(300);
}
/* enable polling for external displays */
drm_kms_helper_poll_enable(dev);
/* enable hotplug interrupts */


@ -124,7 +124,7 @@ MODULE_PARM_DESC(ctxfw, "Use external HUB/GPC ucode (fermi)\n");
int nouveau_ctxfw;
module_param_named(ctxfw, nouveau_ctxfw, int, 0400);
MODULE_PARM_DESC(ctxfw, "Santise DCB table according to MXM-SIS\n");
MODULE_PARM_DESC(mxmdcb, "Santise DCB table according to MXM-SIS\n");
int nouveau_mxmdcb = 1;
module_param_named(mxmdcb, nouveau_mxmdcb, int, 0400);


@ -379,6 +379,25 @@ retry:
return 0;
}
static int
validate_sync(struct nouveau_channel *chan, struct nouveau_bo *nvbo)
{
struct nouveau_fence *fence = NULL;
int ret = 0;
spin_lock(&nvbo->bo.bdev->fence_lock);
if (nvbo->bo.sync_obj)
fence = nouveau_fence_ref(nvbo->bo.sync_obj);
spin_unlock(&nvbo->bo.bdev->fence_lock);
if (fence) {
ret = nouveau_fence_sync(fence, chan);
nouveau_fence_unref(&fence);
}
return ret;
}
static int
validate_list(struct nouveau_channel *chan, struct list_head *list,
struct drm_nouveau_gem_pushbuf_bo *pbbo, uint64_t user_pbbo_ptr)
@ -393,7 +412,7 @@ validate_list(struct nouveau_channel *chan, struct list_head *list,
list_for_each_entry(nvbo, list, entry) {
struct drm_nouveau_gem_pushbuf_bo *b = &pbbo[nvbo->pbbo_index];
ret = nouveau_fence_sync(nvbo->bo.sync_obj, chan);
ret = validate_sync(chan, nvbo);
if (unlikely(ret)) {
NV_ERROR(dev, "fail pre-validate sync\n");
return ret;
@ -416,7 +435,7 @@ validate_list(struct nouveau_channel *chan, struct list_head *list,
return ret;
}
ret = nouveau_fence_sync(nvbo->bo.sync_obj, chan);
ret = validate_sync(chan, nvbo);
if (unlikely(ret)) {
NV_ERROR(dev, "fail post-validate sync\n");
return ret;


@ -656,7 +656,16 @@ nouveau_mxm_init(struct drm_device *dev)
if (mxm_shadow(dev, mxm[0])) {
MXM_MSG(dev, "failed to locate valid SIS\n");
#if 0
/* we should, perhaps, fall back to some kind of limited
* mode here if the x86 vbios hasn't already done the
* work for us (so we prevent loading with completely
* whacked vbios tables).
*/
return -EINVAL;
#else
return 0;
#endif
}
MXM_MSG(dev, "MXMS Version %d.%d\n",


@ -495,9 +495,9 @@ nv50_pm_clocks_pre(struct drm_device *dev, struct nouveau_pm_level *perflvl)
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nv50_pm_state *info;
struct pll_lims pll;
int ret = -EINVAL;
int clk, ret = -EINVAL;
int N, M, P1, P2;
u32 clk, out;
u32 out;
if (dev_priv->chipset == 0xaa ||
dev_priv->chipset == 0xac)


@ -1184,7 +1184,7 @@ static int dce4_crtc_do_set_base(struct drm_crtc *crtc,
WREG32(EVERGREEN_GRPH_ENABLE + radeon_crtc->crtc_offset, 1);
WREG32(EVERGREEN_DESKTOP_HEIGHT + radeon_crtc->crtc_offset,
crtc->mode.vdisplay);
target_fb->height);
x &= ~3;
y &= ~1;
WREG32(EVERGREEN_VIEWPORT_START + radeon_crtc->crtc_offset,
@ -1353,7 +1353,7 @@ static int avivo_crtc_do_set_base(struct drm_crtc *crtc,
WREG32(AVIVO_D1GRPH_ENABLE + radeon_crtc->crtc_offset, 1);
WREG32(AVIVO_D1MODE_DESKTOP_HEIGHT + radeon_crtc->crtc_offset,
crtc->mode.vdisplay);
target_fb->height);
x &= ~3;
y &= ~1;
WREG32(AVIVO_D1MODE_VIEWPORT_START + radeon_crtc->crtc_offset,


@ -564,9 +564,21 @@ int radeon_dp_get_panel_mode(struct drm_encoder *encoder,
ENCODER_OBJECT_ID_NUTMEG)
panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE;
else if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) ==
ENCODER_OBJECT_ID_TRAVIS)
panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;
else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
ENCODER_OBJECT_ID_TRAVIS) {
u8 id[6];
int i;
for (i = 0; i < 6; i++)
id[i] = radeon_read_dpcd_reg(radeon_connector, 0x503 + i);
if (id[0] == 0x73 &&
id[1] == 0x69 &&
id[2] == 0x76 &&
id[3] == 0x61 &&
id[4] == 0x72 &&
id[5] == 0x54)
panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE;
else
panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;
} else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
u8 tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP);
if (tmp & 1)
panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;


@ -468,27 +468,42 @@ set_default_state(struct radeon_device *rdev)
radeon_ring_write(ring, sq_stack_resource_mgmt_2);
}
#define I2F_MAX_BITS 15
#define I2F_MAX_INPUT ((1 << I2F_MAX_BITS) - 1)
#define I2F_SHIFT (24 - I2F_MAX_BITS)
/*
* Converts unsigned integer into 32-bit IEEE floating point representation.
* Conversion is not universal and only works for the range from 0
* to 2^I2F_MAX_BITS-1. Currently we only use it with inputs between
* 0 and 16384 (inclusive), so I2F_MAX_BITS=15 is enough. If necessary,
* I2F_MAX_BITS can be increased, but that will add to the loop iterations
* and slow us down. Conversion is done by shifting the input and counting
* down until the first 1 reaches bit position 23. The resulting counter
* and the shifted input are, respectively, the exponent and the fraction.
* The sign is always zero.
*/
static uint32_t i2f(uint32_t input)
{
u32 result, i, exponent, fraction;
if ((input & 0x3fff) == 0)
result = 0; /* 0 is a special case */
WARN_ON_ONCE(input > I2F_MAX_INPUT);
if ((input & I2F_MAX_INPUT) == 0)
result = 0;
else {
exponent = 140; /* exponent biased by 127; */
fraction = (input & 0x3fff) << 10; /* cheat and only
handle numbers below 2^^15 */
for (i = 0; i < 14; i++) {
exponent = 126 + I2F_MAX_BITS;
fraction = (input & I2F_MAX_INPUT) << I2F_SHIFT;
for (i = 0; i < I2F_MAX_BITS; i++) {
if (fraction & 0x800000)
break;
else {
fraction = fraction << 1; /* keep
shifting left until top bit = 1 */
fraction = fraction << 1;
exponent = exponent - 1;
}
}
result = exponent << 23 | (fraction & 0x7fffff); /* mask
off top bit; assumed 1 */
result = exponent << 23 | (fraction & 0x7fffff);
}
return result;
}
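
As a quick sanity check of the conversion scheme described in the comment above
(illustrative only, not part of this patch), the same shift-and-count logic
reproduced as a standalone userspace program yields the expected IEEE-754
encodings:

/* Worked example of the i2f() scheme: standalone userspace check. */
#include <assert.h>
#include <stdint.h>

static uint32_t i2f_example(uint32_t input)	/* valid for 0..32767 */
{
	uint32_t exponent, fraction;

	if (input == 0)
		return 0;
	exponent = 126 + 15;			/* 126 + I2F_MAX_BITS */
	fraction = input << 9;			/* input << I2F_SHIFT  */
	while (!(fraction & 0x800000)) {	/* shift until bit 23 is set */
		fraction <<= 1;
		exponent--;
	}
	return exponent << 23 | (fraction & 0x7fffff);
}

int main(void)
{
	assert(i2f_example(1) == 0x3f800000);	/* 1.0f */
	assert(i2f_example(2) == 0x40000000);	/* 2.0f */
	assert(i2f_example(3) == 0x40400000);	/* 3.0f */
	return 0;
}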


@ -59,8 +59,9 @@ static int radeon_atrm_call(acpi_handle atrm_handle, uint8_t *bios,
obj = (union acpi_object *)buffer.pointer;
memcpy(bios+offset, obj->buffer.pointer, obj->buffer.length);
len = obj->buffer.length;
kfree(buffer.pointer);
return obj->buffer.length;
return len;
}
bool radeon_atrm_supported(struct pci_dev *pdev)


@ -883,6 +883,8 @@ int radeon_suspend_kms(struct drm_device *dev, pm_message_t state)
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
drm_kms_helper_poll_disable(dev);
/* turn off display hw */
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
@ -972,6 +974,8 @@ int radeon_resume_kms(struct drm_device *dev)
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
}
drm_kms_helper_poll_enable(dev);
return 0;
}


@ -958,6 +958,7 @@ struct radeon_i2c_chan *radeon_i2c_create_dp(struct drm_device *dev,
i2c->rec = *rec;
i2c->adapter.owner = THIS_MODULE;
i2c->adapter.class = I2C_CLASS_DDC;
i2c->adapter.dev.parent = &dev->pdev->dev;
i2c->dev = dev;
snprintf(i2c->adapter.name, sizeof(i2c->adapter.name),
"Radeon aux bus %s", name);


@ -808,9 +808,12 @@ static ssize_t ucma_accept(struct ucma_file *file, const char __user *inbuf,
return PTR_ERR(ctx);
if (cmd.conn_param.valid) {
ctx->uid = cmd.uid;
ucma_copy_conn_param(&conn_param, &cmd.conn_param);
mutex_lock(&file->mut);
ret = rdma_accept(ctx->cm_id, &conn_param);
if (!ret)
ctx->uid = cmd.uid;
mutex_unlock(&file->mut);
} else
ret = rdma_accept(ctx->cm_id, NULL);


@ -1485,6 +1485,7 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file,
qp->event_handler = attr.event_handler;
qp->qp_context = attr.qp_context;
qp->qp_type = attr.qp_type;
atomic_set(&qp->usecnt, 0);
atomic_inc(&pd->usecnt);
atomic_inc(&attr.send_cq->usecnt);
if (attr.recv_cq)


@ -421,6 +421,7 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
qp->uobject = NULL;
qp->qp_type = qp_init_attr->qp_type;
atomic_set(&qp->usecnt, 0);
if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) {
qp->event_handler = __ib_shared_qp_event_handler;
qp->qp_context = qp;
@ -430,7 +431,6 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
qp->xrcd = qp_init_attr->xrcd;
atomic_inc(&qp_init_attr->xrcd->usecnt);
INIT_LIST_HEAD(&qp->open_list);
atomic_set(&qp->usecnt, 0);
real_qp = qp;
qp = __ib_open_qp(real_qp, qp_init_attr->event_handler,


@ -89,7 +89,7 @@ static int create_file(const char *name, umode_t mode,
error = ipathfs_mknod(parent->d_inode, *dentry,
mode, fops, data);
else
error = PTR_ERR(dentry);
error = PTR_ERR(*dentry);
mutex_unlock(&parent->d_inode->i_mutex);
return error;


@ -257,12 +257,9 @@ static int ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
return IB_MAD_RESULT_SUCCESS;
/*
* Don't process SMInfo queries or vendor-specific
* MADs -- the SMA can't handle them.
* Don't process SMInfo queries -- the SMA can't handle them.
*/
if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO ||
((in_mad->mad_hdr.attr_id & IB_SMP_ATTR_VENDOR_MASK) ==
IB_SMP_ATTR_VENDOR_MASK))
if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO)
return IB_MAD_RESULT_SUCCESS;
} else if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT ||
in_mad->mad_hdr.mgmt_class == MLX4_IB_VENDOR_CLASS1 ||


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
* Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
* Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -233,6 +233,7 @@ static int send_mpa_reject(struct nes_cm_node *cm_node)
u8 *start_ptr = &start_addr;
u8 **start_buff = &start_ptr;
u16 buff_len = 0;
struct ietf_mpa_v1 *mpa_frame;
skb = dev_alloc_skb(MAX_CM_BUFFER);
if (!skb) {
@ -242,6 +243,8 @@ static int send_mpa_reject(struct nes_cm_node *cm_node)
/* send an MPA reject frame */
cm_build_mpa_frame(cm_node, start_buff, &buff_len, NULL, MPA_KEY_REPLY);
mpa_frame = (struct ietf_mpa_v1 *)*start_buff;
mpa_frame->flags |= IETF_MPA_FLAGS_REJECT;
form_cm_frame(skb, cm_node, NULL, 0, *start_buff, buff_len, SET_ACK | SET_FIN);
cm_node->state = NES_CM_STATE_FIN_WAIT1;
@ -1360,8 +1363,7 @@ static int nes_addr_resolve_neigh(struct nes_vnic *nesvnic, u32 dst_ip, int arpi
if (!memcmp(nesadapter->arp_table[arpindex].mac_addr,
neigh->ha, ETH_ALEN)) {
/* Mac address same as in nes_arp_table */
ip_rt_put(rt);
return rc;
goto out;
}
nes_manage_arp_cache(nesvnic->netdev,
@ -1377,6 +1379,8 @@ static int nes_addr_resolve_neigh(struct nes_vnic *nesvnic, u32 dst_ip, int arpi
neigh_event_send(neigh, NULL);
}
}
out:
rcu_read_unlock();
ip_rt_put(rt);
return rc;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel-NE, Inc. All rights reserved.
* Copyright (c) 2006 - 2011 Intel-NE, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2010 Intel-NE, Inc. All rights reserved.
* Copyright (c) 2006 - 2011 Intel-NE, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -3427,6 +3427,8 @@ static int nes_post_send(struct ib_qp *ibqp, struct ib_send_wr *ib_wr,
set_wqe_32bit_value(wqe->wqe_words,
NES_IWARP_SQ_FMR_WQE_LENGTH_LOW_IDX,
ib_wr->wr.fast_reg.length);
set_wqe_32bit_value(wqe->wqe_words,
NES_IWARP_SQ_FMR_WQE_LENGTH_HIGH_IDX, 0);
set_wqe_32bit_value(wqe->wqe_words,
NES_IWARP_SQ_FMR_WQE_MR_STAG_IDX,
ib_wr->wr.fast_reg.rkey);
@ -3724,7 +3726,7 @@ static int nes_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *entry)
entry->opcode = IB_WC_SEND;
break;
case NES_IWARP_SQ_OP_LOCINV:
entry->opcode = IB_WR_LOCAL_INV;
entry->opcode = IB_WC_LOCAL_INV;
break;
case NES_IWARP_SQ_OP_FAST_REG:
entry->opcode = IB_WC_FAST_REG_MR;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
* Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two


@ -2105,7 +2105,7 @@ static void alloc_dummy_hdrq(struct qib_devdata *dd)
dd->cspec->dummy_hdrq = dma_alloc_coherent(&dd->pcidev->dev,
dd->rcd[0]->rcvhdrq_size,
&dd->cspec->dummy_hdrq_phys,
GFP_KERNEL | __GFP_COMP);
GFP_ATOMIC | __GFP_COMP);
if (!dd->cspec->dummy_hdrq) {
qib_devinfo(dd->pcidev, "Couldn't allocate dummy hdrq\n");
/* fallback to just 0'ing */


@ -560,7 +560,7 @@ static int qib_tune_pcie_coalesce(struct qib_devdata *dd)
* BIOS may not set PCIe bus-utilization parameters for best performance.
* Check and optionally adjust them to maximize our throughput.
*/
static int qib_pcie_caps = 0x51;
static int qib_pcie_caps;
module_param_named(pcie_caps, qib_pcie_caps, int, S_IRUGO);
MODULE_PARM_DESC(pcie_caps, "Max PCIe tuning: Payload (0..3), ReadReq (4..7)");


@ -1279,3 +1279,4 @@ static struct usb_driver go7007_usb_driver = {
};
module_usb_driver(go7007_usb_driver);
MODULE_LICENSE("GPL v2");


@ -641,10 +641,10 @@ static int __cap_is_valid(struct ceph_cap *cap)
unsigned long ttl;
u32 gen;
spin_lock(&cap->session->s_cap_lock);
spin_lock(&cap->session->s_gen_ttl_lock);
gen = cap->session->s_cap_gen;
ttl = cap->session->s_cap_ttl;
spin_unlock(&cap->session->s_cap_lock);
spin_unlock(&cap->session->s_gen_ttl_lock);
if (cap->cap_gen < gen || time_after_eq(jiffies, ttl)) {
dout("__cap_is_valid %p cap %p issued %s "


@ -975,10 +975,10 @@ static int dentry_lease_is_valid(struct dentry *dentry)
di = ceph_dentry(dentry);
if (di->lease_session) {
s = di->lease_session;
spin_lock(&s->s_cap_lock);
spin_lock(&s->s_gen_ttl_lock);
gen = s->s_cap_gen;
ttl = s->s_cap_ttl;
spin_unlock(&s->s_cap_lock);
spin_unlock(&s->s_gen_ttl_lock);
if (di->lease_gen == gen &&
time_before(jiffies, dentry->d_time) &&


@ -262,6 +262,7 @@ static int parse_reply_info(struct ceph_msg *msg,
/* trace */
ceph_decode_32_safe(&p, end, len, bad);
if (len > 0) {
ceph_decode_need(&p, end, len, bad);
err = parse_reply_info_trace(&p, p+len, info, features);
if (err < 0)
goto out_bad;
@ -270,6 +271,7 @@ static int parse_reply_info(struct ceph_msg *msg,
/* extra */
ceph_decode_32_safe(&p, end, len, bad);
if (len > 0) {
ceph_decode_need(&p, end, len, bad);
err = parse_reply_info_extra(&p, p+len, info, features);
if (err < 0)
goto out_bad;
@ -398,9 +400,11 @@ static struct ceph_mds_session *register_session(struct ceph_mds_client *mdsc,
s->s_con.peer_name.type = CEPH_ENTITY_TYPE_MDS;
s->s_con.peer_name.num = cpu_to_le64(mds);
spin_lock_init(&s->s_cap_lock);
spin_lock_init(&s->s_gen_ttl_lock);
s->s_cap_gen = 0;
s->s_cap_ttl = 0;
spin_lock_init(&s->s_cap_lock);
s->s_renew_requested = 0;
s->s_renew_seq = 0;
INIT_LIST_HEAD(&s->s_caps);
@ -2326,10 +2330,10 @@ static void handle_session(struct ceph_mds_session *session,
case CEPH_SESSION_STALE:
pr_info("mds%d caps went stale, renewing\n",
session->s_mds);
spin_lock(&session->s_cap_lock);
spin_lock(&session->s_gen_ttl_lock);
session->s_cap_gen++;
session->s_cap_ttl = 0;
spin_unlock(&session->s_cap_lock);
spin_unlock(&session->s_gen_ttl_lock);
send_renew_caps(mdsc, session);
break;


@ -117,10 +117,13 @@ struct ceph_mds_session {
void *s_authorizer_buf, *s_authorizer_reply_buf;
size_t s_authorizer_buf_len, s_authorizer_reply_buf_len;
/* protected by s_cap_lock */
spinlock_t s_cap_lock;
/* protected by s_gen_ttl_lock */
spinlock_t s_gen_ttl_lock;
u32 s_cap_gen; /* inc each time we get mds stale msg */
unsigned long s_cap_ttl; /* when session caps expire */
/* protected by s_cap_lock */
spinlock_t s_cap_lock;
struct list_head s_caps; /* all caps issued by this session */
int s_nr_caps, s_trim_caps;
int s_num_cap_releases;


@ -111,8 +111,10 @@ static size_t ceph_vxattrcb_layout(struct ceph_inode_info *ci, char *val,
}
static struct ceph_vxattr_cb ceph_file_vxattrs[] = {
{ true, "ceph.file.layout", ceph_vxattrcb_layout},
/* The following extended attribute name is deprecated */
{ true, "ceph.layout", ceph_vxattrcb_layout},
{ NULL, NULL }
{ true, NULL, NULL }
};
static struct ceph_vxattr_cb *ceph_inode_vxattrs(struct inode *inode)


@ -598,7 +598,7 @@ static struct rpc_procinfo nlm4_procedures[] = {
PROC(GRANTED_RES, res, norep),
};
struct rpc_version nlm_version4 = {
const struct rpc_version nlm_version4 = {
.number = 4,
.nrprocs = ARRAY_SIZE(nlm4_procedures),
.procs = nlm4_procedures,


@ -62,7 +62,8 @@ struct nlm_host *nlmclnt_init(const struct nlmclnt_initdata *nlm_init)
host = nlmclnt_lookup_host(nlm_init->address, nlm_init->addrlen,
nlm_init->protocol, nlm_version,
nlm_init->hostname, nlm_init->noresvport);
nlm_init->hostname, nlm_init->noresvport,
nlm_init->net);
if (host == NULL) {
lockd_down();
return ERR_PTR(-ENOLCK);


@ -596,19 +596,19 @@ static struct rpc_procinfo nlm_procedures[] = {
PROC(GRANTED_RES, res, norep),
};
static struct rpc_version nlm_version1 = {
static const struct rpc_version nlm_version1 = {
.number = 1,
.nrprocs = ARRAY_SIZE(nlm_procedures),
.procs = nlm_procedures,
};
static struct rpc_version nlm_version3 = {
static const struct rpc_version nlm_version3 = {
.number = 3,
.nrprocs = ARRAY_SIZE(nlm_procedures),
.procs = nlm_procedures,
};
static struct rpc_version *nlm_versions[] = {
static const struct rpc_version *nlm_versions[] = {
[1] = &nlm_version1,
[3] = &nlm_version3,
#ifdef CONFIG_LOCKD_V4
@ -618,7 +618,7 @@ static struct rpc_version *nlm_versions[] = {
static struct rpc_stat nlm_rpc_stats;
struct rpc_program nlm_program = {
const struct rpc_program nlm_program = {
.name = "lockd",
.number = NLM_PROGRAM,
.nrvers = ARRAY_SIZE(nlm_versions),


@ -17,6 +17,8 @@
#include <linux/lockd/lockd.h>
#include <linux/mutex.h>
#include <linux/sunrpc/svc_xprt.h>
#include <net/ipv6.h>
#define NLMDBG_FACILITY NLMDBG_HOSTCACHE
@ -54,6 +56,7 @@ struct nlm_lookup_host_info {
const char *hostname; /* remote's hostname */
const size_t hostname_len; /* it's length */
const int noresvport; /* use non-priv port */
struct net *net; /* network namespace to bind */
};
/*
@ -155,6 +158,7 @@ static struct nlm_host *nlm_alloc_host(struct nlm_lookup_host_info *ni,
INIT_LIST_HEAD(&host->h_reclaim);
host->h_nsmhandle = nsm;
host->h_addrbuf = nsm->sm_addrbuf;
host->net = ni->net;
out:
return host;
@ -206,7 +210,8 @@ struct nlm_host *nlmclnt_lookup_host(const struct sockaddr *sap,
const unsigned short protocol,
const u32 version,
const char *hostname,
int noresvport)
int noresvport,
struct net *net)
{
struct nlm_lookup_host_info ni = {
.server = 0,
@ -217,6 +222,7 @@ struct nlm_host *nlmclnt_lookup_host(const struct sockaddr *sap,
.hostname = hostname,
.hostname_len = strlen(hostname),
.noresvport = noresvport,
.net = net,
};
struct hlist_head *chain;
struct hlist_node *pos;
@ -231,6 +237,8 @@ struct nlm_host *nlmclnt_lookup_host(const struct sockaddr *sap,
chain = &nlm_client_hosts[nlm_hash_address(sap)];
hlist_for_each_entry(host, pos, chain, h_hash) {
if (host->net != net)
continue;
if (!rpc_cmp_addr(nlm_addr(host), sap))
continue;
@ -318,6 +326,7 @@ struct nlm_host *nlmsvc_lookup_host(const struct svc_rqst *rqstp,
struct nsm_handle *nsm = NULL;
struct sockaddr *src_sap = svc_daddr(rqstp);
size_t src_len = rqstp->rq_daddrlen;
struct net *net = rqstp->rq_xprt->xpt_net;
struct nlm_lookup_host_info ni = {
.server = 1,
.sap = svc_addr(rqstp),
@ -326,6 +335,7 @@ struct nlm_host *nlmsvc_lookup_host(const struct svc_rqst *rqstp,
.version = rqstp->rq_vers,
.hostname = hostname,
.hostname_len = hostname_len,
.net = net,
};
dprintk("lockd: %s(host='%*s', vers=%u, proto=%s)\n", __func__,
@ -339,6 +349,8 @@ struct nlm_host *nlmsvc_lookup_host(const struct svc_rqst *rqstp,
chain = &nlm_server_hosts[nlm_hash_address(ni.sap)];
hlist_for_each_entry(host, pos, chain, h_hash) {
if (host->net != net)
continue;
if (!rpc_cmp_addr(nlm_addr(host), ni.sap))
continue;
@ -431,7 +443,7 @@ nlm_bind_host(struct nlm_host *host)
.to_retries = 5U,
};
struct rpc_create_args args = {
.net = &init_net,
.net = host->net,
.protocol = host->h_proto,
.address = nlm_addr(host),
.addrsize = host->h_addrlen,
@ -553,12 +565,8 @@ void nlm_host_rebooted(const struct nlm_reboot *info)
nsm_release(nsm);
}
/*
* Shut down the hosts module.
* Note that this routine is called only at server shutdown time.
*/
void
nlm_shutdown_hosts(void)
nlm_shutdown_hosts_net(struct net *net)
{
struct hlist_head *chain;
struct hlist_node *pos;
@ -570,6 +578,8 @@ nlm_shutdown_hosts(void)
/* First, make all hosts eligible for gc */
dprintk("lockd: nuking all hosts...\n");
for_each_host(host, pos, chain, nlm_server_hosts) {
if (net && host->net != net)
continue;
host->h_expires = jiffies - 1;
if (host->h_rpcclnt) {
rpc_shutdown_client(host->h_rpcclnt);
@ -580,15 +590,29 @@ nlm_shutdown_hosts(void)
/* Then, perform a garbage collection pass */
nlm_gc_hosts();
mutex_unlock(&nlm_host_mutex);
}
/*
* Shut down the hosts module.
* Note that this routine is called only at server shutdown time.
*/
void
nlm_shutdown_hosts(void)
{
struct hlist_head *chain;
struct hlist_node *pos;
struct nlm_host *host;
nlm_shutdown_hosts_net(NULL);
/* complain if any hosts are left */
if (nrhosts != 0) {
printk(KERN_WARNING "lockd: couldn't shutdown host module!\n");
dprintk("lockd: %lu hosts left:\n", nrhosts);
for_each_host(host, pos, chain, nlm_server_hosts) {
dprintk(" %s (cnt %d use %d exp %ld)\n",
dprintk(" %s (cnt %d use %d exp %ld net %p)\n",
host->h_name, atomic_read(&host->h_count),
host->h_inuse, host->h_expires);
host->h_inuse, host->h_expires, host->net);
}
}
}


@ -47,7 +47,7 @@ struct nsm_res {
u32 state;
};
static struct rpc_program nsm_program;
static const struct rpc_program nsm_program;
static LIST_HEAD(nsm_handles);
static DEFINE_SPINLOCK(nsm_lock);
@ -62,14 +62,14 @@ static inline struct sockaddr *nsm_addr(const struct nsm_handle *nsm)
return (struct sockaddr *)&nsm->sm_addr;
}
static struct rpc_clnt *nsm_create(void)
static struct rpc_clnt *nsm_create(struct net *net)
{
struct sockaddr_in sin = {
.sin_family = AF_INET,
.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
};
struct rpc_create_args args = {
.net = &init_net,
.net = net,
.protocol = XPRT_TRANSPORT_UDP,
.address = (struct sockaddr *)&sin,
.addrsize = sizeof(sin),
@ -83,7 +83,8 @@ static struct rpc_clnt *nsm_create(void)
return rpc_create(&args);
}
static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res)
static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res,
struct net *net)
{
struct rpc_clnt *clnt;
int status;
@ -99,7 +100,7 @@ static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res)
.rpc_resp = res,
};
clnt = nsm_create();
clnt = nsm_create(net);
if (IS_ERR(clnt)) {
status = PTR_ERR(clnt);
dprintk("lockd: failed to create NSM upcall transport, "
@ -149,7 +150,7 @@ int nsm_monitor(const struct nlm_host *host)
*/
nsm->sm_mon_name = nsm_use_hostnames ? nsm->sm_name : nsm->sm_addrbuf;
status = nsm_mon_unmon(nsm, NSMPROC_MON, &res);
status = nsm_mon_unmon(nsm, NSMPROC_MON, &res, host->net);
if (unlikely(res.status != 0))
status = -EIO;
if (unlikely(status < 0)) {
@ -183,7 +184,7 @@ void nsm_unmonitor(const struct nlm_host *host)
&& nsm->sm_monitored && !nsm->sm_sticky) {
dprintk("lockd: nsm_unmonitor(%s)\n", nsm->sm_name);
status = nsm_mon_unmon(nsm, NSMPROC_UNMON, &res);
status = nsm_mon_unmon(nsm, NSMPROC_UNMON, &res, host->net);
if (res.status != 0)
status = -EIO;
if (status < 0)
@ -534,19 +535,19 @@ static struct rpc_procinfo nsm_procedures[] = {
},
};
static struct rpc_version nsm_version1 = {
static const struct rpc_version nsm_version1 = {
.number = 1,
.nrprocs = ARRAY_SIZE(nsm_procedures),
.procs = nsm_procedures
};
static struct rpc_version * nsm_version[] = {
static const struct rpc_version *nsm_version[] = {
[1] = &nsm_version1,
};
static struct rpc_stat nsm_stats;
static struct rpc_program nsm_program = {
static const struct rpc_program nsm_program = {
.name = "statd",
.number = NSM_PROGRAM,
.nrvers = ARRAY_SIZE(nsm_version),

fs/lockd/netns.h (new file, 12 lines)

@ -0,0 +1,12 @@
#ifndef __LOCKD_NETNS_H__
#define __LOCKD_NETNS_H__
#include <net/netns/generic.h>
struct lockd_net {
unsigned int nlmsvc_users;
};
extern int lockd_net_id;
#endif


@ -35,6 +35,8 @@
#include <linux/lockd/lockd.h>
#include <linux/nfs.h>
#include "netns.h"
#define NLMDBG_FACILITY NLMDBG_SVC
#define LOCKD_BUFSIZE (1024 + NLMSVC_XDRSIZE)
#define ALLOWED_SIGS (sigmask(SIGKILL))
@ -50,6 +52,8 @@ static struct task_struct *nlmsvc_task;
static struct svc_rqst *nlmsvc_rqst;
unsigned long nlmsvc_timeout;
int lockd_net_id;
/*
* These can be set at insmod time (useful for NFS as root filesystem),
* and also changed through the sysctl interface. -- Jamie Lokier, Aug 2003
@ -189,27 +193,29 @@ lockd(void *vrqstp)
}
static int create_lockd_listener(struct svc_serv *serv, const char *name,
const int family, const unsigned short port)
struct net *net, const int family,
const unsigned short port)
{
struct svc_xprt *xprt;
xprt = svc_find_xprt(serv, name, family, 0);
xprt = svc_find_xprt(serv, name, net, family, 0);
if (xprt == NULL)
return svc_create_xprt(serv, name, &init_net, family, port,
return svc_create_xprt(serv, name, net, family, port,
SVC_SOCK_DEFAULTS);
svc_xprt_put(xprt);
return 0;
}
static int create_lockd_family(struct svc_serv *serv, const int family)
static int create_lockd_family(struct svc_serv *serv, struct net *net,
const int family)
{
int err;
err = create_lockd_listener(serv, "udp", family, nlm_udpport);
err = create_lockd_listener(serv, "udp", net, family, nlm_udpport);
if (err < 0)
return err;
return create_lockd_listener(serv, "tcp", family, nlm_tcpport);
return create_lockd_listener(serv, "tcp", net, family, nlm_tcpport);
}
/*
@ -222,16 +228,16 @@ static int create_lockd_family(struct svc_serv *serv, const int family)
* Returns zero if all listeners are available; otherwise a
* negative errno value is returned.
*/
static int make_socks(struct svc_serv *serv)
static int make_socks(struct svc_serv *serv, struct net *net)
{
static int warned;
int err;
err = create_lockd_family(serv, PF_INET);
err = create_lockd_family(serv, net, PF_INET);
if (err < 0)
goto out_err;
err = create_lockd_family(serv, PF_INET6);
err = create_lockd_family(serv, net, PF_INET6);
if (err < 0 && err != -EAFNOSUPPORT)
goto out_err;
@ -245,6 +251,47 @@ out_err:
return err;
}
static int lockd_up_net(struct net *net)
{
struct lockd_net *ln = net_generic(net, lockd_net_id);
struct svc_serv *serv = nlmsvc_rqst->rq_server;
int error;
if (ln->nlmsvc_users)
return 0;
error = svc_rpcb_setup(serv, net);
if (error)
goto err_rpcb;
error = make_socks(serv, net);
if (error < 0)
goto err_socks;
return 0;
err_socks:
svc_rpcb_cleanup(serv, net);
err_rpcb:
return error;
}
static void lockd_down_net(struct net *net)
{
struct lockd_net *ln = net_generic(net, lockd_net_id);
struct svc_serv *serv = nlmsvc_rqst->rq_server;
if (ln->nlmsvc_users) {
if (--ln->nlmsvc_users == 0) {
nlm_shutdown_hosts_net(net);
svc_shutdown_net(serv, net);
}
} else {
printk(KERN_ERR "lockd_down_net: no users! task=%p, net=%p\n",
nlmsvc_task, net);
BUG();
}
}
/*
* Bring up the lockd process if it's not already up.
*/
@ -252,13 +299,16 @@ int lockd_up(void)
{
struct svc_serv *serv;
int error = 0;
struct net *net = current->nsproxy->net_ns;
mutex_lock(&nlmsvc_mutex);
/*
* Check whether we're already up and running.
*/
if (nlmsvc_rqst)
if (nlmsvc_rqst) {
error = lockd_up_net(net);
goto out;
}
/*
* Sanity check: if there's no pid,
@ -275,7 +325,7 @@ int lockd_up(void)
goto out;
}
error = make_socks(serv);
error = make_socks(serv, net);
if (error < 0)
goto destroy_and_out;
@ -313,8 +363,12 @@ int lockd_up(void)
destroy_and_out:
svc_destroy(serv);
out:
if (!error)
if (!error) {
struct lockd_net *ln = net_generic(net, lockd_net_id);
ln->nlmsvc_users++;
nlmsvc_users++;
}
mutex_unlock(&nlmsvc_mutex);
return error;
}
@ -328,8 +382,10 @@ lockd_down(void)
{
mutex_lock(&nlmsvc_mutex);
if (nlmsvc_users) {
if (--nlmsvc_users)
if (--nlmsvc_users) {
lockd_down_net(current->nsproxy->net_ns);
goto out;
}
} else {
printk(KERN_ERR "lockd_down: no users! task=%p\n",
nlmsvc_task);
@ -497,24 +553,55 @@ module_param_call(nlm_tcpport, param_set_port, param_get_int,
module_param(nsm_use_hostnames, bool, 0644);
module_param(nlm_max_connections, uint, 0644);
static int lockd_init_net(struct net *net)
{
return 0;
}
static void lockd_exit_net(struct net *net)
{
}
static struct pernet_operations lockd_net_ops = {
.init = lockd_init_net,
.exit = lockd_exit_net,
.id = &lockd_net_id,
.size = sizeof(struct lockd_net),
};
/*
* Initialising and terminating the module.
*/
static int __init init_nlm(void)
{
int err;
#ifdef CONFIG_SYSCTL
err = -ENOMEM;
nlm_sysctl_table = register_sysctl_table(nlm_sysctl_root);
return nlm_sysctl_table ? 0 : -ENOMEM;
#else
return 0;
if (nlm_sysctl_table == NULL)
goto err_sysctl;
#endif
err = register_pernet_subsys(&lockd_net_ops);
if (err)
goto err_pernet;
return 0;
err_pernet:
#ifdef CONFIG_SYSCTL
unregister_sysctl_table(nlm_sysctl_table);
#endif
err_sysctl:
return err;
}
static void __exit exit_nlm(void)
{
/* FIXME: delete all NLM clients */
nlm_shutdown_hosts();
unregister_pernet_subsys(&lockd_net_ops);
#ifdef CONFIG_SYSCTL
unregister_sysctl_table(nlm_sysctl_table);
#endif


@ -46,7 +46,6 @@ static void nlmsvc_remove_block(struct nlm_block *block);
static int nlmsvc_setgrantargs(struct nlm_rqst *call, struct nlm_lock *lock);
static void nlmsvc_freegrantargs(struct nlm_rqst *call);
static const struct rpc_call_ops nlmsvc_grant_ops;
static const char *nlmdbg_cookie2a(const struct nlm_cookie *cookie);
/*
* The list of blocked locks to retry
@ -54,6 +53,35 @@ static const char *nlmdbg_cookie2a(const struct nlm_cookie *cookie);
static LIST_HEAD(nlm_blocked);
static DEFINE_SPINLOCK(nlm_blocked_lock);
#ifdef LOCKD_DEBUG
static const char *nlmdbg_cookie2a(const struct nlm_cookie *cookie)
{
/*
* We can get away with a static buffer because we're only
* called with BKL held.
*/
static char buf[2*NLM_MAXCOOKIELEN+1];
unsigned int i, len = sizeof(buf);
char *p = buf;
len--; /* allow for trailing \0 */
if (len < 3)
return "???";
for (i = 0 ; i < cookie->len ; i++) {
if (len < 2) {
strcpy(p-3, "...");
break;
}
sprintf(p, "%02x", cookie->data[i]);
p += 2;
len -= 2;
}
*p = '\0';
return buf;
}
#endif
/*
* Insert a blocked lock into the global list
*/
@ -935,32 +963,3 @@ nlmsvc_retry_blocked(void)
return timeout;
}
#ifdef RPC_DEBUG
static const char *nlmdbg_cookie2a(const struct nlm_cookie *cookie)
{
/*
* We can get away with a static buffer because we're only
* called with BKL held.
*/
static char buf[2*NLM_MAXCOOKIELEN+1];
unsigned int i, len = sizeof(buf);
char *p = buf;
len--; /* allow for trailing \0 */
if (len < 3)
return "???";
for (i = 0 ; i < cookie->len ; i++) {
if (len < 2) {
strcpy(p-3, "...");
break;
}
sprintf(p, "%02x", cookie->data[i]);
p += 2;
len -= 2;
}
*p = '\0';
return buf;
}
#endif


@ -152,9 +152,6 @@ static struct page *logfs_mtd_find_first_sb(struct super_block *sb, u64 *ofs)
filler_t *filler = logfs_mtd_readpage;
struct mtd_info *mtd = super->s_mtd;
if (!mtd_can_have_bb(mtd))
return NULL;
*ofs = 0;
while (mtd_block_isbad(mtd, *ofs)) {
*ofs += mtd->erasesize;
@ -172,9 +169,6 @@ static struct page *logfs_mtd_find_last_sb(struct super_block *sb, u64 *ofs)
filler_t *filler = logfs_mtd_readpage;
struct mtd_info *mtd = super->s_mtd;
if (!mtd_can_have_bb(mtd))
return NULL;
*ofs = mtd->size - mtd->erasesize;
while (mtd_block_isbad(mtd, *ofs)) {
*ofs -= mtd->erasesize;


@ -64,6 +64,7 @@ config NFS_V4
bool "NFS client support for NFS version 4"
depends on NFS_FS
select SUNRPC_GSS
select KEYS
help
This option enables support for version 4 of the NFS protocol
(RFC 3530) in the kernel's NFS client.
@ -98,6 +99,18 @@ config PNFS_OBJLAYOUT
depends on NFS_FS && NFS_V4_1 && SCSI_OSD_ULD
default m
config NFS_V4_1_IMPLEMENTATION_ID_DOMAIN
string "NFSv4.1 Implementation ID Domain"
depends on NFS_V4_1
default "kernel.org"
help
This option defines the domain portion of the implementation ID that
may be sent in the NFS exchange_id operation. The value must be in
the format of a DNS domain name and should be set to the DNS domain
name of the distribution.
If the NFS client is unchanged from the upstream kernel, this
option should be set to the default "kernel.org".
config ROOT_NFS
bool "Root file system on NFS"
depends on NFS_FS=y && IP_PNP
@ -130,16 +143,10 @@ config NFS_USE_KERNEL_DNS
bool
depends on NFS_V4 && !NFS_USE_LEGACY_DNS
select DNS_RESOLVER
select KEYS
default y
config NFS_USE_NEW_IDMAPPER
bool "Use the new idmapper upcall routine"
depends on NFS_V4 && KEYS
help
Say Y here if you want NFS to use the new idmapper upcall functions.
You will need /sbin/request-key (usually provided by the keyutils
package). For details, read
<file:Documentation/filesystems/nfs/idmapper.txt>.
If you are unsure, say N.
config NFS_DEBUG
bool
depends on NFS_FS && SUNRPC_DEBUG
select CRC32
default y


@ -46,9 +46,6 @@ MODULE_LICENSE("GPL");
MODULE_AUTHOR("Andy Adamson <andros@citi.umich.edu>");
MODULE_DESCRIPTION("The NFSv4.1 pNFS Block layout driver");
struct dentry *bl_device_pipe;
wait_queue_head_t bl_wq;
static void print_page(struct page *page)
{
dprintk("PRINTPAGE page %p\n", page);
@ -236,12 +233,11 @@ bl_read_pagelist(struct nfs_read_data *rdata)
sector_t isect, extent_length = 0;
struct parallel_io *par;
loff_t f_offset = rdata->args.offset;
size_t count = rdata->args.count;
struct page **pages = rdata->args.pages;
int pg_index = rdata->args.pgbase >> PAGE_CACHE_SHIFT;
dprintk("%s enter nr_pages %u offset %lld count %Zd\n", __func__,
rdata->npages, f_offset, count);
dprintk("%s enter nr_pages %u offset %lld count %u\n", __func__,
rdata->npages, f_offset, (unsigned int)rdata->args.count);
par = alloc_parallel(rdata);
if (!par)
@ -1025,10 +1021,128 @@ static const struct rpc_pipe_ops bl_upcall_ops = {
.destroy_msg = bl_pipe_destroy_msg,
};
static struct dentry *nfs4blocklayout_register_sb(struct super_block *sb,
struct rpc_pipe *pipe)
{
struct dentry *dir, *dentry;
dir = rpc_d_lookup_sb(sb, NFS_PIPE_DIRNAME);
if (dir == NULL)
return ERR_PTR(-ENOENT);
dentry = rpc_mkpipe_dentry(dir, "blocklayout", NULL, pipe);
dput(dir);
return dentry;
}
static void nfs4blocklayout_unregister_sb(struct super_block *sb,
struct rpc_pipe *pipe)
{
if (pipe->dentry)
rpc_unlink(pipe->dentry);
}
static int rpc_pipefs_event(struct notifier_block *nb, unsigned long event,
void *ptr)
{
struct super_block *sb = ptr;
struct net *net = sb->s_fs_info;
struct nfs_net *nn = net_generic(net, nfs_net_id);
struct dentry *dentry;
int ret = 0;
if (!try_module_get(THIS_MODULE))
return 0;
if (nn->bl_device_pipe == NULL) {
module_put(THIS_MODULE);
return 0;
}
switch (event) {
case RPC_PIPEFS_MOUNT:
dentry = nfs4blocklayout_register_sb(sb, nn->bl_device_pipe);
if (IS_ERR(dentry)) {
ret = PTR_ERR(dentry);
break;
}
nn->bl_device_pipe->dentry = dentry;
break;
case RPC_PIPEFS_UMOUNT:
if (nn->bl_device_pipe->dentry)
nfs4blocklayout_unregister_sb(sb, nn->bl_device_pipe);
break;
default:
ret = -ENOTSUPP;
break;
}
module_put(THIS_MODULE);
return ret;
}
static struct notifier_block nfs4blocklayout_block = {
.notifier_call = rpc_pipefs_event,
};
static struct dentry *nfs4blocklayout_register_net(struct net *net,
struct rpc_pipe *pipe)
{
struct super_block *pipefs_sb;
struct dentry *dentry;
pipefs_sb = rpc_get_sb_net(net);
if (!pipefs_sb)
return NULL;
dentry = nfs4blocklayout_register_sb(pipefs_sb, pipe);
rpc_put_sb_net(net);
return dentry;
}
static void nfs4blocklayout_unregister_net(struct net *net,
struct rpc_pipe *pipe)
{
struct super_block *pipefs_sb;
pipefs_sb = rpc_get_sb_net(net);
if (pipefs_sb) {
nfs4blocklayout_unregister_sb(pipefs_sb, pipe);
rpc_put_sb_net(net);
}
}
static int nfs4blocklayout_net_init(struct net *net)
{
struct nfs_net *nn = net_generic(net, nfs_net_id);
struct dentry *dentry;
init_waitqueue_head(&nn->bl_wq);
nn->bl_device_pipe = rpc_mkpipe_data(&bl_upcall_ops, 0);
if (IS_ERR(nn->bl_device_pipe))
return PTR_ERR(nn->bl_device_pipe);
dentry = nfs4blocklayout_register_net(net, nn->bl_device_pipe);
if (IS_ERR(dentry)) {
rpc_destroy_pipe_data(nn->bl_device_pipe);
return PTR_ERR(dentry);
}
nn->bl_device_pipe->dentry = dentry;
return 0;
}
static void nfs4blocklayout_net_exit(struct net *net)
{
struct nfs_net *nn = net_generic(net, nfs_net_id);
nfs4blocklayout_unregister_net(net, nn->bl_device_pipe);
rpc_destroy_pipe_data(nn->bl_device_pipe);
nn->bl_device_pipe = NULL;
}
static struct pernet_operations nfs4blocklayout_net_ops = {
.init = nfs4blocklayout_net_init,
.exit = nfs4blocklayout_net_exit,
};
static int __init nfs4blocklayout_init(void)
{
struct vfsmount *mnt;
struct path path;
int ret;
dprintk("%s: NFSv4 Block Layout Driver Registering...\n", __func__);
@ -1037,32 +1151,17 @@ static int __init nfs4blocklayout_init(void)
if (ret)
goto out;
init_waitqueue_head(&bl_wq);
mnt = rpc_get_mount();
if (IS_ERR(mnt)) {
ret = PTR_ERR(mnt);
goto out_remove;
}
ret = vfs_path_lookup(mnt->mnt_root,
mnt,
NFS_PIPE_DIRNAME, 0, &path);
ret = rpc_pipefs_notifier_register(&nfs4blocklayout_block);
if (ret)
goto out_putrpc;
bl_device_pipe = rpc_mkpipe(path.dentry, "blocklayout", NULL,
&bl_upcall_ops, 0);
path_put(&path);
if (IS_ERR(bl_device_pipe)) {
ret = PTR_ERR(bl_device_pipe);
goto out_putrpc;
}
goto out_remove;
ret = register_pernet_subsys(&nfs4blocklayout_net_ops);
if (ret)
goto out_notifier;
out:
return ret;
out_putrpc:
rpc_put_mount();
out_notifier:
rpc_pipefs_notifier_unregister(&nfs4blocklayout_block);
out_remove:
pnfs_unregister_layoutdriver(&blocklayout_type);
return ret;
@ -1073,9 +1172,9 @@ static void __exit nfs4blocklayout_exit(void)
dprintk("%s: NFSv4 Block Layout Driver Unregistering...\n",
__func__);
rpc_pipefs_notifier_unregister(&nfs4blocklayout_block);
unregister_pernet_subsys(&nfs4blocklayout_net_ops);
pnfs_unregister_layoutdriver(&blocklayout_type);
rpc_unlink(bl_device_pipe);
rpc_put_mount();
}
MODULE_ALIAS("nfs-layouttype4-3");
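
The hunks above replace the driver-wide bl_device_pipe/bl_wq globals with fields of the per-network-namespace nfs_net structure, reached through net_generic(). nfs_net itself is defined in fs/nfs/netns.h, which is not part of this excerpt; the following is only a sketch of that container, assuming the field names used above (the nfs_net() accessor is hypothetical, added here purely for illustration):

	/* Sketch only -- the real definition lives in fs/nfs/netns.h in this series. */
	#include <net/net_namespace.h>
	#include <net/netns/generic.h>

	struct nfs_net {
		struct rpc_pipe *bl_device_pipe;	/* blocklayout upcall pipe for this netns */
		struct bl_dev_msg bl_mount_reply;	/* last downcall reply, per netns */
		wait_queue_head_t bl_wq;		/* waiters for blocklayout downcalls */
		/* ... client lists, cb_ident idr, DNS cache, etc. ... */
	};

	extern int nfs_net_id;	/* allocated when the pernet subsystem is registered */

	/* Hypothetical helper: net_generic() returns this netns' nfs_net slot. */
	static inline struct nfs_net *nfs_net(struct net *net)
	{
		return net_generic(net, nfs_net_id);
	}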

View File

@ -37,6 +37,7 @@
#include <linux/sunrpc/rpc_pipe_fs.h>
#include "../pnfs.h"
#include "../netns.h"
#define PAGE_CACHE_SECTORS (PAGE_CACHE_SIZE >> SECTOR_SHIFT)
#define PAGE_CACHE_SECTOR_SHIFT (PAGE_CACHE_SHIFT - SECTOR_SHIFT)
@ -50,6 +51,7 @@ struct pnfs_block_dev {
struct list_head bm_node;
struct nfs4_deviceid bm_mdevid; /* associated devid */
struct block_device *bm_mdev; /* meta device itself */
struct net *net;
};
enum exstate4 {
@ -151,9 +153,9 @@ BLK_LSEG2EXT(struct pnfs_layout_segment *lseg)
return BLK_LO2EXT(lseg->pls_layout);
}
struct bl_dev_msg {
int32_t status;
uint32_t major, minor;
struct bl_pipe_msg {
struct rpc_pipe_msg msg;
wait_queue_head_t *bl_wq;
};
struct bl_msg_hdr {
@ -161,9 +163,6 @@ struct bl_msg_hdr {
u16 totallen; /* length of entire message, including hdr itself */
};
extern struct dentry *bl_device_pipe;
extern wait_queue_head_t bl_wq;
#define BL_DEVICE_UMOUNT 0x0 /* Umount--delete devices */
#define BL_DEVICE_MOUNT 0x1 /* Mount--create devices*/
#define BL_DEVICE_REQUEST_INIT 0x0 /* Start request */
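
The bl_pipe_msg wrapper added above embeds the rpc_pipe_msg so that the pipe callbacks can wake the waitqueue of whichever namespace queued the upcall, rather than a single global one. A minimal sketch of that recovery step, mirroring how bl_pipe_destroy_msg() uses container_of() later in this diff (the function name here is made up):

	/* Sketch: recover the wrapper, and with it the per-netns waitqueue. */
	static void example_destroy_msg(struct rpc_pipe_msg *msg)
	{
		struct bl_pipe_msg *bl_pipe_msg =
			container_of(msg, struct bl_pipe_msg, msg);

		wake_up(bl_pipe_msg->bl_wq);	/* bl_wq points at the nfs_net's bl_wq */
	}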

View File

@ -46,7 +46,7 @@ static int decode_sector_number(__be32 **rp, sector_t *sp)
*rp = xdr_decode_hyper(*rp, &s);
if (s & 0x1ff) {
printk(KERN_WARNING "%s: sector not aligned\n", __func__);
printk(KERN_WARNING "NFS: %s: sector not aligned\n", __func__);
return -1;
}
*sp = s >> SECTOR_SHIFT;
@ -79,27 +79,30 @@ int nfs4_blkdev_put(struct block_device *bdev)
return blkdev_put(bdev, FMODE_READ);
}
static struct bl_dev_msg bl_mount_reply;
ssize_t bl_pipe_downcall(struct file *filp, const char __user *src,
size_t mlen)
{
struct nfs_net *nn = net_generic(filp->f_dentry->d_sb->s_fs_info,
nfs_net_id);
if (mlen != sizeof (struct bl_dev_msg))
return -EINVAL;
if (copy_from_user(&bl_mount_reply, src, mlen) != 0)
if (copy_from_user(&nn->bl_mount_reply, src, mlen) != 0)
return -EFAULT;
wake_up(&bl_wq);
wake_up(&nn->bl_wq);
return mlen;
}
void bl_pipe_destroy_msg(struct rpc_pipe_msg *msg)
{
struct bl_pipe_msg *bl_pipe_msg = container_of(msg, struct bl_pipe_msg, msg);
if (msg->errno >= 0)
return;
wake_up(&bl_wq);
wake_up(bl_pipe_msg->bl_wq);
}
/*
@ -111,29 +114,33 @@ nfs4_blk_decode_device(struct nfs_server *server,
{
struct pnfs_block_dev *rv;
struct block_device *bd = NULL;
struct rpc_pipe_msg msg;
struct bl_pipe_msg bl_pipe_msg;
struct rpc_pipe_msg *msg = &bl_pipe_msg.msg;
struct bl_msg_hdr bl_msg = {
.type = BL_DEVICE_MOUNT,
.totallen = dev->mincount,
};
uint8_t *dataptr;
DECLARE_WAITQUEUE(wq, current);
struct bl_dev_msg *reply = &bl_mount_reply;
int offset, len, i, rc;
struct net *net = server->nfs_client->net;
struct nfs_net *nn = net_generic(net, nfs_net_id);
struct bl_dev_msg *reply = &nn->bl_mount_reply;
dprintk("%s CREATING PIPEFS MESSAGE\n", __func__);
dprintk("%s: deviceid: %s, mincount: %d\n", __func__, dev->dev_id.data,
dev->mincount);
memset(&msg, 0, sizeof(msg));
msg.data = kzalloc(sizeof(bl_msg) + dev->mincount, GFP_NOFS);
if (!msg.data) {
bl_pipe_msg.bl_wq = &nn->bl_wq;
memset(msg, 0, sizeof(*msg));
msg->data = kzalloc(sizeof(bl_msg) + dev->mincount, GFP_NOFS);
if (!msg->data) {
rv = ERR_PTR(-ENOMEM);
goto out;
}
memcpy(msg.data, &bl_msg, sizeof(bl_msg));
dataptr = (uint8_t *) msg.data;
memcpy(msg->data, &bl_msg, sizeof(bl_msg));
dataptr = (uint8_t *) msg->data;
len = dev->mincount;
offset = sizeof(bl_msg);
for (i = 0; len > 0; i++) {
@ -142,13 +149,13 @@ nfs4_blk_decode_device(struct nfs_server *server,
len -= PAGE_CACHE_SIZE;
offset += PAGE_CACHE_SIZE;
}
msg.len = sizeof(bl_msg) + dev->mincount;
msg->len = sizeof(bl_msg) + dev->mincount;
dprintk("%s CALLING USERSPACE DAEMON\n", __func__);
add_wait_queue(&bl_wq, &wq);
rc = rpc_queue_upcall(bl_device_pipe->d_inode, &msg);
add_wait_queue(&nn->bl_wq, &wq);
rc = rpc_queue_upcall(nn->bl_device_pipe, msg);
if (rc < 0) {
remove_wait_queue(&bl_wq, &wq);
remove_wait_queue(&nn->bl_wq, &wq);
rv = ERR_PTR(rc);
goto out;
}
@ -156,7 +163,7 @@ nfs4_blk_decode_device(struct nfs_server *server,
set_current_state(TASK_UNINTERRUPTIBLE);
schedule();
__set_current_state(TASK_RUNNING);
remove_wait_queue(&bl_wq, &wq);
remove_wait_queue(&nn->bl_wq, &wq);
if (reply->status != BL_DEVICE_REQUEST_PROC) {
dprintk("%s failed to open device: %d\n",
@ -181,13 +188,14 @@ nfs4_blk_decode_device(struct nfs_server *server,
rv->bm_mdev = bd;
memcpy(&rv->bm_mdevid, &dev->dev_id, sizeof(struct nfs4_deviceid));
rv->net = net;
dprintk("%s Created device %s with bd_block_size %u\n",
__func__,
bd->bd_disk->disk_name,
bd->bd_block_size);
out:
kfree(msg.data);
kfree(msg->data);
return rv;
}

View File

@ -38,9 +38,10 @@
#define NFSDBG_FACILITY NFSDBG_PNFS_LD
static void dev_remove(dev_t dev)
static void dev_remove(struct net *net, dev_t dev)
{
struct rpc_pipe_msg msg;
struct bl_pipe_msg bl_pipe_msg;
struct rpc_pipe_msg *msg = &bl_pipe_msg.msg;
struct bl_dev_msg bl_umount_request;
struct bl_msg_hdr bl_msg = {
.type = BL_DEVICE_UMOUNT,
@ -48,36 +49,38 @@ static void dev_remove(dev_t dev)
};
uint8_t *dataptr;
DECLARE_WAITQUEUE(wq, current);
struct nfs_net *nn = net_generic(net, nfs_net_id);
dprintk("Entering %s\n", __func__);
memset(&msg, 0, sizeof(msg));
msg.data = kzalloc(1 + sizeof(bl_umount_request), GFP_NOFS);
if (!msg.data)
bl_pipe_msg.bl_wq = &nn->bl_wq;
memset(msg, 0, sizeof(*msg));
msg->data = kzalloc(1 + sizeof(bl_umount_request), GFP_NOFS);
if (!msg->data)
goto out;
memset(&bl_umount_request, 0, sizeof(bl_umount_request));
bl_umount_request.major = MAJOR(dev);
bl_umount_request.minor = MINOR(dev);
memcpy(msg.data, &bl_msg, sizeof(bl_msg));
dataptr = (uint8_t *) msg.data;
memcpy(msg->data, &bl_msg, sizeof(bl_msg));
dataptr = (uint8_t *) msg->data;
memcpy(&dataptr[sizeof(bl_msg)], &bl_umount_request, sizeof(bl_umount_request));
msg.len = sizeof(bl_msg) + bl_msg.totallen;
msg->len = sizeof(bl_msg) + bl_msg.totallen;
add_wait_queue(&bl_wq, &wq);
if (rpc_queue_upcall(bl_device_pipe->d_inode, &msg) < 0) {
remove_wait_queue(&bl_wq, &wq);
add_wait_queue(&nn->bl_wq, &wq);
if (rpc_queue_upcall(nn->bl_device_pipe, msg) < 0) {
remove_wait_queue(&nn->bl_wq, &wq);
goto out;
}
set_current_state(TASK_UNINTERRUPTIBLE);
schedule();
__set_current_state(TASK_RUNNING);
remove_wait_queue(&bl_wq, &wq);
remove_wait_queue(&nn->bl_wq, &wq);
out:
kfree(msg.data);
kfree(msg->data);
}
/*
@ -90,10 +93,10 @@ static void nfs4_blk_metadev_release(struct pnfs_block_dev *bdev)
dprintk("%s Releasing\n", __func__);
rv = nfs4_blkdev_put(bdev->bm_mdev);
if (rv)
printk(KERN_ERR "%s nfs4_blkdev_put returns %d\n",
printk(KERN_ERR "NFS: %s nfs4_blkdev_put returns %d\n",
__func__, rv);
dev_remove(bdev->bm_mdev->bd_dev);
dev_remove(bdev->net, bdev->bm_mdev->bd_dev);
}
void bl_free_block_dev(struct pnfs_block_dev *bdev)

View File

@ -147,7 +147,7 @@ static int _preload_range(struct pnfs_inval_markings *marks,
count = (int)(end - start) / (int)tree->mtt_step_size;
/* Pre-malloc what memory we might need */
storage = kmalloc(sizeof(*storage) * count, GFP_NOFS);
storage = kcalloc(count, sizeof(*storage), GFP_NOFS);
if (!storage)
return -ENOMEM;
for (i = 0; i < count; i++) {

View File

@ -13,6 +13,7 @@
#include <linux/slab.h>
#include <linux/sunrpc/cache.h>
#include <linux/sunrpc/rpc_pipe_fs.h>
#include <net/net_namespace.h>
#include "cache_lib.h"
@ -111,30 +112,54 @@ int nfs_cache_wait_for_upcall(struct nfs_cache_defer_req *dreq)
return 0;
}
int nfs_cache_register(struct cache_detail *cd)
int nfs_cache_register_sb(struct super_block *sb, struct cache_detail *cd)
{
struct vfsmount *mnt;
struct path path;
int ret;
struct dentry *dir;
mnt = rpc_get_mount();
if (IS_ERR(mnt))
return PTR_ERR(mnt);
ret = vfs_path_lookup(mnt->mnt_root, mnt, "/cache", 0, &path);
if (ret)
goto err;
ret = sunrpc_cache_register_pipefs(path.dentry, cd->name, 0600, cd);
path_put(&path);
if (!ret)
return ret;
err:
rpc_put_mount();
dir = rpc_d_lookup_sb(sb, "cache");
BUG_ON(dir == NULL);
ret = sunrpc_cache_register_pipefs(dir, cd->name, 0600, cd);
dput(dir);
return ret;
}
void nfs_cache_unregister(struct cache_detail *cd)
int nfs_cache_register_net(struct net *net, struct cache_detail *cd)
{
sunrpc_cache_unregister_pipefs(cd);
rpc_put_mount();
struct super_block *pipefs_sb;
int ret = 0;
pipefs_sb = rpc_get_sb_net(net);
if (pipefs_sb) {
ret = nfs_cache_register_sb(pipefs_sb, cd);
rpc_put_sb_net(net);
}
return ret;
}
void nfs_cache_unregister_sb(struct super_block *sb, struct cache_detail *cd)
{
if (cd->u.pipefs.dir)
sunrpc_cache_unregister_pipefs(cd);
}
void nfs_cache_unregister_net(struct net *net, struct cache_detail *cd)
{
struct super_block *pipefs_sb;
pipefs_sb = rpc_get_sb_net(net);
if (pipefs_sb) {
nfs_cache_unregister_sb(pipefs_sb, cd);
rpc_put_sb_net(net);
}
}
void nfs_cache_init(struct cache_detail *cd)
{
sunrpc_init_cache_detail(cd);
}
void nfs_cache_destroy(struct cache_detail *cd)
{
sunrpc_destroy_cache_detail(cd);
}
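
With nfs_cache_register()/nfs_cache_unregister() split into per-superblock and per-namespace variants, a cache is now created for one network namespace and attached to that namespace's rpc_pipefs mount, if one exists; re-attaching on later mount/umount events is left to the callers' notifiers. A sketch of the expected calling sequence, matching how the DNS resolver cache uses these helpers further down (the function names here are illustrative only):

	/* Sketch: per-netns lifetime of an NFS cache_detail under the new helpers. */
	static int example_cache_setup(struct net *net, struct cache_detail *cd)
	{
		int err;

		nfs_cache_init(cd);			/* sunrpc_init_cache_detail() */
		err = nfs_cache_register_net(net, cd);	/* attach to this netns' rpc_pipefs, if mounted */
		if (err)
			nfs_cache_destroy(cd);
		return err;
	}

	static void example_cache_teardown(struct net *net, struct cache_detail *cd)
	{
		nfs_cache_unregister_net(net, cd);	/* detach from rpc_pipefs */
		nfs_cache_destroy(cd);			/* sunrpc_destroy_cache_detail() */
	}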

View File

@ -23,5 +23,11 @@ extern struct nfs_cache_defer_req *nfs_cache_defer_req_alloc(void);
extern void nfs_cache_defer_req_put(struct nfs_cache_defer_req *dreq);
extern int nfs_cache_wait_for_upcall(struct nfs_cache_defer_req *dreq);
extern int nfs_cache_register(struct cache_detail *cd);
extern void nfs_cache_unregister(struct cache_detail *cd);
extern void nfs_cache_init(struct cache_detail *cd);
extern void nfs_cache_destroy(struct cache_detail *cd);
extern int nfs_cache_register_net(struct net *net, struct cache_detail *cd);
extern void nfs_cache_unregister_net(struct net *net, struct cache_detail *cd);
extern int nfs_cache_register_sb(struct super_block *sb,
struct cache_detail *cd);
extern void nfs_cache_unregister_sb(struct super_block *sb,
struct cache_detail *cd);

View File

@ -85,7 +85,7 @@ nfs4_callback_svc(void *vrqstp)
}
if (err < 0) {
if (err != preverr) {
printk(KERN_WARNING "%s: unexpected error "
printk(KERN_WARNING "NFS: %s: unexpected error "
"from svc_recv (%d)\n", __func__, err);
preverr = err;
}
@ -101,12 +101,12 @@ nfs4_callback_svc(void *vrqstp)
/*
* Prepare to bring up the NFSv4 callback service
*/
struct svc_rqst *
nfs4_callback_up(struct svc_serv *serv)
static struct svc_rqst *
nfs4_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
{
int ret;
ret = svc_create_xprt(serv, "tcp", &init_net, PF_INET,
ret = svc_create_xprt(serv, "tcp", xprt->xprt_net, PF_INET,
nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS);
if (ret <= 0)
goto out_err;
@ -114,7 +114,7 @@ nfs4_callback_up(struct svc_serv *serv)
dprintk("NFS: Callback listener port = %u (af %u)\n",
nfs_callback_tcpport, PF_INET);
ret = svc_create_xprt(serv, "tcp", &init_net, PF_INET6,
ret = svc_create_xprt(serv, "tcp", xprt->xprt_net, PF_INET6,
nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS);
if (ret > 0) {
nfs_callback_tcpport6 = ret;
@ -172,7 +172,7 @@ nfs41_callback_svc(void *vrqstp)
/*
* Bring up the NFSv4.1 callback service
*/
struct svc_rqst *
static struct svc_rqst *
nfs41_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
{
struct svc_rqst *rqstp;
@ -183,7 +183,7 @@ nfs41_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
* fore channel connection.
* Returns the input port (0) and sets the svc_serv bc_xprt on success
*/
ret = svc_create_xprt(serv, "tcp-bc", &init_net, PF_INET, 0,
ret = svc_create_xprt(serv, "tcp-bc", xprt->xprt_net, PF_INET, 0,
SVC_SOCK_ANONYMOUS);
if (ret < 0) {
rqstp = ERR_PTR(ret);
@ -269,7 +269,7 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
serv, xprt, &rqstp, &callback_svc);
if (!minorversion_setup) {
/* v4.0 callback setup */
rqstp = nfs4_callback_up(serv);
rqstp = nfs4_callback_up(serv, xprt);
callback_svc = nfs4_callback_svc;
}
@ -332,7 +332,6 @@ void nfs_callback_down(int minorversion)
int
check_gss_callback_principal(struct nfs_client *clp, struct svc_rqst *rqstp)
{
struct rpc_clnt *r = clp->cl_rpcclient;
char *p = svc_gss_principal(rqstp);
if (rqstp->rq_authop->flavour != RPC_AUTH_GSS)
@ -353,7 +352,7 @@ check_gss_callback_principal(struct nfs_client *clp, struct svc_rqst *rqstp)
if (memcmp(p, "nfs@", 4) != 0)
return 0;
p += 4;
if (strcmp(p, r->cl_server) != 0)
if (strcmp(p, clp->cl_hostname) != 0)
return 0;
return 1;
}

View File

@ -38,7 +38,8 @@ enum nfs4_callback_opnum {
struct cb_process_state {
__be32 drc_status;
struct nfs_client *clp;
int slotid;
u32 slotid;
struct net *net;
};
struct cb_compound_hdr_arg {

View File

@ -8,6 +8,7 @@
#include <linux/nfs4.h>
#include <linux/nfs_fs.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>
#include "nfs4_fs.h"
#include "callback.h"
#include "delegation.h"
@ -33,7 +34,7 @@ __be32 nfs4_callback_getattr(struct cb_getattrargs *args,
res->bitmap[0] = res->bitmap[1] = 0;
res->status = htonl(NFS4ERR_BADHANDLE);
dprintk("NFS: GETATTR callback request from %s\n",
dprintk_rcu("NFS: GETATTR callback request from %s\n",
rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
inode = nfs_delegation_find_inode(cps->clp, &args->fh);
@ -73,7 +74,7 @@ __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
if (!cps->clp) /* Always set for v4.0. Set in cb_sequence for v4.1 */
goto out;
dprintk("NFS: RECALL callback request from %s\n",
dprintk_rcu("NFS: RECALL callback request from %s\n",
rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
res = htonl(NFS4ERR_BADHANDLE);
@ -86,8 +87,7 @@ __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
res = 0;
break;
case -ENOENT:
if (res != 0)
res = htonl(NFS4ERR_BAD_STATEID);
res = htonl(NFS4ERR_BAD_STATEID);
break;
default:
res = htonl(NFS4ERR_RESOURCE);
@ -98,52 +98,64 @@ out:
return res;
}
int nfs4_validate_delegation_stateid(struct nfs_delegation *delegation, const nfs4_stateid *stateid)
{
if (delegation == NULL || memcmp(delegation->stateid.data, stateid->data,
sizeof(delegation->stateid.data)) != 0)
return 0;
return 1;
}
#if defined(CONFIG_NFS_V4_1)
static u32 initiate_file_draining(struct nfs_client *clp,
struct cb_layoutrecallargs *args)
/*
* Lookup a layout by filehandle.
*
* Note: gets a refcount on the layout hdr and on its respective inode.
* Caller must put the layout hdr and the inode.
*
* TODO: keep track of all layouts (and delegations) in a hash table
* hashed by filehandle.
*/
static struct pnfs_layout_hdr * get_layout_by_fh_locked(struct nfs_client *clp, struct nfs_fh *fh)
{
struct nfs_server *server;
struct pnfs_layout_hdr *lo;
struct inode *ino;
bool found = false;
u32 rv = NFS4ERR_NOMATCHING_LAYOUT;
LIST_HEAD(free_me_list);
struct pnfs_layout_hdr *lo;
spin_lock(&clp->cl_lock);
rcu_read_lock();
list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
list_for_each_entry(lo, &server->layouts, plh_layouts) {
if (nfs_compare_fh(&args->cbl_fh,
&NFS_I(lo->plh_inode)->fh))
if (nfs_compare_fh(fh, &NFS_I(lo->plh_inode)->fh))
continue;
ino = igrab(lo->plh_inode);
if (!ino)
continue;
found = true;
/* Without this, layout can be freed as soon
* as we release cl_lock.
*/
get_layout_hdr(lo);
break;
return lo;
}
if (found)
break;
}
return NULL;
}
static struct pnfs_layout_hdr * get_layout_by_fh(struct nfs_client *clp, struct nfs_fh *fh)
{
struct pnfs_layout_hdr *lo;
spin_lock(&clp->cl_lock);
rcu_read_lock();
lo = get_layout_by_fh_locked(clp, fh);
rcu_read_unlock();
spin_unlock(&clp->cl_lock);
if (!found)
return lo;
}
static u32 initiate_file_draining(struct nfs_client *clp,
struct cb_layoutrecallargs *args)
{
struct inode *ino;
struct pnfs_layout_hdr *lo;
u32 rv = NFS4ERR_NOMATCHING_LAYOUT;
LIST_HEAD(free_me_list);
lo = get_layout_by_fh(clp, &args->cbl_fh);
if (!lo)
return NFS4ERR_NOMATCHING_LAYOUT;
ino = lo->plh_inode;
spin_lock(&ino->i_lock);
if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
mark_matching_lsegs_invalid(lo, &free_me_list,
@ -213,17 +225,13 @@ static u32 initiate_bulk_draining(struct nfs_client *clp,
static u32 do_callback_layoutrecall(struct nfs_client *clp,
struct cb_layoutrecallargs *args)
{
u32 res = NFS4ERR_DELAY;
u32 res;
dprintk("%s enter, type=%i\n", __func__, args->cbl_recall_type);
if (test_and_set_bit(NFS4CLNT_LAYOUTRECALL, &clp->cl_state))
goto out;
if (args->cbl_recall_type == RETURN_FILE)
res = initiate_file_draining(clp, args);
else
res = initiate_bulk_draining(clp, args);
clear_bit(NFS4CLNT_LAYOUTRECALL, &clp->cl_state);
out:
dprintk("%s returning %i\n", __func__, res);
return res;
@ -303,21 +311,6 @@ out:
return res;
}
int nfs41_validate_delegation_stateid(struct nfs_delegation *delegation, const nfs4_stateid *stateid)
{
if (delegation == NULL)
return 0;
if (stateid->stateid.seqid != 0)
return 0;
if (memcmp(&delegation->stateid.stateid.other,
&stateid->stateid.other,
NFS4_STATEID_OTHER_SIZE))
return 0;
return 1;
}
/*
* Validate the sequenceID sent by the server.
* Return success if the sequenceID is one more than what we last saw on
@ -441,7 +434,7 @@ __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
int i;
__be32 status = htonl(NFS4ERR_BADSESSION);
clp = nfs4_find_client_sessionid(args->csa_addr, &args->csa_sessionid);
clp = nfs4_find_client_sessionid(cps->net, args->csa_addr, &args->csa_sessionid);
if (clp == NULL)
goto out;
@ -517,7 +510,7 @@ __be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy,
if (!cps->clp) /* set in cb_sequence */
goto out;
dprintk("NFS: RECALL_ANY callback request from %s\n",
dprintk_rcu("NFS: RECALL_ANY callback request from %s\n",
rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
status = cpu_to_be32(NFS4ERR_INVAL);
@ -552,7 +545,7 @@ __be32 nfs4_callback_recallslot(struct cb_recallslotargs *args, void *dummy,
if (!cps->clp) /* set in cb_sequence */
goto out;
dprintk("NFS: CB_RECALL_SLOT request from %s target max slots %d\n",
dprintk_rcu("NFS: CB_RECALL_SLOT request from %s target max slots %d\n",
rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR),
args->crsa_target_max_slots);

View File

@ -9,6 +9,8 @@
#include <linux/sunrpc/svc.h>
#include <linux/nfs4.h>
#include <linux/nfs_fs.h>
#include <linux/ratelimit.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/sunrpc/bc_xprt.h>
#include "nfs4_fs.h"
@ -73,7 +75,7 @@ static __be32 *read_buf(struct xdr_stream *xdr, int nbytes)
p = xdr_inline_decode(xdr, nbytes);
if (unlikely(p == NULL))
printk(KERN_WARNING "NFSv4 callback reply buffer overflowed!\n");
printk(KERN_WARNING "NFS: NFSv4 callback reply buffer overflowed!\n");
return p;
}
@ -138,10 +140,10 @@ static __be32 decode_stateid(struct xdr_stream *xdr, nfs4_stateid *stateid)
{
__be32 *p;
p = read_buf(xdr, 16);
p = read_buf(xdr, NFS4_STATEID_SIZE);
if (unlikely(p == NULL))
return htonl(NFS4ERR_RESOURCE);
memcpy(stateid->data, p, 16);
memcpy(stateid, p, NFS4_STATEID_SIZE);
return 0;
}
@ -155,7 +157,7 @@ static __be32 decode_compound_hdr_arg(struct xdr_stream *xdr, struct cb_compound
return status;
/* We do not like overly long tags! */
if (hdr->taglen > CB_OP_TAGLEN_MAXSZ - 12) {
printk("NFSv4 CALLBACK %s: client sent tag of length %u\n",
printk("NFS: NFSv4 CALLBACK %s: client sent tag of length %u\n",
__func__, hdr->taglen);
return htonl(NFS4ERR_RESOURCE);
}
@ -167,7 +169,7 @@ static __be32 decode_compound_hdr_arg(struct xdr_stream *xdr, struct cb_compound
if (hdr->minorversion <= 1) {
hdr->cb_ident = ntohl(*p++); /* ignored by v4.1 */
} else {
printk(KERN_WARNING "%s: NFSv4 server callback with "
pr_warn_ratelimited("NFS: %s: NFSv4 server callback with "
"illegal minor version %u!\n",
__func__, hdr->minorversion);
return htonl(NFS4ERR_MINOR_VERS_MISMATCH);
@ -759,14 +761,14 @@ static void nfs4_callback_free_slot(struct nfs4_session *session)
* Let the state manager know callback processing done.
* A single slot, so highest used slotid is either 0 or -1
*/
tbl->highest_used_slotid = -1;
tbl->highest_used_slotid = NFS4_NO_SLOT;
nfs4_check_drain_bc_complete(session);
spin_unlock(&tbl->slot_tbl_lock);
}
static void nfs4_cb_free_slot(struct cb_process_state *cps)
{
if (cps->slotid != -1)
if (cps->slotid != NFS4_NO_SLOT)
nfs4_callback_free_slot(cps->clp->cl_session);
}
@ -860,7 +862,8 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
struct cb_process_state cps = {
.drc_status = 0,
.clp = NULL,
.slotid = -1,
.slotid = NFS4_NO_SLOT,
.net = rqstp->rq_xprt->xpt_net,
};
unsigned int nops = 0;
@ -876,7 +879,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
return rpc_garbage_args;
if (hdr_arg.minorversion == 0) {
cps.clp = nfs4_find_client_ident(hdr_arg.cb_ident);
cps.clp = nfs4_find_client_ident(rqstp->rq_xprt->xpt_net, hdr_arg.cb_ident);
if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp))
return rpc_drop_reply;
}
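
Because cps.slotid is now an unsigned u32 (see the callback.h hunk above), the "no slot held" sentinel changes from -1 to NFS4_NO_SLOT. The constant is introduced elsewhere in this series and is not shown in this excerpt; presumably it is along the lines of:

	/* Assumed definition; the actual constant lives in a header outside this excerpt. */
	#define NFS4_NO_SLOT	((u32)-1)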

View File

@ -39,6 +39,8 @@
#include <net/ipv6.h>
#include <linux/nfs_xdr.h>
#include <linux/sunrpc/bc_xprt.h>
#include <linux/nsproxy.h>
#include <linux/pid_namespace.h>
#include <asm/system.h>
@ -49,15 +51,12 @@
#include "internal.h"
#include "fscache.h"
#include "pnfs.h"
#include "netns.h"
#define NFSDBG_FACILITY NFSDBG_CLIENT
static DEFINE_SPINLOCK(nfs_client_lock);
static LIST_HEAD(nfs_client_list);
static LIST_HEAD(nfs_volume_list);
static DECLARE_WAIT_QUEUE_HEAD(nfs_client_active_wq);
#ifdef CONFIG_NFS_V4
static DEFINE_IDR(cb_ident_idr); /* Protected by nfs_client_lock */
/*
* Get a unique NFSv4.0 callback identifier which will be used
@ -66,15 +65,16 @@ static DEFINE_IDR(cb_ident_idr); /* Protected by nfs_client_lock */
static int nfs_get_cb_ident_idr(struct nfs_client *clp, int minorversion)
{
int ret = 0;
struct nfs_net *nn = net_generic(clp->net, nfs_net_id);
if (clp->rpc_ops->version != 4 || minorversion != 0)
return ret;
retry:
if (!idr_pre_get(&cb_ident_idr, GFP_KERNEL))
if (!idr_pre_get(&nn->cb_ident_idr, GFP_KERNEL))
return -ENOMEM;
spin_lock(&nfs_client_lock);
ret = idr_get_new(&cb_ident_idr, clp, &clp->cl_cb_ident);
spin_unlock(&nfs_client_lock);
spin_lock(&nn->nfs_client_lock);
ret = idr_get_new(&nn->cb_ident_idr, clp, &clp->cl_cb_ident);
spin_unlock(&nn->nfs_client_lock);
if (ret == -EAGAIN)
goto retry;
return ret;
@ -89,7 +89,7 @@ static bool nfs4_disable_idmapping = true;
/*
* RPC cruft for NFS
*/
static struct rpc_version *nfs_version[5] = {
static const struct rpc_version *nfs_version[5] = {
[2] = &nfs_version2,
#ifdef CONFIG_NFS_V3
[3] = &nfs_version3,
@ -99,7 +99,7 @@ static struct rpc_version *nfs_version[5] = {
#endif
};
struct rpc_program nfs_program = {
const struct rpc_program nfs_program = {
.name = "nfs",
.number = NFS_PROGRAM,
.nrvers = ARRAY_SIZE(nfs_version),
@ -115,11 +115,11 @@ struct rpc_stat nfs_rpcstat = {
#ifdef CONFIG_NFS_V3_ACL
static struct rpc_stat nfsacl_rpcstat = { &nfsacl_program };
static struct rpc_version * nfsacl_version[] = {
static const struct rpc_version *nfsacl_version[] = {
[3] = &nfsacl_version3,
};
struct rpc_program nfsacl_program = {
const struct rpc_program nfsacl_program = {
.name = "nfsacl",
.number = NFS_ACL_PROGRAM,
.nrvers = ARRAY_SIZE(nfsacl_version),
@ -135,6 +135,7 @@ struct nfs_client_initdata {
const struct nfs_rpc_ops *rpc_ops;
int proto;
u32 minorversion;
struct net *net;
};
/*
@ -171,6 +172,7 @@ static struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *cl_
clp->cl_rpcclient = ERR_PTR(-EINVAL);
clp->cl_proto = cl_init->proto;
clp->net = get_net(cl_init->net);
#ifdef CONFIG_NFS_V4
err = nfs_get_cb_ident_idr(clp, cl_init->minorversion);
@ -202,8 +204,11 @@ error_0:
#ifdef CONFIG_NFS_V4_1
static void nfs4_shutdown_session(struct nfs_client *clp)
{
if (nfs4_has_session(clp))
if (nfs4_has_session(clp)) {
nfs4_deviceid_purge_client(clp);
nfs4_destroy_session(clp->cl_session);
}
}
#else /* CONFIG_NFS_V4_1 */
static void nfs4_shutdown_session(struct nfs_client *clp)
@ -233,16 +238,20 @@ static void nfs4_shutdown_client(struct nfs_client *clp)
}
/* idr_remove_all is not needed as all id's are removed by nfs_put_client */
void nfs_cleanup_cb_ident_idr(void)
void nfs_cleanup_cb_ident_idr(struct net *net)
{
idr_destroy(&cb_ident_idr);
struct nfs_net *nn = net_generic(net, nfs_net_id);
idr_destroy(&nn->cb_ident_idr);
}
/* nfs_client_lock held */
static void nfs_cb_idr_remove_locked(struct nfs_client *clp)
{
struct nfs_net *nn = net_generic(clp->net, nfs_net_id);
if (clp->cl_cb_ident)
idr_remove(&cb_ident_idr, clp->cl_cb_ident);
idr_remove(&nn->cb_ident_idr, clp->cl_cb_ident);
}
static void pnfs_init_server(struct nfs_server *server)
@ -260,7 +269,7 @@ static void nfs4_shutdown_client(struct nfs_client *clp)
{
}
void nfs_cleanup_cb_ident_idr(void)
void nfs_cleanup_cb_ident_idr(struct net *net)
{
}
@ -292,10 +301,10 @@ static void nfs_free_client(struct nfs_client *clp)
if (clp->cl_machine_cred != NULL)
put_rpccred(clp->cl_machine_cred);
nfs4_deviceid_purge_client(clp);
put_net(clp->net);
kfree(clp->cl_hostname);
kfree(clp->server_scope);
kfree(clp->impl_id);
kfree(clp);
dprintk("<-- nfs_free_client()\n");
@ -306,15 +315,18 @@ static void nfs_free_client(struct nfs_client *clp)
*/
void nfs_put_client(struct nfs_client *clp)
{
struct nfs_net *nn;
if (!clp)
return;
dprintk("--> nfs_put_client({%d})\n", atomic_read(&clp->cl_count));
nn = net_generic(clp->net, nfs_net_id);
if (atomic_dec_and_lock(&clp->cl_count, &nfs_client_lock)) {
if (atomic_dec_and_lock(&clp->cl_count, &nn->nfs_client_lock)) {
list_del(&clp->cl_share_link);
nfs_cb_idr_remove_locked(clp);
spin_unlock(&nfs_client_lock);
spin_unlock(&nn->nfs_client_lock);
BUG_ON(!list_empty(&clp->cl_superblocks));
@ -392,6 +404,7 @@ static int nfs_sockaddr_cmp_ip4(const struct sockaddr *sa1,
(sin1->sin_port == sin2->sin_port);
}
#if defined(CONFIG_NFS_V4_1)
/*
* Test if two socket addresses represent the same actual socket,
* by comparing (only) relevant fields, excluding the port number.
@ -410,6 +423,7 @@ static int nfs_sockaddr_match_ipaddr(const struct sockaddr *sa1,
}
return 0;
}
#endif /* CONFIG_NFS_V4_1 */
/*
* Test if two socket addresses represent the same actual socket,
@ -430,10 +444,10 @@ static int nfs_sockaddr_cmp(const struct sockaddr *sa1,
return 0;
}
#if defined(CONFIG_NFS_V4_1)
/* Common match routine for v4.0 and v4.1 callback services */
bool
nfs4_cb_match_client(const struct sockaddr *addr, struct nfs_client *clp,
u32 minorversion)
static bool nfs4_cb_match_client(const struct sockaddr *addr,
struct nfs_client *clp, u32 minorversion)
{
struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
@ -453,6 +467,7 @@ nfs4_cb_match_client(const struct sockaddr *addr, struct nfs_client *clp,
return true;
}
#endif /* CONFIG_NFS_V4_1 */
/*
* Find an nfs_client on the list that matches the initialisation data
@ -462,8 +477,9 @@ static struct nfs_client *nfs_match_client(const struct nfs_client_initdata *dat
{
struct nfs_client *clp;
const struct sockaddr *sap = data->addr;
struct nfs_net *nn = net_generic(data->net, nfs_net_id);
list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
const struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
/* Don't match clients that failed to initialise properly */
if (clp->cl_cons_state < 0)
@ -501,13 +517,14 @@ nfs_get_client(const struct nfs_client_initdata *cl_init,
{
struct nfs_client *clp, *new = NULL;
int error;
struct nfs_net *nn = net_generic(cl_init->net, nfs_net_id);
dprintk("--> nfs_get_client(%s,v%u)\n",
cl_init->hostname ?: "", cl_init->rpc_ops->version);
/* see if the client already exists */
do {
spin_lock(&nfs_client_lock);
spin_lock(&nn->nfs_client_lock);
clp = nfs_match_client(cl_init);
if (clp)
@ -515,7 +532,7 @@ nfs_get_client(const struct nfs_client_initdata *cl_init,
if (new)
goto install_client;
spin_unlock(&nfs_client_lock);
spin_unlock(&nn->nfs_client_lock);
new = nfs_alloc_client(cl_init);
} while (!IS_ERR(new));
@ -526,8 +543,8 @@ nfs_get_client(const struct nfs_client_initdata *cl_init,
/* install a new client and return with it unready */
install_client:
clp = new;
list_add(&clp->cl_share_link, &nfs_client_list);
spin_unlock(&nfs_client_lock);
list_add(&clp->cl_share_link, &nn->nfs_client_list);
spin_unlock(&nn->nfs_client_lock);
error = cl_init->rpc_ops->init_client(clp, timeparms, ip_addr,
authflavour, noresvport);
@ -542,7 +559,7 @@ install_client:
* - make sure it's ready before returning
*/
found_client:
spin_unlock(&nfs_client_lock);
spin_unlock(&nn->nfs_client_lock);
if (new)
nfs_free_client(new);
@ -642,7 +659,7 @@ static int nfs_create_rpc_client(struct nfs_client *clp,
{
struct rpc_clnt *clnt = NULL;
struct rpc_create_args args = {
.net = &init_net,
.net = clp->net,
.protocol = clp->cl_proto,
.address = (struct sockaddr *)&clp->cl_addr,
.addrsize = clp->cl_addrlen,
@ -696,6 +713,7 @@ static int nfs_start_lockd(struct nfs_server *server)
.nfs_version = clp->rpc_ops->version,
.noresvport = server->flags & NFS_MOUNT_NORESVPORT ?
1 : 0,
.net = clp->net,
};
if (nlm_init.nfs_version > 3)
@ -831,6 +849,7 @@ static int nfs_init_server(struct nfs_server *server,
.addrlen = data->nfs_server.addrlen,
.rpc_ops = &nfs_v2_clientops,
.proto = data->nfs_server.protocol,
.net = data->net,
};
struct rpc_timeout timeparms;
struct nfs_client *clp;
@ -1029,25 +1048,30 @@ static void nfs_server_copy_userdata(struct nfs_server *target, struct nfs_serve
static void nfs_server_insert_lists(struct nfs_server *server)
{
struct nfs_client *clp = server->nfs_client;
struct nfs_net *nn = net_generic(clp->net, nfs_net_id);
spin_lock(&nfs_client_lock);
spin_lock(&nn->nfs_client_lock);
list_add_tail_rcu(&server->client_link, &clp->cl_superblocks);
list_add_tail(&server->master_link, &nfs_volume_list);
list_add_tail(&server->master_link, &nn->nfs_volume_list);
clear_bit(NFS_CS_STOP_RENEW, &clp->cl_res_state);
spin_unlock(&nfs_client_lock);
spin_unlock(&nn->nfs_client_lock);
}
static void nfs_server_remove_lists(struct nfs_server *server)
{
struct nfs_client *clp = server->nfs_client;
struct nfs_net *nn;
spin_lock(&nfs_client_lock);
if (clp == NULL)
return;
nn = net_generic(clp->net, nfs_net_id);
spin_lock(&nn->nfs_client_lock);
list_del_rcu(&server->client_link);
if (clp && list_empty(&clp->cl_superblocks))
if (list_empty(&clp->cl_superblocks))
set_bit(NFS_CS_STOP_RENEW, &clp->cl_res_state);
list_del(&server->master_link);
spin_unlock(&nfs_client_lock);
spin_unlock(&nn->nfs_client_lock);
synchronize_rcu();
}
@ -1086,6 +1110,8 @@ static struct nfs_server *nfs_alloc_server(void)
return NULL;
}
ida_init(&server->openowner_id);
ida_init(&server->lockowner_id);
pnfs_init_server(server);
return server;
@ -1111,6 +1137,8 @@ void nfs_free_server(struct nfs_server *server)
nfs_put_client(server->nfs_client);
ida_destroy(&server->lockowner_id);
ida_destroy(&server->openowner_id);
nfs_free_iostats(server->io_stats);
bdi_destroy(&server->backing_dev_info);
kfree(server);
@ -1186,48 +1214,22 @@ error:
}
#ifdef CONFIG_NFS_V4
/*
* NFSv4.0 callback thread helper
*
* Find a client by IP address, protocol version, and minorversion
*
* Called from the pg_authenticate method. The callback identifier
* is not used as it has not been decoded.
*
* Returns NULL if no such client
*/
struct nfs_client *
nfs4_find_client_no_ident(const struct sockaddr *addr)
{
struct nfs_client *clp;
spin_lock(&nfs_client_lock);
list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
if (nfs4_cb_match_client(addr, clp, 0) == false)
continue;
atomic_inc(&clp->cl_count);
spin_unlock(&nfs_client_lock);
return clp;
}
spin_unlock(&nfs_client_lock);
return NULL;
}
/*
* NFSv4.0 callback thread helper
*
* Find a client by callback identifier
*/
struct nfs_client *
nfs4_find_client_ident(int cb_ident)
nfs4_find_client_ident(struct net *net, int cb_ident)
{
struct nfs_client *clp;
struct nfs_net *nn = net_generic(net, nfs_net_id);
spin_lock(&nfs_client_lock);
clp = idr_find(&cb_ident_idr, cb_ident);
spin_lock(&nn->nfs_client_lock);
clp = idr_find(&nn->cb_ident_idr, cb_ident);
if (clp)
atomic_inc(&clp->cl_count);
spin_unlock(&nfs_client_lock);
spin_unlock(&nn->nfs_client_lock);
return clp;
}
@ -1240,13 +1242,14 @@ nfs4_find_client_ident(int cb_ident)
* Returns NULL if no such client
*/
struct nfs_client *
nfs4_find_client_sessionid(const struct sockaddr *addr,
nfs4_find_client_sessionid(struct net *net, const struct sockaddr *addr,
struct nfs4_sessionid *sid)
{
struct nfs_client *clp;
struct nfs_net *nn = net_generic(net, nfs_net_id);
spin_lock(&nfs_client_lock);
list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
spin_lock(&nn->nfs_client_lock);
list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
if (nfs4_cb_match_client(addr, clp, 1) == false)
continue;
@ -1259,17 +1262,17 @@ nfs4_find_client_sessionid(const struct sockaddr *addr,
continue;
atomic_inc(&clp->cl_count);
spin_unlock(&nfs_client_lock);
spin_unlock(&nn->nfs_client_lock);
return clp;
}
spin_unlock(&nfs_client_lock);
spin_unlock(&nn->nfs_client_lock);
return NULL;
}
#else /* CONFIG_NFS_V4_1 */
struct nfs_client *
nfs4_find_client_sessionid(const struct sockaddr *addr,
nfs4_find_client_sessionid(struct net *net, const struct sockaddr *addr,
struct nfs4_sessionid *sid)
{
return NULL;
@ -1284,16 +1287,18 @@ static int nfs4_init_callback(struct nfs_client *clp)
int error;
if (clp->rpc_ops->version == 4) {
struct rpc_xprt *xprt;
xprt = rcu_dereference_raw(clp->cl_rpcclient->cl_xprt);
if (nfs4_has_session(clp)) {
error = xprt_setup_backchannel(
clp->cl_rpcclient->cl_xprt,
error = xprt_setup_backchannel(xprt,
NFS41_BC_MIN_CALLBACKS);
if (error < 0)
return error;
}
error = nfs_callback_up(clp->cl_mvops->minor_version,
clp->cl_rpcclient->cl_xprt);
error = nfs_callback_up(clp->cl_mvops->minor_version, xprt);
if (error < 0) {
dprintk("%s: failed to start callback. Error = %d\n",
__func__, error);
@ -1344,6 +1349,7 @@ int nfs4_init_client(struct nfs_client *clp,
rpc_authflavor_t authflavour,
int noresvport)
{
char buf[INET6_ADDRSTRLEN + 1];
int error;
if (clp->cl_cons_state == NFS_CS_READY) {
@ -1359,6 +1365,20 @@ int nfs4_init_client(struct nfs_client *clp,
1, noresvport);
if (error < 0)
goto error;
/* If no clientaddr= option was specified, find a usable cb address */
if (ip_addr == NULL) {
struct sockaddr_storage cb_addr;
struct sockaddr *sap = (struct sockaddr *)&cb_addr;
error = rpc_localaddr(clp->cl_rpcclient, sap, sizeof(cb_addr));
if (error < 0)
goto error;
error = rpc_ntop(sap, buf, sizeof(buf));
if (error < 0)
goto error;
ip_addr = (const char *)buf;
}
strlcpy(clp->cl_ipaddr, ip_addr, sizeof(clp->cl_ipaddr));
error = nfs_idmap_new(clp);
@ -1393,7 +1413,7 @@ static int nfs4_set_client(struct nfs_server *server,
const char *ip_addr,
rpc_authflavor_t authflavour,
int proto, const struct rpc_timeout *timeparms,
u32 minorversion)
u32 minorversion, struct net *net)
{
struct nfs_client_initdata cl_init = {
.hostname = hostname,
@ -1402,6 +1422,7 @@ static int nfs4_set_client(struct nfs_server *server,
.rpc_ops = &nfs_v4_clientops,
.proto = proto,
.minorversion = minorversion,
.net = net,
};
struct nfs_client *clp;
int error;
@ -1453,6 +1474,7 @@ struct nfs_client *nfs4_set_ds_client(struct nfs_client* mds_clp,
.rpc_ops = &nfs_v4_clientops,
.proto = ds_proto,
.minorversion = mds_clp->cl_minorversion,
.net = mds_clp->net,
};
struct rpc_timeout ds_timeout = {
.to_initval = 15 * HZ,
@ -1580,7 +1602,8 @@ static int nfs4_init_server(struct nfs_server *server,
data->auth_flavors[0],
data->nfs_server.protocol,
&timeparms,
data->minorversion);
data->minorversion,
data->net);
if (error < 0)
goto error;
@ -1675,9 +1698,10 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
data->addrlen,
parent_client->cl_ipaddr,
data->authflavor,
parent_server->client->cl_xprt->prot,
rpc_protocol(parent_server->client),
parent_server->client->cl_timeout,
parent_client->cl_mvops->minor_version);
parent_client->cl_mvops->minor_version,
parent_client->net);
if (error < 0)
goto error;
@ -1770,6 +1794,18 @@ out_free_server:
return ERR_PTR(error);
}
void nfs_clients_init(struct net *net)
{
struct nfs_net *nn = net_generic(net, nfs_net_id);
INIT_LIST_HEAD(&nn->nfs_client_list);
INIT_LIST_HEAD(&nn->nfs_volume_list);
#ifdef CONFIG_NFS_V4
idr_init(&nn->cb_ident_idr);
#endif
spin_lock_init(&nn->nfs_client_lock);
}
#ifdef CONFIG_PROC_FS
static struct proc_dir_entry *proc_fs_nfs;
@ -1823,13 +1859,15 @@ static int nfs_server_list_open(struct inode *inode, struct file *file)
{
struct seq_file *m;
int ret;
struct pid_namespace *pid_ns = file->f_dentry->d_sb->s_fs_info;
struct net *net = pid_ns->child_reaper->nsproxy->net_ns;
ret = seq_open(file, &nfs_server_list_ops);
if (ret < 0)
return ret;
m = file->private_data;
m->private = PDE(inode)->data;
m->private = net;
return 0;
}
@ -1839,9 +1877,11 @@ static int nfs_server_list_open(struct inode *inode, struct file *file)
*/
static void *nfs_server_list_start(struct seq_file *m, loff_t *_pos)
{
struct nfs_net *nn = net_generic(m->private, nfs_net_id);
/* lock the list against modification */
spin_lock(&nfs_client_lock);
return seq_list_start_head(&nfs_client_list, *_pos);
spin_lock(&nn->nfs_client_lock);
return seq_list_start_head(&nn->nfs_client_list, *_pos);
}
/*
@ -1849,7 +1889,9 @@ static void *nfs_server_list_start(struct seq_file *m, loff_t *_pos)
*/
static void *nfs_server_list_next(struct seq_file *p, void *v, loff_t *pos)
{
return seq_list_next(v, &nfs_client_list, pos);
struct nfs_net *nn = net_generic(p->private, nfs_net_id);
return seq_list_next(v, &nn->nfs_client_list, pos);
}
/*
@ -1857,7 +1899,9 @@ static void *nfs_server_list_next(struct seq_file *p, void *v, loff_t *pos)
*/
static void nfs_server_list_stop(struct seq_file *p, void *v)
{
spin_unlock(&nfs_client_lock);
struct nfs_net *nn = net_generic(p->private, nfs_net_id);
spin_unlock(&nn->nfs_client_lock);
}
/*
@ -1866,9 +1910,10 @@ static void nfs_server_list_stop(struct seq_file *p, void *v)
static int nfs_server_list_show(struct seq_file *m, void *v)
{
struct nfs_client *clp;
struct nfs_net *nn = net_generic(m->private, nfs_net_id);
/* display header on line 1 */
if (v == &nfs_client_list) {
if (v == &nn->nfs_client_list) {
seq_puts(m, "NV SERVER PORT USE HOSTNAME\n");
return 0;
}
@ -1880,12 +1925,14 @@ static int nfs_server_list_show(struct seq_file *m, void *v)
if (clp->cl_cons_state != NFS_CS_READY)
return 0;
rcu_read_lock();
seq_printf(m, "v%u %s %s %3d %s\n",
clp->rpc_ops->version,
rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_HEX_ADDR),
rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_HEX_PORT),
atomic_read(&clp->cl_count),
clp->cl_hostname);
rcu_read_unlock();
return 0;
}
@ -1897,13 +1944,15 @@ static int nfs_volume_list_open(struct inode *inode, struct file *file)
{
struct seq_file *m;
int ret;
struct pid_namespace *pid_ns = file->f_dentry->d_sb->s_fs_info;
struct net *net = pid_ns->child_reaper->nsproxy->net_ns;
ret = seq_open(file, &nfs_volume_list_ops);
if (ret < 0)
return ret;
m = file->private_data;
m->private = PDE(inode)->data;
m->private = net;
return 0;
}
@ -1913,9 +1962,11 @@ static int nfs_volume_list_open(struct inode *inode, struct file *file)
*/
static void *nfs_volume_list_start(struct seq_file *m, loff_t *_pos)
{
struct nfs_net *nn = net_generic(m->private, nfs_net_id);
/* lock the list against modification */
spin_lock(&nfs_client_lock);
return seq_list_start_head(&nfs_volume_list, *_pos);
spin_lock(&nn->nfs_client_lock);
return seq_list_start_head(&nn->nfs_volume_list, *_pos);
}
/*
@ -1923,7 +1974,9 @@ static void *nfs_volume_list_start(struct seq_file *m, loff_t *_pos)
*/
static void *nfs_volume_list_next(struct seq_file *p, void *v, loff_t *pos)
{
return seq_list_next(v, &nfs_volume_list, pos);
struct nfs_net *nn = net_generic(p->private, nfs_net_id);
return seq_list_next(v, &nn->nfs_volume_list, pos);
}
/*
@ -1931,7 +1984,9 @@ static void *nfs_volume_list_next(struct seq_file *p, void *v, loff_t *pos)
*/
static void nfs_volume_list_stop(struct seq_file *p, void *v)
{
spin_unlock(&nfs_client_lock);
struct nfs_net *nn = net_generic(p->private, nfs_net_id);
spin_unlock(&nn->nfs_client_lock);
}
/*
@ -1942,9 +1997,10 @@ static int nfs_volume_list_show(struct seq_file *m, void *v)
struct nfs_server *server;
struct nfs_client *clp;
char dev[8], fsid[17];
struct nfs_net *nn = net_generic(m->private, nfs_net_id);
/* display header on line 1 */
if (v == &nfs_volume_list) {
if (v == &nn->nfs_volume_list) {
seq_puts(m, "NV SERVER PORT DEV FSID FSC\n");
return 0;
}
@ -1959,6 +2015,7 @@ static int nfs_volume_list_show(struct seq_file *m, void *v)
(unsigned long long) server->fsid.major,
(unsigned long long) server->fsid.minor);
rcu_read_lock();
seq_printf(m, "v%u %s %s %-7s %-17s %s\n",
clp->rpc_ops->version,
rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_HEX_ADDR),
@ -1966,6 +2023,7 @@ static int nfs_volume_list_show(struct seq_file *m, void *v)
dev,
fsid,
nfs_server_fscache_state(server));
rcu_read_unlock();
return 0;
}
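
nfs_clients_init() sets up the per-namespace client list, volume list, cb_ident idr and lock, but the code that invokes it for every namespace lives outside this excerpt. A plausible sketch of that wiring, with names assumed from the rest of the series and shown only for orientation:

	/* Sketch only: how the per-net NFS state is presumably registered (fs/nfs/inode.c). */
	static __net_init int nfs_net_init(struct net *net)
	{
		nfs_clients_init(net);
		return nfs_dns_resolver_cache_init(net);
	}

	static __net_exit void nfs_net_exit(struct net *net)
	{
		nfs_dns_resolver_cache_destroy(net);
		nfs_cleanup_cb_ident_idr(net);
	}

	static struct pernet_operations nfs_net_ops = {
		.init = nfs_net_init,
		.exit = nfs_net_exit,
		.id   = &nfs_net_id,
		.size = sizeof(struct nfs_net),	/* size of the slot net_generic() hands back */
	};

	/* register_pernet_subsys(&nfs_net_ops) would then run from the module's init path. */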

View File

@ -105,7 +105,7 @@ again:
continue;
if (!test_bit(NFS_DELEGATED_STATE, &state->flags))
continue;
if (memcmp(state->stateid.data, stateid->data, sizeof(state->stateid.data)) != 0)
if (!nfs4_stateid_match(&state->stateid, stateid))
continue;
get_nfs_open_context(ctx);
spin_unlock(&inode->i_lock);
@ -139,8 +139,7 @@ void nfs_inode_reclaim_delegation(struct inode *inode, struct rpc_cred *cred,
if (delegation != NULL) {
spin_lock(&delegation->lock);
if (delegation->inode != NULL) {
memcpy(delegation->stateid.data, res->delegation.data,
sizeof(delegation->stateid.data));
nfs4_stateid_copy(&delegation->stateid, &res->delegation);
delegation->type = res->delegation_type;
delegation->maxsize = res->maxsize;
oldcred = delegation->cred;
@ -236,8 +235,7 @@ int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct
delegation = kmalloc(sizeof(*delegation), GFP_NOFS);
if (delegation == NULL)
return -ENOMEM;
memcpy(delegation->stateid.data, res->delegation.data,
sizeof(delegation->stateid.data));
nfs4_stateid_copy(&delegation->stateid, &res->delegation);
delegation->type = res->delegation_type;
delegation->maxsize = res->maxsize;
delegation->change_attr = inode->i_version;
@ -250,19 +248,22 @@ int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct
old_delegation = rcu_dereference_protected(nfsi->delegation,
lockdep_is_held(&clp->cl_lock));
if (old_delegation != NULL) {
if (memcmp(&delegation->stateid, &old_delegation->stateid,
sizeof(old_delegation->stateid)) == 0 &&
if (nfs4_stateid_match(&delegation->stateid,
&old_delegation->stateid) &&
delegation->type == old_delegation->type) {
goto out;
}
/*
* Deal with broken servers that hand out two
* delegations for the same file.
* Allow for upgrades to a WRITE delegation, but
* nothing else.
*/
dfprintk(FILE, "%s: server %s handed out "
"a duplicate delegation!\n",
__func__, clp->cl_hostname);
if (delegation->type <= old_delegation->type) {
if (delegation->type == old_delegation->type ||
!(delegation->type & FMODE_WRITE)) {
freeme = delegation;
delegation = NULL;
goto out;
@ -455,17 +456,24 @@ static void nfs_client_mark_return_all_delegation_types(struct nfs_client *clp,
rcu_read_unlock();
}
static void nfs_client_mark_return_all_delegations(struct nfs_client *clp)
{
nfs_client_mark_return_all_delegation_types(clp, FMODE_READ|FMODE_WRITE);
}
static void nfs_delegation_run_state_manager(struct nfs_client *clp)
{
if (test_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state))
nfs4_schedule_state_manager(clp);
}
void nfs_remove_bad_delegation(struct inode *inode)
{
struct nfs_delegation *delegation;
delegation = nfs_detach_delegation(NFS_I(inode), NFS_SERVER(inode));
if (delegation) {
nfs_inode_find_state_and_recover(inode, &delegation->stateid);
nfs_free_delegation(delegation);
}
}
EXPORT_SYMBOL_GPL(nfs_remove_bad_delegation);
/**
* nfs_expire_all_delegation_types
* @clp: client to process
@ -488,18 +496,6 @@ void nfs_expire_all_delegations(struct nfs_client *clp)
nfs_expire_all_delegation_types(clp, FMODE_READ|FMODE_WRITE);
}
/**
* nfs_handle_cb_pathdown - return all delegations after NFS4ERR_CB_PATH_DOWN
* @clp: client to process
*
*/
void nfs_handle_cb_pathdown(struct nfs_client *clp)
{
if (clp == NULL)
return;
nfs_client_mark_return_all_delegations(clp);
}
static void nfs_mark_return_unreferenced_delegations(struct nfs_server *server)
{
struct nfs_delegation *delegation;
@ -531,7 +527,7 @@ void nfs_expire_unreferenced_delegations(struct nfs_client *clp)
/**
* nfs_async_inode_return_delegation - asynchronously return a delegation
* @inode: inode to process
* @stateid: state ID information from CB_RECALL arguments
* @stateid: state ID information
*
* Returns zero on success, or a negative errno value.
*/
@ -545,7 +541,7 @@ int nfs_async_inode_return_delegation(struct inode *inode,
rcu_read_lock();
delegation = rcu_dereference(NFS_I(inode)->delegation);
if (!clp->cl_mvops->validate_stateid(delegation, stateid)) {
if (!clp->cl_mvops->match_stateid(&delegation->stateid, stateid)) {
rcu_read_unlock();
return -ENOENT;
}
@ -684,21 +680,25 @@ int nfs_delegations_present(struct nfs_client *clp)
* nfs4_copy_delegation_stateid - Copy inode's state ID information
* @dst: stateid data structure to fill in
* @inode: inode to check
* @flags: delegation type requirement
*
* Returns one and fills in "dst->data" * if inode had a delegation,
* otherwise zero is returned.
* Returns "true" and fills in "dst->data" * if inode had a delegation,
* otherwise "false" is returned.
*/
int nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode)
bool nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode,
fmode_t flags)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_delegation *delegation;
int ret = 0;
bool ret;
flags &= FMODE_READ|FMODE_WRITE;
rcu_read_lock();
delegation = rcu_dereference(nfsi->delegation);
if (delegation != NULL) {
memcpy(dst->data, delegation->stateid.data, sizeof(dst->data));
ret = 1;
ret = (delegation != NULL && (delegation->type & flags) == flags);
if (ret) {
nfs4_stateid_copy(dst, &delegation->stateid);
nfs_mark_delegation_referenced(delegation);
}
rcu_read_unlock();
return ret;
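
The open-coded memcpy()/memcmp() calls on stateid data above are replaced by nfs4_stateid_copy() and nfs4_stateid_match(), helpers introduced elsewhere in this series and not shown in this excerpt. Their minimal form is presumably:

	/* Assumed shape of the new helpers; the real ones live outside this excerpt. */
	static inline void nfs4_stateid_copy(nfs4_stateid *dst, const nfs4_stateid *src)
	{
		memcpy(dst, src, sizeof(*dst));
	}

	static inline bool nfs4_stateid_match(const nfs4_stateid *dst, const nfs4_stateid *src)
	{
		return memcmp(dst, src, sizeof(*dst)) == 0;
	}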

View File

@ -42,9 +42,9 @@ void nfs_super_return_all_delegations(struct super_block *sb);
void nfs_expire_all_delegations(struct nfs_client *clp);
void nfs_expire_all_delegation_types(struct nfs_client *clp, fmode_t flags);
void nfs_expire_unreferenced_delegations(struct nfs_client *clp);
void nfs_handle_cb_pathdown(struct nfs_client *clp);
int nfs_client_return_marked_delegations(struct nfs_client *clp);
int nfs_delegations_present(struct nfs_client *clp);
void nfs_remove_bad_delegation(struct inode *inode);
void nfs_delegation_mark_reclaim(struct nfs_client *clp);
void nfs_delegation_reap_unclaimed(struct nfs_client *clp);
@ -53,7 +53,7 @@ void nfs_delegation_reap_unclaimed(struct nfs_client *clp);
int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid, int issync);
int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state *state, const nfs4_stateid *stateid);
int nfs4_lock_delegation_recall(struct nfs4_state *state, struct file_lock *fl);
int nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode);
bool nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode, fmode_t flags);
void nfs_mark_delegation_referenced(struct nfs_delegation *delegation);
int nfs_have_delegation(struct inode *inode, fmode_t flags);

View File

@ -207,7 +207,7 @@ struct nfs_cache_array_entry {
};
struct nfs_cache_array {
unsigned int size;
int size;
int eof_index;
u64 last_cookie;
struct nfs_cache_array_entry array[0];
@ -1429,6 +1429,7 @@ static struct dentry *nfs_atomic_lookup(struct inode *dir, struct dentry *dentry
}
open_flags = nd->intent.open.flags;
attr.ia_valid = 0;
ctx = create_nfs_open_context(dentry, open_flags);
res = ERR_CAST(ctx);
@ -1437,11 +1438,14 @@ static struct dentry *nfs_atomic_lookup(struct inode *dir, struct dentry *dentry
if (nd->flags & LOOKUP_CREATE) {
attr.ia_mode = nd->intent.open.create_mode;
attr.ia_valid = ATTR_MODE;
attr.ia_valid |= ATTR_MODE;
attr.ia_mode &= ~current_umask();
} else {
} else
open_flags &= ~(O_EXCL | O_CREAT);
attr.ia_valid = 0;
if (open_flags & O_TRUNC) {
attr.ia_valid |= ATTR_SIZE;
attr.ia_size = 0;
}
/* Open the file on the server */
@ -1495,6 +1499,7 @@ static int nfs_open_revalidate(struct dentry *dentry, struct nameidata *nd)
struct inode *inode;
struct inode *dir;
struct nfs_open_context *ctx;
struct iattr attr;
int openflags, ret = 0;
if (nd->flags & LOOKUP_RCU)
@ -1523,19 +1528,27 @@ static int nfs_open_revalidate(struct dentry *dentry, struct nameidata *nd)
/* We cannot do exclusive creation on a positive dentry */
if ((openflags & (O_CREAT|O_EXCL)) == (O_CREAT|O_EXCL))
goto no_open_dput;
/* We can't create new files, or truncate existing ones here */
openflags &= ~(O_CREAT|O_EXCL|O_TRUNC);
/* We can't create new files here */
openflags &= ~(O_CREAT|O_EXCL);
ctx = create_nfs_open_context(dentry, openflags);
ret = PTR_ERR(ctx);
if (IS_ERR(ctx))
goto out;
attr.ia_valid = 0;
if (openflags & O_TRUNC) {
attr.ia_valid |= ATTR_SIZE;
attr.ia_size = 0;
nfs_wb_all(inode);
}
/*
* Note: we're not holding inode->i_mutex and so may be racing with
* operations that change the directory. We therefore save the
* change attribute *before* we do the RPC call.
*/
inode = NFS_PROTO(dir)->open_context(dir, ctx, openflags, NULL);
inode = NFS_PROTO(dir)->open_context(dir, ctx, openflags, &attr);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
switch (ret) {

View File

@ -265,9 +265,7 @@ static void nfs_direct_read_release(void *calldata)
}
static const struct rpc_call_ops nfs_read_direct_ops = {
#if defined(CONFIG_NFS_V4_1)
.rpc_call_prepare = nfs_read_prepare,
#endif /* CONFIG_NFS_V4_1 */
.rpc_call_done = nfs_direct_read_result,
.rpc_release = nfs_direct_read_release,
};
@ -554,9 +552,7 @@ static void nfs_direct_commit_release(void *calldata)
}
static const struct rpc_call_ops nfs_commit_direct_ops = {
#if defined(CONFIG_NFS_V4_1)
.rpc_call_prepare = nfs_write_prepare,
#endif /* CONFIG_NFS_V4_1 */
.rpc_call_done = nfs_direct_commit_result,
.rpc_release = nfs_direct_commit_release,
};
@ -696,9 +692,7 @@ out_unlock:
}
static const struct rpc_call_ops nfs_write_direct_ops = {
#if defined(CONFIG_NFS_V4_1)
.rpc_call_prepare = nfs_write_prepare,
#endif /* CONFIG_NFS_V4_1 */
.rpc_call_done = nfs_direct_write_result,
.rpc_release = nfs_direct_write_release,
};

View File

@ -10,8 +10,9 @@
#include <linux/sunrpc/clnt.h>
#include <linux/dns_resolver.h>
#include "dns_resolve.h"
ssize_t nfs_dns_resolve_name(char *name, size_t namelen,
ssize_t nfs_dns_resolve_name(struct net *net, char *name, size_t namelen,
struct sockaddr *sa, size_t salen)
{
ssize_t ret;
@ -20,7 +21,7 @@ ssize_t nfs_dns_resolve_name(char *name, size_t namelen,
ip_len = dns_query(NULL, name, namelen, NULL, &ip_addr, NULL);
if (ip_len > 0)
ret = rpc_pton(ip_addr, ip_len, sa, salen);
ret = rpc_pton(net, ip_addr, ip_len, sa, salen);
else
ret = -ESRCH;
kfree(ip_addr);
@ -40,15 +41,15 @@ ssize_t nfs_dns_resolve_name(char *name, size_t namelen,
#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/cache.h>
#include <linux/sunrpc/svcauth.h>
#include <linux/sunrpc/rpc_pipe_fs.h>
#include "dns_resolve.h"
#include "cache_lib.h"
#include "netns.h"
#define NFS_DNS_HASHBITS 4
#define NFS_DNS_HASHTBL_SIZE (1 << NFS_DNS_HASHBITS)
static struct cache_head *nfs_dns_table[NFS_DNS_HASHTBL_SIZE];
struct nfs_dns_ent {
struct cache_head h;
@ -224,7 +225,7 @@ static int nfs_dns_parse(struct cache_detail *cd, char *buf, int buflen)
len = qword_get(&buf, buf1, sizeof(buf1));
if (len <= 0)
goto out;
key.addrlen = rpc_pton(buf1, len,
key.addrlen = rpc_pton(cd->net, buf1, len,
(struct sockaddr *)&key.addr,
sizeof(key.addr));
@ -259,21 +260,6 @@ out:
return ret;
}
static struct cache_detail nfs_dns_resolve = {
.owner = THIS_MODULE,
.hash_size = NFS_DNS_HASHTBL_SIZE,
.hash_table = nfs_dns_table,
.name = "dns_resolve",
.cache_put = nfs_dns_ent_put,
.cache_upcall = nfs_dns_upcall,
.cache_parse = nfs_dns_parse,
.cache_show = nfs_dns_show,
.match = nfs_dns_match,
.init = nfs_dns_ent_init,
.update = nfs_dns_ent_update,
.alloc = nfs_dns_ent_alloc,
};
static int do_cache_lookup(struct cache_detail *cd,
struct nfs_dns_ent *key,
struct nfs_dns_ent **item,
@ -336,8 +322,8 @@ out:
return ret;
}
ssize_t nfs_dns_resolve_name(char *name, size_t namelen,
struct sockaddr *sa, size_t salen)
ssize_t nfs_dns_resolve_name(struct net *net, char *name,
size_t namelen, struct sockaddr *sa, size_t salen)
{
struct nfs_dns_ent key = {
.hostname = name,
@ -345,28 +331,118 @@ ssize_t nfs_dns_resolve_name(char *name, size_t namelen,
};
struct nfs_dns_ent *item = NULL;
ssize_t ret;
struct nfs_net *nn = net_generic(net, nfs_net_id);
ret = do_cache_lookup_wait(&nfs_dns_resolve, &key, &item);
ret = do_cache_lookup_wait(nn->nfs_dns_resolve, &key, &item);
if (ret == 0) {
if (salen >= item->addrlen) {
memcpy(sa, &item->addr, item->addrlen);
ret = item->addrlen;
} else
ret = -EOVERFLOW;
cache_put(&item->h, &nfs_dns_resolve);
cache_put(&item->h, nn->nfs_dns_resolve);
} else if (ret == -ENOENT)
ret = -ESRCH;
return ret;
}
int nfs_dns_resolver_cache_init(struct net *net)
{
int err = -ENOMEM;
struct nfs_net *nn = net_generic(net, nfs_net_id);
struct cache_detail *cd;
struct cache_head **tbl;
cd = kzalloc(sizeof(struct cache_detail), GFP_KERNEL);
if (cd == NULL)
goto err_cd;
tbl = kzalloc(NFS_DNS_HASHTBL_SIZE * sizeof(struct cache_head *),
GFP_KERNEL);
if (tbl == NULL)
goto err_tbl;
cd->owner = THIS_MODULE,
cd->hash_size = NFS_DNS_HASHTBL_SIZE,
cd->hash_table = tbl,
cd->name = "dns_resolve",
cd->cache_put = nfs_dns_ent_put,
cd->cache_upcall = nfs_dns_upcall,
cd->cache_parse = nfs_dns_parse,
cd->cache_show = nfs_dns_show,
cd->match = nfs_dns_match,
cd->init = nfs_dns_ent_init,
cd->update = nfs_dns_ent_update,
cd->alloc = nfs_dns_ent_alloc,
nfs_cache_init(cd);
err = nfs_cache_register_net(net, cd);
if (err)
goto err_reg;
nn->nfs_dns_resolve = cd;
return 0;
err_reg:
nfs_cache_destroy(cd);
kfree(cd->hash_table);
err_tbl:
kfree(cd);
err_cd:
return err;
}
void nfs_dns_resolver_cache_destroy(struct net *net)
{
struct nfs_net *nn = net_generic(net, nfs_net_id);
struct cache_detail *cd = nn->nfs_dns_resolve;
nfs_cache_unregister_net(net, cd);
nfs_cache_destroy(cd);
kfree(cd->hash_table);
kfree(cd);
}
static int rpc_pipefs_event(struct notifier_block *nb, unsigned long event,
void *ptr)
{
struct super_block *sb = ptr;
struct net *net = sb->s_fs_info;
struct nfs_net *nn = net_generic(net, nfs_net_id);
struct cache_detail *cd = nn->nfs_dns_resolve;
int ret = 0;
if (cd == NULL)
return 0;
if (!try_module_get(THIS_MODULE))
return 0;
switch (event) {
case RPC_PIPEFS_MOUNT:
ret = nfs_cache_register_sb(sb, cd);
break;
case RPC_PIPEFS_UMOUNT:
nfs_cache_unregister_sb(sb, cd);
break;
default:
ret = -ENOTSUPP;
break;
}
module_put(THIS_MODULE);
return ret;
}
static struct notifier_block nfs_dns_resolver_block = {
.notifier_call = rpc_pipefs_event,
};
int nfs_dns_resolver_init(void)
{
return nfs_cache_register(&nfs_dns_resolve);
return rpc_pipefs_notifier_register(&nfs_dns_resolver_block);
}
void nfs_dns_resolver_destroy(void)
{
nfs_cache_unregister(&nfs_dns_resolve);
rpc_pipefs_notifier_unregister(&nfs_dns_resolver_block);
}
#endif

View File

@ -15,12 +15,22 @@ static inline int nfs_dns_resolver_init(void)
static inline void nfs_dns_resolver_destroy(void)
{}
static inline int nfs_dns_resolver_cache_init(struct net *net)
{
return 0;
}
static inline void nfs_dns_resolver_cache_destroy(struct net *net)
{}
#else
extern int nfs_dns_resolver_init(void);
extern void nfs_dns_resolver_destroy(void);
extern int nfs_dns_resolver_cache_init(struct net *net);
extern void nfs_dns_resolver_cache_destroy(struct net *net);
#endif
extern ssize_t nfs_dns_resolve_name(char *name, size_t namelen,
struct sockaddr *sa, size_t salen);
extern ssize_t nfs_dns_resolve_name(struct net *net, char *name,
size_t namelen, struct sockaddr *sa, size_t salen);
#endif

View File

@ -530,6 +530,8 @@ static int nfs_vm_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
if (mapping != dentry->d_inode->i_mapping)
goto out_unlock;
wait_on_page_writeback(page);
pagelen = nfs_page_length(page);
if (pagelen == 0)
goto out_unlock;

View File

@ -327,7 +327,7 @@ void nfs_fscache_reset_inode_cookie(struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_server *nfss = NFS_SERVER(inode);
struct fscache_cookie *old = nfsi->fscache;
NFS_IFDEBUG(struct fscache_cookie *old = nfsi->fscache);
nfs_fscache_inode_lock(inode);
if (nfsi->fscache) {

View File

@ -34,11 +34,29 @@
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <linux/types.h>
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/parser.h>
#include <linux/fs.h>
#include <linux/nfs_idmap.h>
#include <net/net_namespace.h>
#include <linux/sunrpc/rpc_pipe_fs.h>
#include <linux/nfs_fs.h>
#include <linux/nfs_fs_sb.h>
#include <linux/key.h>
#include <linux/keyctl.h>
#include <linux/key-type.h>
#include <keys/user-type.h>
#include <linux/module.h>
#include "internal.h"
#include "netns.h"
#define NFS_UINT_MAXLEN 11
/* Default cache timeout is 10 minutes */
unsigned int nfs_idmap_cache_timeout = 600;
static const struct cred *id_resolver_cache;
static struct key_type key_type_id_resolver_legacy;
/**
* nfs_fattr_init_names - initialise the nfs_fattr owner_name/group_name fields
@ -142,24 +160,7 @@ static int nfs_map_numeric_to_string(__u32 id, char *buf, size_t buflen)
return snprintf(buf, buflen, "%u", id);
}
#ifdef CONFIG_NFS_USE_NEW_IDMAPPER
#include <linux/cred.h>
#include <linux/sunrpc/sched.h>
#include <linux/nfs4.h>
#include <linux/nfs_fs_sb.h>
#include <linux/keyctl.h>
#include <linux/key-type.h>
#include <linux/rcupdate.h>
#include <linux/err.h>
#include <keys/user-type.h>
#define NFS_UINT_MAXLEN 11
const struct cred *id_resolver_cache;
struct key_type key_type_id_resolver = {
static struct key_type key_type_id_resolver = {
.name = "id_resolver",
.instantiate = user_instantiate,
.match = user_match,
@ -169,13 +170,14 @@ struct key_type key_type_id_resolver = {
.read = user_read,
};
int nfs_idmap_init(void)
static int nfs_idmap_init_keyring(void)
{
struct cred *cred;
struct key *keyring;
int ret = 0;
printk(KERN_NOTICE "Registering the %s key type\n", key_type_id_resolver.name);
printk(KERN_NOTICE "NFS: Registering the %s key type\n",
key_type_id_resolver.name);
cred = prepare_kernel_cred(NULL);
if (!cred)
@ -210,7 +212,7 @@ failed_put_cred:
return ret;
}
void nfs_idmap_quit(void)
static void nfs_idmap_quit_keyring(void)
{
key_revoke(id_resolver_cache->thread_keyring);
unregister_key_type(&key_type_id_resolver);
@ -245,8 +247,10 @@ static ssize_t nfs_idmap_get_desc(const char *name, size_t namelen,
return desclen;
}
static ssize_t nfs_idmap_request_key(const char *name, size_t namelen,
const char *type, void *data, size_t data_size)
static ssize_t nfs_idmap_request_key(struct key_type *key_type,
const char *name, size_t namelen,
const char *type, void *data,
size_t data_size, struct idmap *idmap)
{
const struct cred *saved_cred;
struct key *rkey;
@ -259,8 +263,12 @@ static ssize_t nfs_idmap_request_key(const char *name, size_t namelen,
goto out;
saved_cred = override_creds(id_resolver_cache);
rkey = request_key(&key_type_id_resolver, desc, "");
if (idmap)
rkey = request_key_with_auxdata(key_type, desc, "", 0, idmap);
else
rkey = request_key(&key_type_id_resolver, desc, "");
revert_creds(saved_cred);
kfree(desc);
if (IS_ERR(rkey)) {
ret = PTR_ERR(rkey);
@ -293,31 +301,46 @@ out:
return ret;
}
static ssize_t nfs_idmap_get_key(const char *name, size_t namelen,
const char *type, void *data,
size_t data_size, struct idmap *idmap)
{
ssize_t ret = nfs_idmap_request_key(&key_type_id_resolver,
name, namelen, type, data,
data_size, NULL);
if (ret < 0) {
ret = nfs_idmap_request_key(&key_type_id_resolver_legacy,
name, namelen, type, data,
data_size, idmap);
}
return ret;
}
/* ID -> Name */
static ssize_t nfs_idmap_lookup_name(__u32 id, const char *type, char *buf, size_t buflen)
static ssize_t nfs_idmap_lookup_name(__u32 id, const char *type, char *buf,
size_t buflen, struct idmap *idmap)
{
char id_str[NFS_UINT_MAXLEN];
int id_len;
ssize_t ret;
id_len = snprintf(id_str, sizeof(id_str), "%u", id);
ret = nfs_idmap_request_key(id_str, id_len, type, buf, buflen);
ret = nfs_idmap_get_key(id_str, id_len, type, buf, buflen, idmap);
if (ret < 0)
return -EINVAL;
return ret;
}
/* Name -> ID */
static int nfs_idmap_lookup_id(const char *name, size_t namelen,
const char *type, __u32 *id)
static int nfs_idmap_lookup_id(const char *name, size_t namelen, const char *type,
__u32 *id, struct idmap *idmap)
{
char id_str[NFS_UINT_MAXLEN];
long id_long;
ssize_t data_size;
int ret = 0;
data_size = nfs_idmap_request_key(name, namelen, type, id_str, NFS_UINT_MAXLEN);
data_size = nfs_idmap_get_key(name, namelen, type, id_str, NFS_UINT_MAXLEN, idmap);
if (data_size <= 0) {
ret = -EINVAL;
} else {
@ -327,114 +350,103 @@ static int nfs_idmap_lookup_id(const char *name, size_t namelen,
return ret;
}
int nfs_map_name_to_uid(const struct nfs_server *server, const char *name, size_t namelen, __u32 *uid)
{
if (nfs_map_string_to_numeric(name, namelen, uid))
return 0;
return nfs_idmap_lookup_id(name, namelen, "uid", uid);
}
int nfs_map_group_to_gid(const struct nfs_server *server, const char *name, size_t namelen, __u32 *gid)
{
if (nfs_map_string_to_numeric(name, namelen, gid))
return 0;
return nfs_idmap_lookup_id(name, namelen, "gid", gid);
}
int nfs_map_uid_to_name(const struct nfs_server *server, __u32 uid, char *buf, size_t buflen)
{
int ret = -EINVAL;
if (!(server->caps & NFS_CAP_UIDGID_NOMAP))
ret = nfs_idmap_lookup_name(uid, "user", buf, buflen);
if (ret < 0)
ret = nfs_map_numeric_to_string(uid, buf, buflen);
return ret;
}
int nfs_map_gid_to_group(const struct nfs_server *server, __u32 gid, char *buf, size_t buflen)
{
int ret = -EINVAL;
if (!(server->caps & NFS_CAP_UIDGID_NOMAP))
ret = nfs_idmap_lookup_name(gid, "group", buf, buflen);
if (ret < 0)
ret = nfs_map_numeric_to_string(gid, buf, buflen);
return ret;
}
#else /* CONFIG_NFS_USE_NEW_IDMAPPER not defined */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/init.h>
#include <linux/socket.h>
#include <linux/in.h>
#include <linux/sched.h>
#include <linux/sunrpc/clnt.h>
#include <linux/workqueue.h>
#include <linux/sunrpc/rpc_pipe_fs.h>
#include <linux/nfs_fs.h>
#include "nfs4_fs.h"
#define IDMAP_HASH_SZ 128
/* Default cache timeout is 10 minutes */
unsigned int nfs_idmap_cache_timeout = 600 * HZ;
static int param_set_idmap_timeout(const char *val, struct kernel_param *kp)
{
char *endp;
int num = simple_strtol(val, &endp, 0);
int jif = num * HZ;
if (endp == val || *endp || num < 0 || jif < num)
return -EINVAL;
*((int *)kp->arg) = jif;
return 0;
}
module_param_call(idmap_cache_timeout, param_set_idmap_timeout, param_get_int,
&nfs_idmap_cache_timeout, 0644);
struct idmap_hashent {
unsigned long ih_expires;
__u32 ih_id;
size_t ih_namelen;
char ih_name[IDMAP_NAMESZ];
};
struct idmap_hashtable {
__u8 h_type;
struct idmap_hashent h_entries[IDMAP_HASH_SZ];
};
/* idmap classic begins here */
module_param(nfs_idmap_cache_timeout, int, 0644);
struct idmap {
struct dentry *idmap_dentry;
wait_queue_head_t idmap_wq;
struct idmap_msg idmap_im;
struct mutex idmap_lock; /* Serializes upcalls */
struct mutex idmap_im_lock; /* Protects the hashtable */
struct idmap_hashtable idmap_user_hash;
struct idmap_hashtable idmap_group_hash;
struct rpc_pipe *idmap_pipe;
struct key_construction *idmap_key_cons;
};
enum {
Opt_find_uid, Opt_find_gid, Opt_find_user, Opt_find_group, Opt_find_err
};
static const match_table_t nfs_idmap_tokens = {
{ Opt_find_uid, "uid:%s" },
{ Opt_find_gid, "gid:%s" },
{ Opt_find_user, "user:%s" },
{ Opt_find_group, "group:%s" },
{ Opt_find_err, NULL }
};
static int nfs_idmap_legacy_upcall(struct key_construction *, const char *, void *);
static ssize_t idmap_pipe_downcall(struct file *, const char __user *,
size_t);
static void idmap_pipe_destroy_msg(struct rpc_pipe_msg *);
static unsigned int fnvhash32(const void *, size_t);
static const struct rpc_pipe_ops idmap_upcall_ops = {
.upcall = rpc_pipe_generic_upcall,
.downcall = idmap_pipe_downcall,
.destroy_msg = idmap_pipe_destroy_msg,
};
static struct key_type key_type_id_resolver_legacy = {
.name = "id_resolver",
.instantiate = user_instantiate,
.match = user_match,
.revoke = user_revoke,
.destroy = user_destroy,
.describe = user_describe,
.read = user_read,
.request_key = nfs_idmap_legacy_upcall,
};
static void __nfs_idmap_unregister(struct rpc_pipe *pipe)
{
if (pipe->dentry)
rpc_unlink(pipe->dentry);
}
static int __nfs_idmap_register(struct dentry *dir,
struct idmap *idmap,
struct rpc_pipe *pipe)
{
struct dentry *dentry;
dentry = rpc_mkpipe_dentry(dir, "idmap", idmap, pipe);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
pipe->dentry = dentry;
return 0;
}
static void nfs_idmap_unregister(struct nfs_client *clp,
struct rpc_pipe *pipe)
{
struct net *net = clp->net;
struct super_block *pipefs_sb;
pipefs_sb = rpc_get_sb_net(net);
if (pipefs_sb) {
__nfs_idmap_unregister(pipe);
rpc_put_sb_net(net);
}
}
static int nfs_idmap_register(struct nfs_client *clp,
struct idmap *idmap,
struct rpc_pipe *pipe)
{
struct net *net = clp->net;
struct super_block *pipefs_sb;
int err = 0;
pipefs_sb = rpc_get_sb_net(net);
if (pipefs_sb) {
if (clp->cl_rpcclient->cl_dentry)
err = __nfs_idmap_register(clp->cl_rpcclient->cl_dentry,
idmap, pipe);
rpc_put_sb_net(net);
}
return err;
}
int
nfs_idmap_new(struct nfs_client *clp)
{
struct idmap *idmap;
struct rpc_pipe *pipe;
int error;
BUG_ON(clp->cl_idmap != NULL);
@ -443,19 +455,19 @@ nfs_idmap_new(struct nfs_client *clp)
if (idmap == NULL)
return -ENOMEM;
idmap->idmap_dentry = rpc_mkpipe(clp->cl_rpcclient->cl_path.dentry,
"idmap", idmap, &idmap_upcall_ops, 0);
if (IS_ERR(idmap->idmap_dentry)) {
error = PTR_ERR(idmap->idmap_dentry);
pipe = rpc_mkpipe_data(&idmap_upcall_ops, 0);
if (IS_ERR(pipe)) {
error = PTR_ERR(pipe);
kfree(idmap);
return error;
}
mutex_init(&idmap->idmap_lock);
mutex_init(&idmap->idmap_im_lock);
init_waitqueue_head(&idmap->idmap_wq);
idmap->idmap_user_hash.h_type = IDMAP_TYPE_USER;
idmap->idmap_group_hash.h_type = IDMAP_TYPE_GROUP;
error = nfs_idmap_register(clp, idmap, pipe);
if (error) {
rpc_destroy_pipe_data(pipe);
kfree(idmap);
return error;
}
idmap->idmap_pipe = pipe;
clp->cl_idmap = idmap;
return 0;
@ -468,211 +480,220 @@ nfs_idmap_delete(struct nfs_client *clp)
if (!idmap)
return;
rpc_unlink(idmap->idmap_dentry);
nfs_idmap_unregister(clp, idmap->idmap_pipe);
rpc_destroy_pipe_data(idmap->idmap_pipe);
clp->cl_idmap = NULL;
kfree(idmap);
}
/*
* Helper routines for manipulating the hashtable
*/
static inline struct idmap_hashent *
idmap_name_hash(struct idmap_hashtable* h, const char *name, size_t len)
static int __rpc_pipefs_event(struct nfs_client *clp, unsigned long event,
struct super_block *sb)
{
return &h->h_entries[fnvhash32(name, len) % IDMAP_HASH_SZ];
int err = 0;
switch (event) {
case RPC_PIPEFS_MOUNT:
BUG_ON(clp->cl_rpcclient->cl_dentry == NULL);
err = __nfs_idmap_register(clp->cl_rpcclient->cl_dentry,
clp->cl_idmap,
clp->cl_idmap->idmap_pipe);
break;
case RPC_PIPEFS_UMOUNT:
if (clp->cl_idmap->idmap_pipe) {
struct dentry *parent;
parent = clp->cl_idmap->idmap_pipe->dentry->d_parent;
__nfs_idmap_unregister(clp->cl_idmap->idmap_pipe);
/*
* Note: This is a dirty hack. SUNRPC hook has been
* called already but simple_rmdir() call for the
* directory returned with error because of idmap pipe
* inside. Thus now we have to remove this directory
* here.
*/
if (rpc_rmdir(parent))
printk(KERN_ERR "NFS: %s: failed to remove "
"clnt dir!\n", __func__);
}
break;
default:
printk(KERN_ERR "NFS: %s: unknown event: %ld\n", __func__,
event);
return -ENOTSUPP;
}
return err;
}
static struct idmap_hashent *
idmap_lookup_name(struct idmap_hashtable *h, const char *name, size_t len)
static struct nfs_client *nfs_get_client_for_event(struct net *net, int event)
{
struct idmap_hashent *he = idmap_name_hash(h, name, len);
struct nfs_net *nn = net_generic(net, nfs_net_id);
struct dentry *cl_dentry;
struct nfs_client *clp;
if (he->ih_namelen != len || memcmp(he->ih_name, name, len) != 0)
return NULL;
if (time_after(jiffies, he->ih_expires))
return NULL;
return he;
spin_lock(&nn->nfs_client_lock);
list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
if (clp->rpc_ops != &nfs_v4_clientops)
continue;
cl_dentry = clp->cl_idmap->idmap_pipe->dentry;
if (((event == RPC_PIPEFS_MOUNT) && cl_dentry) ||
((event == RPC_PIPEFS_UMOUNT) && !cl_dentry))
continue;
atomic_inc(&clp->cl_count);
spin_unlock(&nn->nfs_client_lock);
return clp;
}
spin_unlock(&nn->nfs_client_lock);
return NULL;
}
static inline struct idmap_hashent *
idmap_id_hash(struct idmap_hashtable* h, __u32 id)
static int rpc_pipefs_event(struct notifier_block *nb, unsigned long event,
void *ptr)
{
return &h->h_entries[fnvhash32(&id, sizeof(id)) % IDMAP_HASH_SZ];
}
struct super_block *sb = ptr;
struct nfs_client *clp;
int error = 0;
static struct idmap_hashent *
idmap_lookup_id(struct idmap_hashtable *h, __u32 id)
{
struct idmap_hashent *he = idmap_id_hash(h, id);
if (he->ih_id != id || he->ih_namelen == 0)
return NULL;
if (time_after(jiffies, he->ih_expires))
return NULL;
return he;
}
/*
* Routines for allocating new entries in the hashtable.
* For now, we just have 1 entry per bucket, so it's all
* pretty trivial.
*/
static inline struct idmap_hashent *
idmap_alloc_name(struct idmap_hashtable *h, char *name, size_t len)
{
return idmap_name_hash(h, name, len);
}
static inline struct idmap_hashent *
idmap_alloc_id(struct idmap_hashtable *h, __u32 id)
{
return idmap_id_hash(h, id);
}
static void
idmap_update_entry(struct idmap_hashent *he, const char *name,
size_t namelen, __u32 id)
{
he->ih_id = id;
memcpy(he->ih_name, name, namelen);
he->ih_name[namelen] = '\0';
he->ih_namelen = namelen;
he->ih_expires = jiffies + nfs_idmap_cache_timeout;
}
/*
* Name -> ID
*/
static int
nfs_idmap_id(struct idmap *idmap, struct idmap_hashtable *h,
const char *name, size_t namelen, __u32 *id)
{
struct rpc_pipe_msg msg;
struct idmap_msg *im;
struct idmap_hashent *he;
DECLARE_WAITQUEUE(wq, current);
int ret = -EIO;
im = &idmap->idmap_im;
/*
* String sanity checks
* Note that the userland daemon expects NUL terminated strings
*/
for (;;) {
if (namelen == 0)
return -EINVAL;
if (name[namelen-1] != '\0')
while ((clp = nfs_get_client_for_event(sb->s_fs_info, event))) {
error = __rpc_pipefs_event(clp, event, sb);
nfs_put_client(clp);
if (error)
break;
namelen--;
}
if (namelen >= IDMAP_NAMESZ)
return -EINVAL;
return error;
}
mutex_lock(&idmap->idmap_lock);
mutex_lock(&idmap->idmap_im_lock);
#define PIPEFS_NFS_PRIO 1
he = idmap_lookup_name(h, name, namelen);
if (he != NULL) {
*id = he->ih_id;
ret = 0;
static struct notifier_block nfs_idmap_block = {
.notifier_call = rpc_pipefs_event,
.priority = SUNRPC_PIPEFS_NFS_PRIO,
};
int nfs_idmap_init(void)
{
int ret;
ret = nfs_idmap_init_keyring();
if (ret != 0)
goto out;
}
memset(im, 0, sizeof(*im));
memcpy(im->im_name, name, namelen);
im->im_type = h->h_type;
im->im_conv = IDMAP_CONV_NAMETOID;
memset(&msg, 0, sizeof(msg));
msg.data = im;
msg.len = sizeof(*im);
add_wait_queue(&idmap->idmap_wq, &wq);
if (rpc_queue_upcall(idmap->idmap_dentry->d_inode, &msg) < 0) {
remove_wait_queue(&idmap->idmap_wq, &wq);
goto out;
}
set_current_state(TASK_UNINTERRUPTIBLE);
mutex_unlock(&idmap->idmap_im_lock);
schedule();
__set_current_state(TASK_RUNNING);
remove_wait_queue(&idmap->idmap_wq, &wq);
mutex_lock(&idmap->idmap_im_lock);
if (im->im_status & IDMAP_STATUS_SUCCESS) {
*id = im->im_id;
ret = 0;
}
out:
memset(im, 0, sizeof(*im));
mutex_unlock(&idmap->idmap_im_lock);
mutex_unlock(&idmap->idmap_lock);
ret = rpc_pipefs_notifier_register(&nfs_idmap_block);
if (ret != 0)
nfs_idmap_quit_keyring();
out:
return ret;
}
/*
* ID -> Name
*/
static int
nfs_idmap_name(struct idmap *idmap, struct idmap_hashtable *h,
__u32 id, char *name)
void nfs_idmap_quit(void)
{
struct rpc_pipe_msg msg;
rpc_pipefs_notifier_unregister(&nfs_idmap_block);
nfs_idmap_quit_keyring();
}
static int nfs_idmap_prepare_message(char *desc, struct idmap_msg *im,
struct rpc_pipe_msg *msg)
{
substring_t substr;
int token, ret;
memset(im, 0, sizeof(*im));
memset(msg, 0, sizeof(*msg));
im->im_type = IDMAP_TYPE_GROUP;
token = match_token(desc, nfs_idmap_tokens, &substr);
switch (token) {
case Opt_find_uid:
im->im_type = IDMAP_TYPE_USER;
case Opt_find_gid:
im->im_conv = IDMAP_CONV_NAMETOID;
ret = match_strlcpy(im->im_name, &substr, IDMAP_NAMESZ);
break;
case Opt_find_user:
im->im_type = IDMAP_TYPE_USER;
case Opt_find_group:
im->im_conv = IDMAP_CONV_IDTONAME;
ret = match_int(&substr, &im->im_id);
break;
default:
ret = -EINVAL;
goto out;
}
msg->data = im;
msg->len = sizeof(struct idmap_msg);
out:
return ret;
}
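For reference, the descriptions matched above are plain "uid:<name>", "gid:<name>", "user:<id>" and "group:<id>" strings. A small user-space sketch (hypothetical helper, not kernel code) of that classification step:

#include <stdio.h>
#include <string.h>

enum conv { NAME_TO_ID, ID_TO_NAME };

/* Classify an id_resolver key description the way the match_table above
 * does: "uid:"/"gid:" carry a name, "user:"/"group:" carry a numeric id. */
static int classify(const char *desc, enum conv *conv, const char **arg)
{
    static const struct {
        const char *prefix;
        enum conv conv;
    } map[] = {
        { "uid:",   NAME_TO_ID },
        { "gid:",   NAME_TO_ID },
        { "user:",  ID_TO_NAME },
        { "group:", ID_TO_NAME },
    };
    size_t i;

    for (i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
        size_t len = strlen(map[i].prefix);

        if (strncmp(desc, map[i].prefix, len) == 0) {
            *conv = map[i].conv;
            *arg = desc + len;
            return 0;
        }
    }
    return -1;	/* like the -EINVAL default above */
}

int main(void)
{
    enum conv conv;
    const char *arg;

    if (classify("uid:alice", &conv, &arg) == 0)
        printf("%s: %s\n",
               conv == NAME_TO_ID ? "name to id" : "id to name", arg);
    return 0;
}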
static int nfs_idmap_legacy_upcall(struct key_construction *cons,
const char *op,
void *aux)
{
struct rpc_pipe_msg *msg;
struct idmap_msg *im;
struct idmap_hashent *he;
DECLARE_WAITQUEUE(wq, current);
int ret = -EIO;
unsigned int len;
struct idmap *idmap = (struct idmap *)aux;
struct key *key = cons->key;
int ret;
im = &idmap->idmap_im;
mutex_lock(&idmap->idmap_lock);
mutex_lock(&idmap->idmap_im_lock);
he = idmap_lookup_id(h, id);
if (he) {
memcpy(name, he->ih_name, he->ih_namelen);
ret = he->ih_namelen;
goto out;
/* msg and im are freed in idmap_pipe_destroy_msg */
msg = kmalloc(sizeof(*msg), GFP_KERNEL);
if (msg == NULL) {
ret = -ENOMEM;
goto out0;
}
memset(im, 0, sizeof(*im));
im->im_type = h->h_type;
im->im_conv = IDMAP_CONV_IDTONAME;
im->im_id = id;
memset(&msg, 0, sizeof(msg));
msg.data = im;
msg.len = sizeof(*im);
add_wait_queue(&idmap->idmap_wq, &wq);
if (rpc_queue_upcall(idmap->idmap_dentry->d_inode, &msg) < 0) {
remove_wait_queue(&idmap->idmap_wq, &wq);
goto out;
im = kmalloc(sizeof(*im), GFP_KERNEL);
if (im == NULL) {
ret = -ENOMEM;
goto out1;
}
set_current_state(TASK_UNINTERRUPTIBLE);
mutex_unlock(&idmap->idmap_im_lock);
schedule();
__set_current_state(TASK_RUNNING);
remove_wait_queue(&idmap->idmap_wq, &wq);
mutex_lock(&idmap->idmap_im_lock);
ret = nfs_idmap_prepare_message(key->description, im, msg);
if (ret < 0)
goto out2;
if (im->im_status & IDMAP_STATUS_SUCCESS) {
if ((len = strnlen(im->im_name, IDMAP_NAMESZ)) == 0)
goto out;
memcpy(name, im->im_name, len);
ret = len;
idmap->idmap_key_cons = cons;
ret = rpc_queue_upcall(idmap->idmap_pipe, msg);
if (ret < 0)
goto out2;
return ret;
out2:
kfree(im);
out1:
kfree(msg);
out0:
key_revoke(cons->key);
key_revoke(cons->authkey);
return ret;
}
static int nfs_idmap_instantiate(struct key *key, struct key *authkey, char *data)
{
return key_instantiate_and_link(key, data, strlen(data) + 1,
id_resolver_cache->thread_keyring,
authkey);
}
static int nfs_idmap_read_message(struct idmap_msg *im, struct key *key, struct key *authkey)
{
char id_str[NFS_UINT_MAXLEN];
int ret = -EINVAL;
switch (im->im_conv) {
case IDMAP_CONV_NAMETOID:
sprintf(id_str, "%d", im->im_id);
ret = nfs_idmap_instantiate(key, authkey, id_str);
break;
case IDMAP_CONV_IDTONAME:
ret = nfs_idmap_instantiate(key, authkey, im->im_name);
break;
}
out:
memset(im, 0, sizeof(*im));
mutex_unlock(&idmap->idmap_im_lock);
mutex_unlock(&idmap->idmap_lock);
return ret;
}
@ -681,115 +702,51 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
{
struct rpc_inode *rpci = RPC_I(filp->f_path.dentry->d_inode);
struct idmap *idmap = (struct idmap *)rpci->private;
struct idmap_msg im_in, *im = &idmap->idmap_im;
struct idmap_hashtable *h;
struct idmap_hashent *he = NULL;
struct key_construction *cons = idmap->idmap_key_cons;
struct idmap_msg im;
size_t namelen_in;
int ret;
if (mlen != sizeof(im_in))
return -ENOSPC;
if (copy_from_user(&im_in, src, mlen) != 0)
return -EFAULT;
mutex_lock(&idmap->idmap_im_lock);
ret = mlen;
im->im_status = im_in.im_status;
/* If we got an error, terminate now, and wake up pending upcalls */
if (!(im_in.im_status & IDMAP_STATUS_SUCCESS)) {
wake_up(&idmap->idmap_wq);
if (mlen != sizeof(im)) {
ret = -ENOSPC;
goto out;
}
/* Sanity checking of strings */
ret = -EINVAL;
namelen_in = strnlen(im_in.im_name, IDMAP_NAMESZ);
if (namelen_in == 0 || namelen_in == IDMAP_NAMESZ)
goto out;
switch (im_in.im_type) {
case IDMAP_TYPE_USER:
h = &idmap->idmap_user_hash;
break;
case IDMAP_TYPE_GROUP:
h = &idmap->idmap_group_hash;
break;
default:
goto out;
}
switch (im_in.im_conv) {
case IDMAP_CONV_IDTONAME:
/* Did we match the current upcall? */
if (im->im_conv == IDMAP_CONV_IDTONAME
&& im->im_type == im_in.im_type
&& im->im_id == im_in.im_id) {
/* Yes: copy string, including the terminating '\0' */
memcpy(im->im_name, im_in.im_name, namelen_in);
im->im_name[namelen_in] = '\0';
wake_up(&idmap->idmap_wq);
}
he = idmap_alloc_id(h, im_in.im_id);
break;
case IDMAP_CONV_NAMETOID:
/* Did we match the current upcall? */
if (im->im_conv == IDMAP_CONV_NAMETOID
&& im->im_type == im_in.im_type
&& strnlen(im->im_name, IDMAP_NAMESZ) == namelen_in
&& memcmp(im->im_name, im_in.im_name, namelen_in) == 0) {
im->im_id = im_in.im_id;
wake_up(&idmap->idmap_wq);
}
he = idmap_alloc_name(h, im_in.im_name, namelen_in);
break;
default:
if (copy_from_user(&im, src, mlen) != 0) {
ret = -EFAULT;
goto out;
}
/* If the entry is valid, also copy it to the cache */
if (he != NULL)
idmap_update_entry(he, im_in.im_name, namelen_in, im_in.im_id);
ret = mlen;
if (!(im.im_status & IDMAP_STATUS_SUCCESS)) {
ret = mlen;
complete_request_key(idmap->idmap_key_cons, -ENOKEY);
goto out_incomplete;
}
namelen_in = strnlen(im.im_name, IDMAP_NAMESZ);
if (namelen_in == 0 || namelen_in == IDMAP_NAMESZ) {
ret = -EINVAL;
goto out;
}
ret = nfs_idmap_read_message(&im, cons->key, cons->authkey);
if (ret >= 0) {
key_set_timeout(cons->key, nfs_idmap_cache_timeout);
ret = mlen;
}
out:
mutex_unlock(&idmap->idmap_im_lock);
complete_request_key(idmap->idmap_key_cons, ret);
out_incomplete:
return ret;
}
static void
idmap_pipe_destroy_msg(struct rpc_pipe_msg *msg)
{
struct idmap_msg *im = msg->data;
struct idmap *idmap = container_of(im, struct idmap, idmap_im);
if (msg->errno >= 0)
return;
mutex_lock(&idmap->idmap_im_lock);
im->im_status = IDMAP_STATUS_LOOKUPFAIL;
wake_up(&idmap->idmap_wq);
mutex_unlock(&idmap->idmap_im_lock);
}
/*
* Fowler/Noll/Vo hash
* http://www.isthe.com/chongo/tech/comp/fnv/
*/
#define FNV_P_32 ((unsigned int)0x01000193) /* 16777619 */
#define FNV_1_32 ((unsigned int)0x811c9dc5) /* 2166136261 */
static unsigned int fnvhash32(const void *buf, size_t buflen)
{
const unsigned char *p, *end = (const unsigned char *)buf + buflen;
unsigned int hash = FNV_1_32;
for (p = buf; p < end; p++) {
hash *= FNV_P_32;
hash ^= (unsigned int)*p;
}
return hash;
/* Free memory allocated in nfs_idmap_legacy_upcall() */
kfree(msg->data);
kfree(msg);
}
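The legacy in-kernel cache removed above bucketed names and ids with a 32-bit FNV-1 hash (multiply by the prime, then xor in each byte). Since the kernel copy is gone, a standalone user-space sketch of the same hash for reference:

#include <stdio.h>
#include <stddef.h>

#define FNV_P_32 0x01000193u	/* 16777619 */
#define FNV_1_32 0x811c9dc5u	/* 2166136261 */

/* Same shape as the removed fnvhash32(): FNV-1, multiply then xor. */
static unsigned int fnvhash32(const void *buf, size_t buflen)
{
    const unsigned char *p = buf, *end = p + buflen;
    unsigned int hash = FNV_1_32;

    for (; p < end; p++) {
        hash *= FNV_P_32;
        hash ^= (unsigned int)*p;
    }
    return hash;
}

int main(void)
{
    /* e.g. pick one of the 128 buckets, as the old idmap cache did */
    printf("%u\n", fnvhash32("alice", 5) % 128);
    return 0;
}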
int nfs_map_name_to_uid(const struct nfs_server *server, const char *name, size_t namelen, __u32 *uid)
@ -798,16 +755,16 @@ int nfs_map_name_to_uid(const struct nfs_server *server, const char *name, size_
if (nfs_map_string_to_numeric(name, namelen, uid))
return 0;
return nfs_idmap_id(idmap, &idmap->idmap_user_hash, name, namelen, uid);
return nfs_idmap_lookup_id(name, namelen, "uid", uid, idmap);
}
int nfs_map_group_to_gid(const struct nfs_server *server, const char *name, size_t namelen, __u32 *uid)
int nfs_map_group_to_gid(const struct nfs_server *server, const char *name, size_t namelen, __u32 *gid)
{
struct idmap *idmap = server->nfs_client->cl_idmap;
if (nfs_map_string_to_numeric(name, namelen, uid))
if (nfs_map_string_to_numeric(name, namelen, gid))
return 0;
return nfs_idmap_id(idmap, &idmap->idmap_group_hash, name, namelen, uid);
return nfs_idmap_lookup_id(name, namelen, "gid", gid, idmap);
}
int nfs_map_uid_to_name(const struct nfs_server *server, __u32 uid, char *buf, size_t buflen)
@ -816,21 +773,19 @@ int nfs_map_uid_to_name(const struct nfs_server *server, __u32 uid, char *buf, s
int ret = -EINVAL;
if (!(server->caps & NFS_CAP_UIDGID_NOMAP))
ret = nfs_idmap_name(idmap, &idmap->idmap_user_hash, uid, buf);
ret = nfs_idmap_lookup_name(uid, "user", buf, buflen, idmap);
if (ret < 0)
ret = nfs_map_numeric_to_string(uid, buf, buflen);
return ret;
}
int nfs_map_gid_to_group(const struct nfs_server *server, __u32 uid, char *buf, size_t buflen)
int nfs_map_gid_to_group(const struct nfs_server *server, __u32 gid, char *buf, size_t buflen)
{
struct idmap *idmap = server->nfs_client->cl_idmap;
int ret = -EINVAL;
if (!(server->caps & NFS_CAP_UIDGID_NOMAP))
ret = nfs_idmap_name(idmap, &idmap->idmap_group_hash, uid, buf);
ret = nfs_idmap_lookup_name(gid, "group", buf, buflen, idmap);
if (ret < 0)
ret = nfs_map_numeric_to_string(uid, buf, buflen);
ret = nfs_map_numeric_to_string(gid, buf, buflen);
return ret;
}
#endif /* CONFIG_NFS_USE_NEW_IDMAPPER */

View File

@ -39,6 +39,7 @@
#include <linux/slab.h>
#include <linux/compat.h>
#include <linux/freezer.h>
#include <linux/crc32.h>
#include <asm/system.h>
#include <asm/uaccess.h>
@ -51,6 +52,7 @@
#include "fscache.h"
#include "dns_resolve.h"
#include "pnfs.h"
#include "netns.h"
#define NFSDBG_FACILITY NFSDBG_VFS
@ -388,9 +390,10 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr)
unlock_new_inode(inode);
} else
nfs_refresh_inode(inode, fattr);
dprintk("NFS: nfs_fhget(%s/%Ld ct=%d)\n",
dprintk("NFS: nfs_fhget(%s/%Ld fh_crc=0x%08x ct=%d)\n",
inode->i_sb->s_id,
(long long)NFS_FILEID(inode),
nfs_display_fhandle_hash(fh),
atomic_read(&inode->i_count));
out:
@ -401,7 +404,7 @@ out_no_inode:
goto out;
}
#define NFS_VALID_ATTRS (ATTR_MODE|ATTR_UID|ATTR_GID|ATTR_SIZE|ATTR_ATIME|ATTR_ATIME_SET|ATTR_MTIME|ATTR_MTIME_SET|ATTR_FILE)
#define NFS_VALID_ATTRS (ATTR_MODE|ATTR_UID|ATTR_GID|ATTR_SIZE|ATTR_ATIME|ATTR_ATIME_SET|ATTR_MTIME|ATTR_MTIME_SET|ATTR_FILE|ATTR_OPEN)
int
nfs_setattr(struct dentry *dentry, struct iattr *attr)
@ -423,7 +426,7 @@ nfs_setattr(struct dentry *dentry, struct iattr *attr)
/* Optimization: if the end result is no change, don't RPC */
attr->ia_valid &= NFS_VALID_ATTRS;
if ((attr->ia_valid & ~ATTR_FILE) == 0)
if ((attr->ia_valid & ~(ATTR_FILE|ATTR_OPEN)) == 0)
return 0;
/* Write all dirty data */
@ -1044,6 +1047,67 @@ struct nfs_fh *nfs_alloc_fhandle(void)
return fh;
}
#ifdef NFS_DEBUG
/*
* _nfs_display_fhandle_hash - calculate the crc32 hash for the filehandle
* in the same way that wireshark does
*
* @fh: file handle
*
* For debugging only.
*/
u32 _nfs_display_fhandle_hash(const struct nfs_fh *fh)
{
/* wireshark uses 32-bit AUTODIN crc and does a bitwise
* not on the result */
return ~crc32(0xFFFFFFFF, &fh->data[0], fh->size);
}
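The fh_crc value is crc32_le seeded with ~0 and then inverted, which should be the plain standard CRC-32 of the raw filehandle bytes (the number Wireshark displays). A user-space sketch, assuming zlib is available for its crc32() (build with -lz):

#include <stdio.h>
#include <zlib.h>

/* Standard CRC-32 of the raw handle bytes; with zlib this is crc32()
 * seeded with 0, which should match ~crc32(0xFFFFFFFF, ...) above. */
static unsigned long fh_hash(const unsigned char *data, unsigned int size)
{
    return crc32(0L, data, size);
}

int main(void)
{
    /* hypothetical filehandle bytes, for illustration only */
    unsigned char fh[] = { 0x01, 0x00, 0x06, 0x01, 0xde, 0xad, 0xbe, 0xef };

    printf("fh_crc=0x%08lx\n", fh_hash(fh, sizeof(fh)));
    return 0;
}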
/*
* _nfs_display_fhandle - display an NFS file handle on the console
*
* @fh: file handle to display
* @caption: display caption
*
* For debugging only.
*/
void _nfs_display_fhandle(const struct nfs_fh *fh, const char *caption)
{
unsigned short i;
if (fh == NULL || fh->size == 0) {
printk(KERN_DEFAULT "%s at %p is empty\n", caption, fh);
return;
}
printk(KERN_DEFAULT "%s at %p is %u bytes, crc: 0x%08x:\n",
caption, fh, fh->size, _nfs_display_fhandle_hash(fh));
for (i = 0; i < fh->size; i += 16) {
__be32 *pos = (__be32 *)&fh->data[i];
switch ((fh->size - i - 1) >> 2) {
case 0:
printk(KERN_DEFAULT " %08x\n",
be32_to_cpup(pos));
break;
case 1:
printk(KERN_DEFAULT " %08x %08x\n",
be32_to_cpup(pos), be32_to_cpup(pos + 1));
break;
case 2:
printk(KERN_DEFAULT " %08x %08x %08x\n",
be32_to_cpup(pos), be32_to_cpup(pos + 1),
be32_to_cpup(pos + 2));
break;
default:
printk(KERN_DEFAULT " %08x %08x %08x %08x\n",
be32_to_cpup(pos), be32_to_cpup(pos + 1),
be32_to_cpup(pos + 2), be32_to_cpup(pos + 3));
}
}
}
#endif
/**
* nfs_inode_attrs_need_update - check if the inode attributes need updating
* @inode - pointer to inode
@ -1211,8 +1275,9 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
unsigned long now = jiffies;
unsigned long save_cache_validity;
dfprintk(VFS, "NFS: %s(%s/%ld ct=%d info=0x%x)\n",
dfprintk(VFS, "NFS: %s(%s/%ld fh_crc=0x%08x ct=%d info=0x%x)\n",
__func__, inode->i_sb->s_id, inode->i_ino,
nfs_display_fhandle_hash(NFS_FH(inode)),
atomic_read(&inode->i_count), fattr->valid);
if ((fattr->valid & NFS_ATTR_FATTR_FILEID) && nfsi->fileid != fattr->fileid)
@ -1406,7 +1471,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
/*
* Big trouble! The inode has become a different object.
*/
printk(KERN_DEBUG "%s: inode %ld mode changed, %07o to %07o\n",
printk(KERN_DEBUG "NFS: %s: inode %ld mode changed, %07o to %07o\n",
__func__, inode->i_ino, inode->i_mode, fattr->mode);
out_err:
/*
@ -1495,7 +1560,7 @@ static void init_once(void *foo)
INIT_LIST_HEAD(&nfsi->open_files);
INIT_LIST_HEAD(&nfsi->access_cache_entry_lru);
INIT_LIST_HEAD(&nfsi->access_cache_inode_lru);
INIT_RADIX_TREE(&nfsi->nfs_page_tree, GFP_ATOMIC);
INIT_LIST_HEAD(&nfsi->commit_list);
nfsi->npages = 0;
nfsi->ncommit = 0;
atomic_set(&nfsi->silly_count, 1);
@ -1552,6 +1617,28 @@ static void nfsiod_stop(void)
destroy_workqueue(wq);
}
int nfs_net_id;
EXPORT_SYMBOL_GPL(nfs_net_id);
static int nfs_net_init(struct net *net)
{
nfs_clients_init(net);
return nfs_dns_resolver_cache_init(net);
}
static void nfs_net_exit(struct net *net)
{
nfs_dns_resolver_cache_destroy(net);
nfs_cleanup_cb_ident_idr(net);
}
static struct pernet_operations nfs_net_ops = {
.init = nfs_net_init,
.exit = nfs_net_exit,
.id = &nfs_net_id,
.size = sizeof(struct nfs_net),
};
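Because .size is non-zero, the pernet core allocates one struct nfs_net for every network namespace and nfs_net_id becomes the key for net_generic() lookups. A minimal sketch of that pattern in isolation (hypothetical module, not part of this commit):

#include <linux/module.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>

static int example_net_id;

struct example_net {
    int counter;		/* per-namespace state lives here */
};

static int __net_init example_net_init(struct net *net)
{
    struct example_net *en = net_generic(net, example_net_id);

    en->counter = 0;	/* runs once for every network namespace */
    return 0;
}

static void __net_exit example_net_exit(struct net *net)
{
    /* the per-net blob itself is freed by the pernet core */
}

static struct pernet_operations example_net_ops = {
    .init = example_net_init,
    .exit = example_net_exit,
    .id   = &example_net_id,
    .size = sizeof(struct example_net),
};

static int __init example_init(void)
{
    return register_pernet_subsys(&example_net_ops);
}

static void __exit example_exit(void)
{
    unregister_pernet_subsys(&example_net_ops);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");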
/*
* Initialize NFS
*/
@ -1561,9 +1648,13 @@ static int __init init_nfs_fs(void)
err = nfs_idmap_init();
if (err < 0)
goto out9;
goto out10;
err = nfs_dns_resolver_init();
if (err < 0)
goto out9;
err = register_pernet_subsys(&nfs_net_ops);
if (err < 0)
goto out8;
@ -1600,14 +1691,14 @@ static int __init init_nfs_fs(void)
goto out0;
#ifdef CONFIG_PROC_FS
rpc_proc_register(&nfs_rpcstat);
rpc_proc_register(&init_net, &nfs_rpcstat);
#endif
if ((err = register_nfs_fs()) != 0)
goto out;
return 0;
out:
#ifdef CONFIG_PROC_FS
rpc_proc_unregister("nfs");
rpc_proc_unregister(&init_net, "nfs");
#endif
nfs_destroy_directcache();
out0:
@ -1625,10 +1716,12 @@ out5:
out6:
nfs_fscache_unregister();
out7:
nfs_dns_resolver_destroy();
unregister_pernet_subsys(&nfs_net_ops);
out8:
nfs_idmap_quit();
nfs_dns_resolver_destroy();
out9:
nfs_idmap_quit();
out10:
return err;
}
@ -1640,12 +1733,12 @@ static void __exit exit_nfs_fs(void)
nfs_destroy_inodecache();
nfs_destroy_nfspagecache();
nfs_fscache_unregister();
unregister_pernet_subsys(&nfs_net_ops);
nfs_dns_resolver_destroy();
nfs_idmap_quit();
#ifdef CONFIG_PROC_FS
rpc_proc_unregister("nfs");
rpc_proc_unregister(&init_net, "nfs");
#endif
nfs_cleanup_cb_ident_idr();
unregister_nfs_fs();
nfs_fs_proc_exit();
nfsiod_stop();

View File

@ -123,6 +123,7 @@ struct nfs_parsed_mount_data {
} nfs_server;
struct security_mnt_opts lsm_opts;
struct net *net;
};
/* mount_clnt.c */
@ -137,20 +138,22 @@ struct nfs_mount_request {
int noresvport;
unsigned int *auth_flav_len;
rpc_authflavor_t *auth_flavs;
struct net *net;
};
extern int nfs_mount(struct nfs_mount_request *info);
extern void nfs_umount(const struct nfs_mount_request *info);
/* client.c */
extern struct rpc_program nfs_program;
extern const struct rpc_program nfs_program;
extern void nfs_clients_init(struct net *net);
extern void nfs_cleanup_cb_ident_idr(void);
extern void nfs_cleanup_cb_ident_idr(struct net *);
extern void nfs_put_client(struct nfs_client *);
extern struct nfs_client *nfs4_find_client_no_ident(const struct sockaddr *);
extern struct nfs_client *nfs4_find_client_ident(int);
extern struct nfs_client *nfs4_find_client_ident(struct net *, int);
extern struct nfs_client *
nfs4_find_client_sessionid(const struct sockaddr *, struct nfs4_sessionid *);
nfs4_find_client_sessionid(struct net *, const struct sockaddr *,
struct nfs4_sessionid *);
extern struct nfs_server *nfs_create_server(
const struct nfs_parsed_mount_data *,
struct nfs_fh *);
@ -329,6 +332,8 @@ void nfs_retry_commit(struct list_head *page_list,
void nfs_commit_clear_lock(struct nfs_inode *nfsi);
void nfs_commitdata_release(void *data);
void nfs_commit_release_pages(struct nfs_write_data *data);
void nfs_request_add_commit_list(struct nfs_page *req, struct list_head *head);
void nfs_request_remove_commit_list(struct nfs_page *req);
#ifdef CONFIG_MIGRATION
extern int nfs_migrate_page(struct address_space *,

View File

@ -16,7 +16,7 @@
#include <linux/nfs_fs.h>
#include "internal.h"
#ifdef RPC_DEBUG
#ifdef NFS_DEBUG
# define NFSDBG_FACILITY NFSDBG_MOUNT
#endif
@ -67,7 +67,7 @@ enum {
MOUNTPROC3_EXPORT = 5,
};
static struct rpc_program mnt_program;
static const struct rpc_program mnt_program;
/*
* Defined by OpenGroup XNFS Version 3W, chapter 8
@ -153,7 +153,7 @@ int nfs_mount(struct nfs_mount_request *info)
.rpc_resp = &result,
};
struct rpc_create_args args = {
.net = &init_net,
.net = info->net,
.protocol = info->protocol,
.address = info->sap,
.addrsize = info->salen,
@ -225,7 +225,7 @@ void nfs_umount(const struct nfs_mount_request *info)
.to_retries = 2,
};
struct rpc_create_args args = {
.net = &init_net,
.net = info->net,
.protocol = IPPROTO_UDP,
.address = info->sap,
.addrsize = info->salen,
@ -488,19 +488,19 @@ static struct rpc_procinfo mnt3_procedures[] = {
};
static struct rpc_version mnt_version1 = {
static const struct rpc_version mnt_version1 = {
.number = 1,
.nrprocs = ARRAY_SIZE(mnt_procedures),
.procs = mnt_procedures,
};
static struct rpc_version mnt_version3 = {
static const struct rpc_version mnt_version3 = {
.number = 3,
.nrprocs = ARRAY_SIZE(mnt3_procedures),
.procs = mnt3_procedures,
};
static struct rpc_version *mnt_version[] = {
static const struct rpc_version *mnt_version[] = {
NULL,
&mnt_version1,
NULL,
@ -509,7 +509,7 @@ static struct rpc_version *mnt_version[] = {
static struct rpc_stat mnt_stats;
static struct rpc_program mnt_program = {
static const struct rpc_program mnt_program = {
.name = "mount",
.number = NFS_MNT_PROGRAM,
.nrvers = ARRAY_SIZE(mnt_version),

View File

@ -276,7 +276,10 @@ out:
nfs_free_fattr(fattr);
nfs_free_fhandle(fh);
out_nofree:
dprintk("<-- nfs_follow_mountpoint() = %p\n", mnt);
if (IS_ERR(mnt))
dprintk("<-- %s(): error %ld\n", __func__, PTR_ERR(mnt));
else
dprintk("<-- %s() = %p\n", __func__, mnt);
return mnt;
}

fs/nfs/netns.h (new file, 27 lines)
View File

@ -0,0 +1,27 @@
#ifndef __NFS_NETNS_H__
#define __NFS_NETNS_H__
#include <net/net_namespace.h>
#include <net/netns/generic.h>
struct bl_dev_msg {
int32_t status;
uint32_t major, minor;
};
struct nfs_net {
struct cache_detail *nfs_dns_resolve;
struct rpc_pipe *bl_device_pipe;
struct bl_dev_msg bl_mount_reply;
wait_queue_head_t bl_wq;
struct list_head nfs_client_list;
struct list_head nfs_volume_list;
#ifdef CONFIG_NFS_V4
struct idr cb_ident_idr; /* Protected by nfs_client_lock */
#endif
spinlock_t nfs_client_lock;
};
extern int nfs_net_id;
#endif

View File

@ -1150,7 +1150,7 @@ struct rpc_procinfo nfs_procedures[] = {
PROC(STATFS, fhandle, statfsres, 0),
};
struct rpc_version nfs_version2 = {
const struct rpc_version nfs_version2 = {
.number = 2,
.nrprocs = ARRAY_SIZE(nfs_procedures),
.procs = nfs_procedures

View File

@ -192,7 +192,7 @@ struct posix_acl *nfs3_proc_getacl(struct inode *inode, int type)
.pages = pages,
};
struct nfs3_getaclres res = {
0
NULL,
};
struct rpc_message msg = {
.rpc_argp = &args,

View File

@ -428,6 +428,11 @@ nfs3_proc_unlink_setup(struct rpc_message *msg, struct inode *dir)
msg->rpc_proc = &nfs3_procedures[NFS3PROC_REMOVE];
}
static void nfs3_proc_unlink_rpc_prepare(struct rpc_task *task, struct nfs_unlinkdata *data)
{
rpc_call_start(task);
}
static int
nfs3_proc_unlink_done(struct rpc_task *task, struct inode *dir)
{
@ -445,6 +450,11 @@ nfs3_proc_rename_setup(struct rpc_message *msg, struct inode *dir)
msg->rpc_proc = &nfs3_procedures[NFS3PROC_RENAME];
}
static void nfs3_proc_rename_rpc_prepare(struct rpc_task *task, struct nfs_renamedata *data)
{
rpc_call_start(task);
}
static int
nfs3_proc_rename_done(struct rpc_task *task, struct inode *old_dir,
struct inode *new_dir)
@ -814,6 +824,11 @@ static void nfs3_proc_read_setup(struct nfs_read_data *data, struct rpc_message
msg->rpc_proc = &nfs3_procedures[NFS3PROC_READ];
}
static void nfs3_proc_read_rpc_prepare(struct rpc_task *task, struct nfs_read_data *data)
{
rpc_call_start(task);
}
static int nfs3_write_done(struct rpc_task *task, struct nfs_write_data *data)
{
if (nfs3_async_handle_jukebox(task, data->inode))
@ -828,6 +843,11 @@ static void nfs3_proc_write_setup(struct nfs_write_data *data, struct rpc_messag
msg->rpc_proc = &nfs3_procedures[NFS3PROC_WRITE];
}
static void nfs3_proc_write_rpc_prepare(struct rpc_task *task, struct nfs_write_data *data)
{
rpc_call_start(task);
}
static int nfs3_commit_done(struct rpc_task *task, struct nfs_write_data *data)
{
if (nfs3_async_handle_jukebox(task, data->inode))
@ -864,9 +884,11 @@ const struct nfs_rpc_ops nfs_v3_clientops = {
.create = nfs3_proc_create,
.remove = nfs3_proc_remove,
.unlink_setup = nfs3_proc_unlink_setup,
.unlink_rpc_prepare = nfs3_proc_unlink_rpc_prepare,
.unlink_done = nfs3_proc_unlink_done,
.rename = nfs3_proc_rename,
.rename_setup = nfs3_proc_rename_setup,
.rename_rpc_prepare = nfs3_proc_rename_rpc_prepare,
.rename_done = nfs3_proc_rename_done,
.link = nfs3_proc_link,
.symlink = nfs3_proc_symlink,
@ -879,8 +901,10 @@ const struct nfs_rpc_ops nfs_v3_clientops = {
.pathconf = nfs3_proc_pathconf,
.decode_dirent = nfs3_decode_dirent,
.read_setup = nfs3_proc_read_setup,
.read_rpc_prepare = nfs3_proc_read_rpc_prepare,
.read_done = nfs3_read_done,
.write_setup = nfs3_proc_write_setup,
.write_rpc_prepare = nfs3_proc_write_rpc_prepare,
.write_done = nfs3_write_done,
.commit_setup = nfs3_proc_commit_setup,
.commit_done = nfs3_commit_done,

View File

@ -2461,7 +2461,7 @@ struct rpc_procinfo nfs3_procedures[] = {
PROC(COMMIT, commit, commit, 5),
};
struct rpc_version nfs_version3 = {
const struct rpc_version nfs_version3 = {
.number = 3,
.nrprocs = ARRAY_SIZE(nfs3_procedures),
.procs = nfs3_procedures
@ -2489,7 +2489,7 @@ static struct rpc_procinfo nfs3_acl_procedures[] = {
},
};
struct rpc_version nfsacl_version3 = {
const struct rpc_version nfsacl_version3 = {
.number = 3,
.nrprocs = sizeof(nfs3_acl_procedures)/
sizeof(nfs3_acl_procedures[0]),

View File

@ -20,7 +20,6 @@ enum nfs4_client_state {
NFS4CLNT_RECLAIM_REBOOT,
NFS4CLNT_RECLAIM_NOGRACE,
NFS4CLNT_DELEGRETURN,
NFS4CLNT_LAYOUTRECALL,
NFS4CLNT_SESSION_RESET,
NFS4CLNT_RECALL_SLOT,
NFS4CLNT_LEASE_CONFIRM,
@ -44,7 +43,7 @@ struct nfs4_minor_version_ops {
struct nfs4_sequence_args *args,
struct nfs4_sequence_res *res,
int cache_reply);
int (*validate_stateid)(struct nfs_delegation *,
bool (*match_stateid)(const nfs4_stateid *,
const nfs4_stateid *);
int (*find_root_sec)(struct nfs_server *, struct nfs_fh *,
struct nfs_fsinfo *);
@ -53,26 +52,25 @@ struct nfs4_minor_version_ops {
const struct nfs4_state_maintenance_ops *state_renewal_ops;
};
/*
* struct rpc_sequence ensures that RPC calls are sent in the exact
* order that they appear on the list.
*/
struct rpc_sequence {
struct rpc_wait_queue wait; /* RPC call delay queue */
spinlock_t lock; /* Protects the list */
struct list_head list; /* Defines sequence of RPC calls */
struct nfs_unique_id {
struct rb_node rb_node;
__u64 id;
};
#define NFS_SEQID_CONFIRMED 1
struct nfs_seqid_counter {
struct rpc_sequence *sequence;
int owner_id;
int flags;
u32 counter;
spinlock_t lock; /* Protects the list */
struct list_head list; /* Defines sequence of RPC calls */
struct rpc_wait_queue wait; /* RPC call delay queue */
};
struct nfs_seqid {
struct nfs_seqid_counter *sequence;
struct list_head list;
struct rpc_task *task;
};
static inline void nfs_confirm_seqid(struct nfs_seqid_counter *seqid, int status)
@ -81,18 +79,12 @@ static inline void nfs_confirm_seqid(struct nfs_seqid_counter *seqid, int status
seqid->flags |= NFS_SEQID_CONFIRMED;
}
struct nfs_unique_id {
struct rb_node rb_node;
__u64 id;
};
/*
* NFS4 state_owners and lock_owners are simply labels for ordered
* sequences of RPC calls. Their sole purpose is to provide once-only
* semantics by allowing the server to identify replayed requests.
*/
struct nfs4_state_owner {
struct nfs_unique_id so_owner_id;
struct nfs_server *so_server;
struct list_head so_lru;
unsigned long so_expires;
@ -105,7 +97,6 @@ struct nfs4_state_owner {
unsigned long so_flags;
struct list_head so_states;
struct nfs_seqid_counter so_seqid;
struct rpc_sequence so_sequence;
};
enum {
@ -146,8 +137,6 @@ struct nfs4_lock_state {
#define NFS_LOCK_INITIALIZED 1
int ls_flags;
struct nfs_seqid_counter ls_seqid;
struct rpc_sequence ls_sequence;
struct nfs_unique_id ls_id;
nfs4_stateid ls_stateid;
atomic_t ls_count;
struct nfs4_lock_owner ls_owner;
@ -193,6 +182,7 @@ struct nfs4_exception {
long timeout;
int retry;
struct nfs4_state *state;
struct inode *inode;
};
struct nfs4_state_recovery_ops {
@ -224,7 +214,7 @@ extern int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait, boo
extern int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle);
extern int nfs4_proc_fs_locations(struct inode *dir, const struct qstr *name,
struct nfs4_fs_locations *fs_locations, struct page *page);
extern void nfs4_release_lockowner(const struct nfs4_lock_state *);
extern int nfs4_release_lockowner(struct nfs4_lock_state *);
extern const struct xattr_handler *nfs4_xattr_handlers[];
#if defined(CONFIG_NFS_V4_1)
@ -233,12 +223,13 @@ static inline struct nfs4_session *nfs4_get_session(const struct nfs_server *ser
return server->nfs_client->cl_session;
}
extern bool nfs4_set_task_privileged(struct rpc_task *task, void *dummy);
extern int nfs4_setup_sequence(const struct nfs_server *server,
struct nfs4_sequence_args *args, struct nfs4_sequence_res *res,
int cache_reply, struct rpc_task *task);
struct rpc_task *task);
extern int nfs41_setup_sequence(struct nfs4_session *session,
struct nfs4_sequence_args *args, struct nfs4_sequence_res *res,
int cache_reply, struct rpc_task *task);
struct rpc_task *task);
extern void nfs4_destroy_session(struct nfs4_session *session);
extern struct nfs4_session *nfs4_alloc_session(struct nfs_client *clp);
extern int nfs4_proc_create_session(struct nfs_client *);
@ -269,7 +260,7 @@ static inline struct nfs4_session *nfs4_get_session(const struct nfs_server *ser
static inline int nfs4_setup_sequence(const struct nfs_server *server,
struct nfs4_sequence_args *args, struct nfs4_sequence_res *res,
int cache_reply, struct rpc_task *task)
struct rpc_task *task)
{
return 0;
}
@ -319,7 +310,7 @@ static inline void nfs4_schedule_session_recovery(struct nfs4_session *session)
}
#endif /* CONFIG_NFS_V4_1 */
extern struct nfs4_state_owner * nfs4_get_state_owner(struct nfs_server *, struct rpc_cred *);
extern struct nfs4_state_owner *nfs4_get_state_owner(struct nfs_server *, struct rpc_cred *, gfp_t);
extern void nfs4_put_state_owner(struct nfs4_state_owner *);
extern void nfs4_purge_state_owners(struct nfs_server *);
extern struct nfs4_state * nfs4_get_open_state(struct inode *, struct nfs4_state_owner *);
@ -327,6 +318,8 @@ extern void nfs4_put_open_state(struct nfs4_state *);
extern void nfs4_close_state(struct nfs4_state *, fmode_t);
extern void nfs4_close_sync(struct nfs4_state *, fmode_t);
extern void nfs4_state_set_mode_locked(struct nfs4_state *, fmode_t);
extern void nfs_inode_find_state_and_recover(struct inode *inode,
const nfs4_stateid *stateid);
extern void nfs4_schedule_lease_recovery(struct nfs_client *);
extern void nfs4_schedule_state_manager(struct nfs_client *);
extern void nfs4_schedule_path_down_recovery(struct nfs_client *clp);
@ -337,7 +330,8 @@ extern void nfs41_handle_server_scope(struct nfs_client *,
struct server_scope **);
extern void nfs4_put_lock_state(struct nfs4_lock_state *lsp);
extern int nfs4_set_lock_state(struct nfs4_state *state, struct file_lock *fl);
extern void nfs4_copy_stateid(nfs4_stateid *, struct nfs4_state *, fl_owner_t, pid_t);
extern void nfs4_select_rw_stateid(nfs4_stateid *, struct nfs4_state *,
fmode_t, fl_owner_t, pid_t);
extern struct nfs_seqid *nfs_alloc_seqid(struct nfs_seqid_counter *counter, gfp_t gfp_mask);
extern int nfs_wait_on_sequence(struct nfs_seqid *seqid, struct rpc_task *task);
@ -346,6 +340,8 @@ extern void nfs_increment_lock_seqid(int status, struct nfs_seqid *seqid);
extern void nfs_release_seqid(struct nfs_seqid *seqid);
extern void nfs_free_seqid(struct nfs_seqid *seqid);
extern void nfs4_free_lock_state(struct nfs_server *server, struct nfs4_lock_state *lsp);
extern const nfs4_stateid zero_stateid;
/* nfs4xdr.c */
@ -357,6 +353,16 @@ struct nfs4_mount_data;
extern struct svc_version nfs4_callback_version1;
extern struct svc_version nfs4_callback_version4;
static inline void nfs4_stateid_copy(nfs4_stateid *dst, const nfs4_stateid *src)
{
memcpy(dst, src, sizeof(*dst));
}
static inline bool nfs4_stateid_match(const nfs4_stateid *dst, const nfs4_stateid *src)
{
return memcmp(dst, src, sizeof(*dst)) == 0;
}
#else
#define nfs4_close_state(a, b) do { } while (0)

View File

@ -33,7 +33,10 @@
#include <linux/nfs_page.h>
#include <linux/module.h>
#include <linux/sunrpc/metrics.h>
#include "internal.h"
#include "delegation.h"
#include "nfs4filelayout.h"
#define NFSDBG_FACILITY NFSDBG_PNFS_LD
@ -84,12 +87,27 @@ static int filelayout_async_handle_error(struct rpc_task *task,
struct nfs_client *clp,
int *reset)
{
struct nfs_server *mds_server = NFS_SERVER(state->inode);
struct nfs_client *mds_client = mds_server->nfs_client;
if (task->tk_status >= 0)
return 0;
*reset = 0;
switch (task->tk_status) {
/* MDS state errors */
case -NFS4ERR_DELEG_REVOKED:
case -NFS4ERR_ADMIN_REVOKED:
case -NFS4ERR_BAD_STATEID:
nfs_remove_bad_delegation(state->inode);
case -NFS4ERR_OPENMODE:
nfs4_schedule_stateid_recovery(mds_server, state);
goto wait_on_recovery;
case -NFS4ERR_EXPIRED:
nfs4_schedule_stateid_recovery(mds_server, state);
nfs4_schedule_lease_recovery(mds_client);
goto wait_on_recovery;
/* DS session errors */
case -NFS4ERR_BADSESSION:
case -NFS4ERR_BADSLOT:
case -NFS4ERR_BAD_HIGH_SLOT:
@ -115,8 +133,14 @@ static int filelayout_async_handle_error(struct rpc_task *task,
*reset = 1;
break;
}
out:
task->tk_status = 0;
return -EAGAIN;
wait_on_recovery:
rpc_sleep_on(&mds_client->cl_rpcwaitq, task, NULL);
if (test_bit(NFS4CLNT_MANAGER_RUNNING, &mds_client->cl_state) == 0)
rpc_wake_up_queued_task(&mds_client->cl_rpcwaitq, task);
goto out;
}
/* NFS_PROTO call done callback routines */
@ -173,7 +197,7 @@ static void filelayout_read_prepare(struct rpc_task *task, void *data)
if (nfs41_setup_sequence(rdata->ds_clp->cl_session,
&rdata->args.seq_args, &rdata->res.seq_res,
0, task))
task))
return;
rpc_call_start(task);
@ -189,10 +213,18 @@ static void filelayout_read_call_done(struct rpc_task *task, void *data)
rdata->mds_ops->rpc_call_done(task, data);
}
static void filelayout_read_count_stats(struct rpc_task *task, void *data)
{
struct nfs_read_data *rdata = (struct nfs_read_data *)data;
rpc_count_iostats(task, NFS_SERVER(rdata->inode)->client->cl_metrics);
}
static void filelayout_read_release(void *data)
{
struct nfs_read_data *rdata = (struct nfs_read_data *)data;
put_lseg(rdata->lseg);
rdata->mds_ops->rpc_release(data);
}
@ -254,7 +286,7 @@ static void filelayout_write_prepare(struct rpc_task *task, void *data)
if (nfs41_setup_sequence(wdata->ds_clp->cl_session,
&wdata->args.seq_args, &wdata->res.seq_res,
0, task))
task))
return;
rpc_call_start(task);
@ -268,10 +300,18 @@ static void filelayout_write_call_done(struct rpc_task *task, void *data)
wdata->mds_ops->rpc_call_done(task, data);
}
static void filelayout_write_count_stats(struct rpc_task *task, void *data)
{
struct nfs_write_data *wdata = (struct nfs_write_data *)data;
rpc_count_iostats(task, NFS_SERVER(wdata->inode)->client->cl_metrics);
}
static void filelayout_write_release(void *data)
{
struct nfs_write_data *wdata = (struct nfs_write_data *)data;
put_lseg(wdata->lseg);
wdata->mds_ops->rpc_release(data);
}
@ -282,24 +322,28 @@ static void filelayout_commit_release(void *data)
nfs_commit_release_pages(wdata);
if (atomic_dec_and_test(&NFS_I(wdata->inode)->commits_outstanding))
nfs_commit_clear_lock(NFS_I(wdata->inode));
put_lseg(wdata->lseg);
nfs_commitdata_release(wdata);
}
struct rpc_call_ops filelayout_read_call_ops = {
static const struct rpc_call_ops filelayout_read_call_ops = {
.rpc_call_prepare = filelayout_read_prepare,
.rpc_call_done = filelayout_read_call_done,
.rpc_count_stats = filelayout_read_count_stats,
.rpc_release = filelayout_read_release,
};
struct rpc_call_ops filelayout_write_call_ops = {
static const struct rpc_call_ops filelayout_write_call_ops = {
.rpc_call_prepare = filelayout_write_prepare,
.rpc_call_done = filelayout_write_call_done,
.rpc_count_stats = filelayout_write_count_stats,
.rpc_release = filelayout_write_release,
};
struct rpc_call_ops filelayout_commit_call_ops = {
static const struct rpc_call_ops filelayout_commit_call_ops = {
.rpc_call_prepare = filelayout_write_prepare,
.rpc_call_done = filelayout_write_call_done,
.rpc_count_stats = filelayout_write_count_stats,
.rpc_release = filelayout_commit_release,
};
@ -367,7 +411,8 @@ filelayout_write_pagelist(struct nfs_write_data *data, int sync)
idx = nfs4_fl_calc_ds_index(lseg, j);
ds = nfs4_fl_prepare_ds(lseg, idx);
if (!ds) {
printk(KERN_ERR "%s: prepare_ds failed, use MDS\n", __func__);
printk(KERN_ERR "NFS: %s: prepare_ds failed, use MDS\n",
__func__);
set_bit(lo_fail_bit(IOMODE_RW), &lseg->pls_layout->plh_flags);
set_bit(lo_fail_bit(IOMODE_READ), &lseg->pls_layout->plh_flags);
return PNFS_NOT_ATTEMPTED;
@ -575,7 +620,7 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
goto out_err_free;
fl->fh_array[i]->size = be32_to_cpup(p++);
if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) {
printk(KERN_ERR "Too big fh %d received %d\n",
printk(KERN_ERR "NFS: Too big fh %d received %d\n",
i, fl->fh_array[i]->size);
goto out_err_free;
}
@ -640,14 +685,16 @@ filelayout_alloc_lseg(struct pnfs_layout_hdr *layoutid,
int size = (fl->stripe_type == STRIPE_SPARSE) ?
fl->dsaddr->ds_num : fl->dsaddr->stripe_count;
fl->commit_buckets = kcalloc(size, sizeof(struct list_head), gfp_flags);
fl->commit_buckets = kcalloc(size, sizeof(struct nfs4_fl_commit_bucket), gfp_flags);
if (!fl->commit_buckets) {
filelayout_free_lseg(&fl->generic_hdr);
return NULL;
}
fl->number_of_buckets = size;
for (i = 0; i < size; i++)
INIT_LIST_HEAD(&fl->commit_buckets[i]);
for (i = 0; i < size; i++) {
INIT_LIST_HEAD(&fl->commit_buckets[i].written);
INIT_LIST_HEAD(&fl->commit_buckets[i].committing);
}
}
return &fl->generic_hdr;
}
@ -679,7 +726,7 @@ filelayout_pg_test(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
return (p_stripe == r_stripe);
}
void
static void
filelayout_pg_init_read(struct nfs_pageio_descriptor *pgio,
struct nfs_page *req)
{
@ -696,7 +743,7 @@ filelayout_pg_init_read(struct nfs_pageio_descriptor *pgio,
nfs_pageio_reset_read_mds(pgio);
}
void
static void
filelayout_pg_init_write(struct nfs_pageio_descriptor *pgio,
struct nfs_page *req)
{
@ -725,11 +772,6 @@ static const struct nfs_pageio_ops filelayout_pg_write_ops = {
.pg_doio = pnfs_generic_pg_writepages,
};
static bool filelayout_mark_pnfs_commit(struct pnfs_layout_segment *lseg)
{
return !FILELAYOUT_LSEG(lseg)->commit_through_mds;
}
static u32 select_bucket_index(struct nfs4_filelayout_segment *fl, u32 j)
{
if (fl->stripe_type == STRIPE_SPARSE)
@ -738,13 +780,49 @@ static u32 select_bucket_index(struct nfs4_filelayout_segment *fl, u32 j)
return j;
}
struct list_head *filelayout_choose_commit_list(struct nfs_page *req)
/* The generic layer is about to remove the req from the commit list.
* If this will make the bucket empty, it will need to put the lseg reference.
*/
static void
filelayout_clear_request_commit(struct nfs_page *req)
{
struct pnfs_layout_segment *freeme = NULL;
struct inode *inode = req->wb_context->dentry->d_inode;
spin_lock(&inode->i_lock);
if (!test_and_clear_bit(PG_COMMIT_TO_DS, &req->wb_flags))
goto out;
if (list_is_singular(&req->wb_list)) {
struct inode *inode = req->wb_context->dentry->d_inode;
struct pnfs_layout_segment *lseg;
/* From here we can find the bucket, but for the moment,
* since there is only one relevant lseg...
*/
list_for_each_entry(lseg, &NFS_I(inode)->layout->plh_segs, pls_list) {
if (lseg->pls_range.iomode == IOMODE_RW) {
freeme = lseg;
break;
}
}
}
out:
nfs_request_remove_commit_list(req);
spin_unlock(&inode->i_lock);
put_lseg(freeme);
}
static struct list_head *
filelayout_choose_commit_list(struct nfs_page *req,
struct pnfs_layout_segment *lseg)
{
struct pnfs_layout_segment *lseg = req->wb_commit_lseg;
struct nfs4_filelayout_segment *fl = FILELAYOUT_LSEG(lseg);
u32 i, j;
struct list_head *list;
if (fl->commit_through_mds)
return &NFS_I(req->wb_context->dentry->d_inode)->commit_list;
/* Note that we are calling nfs4_fl_calc_j_index on each page
* that ends up being committed to a data server. An attractive
* alternative is to add a field to nfs_write_data and nfs_page
@ -754,14 +832,30 @@ struct list_head *filelayout_choose_commit_list(struct nfs_page *req)
j = nfs4_fl_calc_j_index(lseg,
(loff_t)req->wb_index << PAGE_CACHE_SHIFT);
i = select_bucket_index(fl, j);
list = &fl->commit_buckets[i];
list = &fl->commit_buckets[i].written;
if (list_empty(list)) {
/* Non-empty buckets hold a reference on the lseg */
/* Non-empty buckets hold a reference on the lseg. That ref
* is normally transferred to the COMMIT call and released
* there. It could also be released if the last req is pulled
* off due to a rewrite, in which case it will be done in
* filelayout_clear_request_commit
*/
get_lseg(lseg);
}
set_bit(PG_COMMIT_TO_DS, &req->wb_flags);
return list;
}
static void
filelayout_mark_request_commit(struct nfs_page *req,
struct pnfs_layout_segment *lseg)
{
struct list_head *list;
list = filelayout_choose_commit_list(req, lseg);
nfs_request_add_commit_list(req, list);
}
static u32 calc_ds_index_from_commit(struct pnfs_layout_segment *lseg, u32 i)
{
struct nfs4_filelayout_segment *flseg = FILELAYOUT_LSEG(lseg);
@ -797,11 +891,12 @@ static int filelayout_initiate_commit(struct nfs_write_data *data, int how)
idx = calc_ds_index_from_commit(lseg, data->ds_commit_index);
ds = nfs4_fl_prepare_ds(lseg, idx);
if (!ds) {
printk(KERN_ERR "%s: prepare_ds failed, use MDS\n", __func__);
printk(KERN_ERR "NFS: %s: prepare_ds failed, use MDS\n",
__func__);
set_bit(lo_fail_bit(IOMODE_RW), &lseg->pls_layout->plh_flags);
set_bit(lo_fail_bit(IOMODE_READ), &lseg->pls_layout->plh_flags);
prepare_to_resend_writes(data);
data->mds_ops->rpc_release(data);
filelayout_commit_release(data);
return -EAGAIN;
}
dprintk("%s ino %lu, how %d\n", __func__, data->inode->i_ino, how);
@ -817,24 +912,87 @@ static int filelayout_initiate_commit(struct nfs_write_data *data, int how)
/*
* This is only useful while we are using whole file layouts.
*/
static struct pnfs_layout_segment *find_only_write_lseg(struct inode *inode)
static struct pnfs_layout_segment *
find_only_write_lseg_locked(struct inode *inode)
{
struct pnfs_layout_segment *lseg, *rv = NULL;
struct pnfs_layout_segment *lseg;
spin_lock(&inode->i_lock);
list_for_each_entry(lseg, &NFS_I(inode)->layout->plh_segs, pls_list)
if (lseg->pls_range.iomode == IOMODE_RW)
rv = get_lseg(lseg);
return lseg;
return NULL;
}
static struct pnfs_layout_segment *find_only_write_lseg(struct inode *inode)
{
struct pnfs_layout_segment *rv;
spin_lock(&inode->i_lock);
rv = find_only_write_lseg_locked(inode);
if (rv)
get_lseg(rv);
spin_unlock(&inode->i_lock);
return rv;
}
static int alloc_ds_commits(struct inode *inode, struct list_head *list)
static int
filelayout_scan_ds_commit_list(struct nfs4_fl_commit_bucket *bucket, int max,
spinlock_t *lock)
{
struct list_head *src = &bucket->written;
struct list_head *dst = &bucket->committing;
struct nfs_page *req, *tmp;
int ret = 0;
list_for_each_entry_safe(req, tmp, src, wb_list) {
if (!nfs_lock_request(req))
continue;
if (cond_resched_lock(lock))
list_safe_reset_next(req, tmp, wb_list);
nfs_request_remove_commit_list(req);
clear_bit(PG_COMMIT_TO_DS, &req->wb_flags);
nfs_list_add_request(req, dst);
ret++;
if (ret == max)
break;
}
return ret;
}
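filelayout_scan_ds_commit_list() drains at most max requests from a bucket's written list onto its committing list and returns how many it moved. The same shape in a self-contained user-space sketch (plain singly linked lists, no request locking):

#include <stdio.h>

struct req {
    int         id;
    struct req *next;
};

struct bucket {
    struct req *written;	/* pending writeback completions */
    struct req *committing;	/* picked up for the next COMMIT */
};

/* Move at most "max" requests from written to committing, return count. */
static int scan_bucket(struct bucket *b, int max)
{
    int moved = 0;

    while (b->written && moved < max) {
        struct req *r = b->written;

        b->written = r->next;		/* pop from written ... */
        r->next = b->committing;	/* ... push onto committing */
        b->committing = r;
        moved++;
    }
    return moved;
}

int main(void)
{
    struct req r3 = { 3, NULL }, r2 = { 2, &r3 }, r1 = { 1, &r2 };
    struct bucket b = { &r1, NULL };

    printf("moved %d\n", scan_bucket(&b, 2));	/* prints "moved 2" */
    return 0;
}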
/* Move reqs from written to committing lists, returning count of number moved.
* Note called with i_lock held.
*/
static int filelayout_scan_commit_lists(struct inode *inode, int max,
spinlock_t *lock)
{
struct pnfs_layout_segment *lseg;
struct nfs4_filelayout_segment *fl;
int i, rv = 0, cnt;
lseg = find_only_write_lseg_locked(inode);
if (!lseg)
goto out_done;
fl = FILELAYOUT_LSEG(lseg);
if (fl->commit_through_mds)
goto out_done;
for (i = 0; i < fl->number_of_buckets && max != 0; i++) {
cnt = filelayout_scan_ds_commit_list(&fl->commit_buckets[i],
max, lock);
max -= cnt;
rv += cnt;
}
out_done:
return rv;
}
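
filelayout_scan_ds_commit_list() moves at most max requests from a bucket's written list to its committing list and reports how many it moved; filelayout_scan_commit_lists() then repeats that per bucket, shrinking the budget as it goes. A standalone sketch of that budgeted move between two lists; the simple singly linked list stands in for the kernel's list_head.

#include <stdio.h>

struct req { int id; struct req *next; };

/* Move up to max requests from *src to *dst (pushing onto the front of *dst
 * for simplicity); return how many were moved. */
static int scan_list(struct req **src, struct req **dst, int max)
{
	int moved = 0;

	while (*src && moved < max) {
		struct req *r = *src;

		*src = r->next;
		r->next = *dst;
		*dst = r;
		moved++;
	}
	return moved;
}

int main(void)
{
	struct req reqs[5];
	struct req *written = NULL, *committing = NULL;

	for (int i = 4; i >= 0; i--) {
		reqs[i].id = i;
		reqs[i].next = written;
		written = &reqs[i];
	}

	int max = 3;
	int moved = scan_list(&written, &committing, max);

	printf("moved %d requests, %d budget left\n", moved, max - moved);
	return 0;
}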
static unsigned int
alloc_ds_commits(struct inode *inode, struct list_head *list)
{
struct pnfs_layout_segment *lseg;
struct nfs4_filelayout_segment *fl;
struct nfs_write_data *data;
int i, j;
unsigned int nreq = 0;
/* Won't need this when non-whole file layout segments are supported
* instead we will use a pnfs_layout_hdr structure */
@ -843,28 +1001,27 @@ static int alloc_ds_commits(struct inode *inode, struct list_head *list)
return 0;
fl = FILELAYOUT_LSEG(lseg);
for (i = 0; i < fl->number_of_buckets; i++) {
if (list_empty(&fl->commit_buckets[i]))
if (list_empty(&fl->commit_buckets[i].committing))
continue;
data = nfs_commitdata_alloc();
if (!data)
goto out_bad;
break;
data->ds_commit_index = i;
data->lseg = lseg;
list_add(&data->pages, list);
nreq++;
}
put_lseg(lseg);
return 0;
out_bad:
/* Clean up on error */
for (j = i; j < fl->number_of_buckets; j++) {
if (list_empty(&fl->commit_buckets[i]))
if (list_empty(&fl->commit_buckets[i].committing))
continue;
nfs_retry_commit(&fl->commit_buckets[i], lseg);
nfs_retry_commit(&fl->commit_buckets[i].committing, lseg);
put_lseg(lseg); /* associated with emptying bucket */
}
put_lseg(lseg);
/* Caller will clean up entries put on list */
return -ENOMEM;
return nreq;
}
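
With the reworked error handling, alloc_ds_commits() no longer unwinds on allocation failure: it simply stops and returns the number of commit descriptors it did manage to set up, and the caller copes with a shortfall. A hedged sketch of that "allocate what you can, report the count" pattern; the bucket contents and descriptor type are made up for illustration.

#include <stdio.h>
#include <stdlib.h>

struct commit_data { int bucket_index; };

/* One entry per data-server bucket; nonzero means there is work to commit. */
static int bucket_has_work[4] = { 1, 0, 1, 1 };

/* Allocate a commit descriptor for every non-empty bucket.  On allocation
 * failure, stop early and return how many descriptors were prepared. */
static unsigned int alloc_commits(struct commit_data **out, unsigned int nbuckets)
{
	unsigned int nreq = 0;

	for (unsigned int i = 0; i < nbuckets; i++) {
		if (!bucket_has_work[i])
			continue;
		struct commit_data *data = malloc(sizeof(*data));

		if (!data)
			break;              /* partial result; caller copes */
		data->bucket_index = i;
		out[nreq++] = data;
	}
	return nreq;
}

int main(void)
{
	struct commit_data *out[4];
	unsigned int n = alloc_commits(out, 4);

	printf("prepared %u commit(s)\n", n);
	for (unsigned int i = 0; i < n; i++)
		free(out[i]);
	return 0;
}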
/* This follows nfs_commit_list pretty closely */
@ -874,40 +1031,40 @@ filelayout_commit_pagelist(struct inode *inode, struct list_head *mds_pages,
{
struct nfs_write_data *data, *tmp;
LIST_HEAD(list);
unsigned int nreq = 0;
if (!list_empty(mds_pages)) {
data = nfs_commitdata_alloc();
if (!data)
goto out_bad;
data->lseg = NULL;
list_add(&data->pages, &list);
if (data != NULL) {
data->lseg = NULL;
list_add(&data->pages, &list);
nreq++;
} else
nfs_retry_commit(mds_pages, NULL);
}
if (alloc_ds_commits(inode, &list))
goto out_bad;
nreq += alloc_ds_commits(inode, &list);
if (nreq == 0) {
nfs_commit_clear_lock(NFS_I(inode));
goto out;
}
atomic_add(nreq, &NFS_I(inode)->commits_outstanding);
list_for_each_entry_safe(data, tmp, &list, pages) {
list_del_init(&data->pages);
atomic_inc(&NFS_I(inode)->commits_outstanding);
if (!data->lseg) {
nfs_init_commit(data, mds_pages, NULL);
nfs_initiate_commit(data, NFS_CLIENT(inode),
data->mds_ops, how);
} else {
nfs_init_commit(data, &FILELAYOUT_LSEG(data->lseg)->commit_buckets[data->ds_commit_index], data->lseg);
nfs_init_commit(data, &FILELAYOUT_LSEG(data->lseg)->commit_buckets[data->ds_commit_index].committing, data->lseg);
filelayout_initiate_commit(data, how);
}
}
return 0;
out_bad:
list_for_each_entry_safe(data, tmp, &list, pages) {
nfs_retry_commit(&data->pages, data->lseg);
list_del_init(&data->pages);
nfs_commit_free(data);
}
nfs_retry_commit(mds_pages, NULL);
nfs_commit_clear_lock(NFS_I(inode));
return -ENOMEM;
out:
return PNFS_ATTEMPTED;
}
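
filelayout_commit_pagelist() now counts every commit it is about to send (the MDS one plus one per data-server bucket), bumps the outstanding-commit counter by that total in one step, and only clears the commit lock when nothing at all was queued. A standalone sketch of that accounting; the atomic counter and dispatch stub are illustrative, not the kernel's helpers.

#include <stdatomic.h>
#include <stdio.h>

static atomic_int commits_outstanding;

static void dispatch_commit(int idx)
{
	/* stand-in for nfs_initiate_commit()/filelayout_initiate_commit() */
	printf("dispatching commit %d\n", idx);
}

static int commit_pagelist(int mds_work, int ds_buckets_with_work)
{
	int nreq = (mds_work ? 1 : 0) + ds_buckets_with_work;

	if (nreq == 0) {
		/* nothing queued: release the commit lock immediately */
		printf("nothing to commit, clearing commit lock\n");
		return 0;
	}

	/* account for all commits up front, then send them */
	atomic_fetch_add(&commits_outstanding, nreq);
	for (int i = 0; i < nreq; i++)
		dispatch_commit(i);
	return nreq;
}

int main(void)
{
	commit_pagelist(1, 2);
	printf("outstanding: %d\n", atomic_load(&commits_outstanding));
	commit_pagelist(0, 0);
	return 0;
}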
static void
@ -924,8 +1081,9 @@ static struct pnfs_layoutdriver_type filelayout_type = {
.free_lseg = filelayout_free_lseg,
.pg_read_ops = &filelayout_pg_read_ops,
.pg_write_ops = &filelayout_pg_write_ops,
.mark_pnfs_commit = filelayout_mark_pnfs_commit,
.choose_commit_list = filelayout_choose_commit_list,
.mark_request_commit = filelayout_mark_request_commit,
.clear_request_commit = filelayout_clear_request_commit,
.scan_commit_lists = filelayout_scan_commit_lists,
.commit_pagelist = filelayout_commit_pagelist,
.read_pagelist = filelayout_read_pagelist,
.write_pagelist = filelayout_write_pagelist,

View File

@ -74,6 +74,11 @@ struct nfs4_file_layout_dsaddr {
struct nfs4_pnfs_ds *ds_list[1];
};
struct nfs4_fl_commit_bucket {
struct list_head written;
struct list_head committing;
};
struct nfs4_filelayout_segment {
struct pnfs_layout_segment generic_hdr;
u32 stripe_type;
@ -84,7 +89,7 @@ struct nfs4_filelayout_segment {
struct nfs4_file_layout_dsaddr *dsaddr; /* Point to GETDEVINFO data */
unsigned int num_fh;
struct nfs_fh **fh_array;
struct list_head *commit_buckets; /* Sort commits to ds */
struct nfs4_fl_commit_bucket *commit_buckets; /* Sort commits to ds */
int number_of_buckets;
};
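
The header change replaces the single per-bucket list with struct nfs4_fl_commit_bucket, pairing a written list (requests waiting for a future COMMIT) with a committing list (requests already claimed by an in-flight COMMIT), so new writes cannot mix into a commit that is already on the wire. A minimal illustration of allocating and initializing such buckets; the list type here is a userspace stand-in, not the kernel's list_head.

#include <stdio.h>
#include <stdlib.h>

struct list_node { struct list_node *prev, *next; };

static void list_init(struct list_node *head)
{
	head->prev = head->next = head;
}

/* Each data server gets one bucket with two lists: newly written requests
 * land on 'written'; a commit scan moves them to 'committing'. */
struct commit_bucket {
	struct list_node written;
	struct list_node committing;
};

static struct commit_bucket *alloc_buckets(int n)
{
	struct commit_bucket *b = calloc(n, sizeof(*b));

	if (!b)
		return NULL;
	for (int i = 0; i < n; i++) {
		list_init(&b[i].written);
		list_init(&b[i].committing);
	}
	return b;
}

int main(void)
{
	struct commit_bucket *buckets = alloc_buckets(4);

	if (!buckets)
		return 1;
	printf("initialized %d commit buckets\n", 4);
	free(buckets);
	return 0;
}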

View File

@ -45,7 +45,7 @@
* - incremented when a device id maps a data server already in the cache.
* - decremented when deviceid is removed from the cache.
*/
DEFINE_SPINLOCK(nfs4_ds_cache_lock);
static DEFINE_SPINLOCK(nfs4_ds_cache_lock);
static LIST_HEAD(nfs4_data_server_cache);
/* Debug routines */
@ -108,58 +108,40 @@ same_sockaddr(struct sockaddr *addr1, struct sockaddr *addr2)
return false;
}
/*
* Lookup DS by addresses. The first matching address returns true.
* nfs4_ds_cache_lock is held
*/
static struct nfs4_pnfs_ds *
_data_server_lookup_locked(struct list_head *dsaddrs)
static bool
_same_data_server_addrs_locked(const struct list_head *dsaddrs1,
const struct list_head *dsaddrs2)
{
struct nfs4_pnfs_ds *ds;
struct nfs4_pnfs_ds_addr *da1, *da2;
list_for_each_entry(da1, dsaddrs, da_node) {
list_for_each_entry(ds, &nfs4_data_server_cache, ds_node) {
list_for_each_entry(da2, &ds->ds_addrs, da_node) {
if (same_sockaddr(
(struct sockaddr *)&da1->da_addr,
(struct sockaddr *)&da2->da_addr))
return ds;
}
}
/* step through both lists, comparing as we go */
for (da1 = list_first_entry(dsaddrs1, typeof(*da1), da_node),
da2 = list_first_entry(dsaddrs2, typeof(*da2), da_node);
da1 != NULL && da2 != NULL;
da1 = list_entry(da1->da_node.next, typeof(*da1), da_node),
da2 = list_entry(da2->da_node.next, typeof(*da2), da_node)) {
if (!same_sockaddr((struct sockaddr *)&da1->da_addr,
(struct sockaddr *)&da2->da_addr))
return false;
}
return NULL;
if (da1 == NULL && da2 == NULL)
return true;
return false;
}
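
_same_data_server_addrs_locked() walks the two address lists in lockstep and reports a match only when both lists end together with every pair equal, replacing the old "any single address matches" test. A standalone sketch of that lockstep comparison; plain arrays of strings stand in for the sockaddr lists.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Two address lists match only if they have the same length and every
 * position compares equal -- walking both with a pair of cursors, as the
 * kernel code does with its two list iterators. */
static bool same_addr_lists(const char **a, int alen, const char **b, int blen)
{
	int i = 0, j = 0;

	while (i < alen && j < blen) {
		if (strcmp(a[i], b[j]) != 0)
			return false;
		i++;
		j++;
	}
	return i == alen && j == blen;   /* both exhausted together */
}

int main(void)
{
	const char *ds1[] = { "192.0.2.1", "192.0.2.2" };
	const char *ds2[] = { "192.0.2.1", "192.0.2.2" };
	const char *ds3[] = { "192.0.2.1" };

	printf("ds1 vs ds2: %d\n", same_addr_lists(ds1, 2, ds2, 2));  /* 1 */
	printf("ds1 vs ds3: %d\n", same_addr_lists(ds1, 2, ds3, 1));  /* 0 */
	return 0;
}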
/*
* Compare two lists of addresses.
* Lookup DS by addresses. nfs4_ds_cache_lock is held
*/
static bool
_data_server_match_all_addrs_locked(struct list_head *dsaddrs1,
struct list_head *dsaddrs2)
static struct nfs4_pnfs_ds *
_data_server_lookup_locked(const struct list_head *dsaddrs)
{
struct nfs4_pnfs_ds_addr *da1, *da2;
size_t count1 = 0,
count2 = 0;
struct nfs4_pnfs_ds *ds;
list_for_each_entry(da1, dsaddrs1, da_node)
count1++;
list_for_each_entry(da2, dsaddrs2, da_node) {
bool found = false;
count2++;
list_for_each_entry(da1, dsaddrs1, da_node) {
if (same_sockaddr((struct sockaddr *)&da1->da_addr,
(struct sockaddr *)&da2->da_addr)) {
found = true;
break;
}
}
if (!found)
return false;
}
return (count1 == count2);
list_for_each_entry(ds, &nfs4_data_server_cache, ds_node)
if (_same_data_server_addrs_locked(&ds->ds_addrs, dsaddrs))
return ds;
return NULL;
}
/*
@ -356,11 +338,6 @@ nfs4_pnfs_ds_add(struct list_head *dsaddrs, gfp_t gfp_flags)
dprintk("%s add new data server %s\n", __func__,
ds->ds_remotestr);
} else {
if (!_data_server_match_all_addrs_locked(&tmp_ds->ds_addrs,
dsaddrs)) {
dprintk("%s: multipath address mismatch: %s != %s",
__func__, tmp_ds->ds_remotestr, remotestr);
}
kfree(remotestr);
kfree(ds);
atomic_inc(&tmp_ds->ds_count);
@ -378,7 +355,7 @@ out:
* Currently only supports ipv4, ipv6 and one multi-path address.
*/
static struct nfs4_pnfs_ds_addr *
decode_ds_addr(struct xdr_stream *streamp, gfp_t gfp_flags)
decode_ds_addr(struct net *net, struct xdr_stream *streamp, gfp_t gfp_flags)
{
struct nfs4_pnfs_ds_addr *da = NULL;
char *buf, *portstr;
@ -457,7 +434,7 @@ decode_ds_addr(struct xdr_stream *streamp, gfp_t gfp_flags)
INIT_LIST_HEAD(&da->da_node);
if (!rpc_pton(buf, portstr-buf, (struct sockaddr *)&da->da_addr,
if (!rpc_pton(net, buf, portstr-buf, (struct sockaddr *)&da->da_addr,
sizeof(da->da_addr))) {
dprintk("%s: error parsing address %s\n", __func__, buf);
goto out_free_da;
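
decode_ds_addr() splits the data-server address string into a host part and a port part before handing the host part to rpc_pton(), which now resolves in the right network namespace. Assuming the usual NFSv4 universal-address form "a.b.c.d.p1.p2", where the port is p1*256+p2, a userspace sketch of that split-and-parse step might look like this; the helper name is invented and the format is an assumption, not quoted from this commit.

#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse "a.b.c.d.p1.p2" into an IPv4 address and a port number.
 * Returns 0 on success, -1 on a malformed string. */
static int parse_universal_addr(const char *uaddr, struct in_addr *addr, int *port)
{
	char buf[64];
	char *p1, *p2;

	if (strlen(uaddr) >= sizeof(buf))
		return -1;
	strcpy(buf, uaddr);

	/* the last two dot-separated fields are the port bytes */
	p2 = strrchr(buf, '.');
	if (!p2)
		return -1;
	*p2++ = '\0';
	p1 = strrchr(buf, '.');
	if (!p1)
		return -1;
	*p1++ = '\0';

	if (inet_pton(AF_INET, buf, addr) != 1)
		return -1;
	*port = atoi(p1) * 256 + atoi(p2);
	return 0;
}

int main(void)
{
	struct in_addr addr;
	int port;

	if (parse_universal_addr("192.0.2.7.8.1", &addr, &port) == 0)
		printf("ds at %s port %d\n", inet_ntoa(addr), port);  /* 2049 */
	return 0;
}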
@ -554,7 +531,7 @@ decode_device(struct inode *ino, struct pnfs_device *pdev, gfp_t gfp_flags)
cnt = be32_to_cpup(p);
dprintk("%s stripe count %d\n", __func__, cnt);
if (cnt > NFS4_PNFS_MAX_STRIPE_CNT) {
printk(KERN_WARNING "%s: stripe count %d greater than "
printk(KERN_WARNING "NFS: %s: stripe count %d greater than "
"supported maximum %d\n", __func__,
cnt, NFS4_PNFS_MAX_STRIPE_CNT);
goto out_err_free_scratch;
@ -585,7 +562,7 @@ decode_device(struct inode *ino, struct pnfs_device *pdev, gfp_t gfp_flags)
num = be32_to_cpup(p);
dprintk("%s ds_num %u\n", __func__, num);
if (num > NFS4_PNFS_MAX_MULTI_CNT) {
printk(KERN_WARNING "%s: multipath count %d greater than "
printk(KERN_WARNING "NFS: %s: multipath count %d greater than "
"supported maximum %d\n", __func__,
num, NFS4_PNFS_MAX_MULTI_CNT);
goto out_err_free_stripe_indices;
@ -593,7 +570,7 @@ decode_device(struct inode *ino, struct pnfs_device *pdev, gfp_t gfp_flags)
/* validate stripe indices are all < num */
if (max_stripe_index >= num) {
printk(KERN_WARNING "%s: stripe index %u >= num ds %u\n",
printk(KERN_WARNING "NFS: %s: stripe index %u >= num ds %u\n",
__func__, max_stripe_index, num);
goto out_err_free_stripe_indices;
}
@ -625,7 +602,8 @@ decode_device(struct inode *ino, struct pnfs_device *pdev, gfp_t gfp_flags)
mp_count = be32_to_cpup(p); /* multipath count */
for (j = 0; j < mp_count; j++) {
da = decode_ds_addr(&stream, gfp_flags);
da = decode_ds_addr(NFS_SERVER(ino)->nfs_client->net,
&stream, gfp_flags);
if (da)
list_add_tail(&da->da_node, &dsaddrs);
}
@ -686,7 +664,7 @@ decode_and_add_device(struct inode *inode, struct pnfs_device *dev, gfp_t gfp_fl
new = decode_device(inode, dev, gfp_flags);
if (!new) {
printk(KERN_WARNING "%s: Could not decode or add device\n",
printk(KERN_WARNING "NFS: %s: Could not decode or add device\n",
__func__);
return NULL;
}
@ -835,7 +813,7 @@ nfs4_fl_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx)
struct nfs4_pnfs_ds *ds = dsaddr->ds_list[ds_idx];
if (ds == NULL) {
printk(KERN_ERR "%s: No data server for offset index %d\n",
printk(KERN_ERR "NFS: %s: No data server for offset index %d\n",
__func__, ds_idx);
return NULL;
}

View File

@ -94,13 +94,14 @@ static int nfs4_validate_fspath(struct dentry *dentry,
}
static size_t nfs_parse_server_name(char *string, size_t len,
struct sockaddr *sa, size_t salen)
struct sockaddr *sa, size_t salen, struct nfs_server *server)
{
struct net *net = rpc_net_ns(server->client);
ssize_t ret;
ret = rpc_pton(string, len, sa, salen);
ret = rpc_pton(net, string, len, sa, salen);
if (ret == 0) {
ret = nfs_dns_resolve_name(string, len, sa, salen);
ret = nfs_dns_resolve_name(net, string, len, sa, salen);
if (ret < 0)
ret = 0;
}
@ -137,7 +138,8 @@ static struct vfsmount *try_location(struct nfs_clone_mount *mountdata,
continue;
mountdata->addrlen = nfs_parse_server_name(buf->data, buf->len,
mountdata->addr, addr_bufsize);
mountdata->addr, addr_bufsize,
NFS_SB(mountdata->sb));
if (mountdata->addrlen == 0)
continue;
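
nfs_parse_server_name() first tries to read the referral string as a literal IP address (rpc_pton, now per-namespace) and only falls back to a DNS lookup when that fails. A rough userspace equivalent of that try-literal-then-resolve order; getaddrinfo stands in for nfs_dns_resolve_name and the helper name is invented.

#include <arpa/inet.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

/* Fill *out from 'name': first as a literal IPv4 address, then via DNS.
 * Returns the address length on success, 0 on failure (mirroring the
 * "0 means could not parse" convention of the kernel helper). */
static size_t parse_server_name(const char *name, struct sockaddr_in *out)
{
	struct addrinfo hints, *res;

	memset(out, 0, sizeof(*out));
	out->sin_family = AF_INET;

	/* literal address? */
	if (inet_pton(AF_INET, name, &out->sin_addr) == 1)
		return sizeof(*out);

	/* fall back to the resolver */
	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_INET;
	if (getaddrinfo(name, NULL, &hints, &res) != 0)
		return 0;
	memcpy(out, res->ai_addr, sizeof(*out));
	freeaddrinfo(res);
	return sizeof(*out);
}

int main(void)
{
	struct sockaddr_in sa;
	char buf[INET_ADDRSTRLEN];

	if (parse_server_name("192.0.2.10", &sa)) {
		inet_ntop(AF_INET, &sa.sin_addr, buf, sizeof(buf));
		printf("parsed %s\n", buf);
	}
	return 0;
}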

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff