
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull first round of SCSI updates from James Bottomley:
 "This patch set includes two large new drivers: mpt3sas (for the next
  gen fusion SAS hardware) and csiostor, a FCoE offload driver for the
  Chelsio converged network cards (this includes some net changes which
  I've OK'd with DaveM).

  The rest of the patch is driver updates (qla2xxx, lpfc, hptiop,
  be2iscsi) plus a few assorted updates and bug fixes.

  We also have a Power Management rework in the Upper Layer Drivers
  preparatory to doing ACPI zero power optical devices, but the actual
  enabler is still being worked on.

  Signed-off-by: James Bottomley <JBottomley@Parallels.com>"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (72 commits)
  [SCSI] mpt3sas: add new driver supporting 12GB SAS
  [SCSI] scsi_transport_sas: add 12GB definitions for mpt3sas
  [SCSI] miscdevice: Adding support for MPT3SAS_MINOR(222)
  [SCSI] csiostor: remove unneeded memset()
  [SCSI] csiostor: Fix sparse warnings.
  [SCSI] qla2xxx: Display that driver is operating in legacy interrupt mode.
  [SCSI] qla2xxx: Dont clear drv active on iospace config failure.
  [SCSI] qla2xxx: Fix typo in qla2xxx driver.
  [SCSI] qla2xxx: Update ql2xextended_error_logging parameter description with new option.
  [SCSI] qla2xxx: Parameterize the link speed of hba rather than fcport.
  [SCSI] qla2xxx: Add 16Gb/s case to get port speed capability.
  [SCSI] qla2xxx: Move marking fcport online ahead of setting iiDMA speed.
  [SCSI] qla2xxx: Add acquiring of risc semaphore before doing ISP reset.
  [SCSI] qla2xxx: Ignore driver ack bit if corresponding presence bit is not set.
  [SCSI] qla2xxx: Fix typo in qla83xx_fw_dump function.
  [SCSI] qla2xxx: Add Gen3 PCIe speed 8GT/s to the log message.
  [SCSI] qla2xxx: Use correct Request-Q-Out register during bidirectional request processing
  [SCSI] qla2xxx: Move noisy Start scsi failed messages to verbose logging level.
  [SCSI] qla2xxx: Fix coccinelle warnings in qla2x00_relogin.
  [SCSI] qla2xxx: No fcport FC-4 type assignment in GA_NXT response.
  ...
Linus Torvalds 2012-12-13 19:20:31 -08:00
commit e777d192ff
105 changed files with 52359 additions and 1011 deletions


@ -37,7 +37,7 @@ For Intel IOP based adapters, the controller IOP is accessed via PCI BAR0:
0x40 Inbound Queue Port
0x44 Outbound Queue Port
For Marvell IOP based adapters, the IOP is accessed via PCI BAR0 and BAR1:
For Marvell non-Frey IOP based adapters, the IOP is accessed via PCI BAR0 and BAR1:
BAR0 offset Register
0x20400 Inbound Doorbell Register
@ -55,9 +55,31 @@ For Marvell IOP based adapters, the IOP is accessed via PCI BAR0 and BAR1:
0x40-0x1040 Inbound Queue
0x1040-0x2040 Outbound Queue
For Marvell Frey IOP based adapters, the IOP is accessed via PCI BAR0 and BAR1:
I/O Request Workflow
----------------------
BAR0 offset Register
0x0 IOP configuration information.
BAR1 offset Register
0x4000 Inbound List Base Address Low
0x4004 Inbound List Base Address High
0x4018 Inbound List Write Pointer
0x402C Inbound List Configuration and Control
0x4050 Outbound List Base Address Low
0x4054 Outbound List Base Address High
0x4058 Outbound List Copy Pointer Shadow Base Address Low
0x405C Outbound List Copy Pointer Shadow Base Address High
0x4088 Outbound List Interrupt Cause
0x408C Outbound List Interrupt Enable
0x1020C PCIe Function 0 Interrupt Enable
0x10400 PCIe Function 0 to CPU Message A
0x10420 CPU to PCIe Function 0 Message A
0x10480 CPU to PCIe Function 0 Doorbell
0x10484 CPU to PCIe Function 0 Doorbell Enable
I/O Request Workflow of Non-Marvell-Frey Adapters
------------------------------------------
All queued requests are handled via inbound/outbound queue port.
A request packet can be allocated in either IOP or host memory.
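As a rough sketch of this queue-port model (illustrative only, not part of
this patch: iop is assumed to be the ioremap()ed BAR0 of an Intel IOP based
adapter, using the 0x40/0x44 offsets from the table above):

	/* Post a host-memory request by writing its bus address to the
	 * Inbound Queue Port; completions are read back from the
	 * Outbound Queue Port. */
	static void hptiop_post_req(void __iomem *iop, u32 req_bus_addr)
	{
		writel(req_bus_addr, iop + 0x40);	/* Inbound Queue Port */
	}

	static u32 hptiop_read_completion(void __iomem *iop)
	{
		return readl(iop + 0x44);		/* Outbound Queue Port */
	}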
@ -101,6 +123,45 @@ register 0. An outbound message with the same value indicates the completion
of an inbound message.
I/O Request Workflow of Marvell Frey
--------------------------------------
All queued requests are handled via inbound/outbound list.
To send a request to the controller:
- Allocate a free request in host DMA coherent memory.
  Requests allocated in host memory must be aligned on a 32-byte boundary.
- Fill the request, storing the request's index in its flag field.
  Fill a free inbound list unit with the physical address and the size of
  the request.
  Set up the inbound list write pointer with the index of the previous unit,
  wrapping to 0 if the index reaches the supported count of requests.
- Post the inbound list write pointer to the IOP.
- The IOP processes the request. When the request is completed, the flag of
  the request, OR-ed with IOPMU_QUEUE_MASK_HOST_BITS, is put into a free
  outbound list unit, and the index of that outbound list unit is put into
  the copy pointer shadow register. An outbound interrupt is generated.
- The host reads the outbound list copy pointer shadow register and compares
  it with the previously saved read pointer N. If they differ, the host
  reads the (N+1)th outbound list unit, gets the index of the request from
  that unit, and completes the request.

Non-queued requests (reset communication/reset/flush etc.) can be sent via the
PCIe Function 0 to CPU Message A register. The CPU to PCIe Function 0 Message A
register carrying the same value indicates completion of the message.
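Condensing the queued-request steps above into code, a minimal sketch might
look as follows (the frey_unit layout, the complete_request() helper, and the
wrap arithmetic are illustrative assumptions; the register offsets come from
the BAR1 table above):

	struct frey_unit { u32 addr_lo, addr_hi, size, flags; }; /* assumed */

	/* Fill the next inbound list unit and publish the write pointer
	 * through BAR1 + 0x4018 (Inbound List Write Pointer). */
	static void frey_post(void __iomem *bar1, struct frey_unit *inlist,
			      u32 *wptr, u32 count, u32 req_phys, u32 req_size)
	{
		inlist[*wptr].addr_lo = req_phys;
		inlist[*wptr].size = req_size;
		writel(*wptr, bar1 + 0x4018);
		*wptr = (*wptr + 1) % count;	/* wrap to 0 at the end */
	}

	/* Walk from the saved read pointer up to the value found in the
	 * copy pointer shadow area (whose base was programmed through
	 * BAR1 + 0x4058/0x405C); each outbound entry carries the request
	 * flag (index OR-ed with IOPMU_QUEUE_MASK_HOST_BITS). */
	static void frey_reap(u32 shadow, u32 *read_ptr, u32 *outlist, u32 count)
	{
		while (*read_ptr != shadow) {
			u32 n = (*read_ptr + 1) % count;
			complete_request(outlist[n]);	/* assumed helper */
			*read_ptr = n;
		}
	}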
User-level Interface
---------------------
@ -112,7 +173,7 @@ The driver exposes following sysfs attributes:
-----------------------------------------------------------------------------
Copyright (C) 2006-2009 HighPoint Technologies, Inc. All Rights Reserved.
Copyright (C) 2006-2012 HighPoint Technologies, Inc. All Rights Reserved.
This file is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of


@ -4655,13 +4655,16 @@ S: Maintained
F: fs/logfs/
LSILOGIC MPT FUSION DRIVERS (FC/SAS/SPI)
M: Eric Moore <Eric.Moore@lsi.com>
M: Nagalakshmi Nandigama <Nagalakshmi.Nandigama@lsi.com>
M: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
M: support@lsi.com
L: DL-MPTFusionLinux@lsi.com
L: linux-scsi@vger.kernel.org
W: http://www.lsilogic.com/support
S: Supported
F: drivers/message/fusion/
F: drivers/scsi/mpt2sas/
F: drivers/scsi/mpt3sas/
LSILOGIC/SYMBIOS/NCR 53C8XX and 53C1010 PCI-SCSI drivers
M: Matthew Wilcox <matthew@wil.cx>


@ -3203,7 +3203,7 @@ static int adap_init1(struct adapter *adap, struct fw_caps_config_cmd *c)
memset(c, 0, sizeof(*c));
c->op_to_write = htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
FW_CMD_REQUEST | FW_CMD_READ);
c->retval_len16 = htonl(FW_LEN16(*c));
c->cfvalid_to_len16 = htonl(FW_LEN16(*c));
ret = t4_wr_mbox(adap, adap->fn, c, sizeof(*c), c);
if (ret < 0)
return ret;
@ -3397,7 +3397,7 @@ static int adap_init0_config(struct adapter *adapter, int reset)
htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
FW_CMD_REQUEST |
FW_CMD_READ);
caps_cmd.retval_len16 =
caps_cmd.cfvalid_to_len16 =
htonl(FW_CAPS_CONFIG_CMD_CFVALID |
FW_CAPS_CONFIG_CMD_MEMTYPE_CF(mtype) |
FW_CAPS_CONFIG_CMD_MEMADDR64K_CF(maddr >> 16) |
@ -3422,7 +3422,7 @@ static int adap_init0_config(struct adapter *adapter, int reset)
htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
FW_CMD_REQUEST |
FW_CMD_WRITE);
caps_cmd.retval_len16 = htonl(FW_LEN16(caps_cmd));
caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd));
ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, sizeof(caps_cmd),
NULL);
if (ret < 0)
@ -3497,7 +3497,7 @@ static int adap_init0_no_config(struct adapter *adapter, int reset)
memset(&caps_cmd, 0, sizeof(caps_cmd));
caps_cmd.op_to_write = htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
FW_CMD_REQUEST | FW_CMD_READ);
caps_cmd.retval_len16 = htonl(FW_LEN16(caps_cmd));
caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd));
ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, sizeof(caps_cmd),
&caps_cmd);
if (ret < 0)
@ -3929,7 +3929,7 @@ static int adap_init0(struct adapter *adap)
memset(&caps_cmd, 0, sizeof(caps_cmd));
caps_cmd.op_to_write = htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
FW_CMD_REQUEST | FW_CMD_READ);
caps_cmd.retval_len16 = htonl(FW_LEN16(caps_cmd));
caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd));
ret = t4_wr_mbox(adap, adap->mbox, &caps_cmd, sizeof(caps_cmd),
&caps_cmd);
if (ret < 0)


@ -508,7 +508,7 @@ static inline void ring_fl_db(struct adapter *adap, struct sge_fl *q)
{
if (q->pend_cred >= 8) {
wmb();
t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), DBPRIO |
t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), DBPRIO(1) |
QID(q->cntxt_id) | PIDX(q->pend_cred / 8));
q->pend_cred &= 7;
}
@ -2082,10 +2082,10 @@ int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq,
goto fl_nomem;
flsz = fl->size / 8 + s->stat_len / sizeof(struct tx_desc);
c.iqns_to_fl0congen = htonl(FW_IQ_CMD_FL0PACKEN |
c.iqns_to_fl0congen = htonl(FW_IQ_CMD_FL0PACKEN(1) |
FW_IQ_CMD_FL0FETCHRO(1) |
FW_IQ_CMD_FL0DATARO(1) |
FW_IQ_CMD_FL0PADEN);
FW_IQ_CMD_FL0PADEN(1));
c.fl0dcaen_to_fl0cidxfthresh = htons(FW_IQ_CMD_FL0FBMIN(2) |
FW_IQ_CMD_FL0FBMAX(3));
c.fl0size = htons(flsz);


@ -648,12 +648,12 @@ static int sf1_read(struct adapter *adapter, unsigned int byte_cnt, int cont,
if (!byte_cnt || byte_cnt > 4)
return -EINVAL;
if (t4_read_reg(adapter, SF_OP) & BUSY)
if (t4_read_reg(adapter, SF_OP) & SF_BUSY)
return -EBUSY;
cont = cont ? SF_CONT : 0;
lock = lock ? SF_LOCK : 0;
t4_write_reg(adapter, SF_OP, lock | cont | BYTECNT(byte_cnt - 1));
ret = t4_wait_op_done(adapter, SF_OP, BUSY, 0, SF_ATTEMPTS, 5);
ret = t4_wait_op_done(adapter, SF_OP, SF_BUSY, 0, SF_ATTEMPTS, 5);
if (!ret)
*valp = t4_read_reg(adapter, SF_DATA);
return ret;
@ -676,14 +676,14 @@ static int sf1_write(struct adapter *adapter, unsigned int byte_cnt, int cont,
{
if (!byte_cnt || byte_cnt > 4)
return -EINVAL;
if (t4_read_reg(adapter, SF_OP) & BUSY)
if (t4_read_reg(adapter, SF_OP) & SF_BUSY)
return -EBUSY;
cont = cont ? SF_CONT : 0;
lock = lock ? SF_LOCK : 0;
t4_write_reg(adapter, SF_DATA, val);
t4_write_reg(adapter, SF_OP, lock |
cont | BYTECNT(byte_cnt - 1) | OP_WR);
return t4_wait_op_done(adapter, SF_OP, BUSY, 0, SF_ATTEMPTS, 5);
return t4_wait_op_done(adapter, SF_OP, SF_BUSY, 0, SF_ATTEMPTS, 5);
}
/**
@ -2252,14 +2252,14 @@ int t4_wol_pat_enable(struct adapter *adap, unsigned int port, unsigned int map,
t4_write_reg(adap, EPIO_REG(DATA0), mask0);
t4_write_reg(adap, EPIO_REG(OP), ADDRESS(i) | EPIOWR);
t4_read_reg(adap, EPIO_REG(OP)); /* flush */
if (t4_read_reg(adap, EPIO_REG(OP)) & BUSY)
if (t4_read_reg(adap, EPIO_REG(OP)) & SF_BUSY)
return -ETIMEDOUT;
/* write CRC */
t4_write_reg(adap, EPIO_REG(DATA0), crc);
t4_write_reg(adap, EPIO_REG(OP), ADDRESS(i + 32) | EPIOWR);
t4_read_reg(adap, EPIO_REG(OP)); /* flush */
if (t4_read_reg(adap, EPIO_REG(OP)) & BUSY)
if (t4_read_reg(adap, EPIO_REG(OP)) & SF_BUSY)
return -ETIMEDOUT;
}
#undef EPIO_REG
@ -2405,7 +2405,7 @@ int t4_fw_hello(struct adapter *adap, unsigned int mbox, unsigned int evt_mbox,
retry:
memset(&c, 0, sizeof(c));
INIT_CMD(c, HELLO, WRITE);
c.err_to_mbasyncnot = htonl(
c.err_to_clearinit = htonl(
FW_HELLO_CMD_MASTERDIS(master == MASTER_CANT) |
FW_HELLO_CMD_MASTERFORCE(master == MASTER_MUST) |
FW_HELLO_CMD_MBMASTER(master == MASTER_MUST ? mbox :
@ -2426,7 +2426,7 @@ retry:
return ret;
}
v = ntohl(c.err_to_mbasyncnot);
v = ntohl(c.err_to_clearinit);
master_mbox = FW_HELLO_CMD_MBMASTER_GET(v);
if (state) {
if (v & FW_HELLO_CMD_ERR)
@ -2774,7 +2774,7 @@ int t4_fw_config_file(struct adapter *adap, unsigned int mbox,
htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
FW_CMD_REQUEST |
FW_CMD_READ);
caps_cmd.retval_len16 =
caps_cmd.cfvalid_to_len16 =
htonl(FW_CAPS_CONFIG_CMD_CFVALID |
FW_CAPS_CONFIG_CMD_MEMTYPE_CF(mtype) |
FW_CAPS_CONFIG_CMD_MEMADDR64K_CF(maddr >> 16) |
@ -2797,7 +2797,7 @@ int t4_fw_config_file(struct adapter *adap, unsigned int mbox,
htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) |
FW_CMD_REQUEST |
FW_CMD_WRITE);
caps_cmd.retval_len16 = htonl(FW_LEN16(caps_cmd));
caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd));
return t4_wr_mbox(adap, mbox, &caps_cmd, sizeof(caps_cmd), NULL);
}


@ -658,6 +658,7 @@ struct ulptx_sgl {
__be32 cmd_nsge;
#define ULPTX_CMD(x) ((x) << 24)
#define ULPTX_NSGE(x) ((x) << 0)
#define ULPTX_MORE (1U << 23)
__be32 len0;
__be64 addr0;
struct ulptx_sge_pair sge[0];


@ -67,7 +67,7 @@
#define QID_MASK 0xffff8000U
#define QID_SHIFT 15
#define QID(x) ((x) << QID_SHIFT)
#define DBPRIO 0x00004000U
#define DBPRIO(x) ((x) << 14)
#define PIDX_MASK 0x00003fffU
#define PIDX_SHIFT 0
#define PIDX(x) ((x) << PIDX_SHIFT)
@ -193,6 +193,12 @@
#define SGE_FL_BUFFER_SIZE1 0x1048
#define SGE_FL_BUFFER_SIZE2 0x104c
#define SGE_FL_BUFFER_SIZE3 0x1050
#define SGE_FL_BUFFER_SIZE4 0x1054
#define SGE_FL_BUFFER_SIZE5 0x1058
#define SGE_FL_BUFFER_SIZE6 0x105c
#define SGE_FL_BUFFER_SIZE7 0x1060
#define SGE_FL_BUFFER_SIZE8 0x1064
#define SGE_INGRESS_RX_THRESHOLD 0x10a0
#define THRESHOLD_0_MASK 0x3f000000U
#define THRESHOLD_0_SHIFT 24
@ -217,6 +223,17 @@
#define EGRTHRESHOLD(x) ((x) << EGRTHRESHOLDshift)
#define EGRTHRESHOLD_GET(x) (((x) & EGRTHRESHOLD_MASK) >> EGRTHRESHOLDshift)
#define SGE_DBFIFO_STATUS 0x10a4
#define HP_INT_THRESH_SHIFT 28
#define HP_INT_THRESH_MASK 0xfU
#define HP_INT_THRESH(x) ((x) << HP_INT_THRESH_SHIFT)
#define LP_INT_THRESH_SHIFT 12
#define LP_INT_THRESH_MASK 0xfU
#define LP_INT_THRESH(x) ((x) << LP_INT_THRESH_SHIFT)
#define SGE_DOORBELL_CONTROL 0x10a8
#define ENABLE_DROP (1 << 13)
#define SGE_TIMER_VALUE_0_AND_1 0x10b8
#define TIMERVALUE0_MASK 0xffff0000U
#define TIMERVALUE0_SHIFT 16
@ -277,6 +294,10 @@
#define A_SGE_CTXT_CMD 0x11fc
#define A_SGE_DBQ_CTXT_BADDR 0x1084
#define PCIE_PF_CFG 0x40
#define AIVEC(x) ((x) << 4)
#define AIVEC_MASK 0x3ffU
#define PCIE_PF_CLI 0x44
#define PCIE_INT_CAUSE 0x3004
#define UNXSPLCPLERR 0x20000000U
@ -322,6 +343,13 @@
#define PCIE_MEM_ACCESS_OFFSET 0x306c
#define PCIE_FW 0x30b8
#define PCIE_FW_ERR 0x80000000U
#define PCIE_FW_INIT 0x40000000U
#define PCIE_FW_HALT 0x20000000U
#define PCIE_FW_MASTER_VLD 0x00008000U
#define PCIE_FW_MASTER(x) ((x) << 12)
#define PCIE_FW_MASTER_MASK 0x7
#define PCIE_FW_MASTER_GET(x) (((x) >> 12) & PCIE_FW_MASTER_MASK)
#define PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS 0x5908
#define RNPP 0x80000000U
@ -432,6 +460,9 @@
#define MBOWNER(x) ((x) << MBOWNER_SHIFT)
#define MBOWNER_GET(x) (((x) & MBOWNER_MASK) >> MBOWNER_SHIFT)
#define CIM_PF_HOST_INT_ENABLE 0x288
#define MBMSGRDYINTEN(x) ((x) << 19)
#define CIM_PF_HOST_INT_CAUSE 0x28c
#define MBMSGRDYINT 0x00080000U
@ -922,7 +953,7 @@
#define SF_DATA 0x193f8
#define SF_OP 0x193fc
#define BUSY 0x80000000U
#define SF_BUSY 0x80000000U
#define SF_LOCK 0x00000010U
#define SF_CONT 0x00000008U
#define BYTECNT_MASK 0x00000006U
@ -981,6 +1012,7 @@
#define I2CM 0x00000002U
#define CIM 0x00000001U
#define PL_INT_ENABLE 0x19410
#define PL_INT_MAP0 0x19414
#define PL_RST 0x19428
#define PIORST 0x00000002U


@ -68,6 +68,7 @@ struct fw_wr_hdr {
};
#define FW_WR_OP(x) ((x) << 24)
#define FW_WR_OP_GET(x) (((x) >> 24) & 0xff)
#define FW_WR_ATOMIC(x) ((x) << 23)
#define FW_WR_FLUSH(x) ((x) << 22)
#define FW_WR_COMPL(x) ((x) << 21)
@ -222,6 +223,7 @@ struct fw_cmd_hdr {
#define FW_CMD_OP(x) ((x) << 24)
#define FW_CMD_OP_GET(x) (((x) >> 24) & 0xff)
#define FW_CMD_REQUEST (1U << 23)
#define FW_CMD_REQUEST_GET(x) (((x) >> 23) & 0x1)
#define FW_CMD_READ (1U << 22)
#define FW_CMD_WRITE (1U << 21)
#define FW_CMD_EXEC (1U << 20)
@ -229,6 +231,7 @@ struct fw_cmd_hdr {
#define FW_CMD_RETVAL(x) ((x) << 8)
#define FW_CMD_RETVAL_GET(x) (((x) >> 8) & 0xff)
#define FW_CMD_LEN16(x) ((x) << 0)
#define FW_LEN16(fw_struct) FW_CMD_LEN16(sizeof(fw_struct) / 16)
enum fw_ldst_addrspc {
FW_LDST_ADDRSPC_FIRMWARE = 0x0001,
@ -241,7 +244,8 @@ enum fw_ldst_addrspc {
FW_LDST_ADDRSPC_TP_MIB = 0x0012,
FW_LDST_ADDRSPC_MDIO = 0x0018,
FW_LDST_ADDRSPC_MPS = 0x0020,
FW_LDST_ADDRSPC_FUNC = 0x0028
FW_LDST_ADDRSPC_FUNC = 0x0028,
FW_LDST_ADDRSPC_FUNC_PCIE = 0x0029,
};
enum fw_ldst_mps_fid {
@ -303,6 +307,16 @@ struct fw_ldst_cmd {
__be64 data0;
__be64 data1;
} func;
struct fw_ldst_pcie {
u8 ctrl_to_fn;
u8 bnum;
u8 r;
u8 ext_r;
u8 select_naccess;
u8 pcie_fn;
__be16 nset_pkd;
__be32 data[12];
} pcie;
} u;
};
@ -312,6 +326,9 @@ struct fw_ldst_cmd {
#define FW_LDST_CMD_FID(x) ((x) << 15)
#define FW_LDST_CMD_CTL(x) ((x) << 0)
#define FW_LDST_CMD_RPLCPF(x) ((x) << 0)
#define FW_LDST_CMD_LC (1U << 4)
#define FW_LDST_CMD_NACCESS(x) ((x) << 0)
#define FW_LDST_CMD_FN(x) ((x) << 0)
struct fw_reset_cmd {
__be32 op_to_write;
@ -333,7 +350,7 @@ enum fw_hellow_cmd {
struct fw_hello_cmd {
__be32 op_to_write;
__be32 retval_len16;
__be32 err_to_mbasyncnot;
__be32 err_to_clearinit;
#define FW_HELLO_CMD_ERR (1U << 31)
#define FW_HELLO_CMD_INIT (1U << 30)
#define FW_HELLO_CMD_MASTERDIS(x) ((x) << 29)
@ -343,6 +360,7 @@ struct fw_hello_cmd {
#define FW_HELLO_CMD_MBMASTER(x) ((x) << FW_HELLO_CMD_MBMASTER_SHIFT)
#define FW_HELLO_CMD_MBMASTER_GET(x) \
(((x) >> FW_HELLO_CMD_MBMASTER_SHIFT) & FW_HELLO_CMD_MBMASTER_MASK)
#define FW_HELLO_CMD_MBASYNCNOTINT(x) ((x) << 23)
#define FW_HELLO_CMD_MBASYNCNOT(x) ((x) << 20)
#define FW_HELLO_CMD_STAGE(x) ((x) << 17)
#define FW_HELLO_CMD_CLEARINIT (1U << 16)
@ -428,6 +446,7 @@ enum fw_caps_config_iscsi {
enum fw_caps_config_fcoe {
FW_CAPS_CONFIG_FCOE_INITIATOR = 0x00000001,
FW_CAPS_CONFIG_FCOE_TARGET = 0x00000002,
FW_CAPS_CONFIG_FCOE_CTRL_OFLD = 0x00000004,
};
enum fw_memtype_cf {
@ -440,7 +459,7 @@ enum fw_memtype_cf {
struct fw_caps_config_cmd {
__be32 op_to_write;
__be32 retval_len16;
__be32 cfvalid_to_len16;
__be32 r2;
__be32 hwmbitmap;
__be16 nbmcaps;
@ -701,8 +720,8 @@ struct fw_iq_cmd {
#define FW_IQ_CMD_FL0FETCHRO(x) ((x) << 6)
#define FW_IQ_CMD_FL0HOSTFCMODE(x) ((x) << 4)
#define FW_IQ_CMD_FL0CPRIO(x) ((x) << 3)
#define FW_IQ_CMD_FL0PADEN (1U << 2)
#define FW_IQ_CMD_FL0PACKEN (1U << 1)
#define FW_IQ_CMD_FL0PADEN(x) ((x) << 2)
#define FW_IQ_CMD_FL0PACKEN(x) ((x) << 1)
#define FW_IQ_CMD_FL0CONGEN (1U << 0)
#define FW_IQ_CMD_FL0DCAEN(x) ((x) << 15)
@ -1190,6 +1209,14 @@ enum fw_port_dcb_cfg_rc {
FW_PORT_DCB_CFG_ERROR = 0x1
};
enum fw_port_dcb_type {
FW_PORT_DCB_TYPE_PGID = 0x00,
FW_PORT_DCB_TYPE_PGRATE = 0x01,
FW_PORT_DCB_TYPE_PRIORATE = 0x02,
FW_PORT_DCB_TYPE_PFC = 0x03,
FW_PORT_DCB_TYPE_APP_ID = 0x04,
};
struct fw_port_cmd {
__be32 op_to_portid;
__be32 action_to_len16;
@ -1257,6 +1284,7 @@ struct fw_port_cmd {
#define FW_PORT_CMD_TXIPG(x) ((x) << 19)
#define FW_PORT_CMD_LSTATUS (1U << 31)
#define FW_PORT_CMD_LSTATUS_GET(x) (((x) >> 31) & 0x1)
#define FW_PORT_CMD_LSPEED(x) ((x) << 24)
#define FW_PORT_CMD_LSPEED_GET(x) (((x) >> 24) & 0x3f)
#define FW_PORT_CMD_TXPAUSE (1U << 23)
@ -1305,6 +1333,9 @@ enum fw_port_module_type {
FW_PORT_MOD_TYPE_TWINAX_PASSIVE,
FW_PORT_MOD_TYPE_TWINAX_ACTIVE,
FW_PORT_MOD_TYPE_LRM,
FW_PORT_MOD_TYPE_ERROR = FW_PORT_CMD_MODTYPE_MASK - 3,
FW_PORT_MOD_TYPE_UNKNOWN = FW_PORT_CMD_MODTYPE_MASK - 2,
FW_PORT_MOD_TYPE_NOTSUPPORTED = FW_PORT_CMD_MODTYPE_MASK - 1,
FW_PORT_MOD_TYPE_NONE = FW_PORT_CMD_MODTYPE_MASK
};


@ -536,7 +536,7 @@ static inline void ring_fl_db(struct adapter *adapter, struct sge_fl *fl)
if (fl->pend_cred >= FL_PER_EQ_UNIT) {
wmb();
t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_KDOORBELL,
DBPRIO |
DBPRIO(1) |
QID(fl->cntxt_id) |
PIDX(fl->pend_cred / FL_PER_EQ_UNIT));
fl->pend_cred %= FL_PER_EQ_UNIT;
@ -952,7 +952,7 @@ static inline void ring_tx_db(struct adapter *adapter, struct sge_txq *tq,
* Warn if we write doorbells with the wrong priority and write
* descriptors before telling HW.
*/
WARN_ON((QID(tq->cntxt_id) | PIDX(n)) & DBPRIO);
WARN_ON((QID(tq->cntxt_id) | PIDX(n)) & DBPRIO(1));
wmb();
t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_KDOORBELL,
QID(tq->cntxt_id) | PIDX(n));
@ -2126,8 +2126,8 @@ int t4vf_sge_alloc_rxq(struct adapter *adapter, struct sge_rspq *rspq,
cmd.iqns_to_fl0congen =
cpu_to_be32(
FW_IQ_CMD_FL0HOSTFCMODE(SGE_HOSTFCMODE_NONE) |
FW_IQ_CMD_FL0PACKEN |
FW_IQ_CMD_FL0PADEN);
FW_IQ_CMD_FL0PACKEN(1) |
FW_IQ_CMD_FL0PADEN(1));
cmd.fl0dcaen_to_fl0cidxfthresh =
cpu_to_be16(
FW_IQ_CMD_FL0FBMIN(SGE_FETCHBURSTMIN_64B) |


@ -603,6 +603,7 @@ config SCSI_ARCMSR
source "drivers/scsi/megaraid/Kconfig.megaraid"
source "drivers/scsi/mpt2sas/Kconfig"
source "drivers/scsi/mpt3sas/Kconfig"
source "drivers/scsi/ufs/Kconfig"
config SCSI_HPTIOP
@ -1812,6 +1813,7 @@ config SCSI_VIRTIO
This is the virtual HBA driver for virtio. If the kernel will
be used in a virtual machine, say Y or M.
source "drivers/scsi/csiostor/Kconfig"
endif # SCSI_LOWLEVEL


@ -90,6 +90,7 @@ obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/
obj-$(CONFIG_SCSI_QLA_ISCSI) += libiscsi.o qla4xxx/
obj-$(CONFIG_SCSI_LPFC) += lpfc/
obj-$(CONFIG_SCSI_BFA_FC) += bfa/
obj-$(CONFIG_SCSI_CHELSIO_FCOE) += csiostor/
obj-$(CONFIG_SCSI_PAS16) += pas16.o
obj-$(CONFIG_SCSI_T128) += t128.o
obj-$(CONFIG_SCSI_DMX3191D) += dmx3191d.o
@ -106,6 +107,7 @@ obj-$(CONFIG_MEGARAID_LEGACY) += megaraid.o
obj-$(CONFIG_MEGARAID_NEWGEN) += megaraid/
obj-$(CONFIG_MEGARAID_SAS) += megaraid/
obj-$(CONFIG_SCSI_MPT2SAS) += mpt2sas/
obj-$(CONFIG_SCSI_MPT3SAS) += mpt3sas/
obj-$(CONFIG_SCSI_UFSHCD) += ufs/
obj-$(CONFIG_SCSI_ACARD) += atp870u.o
obj-$(CONFIG_SCSI_SUNESP) += esp_scsi.o sun_esp.o


@ -132,11 +132,13 @@ struct inquiry_data {
* M O D U L E G L O B A L S
*/
static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* sgmap);
static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* psg);
static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw* psg);
static unsigned long aac_build_sgraw2(struct scsi_cmnd *scsicmd, struct aac_raw_io2 *rio2, int sg_max);
static int aac_convert_sgraw2(struct aac_raw_io2 *rio2, int pages, int nseg, int nseg_new);
static long aac_build_sg(struct scsi_cmnd *scsicmd, struct sgmap *sgmap);
static long aac_build_sg64(struct scsi_cmnd *scsicmd, struct sgmap64 *psg);
static long aac_build_sgraw(struct scsi_cmnd *scsicmd, struct sgmapraw *psg);
static long aac_build_sgraw2(struct scsi_cmnd *scsicmd,
struct aac_raw_io2 *rio2, int sg_max);
static int aac_convert_sgraw2(struct aac_raw_io2 *rio2,
int pages, int nseg, int nseg_new);
static int aac_send_srb_fib(struct scsi_cmnd* scsicmd);
#ifdef AAC_DETAILED_STATUS_INFO
static char *aac_get_status_string(u32 status);
@ -971,6 +973,7 @@ static int aac_read_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u3
{
struct aac_dev *dev = fib->dev;
u16 fibsize, command;
long ret;
aac_fib_init(fib);
if (dev->comm_interface == AAC_COMM_MESSAGE_TYPE2 && !dev->sync_mode) {
@ -982,7 +985,10 @@ static int aac_read_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u3
readcmd2->byteCount = cpu_to_le32(count<<9);
readcmd2->cid = cpu_to_le16(scmd_id(cmd));
readcmd2->flags = cpu_to_le16(RIO2_IO_TYPE_READ);
aac_build_sgraw2(cmd, readcmd2, dev->scsi_host_ptr->sg_tablesize);
ret = aac_build_sgraw2(cmd, readcmd2,
dev->scsi_host_ptr->sg_tablesize);
if (ret < 0)
return ret;
command = ContainerRawIo2;
fibsize = sizeof(struct aac_raw_io2) +
((le32_to_cpu(readcmd2->sgeCnt)-1) * sizeof(struct sge_ieee1212));
@ -996,7 +1002,9 @@ static int aac_read_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u3
readcmd->flags = cpu_to_le16(RIO_TYPE_READ);
readcmd->bpTotal = 0;
readcmd->bpComplete = 0;
aac_build_sgraw(cmd, &readcmd->sg);
ret = aac_build_sgraw(cmd, &readcmd->sg);
if (ret < 0)
return ret;
command = ContainerRawIo;
fibsize = sizeof(struct aac_raw_io) +
((le32_to_cpu(readcmd->sg.count)-1) * sizeof(struct sgentryraw));
@ -1019,6 +1027,8 @@ static int aac_read_block64(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
{
u16 fibsize;
struct aac_read64 *readcmd;
long ret;
aac_fib_init(fib);
readcmd = (struct aac_read64 *) fib_data(fib);
readcmd->command = cpu_to_le32(VM_CtHostRead64);
@ -1028,7 +1038,9 @@ static int aac_read_block64(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
readcmd->pad = 0;
readcmd->flags = 0;
aac_build_sg64(cmd, &readcmd->sg);
ret = aac_build_sg64(cmd, &readcmd->sg);
if (ret < 0)
return ret;
fibsize = sizeof(struct aac_read64) +
((le32_to_cpu(readcmd->sg.count) - 1) *
sizeof (struct sgentry64));
@ -1050,6 +1062,8 @@ static int aac_read_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32
{
u16 fibsize;
struct aac_read *readcmd;
long ret;
aac_fib_init(fib);
readcmd = (struct aac_read *) fib_data(fib);
readcmd->command = cpu_to_le32(VM_CtBlockRead);
@ -1057,7 +1071,9 @@ static int aac_read_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u32
readcmd->block = cpu_to_le32((u32)(lba&0xffffffff));
readcmd->count = cpu_to_le32(count * 512);
aac_build_sg(cmd, &readcmd->sg);
ret = aac_build_sg(cmd, &readcmd->sg);
if (ret < 0)
return ret;
fibsize = sizeof(struct aac_read) +
((le32_to_cpu(readcmd->sg.count) - 1) *
sizeof (struct sgentry));
@ -1079,6 +1095,7 @@ static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
{
struct aac_dev *dev = fib->dev;
u16 fibsize, command;
long ret;
aac_fib_init(fib);
if (dev->comm_interface == AAC_COMM_MESSAGE_TYPE2 && !dev->sync_mode) {
@ -1093,7 +1110,10 @@ static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
(((aac_cache & 5) != 5) || !fib->dev->cache_protected)) ?
cpu_to_le16(RIO2_IO_TYPE_WRITE|RIO2_IO_SUREWRITE) :
cpu_to_le16(RIO2_IO_TYPE_WRITE);
aac_build_sgraw2(cmd, writecmd2, dev->scsi_host_ptr->sg_tablesize);
ret = aac_build_sgraw2(cmd, writecmd2,
dev->scsi_host_ptr->sg_tablesize);
if (ret < 0)
return ret;
command = ContainerRawIo2;
fibsize = sizeof(struct aac_raw_io2) +
((le32_to_cpu(writecmd2->sgeCnt)-1) * sizeof(struct sge_ieee1212));
@ -1110,7 +1130,9 @@ static int aac_write_raw_io(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u
cpu_to_le16(RIO_TYPE_WRITE);
writecmd->bpTotal = 0;
writecmd->bpComplete = 0;
aac_build_sgraw(cmd, &writecmd->sg);
ret = aac_build_sgraw(cmd, &writecmd->sg);
if (ret < 0)
return ret;
command = ContainerRawIo;
fibsize = sizeof(struct aac_raw_io) +
((le32_to_cpu(writecmd->sg.count)-1) * sizeof (struct sgentryraw));
@ -1133,6 +1155,8 @@ static int aac_write_block64(struct fib * fib, struct scsi_cmnd * cmd, u64 lba,
{
u16 fibsize;
struct aac_write64 *writecmd;
long ret;
aac_fib_init(fib);
writecmd = (struct aac_write64 *) fib_data(fib);
writecmd->command = cpu_to_le32(VM_CtHostWrite64);
@ -1142,7 +1166,9 @@ static int aac_write_block64(struct fib * fib, struct scsi_cmnd * cmd, u64 lba,
writecmd->pad = 0;
writecmd->flags = 0;
aac_build_sg64(cmd, &writecmd->sg);
ret = aac_build_sg64(cmd, &writecmd->sg);
if (ret < 0)
return ret;
fibsize = sizeof(struct aac_write64) +
((le32_to_cpu(writecmd->sg.count) - 1) *
sizeof (struct sgentry64));
@ -1164,6 +1190,8 @@ static int aac_write_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u3
{
u16 fibsize;
struct aac_write *writecmd;
long ret;
aac_fib_init(fib);
writecmd = (struct aac_write *) fib_data(fib);
writecmd->command = cpu_to_le32(VM_CtBlockWrite);
@ -1173,7 +1201,9 @@ static int aac_write_block(struct fib * fib, struct scsi_cmnd * cmd, u64 lba, u3
writecmd->sg.count = cpu_to_le32(1);
/* ->stable is not used - it did mean which type of write */
aac_build_sg(cmd, &writecmd->sg);
ret = aac_build_sg(cmd, &writecmd->sg);
if (ret < 0)
return ret;
fibsize = sizeof(struct aac_write) +
((le32_to_cpu(writecmd->sg.count) - 1) *
sizeof (struct sgentry));
@ -1235,8 +1265,11 @@ static int aac_scsi_64(struct fib * fib, struct scsi_cmnd * cmd)
{
u16 fibsize;
struct aac_srb * srbcmd = aac_scsi_common(fib, cmd);
long ret;
aac_build_sg64(cmd, (struct sgmap64*) &srbcmd->sg);
ret = aac_build_sg64(cmd, (struct sgmap64 *) &srbcmd->sg);
if (ret < 0)
return ret;
srbcmd->count = cpu_to_le32(scsi_bufflen(cmd));
memset(srbcmd->cdb, 0, sizeof(srbcmd->cdb));
@ -1263,8 +1296,11 @@ static int aac_scsi_32(struct fib * fib, struct scsi_cmnd * cmd)
{
u16 fibsize;
struct aac_srb * srbcmd = aac_scsi_common(fib, cmd);
long ret;
aac_build_sg(cmd, (struct sgmap*)&srbcmd->sg);
ret = aac_build_sg(cmd, (struct sgmap *)&srbcmd->sg);
if (ret < 0)
return ret;
srbcmd->count = cpu_to_le32(scsi_bufflen(cmd));
memset(srbcmd->cdb, 0, sizeof(srbcmd->cdb));
@ -2870,7 +2906,7 @@ static int aac_send_srb_fib(struct scsi_cmnd* scsicmd)
return -1;
}
static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* psg)
static long aac_build_sg(struct scsi_cmnd *scsicmd, struct sgmap *psg)
{
struct aac_dev *dev;
unsigned long byte_count = 0;
@ -2883,7 +2919,8 @@ static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* psg)
psg->sg[0].count = 0;
nseg = scsi_dma_map(scsicmd);
BUG_ON(nseg < 0);
if (nseg < 0)
return nseg;
if (nseg) {
struct scatterlist *sg;
int i;
@ -2912,7 +2949,7 @@ static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* psg)
}
static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* psg)
static long aac_build_sg64(struct scsi_cmnd *scsicmd, struct sgmap64 *psg)
{
struct aac_dev *dev;
unsigned long byte_count = 0;
@ -2927,7 +2964,8 @@ static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* p
psg->sg[0].count = 0;
nseg = scsi_dma_map(scsicmd);
BUG_ON(nseg < 0);
if (nseg < 0)
return nseg;
if (nseg) {
struct scatterlist *sg;
int i;
@ -2957,7 +2995,7 @@ static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* p
return byte_count;
}
static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw* psg)
static long aac_build_sgraw(struct scsi_cmnd *scsicmd, struct sgmapraw *psg)
{
unsigned long byte_count = 0;
int nseg;
@ -2972,7 +3010,8 @@ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw*
psg->sg[0].flags = 0;
nseg = scsi_dma_map(scsicmd);
BUG_ON(nseg < 0);
if (nseg < 0)
return nseg;
if (nseg) {
struct scatterlist *sg;
int i;
@ -3005,13 +3044,15 @@ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw*
return byte_count;
}
static unsigned long aac_build_sgraw2(struct scsi_cmnd *scsicmd, struct aac_raw_io2 *rio2, int sg_max)
static long aac_build_sgraw2(struct scsi_cmnd *scsicmd,
struct aac_raw_io2 *rio2, int sg_max)
{
unsigned long byte_count = 0;
int nseg;
nseg = scsi_dma_map(scsicmd);
BUG_ON(nseg < 0);
if (nseg < 0)
return nseg;
if (nseg) {
struct scatterlist *sg;
int i, conformable = 0;

View File

@ -12,7 +12,7 @@
*----------------------------------------------------------------------------*/
#ifndef AAC_DRIVER_BUILD
# define AAC_DRIVER_BUILD 29800
# define AAC_DRIVER_BUILD 29801
# define AAC_DRIVER_BRANCH "-ms"
#endif
#define MAXIMUM_NUM_CONTAINERS 32


@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2011 Emulex
* Copyright (C) 2005 - 2012 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -28,7 +28,7 @@
/* BladeEngine Generation numbers */
#define BE_GEN2 2
#define BE_GEN3 3
#define BE_GEN4 4
struct be_dma_mem {
void *va;
dma_addr_t dma;
@ -84,9 +84,12 @@ static inline void queue_tail_inc(struct be_queue_info *q)
/*ISCSI */
struct be_eq_obj {
bool todo_mcc_cq;
bool todo_cq;
struct be_queue_info q;
struct beiscsi_hba *phba;
struct be_queue_info *cq;
struct work_struct work_cqs; /* Work Item */
struct blk_iopoll iopoll;
};


@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2011 Emulex
* Copyright (C) 2005 - 2012 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -56,7 +56,7 @@ int beiscsi_pci_soft_reset(struct beiscsi_hba *phba)
writel(pconline0, (void *)pci_online0_offset);
writel(pconline1, (void *)pci_online1_offset);
sreset = BE2_SET_RESET;
sreset |= BE2_SET_RESET;
writel(sreset, (void *)pci_reset_offset);
i = 0;
@ -133,6 +133,87 @@ unsigned int alloc_mcc_tag(struct beiscsi_hba *phba)
return tag;
}
/*
* beiscsi_mccq_compl()- Wait for completion of MBX
* @phba: Driver private structure
* @tag: Tag for the MBX Command
* @wrb: the WRB used for the MBX Command
* @cmd_hdr: IOCTL Hdr for the MBX Cmd
*
* Waits for MBX completion with the passed TAG.
*
* return
* Success: 0
* Failure: Non-Zero
**/
int beiscsi_mccq_compl(struct beiscsi_hba *phba,
uint32_t tag, struct be_mcc_wrb **wrb,
void *cmd_hdr)
{
int rc = 0;
uint32_t mcc_tag_response;
uint16_t status = 0, addl_status = 0, wrb_num = 0;
struct be_mcc_wrb *temp_wrb;
struct be_cmd_req_hdr *ioctl_hdr;
struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
if (beiscsi_error(phba))
return -EIO;
/* wait for the mccq completion */
rc = wait_event_interruptible_timeout(
phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag],
msecs_to_jiffies(
BEISCSI_HOST_MBX_TIMEOUT));
if (rc <= 0) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_INIT | BEISCSI_LOG_EH |
BEISCSI_LOG_CONFIG,
"BC_%d : MBX Cmd Completion timed out\n");
rc = -EAGAIN;
goto release_mcc_tag;
} else
rc = 0;
mcc_tag_response = phba->ctrl.mcc_numtag[tag];
status = (mcc_tag_response & CQE_STATUS_MASK);
addl_status = ((mcc_tag_response & CQE_STATUS_ADDL_MASK) >>
CQE_STATUS_ADDL_SHIFT);
if (cmd_hdr) {
ioctl_hdr = (struct be_cmd_req_hdr *)cmd_hdr;
} else {
wrb_num = (mcc_tag_response & CQE_STATUS_WRB_MASK) >>
CQE_STATUS_WRB_SHIFT;
temp_wrb = (struct be_mcc_wrb *)queue_get_wrb(mccq, wrb_num);
ioctl_hdr = embedded_payload(temp_wrb);
if (wrb)
*wrb = temp_wrb;
}
if (status || addl_status) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_INIT | BEISCSI_LOG_EH |
BEISCSI_LOG_CONFIG,
"BC_%d : MBX Cmd Failed for "
"Subsys : %d Opcode : %d with "
"Status : %d and Extd_Status : %d\n",
ioctl_hdr->subsystem,
ioctl_hdr->opcode,
status, addl_status);
rc = -EAGAIN;
}
release_mcc_tag:
/* Release the MCC entry */
free_mcc_tag(&phba->ctrl, tag);
return rc;
}
void free_mcc_tag(struct be_ctrl_info *ctrl, unsigned int tag)
{
spin_lock(&ctrl->mbox_lock);
@ -168,11 +249,24 @@ static inline void be_mcc_compl_use(struct be_mcc_compl *compl)
compl->flags = 0;
}
/*
* be_mcc_compl_process()- Check the MBX completion status
* @ctrl: Function specific MBX data structure
* @compl: Completion status of MBX Command
*
* Check for the MBX completion status when the BMBX method is used
*
* return
* Success: Zero
* Failure: Non-Zero
**/
static int be_mcc_compl_process(struct be_ctrl_info *ctrl,
struct be_mcc_compl *compl)
{
u16 compl_status, extd_status;
struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem);
struct beiscsi_hba *phba = pci_get_drvdata(ctrl->pdev);
struct be_cmd_req_hdr *hdr = embedded_payload(wrb);
be_dws_le_to_cpu(compl, 4);
@ -184,7 +278,10 @@ static int be_mcc_compl_process(struct be_ctrl_info *ctrl,
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BC_%d : error in cmd completion: status(compl/extd)=%d/%d\n",
"BC_%d : error in cmd completion: "
"Subsystem : %d Opcode : %d "
"status(compl/extd)=%d/%d\n",
hdr->subsystem, hdr->opcode,
compl_status, extd_status);
return -EBUSY;
@ -314,11 +411,24 @@ int beiscsi_process_mcc(struct beiscsi_hba *phba)
return status;
}
/* Wait till no more pending mcc requests are present */
/*
* be_mcc_wait_compl()- Wait for MBX completion
* @phba: driver private structure
*
* Wait till no more pending mcc requests are present
*
* return
* Success: 0
* Failure: Non-Zero
*
**/
static int be_mcc_wait_compl(struct beiscsi_hba *phba)
{
int i, status;
for (i = 0; i < mcc_timeout; i++) {
if (beiscsi_error(phba))
return -EIO;
status = beiscsi_process_mcc(phba);
if (status)
return status;
@ -330,51 +440,83 @@ static int be_mcc_wait_compl(struct beiscsi_hba *phba)
if (i == mcc_timeout) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BC_%d : mccq poll timed out\n");
"BC_%d : FW Timed Out\n");
phba->fw_timeout = true;
beiscsi_ue_detect(phba);
return -EBUSY;
}
return 0;
}
/* Notify MCC requests and wait for completion */
/*
* be_mcc_notify_wait()- Notify and wait for Compl
* @phba: driver private structure
*
* Notify MCC requests and wait for completion
*
* return
* Success: 0
* Failure: Non-Zero
**/
int be_mcc_notify_wait(struct beiscsi_hba *phba)
{
be_mcc_notify(phba);
return be_mcc_wait_compl(phba);
}
/*
* be_mbox_db_ready_wait()- Check ready status
* @ctrl: Function specific MBX data structure
*
* Check for the ready status of FW to send BMBX
* commands to adapter.
*
* return
* Success: 0
* Failure: Non-Zero
**/
static int be_mbox_db_ready_wait(struct be_ctrl_info *ctrl)
{
#define long_delay 2000
void __iomem *db = ctrl->db + MPU_MAILBOX_DB_OFFSET;
int cnt = 0, wait = 5; /* in usecs */
struct beiscsi_hba *phba = pci_get_drvdata(ctrl->pdev);
int wait = 0;
u32 ready;
do {
if (beiscsi_error(phba))
return -EIO;
ready = ioread32(db) & MPU_MAILBOX_DB_RDY_MASK;
if (ready)
break;
if (cnt > 12000000) {
struct beiscsi_hba *phba = pci_get_drvdata(ctrl->pdev);
if (wait > BEISCSI_HOST_MBX_TIMEOUT) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BC_%d : mbox_db poll timed out\n");
"BC_%d : FW Timed Out\n");
phba->fw_timeout = true;
beiscsi_ue_detect(phba);
return -EBUSY;
}
if (cnt > 50) {
wait = long_delay;
mdelay(long_delay / 1000);
} else
udelay(wait);
cnt += wait;
mdelay(1);
wait++;
} while (true);
return 0;
}
/*
* be_mbox_notify: Notify adapter of new BMBX command
* @ctrl: Function specific MBX data structure
*
* Ring doorbell to inform adapter of a BMBX command
* to process
*
* return
* Success: 0
* Failure: Non-Zero
**/
int be_mbox_notify(struct be_ctrl_info *ctrl)
{
int status;
@ -391,13 +533,9 @@ int be_mbox_notify(struct be_ctrl_info *ctrl)
iowrite32(val, db);
status = be_mbox_db_ready_wait(ctrl);
if (status != 0) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BC_%d : be_mbox_db_ready_wait failed\n");
if (status)
return status;
}
val = 0;
val &= ~MPU_MAILBOX_DB_RDY_MASK;
val &= ~MPU_MAILBOX_DB_HI_MASK;
@ -405,13 +543,9 @@ int be_mbox_notify(struct be_ctrl_info *ctrl)
iowrite32(val, db);
status = be_mbox_db_ready_wait(ctrl);
if (status != 0) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BC_%d : be_mbox_db_ready_wait failed\n");
if (status)
return status;
}
if (be_mcc_compl_is_new(compl)) {
status = be_mcc_compl_process(ctrl, &mbox->compl);
be_mcc_compl_use(compl);
@ -499,7 +633,7 @@ void be_cmd_hdr_prepare(struct be_cmd_req_hdr *req_hdr,
req_hdr->opcode = opcode;
req_hdr->subsystem = subsystem;
req_hdr->request_length = cpu_to_le32(cmd_len - sizeof(*req_hdr));
req_hdr->timeout = 120;
req_hdr->timeout = BEISCSI_FW_MBX_TIMEOUT;
}
static void be_cmd_page_addrs_prepare(struct phys_addr *pages, u32 max_pages,
@ -649,18 +783,34 @@ int beiscsi_cmd_cq_create(struct be_ctrl_info *ctrl,
OPCODE_COMMON_CQ_CREATE, sizeof(*req));
req->num_pages = cpu_to_le16(PAGES_4K_SPANNED(q_mem->va, q_mem->size));
if (chip_skh_r(ctrl->pdev)) {
req->hdr.version = MBX_CMD_VER2;
req->page_size = 1;
AMAP_SET_BITS(struct amap_cq_context_v2, coalescwm,
ctxt, coalesce_wm);
AMAP_SET_BITS(struct amap_cq_context_v2, nodelay,
ctxt, no_delay);
AMAP_SET_BITS(struct amap_cq_context_v2, count, ctxt,
__ilog2_u32(cq->len / 256));
AMAP_SET_BITS(struct amap_cq_context_v2, valid, ctxt, 1);
AMAP_SET_BITS(struct amap_cq_context_v2, eventable, ctxt, 1);
AMAP_SET_BITS(struct amap_cq_context_v2, eqid, ctxt, eq->id);
AMAP_SET_BITS(struct amap_cq_context_v2, armed, ctxt, 1);
} else {
AMAP_SET_BITS(struct amap_cq_context, coalescwm,
ctxt, coalesce_wm);
AMAP_SET_BITS(struct amap_cq_context, nodelay, ctxt, no_delay);
AMAP_SET_BITS(struct amap_cq_context, count, ctxt,
__ilog2_u32(cq->len / 256));
AMAP_SET_BITS(struct amap_cq_context, valid, ctxt, 1);
AMAP_SET_BITS(struct amap_cq_context, solevent, ctxt, sol_evts);
AMAP_SET_BITS(struct amap_cq_context, eventable, ctxt, 1);
AMAP_SET_BITS(struct amap_cq_context, eqid, ctxt, eq->id);
AMAP_SET_BITS(struct amap_cq_context, armed, ctxt, 1);
AMAP_SET_BITS(struct amap_cq_context, func, ctxt,
PCI_FUNC(ctrl->pdev->devfn));
}
AMAP_SET_BITS(struct amap_cq_context, coalescwm, ctxt, coalesce_wm);
AMAP_SET_BITS(struct amap_cq_context, nodelay, ctxt, no_delay);
AMAP_SET_BITS(struct amap_cq_context, count, ctxt,
__ilog2_u32(cq->len / 256));
AMAP_SET_BITS(struct amap_cq_context, valid, ctxt, 1);
AMAP_SET_BITS(struct amap_cq_context, solevent, ctxt, sol_evts);
AMAP_SET_BITS(struct amap_cq_context, eventable, ctxt, 1);
AMAP_SET_BITS(struct amap_cq_context, eqid, ctxt, eq->id);
AMAP_SET_BITS(struct amap_cq_context, armed, ctxt, 1);
AMAP_SET_BITS(struct amap_cq_context, func, ctxt,
PCI_FUNC(ctrl->pdev->devfn));
be_dws_cpu_to_le(ctxt, sizeof(req->context));
be_cmd_page_addrs_prepare(req->pages, ARRAY_SIZE(req->pages), q_mem);


@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2011 Emulex
* Copyright (C) 2005 - 2012 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -57,6 +57,16 @@ struct be_mcc_wrb {
#define CQE_STATUS_COMPL_SHIFT 0 /* bits 0 - 15 */
#define CQE_STATUS_EXTD_MASK 0xFFFF
#define CQE_STATUS_EXTD_SHIFT 16 /* bits 0 - 15 */
#define CQE_STATUS_ADDL_MASK 0xFF00
#define CQE_STATUS_MASK 0xFF
#define CQE_STATUS_ADDL_SHIFT 0x08
#define CQE_STATUS_WRB_MASK 0xFF0000
#define CQE_STATUS_WRB_SHIFT 16
#define BEISCSI_HOST_MBX_TIMEOUT (110 * 1000)
#define BEISCSI_FW_MBX_TIMEOUT 100
/* MBOX Command VER */
#define MBX_CMD_VER2 0x02
struct be_mcc_compl {
u32 status; /* dword 0 */
@ -183,7 +193,8 @@ struct be_cmd_req_hdr {
u8 domain; /* dword 0 */
u32 timeout; /* dword 1 */
u32 request_length; /* dword 2 */
u32 rsvd0; /* dword 3 */
u8 version; /* dword 3 */
u8 rsvd0[3]; /* dword 3 */
};
struct be_cmd_resp_hdr {
@ -483,10 +494,28 @@ struct amap_cq_context {
u8 rsvd5[32]; /* dword 3 */
} __packed;
struct amap_cq_context_v2 {
u8 rsvd0[12]; /* dword 0 */
u8 coalescwm[2]; /* dword 0 */
u8 nodelay; /* dword 0 */
u8 rsvd1[12]; /* dword 0 */
u8 count[2]; /* dword 0 */
u8 valid; /* dword 0 */
u8 rsvd2; /* dword 0 */
u8 eventable; /* dword 0 */
u8 eqid[16]; /* dword 1 */
u8 rsvd3[15]; /* dword 1 */
u8 armed; /* dword 1 */
u8 cqecount[16];/* dword 2 */
u8 rsvd4[16]; /* dword 2 */
u8 rsvd5[32]; /* dword 3 */
};
struct be_cmd_req_cq_create {
struct be_cmd_req_hdr hdr;
u16 num_pages;
u16 rsvd0;
u8 page_size;
u8 rsvd0;
u8 context[sizeof(struct amap_cq_context) / 8];
struct phys_addr pages[4];
} __packed;
@ -663,6 +692,9 @@ unsigned int be_cmd_get_initname(struct beiscsi_hba *phba);
unsigned int be_cmd_get_port_speed(struct beiscsi_hba *phba);
void free_mcc_tag(struct be_ctrl_info *ctrl, unsigned int tag);
int beiscsi_mccq_compl(struct beiscsi_hba *phba,
uint32_t tag, struct be_mcc_wrb **wrb, void *cmd_va);
/* ISCSI Functions */
int be_cmd_fw_initialize(struct be_ctrl_info *ctrl);
@ -804,6 +836,59 @@ struct amap_sol_cqe_ring {
u8 valid; /* dword 3 */
} __packed;
struct amap_sol_cqe_v2 {
u8 hw_sts[8]; /* dword 0 */
u8 i_sts[8]; /* dword 0 */
u8 wrb_index[16]; /* dword 0 */
u8 i_exp_cmd_sn[32]; /* dword 1 */
u8 code[6]; /* dword 2 */
u8 cmd_cmpl; /* dword 2 */
u8 rsvd0; /* dword 2 */
u8 i_cmd_wnd[8]; /* dword 2 */
u8 cid[13]; /* dword 2 */
u8 u; /* dword 2 */
u8 o; /* dword 2 */
u8 s; /* dword 2 */
u8 i_res_cnt[31]; /* dword 3 */
u8 valid; /* dword 3 */
} __packed;
struct common_sol_cqe {
u32 exp_cmdsn;
u32 res_cnt;
u16 wrb_index;
u16 cid;
u8 hw_sts;
u8 cmd_wnd;
u8 res_flag; /* the s field of the structure */
u8 i_resp; /* for skh if cmd_complete is set then i_sts is response */
u8 i_flags; /* for skh or the u and o fields */
u8 i_sts; /* for skh if cmd_complete is not set then i_sts is status */
};
/*** iSCSI ack/driver message completions ***/
struct amap_it_dmsg_cqe {
u8 ack_num[32]; /* DWORD 0 */
u8 pdu_bytes_rcvd[32]; /* DWORD 1 */
u8 code[6]; /* DWORD 2 */
u8 cid[10]; /* DWORD 2 */
u8 wrb_idx[8]; /* DWORD 2 */
u8 rsvd0[8]; /* DWORD 2*/
u8 rsvd1[31]; /* DWORD 3*/
u8 valid; /* DWORD 3 */
} __packed;
struct amap_it_dmsg_cqe_v2 {
u8 ack_num[32]; /* DWORD 0 */
u8 pdu_bytes_rcvd[32]; /* DWORD 1 */
u8 code[6]; /* DWORD 2 */
u8 rsvd0[10]; /* DWORD 2 */
u8 wrb_idx[16]; /* DWORD 2 */
u8 rsvd1[16]; /* DWORD 3 */
u8 cid[13]; /* DWORD 3 */
u8 rsvd2[2]; /* DWORD 3 */
u8 valid; /* DWORD 3 */
} __packed;
/**
@ -992,8 +1077,6 @@ struct be_cmd_get_all_if_id_req {
#define CONNECTION_UPLOAD_ABORT_WITH_SEQ 4 /* Abortive upload with reset,
* sequence number by driver */
/* Returns byte size of given field with a structure. */
/* Returns the number of items in the field array. */
#define BE_NUMBER_OF_FIELD(_type_, _field_) \
(FIELD_SIZEOF(_type_, _field_)/sizeof((((_type_ *)0)->_field_[0])))\


@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2011 Emulex
* Copyright (C) 2005 - 2012 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -531,9 +531,9 @@ static int be2iscsi_get_if_param(struct beiscsi_hba *phba,
break;
case ISCSI_NET_PARAM_IPV4_BOOTPROTO:
if (!if_info.dhcp_state)
len = sprintf(buf, "static");
len = sprintf(buf, "static\n");
else
len = sprintf(buf, "dhcp");
len = sprintf(buf, "dhcp\n");
break;
case ISCSI_NET_PARAM_IPV4_SUBNET:
len = sprintf(buf, "%pI4\n", &if_info.ip_addr.subnet_mask);
@ -541,7 +541,7 @@ static int be2iscsi_get_if_param(struct beiscsi_hba *phba,
case ISCSI_NET_PARAM_VLAN_ENABLED:
len = sprintf(buf, "%s\n",
(if_info.vlan_priority == BEISCSI_VLAN_DISABLE)
? "Disabled" : "Enabled");
? "Disabled\n" : "Enabled\n");
break;
case ISCSI_NET_PARAM_VLAN_ID:
if (if_info.vlan_priority == BEISCSI_VLAN_DISABLE)
@ -586,7 +586,7 @@ int be2iscsi_iface_get_param(struct iscsi_iface *iface,
len = be2iscsi_get_if_param(phba, iface, param, buf);
break;
case ISCSI_NET_PARAM_IFACE_ENABLE:
len = sprintf(buf, "enabled");
len = sprintf(buf, "enabled\n");
break;
case ISCSI_NET_PARAM_IPV4_GW:
memset(&gateway, 0, sizeof(gateway));
@ -690,11 +690,9 @@ int beiscsi_set_param(struct iscsi_cls_conn *cls_conn,
static int beiscsi_get_initname(char *buf, struct beiscsi_hba *phba)
{
int rc;
unsigned int tag, wrb_num;
unsigned short status, extd_status;
unsigned int tag;
struct be_mcc_wrb *wrb;
struct be_cmd_hba_name *resp;
struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
tag = be_cmd_get_initname(phba);
if (!tag) {
@ -702,26 +700,16 @@ static int beiscsi_get_initname(char *buf, struct beiscsi_hba *phba)
"BS_%d : Getting Initiator Name Failed\n");
return -EBUSY;
} else
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
}
wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
if (status || extd_status) {
rc = beiscsi_mccq_compl(phba, tag, &wrb, NULL);
if (rc) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BS_%d : MailBox Command Failed with "
"status = %d extd_status = %d\n",
status, extd_status);
free_mcc_tag(&phba->ctrl, tag);
return -EAGAIN;
"BS_%d : Initiator Name MBX Failed\n");
return rc;
}
wrb = queue_get_wrb(mccq, wrb_num);
free_mcc_tag(&phba->ctrl, tag);
resp = embedded_payload(wrb);
rc = sprintf(buf, "%s\n", resp->initiator_name);
return rc;
@ -731,7 +719,6 @@ static int beiscsi_get_initname(char *buf, struct beiscsi_hba *phba)
* beiscsi_get_port_state - Get the Port State
* @shost : pointer to scsi_host structure
*
* returns number of bytes
*/
static void beiscsi_get_port_state(struct Scsi_Host *shost)
{
@ -750,13 +737,12 @@ static void beiscsi_get_port_state(struct Scsi_Host *shost)
*/
static int beiscsi_get_port_speed(struct Scsi_Host *shost)
{
unsigned int tag, wrb_num;
unsigned short status, extd_status;
int rc;
unsigned int tag;
struct be_mcc_wrb *wrb;
struct be_cmd_ntwk_link_status_resp *resp;
struct beiscsi_hba *phba = iscsi_host_priv(shost);
struct iscsi_cls_host *ihost = shost->shost_data;
struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
tag = be_cmd_get_port_speed(phba);
if (!tag) {
@ -764,26 +750,14 @@ static int beiscsi_get_port_speed(struct Scsi_Host *shost)
"BS_%d : Getting Port Speed Failed\n");
return -EBUSY;
} else
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
if (status || extd_status) {
}
rc = beiscsi_mccq_compl(phba, tag, &wrb, NULL);
if (rc) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BS_%d : MailBox Command Failed with "
"status = %d extd_status = %d\n",
status, extd_status);
free_mcc_tag(&phba->ctrl, tag);
return -EAGAIN;
"BS_%d : Port Speed MBX Failed\n");
return rc;
}
wrb = queue_get_wrb(mccq, wrb_num);
free_mcc_tag(&phba->ctrl, tag);
resp = embedded_payload(wrb);
switch (resp->mac_speed) {
@ -937,6 +911,14 @@ static void beiscsi_set_params_for_offld(struct beiscsi_conn *beiscsi_conn,
session->initial_r2t_en);
AMAP_SET_BITS(struct amap_beiscsi_offload_params, imd, params,
session->imm_data_en);
AMAP_SET_BITS(struct amap_beiscsi_offload_params,
data_seq_inorder, params,
session->dataseq_inorder_en);
AMAP_SET_BITS(struct amap_beiscsi_offload_params,
pdu_seq_inorder, params,
session->pdu_inorder_en);
AMAP_SET_BITS(struct amap_beiscsi_offload_params, max_r2t, params,
session->max_r2t);
AMAP_SET_BITS(struct amap_beiscsi_offload_params, exp_statsn, params,
(conn->exp_statsn - 1));
}
@ -1027,12 +1009,10 @@ static int beiscsi_open_conn(struct iscsi_endpoint *ep,
{
struct beiscsi_endpoint *beiscsi_ep = ep->dd_data;
struct beiscsi_hba *phba = beiscsi_ep->phba;
struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
struct be_mcc_wrb *wrb;
struct tcp_connect_and_offload_out *ptcpcnct_out;
unsigned short status, extd_status;
struct be_dma_mem nonemb_cmd;
unsigned int tag, wrb_num;
unsigned int tag;
int ret = -ENOMEM;
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
@ -1084,35 +1064,26 @@ static int beiscsi_open_conn(struct iscsi_endpoint *ep,
pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
nonemb_cmd.va, nonemb_cmd.dma);
return -EAGAIN;
} else {
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
}
wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
if (status || extd_status) {
ret = beiscsi_mccq_compl(phba, tag, &wrb, NULL);
if (ret) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BS_%d : mgmt_open_connection Failed"
" status = %d extd_status = %d\n",
status, extd_status);
"BS_%d : mgmt_open_connection Failed");
free_mcc_tag(&phba->ctrl, tag);
pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
nonemb_cmd.va, nonemb_cmd.dma);
goto free_ep;
} else {
wrb = queue_get_wrb(mccq, wrb_num);
free_mcc_tag(&phba->ctrl, tag);
ptcpcnct_out = embedded_payload(wrb);
beiscsi_ep = ep->dd_data;
beiscsi_ep->fw_handle = ptcpcnct_out->connection_handle;
beiscsi_ep->cid_vld = 1;
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : mgmt_open_connection Success\n");
}
ptcpcnct_out = embedded_payload(wrb);
beiscsi_ep = ep->dd_data;
beiscsi_ep->fw_handle = ptcpcnct_out->connection_handle;
beiscsi_ep->cid_vld = 1;
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : mgmt_open_connection Success\n");
pci_free_consistent(phba->ctrl.pdev, nonemb_cmd.size,
nonemb_cmd.va, nonemb_cmd.dma);
return 0;
@ -1150,8 +1121,8 @@ beiscsi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
if (phba->state != BE_ADAPTER_UP) {
ret = -EBUSY;
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : The Adapter state is Not UP\n");
beiscsi_log(phba, KERN_WARNING, BEISCSI_LOG_CONFIG,
"BS_%d : The Adapter Port state is Down!!!\n");
return ERR_PTR(ret);
}
@ -1216,11 +1187,9 @@ static int beiscsi_close_conn(struct beiscsi_endpoint *beiscsi_ep, int flag)
beiscsi_ep->ep_cid);
ret = -EAGAIN;
} else {
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
free_mcc_tag(&phba->ctrl, tag);
}
ret = beiscsi_mccq_compl(phba, tag, NULL, NULL);
return ret;
}
@ -1281,12 +1250,9 @@ void beiscsi_ep_disconnect(struct iscsi_endpoint *ep)
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : mgmt_invalidate_connection Failed for cid=%d\n",
beiscsi_ep->ep_cid);
} else {
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
free_mcc_tag(&phba->ctrl, tag);
}
beiscsi_mccq_compl(phba, tag, NULL, NULL);
beiscsi_close_conn(beiscsi_ep, tcp_upload_flag);
beiscsi_free_ep(beiscsi_ep);
beiscsi_unbind_conn_to_cid(phba, beiscsi_ep->ep_cid);


@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2011 Emulex
* Copyright (C) 2005 - 2012 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or

File diff suppressed because it is too large.


@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2011 Emulex
* Copyright (C) 2005 - 2012 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -36,12 +36,13 @@
#include "be.h"
#define DRV_NAME "be2iscsi"
#define BUILD_STR "4.4.58.0"
#define BUILD_STR "10.0.272.0"
#define BE_NAME "Emulex OneConnect" \
"Open-iSCSI Driver version" BUILD_STR
#define DRV_DESC BE_NAME " " "Driver"
#define BE_VENDOR_ID 0x19A2
#define ELX_VENDOR_ID 0x10DF
/* DEVICE ID's for BE2 */
#define BE_DEVICE_ID1 0x212
#define OC_DEVICE_ID1 0x702
@ -51,6 +52,9 @@
#define BE_DEVICE_ID2 0x222
#define OC_DEVICE_ID3 0x712
/* DEVICE ID for SKH */
#define OC_SKH_ID1 0x722
#define BE2_IO_DEPTH 1024
#define BE2_MAX_SESSIONS 256
#define BE2_CMDS_PER_CXN 128
@ -60,7 +64,11 @@
#define BE2_DEFPDU_HDR_SZ 64
#define BE2_DEFPDU_DATA_SZ 8192
#define MAX_CPUS 31
#define MAX_CPUS 64
#define BEISCSI_MAX_NUM_CPUS 7
#define OC_SKH_MAX_NUM_CPUS 63
#define BEISCSI_SGLIST_ELEMENTS 30
#define BEISCSI_CMD_PER_LUN 128 /* scsi_host->cmd_per_lun */
@ -257,6 +265,7 @@ struct invalidate_command_table {
unsigned short cid;
} __packed;
#define chip_skh_r(pdev) (pdev->device == OC_SKH_ID1)
struct beiscsi_hba {
struct hba_parameters params;
struct hwi_controller *phwi_ctrlr;
@ -270,12 +279,11 @@ struct beiscsi_hba {
struct be_bus_address pci_pa; /* CSR */
/* PCI representation of our HBA */
struct pci_dev *pcidev;
unsigned int state;
unsigned short asic_revision;
unsigned int num_cpus;
unsigned int nxt_cqid;
struct msix_entry msix_entries[MAX_CPUS + 1];
char *msi_name[MAX_CPUS + 1];
struct msix_entry msix_entries[MAX_CPUS];
char *msi_name[MAX_CPUS];
bool msix_enabled;
struct be_mem_descriptor *init_mem;
@ -325,12 +333,14 @@ struct beiscsi_hba {
spinlock_t cid_lock;
} fw_config;
unsigned int state;
bool fw_timeout;
bool ue_detected;
struct delayed_work beiscsi_hw_check_task;
u8 mac_address[ETH_ALEN];
unsigned short todo_cq;
unsigned short todo_mcc_cq;
char wq_name[20];
struct workqueue_struct *wq; /* The actual work queue */
struct work_struct work_cqs; /* The work being queued */
struct be_ctrl_info ctrl;
unsigned int generation;
unsigned int interface_handle;
@ -338,7 +348,10 @@ struct beiscsi_hba {
struct invalidate_command_table inv_tbl[128];
unsigned int attr_log_enable;
int (*iotask_fn)(struct iscsi_task *,
struct scatterlist *sg,
uint32_t num_sg, uint32_t xferlen,
uint32_t writedir);
};
struct beiscsi_session {
@ -410,6 +423,9 @@ struct beiscsi_io_task {
struct be_cmd_bhs *cmd_bhs;
struct be_bus_address bhs_pa;
unsigned short bhs_len;
dma_addr_t mtask_addr;
uint32_t mtask_data_count;
uint8_t wrb_type;
};
struct be_nonio_bhs {
@ -457,6 +473,9 @@ struct beiscsi_offload_params {
#define OFFLD_PARAMS_HDE 0x00000008
#define OFFLD_PARAMS_IR2T 0x00000010
#define OFFLD_PARAMS_IMD 0x00000020
#define OFFLD_PARAMS_DATA_SEQ_INORDER 0x00000040
#define OFFLD_PARAMS_PDU_SEQ_INORDER 0x00000080
#define OFFLD_PARAMS_MAX_R2T 0x00FFFF00
/**
* Pseudo amap definition in which each bit of the actual structure is defined
@ -471,7 +490,10 @@ struct amap_beiscsi_offload_params {
u8 hde[1];
u8 ir2t[1];
u8 imd[1];
u8 pad[26];
u8 data_seq_inorder[1];
u8 pdu_seq_inorder[1];
u8 max_r2t[16];
u8 pad[8];
u8 exp_statsn[32];
};
@ -569,6 +591,20 @@ struct amap_i_t_dpdu_cqe {
u8 valid;
} __packed;
struct amap_i_t_dpdu_cqe_v2 {
u8 db_addr_hi[32]; /* DWORD 0 */
u8 db_addr_lo[32]; /* DWORD 1 */
u8 code[6]; /* DWORD 2 */
u8 num_cons; /* DWORD 2*/
u8 rsvd0[8]; /* DWORD 2 */
u8 dpl[17]; /* DWORD 2 */
u8 index[16]; /* DWORD 3 */
u8 cid[13]; /* DWORD 3 */
u8 rsvd1; /* DWORD 3 */
u8 final; /* DWORD 3 */
u8 valid; /* DWORD 3 */
} __packed;
#define CQE_VALID_MASK 0x80000000
#define CQE_CODE_MASK 0x0000003F
#define CQE_CID_MASK 0x0000FFC0
@ -617,6 +653,11 @@ struct iscsi_wrb {
} __packed;
#define WRB_TYPE_MASK 0xF0000000
#define SKH_WRB_TYPE_OFFSET 27
#define BE_WRB_TYPE_OFFSET 28
#define ADAPTER_SET_WRB_TYPE(pwrb, wrb_type, type_offset) \
(pwrb->dw[0] |= (wrb_type << type_offset))
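The macro above simply ORs the WRB type into dword 0; the bit offset differs per chip (27 on Skyhawk, 28 on BE2/BE3). A hedged sketch of how a caller might pick the offset, using chip_skh_r() from this header (example_set_wrb_type() is illustrative, not driver code):

static inline void example_set_wrb_type(struct beiscsi_hba *phba,
					struct iscsi_wrb *pwrb, u32 wrb_type)
{
	/* Skyhawk stamps the type at bit 27, BE2/BE3 at bit 28 */
	u32 offset = chip_skh_r(phba->pcidev) ? SKH_WRB_TYPE_OFFSET :
						BE_WRB_TYPE_OFFSET;

	ADAPTER_SET_WRB_TYPE(pwrb, wrb_type, offset);
}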
/**
* Pseudo amap definition in which each bit of the actual structure is defined
@ -663,12 +704,57 @@ struct amap_iscsi_wrb {
} __packed;
struct amap_iscsi_wrb_v2 {
u8 r2t_exp_dtl[25]; /* DWORD 0 */
u8 rsvd0[2]; /* DWORD 0*/
u8 type[5]; /* DWORD 0 */
u8 ptr2nextwrb[8]; /* DWORD 1 */
u8 wrb_idx[8]; /* DWORD 1 */
u8 lun[16]; /* DWORD 1 */
u8 sgl_idx[16]; /* DWORD 2 */
u8 ref_sgl_icd_idx[16]; /* DWORD 2 */
u8 exp_data_sn[32]; /* DWORD 3 */
u8 iscsi_bhs_addr_hi[32]; /* DWORD 4 */
u8 iscsi_bhs_addr_lo[32]; /* DWORD 5 */
u8 cq_id[16]; /* DWORD 6 */
u8 rsvd1[16]; /* DWORD 6 */
u8 cmdsn_itt[32]; /* DWORD 7 */
u8 sge0_addr_hi[32]; /* DWORD 8 */
u8 sge0_addr_lo[32]; /* DWORD 9 */
u8 sge0_offset[24]; /* DWORD 10 */
u8 rsvd2[7]; /* DWORD 10 */
u8 sge0_last; /* DWORD 10 */
u8 sge0_len[17]; /* DWORD 11 */
u8 rsvd3[7]; /* DWORD 11 */
u8 diff_enbl; /* DWORD 11 */
u8 u_run; /* DWORD 11 */
u8 o_run; /* DWORD 11 */
u8 invalid; /* DWORD 11 */
u8 dsp; /* DWORD 11 */
u8 dmsg; /* DWORD 11 */
u8 rsvd4; /* DWORD 11 */
u8 lt; /* DWORD 11 */
u8 sge1_addr_hi[32]; /* DWORD 12 */
u8 sge1_addr_lo[32]; /* DWORD 13 */
u8 sge1_r2t_offset[24]; /* DWORD 14 */
u8 rsvd5[7]; /* DWORD 14 */
u8 sge1_last; /* DWORD 14 */
u8 sge1_len[17]; /* DWORD 15 */
u8 rsvd6[15]; /* DWORD 15 */
} __packed;
struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid);
void
free_mgmt_sgl_handle(struct beiscsi_hba *phba, struct sgl_handle *psgl_handle);
void beiscsi_process_all_cqs(struct work_struct *work);
static inline bool beiscsi_error(struct beiscsi_hba *phba)
{
return phba->ue_detected || phba->fw_timeout;
}
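beiscsi_error() gives every management path a single check for a dead adapter. A minimal usage sketch, assuming a hypothetical caller:

static int example_mgmt_op(struct beiscsi_hba *phba)
{
	/* Skip hardware access once a UE or an FW timeout has been seen */
	if (beiscsi_error(phba))
		return -EIO;

	/* ... safe to issue the mailbox command here ... */
	return 0;
}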
struct pdu_nop_out {
u32 dw[12];
};
@ -728,6 +814,7 @@ struct iscsi_target_context_update_wrb {
* Pseudo amap definition in which each bit of the actual structure is defined
* as a byte: used to calculate offset/shift/mask of each field
*/
#define BE_TGT_CTX_UPDT_CMD 0x07
struct amap_iscsi_target_context_update_wrb {
u8 lun[14]; /* DWORD 0 */
u8 lt; /* DWORD 0 */
@ -773,6 +860,47 @@ struct amap_iscsi_target_context_update_wrb {
} __packed;
#define BEISCSI_MAX_RECV_DATASEG_LEN (64 * 1024)
#define BEISCSI_MAX_CXNS 1
struct amap_iscsi_target_context_update_wrb_v2 {
u8 max_burst_length[24]; /* DWORD 0 */
u8 rsvd0[3]; /* DWORD 0 */
u8 type[5]; /* DWORD 0 */
u8 ptr2nextwrb[8]; /* DWORD 1 */
u8 wrb_idx[8]; /* DWORD 1 */
u8 rsvd1[16]; /* DWORD 1 */
u8 max_send_data_segment_length[24]; /* DWORD 2 */
u8 rsvd2[8]; /* DWORD 2 */
u8 first_burst_length[24]; /* DWORD 3 */
u8 rsvd3[8]; /* DWORD 3 */
u8 max_r2t[16]; /* DWORD 4 */
u8 rsvd4[10]; /* DWORD 4 */
u8 hde; /* DWORD 4 */
u8 dde; /* DWORD 4 */
u8 erl[2]; /* DWORD 4 */
u8 imd; /* DWORD 4 */
u8 ir2t; /* DWORD 4 */
u8 stat_sn[32]; /* DWORD 5 */
u8 rsvd5[32]; /* DWORD 6 */
u8 rsvd6[32]; /* DWORD 7 */
u8 max_recv_dataseg_len[24]; /* DWORD 8 */
u8 rsvd7[8]; /* DWORD 8 */
u8 rsvd8[32]; /* DWORD 9 */
u8 rsvd9[32]; /* DWORD 10 */
u8 max_cxns[16]; /* DWORD 11 */
u8 rsvd10[11]; /* DWORD 11*/
u8 invld; /* DWORD 11 */
u8 rsvd11;/* DWORD 11*/
u8 dmsg; /* DWORD 11 */
u8 data_seq_inorder; /* DWORD 11 */
u8 pdu_seq_inorder; /* DWORD 11 */
u8 rsvd12[32]; /*DWORD 12 */
u8 rsvd13[32]; /* DWORD 13 */
u8 rsvd14[32]; /* DWORD 14 */
u8 rsvd15[32]; /* DWORD 15 */
} __packed;
struct be_ring {
u32 pages; /* queue size in pages */
u32 id; /* queue id assigned by beklib */
@ -837,7 +965,7 @@ struct hwi_context_memory {
u16 max_eqd; /* in usecs */
u16 cur_eqd; /* in usecs */
struct be_eq_obj be_eq[MAX_CPUS];
struct be_queue_info be_cq[MAX_CPUS];
struct be_queue_info be_cq[MAX_CPUS - 1];
struct be_queue_info be_def_hdrq;
struct be_queue_info be_def_dataq;

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2011 Emulex
* Copyright (C) 2005 - 2012 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -22,6 +22,138 @@
#include <scsi/scsi_bsg_iscsi.h>
#include "be_mgmt.h"
#include "be_iscsi.h"
#include "be_main.h"
/* UE Status Low CSR */
static const char * const desc_ue_status_low[] = {
"CEV",
"CTX",
"DBUF",
"ERX",
"Host",
"MPU",
"NDMA",
"PTC ",
"RDMA ",
"RXF ",
"RXIPS ",
"RXULP0 ",
"RXULP1 ",
"RXULP2 ",
"TIM ",
"TPOST ",
"TPRE ",
"TXIPS ",
"TXULP0 ",
"TXULP1 ",
"UC ",
"WDMA ",
"TXULP2 ",
"HOST1 ",
"P0_OB_LINK ",
"P1_OB_LINK ",
"HOST_GPIO ",
"MBOX ",
"AXGMAC0",
"AXGMAC1",
"JTAG",
"MPU_INTPEND"
};
/* UE Status High CSR */
static const char * const desc_ue_status_hi[] = {
"LPCMEMHOST",
"MGMT_MAC",
"PCS0ONLINE",
"MPU_IRAM",
"PCS1ONLINE",
"PCTL0",
"PCTL1",
"PMEM",
"RR",
"TXPB",
"RXPP",
"XAUI",
"TXP",
"ARM",
"IPC",
"HOST2",
"HOST3",
"HOST4",
"HOST5",
"HOST6",
"HOST7",
"HOST8",
"HOST9",
"NETC",
"Unknown",
"Unknown",
"Unknown",
"Unknown",
"Unknown",
"Unknown",
"Unknown",
"Unknown"
};
/*
* beiscsi_ue_detect()- Detect Unrecoverable Error on adapter
* @phba: Driver priv structure
*
* Read registers linked to UE and check for the UE status
**/
void beiscsi_ue_detect(struct beiscsi_hba *phba)
{
uint32_t ue_hi = 0, ue_lo = 0;
uint32_t ue_mask_hi = 0, ue_mask_lo = 0;
uint8_t i = 0;
if (phba->ue_detected)
return;
pci_read_config_dword(phba->pcidev,
PCICFG_UE_STATUS_LOW, &ue_lo);
pci_read_config_dword(phba->pcidev,
PCICFG_UE_STATUS_MASK_LOW,
&ue_mask_lo);
pci_read_config_dword(phba->pcidev,
PCICFG_UE_STATUS_HIGH,
&ue_hi);
pci_read_config_dword(phba->pcidev,
PCICFG_UE_STATUS_MASK_HI,
&ue_mask_hi);
ue_lo = (ue_lo & ~ue_mask_lo);
ue_hi = (ue_hi & ~ue_mask_hi);
if (ue_lo || ue_hi) {
phba->ue_detected = true;
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BG_%d : Error detected on the adapter\n");
}
if (ue_lo) {
for (i = 0; ue_lo; ue_lo >>= 1, i++) {
if (ue_lo & 1)
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG,
"BG_%d : UE_LOW %s bit set\n",
desc_ue_status_low[i]);
}
}
if (ue_hi) {
for (i = 0; ue_hi; ue_hi >>= 1, i++) {
if (ue_hi & 1)
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG,
"BG_%d : UE_HIGH %s bit set\n",
desc_ue_status_hi[i]);
}
}
}
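The function above applies one pattern twice: read the raw status register, clear the bits suppressed by the matching mask register, then walk the surviving bits. A standalone sketch of that loop (decode_ue_bits() is illustrative only):

static void decode_ue_bits(uint32_t status, uint32_t mask,
			   const char * const *desc)
{
	uint32_t ue = status & ~mask;	/* keep only unmasked error bits */
	uint8_t i;

	for (i = 0; ue; ue >>= 1, i++)
		if (ue & 1)
			printk(KERN_ERR "UE %s bit set\n", desc[i]);
}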
/**
* mgmt_reopen_session()- Reopen a session based on reopen_type
@ -575,13 +707,20 @@ unsigned int mgmt_get_all_if_id(struct beiscsi_hba *phba)
return status;
}
/*
* mgmt_exec_nonemb_cmd()- Execute Non Embedded MBX Cmd
* @phba: Driver priv structure
* @nonemb_cmd: Address of the MBX command issued
* @resp_buf: Buffer to copy the MBX cmd response
* @resp_buf_len: response length to be copied
*
**/
static int mgmt_exec_nonemb_cmd(struct beiscsi_hba *phba,
struct be_dma_mem *nonemb_cmd, void *resp_buf,
int resp_buf_len)
{
struct be_ctrl_info *ctrl = &phba->ctrl;
struct be_mcc_wrb *wrb = wrb_from_mccq(phba);
unsigned short status, extd_status;
struct be_sge *sge;
unsigned int tag;
int rc = 0;
@ -599,31 +738,25 @@ static int mgmt_exec_nonemb_cmd(struct beiscsi_hba *phba,
be_wrb_hdr_prepare(wrb, nonemb_cmd->size, false, 1);
sge->pa_hi = cpu_to_le32(upper_32_bits(nonemb_cmd->dma));
sge->pa_lo = cpu_to_le32(nonemb_cmd->dma & 0xFFFFFFFF);
sge->pa_lo = cpu_to_le32(lower_32_bits(nonemb_cmd->dma));
sge->len = cpu_to_le32(nonemb_cmd->size);
be_mcc_notify(phba);
spin_unlock(&ctrl->mbox_lock);
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
if (status || extd_status) {
rc = beiscsi_mccq_compl(phba, tag, NULL, nonemb_cmd->va);
if (rc) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX,
"BG_%d : mgmt_exec_nonemb_cmd Failed status = %d"
"extd_status = %d\n", status, extd_status);
"BG_%d : mgmt_exec_nonemb_cmd Failed status\n");
rc = -EIO;
goto free_tag;
goto free_cmd;
}
if (resp_buf)
memcpy(resp_buf, nonemb_cmd->va, resp_buf_len);
free_tag:
free_mcc_tag(&phba->ctrl, tag);
free_cmd:
pci_free_consistent(ctrl->pdev, nonemb_cmd->size,
nonemb_cmd->va, nonemb_cmd->dma);
@ -1009,10 +1142,9 @@ int be_mgmt_get_boot_shandle(struct beiscsi_hba *phba,
{
struct be_cmd_get_boot_target_resp *boot_resp;
struct be_mcc_wrb *wrb;
unsigned int tag, wrb_num;
unsigned int tag;
uint8_t boot_retry = 3;
unsigned short status, extd_status;
struct be_queue_info *mccq = &phba->ctrl.mcc_obj.q;
int rc;
do {
/* Get the Boot Target Session Handle and Count*/
@ -1022,24 +1154,16 @@ int be_mgmt_get_boot_shandle(struct beiscsi_hba *phba,
BEISCSI_LOG_CONFIG | BEISCSI_LOG_INIT,
"BG_%d : Getting Boot Target Info Failed\n");
return -EAGAIN;
} else
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
}
wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
if (status || extd_status) {
rc = beiscsi_mccq_compl(phba, tag, &wrb, NULL);
if (rc) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_INIT | BEISCSI_LOG_CONFIG,
"BG_%d : mgmt_get_boot_target Failed"
" status = %d extd_status = %d\n",
status, extd_status);
free_mcc_tag(&phba->ctrl, tag);
"BG_%d : MBX CMD get_boot_target Failed\n");
return -EBUSY;
}
wrb = queue_get_wrb(mccq, wrb_num);
free_mcc_tag(&phba->ctrl, tag);
boot_resp = embedded_payload(wrb);
/* Check if the there are any Boot targets configured */
@ -1064,24 +1188,15 @@ int be_mgmt_get_boot_shandle(struct beiscsi_hba *phba,
BEISCSI_LOG_INIT | BEISCSI_LOG_CONFIG,
"BG_%d : mgmt_reopen_session Failed\n");
return -EAGAIN;
} else
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
}
wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
if (status || extd_status) {
rc = beiscsi_mccq_compl(phba, tag, NULL, NULL);
if (rc) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_INIT | BEISCSI_LOG_CONFIG,
"BG_%d : mgmt_reopen_session Failed"
" status = %d extd_status = %d\n",
status, extd_status);
free_mcc_tag(&phba->ctrl, tag);
return -EBUSY;
"BG_%d : mgmt_reopen_session Failed");
return rc;
}
free_mcc_tag(&phba->ctrl, tag);
} while (--boot_retry);
/* Couldn't log into the boot target */
@ -1106,8 +1221,9 @@ int be_mgmt_get_boot_shandle(struct beiscsi_hba *phba,
int mgmt_set_vlan(struct beiscsi_hba *phba,
uint16_t vlan_tag)
{
unsigned int tag, wrb_num;
unsigned short status, extd_status;
int rc;
unsigned int tag;
struct be_mcc_wrb *wrb = NULL;
tag = be_cmd_set_vlan(phba, vlan_tag);
if (!tag) {
@ -1115,24 +1231,208 @@ int mgmt_set_vlan(struct beiscsi_hba *phba,
(BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX),
"BG_%d : VLAN Setting Failed\n");
return -EBUSY;
} else
wait_event_interruptible(phba->ctrl.mcc_wait[tag],
phba->ctrl.mcc_numtag[tag]);
wrb_num = (phba->ctrl.mcc_numtag[tag] & 0x00FF0000) >> 16;
extd_status = (phba->ctrl.mcc_numtag[tag] & 0x0000FF00) >> 8;
status = phba->ctrl.mcc_numtag[tag] & 0x000000FF;
if (status || extd_status) {
beiscsi_log(phba, KERN_ERR,
(BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX),
"BS_%d : status : %d extd_status : %d\n",
status, extd_status);
free_mcc_tag(&phba->ctrl, tag);
return -EAGAIN;
}
free_mcc_tag(&phba->ctrl, tag);
return 0;
rc = beiscsi_mccq_compl(phba, tag, &wrb, NULL);
if (rc) {
beiscsi_log(phba, KERN_ERR,
(BEISCSI_LOG_CONFIG | BEISCSI_LOG_MBOX),
"BS_%d : VLAN MBX Cmd Failed\n");
return rc;
}
return rc;
}
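mgmt_set_vlan() shows the convention this patch converts the whole file to: issue a tagged MCC command, then let beiscsi_mccq_compl() sleep on the completion, decode status/extd_status and release the tag internally. A minimal sketch of the calling pattern (example_issue_and_wait() is illustrative):

static int example_issue_and_wait(struct beiscsi_hba *phba, uint16_t vlan)
{
	unsigned int tag;

	tag = be_cmd_set_vlan(phba, vlan);	/* any tag-returning issuer */
	if (!tag)
		return -EBUSY;			/* command was never posted */

	/* Waits for the tagged completion and frees the tag itself */
	return beiscsi_mccq_compl(phba, tag, NULL, NULL);
}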
/**
* beiscsi_drvr_ver_disp()- Display the driver Name and Version
* @dev: ptr to device, not used.
* @attr: device attribute, not used.
* @buf: contains formatted text driver name and version
*
* return
* size of the formatted string
**/
ssize_t
beiscsi_drvr_ver_disp(struct device *dev, struct device_attribute *attr,
char *buf)
{
return snprintf(buf, PAGE_SIZE, BE_NAME "\n");
}
/**
* beiscsi_adap_family_disp()- Display adapter family.
* @dev: ptr to device to get priv structure
* @attr: device attribute, not used.
* @buf: contains formatted text driver name and version
*
* return
* size of the formatted string
**/
ssize_t
beiscsi_adap_family_disp(struct device *dev, struct device_attribute *attr,
char *buf)
{
uint16_t dev_id = 0;
struct Scsi_Host *shost = class_to_shost(dev);
struct beiscsi_hba *phba = iscsi_host_priv(shost);
dev_id = phba->pcidev->device;
switch (dev_id) {
case BE_DEVICE_ID1:
case OC_DEVICE_ID1:
case OC_DEVICE_ID2:
	return snprintf(buf, PAGE_SIZE, "BE2 Adapter Family\n");
case BE_DEVICE_ID2:
case OC_DEVICE_ID3:
	return snprintf(buf, PAGE_SIZE, "BE3-R Adapter Family\n");
case OC_SKH_ID1:
	return snprintf(buf, PAGE_SIZE, "Skyhawk-R Adapter Family\n");
default:
	return snprintf(buf, PAGE_SIZE,
			"Unknown Adapter Family: 0x%x\n", dev_id);
}
}
void beiscsi_offload_cxn_v0(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle,
struct be_mem_descriptor *mem_descr)
{
struct iscsi_wrb *pwrb = pwrb_handle->pwrb;
memset(pwrb, 0, sizeof(*pwrb));
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb,
max_send_data_segment_length, pwrb,
params->dw[offsetof(struct amap_beiscsi_offload_params,
max_send_data_segment_length) / 32]);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, type, pwrb,
BE_TGT_CTX_UPDT_CMD);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb,
first_burst_length,
pwrb,
params->dw[offsetof(struct amap_beiscsi_offload_params,
first_burst_length) / 32]);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, erl, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
erl) / 32] & OFFLD_PARAMS_ERL));
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, dde, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
dde) / 32] & OFFLD_PARAMS_DDE) >> 2);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, hde, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
hde) / 32] & OFFLD_PARAMS_HDE) >> 3);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, ir2t, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
ir2t) / 32] & OFFLD_PARAMS_IR2T) >> 4);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, imd, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
imd) / 32] & OFFLD_PARAMS_IMD) >> 5);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, stat_sn,
pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
exp_statsn) / 32] + 1));
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, wrb_idx,
pwrb, pwrb_handle->wrb_index);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb,
max_burst_length, pwrb, params->dw[offsetof
(struct amap_beiscsi_offload_params,
max_burst_length) / 32]);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, ptr2nextwrb,
pwrb, pwrb_handle->nxt_wrb_index);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb,
session_state, pwrb, 0);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, compltonack,
pwrb, 1);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, notpredblq,
pwrb, 0);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, mode, pwrb,
0);
mem_descr += ISCSI_MEM_GLOBAL_HEADER;
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb,
pad_buffer_addr_hi, pwrb,
mem_descr->mem_array[0].bus_address.u.a32.address_hi);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb,
pad_buffer_addr_lo, pwrb,
mem_descr->mem_array[0].bus_address.u.a32.address_lo);
}
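Both offload variants lean on the pseudo-amap trick described in the header: each bit of the hardware layout is declared as one byte, so offsetof() yields a bit offset and dividing by 32 picks the 32-bit word holding a field. A hedged helper capturing that indexing (EXAMPLE_AMAP_DWORD is illustrative; offsetof comes from <linux/stddef.h>):

#define EXAMPLE_AMAP_DWORD(type, field, dw) \
	((dw)[offsetof(struct type, field) / 32])

/* e.g. the dword carrying the imd flag of the offload parameters:
 *	u32 v = EXAMPLE_AMAP_DWORD(amap_beiscsi_offload_params, imd,
 *				   params->dw);
 */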
void beiscsi_offload_cxn_v2(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle)
{
struct iscsi_wrb *pwrb = pwrb_handle->pwrb;
memset(pwrb, 0, sizeof(*pwrb));
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
max_burst_length, pwrb, params->dw[offsetof
(struct amap_beiscsi_offload_params,
max_burst_length) / 32]);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
type, pwrb,
BE_TGT_CTX_UPDT_CMD);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
ptr2nextwrb,
pwrb, pwrb_handle->nxt_wrb_index);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2, wrb_idx,
pwrb, pwrb_handle->wrb_index);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
max_send_data_segment_length, pwrb,
params->dw[offsetof(struct amap_beiscsi_offload_params,
max_send_data_segment_length) / 32]);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
first_burst_length, pwrb,
params->dw[offsetof(struct amap_beiscsi_offload_params,
first_burst_length) / 32]);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
max_recv_dataseg_len, pwrb, BEISCSI_MAX_RECV_DATASEG_LEN);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
max_cxns, pwrb, BEISCSI_MAX_CXNS);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2, erl, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
erl) / 32] & OFFLD_PARAMS_ERL));
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2, dde, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
dde) / 32] & OFFLD_PARAMS_DDE) >> 2);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2, hde, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
hde) / 32] & OFFLD_PARAMS_HDE) >> 3);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
ir2t, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
ir2t) / 32] & OFFLD_PARAMS_IR2T) >> 4);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2, imd, pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
imd) / 32] & OFFLD_PARAMS_IMD) >> 5);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
data_seq_inorder,
pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
data_seq_inorder) / 32] &
OFFLD_PARAMS_DATA_SEQ_INORDER) >> 6);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
pdu_seq_inorder,
pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
pdu_seq_inorder) / 32] &
OFFLD_PARAMS_PDU_SEQ_INORDER) >> 7);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2, max_r2t,
pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
max_r2t) / 32] &
OFFLD_PARAMS_MAX_R2T) >> 8);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2, stat_sn,
pwrb,
(params->dw[offsetof(struct amap_beiscsi_offload_params,
exp_statsn) / 32] + 1));
}

View File

@ -1,5 +1,5 @@
/**
* Copyright (C) 2005 - 2011 Emulex
* Copyright (C) 2005 - 2012 Emulex
* All rights reserved.
*
* This program is free software; you can redistribute it and/or
@ -30,6 +30,12 @@
#define IP_V6_LEN 16
#define IP_V4_LEN 4
/* UE Status and Mask register */
#define PCICFG_UE_STATUS_LOW 0xA0
#define PCICFG_UE_STATUS_HIGH 0xA4
#define PCICFG_UE_STATUS_MASK_LOW 0xA8
#define PCICFG_UE_STATUS_MASK_HI 0xAC
/**
* Pseudo amap definition in which each bit of the actual structure is defined
* as a byte: used to calculate offset/shift/mask of each field
@ -301,4 +307,19 @@ int be_mgmt_get_boot_shandle(struct beiscsi_hba *phba,
unsigned int mgmt_get_all_if_id(struct beiscsi_hba *phba);
int mgmt_set_vlan(struct beiscsi_hba *phba, uint16_t vlan_tag);
ssize_t beiscsi_drvr_ver_disp(struct device *dev,
struct device_attribute *attr, char *buf);
ssize_t beiscsi_adap_family_disp(struct device *dev,
struct device_attribute *attr, char *buf);
void beiscsi_offload_cxn_v0(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle,
struct be_mem_descriptor *mem_descr);
void beiscsi_offload_cxn_v2(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle);
void beiscsi_ue_detect(struct beiscsi_hba *phba);
#endif

View File

@ -800,7 +800,7 @@ extern struct device_attribute *bnx2i_dev_attributes[];
/*
* Function Prototypes
*/
extern void bnx2i_identify_device(struct bnx2i_hba *hba);
extern void bnx2i_identify_device(struct bnx2i_hba *hba, struct cnic_dev *dev);
extern void bnx2i_ulp_init(struct cnic_dev *dev);
extern void bnx2i_ulp_exit(struct cnic_dev *dev);

View File

@ -79,42 +79,33 @@ static struct notifier_block bnx2i_cpu_notifier = {
/**
* bnx2i_identify_device - identifies NetXtreme II device type
* @hba: Adapter structure pointer
* @cnic: Corresponding cnic device
*
* This function identifies the NX2 device type and sets appropriate
* queue mailbox register access method, 5709 requires driver to
* access MBOX regs using *bin* mode
*/
void bnx2i_identify_device(struct bnx2i_hba *hba)
void bnx2i_identify_device(struct bnx2i_hba *hba, struct cnic_dev *dev)
{
hba->cnic_dev_type = 0;
if ((hba->pci_did == PCI_DEVICE_ID_NX2_5706) ||
(hba->pci_did == PCI_DEVICE_ID_NX2_5706S))
set_bit(BNX2I_NX2_DEV_5706, &hba->cnic_dev_type);
else if ((hba->pci_did == PCI_DEVICE_ID_NX2_5708) ||
(hba->pci_did == PCI_DEVICE_ID_NX2_5708S))
set_bit(BNX2I_NX2_DEV_5708, &hba->cnic_dev_type);
else if ((hba->pci_did == PCI_DEVICE_ID_NX2_5709) ||
(hba->pci_did == PCI_DEVICE_ID_NX2_5709S)) {
set_bit(BNX2I_NX2_DEV_5709, &hba->cnic_dev_type);
hba->mail_queue_access = BNX2I_MQ_BIN_MODE;
} else if (hba->pci_did == PCI_DEVICE_ID_NX2_57710 ||
hba->pci_did == PCI_DEVICE_ID_NX2_57711 ||
hba->pci_did == PCI_DEVICE_ID_NX2_57711E ||
hba->pci_did == PCI_DEVICE_ID_NX2_57712 ||
hba->pci_did == PCI_DEVICE_ID_NX2_57712E ||
hba->pci_did == PCI_DEVICE_ID_NX2_57800 ||
hba->pci_did == PCI_DEVICE_ID_NX2_57800_MF ||
hba->pci_did == PCI_DEVICE_ID_NX2_57800_VF ||
hba->pci_did == PCI_DEVICE_ID_NX2_57810 ||
hba->pci_did == PCI_DEVICE_ID_NX2_57810_MF ||
hba->pci_did == PCI_DEVICE_ID_NX2_57810_VF ||
hba->pci_did == PCI_DEVICE_ID_NX2_57840 ||
hba->pci_did == PCI_DEVICE_ID_NX2_57840_MF ||
hba->pci_did == PCI_DEVICE_ID_NX2_57840_VF)
if (test_bit(CNIC_F_BNX2_CLASS, &dev->flags)) {
if (hba->pci_did == PCI_DEVICE_ID_NX2_5706 ||
hba->pci_did == PCI_DEVICE_ID_NX2_5706S) {
set_bit(BNX2I_NX2_DEV_5706, &hba->cnic_dev_type);
} else if (hba->pci_did == PCI_DEVICE_ID_NX2_5708 ||
hba->pci_did == PCI_DEVICE_ID_NX2_5708S) {
set_bit(BNX2I_NX2_DEV_5708, &hba->cnic_dev_type);
} else if (hba->pci_did == PCI_DEVICE_ID_NX2_5709 ||
hba->pci_did == PCI_DEVICE_ID_NX2_5709S) {
set_bit(BNX2I_NX2_DEV_5709, &hba->cnic_dev_type);
hba->mail_queue_access = BNX2I_MQ_BIN_MODE;
}
} else if (test_bit(CNIC_F_BNX2X_CLASS, &dev->flags)) {
set_bit(BNX2I_NX2_DEV_57710, &hba->cnic_dev_type);
else
} else {
printk(KERN_ALERT "bnx2i: unknown device, 0x%x\n",
hba->pci_did);
}
}
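With the class flags, the driver no longer tracks each 10Gb PCI ID by hand; a hedged sketch of the resulting simplification (example_is_10g_class() is illustrative):

static bool example_is_10g_class(struct cnic_dev *dev)
{
	/* Any adapter cnic marks BNX2X-class takes the 57710 branch,
	 * so future 57xxx IDs need no bnx2i change. */
	return test_bit(CNIC_F_BNX2X_CLASS, &dev->flags);
}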

View File

@ -808,7 +808,7 @@ struct bnx2i_hba *bnx2i_alloc_hba(struct cnic_dev *cnic)
hba->pci_func = PCI_FUNC(hba->pcidev->devfn);
hba->pci_devno = PCI_SLOT(hba->pcidev->devfn);
bnx2i_identify_device(hba);
bnx2i_identify_device(hba, cnic);
bnx2i_setup_host_queue_size(hba, shost);
hba->reg_base = pci_resource_start(hba->pcidev, 0);

View File

@ -0,0 +1,19 @@
config SCSI_CHELSIO_FCOE
tristate "Chelsio Communications FCoE support"
depends on PCI && SCSI
select SCSI_FC_ATTRS
select FW_LOADER
help
This driver supports FCoE Offload functionality over
Chelsio T4-based 10Gb Converged Network Adapters.
For general information about Chelsio and our products, visit
our website at <http://www.chelsio.com>.
For customer support, please visit our customer support page at
<http://www.chelsio.com/support.html>.
Please send feedback to <linux-bugs@chelsio.com>.
To compile this driver as a module choose M here; the module
will be called csiostor.

View File

@ -0,0 +1,11 @@
#
## Chelsio FCoE driver
#
##
ccflags-y += -I$(srctree)/drivers/net/ethernet/chelsio/cxgb4
obj-$(CONFIG_SCSI_CHELSIO_FCOE) += csiostor.o
csiostor-objs := csio_attr.o csio_init.o csio_lnode.o csio_scsi.o \
csio_hw.o csio_isr.o csio_mb.o csio_rnode.o csio_wr.o

View File

@ -0,0 +1,796 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/mm.h>
#include <linux/jiffies.h>
#include <scsi/fc/fc_fs.h>
#include "csio_init.h"
static void
csio_vport_set_state(struct csio_lnode *ln);
/*
* csio_reg_rnode - Register a remote port with FC transport.
* @rn: Rnode representing remote port.
*
* Call fc_remote_port_add() to register this remote port with FC transport.
* If remote port is Initiator OR Target OR both, change the role appropriately.
*
*/
void
csio_reg_rnode(struct csio_rnode *rn)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
struct Scsi_Host *shost = csio_ln_to_shost(ln);
struct fc_rport_identifiers ids;
struct fc_rport *rport;
struct csio_service_parms *sp;
ids.node_name = wwn_to_u64(csio_rn_wwnn(rn));
ids.port_name = wwn_to_u64(csio_rn_wwpn(rn));
ids.port_id = rn->nport_id;
ids.roles = FC_RPORT_ROLE_UNKNOWN;
if (rn->role & CSIO_RNFR_INITIATOR || rn->role & CSIO_RNFR_TARGET) {
rport = rn->rport;
CSIO_ASSERT(rport != NULL);
goto update_role;
}
rn->rport = fc_remote_port_add(shost, 0, &ids);
if (!rn->rport) {
csio_ln_err(ln, "Failed to register rport = 0x%x.\n",
rn->nport_id);
return;
}
ln->num_reg_rnodes++;
rport = rn->rport;
spin_lock_irq(shost->host_lock);
*((struct csio_rnode **)rport->dd_data) = rn;
spin_unlock_irq(shost->host_lock);
sp = &rn->rn_sparm;
rport->maxframe_size = ntohs(sp->csp.sp_bb_data);
if (ntohs(sp->clsp[2].cp_class) & FC_CPC_VALID)
rport->supported_classes = FC_COS_CLASS3;
else
rport->supported_classes = FC_COS_UNSPECIFIED;
update_role:
if (rn->role & CSIO_RNFR_INITIATOR)
ids.roles |= FC_RPORT_ROLE_FCP_INITIATOR;
if (rn->role & CSIO_RNFR_TARGET)
ids.roles |= FC_RPORT_ROLE_FCP_TARGET;
if (ids.roles != FC_RPORT_ROLE_UNKNOWN)
fc_remote_port_rolechg(rport, ids.roles);
rn->scsi_id = rport->scsi_target_id;
csio_ln_dbg(ln, "Remote port x%x role 0x%x registered\n",
rn->nport_id, ids.roles);
}
/*
* csio_unreg_rnode - Unregister a remote port with FC transport.
* @rn: Rnode representing remote port.
*
* Call fc_remote_port_delete() to unregister this remote port with FC
* transport.
*
*/
void
csio_unreg_rnode(struct csio_rnode *rn)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
struct fc_rport *rport = rn->rport;
rn->role &= ~(CSIO_RNFR_INITIATOR | CSIO_RNFR_TARGET);
fc_remote_port_delete(rport);
ln->num_reg_rnodes--;
csio_ln_dbg(ln, "Remote port x%x un-registered\n", rn->nport_id);
}
/*
* csio_lnode_async_event - Async events from local port.
* @ln: lnode representing local port.
*
* Async events from local node that FC transport/SCSI ML
* should be made aware of (Eg: RSCN).
*/
void
csio_lnode_async_event(struct csio_lnode *ln, enum csio_ln_fc_evt fc_evt)
{
switch (fc_evt) {
case CSIO_LN_FC_RSCN:
/* Get payload of rscn from ln */
/* For each RSCN entry */
/*
* fc_host_post_event(shost,
* fc_get_event_number(),
* FCH_EVT_RSCN,
* rscn_entry);
*/
break;
case CSIO_LN_FC_LINKUP:
/* send fc_host_post_event */
/* set vport state */
if (csio_is_npiv_ln(ln))
csio_vport_set_state(ln);
break;
case CSIO_LN_FC_LINKDOWN:
/* send fc_host_post_event */
/* set vport state */
if (csio_is_npiv_ln(ln))
csio_vport_set_state(ln);
break;
case CSIO_LN_FC_ATTRIB_UPDATE:
csio_fchost_attr_init(ln);
break;
default:
break;
}
}
/*
* csio_fchost_attr_init - Initialize FC transport attributes
* @ln: Lnode.
*
*/
void
csio_fchost_attr_init(struct csio_lnode *ln)
{
struct Scsi_Host *shost = csio_ln_to_shost(ln);
fc_host_node_name(shost) = wwn_to_u64(csio_ln_wwnn(ln));
fc_host_port_name(shost) = wwn_to_u64(csio_ln_wwpn(ln));
fc_host_supported_classes(shost) = FC_COS_CLASS3;
fc_host_max_npiv_vports(shost) =
(csio_lnode_to_hw(ln))->fres_info.max_vnps;
fc_host_supported_speeds(shost) = FC_PORTSPEED_10GBIT |
FC_PORTSPEED_1GBIT;
fc_host_maxframe_size(shost) = ntohs(ln->ln_sparm.csp.sp_bb_data);
memset(fc_host_supported_fc4s(shost), 0,
sizeof(fc_host_supported_fc4s(shost)));
fc_host_supported_fc4s(shost)[7] = 1;
memset(fc_host_active_fc4s(shost), 0,
sizeof(fc_host_active_fc4s(shost)));
fc_host_active_fc4s(shost)[7] = 1;
}
/*
* csio_get_host_port_id - the sysfs entry for nport_id is
* populated/cached by this function
*/
static void
csio_get_host_port_id(struct Scsi_Host *shost)
{
struct csio_lnode *ln = shost_priv(shost);
struct csio_hw *hw = csio_lnode_to_hw(ln);
spin_lock_irq(&hw->lock);
fc_host_port_id(shost) = ln->nport_id;
spin_unlock_irq(&hw->lock);
}
/*
* csio_get_host_port_type - Return FC local port type.
* @shost: scsi host.
*
*/
static void
csio_get_host_port_type(struct Scsi_Host *shost)
{
struct csio_lnode *ln = shost_priv(shost);
struct csio_hw *hw = csio_lnode_to_hw(ln);
spin_lock_irq(&hw->lock);
if (csio_is_npiv_ln(ln))
fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
else
fc_host_port_type(shost) = FC_PORTTYPE_NPORT;
spin_unlock_irq(&hw->lock);
}
/*
* csio_get_host_port_state - Return FC local port state.
* @shost: scsi host.
*
*/
static void
csio_get_host_port_state(struct Scsi_Host *shost)
{
struct csio_lnode *ln = shost_priv(shost);
struct csio_hw *hw = csio_lnode_to_hw(ln);
char state[16];
spin_lock_irq(&hw->lock);
csio_lnode_state_to_str(ln, state);
if (!strcmp(state, "READY"))
fc_host_port_state(shost) = FC_PORTSTATE_ONLINE;
else if (!strcmp(state, "OFFLINE"))
fc_host_port_state(shost) = FC_PORTSTATE_LINKDOWN;
else
fc_host_port_state(shost) = FC_PORTSTATE_UNKNOWN;
spin_unlock_irq(&hw->lock);
}
/*
* csio_get_host_speed - Return link speed to FC transport.
* @shost: scsi host.
*
*/
static void
csio_get_host_speed(struct Scsi_Host *shost)
{
struct csio_lnode *ln = shost_priv(shost);
struct csio_hw *hw = csio_lnode_to_hw(ln);
spin_lock_irq(&hw->lock);
switch (hw->pport[ln->portid].link_speed) {
case FW_PORT_CAP_SPEED_1G:
fc_host_speed(shost) = FC_PORTSPEED_1GBIT;
break;
case FW_PORT_CAP_SPEED_10G:
fc_host_speed(shost) = FC_PORTSPEED_10GBIT;
break;
default:
fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN;
break;
}
spin_unlock_irq(&hw->lock);
}
/*
* csio_get_host_fabric_name - Return fabric name
* @shost: scsi host.
*
*/
static void
csio_get_host_fabric_name(struct Scsi_Host *shost)
{
struct csio_lnode *ln = shost_priv(shost);
struct csio_rnode *rn = NULL;
struct csio_hw *hw = csio_lnode_to_hw(ln);
spin_lock_irq(&hw->lock);
rn = csio_rnode_lookup_portid(ln, FC_FID_FLOGI);
if (rn)
fc_host_fabric_name(shost) = wwn_to_u64(csio_rn_wwnn(rn));
else
fc_host_fabric_name(shost) = 0;
spin_unlock_irq(&hw->lock);
}
/*
* csio_get_stats - Return FC transport statistics.
* @shost: scsi host.
*
*/
static struct fc_host_statistics *
csio_get_stats(struct Scsi_Host *shost)
{
struct csio_lnode *ln = shost_priv(shost);
struct csio_hw *hw = csio_lnode_to_hw(ln);
struct fc_host_statistics *fhs = &ln->fch_stats;
struct fw_fcoe_port_stats fcoe_port_stats;
uint64_t seconds;
memset(&fcoe_port_stats, 0, sizeof(struct fw_fcoe_port_stats));
csio_get_phy_port_stats(hw, ln->portid, &fcoe_port_stats);
fhs->tx_frames += (be64_to_cpu(fcoe_port_stats.tx_bcast_frames) +
be64_to_cpu(fcoe_port_stats.tx_mcast_frames) +
be64_to_cpu(fcoe_port_stats.tx_ucast_frames) +
be64_to_cpu(fcoe_port_stats.tx_offload_frames));
fhs->tx_words += (be64_to_cpu(fcoe_port_stats.tx_bcast_bytes) +
be64_to_cpu(fcoe_port_stats.tx_mcast_bytes) +
be64_to_cpu(fcoe_port_stats.tx_ucast_bytes) +
be64_to_cpu(fcoe_port_stats.tx_offload_bytes)) /
CSIO_WORD_TO_BYTE;
fhs->rx_frames += (be64_to_cpu(fcoe_port_stats.rx_bcast_frames) +
be64_to_cpu(fcoe_port_stats.rx_mcast_frames) +
be64_to_cpu(fcoe_port_stats.rx_ucast_frames));
fhs->rx_words += (be64_to_cpu(fcoe_port_stats.rx_bcast_bytes) +
be64_to_cpu(fcoe_port_stats.rx_mcast_bytes) +
be64_to_cpu(fcoe_port_stats.rx_ucast_bytes)) /
CSIO_WORD_TO_BYTE;
fhs->error_frames += be64_to_cpu(fcoe_port_stats.rx_err_frames);
fhs->fcp_input_requests += ln->stats.n_input_requests;
fhs->fcp_output_requests += ln->stats.n_output_requests;
fhs->fcp_control_requests += ln->stats.n_control_requests;
fhs->fcp_input_megabytes += ln->stats.n_input_bytes >> 20;
fhs->fcp_output_megabytes += ln->stats.n_output_bytes >> 20;
fhs->link_failure_count = ln->stats.n_link_down;
/* Reset stats for the device */
seconds = jiffies_to_msecs(jiffies) - hw->stats.n_reset_start;
do_div(seconds, 1000);
fhs->seconds_since_last_reset = seconds;
return fhs;
}
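seconds_since_last_reset above is computed with do_div(), which divides a u64 in place and returns the remainder. A short hedged sketch of the same conversion (example_seconds_since() is illustrative):

static uint64_t example_seconds_since(uint64_t start_ms)
{
	uint64_t ms = jiffies_to_msecs(jiffies) - start_ms;

	do_div(ms, 1000);	/* milliseconds -> whole seconds */
	return ms;
}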
/*
* csio_set_rport_loss_tmo - Set the rport dev loss timeout
* @rport: fc rport.
* @timeout: new value for dev loss tmo.
*
* If timeout is non zero set the dev_loss_tmo to timeout, else set
* dev_loss_tmo to one.
*/
static void
csio_set_rport_loss_tmo(struct fc_rport *rport, uint32_t timeout)
{
if (timeout)
rport->dev_loss_tmo = timeout;
else
rport->dev_loss_tmo = 1;
}
static void
csio_vport_set_state(struct csio_lnode *ln)
{
struct fc_vport *fc_vport = ln->fc_vport;
struct csio_lnode *pln = ln->pln;
char state[16];
/* Set fc vport state based on physical lnode */
csio_lnode_state_to_str(pln, state);
if (strcmp(state, "READY")) {
fc_vport_set_state(fc_vport, FC_VPORT_LINKDOWN);
return;
}
if (!(pln->flags & CSIO_LNF_NPIVSUPP)) {
fc_vport_set_state(fc_vport, FC_VPORT_NO_FABRIC_SUPP);
return;
}
/* Set fc vport state based on virtual lnode */
csio_lnode_state_to_str(ln, state);
if (strcmp(state, "READY")) {
fc_vport_set_state(fc_vport, FC_VPORT_LINKDOWN);
return;
}
fc_vport_set_state(fc_vport, FC_VPORT_ACTIVE);
}
static int
csio_fcoe_alloc_vnp(struct csio_hw *hw, struct csio_lnode *ln)
{
struct csio_lnode *pln;
struct csio_mb *mbp;
struct fw_fcoe_vnp_cmd *rsp;
int ret = 0;
int retry = 0;
/* Issue VNP cmd to alloc vport */
/* Allocate Mbox request */
spin_lock_irq(&hw->lock);
mbp = mempool_alloc(hw->mb_mempool, GFP_ATOMIC);
if (!mbp) {
CSIO_INC_STATS(hw, n_err_nomem);
ret = -ENOMEM;
goto out;
}
pln = ln->pln;
ln->fcf_flowid = pln->fcf_flowid;
ln->portid = pln->portid;
csio_fcoe_vnp_alloc_init_mb(ln, mbp, CSIO_MB_DEFAULT_TMO,
pln->fcf_flowid, pln->vnp_flowid, 0,
csio_ln_wwnn(ln), csio_ln_wwpn(ln), NULL);
for (retry = 0; retry < 3; retry++) {
/* FW is expected to complete vnp cmd in immediate mode
* without much delay.
* Otherwise, there will be increase in IO latency since HW
* lock is held till completion of vnp mbox cmd.
*/
ret = csio_mb_issue(hw, mbp);
if (ret != -EBUSY)
break;
/* Retry if mbox returns busy */
spin_unlock_irq(&hw->lock);
msleep(2000);
spin_lock_irq(&hw->lock);
}
if (ret) {
csio_ln_err(ln, "Failed to issue mbox FCoE VNP command\n");
goto out_free;
}
/* Process Mbox response of VNP command */
rsp = (struct fw_fcoe_vnp_cmd *)(mbp->mb);
if (FW_CMD_RETVAL_GET(ntohl(rsp->alloc_to_len16)) != FW_SUCCESS) {
csio_ln_err(ln, "FCOE VNP ALLOC cmd returned 0x%x!\n",
FW_CMD_RETVAL_GET(ntohl(rsp->alloc_to_len16)));
ret = -EINVAL;
goto out_free;
}
ln->vnp_flowid = FW_FCOE_VNP_CMD_VNPI_GET(
ntohl(rsp->gen_wwn_to_vnpi));
memcpy(csio_ln_wwnn(ln), rsp->vnport_wwnn, 8);
memcpy(csio_ln_wwpn(ln), rsp->vnport_wwpn, 8);
csio_ln_dbg(ln, "FCOE VNPI: 0x%x\n", ln->vnp_flowid);
csio_ln_dbg(ln, "\tWWNN: %x%x%x%x%x%x%x%x\n",
ln->ln_sparm.wwnn[0], ln->ln_sparm.wwnn[1],
ln->ln_sparm.wwnn[2], ln->ln_sparm.wwnn[3],
ln->ln_sparm.wwnn[4], ln->ln_sparm.wwnn[5],
ln->ln_sparm.wwnn[6], ln->ln_sparm.wwnn[7]);
csio_ln_dbg(ln, "\tWWPN: %x%x%x%x%x%x%x%x\n",
ln->ln_sparm.wwpn[0], ln->ln_sparm.wwpn[1],
ln->ln_sparm.wwpn[2], ln->ln_sparm.wwpn[3],
ln->ln_sparm.wwpn[4], ln->ln_sparm.wwpn[5],
ln->ln_sparm.wwpn[6], ln->ln_sparm.wwpn[7]);
out_free:
mempool_free(mbp, hw->mb_mempool);
out:
spin_unlock_irq(&hw->lock);
return ret;
}
static int
csio_fcoe_free_vnp(struct csio_hw *hw, struct csio_lnode *ln)
{
struct csio_lnode *pln;
struct csio_mb *mbp;
struct fw_fcoe_vnp_cmd *rsp;
int ret = 0;
int retry = 0;
/* Issue VNP cmd to free vport */
/* Allocate Mbox request */
spin_lock_irq(&hw->lock);
mbp = mempool_alloc(hw->mb_mempool, GFP_ATOMIC);
if (!mbp) {
CSIO_INC_STATS(hw, n_err_nomem);
ret = -ENOMEM;
goto out;
}
pln = ln->pln;
csio_fcoe_vnp_free_init_mb(ln, mbp, CSIO_MB_DEFAULT_TMO,
ln->fcf_flowid, ln->vnp_flowid,
NULL);
for (retry = 0; retry < 3; retry++) {
ret = csio_mb_issue(hw, mbp);
if (ret != -EBUSY)
break;
/* Retry if mbox returns busy */
spin_unlock_irq(&hw->lock);
msleep(2000);
spin_lock_irq(&hw->lock);
}
if (ret) {
csio_ln_err(ln, "Failed to issue mbox FCoE VNP command\n");
goto out_free;
}
/* Process Mbox response of VNP command */
rsp = (struct fw_fcoe_vnp_cmd *)(mbp->mb);
if (FW_CMD_RETVAL_GET(ntohl(rsp->alloc_to_len16)) != FW_SUCCESS) {
csio_ln_err(ln, "FCOE VNP FREE cmd returned 0x%x!\n",
FW_CMD_RETVAL_GET(ntohl(rsp->alloc_to_len16)));
ret = -EINVAL;
}
out_free:
mempool_free(mbp, hw->mb_mempool);
out:
spin_unlock_irq(&hw->lock);
return ret;
}
static int
csio_vport_create(struct fc_vport *fc_vport, bool disable)
{
struct Scsi_Host *shost = fc_vport->shost;
struct csio_lnode *pln = shost_priv(shost);
struct csio_lnode *ln = NULL;
struct csio_hw *hw = csio_lnode_to_hw(pln);
uint8_t wwn[8];
int ret = -1;
ln = csio_shost_init(hw, &fc_vport->dev, false, pln);
if (!ln)
goto error;
if (fc_vport->node_name != 0) {
u64_to_wwn(fc_vport->node_name, wwn);
if (!CSIO_VALID_WWN(wwn)) {
csio_ln_err(ln,
"vport create failed. Invalid wwnn\n");
goto error;
}
memcpy(csio_ln_wwnn(ln), wwn, 8);
}
if (fc_vport->port_name != 0) {
u64_to_wwn(fc_vport->port_name, wwn);
if (!CSIO_VALID_WWN(wwn)) {
csio_ln_err(ln,
"vport create failed. Invalid wwpn\n");
goto error;
}
if (csio_lnode_lookup_by_wwpn(hw, wwn)) {
csio_ln_err(ln,
"vport create failed. wwpn already exists\n");
goto error;
}
memcpy(csio_ln_wwpn(ln), wwn, 8);
}
fc_vport_set_state(fc_vport, FC_VPORT_INITIALIZING);
if (csio_fcoe_alloc_vnp(hw, ln))
goto error;
*(struct csio_lnode **)fc_vport->dd_data = ln;
ln->fc_vport = fc_vport;
if (!fc_vport->node_name)
fc_vport->node_name = wwn_to_u64(csio_ln_wwnn(ln));
if (!fc_vport->port_name)
fc_vport->port_name = wwn_to_u64(csio_ln_wwpn(ln));
csio_fchost_attr_init(ln);
return 0;
error:
if (ln)
csio_shost_exit(ln);
return ret;
}
static int
csio_vport_delete(struct fc_vport *fc_vport)
{
struct csio_lnode *ln = *(struct csio_lnode **)fc_vport->dd_data;
struct Scsi_Host *shost = csio_ln_to_shost(ln);
struct csio_hw *hw = csio_lnode_to_hw(ln);
int rmv;
spin_lock_irq(&hw->lock);
rmv = csio_is_hw_removing(hw);
spin_unlock_irq(&hw->lock);
if (rmv) {
csio_shost_exit(ln);
return 0;
}
/* Quiesce ios and send remove event to lnode */
scsi_block_requests(shost);
spin_lock_irq(&hw->lock);
csio_scsim_cleanup_io_lnode(csio_hw_to_scsim(hw), ln);
csio_lnode_close(ln);
spin_unlock_irq(&hw->lock);
scsi_unblock_requests(shost);
/* Free vnp */
if (fc_vport->vport_state != FC_VPORT_DISABLED)
csio_fcoe_free_vnp(hw, ln);
csio_shost_exit(ln);
return 0;
}
static int
csio_vport_disable(struct fc_vport *fc_vport, bool disable)
{
struct csio_lnode *ln = *(struct csio_lnode **)fc_vport->dd_data;
struct Scsi_Host *shost = csio_ln_to_shost(ln);
struct csio_hw *hw = csio_lnode_to_hw(ln);
/* disable vport */
if (disable) {
/* Quiesce ios and send stop event to lnode */
scsi_block_requests(shost);
spin_lock_irq(&hw->lock);
csio_scsim_cleanup_io_lnode(csio_hw_to_scsim(hw), ln);
csio_lnode_stop(ln);
spin_unlock_irq(&hw->lock);
scsi_unblock_requests(shost);
/* Free vnp */
csio_fcoe_free_vnp(hw, ln);
fc_vport_set_state(fc_vport, FC_VPORT_DISABLED);
csio_ln_err(ln, "vport disabled\n");
return 0;
} else {
/* enable vport */
fc_vport_set_state(fc_vport, FC_VPORT_INITIALIZING);
if (csio_fcoe_alloc_vnp(hw, ln)) {
csio_ln_err(ln, "vport enabled failed.\n");
return -1;
}
csio_ln_err(ln, "vport enabled\n");
return 0;
}
}
static void
csio_dev_loss_tmo_callbk(struct fc_rport *rport)
{
struct csio_rnode *rn;
struct csio_hw *hw;
struct csio_lnode *ln;
rn = *((struct csio_rnode **)rport->dd_data);
ln = csio_rnode_to_lnode(rn);
hw = csio_lnode_to_hw(ln);
spin_lock_irq(&hw->lock);
/* return if driver is being removed or same rnode comes back online */
if (csio_is_hw_removing(hw) || csio_is_rnode_ready(rn))
goto out;
csio_ln_dbg(ln, "devloss timeout on rnode:%p portid:x%x flowid:x%x\n",
rn, rn->nport_id, csio_rn_flowid(rn));
CSIO_INC_STATS(ln, n_dev_loss_tmo);
/*
* enqueue devloss event to event worker thread to serialize all
* rnode events.
*/
if (csio_enqueue_evt(hw, CSIO_EVT_DEV_LOSS, &rn, sizeof(rn))) {
CSIO_INC_STATS(hw, n_evt_drop);
goto out;
}
if (!(hw->flags & CSIO_HWF_FWEVT_PENDING)) {
hw->flags |= CSIO_HWF_FWEVT_PENDING;
spin_unlock_irq(&hw->lock);
schedule_work(&hw->evtq_work);
return;
}
out:
spin_unlock_irq(&hw->lock);
}
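The callback above only queues a devloss event and schedules the worker; the heavy lifting happens on the workqueue. A hedged sketch of the consuming side (example_evtq_worker() is hypothetical; the real handler lives elsewhere in the driver):

static void example_evtq_worker(struct work_struct *work)
{
	struct csio_hw *hw = container_of(work, struct csio_hw, evtq_work);

	spin_lock_irq(&hw->lock);
	/* ... dequeue and dispatch the queued csio_evt_msg entries ... */
	hw->flags &= ~CSIO_HWF_FWEVT_PENDING;
	spin_unlock_irq(&hw->lock);
}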
/* FC transport functions template - Physical port */
struct fc_function_template csio_fc_transport_funcs = {
.show_host_node_name = 1,
.show_host_port_name = 1,
.show_host_supported_classes = 1,
.show_host_supported_fc4s = 1,
.show_host_maxframe_size = 1,
.get_host_port_id = csio_get_host_port_id,
.show_host_port_id = 1,
.get_host_port_type = csio_get_host_port_type,
.show_host_port_type = 1,
.get_host_port_state = csio_get_host_port_state,
.show_host_port_state = 1,
.show_host_active_fc4s = 1,
.get_host_speed = csio_get_host_speed,
.show_host_speed = 1,
.get_host_fabric_name = csio_get_host_fabric_name,
.show_host_fabric_name = 1,
.get_fc_host_stats = csio_get_stats,
.dd_fcrport_size = sizeof(struct csio_rnode *),
.show_rport_maxframe_size = 1,
.show_rport_supported_classes = 1,
.set_rport_dev_loss_tmo = csio_set_rport_loss_tmo,
.show_rport_dev_loss_tmo = 1,
.show_starget_port_id = 1,
.show_starget_node_name = 1,
.show_starget_port_name = 1,
.dev_loss_tmo_callbk = csio_dev_loss_tmo_callbk,
.dd_fcvport_size = sizeof(struct csio_lnode *),
.vport_create = csio_vport_create,
.vport_disable = csio_vport_disable,
.vport_delete = csio_vport_delete,
};
/* FC transport functions template - Virtual port */
struct fc_function_template csio_fc_transport_vport_funcs = {
.show_host_node_name = 1,
.show_host_port_name = 1,
.show_host_supported_classes = 1,
.show_host_supported_fc4s = 1,
.show_host_maxframe_size = 1,
.get_host_port_id = csio_get_host_port_id,
.show_host_port_id = 1,
.get_host_port_type = csio_get_host_port_type,
.show_host_port_type = 1,
.get_host_port_state = csio_get_host_port_state,
.show_host_port_state = 1,
.show_host_active_fc4s = 1,
.get_host_speed = csio_get_host_speed,
.show_host_speed = 1,
.get_host_fabric_name = csio_get_host_fabric_name,
.show_host_fabric_name = 1,
.get_fc_host_stats = csio_get_stats,
.dd_fcrport_size = sizeof(struct csio_rnode *),
.show_rport_maxframe_size = 1,
.show_rport_supported_classes = 1,
.set_rport_dev_loss_tmo = csio_set_rport_loss_tmo,
.show_rport_dev_loss_tmo = 1,
.show_starget_port_id = 1,
.show_starget_node_name = 1,
.show_starget_port_name = 1,
.dev_loss_tmo_callbk = csio_dev_loss_tmo_callbk,
};
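These templates are typically handed to the FC transport once at module init. A hedged sketch, assuming the standard scsi_transport_fc API (fc_attach_transport()/fc_release_transport()); the variable and function names are illustrative:

static struct scsi_transport_template *example_fcoe_transport;
static struct scsi_transport_template *example_fcoe_transport_vport;

static int example_register_transports(void)
{
	example_fcoe_transport = fc_attach_transport(&csio_fc_transport_funcs);
	if (!example_fcoe_transport)
		return -ENODEV;

	example_fcoe_transport_vport =
		fc_attach_transport(&csio_fc_transport_vport_funcs);
	if (!example_fcoe_transport_vport) {
		fc_release_transport(example_fcoe_transport);
		return -ENODEV;
	}
	return 0;
}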

View File

@ -0,0 +1,121 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CSIO_DEFS_H__
#define __CSIO_DEFS_H__
#include <linux/kernel.h>
#include <linux/stddef.h>
#include <linux/timer.h>
#include <linux/list.h>
#include <linux/bug.h>
#include <linux/pci.h>
#include <linux/jiffies.h>
#define CSIO_INVALID_IDX 0xFFFFFFFF
#define CSIO_INC_STATS(elem, val) ((elem)->stats.val++)
#define CSIO_DEC_STATS(elem, val) ((elem)->stats.val--)
#define CSIO_VALID_WWN(__n) ((*__n >> 4) == 0x5 ? true : false)
#define CSIO_DID_MASK 0xFFFFFF
#define CSIO_WORD_TO_BYTE 4
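CSIO_VALID_WWN() above accepts only names whose top nibble is 5 (NAA IEEE Registered format). A hedged illustration with made-up names:

/* Top nibble 5 -> accepted; this NAA 2 name is rejected */
static const uint8_t example_good_wwn[8] = {
	0x50, 0x01, 0x43, 0x80, 0x02, 0x4d, 0x11, 0x22 };
static const uint8_t example_bad_wwn[8] = {
	0x21, 0x00, 0x00, 0x24, 0xff, 0x30, 0x41, 0x9c };
/* CSIO_VALID_WWN(example_good_wwn) is true,
 * CSIO_VALID_WWN(example_bad_wwn) is false */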
#ifndef readq
static inline u64 readq(void __iomem *addr)
{
return readl(addr) + ((u64)readl(addr + 4) << 32);
}
static inline void writeq(u64 val, void __iomem *addr)
{
writel(val, addr);
writel(val >> 32, addr + 4);
}
#endif
static inline int
csio_list_deleted(struct list_head *list)
{
return ((list->next == list) && (list->prev == list));
}
#define csio_list_next(elem) (((struct list_head *)(elem))->next)
#define csio_list_prev(elem) (((struct list_head *)(elem))->prev)
/* State machine */
typedef void (*csio_sm_state_t)(void *, uint32_t);
struct csio_sm {
struct list_head sm_list;
csio_sm_state_t sm_state;
};
static inline void
csio_set_state(void *smp, void *state)
{
((struct csio_sm *)smp)->sm_state = (csio_sm_state_t)state;
}
static inline void
csio_init_state(struct csio_sm *smp, void *state)
{
csio_set_state(smp, state);
}
static inline void
csio_post_event(void *smp, uint32_t evt)
{
((struct csio_sm *)smp)->sm_state(smp, evt);
}
static inline csio_sm_state_t
csio_get_state(void *smp)
{
return ((struct csio_sm *)smp)->sm_state;
}
static inline bool
csio_match_state(void *smp, void *state)
{
return (csio_get_state(smp) == (csio_sm_state_t)state);
}
#define CSIO_ASSERT(cond) BUG_ON(!(cond))
#ifdef __CSIO_DEBUG__
#define CSIO_DB_ASSERT(__c) CSIO_ASSERT((__c))
#else
#define CSIO_DB_ASSERT(__c)
#endif
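The sm helpers above encode state as a function pointer, so posting an event is just an indirect call. A hedged usage sketch with two hypothetical states:

static void example_state_b(void *smp, uint32_t evt);

static void example_state_a(void *smp, uint32_t evt)
{
	if (evt == 1)			/* this event moves a -> b */
		csio_set_state(smp, example_state_b);
}

static void example_state_b(void *smp, uint32_t evt)
{
	/* terminal state: events are ignored */
}

static void example_run(struct csio_sm *sm)
{
	csio_init_state(sm, example_state_a);
	csio_post_event(sm, 1);
	CSIO_ASSERT(csio_match_state(sm, example_state_b));
}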
#endif /* ifndef __CSIO_DEFS_H__ */

File diff suppressed because it is too large

View File

@ -0,0 +1,665 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CSIO_HW_H__
#define __CSIO_HW_H__
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/workqueue.h>
#include <linux/compiler.h>
#include <linux/cdev.h>
#include <linux/list.h>
#include <linux/mempool.h>
#include <linux/io.h>
#include <linux/spinlock_types.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_transport_fc.h>
#include "csio_wr.h"
#include "csio_mb.h"
#include "csio_scsi.h"
#include "csio_defs.h"
#include "t4_regs.h"
#include "t4_msg.h"
/*
* An error value used by host. Should not clash with FW defined return values.
*/
#define FW_HOSTERROR 255
#define CSIO_FW_FNAME "cxgb4/t4fw.bin"
#define CSIO_CF_FNAME "cxgb4/t4-config.txt"
#define FW_VERSION_MAJOR 1
#define FW_VERSION_MINOR 2
#define FW_VERSION_MICRO 8
#define CSIO_HW_NAME "Chelsio FCoE Adapter"
#define CSIO_MAX_PFN 8
#define CSIO_MAX_PPORTS 4
#define CSIO_MAX_LUN 0xFFFF
#define CSIO_MAX_QUEUE 2048
#define CSIO_MAX_CMD_PER_LUN 32
#define CSIO_MAX_DDP_BUF_SIZE (1024 * 1024)
#define CSIO_MAX_SECTOR_SIZE 128
/* Interrupts */
#define CSIO_EXTRA_MSI_IQS 2 /* Extra iqs for INTX/MSI mode
* (Forward intr iq + fw iq) */
#define CSIO_EXTRA_VECS 2 /* non-data + FW evt */
#define CSIO_MAX_SCSI_CPU 128
#define CSIO_MAX_SCSI_QSETS (CSIO_MAX_SCSI_CPU * CSIO_MAX_PPORTS)
#define CSIO_MAX_MSIX_VECS (CSIO_MAX_SCSI_QSETS + CSIO_EXTRA_VECS)
/* Queues */
enum {
CSIO_INTR_WRSIZE = 128,
CSIO_INTR_IQSIZE = ((CSIO_MAX_MSIX_VECS + 1) * CSIO_INTR_WRSIZE),
CSIO_FWEVT_WRSIZE = 128,
CSIO_FWEVT_IQLEN = 128,
CSIO_FWEVT_FLBUFS = 64,
CSIO_FWEVT_IQSIZE = (CSIO_FWEVT_WRSIZE * CSIO_FWEVT_IQLEN),
CSIO_HW_NIQ = 1,
CSIO_HW_NFLQ = 1,
CSIO_HW_NEQ = 1,
CSIO_HW_NINTXQ = 1,
};
struct csio_msix_entries {
unsigned short vector; /* Vector assigned by pci_enable_msix */
void *dev_id; /* Priv object associated w/ this msix*/
char desc[24]; /* Description of this vector */
};
struct csio_scsi_qset {
int iq_idx; /* Ingress index */
int eq_idx; /* Egress index */
uint32_t intr_idx; /* MSIX Vector index */
};
struct csio_scsi_cpu_info {
int16_t max_cpus;
};
extern int csio_dbg_level;
extern int csio_force_master;
extern unsigned int csio_port_mask;
extern int csio_msi;
#define CSIO_VENDOR_ID 0x1425
#define CSIO_ASIC_DEVID_PROTO_MASK 0xFF00
#define CSIO_ASIC_DEVID_TYPE_MASK 0x00FF
#define CSIO_FPGA 0xA000
#define CSIO_T4_FCOE_ASIC 0x4600
#define CSIO_GLBL_INTR_MASK (CIM | MPS | PL | PCIE | MC | EDC0 | \
EDC1 | LE | TP | MA | PM_TX | PM_RX | \
ULP_RX | CPL_SWITCH | SGE | \
ULP_TX | SF)
/*
* Hard parameters used to initialize the card in the absence of a
* configuration file.
*/
enum {
/* General */
CSIO_SGE_DBFIFO_INT_THRESH = 10,
CSIO_SGE_RX_DMA_OFFSET = 2,
CSIO_SGE_FLBUF_SIZE1 = 65536,
CSIO_SGE_FLBUF_SIZE2 = 1536,
CSIO_SGE_FLBUF_SIZE3 = 9024,
CSIO_SGE_FLBUF_SIZE4 = 9216,
CSIO_SGE_FLBUF_SIZE5 = 2048,
CSIO_SGE_FLBUF_SIZE6 = 128,
CSIO_SGE_FLBUF_SIZE7 = 8192,
CSIO_SGE_FLBUF_SIZE8 = 16384,
CSIO_SGE_TIMER_VAL_0 = 5,
CSIO_SGE_TIMER_VAL_1 = 10,
CSIO_SGE_TIMER_VAL_2 = 20,
CSIO_SGE_TIMER_VAL_3 = 50,
CSIO_SGE_TIMER_VAL_4 = 100,
CSIO_SGE_TIMER_VAL_5 = 200,
CSIO_SGE_INT_CNT_VAL_0 = 1,
CSIO_SGE_INT_CNT_VAL_1 = 4,
CSIO_SGE_INT_CNT_VAL_2 = 8,
CSIO_SGE_INT_CNT_VAL_3 = 16,
/* Storage specific - used by FW_PFVF_CMD */
CSIO_WX_CAPS = FW_CMD_CAP_PF, /* w/x all */
CSIO_R_CAPS = FW_CMD_CAP_PF, /* r all */
CSIO_NVI = 4,
CSIO_NIQ_FLINT = 34,
CSIO_NETH_CTRL = 32,
CSIO_NEQ = 66,
CSIO_NEXACTF = 32,
CSIO_CMASK = FW_PFVF_CMD_CMASK_MASK,
CSIO_PMASK = FW_PFVF_CMD_PMASK_MASK,
};
/* Slowpath events */
enum csio_evt {
CSIO_EVT_FW = 0, /* FW event */
CSIO_EVT_MBX, /* MBX event */
CSIO_EVT_SCN, /* State change notification */
CSIO_EVT_DEV_LOSS, /* Device loss event */
CSIO_EVT_MAX, /* Max supported event */
};
#define CSIO_EVT_MSG_SIZE 512
#define CSIO_EVTQ_SIZE 512
/* Event msg */
struct csio_evt_msg {
struct list_head list; /* evt queue*/
enum csio_evt type;
uint8_t data[CSIO_EVT_MSG_SIZE];
};
enum {
EEPROMVSIZE = 32768, /* Serial EEPROM virtual address space size */
SERNUM_LEN = 16, /* Serial # length */
EC_LEN = 16, /* E/C length */
ID_LEN = 16, /* ID length */
TRACE_LEN = 112, /* length of trace data and mask */
};
enum {
SF_PAGE_SIZE = 256, /* serial flash page size */
SF_SEC_SIZE = 64 * 1024, /* serial flash sector size */
SF_SIZE = SF_SEC_SIZE * 16, /* serial flash size */
};
enum { MEM_EDC0, MEM_EDC1, MEM_MC };
enum {
MEMWIN0_APERTURE = 2048,
MEMWIN0_BASE = 0x1b800,
MEMWIN1_APERTURE = 32768,
MEMWIN1_BASE = 0x28000,
MEMWIN2_APERTURE = 65536,
MEMWIN2_BASE = 0x30000,
};
/* serial flash and firmware constants */
enum {
SF_ATTEMPTS = 10, /* max retries for SF operations */
/* flash command opcodes */
SF_PROG_PAGE = 2, /* program page */
SF_WR_DISABLE = 4, /* disable writes */
SF_RD_STATUS = 5, /* read status register */
SF_WR_ENABLE = 6, /* enable writes */
SF_RD_DATA_FAST = 0xb, /* read flash */
SF_RD_ID = 0x9f, /* read ID */
SF_ERASE_SECTOR = 0xd8, /* erase sector */
FW_START_SEC = 8, /* first flash sector for FW */
FW_END_SEC = 15, /* last flash sector for FW */
FW_IMG_START = FW_START_SEC * SF_SEC_SIZE,
FW_MAX_SIZE = (FW_END_SEC - FW_START_SEC + 1) * SF_SEC_SIZE,
FLASH_CFG_MAX_SIZE = 0x10000, /* max size of the flash config file */
FLASH_CFG_OFFSET = 0x1f0000,
FLASH_CFG_START_SEC = FLASH_CFG_OFFSET / SF_SEC_SIZE,
FPGA_FLASH_CFG_OFFSET = 0xf0000, /* if FPGA mode, then cfg file is
* at 1MB - 64KB */
FPGA_FLASH_CFG_START_SEC = FPGA_FLASH_CFG_OFFSET / SF_SEC_SIZE,
};
/*
* Flash layout.
*/
#define FLASH_START(start) ((start) * SF_SEC_SIZE)
#define FLASH_MAX_SIZE(nsecs) ((nsecs) * SF_SEC_SIZE)
enum {
/*
* Location of firmware image in FLASH.
*/
FLASH_FW_START_SEC = 8,
FLASH_FW_NSECS = 8,
FLASH_FW_START = FLASH_START(FLASH_FW_START_SEC),
FLASH_FW_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FW_NSECS),
};
#undef FLASH_START
#undef FLASH_MAX_SIZE
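A hedged sanity sketch for the layout above: with 64KB sectors the FW image starts at 8 * 64KB = 0x80000 and may span 8 sectors (512KB). BUILD_BUG_ON comes from <linux/bug.h>; the check function is illustrative:

static inline void example_check_flash_layout(void)
{
	BUILD_BUG_ON(FLASH_FW_START != 0x80000);
	BUILD_BUG_ON(FLASH_FW_MAX_SIZE != 512 * 1024);
}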
/* Management module */
enum {
CSIO_MGMT_EQ_WRSIZE = 512,
CSIO_MGMT_IQ_WRSIZE = 128,
CSIO_MGMT_EQLEN = 64,
CSIO_MGMT_IQLEN = 64,
};
#define CSIO_MGMT_EQSIZE (CSIO_MGMT_EQLEN * CSIO_MGMT_EQ_WRSIZE)
#define CSIO_MGMT_IQSIZE (CSIO_MGMT_IQLEN * CSIO_MGMT_IQ_WRSIZE)
/* mgmt module stats */
struct csio_mgmtm_stats {
uint32_t n_abort_req; /* Total abort request */
uint32_t n_abort_rsp; /* Total abort response */
uint32_t n_close_req; /* Total close request */
uint32_t n_close_rsp; /* Total close response */
uint32_t n_err; /* Total Errors */
uint32_t n_drop; /* Total request dropped */
uint32_t n_active; /* Count of active_q */
uint32_t n_cbfn; /* Count of cbfn_q */
};
/* MGMT module */
struct csio_mgmtm {
struct csio_hw *hw; /* Pointer to HW module */
int eq_idx; /* Egress queue index */
int iq_idx; /* Ingress queue index */
int msi_vec; /* MSI vector */
struct list_head active_q; /* Outstanding ELS/CT */
struct list_head abort_q; /* Outstanding abort req */
struct list_head cbfn_q; /* Completion queue */
struct list_head mgmt_req_freelist; /* Free pool of reqs */
/* ELSCT request freelist*/
struct timer_list mgmt_timer; /* MGMT timer */
struct csio_mgmtm_stats stats; /* ELS/CT stats */
};
struct csio_adap_desc {
char model_no[16];
char description[32];
};
struct pci_params {
uint16_t vendor_id;
uint16_t device_id;
uint32_t vpd_cap_addr;
uint16_t speed;
uint8_t width;
};
/* User configurable hw parameters */
struct csio_hw_params {
uint32_t sf_size; /* serial flash
* size in bytes
*/
uint32_t sf_nsec; /* # of flash sectors */
struct pci_params pci;
uint32_t log_level; /* Module-level for
* debug log.
*/
};
struct csio_vpd {
uint32_t cclk;
uint8_t ec[EC_LEN + 1];
uint8_t sn[SERNUM_LEN + 1];
uint8_t id[ID_LEN + 1];
};
struct csio_pport {
uint16_t pcap;
uint8_t portid;
uint8_t link_status;
uint16_t link_speed;
uint8_t mac[6];
uint8_t mod_type;
uint8_t rsvd1;
uint8_t rsvd2;
uint8_t rsvd3;
};
/* fcoe resource information */
struct csio_fcoe_res_info {
uint16_t e_d_tov;
uint16_t r_a_tov_seq;
uint16_t r_a_tov_els;
uint16_t r_r_tov;
uint32_t max_xchgs;
uint32_t max_ssns;
uint32_t used_xchgs;
uint32_t used_ssns;
uint32_t max_fcfs;
uint32_t max_vnps;
uint32_t used_fcfs;
uint32_t used_vnps;
};
/* HW State machine Events */
enum csio_hw_ev {
CSIO_HWE_CFG = (uint32_t)1, /* Starts off the State machine */
CSIO_HWE_INIT, /* Config done, start Init */
CSIO_HWE_INIT_DONE, /* Init Mailboxes sent, HW ready */
CSIO_HWE_FATAL, /* Fatal error during initialization */
CSIO_HWE_PCIERR_DETECTED,/* PCI error recovery detected */
CSIO_HWE_PCIERR_SLOT_RESET, /* Slot reset after PCI recovery */
CSIO_HWE_PCIERR_RESUME, /* Resume after PCI error recovery */
CSIO_HWE_QUIESCED, /* HBA quiesced */
CSIO_HWE_HBA_RESET, /* HBA reset requested */
CSIO_HWE_HBA_RESET_DONE, /* HBA reset completed */
CSIO_HWE_FW_DLOAD, /* FW download requested */
CSIO_HWE_PCI_REMOVE, /* PCI de-instantiation */
CSIO_HWE_SUSPEND, /* HW suspend for Online(hot) replacement */
CSIO_HWE_RESUME, /* HW resume for Online(hot) replacement */
CSIO_HWE_MAX, /* Max HW event */
};
/* hw stats */
struct csio_hw_stats {
uint32_t n_evt_activeq; /* Number of events in active Q */
uint32_t n_evt_freeq; /* Number of events in free Q */
uint32_t n_evt_drop; /* Number of events dropped */
uint32_t n_evt_unexp; /* Number of unexpected events */
uint32_t n_pcich_offline;/* Number of pci channel offline */
uint32_t n_lnlkup_miss; /* Number of lnode lookup miss */
uint32_t n_cpl_fw6_msg; /* Number of cpl fw6 messages */
uint32_t n_cpl_fw6_pld; /* Number of cpl fw6 payloads */
uint32_t n_cpl_unexp; /* Number of unexpected cpl */
uint32_t n_mbint_unexp; /* Number of unexpected mbox interrupts */
uint32_t n_plint_unexp; /* Number of unexpected PL interrupts */
uint32_t n_plint_cnt; /* Number of PL interrupt */
uint32_t n_int_stray; /* Number of stray interrupt */
uint32_t n_err; /* Number of hw errors */
uint32_t n_err_fatal; /* Number of fatal errors */
uint32_t n_err_nomem; /* Number of memory alloc failure */
uint32_t n_err_io; /* Number of IO failure */
enum csio_hw_ev n_evt_sm[CSIO_HWE_MAX]; /* Number of sm events */
uint64_t n_reset_start; /* Start time after the reset */
uint32_t rsvd1;
};
/* Defines for hw->flags */
#define CSIO_HWF_MASTER 0x00000001 /* This is the Master
* function for the
* card.
*/
#define CSIO_HWF_HW_INTR_ENABLED 0x00000002 /* Are HW Interrupt
* enable bit set?
*/
#define CSIO_HWF_FWEVT_PENDING 0x00000004 /* FW events pending */
#define CSIO_HWF_Q_MEM_ALLOCED 0x00000008 /* Queues have been
* allocated memory.
*/
#define CSIO_HWF_Q_FW_ALLOCED 0x00000010 /* Queues have been
* allocated in FW.
*/
#define CSIO_HWF_VPD_VALID 0x00000020 /* Valid VPD copied */
#define CSIO_HWF_DEVID_CACHED 0x00000040 /* PCI vendor & device
* id cached */
#define CSIO_HWF_FWEVT_STOP 0x00000080 /* Stop processing
* FW events
*/
#define CSIO_HWF_USING_SOFT_PARAMS 0x00000100 /* Using FW config
* params
*/
#define CSIO_HWF_HOST_INTR_ENABLED 0x00000200 /* Are host interrupts
* enabled?
*/
#define csio_is_hw_intr_enabled(__hw) \
((__hw)->flags & CSIO_HWF_HW_INTR_ENABLED)
#define csio_is_host_intr_enabled(__hw) \
((__hw)->flags & CSIO_HWF_HOST_INTR_ENABLED)
#define csio_is_hw_master(__hw) ((__hw)->flags & CSIO_HWF_MASTER)
#define csio_is_valid_vpd(__hw) ((__hw)->flags & CSIO_HWF_VPD_VALID)
#define csio_is_dev_id_cached(__hw) ((__hw)->flags & CSIO_HWF_DEVID_CACHED)
#define csio_valid_vpd_copied(__hw) ((__hw)->flags |= CSIO_HWF_VPD_VALID)
#define csio_dev_id_cached(__hw) ((__hw)->flags |= CSIO_HWF_DEVID_CACHED)
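/*
 * A minimal usage sketch (hypothetical caller, in a .c file where
 * struct csio_hw is complete) showing how the flag macros pair up:
 * the csio_is_*() forms test a flag, the verb-style forms set one:
 *
 *	if (!csio_is_valid_vpd(hw)) {
 *		... read VPD from the adapter into hw->vpd ...
 *		csio_valid_vpd_copied(hw);  // sets CSIO_HWF_VPD_VALID
 *	}
 */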
/* Defines for intr_mode */
enum csio_intr_mode {
CSIO_IM_NONE = 0,
CSIO_IM_INTX = 1,
CSIO_IM_MSI = 2,
CSIO_IM_MSIX = 3,
};
/* Master HW structure: One per function */
struct csio_hw {
struct csio_sm sm; /* State machine: should
* be the 1st member.
*/
spinlock_t lock; /* Lock for hw */
struct csio_scsim scsim; /* SCSI module*/
struct csio_wrm wrm; /* Work request module*/
struct pci_dev *pdev; /* PCI device */
void __iomem *regstart; /* Virtual address of
* register map
*/
/* SCSI queue sets */
uint32_t num_sqsets; /* Number of SCSI
* queue sets */
uint32_t num_scsi_msix_cpus; /* Number of CPUs that
* will be used
* for ingress
* processing.
*/
struct csio_scsi_qset sqset[CSIO_MAX_PPORTS][CSIO_MAX_SCSI_CPU];
struct csio_scsi_cpu_info scsi_cpu_info[CSIO_MAX_PPORTS];
uint32_t evtflag; /* Event flag */
uint32_t flags; /* HW flags */
struct csio_mgmtm mgmtm; /* management module */
struct csio_mbm mbm; /* Mailbox module */
/* Lnodes */
uint32_t num_lns; /* Number of lnodes */
struct csio_lnode *rln; /* Root lnode */
struct list_head sln_head; /* Sibling lnode list */
int intr_iq_idx; /* Forward interrupt
* queue.
*/
int fwevt_iq_idx; /* FW evt queue */
struct work_struct evtq_work; /* Worker thread for
* HW events.
*/
struct list_head evt_free_q; /* freelist of evt
* elements
*/
struct list_head evt_active_q; /* active evt queue*/
/* board related info */
char name[32];
char hw_ver[16];
char model_desc[32];
char drv_version[32];
char fwrev_str[32];
uint32_t optrom_ver;
uint32_t fwrev;
uint32_t tp_vers;
char chip_ver;
uint32_t cfg_finiver;
uint32_t cfg_finicsum;
uint32_t cfg_cfcsum;
uint8_t cfg_csum_status;
uint8_t cfg_store;
enum csio_dev_state fw_state;
struct csio_vpd vpd;
uint8_t pfn; /* Physical Function
* number
*/
uint32_t port_vec; /* Port vector */
uint8_t num_pports; /* Number of physical
* ports.
*/
uint8_t rst_retries; /* Reset retries */
uint8_t cur_evt; /* current s/m evt */
uint8_t prev_evt; /* Previous s/m evt */
uint32_t dev_num; /* device number */
struct csio_pport pport[CSIO_MAX_PPORTS]; /* Ports (XGMACs) */
struct csio_hw_params params; /* Hw parameters */
struct pci_pool *scsi_pci_pool; /* PCI pool for SCSI */
mempool_t *mb_mempool; /* Mailbox memory pool*/
mempool_t *rnode_mempool; /* rnode memory pool */
/* Interrupt */
enum csio_intr_mode intr_mode; /* INTx, MSI, MSIX */
uint32_t fwevt_intr_idx; /* FW evt MSIX/interrupt
* index
*/
uint32_t nondata_intr_idx; /* nondata MSIX/intr
* idx
*/
uint8_t cfg_neq; /* FW configured no of
* egress queues
*/
uint8_t cfg_niq; /* FW configured no of
* iq queues.
*/
struct csio_fcoe_res_info fres_info; /* Fcoe resource info */
/* MSIX vectors */
struct csio_msix_entries msix_entries[CSIO_MAX_MSIX_VECS];
struct dentry *debugfs_root; /* Debug FS */
struct csio_hw_stats stats; /* Hw statistics */
};
/* Register access macros */
#define csio_reg(_b, _r) ((_b) + (_r))
#define csio_rd_reg8(_h, _r) readb(csio_reg((_h)->regstart, (_r)))
#define csio_rd_reg16(_h, _r) readw(csio_reg((_h)->regstart, (_r)))
#define csio_rd_reg32(_h, _r) readl(csio_reg((_h)->regstart, (_r)))
#define csio_rd_reg64(_h, _r) readq(csio_reg((_h)->regstart, (_r)))
#define csio_wr_reg8(_h, _v, _r) writeb((_v), \
csio_reg((_h)->regstart, (_r)))
#define csio_wr_reg16(_h, _v, _r) writew((_v), \
csio_reg((_h)->regstart, (_r)))
#define csio_wr_reg32(_h, _v, _r) writel((_v), \
csio_reg((_h)->regstart, (_r)))
#define csio_wr_reg64(_h, _v, _r) writeq((_v), \
csio_reg((_h)->regstart, (_r)))
void csio_set_reg_field(struct csio_hw *, uint32_t, uint32_t, uint32_t);
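/*
 * A sketch of the read-modify-write idiom the accessors above enable
 * (csio_set_reg_field() presumably wraps the same pattern). The
 * register offset, mask, and value below are placeholders only.
 */
static inline void
csio_example_rmw(struct csio_hw *hw, uint32_t reg)
{
	uint32_t val = csio_rd_reg32(hw, reg);

	val &= ~0x00ff0000;		/* clear a hypothetical field */
	val |= 0x5a << 16;		/* write a hypothetical value */
	csio_wr_reg32(hw, val, reg);
}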
/* Core clocks <==> uSecs */
static inline uint32_t
csio_core_ticks_to_us(struct csio_hw *hw, uint32_t ticks)
{
/* add Core Clock / 2 to round ticks to nearest uS */
return (ticks * 1000 + hw->vpd.cclk/2) / hw->vpd.cclk;
}
static inline uint32_t
csio_us_to_core_ticks(struct csio_hw *hw, uint32_t us)
{
return (us * hw->vpd.cclk) / 1000;
}
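/*
 * Worked example, assuming hw->vpd.cclk is in kHz as in the sibling
 * cxgb4 driver: with a 250 MHz core clock (cclk == 250000),
 * csio_core_ticks_to_us(hw, 500000) == (500000 * 1000 + 125000) / 250000
 * == 2000 us, and csio_us_to_core_ticks(hw, 2000) == 500000 ticks.
 */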
/* Easy access macros */
#define csio_hw_to_wrm(hw) ((struct csio_wrm *)(&(hw)->wrm))
#define csio_hw_to_mbm(hw) ((struct csio_mbm *)(&(hw)->mbm))
#define csio_hw_to_scsim(hw) ((struct csio_scsim *)(&(hw)->scsim))
#define csio_hw_to_mgmtm(hw) ((struct csio_mgmtm *)(&(hw)->mgmtm))
#define CSIO_PCI_BUS(hw) ((hw)->pdev->bus->number)
#define CSIO_PCI_DEV(hw) (PCI_SLOT((hw)->pdev->devfn))
#define CSIO_PCI_FUNC(hw) (PCI_FUNC((hw)->pdev->devfn))
#define csio_set_fwevt_intr_idx(_h, _i) ((_h)->fwevt_intr_idx = (_i))
#define csio_get_fwevt_intr_idx(_h) ((_h)->fwevt_intr_idx)
#define csio_set_nondata_intr_idx(_h, _i) ((_h)->nondata_intr_idx = (_i))
#define csio_get_nondata_intr_idx(_h) ((_h)->nondata_intr_idx)
/* Printing/logging */
#define CSIO_DEVID(__dev) ((__dev)->dev_num)
#define CSIO_DEVID_LO(__dev) (CSIO_DEVID((__dev)) & 0xFFFF)
#define CSIO_DEVID_HI(__dev) ((CSIO_DEVID((__dev)) >> 16) & 0xFFFF)
#define csio_info(__hw, __fmt, ...) \
dev_info(&(__hw)->pdev->dev, __fmt, ##__VA_ARGS__)
#define csio_fatal(__hw, __fmt, ...) \
dev_crit(&(__hw)->pdev->dev, __fmt, ##__VA_ARGS__)
#define csio_err(__hw, __fmt, ...) \
dev_err(&(__hw)->pdev->dev, __fmt, ##__VA_ARGS__)
#define csio_warn(__hw, __fmt, ...) \
dev_warn(&(__hw)->pdev->dev, __fmt, ##__VA_ARGS__)
#ifdef __CSIO_DEBUG__
#define csio_dbg(__hw, __fmt, ...) \
csio_info((__hw), __fmt, ##__VA_ARGS__);
#else
#define csio_dbg(__hw, __fmt, ...)
#endif
int csio_mgmt_req_lookup(struct csio_mgmtm *, struct csio_ioreq *);
void csio_hw_intr_disable(struct csio_hw *);
int csio_hw_slow_intr_handler(struct csio_hw *hw);
int csio_hw_start(struct csio_hw *);
int csio_hw_stop(struct csio_hw *);
int csio_hw_reset(struct csio_hw *);
int csio_is_hw_ready(struct csio_hw *);
int csio_is_hw_removing(struct csio_hw *);
int csio_fwevtq_handler(struct csio_hw *);
void csio_evtq_worker(struct work_struct *);
int csio_enqueue_evt(struct csio_hw *hw, enum csio_evt type,
void *evt_msg, uint16_t len);
void csio_evtq_flush(struct csio_hw *hw);
int csio_request_irqs(struct csio_hw *);
void csio_intr_enable(struct csio_hw *);
void csio_intr_disable(struct csio_hw *, bool);
struct csio_lnode *csio_lnode_alloc(struct csio_hw *);
int csio_config_queues(struct csio_hw *);
int csio_hw_mc_read(struct csio_hw *, uint32_t, __be32 *, uint64_t *);
int csio_hw_edc_read(struct csio_hw *, int, uint32_t, __be32 *, uint64_t *);
int csio_hw_init(struct csio_hw *);
void csio_hw_exit(struct csio_hw *);
#endif /* ifndef __CSIO_HW_H__ */

File diff suppressed because it is too large


@@ -0,0 +1,158 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CSIO_INIT_H__
#define __CSIO_INIT_H__
#include <linux/pci.h>
#include <linux/if_ether.h>
#include <scsi/scsi.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_fc.h>
#include "csio_scsi.h"
#include "csio_lnode.h"
#include "csio_rnode.h"
#include "csio_hw.h"
#define CSIO_DRV_AUTHOR "Chelsio Communications"
#define CSIO_DRV_LICENSE "Dual BSD/GPL"
#define CSIO_DRV_DESC "Chelsio FCoE driver"
#define CSIO_DRV_VERSION "1.0.0"
#define CSIO_DEVICE(devid, idx) \
{ PCI_VENDOR_ID_CHELSIO, (devid), PCI_ANY_ID, PCI_ANY_ID, 0, 0, (idx) }
#define CSIO_IS_T4_FPGA(_dev) (((_dev) == CSIO_DEVID_PE10K) ||\
((_dev) == CSIO_DEVID_PE10K_PF1))
/* FCoE device IDs */
#define CSIO_DEVID_PE10K 0xA000
#define CSIO_DEVID_PE10K_PF1 0xA001
#define CSIO_DEVID_T440DBG_FCOE 0x4600
#define CSIO_DEVID_T420CR_FCOE 0x4601
#define CSIO_DEVID_T422CR_FCOE 0x4602
#define CSIO_DEVID_T440CR_FCOE 0x4603
#define CSIO_DEVID_T420BCH_FCOE 0x4604
#define CSIO_DEVID_T440BCH_FCOE 0x4605
#define CSIO_DEVID_T440CH_FCOE 0x4606
#define CSIO_DEVID_T420SO_FCOE 0x4607
#define CSIO_DEVID_T420CX_FCOE 0x4608
#define CSIO_DEVID_T420BT_FCOE 0x4609
#define CSIO_DEVID_T404BT_FCOE 0x460A
#define CSIO_DEVID_B420_FCOE 0x460B
#define CSIO_DEVID_B404_FCOE 0x460C
#define CSIO_DEVID_T480CR_FCOE 0x460D
#define CSIO_DEVID_T440LPCR_FCOE 0x460E
extern struct fc_function_template csio_fc_transport_funcs;
extern struct fc_function_template csio_fc_transport_vport_funcs;
void csio_fchost_attr_init(struct csio_lnode *);
/* INTx handlers */
void csio_scsi_intx_handler(struct csio_hw *, void *, uint32_t,
struct csio_fl_dma_buf *, void *);
void csio_fwevt_intx_handler(struct csio_hw *, void *, uint32_t,
struct csio_fl_dma_buf *, void *);
/* Common os lnode APIs */
void csio_lnodes_block_request(struct csio_hw *);
void csio_lnodes_unblock_request(struct csio_hw *);
void csio_lnodes_block_by_port(struct csio_hw *, uint8_t);
void csio_lnodes_unblock_by_port(struct csio_hw *, uint8_t);
struct csio_lnode *csio_shost_init(struct csio_hw *, struct device *, bool,
struct csio_lnode *);
void csio_shost_exit(struct csio_lnode *);
void csio_lnodes_exit(struct csio_hw *, bool);
static inline struct Scsi_Host *
csio_ln_to_shost(struct csio_lnode *ln)
{
return container_of((void *)ln, struct Scsi_Host, hostdata[0]);
}
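/*
 * A sketch of the inverse mapping: the lnode is embedded at
 * hostdata[0] of the Scsi_Host, so shost_priv() (which returns
 * &shost->hostdata[0]) recovers it directly.
 */
static inline struct csio_lnode *
csio_example_shost_to_ln(struct Scsi_Host *shost)
{
	return shost_priv(shost);
}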
/* SCSI -- locking version of get/put ioreqs */
static inline struct csio_ioreq *
csio_get_scsi_ioreq_lock(struct csio_hw *hw, struct csio_scsim *scsim)
{
struct csio_ioreq *ioreq;
unsigned long flags;
spin_lock_irqsave(&scsim->freelist_lock, flags);
ioreq = csio_get_scsi_ioreq(scsim);
spin_unlock_irqrestore(&scsim->freelist_lock, flags);
return ioreq;
}
static inline void
csio_put_scsi_ioreq_lock(struct csio_hw *hw, struct csio_scsim *scsim,
struct csio_ioreq *ioreq)
{
unsigned long flags;
spin_lock_irqsave(&scsim->freelist_lock, flags);
csio_put_scsi_ioreq(scsim, ioreq);
spin_unlock_irqrestore(&scsim->freelist_lock, flags);
}
/* Called in interrupt context */
static inline void
csio_put_scsi_ioreq_list_lock(struct csio_hw *hw, struct csio_scsim *scsim,
struct list_head *reqlist, int n)
{
unsigned long flags;
spin_lock_irqsave(&scsim->freelist_lock, flags);
csio_put_scsi_ioreq_list(scsim, reqlist, n);
spin_unlock_irqrestore(&scsim->freelist_lock, flags);
}
/* Called in interrupt context */
static inline void
csio_put_scsi_ddp_list_lock(struct csio_hw *hw, struct csio_scsim *scsim,
struct list_head *reqlist, int n)
{
unsigned long flags;
spin_lock_irqsave(&hw->lock, flags);
csio_put_scsi_ddp_list(scsim, reqlist, n);
spin_unlock_irqrestore(&hw->lock, flags);
}
#endif /* ifndef __CSIO_INIT_H__ */


@@ -0,0 +1,624 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/cpumask.h>
#include <linux/string.h>
#include "csio_init.h"
#include "csio_hw.h"
static irqreturn_t
csio_nondata_isr(int irq, void *dev_id)
{
struct csio_hw *hw = (struct csio_hw *) dev_id;
int rv;
unsigned long flags;
if (unlikely(!hw))
return IRQ_NONE;
if (unlikely(pci_channel_offline(hw->pdev))) {
CSIO_INC_STATS(hw, n_pcich_offline);
return IRQ_NONE;
}
spin_lock_irqsave(&hw->lock, flags);
csio_hw_slow_intr_handler(hw);
rv = csio_mb_isr_handler(hw);
if (rv == 0 && !(hw->flags & CSIO_HWF_FWEVT_PENDING)) {
hw->flags |= CSIO_HWF_FWEVT_PENDING;
spin_unlock_irqrestore(&hw->lock, flags);
schedule_work(&hw->evtq_work);
return IRQ_HANDLED;
}
spin_unlock_irqrestore(&hw->lock, flags);
return IRQ_HANDLED;
}
/*
* csio_fwevt_handler - Common FW event handler routine.
* @hw: HW module.
*
* This is the common handler for FW events. It is shared between
* the MSIX and INTx ISRs.
*/
static void
csio_fwevt_handler(struct csio_hw *hw)
{
int rv;
unsigned long flags;
rv = csio_fwevtq_handler(hw);
spin_lock_irqsave(&hw->lock, flags);
if (rv == 0 && !(hw->flags & CSIO_HWF_FWEVT_PENDING)) {
hw->flags |= CSIO_HWF_FWEVT_PENDING;
spin_unlock_irqrestore(&hw->lock, flags);
schedule_work(&hw->evtq_work);
return;
}
spin_unlock_irqrestore(&hw->lock, flags);
} /* csio_fwevt_handler */
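/*
 * Note on the CSIO_HWF_FWEVT_PENDING handshake used above and in
 * csio_nondata_isr(): the flag is set under hw->lock, and evtq_work is
 * scheduled only by the path that transitioned it from clear to set,
 * so at most one work item is queued per burst of FW events.
 * csio_evtq_worker() is expected to clear the flag (again under
 * hw->lock) before draining the event queues.
 */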
/*
* csio_fwevt_isr() - FW events MSIX ISR
* @irq:
* @dev_id:
*
* Process WRs on the FW event queue.
*
*/
static irqreturn_t
csio_fwevt_isr(int irq, void *dev_id)
{
struct csio_hw *hw = (struct csio_hw *) dev_id;
if (unlikely(!hw))
return IRQ_NONE;
if (unlikely(pci_channel_offline(hw->pdev))) {
CSIO_INC_STATS(hw, n_pcich_offline);
return IRQ_NONE;
}
csio_fwevt_handler(hw);
return IRQ_HANDLED;
}
/*
* csio_fwevt_intx_handler() - INTx wrapper for handling FW events.
* @hw: HW module.
*/
void
csio_fwevt_intx_handler(struct csio_hw *hw, void *wr, uint32_t len,
struct csio_fl_dma_buf *flb, void *priv)
{
csio_fwevt_handler(hw);
} /* csio_fwevt_intx_handler */
/*
* csio_process_scsi_cmpl - Process a SCSI WR completion.
* @hw: HW module.
* @wr: The completed WR from the ingress queue.
* @len: Length of the WR.
* @flb: Freelist buffer array.
*
*/
static void
csio_process_scsi_cmpl(struct csio_hw *hw, void *wr, uint32_t len,
struct csio_fl_dma_buf *flb, void *cbfn_q)
{
struct csio_ioreq *ioreq;
uint8_t *scsiwr;
uint8_t subop;
void *cmnd;
unsigned long flags;
ioreq = csio_scsi_cmpl_handler(hw, wr, len, flb, NULL, &scsiwr);
if (likely(ioreq)) {
if (unlikely(*scsiwr == FW_SCSI_ABRT_CLS_WR)) {
subop = FW_SCSI_ABRT_CLS_WR_SUB_OPCODE_GET(
((struct fw_scsi_abrt_cls_wr *)
scsiwr)->sub_opcode_to_chk_all_io);
csio_dbg(hw, "%s cmpl recvd ioreq:%p status:%d\n",
subop ? "Close" : "Abort",
ioreq, ioreq->wr_status);
spin_lock_irqsave(&hw->lock, flags);
if (subop)
csio_scsi_closed(ioreq,
(struct list_head *)cbfn_q);
else
csio_scsi_aborted(ioreq,
(struct list_head *)cbfn_q);
/*
 * We call scsi_done for I/Os whose aborts the driver believes
 * have timed out. If there is a race where the FW completes
 * the abort at the exact moment the driver detects the abort
 * timeout, the following check prevents scsi_done from being
 * called twice for the same command: once from the
 * eh_abort_handler, and again from csio_scsi_isr_handler().
 * It also avoids the need to check csio_scsi_cmnd(req) for
 * NULL in the fast path.
 */
cmnd = csio_scsi_cmnd(ioreq);
if (unlikely(cmnd == NULL))
list_del_init(&ioreq->sm.sm_list);
spin_unlock_irqrestore(&hw->lock, flags);
if (unlikely(cmnd == NULL))
csio_put_scsi_ioreq_lock(hw,
csio_hw_to_scsim(hw), ioreq);
} else {
spin_lock_irqsave(&hw->lock, flags);
csio_scsi_completed(ioreq, (struct list_head *)cbfn_q);
spin_unlock_irqrestore(&hw->lock, flags);
}
}
}
/*
* csio_scsi_isr_handler() - Common SCSI ISR handler.
* @iq: Ingress queue pointer.
*
* Processes SCSI completions on the SCSI IQ indicated by scm->iq_idx
* by calling csio_wr_process_iq_idx. If there are completions on the
* isr_cbfn_q, yank them out into a local queue and call their io_cbfns.
* Once done, add these completions onto the freelist.
* This routine is shared b/w MSIX and INTx.
*/
static inline irqreturn_t
csio_scsi_isr_handler(struct csio_q *iq)
{
struct csio_hw *hw = (struct csio_hw *)iq->owner;
LIST_HEAD(cbfn_q);
struct list_head *tmp;
struct csio_scsim *scm;
struct csio_ioreq *ioreq;
int isr_completions = 0;
scm = csio_hw_to_scsim(hw);
if (unlikely(csio_wr_process_iq(hw, iq, csio_process_scsi_cmpl,
&cbfn_q) != 0))
return IRQ_NONE;
/* Call back the completion routines */
list_for_each(tmp, &cbfn_q) {
ioreq = (struct csio_ioreq *)tmp;
isr_completions++;
ioreq->io_cbfn(hw, ioreq);
/* Release ddp buffer if used for this req */
if (unlikely(ioreq->dcopy))
csio_put_scsi_ddp_list_lock(hw, scm, &ioreq->gen_list,
ioreq->nsge);
}
if (isr_completions) {
/* Return the ioreqs back to ioreq->freelist */
csio_put_scsi_ioreq_list_lock(hw, scm, &cbfn_q,
isr_completions);
}
return IRQ_HANDLED;
}
/*
* csio_scsi_isr() - SCSI MSIX handler
* @irq:
* @dev_id:
*
* This is the top level SCSI MSIX handler. Calls csio_scsi_isr_handler()
* for handling SCSI completions.
*/
static irqreturn_t
csio_scsi_isr(int irq, void *dev_id)
{
struct csio_q *iq = (struct csio_q *) dev_id;
struct csio_hw *hw;
if (unlikely(!iq))
return IRQ_NONE;
hw = (struct csio_hw *)iq->owner;
if (unlikely(pci_channel_offline(hw->pdev))) {
CSIO_INC_STATS(hw, n_pcich_offline);
return IRQ_NONE;
}
csio_scsi_isr_handler(iq);
return IRQ_HANDLED;
}
/*
* csio_scsi_intx_handler() - SCSI INTx handler
* @irq:
* @dev_id:
*
* This is the top level SCSI INTx handler. Calls csio_scsi_isr_handler()
* for handling SCSI completions.
*/
void
csio_scsi_intx_handler(struct csio_hw *hw, void *wr, uint32_t len,
struct csio_fl_dma_buf *flb, void *priv)
{
struct csio_q *iq = priv;
csio_scsi_isr_handler(iq);
} /* csio_scsi_intx_handler */
/*
* csio_fcoe_isr() - INTx/MSI interrupt service routine for FCoE.
* @irq:
* @dev_id:
*
*
*/
static irqreturn_t
csio_fcoe_isr(int irq, void *dev_id)
{
struct csio_hw *hw = (struct csio_hw *) dev_id;
struct csio_q *intx_q = NULL;
int rv;
irqreturn_t ret = IRQ_NONE;
unsigned long flags;
if (unlikely(!hw))
return IRQ_NONE;
if (unlikely(pci_channel_offline(hw->pdev))) {
CSIO_INC_STATS(hw, n_pcich_offline);
return IRQ_NONE;
}
/* Disable the interrupt for this PCI function. */
if (hw->intr_mode == CSIO_IM_INTX)
csio_wr_reg32(hw, 0, MYPF_REG(PCIE_PF_CLI));
/*
* The read in the following function will flush the
* above write.
*/
if (csio_hw_slow_intr_handler(hw))
ret = IRQ_HANDLED;
/* Get the INTx Forward interrupt IQ. */
intx_q = csio_get_q(hw, hw->intr_iq_idx);
CSIO_DB_ASSERT(intx_q);
/* IQ handler is not possible for intx_q, hence pass in NULL */
if (likely(csio_wr_process_iq(hw, intx_q, NULL, NULL) == 0))
ret = IRQ_HANDLED;
spin_lock_irqsave(&hw->lock, flags);
rv = csio_mb_isr_handler(hw);
if (rv == 0 && !(hw->flags & CSIO_HWF_FWEVT_PENDING)) {
hw->flags |= CSIO_HWF_FWEVT_PENDING;
spin_unlock_irqrestore(&hw->lock, flags);
schedule_work(&hw->evtq_work);
return IRQ_HANDLED;
}
spin_unlock_irqrestore(&hw->lock, flags);
return ret;
}
static void
csio_add_msix_desc(struct csio_hw *hw)
{
int i;
struct csio_msix_entries *entryp = &hw->msix_entries[0];
int k = CSIO_EXTRA_VECS;
int len = sizeof(entryp->desc) - 1;
int cnt = hw->num_sqsets + k;
/* Non-data vector */
memset(entryp->desc, 0, len + 1);
snprintf(entryp->desc, len, "csio-%02x:%02x:%x-nondata",
CSIO_PCI_BUS(hw), CSIO_PCI_DEV(hw), CSIO_PCI_FUNC(hw));
entryp++;
memset(entryp->desc, 0, len + 1);
snprintf(entryp->desc, len, "csio-%02x:%02x:%x-fwevt",
CSIO_PCI_BUS(hw), CSIO_PCI_DEV(hw), CSIO_PCI_FUNC(hw));
entryp++;
/* Name SCSI vecs */
for (i = k; i < cnt; i++, entryp++) {
memset(entryp->desc, 0, len + 1);
snprintf(entryp->desc, len, "csio-%02x:%02x:%x-scsi%d",
CSIO_PCI_BUS(hw), CSIO_PCI_DEV(hw),
CSIO_PCI_FUNC(hw), i - CSIO_EXTRA_VECS);
}
}
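/*
 * For a function at PCI address 02:00.0 with two SCSI queue sets, the
 * loop above would produce vector names like (hypothetical values):
 *
 *	csio-02:00:0-nondata
 *	csio-02:00:0-fwevt
 *	csio-02:00:0-scsi0
 *	csio-02:00:0-scsi1
 */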
int
csio_request_irqs(struct csio_hw *hw)
{
int rv, i, j, k = 0;
struct csio_msix_entries *entryp = &hw->msix_entries[0];
struct csio_scsi_cpu_info *info;
if (hw->intr_mode != CSIO_IM_MSIX) {
rv = request_irq(hw->pdev->irq, csio_fcoe_isr,
(hw->intr_mode == CSIO_IM_MSI) ?
0 : IRQF_SHARED,
KBUILD_MODNAME, hw);
if (rv) {
if (hw->intr_mode == CSIO_IM_MSI)
pci_disable_msi(hw->pdev);
csio_err(hw, "Failed to allocate interrupt line.\n");
return -EINVAL;
}
goto out;
}
/* Add the MSIX vector descriptions */
csio_add_msix_desc(hw);
rv = request_irq(entryp[k].vector, csio_nondata_isr, 0,
entryp[k].desc, hw);
if (rv) {
csio_err(hw, "IRQ request failed for vec %d err:%d\n",
entryp[k].vector, rv);
goto err;
}
entryp[k++].dev_id = (void *)hw;
rv = request_irq(entryp[k].vector, csio_fwevt_isr, 0,
entryp[k].desc, hw);
if (rv) {
csio_err(hw, "IRQ request failed for vec %d err:%d\n",
entryp[k].vector, rv);
goto err;
}
entryp[k++].dev_id = (void *)hw;
/* Allocate IRQs for SCSI */
for (i = 0; i < hw->num_pports; i++) {
info = &hw->scsi_cpu_info[i];
for (j = 0; j < info->max_cpus; j++, k++) {
struct csio_scsi_qset *sqset = &hw->sqset[i][j];
struct csio_q *q = hw->wrm.q_arr[sqset->iq_idx];
rv = request_irq(entryp[k].vector, csio_scsi_isr, 0,
entryp[k].desc, q);
if (rv) {
csio_err(hw,
"IRQ request failed for vec %d err:%d\n",
entryp[k].vector, rv);
goto err;
}
entryp[k].dev_id = (void *)q;
} /* for all scsi cpus */
} /* for all ports */
out:
hw->flags |= CSIO_HWF_HOST_INTR_ENABLED;
return 0;
err:
for (i = 0; i < k; i++) {
entryp = &hw->msix_entries[i];
free_irq(entryp->vector, entryp->dev_id);
}
pci_disable_msix(hw->pdev);
return -EINVAL;
}
static void
csio_disable_msix(struct csio_hw *hw, bool free)
{
int i;
struct csio_msix_entries *entryp;
int cnt = hw->num_sqsets + CSIO_EXTRA_VECS;
if (free) {
for (i = 0; i < cnt; i++) {
entryp = &hw->msix_entries[i];
free_irq(entryp->vector, entryp->dev_id);
}
}
pci_disable_msix(hw->pdev);
}
/* Reduce per-port max possible CPUs */
static void
csio_reduce_sqsets(struct csio_hw *hw, int cnt)
{
int i;
struct csio_scsi_cpu_info *info;
while (cnt < hw->num_sqsets) {
for (i = 0; i < hw->num_pports; i++) {
info = &hw->scsi_cpu_info[i];
if (info->max_cpus > 1) {
info->max_cpus--;
hw->num_sqsets--;
if (hw->num_sqsets <= cnt)
break;
}
}
}
csio_dbg(hw, "Reduced sqsets to %d\n", hw->num_sqsets);
}
static int
csio_enable_msix(struct csio_hw *hw)
{
int rv, i, j, k, n, min, cnt;
struct csio_msix_entries *entryp;
struct msix_entry *entries;
int extra = CSIO_EXTRA_VECS;
struct csio_scsi_cpu_info *info;
min = hw->num_pports + extra;
cnt = hw->num_sqsets + extra;
/* Max vectors required based on #niqs configured in fw */
if (hw->flags & CSIO_HWF_USING_SOFT_PARAMS || !csio_is_hw_master(hw))
cnt = min_t(uint8_t, hw->cfg_niq, cnt);
entries = kzalloc(sizeof(struct msix_entry) * cnt, GFP_KERNEL);
if (!entries)
return -ENOMEM;
for (i = 0; i < cnt; i++)
entries[i].entry = (uint16_t)i;
csio_dbg(hw, "FW supp #niq:%d, trying %d msix's\n", hw->cfg_niq, cnt);
while ((rv = pci_enable_msix(hw->pdev, entries, cnt)) >= min)
cnt = rv;
if (!rv) {
if (cnt < (hw->num_sqsets + extra)) {
csio_dbg(hw, "Reducing sqsets to %d\n", cnt - extra);
csio_reduce_sqsets(hw, cnt - extra);
}
} else {
if (rv > 0) {
pci_disable_msix(hw->pdev);
csio_info(hw, "Not using MSI-X, remainder:%d\n", rv);
}
kfree(entries);
return -ENOMEM;
}
/* Save off vectors */
for (i = 0; i < cnt; i++) {
entryp = &hw->msix_entries[i];
entryp->vector = entries[i].vector;
}
/* Distribute vectors */
k = 0;
csio_set_nondata_intr_idx(hw, entries[k].entry);
csio_set_mb_intr_idx(csio_hw_to_mbm(hw), entries[k++].entry);
csio_set_fwevt_intr_idx(hw, entries[k++].entry);
for (i = 0; i < hw->num_pports; i++) {
info = &hw->scsi_cpu_info[i];
for (j = 0; j < hw->num_scsi_msix_cpus; j++) {
n = (j % info->max_cpus) + k;
hw->sqset[i][j].intr_idx = entries[n].entry;
}
k += info->max_cpus;
}
kfree(entries);
return 0;
}
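/*
 * Worked example of the vector distribution above (hypothetical
 * configuration, assuming CSIO_EXTRA_VECS is 2 -- non-data plus FW
 * event): with 2 ports and info->max_cpus == 2 on each,
 * cnt == 2 + 4 == 6 and the entries map as
 *
 *	entries[0]    -> non-data + mailbox interrupts
 *	entries[1]    -> FW event queue
 *	entries[2..3] -> port 0 SCSI qsets ((j % 2) + 2)
 *	entries[4..5] -> port 1 SCSI qsets ((j % 2) + 4)
 */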
void
csio_intr_enable(struct csio_hw *hw)
{
hw->intr_mode = CSIO_IM_NONE;
hw->flags &= ~CSIO_HWF_HOST_INTR_ENABLED;
/* Try MSIX, then MSI or fall back to INTx */
if ((csio_msi == 2) && !csio_enable_msix(hw))
hw->intr_mode = CSIO_IM_MSIX;
else {
/* Max iqs required based on #niqs configured in fw */
if (hw->flags & CSIO_HWF_USING_SOFT_PARAMS ||
!csio_is_hw_master(hw)) {
int extra = CSIO_EXTRA_MSI_IQS;
if (hw->cfg_niq < (hw->num_sqsets + extra)) {
csio_dbg(hw, "Reducing sqsets to %d\n",
hw->cfg_niq - extra);
csio_reduce_sqsets(hw, hw->cfg_niq - extra);
}
}
if ((csio_msi == 1) && !pci_enable_msi(hw->pdev))
hw->intr_mode = CSIO_IM_MSI;
else
hw->intr_mode = CSIO_IM_INTX;
}
csio_dbg(hw, "Using %s interrupt mode.\n",
(hw->intr_mode == CSIO_IM_MSIX) ? "MSIX" :
((hw->intr_mode == CSIO_IM_MSI) ? "MSI" : "INTx"));
}
void
csio_intr_disable(struct csio_hw *hw, bool free)
{
csio_hw_intr_disable(hw);
switch (hw->intr_mode) {
case CSIO_IM_MSIX:
csio_disable_msix(hw, free);
break;
case CSIO_IM_MSI:
if (free)
free_irq(hw->pdev->irq, hw);
pci_disable_msi(hw->pdev);
break;
case CSIO_IM_INTX:
if (free)
free_irq(hw->pdev->irq, hw);
break;
default:
break;
}
hw->intr_mode = CSIO_IM_NONE;
hw->flags &= ~CSIO_HWF_HOST_INTR_ENABLED;
}

File diff suppressed because it is too large


@@ -0,0 +1,255 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CSIO_LNODE_H__
#define __CSIO_LNODE_H__
#include <linux/kref.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <scsi/fc/fc_els.h>
#include "csio_defs.h"
#include "csio_hw.h"
#define CSIO_FCOE_MAX_NPIV 128
#define CSIO_FCOE_MAX_RNODES 2048
/* FDMI port attribute unknown speed */
#define CSIO_HBA_PORTSPEED_UNKNOWN 0x8000
extern int csio_fcoe_rnodes;
extern int csio_fdmi_enable;
/* State machine events */
enum csio_ln_ev {
CSIO_LNE_NONE = (uint32_t)0,
CSIO_LNE_LINKUP,
CSIO_LNE_FAB_INIT_DONE,
CSIO_LNE_LINK_DOWN,
CSIO_LNE_DOWN_LINK,
CSIO_LNE_LOGO,
CSIO_LNE_CLOSE,
CSIO_LNE_MAX_EVENT,
};
struct csio_fcf_info {
struct list_head list;
uint8_t priority;
uint8_t mac[6];
uint8_t name_id[8];
uint8_t fabric[8];
uint16_t vf_id;
uint8_t vlan_id;
uint16_t max_fcoe_size;
uint8_t fc_map[3];
uint32_t fka_adv;
uint32_t fcfi;
uint8_t get_next:1;
uint8_t link_aff:1;
uint8_t fpma:1;
uint8_t spma:1;
uint8_t login:1;
uint8_t portid;
uint8_t spma_mac[6];
struct kref kref;
};
/* Defines for flags */
#define CSIO_LNF_FIPSUPP 0x00000001 /* Fip Supported */
#define CSIO_LNF_NPIVSUPP 0x00000002 /* NPIV supported */
#define CSIO_LNF_LINK_ENABLE 0x00000004 /* Link enabled */
#define CSIO_LNF_FDMI_ENABLE 0x00000008 /* FDMI support */
/* Transport events */
enum csio_ln_fc_evt {
CSIO_LN_FC_LINKUP = 1,
CSIO_LN_FC_LINKDOWN,
CSIO_LN_FC_RSCN,
CSIO_LN_FC_ATTRIB_UPDATE,
};
/* Lnode stats */
struct csio_lnode_stats {
uint32_t n_link_up; /* Link up events */
uint32_t n_link_down; /* Link down events */
uint32_t n_err; /* error */
uint32_t n_err_nomem; /* memory not available */
uint32_t n_inval_parm; /* Invalid parameters */
uint32_t n_evt_unexp; /* unexpected event */
uint32_t n_evt_drop; /* dropped event */
uint32_t n_rnode_match; /* matched rnode */
uint32_t n_dev_loss_tmo; /* Device loss timeout */
uint32_t n_fdmi_err; /* fdmi err */
uint32_t n_evt_fw[RSCN_DEV_LOST]; /* fw events */
enum csio_ln_ev n_evt_sm[CSIO_LNE_MAX_EVENT]; /* State m/c events */
uint32_t n_rnode_alloc; /* rnode allocated */
uint32_t n_rnode_free; /* rnode freed */
uint32_t n_rnode_nomem; /* rnode alloc failure */
uint32_t n_input_requests; /* Input Requests */
uint32_t n_output_requests; /* Output Requests */
uint32_t n_control_requests; /* Control Requests */
uint32_t n_input_bytes; /* Input Bytes */
uint32_t n_output_bytes; /* Output Bytes */
uint32_t rsvd1;
};
/* Common Lnode params */
struct csio_lnode_params {
uint32_t ra_tov;
uint32_t fcfi;
uint32_t log_level; /* Module level for debugging */
};
struct csio_service_parms {
struct fc_els_csp csp; /* Common service parms */
uint8_t wwpn[8]; /* WWPN */
uint8_t wwnn[8]; /* WWNN */
struct fc_els_cssp clsp[4]; /* Class service params */
uint8_t vvl[16]; /* Vendor version level */
};
/* Lnode */
struct csio_lnode {
struct csio_sm sm; /* State machine + sibling
* lnode list.
*/
struct csio_hw *hwp; /* Pointer to the HW module */
uint8_t portid; /* Port ID */
uint8_t rsvd1;
uint16_t rsvd2;
uint32_t dev_num; /* Device number */
uint32_t flags; /* Flags */
struct list_head fcf_lsthead; /* FCF entries */
struct csio_fcf_info *fcfinfo; /* FCF in use */
struct csio_ioreq *mgmt_req; /* MGMT request */
/* FCoE identifiers */
uint8_t mac[6];
uint32_t nport_id;
struct csio_service_parms ln_sparm; /* Service parms */
/* Firmware identifiers */
uint32_t fcf_flowid; /*fcf flowid */
uint32_t vnp_flowid;
uint16_t ssn_cnt; /* Registered Session */
uint8_t cur_evt; /* Current event */
uint8_t prev_evt; /* Previous event */
/* Children */
struct list_head cln_head; /* Head of the children lnode
* list.
*/
uint32_t num_vports; /* Total NPIV/children LNodes*/
struct csio_lnode *pln; /* Parent lnode of child
* lnodes.
*/
struct list_head cmpl_q; /* Pending I/Os on this lnode */
/* Remote node information */
struct list_head rnhead; /* Head of rnode list */
uint32_t num_reg_rnodes; /* Number of rnodes registered
* with the host.
*/
uint32_t n_scsi_tgts; /* Number of scsi targets
* found
*/
uint32_t last_scan_ntgts;/* Number of scsi targets
* found per last scan.
*/
uint32_t tgt_scan_tick; /* timer started after
* new tgt found
*/
/* FC transport data */
struct fc_vport *fc_vport;
struct fc_host_statistics fch_stats;
struct csio_lnode_stats stats; /* Common lnode stats */
struct csio_lnode_params params; /* Common lnode params */
};
#define csio_lnode_to_hw(ln) ((ln)->hwp)
#define csio_root_lnode(ln) (csio_lnode_to_hw((ln))->rln)
#define csio_parent_lnode(ln) ((ln)->pln)
#define csio_ln_flowid(ln) ((ln)->vnp_flowid)
#define csio_ln_wwpn(ln) ((ln)->ln_sparm.wwpn)
#define csio_ln_wwnn(ln) ((ln)->ln_sparm.wwnn)
#define csio_is_root_ln(ln) (((ln) == csio_root_lnode((ln))) ? 1 : 0)
#define csio_is_phys_ln(ln) (((ln)->pln == NULL) ? 1 : 0)
#define csio_is_npiv_ln(ln) (((ln)->pln != NULL) ? 1 : 0)
#define csio_ln_dbg(_ln, _fmt, ...) \
csio_dbg(_ln->hwp, "%x:%x "_fmt, CSIO_DEVID_HI(_ln), \
CSIO_DEVID_LO(_ln), ##__VA_ARGS__);
#define csio_ln_err(_ln, _fmt, ...) \
csio_err(_ln->hwp, "%x:%x "_fmt, CSIO_DEVID_HI(_ln), \
CSIO_DEVID_LO(_ln), ##__VA_ARGS__);
#define csio_ln_warn(_ln, _fmt, ...) \
csio_warn(_ln->hwp, "%x:%x "_fmt, CSIO_DEVID_HI(_ln), \
CSIO_DEVID_LO(_ln), ##__VA_ARGS__);
/* HW->Lnode notifications */
enum csio_ln_notify {
CSIO_LN_NOTIFY_HWREADY = 1,
CSIO_LN_NOTIFY_HWSTOP,
CSIO_LN_NOTIFY_HWREMOVE,
CSIO_LN_NOTIFY_HWRESET,
};
void csio_fcoe_fwevt_handler(struct csio_hw *, __u8 cpl_op, __be64 *);
int csio_is_lnode_ready(struct csio_lnode *);
void csio_lnode_state_to_str(struct csio_lnode *ln, int8_t *str);
struct csio_lnode *csio_lnode_lookup_by_wwpn(struct csio_hw *, uint8_t *);
int csio_get_phy_port_stats(struct csio_hw *, uint8_t,
struct fw_fcoe_port_stats *);
int csio_scan_done(struct csio_lnode *, unsigned long, unsigned long,
unsigned long, unsigned long);
void csio_notify_lnodes(struct csio_hw *, enum csio_ln_notify);
void csio_disable_lnodes(struct csio_hw *, uint8_t, bool);
void csio_lnode_async_event(struct csio_lnode *, enum csio_ln_fc_evt);
int csio_ln_fdmi_start(struct csio_lnode *, void *);
int csio_lnode_start(struct csio_lnode *);
void csio_lnode_stop(struct csio_lnode *);
void csio_lnode_close(struct csio_lnode *);
int csio_lnode_init(struct csio_lnode *, struct csio_hw *,
struct csio_lnode *);
void csio_lnode_exit(struct csio_lnode *);
#endif /* ifndef __CSIO_LNODE_H__ */

File diff suppressed because it is too large


@@ -0,0 +1,278 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CSIO_MB_H__
#define __CSIO_MB_H__
#include <linux/timer.h>
#include <linux/completion.h>
#include "t4fw_api.h"
#include "t4fw_api_stor.h"
#include "csio_defs.h"
#define CSIO_STATS_OFFSET (2)
#define CSIO_NUM_STATS_PER_MB (6)
struct fw_fcoe_port_cmd_params {
uint8_t portid;
uint8_t idx;
uint8_t nstats;
};
#define CSIO_DUMP_MB(__hw, __num, __mb) \
csio_dbg(__hw, "\t%llx %llx %llx %llx %llx %llx %llx %llx\n", \
(unsigned long long)csio_rd_reg64(__hw, __mb), \
(unsigned long long)csio_rd_reg64(__hw, __mb + 8), \
(unsigned long long)csio_rd_reg64(__hw, __mb + 16), \
(unsigned long long)csio_rd_reg64(__hw, __mb + 24), \
(unsigned long long)csio_rd_reg64(__hw, __mb + 32), \
(unsigned long long)csio_rd_reg64(__hw, __mb + 40), \
(unsigned long long)csio_rd_reg64(__hw, __mb + 48), \
(unsigned long long)csio_rd_reg64(__hw, __mb + 56))
#define CSIO_MB_MAX_REGS 8
#define CSIO_MAX_MB_SIZE 64
#define CSIO_MB_POLL_FREQ 5 /* 5 ms */
#define CSIO_MB_DEFAULT_TMO FW_CMD_MAX_TIMEOUT
/* Device master in HELLO command */
enum csio_dev_master { CSIO_MASTER_CANT, CSIO_MASTER_MAY, CSIO_MASTER_MUST };
enum csio_mb_owner { CSIO_MBOWNER_NONE, CSIO_MBOWNER_FW, CSIO_MBOWNER_PL };
enum csio_dev_state {
CSIO_DEV_STATE_UNINIT,
CSIO_DEV_STATE_INIT,
CSIO_DEV_STATE_ERR
};
#define FW_PARAM_DEV(param) \
(FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param))
#define FW_PARAM_PFVF(param) \
(FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param)| \
FW_PARAMS_PARAM_Y(0) | \
FW_PARAMS_PARAM_Z(0))
enum {
PAUSE_RX = 1 << 0,
PAUSE_TX = 1 << 1,
PAUSE_AUTONEG = 1 << 2
};
#define CSIO_INIT_MBP(__mbp, __cp, __tmo, __priv, __fn, __clear) \
do { \
if (__clear) \
memset((__cp), 0, \
CSIO_MB_MAX_REGS * sizeof(__be64)); \
INIT_LIST_HEAD(&(__mbp)->list); \
(__mbp)->tmo = (__tmo); \
(__mbp)->priv = (void *)(__priv); \
(__mbp)->mb_cbfn = (__fn); \
(__mbp)->mb_size = sizeof(*(__cp)); \
} while (0)
struct csio_mbm_stats {
uint32_t n_req; /* number of mbox req */
uint32_t n_rsp; /* number of mbox rsp */
uint32_t n_activeq; /* number of mbox req active Q */
uint32_t n_cbfnq; /* number of mbox req cbfn Q */
uint32_t n_tmo; /* number of mbox timeout */
uint32_t n_cancel; /* number of mbox cancel */
uint32_t n_err; /* number of mbox error */
};
/* Driver version of Mailbox */
struct csio_mb {
struct list_head list; /* for req/resp */
/* queue in driver */
__be64 mb[CSIO_MB_MAX_REGS]; /* MB in HW format */
int mb_size; /* Size of this
* mailbox.
*/
uint32_t tmo; /* Timeout */
struct completion cmplobj; /* MB Completion
* object
*/
void (*mb_cbfn) (struct csio_hw *, struct csio_mb *);
/* Callback fn */
void *priv; /* Owner private ptr */
};
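/*
 * A minimal usage sketch (hypothetical helper, not part of the
 * driver) of CSIO_INIT_MBP() above: a FW command structure is
 * overlaid on mbp->mb, cleared, and the mailbox descriptor armed with
 * a timeout and completion callback before csio_mb_issue().
 */
static inline void
csio_example_init_mb(struct csio_hw *hw, struct csio_mb *mbp,
		     void (*cbfn)(struct csio_hw *, struct csio_mb *))
{
	struct fw_reset_cmd *cmdp = (struct fw_reset_cmd *)(mbp->mb);

	CSIO_INIT_MBP(mbp, cmdp, CSIO_MB_DEFAULT_TMO, hw, cbfn, 1);
	/* ... fill in *cmdp in FW format, then csio_mb_issue(hw, mbp) ... */
}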
struct csio_mbm {
uint32_t a_mbox; /* Async mbox num */
uint32_t intr_idx; /* Interrupt index */
struct timer_list timer; /* Mbox timer */
struct list_head req_q; /* Mbox request queue */
struct list_head cbfn_q; /* Mbox completion q */
struct csio_mb *mcurrent; /* Current mailbox */
uint32_t req_q_cnt; /* Outstanding mbox
* cmds
*/
struct csio_mbm_stats stats; /* Statistics */
};
#define csio_set_mb_intr_idx(_m, _i) ((_m)->intr_idx = (_i))
#define csio_get_mb_intr_idx(_m) ((_m)->intr_idx)
struct csio_iq_params;
struct csio_eq_params;
enum fw_retval csio_mb_fw_retval(struct csio_mb *);
/* MB helpers */
void csio_mb_hello(struct csio_hw *, struct csio_mb *, uint32_t,
uint32_t, uint32_t, enum csio_dev_master,
void (*)(struct csio_hw *, struct csio_mb *));
void csio_mb_process_hello_rsp(struct csio_hw *, struct csio_mb *,
enum fw_retval *, enum csio_dev_state *,
uint8_t *);
void csio_mb_bye(struct csio_hw *, struct csio_mb *, uint32_t,
void (*)(struct csio_hw *, struct csio_mb *));
void csio_mb_reset(struct csio_hw *, struct csio_mb *, uint32_t, int, int,
void (*)(struct csio_hw *, struct csio_mb *));
void csio_mb_params(struct csio_hw *, struct csio_mb *, uint32_t, unsigned int,
unsigned int, unsigned int, const u32 *, u32 *, bool,
void (*)(struct csio_hw *, struct csio_mb *));
void csio_mb_process_read_params_rsp(struct csio_hw *, struct csio_mb *,
enum fw_retval *, unsigned int, u32 *);
void csio_mb_ldst(struct csio_hw *hw, struct csio_mb *mbp, uint32_t tmo,
int reg);
void csio_mb_caps_config(struct csio_hw *, struct csio_mb *, uint32_t,
bool, bool, bool, bool,
void (*)(struct csio_hw *, struct csio_mb *));
void csio_rss_glb_config(struct csio_hw *, struct csio_mb *,
uint32_t, uint8_t, unsigned int,
void (*)(struct csio_hw *, struct csio_mb *));
void csio_mb_pfvf(struct csio_hw *, struct csio_mb *, uint32_t,
unsigned int, unsigned int, unsigned int,
unsigned int, unsigned int, unsigned int,
unsigned int, unsigned int, unsigned int,
unsigned int, unsigned int, unsigned int,
unsigned int, void (*) (struct csio_hw *, struct csio_mb *));
void csio_mb_port(struct csio_hw *, struct csio_mb *, uint32_t,
uint8_t, bool, uint32_t, uint16_t,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_mb_process_read_port_rsp(struct csio_hw *, struct csio_mb *,
enum fw_retval *, uint16_t *);
void csio_mb_initialize(struct csio_hw *, struct csio_mb *, uint32_t,
void (*)(struct csio_hw *, struct csio_mb *));
void csio_mb_iq_alloc_write(struct csio_hw *, struct csio_mb *, void *,
uint32_t, struct csio_iq_params *,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_mb_iq_alloc_write_rsp(struct csio_hw *, struct csio_mb *,
enum fw_retval *, struct csio_iq_params *);
void csio_mb_iq_free(struct csio_hw *, struct csio_mb *, void *,
uint32_t, struct csio_iq_params *,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_mb_eq_ofld_alloc_write(struct csio_hw *, struct csio_mb *, void *,
uint32_t, struct csio_eq_params *,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_mb_eq_ofld_alloc_write_rsp(struct csio_hw *, struct csio_mb *,
enum fw_retval *, struct csio_eq_params *);
void csio_mb_eq_ofld_free(struct csio_hw *, struct csio_mb *, void *,
uint32_t, struct csio_eq_params *,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_fcoe_read_res_info_init_mb(struct csio_hw *, struct csio_mb *,
uint32_t,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_write_fcoe_link_cond_init_mb(struct csio_lnode *, struct csio_mb *,
uint32_t, uint8_t, uint32_t, uint8_t, bool, uint32_t,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_fcoe_vnp_alloc_init_mb(struct csio_lnode *, struct csio_mb *,
uint32_t, uint32_t, uint32_t, uint16_t,
uint8_t [8], uint8_t [8],
void (*) (struct csio_hw *, struct csio_mb *));
void csio_fcoe_vnp_read_init_mb(struct csio_lnode *, struct csio_mb *,
uint32_t, uint32_t, uint32_t,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_fcoe_vnp_free_init_mb(struct csio_lnode *, struct csio_mb *,
uint32_t, uint32_t, uint32_t,
void (*) (struct csio_hw *, struct csio_mb *));
void csio_fcoe_read_fcf_init_mb(struct csio_lnode *, struct csio_mb *,
uint32_t, uint32_t, uint32_t,
void (*cbfn) (struct csio_hw *, struct csio_mb *));
void csio_fcoe_read_portparams_init_mb(struct csio_hw *hw,
struct csio_mb *mbp, uint32_t mb_tmo,
struct fw_fcoe_port_cmd_params *portparams,
void (*cbfn)(struct csio_hw *, struct csio_mb *));
void csio_mb_process_portparams_rsp(struct csio_hw *hw, struct csio_mb *mbp,
enum fw_retval *retval,
struct fw_fcoe_port_cmd_params *portparams,
struct fw_fcoe_port_stats *portstats);
/* MB module functions */
int csio_mbm_init(struct csio_mbm *, struct csio_hw *,
void (*)(uintptr_t));
void csio_mbm_exit(struct csio_mbm *);
void csio_mb_intr_enable(struct csio_hw *);
void csio_mb_intr_disable(struct csio_hw *);
int csio_mb_issue(struct csio_hw *, struct csio_mb *);
void csio_mb_completions(struct csio_hw *, struct list_head *);
int csio_mb_fwevt_handler(struct csio_hw *, __be64 *);
int csio_mb_isr_handler(struct csio_hw *);
struct csio_mb *csio_mb_tmo_handler(struct csio_hw *);
void csio_mb_cancel_all(struct csio_hw *, struct list_head *);
#endif /* ifndef __CSIO_MB_H__ */


@@ -0,0 +1,913 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/string.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_transport_fc.h>
#include <scsi/fc/fc_els.h>
#include <scsi/fc/fc_fs.h>
#include "csio_hw.h"
#include "csio_lnode.h"
#include "csio_rnode.h"
static int csio_rnode_init(struct csio_rnode *, struct csio_lnode *);
static void csio_rnode_exit(struct csio_rnode *);
/* State machine forward declarations */
static void csio_rns_uninit(struct csio_rnode *, enum csio_rn_ev);
static void csio_rns_ready(struct csio_rnode *, enum csio_rn_ev);
static void csio_rns_offline(struct csio_rnode *, enum csio_rn_ev);
static void csio_rns_disappeared(struct csio_rnode *, enum csio_rn_ev);
/* RNF event mapping */
static enum csio_rn_ev fwevt_to_rnevt[] = {
CSIO_RNFE_NONE, /* None */
CSIO_RNFE_LOGGED_IN, /* PLOGI_ACC_RCVD */
CSIO_RNFE_NONE, /* PLOGI_RJT_RCVD */
CSIO_RNFE_PLOGI_RECV, /* PLOGI_RCVD */
CSIO_RNFE_LOGO_RECV, /* PLOGO_RCVD */
CSIO_RNFE_PRLI_DONE, /* PRLI_ACC_RCVD */
CSIO_RNFE_NONE, /* PRLI_RJT_RCVD */
CSIO_RNFE_PRLI_RECV, /* PRLI_RCVD */
CSIO_RNFE_PRLO_RECV, /* PRLO_RCVD */
CSIO_RNFE_NONE, /* NPORT_ID_CHGD */
CSIO_RNFE_LOGO_RECV, /* FLOGO_RCVD */
CSIO_RNFE_NONE, /* CLR_VIRT_LNK_RCVD */
CSIO_RNFE_LOGGED_IN, /* FLOGI_ACC_RCVD */
CSIO_RNFE_NONE, /* FLOGI_RJT_RCVD */
CSIO_RNFE_LOGGED_IN, /* FDISC_ACC_RCVD */
CSIO_RNFE_NONE, /* FDISC_RJT_RCVD */
CSIO_RNFE_NONE, /* FLOGI_TMO_MAX_RETRY */
CSIO_RNFE_NONE, /* IMPL_LOGO_ADISC_ACC */
CSIO_RNFE_NONE, /* IMPL_LOGO_ADISC_RJT */
CSIO_RNFE_NONE, /* IMPL_LOGO_ADISC_CNFLT */
CSIO_RNFE_NONE, /* PRLI_TMO */
CSIO_RNFE_NONE, /* ADISC_TMO */
CSIO_RNFE_NAME_MISSING, /* RSCN_DEV_LOST */
CSIO_RNFE_NONE, /* SCR_ACC_RCVD */
CSIO_RNFE_NONE, /* ADISC_RJT_RCVD */
CSIO_RNFE_NONE, /* LOGO_SNT */
CSIO_RNFE_LOGO_RECV, /* PROTO_ERR_IMPL_LOGO */
};
#define CSIO_FWE_TO_RNFE(_evt) ((_evt > PROTO_ERR_IMPL_LOGO) ? \
CSIO_RNFE_NONE : \
fwevt_to_rnevt[_evt])
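/*
 * For example, CSIO_FWE_TO_RNFE(RSCN_DEV_LOST) yields
 * CSIO_RNFE_NAME_MISSING per the table above, while any FW event code
 * beyond PROTO_ERR_IMPL_LOGO falls back to CSIO_RNFE_NONE.
 */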
int
csio_is_rnode_ready(struct csio_rnode *rn)
{
return csio_match_state(rn, csio_rns_ready);
}
static int
csio_is_rnode_uninit(struct csio_rnode *rn)
{
return csio_match_state(rn, csio_rns_uninit);
}
static int
csio_is_rnode_wka(uint8_t rport_type)
{
if ((rport_type == FLOGI_VFPORT) ||
(rport_type == FDISC_VFPORT) ||
(rport_type == NS_VNPORT) ||
(rport_type == FDMI_VNPORT))
return 1;
return 0;
}
/*
* csio_rn_lookup - Finds the rnode with the given flowid
* @ln - lnode
* @flowid - flowid.
*
* Does the rnode lookup on the given lnode and flowid. If no matching
* entry is found, NULL is returned.
*/
static struct csio_rnode *
csio_rn_lookup(struct csio_lnode *ln, uint32_t flowid)
{
struct csio_rnode *rnhead = (struct csio_rnode *) &ln->rnhead;
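	/*
	 * Note: &ln->rnhead (a bare list_head) is cast to a csio_rnode;
	 * this relies on sm.sm_list sitting at the start of struct
	 * csio_rnode, so the list head doubles as a sentinel element.
	 */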
struct list_head *tmp;
struct csio_rnode *rn;
list_for_each(tmp, &rnhead->sm.sm_list) {
rn = (struct csio_rnode *) tmp;
if (rn->flowid == flowid)
return rn;
}
return NULL;
}
/*
* csio_rn_lookup_wwpn - Finds the rnode with the given wwpn
* @ln: lnode
* @wwpn: wwpn
*
* Does the rnode lookup on the given lnode and wwpn. If no matching
* entry is found, NULL is returned.
*/
static struct csio_rnode *
csio_rn_lookup_wwpn(struct csio_lnode *ln, uint8_t *wwpn)
{
struct csio_rnode *rnhead = (struct csio_rnode *) &ln->rnhead;
struct list_head *tmp;
struct csio_rnode *rn;
list_for_each(tmp, &rnhead->sm.sm_list) {
rn = (struct csio_rnode *) tmp;
if (!memcmp(csio_rn_wwpn(rn), wwpn, 8))
return rn;
}
return NULL;
}
/**
* csio_rnode_lookup_portid - Finds the rnode with the given portid
* @ln: lnode
* @portid: port id
*
* Lookup the rnode list for a given portid. If no matching entry is
* found, NULL is returned.
*/
struct csio_rnode *
csio_rnode_lookup_portid(struct csio_lnode *ln, uint32_t portid)
{
struct csio_rnode *rnhead = (struct csio_rnode *) &ln->rnhead;
struct list_head *tmp;
struct csio_rnode *rn;
list_for_each(tmp, &rnhead->sm.sm_list) {
rn = (struct csio_rnode *) tmp;
if (rn->nport_id == portid)
return rn;
}
return NULL;
}
static int
csio_rn_dup_flowid(struct csio_lnode *ln, uint32_t rdev_flowid,
uint32_t *vnp_flowid)
{
struct csio_rnode *rnhead;
struct list_head *tmp, *tmp1;
struct csio_rnode *rn;
struct csio_lnode *ln_tmp;
struct csio_hw *hw = csio_lnode_to_hw(ln);
list_for_each(tmp1, &hw->sln_head) {
ln_tmp = (struct csio_lnode *) tmp1;
if (ln_tmp == ln)
continue;
rnhead = (struct csio_rnode *)&ln_tmp->rnhead;
list_for_each(tmp, &rnhead->sm.sm_list) {
rn = (struct csio_rnode *) tmp;
if (csio_is_rnode_ready(rn)) {
if (rn->flowid == rdev_flowid) {
*vnp_flowid = csio_ln_flowid(ln_tmp);
return 1;
}
}
}
}
return 0;
}
static struct csio_rnode *
csio_alloc_rnode(struct csio_lnode *ln)
{
struct csio_hw *hw = csio_lnode_to_hw(ln);
struct csio_rnode *rn = mempool_alloc(hw->rnode_mempool, GFP_ATOMIC);
if (!rn)
goto err;
memset(rn, 0, sizeof(struct csio_rnode));
if (csio_rnode_init(rn, ln))
goto err_free;
CSIO_INC_STATS(ln, n_rnode_alloc);
return rn;
err_free:
mempool_free(rn, hw->rnode_mempool);
err:
CSIO_INC_STATS(ln, n_rnode_nomem);
return NULL;
}
static void
csio_free_rnode(struct csio_rnode *rn)
{
struct csio_hw *hw = csio_lnode_to_hw(csio_rnode_to_lnode(rn));
csio_rnode_exit(rn);
CSIO_INC_STATS(rn->lnp, n_rnode_free);
mempool_free(rn, hw->rnode_mempool);
}
/*
* csio_get_rnode - Gets rnode with the given flowid
* @ln - lnode
* @flowid - flow id.
*
* Does the rnode lookup on the given lnode and flowid. If no matching
* rnode is found, a new rnode with the given flowid is allocated and returned.
*/
static struct csio_rnode *
csio_get_rnode(struct csio_lnode *ln, uint32_t flowid)
{
struct csio_rnode *rn;
rn = csio_rn_lookup(ln, flowid);
if (!rn) {
rn = csio_alloc_rnode(ln);
if (!rn)
return NULL;
rn->flowid = flowid;
}
return rn;
}
/*
* csio_put_rnode - Frees the given rnode
* @ln - lnode
* @rn - rnode to be freed
*
* Returns the given rnode to the rnode mempool. The rnode must be in
* the uninit state.
*/
void
csio_put_rnode(struct csio_lnode *ln, struct csio_rnode *rn)
{
CSIO_DB_ASSERT(csio_is_rnode_uninit(rn) != 0);
csio_free_rnode(rn);
}
/*
* csio_confirm_rnode - confirms rnode based on wwpn.
* @ln: lnode
* @rdev_flowid: remote device flowid
* @rdevp: remote device params
* This routine searches the rnode list for an existing rnode with the
* same wwpn as the new rnode. If there is a match, the matched rnode
* is returned; otherwise a new rnode is returned.
* Returns: rnode.
*/
struct csio_rnode *
csio_confirm_rnode(struct csio_lnode *ln, uint32_t rdev_flowid,
struct fcoe_rdev_entry *rdevp)
{
uint8_t rport_type;
struct csio_rnode *rn, *match_rn;
uint32_t vnp_flowid;
__be32 *port_id;
port_id = (__be32 *)&rdevp->r_id[0];
rport_type =
FW_RDEV_WR_RPORT_TYPE_GET(rdevp->rd_xfer_rdy_to_rport_type);
/* Drop rdev event for cntrl port */
if (rport_type == FAB_CTLR_VNPORT) {
csio_ln_dbg(ln,
"Unhandled rport_type:%d recv in rdev evt "
"ssni:x%x\n", rport_type, rdev_flowid);
return NULL;
}
/* Lookup on flowid */
rn = csio_rn_lookup(ln, rdev_flowid);
if (!rn) {
/* Drop events with duplicate flowid */
if (csio_rn_dup_flowid(ln, rdev_flowid, &vnp_flowid)) {
csio_ln_warn(ln,
"ssni:%x already active on vnpi:%x",
rdev_flowid, vnp_flowid);
return NULL;
}
/* Lookup on wwpn for NPORTs */
rn = csio_rn_lookup_wwpn(ln, rdevp->wwpn);
if (!rn)
goto alloc_rnode;
} else {
/* Lookup well-known ports with nport id */
if (csio_is_rnode_wka(rport_type)) {
match_rn = csio_rnode_lookup_portid(ln,
((ntohl(*port_id) >> 8) & CSIO_DID_MASK));
if (match_rn == NULL) {
csio_rn_flowid(rn) = CSIO_INVALID_IDX;
goto alloc_rnode;
}
/*
* Now compare the wwpn to confirm that
* same port relogged in. If so update the matched rn.
* Else, go ahead and alloc a new rnode.
*/
if (!memcmp(csio_rn_wwpn(match_rn), rdevp->wwpn, 8)) {
if (csio_is_rnode_ready(rn)) {
csio_ln_warn(ln,
"rnode is already"
"active ssni:x%x\n",
rdev_flowid);
CSIO_ASSERT(0);
}
csio_rn_flowid(rn) = CSIO_INVALID_IDX;
rn = match_rn;
/* Update rn */
goto found_rnode;
}
csio_rn_flowid(rn) = CSIO_INVALID_IDX;
goto alloc_rnode;
}
/* wwpn match */
if (!memcmp(csio_rn_wwpn(rn), rdevp->wwpn, 8))
goto found_rnode;
/* Search for rnode that have same wwpn */
match_rn = csio_rn_lookup_wwpn(ln, rdevp->wwpn);
if (match_rn != NULL) {
csio_ln_dbg(ln,
"ssni:x%x changed for rport name(wwpn):%llx "
"did:x%x\n", rdev_flowid,
wwn_to_u64(rdevp->wwpn),
match_rn->nport_id);
csio_rn_flowid(rn) = CSIO_INVALID_IDX;
rn = match_rn;
} else {
csio_ln_dbg(ln,
"rnode wwpn mismatch found ssni:x%x "
"name(wwpn):%llx\n",
rdev_flowid,
wwn_to_u64(csio_rn_wwpn(rn)));
if (csio_is_rnode_ready(rn)) {
csio_ln_warn(ln,
"rnode is already active "
"wwpn:%llx ssni:x%x\n",
wwn_to_u64(csio_rn_wwpn(rn)),
rdev_flowid);
CSIO_ASSERT(0);
}
csio_rn_flowid(rn) = CSIO_INVALID_IDX;
goto alloc_rnode;
}
}
found_rnode:
csio_ln_dbg(ln, "found rnode:%p ssni:x%x name(wwpn):%llx\n",
rn, rdev_flowid, wwn_to_u64(rdevp->wwpn));
/* Update flowid */
csio_rn_flowid(rn) = rdev_flowid;
/* update rdev entry */
rn->rdev_entry = rdevp;
CSIO_INC_STATS(ln, n_rnode_match);
return rn;
alloc_rnode:
rn = csio_get_rnode(ln, rdev_flowid);
if (!rn)
return NULL;
csio_ln_dbg(ln, "alloc rnode:%p ssni:x%x name(wwpn):%llx\n",
rn, rdev_flowid, wwn_to_u64(rdevp->wwpn));
/* update rdev entry */
rn->rdev_entry = rdevp;
return rn;
}
/*
* csio_rn_verify_rparams - verify rparams.
* @ln: lnode
* @rn: rnode
* @rdevp: remote device params
* returns success if rparams are verified.
*/
static int
csio_rn_verify_rparams(struct csio_lnode *ln, struct csio_rnode *rn,
struct fcoe_rdev_entry *rdevp)
{
uint8_t null[8];
uint8_t rport_type;
uint8_t fc_class;
__be32 *did;
did = (__be32 *) &rdevp->r_id[0];
rport_type =
FW_RDEV_WR_RPORT_TYPE_GET(rdevp->rd_xfer_rdy_to_rport_type);
switch (rport_type) {
case FLOGI_VFPORT:
rn->role = CSIO_RNFR_FABRIC;
if (((ntohl(*did) >> 8) & CSIO_DID_MASK) != FC_FID_FLOGI) {
csio_ln_err(ln, "ssni:x%x invalid fabric portid\n",
csio_rn_flowid(rn));
return -EINVAL;
}
/* NPIV support */
if (FW_RDEV_WR_NPIV_GET(rdevp->vft_to_qos))
ln->flags |= CSIO_LNF_NPIVSUPP;
break;
case NS_VNPORT:
rn->role = CSIO_RNFR_NS;
if (((ntohl(*did) >> 8) & CSIO_DID_MASK) != FC_FID_DIR_SERV) {
csio_ln_err(ln, "ssni:x%x invalid fabric portid\n",
csio_rn_flowid(rn));
return -EINVAL;
}
break;
case REG_FC4_VNPORT:
case REG_VNPORT:
rn->role = CSIO_RNFR_NPORT;
if (rdevp->event_cause == PRLI_ACC_RCVD ||
rdevp->event_cause == PRLI_RCVD) {
if (FW_RDEV_WR_TASK_RETRY_ID_GET(
rdevp->enh_disc_to_tgt))
rn->fcp_flags |= FCP_SPPF_OVLY_ALLOW;
if (FW_RDEV_WR_RETRY_GET(rdevp->enh_disc_to_tgt))
rn->fcp_flags |= FCP_SPPF_RETRY;
if (FW_RDEV_WR_CONF_CMPL_GET(rdevp->enh_disc_to_tgt))
rn->fcp_flags |= FCP_SPPF_CONF_COMPL;
if (FW_RDEV_WR_TGT_GET(rdevp->enh_disc_to_tgt))
rn->role |= CSIO_RNFR_TARGET;
if (FW_RDEV_WR_INI_GET(rdevp->enh_disc_to_tgt))
rn->role |= CSIO_RNFR_INITIATOR;
}
break;
case FDMI_VNPORT:
case FAB_CTLR_VNPORT:
rn->role = 0;
break;
default:
csio_ln_err(ln, "ssni:x%x invalid rport type recv x%x\n",
csio_rn_flowid(rn), rport_type);
return -EINVAL;
}
/* validate wwpn/wwnn for Name server/remote port */
if (rport_type == REG_VNPORT || rport_type == NS_VNPORT) {
memset(null, 0, 8);
if (!memcmp(rdevp->wwnn, null, 8)) {
csio_ln_err(ln,
"ssni:x%x invalid wwnn received from"
" rport did:x%x\n",
csio_rn_flowid(rn),
(ntohl(*did) & CSIO_DID_MASK));
return -EINVAL;
}
if (!memcmp(rdevp->wwpn, null, 8)) {
csio_ln_err(ln,
"ssni:x%x invalid wwpn received from"
" rport did:x%x\n",
csio_rn_flowid(rn),
(ntohl(*did) & CSIO_DID_MASK));
return -EINVAL;
}
}
/* Copy wwnn, wwpn and nport id */
rn->nport_id = (ntohl(*did) >> 8) & CSIO_DID_MASK;
memcpy(csio_rn_wwnn(rn), rdevp->wwnn, 8);
memcpy(csio_rn_wwpn(rn), rdevp->wwpn, 8);
rn->rn_sparm.csp.sp_bb_data = rdevp->rcv_fr_sz;
fc_class = FW_RDEV_WR_CLASS_GET(rdevp->vft_to_qos);
rn->rn_sparm.clsp[fc_class - 1].cp_class = htons(FC_CPC_VALID);
return 0;
}
static void
__csio_reg_rnode(struct csio_rnode *rn)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
struct csio_hw *hw = csio_lnode_to_hw(ln);
spin_unlock_irq(&hw->lock);
csio_reg_rnode(rn);
spin_lock_irq(&hw->lock);
if (rn->role & CSIO_RNFR_TARGET)
ln->n_scsi_tgts++;
if (rn->nport_id == FC_FID_MGMT_SERV)
csio_ln_fdmi_start(ln, (void *) rn);
}
static void
__csio_unreg_rnode(struct csio_rnode *rn)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
struct csio_hw *hw = csio_lnode_to_hw(ln);
LIST_HEAD(tmp_q);
int cmpl = 0;
if (!list_empty(&rn->host_cmpl_q)) {
csio_dbg(hw, "Returning completion queue I/Os\n");
list_splice_tail_init(&rn->host_cmpl_q, &tmp_q);
cmpl = 1;
}
if (rn->role & CSIO_RNFR_TARGET) {
ln->n_scsi_tgts--;
ln->last_scan_ntgts--;
}
spin_unlock_irq(&hw->lock);
csio_unreg_rnode(rn);
spin_lock_irq(&hw->lock);
/* Cleanup I/Os that were waiting for rnode to unregister */
if (cmpl)
csio_scsi_cleanup_io_q(csio_hw_to_scsim(hw), &tmp_q);
}
/*****************************************************************************/
/* START: Rnode SM */
/*****************************************************************************/
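/*
 * Editor's summary of the transitions implemented by the handlers below
 * (events whose rparams fail csio_rn_verify_rparams() leave the state
 * unchanged; the offline/disappeared handlers additionally post CLOSE):
 *
 *   uninit      --LOGGED_IN/PLOGI_RECV--> ready
 *   ready       --DOWN/LOGO_RECV--------> offline
 *   ready       --NAME_MISSING----------> disappeared
 *   ready       --CLOSE-----------------> uninit
 *   offline     --LOGGED_IN/PLOGI_RECV--> ready
 *   offline     --NAME_MISSING----------> disappeared
 *   disappeared --LOGGED_IN/PLOGI_RECV--> ready
 *   offline/disappeared --CLOSE---------> uninit
 */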
/*
* csio_rns_uninit -
* @rn - rnode
* @evt - SM event.
*
*/
static void
csio_rns_uninit(struct csio_rnode *rn, enum csio_rn_ev evt)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
int ret = 0;
CSIO_INC_STATS(rn, n_evt_sm[evt]);
switch (evt) {
case CSIO_RNFE_LOGGED_IN:
case CSIO_RNFE_PLOGI_RECV:
ret = csio_rn_verify_rparams(ln, rn, rn->rdev_entry);
if (!ret) {
csio_set_state(&rn->sm, csio_rns_ready);
__csio_reg_rnode(rn);
} else {
CSIO_INC_STATS(rn, n_err_inval);
}
break;
case CSIO_RNFE_LOGO_RECV:
csio_ln_dbg(ln,
"ssni:x%x Ignoring event %d recv "
"in rn state[uninit]\n", csio_rn_flowid(rn), evt);
CSIO_INC_STATS(rn, n_evt_drop);
break;
default:
csio_ln_dbg(ln,
"ssni:x%x unexp event %d recv "
"in rn state[uninit]\n", csio_rn_flowid(rn), evt);
CSIO_INC_STATS(rn, n_evt_unexp);
break;
}
}
/*
* csio_rns_ready -
* @rn - rnode
* @evt - SM event.
*
*/
static void
csio_rns_ready(struct csio_rnode *rn, enum csio_rn_ev evt)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
int ret = 0;
CSIO_INC_STATS(rn, n_evt_sm[evt]);
switch (evt) {
case CSIO_RNFE_LOGGED_IN:
case CSIO_RNFE_PLOGI_RECV:
csio_ln_dbg(ln,
"ssni:x%x Ignoring event %d recv from did:x%x "
"in rn state[ready]\n", csio_rn_flowid(rn), evt,
rn->nport_id);
CSIO_INC_STATS(rn, n_evt_drop);
break;
case CSIO_RNFE_PRLI_DONE:
case CSIO_RNFE_PRLI_RECV:
ret = csio_rn_verify_rparams(ln, rn, rn->rdev_entry);
if (!ret)
__csio_reg_rnode(rn);
else
CSIO_INC_STATS(rn, n_err_inval);
break;
case CSIO_RNFE_DOWN:
csio_set_state(&rn->sm, csio_rns_offline);
__csio_unreg_rnode(rn);
/* FW is expected to internally abort outstanding SCSI WRs
* and return all SCSI WRs to the host with status "ABORTED".
*/
break;
case CSIO_RNFE_LOGO_RECV:
csio_set_state(&rn->sm, csio_rns_offline);
__csio_unreg_rnode(rn);
/* FW is expected to internally abort outstanding SCSI WRs
* and return all SCSI WRs to the host with status "ABORTED".
*/
break;
case CSIO_RNFE_CLOSE:
/*
* Each rnode receives a CLOSE event when the driver is removed
* or the device is reset.
* Note: All outstanding I/Os on the remote port need to be
* returned to the upper layer with an appropriate error before
* sending the CLOSE event.
*/
csio_set_state(&rn->sm, csio_rns_uninit);
__csio_unreg_rnode(rn);
break;
case CSIO_RNFE_NAME_MISSING:
csio_set_state(&rn->sm, csio_rns_disappeared);
__csio_unreg_rnode(rn);
/*
* FW is expected to internally abort outstanding SCSI WRs
* and return all SCSI WRs to the host with status "ABORTED".
*/
break;
default:
csio_ln_dbg(ln,
"ssni:x%x unexp event %d recv from did:x%x "
"in rn state[uninit]\n", csio_rn_flowid(rn), evt,
rn->nport_id);
CSIO_INC_STATS(rn, n_evt_unexp);
break;
}
}
/*
* csio_rns_offline -
* @rn - rnode
* @evt - SM event.
*
*/
static void
csio_rns_offline(struct csio_rnode *rn, enum csio_rn_ev evt)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
int ret = 0;
CSIO_INC_STATS(rn, n_evt_sm[evt]);
switch (evt) {
case CSIO_RNFE_LOGGED_IN:
case CSIO_RNFE_PLOGI_RECV:
ret = csio_rn_verify_rparams(ln, rn, rn->rdev_entry);
if (!ret) {
csio_set_state(&rn->sm, csio_rns_ready);
__csio_reg_rnode(rn);
} else {
CSIO_INC_STATS(rn, n_err_inval);
csio_post_event(&rn->sm, CSIO_RNFE_CLOSE);
}
break;
case CSIO_RNFE_DOWN:
csio_ln_dbg(ln,
"ssni:x%x Ignoring event %d recv from did:x%x "
"in rn state[offline]\n", csio_rn_flowid(rn), evt,
rn->nport_id);
CSIO_INC_STATS(rn, n_evt_drop);
break;
case CSIO_RNFE_CLOSE:
/* Each rnode receives a CLOSE event when the driver is removed
* or the device is reset.
* Note: All outstanding I/Os on the remote port need to be
* returned to the upper layer with an appropriate error before
* sending the CLOSE event.
*/
csio_set_state(&rn->sm, csio_rns_uninit);
break;
case CSIO_RNFE_NAME_MISSING:
csio_set_state(&rn->sm, csio_rns_disappeared);
break;
default:
csio_ln_dbg(ln,
"ssni:x%x unexp event %d recv from did:x%x "
"in rn state[offline]\n", csio_rn_flowid(rn), evt,
rn->nport_id);
CSIO_INC_STATS(rn, n_evt_unexp);
break;
}
}
/*
* csio_rns_disappeared -
* @rn - rnode
* @evt - SM event.
*
*/
static void
csio_rns_disappeared(struct csio_rnode *rn, enum csio_rn_ev evt)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
int ret = 0;
CSIO_INC_STATS(rn, n_evt_sm[evt]);
switch (evt) {
case CSIO_RNFE_LOGGED_IN:
case CSIO_RNFE_PLOGI_RECV:
ret = csio_rn_verify_rparams(ln, rn, rn->rdev_entry);
if (!ret) {
csio_set_state(&rn->sm, csio_rns_ready);
__csio_reg_rnode(rn);
} else {
CSIO_INC_STATS(rn, n_err_inval);
csio_post_event(&rn->sm, CSIO_RNFE_CLOSE);
}
break;
case CSIO_RNFE_CLOSE:
/* Each rnode receives a CLOSE event when the driver is removed
* or the device is reset.
* Note: All outstanding I/Os on the remote port need to be
* returned to the upper layer with an appropriate error before
* sending the CLOSE event.
*/
csio_set_state(&rn->sm, csio_rns_uninit);
break;
case CSIO_RNFE_DOWN:
case CSIO_RNFE_NAME_MISSING:
csio_ln_dbg(ln,
"ssni:x%x Ignoring event %d recv from did x%x"
"in rn state[disappeared]\n", csio_rn_flowid(rn),
evt, rn->nport_id);
break;
default:
csio_ln_dbg(ln,
"ssni:x%x unexp event %d recv from did x%x"
"in rn state[disappeared]\n", csio_rn_flowid(rn),
evt, rn->nport_id);
CSIO_INC_STATS(rn, n_evt_unexp);
break;
}
}
/*****************************************************************************/
/* END: Rnode SM */
/*****************************************************************************/
/*
* csio_rnode_devloss_handler - Device loss event handler
* @rn: rnode
*
* Post event to close rnode SM and free rnode.
*/
void
csio_rnode_devloss_handler(struct csio_rnode *rn)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
/* ignore if same rnode came back as online */
if (csio_is_rnode_ready(rn))
return;
csio_post_event(&rn->sm, CSIO_RNFE_CLOSE);
/* Free rn if in uninit state */
if (csio_is_rnode_uninit(rn))
csio_put_rnode(ln, rn);
}
/**
* csio_rnode_fwevt_handler - Event handler for firmware rnode events.
* @rn: rnode
* @fwevt: firmware event
*/
void
csio_rnode_fwevt_handler(struct csio_rnode *rn, uint8_t fwevt)
{
struct csio_lnode *ln = csio_rnode_to_lnode(rn);
enum csio_rn_ev evt;
evt = CSIO_FWE_TO_RNFE(fwevt);
if (!evt) {
csio_ln_err(ln, "ssni:x%x Unhandled FW Rdev event: %d\n",
csio_rn_flowid(rn), fwevt);
CSIO_INC_STATS(rn, n_evt_unexp);
return;
}
CSIO_INC_STATS(rn, n_evt_fw[fwevt]);
/* Track previous & current events for debugging */
rn->prev_evt = rn->cur_evt;
rn->cur_evt = fwevt;
/* Post event to rnode SM */
csio_post_event(&rn->sm, evt);
/* Free rn if in uninit state */
if (csio_is_rnode_uninit(rn))
csio_put_rnode(ln, rn);
}
/*
* csio_rnode_init - Initialize rnode.
* @rn: RNode
* @ln: Associated lnode
*
* Caller is responsible for holding the lock. The lock is required
* to be held for inserting the rnode in ln->rnhead list.
*/
static int
csio_rnode_init(struct csio_rnode *rn, struct csio_lnode *ln)
{
csio_rnode_to_lnode(rn) = ln;
csio_init_state(&rn->sm, csio_rns_uninit);
INIT_LIST_HEAD(&rn->host_cmpl_q);
csio_rn_flowid(rn) = CSIO_INVALID_IDX;
/* Add rnode to list of lnodes->rnhead */
list_add_tail(&rn->sm.sm_list, &ln->rnhead);
return 0;
}
static void
csio_rnode_exit(struct csio_rnode *rn)
{
list_del_init(&rn->sm.sm_list);
CSIO_DB_ASSERT(list_empty(&rn->host_cmpl_q));
}

View File

@ -0,0 +1,141 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CSIO_RNODE_H__
#define __CSIO_RNODE_H__
#include "csio_defs.h"
/* State machine events */
enum csio_rn_ev {
CSIO_RNFE_NONE = (uint32_t)0, /* None */
CSIO_RNFE_LOGGED_IN, /* [N/F]Port login
* complete.
*/
CSIO_RNFE_PRLI_DONE, /* PRLI completed */
CSIO_RNFE_PLOGI_RECV, /* Received PLOGI */
CSIO_RNFE_PRLI_RECV, /* Received PRLI */
CSIO_RNFE_LOGO_RECV, /* Received LOGO */
CSIO_RNFE_PRLO_RECV, /* Received PRLO */
CSIO_RNFE_DOWN, /* Rnode is down */
CSIO_RNFE_CLOSE, /* Close rnode */
CSIO_RNFE_NAME_MISSING, /* Rnode name missing
* in name server.
*/
CSIO_RNFE_MAX_EVENT,
};
/* rnode stats */
struct csio_rnode_stats {
uint32_t n_err; /* error */
uint32_t n_err_inval; /* invalid parameter */
uint32_t n_err_nomem; /* error nomem */
uint32_t n_evt_unexp; /* unexpected event */
uint32_t n_evt_drop; /* dropped event */
uint32_t n_evt_fw[RSCN_DEV_LOST]; /* fw events */
enum csio_rn_ev n_evt_sm[CSIO_RNFE_MAX_EVENT]; /* State m/c events */
uint32_t n_lun_rst; /* Number of resets
* of LUNs under this
* target
*/
uint32_t n_lun_rst_fail; /* Number of LUN reset
* failures.
*/
uint32_t n_tgt_rst; /* Number of target resets */
uint32_t n_tgt_rst_fail; /* Number of target reset
* failures.
*/
};
/* Defines for rnode role */
#define CSIO_RNFR_INITIATOR 0x1
#define CSIO_RNFR_TARGET 0x2
#define CSIO_RNFR_FABRIC 0x4
#define CSIO_RNFR_NS 0x8
#define CSIO_RNFR_NPORT 0x10
struct csio_rnode {
struct csio_sm sm; /* State machine -
* should be the
* 1st member
*/
struct csio_lnode *lnp; /* Pointer to owning
* Lnode */
uint32_t flowid; /* Firmware ID */
struct list_head host_cmpl_q; /* SCSI IOs
* pending completion
* to the mid-layer.
*/
/* FC identifiers for remote node */
uint32_t nport_id;
uint16_t fcp_flags; /* FCP Flags */
uint8_t cur_evt; /* Current event */
uint8_t prev_evt; /* Previous event */
uint32_t role; /* Fabric/Target/
* Initiator/NS
*/
struct fcoe_rdev_entry *rdev_entry; /* Rdev entry */
struct csio_service_parms rn_sparm;
/* FC transport attributes */
struct fc_rport *rport; /* FC transport rport */
uint32_t supp_classes; /* Supported FC classes */
uint32_t maxframe_size; /* Max Frame size */
uint32_t scsi_id; /* Transport given SCSI id */
struct csio_rnode_stats stats; /* Common rnode stats */
};
#define csio_rn_flowid(rn) ((rn)->flowid)
#define csio_rn_wwpn(rn) ((rn)->rn_sparm.wwpn)
#define csio_rn_wwnn(rn) ((rn)->rn_sparm.wwnn)
#define csio_rnode_to_lnode(rn) ((rn)->lnp)
int csio_is_rnode_ready(struct csio_rnode *rn);
void csio_rnode_state_to_str(struct csio_rnode *rn, int8_t *str);
struct csio_rnode *csio_rnode_lookup_portid(struct csio_lnode *, uint32_t);
struct csio_rnode *csio_confirm_rnode(struct csio_lnode *,
uint32_t, struct fcoe_rdev_entry *);
void csio_rnode_fwevt_handler(struct csio_rnode *rn, uint8_t fwevt);
void csio_put_rnode(struct csio_lnode *ln, struct csio_rnode *rn);
void csio_reg_rnode(struct csio_rnode *);
void csio_unreg_rnode(struct csio_rnode *);
void csio_rnode_devloss_handler(struct csio_rnode *);
#endif /* ifndef __CSIO_RNODE_H__ */

File diff suppressed because it is too large

View File

@ -0,0 +1,342 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CSIO_SCSI_H__
#define __CSIO_SCSI_H__
#include <linux/spinlock_types.h>
#include <linux/completion.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_eh.h>
#include <scsi/scsi_tcq.h>
#include <scsi/fc/fc_fcp.h>
#include "csio_defs.h"
#include "csio_wr.h"
extern struct scsi_host_template csio_fcoe_shost_template;
extern struct scsi_host_template csio_fcoe_shost_vport_template;
extern int csio_scsi_eqsize;
extern int csio_scsi_iqlen;
extern int csio_scsi_ioreqs;
extern uint32_t csio_max_scan_tmo;
extern uint32_t csio_delta_scan_tmo;
extern int csio_lun_qdepth;
/*
**************************** NOTE *******************************
* How do we calculate MAX FCoE SCSI SGEs? Here is the math:
* Max Egress WR size = 512 bytes
* One SCSI egress WR has the following fixed no of bytes:
* 48 (sizeof(struct fw_scsi_write[read]_wr)) - FW WR
* + 32 (sizeof(struct fc_fcp_cmnd)) - Immediate FCP_CMD
* ------
* 80
* ------
* That leaves us with 512 - 80 = 432 bytes for data SGE. Using
* struct ulptx_sgl header for the SGE consumes:
* - 4 bytes for cmnd_sge.
* - 12 bytes for the first SGL.
* That leaves us with 416 bytes for the remaining SGE pairs, which
* is 416 / 24 (sizeof(struct ulptx_sge_pair)) = 17 SGE pairs,
* or 34 SGEs. Adding the first SGE fetches us 35 SGEs.
*/
#define CSIO_SCSI_MAX_SGE 35
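/*
 * Editor's sketch (not part of the driver): the arithmetic in the note
 * above, rechecked at compile time with C11 static assertions. The byte
 * counts are copied from the comment rather than taken from the
 * firmware headers.
 */
#include <assert.h>
static_assert(512 - (48 + 32) == 432, "WR bytes left for the data SGL");
static_assert(432 - (4 + 12) == 416, "bytes left after the ulptx_sgl header");
static_assert((416 / 24) * 2 + 1 == 35, "17 SGE pairs plus the first SGE");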
#define CSIO_SCSI_ABRT_TMO_MS 60000
#define CSIO_SCSI_LUNRST_TMO_MS 60000
#define CSIO_SCSI_TM_POLL_MS 2000 /* should be less than
* all TM timeouts.
*/
#define CSIO_SCSI_IQ_WRSZ 128
#define CSIO_SCSI_IQSIZE (csio_scsi_iqlen * CSIO_SCSI_IQ_WRSZ)
#define CSIO_MAX_SNS_LEN 128
#define CSIO_SCSI_RSP_LEN (FCP_RESP_WITH_EXT + 4 + CSIO_MAX_SNS_LEN)
/* Reference to scsi_cmnd */
#define csio_scsi_cmnd(req) ((req)->scratch1)
struct csio_scsi_stats {
uint64_t n_tot_success; /* Total number of good I/Os */
uint32_t n_rn_nr_error; /* No. of remote-node-not-
* ready errors
*/
uint32_t n_hw_nr_error; /* No. of hw-module-not-
* ready errors
*/
uint32_t n_dmamap_error; /* No. of DMA map errors */
uint32_t n_unsupp_sge_error; /* No. of too-many-SGes
* errors.
*/
uint32_t n_no_req_error; /* No. of Out-of-ioreqs error */
uint32_t n_busy_error; /* No. of -EBUSY errors */
uint32_t n_hosterror; /* No. of FW_HOSTERROR I/O */
uint32_t n_rsperror; /* No. of response errors */
uint32_t n_autosense; /* No. of auto sense replies */
uint32_t n_ovflerror; /* No. of overflow errors */
uint32_t n_unflerror; /* No. of underflow errors */
uint32_t n_rdev_nr_error;/* No. of rdev not
* ready errors
*/
uint32_t n_rdev_lost_error;/* No. of rdev lost errors */
uint32_t n_rdev_logo_error;/* No. of rdev logo errors */
uint32_t n_link_down_error;/* No. of link down errors */
uint32_t n_no_xchg_error; /* No. of no-exchange errors */
uint32_t n_unknown_error;/* No. of unhandled errors */
uint32_t n_aborted; /* No. of aborted I/Os */
uint32_t n_abrt_timedout; /* No. of abort timedouts */
uint32_t n_abrt_fail; /* No. of abort failures */
uint32_t n_abrt_dups; /* No. of duplicate aborts */
uint32_t n_abrt_race_comp; /* No. of aborts that raced
* with completions.
*/
uint32_t n_abrt_busy_error;/* No. of abort failures
* due to -EBUSY.
*/
uint32_t n_closed; /* No. of closed I/Os */
uint32_t n_cls_busy_error; /* No. of close failures
* due to -EBUSY.
*/
uint32_t n_active; /* No. of IOs in active_q */
uint32_t n_tm_active; /* No. of TMs in active_q */
uint32_t n_wcbfn; /* No. of I/Os in worker
* cbfn q
*/
uint32_t n_free_ioreq; /* No. of freelist entries */
uint32_t n_free_ddp; /* No. of DDP freelist */
uint32_t n_unaligned; /* No. of Unaligned SGls */
uint32_t n_inval_cplop; /* No. invalid CPL op's in IQ */
uint32_t n_inval_scsiop; /* No. invalid scsi op's in IQ*/
};
struct csio_scsim {
struct csio_hw *hw; /* Pointer to HW module */
uint8_t max_sge; /* Max SGE */
uint8_t proto_cmd_len; /* Proto specific SCSI
* cmd length
*/
uint16_t proto_rsp_len; /* Proto specific SCSI
* response length
*/
spinlock_t freelist_lock; /* Lock for ioreq freelist */
struct list_head active_q; /* Outstanding SCSI I/Os */
struct list_head ioreq_freelist; /* Free list of ioreq's */
struct list_head ddp_freelist; /* DDP descriptor freelist */
struct csio_scsi_stats stats; /* This module's statistics */
};
/* State machine defines */
enum csio_scsi_ev {
CSIO_SCSIE_START_IO = 1, /* Start a regular SCSI IO */
CSIO_SCSIE_START_TM, /* Start a TM IO */
CSIO_SCSIE_COMPLETED, /* IO Completed */
CSIO_SCSIE_ABORT, /* Abort IO */
CSIO_SCSIE_ABORTED, /* IO Aborted */
CSIO_SCSIE_CLOSE, /* Close exchange */
CSIO_SCSIE_CLOSED, /* Exchange closed */
CSIO_SCSIE_DRVCLEANUP, /* Driver wants to manually
* cleanup this I/O.
*/
};
enum csio_scsi_lev {
CSIO_LEV_ALL = 1,
CSIO_LEV_LNODE,
CSIO_LEV_RNODE,
CSIO_LEV_LUN,
};
struct csio_scsi_level_data {
enum csio_scsi_lev level;
struct csio_rnode *rnode;
struct csio_lnode *lnode;
uint64_t oslun;
};
static inline struct csio_ioreq *
csio_get_scsi_ioreq(struct csio_scsim *scm)
{
struct csio_sm *req;
if (likely(!list_empty(&scm->ioreq_freelist))) {
req = list_first_entry(&scm->ioreq_freelist,
struct csio_sm, sm_list);
list_del_init(&req->sm_list);
CSIO_DEC_STATS(scm, n_free_ioreq);
return (struct csio_ioreq *)req;
} else
return NULL;
}
static inline void
csio_put_scsi_ioreq(struct csio_scsim *scm, struct csio_ioreq *ioreq)
{
list_add_tail(&ioreq->sm.sm_list, &scm->ioreq_freelist);
CSIO_INC_STATS(scm, n_free_ioreq);
}
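/*
 * Usage sketch (illustrative, not driver code): the freelist helpers
 * above do no locking themselves, so callers are assumed to serialize
 * through scm->freelist_lock.
 */
static inline struct csio_ioreq *
csio_example_alloc_ioreq(struct csio_scsim *scm)
{
	struct csio_ioreq *ioreq;
	unsigned long flags;

	spin_lock_irqsave(&scm->freelist_lock, flags);
	ioreq = csio_get_scsi_ioreq(scm);	/* NULL when the list is empty */
	spin_unlock_irqrestore(&scm->freelist_lock, flags);

	return ioreq;
}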
static inline void
csio_put_scsi_ioreq_list(struct csio_scsim *scm, struct list_head *reqlist,
int n)
{
list_splice_init(reqlist, &scm->ioreq_freelist);
scm->stats.n_free_ioreq += n;
}
static inline struct csio_dma_buf *
csio_get_scsi_ddp(struct csio_scsim *scm)
{
struct csio_dma_buf *ddp;
if (likely(!list_empty(&scm->ddp_freelist))) {
ddp = list_first_entry(&scm->ddp_freelist,
struct csio_dma_buf, list);
list_del_init(&ddp->list);
CSIO_DEC_STATS(scm, n_free_ddp);
return ddp;
} else
return NULL;
}
static inline void
csio_put_scsi_ddp(struct csio_scsim *scm, struct csio_dma_buf *ddp)
{
list_add_tail(&ddp->list, &scm->ddp_freelist);
CSIO_INC_STATS(scm, n_free_ddp);
}
static inline void
csio_put_scsi_ddp_list(struct csio_scsim *scm, struct list_head *reqlist,
int n)
{
list_splice_tail_init(reqlist, &scm->ddp_freelist);
scm->stats.n_free_ddp += n;
}
static inline void
csio_scsi_completed(struct csio_ioreq *ioreq, struct list_head *cbfn_q)
{
csio_post_event(&ioreq->sm, CSIO_SCSIE_COMPLETED);
if (csio_list_deleted(&ioreq->sm.sm_list))
list_add_tail(&ioreq->sm.sm_list, cbfn_q);
}
static inline void
csio_scsi_aborted(struct csio_ioreq *ioreq, struct list_head *cbfn_q)
{
csio_post_event(&ioreq->sm, CSIO_SCSIE_ABORTED);
list_add_tail(&ioreq->sm.sm_list, cbfn_q);
}
static inline void
csio_scsi_closed(struct csio_ioreq *ioreq, struct list_head *cbfn_q)
{
csio_post_event(&ioreq->sm, CSIO_SCSIE_CLOSED);
list_add_tail(&ioreq->sm.sm_list, cbfn_q);
}
static inline void
csio_scsi_drvcleanup(struct csio_ioreq *ioreq)
{
csio_post_event(&ioreq->sm, CSIO_SCSIE_DRVCLEANUP);
}
/*
* csio_scsi_start_io - Kick-starts the I/O SM.
* @ioreq: I/O request SM.
*
* Needs to be called with the lock held.
*/
static inline int
csio_scsi_start_io(struct csio_ioreq *ioreq)
{
csio_post_event(&ioreq->sm, CSIO_SCSIE_START_IO);
return ioreq->drv_status;
}
/*
* csio_scsi_start_tm - Kicks off the task management I/O SM.
* @ioreq: I/O request SM.
*
* Needs to be called with the lock held.
*/
static inline int
csio_scsi_start_tm(struct csio_ioreq *ioreq)
{
csio_post_event(&ioreq->sm, CSIO_SCSIE_START_TM);
return ioreq->drv_status;
}
/*
* csio_scsi_abort - Aborts an I/O request.
* @ioreq: I/O request SM.
*
* Needs to be called with the lock held.
*/
static inline int
csio_scsi_abort(struct csio_ioreq *ioreq)
{
csio_post_event(&ioreq->sm, CSIO_SCSIE_ABORT);
return ioreq->drv_status;
}
/*
* csio_scsi_close - Closes an I/O request.
* @ioreq: I/O request SM.
*
* Needs to be called with the lock held.
*/
static inline int
csio_scsi_close(struct csio_ioreq *ioreq)
{
csio_post_event(&ioreq->sm, CSIO_SCSIE_CLOSE);
return ioreq->drv_status;
}
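/*
 * Usage sketch (illustrative): the start/abort/close wrappers above only
 * post an event into the I/O SM; the caller is assumed to hold hw->lock
 * and to read the synchronous outcome from drv_status.
 */
static inline int
csio_example_issue_io(struct csio_hw *hw, struct csio_ioreq *ioreq)
{
	unsigned long flags;
	int rv;

	spin_lock_irqsave(&hw->lock, flags);
	rv = csio_scsi_start_io(ioreq);	/* returns ioreq->drv_status */
	spin_unlock_irqrestore(&hw->lock, flags);

	return rv;
}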
void csio_scsi_cleanup_io_q(struct csio_scsim *, struct list_head *);
int csio_scsim_cleanup_io(struct csio_scsim *, bool abort);
int csio_scsim_cleanup_io_lnode(struct csio_scsim *,
struct csio_lnode *);
struct csio_ioreq *csio_scsi_cmpl_handler(struct csio_hw *, void *, uint32_t,
struct csio_fl_dma_buf *,
void *, uint8_t **);
int csio_scsi_qconfig(struct csio_hw *);
int csio_scsim_init(struct csio_scsim *, struct csio_hw *);
void csio_scsim_exit(struct csio_scsim *);
#endif /* __CSIO_SCSI_H__ */

File diff suppressed because it is too large

View File

@ -0,0 +1,512 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2008-2012 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __CSIO_WR_H__
#define __CSIO_WR_H__
#include <linux/cache.h>
#include "csio_defs.h"
#include "t4fw_api.h"
#include "t4fw_api_stor.h"
/*
* SGE register field values.
*/
#define X_INGPCIEBOUNDARY_32B 0
#define X_INGPCIEBOUNDARY_64B 1
#define X_INGPCIEBOUNDARY_128B 2
#define X_INGPCIEBOUNDARY_256B 3
#define X_INGPCIEBOUNDARY_512B 4
#define X_INGPCIEBOUNDARY_1024B 5
#define X_INGPCIEBOUNDARY_2048B 6
#define X_INGPCIEBOUNDARY_4096B 7
/* GTS register */
#define X_TIMERREG_COUNTER0 0
#define X_TIMERREG_COUNTER1 1
#define X_TIMERREG_COUNTER2 2
#define X_TIMERREG_COUNTER3 3
#define X_TIMERREG_COUNTER4 4
#define X_TIMERREG_COUNTER5 5
#define X_TIMERREG_RESTART_COUNTER 6
#define X_TIMERREG_UPDATE_CIDX 7
/*
* Egress Context field values
*/
#define X_FETCHBURSTMIN_16B 0
#define X_FETCHBURSTMIN_32B 1
#define X_FETCHBURSTMIN_64B 2
#define X_FETCHBURSTMIN_128B 3
#define X_FETCHBURSTMAX_64B 0
#define X_FETCHBURSTMAX_128B 1
#define X_FETCHBURSTMAX_256B 2
#define X_FETCHBURSTMAX_512B 3
#define X_HOSTFCMODE_NONE 0
#define X_HOSTFCMODE_INGRESS_QUEUE 1
#define X_HOSTFCMODE_STATUS_PAGE 2
#define X_HOSTFCMODE_BOTH 3
/*
* Ingress Context field values
*/
#define X_UPDATESCHEDULING_TIMER 0
#define X_UPDATESCHEDULING_COUNTER_OPTTIMER 1
#define X_UPDATEDELIVERY_NONE 0
#define X_UPDATEDELIVERY_INTERRUPT 1
#define X_UPDATEDELIVERY_STATUS_PAGE 2
#define X_UPDATEDELIVERY_BOTH 3
#define X_INTERRUPTDESTINATION_PCIE 0
#define X_INTERRUPTDESTINATION_IQ 1
#define X_RSPD_TYPE_FLBUF 0
#define X_RSPD_TYPE_CPL 1
#define X_RSPD_TYPE_INTR 2
/* WR status is at the same position as retval in a CMD header */
#define csio_wr_status(_wr) \
(FW_CMD_RETVAL_GET(ntohl(((struct fw_cmd_hdr *)(_wr))->lo)))
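/*
 * Usage sketch (illustrative): a completion handler can read the firmware
 * return value straight out of the WR, e.g.
 *
 *	if (csio_wr_status(wr) != FW_SUCCESS)
 *		csio_handle_wr_error(wr);	// hypothetical helper
 */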
struct csio_hw;
extern int csio_intr_coalesce_cnt;
extern int csio_intr_coalesce_time;
/* Ingress queue params */
struct csio_iq_params {
uint8_t iq_start:1;
uint8_t iq_stop:1;
uint8_t pfn:3;
uint8_t vfn;
uint16_t physiqid;
uint16_t iqid;
uint16_t fl0id;
uint16_t fl1id;
uint8_t viid;
uint8_t type;
uint8_t iqasynch;
uint8_t reserved4;
uint8_t iqandst;
uint8_t iqanus;
uint8_t iqanud;
uint16_t iqandstindex;
uint8_t iqdroprss;
uint8_t iqpciech;
uint8_t iqdcaen;
uint8_t iqdcacpu;
uint8_t iqintcntthresh;
uint8_t iqo;
uint8_t iqcprio;
uint8_t iqesize;
uint16_t iqsize;
uint64_t iqaddr;
uint8_t iqflintiqhsen;
uint8_t reserved5;
uint8_t iqflintcongen;
uint8_t iqflintcngchmap;
uint32_t reserved6;
uint8_t fl0hostfcmode;
uint8_t fl0cprio;
uint8_t fl0paden;
uint8_t fl0packen;
uint8_t fl0congen;
uint8_t fl0dcaen;
uint8_t fl0dcacpu;
uint8_t fl0fbmin;
uint8_t fl0fbmax;
uint8_t fl0cidxfthresho;
uint8_t fl0cidxfthresh;
uint16_t fl0size;
uint64_t fl0addr;
uint64_t reserved7;
uint8_t fl1hostfcmode;
uint8_t fl1cprio;
uint8_t fl1paden;
uint8_t fl1packen;
uint8_t fl1congen;
uint8_t fl1dcaen;
uint8_t fl1dcacpu;
uint8_t fl1fbmin;
uint8_t fl1fbmax;
uint8_t fl1cidxfthresho;
uint8_t fl1cidxfthresh;
uint16_t fl1size;
uint64_t fl1addr;
};
/* Egress queue params */
struct csio_eq_params {
uint8_t pfn;
uint8_t vfn;
uint8_t eqstart:1;
uint8_t eqstop:1;
uint16_t physeqid;
uint32_t eqid;
uint8_t hostfcmode:2;
uint8_t cprio:1;
uint8_t pciechn:3;
uint16_t iqid;
uint8_t dcaen:1;
uint8_t dcacpu:5;
uint8_t fbmin:3;
uint8_t fbmax:3;
uint8_t cidxfthresho:1;
uint8_t cidxfthresh:3;
uint16_t eqsize;
uint64_t eqaddr;
};
struct csio_dma_buf {
struct list_head list;
void *vaddr; /* Virtual address */
dma_addr_t paddr; /* Physical address */
uint32_t len; /* Buffer size */
};
/* Generic I/O request structure */
struct csio_ioreq {
struct csio_sm sm; /* SM, List
* should be the first member
*/
int iq_idx; /* Ingress queue index */
int eq_idx; /* Egress queue index */
uint32_t nsge; /* Number of SG elements */
uint32_t tmo; /* Driver timeout */
uint32_t datadir; /* Data direction */
struct csio_dma_buf dma_buf; /* Req/resp DMA buffers */
uint16_t wr_status; /* WR completion status */
int16_t drv_status; /* Driver internal status */
struct csio_lnode *lnode; /* Owner lnode */
struct csio_rnode *rnode; /* Src/destination rnode */
void (*io_cbfn) (struct csio_hw *, struct csio_ioreq *);
/* completion callback */
void *scratch1; /* Scratch area 1. */
void *scratch2; /* Scratch area 2. */
struct list_head gen_list; /* Any list associated with
* this ioreq.
*/
uint64_t fw_handle; /* Unique handle passed
* to FW
*/
uint8_t dcopy; /* Data copy required */
uint8_t reserved1;
uint16_t reserved2;
struct completion cmplobj; /* ioreq completion object */
} ____cacheline_aligned_in_smp;
/*
* Egress status page for egress cidx updates
*/
struct csio_qstatus_page {
__be32 qid;
__be16 cidx;
__be16 pidx;
};
enum {
CSIO_MAX_FLBUF_PER_IQWR = 4,
CSIO_QCREDIT_SZ = 64, /* pidx/cidx increments
* in bytes
*/
CSIO_MAX_QID = 0xFFFF,
CSIO_MAX_IQ = 128,
CSIO_SGE_NTIMERS = 6,
CSIO_SGE_NCOUNTERS = 4,
CSIO_SGE_FL_SIZE_REGS = 16,
};
/* Defines for type */
enum {
CSIO_EGRESS = 1,
CSIO_INGRESS = 2,
CSIO_FREELIST = 3,
};
/*
* Structure for footer (last 2 flits) of Ingress Queue Entry.
*/
struct csio_iqwr_footer {
__be32 hdrbuflen_pidx;
__be32 pldbuflen_qid;
union {
u8 type_gen;
__be64 last_flit;
} u;
};
#define IQWRF_NEWBUF (1 << 31)
#define IQWRF_LEN_GET(x) (((x) >> 0) & 0x7fffffffU)
#define IQWRF_GEN_SHIFT 7
#define IQWRF_TYPE_GET(x) (((x) >> 4) & 0x3U)
/*
* WR pair:
* ========
* A WR can start towards the end of a queue, and then continue at the
* beginning, since the queue is considered to be circular. This will
* require a pair of address/len to be passed back to the caller -
* hence the Work request pair structure.
*/
struct csio_wr_pair {
void *addr1;
uint32_t size1;
void *addr2;
uint32_t size2;
};
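/*
 * Sketch of consuming a WR pair (assumed semantics; the driver's real
 * helper is csio_wr_copy_to_wrp): fill addr1/size1 first, then spill
 * any wrapped remainder into addr2/size2.
 */
static inline void
csio_example_copy_to_wrp(void *src, struct csio_wr_pair *wrp, uint32_t len)
{
	uint32_t part1 = len <= wrp->size1 ? len : wrp->size1;

	memcpy(wrp->addr1, src, part1);
	if (len > part1)	/* the WR wrapped past the queue end */
		memcpy(wrp->addr2, (uint8_t *)src + part1, len - part1);
}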
/*
* The following structure is used by ingress processing to return the
* free list buffers to consumers.
*/
struct csio_fl_dma_buf {
struct csio_dma_buf flbufs[CSIO_MAX_FLBUF_PER_IQWR];
/* Freelist DMA buffers */
int offset; /* Offset within the
* first FL buf.
*/
uint32_t totlen; /* Total length */
uint8_t defer_free; /* Freeing of buffer can
* be deferred
*/
};
/* Data-types */
typedef void (*iq_handler_t)(struct csio_hw *, void *, uint32_t,
struct csio_fl_dma_buf *, void *);
struct csio_iq {
uint16_t iqid; /* Queue ID */
uint16_t physiqid; /* Physical Queue ID */
uint16_t genbit; /* Generation bit,
* initially set to 1
*/
int flq_idx; /* Freelist queue index */
iq_handler_t iq_intx_handler; /* IQ INTx handler routine */
};
struct csio_eq {
uint16_t eqid; /* Qid */
uint16_t physeqid; /* Physical Queue ID */
uint8_t wrap[512]; /* Temp area for q-wrap around*/
};
struct csio_fl {
uint16_t flid; /* Qid */
uint16_t packen; /* Packing enabled? */
int offset; /* Offset within FL buf */
int sreg; /* Size register */
struct csio_dma_buf *bufs; /* Free list buffer ptr array
* indexed using flq->cidx/pidx
*/
};
struct csio_qstats {
uint32_t n_tot_reqs; /* Total no. of Requests */
uint32_t n_tot_rsps; /* Total no. of responses */
uint32_t n_qwrap; /* Queue wraps */
uint32_t n_eq_wr_split; /* Number of split EQ WRs */
uint32_t n_qentry; /* Queue entry */
uint32_t n_qempty; /* Queue empty */
uint32_t n_qfull; /* Queue fulls */
uint32_t n_rsp_unknown; /* Unknown response type */
uint32_t n_stray_comp; /* Stray completion intr */
uint32_t n_flq_refill; /* Number of FL refills */
};
/* Queue metadata */
struct csio_q {
uint16_t type; /* Type: Ingress/Egress/FL */
uint16_t pidx; /* producer index */
uint16_t cidx; /* consumer index */
uint16_t inc_idx; /* Incremental index */
uint32_t wr_sz; /* Size of all WRs in this q
* if fixed
*/
void *vstart; /* Base virtual address
* of queue
*/
void *vwrap; /* Virtual end address to
* wrap around at
*/
uint32_t credits; /* Size of queue in credits */
void *owner; /* Owner */
union { /* Queue contexts */
struct csio_iq iq;
struct csio_eq eq;
struct csio_fl fl;
} un;
dma_addr_t pstart; /* Base physical address of
* queue
*/
uint32_t portid; /* PCIE Channel */
uint32_t size; /* Size of queue in bytes */
struct csio_qstats stats; /* Statistics */
} ____cacheline_aligned_in_smp;
struct csio_sge {
uint32_t csio_fl_align; /* Calculated and cached
* for fast path
*/
uint32_t sge_control; /* padding, boundaries,
* lengths, etc.
*/
uint32_t sge_host_page_size; /* Host page size */
uint32_t sge_fl_buf_size[CSIO_SGE_FL_SIZE_REGS];
/* free list buffer sizes */
uint16_t timer_val[CSIO_SGE_NTIMERS];
uint8_t counter_val[CSIO_SGE_NCOUNTERS];
};
/* Work request module */
struct csio_wrm {
int num_q; /* Number of queues */
struct csio_q **q_arr; /* Array of queue pointers
* allocated dynamically
* based on configured values
*/
uint32_t fw_iq_start; /* Start ID of IQ for this fn*/
uint32_t fw_eq_start; /* Start ID of EQ for this fn*/
struct csio_q *intr_map[CSIO_MAX_IQ];
/* IQ-id to IQ map table. */
int free_qidx; /* queue idx of free queue */
struct csio_sge sge; /* SGE params */
};
#define csio_get_q(__hw, __idx) ((__hw)->wrm.q_arr[__idx])
#define csio_q_type(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->type)
#define csio_q_pidx(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->pidx)
#define csio_q_cidx(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->cidx)
#define csio_q_inc_idx(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->inc_idx)
#define csio_q_vstart(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->vstart)
#define csio_q_pstart(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->pstart)
#define csio_q_size(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->size)
#define csio_q_credits(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->credits)
#define csio_q_portid(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->portid)
#define csio_q_wr_sz(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->wr_sz)
#define csio_q_iqid(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->un.iq.iqid)
#define csio_q_physiqid(__hw, __idx) \
((__hw)->wrm.q_arr[(__idx)]->un.iq.physiqid)
#define csio_q_iq_flq_idx(__hw, __idx) \
((__hw)->wrm.q_arr[(__idx)]->un.iq.flq_idx)
#define csio_q_eqid(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->un.eq.eqid)
#define csio_q_flid(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->un.fl.flid)
#define csio_q_physeqid(__hw, __idx) \
((__hw)->wrm.q_arr[(__idx)]->un.eq.physeqid)
#define csio_iq_has_fl(__iq) ((__iq)->un.iq.flq_idx != -1)
#define csio_q_iq_to_flid(__hw, __iq_idx) \
csio_q_flid((__hw), (__hw)->wrm.q_arr[(__iq_idx)]->un.iq.flq_idx)
#define csio_q_set_intr_map(__hw, __iq_idx, __rel_iq_id) \
(__hw)->wrm.intr_map[__rel_iq_id] = csio_get_q(__hw, __iq_idx)
#define csio_q_eq_wrap(__hw, __idx) ((__hw)->wrm.q_arr[(__idx)]->un.eq.wrap)
struct csio_mb;
int csio_wr_alloc_q(struct csio_hw *, uint32_t, uint32_t,
uint16_t, void *, uint32_t, int, iq_handler_t);
int csio_wr_iq_create(struct csio_hw *, void *, int,
uint32_t, uint8_t, bool,
void (*)(struct csio_hw *, struct csio_mb *));
int csio_wr_eq_create(struct csio_hw *, void *, int, int, uint8_t,
void (*)(struct csio_hw *, struct csio_mb *));
int csio_wr_destroy_queues(struct csio_hw *, bool cmd);
int csio_wr_get(struct csio_hw *, int, uint32_t,
struct csio_wr_pair *);
void csio_wr_copy_to_wrp(void *, struct csio_wr_pair *, uint32_t, uint32_t);
int csio_wr_issue(struct csio_hw *, int, bool);
int csio_wr_process_iq(struct csio_hw *, struct csio_q *,
void (*)(struct csio_hw *, void *,
uint32_t, struct csio_fl_dma_buf *,
void *),
void *);
int csio_wr_process_iq_idx(struct csio_hw *, int,
void (*)(struct csio_hw *, void *,
uint32_t, struct csio_fl_dma_buf *,
void *),
void *);
void csio_wr_sge_init(struct csio_hw *);
int csio_wrm_init(struct csio_wrm *, struct csio_hw *);
void csio_wrm_exit(struct csio_wrm *, struct csio_hw *);
#endif /* ifndef __CSIO_WR_H__ */

View File

@ -0,0 +1,578 @@
/*
* This file is part of the Chelsio FCoE driver for Linux.
*
* Copyright (c) 2009-2010 Chelsio Communications, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _T4FW_API_STOR_H_
#define _T4FW_API_STOR_H_
/******************************************************************************
* R E T U R N V A L U E S
********************************/
enum fw_retval {
FW_SUCCESS = 0, /* completed successfully */
FW_EPERM = 1, /* operation not permitted */
FW_ENOENT = 2, /* no such file or directory */
FW_EIO = 5, /* input/output error; hw bad */
FW_ENOEXEC = 8, /* exec format error; inv microcode */
FW_EAGAIN = 11, /* try again */
FW_ENOMEM = 12, /* out of memory */
FW_EFAULT = 14, /* bad address; fw bad */
FW_EBUSY = 16, /* resource busy */
FW_EEXIST = 17, /* file exists */
FW_EINVAL = 22, /* invalid argument */
FW_ENOSPC = 28, /* no space left on device */
FW_ENOSYS = 38, /* functionality not implemented */
FW_EPROTO = 71, /* protocol error */
FW_EADDRINUSE = 98, /* address already in use */
FW_EADDRNOTAVAIL = 99, /* cannot assign requested address */
FW_ENETDOWN = 100, /* network is down */
FW_ENETUNREACH = 101, /* network is unreachable */
FW_ENOBUFS = 105, /* no buffer space available */
FW_ETIMEDOUT = 110, /* timeout */
FW_EINPROGRESS = 115, /* fw internal */
FW_SCSI_ABORT_REQUESTED = 128, /* */
FW_SCSI_ABORT_TIMEDOUT = 129, /* */
FW_SCSI_ABORTED = 130, /* */
FW_SCSI_CLOSE_REQUESTED = 131, /* */
FW_ERR_LINK_DOWN = 132, /* */
FW_RDEV_NOT_READY = 133, /* */
FW_ERR_RDEV_LOST = 134, /* */
FW_ERR_RDEV_LOGO = 135, /* */
FW_FCOE_NO_XCHG = 136, /* */
FW_SCSI_RSP_ERR = 137, /* */
FW_ERR_RDEV_IMPL_LOGO = 138, /* */
FW_SCSI_UNDER_FLOW_ERR = 139, /* */
FW_SCSI_OVER_FLOW_ERR = 140, /* */
FW_SCSI_DDP_ERR = 141, /* DDP error*/
FW_SCSI_TASK_ERR = 142, /* No SCSI tasks available */
};
enum fw_fcoe_link_sub_op {
FCOE_LINK_DOWN = 0x0,
FCOE_LINK_UP = 0x1,
FCOE_LINK_COND = 0x2,
};
enum fw_fcoe_link_status {
FCOE_LINKDOWN = 0x0,
FCOE_LINKUP = 0x1,
};
enum fw_ofld_prot {
PROT_FCOE = 0x1,
PROT_ISCSI = 0x2,
};
enum rport_type_fcoe {
FLOGI_VFPORT = 0x1, /* 0xfffffe */
FDISC_VFPORT = 0x2, /* 0xfffffe */
NS_VNPORT = 0x3, /* 0xfffffc */
REG_FC4_VNPORT = 0x4, /* any FC4 type VN_PORT */
REG_VNPORT = 0x5, /* 0xfffxxx - non FC4 port in switch */
FDMI_VNPORT = 0x6, /* 0xfffffa */
FAB_CTLR_VNPORT = 0x7, /* 0xfffffd */
};
enum event_cause_fcoe {
PLOGI_ACC_RCVD = 0x01,
PLOGI_RJT_RCVD = 0x02,
PLOGI_RCVD = 0x03,
PLOGO_RCVD = 0x04,
PRLI_ACC_RCVD = 0x05,
PRLI_RJT_RCVD = 0x06,
PRLI_RCVD = 0x07,
PRLO_RCVD = 0x08,
NPORT_ID_CHGD = 0x09,
FLOGO_RCVD = 0x0a,
CLR_VIRT_LNK_RCVD = 0x0b,
FLOGI_ACC_RCVD = 0x0c,
FLOGI_RJT_RCVD = 0x0d,
FDISC_ACC_RCVD = 0x0e,
FDISC_RJT_RCVD = 0x0f,
FLOGI_TMO_MAX_RETRY = 0x10,
IMPL_LOGO_ADISC_ACC = 0x11,
IMPL_LOGO_ADISC_RJT = 0x12,
IMPL_LOGO_ADISC_CNFLT = 0x13,
PRLI_TMO = 0x14,
ADISC_TMO = 0x15,
RSCN_DEV_LOST = 0x16,
SCR_ACC_RCVD = 0x17,
ADISC_RJT_RCVD = 0x18,
LOGO_SNT = 0x19,
PROTO_ERR_IMPL_LOGO = 0x1a,
};
enum fcoe_cmn_type {
FCOE_ELS,
FCOE_CT,
FCOE_SCSI_CMD,
FCOE_UNSOL_ELS,
};
enum fw_wr_stor_opcodes {
FW_RDEV_WR = 0x38,
FW_FCOE_ELS_CT_WR = 0x30,
FW_SCSI_WRITE_WR = 0x31,
FW_SCSI_READ_WR = 0x32,
FW_SCSI_CMD_WR = 0x33,
FW_SCSI_ABRT_CLS_WR = 0x34,
};
struct fw_rdev_wr {
__be32 op_to_immdlen;
__be32 alloc_to_len16;
__be64 cookie;
u8 protocol;
u8 event_cause;
u8 cur_state;
u8 prev_state;
__be32 flags_to_assoc_flowid;
union rdev_entry {
struct fcoe_rdev_entry {
__be32 flowid;
u8 protocol;
u8 event_cause;
u8 flags;
u8 rjt_reason;
u8 cur_login_st;
u8 prev_login_st;
__be16 rcv_fr_sz;
u8 rd_xfer_rdy_to_rport_type;
u8 vft_to_qos;
u8 org_proc_assoc_to_acc_rsp_code;
u8 enh_disc_to_tgt;
u8 wwnn[8];
u8 wwpn[8];
__be16 iqid;
u8 fc_oui[3];
u8 r_id[3];
} fcoe_rdev;
struct iscsi_rdev_entry {
__be32 flowid;
u8 protocol;
u8 event_cause;
u8 flags;
u8 r3;
__be16 iscsi_opts;
__be16 tcp_opts;
__be16 ip_opts;
__be16 max_rcv_len;
__be16 max_snd_len;
__be16 first_brst_len;
__be16 max_brst_len;
__be16 r4;
__be16 def_time2wait;
__be16 def_time2ret;
__be16 nop_out_intrvl;
__be16 non_scsi_to;
__be16 isid;
__be16 tsid;
__be16 port;
__be16 tpgt;
u8 r5[6];
__be16 iqid;
} iscsi_rdev;
} u;
};
#define FW_RDEV_WR_FLOWID_GET(x) (((x) >> 8) & 0xfffff)
#define FW_RDEV_WR_ASSOC_FLOWID_GET(x) (((x) >> 0) & 0xfffff)
#define FW_RDEV_WR_RPORT_TYPE_GET(x) (((x) >> 0) & 0x1f)
#define FW_RDEV_WR_NPIV_GET(x) (((x) >> 6) & 0x1)
#define FW_RDEV_WR_CLASS_GET(x) (((x) >> 4) & 0x3)
#define FW_RDEV_WR_TASK_RETRY_ID_GET(x) (((x) >> 5) & 0x1)
#define FW_RDEV_WR_RETRY_GET(x) (((x) >> 4) & 0x1)
#define FW_RDEV_WR_CONF_CMPL_GET(x) (((x) >> 3) & 0x1)
#define FW_RDEV_WR_INI_GET(x) (((x) >> 1) & 0x1)
#define FW_RDEV_WR_TGT_GET(x) (((x) >> 0) & 0x1)
struct fw_fcoe_els_ct_wr {
__be32 op_immdlen;
__be32 flowid_len16;
u64 cookie;
__be16 iqid;
u8 tmo_val;
u8 els_ct_type;
u8 ctl_pri;
u8 cp_en_class;
__be16 xfer_cnt;
u8 fl_to_sp;
u8 l_id[3];
u8 r5;
u8 r_id[3];
__be64 rsp_dmaaddr;
__be32 rsp_dmalen;
__be32 r6;
};
#define FW_FCOE_ELS_CT_WR_OPCODE(x) ((x) << 24)
#define FW_FCOE_ELS_CT_WR_OPCODE_GET(x) (((x) >> 24) & 0xff)
#define FW_FCOE_ELS_CT_WR_IMMDLEN(x) ((x) << 0)
#define FW_FCOE_ELS_CT_WR_IMMDLEN_GET(x) (((x) >> 0) & 0xff)
#define FW_FCOE_ELS_CT_WR_SP(x) ((x) << 0)
struct fw_scsi_write_wr {
__be32 op_immdlen;
__be32 flowid_len16;
u64 cookie;
__be16 iqid;
u8 tmo_val;
u8 use_xfer_cnt;
union fw_scsi_write_priv {
struct fcoe_write_priv {
u8 ctl_pri;
u8 cp_en_class;
u8 r3_lo[2];
} fcoe;
struct iscsi_write_priv {
u8 r3[4];
} iscsi;
} u;
__be32 xfer_cnt;
__be32 ini_xfer_cnt;
__be64 rsp_dmaaddr;
__be32 rsp_dmalen;
__be32 r4;
};
#define FW_SCSI_WRITE_WR_IMMDLEN(x) ((x) << 0)
struct fw_scsi_read_wr {
__be32 op_immdlen;
__be32 flowid_len16;
u64 cookie;
__be16 iqid;
u8 tmo_val;
u8 use_xfer_cnt;
union fw_scsi_read_priv {
struct fcoe_read_priv {
u8 ctl_pri;
u8 cp_en_class;
u8 r3_lo[2];
} fcoe;
struct iscsi_read_priv {
u8 r3[4];
} iscsi;
} u;
__be32 xfer_cnt;
__be32 ini_xfer_cnt;
__be64 rsp_dmaaddr;
__be32 rsp_dmalen;
__be32 r4;
};
#define FW_SCSI_READ_WR_IMMDLEN(x) ((x) << 0)
struct fw_scsi_cmd_wr {
__be32 op_immdlen;
__be32 flowid_len16;
u64 cookie;
__be16 iqid;
u8 tmo_val;
u8 r3;
union fw_scsi_cmd_priv {
struct fcoe_cmd_priv {
u8 ctl_pri;
u8 cp_en_class;
u8 r4_lo[2];
} fcoe;
struct iscsi_cmd_priv {
u8 r4[4];
} iscsi;
} u;
u8 r5[8];
__be64 rsp_dmaaddr;
__be32 rsp_dmalen;
__be32 r6;
};
#define FW_SCSI_CMD_WR_IMMDLEN(x) ((x) << 0)
#define SCSI_ABORT 0
#define SCSI_CLOSE 1
struct fw_scsi_abrt_cls_wr {
__be32 op_immdlen;
__be32 flowid_len16;
u64 cookie;
__be16 iqid;
u8 tmo_val;
u8 sub_opcode_to_chk_all_io;
u8 r3[4];
u64 t_cookie;
};
#define FW_SCSI_ABRT_CLS_WR_SUB_OPCODE(x) ((x) << 2)
#define FW_SCSI_ABRT_CLS_WR_SUB_OPCODE_GET(x) (((x) >> 2) & 0x3f)
#define FW_SCSI_ABRT_CLS_WR_CHK_ALL_IO(x) ((x) << 0)
enum fw_cmd_stor_opcodes {
FW_FCOE_RES_INFO_CMD = 0x31,
FW_FCOE_LINK_CMD = 0x32,
FW_FCOE_VNP_CMD = 0x33,
FW_FCOE_SPARAMS_CMD = 0x35,
FW_FCOE_STATS_CMD = 0x37,
FW_FCOE_FCF_CMD = 0x38,
};
struct fw_fcoe_res_info_cmd {
__be32 op_to_read;
__be32 retval_len16;
__be16 e_d_tov;
__be16 r_a_tov_seq;
__be16 r_a_tov_els;
__be16 r_r_tov;
__be32 max_xchgs;
__be32 max_ssns;
__be32 used_xchgs;
__be32 used_ssns;
__be32 max_fcfs;
__be32 max_vnps;
__be32 used_fcfs;
__be32 used_vnps;
};
struct fw_fcoe_link_cmd {
__be32 op_to_portid;
__be32 retval_len16;
__be32 sub_opcode_fcfi;
u8 r3;
u8 lstatus;
__be16 flags;
u8 r4;
u8 set_vlan;
__be16 vlan_id;
__be32 vnpi_pkd;
__be16 r6;
u8 phy_mac[6];
u8 vnport_wwnn[8];
u8 vnport_wwpn[8];
};
#define FW_FCOE_LINK_CMD_PORTID(x) ((x) << 0)
#define FW_FCOE_LINK_CMD_PORTID_GET(x) (((x) >> 0) & 0xf)
#define FW_FCOE_LINK_CMD_SUB_OPCODE(x) ((x) << 24U)
#define FW_FCOE_LINK_CMD_FCFI(x) ((x) << 0)
#define FW_FCOE_LINK_CMD_FCFI_GET(x) (((x) >> 0) & 0xffffff)
#define FW_FCOE_LINK_CMD_VNPI_GET(x) (((x) >> 0) & 0xfffff)
struct fw_fcoe_vnp_cmd {
__be32 op_to_fcfi;
__be32 alloc_to_len16;
__be32 gen_wwn_to_vnpi;
__be32 vf_id;
__be16 iqid;
u8 vnport_mac[6];
u8 vnport_wwnn[8];
u8 vnport_wwpn[8];
u8 cmn_srv_parms[16];
u8 clsp_word_0_1[8];
};
#define FW_FCOE_VNP_CMD_FCFI(x) ((x) << 0)
#define FW_FCOE_VNP_CMD_ALLOC (1U << 31)
#define FW_FCOE_VNP_CMD_FREE (1U << 30)
#define FW_FCOE_VNP_CMD_MODIFY (1U << 29)
#define FW_FCOE_VNP_CMD_GEN_WWN (1U << 22)
#define FW_FCOE_VNP_CMD_VFID_EN (1U << 20)
#define FW_FCOE_VNP_CMD_VNPI(x) ((x) << 0)
#define FW_FCOE_VNP_CMD_VNPI_GET(x) (((x) >> 0) & 0xfffff)
struct fw_fcoe_sparams_cmd {
__be32 op_to_portid;
__be32 retval_len16;
u8 r3[7];
u8 cos;
u8 lport_wwnn[8];
u8 lport_wwpn[8];
u8 cmn_srv_parms[16];
u8 cls_srv_parms[16];
};
#define FW_FCOE_SPARAMS_CMD_PORTID(x) ((x) << 0)
struct fw_fcoe_stats_cmd {
__be32 op_to_flowid;
__be32 free_to_len16;
union fw_fcoe_stats {
struct fw_fcoe_stats_ctl {
u8 nstats_port;
u8 port_valid_ix;
__be16 r6;
__be32 r7;
__be64 stat0;
__be64 stat1;
__be64 stat2;
__be64 stat3;
__be64 stat4;
__be64 stat5;
} ctl;
struct fw_fcoe_port_stats {
__be64 tx_bcast_bytes;
__be64 tx_bcast_frames;
__be64 tx_mcast_bytes;
__be64 tx_mcast_frames;
__be64 tx_ucast_bytes;
__be64 tx_ucast_frames;
__be64 tx_drop_frames;
__be64 tx_offload_bytes;
__be64 tx_offload_frames;
__be64 rx_bcast_bytes;
__be64 rx_bcast_frames;
__be64 rx_mcast_bytes;
__be64 rx_mcast_frames;
__be64 rx_ucast_bytes;
__be64 rx_ucast_frames;
__be64 rx_err_frames;
} port_stats;
struct fw_fcoe_fcf_stats {
__be32 fip_tx_bytes;
__be32 fip_tx_fr;
__be64 fcf_ka;
__be64 mcast_adv_rcvd;
__be16 ucast_adv_rcvd;
__be16 sol_sent;
__be16 vlan_req;
__be16 vlan_rpl;
__be16 clr_vlink;
__be16 link_down;
__be16 link_up;
__be16 logo;
__be16 flogi_req;
__be16 flogi_rpl;
__be16 fdisc_req;
__be16 fdisc_rpl;
__be16 fka_prd_chg;
__be16 fc_map_chg;
__be16 vfid_chg;
u8 no_fka_req;
u8 no_vnp;
} fcf_stats;
struct fw_fcoe_pcb_stats {
__be64 tx_bytes;
__be64 tx_frames;
__be64 rx_bytes;
__be64 rx_frames;
__be32 vnp_ka;
__be32 unsol_els_rcvd;
__be64 unsol_cmd_rcvd;
__be16 implicit_logo;
__be16 flogi_inv_sparm;
__be16 fdisc_inv_sparm;
__be16 flogi_rjt;
__be16 fdisc_rjt;
__be16 no_ssn;
__be16 mac_flt_fail;
__be16 inv_fr_rcvd;
} pcb_stats;
struct fw_fcoe_scb_stats {
__be64 tx_bytes;
__be64 tx_frames;
__be64 rx_bytes;
__be64 rx_frames;
__be32 host_abrt_req;
__be32 adap_auto_abrt;
__be32 adap_abrt_rsp;
__be32 host_ios_req;
__be16 ssn_offl_ios;
__be16 ssn_not_rdy_ios;
u8 rx_data_ddp_err;
u8 ddp_flt_set_err;
__be16 rx_data_fr_err;
u8 bad_st_abrt_req;
u8 no_io_abrt_req;
u8 abort_tmo;
u8 abort_tmo_2;
__be32 abort_req;
u8 no_ppod_res_tmo;
u8 bp_tmo;
u8 adap_auto_cls;
u8 no_io_cls_req;
__be32 host_cls_req;
__be64 unsol_cmd_rcvd;
__be32 plogi_req_rcvd;
__be32 prli_req_rcvd;
__be16 logo_req_rcvd;
__be16 prlo_req_rcvd;
__be16 plogi_rjt_rcvd;
__be16 prli_rjt_rcvd;
__be32 adisc_req_rcvd;
__be32 rscn_rcvd;
__be32 rrq_req_rcvd;
__be32 unsol_els_rcvd;
u8 adisc_rjt_rcvd;
u8 scr_rjt;
u8 ct_rjt;
u8 inval_bls_rcvd;
__be32 ba_rjt_rcvd;
} scb_stats;
} u;
};
#define FW_FCOE_STATS_CMD_FLOWID(x) ((x) << 0)
#define FW_FCOE_STATS_CMD_FREE (1U << 30)
#define FW_FCOE_STATS_CMD_NSTATS(x) ((x) << 4)
#define FW_FCOE_STATS_CMD_PORT(x) ((x) << 0)
#define FW_FCOE_STATS_CMD_PORT_VALID (1U << 7)
#define FW_FCOE_STATS_CMD_IX(x) ((x) << 0)
struct fw_fcoe_fcf_cmd {
__be32 op_to_fcfi;
__be32 retval_len16;
__be16 priority_pkd;
u8 mac[6];
u8 name_id[8];
u8 fabric[8];
__be16 vf_id;
__be16 max_fcoe_size;
u8 vlan_id;
u8 fc_map[3];
__be32 fka_adv;
__be32 r6;
u8 r7_hi;
u8 fpma_to_portid;
u8 spma_mac[6];
__be64 r8;
};
#define FW_FCOE_FCF_CMD_FCFI(x) ((x) << 0)
#define FW_FCOE_FCF_CMD_FCFI_GET(x) (((x) >> 0) & 0xfffff)
#define FW_FCOE_FCF_CMD_PRIORITY_GET(x) (((x) >> 0) & 0xff)
#define FW_FCOE_FCF_CMD_FPMA_GET(x) (((x) >> 6) & 0x1)
#define FW_FCOE_FCF_CMD_SPMA_GET(x) (((x) >> 5) & 0x1)
#define FW_FCOE_FCF_CMD_LOGIN_GET(x) (((x) >> 4) & 0x1)
#define FW_FCOE_FCF_CMD_PORTID_GET(x) (((x) >> 0) & 0xf)
#endif /* _T4FW_API_STOR_H_ */

View File

@ -1,6 +1,6 @@
/*
* HighPoint RR3xxx/4xxx controller driver for Linux
* Copyright (C) 2006-2009 HighPoint Technologies, Inc. All Rights Reserved.
* Copyright (C) 2006-2012 HighPoint Technologies, Inc. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -42,7 +42,7 @@ MODULE_DESCRIPTION("HighPoint RocketRAID 3xxx/4xxx Controller Driver");
static char driver_name[] = "hptiop";
static const char driver_name_long[] = "RocketRAID 3xxx/4xxx Controller driver";
static const char driver_ver[] = "v1.6 (091225)";
static const char driver_ver[] = "v1.8";
static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec);
static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
@ -77,6 +77,11 @@ static int iop_wait_ready_mv(struct hptiop_hba *hba, u32 millisec)
return iop_send_sync_msg(hba, IOPMU_INBOUND_MSG0_NOP, millisec);
}
static int iop_wait_ready_mvfrey(struct hptiop_hba *hba, u32 millisec)
{
return iop_send_sync_msg(hba, IOPMU_INBOUND_MSG0_NOP, millisec);
}
static void hptiop_request_callback_itl(struct hptiop_hba *hba, u32 tag)
{
if (tag & IOPMU_QUEUE_ADDR_HOST_BIT)
@ -230,6 +235,74 @@ static int iop_intr_mv(struct hptiop_hba *hba)
return ret;
}
static void hptiop_request_callback_mvfrey(struct hptiop_hba *hba, u32 _tag)
{
u32 req_type = _tag & 0xf;
struct hpt_iop_request_scsi_command *req;
switch (req_type) {
case IOP_REQUEST_TYPE_GET_CONFIG:
case IOP_REQUEST_TYPE_SET_CONFIG:
hba->msg_done = 1;
break;
case IOP_REQUEST_TYPE_SCSI_COMMAND:
req = hba->reqs[(_tag >> 4) & 0xff].req_virt;
if (likely(_tag & IOPMU_QUEUE_REQUEST_RESULT_BIT))
req->header.result = IOP_RESULT_SUCCESS;
hptiop_finish_scsi_req(hba, (_tag >> 4) & 0xff, req);
break;
default:
break;
}
}
static int iop_intr_mvfrey(struct hptiop_hba *hba)
{
u32 _tag, status, cptr, cur_rptr;
int ret = 0;
if (hba->initialized)
writel(0, &(hba->u.mvfrey.mu->pcie_f0_int_enable));
status = readl(&(hba->u.mvfrey.mu->f0_doorbell));
if (status) {
writel(status, &(hba->u.mvfrey.mu->f0_doorbell));
if (status & CPU_TO_F0_DRBL_MSG_BIT) {
u32 msg = readl(&(hba->u.mvfrey.mu->cpu_to_f0_msg_a));
dprintk("received outbound msg %x\n", msg);
hptiop_message_callback(hba, msg);
}
ret = 1;
}
status = readl(&(hba->u.mvfrey.mu->isr_cause));
if (status) {
writel(status, &(hba->u.mvfrey.mu->isr_cause));
do {
cptr = *hba->u.mvfrey.outlist_cptr & 0xff;
cur_rptr = hba->u.mvfrey.outlist_rptr;
while (cur_rptr != cptr) {
cur_rptr++;
if (cur_rptr == hba->u.mvfrey.list_count)
cur_rptr = 0;
_tag = hba->u.mvfrey.outlist[cur_rptr].val;
BUG_ON(!(_tag & IOPMU_QUEUE_MASK_HOST_BITS));
hptiop_request_callback_mvfrey(hba, _tag);
ret = 1;
}
hba->u.mvfrey.outlist_rptr = cur_rptr;
} while (cptr != (*hba->u.mvfrey.outlist_cptr & 0xff));
}
if (hba->initialized)
writel(0x1010, &(hba->u.mvfrey.mu->pcie_f0_int_enable));
return ret;
}
static int iop_send_sync_request_itl(struct hptiop_hba *hba,
void __iomem *_req, u32 millisec)
{
@ -272,6 +345,26 @@ static int iop_send_sync_request_mv(struct hptiop_hba *hba,
return -1;
}
static int iop_send_sync_request_mvfrey(struct hptiop_hba *hba,
u32 size_bits, u32 millisec)
{
struct hpt_iop_request_header *reqhdr =
hba->u.mvfrey.internal_req.req_virt;
u32 i;
hba->msg_done = 0;
reqhdr->flags |= cpu_to_le32(IOP_REQUEST_FLAG_SYNC_REQUEST);
hba->ops->post_req(hba, &(hba->u.mvfrey.internal_req));
for (i = 0; i < millisec; i++) {
iop_intr_mvfrey(hba);
if (hba->msg_done)
break;
msleep(1);
}
return hba->msg_done ? 0 : -1;
}
static void hptiop_post_msg_itl(struct hptiop_hba *hba, u32 msg)
{
writel(msg, &hba->u.itl.iop->inbound_msgaddr0);
@ -285,11 +378,18 @@ static void hptiop_post_msg_mv(struct hptiop_hba *hba, u32 msg)
readl(&hba->u.mv.regs->inbound_doorbell);
}
static void hptiop_post_msg_mvfrey(struct hptiop_hba *hba, u32 msg)
{
writel(msg, &(hba->u.mvfrey.mu->f0_to_cpu_msg_a));
readl(&(hba->u.mvfrey.mu->f0_to_cpu_msg_a));
}
static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec)
{
u32 i;
hba->msg_done = 0;
hba->ops->disable_intr(hba);
hba->ops->post_msg(hba, msg);
for (i = 0; i < millisec; i++) {
@ -301,6 +401,7 @@ static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec)
msleep(1);
}
hba->ops->enable_intr(hba);
return hba->msg_done ? 0 : -1;
}
@ -354,6 +455,28 @@ static int iop_get_config_mv(struct hptiop_hba *hba,
return 0;
}
static int iop_get_config_mvfrey(struct hptiop_hba *hba,
struct hpt_iop_request_get_config *config)
{
struct hpt_iop_request_get_config *info = hba->u.mvfrey.config;
if (info->header.size != sizeof(struct hpt_iop_request_get_config) ||
info->header.type != IOP_REQUEST_TYPE_GET_CONFIG)
return -1;
config->interface_version = info->interface_version;
config->firmware_version = info->firmware_version;
config->max_requests = info->max_requests;
config->request_size = info->request_size;
config->max_sg_count = info->max_sg_count;
config->data_transfer_length = info->data_transfer_length;
config->alignment_mask = info->alignment_mask;
config->max_devices = info->max_devices;
config->sdram_size = info->sdram_size;
return 0;
}
static int iop_set_config_itl(struct hptiop_hba *hba,
struct hpt_iop_request_set_config *config)
{
@ -408,6 +531,29 @@ static int iop_set_config_mv(struct hptiop_hba *hba,
return 0;
}
static int iop_set_config_mvfrey(struct hptiop_hba *hba,
struct hpt_iop_request_set_config *config)
{
struct hpt_iop_request_set_config *req =
hba->u.mvfrey.internal_req.req_virt;
memcpy(req, config, sizeof(struct hpt_iop_request_set_config));
req->header.flags = cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT);
req->header.type = cpu_to_le32(IOP_REQUEST_TYPE_SET_CONFIG);
req->header.size =
cpu_to_le32(sizeof(struct hpt_iop_request_set_config));
req->header.result = cpu_to_le32(IOP_RESULT_PENDING);
req->header.context = cpu_to_le32(IOP_REQUEST_TYPE_SET_CONFIG<<5);
req->header.context_hi32 = 0;
if (iop_send_sync_request_mvfrey(hba, 0, 20000)) {
dprintk("Set config send cmd failed\n");
return -1;
}
return 0;
}
static void hptiop_enable_intr_itl(struct hptiop_hba *hba)
{
writel(~(IOPMU_OUTBOUND_INT_POSTQUEUE | IOPMU_OUTBOUND_INT_MSG0),
@ -420,6 +566,13 @@ static void hptiop_enable_intr_mv(struct hptiop_hba *hba)
&hba->u.mv.regs->outbound_intmask);
}
static void hptiop_enable_intr_mvfrey(struct hptiop_hba *hba)
{
writel(CPU_TO_F0_DRBL_MSG_BIT, &(hba->u.mvfrey.mu->f0_doorbell_enable));
writel(0x1, &(hba->u.mvfrey.mu->isr_enable));
writel(0x1010, &(hba->u.mvfrey.mu->pcie_f0_int_enable));
}
static int hptiop_initialize_iop(struct hptiop_hba *hba)
{
/* enable interrupts */
@ -502,17 +655,39 @@ static int hptiop_map_pci_bar_mv(struct hptiop_hba *hba)
return 0;
}
static int hptiop_map_pci_bar_mvfrey(struct hptiop_hba *hba)
{
hba->u.mvfrey.config = hptiop_map_pci_bar(hba, 0);
if (hba->u.mvfrey.config == NULL)
return -1;
hba->u.mvfrey.mu = hptiop_map_pci_bar(hba, 2);
if (hba->u.mvfrey.mu == NULL) {
iounmap(hba->u.mvfrey.config);
return -1;
}
return 0;
}
static void hptiop_unmap_pci_bar_mv(struct hptiop_hba *hba)
{
iounmap(hba->u.mv.regs);
iounmap(hba->u.mv.mu);
}
static void hptiop_unmap_pci_bar_mvfrey(struct hptiop_hba *hba)
{
iounmap(hba->u.mvfrey.config);
iounmap(hba->u.mvfrey.mu);
}
static void hptiop_message_callback(struct hptiop_hba *hba, u32 msg)
{
dprintk("iop message 0x%x\n", msg);
if (msg == IOPMU_INBOUND_MSG0_NOP)
if (msg == IOPMU_INBOUND_MSG0_NOP ||
msg == IOPMU_INBOUND_MSG0_RESET_COMM)
hba->msg_done = 1;
if (!hba->initialized)
@ -592,6 +767,7 @@ static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
memcpy(scp->sense_buffer, &req->sg_list,
min_t(size_t, SCSI_SENSE_BUFFERSIZE,
le32_to_cpu(req->dataxfer_length)));
goto skip_resid;
break;
default:
@ -599,6 +775,10 @@ static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
break;
}
scsi_set_resid(scp,
scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length));
skip_resid:
dprintk("scsi_done(%p)\n", scp);
scp->scsi_done(scp);
free_req(hba, &hba->reqs[tag]);
@ -692,7 +872,8 @@ static int hptiop_buildsgl(struct scsi_cmnd *scp, struct hpt_iopsg *psg)
BUG_ON(HPT_SCP(scp)->sgcnt > hba->max_sg_descriptors);
scsi_for_each_sg(scp, sg, HPT_SCP(scp)->sgcnt, idx) {
psg[idx].pci_address = cpu_to_le64(sg_dma_address(sg));
psg[idx].pci_address = cpu_to_le64(sg_dma_address(sg)) |
hba->ops->host_phy_flag;
psg[idx].size = cpu_to_le32(sg_dma_len(sg));
psg[idx].eot = (idx == HPT_SCP(scp)->sgcnt - 1) ?
cpu_to_le32(1) : 0;
@ -751,6 +932,78 @@ static void hptiop_post_req_mv(struct hptiop_hba *hba,
MVIOP_MU_QUEUE_ADDR_HOST_BIT | size_bit, hba);
}
static void hptiop_post_req_mvfrey(struct hptiop_hba *hba,
struct hptiop_request *_req)
{
struct hpt_iop_request_header *reqhdr = _req->req_virt;
u32 index;
reqhdr->flags |= cpu_to_le32(IOP_REQUEST_FLAG_OUTPUT_CONTEXT |
IOP_REQUEST_FLAG_ADDR_BITS |
((_req->req_shifted_phy >> 11) & 0xffff0000));
reqhdr->context = cpu_to_le32(IOPMU_QUEUE_ADDR_HOST_BIT |
(_req->index << 4) | reqhdr->type);
reqhdr->context_hi32 = cpu_to_le32((_req->req_shifted_phy << 5) &
0xffffffff);
hba->u.mvfrey.inlist_wptr++;
index = hba->u.mvfrey.inlist_wptr & 0x3fff;
if (index == hba->u.mvfrey.list_count) {
index = 0;
hba->u.mvfrey.inlist_wptr &= ~0x3fff;
hba->u.mvfrey.inlist_wptr ^= CL_POINTER_TOGGLE;
}
hba->u.mvfrey.inlist[index].addr =
(dma_addr_t)_req->req_shifted_phy << 5;
hba->u.mvfrey.inlist[index].intrfc_len = (reqhdr->size + 3) / 4;
writel(hba->u.mvfrey.inlist_wptr,
&(hba->u.mvfrey.mu->inbound_write_ptr));
readl(&(hba->u.mvfrey.mu->inbound_write_ptr));
}
static int hptiop_reset_comm_itl(struct hptiop_hba *hba)
{
return 0;
}
static int hptiop_reset_comm_mv(struct hptiop_hba *hba)
{
return 0;
}
static int hptiop_reset_comm_mvfrey(struct hptiop_hba *hba)
{
u32 list_count = hba->u.mvfrey.list_count;
if (iop_send_sync_msg(hba, IOPMU_INBOUND_MSG0_RESET_COMM, 3000))
return -1;
/* wait 100ms for MCU ready */
msleep(100);
writel(cpu_to_le32(hba->u.mvfrey.inlist_phy & 0xffffffff),
&(hba->u.mvfrey.mu->inbound_base));
writel(cpu_to_le32((hba->u.mvfrey.inlist_phy >> 16) >> 16),
&(hba->u.mvfrey.mu->inbound_base_high));
writel(cpu_to_le32(hba->u.mvfrey.outlist_phy & 0xffffffff),
&(hba->u.mvfrey.mu->outbound_base));
writel(cpu_to_le32((hba->u.mvfrey.outlist_phy >> 16) >> 16),
&(hba->u.mvfrey.mu->outbound_base_high));
writel(cpu_to_le32(hba->u.mvfrey.outlist_cptr_phy & 0xffffffff),
&(hba->u.mvfrey.mu->outbound_shadow_base));
writel(cpu_to_le32((hba->u.mvfrey.outlist_cptr_phy >> 16) >> 16),
&(hba->u.mvfrey.mu->outbound_shadow_base_high));
hba->u.mvfrey.inlist_wptr = (list_count - 1) | CL_POINTER_TOGGLE;
*hba->u.mvfrey.outlist_cptr = (list_count - 1) | CL_POINTER_TOGGLE;
hba->u.mvfrey.outlist_rptr = list_count - 1;
return 0;
}
static int hptiop_queuecommand_lck(struct scsi_cmnd *scp,
void (*done)(struct scsi_cmnd *))
{
@ -771,14 +1024,15 @@ static int hptiop_queuecommand_lck(struct scsi_cmnd *scp,
_req->scp = scp;
dprintk("hptiop_queuecmd(scp=%p) %d/%d/%d/%d cdb=(%x-%x-%x) "
dprintk("hptiop_queuecmd(scp=%p) %d/%d/%d/%d cdb=(%08x-%08x-%08x-%08x) "
"req_index=%d, req=%p\n",
scp,
host->host_no, scp->device->channel,
scp->device->id, scp->device->lun,
((u32 *)scp->cmnd)[0],
((u32 *)scp->cmnd)[1],
((u32 *)scp->cmnd)[2],
cpu_to_be32(((u32 *)scp->cmnd)[0]),
cpu_to_be32(((u32 *)scp->cmnd)[1]),
cpu_to_be32(((u32 *)scp->cmnd)[2]),
cpu_to_be32(((u32 *)scp->cmnd)[3]),
_req->index, _req->req_virt);
scp->result = 0;
@ -933,6 +1187,11 @@ static struct scsi_host_template driver_template = {
.change_queue_depth = hptiop_adjust_disk_queue_depth,
};
static int hptiop_internal_memalloc_itl(struct hptiop_hba *hba)
{
return 0;
}
static int hptiop_internal_memalloc_mv(struct hptiop_hba *hba)
{
hba->u.mv.internal_req = dma_alloc_coherent(&hba->pcidev->dev,
@ -943,6 +1202,63 @@ static int hptiop_internal_memalloc_mv(struct hptiop_hba *hba)
return -1;
}
static int hptiop_internal_memalloc_mvfrey(struct hptiop_hba *hba)
{
u32 list_count = readl(&hba->u.mvfrey.mu->inbound_conf_ctl);
char *p;
dma_addr_t phy;
BUG_ON(hba->max_request_size == 0);
if (list_count == 0) {
BUG_ON(1);
return -1;
}
list_count >>= 16;
hba->u.mvfrey.list_count = list_count;
hba->u.mvfrey.internal_mem_size = 0x800 +
list_count * sizeof(struct mvfrey_inlist_entry) +
list_count * sizeof(struct mvfrey_outlist_entry) +
sizeof(int);
p = dma_alloc_coherent(&hba->pcidev->dev,
hba->u.mvfrey.internal_mem_size, &phy, GFP_KERNEL);
if (!p)
return -1;
hba->u.mvfrey.internal_req.req_virt = p;
hba->u.mvfrey.internal_req.req_shifted_phy = phy >> 5;
hba->u.mvfrey.internal_req.scp = NULL;
hba->u.mvfrey.internal_req.next = NULL;
p += 0x800;
phy += 0x800;
hba->u.mvfrey.inlist = (struct mvfrey_inlist_entry *)p;
hba->u.mvfrey.inlist_phy = phy;
p += list_count * sizeof(struct mvfrey_inlist_entry);
phy += list_count * sizeof(struct mvfrey_inlist_entry);
hba->u.mvfrey.outlist = (struct mvfrey_outlist_entry *)p;
hba->u.mvfrey.outlist_phy = phy;
p += list_count * sizeof(struct mvfrey_outlist_entry);
phy += list_count * sizeof(struct mvfrey_outlist_entry);
hba->u.mvfrey.outlist_cptr = (__le32 *)p;
hba->u.mvfrey.outlist_cptr_phy = phy;
return 0;
}
static int hptiop_internal_memfree_itl(struct hptiop_hba *hba)
{
return 0;
}
static int hptiop_internal_memfree_mv(struct hptiop_hba *hba)
{
if (hba->u.mv.internal_req) {
@ -953,6 +1269,19 @@ static int hptiop_internal_memfree_mv(struct hptiop_hba *hba)
return -1;
}
static int hptiop_internal_memfree_mvfrey(struct hptiop_hba *hba)
{
if (hba->u.mvfrey.internal_req.req_virt) {
dma_free_coherent(&hba->pcidev->dev,
hba->u.mvfrey.internal_mem_size,
hba->u.mvfrey.internal_req.req_virt,
(dma_addr_t)
hba->u.mvfrey.internal_req.req_shifted_phy << 5);
return 0;
} else
return -1;
}
static int __devinit hptiop_probe(struct pci_dev *pcidev,
const struct pci_device_id *id)
{
@ -1027,7 +1356,7 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
goto unmap_pci_bar;
}
if (hba->ops->internal_memalloc) {
if (hba->ops->family == MV_BASED_IOP) {
if (hba->ops->internal_memalloc(hba)) {
printk(KERN_ERR "scsi%d: internal_memalloc failed\n",
hba->host->host_no);
@ -1050,6 +1379,19 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
hba->interface_version = le32_to_cpu(iop_config.interface_version);
hba->sdram_size = le32_to_cpu(iop_config.sdram_size);
if (hba->ops->family == MVFREY_BASED_IOP) {
if (hba->ops->internal_memalloc(hba)) {
printk(KERN_ERR "scsi%d: internal_memalloc failed\n",
hba->host->host_no);
goto unmap_pci_bar;
}
if (hba->ops->reset_comm(hba)) {
printk(KERN_ERR "scsi%d: reset comm failed\n",
hba->host->host_no);
goto unmap_pci_bar;
}
}
if (hba->firmware_version > 0x01020000 ||
hba->interface_version > 0x01020000)
hba->iopintf_v2 = 1;
@ -1104,14 +1446,13 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
hba->dma_coherent = start_virt;
hba->dma_coherent_handle = start_phy;
if ((start_phy & 0x1f) != 0)
{
if ((start_phy & 0x1f) != 0) {
offset = ((start_phy + 0x1f) & ~0x1f) - start_phy;
start_phy += offset;
start_virt += offset;
}
hba->req_list = start_virt;
hba->req_list = NULL;
for (i = 0; i < hba->max_requests; i++) {
hba->reqs[i].next = NULL;
hba->reqs[i].req_virt = start_virt;
@ -1132,7 +1473,6 @@ static int __devinit hptiop_probe(struct pci_dev *pcidev,
goto free_request_mem;
}
scsi_scan_host(host);
dprintk("scsi%d: hptiop_probe successfully\n", hba->host->host_no);
@ -1147,8 +1487,7 @@ free_request_irq:
free_irq(hba->pcidev->irq, hba);
unmap_pci_bar:
if (hba->ops->internal_memfree)
hba->ops->internal_memfree(hba);
hba->ops->internal_memfree(hba);
hba->ops->unmap_pci_bar(hba);
@ -1198,6 +1537,16 @@ static void hptiop_disable_intr_mv(struct hptiop_hba *hba)
readl(&hba->u.mv.regs->outbound_intmask);
}
static void hptiop_disable_intr_mvfrey(struct hptiop_hba *hba)
{
writel(0, &(hba->u.mvfrey.mu->f0_doorbell_enable));
readl(&(hba->u.mvfrey.mu->f0_doorbell_enable));
writel(0, &(hba->u.mvfrey.mu->isr_enable));
readl(&(hba->u.mvfrey.mu->isr_enable));
writel(0, &(hba->u.mvfrey.mu->pcie_f0_int_enable));
readl(&(hba->u.mvfrey.mu->pcie_f0_int_enable));
}
static void hptiop_remove(struct pci_dev *pcidev)
{
struct Scsi_Host *host = pci_get_drvdata(pcidev);
@ -1216,8 +1565,7 @@ static void hptiop_remove(struct pci_dev *pcidev)
hba->dma_coherent,
hba->dma_coherent_handle);
if (hba->ops->internal_memfree)
hba->ops->internal_memfree(hba);
hba->ops->internal_memfree(hba);
hba->ops->unmap_pci_bar(hba);
@ -1229,9 +1577,10 @@ static void hptiop_remove(struct pci_dev *pcidev)
}
static struct hptiop_adapter_ops hptiop_itl_ops = {
.family = INTEL_BASED_IOP,
.iop_wait_ready = iop_wait_ready_itl,
.internal_memalloc = NULL,
.internal_memfree = NULL,
.internal_memalloc = hptiop_internal_memalloc_itl,
.internal_memfree = hptiop_internal_memfree_itl,
.map_pci_bar = hptiop_map_pci_bar_itl,
.unmap_pci_bar = hptiop_unmap_pci_bar_itl,
.enable_intr = hptiop_enable_intr_itl,
@ -1242,9 +1591,12 @@ static struct hptiop_adapter_ops hptiop_itl_ops = {
.post_msg = hptiop_post_msg_itl,
.post_req = hptiop_post_req_itl,
.hw_dma_bit_mask = 64,
.reset_comm = hptiop_reset_comm_itl,
.host_phy_flag = cpu_to_le64(0),
};
static struct hptiop_adapter_ops hptiop_mv_ops = {
.family = MV_BASED_IOP,
.iop_wait_ready = iop_wait_ready_mv,
.internal_memalloc = hptiop_internal_memalloc_mv,
.internal_memfree = hptiop_internal_memfree_mv,
@ -1258,6 +1610,27 @@ static struct hptiop_adapter_ops hptiop_mv_ops = {
.post_msg = hptiop_post_msg_mv,
.post_req = hptiop_post_req_mv,
.hw_dma_bit_mask = 33,
.reset_comm = hptiop_reset_comm_mv,
.host_phy_flag = cpu_to_le64(0),
};
static struct hptiop_adapter_ops hptiop_mvfrey_ops = {
.family = MVFREY_BASED_IOP,
.iop_wait_ready = iop_wait_ready_mvfrey,
.internal_memalloc = hptiop_internal_memalloc_mvfrey,
.internal_memfree = hptiop_internal_memfree_mvfrey,
.map_pci_bar = hptiop_map_pci_bar_mvfrey,
.unmap_pci_bar = hptiop_unmap_pci_bar_mvfrey,
.enable_intr = hptiop_enable_intr_mvfrey,
.disable_intr = hptiop_disable_intr_mvfrey,
.get_config = iop_get_config_mvfrey,
.set_config = iop_set_config_mvfrey,
.iop_intr = iop_intr_mvfrey,
.post_msg = hptiop_post_msg_mvfrey,
.post_req = hptiop_post_req_mvfrey,
.hw_dma_bit_mask = 64,
.reset_comm = hptiop_reset_comm_mvfrey,
.host_phy_flag = cpu_to_le64(1),
};
static struct pci_device_id hptiop_id_table[] = {
@ -1283,6 +1656,8 @@ static struct pci_device_id hptiop_id_table[] = {
{ PCI_VDEVICE(TTI, 0x3120), (kernel_ulong_t)&hptiop_mv_ops },
{ PCI_VDEVICE(TTI, 0x3122), (kernel_ulong_t)&hptiop_mv_ops },
{ PCI_VDEVICE(TTI, 0x3020), (kernel_ulong_t)&hptiop_mv_ops },
{ PCI_VDEVICE(TTI, 0x4520), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x4522), (kernel_ulong_t)&hptiop_mvfrey_ops },
{},
};

View File

@ -1,6 +1,6 @@
/*
* HighPoint RR3xxx/4xxx controller driver for Linux
* Copyright (C) 2006-2009 HighPoint Technologies, Inc. All Rights Reserved.
* Copyright (C) 2006-2012 HighPoint Technologies, Inc. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -75,6 +75,45 @@ struct hpt_iopmv_regs {
__le32 outbound_intmask;
};
#pragma pack(1)
struct hpt_iopmu_mvfrey {
__le32 reserved0[(0x4000 - 0) / 4];
__le32 inbound_base;
__le32 inbound_base_high;
__le32 reserved1[(0x4018 - 0x4008) / 4];
__le32 inbound_write_ptr;
__le32 reserved2[(0x402c - 0x401c) / 4];
__le32 inbound_conf_ctl;
__le32 reserved3[(0x4050 - 0x4030) / 4];
__le32 outbound_base;
__le32 outbound_base_high;
__le32 outbound_shadow_base;
__le32 outbound_shadow_base_high;
__le32 reserved4[(0x4088 - 0x4060) / 4];
__le32 isr_cause;
__le32 isr_enable;
__le32 reserved5[(0x1020c - 0x4090) / 4];
__le32 pcie_f0_int_enable;
__le32 reserved6[(0x10400 - 0x10210) / 4];
__le32 f0_to_cpu_msg_a;
__le32 reserved7[(0x10420 - 0x10404) / 4];
__le32 cpu_to_f0_msg_a;
__le32 reserved8[(0x10480 - 0x10424) / 4];
__le32 f0_doorbell;
__le32 f0_doorbell_enable;
};
struct mvfrey_inlist_entry {
dma_addr_t addr;
__le32 intrfc_len;
__le32 reserved;
};
struct mvfrey_outlist_entry {
__le32 val;
};
#pragma pack()
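The reserved arrays above are sized purely from the register offsets named in their bounds, so the layout can be verified at compile time. A minimal sketch, not part of this patch and with a hypothetical helper name, assuming the pack(1) layout above:
/* Hypothetical compile-time sanity checks for the mvfrey register map;
 * illustrative only. offsetof() reflects the hardware offsets because
 * the struct is declared under #pragma pack(1). */
#include <linux/bug.h>
#include <linux/stddef.h>
static inline void hpt_iopmu_mvfrey_offset_checks(void)
{
	BUILD_BUG_ON(offsetof(struct hpt_iopmu_mvfrey, inbound_base) != 0x4000);
	BUILD_BUG_ON(offsetof(struct hpt_iopmu_mvfrey, inbound_write_ptr) != 0x4018);
	BUILD_BUG_ON(offsetof(struct hpt_iopmu_mvfrey, inbound_conf_ctl) != 0x402c);
	BUILD_BUG_ON(offsetof(struct hpt_iopmu_mvfrey, outbound_base) != 0x4050);
	BUILD_BUG_ON(offsetof(struct hpt_iopmu_mvfrey, isr_cause) != 0x4088);
	BUILD_BUG_ON(offsetof(struct hpt_iopmu_mvfrey, pcie_f0_int_enable) != 0x1020c);
	BUILD_BUG_ON(offsetof(struct hpt_iopmu_mvfrey, f0_to_cpu_msg_a) != 0x10400);
	BUILD_BUG_ON(offsetof(struct hpt_iopmu_mvfrey, f0_doorbell) != 0x10480);
}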
#define MVIOP_MU_QUEUE_ADDR_HOST_MASK (~(0x1full))
#define MVIOP_MU_QUEUE_ADDR_HOST_BIT 4
@ -87,6 +126,9 @@ struct hpt_iopmv_regs {
#define MVIOP_MU_OUTBOUND_INT_MSG 1
#define MVIOP_MU_OUTBOUND_INT_POSTQUEUE 2
#define CL_POINTER_TOGGLE 0x00004000
#define CPU_TO_F0_DRBL_MSG_BIT 0x02000000
enum hpt_iopmu_message {
/* host-to-iop messages */
IOPMU_INBOUND_MSG0_NOP = 0,
@ -95,6 +137,7 @@ enum hpt_iopmu_message {
IOPMU_INBOUND_MSG0_SHUTDOWN,
IOPMU_INBOUND_MSG0_STOP_BACKGROUND_TASK,
IOPMU_INBOUND_MSG0_START_BACKGROUND_TASK,
IOPMU_INBOUND_MSG0_RESET_COMM,
IOPMU_INBOUND_MSG0_MAX = 0xff,
/* iop-to-host messages */
IOPMU_OUTBOUND_MSG0_REGISTER_DEVICE_0 = 0x100,
@ -118,6 +161,7 @@ struct hpt_iop_request_header {
#define IOP_REQUEST_FLAG_BIST_REQUEST 2
#define IOP_REQUEST_FLAG_REMAPPED 4
#define IOP_REQUEST_FLAG_OUTPUT_CONTEXT 8
#define IOP_REQUEST_FLAG_ADDR_BITS 0x40 /* flags[31:16] is phy_addr[47:32] */
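This flag works together with hptiop_post_req_mvfrey() earlier in this diff: the driver keeps request physical addresses shifted right by 5 (32-byte aligned), so (req_shifted_phy >> 11) & 0xffff0000 is exactly phy_addr[47:32] placed in flags[31:16]. A standalone sketch of the same arithmetic, with a hypothetical helper name; endian conversion happens where the driver writes the header:
/* Illustrative only: fold a 32-byte-aligned physical address into the
 * mvfrey request header the way hptiop_post_req_mvfrey() does. */
#include <linux/types.h>
static inline void hpt_mvfrey_fold_addr(u32 *flags, u32 *context_hi32,
					u64 phy)
{
	u64 shifted = phy >> 5;		/* matches _req->req_shifted_phy */

	/* (phy >> 5) >> 11 == phy >> 16, so the mask leaves phy[47:32]
	 * sitting in flags[31:16], per the comment above */
	*flags |= IOP_REQUEST_FLAG_ADDR_BITS |
			(u32)((shifted >> 11) & 0xffff0000);
	*context_hi32 = (u32)(shifted << 5);	/* phy[31:0] */
}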
enum hpt_iop_request_type {
IOP_REQUEST_TYPE_GET_CONFIG = 0,
@ -223,6 +267,13 @@ struct hpt_scsi_pointer {
#define HPT_SCP(scp) ((struct hpt_scsi_pointer *)&(scp)->SCp)
enum hptiop_family {
UNKNOWN_BASED_IOP,
INTEL_BASED_IOP,
MV_BASED_IOP,
MVFREY_BASED_IOP
};
struct hptiop_hba {
struct hptiop_adapter_ops *ops;
union {
@ -236,6 +287,22 @@ struct hptiop_hba {
void *internal_req;
dma_addr_t internal_req_phy;
} mv;
struct {
struct hpt_iop_request_get_config __iomem *config;
struct hpt_iopmu_mvfrey __iomem *mu;
int internal_mem_size;
struct hptiop_request internal_req;
int list_count;
struct mvfrey_inlist_entry *inlist;
dma_addr_t inlist_phy;
__le32 inlist_wptr;
struct mvfrey_outlist_entry *outlist;
dma_addr_t outlist_phy;
__le32 *outlist_cptr; /* copy pointer shadow */
dma_addr_t outlist_cptr_phy;
__le32 outlist_rptr;
} mvfrey;
} u;
struct Scsi_Host *host;
@ -283,6 +350,7 @@ struct hpt_ioctl_k {
};
struct hptiop_adapter_ops {
enum hptiop_family family;
int (*iop_wait_ready)(struct hptiop_hba *hba, u32 millisec);
int (*internal_memalloc)(struct hptiop_hba *hba);
int (*internal_memfree)(struct hptiop_hba *hba);
@ -298,6 +366,8 @@ struct hptiop_adapter_ops {
void (*post_msg)(struct hptiop_hba *hba, u32 msg);
void (*post_req)(struct hptiop_hba *hba, struct hptiop_request *_req);
int hw_dma_bit_mask;
int (*reset_comm)(struct hptiop_hba *hba);
__le64 host_phy_flag;
};
#define HPT_IOCTL_RESULT_OK 0

View File

@ -689,6 +689,7 @@ struct lpfc_hba {
#define LPFC_FCF_PRIORITY 2 /* Priority fcf failover */
uint32_t cfg_fcf_failover_policy;
uint32_t cfg_fcp_io_sched;
uint32_t cfg_fcp2_no_tgt_reset;
uint32_t cfg_cr_delay;
uint32_t cfg_cr_count;
uint32_t cfg_multi_ring_support;
@ -714,6 +715,7 @@ struct lpfc_hba {
uint32_t cfg_log_verbose;
uint32_t cfg_aer_support;
uint32_t cfg_sriov_nr_virtfn;
uint32_t cfg_request_firmware_upgrade;
uint32_t cfg_iocb_cnt;
uint32_t cfg_suppress_link_up;
#define LPFC_INITIALIZE_LINK 0 /* do normal init_link mbox */

View File

@ -3617,6 +3617,77 @@ lpfc_sriov_nr_virtfn_init(struct lpfc_hba *phba, int val)
static DEVICE_ATTR(lpfc_sriov_nr_virtfn, S_IRUGO | S_IWUSR,
lpfc_sriov_nr_virtfn_show, lpfc_sriov_nr_virtfn_store);
/**
* lpfc_request_firmware_upgrade_store - Request Linux generic firmware upgrade
*
* @dev: class device that is converted into a Scsi_host.
* @attr: device attribute, not used.
* @buf: string containing "1" to request an immediate firmware upgrade.
* @count: unused variable.
*
* Description:
* Writing "1" to the lpfc_req_fw_upgrade sysfs attribute requests a
* synchronous firmware upgrade through the Linux generic firmware
* interface.
*
* Returns:
* length of the buf on success if the upgrade request is accepted.
* -EINVAL if the input does not parse to the value 1.
* -EPERM if the firmware update request fails.
**/
static ssize_t
lpfc_request_firmware_upgrade_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct Scsi_Host *shost = class_to_shost(dev);
struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
struct lpfc_hba *phba = vport->phba;
int val = 0, rc = -EINVAL;
/* Sanity check on user data */
if (!isdigit(buf[0]))
return -EINVAL;
if (sscanf(buf, "%i", &val) != 1)
return -EINVAL;
if (val != 1)
return -EINVAL;
rc = lpfc_sli4_request_firmware_update(phba, RUN_FW_UPGRADE);
if (rc)
rc = -EPERM;
else
rc = strlen(buf);
return rc;
}
static int lpfc_req_fw_upgrade;
module_param(lpfc_req_fw_upgrade, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(lpfc_req_fw_upgrade, "Enable Linux generic firmware upgrade");
lpfc_param_show(request_firmware_upgrade)
/**
* lpfc_request_firmware_upgrade_init - Enable initial linux generic fw upgrade
* @phba: lpfc_hba pointer.
* @val: 0 or 1.
*
* Description:
* Set the initial Linux generic firmware upgrade enable or disable flag.
*
* Returns:
* zero if val saved.
* -EINVAL if val is out of range
**/
static int
lpfc_request_firmware_upgrade_init(struct lpfc_hba *phba, int val)
{
if (val >= 0 && val <= 1) {
phba->cfg_request_firmware_upgrade = val;
return 0;
}
return -EINVAL;
}
static DEVICE_ATTR(lpfc_req_fw_upgrade, S_IRUGO | S_IWUSR,
lpfc_request_firmware_upgrade_show,
lpfc_request_firmware_upgrade_store);
/**
* lpfc_fcp_imax_store
*
@ -3787,6 +3858,16 @@ LPFC_ATTR_R(ack0, 0, 0, 1, "Enable ACK0 support");
LPFC_ATTR_RW(fcp_io_sched, 0, 0, 1, "Determine scheduling algorithm for "
"issuing commands [0] - Round Robin, [1] - Current CPU");
/*
# lpfc_fcp2_no_tgt_reset: Determine bus reset behavior
# range is [0,1]. Default value is 0.
# For [0], bus reset issues target reset to ALL devices
# For [1], bus reset issues target reset to non-FCP2 devices
*/
LPFC_ATTR_RW(fcp2_no_tgt_reset, 0, 0, 1, "Determine bus reset behavior for "
"FCP2 devices [0] - issue tgt reset, [1] - no tgt reset");
/*
# lpfc_cr_delay & lpfc_cr_count: Default values for I/O coalescing
# cr_delay (msec) or cr_count outstanding commands. cr_delay can take
@ -4029,6 +4110,7 @@ struct device_attribute *lpfc_hba_attrs[] = {
&dev_attr_lpfc_scan_down,
&dev_attr_lpfc_link_speed,
&dev_attr_lpfc_fcp_io_sched,
&dev_attr_lpfc_fcp2_no_tgt_reset,
&dev_attr_lpfc_cr_delay,
&dev_attr_lpfc_cr_count,
&dev_attr_lpfc_multi_ring_support,
@ -4069,6 +4151,7 @@ struct device_attribute *lpfc_hba_attrs[] = {
&dev_attr_lpfc_aer_support,
&dev_attr_lpfc_aer_state_cleanup,
&dev_attr_lpfc_sriov_nr_virtfn,
&dev_attr_lpfc_req_fw_upgrade,
&dev_attr_lpfc_suppress_link_up,
&dev_attr_lpfc_iocb_cnt,
&dev_attr_iocb_hw,
@ -5019,6 +5102,7 @@ void
lpfc_get_cfgparam(struct lpfc_hba *phba)
{
lpfc_fcp_io_sched_init(phba, lpfc_fcp_io_sched);
lpfc_fcp2_no_tgt_reset_init(phba, lpfc_fcp2_no_tgt_reset);
lpfc_cr_delay_init(phba, lpfc_cr_delay);
lpfc_cr_count_init(phba, lpfc_cr_count);
lpfc_multi_ring_support_init(phba, lpfc_multi_ring_support);
@ -5051,6 +5135,7 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
lpfc_hba_log_verbose_init(phba, lpfc_log_verbose);
lpfc_aer_support_init(phba, lpfc_aer_support);
lpfc_sriov_nr_virtfn_init(phba, lpfc_sriov_nr_virtfn);
lpfc_request_firmware_upgrade_init(phba, lpfc_req_fw_upgrade);
lpfc_suppress_link_up_init(phba, lpfc_suppress_link_up);
lpfc_iocb_cnt_init(phba, lpfc_iocb_cnt);
phba->cfg_enable_dss = 1;

View File

@ -468,3 +468,4 @@ void lpfc_sli4_node_prep(struct lpfc_hba *);
int lpfc_sli4_xri_sgl_update(struct lpfc_hba *);
void lpfc_free_sgl_list(struct lpfc_hba *, struct list_head *);
uint32_t lpfc_sli_port_speed_get(struct lpfc_hba *);
int lpfc_sli4_request_firmware_update(struct lpfc_hba *, uint8_t);

View File

@ -634,7 +634,7 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
/* Check for retry */
if (vport->fc_ns_retry < LPFC_MAX_NS_RETRY) {
if (irsp->ulpStatus != IOSTAT_LOCAL_REJECT ||
(irsp->un.ulpWord[4] && IOERR_PARAM_MASK) !=
(irsp->un.ulpWord[4] & IOERR_PARAM_MASK) !=
IOERR_NO_RESOURCES)
vport->fc_ns_retry++;

View File

@ -1182,8 +1182,6 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
sp->cmn.w2.r_a_tov = 0;
sp->cmn.virtual_fabric_support = 0;
sp->cls1.classValid = 0;
sp->cls2.seqDelivery = 1;
sp->cls3.seqDelivery = 1;
if (sp->cmn.fcphLow < FC_PH3)
sp->cmn.fcphLow = FC_PH3;
if (sp->cmn.fcphHigh < FC_PH3)
@ -1198,7 +1196,13 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
/* Set the fcfi to the fcfi we registered with */
elsiocb->iocb.ulpContext = phba->fcf.fcfi;
}
/* Can't do SLI4 class2 without support for sequence coalescing */
sp->cls2.classValid = 0;
sp->cls2.seqDelivery = 0;
} else {
/* Historical, setting sequential-delivery bit for SLI3 */
sp->cls2.seqDelivery = (sp->cls2.classValid) ? 1 : 0;
sp->cls3.seqDelivery = (sp->cls3.classValid) ? 1 : 0;
if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
sp->cmn.request_multiple_Nport = 1;
/* For FLOGI, Let FLOGI rsp set the NPortID for VPI 0 */

View File

@ -3219,6 +3219,9 @@ struct wqe_common {
#define wqe_dif_SHIFT 0
#define wqe_dif_MASK 0x00000003
#define wqe_dif_WORD word7
#define LPFC_WQE_DIF_PASSTHRU 1
#define LPFC_WQE_DIF_STRIP 2
#define LPFC_WQE_DIF_INSERT 3
#define wqe_ct_SHIFT 2
#define wqe_ct_MASK 0x00000003
#define wqe_ct_WORD word7

View File

@ -3854,7 +3854,7 @@ static void
lpfc_sli4_async_sli_evt(struct lpfc_hba *phba, struct lpfc_acqe_sli *acqe_sli)
{
char port_name;
char message[80];
char message[128];
uint8_t status;
struct lpfc_acqe_misconfigured_event *misconfigured;
@ -9450,7 +9450,7 @@ lpfc_write_firmware(const struct firmware *fw, void *context)
struct lpfc_dmabuf *dmabuf, *next;
uint32_t offset = 0, temp_offset = 0;
/* It can be null, sanity check */
/* It can be null in no-wait mode, sanity check */
if (!fw) {
rc = -ENXIO;
goto out;
@ -9528,10 +9528,47 @@ release_out:
release_firmware(fw);
out:
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"3024 Firmware update done: %d.", rc);
"3024 Firmware update done: %d.\n", rc);
return;
}
/**
* lpfc_sli4_request_firmware_update - Request linux generic firmware upgrade
* @phba: pointer to lpfc hba data structure.
* @fw_upgrade: INT_FW_UPGRADE to request the upgrade asynchronously
* (no-wait), RUN_FW_UPGRADE to perform it synchronously.
*
* This routine is called to perform a Linux generic firmware upgrade on a
* device that supports this feature.
**/
int
lpfc_sli4_request_firmware_update(struct lpfc_hba *phba, uint8_t fw_upgrade)
{
uint8_t file_name[ELX_MODEL_NAME_SIZE];
int ret;
const struct firmware *fw;
/* Only supported on SLI4 interface type 2 for now */
if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) !=
LPFC_SLI_INTF_IF_TYPE_2)
return -EPERM;
snprintf(file_name, ELX_MODEL_NAME_SIZE, "%s.grp", phba->ModelName);
if (fw_upgrade == INT_FW_UPGRADE) {
ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG,
file_name, &phba->pcidev->dev,
GFP_KERNEL, (void *)phba,
lpfc_write_firmware);
} else if (fw_upgrade == RUN_FW_UPGRADE) {
ret = request_firmware(&fw, file_name, &phba->pcidev->dev);
if (!ret)
lpfc_write_firmware(fw, (void *)phba);
} else {
ret = -EINVAL;
}
return ret;
}
/**
* lpfc_pci_probe_one_s4 - PCI probe func to reg SLI-4 device to PCI subsys
* @pdev: pointer to PCI device
@ -9560,7 +9597,6 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
uint32_t cfg_mode, intr_mode;
int mcnt;
int adjusted_fcp_io_channel;
uint8_t file_name[ELX_MODEL_NAME_SIZE];
/* Allocate memory for HBA structure */
phba = lpfc_hba_alloc(pdev);
@ -9703,16 +9739,9 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
/* Perform post initialization setup */
lpfc_post_init_setup(phba);
/* check for firmware upgrade or downgrade (if_type 2 only) */
if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) ==
LPFC_SLI_INTF_IF_TYPE_2) {
snprintf(file_name, ELX_MODEL_NAME_SIZE, "%s.grp",
phba->ModelName);
ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG,
file_name, &phba->pcidev->dev,
GFP_KERNEL, (void *)phba,
lpfc_write_firmware);
}
/* check for firmware upgrade or downgrade */
if (phba->cfg_request_firmware_upgrade)
ret = lpfc_sli4_request_firmware_update(phba, INT_FW_UPGRADE);
/* Check if there are static vports to be created. */
lpfc_create_static_vport(phba);

View File

@ -3227,6 +3227,21 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba,
}
}
switch (scsi_get_prot_op(scsi_cmnd)) {
case SCSI_PROT_WRITE_STRIP:
case SCSI_PROT_READ_STRIP:
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_DIF_STRIP;
break;
case SCSI_PROT_WRITE_INSERT:
case SCSI_PROT_READ_INSERT:
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_DIF_INSERT;
break;
case SCSI_PROT_WRITE_PASS:
case SCSI_PROT_READ_PASS:
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_DIF_PASS;
break;
}
fcpdl = lpfc_bg_scsi_adjust_dl(phba, lpfc_cmd);
fcp_cmnd->fcpDl = be32_to_cpu(fcpdl);
@ -3236,7 +3251,6 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba,
* we need to set word 4 of IOCB here
*/
iocb_cmd->un.fcpi.fcpi_parm = fcpdl;
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_DIF;
return 0;
err:
@ -4914,6 +4928,9 @@ lpfc_bus_reset_handler(struct scsi_cmnd *cmnd)
list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
if (!NLP_CHK_NODE_ACT(ndlp))
continue;
if (vport->phba->cfg_fcp2_no_tgt_reset &&
(ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE))
continue;
if (ndlp->nlp_state == NLP_STE_MAPPED_NODE &&
ndlp->nlp_sid == i &&
ndlp->rport) {

View File

@ -8068,10 +8068,6 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
LPFC_WQE_LENLOC_WORD4);
bf_set(wqe_ebde_cnt, &wqe->fcp_iwrite.wqe_com, 0);
bf_set(wqe_pu, &wqe->fcp_iwrite.wqe_com, iocbq->iocb.ulpPU);
if (iocbq->iocb_flag & LPFC_IO_DIF) {
iocbq->iocb_flag &= ~LPFC_IO_DIF;
bf_set(wqe_dif, &wqe->generic.wqe_com, 1);
}
bf_set(wqe_dbde, &wqe->fcp_iwrite.wqe_com, 1);
break;
case CMD_FCP_IREAD64_CR:
@ -8091,10 +8087,6 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
LPFC_WQE_LENLOC_WORD4);
bf_set(wqe_ebde_cnt, &wqe->fcp_iread.wqe_com, 0);
bf_set(wqe_pu, &wqe->fcp_iread.wqe_com, iocbq->iocb.ulpPU);
if (iocbq->iocb_flag & LPFC_IO_DIF) {
iocbq->iocb_flag &= ~LPFC_IO_DIF;
bf_set(wqe_dif, &wqe->generic.wqe_com, 1);
}
bf_set(wqe_dbde, &wqe->fcp_iread.wqe_com, 1);
break;
case CMD_FCP_ICMND64_CR:
@ -8304,6 +8296,14 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
break;
}
if (iocbq->iocb_flag & LPFC_IO_DIF_PASS)
bf_set(wqe_dif, &wqe->generic.wqe_com, LPFC_WQE_DIF_PASSTHRU);
else if (iocbq->iocb_flag & LPFC_IO_DIF_STRIP)
bf_set(wqe_dif, &wqe->generic.wqe_com, LPFC_WQE_DIF_STRIP);
else if (iocbq->iocb_flag & LPFC_IO_DIF_INSERT)
bf_set(wqe_dif, &wqe->generic.wqe_com, LPFC_WQE_DIF_INSERT);
iocbq->iocb_flag &= ~(LPFC_IO_DIF_PASS | LPFC_IO_DIF_STRIP |
LPFC_IO_DIF_INSERT);
bf_set(wqe_xri_tag, &wqe->generic.wqe_com, xritag);
bf_set(wqe_reqtag, &wqe->generic.wqe_com, iocbq->iotag);
wqe->generic.wqe_com.abort_tag = abort_tag;

View File

@ -69,7 +69,9 @@ struct lpfc_iocbq {
#define LPFC_USE_FCPWQIDX 0x80 /* Submit to specified FCPWQ index */
#define DSS_SECURITY_OP 0x100 /* security IO */
#define LPFC_IO_ON_TXCMPLQ 0x200 /* The IO is still on the TXCMPLQ */
#define LPFC_IO_DIF 0x400 /* T10 DIF IO */
#define LPFC_IO_DIF_PASS 0x400 /* T10 DIF IO pass-thru prot */
#define LPFC_IO_DIF_STRIP 0x800 /* T10 DIF IO strip prot */
#define LPFC_IO_DIF_INSERT 0x1000 /* T10 DIF IO insert prot */
#define LPFC_FIP_ELS_ID_MASK 0xc000 /* ELS_ID range 0-3, non-shifted mask */
#define LPFC_FIP_ELS_ID_SHIFT 14

View File

@ -82,6 +82,9 @@
#define LPFC_FW_RESET_MAXIMUM_WAIT_10MS_CNT 12000
#define INT_FW_UPGRADE 0
#define RUN_FW_UPGRADE 1
enum lpfc_sli4_queue_type {
LPFC_EQ,
LPFC_GCQ,

View File

@ -18,7 +18,7 @@
* included with this package. *
*******************************************************************/
#define LPFC_DRIVER_VERSION "8.3.35"
#define LPFC_DRIVER_VERSION "8.3.36"
#define LPFC_DRIVER_NAME "lpfc"
/* Used for SLI 2/3 */

View File

@ -0,0 +1,67 @@
#
# Kernel configuration file for the MPT3SAS
#
# This code is based on drivers/scsi/mpt3sas/Kconfig
# Copyright (C) 2012 LSI Corporation
# (mailto:DL-MPTFusionLinux@lsi.com)
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# NO WARRANTY
# THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
# LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
# solely responsible for determining the appropriateness of using and
# distributing the Program and assumes all risks associated with its
# exercise of rights under this Agreement, including but not limited to
# the risks and costs of program errors, damage to or loss of data,
# programs or equipment, and unavailability or interruption of operations.
# DISCLAIMER OF LIABILITY
# NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
# HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
# USA.
config SCSI_MPT3SAS
tristate "LSI MPT Fusion SAS 3.0 Device Driver"
depends on PCI && SCSI
select SCSI_SAS_ATTRS
select RAID_ATTRS
---help---
This driver supports PCI-Express SAS 12Gb/s Host Adapters.
config SCSI_MPT3SAS_MAX_SGE
int "LSI MPT Fusion Max number of SG Entries (16 - 256)"
depends on PCI && SCSI && SCSI_MPT3SAS
default "128"
range 16 256
---help---
This option allows you to specify the maximum number of scatter-
gather entries per I/O. The driver default is 128, which matches
MAX_PHYS_SEGMENTS in most kernels. However, in SuSE kernels this
can be 256. It may also be decreased to as few as 16. Decreasing
this parameter reduces the memory required per controller instance.
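
A sketch of how a driver typically consumes such a Kconfig value, clamping it into a build-time scatter-gather depth; the macro names below are illustrative, not taken from mpt3sas:
/* Illustrative only: clamp the configured SGE count into the
 * documented 16..256 window, defaulting to 128 when unset. */
#ifdef CONFIG_SCSI_MPT3SAS_MAX_SGE
#if CONFIG_SCSI_MPT3SAS_MAX_SGE < 16
#define MPT3SAS_SG_DEPTH 16
#elif CONFIG_SCSI_MPT3SAS_MAX_SGE > 256
#define MPT3SAS_SG_DEPTH 256
#else
#define MPT3SAS_SG_DEPTH CONFIG_SCSI_MPT3SAS_MAX_SGE
#endif
#else
#define MPT3SAS_SG_DEPTH 128	/* matches MAX_PHYS_SEGMENTS */
#endif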
config SCSI_MPT3SAS_LOGGING
bool "LSI MPT Fusion logging facility"
depends on PCI && SCSI && SCSI_MPT3SAS
---help---
This turns on a debug logging facility; messages are compiled in
and then selected at runtime through the driver's logging_level
module parameter.
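
Such a facility is conventionally a config-gated debug macro keyed off a runtime logging_level bitmask; a minimal sketch with assumed macro and bit names:
/* Illustrative only: the print is compiled in only when
 * CONFIG_SCSI_MPT3SAS_LOGGING is set, and emitted only when the
 * matching bit is set in the per-adapter logging_level. */
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
#define dtransportprintk(IOC, CMD)			\
{							\
	if ((IOC)->logging_level & MPT_DEBUG_TRANSPORT)	\
		CMD;					\
}
#else
#define dtransportprintk(IOC, CMD)
#endif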

View File

@ -0,0 +1,8 @@
# mpt3sas makefile
obj-$(CONFIG_SCSI_MPT3SAS) += mpt3sas.o
mpt3sas-y += mpt3sas_base.o \
mpt3sas_config.o \
mpt3sas_scsih.o \
mpt3sas_transport.o \
mpt3sas_ctl.o \
mpt3sas_trigger_diag.o

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,560 @@
/*
* Copyright (c) 2000-2012 LSI Corporation.
*
*
* Name: mpi2_init.h
* Title: MPI SCSI initiator mode messages and structures
* Creation Date: June 23, 2006
*
* mpi2_init.h Version: 02.00.14
*
* NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
* prefix are for use only on MPI v2.5 products, and must not be used
* with MPI v2.0 products. Unless otherwise noted, names beginning with
* MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products.
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* 10-31-07 02.00.01 Fixed name for pMpi2SCSITaskManagementRequest_t.
* 12-18-07 02.00.02 Modified Task Management Target Reset Method defines.
* 02-29-08 02.00.03 Added Query Task Set and Query Unit Attention.
* 03-03-08 02.00.04 Fixed name of struct _MPI2_SCSI_TASK_MANAGE_REPLY.
* 05-21-08 02.00.05 Fixed typo in name of Mpi2SepRequest_t.
* 10-02-08 02.00.06 Removed Untagged and No Disconnect values from SCSI IO
* Control field Task Attribute flags.
* Moved LUN field defines to mpi2.h because they are
* common to many structures.
* 05-06-09 02.00.07 Changed task management type of Query Unit Attention to
* Query Asynchronous Event.
* Defined two new bits in the SlotStatus field of the SCSI
* Enclosure Processor Request and Reply.
* 10-28-09 02.00.08 Added defines for decoding the ResponseInfo bytes for
* both SCSI IO Error Reply and SCSI Task Management Reply.
* Added ResponseInfo field to MPI2_SCSI_TASK_MANAGE_REPLY.
* Added MPI2_SCSITASKMGMT_RSP_TM_OVERLAPPED_TAG define.
* 02-10-10 02.00.09 Removed unused structure that had "#if 0" around it.
* 05-12-10 02.00.10 Added optional vendor-unique region to SCSI IO Request.
* 11-10-10 02.00.11 Added MPI2_SCSIIO_NUM_SGLOFFSETS define.
* 11-18-11 02.00.12 Incorporating additions for MPI v2.5.
* 02-06-12 02.00.13 Added alternate defines for Task Priority / Command
* Priority to match SAM-4.
* Added EEDPErrorOffset to MPI2_SCSI_IO_REPLY.
* 07-10-12 02.00.14 Added MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_INIT_H
#define MPI2_INIT_H
/*****************************************************************************
*
* SCSI Initiator Messages
*
*****************************************************************************/
/****************************************************************************
* SCSI IO messages and associated structures
****************************************************************************/
typedef struct _MPI2_SCSI_IO_CDB_EEDP32 {
U8 CDB[20]; /*0x00 */
U32 PrimaryReferenceTag; /*0x14 */
U16 PrimaryApplicationTag; /*0x18 */
U16 PrimaryApplicationTagMask; /*0x1A */
U32 TransferLength; /*0x1C */
} MPI2_SCSI_IO_CDB_EEDP32, *PTR_MPI2_SCSI_IO_CDB_EEDP32,
Mpi2ScsiIoCdbEedp32_t, *pMpi2ScsiIoCdbEedp32_t;
/*MPI v2.0 CDB field */
typedef union _MPI2_SCSI_IO_CDB_UNION {
U8 CDB32[32];
MPI2_SCSI_IO_CDB_EEDP32 EEDP32;
MPI2_SGE_SIMPLE_UNION SGE;
} MPI2_SCSI_IO_CDB_UNION, *PTR_MPI2_SCSI_IO_CDB_UNION,
Mpi2ScsiIoCdb_t, *pMpi2ScsiIoCdb_t;
/*MPI v2.0 SCSI IO Request Message */
typedef struct _MPI2_SCSI_IO_REQUEST {
U16 DevHandle; /*0x00 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved1; /*0x04 */
U8 Reserved2; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved3; /*0x0A */
U32 SenseBufferLowAddress; /*0x0C */
U16 SGLFlags; /*0x10 */
U8 SenseBufferLength; /*0x12 */
U8 Reserved4; /*0x13 */
U8 SGLOffset0; /*0x14 */
U8 SGLOffset1; /*0x15 */
U8 SGLOffset2; /*0x16 */
U8 SGLOffset3; /*0x17 */
U32 SkipCount; /*0x18 */
U32 DataLength; /*0x1C */
U32 BidirectionalDataLength; /*0x20 */
U16 IoFlags; /*0x24 */
U16 EEDPFlags; /*0x26 */
U32 EEDPBlockSize; /*0x28 */
U32 SecondaryReferenceTag; /*0x2C */
U16 SecondaryApplicationTag; /*0x30 */
U16 ApplicationTagTranslationMask; /*0x32 */
U8 LUN[8]; /*0x34 */
U32 Control; /*0x3C */
MPI2_SCSI_IO_CDB_UNION CDB; /*0x40 */
#ifdef MPI2_SCSI_IO_VENDOR_UNIQUE_REGION /*typically this is left undefined */
MPI2_SCSI_IO_VENDOR_UNIQUE VendorRegion;
#endif
MPI2_SGE_IO_UNION SGL; /*0x60 */
} MPI2_SCSI_IO_REQUEST, *PTR_MPI2_SCSI_IO_REQUEST,
Mpi2SCSIIORequest_t, *pMpi2SCSIIORequest_t;
/*SCSI IO MsgFlags bits */
/*MsgFlags for SenseBufferAddressSpace */
#define MPI2_SCSIIO_MSGFLAGS_MASK_SENSE_ADDR (0x0C)
#define MPI2_SCSIIO_MSGFLAGS_SYSTEM_SENSE_ADDR (0x00)
#define MPI2_SCSIIO_MSGFLAGS_IOCDDR_SENSE_ADDR (0x04)
#define MPI2_SCSIIO_MSGFLAGS_IOCPLB_SENSE_ADDR (0x08)
#define MPI2_SCSIIO_MSGFLAGS_IOCPLBNTA_SENSE_ADDR (0x0C)
/*SCSI IO SGLFlags bits */
/*base values for Data Location Address Space */
#define MPI2_SCSIIO_SGLFLAGS_ADDR_MASK (0x0C)
#define MPI2_SCSIIO_SGLFLAGS_SYSTEM_ADDR (0x00)
#define MPI2_SCSIIO_SGLFLAGS_IOCDDR_ADDR (0x04)
#define MPI2_SCSIIO_SGLFLAGS_IOCPLB_ADDR (0x08)
#define MPI2_SCSIIO_SGLFLAGS_IOCPLBNTA_ADDR (0x0C)
/*base values for Type */
#define MPI2_SCSIIO_SGLFLAGS_TYPE_MASK (0x03)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_MPI (0x00)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_IEEE32 (0x01)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_IEEE64 (0x02)
/*shift values for each sub-field */
#define MPI2_SCSIIO_SGLFLAGS_SGL3_SHIFT (12)
#define MPI2_SCSIIO_SGLFLAGS_SGL2_SHIFT (8)
#define MPI2_SCSIIO_SGLFLAGS_SGL1_SHIFT (4)
#define MPI2_SCSIIO_SGLFLAGS_SGL0_SHIFT (0)
/*number of SGLOffset fields */
#define MPI2_SCSIIO_NUM_SGLOFFSETS (4)
/*SCSI IO IoFlags bits */
/*Large CDB Address Space */
#define MPI2_SCSIIO_CDB_ADDR_MASK (0x6000)
#define MPI2_SCSIIO_CDB_ADDR_SYSTEM (0x0000)
#define MPI2_SCSIIO_CDB_ADDR_IOCDDR (0x2000)
#define MPI2_SCSIIO_CDB_ADDR_IOCPLB (0x4000)
#define MPI2_SCSIIO_CDB_ADDR_IOCPLBNTA (0x6000)
#define MPI2_SCSIIO_IOFLAGS_LARGE_CDB (0x1000)
#define MPI2_SCSIIO_IOFLAGS_BIDIRECTIONAL (0x0800)
#define MPI2_SCSIIO_IOFLAGS_MULTICAST (0x0400)
#define MPI2_SCSIIO_IOFLAGS_CMD_DETERMINES_DATA_DIR (0x0200)
#define MPI2_SCSIIO_IOFLAGS_CDBLENGTH_MASK (0x01FF)
/*SCSI IO EEDPFlags bits */
#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG (0x8000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_SEC_REFTAG (0x4000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_APPTAG (0x2000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_SEC_APPTAG (0x1000)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REFTAG (0x0400)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_APPTAG (0x0200)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_GUARD (0x0100)
#define MPI2_SCSIIO_EEDPFLAGS_PASSTHRU_REFTAG (0x0008)
#define MPI2_SCSIIO_EEDPFLAGS_MASK_OP (0x0007)
#define MPI2_SCSIIO_EEDPFLAGS_NOOP_OP (0x0000)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_OP (0x0001)
#define MPI2_SCSIIO_EEDPFLAGS_STRIP_OP (0x0002)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REMOVE_OP (0x0003)
#define MPI2_SCSIIO_EEDPFLAGS_INSERT_OP (0x0004)
#define MPI2_SCSIIO_EEDPFLAGS_REPLACE_OP (0x0006)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REGEN_OP (0x0007)
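
A sketch of how these bits combine for a protected write in which the IOC inserts the DIF tuple; mpi_request and lba are assumed locals, and the big-endian reference tag placement in the EEDP32 CDB layout above is an assumption of this sketch:
/* Illustrative only: EEDP setup for a protected WRITE where the IOC
 * inserts protection information and the primary reference tag
 * auto-increments per block. */
u16 eedp_flags = MPI2_SCSIIO_EEDPFLAGS_INSERT_OP |
		MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG;

mpi_request->EEDPFlags = cpu_to_le16(eedp_flags);
mpi_request->EEDPBlockSize = cpu_to_le32(512);	/* logical block size */
mpi_request->CDB.EEDP32.PrimaryReferenceTag = cpu_to_be32(lba);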
/*SCSI IO LUN fields: use MPI2_LUN_ from mpi2.h */
/*SCSI IO Control bits */
#define MPI2_SCSIIO_CONTROL_ADDCDBLEN_MASK (0xFC000000)
#define MPI2_SCSIIO_CONTROL_ADDCDBLEN_SHIFT (26)
#define MPI2_SCSIIO_CONTROL_DATADIRECTION_MASK (0x03000000)
#define MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION (24)
#define MPI2_SCSIIO_CONTROL_NODATATRANSFER (0x00000000)
#define MPI2_SCSIIO_CONTROL_WRITE (0x01000000)
#define MPI2_SCSIIO_CONTROL_READ (0x02000000)
#define MPI2_SCSIIO_CONTROL_BIDIRECTIONAL (0x03000000)
#define MPI2_SCSIIO_CONTROL_TASKPRI_MASK (0x00007800)
#define MPI2_SCSIIO_CONTROL_TASKPRI_SHIFT (11)
/*alternate name for the previous field; called Command Priority in SAM-4 */
#define MPI2_SCSIIO_CONTROL_CMDPRI_MASK (0x00007800)
#define MPI2_SCSIIO_CONTROL_CMDPRI_SHIFT (11)
#define MPI2_SCSIIO_CONTROL_TASKATTRIBUTE_MASK (0x00000700)
#define MPI2_SCSIIO_CONTROL_SIMPLEQ (0x00000000)
#define MPI2_SCSIIO_CONTROL_HEADOFQ (0x00000100)
#define MPI2_SCSIIO_CONTROL_ORDEREDQ (0x00000200)
#define MPI2_SCSIIO_CONTROL_ACAQ (0x00000400)
#define MPI2_SCSIIO_CONTROL_TLR_MASK (0x000000C0)
#define MPI2_SCSIIO_CONTROL_NO_TLR (0x00000000)
#define MPI2_SCSIIO_CONTROL_TLR_ON (0x00000040)
#define MPI2_SCSIIO_CONTROL_TLR_OFF (0x00000080)
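
Pulling the Control defines together for a simple-queue READ; mpi_request is an assumed local, and note that the CDB length itself is carried in the low bits of IoFlags rather than in Control:
/* Illustrative only: data direction and task attribute are ORed into
 * one dword; AddCDBLen stays 0 for CDBs that fit the 32-byte union. */
u32 control = MPI2_SCSIIO_CONTROL_READ | MPI2_SCSIIO_CONTROL_SIMPLEQ;

mpi_request->Control = cpu_to_le32(control);
mpi_request->IoFlags = cpu_to_le16(10 & MPI2_SCSIIO_IOFLAGS_CDBLENGTH_MASK);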
/*MPI v2.5 CDB field */
typedef union _MPI25_SCSI_IO_CDB_UNION {
U8 CDB32[32];
MPI2_SCSI_IO_CDB_EEDP32 EEDP32;
MPI2_IEEE_SGE_SIMPLE64 SGE;
} MPI25_SCSI_IO_CDB_UNION, *PTR_MPI25_SCSI_IO_CDB_UNION,
Mpi25ScsiIoCdb_t, *pMpi25ScsiIoCdb_t;
/*MPI v2.5 SCSI IO Request Message */
typedef struct _MPI25_SCSI_IO_REQUEST {
U16 DevHandle; /*0x00 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved1; /*0x04 */
U8 Reserved2; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved3; /*0x0A */
U32 SenseBufferLowAddress; /*0x0C */
U8 DMAFlags; /*0x10 */
U8 Reserved5; /*0x11 */
U8 SenseBufferLength; /*0x12 */
U8 Reserved4; /*0x13 */
U8 SGLOffset0; /*0x14 */
U8 SGLOffset1; /*0x15 */
U8 SGLOffset2; /*0x16 */
U8 SGLOffset3; /*0x17 */
U32 SkipCount; /*0x18 */
U32 DataLength; /*0x1C */
U32 BidirectionalDataLength; /*0x20 */
U16 IoFlags; /*0x24 */
U16 EEDPFlags; /*0x26 */
U16 EEDPBlockSize; /*0x28 */
U16 Reserved6; /*0x2A */
U32 SecondaryReferenceTag; /*0x2C */
U16 SecondaryApplicationTag; /*0x30 */
U16 ApplicationTagTranslationMask; /*0x32 */
U8 LUN[8]; /*0x34 */
U32 Control; /*0x3C */
MPI25_SCSI_IO_CDB_UNION CDB; /*0x40 */
#ifdef MPI25_SCSI_IO_VENDOR_UNIQUE_REGION /*typically this is left undefined */
MPI25_SCSI_IO_VENDOR_UNIQUE VendorRegion;
#endif
MPI25_SGE_IO_UNION SGL; /*0x60 */
} MPI25_SCSI_IO_REQUEST, *PTR_MPI25_SCSI_IO_REQUEST,
Mpi25SCSIIORequest_t, *pMpi25SCSIIORequest_t;
/*use MPI2_SCSIIO_MSGFLAGS_ defines for the MsgFlags field */
/*Defines for the DMAFlags field
* Each setting affects 4 SGLs, from SGL0 to SGL3.
* D = Data
* C = Cache DIF
* I = Interleaved
* H = Host DIF
*/
#define MPI25_SCSIIO_DMAFLAGS_OP_MASK (0x0F)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_D_D (0x00)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_D_C (0x01)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_D_I (0x02)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_C_C (0x03)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_C_I (0x04)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_D_I_I (0x05)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_C_C_C (0x06)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_C_C_I (0x07)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_C_I_I (0x08)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_I_I_I (0x09)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_D_D (0x0A)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_D_C (0x0B)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_D_I (0x0C)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_C_C (0x0D)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_C_I (0x0E)
#define MPI25_SCSIIO_DMAFLAGS_OP_D_H_I_I (0x0F)
/*number of SGLOffset fields */
#define MPI25_SCSIIO_NUM_SGLOFFSETS (4)
/*defines for the IoFlags field */
#define MPI25_SCSIIO_IOFLAGS_IO_PATH_MASK (0xC000)
#define MPI25_SCSIIO_IOFLAGS_NORMAL_PATH (0x0000)
#define MPI25_SCSIIO_IOFLAGS_FAST_PATH (0x4000)
#define MPI25_SCSIIO_IOFLAGS_LARGE_CDB (0x1000)
#define MPI25_SCSIIO_IOFLAGS_BIDIRECTIONAL (0x0800)
#define MPI25_SCSIIO_IOFLAGS_CDBLENGTH_MASK (0x01FF)
/*MPI v2.5 defines for the EEDPFlags bits */
/*use MPI2_SCSIIO_EEDPFLAGS_ defines for the other EEDPFlags bits */
#define MPI25_SCSIIO_EEDPFLAGS_ESCAPE_MODE_MASK (0x00C0)
#define MPI25_SCSIIO_EEDPFLAGS_COMPATIBLE_MODE (0x0000)
#define MPI25_SCSIIO_EEDPFLAGS_DO_NOT_DISABLE_MODE (0x0040)
#define MPI25_SCSIIO_EEDPFLAGS_APPTAG_DISABLE_MODE (0x0080)
#define MPI25_SCSIIO_EEDPFLAGS_APPTAG_REFTAG_DISABLE_MODE (0x00C0)
#define MPI25_SCSIIO_EEDPFLAGS_HOST_GUARD_METHOD_MASK (0x0030)
#define MPI25_SCSIIO_EEDPFLAGS_T10_CRC_HOST_GUARD (0x0000)
#define MPI25_SCSIIO_EEDPFLAGS_IP_CHKSUM_HOST_GUARD (0x0010)
/*use MPI2_LUN_ defines from mpi2.h for the LUN field */
/*use MPI2_SCSIIO_CONTROL_ defines for the Control field */
/*NOTE: The SCSI IO Reply is nearly the same for MPI 2.0 and MPI 2.5, so
* MPI2_SCSI_IO_REPLY is used for both.
*/
/*SCSI IO Error Reply Message */
typedef struct _MPI2_SCSI_IO_REPLY {
U16 DevHandle; /*0x00 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved1; /*0x04 */
U8 Reserved2; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved3; /*0x0A */
U8 SCSIStatus; /*0x0C */
U8 SCSIState; /*0x0D */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
U32 TransferCount; /*0x14 */
U32 SenseCount; /*0x18 */
U32 ResponseInfo; /*0x1C */
U16 TaskTag; /*0x20 */
U16 Reserved4; /*0x22 */
U32 BidirectionalTransferCount; /*0x24 */
U32 EEDPErrorOffset; /*0x28 *//*MPI 2.5 only; Reserved in MPI 2.0*/
U32 Reserved6; /*0x2C */
} MPI2_SCSI_IO_REPLY, *PTR_MPI2_SCSI_IO_REPLY,
Mpi2SCSIIOReply_t, *pMpi2SCSIIOReply_t;
/*SCSI IO Reply SCSIStatus values (SAM-4 status codes) */
#define MPI2_SCSI_STATUS_GOOD (0x00)
#define MPI2_SCSI_STATUS_CHECK_CONDITION (0x02)
#define MPI2_SCSI_STATUS_CONDITION_MET (0x04)
#define MPI2_SCSI_STATUS_BUSY (0x08)
#define MPI2_SCSI_STATUS_INTERMEDIATE (0x10)
#define MPI2_SCSI_STATUS_INTERMEDIATE_CONDMET (0x14)
#define MPI2_SCSI_STATUS_RESERVATION_CONFLICT (0x18)
#define MPI2_SCSI_STATUS_COMMAND_TERMINATED (0x22) /*obsolete */
#define MPI2_SCSI_STATUS_TASK_SET_FULL (0x28)
#define MPI2_SCSI_STATUS_ACA_ACTIVE (0x30)
#define MPI2_SCSI_STATUS_TASK_ABORTED (0x40)
/*SCSI IO Reply SCSIState flags */
#define MPI2_SCSI_STATE_RESPONSE_INFO_VALID (0x10)
#define MPI2_SCSI_STATE_TERMINATED (0x08)
#define MPI2_SCSI_STATE_NO_SCSI_STATUS (0x04)
#define MPI2_SCSI_STATE_AUTOSENSE_FAILED (0x02)
#define MPI2_SCSI_STATE_AUTOSENSE_VALID (0x01)
/*masks and shifts for the ResponseInfo field */
#define MPI2_SCSI_RI_MASK_REASONCODE (0x000000FF)
#define MPI2_SCSI_RI_SHIFT_REASONCODE (0)
#define MPI2_SCSI_TASKTAG_UNKNOWN (0xFFFF)
/****************************************************************************
* SCSI Task Management messages
****************************************************************************/
/*SCSI Task Management Request Message */
typedef struct _MPI2_SCSI_TASK_MANAGE_REQUEST {
U16 DevHandle; /*0x00 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U8 Reserved1; /*0x04 */
U8 TaskType; /*0x05 */
U8 Reserved2; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved3; /*0x0A */
U8 LUN[8]; /*0x0C */
U32 Reserved4[7]; /*0x14 */
U16 TaskMID; /*0x30 */
U16 Reserved5; /*0x32 */
} MPI2_SCSI_TASK_MANAGE_REQUEST,
*PTR_MPI2_SCSI_TASK_MANAGE_REQUEST,
Mpi2SCSITaskManagementRequest_t,
*pMpi2SCSITaskManagementRequest_t;
/*TaskType values */
#define MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK (0x01)
#define MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET (0x02)
#define MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET (0x03)
#define MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET (0x05)
#define MPI2_SCSITASKMGMT_TASKTYPE_CLEAR_TASK_SET (0x06)
#define MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK (0x07)
#define MPI2_SCSITASKMGMT_TASKTYPE_CLR_ACA (0x08)
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_TASK_SET (0x09)
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_ASYNC_EVENT (0x0A)
/*obsolete TaskType name */
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_UNIT_ATTENTION \
(MPI2_SCSITASKMGMT_TASKTYPE_QRY_ASYNC_EVENT)
/*MsgFlags bits */
#define MPI2_SCSITASKMGMT_MSGFLAGS_MASK_TARGET_RESET (0x18)
#define MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET (0x00)
#define MPI2_SCSITASKMGMT_MSGFLAGS_NEXUS_RESET_SRST (0x08)
#define MPI2_SCSITASKMGMT_MSGFLAGS_SAS_HARD_LINK_RESET (0x10)
#define MPI2_SCSITASKMGMT_MSGFLAGS_DO_NOT_SEND_TASK_IU (0x01)
/*SCSI Task Management Reply Message */
typedef struct _MPI2_SCSI_TASK_MANAGE_REPLY {
U16 DevHandle; /*0x00 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U8 ResponseCode; /*0x04 */
U8 TaskType; /*0x05 */
U8 Reserved1; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved2; /*0x0A */
U16 Reserved3; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
U32 TerminationCount; /*0x14 */
U32 ResponseInfo; /*0x18 */
} MPI2_SCSI_TASK_MANAGE_REPLY,
*PTR_MPI2_SCSI_TASK_MANAGE_REPLY,
Mpi2SCSITaskManagementReply_t, *pMpi2SCSITaskManagementReply_t;
/*ResponseCode values */
#define MPI2_SCSITASKMGMT_RSP_TM_COMPLETE (0x00)
#define MPI2_SCSITASKMGMT_RSP_INVALID_FRAME (0x02)
#define MPI2_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED (0x04)
#define MPI2_SCSITASKMGMT_RSP_TM_FAILED (0x05)
#define MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED (0x08)
#define MPI2_SCSITASKMGMT_RSP_TM_INVALID_LUN (0x09)
#define MPI2_SCSITASKMGMT_RSP_TM_OVERLAPPED_TAG (0x0A)
#define MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC (0x80)
/*masks and shifts for the ResponseInfo field */
#define MPI2_SCSITASKMGMT_RI_MASK_REASONCODE (0x000000FF)
#define MPI2_SCSITASKMGMT_RI_SHIFT_REASONCODE (0)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI2 (0x0000FF00)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI2 (8)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI1 (0x00FF0000)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI1 (16)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI0 (0xFF000000)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI0 (24)
/****************************************************************************
* SCSI Enclosure Processor messages
****************************************************************************/
/*SCSI Enclosure Processor Request Message */
typedef struct _MPI2_SEP_REQUEST {
U16 DevHandle; /*0x00 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U8 Action; /*0x04 */
U8 Flags; /*0x05 */
U8 Reserved1; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved2; /*0x0A */
U32 SlotStatus; /*0x0C */
U32 Reserved3; /*0x10 */
U32 Reserved4; /*0x14 */
U32 Reserved5; /*0x18 */
U16 Slot; /*0x1C */
U16 EnclosureHandle; /*0x1E */
} MPI2_SEP_REQUEST, *PTR_MPI2_SEP_REQUEST,
Mpi2SepRequest_t, *pMpi2SepRequest_t;
/*Action defines */
#define MPI2_SEP_REQ_ACTION_WRITE_STATUS (0x00)
#define MPI2_SEP_REQ_ACTION_READ_STATUS (0x01)
/*Flags defines */
#define MPI2_SEP_REQ_FLAGS_DEVHANDLE_ADDRESS (0x00)
#define MPI2_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS (0x01)
/*SlotStatus defines */
#define MPI2_SEP_REQ_SLOTSTATUS_REQUEST_REMOVE (0x00040000)
#define MPI2_SEP_REQ_SLOTSTATUS_IDENTIFY_REQUEST (0x00020000)
#define MPI2_SEP_REQ_SLOTSTATUS_REBUILD_STOPPED (0x00000200)
#define MPI2_SEP_REQ_SLOTSTATUS_HOT_SPARE (0x00000100)
#define MPI2_SEP_REQ_SLOTSTATUS_UNCONFIGURED (0x00000080)
#define MPI2_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT (0x00000040)
#define MPI2_SEP_REQ_SLOTSTATUS_IN_CRITICAL_ARRAY (0x00000010)
#define MPI2_SEP_REQ_SLOTSTATUS_IN_FAILED_ARRAY (0x00000008)
#define MPI2_SEP_REQ_SLOTSTATUS_DEV_REBUILDING (0x00000004)
#define MPI2_SEP_REQ_SLOTSTATUS_DEV_FAULTY (0x00000002)
#define MPI2_SEP_REQ_SLOTSTATUS_NO_ERROR (0x00000001)
/*SCSI Enclosure Processor Reply Message */
typedef struct _MPI2_SEP_REPLY {
U16 DevHandle; /*0x00 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U8 Action; /*0x04 */
U8 Flags; /*0x05 */
U8 Reserved1; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved2; /*0x0A */
U16 Reserved3; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
U32 SlotStatus; /*0x14 */
U32 Reserved4; /*0x18 */
U16 Slot; /*0x1C */
U16 EnclosureHandle; /*0x1E */
} MPI2_SEP_REPLY, *PTR_MPI2_SEP_REPLY,
Mpi2SepReply_t, *pMpi2SepReply_t;
/*SlotStatus defines */
#define MPI2_SEP_REPLY_SLOTSTATUS_REMOVE_READY (0x00040000)
#define MPI2_SEP_REPLY_SLOTSTATUS_IDENTIFY_REQUEST (0x00020000)
#define MPI2_SEP_REPLY_SLOTSTATUS_REBUILD_STOPPED (0x00000200)
#define MPI2_SEP_REPLY_SLOTSTATUS_HOT_SPARE (0x00000100)
#define MPI2_SEP_REPLY_SLOTSTATUS_UNCONFIGURED (0x00000080)
#define MPI2_SEP_REPLY_SLOTSTATUS_PREDICTED_FAULT (0x00000040)
#define MPI2_SEP_REPLY_SLOTSTATUS_IN_CRITICAL_ARRAY (0x00000010)
#define MPI2_SEP_REPLY_SLOTSTATUS_IN_FAILED_ARRAY (0x00000008)
#define MPI2_SEP_REPLY_SLOTSTATUS_DEV_REBUILDING (0x00000004)
#define MPI2_SEP_REPLY_SLOTSTATUS_DEV_FAULTY (0x00000002)
#define MPI2_SEP_REPLY_SLOTSTATUS_NO_ERROR (0x00000001)
#endif

File diff suppressed because it is too large

View File

@ -0,0 +1,346 @@
/*
* Copyright (c) 2000-2012 LSI Corporation.
*
*
* Name: mpi2_raid.h
* Title: MPI Integrated RAID messages and structures
* Creation Date: April 26, 2007
*
* mpi2_raid.h Version: 02.00.08
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* 08-31-07 02.00.01 Modifications to RAID Action request and reply,
* including the Actions and ActionData.
* 02-29-08 02.00.02 Added MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD.
* 05-21-08 02.00.03 Added MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS so that
* the PhysDisk array in MPI2_RAID_VOLUME_CREATION_STRUCT
* can be sized by the build environment.
* 07-30-09 02.00.04 Added proper define for the Use Default Settings bit of
* VolumeCreationFlags and marked the old one as obsolete.
* 05-12-10 02.00.05 Added MPI2_RAID_VOL_FLAGS_OP_MDC define.
* 08-24-10 02.00.06 Added MPI2_RAID_ACTION_COMPATIBILITY_CHECK along with
* related structures and defines.
* Added product-specific range to RAID Action values.
* 11-18-11 02.00.07 Incorporating additions for MPI v2.5.
* 02-06-12 02.00.08 Added MPI2_RAID_ACTION_PHYSDISK_HIDDEN.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_RAID_H
#define MPI2_RAID_H
/*****************************************************************************
*
* Integrated RAID Messages
*
*****************************************************************************/
/****************************************************************************
* RAID Action messages
****************************************************************************/
/*ActionDataWord defines for use with MPI2_RAID_ACTION_DELETE_VOLUME action */
#define MPI2_RAID_ACTION_ADATA_KEEP_LBA0 (0x00000000)
#define MPI2_RAID_ACTION_ADATA_ZERO_LBA0 (0x00000001)
/*use MPI2_RAIDVOL0_SETTING_ defines from mpi2_cnfg.h for
*MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE action */
/*ActionDataWord defines for use with
*MPI2_RAID_ACTION_DISABLE_ALL_VOLUMES action */
#define MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD (0x00000001)
/*ActionDataWord for MPI2_RAID_ACTION_SET_RAID_FUNCTION_RATE Action */
typedef struct _MPI2_RAID_ACTION_RATE_DATA {
U8 RateToChange; /*0x00 */
U8 RateOrMode; /*0x01 */
U16 DataScrubDuration; /*0x02 */
} MPI2_RAID_ACTION_RATE_DATA, *PTR_MPI2_RAID_ACTION_RATE_DATA,
Mpi2RaidActionRateData_t, *pMpi2RaidActionRateData_t;
#define MPI2_RAID_ACTION_SET_RATE_RESYNC (0x00)
#define MPI2_RAID_ACTION_SET_RATE_DATA_SCRUB (0x01)
#define MPI2_RAID_ACTION_SET_RATE_POWERSAVE_MODE (0x02)
/*ActionDataWord for MPI2_RAID_ACTION_START_RAID_FUNCTION Action */
typedef struct _MPI2_RAID_ACTION_START_RAID_FUNCTION {
U8 RAIDFunction; /*0x00 */
U8 Flags; /*0x01 */
U16 Reserved1; /*0x02 */
} MPI2_RAID_ACTION_START_RAID_FUNCTION,
*PTR_MPI2_RAID_ACTION_START_RAID_FUNCTION,
Mpi2RaidActionStartRaidFunction_t,
*pMpi2RaidActionStartRaidFunction_t;
/*defines for the RAIDFunction field */
#define MPI2_RAID_ACTION_START_BACKGROUND_INIT (0x00)
#define MPI2_RAID_ACTION_START_ONLINE_CAP_EXPANSION (0x01)
#define MPI2_RAID_ACTION_START_CONSISTENCY_CHECK (0x02)
/*defines for the Flags field */
#define MPI2_RAID_ACTION_START_NEW (0x00)
#define MPI2_RAID_ACTION_START_RESUME (0x01)
/*ActionDataWord for MPI2_RAID_ACTION_STOP_RAID_FUNCTION Action */
typedef struct _MPI2_RAID_ACTION_STOP_RAID_FUNCTION {
U8 RAIDFunction; /*0x00 */
U8 Flags; /*0x01 */
U16 Reserved1; /*0x02 */
} MPI2_RAID_ACTION_STOP_RAID_FUNCTION,
*PTR_MPI2_RAID_ACTION_STOP_RAID_FUNCTION,
Mpi2RaidActionStopRaidFunction_t,
*pMpi2RaidActionStopRaidFunction_t;
/*defines for the RAIDFunction field */
#define MPI2_RAID_ACTION_STOP_BACKGROUND_INIT (0x00)
#define MPI2_RAID_ACTION_STOP_ONLINE_CAP_EXPANSION (0x01)
#define MPI2_RAID_ACTION_STOP_CONSISTENCY_CHECK (0x02)
/*defines for the Flags field */
#define MPI2_RAID_ACTION_STOP_ABORT (0x00)
#define MPI2_RAID_ACTION_STOP_PAUSE (0x01)
/*ActionDataWord for MPI2_RAID_ACTION_CREATE_HOT_SPARE Action */
typedef struct _MPI2_RAID_ACTION_HOT_SPARE {
U8 HotSparePool; /*0x00 */
U8 Reserved1; /*0x01 */
U16 DevHandle; /*0x02 */
} MPI2_RAID_ACTION_HOT_SPARE, *PTR_MPI2_RAID_ACTION_HOT_SPARE,
Mpi2RaidActionHotSpare_t, *pMpi2RaidActionHotSpare_t;
/*ActionDataWord for MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE Action */
typedef struct _MPI2_RAID_ACTION_FW_UPDATE_MODE {
U8 Flags; /*0x00 */
U8 DeviceFirmwareUpdateModeTimeout; /*0x01 */
U16 Reserved1; /*0x02 */
} MPI2_RAID_ACTION_FW_UPDATE_MODE,
*PTR_MPI2_RAID_ACTION_FW_UPDATE_MODE,
Mpi2RaidActionFwUpdateMode_t,
*pMpi2RaidActionFwUpdateMode_t;
/*ActionDataWord defines for use with
*MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE action */
#define MPI2_RAID_ACTION_ADATA_DISABLE_FW_UPDATE (0x00)
#define MPI2_RAID_ACTION_ADATA_ENABLE_FW_UPDATE (0x01)
typedef union _MPI2_RAID_ACTION_DATA {
U32 Word;
MPI2_RAID_ACTION_RATE_DATA Rates;
MPI2_RAID_ACTION_START_RAID_FUNCTION StartRaidFunction;
MPI2_RAID_ACTION_STOP_RAID_FUNCTION StopRaidFunction;
MPI2_RAID_ACTION_HOT_SPARE HotSpare;
MPI2_RAID_ACTION_FW_UPDATE_MODE FwUpdateMode;
} MPI2_RAID_ACTION_DATA, *PTR_MPI2_RAID_ACTION_DATA,
Mpi2RaidActionData_t, *pMpi2RaidActionData_t;
/*RAID Action Request Message */
typedef struct _MPI2_RAID_ACTION_REQUEST {
U8 Action; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 VolDevHandle; /*0x04 */
U8 PhysDiskNum; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved2; /*0x0A */
U32 Reserved3; /*0x0C */
MPI2_RAID_ACTION_DATA ActionDataWord; /*0x10 */
MPI2_SGE_SIMPLE_UNION ActionDataSGE; /*0x14 */
} MPI2_RAID_ACTION_REQUEST, *PTR_MPI2_RAID_ACTION_REQUEST,
Mpi2RaidActionRequest_t, *pMpi2RaidActionRequest_t;
/*RAID Action request Action values */
#define MPI2_RAID_ACTION_INDICATOR_STRUCT (0x01)
#define MPI2_RAID_ACTION_CREATE_VOLUME (0x02)
#define MPI2_RAID_ACTION_DELETE_VOLUME (0x03)
#define MPI2_RAID_ACTION_DISABLE_ALL_VOLUMES (0x04)
#define MPI2_RAID_ACTION_ENABLE_ALL_VOLUMES (0x05)
#define MPI2_RAID_ACTION_PHYSDISK_OFFLINE (0x0A)
#define MPI2_RAID_ACTION_PHYSDISK_ONLINE (0x0B)
#define MPI2_RAID_ACTION_FAIL_PHYSDISK (0x0F)
#define MPI2_RAID_ACTION_ACTIVATE_VOLUME (0x11)
#define MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE (0x15)
#define MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE (0x17)
#define MPI2_RAID_ACTION_SET_VOLUME_NAME (0x18)
#define MPI2_RAID_ACTION_SET_RAID_FUNCTION_RATE (0x19)
#define MPI2_RAID_ACTION_ENABLE_FAILED_VOLUME (0x1C)
#define MPI2_RAID_ACTION_CREATE_HOT_SPARE (0x1D)
#define MPI2_RAID_ACTION_DELETE_HOT_SPARE (0x1E)
#define MPI2_RAID_ACTION_SYSTEM_SHUTDOWN_INITIATED (0x20)
#define MPI2_RAID_ACTION_START_RAID_FUNCTION (0x21)
#define MPI2_RAID_ACTION_STOP_RAID_FUNCTION (0x22)
#define MPI2_RAID_ACTION_COMPATIBILITY_CHECK (0x23)
#define MPI2_RAID_ACTION_PHYSDISK_HIDDEN (0x24)
#define MPI2_RAID_ACTION_MIN_PRODUCT_SPECIFIC (0x80)
#define MPI2_RAID_ACTION_MAX_PRODUCT_SPECIFIC (0xFF)
/*RAID Volume Creation Structure */
/*
*The following define can be customized for the targeted product.
*/
#ifndef MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS
#define MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS (1)
#endif
typedef struct _MPI2_RAID_VOLUME_PHYSDISK {
U8 RAIDSetNum; /*0x00 */
U8 PhysDiskMap; /*0x01 */
U16 PhysDiskDevHandle; /*0x02 */
} MPI2_RAID_VOLUME_PHYSDISK, *PTR_MPI2_RAID_VOLUME_PHYSDISK,
Mpi2RaidVolumePhysDisk_t, *pMpi2RaidVolumePhysDisk_t;
/*defines for the PhysDiskMap field */
#define MPI2_RAIDACTION_PHYSDISK_PRIMARY (0x01)
#define MPI2_RAIDACTION_PHYSDISK_SECONDARY (0x02)
typedef struct _MPI2_RAID_VOLUME_CREATION_STRUCT {
U8 NumPhysDisks; /*0x00 */
U8 VolumeType; /*0x01 */
U16 Reserved1; /*0x02 */
U32 VolumeCreationFlags; /*0x04 */
U32 VolumeSettings; /*0x08 */
U8 Reserved2; /*0x0C */
U8 ResyncRate; /*0x0D */
U16 DataScrubDuration; /*0x0E */
U64 VolumeMaxLBA; /*0x10 */
U32 StripeSize; /*0x18 */
U8 Name[16]; /*0x1C */
MPI2_RAID_VOLUME_PHYSDISK
PhysDisk[MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS]; /*0x2C */
} MPI2_RAID_VOLUME_CREATION_STRUCT,
*PTR_MPI2_RAID_VOLUME_CREATION_STRUCT,
Mpi2RaidVolumeCreationStruct_t,
*pMpi2RaidVolumeCreationStruct_t;
/*use MPI2_RAID_VOL_TYPE_ defines from mpi2_cnfg.h for VolumeType */
/*defines for the VolumeCreationFlags field */
#define MPI2_RAID_VOL_CREATION_DEFAULT_SETTINGS (0x80000000)
#define MPI2_RAID_VOL_CREATION_BACKGROUND_INIT (0x00000004)
#define MPI2_RAID_VOL_CREATION_LOW_LEVEL_INIT (0x00000002)
#define MPI2_RAID_VOL_CREATION_MIGRATE_DATA (0x00000001)
/*The following is an obsolete define.
*It must be shifted left 24 bits in order to set the proper bit.
*/
#define MPI2_RAID_VOL_CREATION_USE_DEFAULT_SETTINGS (0x80)
/*RAID Online Capacity Expansion Structure */
typedef struct _MPI2_RAID_ONLINE_CAPACITY_EXPANSION {
U32 Flags; /*0x00 */
U16 DevHandle0; /*0x04 */
U16 Reserved1; /*0x06 */
U16 DevHandle1; /*0x08 */
U16 Reserved2; /*0x0A */
} MPI2_RAID_ONLINE_CAPACITY_EXPANSION,
*PTR_MPI2_RAID_ONLINE_CAPACITY_EXPANSION,
Mpi2RaidOnlineCapacityExpansion_t,
*pMpi2RaidOnlineCapacityExpansion_t;
/*RAID Compatibility Input Structure */
typedef struct _MPI2_RAID_COMPATIBILITY_INPUT_STRUCT {
U16 SourceDevHandle; /*0x00 */
U16 CandidateDevHandle; /*0x02 */
U32 Flags; /*0x04 */
U32 Reserved1; /*0x08 */
U32 Reserved2; /*0x0C */
} MPI2_RAID_COMPATIBILITY_INPUT_STRUCT,
*PTR_MPI2_RAID_COMPATIBILITY_INPUT_STRUCT,
Mpi2RaidCompatibilityInputStruct_t,
*pMpi2RaidCompatibilityInputStruct_t;
/*defines for RAID Compatibility Structure Flags field */
#define MPI2_RAID_COMPAT_SOURCE_IS_VOLUME_FLAG (0x00000002)
#define MPI2_RAID_COMPAT_REPORT_SOURCE_INFO_FLAG (0x00000001)
/*RAID Volume Indicator Structure */
typedef struct _MPI2_RAID_VOL_INDICATOR {
U64 TotalBlocks; /*0x00 */
U64 BlocksRemaining; /*0x08 */
U32 Flags; /*0x10 */
} MPI2_RAID_VOL_INDICATOR, *PTR_MPI2_RAID_VOL_INDICATOR,
Mpi2RaidVolIndicator_t, *pMpi2RaidVolIndicator_t;
/*defines for RAID Volume Indicator Flags field */
#define MPI2_RAID_VOL_FLAGS_OP_MASK (0x0000000F)
#define MPI2_RAID_VOL_FLAGS_OP_BACKGROUND_INIT (0x00000000)
#define MPI2_RAID_VOL_FLAGS_OP_ONLINE_CAP_EXPANSION (0x00000001)
#define MPI2_RAID_VOL_FLAGS_OP_CONSISTENCY_CHECK (0x00000002)
#define MPI2_RAID_VOL_FLAGS_OP_RESYNC (0x00000003)
#define MPI2_RAID_VOL_FLAGS_OP_MDC (0x00000004)
/*RAID Compatibility Result Structure */
typedef struct _MPI2_RAID_COMPATIBILITY_RESULT_STRUCT {
U8 State; /*0x00 */
U8 Reserved1; /*0x01 */
U16 Reserved2; /*0x02 */
U32 GenericAttributes; /*0x04 */
U32 OEMSpecificAttributes; /*0x08 */
U32 Reserved3; /*0x0C */
U32 Reserved4; /*0x10 */
} MPI2_RAID_COMPATIBILITY_RESULT_STRUCT,
*PTR_MPI2_RAID_COMPATIBILITY_RESULT_STRUCT,
Mpi2RaidCompatibilityResultStruct_t,
*pMpi2RaidCompatibilityResultStruct_t;
/*defines for RAID Compatibility Result Structure State field */
#define MPI2_RAID_COMPAT_STATE_COMPATIBLE (0x00)
#define MPI2_RAID_COMPAT_STATE_NOT_COMPATIBLE (0x01)
/*defines for RAID Compatibility Result Structure GenericAttributes field */
#define MPI2_RAID_COMPAT_GENATTRIB_4K_SECTOR (0x00000010)
#define MPI2_RAID_COMPAT_GENATTRIB_MEDIA_MASK (0x0000000C)
#define MPI2_RAID_COMPAT_GENATTRIB_SOLID_STATE_DRIVE (0x00000008)
#define MPI2_RAID_COMPAT_GENATTRIB_HARD_DISK_DRIVE (0x00000004)
#define MPI2_RAID_COMPAT_GENATTRIB_PROTOCOL_MASK (0x00000003)
#define MPI2_RAID_COMPAT_GENATTRIB_SAS_PROTOCOL (0x00000002)
#define MPI2_RAID_COMPAT_GENATTRIB_SATA_PROTOCOL (0x00000001)
/*RAID Action Reply ActionData union */
typedef union _MPI2_RAID_ACTION_REPLY_DATA {
U32 Word[5];
MPI2_RAID_VOL_INDICATOR RaidVolumeIndicator;
U16 VolDevHandle;
U8 VolumeState;
U8 PhysDiskNum;
MPI2_RAID_COMPATIBILITY_RESULT_STRUCT RaidCompatibilityResult;
} MPI2_RAID_ACTION_REPLY_DATA, *PTR_MPI2_RAID_ACTION_REPLY_DATA,
Mpi2RaidActionReplyData_t, *pMpi2RaidActionReplyData_t;
/*use MPI2_RAIDVOL0_SETTING_ defines from mpi2_cnfg.h for
*MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE action */
/*RAID Action Reply Message */
typedef struct _MPI2_RAID_ACTION_REPLY {
U8 Action; /*0x00 */
U8 Reserved1; /*0x01 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 VolDevHandle; /*0x04 */
U8 PhysDiskNum; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved2; /*0x0A */
U16 Reserved3; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
MPI2_RAID_ACTION_REPLY_DATA ActionData; /*0x14 */
} MPI2_RAID_ACTION_REPLY, *PTR_MPI2_RAID_ACTION_REPLY,
Mpi2RaidActionReply_t, *pMpi2RaidActionReply_t;
#endif
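
To make the structures above concrete, here is an editor's sketch (not part of the header) of decoding the volume indicator returned for a MPI2_RAID_ACTION_INDICATOR_STRUCT action; it assumes linux/math64.h for div64_u64(), and the code that issues the request and receives the reply is out of scope:

static u8 raid_vol_percent_complete(Mpi2RaidVolIndicator_t *ind)
{
	u64 total = le64_to_cpu(ind->TotalBlocks);
	u64 remaining = le64_to_cpu(ind->BlocksRemaining);

	/* MPI2_RAID_VOL_FLAGS_OP_MASK in ind->Flags identifies which
	 * background operation (resync, init, check, ...) the counts
	 * describe. */
	if (!total)
		return 100;
	return 100 - div64_u64(remaining * 100, total);
}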


@@ -0,0 +1,295 @@
/*
* Copyright (c) 2000-2012 LSI Corporation.
*
*
* Name: mpi2_sas.h
* Title: MPI Serial Attached SCSI structures and definitions
* Creation Date: February 9, 2007
*
* mpi2_sas.h Version: 02.00.07
*
* NOTE: Names (typedefs, defines, etc.) beginning with an MPI25 or Mpi25
* prefix are for use only on MPI v2.5 products, and must not be used
* with MPI v2.0 products. Unless otherwise noted, names beginning with
* MPI2 or Mpi2 are for use with both MPI v2.0 and MPI v2.5 products.
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* 06-26-07 02.00.01 Added Clear All Persistent Operation to SAS IO Unit
* Control Request.
* 10-02-08 02.00.02 Added Set IOC Parameter Operation to SAS IO Unit Control
* Request.
* 10-28-09 02.00.03 Changed the type of SGL in MPI2_SATA_PASSTHROUGH_REQUEST
* to MPI2_SGE_IO_UNION since it supports chained SGLs.
* 05-12-10 02.00.04 Modified some comments.
* 08-11-10 02.00.05 Added NCQ operations to SAS IO Unit Control.
* 11-18-11 02.00.06 Incorporating additions for MPI v2.5.
* 07-10-12 02.00.07 Added MPI2_SATA_PT_SGE_UNION for use in the SATA
* Passthrough Request message.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_SAS_H
#define MPI2_SAS_H
/*
*Values for SASStatus.
*/
#define MPI2_SASSTATUS_SUCCESS (0x00)
#define MPI2_SASSTATUS_UNKNOWN_ERROR (0x01)
#define MPI2_SASSTATUS_INVALID_FRAME (0x02)
#define MPI2_SASSTATUS_UTC_BAD_DEST (0x03)
#define MPI2_SASSTATUS_UTC_BREAK_RECEIVED (0x04)
#define MPI2_SASSTATUS_UTC_CONNECT_RATE_NOT_SUPPORTED (0x05)
#define MPI2_SASSTATUS_UTC_PORT_LAYER_REQUEST (0x06)
#define MPI2_SASSTATUS_UTC_PROTOCOL_NOT_SUPPORTED (0x07)
#define MPI2_SASSTATUS_UTC_STP_RESOURCES_BUSY (0x08)
#define MPI2_SASSTATUS_UTC_WRONG_DESTINATION (0x09)
#define MPI2_SASSTATUS_SHORT_INFORMATION_UNIT (0x0A)
#define MPI2_SASSTATUS_LONG_INFORMATION_UNIT (0x0B)
#define MPI2_SASSTATUS_XFER_RDY_INCORRECT_WRITE_DATA (0x0C)
#define MPI2_SASSTATUS_XFER_RDY_REQUEST_OFFSET_ERROR (0x0D)
#define MPI2_SASSTATUS_XFER_RDY_NOT_EXPECTED (0x0E)
#define MPI2_SASSTATUS_DATA_INCORRECT_DATA_LENGTH (0x0F)
#define MPI2_SASSTATUS_DATA_TOO_MUCH_READ_DATA (0x10)
#define MPI2_SASSTATUS_DATA_OFFSET_ERROR (0x11)
#define MPI2_SASSTATUS_SDSF_NAK_RECEIVED (0x12)
#define MPI2_SASSTATUS_SDSF_CONNECTION_FAILED (0x13)
#define MPI2_SASSTATUS_INITIATOR_RESPONSE_TIMEOUT (0x14)
/*
*Values for the SAS DeviceInfo field used in SAS Device Status Change Event
*data and SAS Configuration pages.
*/
#define MPI2_SAS_DEVICE_INFO_SEP (0x00004000)
#define MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE (0x00002000)
#define MPI2_SAS_DEVICE_INFO_LSI_DEVICE (0x00001000)
#define MPI2_SAS_DEVICE_INFO_DIRECT_ATTACH (0x00000800)
#define MPI2_SAS_DEVICE_INFO_SSP_TARGET (0x00000400)
#define MPI2_SAS_DEVICE_INFO_STP_TARGET (0x00000200)
#define MPI2_SAS_DEVICE_INFO_SMP_TARGET (0x00000100)
#define MPI2_SAS_DEVICE_INFO_SATA_DEVICE (0x00000080)
#define MPI2_SAS_DEVICE_INFO_SSP_INITIATOR (0x00000040)
#define MPI2_SAS_DEVICE_INFO_STP_INITIATOR (0x00000020)
#define MPI2_SAS_DEVICE_INFO_SMP_INITIATOR (0x00000010)
#define MPI2_SAS_DEVICE_INFO_SATA_HOST (0x00000008)
#define MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE (0x00000007)
#define MPI2_SAS_DEVICE_INFO_NO_DEVICE (0x00000000)
#define MPI2_SAS_DEVICE_INFO_END_DEVICE (0x00000001)
#define MPI2_SAS_DEVICE_INFO_EDGE_EXPANDER (0x00000002)
#define MPI2_SAS_DEVICE_INFO_FANOUT_EXPANDER (0x00000003)
/*****************************************************************************
*
* SAS Messages
*
*****************************************************************************/
/****************************************************************************
* SMP Passthrough messages
****************************************************************************/
/*SMP Passthrough Request Message */
typedef struct _MPI2_SMP_PASSTHROUGH_REQUEST {
U8 PassthroughFlags; /*0x00 */
U8 PhysicalPort; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 RequestDataLength; /*0x04 */
U8 SGLFlags; /*0x06*//*MPI v2.0 only. Reserved on MPI v2.5*/
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved1; /*0x0A */
U32 Reserved2; /*0x0C */
U64 SASAddress; /*0x10 */
U32 Reserved3; /*0x18 */
U32 Reserved4; /*0x1C */
MPI2_SIMPLE_SGE_UNION SGL;/*0x20 */
} MPI2_SMP_PASSTHROUGH_REQUEST, *PTR_MPI2_SMP_PASSTHROUGH_REQUEST,
Mpi2SmpPassthroughRequest_t, *pMpi2SmpPassthroughRequest_t;
/*values for PassthroughFlags field */
#define MPI2_SMP_PT_REQ_PT_FLAGS_IMMEDIATE (0x80)
/*MPI v2.0: use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
/*SMP Passthrough Reply Message */
typedef struct _MPI2_SMP_PASSTHROUGH_REPLY {
U8 PassthroughFlags; /*0x00 */
U8 PhysicalPort; /*0x01 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 ResponseDataLength; /*0x04 */
U8 SGLFlags; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved1; /*0x0A */
U8 Reserved2; /*0x0C */
U8 SASStatus; /*0x0D */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
U32 Reserved3; /*0x14 */
U8 ResponseData[4]; /*0x18 */
} MPI2_SMP_PASSTHROUGH_REPLY, *PTR_MPI2_SMP_PASSTHROUGH_REPLY,
Mpi2SmpPassthroughReply_t, *pMpi2SmpPassthroughReply_t;
/*values for PassthroughFlags field */
#define MPI2_SMP_PT_REPLY_PT_FLAGS_IMMEDIATE (0x80)
/*values for SASStatus field are at the top of this file */
/****************************************************************************
* SATA Passthrough messages
****************************************************************************/
typedef union _MPI2_SATA_PT_SGE_UNION {
MPI2_SGE_SIMPLE_UNION MpiSimple; /*MPI v2.0 only */
MPI2_SGE_CHAIN_UNION MpiChain; /*MPI v2.0 only */
MPI2_IEEE_SGE_SIMPLE_UNION IeeeSimple;
MPI2_IEEE_SGE_CHAIN_UNION IeeeChain; /*MPI v2.0 only */
MPI25_IEEE_SGE_CHAIN64 IeeeChain64; /*MPI v2.5 only */
} MPI2_SATA_PT_SGE_UNION, *PTR_MPI2_SATA_PT_SGE_UNION,
Mpi2SataPTSGEUnion_t, *pMpi2SataPTSGEUnion_t;
/*SATA Passthrough Request Message */
typedef struct _MPI2_SATA_PASSTHROUGH_REQUEST {
U16 DevHandle; /*0x00 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 PassthroughFlags; /*0x04 */
U8 SGLFlags; /*0x06*//*MPI v2.0 only. Reserved on MPI v2.5*/
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved1; /*0x0A */
U32 Reserved2; /*0x0C */
U32 Reserved3; /*0x10 */
U32 Reserved4; /*0x14 */
U32 DataLength; /*0x18 */
U8 CommandFIS[20]; /*0x1C */
MPI2_SATA_PT_SGE_UNION SGL;/*0x30*//*MPI v2.5: IEEE 64 elements only*/
} MPI2_SATA_PASSTHROUGH_REQUEST, *PTR_MPI2_SATA_PASSTHROUGH_REQUEST,
Mpi2SataPassthroughRequest_t,
*pMpi2SataPassthroughRequest_t;
/*values for PassthroughFlags field */
#define MPI2_SATA_PT_REQ_PT_FLAGS_EXECUTE_DIAG (0x0100)
#define MPI2_SATA_PT_REQ_PT_FLAGS_DMA (0x0020)
#define MPI2_SATA_PT_REQ_PT_FLAGS_PIO (0x0010)
#define MPI2_SATA_PT_REQ_PT_FLAGS_UNSPECIFIED_VU (0x0004)
#define MPI2_SATA_PT_REQ_PT_FLAGS_WRITE (0x0002)
#define MPI2_SATA_PT_REQ_PT_FLAGS_READ (0x0001)
/*MPI v2.0: use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
/*SATA Passthrough Reply Message */
typedef struct _MPI2_SATA_PASSTHROUGH_REPLY {
U16 DevHandle; /*0x00 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 PassthroughFlags; /*0x04 */
U8 SGLFlags; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved1; /*0x0A */
U8 Reserved2; /*0x0C */
U8 SASStatus; /*0x0D */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
U8 StatusFIS[20]; /*0x14 */
U32 StatusControlRegisters; /*0x28 */
U32 TransferCount; /*0x2C */
} MPI2_SATA_PASSTHROUGH_REPLY, *PTR_MPI2_SATA_PASSTHROUGH_REPLY,
Mpi2SataPassthroughReply_t, *pMpi2SataPassthroughReply_t;
/*values for SASStatus field are at the top of this file */
/****************************************************************************
* SAS IO Unit Control messages
****************************************************************************/
/*SAS IO Unit Control Request Message */
typedef struct _MPI2_SAS_IOUNIT_CONTROL_REQUEST {
U8 Operation; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 DevHandle; /*0x04 */
U8 IOCParameter; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved3; /*0x0A */
U16 Reserved4; /*0x0C */
U8 PhyNum; /*0x0E */
U8 PrimFlags; /*0x0F */
U32 Primitive; /*0x10 */
U8 LookupMethod; /*0x14 */
U8 Reserved5; /*0x15 */
U16 SlotNumber; /*0x16 */
U64 LookupAddress; /*0x18 */
U32 IOCParameterValue; /*0x20 */
U32 Reserved7; /*0x24 */
U32 Reserved8; /*0x28 */
} MPI2_SAS_IOUNIT_CONTROL_REQUEST,
*PTR_MPI2_SAS_IOUNIT_CONTROL_REQUEST,
Mpi2SasIoUnitControlRequest_t,
*pMpi2SasIoUnitControlRequest_t;
/*values for the Operation field */
#define MPI2_SAS_OP_CLEAR_ALL_PERSISTENT (0x02)
#define MPI2_SAS_OP_PHY_LINK_RESET (0x06)
#define MPI2_SAS_OP_PHY_HARD_RESET (0x07)
#define MPI2_SAS_OP_PHY_CLEAR_ERROR_LOG (0x08)
#define MPI2_SAS_OP_SEND_PRIMITIVE (0x0A)
#define MPI2_SAS_OP_FORCE_FULL_DISCOVERY (0x0B)
#define MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL (0x0C)
#define MPI2_SAS_OP_REMOVE_DEVICE (0x0D)
#define MPI2_SAS_OP_LOOKUP_MAPPING (0x0E)
#define MPI2_SAS_OP_SET_IOC_PARAMETER (0x0F)
#define MPI25_SAS_OP_ENABLE_FP_DEVICE (0x10)
#define MPI25_SAS_OP_DISABLE_FP_DEVICE (0x11)
#define MPI25_SAS_OP_ENABLE_FP_ALL (0x12)
#define MPI25_SAS_OP_DISABLE_FP_ALL (0x13)
#define MPI2_SAS_OP_DEV_ENABLE_NCQ (0x14)
#define MPI2_SAS_OP_DEV_DISABLE_NCQ (0x15)
#define MPI2_SAS_OP_PRODUCT_SPECIFIC_MIN (0x80)
/*values for the PrimFlags field */
#define MPI2_SAS_PRIMFLAGS_SINGLE (0x08)
#define MPI2_SAS_PRIMFLAGS_TRIPLE (0x02)
#define MPI2_SAS_PRIMFLAGS_REDUNDANT (0x01)
/*values for the LookupMethod field */
#define MPI2_SAS_LOOKUP_METHOD_SAS_ADDRESS (0x01)
#define MPI2_SAS_LOOKUP_METHOD_SAS_ENCLOSURE_SLOT (0x02)
#define MPI2_SAS_LOOKUP_METHOD_SAS_DEVICE_NAME (0x03)
/*SAS IO Unit Control Reply Message */
typedef struct _MPI2_SAS_IOUNIT_CONTROL_REPLY {
U8 Operation; /*0x00 */
U8 Reserved1; /*0x01 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 DevHandle; /*0x04 */
U8 IOCParameter; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved3; /*0x0A */
U16 Reserved4; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
} MPI2_SAS_IOUNIT_CONTROL_REPLY,
*PTR_MPI2_SAS_IOUNIT_CONTROL_REPLY,
Mpi2SasIoUnitControlReply_t, *pMpi2SasIoUnitControlReply_t;
#endif
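
As a usage sketch (editor's illustration, not taken from the driver), an IO Unit Control operation reduces to filling the request frame; MPI2_FUNCTION_SAS_IO_UNIT_CONTROL comes from mpi2.h, and the routine that actually posts the frame to the IOC is assumed:

static void prepare_phy_hard_reset(Mpi2SasIoUnitControlRequest_t *mpi_request,
	u8 phy_num)
{
	memset(mpi_request, 0, sizeof(*mpi_request));
	mpi_request->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
	mpi_request->Operation = MPI2_SAS_OP_PHY_HARD_RESET;
	mpi_request->PhyNum = phy_num;	/* phy to hard-reset */
}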


@@ -0,0 +1,437 @@
/*
* Copyright (c) 2000-2012 LSI Corporation.
*
*
* Name: mpi2_tool.h
* Title: MPI diagnostic tool structures and definitions
* Creation Date: March 26, 2007
*
* mpi2_tool.h Version: 02.00.09
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* 12-18-07 02.00.01 Added Diagnostic Buffer Post and Diagnostic Release
* structures and defines.
* 02-29-08 02.00.02 Modified various names to make them 32-character unique.
* 05-06-09 02.00.03 Added ISTWI Read Write Tool and Diagnostic CLI Tool.
* 07-30-09 02.00.04 Added ExtendedType field to DiagnosticBufferPost request
* and reply messages.
* Added MPI2_DIAG_BUF_TYPE_EXTENDED.
* Incremented MPI2_DIAG_BUF_TYPE_COUNT.
* 05-12-10 02.00.05 Added Diagnostic Data Upload tool.
* 08-11-10 02.00.06 Added defines that were missing for Diagnostic Buffer
* Post Request.
* 05-25-11 02.00.07 Added Flags field and related defines to
* MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST.
* 11-18-11 02.00.08 Incorporating additions for MPI v2.5.
* 07-10-12 02.00.09 Add MPI v2.5 Toolbox Diagnostic CLI Tool Request
* message.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_TOOL_H
#define MPI2_TOOL_H
/*****************************************************************************
*
* Toolbox Messages
*
*****************************************************************************/
/*defines for the Tools */
#define MPI2_TOOLBOX_CLEAN_TOOL (0x00)
#define MPI2_TOOLBOX_MEMORY_MOVE_TOOL (0x01)
#define MPI2_TOOLBOX_DIAG_DATA_UPLOAD_TOOL (0x02)
#define MPI2_TOOLBOX_ISTWI_READ_WRITE_TOOL (0x03)
#define MPI2_TOOLBOX_BEACON_TOOL (0x05)
#define MPI2_TOOLBOX_DIAGNOSTIC_CLI_TOOL (0x06)
/****************************************************************************
* Toolbox reply
****************************************************************************/
typedef struct _MPI2_TOOLBOX_REPLY {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U16 Reserved5; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
} MPI2_TOOLBOX_REPLY, *PTR_MPI2_TOOLBOX_REPLY,
Mpi2ToolboxReply_t, *pMpi2ToolboxReply_t;
/****************************************************************************
* Toolbox Clean Tool request
****************************************************************************/
typedef struct _MPI2_TOOLBOX_CLEAN_REQUEST {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U32 Flags; /*0x0C */
} MPI2_TOOLBOX_CLEAN_REQUEST, *PTR_MPI2_TOOLBOX_CLEAN_REQUEST,
Mpi2ToolboxCleanRequest_t, *pMpi2ToolboxCleanRequest_t;
/*values for the Flags field */
#define MPI2_TOOLBOX_CLEAN_BOOT_SERVICES (0x80000000)
#define MPI2_TOOLBOX_CLEAN_PERSIST_MANUFACT_PAGES (0x40000000)
#define MPI2_TOOLBOX_CLEAN_OTHER_PERSIST_PAGES (0x20000000)
#define MPI2_TOOLBOX_CLEAN_FW_CURRENT (0x10000000)
#define MPI2_TOOLBOX_CLEAN_FW_BACKUP (0x08000000)
#define MPI2_TOOLBOX_CLEAN_MEGARAID (0x02000000)
#define MPI2_TOOLBOX_CLEAN_INITIALIZATION (0x01000000)
#define MPI2_TOOLBOX_CLEAN_FLASH (0x00000004)
#define MPI2_TOOLBOX_CLEAN_SEEPROM (0x00000002)
#define MPI2_TOOLBOX_CLEAN_NVSRAM (0x00000001)
/****************************************************************************
* Toolbox Memory Move request
****************************************************************************/
typedef struct _MPI2_TOOLBOX_MEM_MOVE_REQUEST {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
MPI2_SGE_SIMPLE_UNION SGL; /*0x0C */
} MPI2_TOOLBOX_MEM_MOVE_REQUEST, *PTR_MPI2_TOOLBOX_MEM_MOVE_REQUEST,
Mpi2ToolboxMemMoveRequest_t, *pMpi2ToolboxMemMoveRequest_t;
/****************************************************************************
* Toolbox Diagnostic Data Upload request
****************************************************************************/
typedef struct _MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U8 SGLFlags; /*0x0C */
U8 Reserved5; /*0x0D */
U16 Reserved6; /*0x0E */
U32 Flags; /*0x10 */
U32 DataLength; /*0x14 */
MPI2_SGE_SIMPLE_UNION SGL; /*0x18 */
} MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST,
*PTR_MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST,
Mpi2ToolboxDiagDataUploadRequest_t,
*pMpi2ToolboxDiagDataUploadRequest_t;
/*use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
typedef struct _MPI2_DIAG_DATA_UPLOAD_HEADER {
U32 DiagDataLength; /*00h */
U8 FormatCode; /*04h */
U8 Reserved1; /*05h */
U16 Reserved2; /*06h */
} MPI2_DIAG_DATA_UPLOAD_HEADER, *PTR_MPI2_DIAG_DATA_UPLOAD_HEADER,
Mpi2DiagDataUploadHeader_t, *pMpi2DiagDataUploadHeader_t;
/****************************************************************************
* Toolbox ISTWI Read Write Tool
****************************************************************************/
/*Toolbox ISTWI Read Write Tool request message */
typedef struct _MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U32 Reserved5; /*0x0C */
U32 Reserved6; /*0x10 */
U8 DevIndex; /*0x14 */
U8 Action; /*0x15 */
U8 SGLFlags; /*0x16 */
U8 Flags; /*0x17 */
U16 TxDataLength; /*0x18 */
U16 RxDataLength; /*0x1A */
U32 Reserved8; /*0x1C */
U32 Reserved9; /*0x20 */
U32 Reserved10; /*0x24 */
U32 Reserved11; /*0x28 */
U32 Reserved12; /*0x2C */
MPI2_SGE_SIMPLE_UNION SGL; /*0x30 */
} MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST,
*PTR_MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST,
Mpi2ToolboxIstwiReadWriteRequest_t,
*pMpi2ToolboxIstwiReadWriteRequest_t;
/*values for the Action field */
#define MPI2_TOOL_ISTWI_ACTION_READ_DATA (0x01)
#define MPI2_TOOL_ISTWI_ACTION_WRITE_DATA (0x02)
#define MPI2_TOOL_ISTWI_ACTION_SEQUENCE (0x03)
#define MPI2_TOOL_ISTWI_ACTION_RESERVE_BUS (0x10)
#define MPI2_TOOL_ISTWI_ACTION_RELEASE_BUS (0x11)
#define MPI2_TOOL_ISTWI_ACTION_RESET (0x12)
/*use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
/*values for the Flags field */
#define MPI2_TOOL_ISTWI_FLAG_AUTO_RESERVE_RELEASE (0x80)
#define MPI2_TOOL_ISTWI_FLAG_PAGE_ADDR_MASK (0x07)
/*Toolbox ISTWI Read Write Tool reply message */
typedef struct _MPI2_TOOLBOX_ISTWI_REPLY {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U16 Reserved5; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
U8 DevIndex; /*0x14 */
U8 Action; /*0x15 */
U8 IstwiStatus; /*0x16 */
U8 Reserved6; /*0x17 */
U16 TxDataCount; /*0x18 */
U16 RxDataCount; /*0x1A */
} MPI2_TOOLBOX_ISTWI_REPLY, *PTR_MPI2_TOOLBOX_ISTWI_REPLY,
Mpi2ToolboxIstwiReply_t, *pMpi2ToolboxIstwiReply_t;
/****************************************************************************
* Toolbox Beacon Tool request
****************************************************************************/
typedef struct _MPI2_TOOLBOX_BEACON_REQUEST {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U8 Reserved5; /*0x0C */
U8 PhysicalPort; /*0x0D */
U8 Reserved6; /*0x0E */
U8 Flags; /*0x0F */
} MPI2_TOOLBOX_BEACON_REQUEST, *PTR_MPI2_TOOLBOX_BEACON_REQUEST,
Mpi2ToolboxBeaconRequest_t, *pMpi2ToolboxBeaconRequest_t;
/*values for the Flags field */
#define MPI2_TOOLBOX_FLAGS_BEACONMODE_OFF (0x00)
#define MPI2_TOOLBOX_FLAGS_BEACONMODE_ON (0x01)
/****************************************************************************
* Toolbox Diagnostic CLI Tool
****************************************************************************/
#define MPI2_TOOLBOX_DIAG_CLI_CMD_LENGTH (0x5C)
/*MPI v2.0 Toolbox Diagnostic CLI Tool request message */
typedef struct _MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U8 SGLFlags; /*0x0C */
U8 Reserved5; /*0x0D */
U16 Reserved6; /*0x0E */
U32 DataLength; /*0x10 */
U8 DiagnosticCliCommand[MPI2_TOOLBOX_DIAG_CLI_CMD_LENGTH];/*0x14 */
MPI2_SGE_SIMPLE_UNION SGL; /*0x70 */
} MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST,
*PTR_MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST,
Mpi2ToolboxDiagnosticCliRequest_t,
*pMpi2ToolboxDiagnosticCliRequest_t;
/*use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
/*MPI v2.5 Toolbox Diagnostic CLI Tool request message */
typedef struct _MPI25_TOOLBOX_DIAGNOSTIC_CLI_REQUEST {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U32 Reserved5; /*0x0C */
U32 DataLength; /*0x10 */
U8 DiagnosticCliCommand[MPI2_TOOLBOX_DIAG_CLI_CMD_LENGTH];/*0x14 */
MPI25_SGE_IO_UNION SGL; /*0x70 */
} MPI25_TOOLBOX_DIAGNOSTIC_CLI_REQUEST,
*PTR_MPI25_TOOLBOX_DIAGNOSTIC_CLI_REQUEST,
Mpi25ToolboxDiagnosticCliRequest_t,
*pMpi25ToolboxDiagnosticCliRequest_t;
/*Toolbox Diagnostic CLI Tool reply message */
typedef struct _MPI2_TOOLBOX_DIAGNOSTIC_CLI_REPLY {
U8 Tool; /*0x00 */
U8 Reserved1; /*0x01 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U16 Reserved5; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
U32 ReturnedDataLength; /*0x14 */
} MPI2_TOOLBOX_DIAGNOSTIC_CLI_REPLY,
*PTR_MPI2_TOOLBOX_DIAG_CLI_REPLY,
Mpi2ToolboxDiagnosticCliReply_t,
*pMpi2ToolboxDiagnosticCliReply_t;
/*****************************************************************************
*
* Diagnostic Buffer Messages
*
*****************************************************************************/
/****************************************************************************
* Diagnostic Buffer Post request
****************************************************************************/
typedef struct _MPI2_DIAG_BUFFER_POST_REQUEST {
U8 ExtendedType; /*0x00 */
U8 BufferType; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U64 BufferAddress; /*0x0C */
U32 BufferLength; /*0x14 */
U32 Reserved5; /*0x18 */
U32 Reserved6; /*0x1C */
U32 Flags; /*0x20 */
U32 ProductSpecific[23]; /*0x24 */
} MPI2_DIAG_BUFFER_POST_REQUEST, *PTR_MPI2_DIAG_BUFFER_POST_REQUEST,
Mpi2DiagBufferPostRequest_t, *pMpi2DiagBufferPostRequest_t;
/*values for the ExtendedType field */
#define MPI2_DIAG_EXTENDED_TYPE_UTILIZATION (0x02)
/*values for the BufferType field */
#define MPI2_DIAG_BUF_TYPE_TRACE (0x00)
#define MPI2_DIAG_BUF_TYPE_SNAPSHOT (0x01)
#define MPI2_DIAG_BUF_TYPE_EXTENDED (0x02)
/*count of the number of buffer types */
#define MPI2_DIAG_BUF_TYPE_COUNT (0x03)
/*values for the Flags field */
#define MPI2_DIAG_BUF_FLAG_RELEASE_ON_FULL (0x00000002)
#define MPI2_DIAG_BUF_FLAG_IMMEDIATE_RELEASE (0x00000001)
/****************************************************************************
* Diagnostic Buffer Post reply
****************************************************************************/
typedef struct _MPI2_DIAG_BUFFER_POST_REPLY {
U8 ExtendedType; /*0x00 */
U8 BufferType; /*0x01 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U16 Reserved5; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
U32 TransferLength; /*0x14 */
} MPI2_DIAG_BUFFER_POST_REPLY, *PTR_MPI2_DIAG_BUFFER_POST_REPLY,
Mpi2DiagBufferPostReply_t, *pMpi2DiagBufferPostReply_t;
/****************************************************************************
* Diagnostic Release request
****************************************************************************/
typedef struct _MPI2_DIAG_RELEASE_REQUEST {
U8 Reserved1; /*0x00 */
U8 BufferType; /*0x01 */
U8 ChainOffset; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
} MPI2_DIAG_RELEASE_REQUEST, *PTR_MPI2_DIAG_RELEASE_REQUEST,
Mpi2DiagReleaseRequest_t, *pMpi2DiagReleaseRequest_t;
/****************************************************************************
* Diagnostic Buffer Post reply
****************************************************************************/
typedef struct _MPI2_DIAG_RELEASE_REPLY {
U8 Reserved1; /*0x00 */
U8 BufferType; /*0x01 */
U8 MsgLength; /*0x02 */
U8 Function; /*0x03 */
U16 Reserved2; /*0x04 */
U8 Reserved3; /*0x06 */
U8 MsgFlags; /*0x07 */
U8 VP_ID; /*0x08 */
U8 VF_ID; /*0x09 */
U16 Reserved4; /*0x0A */
U16 Reserved5; /*0x0C */
U16 IOCStatus; /*0x0E */
U32 IOCLogInfo; /*0x10 */
} MPI2_DIAG_RELEASE_REPLY, *PTR_MPI2_DIAG_RELEASE_REPLY,
Mpi2DiagReleaseReply_t, *pMpi2DiagReleaseReply_t;
#endif
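
By way of example (editor's sketch, with the submission path assumed), the Beacon tool above turns an enclosure LED on or off with a two-field request; MPI2_FUNCTION_TOOLBOX is defined in mpi2.h:

static void prepare_beacon(Mpi2ToolboxBeaconRequest_t *mpi_request,
	u8 physical_port, bool on)
{
	memset(mpi_request, 0, sizeof(*mpi_request));
	mpi_request->Function = MPI2_FUNCTION_TOOLBOX;
	mpi_request->Tool = MPI2_TOOLBOX_BEACON_TOOL;
	mpi_request->PhysicalPort = physical_port;
	mpi_request->Flags = on ? MPI2_TOOLBOX_FLAGS_BEACONMODE_ON :
			MPI2_TOOLBOX_FLAGS_BEACONMODE_OFF;
}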


@@ -0,0 +1,56 @@
/*
* Copyright (c) 2000-2007 LSI Corporation.
*
*
* Name: mpi2_type.h
* Title: MPI basic type definitions
* Creation Date: August 16, 2006
*
* mpi2_type.h Version: 02.00.00
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_TYPE_H
#define MPI2_TYPE_H
/*******************************************************************************
* Define MPI2_POINTER if it hasn't already been defined. By default,
* MPI2_POINTER is defined to be a near pointer. MPI2_POINTER can be
* defined as a far pointer by defining MPI2_POINTER as "far *" before
* this header file is included.
*/
/* the basic types may have already been included by mpi_type.h */
#ifndef MPI_TYPE_H
/*****************************************************************************
*
* Basic Types
*
*****************************************************************************/
typedef u8 U8;
typedef __le16 U16;
typedef __le32 U32;
typedef __le64 U64 __attribute__ ((aligned(4)));
/*****************************************************************************
*
* Pointer Types
*
*****************************************************************************/
typedef U8 *PU8;
typedef U16 *PU16;
typedef U32 *PU32;
typedef U64 *PU64;
#endif
#endif
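
Since U16/U32/U64 are wire-format little-endian types, host code must convert explicitly when touching any field built from them; a representative sketch (the request/reply variables and their fields stand in for any of the structures in these headers):

u16 dev_handle = le16_to_cpu(reply->DevHandle);
u64 sas_address = le64_to_cpu(request->SASAddress);
request->DataLength = cpu_to_le32(len);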

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,418 @@
/*
* Management Module Support for MPT (Message Passing Technology) based
* controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_ctl.h
* Copyright (C) 2012 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* NO WARRANTY
* THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
* LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
* solely responsible for determining the appropriateness of using and
* distributing the Program and assumes all risks associated with its
* exercise of rights under this Agreement, including but not limited to
* the risks and costs of program errors, damage to or loss of data,
* programs or equipment, and unavailability or interruption of operations.
* DISCLAIMER OF LIABILITY
* NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
* HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
* USA.
*/
#ifndef MPT3SAS_CTL_H_INCLUDED
#define MPT3SAS_CTL_H_INCLUDED
#ifdef __KERNEL__
#include <linux/miscdevice.h>
#endif
#ifndef MPT3SAS_MINOR
#define MPT3SAS_MINOR (MPT_MINOR + 2)
#endif
#define MPT3SAS_DEV_NAME "mpt3ctl"
#define MPT3_MAGIC_NUMBER 'L'
#define MPT3_IOCTL_DEFAULT_TIMEOUT (10) /* in seconds */
/**
* IOCTL opcodes
*/
#define MPT3IOCINFO _IOWR(MPT3_MAGIC_NUMBER, 17, \
struct mpt3_ioctl_iocinfo)
#define MPT3COMMAND _IOWR(MPT3_MAGIC_NUMBER, 20, \
struct mpt3_ioctl_command)
#ifdef CONFIG_COMPAT
#define MPT3COMMAND32 _IOWR(MPT3_MAGIC_NUMBER, 20, \
struct mpt3_ioctl_command32)
#endif
#define MPT3EVENTQUERY _IOWR(MPT3_MAGIC_NUMBER, 21, \
struct mpt3_ioctl_eventquery)
#define MPT3EVENTENABLE _IOWR(MPT3_MAGIC_NUMBER, 22, \
struct mpt3_ioctl_eventenable)
#define MPT3EVENTREPORT _IOWR(MPT3_MAGIC_NUMBER, 23, \
struct mpt3_ioctl_eventreport)
#define MPT3HARDRESET _IOWR(MPT3_MAGIC_NUMBER, 24, \
struct mpt3_ioctl_diag_reset)
#define MPT3BTDHMAPPING _IOWR(MPT3_MAGIC_NUMBER, 31, \
struct mpt3_ioctl_btdh_mapping)
/* diag buffer support */
#define MPT3DIAGREGISTER _IOWR(MPT3_MAGIC_NUMBER, 26, \
struct mpt3_diag_register)
#define MPT3DIAGRELEASE _IOWR(MPT3_MAGIC_NUMBER, 27, \
struct mpt3_diag_release)
#define MPT3DIAGUNREGISTER _IOWR(MPT3_MAGIC_NUMBER, 28, \
struct mpt3_diag_unregister)
#define MPT3DIAGQUERY _IOWR(MPT3_MAGIC_NUMBER, 29, \
struct mpt3_diag_query)
#define MPT3DIAGREADBUFFER _IOWR(MPT3_MAGIC_NUMBER, 30, \
struct mpt3_diag_read_buffer)
/**
* struct mpt3_ioctl_header - main header structure
* @ioc_number - IOC unit number
* @port_number - IOC port number
* @max_data_size - maximum number of bytes to transfer on read
*/
struct mpt3_ioctl_header {
uint32_t ioc_number;
uint32_t port_number;
uint32_t max_data_size;
};
/**
* struct mpt3_ioctl_diag_reset - diagnostic reset
* @hdr - generic header
*/
struct mpt3_ioctl_diag_reset {
struct mpt3_ioctl_header hdr;
};
/**
* struct mpt3_ioctl_pci_info - pci device info
* @device - pci device id
* @function - pci function id
* @bus - pci bus id
* @segment_id - pci segment id
*/
struct mpt3_ioctl_pci_info {
union {
struct {
uint32_t device:5;
uint32_t function:3;
uint32_t bus:24;
} bits;
uint32_t word;
} u;
uint32_t segment_id;
};
#define MPT2_IOCTL_INTERFACE_SCSI (0x00)
#define MPT2_IOCTL_INTERFACE_FC (0x01)
#define MPT2_IOCTL_INTERFACE_FC_IP (0x02)
#define MPT2_IOCTL_INTERFACE_SAS (0x03)
#define MPT2_IOCTL_INTERFACE_SAS2 (0x04)
#define MPT3_IOCTL_INTERFACE_SAS3 (0x06)
#define MPT2_IOCTL_VERSION_LENGTH (32)
/**
* struct mpt3_ioctl_iocinfo - generic controller info
* @hdr - generic header
* @adapter_type - type of adapter (spi, fc, sas)
* @port_number - port number
* @pci_id - PCI Id
* @hw_rev - hardware revision
* @subsystem_device - PCI subsystem Device ID
* @subsystem_vendor - PCI subsystem Vendor ID
* @rsvd0 - reserved
* @firmware_version - firmware version
* @bios_version - BIOS version
* @driver_version - driver version - 32 ASCII characters
* @rsvd1 - reserved
* @scsi_id - scsi id of adapter 0
* @rsvd2 - reserved
* @pci_information - pci info (2nd revision)
*/
struct mpt3_ioctl_iocinfo {
struct mpt3_ioctl_header hdr;
uint32_t adapter_type;
uint32_t port_number;
uint32_t pci_id;
uint32_t hw_rev;
uint32_t subsystem_device;
uint32_t subsystem_vendor;
uint32_t rsvd0;
uint32_t firmware_version;
uint32_t bios_version;
uint8_t driver_version[MPT2_IOCTL_VERSION_LENGTH];
uint8_t rsvd1;
uint8_t scsi_id;
uint16_t rsvd2;
struct mpt3_ioctl_pci_info pci_information;
};
/* number of event log entries */
#define MPT3SAS_CTL_EVENT_LOG_SIZE (50)
/**
* struct mpt3_ioctl_eventquery - query event count and type
* @hdr - generic header
* @event_entries - number of events returned by get_event_report
* @rsvd - reserved
* @event_types - type of events currently being captured
*/
struct mpt3_ioctl_eventquery {
struct mpt3_ioctl_header hdr;
uint16_t event_entries;
uint16_t rsvd;
uint32_t event_types[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS];
};
/**
* struct mpt3_ioctl_eventenable - enable/disable event capturing
* @hdr - generic header
* @event_types - toggle off/on type of events to be captured
*/
struct mpt3_ioctl_eventenable {
struct mpt3_ioctl_header hdr;
uint32_t event_types[4];
};
#define MPT3_EVENT_DATA_SIZE (192)
/**
* struct MPT3_IOCTL_EVENTS -
* @event - the event that was reported
* @context - unique value for each event assigned by driver
* @data - event data returned in fw reply message
*/
struct MPT3_IOCTL_EVENTS {
uint32_t event;
uint32_t context;
uint8_t data[MPT3_EVENT_DATA_SIZE];
};
/**
* struct mpt3_ioctl_eventreport - returning event log
* @hdr - generic header
* @event_data - (see struct MPT3_IOCTL_EVENTS)
*/
struct mpt3_ioctl_eventreport {
struct mpt3_ioctl_header hdr;
struct MPT3_IOCTL_EVENTS event_data[1];
};
/**
* struct mpt3_ioctl_command - generic mpt firmware passthru ioctl
* @hdr - generic header
* @timeout - command timeout in seconds. (if zero then use driver default
* value).
* @reply_frame_buf_ptr - reply location
* @data_in_buf_ptr - destination for read
* @data_out_buf_ptr - data source for write
* @sense_data_ptr - sense data location
* @max_reply_bytes - maximum number of reply bytes to be sent to app.
* @data_in_size - number of bytes for data transfer in (read)
* @data_out_size - number of bytes for data transfer out (write)
* @max_sense_bytes - maximum number of bytes for auto sense buffers
* @data_sge_offset - offset in words from the start of the request message to
* the first SGL
* @mf - message frame (variable length)
*/
struct mpt3_ioctl_command {
struct mpt3_ioctl_header hdr;
uint32_t timeout;
void __user *reply_frame_buf_ptr;
void __user *data_in_buf_ptr;
void __user *data_out_buf_ptr;
void __user *sense_data_ptr;
uint32_t max_reply_bytes;
uint32_t data_in_size;
uint32_t data_out_size;
uint32_t max_sense_bytes;
uint32_t data_sge_offset;
uint8_t mf[1];
};
#ifdef CONFIG_COMPAT
struct mpt3_ioctl_command32 {
struct mpt3_ioctl_header hdr;
uint32_t timeout;
uint32_t reply_frame_buf_ptr;
uint32_t data_in_buf_ptr;
uint32_t data_out_buf_ptr;
uint32_t sense_data_ptr;
uint32_t max_reply_bytes;
uint32_t data_in_size;
uint32_t data_out_size;
uint32_t max_sense_bytes;
uint32_t data_sge_offset;
uint8_t mf[1];
};
#endif
/**
* struct mpt3_ioctl_btdh_mapping - mapping info
* @hdr - generic header
* @id - target device identification number
* @bus - SCSI bus number that the target device exists on
* @handle - device handle for the target device
* @rsvd - reserved
*
* To obtain the bus/id, the application sets
* handle to a valid device handle, and bus/id to 0xFFFF.
*
* To obtain the device handle, the application sets
* bus/id to valid values, and the handle to 0xFFFF.
*/
struct mpt3_ioctl_btdh_mapping {
struct mpt3_ioctl_header hdr;
uint32_t id;
uint32_t bus;
uint16_t handle;
uint16_t rsvd;
};
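/*
 * Editor's illustrative sketch of the convention above (not part of this
 * header), resolving a known firmware handle to its bus/id; "fd" is an
 * assumed open descriptor on the mpt3ctl node:
 *
 *	struct mpt3_ioctl_btdh_mapping map;
 *
 *	memset(&map, 0, sizeof(map));
 *	map.handle = device_handle;
 *	map.bus = 0xFFFF;
 *	map.id = 0xFFFF;
 *	ioctl(fd, MPT3BTDHMAPPING, &map);	then the driver fills in bus/id
 */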
/* application flags for mpt3_diag_register, mpt3_diag_query */
#define MPT3_APP_FLAGS_APP_OWNED (0x0001)
#define MPT3_APP_FLAGS_BUFFER_VALID (0x0002)
#define MPT3_APP_FLAGS_FW_BUFFER_ACCESS (0x0004)
/* flags for mpt3_diag_read_buffer */
#define MPT3_FLAGS_REREGISTER (0x0001)
#define MPT3_PRODUCT_SPECIFIC_DWORDS 23
/**
* struct mpt3_diag_register - application register with driver
* @hdr - generic header
* @reserved -
* @buffer_type - specifies either TRACE, SNAPSHOT, or EXTENDED
* @application_flags - misc flags
* @diagnostic_flags - specifies flags affecting command processing
* @product_specific - product specific information
* @requested_buffer_size - buffers size in bytes
* @unique_id - tag specified by application that is used to signal ownership
* of the buffer.
*
* This allows the driver to set up any buffers the firmware needs to
* communicate with the driver.
*/
struct mpt3_diag_register {
struct mpt3_ioctl_header hdr;
uint8_t reserved;
uint8_t buffer_type;
uint16_t application_flags;
uint32_t diagnostic_flags;
uint32_t product_specific[MPT3_PRODUCT_SPECIFIC_DWORDS];
uint32_t requested_buffer_size;
uint32_t unique_id;
};
/**
* struct mpt3_diag_unregister - application unregister with driver
* @hdr - generic header
* @unique_id - tag uniquely identifies the buffer to be unregistered
*
* This allows the driver to clean up any memory allocated for diag
* messages and to free up any resources.
*/
struct mpt3_diag_unregister {
struct mpt3_ioctl_header hdr;
uint32_t unique_id;
};
/**
* struct mpt3_diag_query - query relevant info associated with diag buffers
* @hdr - generic header
* @reserved -
* @buffer_type - specifies either TRACE, SNAPSHOT, or EXTENDED
* @application_flags - misc flags
* @diagnostic_flags - specifies flags affecting command processing
* @product_specific - product specific information
* @total_buffer_size - diag buffer size in bytes
* @driver_added_buffer_size - size of extra space appended to end of buffer
* @unique_id - unique id associated with this buffer.
*
* The application sends only buffer_type and unique_id. The driver
* inspects unique_id first; if it is valid, the driver fills in all the
* info. If unique_id is 0x00, the driver returns the info for the
* specified buffer_type.
*/
struct mpt3_diag_query {
struct mpt3_ioctl_header hdr;
uint8_t reserved;
uint8_t buffer_type;
uint16_t application_flags;
uint32_t diagnostic_flags;
uint32_t product_specific[MPT3_PRODUCT_SPECIFIC_DWORDS];
uint32_t total_buffer_size;
uint32_t driver_added_buffer_size;
uint32_t unique_id;
};
/**
* struct mpt3_diag_release - request to send Diag Release Message to firmware
* @hdr - generic header
* @unique_id - tag uniquely identifies the buffer to be released
*
* This allows ownership of the specified buffer to be returned to the
* driver, allowing an application to read the buffer without fear that
* firmware is overwriting information in the buffer.
*/
struct mpt3_diag_release {
struct mpt3_ioctl_header hdr;
uint32_t unique_id;
};
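/*
 * Editor's sketch of the register/release handshake described above
 * (assumed open "fd" on the mpt3ctl node; error handling omitted):
 *
 *	struct mpt3_diag_register reg;
 *	struct mpt3_diag_release rel;
 *
 *	memset(&reg, 0, sizeof(reg));
 *	reg.buffer_type = MPI2_DIAG_BUF_TYPE_TRACE;
 *	reg.requested_buffer_size = 1024 * 1024;
 *	reg.unique_id = 0x12345678;
 *	ioctl(fd, MPT3DIAGREGISTER, &reg);
 *
 *	memset(&rel, 0, sizeof(rel));
 *	rel.unique_id = 0x12345678;
 *	ioctl(fd, MPT3DIAGRELEASE, &rel);	then read via MPT3DIAGREADBUFFER
 */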
/**
* struct mpt3_diag_read_buffer - request for copy of the diag buffer
* @hdr - generic header
* @status -
* @reserved -
* @flags - misc flags
* @starting_offset - offset within the driver's buffer at which to start
* reading data into the specified application buffer
* @bytes_to_read - number of bytes to copy from the driver's buffer into
* the application buffer, starting at starting_offset.
* @unique_id - unique id associated with this buffer.
* @diagnostic_data - data payload
*/
struct mpt3_diag_read_buffer {
struct mpt3_ioctl_header hdr;
uint8_t status;
uint8_t reserved;
uint16_t flags;
uint32_t starting_offset;
uint32_t bytes_to_read;
uint32_t unique_id;
uint32_t diagnostic_data[1];
};
#endif /* MPT3SAS_CTL_H_INCLUDED */
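
Because this header doubles as the user-space ABI, here is a hedged sketch of exercising it from an application; the device node name follows MPT3SAS_DEV_NAME, and error handling is trimmed:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int query_iocinfo(void)
{
	struct mpt3_ioctl_iocinfo info;	/* from this header */
	int fd = open("/dev/mpt3ctl", O_RDWR);

	if (fd < 0)
		return -1;
	memset(&info, 0, sizeof(info));
	info.hdr.ioc_number = 0;		/* first controller */
	info.hdr.max_data_size = sizeof(info);
	if (ioctl(fd, MPT3IOCINFO, &info) == 0)
		printf("firmware version: 0x%08x\n", info.firmware_version);
	close(fd);
	return 0;
}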


@@ -0,0 +1,219 @@
/*
* Logging Support for MPT (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_debug.c
* Copyright (C) 2012 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* NO WARRANTY
* THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
* LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
* solely responsible for determining the appropriateness of using and
* distributing the Program and assumes all risks associated with its
* exercise of rights under this Agreement, including but not limited to
* the risks and costs of program errors, damage to or loss of data,
* programs or equipment, and unavailability or interruption of operations.
* DISCLAIMER OF LIABILITY
* NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
* HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
* USA.
*/
#ifndef MPT3SAS_DEBUG_H_INCLUDED
#define MPT3SAS_DEBUG_H_INCLUDED
#define MPT_DEBUG 0x00000001
#define MPT_DEBUG_MSG_FRAME 0x00000002
#define MPT_DEBUG_SG 0x00000004
#define MPT_DEBUG_EVENTS 0x00000008
#define MPT_DEBUG_EVENT_WORK_TASK 0x00000010
#define MPT_DEBUG_INIT 0x00000020
#define MPT_DEBUG_EXIT 0x00000040
#define MPT_DEBUG_FAIL 0x00000080
#define MPT_DEBUG_TM 0x00000100
#define MPT_DEBUG_REPLY 0x00000200
#define MPT_DEBUG_HANDSHAKE 0x00000400
#define MPT_DEBUG_CONFIG 0x00000800
#define MPT_DEBUG_DL 0x00001000
#define MPT_DEBUG_RESET 0x00002000
#define MPT_DEBUG_SCSI 0x00004000
#define MPT_DEBUG_IOCTL 0x00008000
/* editor's note: MPT_DEBUG_SAS_WIDE was missing from this list although
* dsastransport() below references it; 0x00010000 is its mpt2sas value */
#define MPT_DEBUG_SAS_WIDE 0x00010000
#define MPT_DEBUG_SAS 0x00020000
#define MPT_DEBUG_TRANSPORT 0x00040000
#define MPT_DEBUG_TASK_SET_FULL 0x00080000
#define MPT_DEBUG_TRIGGER_DIAG 0x00200000
/*
* CONFIG_SCSI_MPT3SAS_LOGGING - enabled in Kconfig
*/
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
#define MPT_CHECK_LOGGING(IOC, CMD, BITS) \
{ \
if (IOC->logging_level & BITS) \
CMD; \
}
#else
#define MPT_CHECK_LOGGING(IOC, CMD, BITS)
#endif /* CONFIG_SCSI_MPT3SAS_LOGGING */
/*
* debug macros
*/
#define dprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG)
#define dsgprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SG)
#define devtprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EVENTS)
#define dewtprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EVENT_WORK_TASK)
#define dinitprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_INIT)
#define dexitprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EXIT)
#define dfailprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_FAIL)
#define dtmprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TM)
#define dreplyprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_REPLY)
#define dhsprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_HANDSHAKE)
#define dcprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_CONFIG)
#define ddlprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_DL)
#define drsprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_RESET)
#define dsprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SCSI)
#define dctlprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_IOCTL)
#define dsasprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SAS)
#define dsastransport(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SAS_WIDE)
#define dmfprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_MSG_FRAME)
#define dtsfprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TASK_SET_FULL)
#define dtransportprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TRANSPORT)
#define dTriggerDiagPrintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TRIGGER_DIAG)
/* inline functions for dumping debug data*/
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
/**
* _debug_dump_mf - print message frame contents
* @mpi_request: pointer to message frame
* @sz: number of dwords
*/
static inline void
_debug_dump_mf(void *mpi_request, int sz)
{
int i;
__le32 *mfp = (__le32 *)mpi_request;
pr_info("mf:\n\t");
for (i = 0; i < sz; i++) {
if (i && ((i % 8) == 0))
pr_info("\n\t");
pr_info("%08x ", le32_to_cpu(mfp[i]));
}
pr_info("\n");
}
/**
* _debug_dump_reply - print message frame contents
* @mpi_request: pointer to message frame
* @sz: number of dwords
*/
static inline void
_debug_dump_reply(void *mpi_request, int sz)
{
int i;
__le32 *mfp = (__le32 *)mpi_request;
pr_info("reply:\n\t");
for (i = 0; i < sz; i++) {
if (i && ((i % 8) == 0))
pr_info("\n\t");
pr_info("%08x ", le32_to_cpu(mfp[i]));
}
pr_info("\n");
}
/**
* _debug_dump_config - print config page contents
* @mpi_request: pointer to message frame
* @sz: number of dwords
*/
static inline void
_debug_dump_config(void *mpi_request, int sz)
{
int i;
__le32 *mfp = (__le32 *)mpi_request;
pr_info("config:\n\t");
for (i = 0; i < sz; i++) {
if (i && ((i % 8) == 0))
pr_info("\n\t");
pr_info("%08x ", le32_to_cpu(mfp[i]));
}
pr_info("\n");
}
#else
#define _debug_dump_mf(mpi_request, sz)
#define _debug_dump_reply(mpi_request, sz)
#define _debug_dump_config(mpi_request, sz)
#endif /* CONFIG_SCSI_MPT3SAS_LOGGING */
#endif /* MPT3SAS_DEBUG_H_INCLUDED */
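
At a call site the macros read like this; a representative sketch matching the pattern the trigger code below uses with dTriggerDiagPrintk:

dinitprintk(ioc, pr_info(MPT3SAS_FMT "%s: reply queues allocated\n",
	ioc->name, __func__));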

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,434 @@
/*
* This module provides common API to set Diagnostic trigger for MPT
* (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_trigger_diag.c
* Copyright (C) 2012 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* NO WARRANTY
* THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
* LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
* solely responsible for determining the appropriateness of using and
* distributing the Program and assumes all risks associated with its
* exercise of rights under this Agreement, including but not limited to
* the risks and costs of program errors, damage to or loss of data,
* programs or equipment, and unavailability or interruption of operations.
* DISCLAIMER OF LIABILITY
* NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
* HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
* USA.
*/
#include <linux/version.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/delay.h>
#include <linux/compat.h>
#include <linux/poll.h>
#include <linux/io.h>
#include <linux/uaccess.h>
#include "mpt3sas_base.h"
/**
* _mpt3sas_raise_sigio - notify app
* @ioc: per adapter object
* @event_data: trigger event data to add to the driver event log
*/
static void
_mpt3sas_raise_sigio(struct MPT3SAS_ADAPTER *ioc,
struct SL_WH_TRIGGERS_EVENT_DATA_T *event_data)
{
Mpi2EventNotificationReply_t *mpi_reply;
u16 sz, event_data_sz;
unsigned long flags;
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT "%s: enter\n",
ioc->name, __func__));
sz = offsetof(Mpi2EventNotificationReply_t, EventData) +
sizeof(struct SL_WH_TRIGGERS_EVENT_DATA_T) + 4;
mpi_reply = kzalloc(sz, GFP_KERNEL);
if (!mpi_reply)
goto out;
mpi_reply->Event = cpu_to_le16(MPI3_EVENT_DIAGNOSTIC_TRIGGER_FIRED);
event_data_sz = (sizeof(struct SL_WH_TRIGGERS_EVENT_DATA_T) + 4) / 4;
mpi_reply->EventDataLength = cpu_to_le16(event_data_sz);
memcpy(&mpi_reply->EventData, event_data,
sizeof(struct SL_WH_TRIGGERS_EVENT_DATA_T));
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: add to driver event log\n",
ioc->name, __func__));
mpt3sas_ctl_add_to_event_log(ioc, mpi_reply);
kfree(mpi_reply);
out:
/* clearing the diag_trigger_active flag */
spin_lock_irqsave(&ioc->diag_trigger_lock, flags);
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: clearing diag_trigger_active flag\n",
ioc->name, __func__));
ioc->diag_trigger_active = 0;
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT "%s: exit\n", ioc->name,
__func__));
}
/**
* mpt3sas_process_trigger_data - process the event data for the trigger
* @ioc: per adapter object
* @event_data: the trigger event data to process
*/
void
mpt3sas_process_trigger_data(struct MPT3SAS_ADAPTER *ioc,
struct SL_WH_TRIGGERS_EVENT_DATA_T *event_data)
{
u8 issue_reset = 0;
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT "%s: enter\n",
ioc->name, __func__));
/* release the diag buffer trace */
if ((ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_RELEASED) == 0) {
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: release trace diag buffer\n", ioc->name, __func__));
mpt3sas_send_diag_release(ioc, MPI2_DIAG_BUF_TYPE_TRACE,
&issue_reset);
}
_mpt3sas_raise_sigio(ioc, event_data);
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT "%s: exit\n", ioc->name,
__func__));
}
/**
 * mpt3sas_trigger_master - Master trigger handler
 * @ioc: per adapter object
 * @trigger_bitmask: bitmask of the MASTER_TRIGGER_XXX condition(s) observed
 */
void
mpt3sas_trigger_master(struct MPT3SAS_ADAPTER *ioc, u32 trigger_bitmask)
{
struct SL_WH_TRIGGERS_EVENT_DATA_T event_data;
unsigned long flags;
u8 found_match = 0;
spin_lock_irqsave(&ioc->diag_trigger_lock, flags);
if (trigger_bitmask & MASTER_TRIGGER_FW_FAULT ||
trigger_bitmask & MASTER_TRIGGER_ADAPTER_RESET)
goto by_pass_checks;
/* check to see if trace buffers are currently registered */
if ((ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_REGISTERED) == 0) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return;
}
/* check to see if trace buffers are currently released */
if (ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_RELEASED) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return;
}
by_pass_checks:
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: enter - trigger_bitmask = 0x%08x\n",
ioc->name, __func__, trigger_bitmask));
/* don't send trigger if a trigger is currently active */
if (ioc->diag_trigger_active) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
goto out;
}
/* check for the trigger condition */
if (ioc->diag_trigger_master.MasterData & trigger_bitmask) {
found_match = 1;
ioc->diag_trigger_active = 1;
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: setting diag_trigger_active flag\n",
ioc->name, __func__));
}
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
if (!found_match)
goto out;
memset(&event_data, 0, sizeof(struct SL_WH_TRIGGERS_EVENT_DATA_T));
event_data.trigger_type = MPT3SAS_TRIGGER_MASTER;
event_data.u.master.MasterData = trigger_bitmask;
if (trigger_bitmask & MASTER_TRIGGER_FW_FAULT ||
trigger_bitmask & MASTER_TRIGGER_ADAPTER_RESET)
_mpt3sas_raise_sigio(ioc, &event_data);
else
mpt3sas_send_trigger_data_event(ioc, &event_data);
out:
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT "%s: exit\n", ioc->name,
__func__));
}
/**
 * mpt3sas_trigger_event - Event trigger handler
 * @ioc: per adapter object
 * @event: firmware event code
 * @log_entry_qualifier: log entry qualifier (checked for
 *	MPI2_EVENT_LOG_ENTRY_ADDED only)
 */
*/
void
mpt3sas_trigger_event(struct MPT3SAS_ADAPTER *ioc, u16 event,
u16 log_entry_qualifier)
{
struct SL_WH_TRIGGERS_EVENT_DATA_T event_data;
struct SL_WH_EVENT_TRIGGER_T *event_trigger;
int i;
unsigned long flags;
u8 found_match;
spin_lock_irqsave(&ioc->diag_trigger_lock, flags);
/* check to see if trace buffers are currently registered */
if ((ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_REGISTERED) == 0) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return;
}
/* check to see if trace buffers are currently released */
if (ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_RELEASED) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return;
}
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: enter - event = 0x%04x, log_entry_qualifier = 0x%04x\n",
ioc->name, __func__, event, log_entry_qualifier));
/* don't send trigger if a trigger is currently active */
if (ioc->diag_trigger_active) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
goto out;
}
/* check for the trigger condition */
event_trigger = ioc->diag_trigger_event.EventTriggerEntry;
for (i = 0 , found_match = 0; i < ioc->diag_trigger_event.ValidEntries
&& !found_match; i++, event_trigger++) {
if (event_trigger->EventValue != event)
continue;
if (event == MPI2_EVENT_LOG_ENTRY_ADDED) {
if (event_trigger->LogEntryQualifier ==
log_entry_qualifier)
found_match = 1;
continue;
}
found_match = 1;
ioc->diag_trigger_active = 1;
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: setting diag_trigger_active flag\n",
ioc->name, __func__));
}
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
if (!found_match)
goto out;
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: setting diag_trigger_active flag\n",
ioc->name, __func__));
memset(&event_data, 0, sizeof(struct SL_WH_TRIGGERS_EVENT_DATA_T));
event_data.trigger_type = MPT3SAS_TRIGGER_EVENT;
event_data.u.event.EventValue = event;
event_data.u.event.LogEntryQualifier = log_entry_qualifier;
mpt3sas_send_trigger_data_event(ioc, &event_data);
out:
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT "%s: exit\n", ioc->name,
__func__));
}
/**
 * mpt3sas_trigger_scsi - SCSI trigger handler
 * @ioc: per adapter object
 * @sense_key: SCSI sense key
 * @asc: additional sense code
 * @ascq: additional sense code qualifier
 */
void
mpt3sas_trigger_scsi(struct MPT3SAS_ADAPTER *ioc, u8 sense_key, u8 asc,
u8 ascq)
{
struct SL_WH_TRIGGERS_EVENT_DATA_T event_data;
struct SL_WH_SCSI_TRIGGER_T *scsi_trigger;
int i;
unsigned long flags;
u8 found_match;
spin_lock_irqsave(&ioc->diag_trigger_lock, flags);
/* check to see if trace buffers are currently registered */
if ((ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_REGISTERED) == 0) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return;
}
/* check to see if trace buffers are currently released */
if (ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_RELEASED) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return;
}
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: enter - sense_key = 0x%02x, asc = 0x%02x, ascq = 0x%02x\n",
ioc->name, __func__, sense_key, asc, ascq));
/* don't send trigger if a trigger is currently active */
if (ioc->diag_trigger_active) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
goto out;
}
/* check for the trigger condition */
scsi_trigger = ioc->diag_trigger_scsi.SCSITriggerEntry;
for (i = 0 , found_match = 0; i < ioc->diag_trigger_scsi.ValidEntries
&& !found_match; i++, scsi_trigger++) {
if (scsi_trigger->SenseKey != sense_key)
continue;
if (!(scsi_trigger->ASC == 0xFF || scsi_trigger->ASC == asc))
continue;
if (!(scsi_trigger->ASCQ == 0xFF || scsi_trigger->ASCQ == ascq))
continue;
found_match = 1;
ioc->diag_trigger_active = 1;
}
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
if (!found_match)
goto out;
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: setting diag_trigger_active flag\n",
ioc->name, __func__));
memset(&event_data, 0, sizeof(struct SL_WH_TRIGGERS_EVENT_DATA_T));
event_data.trigger_type = MPT3SAS_TRIGGER_SCSI;
event_data.u.scsi.SenseKey = sense_key;
event_data.u.scsi.ASC = asc;
event_data.u.scsi.ASCQ = ascq;
mpt3sas_send_trigger_data_event(ioc, &event_data);
out:
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT "%s: exit\n", ioc->name,
__func__));
}
/**
 * mpt3sas_trigger_mpi - MPI trigger handler
 * @ioc: per adapter object
 * @ioc_status: IOCStatus from the MPI reply
 * @loginfo: IOCLogInfo from the MPI reply
 */
*/
void
mpt3sas_trigger_mpi(struct MPT3SAS_ADAPTER *ioc, u16 ioc_status, u32 loginfo)
{
struct SL_WH_TRIGGERS_EVENT_DATA_T event_data;
struct SL_WH_MPI_TRIGGER_T *mpi_trigger;
int i;
unsigned long flags;
u8 found_match;
spin_lock_irqsave(&ioc->diag_trigger_lock, flags);
/* check to see if trace buffers are currently registered */
if ((ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_REGISTERED) == 0) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return;
}
/* check to see if trace buffers are currently released */
if (ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] &
MPT3_DIAG_BUFFER_IS_RELEASED) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
return;
}
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: enter - ioc_status = 0x%04x, loginfo = 0x%08x\n",
ioc->name, __func__, ioc_status, loginfo));
/* don't send trigger if a trigger is currently active */
if (ioc->diag_trigger_active) {
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
goto out;
}
/* check for the trigger condition */
mpi_trigger = ioc->diag_trigger_mpi.MPITriggerEntry;
for (i = 0 , found_match = 0; i < ioc->diag_trigger_mpi.ValidEntries
&& !found_match; i++, mpi_trigger++) {
if (mpi_trigger->IOCStatus != ioc_status)
continue;
if (!(mpi_trigger->IocLogInfo == 0xFFFFFFFF ||
mpi_trigger->IocLogInfo == loginfo))
continue;
found_match = 1;
ioc->diag_trigger_active = 1;
}
spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
if (!found_match)
goto out;
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT
"%s: setting diag_trigger_active flag\n",
ioc->name, __func__));
memset(&event_data, 0, sizeof(struct SL_WH_TRIGGERS_EVENT_DATA_T));
event_data.trigger_type = MPT3SAS_TRIGGER_MPI;
event_data.u.mpi.IOCStatus = ioc_status;
event_data.u.mpi.IocLogInfo = loginfo;
mpt3sas_send_trigger_data_event(ioc, &event_data);
out:
dTriggerDiagPrintk(ioc, pr_info(MPT3SAS_FMT "%s: exit\n", ioc->name,
__func__));
}


@ -0,0 +1,193 @@
/*
* This is the Fusion MPT base driver providing common API layer interface
* to set Diagnostic triggers for MPT (Message Passing Technology) based
* controllers
*
* This code is based on drivers/scsi/mpt3sas/mpt3sas_base.h
* Copyright (C) 2012 LSI Corporation
* (mailto:DL-MPTFusionLinux@lsi.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* NO WARRANTY
* THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
* LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
* solely responsible for determining the appropriateness of using and
* distributing the Program and assumes all risks associated with its
* exercise of rights under this Agreement, including but not limited to
* the risks and costs of program errors, damage to or loss of data,
* programs or equipment, and unavailability or interruption of operations.
* DISCLAIMER OF LIABILITY
* NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
* HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
* USA.
*/
/* Diagnostic Trigger Configuration Data Structures */
#ifndef MPT3SAS_TRIGGER_DIAG_H_INCLUDED
#define MPT3SAS_TRIGGER_DIAG_H_INCLUDED
/* limitation on number of entries */
#define NUM_VALID_ENTRIES (20)
/* trigger types */
#define MPT3SAS_TRIGGER_MASTER (1)
#define MPT3SAS_TRIGGER_EVENT (2)
#define MPT3SAS_TRIGGER_SCSI (3)
#define MPT3SAS_TRIGGER_MPI (4)
/* trigger names */
#define MASTER_TRIGGER_FILE_NAME "diag_trigger_master"
#define EVENT_TRIGGERS_FILE_NAME "diag_trigger_event"
#define SCSI_TRIGGERS_FILE_NAME "diag_trigger_scsi"
#define MPI_TRIGGER_FILE_NAME "diag_trigger_mpi"
/* master trigger bitmask */
#define MASTER_TRIGGER_FW_FAULT (0x00000001)
#define MASTER_TRIGGER_ADAPTER_RESET (0x00000002)
#define MASTER_TRIGGER_TASK_MANAGMENT (0x00000004)
#define MASTER_TRIGGER_DEVICE_REMOVAL (0x00000008)
/* fake firmware event for trigger */
#define MPI3_EVENT_DIAGNOSTIC_TRIGGER_FIRED (0x6E)
/**
* MasterTrigger is a single U32 passed to/from sysfs.
*
* Bit Flags (enables) include:
* 1. FW Faults
* 2. Adapter Reset issued by driver
* 3. TMs
* 4. Device Remove Event sent by FW
*/
struct SL_WH_MASTER_TRIGGER_T {
uint32_t MasterData;
};
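As an illustration of the sysfs interface this U32 travels through, here is a minimal userspace sketch. The /sys path and host number are assumptions for illustration (the attribute name comes from MASTER_TRIGGER_FILE_NAME above), not something this patch defines:

/* Hypothetical sketch: arm the master trigger for FW faults and
 * driver-issued adapter resets by writing the raw MasterData U32.
 * The sysfs path below is an assumption for illustration only. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* MASTER_TRIGGER_FW_FAULT | MASTER_TRIGGER_ADAPTER_RESET */
	uint32_t master = 0x00000001 | 0x00000002;
	int fd = open("/sys/class/scsi_host/host0/diag_trigger_master",
	    O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, &master, sizeof(master)) != (ssize_t)sizeof(master))
		perror("write");
	close(fd);
	return 0;
}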
/**
* struct SL_WH_EVENT_TRIGGER_T - Definition of an event trigger element
* @EventValue: Event Code to trigger on
* @LogEntryQualifier: Type of FW event that was logged (Log Entry Added Event only)
*
* Defines an event that should induce a DIAG_TRIGGER driver event if observed.
*/
struct SL_WH_EVENT_TRIGGER_T {
uint16_t EventValue;
uint16_t LogEntryQualifier;
};
/**
* struct SL_WH_EVENT_TRIGGERS_T - Structure passed to/from sysfs containing a
* list of Event Triggers to be monitored for.
* @ValidEntries: Number of _SL_WH_EVENT_TRIGGER_T structures contained in this
* structure.
* @EventTriggerEntry: List of Event trigger elements.
*
* This binary structure is transferred via sysfs to get/set Event Triggers
* in the Linux Driver.
*/
struct SL_WH_EVENT_TRIGGERS_T {
uint32_t ValidEntries;
struct SL_WH_EVENT_TRIGGER_T EventTriggerEntry[NUM_VALID_ENTRIES];
};
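A hedged sketch of filling this list from userspace follows; the local struct mirrors, the sysfs path, and the event code are all assumptions for illustration. The SCSI and MPI trigger lists below use the same ValidEntries-plus-array layout; their wildcard matching is sketched after struct SL_WH_SCSI_TRIGGERS_T.

/* Hypothetical sketch: register a single event trigger.  The local structs
 * mirror SL_WH_EVENT_TRIGGER_T/SL_WH_EVENT_TRIGGERS_T above; the sysfs path
 * and the event code are illustrative assumptions. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define NUM_ENTRIES 20				/* NUM_VALID_ENTRIES */

struct event_trigger {				/* SL_WH_EVENT_TRIGGER_T */
	uint16_t EventValue;
	uint16_t LogEntryQualifier;
};

struct event_triggers {				/* SL_WH_EVENT_TRIGGERS_T */
	uint32_t ValidEntries;
	struct event_trigger EventTriggerEntry[NUM_ENTRIES];
};

int main(void)
{
	struct event_triggers t;
	int fd = open("/sys/class/scsi_host/host0/diag_trigger_event",
	    O_WRONLY);

	if (fd < 0)
		return 1;
	memset(&t, 0, sizeof(t));
	t.ValidEntries = 1;
	t.EventTriggerEntry[0].EventValue = 0x0021;	/* assumed event code */
	t.EventTriggerEntry[0].LogEntryQualifier = 0;	/* only checked for the
							 * Log Entry Added event */
	if (write(fd, &t, sizeof(t)) != (ssize_t)sizeof(t))
		return 1;
	close(fd);
	return 0;
}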
/**
* struct SL_WH_SCSI_TRIGGER_T - Definition of a SCSI trigger element
* @ASCQ: Additional Sense Code Qualifier. Can be specific or 0xFF for
* wildcard.
* @ASC: Additional Sense Code. Can be specific or 0xFF for wildcard
* @SenseKey: SCSI Sense Key
*
* Defines a sense key (single or many variants) that should induce a
* DIAG_TRIGGER driver event if observed.
*/
struct SL_WH_SCSI_TRIGGER_T {
U8 ASCQ;
U8 ASC;
U8 SenseKey;
U8 Reserved;
};
/**
* struct SL_WH_SCSI_TRIGGERS_T - Structure passed to/from sysfs containing a
* list of SCSI sense codes that should trigger a DIAG_SERVICE event when
* observed.
* @ValidEntries: Number of _SL_WH_SCSI_TRIGGER_T structures contained in this
* structure.
* @SCSITriggerEntry: List of SCSI Sense Code trigger elements.
*
* This binary structure is transferred via sysfs to get/set SCSI Sense Code
* Triggers in the Linux Driver.
*/
struct SL_WH_SCSI_TRIGGERS_T {
uint32_t ValidEntries;
struct SL_WH_SCSI_TRIGGER_T SCSITriggerEntry[NUM_VALID_ENTRIES];
};
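The match semantics for these entries (see mpt3sas_trigger_scsi() above) can be restated as a small self-contained sketch; the MPI trigger list below works the same way, with 0xFFFFFFFF as the IocLogInfo wildcard:

/* Sketch of the wildcard match performed by mpt3sas_trigger_scsi(): an entry
 * fires when the sense key matches exactly and ASC/ASCQ either match or are
 * set to the 0xFF wildcard. */
#include <stdint.h>
#include <stdio.h>

struct scsi_trigger {			/* mirrors SL_WH_SCSI_TRIGGER_T */
	uint8_t ASCQ;
	uint8_t ASC;
	uint8_t SenseKey;
	uint8_t Reserved;
};

static int trigger_matches(const struct scsi_trigger *t,
    uint8_t sense_key, uint8_t asc, uint8_t ascq)
{
	return t->SenseKey == sense_key &&
	    (t->ASC == 0xFF || t->ASC == asc) &&
	    (t->ASCQ == 0xFF || t->ASCQ == ascq);
}

int main(void)
{
	/* sense key 0x03 (MEDIUM ERROR), any ASC, any ASCQ */
	struct scsi_trigger t = { 0xFF, 0xFF, 0x03, 0 };

	printf("%d\n", trigger_matches(&t, 0x03, 0x11, 0x00));	/* prints 1 */
	printf("%d\n", trigger_matches(&t, 0x04, 0x11, 0x00));	/* prints 0 */
	return 0;
}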
/**
* struct SL_WH_MPI_TRIGGER_T - Definition of an MPI trigger element
* @IOCStatus: MPI IOCStatus
* @IocLogInfo: MPI IocLogInfo. Can be specific or 0xFFFFFFFF for wildcard
*
* Defines an MPI IOCStatus/IocLogInfo pair that should induce a DIAG_TRIGGER
* driver event if observed.
*/
struct SL_WH_MPI_TRIGGER_T {
uint16_t IOCStatus;
uint16_t Reserved;
uint32_t IocLogInfo;
};
/**
* struct SL_WH_MPI_TRIGGERS_T - Structure passed to/from sysfs containing a
* list of MPI IOCStatus/IocLogInfo pairs that should trigger a DIAG_SERVICE
* event when observed.
* @ValidEntries: Number of _SL_WH_MPI_TRIGGER_T structures contained in this
* structure.
* @MPITriggerEntry: List of MPI IOCStatus/IocLogInfo trigger elements.
*
* This binary structure is transferred via sysfs to get/set MPI Error Triggers
* in the Linux Driver.
*/
struct SL_WH_MPI_TRIGGERS_T {
uint32_t ValidEntries;
struct SL_WH_MPI_TRIGGER_T MPITriggerEntry[NUM_VALID_ENTRIES];
};
/**
* struct SL_WH_TRIGGERS_EVENT_DATA_T - event data for trigger
* @trigger_type: trigger type (see MPT3SAS_TRIGGER_XXXX)
* @u: trigger condition that caused trigger to be sent
*/
struct SL_WH_TRIGGERS_EVENT_DATA_T {
uint32_t trigger_type;
union {
struct SL_WH_MASTER_TRIGGER_T master;
struct SL_WH_EVENT_TRIGGER_T event;
struct SL_WH_SCSI_TRIGGER_T scsi;
struct SL_WH_MPI_TRIGGER_T mpi;
} u;
};
#endif /* MPT3SAS_TRIGGER_DIAG_H_INCLUDED */


@ -258,21 +258,11 @@ enum sas_sata_phy_regs {
#define SPI_ADDR_VLD_94XX (1U << 1)
#define SPI_CTRL_SpiStart_94XX (1U << 0)
#define mv_ffc(x) ffz(x)
static inline int
mv_ffc64(u64 v)
{
int i;
i = mv_ffc((u32)v);
if (i >= 0)
return i;
i = mv_ffc((u32)(v>>32));
if (i != 0)
return 32 + i;
return -1;
u64 x = ~v;
return x ? __ffs64(x) : -1;
}
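The rewritten helper simply inverts the value and asks for the first set bit. A few worked cases, assuming __ffs64() from linux/bitops.h returns the index of the least significant set bit:

/*
 * mv_ffc64() == index of the first *clear* bit:
 *   mv_ffc64(0x0000000000000000ULL) ->  0   (bit 0 is already clear)
 *   mv_ffc64(0x00000000ffffffffULL) -> 32   (low 32 bits all set)
 *   mv_ffc64(0xffffffffffffffffULL) -> -1   (~v == 0, no clear bit)
 */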
#define r_reg_set_enable(i) \


@ -69,7 +69,7 @@ extern struct kmem_cache *mvs_task_list_cache;
#define DEV_IS_EXPANDER(type) \
((type == EDGE_DEV) || (type == FANOUT_DEV))
#define bit(n) ((u32)1 << n)
#define bit(n) ((u64)1 << n)
#define for_each_phy(__lseq_mask, __mc, __lseq) \
for ((__mc) = (__lseq_mask), (__lseq) = 0; \


@ -97,9 +97,37 @@ struct osd_dev_handle {
static DEFINE_IDA(osd_minor_ida);
/*
* scsi sysfs attribute operations
*/
static ssize_t osdname_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct osd_uld_device *ould = container_of(dev, struct osd_uld_device,
class_dev);
return sprintf(buf, "%s\n", ould->odi.osdname);
}
static ssize_t systemid_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct osd_uld_device *ould = container_of(dev, struct osd_uld_device,
class_dev);
memcpy(buf, ould->odi.systemid, ould->odi.systemid_len);
return ould->odi.systemid_len;
}
static struct device_attribute osd_uld_attrs[] = {
__ATTR(osdname, S_IRUGO, osdname_show, NULL),
__ATTR(systemid, S_IRUGO, systemid_show, NULL),
__ATTR_NULL,
};
static struct class osd_uld_class = {
.owner = THIS_MODULE,
.name = "scsi_osd",
.dev_attrs = osd_uld_attrs,
};
/*


@ -1615,8 +1615,7 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
* At this point all fcport's software-states are cleared. Perform any
* final cleanup of firmware resources (PCBs and XCBs).
*/
if (fcport->loop_id != FC_NO_LOOP_ID &&
!test_bit(UNLOADING, &fcport->vha->dpc_flags)) {
if (fcport->loop_id != FC_NO_LOOP_ID) {
if (IS_FWI2_CAPABLE(fcport->vha->hw))
fcport->vha->hw->isp_ops->fabric_logout(fcport->vha,
fcport->loop_id, fcport->d_id.b.domain,


@ -219,7 +219,8 @@ qla24xx_proc_fcp_prio_cfg_cmd(struct fc_bsg_job *bsg_job)
break;
}
exit_fcp_prio_cfg:
bsg_job->job_done(bsg_job);
if (!ret)
bsg_job->job_done(bsg_job);
return ret;
}
@ -741,9 +742,8 @@ qla2x00_process_loopback(struct fc_bsg_job *bsg_job)
if (qla81xx_get_port_config(vha, config)) {
ql_log(ql_log_warn, vha, 0x701f,
"Get port config failed.\n");
bsg_job->reply->result = (DID_ERROR << 16);
rval = -EPERM;
goto done_free_dma_req;
goto done_free_dma_rsp;
}
ql_dbg(ql_dbg_user, vha, 0x70c0,
@ -761,9 +761,8 @@ qla2x00_process_loopback(struct fc_bsg_job *bsg_job)
new_config, elreq.options);
if (rval) {
bsg_job->reply->result = (DID_ERROR << 16);
rval = -EPERM;
goto done_free_dma_req;
goto done_free_dma_rsp;
}
type = "FC_BSG_HST_VENDOR_LOOPBACK";
@ -795,9 +794,8 @@ qla2x00_process_loopback(struct fc_bsg_job *bsg_job)
"MPI reset failed.\n");
}
bsg_job->reply->result = (DID_ERROR << 16);
rval = -EIO;
goto done_free_dma_req;
goto done_free_dma_rsp;
}
} else {
type = "FC_BSG_HST_VENDOR_LOOPBACK";
@ -812,34 +810,27 @@ qla2x00_process_loopback(struct fc_bsg_job *bsg_job)
ql_log(ql_log_warn, vha, 0x702c,
"Vendor request %s failed.\n", type);
fw_sts_ptr = ((uint8_t *)bsg_job->req->sense) +
sizeof(struct fc_bsg_reply);
memcpy(fw_sts_ptr, response, sizeof(response));
fw_sts_ptr += sizeof(response);
*fw_sts_ptr = command_sent;
rval = 0;
bsg_job->reply->result = (DID_ERROR << 16);
bsg_job->reply->reply_payload_rcv_len = 0;
} else {
ql_dbg(ql_dbg_user, vha, 0x702d,
"Vendor request %s completed.\n", type);
bsg_job->reply_len = sizeof(struct fc_bsg_reply) +
sizeof(response) + sizeof(uint8_t);
bsg_job->reply->reply_payload_rcv_len =
bsg_job->reply_payload.payload_len;
fw_sts_ptr = ((uint8_t *)bsg_job->req->sense) +
sizeof(struct fc_bsg_reply);
memcpy(fw_sts_ptr, response, sizeof(response));
fw_sts_ptr += sizeof(response);
*fw_sts_ptr = command_sent;
bsg_job->reply->result = DID_OK;
bsg_job->reply->result = (DID_OK << 16);
sg_copy_from_buffer(bsg_job->reply_payload.sg_list,
bsg_job->reply_payload.sg_cnt, rsp_data,
rsp_data_len);
}
bsg_job->job_done(bsg_job);
bsg_job->reply_len = sizeof(struct fc_bsg_reply) +
sizeof(response) + sizeof(uint8_t);
fw_sts_ptr = ((uint8_t *)bsg_job->req->sense) +
sizeof(struct fc_bsg_reply);
memcpy(fw_sts_ptr, response, sizeof(response));
fw_sts_ptr += sizeof(response);
*fw_sts_ptr = command_sent;
done_free_dma_rsp:
dma_free_coherent(&ha->pdev->dev, rsp_data_len,
rsp_data, rsp_data_dma);
done_free_dma_req:
@ -853,6 +844,8 @@ done_unmap_req_sg:
dma_unmap_sg(&ha->pdev->dev,
bsg_job->request_payload.sg_list,
bsg_job->request_payload.sg_cnt, DMA_TO_DEVICE);
if (!rval)
bsg_job->job_done(bsg_job);
return rval;
}
@ -877,16 +870,15 @@ qla84xx_reset(struct fc_bsg_job *bsg_job)
if (rval) {
ql_log(ql_log_warn, vha, 0x7030,
"Vendor request 84xx reset failed.\n");
rval = 0;
bsg_job->reply->result = (DID_ERROR << 16);
rval = (DID_ERROR << 16);
} else {
ql_dbg(ql_dbg_user, vha, 0x7031,
"Vendor request 84xx reset completed.\n");
bsg_job->reply->result = DID_OK;
bsg_job->job_done(bsg_job);
}
bsg_job->job_done(bsg_job);
return rval;
}
@ -976,8 +968,7 @@ qla84xx_updatefw(struct fc_bsg_job *bsg_job)
ql_log(ql_log_warn, vha, 0x7037,
"Vendor request 84xx updatefw failed.\n");
rval = 0;
bsg_job->reply->result = (DID_ERROR << 16);
rval = (DID_ERROR << 16);
} else {
ql_dbg(ql_dbg_user, vha, 0x7038,
"Vendor request 84xx updatefw completed.\n");
@ -986,7 +977,6 @@ qla84xx_updatefw(struct fc_bsg_job *bsg_job)
bsg_job->reply->result = DID_OK;
}
bsg_job->job_done(bsg_job);
dma_pool_free(ha->s_dma_pool, mn, mn_dma);
done_free_fw_buf:
@ -996,6 +986,8 @@ done_unmap_sg:
dma_unmap_sg(&ha->pdev->dev, bsg_job->request_payload.sg_list,
bsg_job->request_payload.sg_cnt, DMA_TO_DEVICE);
if (!rval)
bsg_job->job_done(bsg_job);
return rval;
}
@ -1163,8 +1155,7 @@ qla84xx_mgmt_cmd(struct fc_bsg_job *bsg_job)
ql_log(ql_log_warn, vha, 0x7043,
"Vendor request 84xx mgmt failed.\n");
rval = 0;
bsg_job->reply->result = (DID_ERROR << 16);
rval = (DID_ERROR << 16);
} else {
ql_dbg(ql_dbg_user, vha, 0x7044,
@ -1184,8 +1175,6 @@ qla84xx_mgmt_cmd(struct fc_bsg_job *bsg_job)
}
}
bsg_job->job_done(bsg_job);
done_unmap_sg:
if (mgmt_b)
dma_free_coherent(&ha->pdev->dev, data_len, mgmt_b, mgmt_dma);
@ -1200,6 +1189,8 @@ done_unmap_sg:
exit_mgmt:
dma_pool_free(ha->s_dma_pool, mn, mn_dma);
if (!rval)
bsg_job->job_done(bsg_job);
return rval;
}
@ -1276,9 +1267,7 @@ qla24xx_iidma(struct fc_bsg_job *bsg_job)
fcport->port_name[3], fcport->port_name[4],
fcport->port_name[5], fcport->port_name[6],
fcport->port_name[7], rval, fcport->fp_speed, mb[0], mb[1]);
rval = 0;
bsg_job->reply->result = (DID_ERROR << 16);
rval = (DID_ERROR << 16);
} else {
if (!port_param->mode) {
bsg_job->reply_len = sizeof(struct fc_bsg_reply) +
@ -1292,9 +1281,9 @@ qla24xx_iidma(struct fc_bsg_job *bsg_job)
}
bsg_job->reply->result = DID_OK;
bsg_job->job_done(bsg_job);
}
bsg_job->job_done(bsg_job);
return rval;
}
@ -1887,8 +1876,6 @@ qla2x00_process_vendor_specific(struct fc_bsg_job *bsg_job)
return qla24xx_process_bidir_cmd(bsg_job);
default:
bsg_job->reply->result = (DID_ERROR << 16);
bsg_job->job_done(bsg_job);
return -ENOSYS;
}
}
@ -1919,8 +1906,6 @@ qla24xx_bsg_request(struct fc_bsg_job *bsg_job)
ql_dbg(ql_dbg_user, vha, 0x709f,
"BSG: ISP abort active/needed -- cmd=%d.\n",
bsg_job->request->msgcode);
bsg_job->reply->result = (DID_ERROR << 16);
bsg_job->job_done(bsg_job);
return -EBUSY;
}
@ -1943,7 +1928,6 @@ qla24xx_bsg_request(struct fc_bsg_job *bsg_job)
case FC_BSG_RPT_CT:
default:
ql_log(ql_log_warn, vha, 0x705a, "Unsupported BSG request.\n");
bsg_job->reply->result = ret;
break;
}
return ret;


@ -11,7 +11,7 @@
* ----------------------------------------------------------------------
* | Level | Last Value Used | Holes |
* ----------------------------------------------------------------------
* | Module Init and Probe | 0x0124 | 0x4b,0xba,0xfa |
* | Module Init and Probe | 0x0125 | 0x4b,0xba,0xfa |
* | Mailbox commands | 0x114f | 0x111a-0x111b |
* | | | 0x112c-0x112e |
* | | | 0x113a |
@ -526,8 +526,8 @@ qla25xx_copy_mq(struct qla_hw_data *ha, void *ptr, uint32_t **last_chain)
ha->max_req_queues : ha->max_rsp_queues;
mq->count = htonl(que_cnt);
for (cnt = 0; cnt < que_cnt; cnt++) {
reg = (struct device_reg_25xxmq *) ((void *)
ha->mqiobase + cnt * QLA_QUE_PAGE);
reg = (struct device_reg_25xxmq __iomem *)
(ha->mqiobase + cnt * QLA_QUE_PAGE);
que_idx = cnt * 4;
mq->qregs[que_idx] = htonl(RD_REG_DWORD(&reg->req_q_in));
mq->qregs[que_idx+1] = htonl(RD_REG_DWORD(&reg->req_q_out));
@ -2268,7 +2268,7 @@ qla83xx_fw_dump(scsi_qla_host_t *vha, int hardware_locked)
if (!cnt) {
nxt = fw->code_ram;
nxt += sizeof(fw->code_ram),
nxt += sizeof(fw->code_ram);
nxt += (ha->fw_memory_size - 0x100000 + 1);
goto copy_queue;
} else


@ -2486,9 +2486,9 @@ struct bidi_statistics {
#define QLA_MAX_QUEUES 256
#define ISP_QUE_REG(ha, id) \
((ha->mqenable || IS_QLA83XX(ha)) ? \
((void *)(ha->mqiobase) +\
((device_reg_t __iomem *)(ha->mqiobase) +\
(QLA_QUE_PAGE * id)) :\
((void *)(ha->iobase)))
((device_reg_t __iomem *)(ha->iobase)))
#define QLA_REQ_QUE_ID(tag) \
((tag < QLA_MAX_QUEUES && tag > 0) ? tag : 0)
#define QLA_DEFAULT_QUE_QOS 5


@ -1092,6 +1092,27 @@ struct device_reg_24xx {
uint32_t unused_6[2]; /* Gap. */
uint32_t iobase_sdata;
};
/* RISC-RISC semaphore register PCI offset */
#define RISC_REGISTER_BASE_OFFSET 0x7010
#define RISC_REGISTER_WINDOW_OFFET 0x6
/* RISC-RISC semaphore/flag register (risc address 0x7016) */
#define RISC_SEMAPHORE 0x1UL
#define RISC_SEMAPHORE_WE (RISC_SEMAPHORE << 16)
#define RISC_SEMAPHORE_CLR (RISC_SEMAPHORE_WE | 0x0UL)
#define RISC_SEMAPHORE_SET (RISC_SEMAPHORE_WE | RISC_SEMAPHORE)
#define RISC_SEMAPHORE_FORCE 0x8000UL
#define RISC_SEMAPHORE_FORCE_WE (RISC_SEMAPHORE_FORCE << 16)
#define RISC_SEMAPHORE_FORCE_CLR (RISC_SEMAPHORE_FORCE_WE | 0x0UL)
#define RISC_SEMAPHORE_FORCE_SET \
(RISC_SEMAPHORE_FORCE_WE | RISC_SEMAPHORE_FORCE)
/* RISC semaphore timeouts (ms) */
#define TIMEOUT_SEMAPHORE 2500
#define TIMEOUT_SEMAPHORE_FORCE 2000
#define TIMEOUT_TOTAL_ELAPSED 4500
/* Trace Control *************************************************************/


@ -416,7 +416,7 @@ extern int qla2x00_request_irqs(struct qla_hw_data *, struct rsp_que *);
extern void qla2x00_free_irqs(scsi_qla_host_t *);
extern int qla2x00_get_data_rate(scsi_qla_host_t *);
extern char *qla2x00_get_link_speed_str(struct qla_hw_data *);
extern const char *qla2x00_get_link_speed_str(struct qla_hw_data *, uint16_t);
/*
* Global Function Prototypes in qla_sup.c source file.
@ -598,7 +598,6 @@ extern void qla82xx_init_flags(struct qla_hw_data *);
/* ISP 8021 hardware related */
extern void qla82xx_set_drv_active(scsi_qla_host_t *);
extern void qla82xx_crb_win_unlock(struct qla_hw_data *);
extern int qla82xx_wr_32(struct qla_hw_data *, ulong, u32);
extern int qla82xx_rd_32(struct qla_hw_data *, ulong);
extern int qla82xx_rdmem(struct qla_hw_data *, u64, void *, int);


@ -218,6 +218,9 @@ qla2x00_ga_nxt(scsi_qla_host_t *vha, fc_port_t *fcport)
memcpy(fcport->port_name, ct_rsp->rsp.ga_nxt.port_name,
WWN_SIZE);
fcport->fc4_type = (ct_rsp->rsp.ga_nxt.fc4_types[2] & BIT_0) ?
FC4_TYPE_FCP_SCSI : FC4_TYPE_OTHER;
if (ct_rsp->rsp.ga_nxt.port_type != NS_N_PORT_TYPE &&
ct_rsp->rsp.ga_nxt.port_type != NS_NL_PORT_TYPE)
fcport->d_id.b.domain = 0xf0;
@ -1930,6 +1933,9 @@ qla2x00_gpsc(scsi_qla_host_t *vha, sw_info_t *list)
case BIT_11:
list[i].fp_speed = PORT_SPEED_8GB;
break;
case BIT_10:
list[i].fp_speed = PORT_SPEED_16GB;
break;
}
ql_dbg(ql_dbg_disc, vha, 0x205b,


@ -429,7 +429,7 @@ qla2x00_async_adisc_done(struct scsi_qla_host *vha, fc_port_t *fcport,
/* QLogic ISP2x00 Hardware Support Functions. */
/****************************************************************************/
int
static int
qla83xx_nic_core_fw_load(scsi_qla_host_t *vha)
{
int rval = QLA_SUCCESS;
@ -997,7 +997,7 @@ qla2x00_reset_chip(scsi_qla_host_t *vha)
*
* Returns 0 on success.
*/
int
static int
qla81xx_reset_mpi(scsi_qla_host_t *vha)
{
uint16_t mb[4] = {0x1010, 0, 1, 0};
@ -1095,6 +1095,83 @@ qla24xx_reset_risc(scsi_qla_host_t *vha)
ha->isp_ops->enable_intrs(ha);
}
static void
qla25xx_read_risc_sema_reg(scsi_qla_host_t *vha, uint32_t *data)
{
struct device_reg_24xx __iomem *reg = &vha->hw->iobase->isp24;
WRT_REG_DWORD(&reg->iobase_addr, RISC_REGISTER_BASE_OFFSET);
*data = RD_REG_DWORD(&reg->iobase_window + RISC_REGISTER_WINDOW_OFFET);
}
static void
qla25xx_write_risc_sema_reg(scsi_qla_host_t *vha, uint32_t data)
{
struct device_reg_24xx __iomem *reg = &vha->hw->iobase->isp24;
WRT_REG_DWORD(&reg->iobase_addr, RISC_REGISTER_BASE_OFFSET);
WRT_REG_DWORD(&reg->iobase_window + RISC_REGISTER_WINDOW_OFFET, data);
}
static void
qla25xx_manipulate_risc_semaphore(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
uint32_t wd32 = 0;
uint delta_msec = 100;
uint elapsed_msec = 0;
uint timeout_msec;
ulong n;
if (!IS_QLA25XX(ha) && !IS_QLA2031(ha))
return;
attempt:
timeout_msec = TIMEOUT_SEMAPHORE;
n = timeout_msec / delta_msec;
while (n--) {
qla25xx_write_risc_sema_reg(vha, RISC_SEMAPHORE_SET);
qla25xx_read_risc_sema_reg(vha, &wd32);
if (wd32 & RISC_SEMAPHORE)
break;
msleep(delta_msec);
elapsed_msec += delta_msec;
if (elapsed_msec > TIMEOUT_TOTAL_ELAPSED)
goto force;
}
if (!(wd32 & RISC_SEMAPHORE))
goto force;
if (!(wd32 & RISC_SEMAPHORE_FORCE))
goto acquired;
qla25xx_write_risc_sema_reg(vha, RISC_SEMAPHORE_CLR);
timeout_msec = TIMEOUT_SEMAPHORE_FORCE;
n = timeout_msec / delta_msec;
while (n--) {
qla25xx_read_risc_sema_reg(vha, &wd32);
if (!(wd32 & RISC_SEMAPHORE_FORCE))
break;
msleep(delta_msec);
elapsed_msec += delta_msec;
if (elapsed_msec > TIMEOUT_TOTAL_ELAPSED)
goto force;
}
if (wd32 & RISC_SEMAPHORE_FORCE)
qla25xx_write_risc_sema_reg(vha, RISC_SEMAPHORE_FORCE_CLR);
goto attempt;
force:
qla25xx_write_risc_sema_reg(vha, RISC_SEMAPHORE_FORCE_SET);
acquired:
return;
}
/**
* qla24xx_reset_chip() - Reset ISP24xx chip.
* @ha: HA context
@ -1113,6 +1190,8 @@ qla24xx_reset_chip(scsi_qla_host_t *vha)
ha->isp_ops->disable_intrs(ha);
qla25xx_manipulate_risc_semaphore(vha);
/* Perform RISC reset. */
qla24xx_reset_risc(vha);
}
@ -1888,10 +1967,6 @@ qla2x00_init_rings(scsi_qla_host_t *vha)
qla2x00_init_response_q_entries(rsp);
}
spin_lock(&ha->vport_slock);
spin_unlock(&ha->vport_slock);
ha->tgt.atio_ring_ptr = ha->tgt.atio_ring;
ha->tgt.atio_ring_index = 0;
/* Initialize ATIO queue entries */
@ -1971,6 +2046,7 @@ qla2x00_fw_ready(scsi_qla_host_t *vha)
"Waiting for LIP to complete.\n");
do {
memset(state, -1, sizeof(state));
rval = qla2x00_get_firmware_state(vha, state);
if (rval == QLA_SUCCESS) {
if (state[0] < FSTATE_LOSS_OF_SYNC) {
@ -2907,7 +2983,6 @@ cleanup_allocation:
static void
qla2x00_iidma_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
{
char *link_speed;
int rval;
uint16_t mb[4];
struct qla_hw_data *ha = vha->hw;
@ -2934,10 +3009,10 @@ qla2x00_iidma_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
fcport->port_name[6], fcport->port_name[7], rval,
fcport->fp_speed, mb[0], mb[1]);
} else {
link_speed = qla2x00_get_link_speed_str(ha);
ql_dbg(ql_dbg_disc, vha, 0x2005,
"iIDMA adjusted to %s GB/s "
"on %02x%02x%02x%02x%02x%02x%02x%02x.\n", link_speed,
"on %02x%02x%02x%02x%02x%02x%02x%02x.\n",
qla2x00_get_link_speed_str(ha, fcport->fp_speed),
fcport->port_name[0], fcport->port_name[1],
fcport->port_name[2], fcport->port_name[3],
fcport->port_name[4], fcport->port_name[5],
@ -3007,10 +3082,10 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
fcport->login_retry = 0;
fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT);
qla2x00_set_fcport_state(fcport, FCS_ONLINE);
qla2x00_iidma_fcport(vha, fcport);
qla24xx_update_fcport_fcp_prio(vha, fcport);
qla2x00_reg_remote_port(vha, fcport);
qla2x00_set_fcport_state(fcport, FCS_ONLINE);
}
/*
@ -3868,7 +3943,7 @@ qla83xx_reset_ownership(scsi_qla_host_t *vha)
}
}
int
static int
__qla83xx_set_drv_ack(scsi_qla_host_t *vha)
{
int rval = QLA_SUCCESS;
@ -3884,19 +3959,7 @@ __qla83xx_set_drv_ack(scsi_qla_host_t *vha)
return rval;
}
int
qla83xx_set_drv_ack(scsi_qla_host_t *vha)
{
int rval = QLA_SUCCESS;
qla83xx_idc_lock(vha, 0);
rval = __qla83xx_set_drv_ack(vha);
qla83xx_idc_unlock(vha, 0);
return rval;
}
int
static int
__qla83xx_clear_drv_ack(scsi_qla_host_t *vha)
{
int rval = QLA_SUCCESS;
@ -3912,19 +3975,7 @@ __qla83xx_clear_drv_ack(scsi_qla_host_t *vha)
return rval;
}
int
qla83xx_clear_drv_ack(scsi_qla_host_t *vha)
{
int rval = QLA_SUCCESS;
qla83xx_idc_lock(vha, 0);
rval = __qla83xx_clear_drv_ack(vha);
qla83xx_idc_unlock(vha, 0);
return rval;
}
const char *
static const char *
qla83xx_dev_state_to_string(uint32_t dev_state)
{
switch (dev_state) {
@ -3978,7 +4029,7 @@ qla83xx_idc_audit(scsi_qla_host_t *vha, int audit_type)
}
/* Assumes idc_lock always held on entry */
int
static int
qla83xx_initiating_reset(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
@ -4025,37 +4076,13 @@ __qla83xx_set_idc_control(scsi_qla_host_t *vha, uint32_t idc_control)
return qla83xx_wr_reg(vha, QLA83XX_IDC_CONTROL, idc_control);
}
int
qla83xx_set_idc_control(scsi_qla_host_t *vha, uint32_t idc_control)
{
int rval = QLA_SUCCESS;
qla83xx_idc_lock(vha, 0);
rval = __qla83xx_set_idc_control(vha, idc_control);
qla83xx_idc_unlock(vha, 0);
return rval;
}
int
__qla83xx_get_idc_control(scsi_qla_host_t *vha, uint32_t *idc_control)
{
return qla83xx_rd_reg(vha, QLA83XX_IDC_CONTROL, idc_control);
}
int
qla83xx_get_idc_control(scsi_qla_host_t *vha, uint32_t *idc_control)
{
int rval = QLA_SUCCESS;
qla83xx_idc_lock(vha, 0);
rval = __qla83xx_get_idc_control(vha, idc_control);
qla83xx_idc_unlock(vha, 0);
return rval;
}
int
static int
qla83xx_check_driver_presence(scsi_qla_host_t *vha)
{
uint32_t drv_presence = 0;


@ -520,7 +520,7 @@ __qla2x00_marker(struct scsi_qla_host *vha, struct req_que *req,
mrk24 = NULL;
req = ha->req_q_map[0];
mrk = (mrk_entry_t *)qla2x00_alloc_iocbs(vha, 0);
mrk = (mrk_entry_t *)qla2x00_alloc_iocbs(vha, NULL);
if (mrk == NULL) {
ql_log(ql_log_warn, base_vha, 0x3026,
"Failed to allocate Marker IOCB.\n");
@ -2551,7 +2551,7 @@ sufficient_dsds:
(unsigned long __iomem *)ha->nxdb_wr_ptr,
dbval);
wmb();
while (RD_REG_DWORD(ha->nxdb_rd_ptr) != dbval) {
while (RD_REG_DWORD((void __iomem *)ha->nxdb_rd_ptr) != dbval) {
WRT_REG_DWORD(
(unsigned long __iomem *)ha->nxdb_wr_ptr,
dbval);
@ -2748,7 +2748,6 @@ qla2x00_start_bidir(srb_t *sp, struct scsi_qla_host *vha, uint32_t tot_dsds)
struct rsp_que *rsp;
struct req_que *req;
int rval = EXT_STATUS_OK;
device_reg_t __iomem *reg = ISP_QUE_REG(ha, vha->req->id);
rval = QLA_SUCCESS;
@ -2786,15 +2785,7 @@ qla2x00_start_bidir(srb_t *sp, struct scsi_qla_host *vha, uint32_t tot_dsds)
/* Check for room on request queue. */
if (req->cnt < req_cnt + 2) {
if (ha->mqenable)
cnt = RD_REG_DWORD(&reg->isp25mq.req_q_out);
else if (IS_QLA82XX(ha))
cnt = RD_REG_DWORD(&reg->isp82.req_q_out);
else if (IS_FWI2_CAPABLE(ha))
cnt = RD_REG_DWORD(&reg->isp24.req_q_out);
else
cnt = qla2x00_debounce_register(
ISP_REQ_Q_OUT(ha, &reg->isp));
cnt = RD_REG_DWORD_RELAXED(req->req_q_out);
if (req->ring_index < cnt)
req->cnt = cnt - req->ring_index;


@ -316,28 +316,24 @@ qla81xx_idc_event(scsi_qla_host_t *vha, uint16_t aen, uint16_t descr)
}
#define LS_UNKNOWN 2
char *
qla2x00_get_link_speed_str(struct qla_hw_data *ha)
const char *
qla2x00_get_link_speed_str(struct qla_hw_data *ha, uint16_t speed)
{
static char *link_speeds[] = {"1", "2", "?", "4", "8", "16", "10"};
char *link_speed;
int fw_speed = ha->link_data_rate;
static const char * const link_speeds[] = {
"1", "2", "?", "4", "8", "16", "10"
};
if (IS_QLA2100(ha) || IS_QLA2200(ha))
link_speed = link_speeds[0];
else if (fw_speed == 0x13)
link_speed = link_speeds[6];
else {
link_speed = link_speeds[LS_UNKNOWN];
if (fw_speed < 6)
link_speed =
link_speeds[fw_speed];
}
return link_speed;
return link_speeds[0];
else if (speed == 0x13)
return link_speeds[6];
else if (speed < 6)
return link_speeds[speed];
else
return link_speeds[LS_UNKNOWN];
}
void
static void
qla83xx_handle_8200_aen(scsi_qla_host_t *vha, uint16_t *mb)
{
struct qla_hw_data *ha = vha->hw;
@ -671,7 +667,7 @@ skip_rio:
ql_dbg(ql_dbg_async, vha, 0x500a,
"LOOP UP detected (%s Gbps).\n",
qla2x00_get_link_speed_str(ha));
qla2x00_get_link_speed_str(ha, ha->link_data_rate));
vha->flags.management_server_logged_in = 0;
qla2x00_post_aen_work(vha, FCH_EVT_LINKUP, ha->link_data_rate);
@ -860,7 +856,7 @@ skip_rio:
mb[1], mb[2], mb[3]);
ql_log(ql_log_warn, vha, 0x505f,
"Link is operational (%s Gbps).\n",
qla2x00_get_link_speed_str(ha));
qla2x00_get_link_speed_str(ha, ha->link_data_rate));
/*
* Mark all devices as missing so we will login again.
@ -2944,7 +2940,9 @@ skip_msi:
"Failed to reserve interrupt %d already in use.\n",
ha->pdev->irq);
goto fail;
}
} else if (!ha->flags.msi_enabled)
ql_dbg(ql_dbg_init, vha, 0x0125,
"INTa mode: Enabled.\n");
clear_risc_ints:


@ -3122,7 +3122,7 @@ qla24xx_report_id_acquisition(scsi_qla_host_t *vha,
if (vp_idx == 0 && (MSB(stat) != 1))
goto reg_needed;
if (MSB(stat) != 0) {
if (MSB(stat) != 0 && MSB(stat) != 2) {
ql_dbg(ql_dbg_mbx, vha, 0x10ba,
"Could not acquire ID for VP[%d].\n", vp_idx);
return;
@ -3536,7 +3536,7 @@ qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req)
if (IS_QLA83XX(ha))
mcp->mb[15] = 0;
reg = (struct device_reg_25xxmq *)((void *)(ha->mqiobase) +
reg = (struct device_reg_25xxmq __iomem *)((ha->mqiobase) +
QLA_QUE_PAGE * req->id);
mcp->mb[4] = req->id;
@ -3605,7 +3605,7 @@ qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
if (IS_QLA83XX(ha))
mcp->mb[15] = 0;
reg = (struct device_reg_25xxmq *)((void *)(ha->mqiobase) +
reg = (struct device_reg_25xxmq __iomem *)((ha->mqiobase) +
QLA_QUE_PAGE * rsp->id);
mcp->mb[4] = rsp->id;


@ -36,7 +36,7 @@
#define MAX_CRB_XFORM 60
static unsigned long crb_addr_xform[MAX_CRB_XFORM];
int qla82xx_crb_table_initialized;
static int qla82xx_crb_table_initialized;
#define qla82xx_crb_addr_transform(name) \
(crb_addr_xform[QLA82XX_HW_PX_MAP_CRB_##name] = \
@ -102,7 +102,7 @@ static void qla82xx_crb_addr_transform_setup(void)
qla82xx_crb_table_initialized = 1;
}
struct crb_128M_2M_block_map crb_128M_2M_map[64] = {
static struct crb_128M_2M_block_map crb_128M_2M_map[64] = {
{{{0, 0, 0, 0} } },
{{{1, 0x0100000, 0x0102000, 0x120000},
{1, 0x0110000, 0x0120000, 0x130000},
@ -262,7 +262,7 @@ struct crb_128M_2M_block_map crb_128M_2M_map[64] = {
/*
* top 12 bits of crb internal address (hub, agent)
*/
unsigned qla82xx_crb_hub_agt[64] = {
static unsigned qla82xx_crb_hub_agt[64] = {
0,
QLA82XX_HW_CRB_HUB_AGT_ADR_PS,
QLA82XX_HW_CRB_HUB_AGT_ADR_MN,
@ -330,7 +330,7 @@ unsigned qla82xx_crb_hub_agt[64] = {
};
/* Device states */
char *q_dev_state[] = {
static char *q_dev_state[] = {
"Unknown",
"Cold",
"Initializing",
@ -359,12 +359,13 @@ qla82xx_pci_set_crbwindow_2M(struct qla_hw_data *ha, ulong *off)
ha->crb_win = CRB_HI(*off);
writel(ha->crb_win,
(void *)(CRB_WINDOW_2M + ha->nx_pcibase));
(void __iomem *)(CRB_WINDOW_2M + ha->nx_pcibase));
/* Read back value to make sure write has gone through before trying
* to use it.
*/
win_read = RD_REG_DWORD((void *)(CRB_WINDOW_2M + ha->nx_pcibase));
win_read = RD_REG_DWORD((void __iomem *)
(CRB_WINDOW_2M + ha->nx_pcibase));
if (win_read != ha->crb_win) {
ql_dbg(ql_dbg_p3p, vha, 0xb000,
"%s: Written crbwin (0x%x) "
@ -567,7 +568,7 @@ qla82xx_pci_mem_bound_check(struct qla_hw_data *ha,
return 1;
}
int qla82xx_pci_set_window_warning_count;
static int qla82xx_pci_set_window_warning_count;
static unsigned long
qla82xx_pci_set_window(struct qla_hw_data *ha, unsigned long long addr)
@ -677,10 +678,10 @@ static int qla82xx_pci_mem_read_direct(struct qla_hw_data *ha,
u64 off, void *data, int size)
{
unsigned long flags;
void *addr = NULL;
void __iomem *addr = NULL;
int ret = 0;
u64 start;
uint8_t *mem_ptr = NULL;
uint8_t __iomem *mem_ptr = NULL;
unsigned long mem_base;
unsigned long mem_page;
scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
@ -712,7 +713,7 @@ static int qla82xx_pci_mem_read_direct(struct qla_hw_data *ha,
mem_ptr = ioremap(mem_base + mem_page, PAGE_SIZE * 2);
else
mem_ptr = ioremap(mem_base + mem_page, PAGE_SIZE);
if (mem_ptr == 0UL) {
if (mem_ptr == NULL) {
*(u8 *)data = 0;
return -1;
}
@ -749,10 +750,10 @@ qla82xx_pci_mem_write_direct(struct qla_hw_data *ha,
u64 off, void *data, int size)
{
unsigned long flags;
void *addr = NULL;
void __iomem *addr = NULL;
int ret = 0;
u64 start;
uint8_t *mem_ptr = NULL;
uint8_t __iomem *mem_ptr = NULL;
unsigned long mem_base;
unsigned long mem_page;
scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
@ -784,7 +785,7 @@ qla82xx_pci_mem_write_direct(struct qla_hw_data *ha,
mem_ptr = ioremap(mem_base + mem_page, PAGE_SIZE*2);
else
mem_ptr = ioremap(mem_base + mem_page, PAGE_SIZE);
if (mem_ptr == 0UL)
if (mem_ptr == NULL)
return -1;
addr = mem_ptr;
@ -908,24 +909,24 @@ qla82xx_wait_rom_done(struct qla_hw_data *ha)
return 0;
}
int
static int
qla82xx_md_rw_32(struct qla_hw_data *ha, uint32_t off, u32 data, uint8_t flag)
{
uint32_t off_value, rval = 0;
WRT_REG_DWORD((void *)(CRB_WINDOW_2M + ha->nx_pcibase),
WRT_REG_DWORD((void __iomem *)(CRB_WINDOW_2M + ha->nx_pcibase),
(off & 0xFFFF0000));
/* Read back value to make sure write has gone through */
RD_REG_DWORD((void *)(CRB_WINDOW_2M + ha->nx_pcibase));
RD_REG_DWORD((void __iomem *)(CRB_WINDOW_2M + ha->nx_pcibase));
off_value = (off & 0x0000FFFF);
if (flag)
WRT_REG_DWORD((void *)
WRT_REG_DWORD((void __iomem *)
(off_value + CRB_INDIRECT_2M + ha->nx_pcibase),
data);
else
rval = RD_REG_DWORD((void *)
rval = RD_REG_DWORD((void __iomem *)
(off_value + CRB_INDIRECT_2M + ha->nx_pcibase));
return rval;
@ -1654,7 +1655,6 @@ qla82xx_iospace_config(struct qla_hw_data *ha)
if (!ha->nx_pcibase) {
ql_log_pci(ql_log_fatal, ha->pdev, 0x000e,
"Cannot remap pcibase MMIO, aborting.\n");
pci_release_regions(ha->pdev);
goto iospace_error_exit;
}
@ -1669,7 +1669,6 @@ qla82xx_iospace_config(struct qla_hw_data *ha)
if (!ha->nxdb_wr_ptr) {
ql_log_pci(ql_log_fatal, ha->pdev, 0x000f,
"Cannot remap MMIO, aborting.\n");
pci_release_regions(ha->pdev);
goto iospace_error_exit;
}
@ -1764,14 +1763,6 @@ void qla82xx_config_rings(struct scsi_qla_host *vha)
WRT_REG_DWORD((unsigned long __iomem *)&reg->rsp_q_out[0], 0);
}
void qla82xx_reset_adapter(struct scsi_qla_host *vha)
{
struct qla_hw_data *ha = vha->hw;
vha->flags.online = 0;
qla2x00_try_to_stop_firmware(vha);
ha->isp_ops->disable_intrs(ha);
}
static int
qla82xx_fw_load_from_blob(struct qla_hw_data *ha)
{
@ -1856,7 +1847,7 @@ qla82xx_set_product_offset(struct qla_hw_data *ha)
return -1;
}
int
static int
qla82xx_validate_firmware_blob(scsi_qla_host_t *vha, uint8_t fw_type)
{
__le32 val;
@ -1961,20 +1952,6 @@ qla82xx_check_rcvpeg_state(struct qla_hw_data *ha)
}
/* ISR related functions */
uint32_t qla82xx_isr_int_target_mask_enable[8] = {
ISR_INT_TARGET_MASK, ISR_INT_TARGET_MASK_F1,
ISR_INT_TARGET_MASK_F2, ISR_INT_TARGET_MASK_F3,
ISR_INT_TARGET_MASK_F4, ISR_INT_TARGET_MASK_F5,
ISR_INT_TARGET_MASK_F7, ISR_INT_TARGET_MASK_F7
};
uint32_t qla82xx_isr_int_target_status[8] = {
ISR_INT_TARGET_STATUS, ISR_INT_TARGET_STATUS_F1,
ISR_INT_TARGET_STATUS_F2, ISR_INT_TARGET_STATUS_F3,
ISR_INT_TARGET_STATUS_F4, ISR_INT_TARGET_STATUS_F5,
ISR_INT_TARGET_STATUS_F7, ISR_INT_TARGET_STATUS_F7
};
static struct qla82xx_legacy_intr_set legacy_intr[] = \
QLA82XX_LEGACY_INTR_CONFIG;
@ -2813,7 +2790,7 @@ qla82xx_start_iocbs(scsi_qla_host_t *vha)
else {
WRT_REG_DWORD((unsigned long __iomem *)ha->nxdb_wr_ptr, dbval);
wmb();
while (RD_REG_DWORD(ha->nxdb_rd_ptr) != dbval) {
while (RD_REG_DWORD((void __iomem *)ha->nxdb_rd_ptr) != dbval) {
WRT_REG_DWORD((unsigned long __iomem *)ha->nxdb_wr_ptr,
dbval);
wmb();
@ -2821,7 +2798,8 @@ qla82xx_start_iocbs(scsi_qla_host_t *vha)
}
}
void qla82xx_rom_lock_recovery(struct qla_hw_data *ha)
static void
qla82xx_rom_lock_recovery(struct qla_hw_data *ha)
{
scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
@ -3177,7 +3155,7 @@ qla82xx_check_md_needed(scsi_qla_host_t *vha)
}
int
static int
qla82xx_check_fw_alive(scsi_qla_host_t *vha)
{
uint32_t fw_heartbeat_counter;
@ -3817,7 +3795,8 @@ qla82xx_minidump_process_rdocm(scsi_qla_host_t *vha,
loop_cnt = ocm_hdr->op_count;
for (i = 0; i < loop_cnt; i++) {
r_value = RD_REG_DWORD((void *)(r_addr + ha->nx_pcibase));
r_value = RD_REG_DWORD((void __iomem *)
(r_addr + ha->nx_pcibase));
*data_ptr++ = cpu_to_le32(r_value);
r_addr += r_stride;
}
@ -4376,7 +4355,7 @@ qla82xx_md_free(scsi_qla_host_t *vha)
ha->md_tmplt_hdr, ha->md_template_size / 1024);
dma_free_coherent(&ha->pdev->dev, ha->md_template_size,
ha->md_tmplt_hdr, ha->md_tmplt_hdr_dma);
ha->md_tmplt_hdr = 0;
ha->md_tmplt_hdr = NULL;
}
/* Release the template data buffer allocated */
@ -4386,7 +4365,7 @@ qla82xx_md_free(scsi_qla_host_t *vha)
ha->md_dump, ha->md_dump_size / 1024);
vfree(ha->md_dump);
ha->md_dump_size = 0;
ha->md_dump = 0;
ha->md_dump = NULL;
}
}
@ -4423,7 +4402,7 @@ qla82xx_md_prep(scsi_qla_host_t *vha)
dma_free_coherent(&ha->pdev->dev,
ha->md_template_size,
ha->md_tmplt_hdr, ha->md_tmplt_hdr_dma);
ha->md_tmplt_hdr = 0;
ha->md_tmplt_hdr = NULL;
}
}


@ -41,7 +41,7 @@ static struct kmem_cache *ctx_cachep;
*/
int ql_errlev = ql_log_all;
int ql2xenableclass2;
static int ql2xenableclass2;
module_param(ql2xenableclass2, int, S_IRUGO|S_IRUSR);
MODULE_PARM_DESC(ql2xenableclass2,
"Specify if Class 2 operations are supported from the very "
@ -89,6 +89,8 @@ MODULE_PARM_DESC(ql2xextended_error_logging,
"\t\t0x00200000 - AER/EEH. 0x00100000 - Multi Q.\n"
"\t\t0x00080000 - P3P Specific. 0x00040000 - Virtual Port.\n"
"\t\t0x00020000 - Buffer Dump. 0x00010000 - Misc.\n"
"\t\t0x00008000 - Verbose. 0x00004000 - Target.\n"
"\t\t0x00002000 - Target Mgmt. 0x00001000 - Target TMF.\n"
"\t\t0x7fffffff - For enabling all logs, can be too many logs.\n"
"\t\t0x1e400000 - Preferred value for capturing essential "
"debug information (equivalent to old "
@ -494,12 +496,20 @@ qla24xx_pci_info_str(struct scsi_qla_host *vha, char *str)
(BIT_4 | BIT_5 | BIT_6 | BIT_7 | BIT_8 | BIT_9)) >> 4;
strcpy(str, "PCIe (");
if (lspeed == 1)
switch (lspeed) {
case 1:
strcat(str, "2.5GT/s ");
else if (lspeed == 2)
break;
case 2:
strcat(str, "5.0GT/s ");
else
break;
case 3:
strcat(str, "8.0GT/s ");
break;
default:
strcat(str, "<unknown> ");
break;
}
snprintf(lwstr, sizeof(lwstr), "x%d)", lwidth);
strcat(str, lwstr);
@ -719,7 +729,7 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
rval = ha->isp_ops->start_scsi(sp);
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_io, vha, 0x3013,
ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3013,
"Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
goto qc24_host_busy_free_sp;
}
@ -2357,7 +2367,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
/* Configure PCI I/O space */
ret = ha->isp_ops->iospace_config(ha);
if (ret)
goto probe_hw_failed;
goto iospace_config_failed;
ql_log_pci(ql_log_info, pdev, 0x001d,
"Found an ISP%04X irq %d iobase 0x%p.\n",
@ -2668,7 +2678,11 @@ probe_hw_failed:
qla82xx_idc_lock(ha);
qla82xx_clear_drv_active(ha);
qla82xx_idc_unlock(ha);
iounmap((device_reg_t __iomem *)ha->nx_pcibase);
}
iospace_config_failed:
if (IS_QLA82XX(ha)) {
if (!ha->nx_pcibase)
iounmap((device_reg_t __iomem *)ha->nx_pcibase);
if (!ql2xdbwr)
iounmap((device_reg_t __iomem *)ha->nxdb_wr_ptr);
} else {
@ -2755,6 +2769,7 @@ qla2x00_remove_one(struct pci_dev *pdev)
ha->flags.host_shutting_down = 1;
set_bit(UNLOADING, &base_vha->dpc_flags);
mutex_lock(&ha->vport_lock);
while (ha->cur_vport_count) {
struct Scsi_Host *scsi_host;
@ -2784,8 +2799,6 @@ qla2x00_remove_one(struct pci_dev *pdev)
"Error while clearing DRV-Presence.\n");
}
set_bit(UNLOADING, &base_vha->dpc_flags);
qla2x00_abort_all_cmds(base_vha, DID_NO_CONNECT << 16);
qla2x00_dfs_remove(base_vha);
@ -3721,10 +3734,9 @@ void qla2x00_relogin(struct scsi_qla_host *vha)
if (fcport->flags &
FCF_FCP2_DEVICE)
opts |= BIT_1;
status2 =
qla2x00_get_port_database(
vha, fcport,
opts);
status2 =
qla2x00_get_port_database(
vha, fcport, opts);
if (status2 != QLA_SUCCESS)
status = 1;
}
@ -3836,7 +3848,7 @@ qla83xx_idc_state_handler_work(struct work_struct *work)
qla83xx_idc_unlock(base_vha, 0);
}
int
static int
qla83xx_check_nic_core_fw_alive(scsi_qla_host_t *base_vha)
{
int rval = QLA_SUCCESS;
@ -3954,7 +3966,7 @@ qla83xx_wait_logic(void)
}
}
int
static int
qla83xx_force_lock_recovery(scsi_qla_host_t *base_vha)
{
int rval;
@ -4013,7 +4025,7 @@ qla83xx_force_lock_recovery(scsi_qla_host_t *base_vha)
return rval;
}
int
static int
qla83xx_idc_lock_recovery(scsi_qla_host_t *base_vha)
{
int rval = QLA_SUCCESS;
@ -4212,7 +4224,7 @@ qla83xx_clear_drv_presence(scsi_qla_host_t *vha)
return rval;
}
void
static void
qla83xx_need_reset_handler(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
@ -4224,7 +4236,7 @@ qla83xx_need_reset_handler(scsi_qla_host_t *vha)
while (1) {
qla83xx_rd_reg(vha, QLA83XX_IDC_DRIVER_ACK, &drv_ack);
qla83xx_rd_reg(vha, QLA83XX_IDC_DRV_PRESENCE, &drv_presence);
if (drv_ack == drv_presence)
if ((drv_ack & drv_presence) == drv_presence)
break;
if (time_after_eq(jiffies, ack_timeout)) {
@ -4251,7 +4263,7 @@ qla83xx_need_reset_handler(scsi_qla_host_t *vha)
ql_log(ql_log_info, vha, 0xb068, "HW State: COLD/RE-INIT.\n");
}
int
static int
qla83xx_device_bootstrap(scsi_qla_host_t *vha)
{
int rval = QLA_SUCCESS;
@ -4505,9 +4517,9 @@ qla2x00_do_dpc(void *data)
"ISP abort end.\n");
}
if (test_bit(FCPORT_UPDATE_NEEDED, &base_vha->dpc_flags)) {
if (test_and_clear_bit(FCPORT_UPDATE_NEEDED,
&base_vha->dpc_flags)) {
qla2x00_update_fcports(base_vha);
clear_bit(FCPORT_UPDATE_NEEDED, &base_vha->dpc_flags);
}
if (test_bit(SCR_PENDING, &base_vha->dpc_flags)) {
@ -4987,7 +4999,8 @@ qla2xxx_pci_mmio_enabled(struct pci_dev *pdev)
return PCI_ERS_RESULT_RECOVERED;
}
uint32_t qla82xx_error_recovery(scsi_qla_host_t *base_vha)
static uint32_t
qla82xx_error_recovery(scsi_qla_host_t *base_vha)
{
uint32_t rval = QLA_FUNCTION_FAILED;
uint32_t drv_active = 0;


@ -1029,7 +1029,7 @@ void qlt_stop_phase2(struct qla_tgt *tgt)
EXPORT_SYMBOL(qlt_stop_phase2);
/* Called from qlt_remove_target() -> qla2x00_remove_one() */
void qlt_release(struct qla_tgt *tgt)
static void qlt_release(struct qla_tgt *tgt)
{
struct qla_hw_data *ha = tgt->ha;


@ -7,7 +7,7 @@
/*
* Driver version
*/
#define QLA2XXX_VERSION "8.04.00.07-k"
#define QLA2XXX_VERSION "8.04.00.08-k"
#define QLA_DRIVER_MAJOR_VER 8
#define QLA_DRIVER_MINOR_VER 4


@ -16,16 +16,14 @@
#include "scsi_priv.h"
static int scsi_dev_type_suspend(struct device *dev, pm_message_t msg)
static int scsi_dev_type_suspend(struct device *dev, int (*cb)(struct device *))
{
struct device_driver *drv;
int err;
err = scsi_device_quiesce(to_scsi_device(dev));
if (err == 0) {
drv = dev->driver;
if (drv && drv->suspend) {
err = drv->suspend(dev, msg);
if (cb) {
err = cb(dev);
if (err)
scsi_device_resume(to_scsi_device(dev));
}
@ -34,14 +32,12 @@ static int scsi_dev_type_suspend(struct device *dev, pm_message_t msg)
return err;
}
static int scsi_dev_type_resume(struct device *dev)
static int scsi_dev_type_resume(struct device *dev, int (*cb)(struct device *))
{
struct device_driver *drv;
int err = 0;
drv = dev->driver;
if (drv && drv->resume)
err = drv->resume(dev);
if (cb)
err = cb(dev);
scsi_device_resume(to_scsi_device(dev));
dev_dbg(dev, "scsi resume: %d\n", err);
return err;
@ -49,51 +45,39 @@ static int scsi_dev_type_resume(struct device *dev)
#ifdef CONFIG_PM_SLEEP
static int scsi_bus_suspend_common(struct device *dev, pm_message_t msg)
static int
scsi_bus_suspend_common(struct device *dev, int (*cb)(struct device *))
{
int err = 0;
if (scsi_is_sdev_device(dev)) {
/*
* sd is the only high-level SCSI driver to implement runtime
* PM, and sd treats runtime suspend, system suspend, and
* system hibernate identically (but not system freeze).
* All the high-level SCSI drivers that implement runtime
* PM treat runtime suspend, system suspend, and system
* hibernate identically.
*/
if (pm_runtime_suspended(dev)) {
if (msg.event == PM_EVENT_SUSPEND ||
msg.event == PM_EVENT_HIBERNATE)
return 0; /* already suspended */
if (pm_runtime_suspended(dev))
return 0;
/* wake up device so that FREEZE will succeed */
pm_runtime_resume(dev);
}
err = scsi_dev_type_suspend(dev, msg);
err = scsi_dev_type_suspend(dev, cb);
}
return err;
}
static int scsi_bus_resume_common(struct device *dev)
static int
scsi_bus_resume_common(struct device *dev, int (*cb)(struct device *))
{
int err = 0;
/*
* Parent device may have runtime suspended as soon as
* it is woken up during the system resume.
*
* Resume it on behalf of child.
*/
pm_runtime_get_sync(dev->parent);
if (scsi_is_sdev_device(dev))
err = scsi_dev_type_resume(dev);
err = scsi_dev_type_resume(dev, cb);
if (err == 0) {
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
}
pm_runtime_put_sync(dev->parent);
return err;
}
@ -112,26 +96,49 @@ static int scsi_bus_prepare(struct device *dev)
static int scsi_bus_suspend(struct device *dev)
{
return scsi_bus_suspend_common(dev, PMSG_SUSPEND);
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return scsi_bus_suspend_common(dev, pm ? pm->suspend : NULL);
}
static int scsi_bus_resume(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return scsi_bus_resume_common(dev, pm ? pm->resume : NULL);
}
static int scsi_bus_freeze(struct device *dev)
{
return scsi_bus_suspend_common(dev, PMSG_FREEZE);
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return scsi_bus_suspend_common(dev, pm ? pm->freeze : NULL);
}
static int scsi_bus_thaw(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return scsi_bus_resume_common(dev, pm ? pm->thaw : NULL);
}
static int scsi_bus_poweroff(struct device *dev)
{
return scsi_bus_suspend_common(dev, PMSG_HIBERNATE);
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return scsi_bus_suspend_common(dev, pm ? pm->poweroff : NULL);
}
static int scsi_bus_restore(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
return scsi_bus_resume_common(dev, pm ? pm->restore : NULL);
}
#else /* CONFIG_PM_SLEEP */
#define scsi_bus_resume_common NULL
#define scsi_bus_prepare NULL
#define scsi_bus_suspend NULL
#define scsi_bus_resume NULL
#define scsi_bus_freeze NULL
#define scsi_bus_thaw NULL
#define scsi_bus_poweroff NULL
#define scsi_bus_restore NULL
#endif /* CONFIG_PM_SLEEP */
@ -140,10 +147,12 @@ static int scsi_bus_poweroff(struct device *dev)
static int scsi_runtime_suspend(struct device *dev)
{
int err = 0;
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
dev_dbg(dev, "scsi_runtime_suspend\n");
if (scsi_is_sdev_device(dev)) {
err = scsi_dev_type_suspend(dev, PMSG_AUTO_SUSPEND);
err = scsi_dev_type_suspend(dev,
pm ? pm->runtime_suspend : NULL);
if (err == -EAGAIN)
pm_schedule_suspend(dev, jiffies_to_msecs(
round_jiffies_up_relative(HZ/10)));
@ -157,10 +166,11 @@ static int scsi_runtime_suspend(struct device *dev)
static int scsi_runtime_resume(struct device *dev)
{
int err = 0;
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
dev_dbg(dev, "scsi_runtime_resume\n");
if (scsi_is_sdev_device(dev))
err = scsi_dev_type_resume(dev);
err = scsi_dev_type_resume(dev, pm ? pm->runtime_resume : NULL);
/* Insert hooks here for targets, hosts, and transport classes */
@ -239,11 +249,11 @@ void scsi_autopm_put_host(struct Scsi_Host *shost)
const struct dev_pm_ops scsi_bus_pm_ops = {
.prepare = scsi_bus_prepare,
.suspend = scsi_bus_suspend,
.resume = scsi_bus_resume_common,
.resume = scsi_bus_resume,
.freeze = scsi_bus_freeze,
.thaw = scsi_bus_resume_common,
.thaw = scsi_bus_thaw,
.poweroff = scsi_bus_poweroff,
.restore = scsi_bus_resume_common,
.restore = scsi_bus_restore,
.runtime_suspend = scsi_runtime_suspend,
.runtime_resume = scsi_runtime_resume,
.runtime_idle = scsi_runtime_idle,

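For orientation, a minimal sketch (hypothetical names, not part of this patch set) of what the reworked plumbing expects from an upper-layer driver: the bus methods above look up dev->driver->pm and hand the matching per-phase callback down, so a ULD publishes a dev_pm_ops rather than the legacy struct device_driver suspend/resume hooks.

/* Hypothetical ULD skeleton.  The SCSI bus PM code above invokes the
 * per-phase callback around scsi_device_quiesce()/scsi_device_resume(). */
#include <linux/device.h>
#include <linux/pm.h>

static int mydrv_suspend(struct device *dev)
{
	/* the device is already quiesced when this runs */
	return 0;
}

static int mydrv_resume(struct device *dev)
{
	return 0;
}

static const struct dev_pm_ops mydrv_pm_ops = {
	.suspend	 = mydrv_suspend,
	.resume		 = mydrv_resume,
	.freeze		 = mydrv_suspend,	/* assumption: all phases alike */
	.thaw		 = mydrv_resume,
	.poweroff	 = mydrv_suspend,
	.restore	 = mydrv_resume,
	.runtime_suspend = mydrv_suspend,
	.runtime_resume	 = mydrv_resume,
};
/* wired up via the driver core, e.g. a struct scsi_driver's
 * .gendrv = { .name = "mydrv", .pm = &mydrv_pm_ops } */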

@ -247,11 +247,11 @@ show_shost_active_mode(struct device *dev,
static DEVICE_ATTR(active_mode, S_IRUGO | S_IWUSR, show_shost_active_mode, NULL);
static int check_reset_type(char *str)
static int check_reset_type(const char *str)
{
if (strncmp(str, "adapter", 10) == 0)
if (sysfs_streq(str, "adapter"))
return SCSI_ADAPTER_RESET;
else if (strncmp(str, "firmware", 10) == 0)
else if (sysfs_streq(str, "firmware"))
return SCSI_FIRMWARE_RESET;
else
return 0;
@ -264,12 +264,9 @@ store_host_reset(struct device *dev, struct device_attribute *attr,
struct Scsi_Host *shost = class_to_shost(dev);
struct scsi_host_template *sht = shost->hostt;
int ret = -EINVAL;
char str[10];
int type;
sscanf(buf, "%s", str);
type = check_reset_type(str);
type = check_reset_type(buf);
if (!type)
goto exit_store_host_reset;


@ -151,6 +151,7 @@ static struct {
{ SAS_LINK_RATE_1_5_GBPS, "1.5 Gbit" },
{ SAS_LINK_RATE_3_0_GBPS, "3.0 Gbit" },
{ SAS_LINK_RATE_6_0_GBPS, "6.0 Gbit" },
{ SAS_LINK_RATE_12_0_GBPS, "12.0 Gbit" },
};
sas_bitfield_name_search(linkspeed, sas_linkspeed_names)
sas_bitfield_name_set(linkspeed, sas_linkspeed_names)
