dect / linux-2.6 (archived)

Merge branch 'devel'

* devel: (33 commits)
  edac i5000, i5400: fix pointer math in i5000_get_mc_regs()
  edac: allow specifying the error count with fake_inject
  edac: add support for Calxeda highbank L2 cache ecc
  edac: add support for Calxeda highbank memory controller
  edac: create top-level debugfs directory
  sb_edac: properly handle error count
  i7core_edac: properly handle error count
  edac: edac_mc_handle_error(): add an error_count parameter
  edac: remove arch-specific parameter for the error handler
  amd64_edac: Don't pass driver name as an error parameter
  edac_mc: check for allocation failure in edac_mc_alloc()
  edac: Increase version to 3.0.0
  edac_mc: Cleanup per-dimm_info debug messages
  edac: Convert debugfX to edac_dbg(X,
  edac: Use more normal debugging macro style
  edac: Don't add __func__ or __FILE__ for debugf[0-9] msgs
  Edac: Add ABI Documentation for the new device nodes
  edac: move documentation ABI to ABI/testing/sysfs-devices-edac
  i7core_edac: change the mem allocation scheme to make Documentation/kobject.txt happy
  edac: change the mem allocation scheme to make Documentation/kobject.txt happy
  ...
Author: Mauro Carvalho Chehab, 2012-07-29 21:11:05 -03:00
commit c2078e4c91
48 changed files with 3506 additions and 2590 deletions

View File

@ -0,0 +1,140 @@
What: /sys/devices/system/edac/mc/mc*/reset_counters
Date: January 2006
Contact: linux-edac@vger.kernel.org
Description: This write-only control file will zero all the statistical
counters for UE and CE errors on the given memory controller.
Zeroing the counters will also reset the timer indicating how
long it has been since the counters were last reset. This is useful for
computing errors/time. Since the counters are always reset
at driver initialization time, no module/kernel parameter
is available.
What: /sys/devices/system/edac/mc/mc*/seconds_since_reset
Date: January 2006
Contact: linux-edac@vger.kernel.org
Description: This attribute file displays how many seconds have elapsed
since the last counter reset. This can be used with the error
counters to measure error rates.
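For instance, a rough error-rate check can combine these files from a
shell (a minimal sketch; "mc0", the one-hour window and the use of bc
are illustrative assumptions, not part of the ABI):
	# zero the counters, let the system run, then compute CEs per hour
	echo 1 > /sys/devices/system/edac/mc/mc0/reset_counters
	sleep 3600
	ce=$(cat /sys/devices/system/edac/mc/mc0/ce_count)
	secs=$(cat /sys/devices/system/edac/mc/mc0/seconds_since_reset)
	echo "scale=2; $ce * 3600 / $secs" | bc   # correctable errors per hour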
What: /sys/devices/system/edac/mc/mc*/mc_name
Date: January 2006
Contact: linux-edac@vger.kernel.org
Description: This attribute file displays the type of memory controller
that is being utilized.
What: /sys/devices/system/edac/mc/mc*/size_mb
Date: January 2006
Contact: linux-edac@vger.kernel.org
Description: This attribute file displays, in megabytes, the amount of
memory that this memory controller manages.
What: /sys/devices/system/edac/mc/mc*/ue_count
Date: January 2006
Contact: linux-edac@vger.kernel.org
Description: This attribute file displays the total count of uncorrectable
errors that have occurred on this memory controller. If
panic_on_ue is set, this counter will not have a chance to
increment, since EDAC will panic the system.
What: /sys/devices/system/edac/mc/mc*/ue_noinfo_count
Date: January 2006
Contact: linux-edac@vger.kernel.org
Description: This attribute file displays the number of UEs that have
occurred on this memory controller with no information as to
which DIMM slot is having errors.
What: /sys/devices/system/edac/mc/mc*/ce_count
Date: January 2006
Contact: linux-edac@vger.kernel.org
Description: This attribute file displays the total count of correctable
errors that have occurred on this memory controller. This
count is very important to examine. CEs provide early
indications that a DIMM is beginning to fail. This count
field should be monitored for non-zero values, and any such
errors should be reported to the system administrator.
What: /sys/devices/system/edac/mc/mc*/ce_noinfo_count
Date: January 2006
Contact: linux-edac@vger.kernel.org
Description: This attribute file displays the number of CEs that
have occurred on this memory controller with no
information as to which DIMM slot is having errors. Memory is
degraded but operational, yet no information is available
to indicate which slot the failing memory is in. This count
field should also be monitored for non-zero values.
What: /sys/devices/system/edac/mc/mc*/sdram_scrub_rate
Date: February 2007
Contact: linux-edac@vger.kernel.org
Description: Read/Write attribute file that controls memory scrubbing.
The scrubbing rate used by the memory controller is set by
writing a minimum bandwidth in bytes/sec to the attribute file.
The rate will be translated to an internal value that gives at
least the specified rate.
Reading the file will return the actual scrubbing rate employed.
If configuration fails or memory scrubbing is not implemented,
the value of the attribute file will be -1.
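As a usage sketch (the requested bandwidth of 10000000 bytes/sec and
the "mc0" instance are arbitrary examples):
	# ask for at least ~10 MB/s of scrub bandwidth, then read back the
	# rate the memory controller actually programmed
	echo 10000000 > /sys/devices/system/edac/mc/mc0/sdram_scrub_rate
	cat /sys/devices/system/edac/mc/mc0/sdram_scrub_rate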
What: /sys/devices/system/edac/mc/mc*/max_location
Date: April 2012
Contact: Mauro Carvalho Chehab <mchehab@redhat.com>
linux-edac@vger.kernel.org
Description: This attribute file displays information about the last
available memory slot in this memory controller. It is used by
userspace tools to display the memory filling layout.
What: /sys/devices/system/edac/mc/mc*/(dimm|rank)*/size
Date: April 2012
Contact: Mauro Carvalho Chehab <mchehab@redhat.com>
linux-edac@vger.kernel.org
Description: This attribute file will display the size of the dimm or rank.
For dimm*/size, this is the size, in MB, of the DIMM memory
stick. For rank*/size, this is the size, in MB, of one rank
of the DIMM memory stick. On single rank memories (1R), this
is also the total size of the dimm. On dual rank (2R) memories,
this is half the total size of the dimm.
What: /sys/devices/system/edac/mc/mc*/(dimm|rank)*/dimm_dev_type
Date: April 2012
Contact: Mauro Carvalho Chehab <mchehab@redhat.com>
linux-edac@vger.kernel.org
Description: This attribute file will display what type of DRAM device is
being utilized on this DIMM (x1, x2, x4, x8, ...).
What: /sys/devices/system/edac/mc/mc*/(dimm|rank)*/dimm_edac_mode
Date: April 2012
Contact: Mauro Carvalho Chehab <mchehab@redhat.com>
linux-edac@vger.kernel.org
Description: This attribute file will display what type of error detection
and correction is being utilized. For example, S4ECD4ED would
mean Chipkill with x4 DRAM.
What: /sys/devices/system/edac/mc/mc*/(dimm|rank)*/dimm_label
Date: April 2012
Contact: Mauro Carvalho Chehab <mchehab@redhat.com>
linux-edac@vger.kernel.org
Description: This control file allows a label to be assigned to this DIMM.
With this label in the module, the DIMM label can be
reported in the system log when errors occur. This becomes
vital for panic events, to isolate the cause of the UE event.
DIMM labels must be assigned after booting, with information
that correctly identifies the physical slot by its
silk screen label. This information is currently very
motherboard specific, and determining it must happen
in userland at this time.
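For example, the silk-screen name of the first DIMM might be recorded
like this (the path and label are purely illustrative):
	echo "DIMM_A1" > /sys/devices/system/edac/mc/mc0/dimm0/dimm_label
	cat /sys/devices/system/edac/mc/mc0/dimm0/dimm_label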
What: /sys/devices/system/edac/mc/mc*/(dimm|rank)*/dimm_location
Date: April 2012
Contact: Mauro Carvalho Chehab <mchehab@redhat.com>
linux-edac@vger.kernel.org
Description: This attribute file will display the location (csrow/channel,
branch/channel/slot or channel/slot) of the dimm or rank.
What: /sys/devices/system/edac/mc/mc*/(dimm|rank)*/dimm_mem_type
Date: April 2012
Contact: Mauro Carvalho Chehab <mchehab@redhat.com>
linux-edac@vger.kernel.org
Description: This attribute file will display what type of memory is
currently on this csrow. Normally, either buffered or
unbuffered memory (for example, Unbuffered-DDR3).

View File

@ -0,0 +1,15 @@
Calxeda Highbank L2 cache ECC
Properties:
- compatible : Should be "calxeda,hb-sregs-l2-ecc"
- reg : Address and size for ECC error interrupt clear registers.
- interrupts : Should be single bit error interrupt, then double bit error
interrupt.
Example:
sregs@fff3c200 {
compatible = "calxeda,hb-sregs-l2-ecc";
reg = <0xfff3c200 0x100>;
interrupts = <0 71 4 0 72 4>;
};

View File

@ -0,0 +1,14 @@
Calxeda DDR memory controller
Properties:
- compatible : Should be "calxeda,hb-ddr-ctrl"
- reg : Address and size for DDR controller registers.
- interrupts : Interrupt for DDR controller.
Example:
memory-controller@fff00000 {
compatible = "calxeda,hb-ddr-ctrl";
reg = <0xfff00000 0x1000>;
interrupts = <0 91 4>;
};

View File

@ -232,116 +232,20 @@ EDAC control and attribute files.
In 'mcX' directories are EDAC control and attribute files for
this 'X' instance of the memory controllers:
Counter reset control file:
'reset_counters'
This write-only control file will zero all the statistical counters
for UE and CE errors. Zeroing the counters will also reset the timer
indicating how long it has been since the counters were last zeroed. This is useful
for computing errors/time. Since the counters are always reset at
driver initialization time, no module/kernel parameter is available.
RUN TIME: echo "anything" >/sys/devices/system/edac/mc/mc0/counter_reset
This resets the counters on memory controller 0
Seconds since last counter reset control file:
'seconds_since_reset'
This attribute file displays how many seconds have elapsed since the
last counter reset. This can be used with the error counters to
measure error rates.
Memory Controller name attribute file:
'mc_name'
This attribute file displays the type of memory controller
that is being utilized.
Total memory managed by this memory controller attribute file:
'size_mb'
This attribute file displays, in megabytes, the amount of memory
that this instance of the memory controller manages.
Total Uncorrectable Errors count attribute file:
'ue_count'
This attribute file displays the total count of uncorrectable
errors that have occurred on this memory controller. If panic_on_ue
is set this counter will not have a chance to increment,
since EDAC will panic the system.
Total UE count that had no information attribute file:
'ue_noinfo_count'
This attribute file displays the number of UEs that have occurred
with no information as to which DIMM slot is having errors.
Total Correctable Errors count attribute file:
'ce_count'
This attribute file displays the total count of correctable
errors that have occurred on this memory controller. This
count is very important to examine. CEs provide early
indications that a DIMM is beginning to fail. This count
field should be monitored for non-zero values, and any such
errors should be reported to the system administrator.
Total Correctable Errors count with no information attribute file:
'ce_noinfo_count'
This attribute file displays the number of CEs that
have occurred with no information as to which DIMM slot
is having errors. Memory is degraded but operational,
yet no information is available to indicate which slot
the failing memory is in. This count field should also
be monitored for non-zero values.
Device Symlink:
'device'
Symlink to the memory controller device.
Sdram memory scrubbing rate:
'sdram_scrub_rate'
Read/Write attribute file that controls memory scrubbing. The scrubbing
rate is set by writing a minimum bandwidth in bytes/sec to the attribute
file. The rate will be translated to an internal value that gives at
least the specified rate.
Reading the file will return the actual scrubbing rate employed.
If configuration fails or memory scrubbing is not implemented, accessing
that attribute will fail.
this 'X' instance of the memory controllers.
For a description of the sysfs API, please see:
Documentation/ABI/testing/sysfs-devices-edac
============================================================================
'csrowX' DIRECTORIES
When CONFIG_EDAC_LEGACY_SYSFS is enabled, the sysfs will contain the
csrowX directories. As this API doesn't work properly for Rambus, FB-DIMMs
and modern Intel Memory Controllers, this is being deprecated in favor
of dimmX directories.
In the 'csrowX' directories are EDAC control and attribute files for
this 'X' instance of csrow:

View File

@ -118,6 +118,12 @@
interrupts = <0 90 4>;
};
memory-controller@fff00000 {
compatible = "calxeda,hb-ddr-ctrl";
reg = <0xfff00000 0x1000>;
interrupts = <0 91 4>;
};
ipc@fff20000 {
compatible = "arm,pl320", "arm,primecell";
reg = <0xfff20000 0x1000>;
@ -188,6 +194,12 @@
reg = <0xfff3c000 0x1000>;
};
sregs@fff3c200 {
compatible = "calxeda,hb-sregs-l2-ecc";
reg = <0xfff3c200 0x100>;
interrupts = <0 71 4 0 72 4>;
};
dma@fff3d000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0xfff3d000 0x1000>;

View File

@ -7,7 +7,7 @@
menuconfig EDAC
bool "EDAC (Error Detection And Correction) reporting"
depends on HAS_IOMEM
depends on X86 || PPC || TILE
depends on X86 || PPC || TILE || ARM
help
EDAC is designed to report errors in the core system.
These are low-level errors that are reported in the CPU or
@ -31,6 +31,14 @@ if EDAC
comment "Reporting subsystems"
config EDAC_LEGACY_SYSFS
bool "EDAC legacy sysfs"
default y
help
Enable the compatibility sysfs nodes.
Use 'Y' if your edac utilities aren't ported to work with the newer
structures.
config EDAC_DEBUG
bool "Debugging"
help
@ -294,4 +302,18 @@ config EDAC_TILE
Support for error detection and correction on the
Tilera memory controller.
config EDAC_HIGHBANK_MC
tristate "Highbank Memory Controller"
depends on EDAC_MM_EDAC && ARCH_HIGHBANK
help
Support for error detection and correction on the
Calxeda Highbank memory controller.
config EDAC_HIGHBANK_L2
tristate "Highbank L2 Cache"
depends on EDAC_MM_EDAC && ARCH_HIGHBANK
help
Support for error detection and correction on the
Calxeda Highbank L2 cache.
endif # EDAC

View File

@ -55,3 +55,6 @@ obj-$(CONFIG_EDAC_AMD8111) += amd8111_edac.o
obj-$(CONFIG_EDAC_AMD8131) += amd8131_edac.o
obj-$(CONFIG_EDAC_TILE) += tile_edac.o
obj-$(CONFIG_EDAC_HIGHBANK_MC) += highbank_mc_edac.o
obj-$(CONFIG_EDAC_HIGHBANK_L2) += highbank_l2_edac.o

View File

@ -321,8 +321,8 @@ found:
return edac_mc_find((int)node_id);
err_no_match:
debugf2("sys_addr 0x%lx doesn't match any node\n",
(unsigned long)sys_addr);
edac_dbg(2, "sys_addr 0x%lx doesn't match any node\n",
(unsigned long)sys_addr);
return NULL;
}
@ -393,15 +393,15 @@ static int input_addr_to_csrow(struct mem_ctl_info *mci, u64 input_addr)
mask = ~mask;
if ((input_addr & mask) == (base & mask)) {
debugf2("InputAddr 0x%lx matches csrow %d (node %d)\n",
(unsigned long)input_addr, csrow,
pvt->mc_node_id);
edac_dbg(2, "InputAddr 0x%lx matches csrow %d (node %d)\n",
(unsigned long)input_addr, csrow,
pvt->mc_node_id);
return csrow;
}
}
debugf2("no matching csrow for InputAddr 0x%lx (MC node %d)\n",
(unsigned long)input_addr, pvt->mc_node_id);
edac_dbg(2, "no matching csrow for InputAddr 0x%lx (MC node %d)\n",
(unsigned long)input_addr, pvt->mc_node_id);
return -1;
}
@ -430,20 +430,20 @@ int amd64_get_dram_hole_info(struct mem_ctl_info *mci, u64 *hole_base,
/* only revE and later have the DRAM Hole Address Register */
if (boot_cpu_data.x86 == 0xf && pvt->ext_model < K8_REV_E) {
debugf1(" revision %d for node %d does not support DHAR\n",
pvt->ext_model, pvt->mc_node_id);
edac_dbg(1, " revision %d for node %d does not support DHAR\n",
pvt->ext_model, pvt->mc_node_id);
return 1;
}
/* valid for Fam10h and above */
if (boot_cpu_data.x86 >= 0x10 && !dhar_mem_hoist_valid(pvt)) {
debugf1(" Dram Memory Hoisting is DISABLED on this system\n");
edac_dbg(1, " Dram Memory Hoisting is DISABLED on this system\n");
return 1;
}
if (!dhar_valid(pvt)) {
debugf1(" Dram Memory Hoisting is DISABLED on this node %d\n",
pvt->mc_node_id);
edac_dbg(1, " Dram Memory Hoisting is DISABLED on this node %d\n",
pvt->mc_node_id);
return 1;
}
@ -475,9 +475,9 @@ int amd64_get_dram_hole_info(struct mem_ctl_info *mci, u64 *hole_base,
else
*hole_offset = k8_dhar_offset(pvt);
debugf1(" DHAR info for node %d base 0x%lx offset 0x%lx size 0x%lx\n",
pvt->mc_node_id, (unsigned long)*hole_base,
(unsigned long)*hole_offset, (unsigned long)*hole_size);
edac_dbg(1, " DHAR info for node %d base 0x%lx offset 0x%lx size 0x%lx\n",
pvt->mc_node_id, (unsigned long)*hole_base,
(unsigned long)*hole_offset, (unsigned long)*hole_size);
return 0;
}
@ -528,10 +528,9 @@ static u64 sys_addr_to_dram_addr(struct mem_ctl_info *mci, u64 sys_addr)
/* use DHAR to translate SysAddr to DramAddr */
dram_addr = sys_addr - hole_offset;
debugf2("using DHAR to translate SysAddr 0x%lx to "
"DramAddr 0x%lx\n",
(unsigned long)sys_addr,
(unsigned long)dram_addr);
edac_dbg(2, "using DHAR to translate SysAddr 0x%lx to DramAddr 0x%lx\n",
(unsigned long)sys_addr,
(unsigned long)dram_addr);
return dram_addr;
}
@ -548,9 +547,8 @@ static u64 sys_addr_to_dram_addr(struct mem_ctl_info *mci, u64 sys_addr)
*/
dram_addr = (sys_addr & GENMASK(0, 39)) - dram_base;
debugf2("using DRAM Base register to translate SysAddr 0x%lx to "
"DramAddr 0x%lx\n", (unsigned long)sys_addr,
(unsigned long)dram_addr);
edac_dbg(2, "using DRAM Base register to translate SysAddr 0x%lx to DramAddr 0x%lx\n",
(unsigned long)sys_addr, (unsigned long)dram_addr);
return dram_addr;
}
@ -586,9 +584,9 @@ static u64 dram_addr_to_input_addr(struct mem_ctl_info *mci, u64 dram_addr)
input_addr = ((dram_addr >> intlv_shift) & GENMASK(12, 35)) +
(dram_addr & 0xfff);
debugf2(" Intlv Shift=%d DramAddr=0x%lx maps to InputAddr=0x%lx\n",
intlv_shift, (unsigned long)dram_addr,
(unsigned long)input_addr);
edac_dbg(2, " Intlv Shift=%d DramAddr=0x%lx maps to InputAddr=0x%lx\n",
intlv_shift, (unsigned long)dram_addr,
(unsigned long)input_addr);
return input_addr;
}
@ -604,8 +602,8 @@ static u64 sys_addr_to_input_addr(struct mem_ctl_info *mci, u64 sys_addr)
input_addr =
dram_addr_to_input_addr(mci, sys_addr_to_dram_addr(mci, sys_addr));
debugf2("SysAdddr 0x%lx translates to InputAddr 0x%lx\n",
(unsigned long)sys_addr, (unsigned long)input_addr);
edac_dbg(2, "SysAdddr 0x%lx translates to InputAddr 0x%lx\n",
(unsigned long)sys_addr, (unsigned long)input_addr);
return input_addr;
}
@ -637,8 +635,8 @@ static u64 input_addr_to_dram_addr(struct mem_ctl_info *mci, u64 input_addr)
intlv_shift = num_node_interleave_bits(dram_intlv_en(pvt, 0));
if (intlv_shift == 0) {
debugf1(" InputAddr 0x%lx translates to DramAddr of "
"same value\n", (unsigned long)input_addr);
edac_dbg(1, " InputAddr 0x%lx translates to DramAddr of same value\n",
(unsigned long)input_addr);
return input_addr;
}
@ -649,9 +647,9 @@ static u64 input_addr_to_dram_addr(struct mem_ctl_info *mci, u64 input_addr)
intlv_sel = dram_intlv_sel(pvt, node_id) & ((1 << intlv_shift) - 1);
dram_addr = bits + (intlv_sel << 12);
debugf1("InputAddr 0x%lx translates to DramAddr 0x%lx "
"(%d node interleave bits)\n", (unsigned long)input_addr,
(unsigned long)dram_addr, intlv_shift);
edac_dbg(1, "InputAddr 0x%lx translates to DramAddr 0x%lx (%d node interleave bits)\n",
(unsigned long)input_addr,
(unsigned long)dram_addr, intlv_shift);
return dram_addr;
}
@ -673,9 +671,9 @@ static u64 dram_addr_to_sys_addr(struct mem_ctl_info *mci, u64 dram_addr)
(dram_addr < (hole_base + hole_size))) {
sys_addr = dram_addr + hole_offset;
debugf1("using DHAR to translate DramAddr 0x%lx to "
"SysAddr 0x%lx\n", (unsigned long)dram_addr,
(unsigned long)sys_addr);
edac_dbg(1, "using DHAR to translate DramAddr 0x%lx to SysAddr 0x%lx\n",
(unsigned long)dram_addr,
(unsigned long)sys_addr);
return sys_addr;
}
@ -697,9 +695,9 @@ static u64 dram_addr_to_sys_addr(struct mem_ctl_info *mci, u64 dram_addr)
*/
sys_addr |= ~((sys_addr & (1ull << 39)) - 1);
debugf1(" Node %d, DramAddr 0x%lx to SysAddr 0x%lx\n",
pvt->mc_node_id, (unsigned long)dram_addr,
(unsigned long)sys_addr);
edac_dbg(1, " Node %d, DramAddr 0x%lx to SysAddr 0x%lx\n",
pvt->mc_node_id, (unsigned long)dram_addr,
(unsigned long)sys_addr);
return sys_addr;
}
@ -768,49 +766,48 @@ static void amd64_debug_display_dimm_sizes(struct amd64_pvt *, u8);
static void amd64_dump_dramcfg_low(u32 dclr, int chan)
{
debugf1("F2x%d90 (DRAM Cfg Low): 0x%08x\n", chan, dclr);
edac_dbg(1, "F2x%d90 (DRAM Cfg Low): 0x%08x\n", chan, dclr);
debugf1(" DIMM type: %sbuffered; all DIMMs support ECC: %s\n",
(dclr & BIT(16)) ? "un" : "",
(dclr & BIT(19)) ? "yes" : "no");
edac_dbg(1, " DIMM type: %sbuffered; all DIMMs support ECC: %s\n",
(dclr & BIT(16)) ? "un" : "",
(dclr & BIT(19)) ? "yes" : "no");
debugf1(" PAR/ERR parity: %s\n",
(dclr & BIT(8)) ? "enabled" : "disabled");
edac_dbg(1, " PAR/ERR parity: %s\n",
(dclr & BIT(8)) ? "enabled" : "disabled");
if (boot_cpu_data.x86 == 0x10)
debugf1(" DCT 128bit mode width: %s\n",
(dclr & BIT(11)) ? "128b" : "64b");
edac_dbg(1, " DCT 128bit mode width: %s\n",
(dclr & BIT(11)) ? "128b" : "64b");
debugf1(" x4 logical DIMMs present: L0: %s L1: %s L2: %s L3: %s\n",
(dclr & BIT(12)) ? "yes" : "no",
(dclr & BIT(13)) ? "yes" : "no",
(dclr & BIT(14)) ? "yes" : "no",
(dclr & BIT(15)) ? "yes" : "no");
edac_dbg(1, " x4 logical DIMMs present: L0: %s L1: %s L2: %s L3: %s\n",
(dclr & BIT(12)) ? "yes" : "no",
(dclr & BIT(13)) ? "yes" : "no",
(dclr & BIT(14)) ? "yes" : "no",
(dclr & BIT(15)) ? "yes" : "no");
}
/* Display and decode various NB registers for debug purposes. */
static void dump_misc_regs(struct amd64_pvt *pvt)
{
debugf1("F3xE8 (NB Cap): 0x%08x\n", pvt->nbcap);
edac_dbg(1, "F3xE8 (NB Cap): 0x%08x\n", pvt->nbcap);
debugf1(" NB two channel DRAM capable: %s\n",
(pvt->nbcap & NBCAP_DCT_DUAL) ? "yes" : "no");
edac_dbg(1, " NB two channel DRAM capable: %s\n",
(pvt->nbcap & NBCAP_DCT_DUAL) ? "yes" : "no");
debugf1(" ECC capable: %s, ChipKill ECC capable: %s\n",
(pvt->nbcap & NBCAP_SECDED) ? "yes" : "no",
(pvt->nbcap & NBCAP_CHIPKILL) ? "yes" : "no");
edac_dbg(1, " ECC capable: %s, ChipKill ECC capable: %s\n",
(pvt->nbcap & NBCAP_SECDED) ? "yes" : "no",
(pvt->nbcap & NBCAP_CHIPKILL) ? "yes" : "no");
amd64_dump_dramcfg_low(pvt->dclr0, 0);
debugf1("F3xB0 (Online Spare): 0x%08x\n", pvt->online_spare);
edac_dbg(1, "F3xB0 (Online Spare): 0x%08x\n", pvt->online_spare);
debugf1("F1xF0 (DRAM Hole Address): 0x%08x, base: 0x%08x, "
"offset: 0x%08x\n",
pvt->dhar, dhar_base(pvt),
(boot_cpu_data.x86 == 0xf) ? k8_dhar_offset(pvt)
: f10_dhar_offset(pvt));
edac_dbg(1, "F1xF0 (DRAM Hole Address): 0x%08x, base: 0x%08x, offset: 0x%08x\n",
pvt->dhar, dhar_base(pvt),
(boot_cpu_data.x86 == 0xf) ? k8_dhar_offset(pvt)
: f10_dhar_offset(pvt));
debugf1(" DramHoleValid: %s\n", dhar_valid(pvt) ? "yes" : "no");
edac_dbg(1, " DramHoleValid: %s\n", dhar_valid(pvt) ? "yes" : "no");
amd64_debug_display_dimm_sizes(pvt, 0);
@ -857,15 +854,15 @@ static void read_dct_base_mask(struct amd64_pvt *pvt)
u32 *base1 = &pvt->csels[1].csbases[cs];
if (!amd64_read_dct_pci_cfg(pvt, reg0, base0))
debugf0(" DCSB0[%d]=0x%08x reg: F2x%x\n",
cs, *base0, reg0);
edac_dbg(0, " DCSB0[%d]=0x%08x reg: F2x%x\n",
cs, *base0, reg0);
if (boot_cpu_data.x86 == 0xf || dct_ganging_enabled(pvt))
continue;
if (!amd64_read_dct_pci_cfg(pvt, reg1, base1))
debugf0(" DCSB1[%d]=0x%08x reg: F2x%x\n",
cs, *base1, reg1);
edac_dbg(0, " DCSB1[%d]=0x%08x reg: F2x%x\n",
cs, *base1, reg1);
}
for_each_chip_select_mask(cs, 0, pvt) {
@ -875,15 +872,15 @@ static void read_dct_base_mask(struct amd64_pvt *pvt)
u32 *mask1 = &pvt->csels[1].csmasks[cs];
if (!amd64_read_dct_pci_cfg(pvt, reg0, mask0))
debugf0(" DCSM0[%d]=0x%08x reg: F2x%x\n",
cs, *mask0, reg0);
edac_dbg(0, " DCSM0[%d]=0x%08x reg: F2x%x\n",
cs, *mask0, reg0);
if (boot_cpu_data.x86 == 0xf || dct_ganging_enabled(pvt))
continue;
if (!amd64_read_dct_pci_cfg(pvt, reg1, mask1))
debugf0(" DCSM1[%d]=0x%08x reg: F2x%x\n",
cs, *mask1, reg1);
edac_dbg(0, " DCSM1[%d]=0x%08x reg: F2x%x\n",
cs, *mask1, reg1);
}
}
@ -1049,24 +1046,22 @@ static void k8_map_sysaddr_to_csrow(struct mem_ctl_info *mci, u64 sys_addr,
if (!src_mci) {
amd64_mc_err(mci, "failed to map error addr 0x%lx to a node\n",
(unsigned long)sys_addr);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, offset, syndrome,
-1, -1, -1,
EDAC_MOD_STR,
"failed to map error addr to a node",
NULL);
"");
return;
}
/* Now map the sys_addr to a CSROW */
csrow = sys_addr_to_csrow(src_mci, sys_addr);
if (csrow < 0) {
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, offset, syndrome,
-1, -1, -1,
EDAC_MOD_STR,
"failed to map error addr to a csrow",
NULL);
"");
return;
}
@ -1082,12 +1077,11 @@ static void k8_map_sysaddr_to_csrow(struct mem_ctl_info *mci, u64 sys_addr,
amd64_mc_warn(src_mci, "unknown syndrome 0x%04x - "
"possible error reporting race\n",
syndrome);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, offset, syndrome,
csrow, -1, -1,
EDAC_MOD_STR,
"unknown syndrome - possible error reporting race",
NULL);
"");
return;
}
} else {
@ -1102,10 +1096,10 @@ static void k8_map_sysaddr_to_csrow(struct mem_ctl_info *mci, u64 sys_addr,
channel = ((sys_addr & BIT(3)) != 0);
}
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, src_mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, src_mci, 1,
page, offset, syndrome,
csrow, channel, -1,
EDAC_MOD_STR, "", NULL);
"", "");
}
static int ddr2_cs_size(unsigned i, bool dct_width)
@ -1193,7 +1187,7 @@ static int f1x_early_channel_count(struct amd64_pvt *pvt)
* Need to check DCT0[0] and DCT1[0] to see if only one of them has
* their CSEnable bit on. If so, then SINGLE DIMM case.
*/
debugf0("Data width is not 128 bits - need more decoding\n");
edac_dbg(0, "Data width is not 128 bits - need more decoding\n");
/*
* Check DRAM Bank Address Mapping values for each DIMM to see if there
@ -1272,25 +1266,24 @@ static void read_dram_ctl_register(struct amd64_pvt *pvt)
return;
if (!amd64_read_dct_pci_cfg(pvt, DCT_SEL_LO, &pvt->dct_sel_lo)) {
debugf0("F2x110 (DCTSelLow): 0x%08x, High range addrs at: 0x%x\n",
pvt->dct_sel_lo, dct_sel_baseaddr(pvt));
edac_dbg(0, "F2x110 (DCTSelLow): 0x%08x, High range addrs at: 0x%x\n",
pvt->dct_sel_lo, dct_sel_baseaddr(pvt));
debugf0(" DCTs operate in %s mode.\n",
(dct_ganging_enabled(pvt) ? "ganged" : "unganged"));
edac_dbg(0, " DCTs operate in %s mode\n",
(dct_ganging_enabled(pvt) ? "ganged" : "unganged"));
if (!dct_ganging_enabled(pvt))
debugf0(" Address range split per DCT: %s\n",
(dct_high_range_enabled(pvt) ? "yes" : "no"));
edac_dbg(0, " Address range split per DCT: %s\n",
(dct_high_range_enabled(pvt) ? "yes" : "no"));
debugf0(" data interleave for ECC: %s, "
"DRAM cleared since last warm reset: %s\n",
(dct_data_intlv_enabled(pvt) ? "enabled" : "disabled"),
(dct_memory_cleared(pvt) ? "yes" : "no"));
edac_dbg(0, " data interleave for ECC: %s, DRAM cleared since last warm reset: %s\n",
(dct_data_intlv_enabled(pvt) ? "enabled" : "disabled"),
(dct_memory_cleared(pvt) ? "yes" : "no"));
debugf0(" channel interleave: %s, "
"interleave bits selector: 0x%x\n",
(dct_interleave_enabled(pvt) ? "enabled" : "disabled"),
dct_sel_interleave_addr(pvt));
edac_dbg(0, " channel interleave: %s, "
"interleave bits selector: 0x%x\n",
(dct_interleave_enabled(pvt) ? "enabled" : "disabled"),
dct_sel_interleave_addr(pvt));
}
amd64_read_dct_pci_cfg(pvt, DCT_SEL_HI, &pvt->dct_sel_hi);
@ -1428,7 +1421,7 @@ static int f1x_lookup_addr_in_dct(u64 in_addr, u32 nid, u8 dct)
pvt = mci->pvt_info;
debugf1("input addr: 0x%llx, DCT: %d\n", in_addr, dct);
edac_dbg(1, "input addr: 0x%llx, DCT: %d\n", in_addr, dct);
for_each_chip_select(csrow, dct, pvt) {
if (!csrow_enabled(csrow, dct, pvt))
@ -1436,19 +1429,18 @@ static int f1x_lookup_addr_in_dct(u64 in_addr, u32 nid, u8 dct)
get_cs_base_and_mask(pvt, csrow, dct, &cs_base, &cs_mask);
debugf1(" CSROW=%d CSBase=0x%llx CSMask=0x%llx\n",
csrow, cs_base, cs_mask);
edac_dbg(1, " CSROW=%d CSBase=0x%llx CSMask=0x%llx\n",
csrow, cs_base, cs_mask);
cs_mask = ~cs_mask;
debugf1(" (InputAddr & ~CSMask)=0x%llx "
"(CSBase & ~CSMask)=0x%llx\n",
(in_addr & cs_mask), (cs_base & cs_mask));
edac_dbg(1, " (InputAddr & ~CSMask)=0x%llx (CSBase & ~CSMask)=0x%llx\n",
(in_addr & cs_mask), (cs_base & cs_mask));
if ((in_addr & cs_mask) == (cs_base & cs_mask)) {
cs_found = f10_process_possible_spare(pvt, dct, csrow);
debugf1(" MATCH csrow=%d\n", cs_found);
edac_dbg(1, " MATCH csrow=%d\n", cs_found);
break;
}
}
@ -1505,8 +1497,8 @@ static int f1x_match_to_this_node(struct amd64_pvt *pvt, unsigned range,
u8 intlv_en = dram_intlv_en(pvt, range);
u32 intlv_sel = dram_intlv_sel(pvt, range);
debugf1("(range %d) SystemAddr= 0x%llx Limit=0x%llx\n",
range, sys_addr, get_dram_limit(pvt, range));
edac_dbg(1, "(range %d) SystemAddr= 0x%llx Limit=0x%llx\n",
range, sys_addr, get_dram_limit(pvt, range));
if (dhar_valid(pvt) &&
dhar_base(pvt) <= sys_addr &&
@ -1562,7 +1554,7 @@ static int f1x_match_to_this_node(struct amd64_pvt *pvt, unsigned range,
(chan_addr & 0xfff);
}
debugf1(" Normalized DCT addr: 0x%llx\n", chan_addr);
edac_dbg(1, " Normalized DCT addr: 0x%llx\n", chan_addr);
cs_found = f1x_lookup_addr_in_dct(chan_addr, node_id, channel);
@ -1616,12 +1608,11 @@ static void f1x_map_sysaddr_to_csrow(struct mem_ctl_info *mci, u64 sys_addr,
csrow = f1x_translate_sysaddr_to_cs(pvt, sys_addr, &nid, &chan);
if (csrow < 0) {
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, offset, syndrome,
-1, -1, -1,
EDAC_MOD_STR,
"failed to map error addr to a csrow",
NULL);
"");
return;
}
@ -1633,10 +1624,10 @@ static void f1x_map_sysaddr_to_csrow(struct mem_ctl_info *mci, u64 sys_addr,
if (dct_ganging_enabled(pvt))
chan = get_channel_from_ecc_syndrome(mci, syndrome);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, offset, syndrome,
csrow, chan, -1,
EDAC_MOD_STR, "", NULL);
"", "");
}
/*
@ -1664,7 +1655,8 @@ static void amd64_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl)
dcsb = (ctrl && !dct_ganging_enabled(pvt)) ? pvt->csels[1].csbases
: pvt->csels[0].csbases;
debugf1("F2x%d80 (DRAM Bank Address Mapping): 0x%08x\n", ctrl, dbam);
edac_dbg(1, "F2x%d80 (DRAM Bank Address Mapping): 0x%08x\n",
ctrl, dbam);
edac_printk(KERN_DEBUG, EDAC_MC, "DCT%d chip selects:\n", ctrl);
@ -1840,7 +1832,7 @@ static int decode_syndrome(u16 syndrome, u16 *vectors, unsigned num_vecs,
}
}
debugf0("syndrome(%x) not found\n", syndrome);
edac_dbg(0, "syndrome(%x) not found\n", syndrome);
return -1;
}
@ -1917,12 +1909,11 @@ static void amd64_handle_ce(struct mem_ctl_info *mci, struct mce *m)
/* Ensure that the Error Address is VALID */
if (!(m->status & MCI_STATUS_ADDRV)) {
amd64_mc_err(mci, "HW has no ERROR_ADDRESS available\n");
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
0, 0, 0,
-1, -1, -1,
EDAC_MOD_STR,
"HW has no ERROR_ADDRESS available",
NULL);
"");
return;
}
@ -1946,12 +1937,11 @@ static void amd64_handle_ue(struct mem_ctl_info *mci, struct mce *m)
if (!(m->status & MCI_STATUS_ADDRV)) {
amd64_mc_err(mci, "HW has no ERROR_ADDRESS available\n");
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
0, 0, 0,
-1, -1, -1,
EDAC_MOD_STR,
"HW has no ERROR_ADDRESS available",
NULL);
"");
return;
}
@ -1966,11 +1956,11 @@ static void amd64_handle_ue(struct mem_ctl_info *mci, struct mce *m)
if (!src_mci) {
amd64_mc_err(mci, "ERROR ADDRESS (0x%lx) NOT mapped to a MC\n",
(unsigned long)sys_addr);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
page, offset, 0,
-1, -1, -1,
EDAC_MOD_STR,
"ERROR ADDRESS NOT mapped to a MC", NULL);
"ERROR ADDRESS NOT mapped to a MC",
"");
return;
}
@ -1980,17 +1970,16 @@ static void amd64_handle_ue(struct mem_ctl_info *mci, struct mce *m)
if (csrow < 0) {
amd64_mc_err(mci, "ERROR_ADDRESS (0x%lx) NOT mapped to CS\n",
(unsigned long)sys_addr);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
page, offset, 0,
-1, -1, -1,
EDAC_MOD_STR,
"ERROR ADDRESS NOT mapped to CS",
NULL);
"");
} else {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
page, offset, 0,
csrow, -1, -1,
EDAC_MOD_STR, "", NULL);
"", "");
}
}
@ -2047,9 +2036,9 @@ static int reserve_mc_sibling_devs(struct amd64_pvt *pvt, u16 f1_id, u16 f3_id)
return -ENODEV;
}
debugf1("F1: %s\n", pci_name(pvt->F1));
debugf1("F2: %s\n", pci_name(pvt->F2));
debugf1("F3: %s\n", pci_name(pvt->F3));
edac_dbg(1, "F1: %s\n", pci_name(pvt->F1));
edac_dbg(1, "F2: %s\n", pci_name(pvt->F2));
edac_dbg(1, "F3: %s\n", pci_name(pvt->F3));
return 0;
}
@ -2076,15 +2065,15 @@ static void read_mc_regs(struct amd64_pvt *pvt)
* those are Read-As-Zero
*/
rdmsrl(MSR_K8_TOP_MEM1, pvt->top_mem);
debugf0(" TOP_MEM: 0x%016llx\n", pvt->top_mem);
edac_dbg(0, " TOP_MEM: 0x%016llx\n", pvt->top_mem);
/* check first whether TOP_MEM2 is enabled */
rdmsrl(MSR_K8_SYSCFG, msr_val);
if (msr_val & (1U << 21)) {
rdmsrl(MSR_K8_TOP_MEM2, pvt->top_mem2);
debugf0(" TOP_MEM2: 0x%016llx\n", pvt->top_mem2);
edac_dbg(0, " TOP_MEM2: 0x%016llx\n", pvt->top_mem2);
} else
debugf0(" TOP_MEM2 disabled.\n");
edac_dbg(0, " TOP_MEM2 disabled\n");
amd64_read_pci_cfg(pvt->F3, NBCAP, &pvt->nbcap);
@ -2100,17 +2089,17 @@ static void read_mc_regs(struct amd64_pvt *pvt)
if (!rw)
continue;
debugf1(" DRAM range[%d], base: 0x%016llx; limit: 0x%016llx\n",
range,
get_dram_base(pvt, range),
get_dram_limit(pvt, range));
edac_dbg(1, " DRAM range[%d], base: 0x%016llx; limit: 0x%016llx\n",
range,
get_dram_base(pvt, range),
get_dram_limit(pvt, range));
debugf1(" IntlvEn=%s; Range access: %s%s IntlvSel=%d DstNode=%d\n",
dram_intlv_en(pvt, range) ? "Enabled" : "Disabled",
(rw & 0x1) ? "R" : "-",
(rw & 0x2) ? "W" : "-",
dram_intlv_sel(pvt, range),
dram_dst_node(pvt, range));
edac_dbg(1, " IntlvEn=%s; Range access: %s%s IntlvSel=%d DstNode=%d\n",
dram_intlv_en(pvt, range) ? "Enabled" : "Disabled",
(rw & 0x1) ? "R" : "-",
(rw & 0x2) ? "W" : "-",
dram_intlv_sel(pvt, range),
dram_dst_node(pvt, range));
}
read_dct_base_mask(pvt);
@ -2191,9 +2180,9 @@ static u32 amd64_csrow_nr_pages(struct amd64_pvt *pvt, u8 dct, int csrow_nr)
nr_pages = pvt->ops->dbam_to_cs(pvt, dct, cs_mode) << (20 - PAGE_SHIFT);
debugf0(" (csrow=%d) DBAM map index= %d\n", csrow_nr, cs_mode);
debugf0(" nr_pages/channel= %u channel-count = %d\n",
nr_pages, pvt->channel_count);
edac_dbg(0, " (csrow=%d) DBAM map index= %d\n", csrow_nr, cs_mode);
edac_dbg(0, " nr_pages/channel= %u channel-count = %d\n",
nr_pages, pvt->channel_count);
return nr_pages;
}
@ -2205,6 +2194,7 @@ static u32 amd64_csrow_nr_pages(struct amd64_pvt *pvt, u8 dct, int csrow_nr)
static int init_csrows(struct mem_ctl_info *mci)
{
struct csrow_info *csrow;
struct dimm_info *dimm;
struct amd64_pvt *pvt = mci->pvt_info;
u64 base, mask;
u32 val;
@ -2217,22 +2207,19 @@ static int init_csrows(struct mem_ctl_info *mci)
pvt->nbcfg = val;
debugf0("node %d, NBCFG=0x%08x[ChipKillEccCap: %d|DramEccEn: %d]\n",
pvt->mc_node_id, val,
!!(val & NBCFG_CHIPKILL), !!(val & NBCFG_ECC_ENABLE));
edac_dbg(0, "node %d, NBCFG=0x%08x[ChipKillEccCap: %d|DramEccEn: %d]\n",
pvt->mc_node_id, val,
!!(val & NBCFG_CHIPKILL), !!(val & NBCFG_ECC_ENABLE));
for_each_chip_select(i, 0, pvt) {
csrow = &mci->csrows[i];
csrow = mci->csrows[i];
if (!csrow_enabled(i, 0, pvt) && !csrow_enabled(i, 1, pvt)) {
debugf1("----CSROW %d EMPTY for node %d\n", i,
pvt->mc_node_id);
edac_dbg(1, "----CSROW %d VALID for MC node %d\n",
i, pvt->mc_node_id);
continue;
}
debugf1("----CSROW %d VALID for MC node %d\n",
i, pvt->mc_node_id);
empty = 0;
if (csrow_enabled(i, 0, pvt))
nr_pages = amd64_csrow_nr_pages(pvt, 0, i);
@ -2244,8 +2231,9 @@ static int init_csrows(struct mem_ctl_info *mci)
mtype = amd64_determine_memory_type(pvt, i);
debugf1(" for MC node %d csrow %d:\n", pvt->mc_node_id, i);
debugf1(" nr_pages: %u\n", nr_pages * pvt->channel_count);
edac_dbg(1, " for MC node %d csrow %d:\n", pvt->mc_node_id, i);
edac_dbg(1, " nr_pages: %u\n",
nr_pages * pvt->channel_count);
/*
* determine whether CHIPKILL or JUST ECC or NO ECC is operating
@ -2257,9 +2245,10 @@ static int init_csrows(struct mem_ctl_info *mci)
edac_mode = EDAC_NONE;
for (j = 0; j < pvt->channel_count; j++) {
csrow->channels[j].dimm->mtype = mtype;
csrow->channels[j].dimm->edac_mode = edac_mode;
csrow->channels[j].dimm->nr_pages = nr_pages;
dimm = csrow->channels[j]->dimm;
dimm->mtype = mtype;
dimm->edac_mode = edac_mode;
dimm->nr_pages = nr_pages;
}
}
@ -2296,9 +2285,9 @@ static bool amd64_nb_mce_bank_enabled_on_node(unsigned nid)
struct msr *reg = per_cpu_ptr(msrs, cpu);
nbe = reg->l & MSR_MCGCTL_NBE;
debugf0("core: %u, MCG_CTL: 0x%llx, NB MSR is %s\n",
cpu, reg->q,
(nbe ? "enabled" : "disabled"));
edac_dbg(0, "core: %u, MCG_CTL: 0x%llx, NB MSR is %s\n",
cpu, reg->q,
(nbe ? "enabled" : "disabled"));
if (!nbe)
goto out;
@ -2369,8 +2358,8 @@ static bool enable_ecc_error_reporting(struct ecc_settings *s, u8 nid,
amd64_read_pci_cfg(F3, NBCFG, &value);
debugf0("1: node %d, NBCFG=0x%08x[DramEccEn: %d]\n",
nid, value, !!(value & NBCFG_ECC_ENABLE));
edac_dbg(0, "1: node %d, NBCFG=0x%08x[DramEccEn: %d]\n",
nid, value, !!(value & NBCFG_ECC_ENABLE));
if (!(value & NBCFG_ECC_ENABLE)) {
amd64_warn("DRAM ECC disabled on this node, enabling...\n");
@ -2394,8 +2383,8 @@ static bool enable_ecc_error_reporting(struct ecc_settings *s, u8 nid,
s->flags.nb_ecc_prev = 1;
}
debugf0("2: node %d, NBCFG=0x%08x[DramEccEn: %d]\n",
nid, value, !!(value & NBCFG_ECC_ENABLE));
edac_dbg(0, "2: node %d, NBCFG=0x%08x[DramEccEn: %d]\n",
nid, value, !!(value & NBCFG_ECC_ENABLE));
return ret;
}
@ -2463,26 +2452,29 @@ static bool ecc_enabled(struct pci_dev *F3, u8 nid)
return true;
}
struct mcidev_sysfs_attribute sysfs_attrs[ARRAY_SIZE(amd64_dbg_attrs) +
ARRAY_SIZE(amd64_inj_attrs) +
1];
struct mcidev_sysfs_attribute terminator = { .attr = { .name = NULL } };
static void set_mc_sysfs_attrs(struct mem_ctl_info *mci)
static int set_mc_sysfs_attrs(struct mem_ctl_info *mci)
{
unsigned int i = 0, j = 0;
int rc;
for (; i < ARRAY_SIZE(amd64_dbg_attrs); i++)
sysfs_attrs[i] = amd64_dbg_attrs[i];
rc = amd64_create_sysfs_dbg_files(mci);
if (rc < 0)
return rc;
if (boot_cpu_data.x86 >= 0x10) {
rc = amd64_create_sysfs_inject_files(mci);
if (rc < 0)
return rc;
}
return 0;
}
static void del_mc_sysfs_attrs(struct mem_ctl_info *mci)
{
amd64_remove_sysfs_dbg_files(mci);
if (boot_cpu_data.x86 >= 0x10)
for (j = 0; j < ARRAY_SIZE(amd64_inj_attrs); j++, i++)
sysfs_attrs[i] = amd64_inj_attrs[j];
sysfs_attrs[i] = terminator;
mci->mc_driver_sysfs_attributes = sysfs_attrs;
amd64_remove_sysfs_inject_files(mci);
}
static void setup_mci_misc_attrs(struct mem_ctl_info *mci,
@ -2601,20 +2593,22 @@ static int amd64_init_one_instance(struct pci_dev *F2)
goto err_siblings;
mci->pvt_info = pvt;
mci->dev = &pvt->F2->dev;
mci->pdev = &pvt->F2->dev;
setup_mci_misc_attrs(mci, fam_type);
if (init_csrows(mci))
mci->edac_cap = EDAC_FLAG_NONE;
set_mc_sysfs_attrs(mci);
ret = -ENODEV;
if (edac_mc_add_mc(mci)) {
debugf1("failed edac_mc_add_mc()\n");
edac_dbg(1, "failed edac_mc_add_mc()\n");
goto err_add_mc;
}
if (set_mc_sysfs_attrs(mci)) {
edac_dbg(1, "failed edac_mc_add_mc()\n");
goto err_add_sysfs;
}
/* register stuff with EDAC MCE */
if (report_gart_errors)
@ -2628,6 +2622,8 @@ static int amd64_init_one_instance(struct pci_dev *F2)
return 0;
err_add_sysfs:
edac_mc_del_mc(mci->pdev);
err_add_mc:
edac_mc_free(mci);
@ -2651,7 +2647,7 @@ static int __devinit amd64_probe_one_instance(struct pci_dev *pdev,
ret = pci_enable_device(pdev);
if (ret < 0) {
debugf0("ret=%d\n", ret);
edac_dbg(0, "ret=%d\n", ret);
return -EIO;
}
@ -2698,6 +2694,8 @@ static void __devexit amd64_remove_one_instance(struct pci_dev *pdev)
struct pci_dev *F3 = node_to_amd_nb(nid)->misc;
struct ecc_settings *s = ecc_stngs[nid];
mci = find_mci_by_dev(&pdev->dev);
del_mc_sysfs_attrs(mci);
/* Remove from EDAC CORE tracking list */
mci = edac_mc_del_mc(&pdev->dev);
if (!mci)

View File

@ -413,19 +413,32 @@ struct ecc_settings {
};
#ifdef CONFIG_EDAC_DEBUG
#define NUM_DBG_ATTRS 5
int amd64_create_sysfs_dbg_files(struct mem_ctl_info *mci);
void amd64_remove_sysfs_dbg_files(struct mem_ctl_info *mci);
#else
#define NUM_DBG_ATTRS 0
static inline int amd64_create_sysfs_dbg_files(struct mem_ctl_info *mci)
{
return 0;
}
static void inline amd64_remove_sysfs_dbg_files(struct mem_ctl_info *mci)
{
}
#endif
#ifdef CONFIG_EDAC_AMD64_ERROR_INJECTION
#define NUM_INJ_ATTRS 5
#else
#define NUM_INJ_ATTRS 0
#endif
int amd64_create_sysfs_inject_files(struct mem_ctl_info *mci);
void amd64_remove_sysfs_inject_files(struct mem_ctl_info *mci);
extern struct mcidev_sysfs_attribute amd64_dbg_attrs[NUM_DBG_ATTRS],
amd64_inj_attrs[NUM_INJ_ATTRS];
#else
static inline int amd64_create_sysfs_inject_files(struct mem_ctl_info *mci)
{
return 0;
}
static inline void amd64_remove_sysfs_inject_files(struct mem_ctl_info *mci)
{
}
#endif
/*
* Each of the PCI Device IDs types have their own set of hardware accessor
@ -460,3 +473,5 @@ int __amd64_write_pci_cfg_dword(struct pci_dev *pdev, int offset,
int amd64_get_dram_hole_info(struct mem_ctl_info *mci, u64 *hole_base,
u64 *hole_offset, u64 *hole_size);
#define to_mci(k) container_of(k, struct mem_ctl_info, dev)

View File

@ -1,8 +1,11 @@
#include "amd64_edac.h"
#define EDAC_DCT_ATTR_SHOW(reg) \
static ssize_t amd64_##reg##_show(struct mem_ctl_info *mci, char *data) \
static ssize_t amd64_##reg##_show(struct device *dev, \
struct device_attribute *mattr, \
char *data) \
{ \
struct mem_ctl_info *mci = to_mci(dev); \
struct amd64_pvt *pvt = mci->pvt_info; \
return sprintf(data, "0x%016llx\n", (u64)pvt->reg); \
}
@ -12,8 +15,12 @@ EDAC_DCT_ATTR_SHOW(dbam0);
EDAC_DCT_ATTR_SHOW(top_mem);
EDAC_DCT_ATTR_SHOW(top_mem2);
static ssize_t amd64_hole_show(struct mem_ctl_info *mci, char *data)
static ssize_t amd64_hole_show(struct device *dev,
struct device_attribute *mattr,
char *data)
{
struct mem_ctl_info *mci = to_mci(dev);
u64 hole_base = 0;
u64 hole_offset = 0;
u64 hole_size = 0;
@ -27,46 +34,40 @@ static ssize_t amd64_hole_show(struct mem_ctl_info *mci, char *data)
/*
* update NUM_DBG_ATTRS in case you add new members
*/
struct mcidev_sysfs_attribute amd64_dbg_attrs[] = {
static DEVICE_ATTR(dhar, S_IRUGO, amd64_dhar_show, NULL);
static DEVICE_ATTR(dbam, S_IRUGO, amd64_dbam0_show, NULL);
static DEVICE_ATTR(topmem, S_IRUGO, amd64_top_mem_show, NULL);
static DEVICE_ATTR(topmem2, S_IRUGO, amd64_top_mem2_show, NULL);
static DEVICE_ATTR(dram_hole, S_IRUGO, amd64_hole_show, NULL);
{
.attr = {
.name = "dhar",
.mode = (S_IRUGO)
},
.show = amd64_dhar_show,
.store = NULL,
},
{
.attr = {
.name = "dbam",
.mode = (S_IRUGO)
},
.show = amd64_dbam0_show,
.store = NULL,
},
{
.attr = {
.name = "topmem",
.mode = (S_IRUGO)
},
.show = amd64_top_mem_show,
.store = NULL,
},
{
.attr = {
.name = "topmem2",
.mode = (S_IRUGO)
},
.show = amd64_top_mem2_show,
.store = NULL,
},
{
.attr = {
.name = "dram_hole",
.mode = (S_IRUGO)
},
.show = amd64_hole_show,
.store = NULL,
},
};
int amd64_create_sysfs_dbg_files(struct mem_ctl_info *mci)
{
int rc;
rc = device_create_file(&mci->dev, &dev_attr_dhar);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_dbam);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_topmem);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_topmem2);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_dram_hole);
if (rc < 0)
return rc;
return 0;
}
void amd64_remove_sysfs_dbg_files(struct mem_ctl_info *mci)
{
device_remove_file(&mci->dev, &dev_attr_dhar);
device_remove_file(&mci->dev, &dev_attr_dbam);
device_remove_file(&mci->dev, &dev_attr_topmem);
device_remove_file(&mci->dev, &dev_attr_topmem2);
device_remove_file(&mci->dev, &dev_attr_dram_hole);
}

View File

@ -1,7 +1,10 @@
#include "amd64_edac.h"
static ssize_t amd64_inject_section_show(struct mem_ctl_info *mci, char *buf)
static ssize_t amd64_inject_section_show(struct device *dev,
struct device_attribute *mattr,
char *buf)
{
struct mem_ctl_info *mci = to_mci(dev);
struct amd64_pvt *pvt = mci->pvt_info;
return sprintf(buf, "0x%x\n", pvt->injection.section);
}
@ -12,9 +15,11 @@ static ssize_t amd64_inject_section_show(struct mem_ctl_info *mci, char *buf)
*
* range: 0..3
*/
static ssize_t amd64_inject_section_store(struct mem_ctl_info *mci,
static ssize_t amd64_inject_section_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct amd64_pvt *pvt = mci->pvt_info;
unsigned long value;
int ret = 0;
@ -33,8 +38,11 @@ static ssize_t amd64_inject_section_store(struct mem_ctl_info *mci,
return ret;
}
static ssize_t amd64_inject_word_show(struct mem_ctl_info *mci, char *buf)
static ssize_t amd64_inject_word_show(struct device *dev,
struct device_attribute *mattr,
char *buf)
{
struct mem_ctl_info *mci = to_mci(dev);
struct amd64_pvt *pvt = mci->pvt_info;
return sprintf(buf, "0x%x\n", pvt->injection.word);
}
@ -45,9 +53,11 @@ static ssize_t amd64_inject_word_show(struct mem_ctl_info *mci, char *buf)
*
* range: 0..8
*/
static ssize_t amd64_inject_word_store(struct mem_ctl_info *mci,
const char *data, size_t count)
static ssize_t amd64_inject_word_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct amd64_pvt *pvt = mci->pvt_info;
unsigned long value;
int ret = 0;
@ -66,8 +76,11 @@ static ssize_t amd64_inject_word_store(struct mem_ctl_info *mci,
return ret;
}
static ssize_t amd64_inject_ecc_vector_show(struct mem_ctl_info *mci, char *buf)
static ssize_t amd64_inject_ecc_vector_show(struct device *dev,
struct device_attribute *mattr,
char *buf)
{
struct mem_ctl_info *mci = to_mci(dev);
struct amd64_pvt *pvt = mci->pvt_info;
return sprintf(buf, "0x%x\n", pvt->injection.bit_map);
}
@ -77,9 +90,11 @@ static ssize_t amd64_inject_ecc_vector_show(struct mem_ctl_info *mci, char *buf)
* corresponding bit within the error injection word above. When used during a
* DRAM ECC read, it holds the contents of the of the DRAM ECC bits.
*/
static ssize_t amd64_inject_ecc_vector_store(struct mem_ctl_info *mci,
const char *data, size_t count)
static ssize_t amd64_inject_ecc_vector_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct amd64_pvt *pvt = mci->pvt_info;
unsigned long value;
int ret = 0;
@ -103,9 +118,11 @@ static ssize_t amd64_inject_ecc_vector_store(struct mem_ctl_info *mci,
* Do a DRAM ECC read. Assemble staged values in the pvt area, format into
* fields needed by the injection registers and read the NB Array Data Port.
*/
static ssize_t amd64_inject_read_store(struct mem_ctl_info *mci,
const char *data, size_t count)
static ssize_t amd64_inject_read_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct amd64_pvt *pvt = mci->pvt_info;
unsigned long value;
u32 section, word_bits;
@ -125,7 +142,8 @@ static ssize_t amd64_inject_read_store(struct mem_ctl_info *mci,
/* Issue 'word' and 'bit' along with the READ request */
amd64_write_pci_cfg(pvt->F3, F10_NB_ARRAY_DATA, word_bits);
debugf0("section=0x%x word_bits=0x%x\n", section, word_bits);
edac_dbg(0, "section=0x%x word_bits=0x%x\n",
section, word_bits);
return count;
}
@ -136,9 +154,11 @@ static ssize_t amd64_inject_read_store(struct mem_ctl_info *mci,
* Do a DRAM ECC write. Assemble staged values in the pvt area and format into
* fields needed by the injection registers.
*/
static ssize_t amd64_inject_write_store(struct mem_ctl_info *mci,
static ssize_t amd64_inject_write_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct amd64_pvt *pvt = mci->pvt_info;
unsigned long value;
u32 section, word_bits;
@ -158,7 +178,8 @@ static ssize_t amd64_inject_write_store(struct mem_ctl_info *mci,
/* Issue 'word' and 'bit' along with the READ request */
amd64_write_pci_cfg(pvt->F3, F10_NB_ARRAY_DATA, word_bits);
debugf0("section=0x%x word_bits=0x%x\n", section, word_bits);
edac_dbg(0, "section=0x%x word_bits=0x%x\n",
section, word_bits);
return count;
}
@ -168,46 +189,47 @@ static ssize_t amd64_inject_write_store(struct mem_ctl_info *mci,
/*
* update NUM_INJ_ATTRS in case you add new members
*/
struct mcidev_sysfs_attribute amd64_inj_attrs[] = {
{
.attr = {
.name = "inject_section",
.mode = (S_IRUGO | S_IWUSR)
},
.show = amd64_inject_section_show,
.store = amd64_inject_section_store,
},
{
.attr = {
.name = "inject_word",
.mode = (S_IRUGO | S_IWUSR)
},
.show = amd64_inject_word_show,
.store = amd64_inject_word_store,
},
{
.attr = {
.name = "inject_ecc_vector",
.mode = (S_IRUGO | S_IWUSR)
},
.show = amd64_inject_ecc_vector_show,
.store = amd64_inject_ecc_vector_store,
},
{
.attr = {
.name = "inject_write",
.mode = (S_IRUGO | S_IWUSR)
},
.show = NULL,
.store = amd64_inject_write_store,
},
{
.attr = {
.name = "inject_read",
.mode = (S_IRUGO | S_IWUSR)
},
.show = NULL,
.store = amd64_inject_read_store,
},
};
static DEVICE_ATTR(inject_section, S_IRUGO | S_IWUSR,
amd64_inject_section_show, amd64_inject_section_store);
static DEVICE_ATTR(inject_word, S_IRUGO | S_IWUSR,
amd64_inject_word_show, amd64_inject_word_store);
static DEVICE_ATTR(inject_ecc_vector, S_IRUGO | S_IWUSR,
amd64_inject_ecc_vector_show, amd64_inject_ecc_vector_store);
static DEVICE_ATTR(inject_write, S_IRUGO | S_IWUSR,
NULL, amd64_inject_write_store);
static DEVICE_ATTR(inject_read, S_IRUGO | S_IWUSR,
NULL, amd64_inject_read_store);
int amd64_create_sysfs_inject_files(struct mem_ctl_info *mci)
{
int rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_section);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_word);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_ecc_vector);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_write);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_read);
if (rc < 0)
return rc;
return 0;
}
void amd64_remove_sysfs_inject_files(struct mem_ctl_info *mci)
{
device_remove_file(&mci->dev, &dev_attr_inject_section);
device_remove_file(&mci->dev, &dev_attr_inject_word);
device_remove_file(&mci->dev, &dev_attr_inject_ecc_vector);
device_remove_file(&mci->dev, &dev_attr_inject_write);
device_remove_file(&mci->dev, &dev_attr_inject_read);
}

View File

@ -105,7 +105,7 @@ static void amd76x_get_error_info(struct mem_ctl_info *mci,
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
pci_read_config_dword(pdev, AMD76X_ECC_MODE_STATUS,
&info->ecc_mode_status);
@ -145,10 +145,10 @@ static int amd76x_process_error_info(struct mem_ctl_info *mci,
if (handle_errors) {
row = (info->ecc_mode_status >> 4) & 0xf;
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
mci->csrows[row].first_page, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
mci->csrows[row]->first_page, 0, 0,
row, 0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
}
}
@ -160,10 +160,10 @@ static int amd76x_process_error_info(struct mem_ctl_info *mci,
if (handle_errors) {
row = info->ecc_mode_status & 0xf;
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
mci->csrows[row].first_page, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
mci->csrows[row]->first_page, 0, 0,
row, 0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
}
}
@ -180,7 +180,7 @@ static int amd76x_process_error_info(struct mem_ctl_info *mci,
static void amd76x_check(struct mem_ctl_info *mci)
{
struct amd76x_error_info info;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
amd76x_get_error_info(mci, &info);
amd76x_process_error_info(mci, &info, 1);
}
@ -194,8 +194,8 @@ static void amd76x_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev,
int index;
for (index = 0; index < mci->nr_csrows; index++) {
csrow = &mci->csrows[index];
dimm = csrow->channels[0].dimm;
csrow = mci->csrows[index];
dimm = csrow->channels[0]->dimm;
/* find the DRAM Chip Select Base address and mask */
pci_read_config_dword(pdev,
@ -241,7 +241,7 @@ static int amd76x_probe1(struct pci_dev *pdev, int dev_idx)
u32 ems_mode;
struct amd76x_error_info discard;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
pci_read_config_dword(pdev, AMD76X_ECC_MODE_STATUS, &ems);
ems_mode = (ems >> 10) & 0x3;
@ -256,8 +256,8 @@ static int amd76x_probe1(struct pci_dev *pdev, int dev_idx)
if (mci == NULL)
return -ENOMEM;
debugf0("%s(): mci = %p\n", __func__, mci);
mci->dev = &pdev->dev;
edac_dbg(0, "mci = %p\n", mci);
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_RDDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_EC | EDAC_FLAG_SECDED;
mci->edac_cap = ems_mode ?
@ -276,7 +276,7 @@ static int amd76x_probe1(struct pci_dev *pdev, int dev_idx)
* type of memory controller. The ID is therefore hardcoded to 0.
*/
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto fail;
}
@ -292,7 +292,7 @@ static int amd76x_probe1(struct pci_dev *pdev, int dev_idx)
}
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
fail:
@ -304,7 +304,7 @@ fail:
static int __devinit amd76x_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* don't need to call pci_enable_device() */
return amd76x_probe1(pdev, ent->driver_data);
@ -322,7 +322,7 @@ static void __devexit amd76x_remove_one(struct pci_dev *pdev)
{
struct mem_ctl_info *mci;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (amd76x_pci)
edac_pci_release_generic_ctl(amd76x_pci);

View File

@ -33,10 +33,10 @@ struct cell_edac_priv
static void cell_edac_count_ce(struct mem_ctl_info *mci, int chan, u64 ar)
{
struct cell_edac_priv *priv = mci->pvt_info;
struct csrow_info *csrow = &mci->csrows[0];
struct csrow_info *csrow = mci->csrows[0];
unsigned long address, pfn, offset, syndrome;
dev_dbg(mci->dev, "ECC CE err on node %d, channel %d, ar = 0x%016llx\n",
dev_dbg(mci->pdev, "ECC CE err on node %d, channel %d, ar = 0x%016llx\n",
priv->node, chan, ar);
/* Address decoding is likely a bit bogus, to dbl check */
@ -48,18 +48,18 @@ static void cell_edac_count_ce(struct mem_ctl_info *mci, int chan, u64 ar)
syndrome = (ar & 0x000000001fe00000ul) >> 21;
/* TODO: Decoding of the error address */
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
csrow->first_page + pfn, offset, syndrome,
0, chan, -1, "", "", NULL);
0, chan, -1, "", "");
}
static void cell_edac_count_ue(struct mem_ctl_info *mci, int chan, u64 ar)
{
struct cell_edac_priv *priv = mci->pvt_info;
struct csrow_info *csrow = &mci->csrows[0];
struct csrow_info *csrow = mci->csrows[0];
unsigned long address, pfn, offset;
dev_dbg(mci->dev, "ECC UE err on node %d, channel %d, ar = 0x%016llx\n",
dev_dbg(mci->pdev, "ECC UE err on node %d, channel %d, ar = 0x%016llx\n",
priv->node, chan, ar);
/* Address decoding is likely a bit bogus, to dbl check */
@ -70,9 +70,9 @@ static void cell_edac_count_ue(struct mem_ctl_info *mci, int chan, u64 ar)
offset = address & ~PAGE_MASK;
/* TODO: Decoding of the error address */
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
csrow->first_page + pfn, offset, 0,
0, chan, -1, "", "", NULL);
0, chan, -1, "", "");
}
static void cell_edac_check(struct mem_ctl_info *mci)
@ -83,7 +83,7 @@ static void cell_edac_check(struct mem_ctl_info *mci)
fir = in_be64(&priv->regs->mic_fir);
#ifdef DEBUG
if (fir != priv->prev_fir) {
dev_dbg(mci->dev, "fir change : 0x%016lx\n", fir);
dev_dbg(mci->pdev, "fir change : 0x%016lx\n", fir);
priv->prev_fir = fir;
}
#endif
@ -119,14 +119,14 @@ static void cell_edac_check(struct mem_ctl_info *mci)
mb(); /* sync up */
#ifdef DEBUG
fir = in_be64(&priv->regs->mic_fir);
dev_dbg(mci->dev, "fir clear : 0x%016lx\n", fir);
dev_dbg(mci->pdev, "fir clear : 0x%016lx\n", fir);
#endif
}
}
static void __devinit cell_edac_init_csrows(struct mem_ctl_info *mci)
{
struct csrow_info *csrow = &mci->csrows[0];
struct csrow_info *csrow = mci->csrows[0];
struct dimm_info *dimm;
struct cell_edac_priv *priv = mci->pvt_info;
struct device_node *np;
@ -150,12 +150,12 @@ static void __devinit cell_edac_init_csrows(struct mem_ctl_info *mci)
csrow->last_page = csrow->first_page + nr_pages - 1;
for (j = 0; j < csrow->nr_channels; j++) {
dimm = csrow->channels[j].dimm;
dimm = csrow->channels[j]->dimm;
dimm->mtype = MEM_XDR;
dimm->edac_mode = EDAC_SECDED;
dimm->nr_pages = nr_pages / csrow->nr_channels;
}
dev_dbg(mci->dev,
dev_dbg(mci->pdev,
"Initialized on node %d, chanmask=0x%x,"
" first_page=0x%lx, nr_pages=0x%x\n",
priv->node, priv->chanmask,
@ -212,7 +212,7 @@ static int __devinit cell_edac_probe(struct platform_device *pdev)
priv->regs = regs;
priv->node = pdev->id;
priv->chanmask = chanmask;
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_XDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_EC | EDAC_FLAG_SECDED;
mci->edac_cap = EDAC_FLAG_EC | EDAC_FLAG_SECDED;

View File

@ -316,13 +316,12 @@ static void get_total_mem(struct cpc925_mc_pdata *pdata)
reg += aw;
size = of_read_number(reg, sw);
reg += sw;
debugf1("%s: start 0x%lx, size 0x%lx\n", __func__,
start, size);
edac_dbg(1, "start 0x%lx, size 0x%lx\n", start, size);
pdata->total_mem += size;
} while (reg < reg_end);
of_node_put(np);
debugf0("%s: total_mem 0x%lx\n", __func__, pdata->total_mem);
edac_dbg(0, "total_mem 0x%lx\n", pdata->total_mem);
}
static void cpc925_init_csrows(struct mem_ctl_info *mci)
@ -330,8 +329,9 @@ static void cpc925_init_csrows(struct mem_ctl_info *mci)
struct cpc925_mc_pdata *pdata = mci->pvt_info;
struct csrow_info *csrow;
struct dimm_info *dimm;
enum dev_type dtype;
int index, j;
u32 mbmr, mbbar, bba;
u32 mbmr, mbbar, bba, grain;
unsigned long row_size, nr_pages, last_nr_pages = 0;
get_total_mem(pdata);
@ -347,7 +347,7 @@ static void cpc925_init_csrows(struct mem_ctl_info *mci)
if (bba == 0)
continue; /* not populated */
csrow = &mci->csrows[index];
csrow = mci->csrows[index];
row_size = bba * (1UL << 28); /* 256M */
csrow->first_page = last_nr_pages;
@ -355,37 +355,36 @@ static void cpc925_init_csrows(struct mem_ctl_info *mci)
csrow->last_page = csrow->first_page + nr_pages - 1;
last_nr_pages = csrow->last_page + 1;
switch (csrow->nr_channels) {
case 1: /* Single channel */
grain = 32; /* four-beat burst of 32 bytes */
break;
case 2: /* Dual channel */
default:
grain = 64; /* four-beat burst of 64 bytes */
break;
}
switch ((mbmr & MBMR_MODE_MASK) >> MBMR_MODE_SHIFT) {
case 6: /* 0110, no way to differentiate X8 VS X16 */
case 5: /* 0101 */
case 8: /* 1000 */
dtype = DEV_X16;
break;
case 7: /* 0111 */
case 9: /* 1001 */
dtype = DEV_X8;
break;
default:
dtype = DEV_UNKNOWN;
break;
}
for (j = 0; j < csrow->nr_channels; j++) {
dimm = csrow->channels[j].dimm;
dimm = csrow->channels[j]->dimm;
dimm->nr_pages = nr_pages / csrow->nr_channels;
dimm->mtype = MEM_RDDR;
dimm->edac_mode = EDAC_SECDED;
switch (csrow->nr_channels) {
case 1: /* Single channel */
dimm->grain = 32; /* four-beat burst of 32 bytes */
break;
case 2: /* Dual channel */
default:
dimm->grain = 64; /* four-beat burst of 64 bytes */
break;
}
switch ((mbmr & MBMR_MODE_MASK) >> MBMR_MODE_SHIFT) {
case 6: /* 0110, no way to differentiate X8 VS X16 */
case 5: /* 0101 */
case 8: /* 1000 */
dimm->dtype = DEV_X16;
break;
case 7: /* 0111 */
case 9: /* 1001 */
dimm->dtype = DEV_X8;
break;
default:
dimm->dtype = DEV_UNKNOWN;
break;
}
dimm->grain = grain;
dimm->dtype = dtype;
}
}
}
@ -463,7 +462,7 @@ static void cpc925_mc_get_pfn(struct mem_ctl_info *mci, u32 mear,
*csrow = rank;
#ifdef CONFIG_EDAC_DEBUG
if (mci->csrows[rank].first_page == 0) {
if (mci->csrows[rank]->first_page == 0) {
cpc925_mc_printk(mci, KERN_ERR, "ECC occurs in a "
"non-populated csrow, broken hardware?\n");
return;
@ -471,7 +470,7 @@ static void cpc925_mc_get_pfn(struct mem_ctl_info *mci, u32 mear,
#endif
/* Revert csrow number */
pa = mci->csrows[rank].first_page << PAGE_SHIFT;
pa = mci->csrows[rank]->first_page << PAGE_SHIFT;
/* Revert column address */
col += bcnt;
@ -512,7 +511,7 @@ static void cpc925_mc_get_pfn(struct mem_ctl_info *mci, u32 mear,
*offset = pa & (PAGE_SIZE - 1);
*pfn = pa >> PAGE_SHIFT;
debugf0("%s: ECC physical address 0x%lx\n", __func__, pa);
edac_dbg(0, "ECC physical address 0x%lx\n", pa);
}
static int cpc925_mc_find_channel(struct mem_ctl_info *mci, u16 syndrome)
@ -555,18 +554,18 @@ static void cpc925_mc_check(struct mem_ctl_info *mci)
if (apiexcp & CECC_EXCP_DETECTED) {
cpc925_mc_printk(mci, KERN_INFO, "DRAM CECC Fault\n");
channel = cpc925_mc_find_channel(mci, syndrome);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
pfn, offset, syndrome,
csrow, channel, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
}
if (apiexcp & UECC_EXCP_DETECTED) {
cpc925_mc_printk(mci, KERN_INFO, "DRAM UECC Fault\n");
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
pfn, offset, 0,
csrow, -1, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
}
cpc925_mc_printk(mci, KERN_INFO, "Dump registers:\n");
@ -852,8 +851,8 @@ static void cpc925_add_edac_devices(void __iomem *vbase)
goto err2;
}
debugf0("%s: Successfully added edac device for %s\n",
__func__, dev_info->ctl_name);
edac_dbg(0, "Successfully added edac device for %s\n",
dev_info->ctl_name);
continue;
@ -884,8 +883,8 @@ static void cpc925_del_edac_devices(void)
if (dev_info->exit)
dev_info->exit(dev_info);
debugf0("%s: Successfully deleted edac device for %s\n",
__func__, dev_info->ctl_name);
edac_dbg(0, "Successfully deleted edac device for %s\n",
dev_info->ctl_name);
}
}
@ -900,7 +899,7 @@ static int cpc925_get_sdram_scrub_rate(struct mem_ctl_info *mci)
mscr = __raw_readl(pdata->vbase + REG_MSCR_OFFSET);
si = (mscr & MSCR_SI_MASK) >> MSCR_SI_SHIFT;
debugf0("%s, Mem Scrub Ctrl Register 0x%x\n", __func__, mscr);
edac_dbg(0, "Mem Scrub Ctrl Register 0x%x\n", mscr);
if (((mscr & MSCR_SCRUB_MOD_MASK) != MSCR_BACKGR_SCRUB) ||
(si == 0)) {
@ -928,8 +927,7 @@ static int cpc925_mc_get_channels(void __iomem *vbase)
((mbcr & MBCR_64BITBUS_MASK) == 0))
dual = 1;
debugf0("%s: %s channel\n", __func__,
(dual > 0) ? "Dual" : "Single");
edac_dbg(0, "%s channel\n", (dual > 0) ? "Dual" : "Single");
return dual;
}
@ -944,7 +942,7 @@ static int __devinit cpc925_probe(struct platform_device *pdev)
struct resource *r;
int res = 0, nr_channels;
debugf0("%s: %s platform device found!\n", __func__, pdev->name);
edac_dbg(0, "%s platform device found!\n", pdev->name);
if (!devres_open_group(&pdev->dev, cpc925_probe, GFP_KERNEL)) {
res = -ENOMEM;
@ -995,7 +993,7 @@ static int __devinit cpc925_probe(struct platform_device *pdev)
pdata->edac_idx = edac_mc_idx++;
pdata->name = pdev->name;
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
platform_set_drvdata(pdev, mci);
mci->dev_name = dev_name(&pdev->dev);
mci->mtype_cap = MEM_FLAG_RDDR | MEM_FLAG_DDR;
@ -1026,7 +1024,7 @@ static int __devinit cpc925_probe(struct platform_device *pdev)
cpc925_add_edac_devices(vbase);
/* get this far and it's successful */
debugf0("%s: success\n", __func__);
edac_dbg(0, "success\n");
res = 0;
goto out;

View File

@ -309,7 +309,7 @@ static unsigned long ctl_page_to_phys(struct mem_ctl_info *mci,
u32 remap;
struct e752x_pvt *pvt = (struct e752x_pvt *)mci->pvt_info;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
if (page < pvt->tolm)
return page;
@ -335,7 +335,7 @@ static void do_process_ce(struct mem_ctl_info *mci, u16 error_one,
int i;
struct e752x_pvt *pvt = (struct e752x_pvt *)mci->pvt_info;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* convert the addr to 4k page */
page = sec1_add >> (PAGE_SHIFT - 4);
@ -371,10 +371,10 @@ static void do_process_ce(struct mem_ctl_info *mci, u16 error_one,
channel = !(error_one & 1);
/* e752x mc reads 34:6 of the DRAM linear address */
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, offset_in_page(sec1_add << 4), sec1_syndrome,
row, channel, -1,
"e752x CE", "", NULL);
"e752x CE", "");
}
static inline void process_ce(struct mem_ctl_info *mci, u16 error_one,
@ -394,7 +394,7 @@ static void do_process_ue(struct mem_ctl_info *mci, u16 error_one,
int row;
struct e752x_pvt *pvt = (struct e752x_pvt *)mci->pvt_info;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
if (error_one & 0x0202) {
error_2b = ded_add;
@ -408,11 +408,11 @@ static void do_process_ue(struct mem_ctl_info *mci, u16 error_one,
edac_mc_find_csrow_by_page(mci, block_page);
/* e752x mc reads 34:6 of the DRAM linear address */
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
block_page,
offset_in_page(error_2b << 4), 0,
row, -1, -1,
"e752x UE from Read", "", NULL);
"e752x UE from Read", "");
}
if (error_one & 0x0404) {
@ -427,11 +427,11 @@ static void do_process_ue(struct mem_ctl_info *mci, u16 error_one,
edac_mc_find_csrow_by_page(mci, block_page);
/* e752x mc reads 34:6 of the DRAM linear address */
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
block_page,
offset_in_page(error_2b << 4), 0,
row, -1, -1,
"e752x UE from Scruber", "", NULL);
"e752x UE from Scruber", "");
}
}
@ -453,10 +453,10 @@ static inline void process_ue_no_info_wr(struct mem_ctl_info *mci,
if (!handle_error)
return;
debugf3("%s()\n", __func__);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0,
edac_dbg(3, "\n");
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0,
-1, -1, -1,
"e752x UE log memory write", "", NULL);
"e752x UE log memory write", "");
}
static void do_process_ded_retry(struct mem_ctl_info *mci, u16 error,
@ -982,7 +982,7 @@ static void e752x_check(struct mem_ctl_info *mci)
{
struct e752x_error_info info;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
e752x_get_error_info(mci, &info);
e752x_process_error_info(mci, &info, 1);
}
@ -1069,6 +1069,7 @@ static void e752x_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev,
u16 ddrcsr)
{
struct csrow_info *csrow;
enum edac_type edac_mode;
unsigned long last_cumul_size;
int index, mem_dev, drc_chan;
int drc_drbg; /* DRB granularity 0=64mb, 1=128mb */
@ -1095,14 +1096,13 @@ static void e752x_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev,
for (last_cumul_size = index = 0; index < mci->nr_csrows; index++) {
/* mem_dev 0=x8, 1=x4 */
mem_dev = (dra >> (index * 4 + 2)) & 0x3;
csrow = &mci->csrows[remap_csrow_index(mci, index)];
csrow = mci->csrows[remap_csrow_index(mci, index)];
mem_dev = (mem_dev == 2);
pci_read_config_byte(pdev, E752X_DRB + index, &value);
/* convert a 128 or 64 MiB DRB to a page size. */
cumul_size = value << (25 + drc_drbg - PAGE_SHIFT);
debugf3("%s(): (%d) cumul_size 0x%x\n", __func__, index,
cumul_size);
edac_dbg(3, "(%d) cumul_size 0x%x\n", index, cumul_size);
if (cumul_size == last_cumul_size)
continue; /* not populated */
@ -1111,29 +1111,29 @@ static void e752x_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev,
nr_pages = cumul_size - last_cumul_size;
last_cumul_size = cumul_size;
/*
* if single channel or x8 devices then SECDED
* if dual channel and x4 then S4ECD4ED
*/
if (drc_ddim) {
if (drc_chan && mem_dev) {
edac_mode = EDAC_S4ECD4ED;
mci->edac_cap |= EDAC_FLAG_S4ECD4ED;
} else {
edac_mode = EDAC_SECDED;
mci->edac_cap |= EDAC_FLAG_SECDED;
}
} else
edac_mode = EDAC_NONE;
for (i = 0; i < csrow->nr_channels; i++) {
struct dimm_info *dimm = csrow->channels[i].dimm;
struct dimm_info *dimm = csrow->channels[i]->dimm;
debugf3("Initializing rank at (%i,%i)\n", index, i);
edac_dbg(3, "Initializing rank at (%i,%i)\n", index, i);
dimm->nr_pages = nr_pages / csrow->nr_channels;
dimm->grain = 1 << 12; /* 4KiB - resolution of CELOG */
dimm->mtype = MEM_RDDR; /* only one type supported */
dimm->dtype = mem_dev ? DEV_X4 : DEV_X8;
/*
* if single channel or x8 devices then SECDED
* if dual channel and x4 then S4ECD4ED
*/
if (drc_ddim) {
if (drc_chan && mem_dev) {
dimm->edac_mode = EDAC_S4ECD4ED;
mci->edac_cap |= EDAC_FLAG_S4ECD4ED;
} else {
dimm->edac_mode = EDAC_SECDED;
mci->edac_cap |= EDAC_FLAG_SECDED;
}
} else
dimm->edac_mode = EDAC_NONE;
dimm->edac_mode = edac_mode;
}
}
}
@ -1269,8 +1269,8 @@ static int e752x_probe1(struct pci_dev *pdev, int dev_idx)
int drc_chan; /* Number of channels 0=1chan,1=2chan */
struct e752x_error_info discard;
debugf0("%s(): mci\n", __func__);
debugf0("Starting Probe1\n");
edac_dbg(0, "mci\n");
edac_dbg(0, "Starting Probe1\n");
/* check to see if device 0 function 1 is enabled; if it isn't, we
* assume the BIOS has reserved it for a reason and is expecting
@ -1300,7 +1300,7 @@ static int e752x_probe1(struct pci_dev *pdev, int dev_idx)
if (mci == NULL)
return -ENOMEM;
debugf3("%s(): init mci\n", __func__);
edac_dbg(3, "init mci\n");
mci->mtype_cap = MEM_FLAG_RDDR;
/* 3100 IMCH supports SECDEC only */
mci->edac_ctl_cap = (dev_idx == I3100) ? EDAC_FLAG_SECDED :
@ -1308,9 +1308,9 @@ static int e752x_probe1(struct pci_dev *pdev, int dev_idx)
/* FIXME - what if different memory types are in different csrows? */
mci->mod_name = EDAC_MOD_STR;
mci->mod_ver = E752X_REVISION;
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
debugf3("%s(): init pvt\n", __func__);
edac_dbg(3, "init pvt\n");
pvt = (struct e752x_pvt *)mci->pvt_info;
pvt->dev_info = &e752x_devs[dev_idx];
pvt->mc_symmetric = ((ddrcsr & 0x10) != 0);
@ -1320,7 +1320,7 @@ static int e752x_probe1(struct pci_dev *pdev, int dev_idx)
return -ENODEV;
}
debugf3("%s(): more mci init\n", __func__);
edac_dbg(3, "more mci init\n");
mci->ctl_name = pvt->dev_info->ctl_name;
mci->dev_name = pci_name(pdev);
mci->edac_check = e752x_check;
@ -1342,7 +1342,7 @@ static int e752x_probe1(struct pci_dev *pdev, int dev_idx)
mci->edac_cap = EDAC_FLAG_SECDED; /* the only mode supported */
else
mci->edac_cap |= EDAC_FLAG_NONE;
debugf3("%s(): tolm, remapbase, remaplimit\n", __func__);
edac_dbg(3, "tolm, remapbase, remaplimit\n");
/* load the top of low memory, remap base, and remap limit vars */
pci_read_config_word(pdev, E752X_TOLM, &pci_data);
@ -1359,7 +1359,7 @@ static int e752x_probe1(struct pci_dev *pdev, int dev_idx)
* type of memory controller. The ID is therefore hardcoded to 0.
*/
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto fail;
}
@ -1377,7 +1377,7 @@ static int e752x_probe1(struct pci_dev *pdev, int dev_idx)
}
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
fail:
@ -1393,7 +1393,7 @@ fail:
static int __devinit e752x_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* wake up and enable device */
if (pci_enable_device(pdev) < 0)
@ -1407,7 +1407,7 @@ static void __devexit e752x_remove_one(struct pci_dev *pdev)
struct mem_ctl_info *mci;
struct e752x_pvt *pvt;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (e752x_pci)
edac_pci_release_generic_ctl(e752x_pci);
@ -1453,7 +1453,7 @@ static int __init e752x_init(void)
{
int pci_rc;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -1464,7 +1464,7 @@ static int __init e752x_init(void)
static void __exit e752x_exit(void)
{
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
pci_unregister_driver(&e752x_driver);
}

View File

@ -166,7 +166,7 @@ static const struct e7xxx_dev_info e7xxx_devs[] = {
/* FIXME - is this valid for both SECDED and S4ECD4ED? */
static inline int e7xxx_find_channel(u16 syndrome)
{
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
if ((syndrome & 0xff00) == 0)
return 0;
@ -186,7 +186,7 @@ static unsigned long ctl_page_to_phys(struct mem_ctl_info *mci,
u32 remap;
struct e7xxx_pvt *pvt = (struct e7xxx_pvt *)mci->pvt_info;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
if ((page < pvt->tolm) ||
((page >= 0x100000) && (page < pvt->remapbase)))
@ -208,7 +208,7 @@ static void process_ce(struct mem_ctl_info *mci, struct e7xxx_error_info *info)
int row;
int channel;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* read the error address */
error_1b = info->dram_celog_add;
/* FIXME - should use PAGE_SHIFT */
@ -219,15 +219,15 @@ static void process_ce(struct mem_ctl_info *mci, struct e7xxx_error_info *info)
row = edac_mc_find_csrow_by_page(mci, page);
/* convert syndrome to channel */
channel = e7xxx_find_channel(syndrome);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, page, 0, syndrome,
row, channel, -1, "e7xxx CE", "", NULL);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, page, 0, syndrome,
row, channel, -1, "e7xxx CE", "");
}
static void process_ce_no_info(struct mem_ctl_info *mci)
{
debugf3("%s()\n", __func__);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0, -1, -1, -1,
"e7xxx CE log register overflow", "", NULL);
edac_dbg(3, "\n");
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0, -1, -1, -1,
"e7xxx CE log register overflow", "");
}
static void process_ue(struct mem_ctl_info *mci, struct e7xxx_error_info *info)
@ -235,23 +235,23 @@ static void process_ue(struct mem_ctl_info *mci, struct e7xxx_error_info *info)
u32 error_2b, block_page;
int row;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* read the error address */
error_2b = info->dram_uelog_add;
/* FIXME - should use PAGE_SHIFT */
block_page = error_2b >> 6; /* convert to 4k address */
row = edac_mc_find_csrow_by_page(mci, block_page);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, block_page, 0, 0,
row, -1, -1, "e7xxx UE", "", NULL);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, block_page, 0, 0,
row, -1, -1, "e7xxx UE", "");
}
static void process_ue_no_info(struct mem_ctl_info *mci)
{
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0, -1, -1, -1,
"e7xxx UE log register overflow", "", NULL);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0, -1, -1, -1,
"e7xxx UE log register overflow", "");
}
static void e7xxx_get_error_info(struct mem_ctl_info *mci,
@ -334,7 +334,7 @@ static void e7xxx_check(struct mem_ctl_info *mci)
{
struct e7xxx_error_info info;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
e7xxx_get_error_info(mci, &info);
e7xxx_process_error_info(mci, &info, 1);
}
@ -362,6 +362,7 @@ static void e7xxx_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev,
int drc_chan, drc_drbg, drc_ddim, mem_dev;
struct csrow_info *csrow;
struct dimm_info *dimm;
enum edac_type edac_mode;
pci_read_config_dword(pdev, E7XXX_DRA, &dra);
drc_chan = dual_channel_active(drc, dev_idx);
@ -377,13 +378,12 @@ static void e7xxx_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev,
for (index = 0; index < mci->nr_csrows; index++) {
/* mem_dev 0=x8, 1=x4 */
mem_dev = (dra >> (index * 4 + 3)) & 0x1;
csrow = &mci->csrows[index];
csrow = mci->csrows[index];
pci_read_config_byte(pdev, E7XXX_DRB + index, &value);
/* convert a 64 or 32 MiB DRB to a page size. */
cumul_size = value << (25 + drc_drbg - PAGE_SHIFT);
debugf3("%s(): (%d) cumul_size 0x%x\n", __func__, index,
cumul_size);
edac_dbg(3, "(%d) cumul_size 0x%x\n", index, cumul_size);
if (cumul_size == last_cumul_size)
continue; /* not populated */
@ -392,28 +392,29 @@ static void e7xxx_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev,
nr_pages = cumul_size - last_cumul_size;
last_cumul_size = cumul_size;
/*
* if single channel or x8 devices then SECDED
* if dual channel and x4 then S4ECD4ED
*/
if (drc_ddim) {
if (drc_chan && mem_dev) {
edac_mode = EDAC_S4ECD4ED;
mci->edac_cap |= EDAC_FLAG_S4ECD4ED;
} else {
edac_mode = EDAC_SECDED;
mci->edac_cap |= EDAC_FLAG_SECDED;
}
} else
edac_mode = EDAC_NONE;
for (j = 0; j < drc_chan + 1; j++) {
dimm = csrow->channels[j].dimm;
dimm = csrow->channels[j]->dimm;
dimm->nr_pages = nr_pages / (drc_chan + 1);
dimm->grain = 1 << 12; /* 4KiB - resolution of CELOG */
dimm->mtype = MEM_RDDR; /* only one type supported */
dimm->dtype = mem_dev ? DEV_X4 : DEV_X8;
/*
* if single channel or x8 devices then SECDED
* if dual channel and x4 then S4ECD4ED
*/
if (drc_ddim) {
if (drc_chan && mem_dev) {
dimm->edac_mode = EDAC_S4ECD4ED;
mci->edac_cap |= EDAC_FLAG_S4ECD4ED;
} else {
dimm->edac_mode = EDAC_SECDED;
mci->edac_cap |= EDAC_FLAG_SECDED;
}
} else
dimm->edac_mode = EDAC_NONE;
dimm->edac_mode = edac_mode;
}
}
}
@ -428,7 +429,7 @@ static int e7xxx_probe1(struct pci_dev *pdev, int dev_idx)
int drc_chan;
struct e7xxx_error_info discard;
debugf0("%s(): mci\n", __func__);
edac_dbg(0, "mci\n");
pci_read_config_dword(pdev, E7XXX_DRC, &drc);
@ -451,15 +452,15 @@ static int e7xxx_probe1(struct pci_dev *pdev, int dev_idx)
if (mci == NULL)
return -ENOMEM;
debugf3("%s(): init mci\n", __func__);
edac_dbg(3, "init mci\n");
mci->mtype_cap = MEM_FLAG_RDDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED |
EDAC_FLAG_S4ECD4ED;
/* FIXME - what if different memory types are in different csrows? */
mci->mod_name = EDAC_MOD_STR;
mci->mod_ver = E7XXX_REVISION;
mci->dev = &pdev->dev;
debugf3("%s(): init pvt\n", __func__);
mci->pdev = &pdev->dev;
edac_dbg(3, "init pvt\n");
pvt = (struct e7xxx_pvt *)mci->pvt_info;
pvt->dev_info = &e7xxx_devs[dev_idx];
pvt->bridge_ck = pci_get_device(PCI_VENDOR_ID_INTEL,
@ -472,14 +473,14 @@ static int e7xxx_probe1(struct pci_dev *pdev, int dev_idx)
goto fail0;
}
debugf3("%s(): more mci init\n", __func__);
edac_dbg(3, "more mci init\n");
mci->ctl_name = pvt->dev_info->ctl_name;
mci->dev_name = pci_name(pdev);
mci->edac_check = e7xxx_check;
mci->ctl_page_to_phys = ctl_page_to_phys;
e7xxx_init_csrows(mci, pdev, dev_idx, drc);
mci->edac_cap |= EDAC_FLAG_NONE;
debugf3("%s(): tolm, remapbase, remaplimit\n", __func__);
edac_dbg(3, "tolm, remapbase, remaplimit\n");
/* load the top of low memory, remap base, and remap limit vars */
pci_read_config_word(pdev, E7XXX_TOLM, &pci_data);
pvt->tolm = ((u32) pci_data) << 4;
@ -498,7 +499,7 @@ static int e7xxx_probe1(struct pci_dev *pdev, int dev_idx)
* type of memory controller. The ID is therefore hardcoded to 0.
*/
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto fail1;
}
@ -514,7 +515,7 @@ static int e7xxx_probe1(struct pci_dev *pdev, int dev_idx)
}
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
fail1:
@ -530,7 +531,7 @@ fail0:
static int __devinit e7xxx_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* wake up and enable device */
return pci_enable_device(pdev) ?
@ -542,7 +543,7 @@ static void __devexit e7xxx_remove_one(struct pci_dev *pdev)
struct mem_ctl_info *mci;
struct e7xxx_pvt *pvt;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (e7xxx_pci)
edac_pci_release_generic_ctl(e7xxx_pci);

View File

@ -71,26 +71,21 @@ extern const char *edac_mem_types[];
#ifdef CONFIG_EDAC_DEBUG
extern int edac_debug_level;
#define edac_debug_printk(level, fmt, arg...) \
do { \
if (level <= edac_debug_level) \
edac_printk(KERN_DEBUG, EDAC_DEBUG, \
"%s: " fmt, __func__, ##arg); \
} while (0)
#define debugf0( ... ) edac_debug_printk(0, __VA_ARGS__ )
#define debugf1( ... ) edac_debug_printk(1, __VA_ARGS__ )
#define debugf2( ... ) edac_debug_printk(2, __VA_ARGS__ )
#define debugf3( ... ) edac_debug_printk(3, __VA_ARGS__ )
#define debugf4( ... ) edac_debug_printk(4, __VA_ARGS__ )
#define edac_dbg(level, fmt, ...) \
do { \
if (level <= edac_debug_level) \
edac_printk(KERN_DEBUG, EDAC_DEBUG, \
"%s: " fmt, __func__, ##__VA_ARGS__); \
} while (0)
#else /* !CONFIG_EDAC_DEBUG */
#define debugf0( ... )
#define debugf1( ... )
#define debugf2( ... )
#define debugf3( ... )
#define debugf4( ... )
#define edac_dbg(level, fmt, ...) \
do { \
if (0) \
edac_printk(KERN_DEBUG, EDAC_DEBUG, \
"%s: " fmt, __func__, ##__VA_ARGS__); \
} while (0)
#endif /* !CONFIG_EDAC_DEBUG */
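The edac_dbg() macro above prepends "__func__: " itself, which is what lets every driver call site in this merge drop the explicit __func__ argument. An illustrative sketch of the conversion pattern (not part of the diff itself):

	/* before: the caller passed its own function name */
	debugf0("%s(): mci\n", __func__);

	/* after: edac_dbg() adds the "function: " prefix automatically */
	edac_dbg(0, "mci\n");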
@ -460,15 +455,15 @@ extern int edac_mc_find_csrow_by_page(struct mem_ctl_info *mci,
unsigned long page);
void edac_mc_handle_error(const enum hw_event_mc_err_type type,
struct mem_ctl_info *mci,
const u16 error_count,
const unsigned long page_frame_number,
const unsigned long offset_in_page,
const unsigned long syndrome,
const int layer0,
const int layer1,
const int layer2,
const int top_layer,
const int mid_layer,
const int low_layer,
const char *msg,
const char *other_detail,
const void *mcelog);
const char *other_detail);
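An illustrative before/after of a call site adapting to this prototype (mirroring the driver hunks elsewhere in this merge; the local variable names are only examples):

	/* before: no count argument, trailing mcelog pointer */
	edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
			     page, offset, syndrome,
			     row, channel, -1, "CE", "", NULL);

	/* after: an explicit error count follows mci, the position arguments
	 * are named top/mid/low_layer, and the unused mcelog pointer is gone */
	edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
			     page, offset, syndrome,
			     row, channel, -1, "CE", "");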
/*
* edac_device APIs

View File

@ -40,12 +40,13 @@ static LIST_HEAD(edac_device_list);
#ifdef CONFIG_EDAC_DEBUG
static void edac_device_dump_device(struct edac_device_ctl_info *edac_dev)
{
debugf3("\tedac_dev = %p dev_idx=%d \n", edac_dev, edac_dev->dev_idx);
debugf4("\tedac_dev->edac_check = %p\n", edac_dev->edac_check);
debugf3("\tdev = %p\n", edac_dev->dev);
debugf3("\tmod_name:ctl_name = %s:%s\n",
edac_dev->mod_name, edac_dev->ctl_name);
debugf3("\tpvt_info = %p\n\n", edac_dev->pvt_info);
edac_dbg(3, "\tedac_dev = %p dev_idx=%d\n",
edac_dev, edac_dev->dev_idx);
edac_dbg(4, "\tedac_dev->edac_check = %p\n", edac_dev->edac_check);
edac_dbg(3, "\tdev = %p\n", edac_dev->dev);
edac_dbg(3, "\tmod_name:ctl_name = %s:%s\n",
edac_dev->mod_name, edac_dev->ctl_name);
edac_dbg(3, "\tpvt_info = %p\n\n", edac_dev->pvt_info);
}
#endif /* CONFIG_EDAC_DEBUG */
@ -82,8 +83,7 @@ struct edac_device_ctl_info *edac_device_alloc_ctl_info(
void *pvt, *p;
int err;
debugf4("%s() instances=%d blocks=%d\n",
__func__, nr_instances, nr_blocks);
edac_dbg(4, "instances=%d blocks=%d\n", nr_instances, nr_blocks);
/* Calculate the size of memory we need to allocate AND
* determine the offsets of the various item arrays
@ -156,8 +156,8 @@ struct edac_device_ctl_info *edac_device_alloc_ctl_info(
/* Name of this edac device */
snprintf(dev_ctl->name,sizeof(dev_ctl->name),"%s",edac_device_name);
debugf4("%s() edac_dev=%p next after end=%p\n",
__func__, dev_ctl, pvt + sz_private );
edac_dbg(4, "edac_dev=%p next after end=%p\n",
dev_ctl, pvt + sz_private);
/* Initialize every Instance */
for (instance = 0; instance < nr_instances; instance++) {
@ -178,10 +178,8 @@ struct edac_device_ctl_info *edac_device_alloc_ctl_info(
snprintf(blk->name, sizeof(blk->name),
"%s%d", edac_block_name, block+offset_value);
debugf4("%s() instance=%d inst_p=%p block=#%d "
"block_p=%p name='%s'\n",
__func__, instance, inst, block,
blk, blk->name);
edac_dbg(4, "instance=%d inst_p=%p block=#%d block_p=%p name='%s'\n",
instance, inst, block, blk, blk->name);
/* if there are NO attributes OR no attribute pointer
* then continue on to next block iteration
@ -194,8 +192,8 @@ struct edac_device_ctl_info *edac_device_alloc_ctl_info(
attrib_p = &dev_attrib[block*nr_instances*nr_attrib];
blk->block_attributes = attrib_p;
debugf4("%s() THIS BLOCK_ATTRIB=%p\n",
__func__, blk->block_attributes);
edac_dbg(4, "THIS BLOCK_ATTRIB=%p\n",
blk->block_attributes);
/* Initialize every user specified attribute in this
* block with the data the caller passed in
@ -214,11 +212,10 @@ struct edac_device_ctl_info *edac_device_alloc_ctl_info(
attrib->block = blk; /* up link */
debugf4("%s() alloc-attrib=%p attrib_name='%s' "
"attrib-spec=%p spec-name=%s\n",
__func__, attrib, attrib->attr.name,
&attrib_spec[attr],
attrib_spec[attr].attr.name
edac_dbg(4, "alloc-attrib=%p attrib_name='%s' attrib-spec=%p spec-name=%s\n",
attrib, attrib->attr.name,
&attrib_spec[attr],
attrib_spec[attr].attr.name
);
}
}
@ -273,7 +270,7 @@ static struct edac_device_ctl_info *find_edac_device_by_dev(struct device *dev)
struct edac_device_ctl_info *edac_dev;
struct list_head *item;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
list_for_each(item, &edac_device_list) {
edac_dev = list_entry(item, struct edac_device_ctl_info, link);
@ -408,7 +405,7 @@ static void edac_device_workq_function(struct work_struct *work_req)
void edac_device_workq_setup(struct edac_device_ctl_info *edac_dev,
unsigned msec)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* take the arg 'msec' and set it into the control structure
* to used in the time period calculation
@ -496,7 +493,7 @@ EXPORT_SYMBOL_GPL(edac_device_alloc_index);
*/
int edac_device_add_device(struct edac_device_ctl_info *edac_dev)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
#ifdef CONFIG_EDAC_DEBUG
if (edac_debug_level >= 3)
@ -570,7 +567,7 @@ struct edac_device_ctl_info *edac_device_del_device(struct device *dev)
{
struct edac_device_ctl_info *edac_dev;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
mutex_lock(&device_ctls_mutex);

View File

@ -202,7 +202,7 @@ static void edac_device_ctrl_master_release(struct kobject *kobj)
{
struct edac_device_ctl_info *edac_dev = to_edacdev(kobj);
debugf4("%s() control index=%d\n", __func__, edac_dev->dev_idx);
edac_dbg(4, "control index=%d\n", edac_dev->dev_idx);
/* decrement the EDAC CORE module ref count */
module_put(edac_dev->owner);
@ -233,12 +233,12 @@ int edac_device_register_sysfs_main_kobj(struct edac_device_ctl_info *edac_dev)
struct bus_type *edac_subsys;
int err;
debugf1("%s()\n", __func__);
edac_dbg(1, "\n");
/* get the /sys/devices/system/edac reference */
edac_subsys = edac_get_sysfs_subsys();
if (edac_subsys == NULL) {
debugf1("%s() no edac_subsys error\n", __func__);
edac_dbg(1, "no edac_subsys error\n");
err = -ENODEV;
goto err_out;
}
@ -264,8 +264,8 @@ int edac_device_register_sysfs_main_kobj(struct edac_device_ctl_info *edac_dev)
&edac_subsys->dev_root->kobj,
"%s", edac_dev->name);
if (err) {
debugf1("%s()Failed to register '.../edac/%s'\n",
__func__, edac_dev->name);
edac_dbg(1, "Failed to register '.../edac/%s'\n",
edac_dev->name);
goto err_kobj_reg;
}
kobject_uevent(&edac_dev->kobj, KOBJ_ADD);
@ -274,8 +274,7 @@ int edac_device_register_sysfs_main_kobj(struct edac_device_ctl_info *edac_dev)
* edac_device_unregister_sysfs_main_kobj() must be used
*/
debugf4("%s() Registered '.../edac/%s' kobject\n",
__func__, edac_dev->name);
edac_dbg(4, "Registered '.../edac/%s' kobject\n", edac_dev->name);
return 0;
@ -296,9 +295,8 @@ err_out:
*/
void edac_device_unregister_sysfs_main_kobj(struct edac_device_ctl_info *dev)
{
debugf0("%s()\n", __func__);
debugf4("%s() name of kobject is: %s\n",
__func__, kobject_name(&dev->kobj));
edac_dbg(0, "\n");
edac_dbg(4, "name of kobject is: %s\n", kobject_name(&dev->kobj));
/*
* Unregister the edac device's kobject and
@ -336,7 +334,7 @@ static void edac_device_ctrl_instance_release(struct kobject *kobj)
{
struct edac_device_instance *instance;
debugf1("%s()\n", __func__);
edac_dbg(1, "\n");
/* map from this kobj to the main control struct
* and then dec the main kobj count
@ -442,7 +440,7 @@ static void edac_device_ctrl_block_release(struct kobject *kobj)
{
struct edac_device_block *block;
debugf1("%s()\n", __func__);
edac_dbg(1, "\n");
/* get the container of the kobj */
block = to_block(kobj);
@ -524,10 +522,10 @@ static int edac_device_create_block(struct edac_device_ctl_info *edac_dev,
struct edac_dev_sysfs_block_attribute *sysfs_attrib;
struct kobject *main_kobj;
debugf4("%s() Instance '%s' inst_p=%p block '%s' block_p=%p\n",
__func__, instance->name, instance, block->name, block);
debugf4("%s() block kobj=%p block kobj->parent=%p\n",
__func__, &block->kobj, &block->kobj.parent);
edac_dbg(4, "Instance '%s' inst_p=%p block '%s' block_p=%p\n",
instance->name, instance, block->name, block);
edac_dbg(4, "block kobj=%p block kobj->parent=%p\n",
&block->kobj, &block->kobj.parent);
/* init this block's kobject */
memset(&block->kobj, 0, sizeof(struct kobject));
@ -546,8 +544,7 @@ static int edac_device_create_block(struct edac_device_ctl_info *edac_dev,
&instance->kobj,
"%s", block->name);
if (err) {
debugf1("%s() Failed to register instance '%s'\n",
__func__, block->name);
edac_dbg(1, "Failed to register instance '%s'\n", block->name);
kobject_put(main_kobj);
err = -ENODEV;
goto err_out;
@ -560,11 +557,9 @@ static int edac_device_create_block(struct edac_device_ctl_info *edac_dev,
if (sysfs_attrib && block->nr_attribs) {
for (i = 0; i < block->nr_attribs; i++, sysfs_attrib++) {
debugf4("%s() creating block attrib='%s' "
"attrib->%p to kobj=%p\n",
__func__,
sysfs_attrib->attr.name,
sysfs_attrib, &block->kobj);
edac_dbg(4, "creating block attrib='%s' attrib->%p to kobj=%p\n",
sysfs_attrib->attr.name,
sysfs_attrib, &block->kobj);
/* Create each block_attribute file */
err = sysfs_create_file(&block->kobj,
@ -647,14 +642,14 @@ static int edac_device_create_instance(struct edac_device_ctl_info *edac_dev,
err = kobject_init_and_add(&instance->kobj, &ktype_instance_ctrl,
&edac_dev->kobj, "%s", instance->name);
if (err != 0) {
debugf2("%s() Failed to register instance '%s'\n",
__func__, instance->name);
edac_dbg(2, "Failed to register instance '%s'\n",
instance->name);
kobject_put(main_kobj);
goto err_out;
}
debugf4("%s() now register '%d' blocks for instance %d\n",
__func__, instance->nr_blocks, idx);
edac_dbg(4, "now register '%d' blocks for instance %d\n",
instance->nr_blocks, idx);
/* register all blocks of this instance */
for (i = 0; i < instance->nr_blocks; i++) {
@ -670,8 +665,8 @@ static int edac_device_create_instance(struct edac_device_ctl_info *edac_dev,
}
kobject_uevent(&instance->kobj, KOBJ_ADD);
debugf4("%s() Registered instance %d '%s' kobject\n",
__func__, idx, instance->name);
edac_dbg(4, "Registered instance %d '%s' kobject\n",
idx, instance->name);
return 0;
@ -715,7 +710,7 @@ static int edac_device_create_instances(struct edac_device_ctl_info *edac_dev)
int i, j;
int err;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* iterate over creation of the instances */
for (i = 0; i < edac_dev->nr_instances; i++) {
@ -817,12 +812,12 @@ int edac_device_create_sysfs(struct edac_device_ctl_info *edac_dev)
int err;
struct kobject *edac_kobj = &edac_dev->kobj;
debugf0("%s() idx=%d\n", __func__, edac_dev->dev_idx);
edac_dbg(0, "idx=%d\n", edac_dev->dev_idx);
/* go create any main attributes callers wants */
err = edac_device_add_main_sysfs_attributes(edac_dev);
if (err) {
debugf0("%s() failed to add sysfs attribs\n", __func__);
edac_dbg(0, "failed to add sysfs attribs\n");
goto err_out;
}
@ -832,8 +827,7 @@ int edac_device_create_sysfs(struct edac_device_ctl_info *edac_dev)
err = sysfs_create_link(edac_kobj,
&edac_dev->dev->kobj, EDAC_DEVICE_SYMLINK);
if (err) {
debugf0("%s() sysfs_create_link() returned err= %d\n",
__func__, err);
edac_dbg(0, "sysfs_create_link() returned err= %d\n", err);
goto err_remove_main_attribs;
}
@ -843,14 +837,13 @@ int edac_device_create_sysfs(struct edac_device_ctl_info *edac_dev)
*/
err = edac_device_create_instances(edac_dev);
if (err) {
debugf0("%s() edac_device_create_instances() "
"returned err= %d\n", __func__, err);
edac_dbg(0, "edac_device_create_instances() returned err= %d\n",
err);
goto err_remove_link;
}
debugf4("%s() create-instances done, idx=%d\n",
__func__, edac_dev->dev_idx);
edac_dbg(4, "create-instances done, idx=%d\n", edac_dev->dev_idx);
return 0;
@ -873,7 +866,7 @@ err_out:
*/
void edac_device_remove_sysfs(struct edac_device_ctl_info *edac_dev)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* remove any main attributes for this device */
edac_device_remove_main_sysfs_attributes(edac_dev);

View File

@ -27,70 +27,95 @@
#include <linux/list.h>
#include <linux/ctype.h>
#include <linux/edac.h>
#include <linux/bitops.h>
#include <asm/uaccess.h>
#include <asm/page.h>
#include <asm/edac.h>
#include "edac_core.h"
#include "edac_module.h"
#define CREATE_TRACE_POINTS
#define TRACE_INCLUDE_PATH ../../include/ras
#include <ras/ras_event.h>
/* lock to memory controller's control array */
static DEFINE_MUTEX(mem_ctls_mutex);
static LIST_HEAD(mc_devices);
unsigned edac_dimm_info_location(struct dimm_info *dimm, char *buf,
unsigned len)
{
struct mem_ctl_info *mci = dimm->mci;
int i, n, count = 0;
char *p = buf;
for (i = 0; i < mci->n_layers; i++) {
n = snprintf(p, len, "%s %d ",
edac_layer_name[mci->layers[i].type],
dimm->location[i]);
p += n;
len -= n;
count += n;
if (!len)
break;
}
return count;
}
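The helper above writes space-separated "<layer name> <index>" pairs into the caller's buffer and returns the number of bytes written. A hypothetical usage sketch (the layer names depend on how the driver set up its edac_mc_layer array; the values here are made up):

	char location[80];

	edac_dimm_info_location(dimm, location, sizeof(location));
	/* e.g. a DIMM at channel 1, slot 2 yields "channel 1 slot 2 " */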
#ifdef CONFIG_EDAC_DEBUG
static void edac_mc_dump_channel(struct rank_info *chan)
{
debugf4("\tchannel = %p\n", chan);
debugf4("\tchannel->chan_idx = %d\n", chan->chan_idx);
debugf4("\tchannel->csrow = %p\n\n", chan->csrow);
debugf4("\tchannel->dimm = %p\n", chan->dimm);
edac_dbg(4, " channel->chan_idx = %d\n", chan->chan_idx);
edac_dbg(4, " channel = %p\n", chan);
edac_dbg(4, " channel->csrow = %p\n", chan->csrow);
edac_dbg(4, " channel->dimm = %p\n", chan->dimm);
}
static void edac_mc_dump_dimm(struct dimm_info *dimm)
static void edac_mc_dump_dimm(struct dimm_info *dimm, int number)
{
int i;
char location[80];
debugf4("\tdimm = %p\n", dimm);
debugf4("\tdimm->label = '%s'\n", dimm->label);
debugf4("\tdimm->nr_pages = 0x%x\n", dimm->nr_pages);
debugf4("\tdimm location ");
for (i = 0; i < dimm->mci->n_layers; i++) {
printk(KERN_CONT "%d", dimm->location[i]);
if (i < dimm->mci->n_layers - 1)
printk(KERN_CONT ".");
}
printk(KERN_CONT "\n");
debugf4("\tdimm->grain = %d\n", dimm->grain);
debugf4("\tdimm->nr_pages = 0x%x\n", dimm->nr_pages);
edac_dimm_info_location(dimm, location, sizeof(location));
edac_dbg(4, "%s%i: %smapped as virtual row %d, chan %d\n",
dimm->mci->mem_is_per_rank ? "rank" : "dimm",
number, location, dimm->csrow, dimm->cschannel);
edac_dbg(4, " dimm = %p\n", dimm);
edac_dbg(4, " dimm->label = '%s'\n", dimm->label);
edac_dbg(4, " dimm->nr_pages = 0x%x\n", dimm->nr_pages);
edac_dbg(4, " dimm->grain = %d\n", dimm->grain);
edac_dbg(4, " dimm->nr_pages = 0x%x\n", dimm->nr_pages);
}
static void edac_mc_dump_csrow(struct csrow_info *csrow)
{
debugf4("\tcsrow = %p\n", csrow);
debugf4("\tcsrow->csrow_idx = %d\n", csrow->csrow_idx);
debugf4("\tcsrow->first_page = 0x%lx\n", csrow->first_page);
debugf4("\tcsrow->last_page = 0x%lx\n", csrow->last_page);
debugf4("\tcsrow->page_mask = 0x%lx\n", csrow->page_mask);
debugf4("\tcsrow->nr_channels = %d\n", csrow->nr_channels);
debugf4("\tcsrow->channels = %p\n", csrow->channels);
debugf4("\tcsrow->mci = %p\n\n", csrow->mci);
edac_dbg(4, "csrow->csrow_idx = %d\n", csrow->csrow_idx);
edac_dbg(4, " csrow = %p\n", csrow);
edac_dbg(4, " csrow->first_page = 0x%lx\n", csrow->first_page);
edac_dbg(4, " csrow->last_page = 0x%lx\n", csrow->last_page);
edac_dbg(4, " csrow->page_mask = 0x%lx\n", csrow->page_mask);
edac_dbg(4, " csrow->nr_channels = %d\n", csrow->nr_channels);
edac_dbg(4, " csrow->channels = %p\n", csrow->channels);
edac_dbg(4, " csrow->mci = %p\n", csrow->mci);
}
static void edac_mc_dump_mci(struct mem_ctl_info *mci)
{
debugf3("\tmci = %p\n", mci);
debugf3("\tmci->mtype_cap = %lx\n", mci->mtype_cap);
debugf3("\tmci->edac_ctl_cap = %lx\n", mci->edac_ctl_cap);
debugf3("\tmci->edac_cap = %lx\n", mci->edac_cap);
debugf4("\tmci->edac_check = %p\n", mci->edac_check);
debugf3("\tmci->nr_csrows = %d, csrows = %p\n",
mci->nr_csrows, mci->csrows);
debugf3("\tmci->nr_dimms = %d, dimms = %p\n",
mci->tot_dimms, mci->dimms);
debugf3("\tdev = %p\n", mci->dev);
debugf3("\tmod_name:ctl_name = %s:%s\n", mci->mod_name, mci->ctl_name);
debugf3("\tpvt_info = %p\n\n", mci->pvt_info);
edac_dbg(3, "\tmci = %p\n", mci);
edac_dbg(3, "\tmci->mtype_cap = %lx\n", mci->mtype_cap);
edac_dbg(3, "\tmci->edac_ctl_cap = %lx\n", mci->edac_ctl_cap);
edac_dbg(3, "\tmci->edac_cap = %lx\n", mci->edac_cap);
edac_dbg(4, "\tmci->edac_check = %p\n", mci->edac_check);
edac_dbg(3, "\tmci->nr_csrows = %d, csrows = %p\n",
mci->nr_csrows, mci->csrows);
edac_dbg(3, "\tmci->nr_dimms = %d, dimms = %p\n",
mci->tot_dimms, mci->dimms);
edac_dbg(3, "\tdev = %p\n", mci->pdev);
edac_dbg(3, "\tmod_name:ctl_name = %s:%s\n",
mci->mod_name, mci->ctl_name);
edac_dbg(3, "\tpvt_info = %p\n\n", mci->pvt_info);
}
#endif /* CONFIG_EDAC_DEBUG */
@ -205,15 +230,15 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
{
struct mem_ctl_info *mci;
struct edac_mc_layer *layer;
struct csrow_info *csi, *csr;
struct rank_info *chi, *chp, *chan;
struct csrow_info *csr;
struct rank_info *chan;
struct dimm_info *dimm;
u32 *ce_per_layer[EDAC_MAX_LAYERS], *ue_per_layer[EDAC_MAX_LAYERS];
unsigned pos[EDAC_MAX_LAYERS];
unsigned size, tot_dimms = 1, count = 1;
unsigned tot_csrows = 1, tot_channels = 1, tot_errcount = 0;
void *pvt, *p, *ptr = NULL;
int i, j, err, row, chn, n, len;
int i, j, row, chn, n, len, off;
bool per_rank = false;
BUG_ON(n_layers > EDAC_MAX_LAYERS || n_layers == 0);
@ -239,26 +264,24 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
*/
mci = edac_align_ptr(&ptr, sizeof(*mci), 1);
layer = edac_align_ptr(&ptr, sizeof(*layer), n_layers);
csi = edac_align_ptr(&ptr, sizeof(*csi), tot_csrows);
chi = edac_align_ptr(&ptr, sizeof(*chi), tot_csrows * tot_channels);
dimm = edac_align_ptr(&ptr, sizeof(*dimm), tot_dimms);
for (i = 0; i < n_layers; i++) {
count *= layers[i].size;
debugf4("%s: errcount layer %d size %d\n", __func__, i, count);
edac_dbg(4, "errcount layer %d size %d\n", i, count);
ce_per_layer[i] = edac_align_ptr(&ptr, sizeof(u32), count);
ue_per_layer[i] = edac_align_ptr(&ptr, sizeof(u32), count);
tot_errcount += 2 * count;
}
debugf4("%s: allocating %d error counters\n", __func__, tot_errcount);
edac_dbg(4, "allocating %d error counters\n", tot_errcount);
pvt = edac_align_ptr(&ptr, sz_pvt, 1);
size = ((unsigned long)pvt) + sz_pvt;
debugf1("%s(): allocating %u bytes for mci data (%d %s, %d csrows/channels)\n",
__func__, size,
tot_dimms,
per_rank ? "ranks" : "dimms",
tot_csrows * tot_channels);
edac_dbg(1, "allocating %u bytes for mci data (%d %s, %d csrows/channels)\n",
size,
tot_dimms,
per_rank ? "ranks" : "dimms",
tot_csrows * tot_channels);
mci = kzalloc(size, GFP_KERNEL);
if (mci == NULL)
return NULL;
@ -267,9 +290,6 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
* rather than an imaginary chunk of memory located at address 0.
*/
layer = (struct edac_mc_layer *)(((char *)mci) + ((unsigned long)layer));
csi = (struct csrow_info *)(((char *)mci) + ((unsigned long)csi));
chi = (struct rank_info *)(((char *)mci) + ((unsigned long)chi));
dimm = (struct dimm_info *)(((char *)mci) + ((unsigned long)dimm));
for (i = 0; i < n_layers; i++) {
mci->ce_per_layer[i] = (u32 *)((char *)mci + ((unsigned long)ce_per_layer[i]));
mci->ue_per_layer[i] = (u32 *)((char *)mci + ((unsigned long)ue_per_layer[i]));
@ -278,8 +298,6 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
/* setup index and various internal pointers */
mci->mc_idx = mc_num;
mci->csrows = csi;
mci->dimms = dimm;
mci->tot_dimms = tot_dimms;
mci->pvt_info = pvt;
mci->n_layers = n_layers;
@ -290,40 +308,57 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
mci->mem_is_per_rank = per_rank;
/*
* Fill the csrow struct
* Allocate and fill the csrow/channels structs
*/
mci->csrows = kcalloc(tot_csrows, sizeof(*mci->csrows), GFP_KERNEL);
if (!mci->csrows)
goto error;
for (row = 0; row < tot_csrows; row++) {
csr = &csi[row];
csr = kzalloc(sizeof(**mci->csrows), GFP_KERNEL);
if (!csr)
goto error;
mci->csrows[row] = csr;
csr->csrow_idx = row;
csr->mci = mci;
csr->nr_channels = tot_channels;
chp = &chi[row * tot_channels];
csr->channels = chp;
csr->channels = kcalloc(tot_channels, sizeof(*csr->channels),
GFP_KERNEL);
if (!csr->channels)
goto error;
for (chn = 0; chn < tot_channels; chn++) {
chan = &chp[chn];
chan = kzalloc(sizeof(**csr->channels), GFP_KERNEL);
if (!chan)
goto error;
csr->channels[chn] = chan;
chan->chan_idx = chn;
chan->csrow = csr;
}
}
/*
* Fill the dimm struct
* Allocate and fill the dimm structs
*/
mci->dimms = kcalloc(tot_dimms, sizeof(*mci->dimms), GFP_KERNEL);
if (!mci->dimms)
goto error;
memset(&pos, 0, sizeof(pos));
row = 0;
chn = 0;
debugf4("%s: initializing %d %s\n", __func__, tot_dimms,
per_rank ? "ranks" : "dimms");
for (i = 0; i < tot_dimms; i++) {
chan = &csi[row].channels[chn];
dimm = EDAC_DIMM_PTR(layer, mci->dimms, n_layers,
pos[0], pos[1], pos[2]);
dimm->mci = mci;
chan = mci->csrows[row]->channels[chn];
off = EDAC_DIMM_OFF(layer, n_layers, pos[0], pos[1], pos[2]);
if (off < 0 || off >= tot_dimms) {
edac_mc_printk(mci, KERN_ERR, "EDAC core bug: EDAC_DIMM_OFF is trying to do an illegal data access\n");
goto error;
}
debugf2("%s: %d: %s%zd (%d:%d:%d): row %d, chan %d\n", __func__,
i, per_rank ? "rank" : "dimm", (dimm - mci->dimms),
pos[0], pos[1], pos[2], row, chn);
dimm = kzalloc(sizeof(**mci->dimms), GFP_KERNEL);
if (!dimm)
goto error;
mci->dimms[off] = dimm;
dimm->mci = mci;
/*
* Copy DIMM location and initialize it.
@ -367,16 +402,6 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
}
mci->op_state = OP_ALLOC;
INIT_LIST_HEAD(&mci->grp_kobj_list);
/*
* Initialize the 'root' kobj for the edac_mc controller
*/
err = edac_mc_register_sysfs_main_kobj(mci);
if (err) {
kfree(mci);
return NULL;
}
/* at this point, the root kobj is valid, and in order to
* 'free' the object, then the function:
@ -384,7 +409,30 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
* which will perform kobj unregistration and the actual free
* will occur during the kobject callback operation
*/
return mci;
error:
if (mci->dimms) {
for (i = 0; i < tot_dimms; i++)
kfree(mci->dimms[i]);
kfree(mci->dimms);
}
if (mci->csrows) {
for (row = 0; row < tot_csrows; row++) {
csr = mci->csrows[row];
if (!csr)
continue;
if (csr->channels) {
for (chn = 0; chn < tot_channels; chn++)
kfree(csr->channels[chn]);
kfree(csr->channels);
}
kfree(csr);
}
kfree(mci->csrows);
}
kfree(mci);
return NULL;
}
EXPORT_SYMBOL_GPL(edac_mc_alloc);
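After this rework, mci->csrows[], each csrow's channels[] and mci->dimms[] are arrays of pointers to individually kzalloc()ed objects, as Documentation/kobject.txt expects for kobject-backed structures. A hedged caller sketch follows; the edac_mc_layer fields and EDAC_MC_LAYER_* enum values are assumed from include/linux/edac.h of this era rather than shown in this diff, and struct my_pvt is a made-up private type:

	struct edac_mc_layer layers[2];
	struct mem_ctl_info *mci;

	layers[0].type = EDAC_MC_LAYER_CHANNEL;	/* assumed enum value */
	layers[0].size = 2;
	layers[0].is_virt_csrow = false;
	layers[1].type = EDAC_MC_LAYER_SLOT;	/* assumed enum value */
	layers[1].size = 4;
	layers[1].is_virt_csrow = true;

	mci = edac_mc_alloc(0, ARRAY_SIZE(layers), layers,
			    sizeof(struct my_pvt));	/* hypothetical pvt type */
	if (!mci)
		return -ENOMEM;

	/* every csrow, channel and dimm object is now its own allocation,
	 * torn down together with the mci */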
@ -395,12 +443,10 @@ EXPORT_SYMBOL_GPL(edac_mc_alloc);
*/
void edac_mc_free(struct mem_ctl_info *mci)
{
debugf1("%s()\n", __func__);
edac_dbg(1, "\n");
edac_mc_unregister_sysfs_main_kobj(mci);
/* free the mci instance memory here */
kfree(mci);
/* the mci instance is freed here, when the sysfs object is dropped */
edac_unregister_sysfs(mci);
}
EXPORT_SYMBOL_GPL(edac_mc_free);
@ -417,12 +463,12 @@ struct mem_ctl_info *find_mci_by_dev(struct device *dev)
struct mem_ctl_info *mci;
struct list_head *item;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
list_for_each(item, &mc_devices) {
mci = list_entry(item, struct mem_ctl_info, link);
if (mci->dev == dev)
if (mci->pdev == dev)
return mci;
}
@ -485,7 +531,7 @@ static void edac_mc_workq_function(struct work_struct *work_req)
*/
static void edac_mc_workq_setup(struct mem_ctl_info *mci, unsigned msec)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* if this instance is not in the POLL state, then simply return */
if (mci->op_state != OP_RUNNING_POLL)
@ -512,8 +558,7 @@ static void edac_mc_workq_teardown(struct mem_ctl_info *mci)
status = cancel_delayed_work(&mci->work);
if (status == 0) {
debugf0("%s() not canceled, flush the queue\n",
__func__);
edac_dbg(0, "not canceled, flush the queue\n");
/* workq instance might be running, wait for it */
flush_workqueue(edac_workqueue);
@ -574,7 +619,7 @@ static int add_mc_to_global_list(struct mem_ctl_info *mci)
insert_before = &mc_devices;
p = find_mci_by_dev(mci->dev);
p = find_mci_by_dev(mci->pdev);
if (unlikely(p != NULL))
goto fail0;
@ -596,7 +641,7 @@ static int add_mc_to_global_list(struct mem_ctl_info *mci)
fail0:
edac_printk(KERN_WARNING, EDAC_MC,
"%s (%s) %s %s already assigned %d\n", dev_name(p->dev),
"%s (%s) %s %s already assigned %d\n", dev_name(p->pdev),
edac_dev_name(mci), p->mod_name, p->ctl_name, p->mc_idx);
return 1;
@ -660,7 +705,7 @@ EXPORT_SYMBOL(edac_mc_find);
/* FIXME - should a warning be printed if no error detection? correction? */
int edac_mc_add_mc(struct mem_ctl_info *mci)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
#ifdef CONFIG_EDAC_DEBUG
if (edac_debug_level >= 3)
@ -670,15 +715,22 @@ int edac_mc_add_mc(struct mem_ctl_info *mci)
int i;
for (i = 0; i < mci->nr_csrows; i++) {
struct csrow_info *csrow = mci->csrows[i];
u32 nr_pages = 0;
int j;
edac_mc_dump_csrow(&mci->csrows[i]);
for (j = 0; j < mci->csrows[i].nr_channels; j++)
edac_mc_dump_channel(&mci->csrows[i].
channels[j]);
for (j = 0; j < csrow->nr_channels; j++)
nr_pages += csrow->channels[j]->dimm->nr_pages;
if (!nr_pages)
continue;
edac_mc_dump_csrow(csrow);
for (j = 0; j < csrow->nr_channels; j++)
if (csrow->channels[j]->dimm->nr_pages)
edac_mc_dump_channel(csrow->channels[j]);
}
for (i = 0; i < mci->tot_dimms; i++)
edac_mc_dump_dimm(&mci->dimms[i]);
if (mci->dimms[i]->nr_pages)
edac_mc_dump_dimm(mci->dimms[i], i);
}
#endif
mutex_lock(&mem_ctls_mutex);
@ -732,7 +784,7 @@ struct mem_ctl_info *edac_mc_del_mc(struct device *dev)
{
struct mem_ctl_info *mci;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
mutex_lock(&mem_ctls_mutex);
@ -770,7 +822,7 @@ static void edac_mc_scrub_block(unsigned long page, unsigned long offset,
void *virt_addr;
unsigned long flags = 0;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* ECC error page was not in our memory. Ignore it. */
if (!pfn_valid(page))
@ -797,26 +849,26 @@ static void edac_mc_scrub_block(unsigned long page, unsigned long offset,
/* FIXME - should return -1 */
int edac_mc_find_csrow_by_page(struct mem_ctl_info *mci, unsigned long page)
{
struct csrow_info *csrows = mci->csrows;
struct csrow_info **csrows = mci->csrows;
int row, i, j, n;
debugf1("MC%d: %s(): 0x%lx\n", mci->mc_idx, __func__, page);
edac_dbg(1, "MC%d: 0x%lx\n", mci->mc_idx, page);
row = -1;
for (i = 0; i < mci->nr_csrows; i++) {
struct csrow_info *csrow = &csrows[i];
struct csrow_info *csrow = csrows[i];
n = 0;
for (j = 0; j < csrow->nr_channels; j++) {
struct dimm_info *dimm = csrow->channels[j].dimm;
struct dimm_info *dimm = csrow->channels[j]->dimm;
n += dimm->nr_pages;
}
if (n == 0)
continue;
debugf3("MC%d: %s(): first(0x%lx) page(0x%lx) last(0x%lx) "
"mask(0x%lx)\n", mci->mc_idx, __func__,
csrow->first_page, page, csrow->last_page,
csrow->page_mask);
edac_dbg(3, "MC%d: first(0x%lx) page(0x%lx) last(0x%lx) mask(0x%lx)\n",
mci->mc_idx,
csrow->first_page, page, csrow->last_page,
csrow->page_mask);
if ((page >= csrow->first_page) &&
(page <= csrow->last_page) &&
@ -845,15 +897,16 @@ const char *edac_layer_name[] = {
EXPORT_SYMBOL_GPL(edac_layer_name);
static void edac_inc_ce_error(struct mem_ctl_info *mci,
bool enable_per_layer_report,
const int pos[EDAC_MAX_LAYERS])
bool enable_per_layer_report,
const int pos[EDAC_MAX_LAYERS],
const u16 count)
{
int i, index = 0;
mci->ce_mc++;
mci->ce_mc += count;
if (!enable_per_layer_report) {
mci->ce_noinfo_count++;
mci->ce_noinfo_count += count;
return;
}
@ -861,7 +914,7 @@ static void edac_inc_ce_error(struct mem_ctl_info *mci,
if (pos[i] < 0)
break;
index += pos[i];
mci->ce_per_layer[i][index]++;
mci->ce_per_layer[i][index] += count;
if (i < mci->n_layers - 1)
index *= mci->layers[i + 1].size;
@ -870,14 +923,15 @@ static void edac_inc_ce_error(struct mem_ctl_info *mci,
static void edac_inc_ue_error(struct mem_ctl_info *mci,
bool enable_per_layer_report,
const int pos[EDAC_MAX_LAYERS])
const int pos[EDAC_MAX_LAYERS],
const u16 count)
{
int i, index = 0;
mci->ue_mc++;
mci->ue_mc += count;
if (!enable_per_layer_report) {
mci->ce_noinfo_count++;
mci->ce_noinfo_count += count;
return;
}
@ -885,7 +939,7 @@ static void edac_inc_ue_error(struct mem_ctl_info *mci,
if (pos[i] < 0)
break;
index += pos[i];
mci->ue_per_layer[i][index]++;
mci->ue_per_layer[i][index] += count;
if (i < mci->n_layers - 1)
index *= mci->layers[i + 1].size;
@ -893,6 +947,7 @@ static void edac_inc_ue_error(struct mem_ctl_info *mci,
}
static void edac_ce_error(struct mem_ctl_info *mci,
const u16 error_count,
const int pos[EDAC_MAX_LAYERS],
const char *msg,
const char *location,
@ -902,23 +957,25 @@ static void edac_ce_error(struct mem_ctl_info *mci,
const bool enable_per_layer_report,
const unsigned long page_frame_number,
const unsigned long offset_in_page,
u32 grain)
long grain)
{
unsigned long remapped_page;
if (edac_mc_get_log_ce()) {
if (other_detail && *other_detail)
edac_mc_printk(mci, KERN_WARNING,
"CE %s on %s (%s%s - %s)\n",
"%d CE %s on %s (%s %s - %s)\n",
error_count,
msg, label, location,
detail, other_detail);
else
edac_mc_printk(mci, KERN_WARNING,
"CE %s on %s (%s%s)\n",
"%d CE %s on %s (%s %s)\n",
error_count,
msg, label, location,
detail);
}
edac_inc_ce_error(mci, enable_per_layer_report, pos);
edac_inc_ce_error(mci, enable_per_layer_report, pos, error_count);
if (mci->scrub_mode & SCRUB_SW_SRC) {
/*
@ -942,6 +999,7 @@ static void edac_ce_error(struct mem_ctl_info *mci,
}
static void edac_ue_error(struct mem_ctl_info *mci,
const u16 error_count,
const int pos[EDAC_MAX_LAYERS],
const char *msg,
const char *location,
@ -953,12 +1011,14 @@ static void edac_ue_error(struct mem_ctl_info *mci,
if (edac_mc_get_log_ue()) {
if (other_detail && *other_detail)
edac_mc_printk(mci, KERN_WARNING,
"UE %s on %s (%s%s - %s)\n",
"%d UE %s on %s (%s %s - %s)\n",
error_count,
msg, label, location, detail,
other_detail);
else
edac_mc_printk(mci, KERN_WARNING,
"UE %s on %s (%s%s)\n",
"%d UE %s on %s (%s %s)\n",
error_count,
msg, label, location, detail);
}
@ -971,33 +1031,53 @@ static void edac_ue_error(struct mem_ctl_info *mci,
msg, label, location, detail);
}
edac_inc_ue_error(mci, enable_per_layer_report, pos);
edac_inc_ue_error(mci, enable_per_layer_report, pos, error_count);
}
#define OTHER_LABEL " or "
/**
* edac_mc_handle_error - reports a memory event to userspace
*
* @type: severity of the error (CE/UE/Fatal)
* @mci: a struct mem_ctl_info pointer
* @error_count: Number of errors of the same type
* @page_frame_number: mem page where the error occurred
* @offset_in_page: offset of the error inside the page
* @syndrome: ECC syndrome
* @top_layer: Memory layer[0] position
* @mid_layer: Memory layer[1] position
* @low_layer: Memory layer[2] position
* @msg: Message meaningful to the end users that
* explains the event
* @other_detail: Technical details about the event that
* may help hardware manufacturers and
* EDAC developers to analyse the event
*/
void edac_mc_handle_error(const enum hw_event_mc_err_type type,
struct mem_ctl_info *mci,
const u16 error_count,
const unsigned long page_frame_number,
const unsigned long offset_in_page,
const unsigned long syndrome,
const int layer0,
const int layer1,
const int layer2,
const int top_layer,
const int mid_layer,
const int low_layer,
const char *msg,
const char *other_detail,
const void *mcelog)
const char *other_detail)
{
/* FIXME: too much for stack: move it to some pre-allocated area */
char detail[80], location[80];
char label[(EDAC_MC_LABEL_LEN + 1 + sizeof(OTHER_LABEL)) * mci->tot_dimms];
char *p;
int row = -1, chan = -1;
int pos[EDAC_MAX_LAYERS] = { layer0, layer1, layer2 };
int pos[EDAC_MAX_LAYERS] = { top_layer, mid_layer, low_layer };
int i;
u32 grain;
long grain;
bool enable_per_layer_report = false;
u8 grain_bits;
debugf3("MC%d: %s()\n", mci->mc_idx, __func__);
edac_dbg(3, "MC%d\n", mci->mc_idx);
/*
* Check if the event report is consistent and if the memory
@ -1043,13 +1123,13 @@ void edac_mc_handle_error(const enum hw_event_mc_err_type type,
p = label;
*p = '\0';
for (i = 0; i < mci->tot_dimms; i++) {
struct dimm_info *dimm = &mci->dimms[i];
struct dimm_info *dimm = mci->dimms[i];
if (layer0 >= 0 && layer0 != dimm->location[0])
if (top_layer >= 0 && top_layer != dimm->location[0])
continue;
if (layer1 >= 0 && layer1 != dimm->location[1])
if (mid_layer >= 0 && mid_layer != dimm->location[1])
continue;
if (layer2 >= 0 && layer2 != dimm->location[2])
if (low_layer >= 0 && low_layer != dimm->location[2])
continue;
/* get the max grain, over the error match range */
@ -1075,11 +1155,9 @@ void edac_mc_handle_error(const enum hw_event_mc_err_type type,
* get csrow/channel of the DIMM, in order to allow
* incrementing the compat API counters
*/
debugf4("%s: %s csrows map: (%d,%d)\n",
__func__,
mci->mem_is_per_rank ? "rank" : "dimm",
dimm->csrow, dimm->cschannel);
edac_dbg(4, "%s csrows map: (%d,%d)\n",
mci->mem_is_per_rank ? "rank" : "dimm",
dimm->csrow, dimm->cschannel);
if (row == -1)
row = dimm->csrow;
else if (row >= 0 && row != dimm->csrow)
@ -1095,19 +1173,18 @@ void edac_mc_handle_error(const enum hw_event_mc_err_type type,
if (!enable_per_layer_report) {
strcpy(label, "any memory");
} else {
debugf4("%s: csrow/channel to increment: (%d,%d)\n",
__func__, row, chan);
edac_dbg(4, "csrow/channel to increment: (%d,%d)\n", row, chan);
if (p == label)
strcpy(label, "unknown memory");
if (type == HW_EVENT_ERR_CORRECTED) {
if (row >= 0) {
mci->csrows[row].ce_count++;
mci->csrows[row]->ce_count += error_count;
if (chan >= 0)
mci->csrows[row].channels[chan].ce_count++;
mci->csrows[row]->channels[chan]->ce_count += error_count;
}
} else
if (row >= 0)
mci->csrows[row].ue_count++;
mci->csrows[row]->ue_count += error_count;
}
/* Fill the RAM location data */
@ -1120,23 +1197,33 @@ void edac_mc_handle_error(const enum hw_event_mc_err_type type,
edac_layer_name[mci->layers[i].type],
pos[i]);
}
if (p > location)
*(p - 1) = '\0';
/* Report the error via the trace interface */
grain_bits = fls_long(grain) + 1;
trace_mc_event(type, msg, label, error_count,
mci->mc_idx, top_layer, mid_layer, low_layer,
PAGES_TO_MiB(page_frame_number) | offset_in_page,
grain_bits, syndrome, other_detail);
/* Memory type dependent details about the error */
if (type == HW_EVENT_ERR_CORRECTED) {
snprintf(detail, sizeof(detail),
"page:0x%lx offset:0x%lx grain:%d syndrome:0x%lx",
"page:0x%lx offset:0x%lx grain:%ld syndrome:0x%lx",
page_frame_number, offset_in_page,
grain, syndrome);
edac_ce_error(mci, pos, msg, location, label, detail,
other_detail, enable_per_layer_report,
edac_ce_error(mci, error_count, pos, msg, location, label,
detail, other_detail, enable_per_layer_report,
page_frame_number, offset_in_page, grain);
} else {
snprintf(detail, sizeof(detail),
"page:0x%lx offset:0x%lx grain:%d",
"page:0x%lx offset:0x%lx grain:%ld",
page_frame_number, offset_in_page, grain);
edac_ue_error(mci, pos, msg, location, label, detail,
other_detail, enable_per_layer_report);
edac_ue_error(mci, error_count, pos, msg, location, label,
detail, other_detail, enable_per_layer_report);
}
}
EXPORT_SYMBOL_GPL(edac_mc_handle_error);

File diff suppressed because it is too large

View File

@ -15,7 +15,7 @@
#include "edac_core.h"
#include "edac_module.h"
#define EDAC_VERSION "Ver: 2.1.0"
#define EDAC_VERSION "Ver: 3.0.0"
#ifdef CONFIG_EDAC_DEBUG
/* Values of 0 to 4 will generate output */
@ -90,26 +90,21 @@ static int __init edac_init(void)
*/
edac_pci_clear_parity_errors();
/*
* now set up the mc_kset under the edac class object
*/
err = edac_sysfs_setup_mc_kset();
err = edac_mc_sysfs_init();
if (err)
goto error;
edac_debugfs_init();
/* Setup/Initialize the workq for this core */
err = edac_workqueue_setup();
if (err) {
edac_printk(KERN_ERR, EDAC_MC, "init WorkQueue failure\n");
goto workq_fail;
goto error;
}
return 0;
/* Error teardown stack */
workq_fail:
edac_sysfs_teardown_mc_kset();
error:
return err;
}
@ -120,11 +115,12 @@ error:
*/
static void __exit edac_exit(void)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* tear down the various subsystems */
edac_workqueue_teardown();
edac_sysfs_teardown_mc_kset();
edac_mc_sysfs_exit();
edac_debugfs_exit();
}
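The debugfX()-to-edac_dbg() conversions here and throughout the rest of the series drop the explicit __func__/__FILE__ arguments because the new macro adds that prefix itself. The macro (defined in edac_core.h) is presumably along these lines:

#define edac_dbg(level, fmt, ...)					\
do {									\
	if (level <= edac_debug_level)					\
		edac_printk(KERN_DEBUG, EDAC_DEBUG,			\
			    "%s: " fmt, __func__, ##__VA_ARGS__);	\
} while (0)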
/*


@ -19,12 +19,12 @@
*
* edac_mc objects
*/
extern int edac_sysfs_setup_mc_kset(void);
extern void edac_sysfs_teardown_mc_kset(void);
extern int edac_mc_register_sysfs_main_kobj(struct mem_ctl_info *mci);
extern void edac_mc_unregister_sysfs_main_kobj(struct mem_ctl_info *mci);
/* on edac_mc_sysfs.c */
int edac_mc_sysfs_init(void);
void edac_mc_sysfs_exit(void);
extern int edac_create_sysfs_mci_device(struct mem_ctl_info *mci);
extern void edac_remove_sysfs_mci_device(struct mem_ctl_info *mci);
void edac_unregister_sysfs(struct mem_ctl_info *mci);
extern int edac_get_log_ue(void);
extern int edac_get_log_ce(void);
extern int edac_get_panic_on_ue(void);
@ -34,6 +34,10 @@ extern int edac_mc_get_panic_on_ue(void);
extern int edac_get_poll_msec(void);
extern int edac_mc_get_poll_msec(void);
unsigned edac_dimm_info_location(struct dimm_info *dimm, char *buf,
unsigned len);
/* on edac_device.c */
extern int edac_device_register_sysfs_main_kobj(
struct edac_device_ctl_info *edac_dev);
extern void edac_device_unregister_sysfs_main_kobj(
@ -52,6 +56,20 @@ extern void edac_mc_reset_delay_period(int value);
extern void *edac_align_ptr(void **p, unsigned size, int n_elems);
/*
* EDAC debugfs functions
*/
#ifdef CONFIG_EDAC_DEBUG
int edac_debugfs_init(void);
void edac_debugfs_exit(void);
#else
static inline int edac_debugfs_init(void)
{
return -ENODEV;
}
static inline void edac_debugfs_exit(void) {}
#endif
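These declarations back the new top-level debugfs directory; the bodies are not in the hunks shown here (they live in edac_mc_sysfs.c). A plausible sketch of what they do, using the standard <linux/debugfs.h> helpers with details assumed rather than copied:

static struct dentry *edac_debugfs;

int edac_debugfs_init(void)
{
	/* single "edac" directory that drivers hang their nodes from */
	edac_debugfs = debugfs_create_dir("edac", NULL);
	if (!edac_debugfs)
		return -ENODEV;
	return 0;
}

void edac_debugfs_exit(void)
{
	debugfs_remove(edac_debugfs);
}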
/*
* EDAC PCI functions
*/


@ -45,7 +45,7 @@ struct edac_pci_ctl_info *edac_pci_alloc_ctl_info(unsigned int sz_pvt,
void *p = NULL, *pvt;
unsigned int size;
debugf1("%s()\n", __func__);
edac_dbg(1, "\n");
pci = edac_align_ptr(&p, sizeof(*pci), 1);
pvt = edac_align_ptr(&p, 1, sz_pvt);
@ -80,7 +80,7 @@ EXPORT_SYMBOL_GPL(edac_pci_alloc_ctl_info);
*/
void edac_pci_free_ctl_info(struct edac_pci_ctl_info *pci)
{
debugf1("%s()\n", __func__);
edac_dbg(1, "\n");
edac_pci_remove_sysfs(pci);
}
@ -97,7 +97,7 @@ static struct edac_pci_ctl_info *find_edac_pci_by_dev(struct device *dev)
struct edac_pci_ctl_info *pci;
struct list_head *item;
debugf1("%s()\n", __func__);
edac_dbg(1, "\n");
list_for_each(item, &edac_pci_list) {
pci = list_entry(item, struct edac_pci_ctl_info, link);
@ -122,7 +122,7 @@ static int add_edac_pci_to_global_list(struct edac_pci_ctl_info *pci)
struct list_head *item, *insert_before;
struct edac_pci_ctl_info *rover;
debugf1("%s()\n", __func__);
edac_dbg(1, "\n");
insert_before = &edac_pci_list;
@ -226,7 +226,7 @@ static void edac_pci_workq_function(struct work_struct *work_req)
int msec;
unsigned long delay;
debugf3("%s() checking\n", __func__);
edac_dbg(3, "checking\n");
mutex_lock(&edac_pci_ctls_mutex);
@ -261,7 +261,7 @@ static void edac_pci_workq_function(struct work_struct *work_req)
static void edac_pci_workq_setup(struct edac_pci_ctl_info *pci,
unsigned int msec)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
INIT_DELAYED_WORK(&pci->work, edac_pci_workq_function);
queue_delayed_work(edac_workqueue, &pci->work,
@ -276,7 +276,7 @@ static void edac_pci_workq_teardown(struct edac_pci_ctl_info *pci)
{
int status;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
status = cancel_delayed_work(&pci->work);
if (status == 0)
@ -293,7 +293,7 @@ static void edac_pci_workq_teardown(struct edac_pci_ctl_info *pci)
void edac_pci_reset_delay_period(struct edac_pci_ctl_info *pci,
unsigned long value)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
edac_pci_workq_teardown(pci);
@ -333,7 +333,7 @@ EXPORT_SYMBOL_GPL(edac_pci_alloc_index);
*/
int edac_pci_add_device(struct edac_pci_ctl_info *pci, int edac_idx)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
pci->pci_idx = edac_idx;
pci->start_time = jiffies;
@ -393,7 +393,7 @@ struct edac_pci_ctl_info *edac_pci_del_device(struct device *dev)
{
struct edac_pci_ctl_info *pci;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
mutex_lock(&edac_pci_ctls_mutex);
@ -430,7 +430,7 @@ EXPORT_SYMBOL_GPL(edac_pci_del_device);
*/
static void edac_pci_generic_check(struct edac_pci_ctl_info *pci)
{
debugf4("%s()\n", __func__);
edac_dbg(4, "\n");
edac_pci_do_parity_check();
}
@ -475,7 +475,7 @@ struct edac_pci_ctl_info *edac_pci_create_generic_ctl(struct device *dev,
pdata->edac_idx = edac_pci_idx++;
if (edac_pci_add_device(pci, pdata->edac_idx) > 0) {
debugf3("%s(): failed edac_pci_add_device()\n", __func__);
edac_dbg(3, "failed edac_pci_add_device()\n");
edac_pci_free_ctl_info(pci);
return NULL;
}
@ -491,7 +491,7 @@ EXPORT_SYMBOL_GPL(edac_pci_create_generic_ctl);
*/
void edac_pci_release_generic_ctl(struct edac_pci_ctl_info *pci)
{
debugf0("%s() pci mod=%s\n", __func__, pci->mod_name);
edac_dbg(0, "pci mod=%s\n", pci->mod_name);
edac_pci_del_device(pci->dev);
edac_pci_free_ctl_info(pci);


@ -78,7 +78,7 @@ static void edac_pci_instance_release(struct kobject *kobj)
{
struct edac_pci_ctl_info *pci;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* Form pointer to containing struct, the pci control struct */
pci = to_instance(kobj);
@ -161,7 +161,7 @@ static int edac_pci_create_instance_kobj(struct edac_pci_ctl_info *pci, int idx)
struct kobject *main_kobj;
int err;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* First bump the ref count on the top main kobj, which will
* track the number of PCI instances we have, and thus nest
@ -177,14 +177,13 @@ static int edac_pci_create_instance_kobj(struct edac_pci_ctl_info *pci, int idx)
err = kobject_init_and_add(&pci->kobj, &ktype_pci_instance,
edac_pci_top_main_kobj, "pci%d", idx);
if (err != 0) {
debugf2("%s() failed to register instance pci%d\n",
__func__, idx);
edac_dbg(2, "failed to register instance pci%d\n", idx);
kobject_put(edac_pci_top_main_kobj);
goto error_out;
}
kobject_uevent(&pci->kobj, KOBJ_ADD);
debugf1("%s() Register instance 'pci%d' kobject\n", __func__, idx);
edac_dbg(1, "Register instance 'pci%d' kobject\n", idx);
return 0;
@ -201,7 +200,7 @@ error_out:
static void edac_pci_unregister_sysfs_instance_kobj(
struct edac_pci_ctl_info *pci)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* Unregister the instance kobject and allow its release
* function release the main reference count and then
@ -317,7 +316,7 @@ static struct edac_pci_dev_attribute *edac_pci_attr[] = {
*/
static void edac_pci_release_main_kobj(struct kobject *kobj)
{
debugf0("%s() here to module_put(THIS_MODULE)\n", __func__);
edac_dbg(0, "here to module_put(THIS_MODULE)\n");
kfree(kobj);
@ -345,7 +344,7 @@ static int edac_pci_main_kobj_setup(void)
int err;
struct bus_type *edac_subsys;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* check and count if we have already created the main kobject */
if (atomic_inc_return(&edac_pci_sysfs_refcount) != 1)
@ -356,7 +355,7 @@ static int edac_pci_main_kobj_setup(void)
*/
edac_subsys = edac_get_sysfs_subsys();
if (edac_subsys == NULL) {
debugf1("%s() no edac_subsys\n", __func__);
edac_dbg(1, "no edac_subsys\n");
err = -ENODEV;
goto decrement_count_fail;
}
@ -366,14 +365,14 @@ static int edac_pci_main_kobj_setup(void)
* level main kobj for EDAC PCI
*/
if (!try_module_get(THIS_MODULE)) {
debugf1("%s() try_module_get() failed\n", __func__);
edac_dbg(1, "try_module_get() failed\n");
err = -ENODEV;
goto mod_get_fail;
}
edac_pci_top_main_kobj = kzalloc(sizeof(struct kobject), GFP_KERNEL);
if (!edac_pci_top_main_kobj) {
debugf1("Failed to allocate\n");
edac_dbg(1, "Failed to allocate\n");
err = -ENOMEM;
goto kzalloc_fail;
}
@ -383,7 +382,7 @@ static int edac_pci_main_kobj_setup(void)
&ktype_edac_pci_main_kobj,
&edac_subsys->dev_root->kobj, "pci");
if (err) {
debugf1("Failed to register '.../edac/pci'\n");
edac_dbg(1, "Failed to register '.../edac/pci'\n");
goto kobject_init_and_add_fail;
}
@ -392,7 +391,7 @@ static int edac_pci_main_kobj_setup(void)
* must be used, for resources to be cleaned up properly
*/
kobject_uevent(edac_pci_top_main_kobj, KOBJ_ADD);
debugf1("Registered '.../edac/pci' kobject\n");
edac_dbg(1, "Registered '.../edac/pci' kobject\n");
return 0;
@ -421,15 +420,14 @@ decrement_count_fail:
*/
static void edac_pci_main_kobj_teardown(void)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* Decrement the count and only if no more controller instances
	 * are connected perform the unregistration of the top level
* main kobj
*/
if (atomic_dec_return(&edac_pci_sysfs_refcount) == 0) {
debugf0("%s() called kobject_put on main kobj\n",
__func__);
edac_dbg(0, "called kobject_put on main kobj\n");
kobject_put(edac_pci_top_main_kobj);
}
edac_put_sysfs_subsys();
@ -446,7 +444,7 @@ int edac_pci_create_sysfs(struct edac_pci_ctl_info *pci)
int err;
struct kobject *edac_kobj = &pci->kobj;
debugf0("%s() idx=%d\n", __func__, pci->pci_idx);
edac_dbg(0, "idx=%d\n", pci->pci_idx);
/* create the top main EDAC PCI kobject, IF needed */
err = edac_pci_main_kobj_setup();
@ -460,8 +458,7 @@ int edac_pci_create_sysfs(struct edac_pci_ctl_info *pci)
err = sysfs_create_link(edac_kobj, &pci->dev->kobj, EDAC_PCI_SYMLINK);
if (err) {
debugf0("%s() sysfs_create_link() returned err= %d\n",
__func__, err);
edac_dbg(0, "sysfs_create_link() returned err= %d\n", err);
goto symlink_fail;
}
@ -484,7 +481,7 @@ unregister_cleanup:
*/
void edac_pci_remove_sysfs(struct edac_pci_ctl_info *pci)
{
debugf0("%s() index=%d\n", __func__, pci->pci_idx);
edac_dbg(0, "index=%d\n", pci->pci_idx);
/* Remove the symlink */
sysfs_remove_link(&pci->kobj, EDAC_PCI_SYMLINK);
@ -496,7 +493,7 @@ void edac_pci_remove_sysfs(struct edac_pci_ctl_info *pci)
* if this 'pci' is the last instance.
* If it is, the main kobject will be unregistered as a result
*/
debugf0("%s() calling edac_pci_main_kobj_teardown()\n", __func__);
edac_dbg(0, "calling edac_pci_main_kobj_teardown()\n");
edac_pci_main_kobj_teardown();
}
@ -572,7 +569,7 @@ static void edac_pci_dev_parity_test(struct pci_dev *dev)
local_irq_restore(flags);
debugf4("PCI STATUS= 0x%04x %s\n", status, dev_name(&dev->dev));
edac_dbg(4, "PCI STATUS= 0x%04x %s\n", status, dev_name(&dev->dev));
/* check the status reg for errors on boards NOT marked as broken
* if broken, we cannot trust any of the status bits
@ -603,13 +600,15 @@ static void edac_pci_dev_parity_test(struct pci_dev *dev)
}
debugf4("PCI HEADER TYPE= 0x%02x %s\n", header_type, dev_name(&dev->dev));
edac_dbg(4, "PCI HEADER TYPE= 0x%02x %s\n",
header_type, dev_name(&dev->dev));
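	/* Bit 7 of the PCI header type is the multi-function flag, which is
	 * why it is masked off before the bridge comparison below; bridges
	 * also latch parity errors in their secondary status register,
	 * hence the extra get_pci_parity_status(dev, 1) pass. */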
if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
/* On bridges, need to examine secondary status register */
status = get_pci_parity_status(dev, 1);
debugf4("PCI SEC_STATUS= 0x%04x %s\n", status, dev_name(&dev->dev));
edac_dbg(4, "PCI SEC_STATUS= 0x%04x %s\n",
status, dev_name(&dev->dev));
/* check the secondary status reg for errors,
* on NOT broken boards
@ -671,7 +670,7 @@ void edac_pci_do_parity_check(void)
{
int before_count;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* if policy has PCI check off, leave now */
if (!check_pci_errors)


@ -0,0 +1,149 @@
/*
* Copyright 2011-2012 Calxeda, Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/ctype.h>
#include <linux/edac.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/of_platform.h>
#include "edac_core.h"
#include "edac_module.h"
#define SR_CLR_SB_ECC_INTR 0x0
#define SR_CLR_DB_ECC_INTR 0x4
struct hb_l2_drvdata {
void __iomem *base;
int sb_irq;
int db_irq;
};
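/*
 * The two interrupt lines are told apart by number in the handler below:
 * sb_irq appears to signal single-bit (correctable) ECC events and db_irq
 * double-bit (uncorrectable) ones, each acknowledged by writing 1 to the
 * matching SR_CLR_*_ECC_INTR register before being forwarded to
 * edac_device_handle_ce()/_ue().
 */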
static irqreturn_t highbank_l2_err_handler(int irq, void *dev_id)
{
struct edac_device_ctl_info *dci = dev_id;
struct hb_l2_drvdata *drvdata = dci->pvt_info;
if (irq == drvdata->sb_irq) {
writel(1, drvdata->base + SR_CLR_SB_ECC_INTR);
edac_device_handle_ce(dci, 0, 0, dci->ctl_name);
}
if (irq == drvdata->db_irq) {
writel(1, drvdata->base + SR_CLR_DB_ECC_INTR);
edac_device_handle_ue(dci, 0, 0, dci->ctl_name);
}
return IRQ_HANDLED;
}
static int __devinit highbank_l2_err_probe(struct platform_device *pdev)
{
struct edac_device_ctl_info *dci;
struct hb_l2_drvdata *drvdata;
struct resource *r;
int res = 0;
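	/*
	 * One EDAC "cpu" device with a single instance and a single block
	 * is allocated here; the "L", 1, 2 arguments presumably yield a
	 * block named "L2" (block-name prefix, one block, starting offset
	 * 2), matching the cache level this driver watches.
	 */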
dci = edac_device_alloc_ctl_info(sizeof(*drvdata), "cpu",
1, "L", 1, 2, NULL, 0, 0);
if (!dci)
return -ENOMEM;
drvdata = dci->pvt_info;
dci->dev = &pdev->dev;
platform_set_drvdata(pdev, dci);
if (!devres_open_group(&pdev->dev, NULL, GFP_KERNEL))
return -ENOMEM;
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!r) {
dev_err(&pdev->dev, "Unable to get mem resource\n");
res = -ENODEV;
goto err;
}
if (!devm_request_mem_region(&pdev->dev, r->start,
resource_size(r), dev_name(&pdev->dev))) {
dev_err(&pdev->dev, "Error while requesting mem region\n");
res = -EBUSY;
goto err;
}
drvdata->base = devm_ioremap(&pdev->dev, r->start, resource_size(r));
if (!drvdata->base) {
dev_err(&pdev->dev, "Unable to map regs\n");
res = -ENOMEM;
goto err;
}
drvdata->db_irq = platform_get_irq(pdev, 0);
res = devm_request_irq(&pdev->dev, drvdata->db_irq,
highbank_l2_err_handler,
0, dev_name(&pdev->dev), dci);
if (res < 0)
goto err;
drvdata->sb_irq = platform_get_irq(pdev, 1);
res = devm_request_irq(&pdev->dev, drvdata->sb_irq,
highbank_l2_err_handler,
0, dev_name(&pdev->dev), dci);
if (res < 0)
goto err;
dci->mod_name = dev_name(&pdev->dev);
dci->dev_name = dev_name(&pdev->dev);
if (edac_device_add_device(dci))
goto err;
devres_close_group(&pdev->dev, NULL);
return 0;
err:
devres_release_group(&pdev->dev, NULL);
edac_device_free_ctl_info(dci);
return res;
}
static int highbank_l2_err_remove(struct platform_device *pdev)
{
struct edac_device_ctl_info *dci = platform_get_drvdata(pdev);
edac_device_del_device(&pdev->dev);
edac_device_free_ctl_info(dci);
return 0;
}
static const struct of_device_id hb_l2_err_of_match[] = {
{ .compatible = "calxeda,hb-sregs-l2-ecc", },
{},
};
MODULE_DEVICE_TABLE(of, hb_l2_err_of_match);
static struct platform_driver highbank_l2_edac_driver = {
.probe = highbank_l2_err_probe,
.remove = highbank_l2_err_remove,
.driver = {
.name = "hb_l2_edac",
.of_match_table = hb_l2_err_of_match,
},
};
module_platform_driver(highbank_l2_edac_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Calxeda, Inc.");
MODULE_DESCRIPTION("EDAC Driver for Calxeda Highbank L2 Cache");


@ -0,0 +1,264 @@
/*
* Copyright 2011-2012 Calxeda, Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/ctype.h>
#include <linux/edac.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/of_platform.h>
#include <linux/uaccess.h>
#include "edac_core.h"
#include "edac_module.h"
/* DDR Ctrlr Error Registers */
#define HB_DDR_ECC_OPT 0x128
#define HB_DDR_ECC_U_ERR_ADDR 0x130
#define HB_DDR_ECC_U_ERR_STAT 0x134
#define HB_DDR_ECC_U_ERR_DATAL 0x138
#define HB_DDR_ECC_U_ERR_DATAH 0x13c
#define HB_DDR_ECC_C_ERR_ADDR 0x140
#define HB_DDR_ECC_C_ERR_STAT 0x144
#define HB_DDR_ECC_C_ERR_DATAL 0x148
#define HB_DDR_ECC_C_ERR_DATAH 0x14c
#define HB_DDR_ECC_INT_STATUS 0x180
#define HB_DDR_ECC_INT_ACK 0x184
#define HB_DDR_ECC_U_ERR_ID 0x424
#define HB_DDR_ECC_C_ERR_ID 0x428
#define HB_DDR_ECC_INT_STAT_CE 0x8
#define HB_DDR_ECC_INT_STAT_DOUBLE_CE 0x10
#define HB_DDR_ECC_INT_STAT_UE 0x20
#define HB_DDR_ECC_INT_STAT_DOUBLE_UE 0x40
#define HB_DDR_ECC_OPT_MODE_MASK 0x3
#define HB_DDR_ECC_OPT_FWC 0x100
#define HB_DDR_ECC_OPT_XOR_SHIFT 16
struct hb_mc_drvdata {
void __iomem *mc_vbase;
};
static irqreturn_t highbank_mc_err_handler(int irq, void *dev_id)
{
struct mem_ctl_info *mci = dev_id;
struct hb_mc_drvdata *drvdata = mci->pvt_info;
u32 status, err_addr;
/* Read the interrupt status register */
status = readl(drvdata->mc_vbase + HB_DDR_ECC_INT_STATUS);
if (status & HB_DDR_ECC_INT_STAT_UE) {
err_addr = readl(drvdata->mc_vbase + HB_DDR_ECC_U_ERR_ADDR);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
err_addr >> PAGE_SHIFT,
err_addr & ~PAGE_MASK, 0,
0, 0, -1,
mci->ctl_name, "");
}
if (status & HB_DDR_ECC_INT_STAT_CE) {
u32 syndrome = readl(drvdata->mc_vbase + HB_DDR_ECC_C_ERR_STAT);
syndrome = (syndrome >> 8) & 0xff;
err_addr = readl(drvdata->mc_vbase + HB_DDR_ECC_C_ERR_ADDR);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
err_addr >> PAGE_SHIFT,
err_addr & ~PAGE_MASK, syndrome,
0, 0, -1,
mci->ctl_name, "");
}
/* clear the error, clears the interrupt */
writel(status, drvdata->mc_vbase + HB_DDR_ECC_INT_ACK);
return IRQ_HANDLED;
}
#ifdef CONFIG_EDAC_DEBUG
static ssize_t highbank_mc_err_inject_write(struct file *file,
const char __user *data,
size_t count, loff_t *ppos)
{
struct mem_ctl_info *mci = file->private_data;
struct hb_mc_drvdata *pdata = mci->pvt_info;
char buf[32];
size_t buf_size;
u32 reg;
u8 synd;
buf_size = min(count, (sizeof(buf)-1));
if (copy_from_user(buf, data, buf_size))
return -EFAULT;
buf[buf_size] = 0;
if (!kstrtou8(buf, 16, &synd)) {
reg = readl(pdata->mc_vbase + HB_DDR_ECC_OPT);
reg &= HB_DDR_ECC_OPT_MODE_MASK;
reg |= (synd << HB_DDR_ECC_OPT_XOR_SHIFT) | HB_DDR_ECC_OPT_FWC;
writel(reg, pdata->mc_vbase + HB_DDR_ECC_OPT);
}
return count;
}
static int debugfs_open(struct inode *inode, struct file *file)
{
file->private_data = inode->i_private;
return 0;
}
static const struct file_operations highbank_mc_debug_inject_fops = {
.open = debugfs_open,
.write = highbank_mc_err_inject_write,
.llseek = generic_file_llseek,
};
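Writing a hex byte to the node registered below appears to arm a forced-write-check cycle (HB_DDR_ECC_OPT_FWC) with that value placed in the XOR-syndrome field, so the next memory write plants a deliberate ECC error. A minimal user-space sketch; the debugfs path is an assumption about where mci->debugfs ends up:

#include <stdio.h>

int main(void)
{
	/* hypothetical location under the new top-level edac debugfs dir */
	FILE *f = fopen("/sys/kernel/debug/edac/mc0/inject_ctrl", "w");

	if (!f)
		return 1;
	fprintf(f, "10\n");	/* syndrome 0x10, parsed with kstrtou8(buf, 16, ...) */
	fclose(f);
	return 0;
}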
static void __devinit highbank_mc_create_debugfs_nodes(struct mem_ctl_info *mci)
{
if (mci->debugfs)
debugfs_create_file("inject_ctrl", S_IWUSR, mci->debugfs, mci,
&highbank_mc_debug_inject_fops);
;
}
#else
static void __devinit highbank_mc_create_debugfs_nodes(struct mem_ctl_info *mci)
{}
#endif
static int __devinit highbank_mc_probe(struct platform_device *pdev)
{
struct edac_mc_layer layers[2];
struct mem_ctl_info *mci;
struct hb_mc_drvdata *drvdata;
struct dimm_info *dimm;
struct resource *r;
u32 control;
int irq;
int res = 0;
layers[0].type = EDAC_MC_LAYER_CHIP_SELECT;
layers[0].size = 1;
layers[0].is_virt_csrow = true;
layers[1].type = EDAC_MC_LAYER_CHANNEL;
layers[1].size = 1;
layers[1].is_virt_csrow = false;
mci = edac_mc_alloc(0, ARRAY_SIZE(layers), layers,
sizeof(struct hb_mc_drvdata));
if (!mci)
return -ENOMEM;
mci->pdev = &pdev->dev;
drvdata = mci->pvt_info;
platform_set_drvdata(pdev, mci);
if (!devres_open_group(&pdev->dev, NULL, GFP_KERNEL))
return -ENOMEM;
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!r) {
dev_err(&pdev->dev, "Unable to get mem resource\n");
res = -ENODEV;
goto err;
}
if (!devm_request_mem_region(&pdev->dev, r->start,
resource_size(r), dev_name(&pdev->dev))) {
dev_err(&pdev->dev, "Error while requesting mem region\n");
res = -EBUSY;
goto err;
}
drvdata->mc_vbase = devm_ioremap(&pdev->dev,
r->start, resource_size(r));
if (!drvdata->mc_vbase) {
dev_err(&pdev->dev, "Unable to map regs\n");
res = -ENOMEM;
goto err;
}
control = readl(drvdata->mc_vbase + HB_DDR_ECC_OPT) & 0x3;
if (!control || (control == 0x2)) {
dev_err(&pdev->dev, "No ECC present, or ECC disabled\n");
res = -ENODEV;
goto err;
}
irq = platform_get_irq(pdev, 0);
res = devm_request_irq(&pdev->dev, irq, highbank_mc_err_handler,
0, dev_name(&pdev->dev), mci);
if (res < 0) {
dev_err(&pdev->dev, "Unable to request irq %d\n", irq);
goto err;
}
mci->mtype_cap = MEM_FLAG_DDR3;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
mci->edac_cap = EDAC_FLAG_SECDED;
mci->mod_name = dev_name(&pdev->dev);
mci->mod_ver = "1";
mci->ctl_name = dev_name(&pdev->dev);
mci->scrub_mode = SCRUB_SW_SRC;
/* Only a single 4GB DIMM is supported */
dimm = *mci->dimms;
dimm->nr_pages = (~0UL >> PAGE_SHIFT) + 1;
dimm->grain = 8;
dimm->dtype = DEV_X8;
dimm->mtype = MEM_DDR3;
dimm->edac_mode = EDAC_SECDED;
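	/*
	 * (~0UL >> PAGE_SHIFT) + 1 above is the page count of a full
	 * 32-bit address space: with 4 KiB pages on this 32-bit platform
	 * that is 0x100000 pages, i.e. the single 4 GB DIMM mentioned in
	 * the comment.
	 */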
res = edac_mc_add_mc(mci);
if (res < 0)
goto err;
highbank_mc_create_debugfs_nodes(mci);
devres_close_group(&pdev->dev, NULL);
return 0;
err:
devres_release_group(&pdev->dev, NULL);
edac_mc_free(mci);
return res;
}
static int highbank_mc_remove(struct platform_device *pdev)
{
struct mem_ctl_info *mci = platform_get_drvdata(pdev);
edac_mc_del_mc(&pdev->dev);
edac_mc_free(mci);
return 0;
}
static const struct of_device_id hb_ddr_ctrl_of_match[] = {
{ .compatible = "calxeda,hb-ddr-ctrl", },
{},
};
MODULE_DEVICE_TABLE(of, hb_ddr_ctrl_of_match);
static struct platform_driver highbank_mc_edac_driver = {
.probe = highbank_mc_probe,
.remove = highbank_mc_remove,
.driver = {
.name = "hb_mc_edac",
.of_match_table = hb_ddr_ctrl_of_match,
},
};
module_platform_driver(highbank_mc_edac_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Calxeda, Inc.");
MODULE_DESCRIPTION("EDAC Driver for Calxeda Highbank");


@ -194,7 +194,7 @@ static void i3000_get_error_info(struct mem_ctl_info *mci,
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
/*
* This is a mess because there is no atomic way to read all the
@ -236,7 +236,7 @@ static int i3000_process_error_info(struct mem_ctl_info *mci,
int row, multi_chan, channel;
unsigned long pfn, offset;
multi_chan = mci->csrows[0].nr_channels - 1;
multi_chan = mci->csrows[0]->nr_channels - 1;
if (!(info->errsts & I3000_ERRSTS_BITS))
return 0;
@ -245,9 +245,9 @@ static int i3000_process_error_info(struct mem_ctl_info *mci,
return 1;
if ((info->errsts ^ info->errsts2) & I3000_ERRSTS_BITS) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0,
-1, -1, -1,
"UE overwrote CE", "", NULL);
"UE overwrote CE", "");
info->errsts = info->errsts2;
}
@ -258,15 +258,15 @@ static int i3000_process_error_info(struct mem_ctl_info *mci,
row = edac_mc_find_csrow_by_page(mci, pfn);
if (info->errsts & I3000_ERRSTS_UE)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
pfn, offset, 0,
row, -1, -1,
"i3000 UE", "", NULL);
"i3000 UE", "");
else
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
pfn, offset, info->derrsyn,
row, multi_chan ? channel : 0, -1,
"i3000 CE", "", NULL);
"i3000 CE", "");
return 1;
}
@ -275,7 +275,7 @@ static void i3000_check(struct mem_ctl_info *mci)
{
struct i3000_error_info info;
debugf1("MC%d: %s()\n", mci->mc_idx, __func__);
edac_dbg(1, "MC%d\n", mci->mc_idx);
i3000_get_error_info(mci, &info);
i3000_process_error_info(mci, &info, 1);
}
@ -322,7 +322,7 @@ static int i3000_probe1(struct pci_dev *pdev, int dev_idx)
unsigned long mchbar;
void __iomem *window;
debugf0("MC: %s()\n", __func__);
edac_dbg(0, "MC:\n");
pci_read_config_dword(pdev, I3000_MCHBAR, (u32 *) & mchbar);
mchbar &= I3000_MCHBAR_MASK;
@ -366,9 +366,9 @@ static int i3000_probe1(struct pci_dev *pdev, int dev_idx)
if (!mci)
return -ENOMEM;
debugf3("MC: %s(): init mci\n", __func__);
edac_dbg(3, "MC: init mci\n");
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_DDR2;
mci->edac_ctl_cap = EDAC_FLAG_SECDED;
@ -393,14 +393,13 @@ static int i3000_probe1(struct pci_dev *pdev, int dev_idx)
for (last_cumul_size = i = 0; i < mci->nr_csrows; i++) {
u8 value;
u32 cumul_size;
struct csrow_info *csrow = &mci->csrows[i];
struct csrow_info *csrow = mci->csrows[i];
value = drb[i];
cumul_size = value << (I3000_DRB_SHIFT - PAGE_SHIFT);
if (interleaved)
cumul_size <<= 1;
debugf3("MC: %s(): (%d) cumul_size 0x%x\n",
__func__, i, cumul_size);
edac_dbg(3, "MC: (%d) cumul_size 0x%x\n", i, cumul_size);
if (cumul_size == last_cumul_size)
continue;
@ -410,7 +409,7 @@ static int i3000_probe1(struct pci_dev *pdev, int dev_idx)
last_cumul_size = cumul_size;
for (j = 0; j < nr_channels; j++) {
struct dimm_info *dimm = csrow->channels[j].dimm;
struct dimm_info *dimm = csrow->channels[j]->dimm;
dimm->nr_pages = nr_pages / nr_channels;
dimm->grain = I3000_DEAP_GRAIN;
@ -429,7 +428,7 @@ static int i3000_probe1(struct pci_dev *pdev, int dev_idx)
rc = -ENODEV;
if (edac_mc_add_mc(mci)) {
debugf3("MC: %s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "MC: failed edac_mc_add_mc()\n");
goto fail;
}
@ -445,7 +444,7 @@ static int i3000_probe1(struct pci_dev *pdev, int dev_idx)
}
/* get this far and it's successful */
debugf3("MC: %s(): success\n", __func__);
edac_dbg(3, "MC: success\n");
return 0;
fail:
@ -461,7 +460,7 @@ static int __devinit i3000_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("MC: %s()\n", __func__);
edac_dbg(0, "MC:\n");
if (pci_enable_device(pdev) < 0)
return -EIO;
@ -477,7 +476,7 @@ static void __devexit i3000_remove_one(struct pci_dev *pdev)
{
struct mem_ctl_info *mci;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (i3000_pci)
edac_pci_release_generic_ctl(i3000_pci);
@ -511,7 +510,7 @@ static int __init i3000_init(void)
{
int pci_rc;
debugf3("MC: %s()\n", __func__);
edac_dbg(3, "MC:\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -525,14 +524,14 @@ static int __init i3000_init(void)
mci_pdev = pci_get_device(PCI_VENDOR_ID_INTEL,
PCI_DEVICE_ID_INTEL_3000_HB, NULL);
if (!mci_pdev) {
debugf0("i3000 pci_get_device fail\n");
edac_dbg(0, "i3000 pci_get_device fail\n");
pci_rc = -ENODEV;
goto fail1;
}
pci_rc = i3000_init_one(mci_pdev, i3000_pci_tbl);
if (pci_rc < 0) {
debugf0("i3000 init fail\n");
edac_dbg(0, "i3000 init fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -552,7 +551,7 @@ fail0:
static void __exit i3000_exit(void)
{
debugf3("MC: %s()\n", __func__);
edac_dbg(3, "MC:\n");
pci_unregister_driver(&i3000_driver);
if (!i3000_registered) {


@ -110,10 +110,10 @@ static int how_many_channels(struct pci_dev *pdev)
pci_read_config_byte(pdev, I3200_CAPID0 + 8, &capid0_8b);
if (capid0_8b & 0x20) { /* check DCD: Dual Channel Disable */
debugf0("In single channel mode.\n");
edac_dbg(0, "In single channel mode\n");
return 1;
} else {
debugf0("In dual channel mode.\n");
edac_dbg(0, "In dual channel mode\n");
return 2;
}
}
@ -159,7 +159,7 @@ static void i3200_clear_error_info(struct mem_ctl_info *mci)
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
/*
* Clear any error bits.
@ -176,7 +176,7 @@ static void i3200_get_and_clear_error_info(struct mem_ctl_info *mci,
struct i3200_priv *priv = mci->pvt_info;
void __iomem *window = priv->window;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
/*
* This is a mess because there is no atomic way to read all the
@ -218,25 +218,25 @@ static void i3200_process_error_info(struct mem_ctl_info *mci,
return;
if ((info->errsts ^ info->errsts2) & I3200_ERRSTS_BITS) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0,
-1, -1, -1, "UE overwrote CE", "", NULL);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0,
-1, -1, -1, "UE overwrote CE", "");
info->errsts = info->errsts2;
}
for (channel = 0; channel < nr_channels; channel++) {
log = info->eccerrlog[channel];
if (log & I3200_ECCERRLOG_UE) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
0, 0, 0,
eccerrlog_row(channel, log),
-1, -1,
"i3000 UE", "", NULL);
"i3000 UE", "");
} else if (log & I3200_ECCERRLOG_CE) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
0, 0, eccerrlog_syndrome(log),
eccerrlog_row(channel, log),
-1, -1,
"i3000 UE", "", NULL);
"i3000 UE", "");
}
}
}
@ -245,7 +245,7 @@ static void i3200_check(struct mem_ctl_info *mci)
{
struct i3200_error_info info;
debugf1("MC%d: %s()\n", mci->mc_idx, __func__);
edac_dbg(1, "MC%d\n", mci->mc_idx);
i3200_get_and_clear_error_info(mci, &info);
i3200_process_error_info(mci, &info);
}
@ -332,7 +332,7 @@ static int i3200_probe1(struct pci_dev *pdev, int dev_idx)
void __iomem *window;
struct i3200_priv *priv;
debugf0("MC: %s()\n", __func__);
edac_dbg(0, "MC:\n");
window = i3200_map_mchbar(pdev);
if (!window)
@ -352,9 +352,9 @@ static int i3200_probe1(struct pci_dev *pdev, int dev_idx)
if (!mci)
return -ENOMEM;
debugf3("MC: %s(): init mci\n", __func__);
edac_dbg(3, "MC: init mci\n");
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_DDR2;
mci->edac_ctl_cap = EDAC_FLAG_SECDED;
@ -379,7 +379,7 @@ static int i3200_probe1(struct pci_dev *pdev, int dev_idx)
*/
for (i = 0; i < mci->nr_csrows; i++) {
unsigned long nr_pages;
struct csrow_info *csrow = &mci->csrows[i];
struct csrow_info *csrow = mci->csrows[i];
nr_pages = drb_to_nr_pages(drbs, stacked,
i / I3200_RANKS_PER_CHANNEL,
@ -389,7 +389,7 @@ static int i3200_probe1(struct pci_dev *pdev, int dev_idx)
continue;
for (j = 0; j < nr_channels; j++) {
struct dimm_info *dimm = csrow->channels[j].dimm;
struct dimm_info *dimm = csrow->channels[j]->dimm;
dimm->nr_pages = nr_pages / nr_channels;
dimm->grain = nr_pages << PAGE_SHIFT;
@ -403,12 +403,12 @@ static int i3200_probe1(struct pci_dev *pdev, int dev_idx)
rc = -ENODEV;
if (edac_mc_add_mc(mci)) {
debugf3("MC: %s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "MC: failed edac_mc_add_mc()\n");
goto fail;
}
/* get this far and it's successful */
debugf3("MC: %s(): success\n", __func__);
edac_dbg(3, "MC: success\n");
return 0;
fail:
@ -424,7 +424,7 @@ static int __devinit i3200_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("MC: %s()\n", __func__);
edac_dbg(0, "MC:\n");
if (pci_enable_device(pdev) < 0)
return -EIO;
@ -441,7 +441,7 @@ static void __devexit i3200_remove_one(struct pci_dev *pdev)
struct mem_ctl_info *mci;
struct i3200_priv *priv;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
mci = edac_mc_del_mc(&pdev->dev);
if (!mci)
@ -475,7 +475,7 @@ static int __init i3200_init(void)
{
int pci_rc;
debugf3("MC: %s()\n", __func__);
edac_dbg(3, "MC:\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -489,14 +489,14 @@ static int __init i3200_init(void)
mci_pdev = pci_get_device(PCI_VENDOR_ID_INTEL,
PCI_DEVICE_ID_INTEL_3200_HB, NULL);
if (!mci_pdev) {
debugf0("i3200 pci_get_device fail\n");
edac_dbg(0, "i3200 pci_get_device fail\n");
pci_rc = -ENODEV;
goto fail1;
}
pci_rc = i3200_init_one(mci_pdev, i3200_pci_tbl);
if (pci_rc < 0) {
debugf0("i3200 init fail\n");
edac_dbg(0, "i3200 init fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -516,7 +516,7 @@ fail0:
static void __exit i3200_exit(void)
{
debugf3("MC: %s()\n", __func__);
edac_dbg(3, "MC:\n");
pci_unregister_driver(&i3200_driver);
if (!i3200_registered) {


@ -273,7 +273,7 @@
#define CHANNELS_PER_BRANCH 2
#define MAX_BRANCHES 2
/* Defines to extract the vaious fields from the
/* Defines to extract the various fields from the
* MTRx - Memory Technology Registers
*/
#define MTR_DIMMS_PRESENT(mtr) ((mtr) & (0x1 << 8))
@ -287,22 +287,6 @@
#define MTR_DIMM_COLS(mtr) ((mtr) & 0x3)
#define MTR_DIMM_COLS_ADDR_BITS(mtr) (MTR_DIMM_COLS(mtr) + 10)
#ifdef CONFIG_EDAC_DEBUG
static char *numrow_toString[] = {
"8,192 - 13 rows",
"16,384 - 14 rows",
"32,768 - 15 rows",
"reserved"
};
static char *numcol_toString[] = {
"1,024 - 10 columns",
"2,048 - 11 columns",
"4,096 - 12 columns",
"reserved"
};
#endif
/* enables the report of miscellaneous messages as CE errors - default off */
static int misc_messages;
@ -344,7 +328,13 @@ struct i5000_pvt {
struct pci_dev *branch_1; /* 22.0 */
u16 tolm; /* top of low memory */
u64 ambase; /* AMB BAR */
union {
u64 ambase; /* AMB BAR */
struct {
u32 ambase_bottom;
u32 ambase_top;
} u __packed;
};
u16 mir0, mir1, mir2;
@ -494,10 +484,9 @@ static void i5000_process_fatal_error_info(struct mem_ctl_info *mci,
ras = NREC_RAS(info->nrecmemb);
cas = NREC_CAS(info->nrecmemb);
debugf0("\t\tCSROW= %d Channel= %d "
"(DRAM Bank= %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, bank,
rdwr ? "Write" : "Read", ras, cas);
edac_dbg(0, "\t\tCSROW= %d Channel= %d (DRAM Bank= %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, bank,
rdwr ? "Write" : "Read", ras, cas);
/* Only 1 bit will be on */
switch (allErrors) {
@ -536,10 +525,10 @@ static void i5000_process_fatal_error_info(struct mem_ctl_info *mci,
bank, ras, cas, allErrors, specific);
/* Call the helper to output message */
edac_mc_handle_error(HW_EVENT_ERR_FATAL, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_FATAL, mci, 1, 0, 0, 0,
channel >> 1, channel & 1, rank,
rdwr ? "Write error" : "Read error",
msg, NULL);
msg);
}
/*
@ -574,7 +563,7 @@ static void i5000_process_nonfatal_error_info(struct mem_ctl_info *mci,
/* ONLY ONE of the possible error bits will be set, as per the docs */
ue_errors = allErrors & FERR_NF_UNCORRECTABLE;
if (ue_errors) {
debugf0("\tUncorrected bits= 0x%x\n", ue_errors);
edac_dbg(0, "\tUncorrected bits= 0x%x\n", ue_errors);
branch = EXTRACT_FBDCHAN_INDX(info->ferr_nf_fbd);
@ -590,11 +579,9 @@ static void i5000_process_nonfatal_error_info(struct mem_ctl_info *mci,
ras = NREC_RAS(info->nrecmemb);
cas = NREC_CAS(info->nrecmemb);
debugf0
("\t\tCSROW= %d Channels= %d,%d (Branch= %d "
"DRAM Bank= %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, channel + 1, branch >> 1, bank,
rdwr ? "Write" : "Read", ras, cas);
edac_dbg(0, "\t\tCSROW= %d Channels= %d,%d (Branch= %d DRAM Bank= %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, channel + 1, branch >> 1, bank,
rdwr ? "Write" : "Read", ras, cas);
switch (ue_errors) {
case FERR_NF_M12ERR:
@ -637,16 +624,16 @@ static void i5000_process_nonfatal_error_info(struct mem_ctl_info *mci,
rank, bank, ras, cas, ue_errors, specific);
/* Call the helper to output message */
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0,
channel >> 1, -1, rank,
rdwr ? "Write error" : "Read error",
msg, NULL);
msg);
}
/* Check correctable errors */
ce_errors = allErrors & FERR_NF_CORRECTABLE;
if (ce_errors) {
debugf0("\tCorrected bits= 0x%x\n", ce_errors);
edac_dbg(0, "\tCorrected bits= 0x%x\n", ce_errors);
branch = EXTRACT_FBDCHAN_INDX(info->ferr_nf_fbd);
@ -664,10 +651,9 @@ static void i5000_process_nonfatal_error_info(struct mem_ctl_info *mci,
ras = REC_RAS(info->recmemb);
cas = REC_CAS(info->recmemb);
debugf0("\t\tCSROW= %d Channel= %d (Branch %d "
"DRAM Bank= %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, branch >> 1, bank,
rdwr ? "Write" : "Read", ras, cas);
edac_dbg(0, "\t\tCSROW= %d Channel= %d (Branch %d DRAM Bank= %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, branch >> 1, bank,
rdwr ? "Write" : "Read", ras, cas);
switch (ce_errors) {
case FERR_NF_M17ERR:
@ -692,10 +678,10 @@ static void i5000_process_nonfatal_error_info(struct mem_ctl_info *mci,
specific);
/* Call the helper to output message */
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, 0, 0, 0,
channel >> 1, channel % 2, rank,
rdwr ? "Write error" : "Read error",
msg, NULL);
msg);
}
if (!misc_messages)
@ -738,9 +724,9 @@ static void i5000_process_nonfatal_error_info(struct mem_ctl_info *mci,
"Err=%#x (%s)", misc_errors, specific);
/* Call the helper to output message */
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, 0, 0, 0,
branch >> 1, -1, -1,
"Misc error", msg, NULL);
"Misc error", msg);
}
}
@ -779,7 +765,7 @@ static void i5000_clear_error(struct mem_ctl_info *mci)
static void i5000_check_error(struct mem_ctl_info *mci)
{
struct i5000_error_info info;
debugf4("MC%d: %s: %s()\n", mci->mc_idx, __FILE__, __func__);
edac_dbg(4, "MC%d\n", mci->mc_idx);
i5000_get_error_info(mci, &info);
i5000_process_error_info(mci, &info, 1);
}
@ -850,15 +836,16 @@ static int i5000_get_devices(struct mem_ctl_info *mci, int dev_idx)
pvt->fsb_error_regs = pdev;
debugf1("System Address, processor bus- PCI Bus ID: %s %x:%x\n",
pci_name(pvt->system_address),
pvt->system_address->vendor, pvt->system_address->device);
debugf1("Branchmap, control and errors - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->branchmap_werrors),
pvt->branchmap_werrors->vendor, pvt->branchmap_werrors->device);
debugf1("FSB Error Regs - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->fsb_error_regs),
pvt->fsb_error_regs->vendor, pvt->fsb_error_regs->device);
edac_dbg(1, "System Address, processor bus- PCI Bus ID: %s %x:%x\n",
pci_name(pvt->system_address),
pvt->system_address->vendor, pvt->system_address->device);
edac_dbg(1, "Branchmap, control and errors - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->branchmap_werrors),
pvt->branchmap_werrors->vendor,
pvt->branchmap_werrors->device);
edac_dbg(1, "FSB Error Regs - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->fsb_error_regs),
pvt->fsb_error_regs->vendor, pvt->fsb_error_regs->device);
pdev = NULL;
pdev = pci_get_device(PCI_VENDOR_ID_INTEL,
@ -981,16 +968,25 @@ static void decode_mtr(int slot_row, u16 mtr)
ans = MTR_DIMMS_PRESENT(mtr);
debugf2("\tMTR%d=0x%x: DIMMs are %s\n", slot_row, mtr,
ans ? "Present" : "NOT Present");
edac_dbg(2, "\tMTR%d=0x%x: DIMMs are %sPresent\n",
slot_row, mtr, ans ? "" : "NOT ");
if (!ans)
return;
debugf2("\t\tWIDTH: x%d\n", MTR_DRAM_WIDTH(mtr));
debugf2("\t\tNUMBANK: %d bank(s)\n", MTR_DRAM_BANKS(mtr));
debugf2("\t\tNUMRANK: %s\n", MTR_DIMM_RANK(mtr) ? "double" : "single");
debugf2("\t\tNUMROW: %s\n", numrow_toString[MTR_DIMM_ROWS(mtr)]);
debugf2("\t\tNUMCOL: %s\n", numcol_toString[MTR_DIMM_COLS(mtr)]);
edac_dbg(2, "\t\tWIDTH: x%d\n", MTR_DRAM_WIDTH(mtr));
edac_dbg(2, "\t\tNUMBANK: %d bank(s)\n", MTR_DRAM_BANKS(mtr));
edac_dbg(2, "\t\tNUMRANK: %s\n",
MTR_DIMM_RANK(mtr) ? "double" : "single");
edac_dbg(2, "\t\tNUMROW: %s\n",
MTR_DIMM_ROWS(mtr) == 0 ? "8,192 - 13 rows" :
MTR_DIMM_ROWS(mtr) == 1 ? "16,384 - 14 rows" :
MTR_DIMM_ROWS(mtr) == 2 ? "32,768 - 15 rows" :
"reserved");
edac_dbg(2, "\t\tNUMCOL: %s\n",
MTR_DIMM_COLS(mtr) == 0 ? "1,024 - 10 columns" :
MTR_DIMM_COLS(mtr) == 1 ? "2,048 - 11 columns" :
MTR_DIMM_COLS(mtr) == 2 ? "4,096 - 12 columns" :
"reserved");
}
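The row/column strings above are simply 1 << (13 + rows_field) and 1 << (10 + cols_field) spelled out, in line with the MTR_DIMM_*_ADDR_BITS macros earlier in this file. Equivalent helpers, purely illustrative (the driver keeps the inline conditionals):

static unsigned int mtr_dimm_rows(u16 mtr)
{
	/* field values 0..2 encode 13..15 row address bits; 3 is reserved */
	return 1u << (MTR_DIMM_ROWS(mtr) + 13);
}

static unsigned int mtr_dimm_cols(u16 mtr)
{
	/* field values 0..2 encode 10..12 column address bits; 3 is reserved */
	return 1u << (MTR_DIMM_COLS(mtr) + 10);
}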
static void handle_channel(struct i5000_pvt *pvt, int slot, int channel,
@ -1061,7 +1057,7 @@ static void calculate_dimm_size(struct i5000_pvt *pvt)
"--------------------------------");
p += n;
space -= n;
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;
}
@ -1082,7 +1078,7 @@ static void calculate_dimm_size(struct i5000_pvt *pvt)
}
p += n;
space -= n;
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;
}
@ -1092,7 +1088,7 @@ static void calculate_dimm_size(struct i5000_pvt *pvt)
"--------------------------------");
p += n;
space -= n;
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;
@ -1105,7 +1101,7 @@ static void calculate_dimm_size(struct i5000_pvt *pvt)
p += n;
space -= n;
}
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;
@ -1118,7 +1114,7 @@ static void calculate_dimm_size(struct i5000_pvt *pvt)
}
/* output the last message and free buffer */
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
kfree(mem_buffer);
}
@ -1141,24 +1137,25 @@ static void i5000_get_mc_regs(struct mem_ctl_info *mci)
pvt = mci->pvt_info;
pci_read_config_dword(pvt->system_address, AMBASE,
(u32 *) & pvt->ambase);
&pvt->u.ambase_bottom);
pci_read_config_dword(pvt->system_address, AMBASE + sizeof(u32),
((u32 *) & pvt->ambase) + sizeof(u32));
&pvt->u.ambase_top);
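	/*
	 * Why these reads changed: the old destination for the upper half,
	 * ((u32 *)&pvt->ambase) + sizeof(u32), advances a u32 pointer by
	 * four elements, i.e. 16 bytes past ambase instead of 4 (the
	 * correct pointer form would have been + 1), clobbering the
	 * neighbouring fields.  Reading into the explicit ambase_bottom and
	 * ambase_top union halves avoids the cast arithmetic entirely.
	 */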
maxdimmperch = pvt->maxdimmperch;
maxch = pvt->maxch;
debugf2("AMBASE= 0x%lx MAXCH= %d MAX-DIMM-Per-CH= %d\n",
(long unsigned int)pvt->ambase, pvt->maxch, pvt->maxdimmperch);
edac_dbg(2, "AMBASE= 0x%lx MAXCH= %d MAX-DIMM-Per-CH= %d\n",
(long unsigned int)pvt->ambase, pvt->maxch, pvt->maxdimmperch);
/* Get the Branch Map regs */
pci_read_config_word(pvt->branchmap_werrors, TOLM, &pvt->tolm);
pvt->tolm >>= 12;
debugf2("\nTOLM (number of 256M regions) =%u (0x%x)\n", pvt->tolm,
pvt->tolm);
edac_dbg(2, "TOLM (number of 256M regions) =%u (0x%x)\n",
pvt->tolm, pvt->tolm);
actual_tolm = pvt->tolm << 28;
debugf2("Actual TOLM byte addr=%u (0x%x)\n", actual_tolm, actual_tolm);
edac_dbg(2, "Actual TOLM byte addr=%u (0x%x)\n",
actual_tolm, actual_tolm);
pci_read_config_word(pvt->branchmap_werrors, MIR0, &pvt->mir0);
pci_read_config_word(pvt->branchmap_werrors, MIR1, &pvt->mir1);
@ -1168,15 +1165,18 @@ static void i5000_get_mc_regs(struct mem_ctl_info *mci)
limit = (pvt->mir0 >> 4) & 0x0FFF;
way0 = pvt->mir0 & 0x1;
way1 = pvt->mir0 & 0x2;
debugf2("MIR0: limit= 0x%x WAY1= %u WAY0= %x\n", limit, way1, way0);
edac_dbg(2, "MIR0: limit= 0x%x WAY1= %u WAY0= %x\n",
limit, way1, way0);
limit = (pvt->mir1 >> 4) & 0x0FFF;
way0 = pvt->mir1 & 0x1;
way1 = pvt->mir1 & 0x2;
debugf2("MIR1: limit= 0x%x WAY1= %u WAY0= %x\n", limit, way1, way0);
edac_dbg(2, "MIR1: limit= 0x%x WAY1= %u WAY0= %x\n",
limit, way1, way0);
limit = (pvt->mir2 >> 4) & 0x0FFF;
way0 = pvt->mir2 & 0x1;
way1 = pvt->mir2 & 0x2;
debugf2("MIR2: limit= 0x%x WAY1= %u WAY0= %x\n", limit, way1, way0);
edac_dbg(2, "MIR2: limit= 0x%x WAY1= %u WAY0= %x\n",
limit, way1, way0);
/* Get the MTR[0-3] regs */
for (slot_row = 0; slot_row < NUM_MTRS; slot_row++) {
@ -1185,31 +1185,31 @@ static void i5000_get_mc_regs(struct mem_ctl_info *mci)
pci_read_config_word(pvt->branch_0, where,
&pvt->b0_mtr[slot_row]);
debugf2("MTR%d where=0x%x B0 value=0x%x\n", slot_row, where,
pvt->b0_mtr[slot_row]);
edac_dbg(2, "MTR%d where=0x%x B0 value=0x%x\n",
slot_row, where, pvt->b0_mtr[slot_row]);
if (pvt->maxch >= CHANNELS_PER_BRANCH) {
pci_read_config_word(pvt->branch_1, where,
&pvt->b1_mtr[slot_row]);
debugf2("MTR%d where=0x%x B1 value=0x%x\n", slot_row,
where, pvt->b1_mtr[slot_row]);
edac_dbg(2, "MTR%d where=0x%x B1 value=0x%x\n",
slot_row, where, pvt->b1_mtr[slot_row]);
} else {
pvt->b1_mtr[slot_row] = 0;
}
}
/* Read and dump branch 0's MTRs */
debugf2("\nMemory Technology Registers:\n");
debugf2(" Branch 0:\n");
edac_dbg(2, "Memory Technology Registers:\n");
edac_dbg(2, " Branch 0:\n");
for (slot_row = 0; slot_row < NUM_MTRS; slot_row++) {
decode_mtr(slot_row, pvt->b0_mtr[slot_row]);
}
pci_read_config_word(pvt->branch_0, AMB_PRESENT_0,
&pvt->b0_ambpresent0);
debugf2("\t\tAMB-Branch 0-present0 0x%x:\n", pvt->b0_ambpresent0);
edac_dbg(2, "\t\tAMB-Branch 0-present0 0x%x:\n", pvt->b0_ambpresent0);
pci_read_config_word(pvt->branch_0, AMB_PRESENT_1,
&pvt->b0_ambpresent1);
debugf2("\t\tAMB-Branch 0-present1 0x%x:\n", pvt->b0_ambpresent1);
edac_dbg(2, "\t\tAMB-Branch 0-present1 0x%x:\n", pvt->b0_ambpresent1);
	/* Only if we have 2 branches (4 channels) */
if (pvt->maxch < CHANNELS_PER_BRANCH) {
@ -1217,18 +1217,18 @@ static void i5000_get_mc_regs(struct mem_ctl_info *mci)
pvt->b1_ambpresent1 = 0;
} else {
/* Read and dump branch 1's MTRs */
debugf2(" Branch 1:\n");
edac_dbg(2, " Branch 1:\n");
for (slot_row = 0; slot_row < NUM_MTRS; slot_row++) {
decode_mtr(slot_row, pvt->b1_mtr[slot_row]);
}
pci_read_config_word(pvt->branch_1, AMB_PRESENT_0,
&pvt->b1_ambpresent0);
debugf2("\t\tAMB-Branch 1-present0 0x%x:\n",
pvt->b1_ambpresent0);
edac_dbg(2, "\t\tAMB-Branch 1-present0 0x%x:\n",
pvt->b1_ambpresent0);
pci_read_config_word(pvt->branch_1, AMB_PRESENT_1,
&pvt->b1_ambpresent1);
debugf2("\t\tAMB-Branch 1-present1 0x%x:\n",
pvt->b1_ambpresent1);
edac_dbg(2, "\t\tAMB-Branch 1-present1 0x%x:\n",
pvt->b1_ambpresent1);
}
/* Go and determine the size of each DIMM and place in an
@ -1363,10 +1363,9 @@ static int i5000_probe1(struct pci_dev *pdev, int dev_idx)
int num_channels;
int num_dimms_per_channel;
debugf0("MC: %s: %s(), pdev bus %u dev=0x%x fn=0x%x\n",
__FILE__, __func__,
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
edac_dbg(0, "MC: pdev bus %u dev=0x%x fn=0x%x\n",
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
/* We only are looking for func 0 of the set */
if (PCI_FUNC(pdev->devfn) != 0)
@ -1388,8 +1387,8 @@ static int i5000_probe1(struct pci_dev *pdev, int dev_idx)
i5000_get_dimm_and_channel_counts(pdev, &num_dimms_per_channel,
&num_channels);
debugf0("MC: %s(): Number of Branches=2 Channels= %d DIMMS= %d\n",
__func__, num_channels, num_dimms_per_channel);
edac_dbg(0, "MC: Number of Branches=2 Channels= %d DIMMS= %d\n",
num_channels, num_dimms_per_channel);
/* allocate a new MC control structure */
@ -1406,10 +1405,9 @@ static int i5000_probe1(struct pci_dev *pdev, int dev_idx)
if (mci == NULL)
return -ENOMEM;
kobject_get(&mci->edac_mci_kobj);
debugf0("MC: %s: %s(): mci = %p\n", __FILE__, __func__, mci);
edac_dbg(0, "MC: mci = %p\n", mci);
mci->dev = &pdev->dev; /* record ptr to the generic device */
mci->pdev = &pdev->dev; /* record ptr to the generic device */
pvt = mci->pvt_info;
pvt->system_address = pdev; /* Record this device in our private */
@ -1439,19 +1437,16 @@ static int i5000_probe1(struct pci_dev *pdev, int dev_idx)
/* initialize the MC control structure 'csrows' table
* with the mapping and control information */
if (i5000_init_csrows(mci)) {
debugf0("MC: Setting mci->edac_cap to EDAC_FLAG_NONE\n"
" because i5000_init_csrows() returned nonzero "
"value\n");
edac_dbg(0, "MC: Setting mci->edac_cap to EDAC_FLAG_NONE because i5000_init_csrows() returned nonzero value\n");
mci->edac_cap = EDAC_FLAG_NONE; /* no csrows found */
} else {
debugf1("MC: Enable error reporting now\n");
edac_dbg(1, "MC: Enable error reporting now\n");
i5000_enable_error_reporting(mci);
}
/* add this new MC control structure to EDAC's list of MCs */
if (edac_mc_add_mc(mci)) {
debugf0("MC: %s: %s(): failed edac_mc_add_mc()\n",
__FILE__, __func__);
edac_dbg(0, "MC: failed edac_mc_add_mc()\n");
/* FIXME: perhaps some code should go here that disables error
* reporting if we just enabled it
*/
@ -1479,7 +1474,6 @@ fail1:
i5000_put_devices(mci);
fail0:
kobject_put(&mci->edac_mci_kobj);
edac_mc_free(mci);
return -ENODEV;
}
@ -1496,7 +1490,7 @@ static int __devinit i5000_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("MC: %s: %s()\n", __FILE__, __func__);
edac_dbg(0, "MC:\n");
/* wake up device */
rc = pci_enable_device(pdev);
@ -1515,7 +1509,7 @@ static void __devexit i5000_remove_one(struct pci_dev *pdev)
{
struct mem_ctl_info *mci;
debugf0("%s: %s()\n", __FILE__, __func__);
edac_dbg(0, "\n");
if (i5000_pci)
edac_pci_release_generic_ctl(i5000_pci);
@ -1525,7 +1519,6 @@ static void __devexit i5000_remove_one(struct pci_dev *pdev)
/* retrieve references to resources, and free those resources */
i5000_put_devices(mci);
kobject_put(&mci->edac_mci_kobj);
edac_mc_free(mci);
}
@ -1562,7 +1555,7 @@ static int __init i5000_init(void)
{
int pci_rc;
debugf2("MC: %s: %s()\n", __FILE__, __func__);
edac_dbg(2, "MC:\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -1578,7 +1571,7 @@ static int __init i5000_init(void)
*/
static void __exit i5000_exit(void)
{
debugf2("MC: %s: %s()\n", __FILE__, __func__);
edac_dbg(2, "MC:\n");
pci_unregister_driver(&i5000_driver);
}


@ -431,10 +431,10 @@ static void i5100_handle_ce(struct mem_ctl_info *mci,
"bank %u, cas %u, ras %u\n",
bank, cas, ras);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
0, 0, syndrome,
chan, rank, -1,
msg, detail, NULL);
msg, detail);
}
static void i5100_handle_ue(struct mem_ctl_info *mci,
@ -453,10 +453,10 @@ static void i5100_handle_ue(struct mem_ctl_info *mci,
"bank %u, cas %u, ras %u\n",
bank, cas, ras);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
0, 0, syndrome,
chan, rank, -1,
msg, detail, NULL);
msg, detail);
}
static void i5100_read_log(struct mem_ctl_info *mci, int chan,
@ -859,8 +859,8 @@ static void __devinit i5100_init_csrows(struct mem_ctl_info *mci)
i5100_rank_to_slot(mci, chan, rank));
}
debugf2("dimm channel %d, rank %d, size %ld\n",
chan, rank, (long)PAGES_TO_MiB(npages));
edac_dbg(2, "dimm channel %d, rank %d, size %ld\n",
chan, rank, (long)PAGES_TO_MiB(npages));
}
}
@ -943,7 +943,7 @@ static int __devinit i5100_init_one(struct pci_dev *pdev,
goto bail_disable_ch1;
}
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
priv = mci->pvt_info;
priv->ranksperchan = ranksperch;


@ -300,24 +300,6 @@ static inline int extract_fbdchan_indx(u32 x)
return (x>>28) & 0x3;
}
#ifdef CONFIG_EDAC_DEBUG
/* MTR NUMROW */
static const char *numrow_toString[] = {
"8,192 - 13 rows",
"16,384 - 14 rows",
"32,768 - 15 rows",
"65,536 - 16 rows"
};
/* MTR NUMCOL */
static const char *numcol_toString[] = {
"1,024 - 10 columns",
"2,048 - 11 columns",
"4,096 - 12 columns",
"reserved"
};
#endif
/* Device name and register DID (Device ID) */
struct i5400_dev_info {
const char *ctl_name; /* name for this device */
@ -345,7 +327,13 @@ struct i5400_pvt {
struct pci_dev *branch_1; /* 22.0 */
u16 tolm; /* top of low memory */
u64 ambase; /* AMB BAR */
union {
u64 ambase; /* AMB BAR */
struct {
u32 ambase_bottom;
u32 ambase_top;
} u __packed;
};
u16 mir0, mir1;
@ -560,10 +548,9 @@ static void i5400_proccess_non_recoverable_info(struct mem_ctl_info *mci,
ras = nrec_ras(info);
cas = nrec_cas(info);
debugf0("\t\tDIMM= %d Channels= %d,%d (Branch= %d "
"DRAM Bank= %d Buffer ID = %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, channel + 1, branch >> 1, bank,
buf_id, rdwr_str(rdwr), ras, cas);
edac_dbg(0, "\t\tDIMM= %d Channels= %d,%d (Branch= %d DRAM Bank= %d Buffer ID = %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, channel + 1, branch >> 1, bank,
buf_id, rdwr_str(rdwr), ras, cas);
/* Only 1 bit will be on */
errnum = find_first_bit(&allErrors, ARRAY_SIZE(error_name));
@ -573,10 +560,10 @@ static void i5400_proccess_non_recoverable_info(struct mem_ctl_info *mci,
"Bank=%d Buffer ID = %d RAS=%d CAS=%d Err=0x%lx (%s)",
bank, buf_id, ras, cas, allErrors, error_name[errnum]);
edac_mc_handle_error(tp_event, mci, 0, 0, 0,
edac_mc_handle_error(tp_event, mci, 1, 0, 0, 0,
branch >> 1, -1, rank,
rdwr ? "Write error" : "Read error",
msg, NULL);
msg);
}
/*
@ -613,7 +600,7 @@ static void i5400_process_nonfatal_error_info(struct mem_ctl_info *mci,
/* Correctable errors */
if (allErrors & ERROR_NF_CORRECTABLE) {
debugf0("\tCorrected bits= 0x%lx\n", allErrors);
edac_dbg(0, "\tCorrected bits= 0x%lx\n", allErrors);
branch = extract_fbdchan_indx(info->ferr_nf_fbd);
@ -634,10 +621,9 @@ static void i5400_process_nonfatal_error_info(struct mem_ctl_info *mci,
/* Only 1 bit will be on */
errnum = find_first_bit(&allErrors, ARRAY_SIZE(error_name));
debugf0("\t\tDIMM= %d Channel= %d (Branch %d "
"DRAM Bank= %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, branch >> 1, bank,
rdwr_str(rdwr), ras, cas);
edac_dbg(0, "\t\tDIMM= %d Channel= %d (Branch %d DRAM Bank= %d rdwr= %s ras= %d cas= %d)\n",
rank, channel, branch >> 1, bank,
rdwr_str(rdwr), ras, cas);
/* Form out message */
snprintf(msg, sizeof(msg),
@ -646,10 +632,10 @@ static void i5400_process_nonfatal_error_info(struct mem_ctl_info *mci,
branch >> 1, bank, rdwr_str(rdwr), ras, cas,
allErrors, error_name[errnum]);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, 0, 0, 0,
branch >> 1, channel % 2, rank,
rdwr ? "Write error" : "Read error",
msg, NULL);
msg);
return;
}
@ -700,7 +686,7 @@ static void i5400_clear_error(struct mem_ctl_info *mci)
static void i5400_check_error(struct mem_ctl_info *mci)
{
struct i5400_error_info info;
debugf4("MC%d: %s: %s()\n", mci->mc_idx, __FILE__, __func__);
edac_dbg(4, "MC%d\n", mci->mc_idx);
i5400_get_error_info(mci, &info);
i5400_process_error_info(mci, &info);
}
@ -786,15 +772,16 @@ static int i5400_get_devices(struct mem_ctl_info *mci, int dev_idx)
}
pvt->fsb_error_regs = pdev;
debugf1("System Address, processor bus- PCI Bus ID: %s %x:%x\n",
pci_name(pvt->system_address),
pvt->system_address->vendor, pvt->system_address->device);
debugf1("Branchmap, control and errors - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->branchmap_werrors),
pvt->branchmap_werrors->vendor, pvt->branchmap_werrors->device);
debugf1("FSB Error Regs - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->fsb_error_regs),
pvt->fsb_error_regs->vendor, pvt->fsb_error_regs->device);
edac_dbg(1, "System Address, processor bus- PCI Bus ID: %s %x:%x\n",
pci_name(pvt->system_address),
pvt->system_address->vendor, pvt->system_address->device);
edac_dbg(1, "Branchmap, control and errors - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->branchmap_werrors),
pvt->branchmap_werrors->vendor,
pvt->branchmap_werrors->device);
edac_dbg(1, "FSB Error Regs - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->fsb_error_regs),
pvt->fsb_error_regs->vendor, pvt->fsb_error_regs->device);
pvt->branch_0 = pci_get_device(PCI_VENDOR_ID_INTEL,
PCI_DEVICE_ID_INTEL_5400_FBD0, NULL);
@ -882,8 +869,8 @@ static int determine_mtr(struct i5400_pvt *pvt, int dimm, int channel)
n = dimm;
if (n >= DIMMS_PER_CHANNEL) {
debugf0("ERROR: trying to access an invalid dimm: %d\n",
dimm);
edac_dbg(0, "ERROR: trying to access an invalid dimm: %d\n",
dimm);
return 0;
}
@ -903,20 +890,29 @@ static void decode_mtr(int slot_row, u16 mtr)
ans = MTR_DIMMS_PRESENT(mtr);
debugf2("\tMTR%d=0x%x: DIMMs are %s\n", slot_row, mtr,
ans ? "Present" : "NOT Present");
edac_dbg(2, "\tMTR%d=0x%x: DIMMs are %sPresent\n",
slot_row, mtr, ans ? "" : "NOT ");
if (!ans)
return;
debugf2("\t\tWIDTH: x%d\n", MTR_DRAM_WIDTH(mtr));
edac_dbg(2, "\t\tWIDTH: x%d\n", MTR_DRAM_WIDTH(mtr));
debugf2("\t\tELECTRICAL THROTTLING is %s\n",
MTR_DIMMS_ETHROTTLE(mtr) ? "enabled" : "disabled");
edac_dbg(2, "\t\tELECTRICAL THROTTLING is %s\n",
MTR_DIMMS_ETHROTTLE(mtr) ? "enabled" : "disabled");
debugf2("\t\tNUMBANK: %d bank(s)\n", MTR_DRAM_BANKS(mtr));
debugf2("\t\tNUMRANK: %s\n", MTR_DIMM_RANK(mtr) ? "double" : "single");
debugf2("\t\tNUMROW: %s\n", numrow_toString[MTR_DIMM_ROWS(mtr)]);
debugf2("\t\tNUMCOL: %s\n", numcol_toString[MTR_DIMM_COLS(mtr)]);
edac_dbg(2, "\t\tNUMBANK: %d bank(s)\n", MTR_DRAM_BANKS(mtr));
edac_dbg(2, "\t\tNUMRANK: %s\n",
MTR_DIMM_RANK(mtr) ? "double" : "single");
edac_dbg(2, "\t\tNUMROW: %s\n",
MTR_DIMM_ROWS(mtr) == 0 ? "8,192 - 13 rows" :
MTR_DIMM_ROWS(mtr) == 1 ? "16,384 - 14 rows" :
MTR_DIMM_ROWS(mtr) == 2 ? "32,768 - 15 rows" :
"65,536 - 16 rows");
edac_dbg(2, "\t\tNUMCOL: %s\n",
MTR_DIMM_COLS(mtr) == 0 ? "1,024 - 10 columns" :
MTR_DIMM_COLS(mtr) == 1 ? "2,048 - 11 columns" :
MTR_DIMM_COLS(mtr) == 2 ? "4,096 - 12 columns" :
"reserved");
}
static void handle_channel(struct i5400_pvt *pvt, int dimm, int channel,
@ -989,7 +985,7 @@ static void calculate_dimm_size(struct i5400_pvt *pvt)
"-------------------------------");
p += n;
space -= n;
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;
}
@ -1004,7 +1000,7 @@ static void calculate_dimm_size(struct i5400_pvt *pvt)
p += n;
space -= n;
}
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;
}
@ -1014,7 +1010,7 @@ static void calculate_dimm_size(struct i5400_pvt *pvt)
"-------------------------------");
p += n;
space -= n;
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;
@ -1029,7 +1025,7 @@ static void calculate_dimm_size(struct i5400_pvt *pvt)
}
space -= n;
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;
@ -1042,7 +1038,7 @@ static void calculate_dimm_size(struct i5400_pvt *pvt)
}
/* output the last message and free buffer */
debugf2("%s\n", mem_buffer);
edac_dbg(2, "%s\n", mem_buffer);
kfree(mem_buffer);
}
@ -1065,25 +1061,25 @@ static void i5400_get_mc_regs(struct mem_ctl_info *mci)
pvt = mci->pvt_info;
pci_read_config_dword(pvt->system_address, AMBASE,
(u32 *) &pvt->ambase);
&pvt->u.ambase_bottom);
pci_read_config_dword(pvt->system_address, AMBASE + sizeof(u32),
((u32 *) &pvt->ambase) + sizeof(u32));
&pvt->u.ambase_top);
maxdimmperch = pvt->maxdimmperch;
maxch = pvt->maxch;
debugf2("AMBASE= 0x%lx MAXCH= %d MAX-DIMM-Per-CH= %d\n",
(long unsigned int)pvt->ambase, pvt->maxch, pvt->maxdimmperch);
edac_dbg(2, "AMBASE= 0x%lx MAXCH= %d MAX-DIMM-Per-CH= %d\n",
(long unsigned int)pvt->ambase, pvt->maxch, pvt->maxdimmperch);
/* Get the Branch Map regs */
pci_read_config_word(pvt->branchmap_werrors, TOLM, &pvt->tolm);
pvt->tolm >>= 12;
debugf2("\nTOLM (number of 256M regions) =%u (0x%x)\n", pvt->tolm,
pvt->tolm);
edac_dbg(2, "\nTOLM (number of 256M regions) =%u (0x%x)\n",
pvt->tolm, pvt->tolm);
actual_tolm = (u32) ((1000l * pvt->tolm) >> (30 - 28));
debugf2("Actual TOLM byte addr=%u.%03u GB (0x%x)\n",
actual_tolm/1000, actual_tolm % 1000, pvt->tolm << 28);
edac_dbg(2, "Actual TOLM byte addr=%u.%03u GB (0x%x)\n",
actual_tolm/1000, actual_tolm % 1000, pvt->tolm << 28);
pci_read_config_word(pvt->branchmap_werrors, MIR0, &pvt->mir0);
pci_read_config_word(pvt->branchmap_werrors, MIR1, &pvt->mir1);
@ -1092,11 +1088,13 @@ static void i5400_get_mc_regs(struct mem_ctl_info *mci)
limit = (pvt->mir0 >> 4) & 0x0fff;
way0 = pvt->mir0 & 0x1;
way1 = pvt->mir0 & 0x2;
debugf2("MIR0: limit= 0x%x WAY1= %u WAY0= %x\n", limit, way1, way0);
edac_dbg(2, "MIR0: limit= 0x%x WAY1= %u WAY0= %x\n",
limit, way1, way0);
limit = (pvt->mir1 >> 4) & 0xfff;
way0 = pvt->mir1 & 0x1;
way1 = pvt->mir1 & 0x2;
debugf2("MIR1: limit= 0x%x WAY1= %u WAY0= %x\n", limit, way1, way0);
edac_dbg(2, "MIR1: limit= 0x%x WAY1= %u WAY0= %x\n",
limit, way1, way0);
/* Get the set of MTR[0-3] regs by each branch */
for (slot_row = 0; slot_row < DIMMS_PER_CHANNEL; slot_row++) {
@ -1106,8 +1104,8 @@ static void i5400_get_mc_regs(struct mem_ctl_info *mci)
pci_read_config_word(pvt->branch_0, where,
&pvt->b0_mtr[slot_row]);
debugf2("MTR%d where=0x%x B0 value=0x%x\n", slot_row, where,
pvt->b0_mtr[slot_row]);
edac_dbg(2, "MTR%d where=0x%x B0 value=0x%x\n",
slot_row, where, pvt->b0_mtr[slot_row]);
if (pvt->maxch < CHANNELS_PER_BRANCH) {
pvt->b1_mtr[slot_row] = 0;
@ -1117,22 +1115,22 @@ static void i5400_get_mc_regs(struct mem_ctl_info *mci)
/* Branch 1 set of MTR registers */
pci_read_config_word(pvt->branch_1, where,
&pvt->b1_mtr[slot_row]);
debugf2("MTR%d where=0x%x B1 value=0x%x\n", slot_row, where,
pvt->b1_mtr[slot_row]);
edac_dbg(2, "MTR%d where=0x%x B1 value=0x%x\n",
slot_row, where, pvt->b1_mtr[slot_row]);
}
/* Read and dump branch 0's MTRs */
debugf2("\nMemory Technology Registers:\n");
debugf2(" Branch 0:\n");
edac_dbg(2, "Memory Technology Registers:\n");
edac_dbg(2, " Branch 0:\n");
for (slot_row = 0; slot_row < DIMMS_PER_CHANNEL; slot_row++)
decode_mtr(slot_row, pvt->b0_mtr[slot_row]);
pci_read_config_word(pvt->branch_0, AMBPRESENT_0,
&pvt->b0_ambpresent0);
debugf2("\t\tAMB-Branch 0-present0 0x%x:\n", pvt->b0_ambpresent0);
edac_dbg(2, "\t\tAMB-Branch 0-present0 0x%x:\n", pvt->b0_ambpresent0);
pci_read_config_word(pvt->branch_0, AMBPRESENT_1,
&pvt->b0_ambpresent1);
debugf2("\t\tAMB-Branch 0-present1 0x%x:\n", pvt->b0_ambpresent1);
edac_dbg(2, "\t\tAMB-Branch 0-present1 0x%x:\n", pvt->b0_ambpresent1);
/* Only if we have 2 branchs (4 channels) */
if (pvt->maxch < CHANNELS_PER_BRANCH) {
@ -1140,18 +1138,18 @@ static void i5400_get_mc_regs(struct mem_ctl_info *mci)
pvt->b1_ambpresent1 = 0;
} else {
/* Read and dump branch 1's MTRs */
debugf2(" Branch 1:\n");
edac_dbg(2, " Branch 1:\n");
for (slot_row = 0; slot_row < DIMMS_PER_CHANNEL; slot_row++)
decode_mtr(slot_row, pvt->b1_mtr[slot_row]);
pci_read_config_word(pvt->branch_1, AMBPRESENT_0,
&pvt->b1_ambpresent0);
debugf2("\t\tAMB-Branch 1-present0 0x%x:\n",
pvt->b1_ambpresent0);
edac_dbg(2, "\t\tAMB-Branch 1-present0 0x%x:\n",
pvt->b1_ambpresent0);
pci_read_config_word(pvt->branch_1, AMBPRESENT_1,
&pvt->b1_ambpresent1);
debugf2("\t\tAMB-Branch 1-present1 0x%x:\n",
pvt->b1_ambpresent1);
edac_dbg(2, "\t\tAMB-Branch 1-present1 0x%x:\n",
pvt->b1_ambpresent1);
}
/* Go and determine the size of each DIMM and place in an
@ -1203,10 +1201,9 @@ static int i5400_init_dimms(struct mem_ctl_info *mci)
size_mb = pvt->dimm_info[slot][channel].megabytes;
debugf2("%s: dimm%zd (branch %d channel %d slot %d): %d.%03d GB\n",
__func__, dimm - mci->dimms,
channel / 2, channel % 2, slot,
size_mb / 1000, size_mb % 1000);
edac_dbg(2, "dimm (branch %d channel %d slot %d): %d.%03d GB\n",
channel / 2, channel % 2, slot,
size_mb / 1000, size_mb % 1000);
dimm->nr_pages = size_mb << 8;
dimm->grain = 8;
@ -1227,7 +1224,7 @@ static int i5400_init_dimms(struct mem_ctl_info *mci)
* With such single-DIMM mode, the SDCC algorithm degrades to SECDEC+.
*/
if (ndimms == 1)
mci->dimms[0].edac_mode = EDAC_SECDED;
mci->dimms[0]->edac_mode = EDAC_SECDED;
return (ndimms == 0);
}
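The dimm handling just above also reflects the reworked memory allocation scheme: mci->dimms is now an array of pointers, so the single-DIMM special case writes mci->dimms[0]->edac_mode rather than mci->dimms[0].edac_mode. Below is a small sketch of walking the DIMM table under that layout; treating mci->tot_dimms as the array length is an assumption taken from the EDAC core of this period, not from this hunk.

/* Sketch under the assumption that mci->dimms[] holds pointers and
 * mci->tot_dimms is its length. */
static unsigned int count_populated_dimms(struct mem_ctl_info *mci)
{
	unsigned int i, populated = 0;

	for (i = 0; i < mci->tot_dimms; i++) {
		struct dimm_info *dimm = mci->dimms[i];	/* was &mci->dimms[i] */

		if (dimm->nr_pages)
			populated++;
	}

	return populated;
}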
@ -1270,10 +1267,9 @@ static int i5400_probe1(struct pci_dev *pdev, int dev_idx)
if (dev_idx >= ARRAY_SIZE(i5400_devs))
return -EINVAL;
debugf0("MC: %s: %s(), pdev bus %u dev=0x%x fn=0x%x\n",
__FILE__, __func__,
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
edac_dbg(0, "MC: pdev bus %u dev=0x%x fn=0x%x\n",
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
/* We only are looking for func 0 of the set */
if (PCI_FUNC(pdev->devfn) != 0)
@ -1297,9 +1293,9 @@ static int i5400_probe1(struct pci_dev *pdev, int dev_idx)
if (mci == NULL)
return -ENOMEM;
debugf0("MC: %s: %s(): mci = %p\n", __FILE__, __func__, mci);
edac_dbg(0, "MC: mci = %p\n", mci);
mci->dev = &pdev->dev; /* record ptr to the generic device */
mci->pdev = &pdev->dev; /* record ptr to the generic device */
pvt = mci->pvt_info;
pvt->system_address = pdev; /* Record this device in our private */
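The mci->dev to mci->pdev rename above follows from the same rework: the mem_ctl_info now embeds its own struct device (which is what the to_mci() helper later in this series relies on), so the pointer to the backing PCI device moves to a separate field. A hedged restatement of the assignment, wrapped in a hypothetical helper for illustration:

/* i5400_record_parent() is a made-up name; the point is the field:
 * mci->pdev carries the parent PCI device after this series, while
 * mci->dev is the controller's own embedded struct device. */
static void i5400_record_parent(struct mem_ctl_info *mci, struct pci_dev *pdev)
{
	mci->pdev = &pdev->dev;		/* was: mci->dev = &pdev->dev; */
	edac_dbg(0, "MC: parent device is %s\n", dev_name(mci->pdev));
}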
@ -1329,19 +1325,16 @@ static int i5400_probe1(struct pci_dev *pdev, int dev_idx)
/* initialize the MC control structure 'dimms' table
* with the mapping and control information */
if (i5400_init_dimms(mci)) {
debugf0("MC: Setting mci->edac_cap to EDAC_FLAG_NONE\n"
" because i5400_init_dimms() returned nonzero "
"value\n");
edac_dbg(0, "MC: Setting mci->edac_cap to EDAC_FLAG_NONE because i5400_init_dimms() returned nonzero value\n");
mci->edac_cap = EDAC_FLAG_NONE; /* no dimms found */
} else {
debugf1("MC: Enable error reporting now\n");
edac_dbg(1, "MC: Enable error reporting now\n");
i5400_enable_error_reporting(mci);
}
/* add this new MC control structure to EDAC's list of MCs */
if (edac_mc_add_mc(mci)) {
debugf0("MC: %s: %s(): failed edac_mc_add_mc()\n",
__FILE__, __func__);
edac_dbg(0, "MC: failed edac_mc_add_mc()\n");
/* FIXME: perhaps some code should go here that disables error
* reporting if we just enabled it
*/
@ -1385,7 +1378,7 @@ static int __devinit i5400_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("MC: %s: %s()\n", __FILE__, __func__);
edac_dbg(0, "MC:\n");
/* wake up device */
rc = pci_enable_device(pdev);
@ -1404,7 +1397,7 @@ static void __devexit i5400_remove_one(struct pci_dev *pdev)
{
struct mem_ctl_info *mci;
debugf0("%s: %s()\n", __FILE__, __func__);
edac_dbg(0, "\n");
if (i5400_pci)
edac_pci_release_generic_ctl(i5400_pci);
@ -1450,7 +1443,7 @@ static int __init i5400_init(void)
{
int pci_rc;
debugf2("MC: %s: %s()\n", __FILE__, __func__);
edac_dbg(2, "MC:\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -1466,7 +1459,7 @@ static int __init i5400_init(void)
*/
static void __exit i5400_exit(void)
{
debugf2("MC: %s: %s()\n", __FILE__, __func__);
edac_dbg(2, "MC:\n");
pci_unregister_driver(&i5400_driver);
}

drivers/edac/i7300_edac.c View File


@ -182,24 +182,6 @@ static const u16 mtr_regs[MAX_SLOTS] = {
#define MTR_DIMM_COLS(mtr) ((mtr) & 0x3)
#define MTR_DIMM_COLS_ADDR_BITS(mtr) (MTR_DIMM_COLS(mtr) + 10)
#ifdef CONFIG_EDAC_DEBUG
/* MTR NUMROW */
static const char *numrow_toString[] = {
"8,192 - 13 rows",
"16,384 - 14 rows",
"32,768 - 15 rows",
"65,536 - 16 rows"
};
/* MTR NUMCOL */
static const char *numcol_toString[] = {
"1,024 - 10 columns",
"2,048 - 11 columns",
"4,096 - 12 columns",
"reserved"
};
#endif
/************************************************
* i7300 Register definitions for error detection
************************************************/
@ -467,10 +449,10 @@ static void i7300_process_fbd_error(struct mem_ctl_info *mci)
"Bank=%d RAS=%d CAS=%d Err=0x%lx (%s))",
bank, ras, cas, errors, specific);
edac_mc_handle_error(HW_EVENT_ERR_FATAL, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_FATAL, mci, 1, 0, 0, 0,
branch, -1, rank,
is_wr ? "Write error" : "Read error",
pvt->tmp_prt_buffer, NULL);
pvt->tmp_prt_buffer);
}
@ -513,11 +495,11 @@ static void i7300_process_fbd_error(struct mem_ctl_info *mci)
"DRAM-Bank=%d RAS=%d CAS=%d, Err=0x%lx (%s))",
bank, ras, cas, errors, specific);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, 0, 0,
syndrome,
branch >> 1, channel % 2, rank,
is_wr ? "Write error" : "Read error",
pvt->tmp_prt_buffer, NULL);
pvt->tmp_prt_buffer);
}
return;
}
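The two reworked calls above show the new edac_mc_handle_error() shape introduced by the "add an error_count parameter" and "remove arch-specific parameter" patches: a count argument sits right after mci, and the trailing driver-private pointer is dropped. Read from this diff (the argument names below are the conventional EDAC ones, not copied from the header), the call pattern is:

/* Call shape as it appears after this series (hedged paraphrase):
 *
 *   edac_mc_handle_error(type, mci, error_count,
 *                        page_frame_number, offset_in_page, syndrome,
 *                        top_layer, mid_layer, low_layer,
 *                        msg, other_detail);
 *
 * e.g. the corrected-error report above becomes: */
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,	/* one event */
		     0, 0, syndrome,			/* page, offset, syndrome */
		     branch >> 1, channel % 2, rank,	/* layer indexes */
		     is_wr ? "Write error" : "Read error",
		     pvt->tmp_prt_buffer);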
@ -614,9 +596,8 @@ static int decode_mtr(struct i7300_pvt *pvt,
mtr = pvt->mtr[slot][branch];
ans = MTR_DIMMS_PRESENT(mtr) ? 1 : 0;
debugf2("\tMTR%d CH%d: DIMMs are %s (mtr)\n",
slot, channel,
ans ? "Present" : "NOT Present");
edac_dbg(2, "\tMTR%d CH%d: DIMMs are %sPresent (mtr)\n",
slot, channel, ans ? "" : "NOT ");
/* Determine if there is a DIMM present in this DIMM slot */
if (!ans)
@ -638,16 +619,25 @@ static int decode_mtr(struct i7300_pvt *pvt,
dinfo->megabytes = 1 << addrBits;
debugf2("\t\tWIDTH: x%d\n", MTR_DRAM_WIDTH(mtr));
edac_dbg(2, "\t\tWIDTH: x%d\n", MTR_DRAM_WIDTH(mtr));
debugf2("\t\tELECTRICAL THROTTLING is %s\n",
MTR_DIMMS_ETHROTTLE(mtr) ? "enabled" : "disabled");
edac_dbg(2, "\t\tELECTRICAL THROTTLING is %s\n",
MTR_DIMMS_ETHROTTLE(mtr) ? "enabled" : "disabled");
debugf2("\t\tNUMBANK: %d bank(s)\n", MTR_DRAM_BANKS(mtr));
debugf2("\t\tNUMRANK: %s\n", MTR_DIMM_RANKS(mtr) ? "double" : "single");
debugf2("\t\tNUMROW: %s\n", numrow_toString[MTR_DIMM_ROWS(mtr)]);
debugf2("\t\tNUMCOL: %s\n", numcol_toString[MTR_DIMM_COLS(mtr)]);
debugf2("\t\tSIZE: %d MB\n", dinfo->megabytes);
edac_dbg(2, "\t\tNUMBANK: %d bank(s)\n", MTR_DRAM_BANKS(mtr));
edac_dbg(2, "\t\tNUMRANK: %s\n",
MTR_DIMM_RANKS(mtr) ? "double" : "single");
edac_dbg(2, "\t\tNUMROW: %s\n",
MTR_DIMM_ROWS(mtr) == 0 ? "8,192 - 13 rows" :
MTR_DIMM_ROWS(mtr) == 1 ? "16,384 - 14 rows" :
MTR_DIMM_ROWS(mtr) == 2 ? "32,768 - 15 rows" :
"65,536 - 16 rows");
edac_dbg(2, "\t\tNUMCOL: %s\n",
MTR_DIMM_COLS(mtr) == 0 ? "1,024 - 10 columns" :
MTR_DIMM_COLS(mtr) == 1 ? "2,048 - 11 columns" :
MTR_DIMM_COLS(mtr) == 2 ? "4,096 - 12 columns" :
"reserved");
edac_dbg(2, "\t\tSIZE: %d MB\n", dinfo->megabytes);
/*
* The type of error detection actually depends of the
@ -663,9 +653,9 @@ static int decode_mtr(struct i7300_pvt *pvt,
dimm->mtype = MEM_FB_DDR2;
if (IS_SINGLE_MODE(pvt->mc_settings_a)) {
dimm->edac_mode = EDAC_SECDED;
debugf2("\t\tECC code is 8-byte-over-32-byte SECDED+ code\n");
edac_dbg(2, "\t\tECC code is 8-byte-over-32-byte SECDED+ code\n");
} else {
debugf2("\t\tECC code is on Lockstep mode\n");
edac_dbg(2, "\t\tECC code is on Lockstep mode\n");
if (MTR_DRAM_WIDTH(mtr) == 8)
dimm->edac_mode = EDAC_S8ECD8ED;
else
@ -674,9 +664,9 @@ static int decode_mtr(struct i7300_pvt *pvt,
/* ask what device type on this row */
if (MTR_DRAM_WIDTH(mtr) == 8) {
debugf2("\t\tScrub algorithm for x8 is on %s mode\n",
IS_SCRBALGO_ENHANCED(pvt->mc_settings) ?
"enhanced" : "normal");
edac_dbg(2, "\t\tScrub algorithm for x8 is on %s mode\n",
IS_SCRBALGO_ENHANCED(pvt->mc_settings) ?
"enhanced" : "normal");
dimm->dtype = DEV_X8;
} else
@ -710,14 +700,14 @@ static void print_dimm_size(struct i7300_pvt *pvt)
p += n;
space -= n;
}
debugf2("%s\n", pvt->tmp_prt_buffer);
edac_dbg(2, "%s\n", pvt->tmp_prt_buffer);
p = pvt->tmp_prt_buffer;
space = PAGE_SIZE;
n = snprintf(p, space, "-------------------------------"
"------------------------------");
p += n;
space -= n;
debugf2("%s\n", pvt->tmp_prt_buffer);
edac_dbg(2, "%s\n", pvt->tmp_prt_buffer);
p = pvt->tmp_prt_buffer;
space = PAGE_SIZE;
@ -733,7 +723,7 @@ static void print_dimm_size(struct i7300_pvt *pvt)
space -= n;
}
debugf2("%s\n", pvt->tmp_prt_buffer);
edac_dbg(2, "%s\n", pvt->tmp_prt_buffer);
p = pvt->tmp_prt_buffer;
space = PAGE_SIZE;
}
@ -742,7 +732,7 @@ static void print_dimm_size(struct i7300_pvt *pvt)
"------------------------------");
p += n;
space -= n;
debugf2("%s\n", pvt->tmp_prt_buffer);
edac_dbg(2, "%s\n", pvt->tmp_prt_buffer);
p = pvt->tmp_prt_buffer;
space = PAGE_SIZE;
#endif
@ -765,7 +755,7 @@ static int i7300_init_csrows(struct mem_ctl_info *mci)
pvt = mci->pvt_info;
debugf2("Memory Technology Registers:\n");
edac_dbg(2, "Memory Technology Registers:\n");
/* Get the AMB present registers for the four channels */
for (branch = 0; branch < MAX_BRANCHES; branch++) {
@ -774,15 +764,15 @@ static int i7300_init_csrows(struct mem_ctl_info *mci)
pci_read_config_word(pvt->pci_dev_2x_0_fbd_branch[branch],
AMBPRESENT_0,
&pvt->ambpresent[channel]);
debugf2("\t\tAMB-present CH%d = 0x%x:\n",
channel, pvt->ambpresent[channel]);
edac_dbg(2, "\t\tAMB-present CH%d = 0x%x:\n",
channel, pvt->ambpresent[channel]);
channel = to_channel(1, branch);
pci_read_config_word(pvt->pci_dev_2x_0_fbd_branch[branch],
AMBPRESENT_1,
&pvt->ambpresent[channel]);
debugf2("\t\tAMB-present CH%d = 0x%x:\n",
channel, pvt->ambpresent[channel]);
edac_dbg(2, "\t\tAMB-present CH%d = 0x%x:\n",
channel, pvt->ambpresent[channel]);
}
/* Get the set of MTR[0-7] regs by each branch */
@ -824,12 +814,11 @@ static int i7300_init_csrows(struct mem_ctl_info *mci)
static void decode_mir(int mir_no, u16 mir[MAX_MIR])
{
if (mir[mir_no] & 3)
debugf2("MIR%d: limit= 0x%x Branch(es) that participate:"
" %s %s\n",
mir_no,
(mir[mir_no] >> 4) & 0xfff,
(mir[mir_no] & 1) ? "B0" : "",
(mir[mir_no] & 2) ? "B1" : "");
edac_dbg(2, "MIR%d: limit= 0x%x Branch(es) that participate: %s %s\n",
mir_no,
(mir[mir_no] >> 4) & 0xfff,
(mir[mir_no] & 1) ? "B0" : "",
(mir[mir_no] & 2) ? "B1" : "");
}
/**
@ -849,17 +838,17 @@ static int i7300_get_mc_regs(struct mem_ctl_info *mci)
pci_read_config_dword(pvt->pci_dev_16_0_fsb_ctlr, AMBASE,
(u32 *) &pvt->ambase);
debugf2("AMBASE= 0x%lx\n", (long unsigned int)pvt->ambase);
edac_dbg(2, "AMBASE= 0x%lx\n", (long unsigned int)pvt->ambase);
/* Get the Branch Map regs */
pci_read_config_word(pvt->pci_dev_16_1_fsb_addr_map, TOLM, &pvt->tolm);
pvt->tolm >>= 12;
debugf2("TOLM (number of 256M regions) =%u (0x%x)\n", pvt->tolm,
pvt->tolm);
edac_dbg(2, "TOLM (number of 256M regions) =%u (0x%x)\n",
pvt->tolm, pvt->tolm);
actual_tolm = (u32) ((1000l * pvt->tolm) >> (30 - 28));
debugf2("Actual TOLM byte addr=%u.%03u GB (0x%x)\n",
actual_tolm/1000, actual_tolm % 1000, pvt->tolm << 28);
edac_dbg(2, "Actual TOLM byte addr=%u.%03u GB (0x%x)\n",
actual_tolm/1000, actual_tolm % 1000, pvt->tolm << 28);
/* Get memory controller settings */
pci_read_config_dword(pvt->pci_dev_16_1_fsb_addr_map, MC_SETTINGS,
@ -868,15 +857,15 @@ static int i7300_get_mc_regs(struct mem_ctl_info *mci)
&pvt->mc_settings_a);
if (IS_SINGLE_MODE(pvt->mc_settings_a))
debugf0("Memory controller operating on single mode\n");
edac_dbg(0, "Memory controller operating on single mode\n");
else
debugf0("Memory controller operating on %s mode\n",
IS_MIRRORED(pvt->mc_settings) ? "mirrored" : "non-mirrored");
edac_dbg(0, "Memory controller operating on %smirrored mode\n",
IS_MIRRORED(pvt->mc_settings) ? "" : "non-");
debugf0("Error detection is %s\n",
IS_ECC_ENABLED(pvt->mc_settings) ? "enabled" : "disabled");
debugf0("Retry is %s\n",
IS_RETRY_ENABLED(pvt->mc_settings) ? "enabled" : "disabled");
edac_dbg(0, "Error detection is %s\n",
IS_ECC_ENABLED(pvt->mc_settings) ? "enabled" : "disabled");
edac_dbg(0, "Retry is %s\n",
IS_RETRY_ENABLED(pvt->mc_settings) ? "enabled" : "disabled");
/* Get Memory Interleave Range registers */
pci_read_config_word(pvt->pci_dev_16_1_fsb_addr_map, MIR0,
@ -970,18 +959,18 @@ static int __devinit i7300_get_devices(struct mem_ctl_info *mci)
}
}
debugf1("System Address, processor bus- PCI Bus ID: %s %x:%x\n",
pci_name(pvt->pci_dev_16_0_fsb_ctlr),
pvt->pci_dev_16_0_fsb_ctlr->vendor,
pvt->pci_dev_16_0_fsb_ctlr->device);
debugf1("Branchmap, control and errors - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->pci_dev_16_1_fsb_addr_map),
pvt->pci_dev_16_1_fsb_addr_map->vendor,
pvt->pci_dev_16_1_fsb_addr_map->device);
debugf1("FSB Error Regs - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->pci_dev_16_2_fsb_err_regs),
pvt->pci_dev_16_2_fsb_err_regs->vendor,
pvt->pci_dev_16_2_fsb_err_regs->device);
edac_dbg(1, "System Address, processor bus- PCI Bus ID: %s %x:%x\n",
pci_name(pvt->pci_dev_16_0_fsb_ctlr),
pvt->pci_dev_16_0_fsb_ctlr->vendor,
pvt->pci_dev_16_0_fsb_ctlr->device);
edac_dbg(1, "Branchmap, control and errors - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->pci_dev_16_1_fsb_addr_map),
pvt->pci_dev_16_1_fsb_addr_map->vendor,
pvt->pci_dev_16_1_fsb_addr_map->device);
edac_dbg(1, "FSB Error Regs - PCI Bus ID: %s %x:%x\n",
pci_name(pvt->pci_dev_16_2_fsb_err_regs),
pvt->pci_dev_16_2_fsb_err_regs->vendor,
pvt->pci_dev_16_2_fsb_err_regs->device);
pvt->pci_dev_2x_0_fbd_branch[0] = pci_get_device(PCI_VENDOR_ID_INTEL,
PCI_DEVICE_ID_INTEL_I7300_MCH_FB0,
@ -1032,10 +1021,9 @@ static int __devinit i7300_init_one(struct pci_dev *pdev,
if (rc == -EIO)
return rc;
debugf0("MC: " __FILE__ ": %s(), pdev bus %u dev=0x%x fn=0x%x\n",
__func__,
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
edac_dbg(0, "MC: pdev bus %u dev=0x%x fn=0x%x\n",
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
/* We only are looking for func 0 of the set */
if (PCI_FUNC(pdev->devfn) != 0)
@ -1055,9 +1043,9 @@ static int __devinit i7300_init_one(struct pci_dev *pdev,
if (mci == NULL)
return -ENOMEM;
debugf0("MC: " __FILE__ ": %s(): mci = %p\n", __func__, mci);
edac_dbg(0, "MC: mci = %p\n", mci);
mci->dev = &pdev->dev; /* record ptr to the generic device */
mci->pdev = &pdev->dev; /* record ptr to the generic device */
pvt = mci->pvt_info;
pvt->pci_dev_16_0_fsb_ctlr = pdev; /* Record this device in our private */
@ -1088,19 +1076,16 @@ static int __devinit i7300_init_one(struct pci_dev *pdev,
/* initialize the MC control structure 'csrows' table
* with the mapping and control information */
if (i7300_get_mc_regs(mci)) {
debugf0("MC: Setting mci->edac_cap to EDAC_FLAG_NONE\n"
" because i7300_init_csrows() returned nonzero "
"value\n");
edac_dbg(0, "MC: Setting mci->edac_cap to EDAC_FLAG_NONE because i7300_init_csrows() returned nonzero value\n");
mci->edac_cap = EDAC_FLAG_NONE; /* no csrows found */
} else {
debugf1("MC: Enable error reporting now\n");
edac_dbg(1, "MC: Enable error reporting now\n");
i7300_enable_error_reporting(mci);
}
/* add this new MC control structure to EDAC's list of MCs */
if (edac_mc_add_mc(mci)) {
debugf0("MC: " __FILE__
": %s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(0, "MC: failed edac_mc_add_mc()\n");
/* FIXME: perhaps some code should go here that disables error
* reporting if we just enabled it
*/
@ -1142,7 +1127,7 @@ static void __devexit i7300_remove_one(struct pci_dev *pdev)
struct mem_ctl_info *mci;
char *tmp;
debugf0(__FILE__ ": %s()\n", __func__);
edac_dbg(0, "\n");
if (i7300_pci)
edac_pci_release_generic_ctl(i7300_pci);
@ -1189,7 +1174,7 @@ static int __init i7300_init(void)
{
int pci_rc;
debugf2("MC: " __FILE__ ": %s()\n", __func__);
edac_dbg(2, "\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -1204,7 +1189,7 @@ static int __init i7300_init(void)
*/
static void __exit i7300_exit(void)
{
debugf2("MC: " __FILE__ ": %s()\n", __func__);
edac_dbg(2, "\n");
pci_unregister_driver(&i7300_driver);
}

drivers/edac/i7core_edac.c View File

@ -248,6 +248,8 @@ struct i7core_dev {
};
struct i7core_pvt {
struct device *addrmatch_dev, *chancounts_dev;
struct pci_dev *pci_noncore;
struct pci_dev *pci_mcr[MAX_MCR_FUNC + 1];
struct pci_dev *pci_ch[NUM_CHANS][MAX_CHAN_FUNC + 1];
@ -514,29 +516,28 @@ static int get_dimm_config(struct mem_ctl_info *mci)
pci_read_config_dword(pdev, MC_MAX_DOD, &pvt->info.max_dod);
pci_read_config_dword(pdev, MC_CHANNEL_MAPPER, &pvt->info.ch_map);
debugf0("QPI %d control=0x%08x status=0x%08x dod=0x%08x map=0x%08x\n",
pvt->i7core_dev->socket, pvt->info.mc_control, pvt->info.mc_status,
pvt->info.max_dod, pvt->info.ch_map);
edac_dbg(0, "QPI %d control=0x%08x status=0x%08x dod=0x%08x map=0x%08x\n",
pvt->i7core_dev->socket, pvt->info.mc_control,
pvt->info.mc_status, pvt->info.max_dod, pvt->info.ch_map);
if (ECC_ENABLED(pvt)) {
debugf0("ECC enabled with x%d SDCC\n", ECCx8(pvt) ? 8 : 4);
edac_dbg(0, "ECC enabled with x%d SDCC\n", ECCx8(pvt) ? 8 : 4);
if (ECCx8(pvt))
mode = EDAC_S8ECD8ED;
else
mode = EDAC_S4ECD4ED;
} else {
debugf0("ECC disabled\n");
edac_dbg(0, "ECC disabled\n");
mode = EDAC_NONE;
}
/* FIXME: need to handle the error codes */
debugf0("DOD Max limits: DIMMS: %d, %d-ranked, %d-banked "
"x%x x 0x%x\n",
numdimms(pvt->info.max_dod),
numrank(pvt->info.max_dod >> 2),
numbank(pvt->info.max_dod >> 4),
numrow(pvt->info.max_dod >> 6),
numcol(pvt->info.max_dod >> 9));
edac_dbg(0, "DOD Max limits: DIMMS: %d, %d-ranked, %d-banked x%x x 0x%x\n",
numdimms(pvt->info.max_dod),
numrank(pvt->info.max_dod >> 2),
numbank(pvt->info.max_dod >> 4),
numrow(pvt->info.max_dod >> 6),
numcol(pvt->info.max_dod >> 9));
for (i = 0; i < NUM_CHANS; i++) {
u32 data, dimm_dod[3], value[8];
@ -545,11 +546,11 @@ static int get_dimm_config(struct mem_ctl_info *mci)
continue;
if (!CH_ACTIVE(pvt, i)) {
debugf0("Channel %i is not active\n", i);
edac_dbg(0, "Channel %i is not active\n", i);
continue;
}
if (CH_DISABLED(pvt, i)) {
debugf0("Channel %i is disabled\n", i);
edac_dbg(0, "Channel %i is disabled\n", i);
continue;
}
@ -580,15 +581,14 @@ static int get_dimm_config(struct mem_ctl_info *mci)
pci_read_config_dword(pvt->pci_ch[i][1],
MC_DOD_CH_DIMM2, &dimm_dod[2]);
debugf0("Ch%d phy rd%d, wr%d (0x%08x): "
"%s%s%s%cDIMMs\n",
i,
RDLCH(pvt->info.ch_map, i), WRLCH(pvt->info.ch_map, i),
data,
pvt->channel[i].is_3dimms_present ? "3DIMMS " : "",
pvt->channel[i].is_3dimms_present ? "SINGLE_4R " : "",
pvt->channel[i].has_4rank ? "HAS_4R " : "",
(data & REGISTERED_DIMM) ? 'R' : 'U');
edac_dbg(0, "Ch%d phy rd%d, wr%d (0x%08x): %s%s%s%cDIMMs\n",
i,
RDLCH(pvt->info.ch_map, i), WRLCH(pvt->info.ch_map, i),
data,
pvt->channel[i].is_3dimms_present ? "3DIMMS " : "",
pvt->channel[i].is_3dimms_present ? "SINGLE_4R " : "",
pvt->channel[i].has_4rank ? "HAS_4R " : "",
(data & REGISTERED_DIMM) ? 'R' : 'U');
for (j = 0; j < 3; j++) {
u32 banks, ranks, rows, cols;
@ -607,11 +607,10 @@ static int get_dimm_config(struct mem_ctl_info *mci)
/* DDR3 has 8 I/O banks */
size = (rows * cols * banks * ranks) >> (20 - 3);
debugf0("\tdimm %d %d Mb offset: %x, "
"bank: %d, rank: %d, row: %#x, col: %#x\n",
j, size,
RANKOFFSET(dimm_dod[j]),
banks, ranks, rows, cols);
edac_dbg(0, "\tdimm %d %d Mb offset: %x, bank: %d, rank: %d, row: %#x, col: %#x\n",
j, size,
RANKOFFSET(dimm_dod[j]),
banks, ranks, rows, cols);
npages = MiB_TO_PAGES(size);
@ -647,12 +646,12 @@ static int get_dimm_config(struct mem_ctl_info *mci)
pci_read_config_dword(pdev, MC_SAG_CH_5, &value[5]);
pci_read_config_dword(pdev, MC_SAG_CH_6, &value[6]);
pci_read_config_dword(pdev, MC_SAG_CH_7, &value[7]);
debugf1("\t[%i] DIVBY3\tREMOVED\tOFFSET\n", i);
edac_dbg(1, "\t[%i] DIVBY3\tREMOVED\tOFFSET\n", i);
for (j = 0; j < 8; j++)
debugf1("\t\t%#x\t%#x\t%#x\n",
(value[j] >> 27) & 0x1,
(value[j] >> 24) & 0x7,
(value[j] & ((1 << 24) - 1)));
edac_dbg(1, "\t\t%#x\t%#x\t%#x\n",
(value[j] >> 27) & 0x1,
(value[j] >> 24) & 0x7,
(value[j] & ((1 << 24) - 1)));
}
return 0;
@ -662,6 +661,8 @@ static int get_dimm_config(struct mem_ctl_info *mci)
Error insertion routines
****************************************************************************/
#define to_mci(k) container_of(k, struct mem_ctl_info, dev)
/* The i7core has independent error injection features per channel.
However, to have a simpler code, we don't allow enabling error injection
on more than one channel.
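The new to_mci() helper above recovers the mem_ctl_info from the struct device embedded in it, which is what lets the sysfs callbacks below take the standard (struct device *, struct device_attribute *, ...) signatures. The container_of() idea it builds on can be shown with a tiny standalone example; all names here are made up for the illustration.

#include <stddef.h>
#include <stdio.h>

/* Userspace toy showing the container_of() pattern behind to_mci();
 * "outer" and "inner" are hypothetical names. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct inner { int dummy; };
struct outer { int id; struct inner dev; };

int main(void)
{
	struct outer o = { .id = 7 };
	struct inner *member = &o.dev;
	struct outer *back = container_of(member, struct outer, dev);

	printf("recovered id = %d\n", back->id);	/* prints 7 */
	return 0;
}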
@ -691,9 +692,11 @@ static int disable_inject(const struct mem_ctl_info *mci)
* bit 0 - refers to the lower 32-byte half cacheline
* bit 1 - refers to the upper 32-byte half cacheline
*/
static ssize_t i7core_inject_section_store(struct mem_ctl_info *mci,
static ssize_t i7core_inject_section_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct i7core_pvt *pvt = mci->pvt_info;
unsigned long value;
int rc;
@ -709,9 +712,11 @@ static ssize_t i7core_inject_section_store(struct mem_ctl_info *mci,
return count;
}
static ssize_t i7core_inject_section_show(struct mem_ctl_info *mci,
char *data)
static ssize_t i7core_inject_section_show(struct device *dev,
struct device_attribute *mattr,
char *data)
{
struct mem_ctl_info *mci = to_mci(dev);
struct i7core_pvt *pvt = mci->pvt_info;
return sprintf(data, "0x%08x\n", pvt->inject.section);
}
@ -724,10 +729,12 @@ static ssize_t i7core_inject_section_show(struct mem_ctl_info *mci,
* bit 1 - inject ECC error
* bit 2 - inject parity error
*/
static ssize_t i7core_inject_type_store(struct mem_ctl_info *mci,
static ssize_t i7core_inject_type_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct i7core_pvt *pvt = mci->pvt_info;
struct mem_ctl_info *mci = to_mci(dev);
struct i7core_pvt *pvt = mci->pvt_info;
unsigned long value;
int rc;
@ -742,10 +749,13 @@ static ssize_t i7core_inject_type_store(struct mem_ctl_info *mci,
return count;
}
static ssize_t i7core_inject_type_show(struct mem_ctl_info *mci,
char *data)
static ssize_t i7core_inject_type_show(struct device *dev,
struct device_attribute *mattr,
char *data)
{
struct mem_ctl_info *mci = to_mci(dev);
struct i7core_pvt *pvt = mci->pvt_info;
return sprintf(data, "0x%08x\n", pvt->inject.type);
}
@ -759,9 +769,11 @@ static ssize_t i7core_inject_type_show(struct mem_ctl_info *mci,
* 23:16 and 31:24). Flipping bits in two symbol pairs will cause an
* uncorrectable error to be injected.
*/
static ssize_t i7core_inject_eccmask_store(struct mem_ctl_info *mci,
const char *data, size_t count)
static ssize_t i7core_inject_eccmask_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct i7core_pvt *pvt = mci->pvt_info;
unsigned long value;
int rc;
@ -777,10 +789,13 @@ static ssize_t i7core_inject_eccmask_store(struct mem_ctl_info *mci,
return count;
}
static ssize_t i7core_inject_eccmask_show(struct mem_ctl_info *mci,
char *data)
static ssize_t i7core_inject_eccmask_show(struct device *dev,
struct device_attribute *mattr,
char *data)
{
struct mem_ctl_info *mci = to_mci(dev);
struct i7core_pvt *pvt = mci->pvt_info;
return sprintf(data, "0x%08x\n", pvt->inject.eccmask);
}
@ -797,14 +812,16 @@ static ssize_t i7core_inject_eccmask_show(struct mem_ctl_info *mci,
#define DECLARE_ADDR_MATCH(param, limit) \
static ssize_t i7core_inject_store_##param( \
struct mem_ctl_info *mci, \
const char *data, size_t count) \
struct device *dev, \
struct device_attribute *mattr, \
const char *data, size_t count) \
{ \
struct mem_ctl_info *mci = to_mci(dev); \
struct i7core_pvt *pvt; \
long value; \
int rc; \
\
debugf1("%s()\n", __func__); \
edac_dbg(1, "\n"); \
pvt = mci->pvt_info; \
\
if (pvt->inject.enable) \
@ -824,13 +841,15 @@ static ssize_t i7core_inject_store_##param( \
} \
\
static ssize_t i7core_inject_show_##param( \
struct mem_ctl_info *mci, \
char *data) \
struct device *dev, \
struct device_attribute *mattr, \
char *data) \
{ \
struct mem_ctl_info *mci = to_mci(dev); \
struct i7core_pvt *pvt; \
\
pvt = mci->pvt_info; \
debugf1("%s() pvt=%p\n", __func__, pvt); \
edac_dbg(1, "pvt=%p\n", pvt); \
if (pvt->inject.param < 0) \
return sprintf(data, "any\n"); \
else \
@ -838,14 +857,9 @@ static ssize_t i7core_inject_show_##param( \
}
#define ATTR_ADDR_MATCH(param) \
{ \
.attr = { \
.name = #param, \
.mode = (S_IRUGO | S_IWUSR) \
}, \
.show = i7core_inject_show_##param, \
.store = i7core_inject_store_##param, \
}
static DEVICE_ATTR(param, S_IRUGO | S_IWUSR, \
i7core_inject_show_##param, \
i7core_inject_store_##param)
DECLARE_ADDR_MATCH(channel, 3);
DECLARE_ADDR_MATCH(dimm, 3);
@ -854,14 +868,21 @@ DECLARE_ADDR_MATCH(bank, 32);
DECLARE_ADDR_MATCH(page, 0x10000);
DECLARE_ADDR_MATCH(col, 0x4000);
ATTR_ADDR_MATCH(channel);
ATTR_ADDR_MATCH(dimm);
ATTR_ADDR_MATCH(rank);
ATTR_ADDR_MATCH(bank);
ATTR_ADDR_MATCH(page);
ATTR_ADDR_MATCH(col);
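ATTR_ADDR_MATCH() now leans on the standard DEVICE_ATTR() macro instead of filling the driver-private mcidev_sysfs_attribute table. Roughly, each line above produces a struct device_attribute along the lines of the sketch below (simplified; the in-tree macro goes through __ATTR()), which is why the group array further down can reference &dev_attr_channel.attr and friends.

/* Simplified sketch of one expansion; the real DEVICE_ATTR() uses
 * __ATTR() internally. */
static struct device_attribute dev_attr_channel = {
	.attr  = { .name = "channel", .mode = S_IRUGO | S_IWUSR },
	.show  = i7core_inject_show_channel,
	.store = i7core_inject_store_channel,
};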
static int write_and_test(struct pci_dev *dev, const int where, const u32 val)
{
u32 read;
int count;
debugf0("setting pci %02x:%02x.%x reg=%02x value=%08x\n",
dev->bus->number, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
where, val);
edac_dbg(0, "setting pci %02x:%02x.%x reg=%02x value=%08x\n",
dev->bus->number, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
where, val);
for (count = 0; count < 10; count++) {
if (count)
@ -899,9 +920,11 @@ static int write_and_test(struct pci_dev *dev, const int where, const u32 val)
* is reliable enough to check if the MC is using the
* three channels. However, this is not clear at the datasheet.
*/
static ssize_t i7core_inject_enable_store(struct mem_ctl_info *mci,
const char *data, size_t count)
static ssize_t i7core_inject_enable_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct i7core_pvt *pvt = mci->pvt_info;
u32 injectmask;
u64 mask = 0;
@ -994,17 +1017,18 @@ static ssize_t i7core_inject_enable_store(struct mem_ctl_info *mci,
pci_write_config_dword(pvt->pci_noncore,
MC_CFG_CONTROL, 8);
debugf0("Error inject addr match 0x%016llx, ecc 0x%08x,"
" inject 0x%08x\n",
mask, pvt->inject.eccmask, injectmask);
edac_dbg(0, "Error inject addr match 0x%016llx, ecc 0x%08x, inject 0x%08x\n",
mask, pvt->inject.eccmask, injectmask);
return count;
}
static ssize_t i7core_inject_enable_show(struct mem_ctl_info *mci,
char *data)
static ssize_t i7core_inject_enable_show(struct device *dev,
struct device_attribute *mattr,
char *data)
{
struct mem_ctl_info *mci = to_mci(dev);
struct i7core_pvt *pvt = mci->pvt_info;
u32 injectmask;
@ -1014,7 +1038,7 @@ static ssize_t i7core_inject_enable_show(struct mem_ctl_info *mci,
pci_read_config_dword(pvt->pci_ch[pvt->inject.channel][0],
MC_CHANNEL_ERROR_INJECT, &injectmask);
debugf0("Inject error read: 0x%018x\n", injectmask);
edac_dbg(0, "Inject error read: 0x%018x\n", injectmask);
if (injectmask & 0x0c)
pvt->inject.enable = 1;
@ -1024,12 +1048,14 @@ static ssize_t i7core_inject_enable_show(struct mem_ctl_info *mci,
#define DECLARE_COUNTER(param) \
static ssize_t i7core_show_counter_##param( \
struct mem_ctl_info *mci, \
char *data) \
struct device *dev, \
struct device_attribute *mattr, \
char *data) \
{ \
struct mem_ctl_info *mci = to_mci(dev); \
struct i7core_pvt *pvt = mci->pvt_info; \
\
debugf1("%s() \n", __func__); \
edac_dbg(1, "\n"); \
if (!pvt->ce_count_available || (pvt->is_registered)) \
return sprintf(data, "data unavailable\n"); \
return sprintf(data, "%lu\n", \
@ -1037,121 +1063,179 @@ static ssize_t i7core_show_counter_##param( \
}
#define ATTR_COUNTER(param) \
{ \
.attr = { \
.name = __stringify(udimm##param), \
.mode = (S_IRUGO | S_IWUSR) \
}, \
.show = i7core_show_counter_##param \
}
static DEVICE_ATTR(udimm##param, S_IRUGO | S_IWUSR, \
i7core_show_counter_##param, \
NULL)
DECLARE_COUNTER(0);
DECLARE_COUNTER(1);
DECLARE_COUNTER(2);
ATTR_COUNTER(0);
ATTR_COUNTER(1);
ATTR_COUNTER(2);
/*
* Sysfs struct
* inject_addrmatch device sysfs struct
*/
static const struct mcidev_sysfs_attribute i7core_addrmatch_attrs[] = {
ATTR_ADDR_MATCH(channel),
ATTR_ADDR_MATCH(dimm),
ATTR_ADDR_MATCH(rank),
ATTR_ADDR_MATCH(bank),
ATTR_ADDR_MATCH(page),
ATTR_ADDR_MATCH(col),
{ } /* End of list */
static struct attribute *i7core_addrmatch_attrs[] = {
&dev_attr_channel.attr,
&dev_attr_dimm.attr,
&dev_attr_rank.attr,
&dev_attr_bank.attr,
&dev_attr_page.attr,
&dev_attr_col.attr,
NULL
};
static const struct mcidev_sysfs_group i7core_inject_addrmatch = {
.name = "inject_addrmatch",
.mcidev_attr = i7core_addrmatch_attrs,
static struct attribute_group addrmatch_grp = {
.attrs = i7core_addrmatch_attrs,
};
static const struct mcidev_sysfs_attribute i7core_udimm_counters_attrs[] = {
ATTR_COUNTER(0),
ATTR_COUNTER(1),
ATTR_COUNTER(2),
{ .attr = { .name = NULL } }
static const struct attribute_group *addrmatch_groups[] = {
&addrmatch_grp,
NULL
};
static const struct mcidev_sysfs_group i7core_udimm_counters = {
.name = "all_channel_counts",
.mcidev_attr = i7core_udimm_counters_attrs,
static void addrmatch_release(struct device *device)
{
edac_dbg(1, "Releasing device %s\n", dev_name(device));
kfree(device);
}
static struct device_type addrmatch_type = {
.groups = addrmatch_groups,
.release = addrmatch_release,
};
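Attaching addrmatch_groups through a device_type means every attribute in addrmatch_grp is created when the inject_addrmatch device is added, and the release hook frees the allocation once the last reference is dropped. For contrast only, a named attribute_group placed directly on the parent device would also yield an inject_addrmatch subdirectory without a child device; this is not what the driver does, just an illustration of the alternative shape.

/* Alternative shape, shown for contrast only (not what this driver
 * does): a named group on the parent device also creates the
 * "inject_addrmatch" subdirectory in sysfs. */
static const struct attribute_group addrmatch_named_grp = {
	.name  = "inject_addrmatch",
	.attrs = i7core_addrmatch_attrs,
};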
static const struct mcidev_sysfs_attribute i7core_sysfs_rdimm_attrs[] = {
{
.attr = {
.name = "inject_section",
.mode = (S_IRUGO | S_IWUSR)
},
.show = i7core_inject_section_show,
.store = i7core_inject_section_store,
}, {
.attr = {
.name = "inject_type",
.mode = (S_IRUGO | S_IWUSR)
},
.show = i7core_inject_type_show,
.store = i7core_inject_type_store,
}, {
.attr = {
.name = "inject_eccmask",
.mode = (S_IRUGO | S_IWUSR)
},
.show = i7core_inject_eccmask_show,
.store = i7core_inject_eccmask_store,
}, {
.grp = &i7core_inject_addrmatch,
}, {
.attr = {
.name = "inject_enable",
.mode = (S_IRUGO | S_IWUSR)
},
.show = i7core_inject_enable_show,
.store = i7core_inject_enable_store,
},
{ } /* End of list */
/*
* all_channel_counts sysfs struct
*/
static struct attribute *i7core_udimm_counters_attrs[] = {
&dev_attr_udimm0.attr,
&dev_attr_udimm1.attr,
&dev_attr_udimm2.attr,
NULL
};
static const struct mcidev_sysfs_attribute i7core_sysfs_udimm_attrs[] = {
{
.attr = {
.name = "inject_section",
.mode = (S_IRUGO | S_IWUSR)
},
.show = i7core_inject_section_show,
.store = i7core_inject_section_store,
}, {
.attr = {
.name = "inject_type",
.mode = (S_IRUGO | S_IWUSR)
},
.show = i7core_inject_type_show,
.store = i7core_inject_type_store,
}, {
.attr = {
.name = "inject_eccmask",
.mode = (S_IRUGO | S_IWUSR)
},
.show = i7core_inject_eccmask_show,
.store = i7core_inject_eccmask_store,
}, {
.grp = &i7core_inject_addrmatch,
}, {
.attr = {
.name = "inject_enable",
.mode = (S_IRUGO | S_IWUSR)
},
.show = i7core_inject_enable_show,
.store = i7core_inject_enable_store,
}, {
.grp = &i7core_udimm_counters,
},
{ } /* End of list */
static struct attribute_group all_channel_counts_grp = {
.attrs = i7core_udimm_counters_attrs,
};
static const struct attribute_group *all_channel_counts_groups[] = {
&all_channel_counts_grp,
NULL
};
static void all_channel_counts_release(struct device *device)
{
edac_dbg(1, "Releasing device %s\n", dev_name(device));
kfree(device);
}
static struct device_type all_channel_counts_type = {
.groups = all_channel_counts_groups,
.release = all_channel_counts_release,
};
/*
* inject sysfs attributes
*/
static DEVICE_ATTR(inject_section, S_IRUGO | S_IWUSR,
i7core_inject_section_show, i7core_inject_section_store);
static DEVICE_ATTR(inject_type, S_IRUGO | S_IWUSR,
i7core_inject_type_show, i7core_inject_type_store);
static DEVICE_ATTR(inject_eccmask, S_IRUGO | S_IWUSR,
i7core_inject_eccmask_show, i7core_inject_eccmask_store);
static DEVICE_ATTR(inject_enable, S_IRUGO | S_IWUSR,
i7core_inject_enable_show, i7core_inject_enable_store);
static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
{
struct i7core_pvt *pvt = mci->pvt_info;
int rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_section);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_type);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_eccmask);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_enable);
if (rc < 0)
return rc;
pvt->addrmatch_dev = kzalloc(sizeof(*pvt->addrmatch_dev), GFP_KERNEL);
if (!pvt->addrmatch_dev)
return rc;
pvt->addrmatch_dev->type = &addrmatch_type;
pvt->addrmatch_dev->bus = mci->dev.bus;
device_initialize(pvt->addrmatch_dev);
pvt->addrmatch_dev->parent = &mci->dev;
dev_set_name(pvt->addrmatch_dev, "inject_addrmatch");
dev_set_drvdata(pvt->addrmatch_dev, mci);
edac_dbg(1, "creating %s\n", dev_name(pvt->addrmatch_dev));
rc = device_add(pvt->addrmatch_dev);
if (rc < 0)
return rc;
if (!pvt->is_registered) {
pvt->chancounts_dev = kzalloc(sizeof(*pvt->chancounts_dev),
GFP_KERNEL);
if (!pvt->chancounts_dev) {
put_device(pvt->addrmatch_dev);
device_del(pvt->addrmatch_dev);
return rc;
}
pvt->chancounts_dev->type = &all_channel_counts_type;
pvt->chancounts_dev->bus = mci->dev.bus;
device_initialize(pvt->chancounts_dev);
pvt->chancounts_dev->parent = &mci->dev;
dev_set_name(pvt->chancounts_dev, "all_channel_counts");
dev_set_drvdata(pvt->chancounts_dev, mci);
edac_dbg(1, "creating %s\n", dev_name(pvt->chancounts_dev));
rc = device_add(pvt->chancounts_dev);
if (rc < 0)
return rc;
}
return 0;
}
static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
{
struct i7core_pvt *pvt = mci->pvt_info;
edac_dbg(1, "\n");
device_remove_file(&mci->dev, &dev_attr_inject_section);
device_remove_file(&mci->dev, &dev_attr_inject_type);
device_remove_file(&mci->dev, &dev_attr_inject_eccmask);
device_remove_file(&mci->dev, &dev_attr_inject_enable);
if (!pvt->is_registered) {
put_device(pvt->chancounts_dev);
device_del(pvt->chancounts_dev);
}
put_device(pvt->addrmatch_dev);
device_del(pvt->addrmatch_dev);
}
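For reference, the usual driver-core pairing for a device that went through device_initialize() and device_add() is device_del() followed by a final put_device() (the combination that device_unregister() provides); with these child devices the .release hooks above then kfree() the allocations. A minimal sketch under that assumption, with a hypothetical helper name:

/* Sketch of the conventional teardown order for a child device created
 * via device_initialize() + device_add(). */
static void example_child_teardown(struct device *dev)
{
	device_del(dev);	/* unhook from sysfs and the device list */
	put_device(dev);	/* drop the last ref; .release then runs  */
}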
/****************************************************************************
Device initialization routines: put/get, init/exit
****************************************************************************/
@ -1164,14 +1248,14 @@ static void i7core_put_devices(struct i7core_dev *i7core_dev)
{
int i;
debugf0(__FILE__ ": %s()\n", __func__);
edac_dbg(0, "\n");
for (i = 0; i < i7core_dev->n_devs; i++) {
struct pci_dev *pdev = i7core_dev->pdev[i];
if (!pdev)
continue;
debugf0("Removing dev %02x:%02x.%d\n",
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
edac_dbg(0, "Removing dev %02x:%02x.%d\n",
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
pci_dev_put(pdev);
}
}
@ -1214,12 +1298,12 @@ static unsigned i7core_pci_lastbus(void)
while ((b = pci_find_next_bus(b)) != NULL) {
bus = b->number;
debugf0("Found bus %d\n", bus);
edac_dbg(0, "Found bus %d\n", bus);
if (bus > last_bus)
last_bus = bus;
}
debugf0("Last bus %d\n", last_bus);
edac_dbg(0, "Last bus %d\n", last_bus);
return last_bus;
}
@ -1326,10 +1410,10 @@ static int i7core_get_onedevice(struct pci_dev **prev,
return -ENODEV;
}
debugf0("Detected socket %d dev %02x:%02x.%d PCI ID %04x:%04x\n",
socket, bus, dev_descr->dev,
dev_descr->func,
PCI_VENDOR_ID_INTEL, dev_descr->dev_id);
edac_dbg(0, "Detected socket %d dev %02x:%02x.%d PCI ID %04x:%04x\n",
socket, bus, dev_descr->dev,
dev_descr->func,
PCI_VENDOR_ID_INTEL, dev_descr->dev_id);
/*
* As stated on drivers/pci/search.c, the reference count for
@ -1427,13 +1511,13 @@ static int mci_bind_devs(struct mem_ctl_info *mci,
family = "unknown";
pvt->enable_scrub = false;
}
debugf0("Detected a processor type %s\n", family);
edac_dbg(0, "Detected a processor type %s\n", family);
} else
goto error;
debugf0("Associated fn %d.%d, dev = %p, socket %d\n",
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),
pdev, i7core_dev->socket);
edac_dbg(0, "Associated fn %d.%d, dev = %p, socket %d\n",
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),
pdev, i7core_dev->socket);
if (PCI_SLOT(pdev->devfn) == 3 &&
PCI_FUNC(pdev->devfn) == 2)
@ -1452,18 +1536,6 @@ error:
/****************************************************************************
Error check routines
****************************************************************************/
static void i7core_rdimm_update_errcount(struct mem_ctl_info *mci,
const int chan,
const int dimm,
const int add)
{
int i;
for (i = 0; i < add; i++) {
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 0, 0, 0,
chan, dimm, -1, "error", "", NULL);
}
}
static void i7core_rdimm_update_ce_count(struct mem_ctl_info *mci,
const int chan,
@ -1502,12 +1574,17 @@ static void i7core_rdimm_update_ce_count(struct mem_ctl_info *mci,
/*updated the edac core */
if (add0 != 0)
i7core_rdimm_update_errcount(mci, chan, 0, add0);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, add0,
0, 0, 0,
chan, 0, -1, "error", "");
if (add1 != 0)
i7core_rdimm_update_errcount(mci, chan, 1, add1);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, add1,
0, 0, 0,
chan, 1, -1, "error", "");
if (add2 != 0)
i7core_rdimm_update_errcount(mci, chan, 2, add2);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, add2,
0, 0, 0,
chan, 2, -1, "error", "");
}
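With the count argument available, the old i7core_rdimm_update_errcount() helper removed earlier in this hunk, which issued one edac_mc_handle_error() call per counted error, collapses into a single call per slot. Side by side, taken from this diff:

/* Before: one call per error, looped in the removed helper. */
for (i = 0; i < add0; i++)
	edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 0, 0, 0,
			     chan, 0, -1, "error", "", NULL);

/* After: the count rides in the third argument. */
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, add0, 0, 0, 0,
		     chan, 0, -1, "error", "");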
static void i7core_rdimm_check_mc_ecc_err(struct mem_ctl_info *mci)
@ -1530,8 +1607,8 @@ static void i7core_rdimm_check_mc_ecc_err(struct mem_ctl_info *mci)
pci_read_config_dword(pvt->pci_mcr[2], MC_COR_ECC_CNT_5,
&rcv[2][1]);
for (i = 0 ; i < 3; i++) {
debugf3("MC_COR_ECC_CNT%d = 0x%x; MC_COR_ECC_CNT%d = 0x%x\n",
(i * 2), rcv[i][0], (i * 2) + 1, rcv[i][1]);
edac_dbg(3, "MC_COR_ECC_CNT%d = 0x%x; MC_COR_ECC_CNT%d = 0x%x\n",
(i * 2), rcv[i][0], (i * 2) + 1, rcv[i][1]);
/*if the channel has 3 dimms*/
if (pvt->channel[i].dimms > 2) {
new0 = DIMM_BOT_COR_ERR(rcv[i][0]);
@ -1562,7 +1639,7 @@ static void i7core_udimm_check_mc_ecc_err(struct mem_ctl_info *mci)
int new0, new1, new2;
if (!pvt->pci_mcr[4]) {
debugf0("%s MCR registers not found\n", __func__);
edac_dbg(0, "MCR registers not found\n");
return;
}
@ -1626,7 +1703,7 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
const struct mce *m)
{
struct i7core_pvt *pvt = mci->pvt_info;
char *type, *optype, *err, msg[80];
char *type, *optype, *err;
enum hw_event_mc_err_type tp_event;
unsigned long error = m->status & 0x1ff0000l;
bool uncorrected_error = m->mcgstatus & 1ll << 61;
@ -1704,20 +1781,18 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
err = "unknown";
}
snprintf(msg, sizeof(msg), "count=%d %s", core_err_cnt, optype);
/*
* Call the helper to output message
* FIXME: what to do if core_err_cnt > 1? Currently, it generates
* only one event
*/
if (uncorrected_error || !pvt->is_registered)
edac_mc_handle_error(tp_event, mci,
edac_mc_handle_error(tp_event, mci, core_err_cnt,
m->addr >> PAGE_SHIFT,
m->addr & ~PAGE_MASK,
syndrome,
channel, dimm, -1,
err, msg, m);
err, optype);
}
/*
@ -2094,8 +2169,7 @@ static void i7core_unregister_mci(struct i7core_dev *i7core_dev)
struct i7core_pvt *pvt;
if (unlikely(!mci || !mci->pvt_info)) {
debugf0("MC: " __FILE__ ": %s(): dev = %p\n",
__func__, &i7core_dev->pdev[0]->dev);
edac_dbg(0, "MC: dev = %p\n", &i7core_dev->pdev[0]->dev);
i7core_printk(KERN_ERR, "Couldn't find mci handler\n");
return;
@ -2103,8 +2177,7 @@ static void i7core_unregister_mci(struct i7core_dev *i7core_dev)
pvt = mci->pvt_info;
debugf0("MC: " __FILE__ ": %s(): mci = %p, dev = %p\n",
__func__, mci, &i7core_dev->pdev[0]->dev);
edac_dbg(0, "MC: mci = %p, dev = %p\n", mci, &i7core_dev->pdev[0]->dev);
/* Disable scrubrate setting */
if (pvt->enable_scrub)
@ -2114,9 +2187,10 @@ static void i7core_unregister_mci(struct i7core_dev *i7core_dev)
i7core_pci_ctl_release(pvt);
/* Remove MC sysfs nodes */
edac_mc_del_mc(mci->dev);
i7core_delete_sysfs_devices(mci);
edac_mc_del_mc(mci->pdev);
debugf1("%s: free mci struct\n", mci->ctl_name);
edac_dbg(1, "%s: free mci struct\n", mci->ctl_name);
kfree(mci->ctl_name);
edac_mc_free(mci);
i7core_dev->mci = NULL;
@ -2142,8 +2216,7 @@ static int i7core_register_mci(struct i7core_dev *i7core_dev)
if (unlikely(!mci))
return -ENOMEM;
debugf0("MC: " __FILE__ ": %s(): mci = %p, dev = %p\n",
__func__, mci, &i7core_dev->pdev[0]->dev);
edac_dbg(0, "MC: mci = %p, dev = %p\n", mci, &i7core_dev->pdev[0]->dev);
pvt = mci->pvt_info;
memset(pvt, 0, sizeof(*pvt));
@ -2172,15 +2245,11 @@ static int i7core_register_mci(struct i7core_dev *i7core_dev)
if (unlikely(rc < 0))
goto fail0;
if (pvt->is_registered)
mci->mc_driver_sysfs_attributes = i7core_sysfs_rdimm_attrs;
else
mci->mc_driver_sysfs_attributes = i7core_sysfs_udimm_attrs;
/* Get dimm basic config */
get_dimm_config(mci);
/* record ptr to the generic device */
mci->dev = &i7core_dev->pdev[0]->dev;
mci->pdev = &i7core_dev->pdev[0]->dev;
/* Set the function pointer to an actual operation function */
mci->edac_check = i7core_check_error;
@ -2190,8 +2259,7 @@ static int i7core_register_mci(struct i7core_dev *i7core_dev)
/* add this new MC control structure to EDAC's list of MCs */
if (unlikely(edac_mc_add_mc(mci))) {
debugf0("MC: " __FILE__
": %s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(0, "MC: failed edac_mc_add_mc()\n");
/* FIXME: perhaps some code should go here that disables error
* reporting if we just enabled it
*/
@ -2199,6 +2267,12 @@ static int i7core_register_mci(struct i7core_dev *i7core_dev)
rc = -EINVAL;
goto fail0;
}
if (i7core_create_sysfs_devices(mci)) {
edac_dbg(0, "MC: failed to create sysfs nodes\n");
edac_mc_del_mc(mci->pdev);
rc = -EINVAL;
goto fail0;
}
/* Default error mask is any memory */
pvt->inject.channel = 0;
@ -2298,7 +2372,7 @@ static void __devexit i7core_remove(struct pci_dev *pdev)
{
struct i7core_dev *i7core_dev;
debugf0(__FILE__ ": %s()\n", __func__);
edac_dbg(0, "\n");
/*
* we have a trouble here: pdev value for removal will be wrong, since
@ -2347,7 +2421,7 @@ static int __init i7core_init(void)
{
int pci_rc;
debugf2("MC: " __FILE__ ": %s()\n", __func__);
edac_dbg(2, "\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -2374,7 +2448,7 @@ static int __init i7core_init(void)
*/
static void __exit i7core_exit(void)
{
debugf2("MC: " __FILE__ ": %s()\n", __func__);
edac_dbg(2, "\n");
pci_unregister_driver(&i7core_driver);
mce_unregister_decode_chain(&i7_mce_dec);
}

drivers/edac/i82443bxgx_edac.c View File

@ -124,7 +124,7 @@ static void i82443bxgx_edacmc_get_error_info(struct mem_ctl_info *mci,
*info)
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
pci_read_config_dword(pdev, I82443BXGX_EAP, &info->eap);
if (info->eap & I82443BXGX_EAP_OFFSET_SBE)
/* Clear error to allow next error to be reported [p.61] */
@ -156,19 +156,19 @@ static int i82443bxgx_edacmc_process_error_info(struct mem_ctl_info *mci,
if (info->eap & I82443BXGX_EAP_OFFSET_SBE) {
error_found = 1;
if (handle_errors)
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, pageoffset, 0,
edac_mc_find_csrow_by_page(mci, page),
0, -1, mci->ctl_name, "", NULL);
0, -1, mci->ctl_name, "");
}
if (info->eap & I82443BXGX_EAP_OFFSET_MBE) {
error_found = 1;
if (handle_errors)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
page, pageoffset, 0,
edac_mc_find_csrow_by_page(mci, page),
0, -1, mci->ctl_name, "", NULL);
0, -1, mci->ctl_name, "");
}
return error_found;
@ -178,7 +178,7 @@ static void i82443bxgx_edacmc_check(struct mem_ctl_info *mci)
{
struct i82443bxgx_edacmc_error_info info;
debugf1("MC%d: %s: %s()\n", mci->mc_idx, __FILE__, __func__);
edac_dbg(1, "MC%d\n", mci->mc_idx);
i82443bxgx_edacmc_get_error_info(mci, &info);
i82443bxgx_edacmc_process_error_info(mci, &info, 1);
}
@ -197,18 +197,17 @@ static void i82443bxgx_init_csrows(struct mem_ctl_info *mci,
pci_read_config_byte(pdev, I82443BXGX_DRAMC, &dramc);
row_high_limit_last = 0;
for (index = 0; index < mci->nr_csrows; index++) {
csrow = &mci->csrows[index];
dimm = csrow->channels[0].dimm;
csrow = mci->csrows[index];
dimm = csrow->channels[0]->dimm;
pci_read_config_byte(pdev, I82443BXGX_DRB + index, &drbar);
debugf1("MC%d: %s: %s() Row=%d DRB = %#0x\n",
mci->mc_idx, __FILE__, __func__, index, drbar);
edac_dbg(1, "MC%d: Row=%d DRB = %#0x\n",
mci->mc_idx, index, drbar);
row_high_limit = ((u32) drbar << 23);
/* find the DRAM Chip Select Base address and mask */
debugf1("MC%d: %s: %s() Row=%d, "
"Boundary Address=%#0x, Last = %#0x\n",
mci->mc_idx, __FILE__, __func__, index, row_high_limit,
row_high_limit_last);
edac_dbg(1, "MC%d: Row=%d, Boundary Address=%#0x, Last = %#0x\n",
mci->mc_idx, index, row_high_limit,
row_high_limit_last);
/* 440GX goes to 2GB, represented with a DRB of 0. */
if (row_high_limit_last && !row_high_limit)
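The csrow walk in this hunk picks up the same allocation rework as the other drivers in the series: mci->csrows[] and csrow->channels[] are arrays of pointers now, so the lookups become mci->csrows[index] and csrow->channels[0]->dimm. The access pattern in isolation, as a sketch with a made-up helper name:

/* Sketch assuming the post-series layout: csrows and channels are
 * pointer arrays, each channel carrying a pointer to its dimm_info. */
static struct dimm_info *first_dimm_of_row(struct mem_ctl_info *mci,
					    int index)
{
	struct csrow_info *csrow = mci->csrows[index];	/* was &mci->csrows[index] */

	return csrow->channels[0]->dimm;		/* was .channels[0].dimm */
}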
@ -241,7 +240,7 @@ static int i82443bxgx_edacmc_probe1(struct pci_dev *pdev, int dev_idx)
enum mem_type mtype;
enum edac_type edac_mode;
debugf0("MC: %s: %s()\n", __FILE__, __func__);
edac_dbg(0, "MC:\n");
/* Something is really hosed if PCI config space reads from
* the MC aren't working.
@ -259,8 +258,8 @@ static int i82443bxgx_edacmc_probe1(struct pci_dev *pdev, int dev_idx)
if (mci == NULL)
return -ENOMEM;
debugf0("MC: %s: %s(): mci = %p\n", __FILE__, __func__, mci);
mci->dev = &pdev->dev;
edac_dbg(0, "MC: mci = %p\n", mci);
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_EDO | MEM_FLAG_SDR | MEM_FLAG_RDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_EC | EDAC_FLAG_SECDED;
pci_read_config_byte(pdev, I82443BXGX_DRAMC, &dramc);
@ -275,8 +274,7 @@ static int i82443bxgx_edacmc_probe1(struct pci_dev *pdev, int dev_idx)
mtype = MEM_RDR;
break;
default:
debugf0("Unknown/reserved DRAM type value "
"in DRAMC register!\n");
edac_dbg(0, "Unknown/reserved DRAM type value in DRAMC register!\n");
mtype = -MEM_UNKNOWN;
}
@ -305,8 +303,7 @@ static int i82443bxgx_edacmc_probe1(struct pci_dev *pdev, int dev_idx)
edac_mode = EDAC_SECDED;
break;
default:
debugf0("%s(): Unknown/reserved ECC state "
"in NBXCFG register!\n", __func__);
edac_dbg(0, "Unknown/reserved ECC state in NBXCFG register!\n");
edac_mode = EDAC_UNKNOWN;
break;
}
@ -330,7 +327,7 @@ static int i82443bxgx_edacmc_probe1(struct pci_dev *pdev, int dev_idx)
mci->ctl_page_to_phys = NULL;
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto fail;
}
@ -345,7 +342,7 @@ static int i82443bxgx_edacmc_probe1(struct pci_dev *pdev, int dev_idx)
__func__);
}
debugf3("MC: %s: %s(): success\n", __FILE__, __func__);
edac_dbg(3, "MC: success\n");
return 0;
fail:
@ -361,7 +358,7 @@ static int __devinit i82443bxgx_edacmc_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("MC: %s: %s()\n", __FILE__, __func__);
edac_dbg(0, "MC:\n");
/* don't need to call pci_enable_device() */
rc = i82443bxgx_edacmc_probe1(pdev, ent->driver_data);
@ -376,7 +373,7 @@ static void __devexit i82443bxgx_edacmc_remove_one(struct pci_dev *pdev)
{
struct mem_ctl_info *mci;
debugf0("%s: %s()\n", __FILE__, __func__);
edac_dbg(0, "\n");
if (i82443bxgx_pci)
edac_pci_release_generic_ctl(i82443bxgx_pci);
@ -428,7 +425,7 @@ static int __init i82443bxgx_edacmc_init(void)
id = &i82443bxgx_pci_tbl[i];
}
if (!mci_pdev) {
debugf0("i82443bxgx pci_get_device fail\n");
edac_dbg(0, "i82443bxgx pci_get_device fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -436,7 +433,7 @@ static int __init i82443bxgx_edacmc_init(void)
pci_rc = i82443bxgx_edacmc_init_one(mci_pdev, i82443bxgx_pci_tbl);
if (pci_rc < 0) {
debugf0("i82443bxgx init fail\n");
edac_dbg(0, "i82443bxgx init fail\n");
pci_rc = -ENODEV;
goto fail1;
}

drivers/edac/i82860_edac.c View File

@ -67,7 +67,7 @@ static void i82860_get_error_info(struct mem_ctl_info *mci,
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
/*
* This is a mess because there is no atomic way to read all the
@ -109,25 +109,25 @@ static int i82860_process_error_info(struct mem_ctl_info *mci,
return 1;
if ((info->errsts ^ info->errsts2) & 0x0003) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0,
-1, -1, -1, "UE overwrote CE", "", NULL);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0,
-1, -1, -1, "UE overwrote CE", "");
info->errsts = info->errsts2;
}
info->eap >>= PAGE_SHIFT;
row = edac_mc_find_csrow_by_page(mci, info->eap);
dimm = mci->csrows[row].channels[0].dimm;
dimm = mci->csrows[row]->channels[0]->dimm;
if (info->errsts & 0x0002)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
info->eap, 0, 0,
dimm->location[0], dimm->location[1], -1,
"i82860 UE", "", NULL);
"i82860 UE", "");
else
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
info->eap, 0, info->derrsyn,
dimm->location[0], dimm->location[1], -1,
"i82860 CE", "", NULL);
"i82860 CE", "");
return 1;
}
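The errsts/errsts2 comparison above is how the driver copes with the non-atomic register snapshot mentioned in the comment earlier in this file: when the two reads disagree in the error bits, the first event is assumed to have been overwritten and the second snapshot is reported instead. The same detect-on-reread idea in isolation, with a hypothetical helper and mask:

/* Generic sketch of the double-read check; helper name and mask are
 * hypothetical.  A mismatch in the watched bits means a new error
 * landed between the two status reads. */
static bool status_overwritten(u16 errsts, u16 errsts2, u16 mask)
{
	return ((errsts ^ errsts2) & mask) != 0;
}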
@ -136,7 +136,7 @@ static void i82860_check(struct mem_ctl_info *mci)
{
struct i82860_error_info info;
debugf1("MC%d: %s()\n", mci->mc_idx, __func__);
edac_dbg(1, "MC%d\n", mci->mc_idx);
i82860_get_error_info(mci, &info);
i82860_process_error_info(mci, &info, 1);
}
@ -161,14 +161,13 @@ static void i82860_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev)
* in all eight rows.
*/
for (index = 0; index < mci->nr_csrows; index++) {
csrow = &mci->csrows[index];
dimm = csrow->channels[0].dimm;
csrow = mci->csrows[index];
dimm = csrow->channels[0]->dimm;
pci_read_config_word(pdev, I82860_GBA + index * 2, &value);
cumul_size = (value & I82860_GBA_MASK) <<
(I82860_GBA_SHIFT - PAGE_SHIFT);
debugf3("%s(): (%d) cumul_size 0x%x\n", __func__, index,
cumul_size);
edac_dbg(3, "(%d) cumul_size 0x%x\n", index, cumul_size);
if (cumul_size == last_cumul_size)
continue; /* not populated */
@ -210,8 +209,8 @@ static int i82860_probe1(struct pci_dev *pdev, int dev_idx)
if (!mci)
return -ENOMEM;
debugf3("%s(): init mci\n", __func__);
mci->dev = &pdev->dev;
edac_dbg(3, "init mci\n");
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_DDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
/* I"m not sure about this but I think that all RDRAM is SECDED */
@ -229,7 +228,7 @@ static int i82860_probe1(struct pci_dev *pdev, int dev_idx)
* type of memory controller. The ID is therefore hardcoded to 0.
*/
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto fail;
}
@ -245,7 +244,7 @@ static int i82860_probe1(struct pci_dev *pdev, int dev_idx)
}
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
@ -260,7 +259,7 @@ static int __devinit i82860_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
i82860_printk(KERN_INFO, "i82860 init one\n");
if (pci_enable_device(pdev) < 0)
@ -278,7 +277,7 @@ static void __devexit i82860_remove_one(struct pci_dev *pdev)
{
struct mem_ctl_info *mci;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (i82860_pci)
edac_pci_release_generic_ctl(i82860_pci);
@ -311,7 +310,7 @@ static int __init i82860_init(void)
{
int pci_rc;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -324,7 +323,7 @@ static int __init i82860_init(void)
PCI_DEVICE_ID_INTEL_82860_0, NULL);
if (mci_pdev == NULL) {
debugf0("860 pci_get_device fail\n");
edac_dbg(0, "860 pci_get_device fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -332,7 +331,7 @@ static int __init i82860_init(void)
pci_rc = i82860_init_one(mci_pdev, i82860_pci_tbl);
if (pci_rc < 0) {
debugf0("860 init fail\n");
edac_dbg(0, "860 init fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -352,7 +351,7 @@ fail0:
static void __exit i82860_exit(void)
{
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
pci_unregister_driver(&i82860_driver);

drivers/edac/i82875p_edac.c View File

@ -189,7 +189,7 @@ static void i82875p_get_error_info(struct mem_ctl_info *mci,
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
/*
* This is a mess because there is no atomic way to read all the
@ -227,7 +227,7 @@ static int i82875p_process_error_info(struct mem_ctl_info *mci,
{
int row, multi_chan;
multi_chan = mci->csrows[0].nr_channels - 1;
multi_chan = mci->csrows[0]->nr_channels - 1;
if (!(info->errsts & 0x0081))
return 0;
@ -236,9 +236,9 @@ static int i82875p_process_error_info(struct mem_ctl_info *mci,
return 1;
if ((info->errsts ^ info->errsts2) & 0x0081) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0,
-1, -1, -1,
"UE overwrote CE", "", NULL);
"UE overwrote CE", "");
info->errsts = info->errsts2;
}
@ -246,15 +246,15 @@ static int i82875p_process_error_info(struct mem_ctl_info *mci,
row = edac_mc_find_csrow_by_page(mci, info->eap);
if (info->errsts & 0x0080)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
info->eap, 0, 0,
row, -1, -1,
"i82875p UE", "", NULL);
"i82875p UE", "");
else
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
info->eap, 0, info->derrsyn,
row, multi_chan ? (info->des & 0x1) : 0,
-1, "i82875p CE", "", NULL);
-1, "i82875p CE", "");
return 1;
}
@ -263,7 +263,7 @@ static void i82875p_check(struct mem_ctl_info *mci)
{
struct i82875p_error_info info;
debugf1("MC%d: %s()\n", mci->mc_idx, __func__);
edac_dbg(1, "MC%d\n", mci->mc_idx);
i82875p_get_error_info(mci, &info);
i82875p_process_error_info(mci, &info, 1);
}
@ -367,12 +367,11 @@ static void i82875p_init_csrows(struct mem_ctl_info *mci,
*/
for (index = 0; index < mci->nr_csrows; index++) {
csrow = &mci->csrows[index];
csrow = mci->csrows[index];
value = readb(ovrfl_window + I82875P_DRB + index);
cumul_size = value << (I82875P_DRB_SHIFT - PAGE_SHIFT);
debugf3("%s(): (%d) cumul_size 0x%x\n", __func__, index,
cumul_size);
edac_dbg(3, "(%d) cumul_size 0x%x\n", index, cumul_size);
if (cumul_size == last_cumul_size)
continue; /* not populated */
@ -382,7 +381,7 @@ static void i82875p_init_csrows(struct mem_ctl_info *mci,
last_cumul_size = cumul_size;
for (j = 0; j < nr_chans; j++) {
dimm = csrow->channels[j].dimm;
dimm = csrow->channels[j]->dimm;
dimm->nr_pages = nr_pages / nr_chans;
dimm->grain = 1 << 12; /* I82875P_EAP has 4KiB resolution */
@ -405,7 +404,7 @@ static int i82875p_probe1(struct pci_dev *pdev, int dev_idx)
u32 nr_chans;
struct i82875p_error_info discard;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
ovrfl_pdev = pci_get_device(PCI_VEND_DEV(INTEL, 82875_6), NULL);
@ -426,11 +425,8 @@ static int i82875p_probe1(struct pci_dev *pdev, int dev_idx)
goto fail0;
}
/* Keeps mci available after edac_mc_del_mc() till edac_mc_free() */
kobject_get(&mci->edac_mci_kobj);
debugf3("%s(): init mci\n", __func__);
mci->dev = &pdev->dev;
edac_dbg(3, "init mci\n");
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_DDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
mci->edac_cap = EDAC_FLAG_UNKNOWN;
@ -440,7 +436,7 @@ static int i82875p_probe1(struct pci_dev *pdev, int dev_idx)
mci->dev_name = pci_name(pdev);
mci->edac_check = i82875p_check;
mci->ctl_page_to_phys = NULL;
debugf3("%s(): init pvt\n", __func__);
edac_dbg(3, "init pvt\n");
pvt = (struct i82875p_pvt *)mci->pvt_info;
pvt->ovrfl_pdev = ovrfl_pdev;
pvt->ovrfl_window = ovrfl_window;
@ -451,7 +447,7 @@ static int i82875p_probe1(struct pci_dev *pdev, int dev_idx)
* type of memory controller. The ID is therefore hardcoded to 0.
*/
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto fail1;
}
@ -467,11 +463,10 @@ static int i82875p_probe1(struct pci_dev *pdev, int dev_idx)
}
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
fail1:
kobject_put(&mci->edac_mci_kobj);
edac_mc_free(mci);
fail0:
@ -489,7 +484,7 @@ static int __devinit i82875p_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
i82875p_printk(KERN_INFO, "i82875p init one\n");
if (pci_enable_device(pdev) < 0)
@ -508,7 +503,7 @@ static void __devexit i82875p_remove_one(struct pci_dev *pdev)
struct mem_ctl_info *mci;
struct i82875p_pvt *pvt = NULL;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (i82875p_pci)
edac_pci_release_generic_ctl(i82875p_pci);
@ -554,7 +549,7 @@ static int __init i82875p_init(void)
{
int pci_rc;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -569,7 +564,7 @@ static int __init i82875p_init(void)
PCI_DEVICE_ID_INTEL_82875_0, NULL);
if (!mci_pdev) {
debugf0("875p pci_get_device fail\n");
edac_dbg(0, "875p pci_get_device fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -577,7 +572,7 @@ static int __init i82875p_init(void)
pci_rc = i82875p_init_one(mci_pdev, i82875p_pci_tbl);
if (pci_rc < 0) {
debugf0("875p init fail\n");
edac_dbg(0, "875p init fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -597,7 +592,7 @@ fail0:
static void __exit i82875p_exit(void)
{
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
i82875p_remove_one(mci_pdev);
pci_dev_put(mci_pdev);

View File

@ -241,7 +241,7 @@ static void i82975x_get_error_info(struct mem_ctl_info *mci,
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
/*
* This is a mess because there is no atomic way to read all the
@ -288,8 +288,8 @@ static int i82975x_process_error_info(struct mem_ctl_info *mci,
return 1;
if ((info->errsts ^ info->errsts2) & 0x0003) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0,
-1, -1, -1, "UE overwrote CE", "", NULL);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0,
-1, -1, -1, "UE overwrote CE", "");
info->errsts = info->errsts2;
}
@ -308,21 +308,21 @@ static int i82975x_process_error_info(struct mem_ctl_info *mci,
(info->xeap & 1) ? 1 : 0, info->eap, (unsigned int) page);
return 0;
}
chan = (mci->csrows[row].nr_channels == 1) ? 0 : info->eap & 1;
chan = (mci->csrows[row]->nr_channels == 1) ? 0 : info->eap & 1;
offst = info->eap
& ((1 << PAGE_SHIFT) -
(1 << mci->csrows[row].channels[chan].dimm->grain));
(1 << mci->csrows[row]->channels[chan]->dimm->grain));
if (info->errsts & 0x0002)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
page, offst, 0,
row, -1, -1,
"i82975x UE", "", NULL);
"i82975x UE", "");
else
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, offst, info->derrsyn,
row, chan ? chan : 0, -1,
"i82975x CE", "", NULL);
"i82975x CE", "");
return 1;
}
@ -331,7 +331,7 @@ static void i82975x_check(struct mem_ctl_info *mci)
{
struct i82975x_error_info info;
debugf1("MC%d: %s()\n", mci->mc_idx, __func__);
edac_dbg(1, "MC%d\n", mci->mc_idx);
i82975x_get_error_info(mci, &info);
i82975x_process_error_info(mci, &info, 1);
}
@ -394,7 +394,7 @@ static void i82975x_init_csrows(struct mem_ctl_info *mci,
*/
for (index = 0; index < mci->nr_csrows; index++) {
csrow = &mci->csrows[index];
csrow = mci->csrows[index];
value = readb(mch_window + I82975X_DRB + index +
((index >= 4) ? 0x80 : 0));
@ -406,8 +406,7 @@ static void i82975x_init_csrows(struct mem_ctl_info *mci,
*/
if (csrow->nr_channels > 1)
cumul_size <<= 1;
debugf3("%s(): (%d) cumul_size 0x%x\n", __func__, index,
cumul_size);
edac_dbg(3, "(%d) cumul_size 0x%x\n", index, cumul_size);
nr_pages = cumul_size - last_cumul_size;
if (!nr_pages)
@ -421,10 +420,10 @@ static void i82975x_init_csrows(struct mem_ctl_info *mci,
*/
dtype = i82975x_dram_type(mch_window, index);
for (chan = 0; chan < csrow->nr_channels; chan++) {
dimm = mci->csrows[index].channels[chan].dimm;
dimm = mci->csrows[index]->channels[chan]->dimm;
dimm->nr_pages = nr_pages / csrow->nr_channels;
strncpy(csrow->channels[chan].dimm->label,
strncpy(csrow->channels[chan]->dimm->label,
labels[(index >> 1) + (chan * 2)],
EDAC_MC_LABEL_LEN);
dimm->grain = 1 << 7; /* 128Byte cache-line resolution */
@ -489,11 +488,11 @@ static int i82975x_probe1(struct pci_dev *pdev, int dev_idx)
u8 c1drb[4];
#endif
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
pci_read_config_dword(pdev, I82975X_MCHBAR, &mchbar);
if (!(mchbar & 1)) {
debugf3("%s(): failed, MCHBAR disabled!\n", __func__);
edac_dbg(3, "failed, MCHBAR disabled!\n");
goto fail0;
}
mchbar &= 0xffffc000; /* bits 31:14 used for 16K window */
@ -558,8 +557,8 @@ static int i82975x_probe1(struct pci_dev *pdev, int dev_idx)
goto fail1;
}
debugf3("%s(): init mci\n", __func__);
mci->dev = &pdev->dev;
edac_dbg(3, "init mci\n");
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_DDR2;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
mci->edac_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
@ -569,7 +568,7 @@ static int i82975x_probe1(struct pci_dev *pdev, int dev_idx)
mci->dev_name = pci_name(pdev);
mci->edac_check = i82975x_check;
mci->ctl_page_to_phys = NULL;
debugf3("%s(): init pvt\n", __func__);
edac_dbg(3, "init pvt\n");
pvt = (struct i82975x_pvt *) mci->pvt_info;
pvt->mch_window = mch_window;
i82975x_init_csrows(mci, pdev, mch_window);
@ -578,12 +577,12 @@ static int i82975x_probe1(struct pci_dev *pdev, int dev_idx)
/* finalize this instance of memory controller with edac core */
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto fail2;
}
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
fail2:
@ -601,7 +600,7 @@ static int __devinit i82975x_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (pci_enable_device(pdev) < 0)
return -EIO;
@ -619,7 +618,7 @@ static void __devexit i82975x_remove_one(struct pci_dev *pdev)
struct mem_ctl_info *mci;
struct i82975x_pvt *pvt;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
mci = edac_mc_del_mc(&pdev->dev);
if (mci == NULL)
@ -655,7 +654,7 @@ static int __init i82975x_init(void)
{
int pci_rc;
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -669,7 +668,7 @@ static int __init i82975x_init(void)
PCI_DEVICE_ID_INTEL_82975_0, NULL);
if (!mci_pdev) {
debugf0("i82975x pci_get_device fail\n");
edac_dbg(0, "i82975x pci_get_device fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -677,7 +676,7 @@ static int __init i82975x_init(void)
pci_rc = i82975x_init_one(mci_pdev, i82975x_pci_tbl);
if (pci_rc < 0) {
debugf0("i82975x init fail\n");
edac_dbg(0, "i82975x init fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -697,7 +696,7 @@ fail0:
static void __exit i82975x_exit(void)
{
debugf3("%s()\n", __func__);
edac_dbg(3, "\n");
pci_unregister_driver(&i82975x_driver);

View File

@ -49,34 +49,45 @@ static u32 orig_hid1[2];
/************************ MC SYSFS parts ***********************************/
static ssize_t mpc85xx_mc_inject_data_hi_show(struct mem_ctl_info *mci,
#define to_mci(k) container_of(k, struct mem_ctl_info, dev)
static ssize_t mpc85xx_mc_inject_data_hi_show(struct device *dev,
struct device_attribute *mattr,
char *data)
{
struct mem_ctl_info *mci = to_mci(dev);
struct mpc85xx_mc_pdata *pdata = mci->pvt_info;
return sprintf(data, "0x%08x",
in_be32(pdata->mc_vbase +
MPC85XX_MC_DATA_ERR_INJECT_HI));
}
static ssize_t mpc85xx_mc_inject_data_lo_show(struct mem_ctl_info *mci,
static ssize_t mpc85xx_mc_inject_data_lo_show(struct device *dev,
struct device_attribute *mattr,
char *data)
{
struct mem_ctl_info *mci = to_mci(dev);
struct mpc85xx_mc_pdata *pdata = mci->pvt_info;
return sprintf(data, "0x%08x",
in_be32(pdata->mc_vbase +
MPC85XX_MC_DATA_ERR_INJECT_LO));
}
static ssize_t mpc85xx_mc_inject_ctrl_show(struct mem_ctl_info *mci, char *data)
static ssize_t mpc85xx_mc_inject_ctrl_show(struct device *dev,
struct device_attribute *mattr,
char *data)
{
struct mem_ctl_info *mci = to_mci(dev);
struct mpc85xx_mc_pdata *pdata = mci->pvt_info;
return sprintf(data, "0x%08x",
in_be32(pdata->mc_vbase + MPC85XX_MC_ECC_ERR_INJECT));
}
static ssize_t mpc85xx_mc_inject_data_hi_store(struct mem_ctl_info *mci,
static ssize_t mpc85xx_mc_inject_data_hi_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct mpc85xx_mc_pdata *pdata = mci->pvt_info;
if (isdigit(*data)) {
out_be32(pdata->mc_vbase + MPC85XX_MC_DATA_ERR_INJECT_HI,
@ -86,9 +97,11 @@ static ssize_t mpc85xx_mc_inject_data_hi_store(struct mem_ctl_info *mci,
return 0;
}
static ssize_t mpc85xx_mc_inject_data_lo_store(struct mem_ctl_info *mci,
static ssize_t mpc85xx_mc_inject_data_lo_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct mpc85xx_mc_pdata *pdata = mci->pvt_info;
if (isdigit(*data)) {
out_be32(pdata->mc_vbase + MPC85XX_MC_DATA_ERR_INJECT_LO,
@ -98,9 +111,11 @@ static ssize_t mpc85xx_mc_inject_data_lo_store(struct mem_ctl_info *mci,
return 0;
}
static ssize_t mpc85xx_mc_inject_ctrl_store(struct mem_ctl_info *mci,
const char *data, size_t count)
static ssize_t mpc85xx_mc_inject_ctrl_store(struct device *dev,
struct device_attribute *mattr,
const char *data, size_t count)
{
struct mem_ctl_info *mci = to_mci(dev);
struct mpc85xx_mc_pdata *pdata = mci->pvt_info;
if (isdigit(*data)) {
out_be32(pdata->mc_vbase + MPC85XX_MC_ECC_ERR_INJECT,
@ -110,38 +125,35 @@ static ssize_t mpc85xx_mc_inject_ctrl_store(struct mem_ctl_info *mci,
return 0;
}
static struct mcidev_sysfs_attribute mpc85xx_mc_sysfs_attributes[] = {
{
.attr = {
.name = "inject_data_hi",
.mode = (S_IRUGO | S_IWUSR)
},
.show = mpc85xx_mc_inject_data_hi_show,
.store = mpc85xx_mc_inject_data_hi_store},
{
.attr = {
.name = "inject_data_lo",
.mode = (S_IRUGO | S_IWUSR)
},
.show = mpc85xx_mc_inject_data_lo_show,
.store = mpc85xx_mc_inject_data_lo_store},
{
.attr = {
.name = "inject_ctrl",
.mode = (S_IRUGO | S_IWUSR)
},
.show = mpc85xx_mc_inject_ctrl_show,
.store = mpc85xx_mc_inject_ctrl_store},
DEVICE_ATTR(inject_data_hi, S_IRUGO | S_IWUSR,
mpc85xx_mc_inject_data_hi_show, mpc85xx_mc_inject_data_hi_store);
DEVICE_ATTR(inject_data_lo, S_IRUGO | S_IWUSR,
mpc85xx_mc_inject_data_lo_show, mpc85xx_mc_inject_data_lo_store);
DEVICE_ATTR(inject_ctrl, S_IRUGO | S_IWUSR,
mpc85xx_mc_inject_ctrl_show, mpc85xx_mc_inject_ctrl_store);
/* End of list */
{
.attr = {.name = NULL}
}
};
static void mpc85xx_set_mc_sysfs_attributes(struct mem_ctl_info *mci)
static int mpc85xx_create_sysfs_attributes(struct mem_ctl_info *mci)
{
mci->mc_driver_sysfs_attributes = mpc85xx_mc_sysfs_attributes;
int rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_data_hi);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_data_lo);
if (rc < 0)
return rc;
rc = device_create_file(&mci->dev, &dev_attr_inject_ctrl);
if (rc < 0)
return rc;
return 0;
}
static void mpc85xx_remove_sysfs_attributes(struct mem_ctl_info *mci)
{
device_remove_file(&mci->dev, &dev_attr_inject_data_hi);
device_remove_file(&mci->dev, &dev_attr_inject_data_lo);
device_remove_file(&mci->dev, &dev_attr_inject_ctrl);
}
/**************************** PCI Err device ***************************/
@ -268,7 +280,7 @@ static int __devinit mpc85xx_pci_err_probe(struct platform_device *op)
out_be32(pdata->pci_vbase + MPC85XX_PCI_ERR_DR, ~0);
if (edac_pci_add_device(pci, pdata->edac_idx) > 0) {
debugf3("%s(): failed edac_pci_add_device()\n", __func__);
edac_dbg(3, "failed edac_pci_add_device()\n");
goto err;
}
@ -291,7 +303,7 @@ static int __devinit mpc85xx_pci_err_probe(struct platform_device *op)
}
devres_remove_group(&op->dev, mpc85xx_pci_err_probe);
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
printk(KERN_INFO EDAC_MOD_STR " PCI err registered\n");
return 0;
@ -309,7 +321,7 @@ static int mpc85xx_pci_err_remove(struct platform_device *op)
struct edac_pci_ctl_info *pci = dev_get_drvdata(&op->dev);
struct mpc85xx_pci_pdata *pdata = pci->pvt_info;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
out_be32(pdata->pci_vbase + MPC85XX_PCI_ERR_CAP_DR,
orig_pci_err_cap_dr);
@ -570,7 +582,7 @@ static int __devinit mpc85xx_l2_err_probe(struct platform_device *op)
pdata->edac_idx = edac_dev_idx++;
if (edac_device_add_device(edac_dev) > 0) {
debugf3("%s(): failed edac_device_add_device()\n", __func__);
edac_dbg(3, "failed edac_device_add_device()\n");
goto err;
}
@ -598,7 +610,7 @@ static int __devinit mpc85xx_l2_err_probe(struct platform_device *op)
devres_remove_group(&op->dev, mpc85xx_l2_err_probe);
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
printk(KERN_INFO EDAC_MOD_STR " L2 err registered\n");
return 0;
@ -616,7 +628,7 @@ static int mpc85xx_l2_err_remove(struct platform_device *op)
struct edac_device_ctl_info *edac_dev = dev_get_drvdata(&op->dev);
struct mpc85xx_l2_pdata *pdata = edac_dev->pvt_info;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (edac_op_state == EDAC_OPSTATE_INT) {
out_be32(pdata->l2_vbase + MPC85XX_L2_ERRINTEN, 0);
@ -813,7 +825,7 @@ static void mpc85xx_mc_check(struct mem_ctl_info *mci)
pfn = err_addr >> PAGE_SHIFT;
for (row_index = 0; row_index < mci->nr_csrows; row_index++) {
csrow = &mci->csrows[row_index];
csrow = mci->csrows[row_index];
if ((pfn >= csrow->first_page) && (pfn <= csrow->last_page))
break;
}
@ -854,16 +866,16 @@ static void mpc85xx_mc_check(struct mem_ctl_info *mci)
mpc85xx_mc_printk(mci, KERN_ERR, "PFN out of range!\n");
if (err_detect & DDR_EDE_SBE)
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
pfn, err_addr & ~PAGE_MASK, syndrome,
row_index, 0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
if (err_detect & DDR_EDE_MBE)
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
pfn, err_addr & ~PAGE_MASK, syndrome,
row_index, 0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
out_be32(pdata->mc_vbase + MPC85XX_MC_ERR_DETECT, err_detect);
}
@ -933,8 +945,8 @@ static void __devinit mpc85xx_init_csrows(struct mem_ctl_info *mci)
u32 start;
u32 end;
csrow = &mci->csrows[index];
dimm = csrow->channels[0].dimm;
csrow = mci->csrows[index];
dimm = csrow->channels[0]->dimm;
cs_bnds = in_be32(pdata->mc_vbase + MPC85XX_MC_CS_BNDS_0 +
(index * MPC85XX_MC_CS_BNDS_OFS));
@ -990,9 +1002,9 @@ static int __devinit mpc85xx_mc_err_probe(struct platform_device *op)
pdata = mci->pvt_info;
pdata->name = "mpc85xx_mc_err";
pdata->irq = NO_IRQ;
mci->dev = &op->dev;
mci->pdev = &op->dev;
pdata->edac_idx = edac_mc_idx++;
dev_set_drvdata(mci->dev, mci);
dev_set_drvdata(mci->pdev, mci);
mci->ctl_name = pdata->name;
mci->dev_name = pdata->name;
@ -1026,7 +1038,7 @@ static int __devinit mpc85xx_mc_err_probe(struct platform_device *op)
goto err;
}
debugf3("%s(): init mci\n", __func__);
edac_dbg(3, "init mci\n");
mci->mtype_cap = MEM_FLAG_RDDR | MEM_FLAG_RDDR2 |
MEM_FLAG_DDR | MEM_FLAG_DDR2;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
@ -1041,8 +1053,6 @@ static int __devinit mpc85xx_mc_err_probe(struct platform_device *op)
mci->scrub_mode = SCRUB_SW_SRC;
mpc85xx_set_mc_sysfs_attributes(mci);
mpc85xx_init_csrows(mci);
/* store the original error disable bits */
@ -1054,7 +1064,13 @@ static int __devinit mpc85xx_mc_err_probe(struct platform_device *op)
out_be32(pdata->mc_vbase + MPC85XX_MC_ERR_DETECT, ~0);
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto err;
}
if (mpc85xx_create_sysfs_attributes(mci)) {
edac_mc_del_mc(mci->pdev);
edac_dbg(3, "failed mpc85xx_create_sysfs_attributes()\n");
goto err;
}
@ -1088,7 +1104,7 @@ static int __devinit mpc85xx_mc_err_probe(struct platform_device *op)
}
devres_remove_group(&op->dev, mpc85xx_mc_err_probe);
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
printk(KERN_INFO EDAC_MOD_STR " MC err registered\n");
return 0;
@ -1106,7 +1122,7 @@ static int mpc85xx_mc_err_remove(struct platform_device *op)
struct mem_ctl_info *mci = dev_get_drvdata(&op->dev);
struct mpc85xx_mc_pdata *pdata = mci->pvt_info;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (edac_op_state == EDAC_OPSTATE_INT) {
out_be32(pdata->mc_vbase + MPC85XX_MC_ERR_INT_EN, 0);
@ -1117,6 +1133,7 @@ static int mpc85xx_mc_err_remove(struct platform_device *op)
orig_ddr_err_disable);
out_be32(pdata->mc_vbase + MPC85XX_MC_ERR_SBE, orig_ddr_err_sbe);
mpc85xx_remove_sysfs_attributes(mci);
edac_mc_del_mc(&op->dev);
edac_mc_free(mci);
return 0;

View File

@ -169,7 +169,7 @@ static int __devinit mv64x60_pci_err_probe(struct platform_device *pdev)
MV64X60_PCIx_ERR_MASK_VAL);
if (edac_pci_add_device(pci, pdata->edac_idx) > 0) {
debugf3("%s(): failed edac_pci_add_device()\n", __func__);
edac_dbg(3, "failed edac_pci_add_device()\n");
goto err;
}
@ -194,7 +194,7 @@ static int __devinit mv64x60_pci_err_probe(struct platform_device *pdev)
devres_remove_group(&pdev->dev, mv64x60_pci_err_probe);
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
@ -210,7 +210,7 @@ static int mv64x60_pci_err_remove(struct platform_device *pdev)
{
struct edac_pci_ctl_info *pci = platform_get_drvdata(pdev);
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
edac_pci_del_device(&pdev->dev);
@ -336,7 +336,7 @@ static int __devinit mv64x60_sram_err_probe(struct platform_device *pdev)
pdata->edac_idx = edac_dev_idx++;
if (edac_device_add_device(edac_dev) > 0) {
debugf3("%s(): failed edac_device_add_device()\n", __func__);
edac_dbg(3, "failed edac_device_add_device()\n");
goto err;
}
@ -363,7 +363,7 @@ static int __devinit mv64x60_sram_err_probe(struct platform_device *pdev)
devres_remove_group(&pdev->dev, mv64x60_sram_err_probe);
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
@ -379,7 +379,7 @@ static int mv64x60_sram_err_remove(struct platform_device *pdev)
{
struct edac_device_ctl_info *edac_dev = platform_get_drvdata(pdev);
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
edac_device_del_device(&pdev->dev);
edac_device_free_ctl_info(edac_dev);
@ -531,7 +531,7 @@ static int __devinit mv64x60_cpu_err_probe(struct platform_device *pdev)
pdata->edac_idx = edac_dev_idx++;
if (edac_device_add_device(edac_dev) > 0) {
debugf3("%s(): failed edac_device_add_device()\n", __func__);
edac_dbg(3, "failed edac_device_add_device()\n");
goto err;
}
@ -558,7 +558,7 @@ static int __devinit mv64x60_cpu_err_probe(struct platform_device *pdev)
devres_remove_group(&pdev->dev, mv64x60_cpu_err_probe);
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
@ -574,7 +574,7 @@ static int mv64x60_cpu_err_remove(struct platform_device *pdev)
{
struct edac_device_ctl_info *edac_dev = platform_get_drvdata(pdev);
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
edac_device_del_device(&pdev->dev);
edac_device_free_ctl_info(edac_dev);
@ -611,17 +611,17 @@ static void mv64x60_mc_check(struct mem_ctl_info *mci)
/* first bit clear in ECC Err Reg, 1 bit error, correctable by HW */
if (!(reg & 0x1))
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
err_addr >> PAGE_SHIFT,
err_addr & PAGE_MASK, syndrome,
0, 0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
else /* 2 bit error, UE */
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
err_addr >> PAGE_SHIFT,
err_addr & PAGE_MASK, 0,
0, 0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
/* clear the error */
out_le32(pdata->mc_vbase + MV64X60_SDRAM_ERR_ADDR, 0);
@ -670,8 +670,8 @@ static void mv64x60_init_csrows(struct mem_ctl_info *mci,
ctl = in_le32(pdata->mc_vbase + MV64X60_SDRAM_CONFIG);
csrow = &mci->csrows[0];
dimm = csrow->channels[0].dimm;
csrow = mci->csrows[0];
dimm = csrow->channels[0]->dimm;
dimm->nr_pages = pdata->total_mem >> PAGE_SHIFT;
dimm->grain = 8;
@ -724,7 +724,7 @@ static int __devinit mv64x60_mc_err_probe(struct platform_device *pdev)
}
pdata = mci->pvt_info;
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
platform_set_drvdata(pdev, mci);
pdata->name = "mv64x60_mc_err";
pdata->irq = NO_IRQ;
@ -766,7 +766,7 @@ static int __devinit mv64x60_mc_err_probe(struct platform_device *pdev)
goto err2;
}
debugf3("%s(): init mci\n", __func__);
edac_dbg(3, "init mci\n");
mci->mtype_cap = MEM_FLAG_RDDR | MEM_FLAG_DDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
mci->edac_cap = EDAC_FLAG_SECDED;
@ -790,7 +790,7 @@ static int __devinit mv64x60_mc_err_probe(struct platform_device *pdev)
out_le32(pdata->mc_vbase + MV64X60_SDRAM_ERR_ECC_CNTL, ctl);
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto err;
}
@ -815,7 +815,7 @@ static int __devinit mv64x60_mc_err_probe(struct platform_device *pdev)
}
/* get this far and it's successful */
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
@ -831,7 +831,7 @@ static int mv64x60_mc_err_remove(struct platform_device *pdev)
{
struct mem_ctl_info *mci = platform_get_drvdata(pdev);
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
edac_mc_del_mc(&pdev->dev);
edac_mc_free(mci);

View File

@ -74,7 +74,7 @@ static int system_mmc_id;
static u32 pasemi_edac_get_error_info(struct mem_ctl_info *mci)
{
struct pci_dev *pdev = to_pci_dev(mci->dev);
struct pci_dev *pdev = to_pci_dev(mci->pdev);
u32 tmp;
pci_read_config_dword(pdev, MCDEBUG_ERRSTA,
@ -95,7 +95,7 @@ static u32 pasemi_edac_get_error_info(struct mem_ctl_info *mci)
static void pasemi_edac_process_error_info(struct mem_ctl_info *mci, u32 errsta)
{
struct pci_dev *pdev = to_pci_dev(mci->dev);
struct pci_dev *pdev = to_pci_dev(mci->pdev);
u32 errlog1a;
u32 cs;
@ -110,16 +110,16 @@ static void pasemi_edac_process_error_info(struct mem_ctl_info *mci, u32 errsta)
/* uncorrectable/multi-bit errors */
if (errsta & (MCDEBUG_ERRSTA_MBE_STATUS |
MCDEBUG_ERRSTA_RFL_STATUS)) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
mci->csrows[cs].first_page, 0, 0,
cs, 0, -1, mci->ctl_name, "", NULL);
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
mci->csrows[cs]->first_page, 0, 0,
cs, 0, -1, mci->ctl_name, "");
}
/* correctable/single-bit errors */
if (errsta & MCDEBUG_ERRSTA_SBE_STATUS)
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
mci->csrows[cs].first_page, 0, 0,
cs, 0, -1, mci->ctl_name, "", NULL);
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
mci->csrows[cs]->first_page, 0, 0,
cs, 0, -1, mci->ctl_name, "");
}
static void pasemi_edac_check(struct mem_ctl_info *mci)
@ -141,8 +141,8 @@ static int pasemi_edac_init_csrows(struct mem_ctl_info *mci,
int index;
for (index = 0; index < mci->nr_csrows; index++) {
csrow = &mci->csrows[index];
dimm = csrow->channels[0].dimm;
csrow = mci->csrows[index];
dimm = csrow->channels[0]->dimm;
pci_read_config_dword(pdev,
MCDRAM_RANKCFG + (index * 12),
@ -225,7 +225,7 @@ static int __devinit pasemi_edac_probe(struct pci_dev *pdev,
MCCFG_ERRCOR_ECC_GEN_EN |
MCCFG_ERRCOR_ECC_CRR_EN;
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_DDR | MEM_FLAG_RDDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_EC | EDAC_FLAG_SECDED;
mci->edac_cap = (errcor & MCCFG_ERRCOR_ECC_GEN_EN) ?

View File

@ -727,10 +727,10 @@ ppc4xx_edac_handle_ce(struct mem_ctl_info *mci,
for (row = 0; row < mci->nr_csrows; row++)
if (ppc4xx_edac_check_bank_error(status, row))
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
0, 0, 0,
row, 0, -1,
message, "", NULL);
message, "");
}
/**
@ -758,10 +758,10 @@ ppc4xx_edac_handle_ue(struct mem_ctl_info *mci,
for (row = 0; row < mci->nr_csrows; row++)
if (ppc4xx_edac_check_bank_error(status, row))
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
page, offset, 0,
row, 0, -1,
message, "", NULL);
message, "");
}
/**
@ -1027,9 +1027,9 @@ ppc4xx_edac_mc_init(struct mem_ctl_info *mci,
/* Initial driver pointers and private data */
mci->dev = &op->dev;
mci->pdev = &op->dev;
dev_set_drvdata(mci->dev, mci);
dev_set_drvdata(mci->pdev, mci);
pdata = mci->pvt_info;
@ -1334,7 +1334,7 @@ static int __devinit ppc4xx_edac_probe(struct platform_device *op)
return 0;
fail1:
edac_mc_del_mc(mci->dev);
edac_mc_del_mc(mci->pdev);
fail:
edac_mc_free(mci);
@ -1368,7 +1368,7 @@ ppc4xx_edac_remove(struct platform_device *op)
dcr_unmap(pdata->dcr_host, SDRAM_DCR_RESOURCE_LEN);
edac_mc_del_mc(mci->dev);
edac_mc_del_mc(mci->pdev);
edac_mc_free(mci);
return 0;

View File

@ -140,7 +140,7 @@ static void r82600_get_error_info(struct mem_ctl_info *mci,
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
pci_read_config_dword(pdev, R82600_EAP, &info->eapr);
if (info->eapr & BIT(0))
@ -179,11 +179,11 @@ static int r82600_process_error_info(struct mem_ctl_info *mci,
error_found = 1;
if (handle_errors)
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
page, 0, syndrome,
edac_mc_find_csrow_by_page(mci, page),
0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
}
if (info->eapr & BIT(1)) { /* UE? */
@ -191,11 +191,11 @@ static int r82600_process_error_info(struct mem_ctl_info *mci,
if (handle_errors)
/* 82600 doesn't give enough info */
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
page, 0, 0,
edac_mc_find_csrow_by_page(mci, page),
0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
}
return error_found;
@ -205,7 +205,7 @@ static void r82600_check(struct mem_ctl_info *mci)
{
struct r82600_error_info info;
debugf1("MC%d: %s()\n", mci->mc_idx, __func__);
edac_dbg(1, "MC%d\n", mci->mc_idx);
r82600_get_error_info(mci, &info);
r82600_process_error_info(mci, &info, 1);
}
@ -230,19 +230,19 @@ static void r82600_init_csrows(struct mem_ctl_info *mci, struct pci_dev *pdev,
row_high_limit_last = 0;
for (index = 0; index < mci->nr_csrows; index++) {
csrow = &mci->csrows[index];
dimm = csrow->channels[0].dimm;
csrow = mci->csrows[index];
dimm = csrow->channels[0]->dimm;
/* find the DRAM Chip Select Base address and mask */
pci_read_config_byte(pdev, R82600_DRBA + index, &drbar);
debugf1("%s() Row=%d DRBA = %#0x\n", __func__, index, drbar);
edac_dbg(1, "Row=%d DRBA = %#0x\n", index, drbar);
row_high_limit = ((u32) drbar << 24);
/* row_high_limit = ((u32)drbar << 24) | 0xffffffUL; */
debugf1("%s() Row=%d, Boundary Address=%#0x, Last = %#0x\n",
__func__, index, row_high_limit, row_high_limit_last);
edac_dbg(1, "Row=%d, Boundary Address=%#0x, Last = %#0x\n",
index, row_high_limit, row_high_limit_last);
/* Empty row [p.57] */
if (row_high_limit == row_high_limit_last)
@ -277,14 +277,13 @@ static int r82600_probe1(struct pci_dev *pdev, int dev_idx)
u32 sdram_refresh_rate;
struct r82600_error_info discard;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
pci_read_config_byte(pdev, R82600_DRAMC, &dramcr);
pci_read_config_dword(pdev, R82600_EAP, &eapr);
scrub_disabled = eapr & BIT(31);
sdram_refresh_rate = dramcr & (BIT(0) | BIT(1));
debugf2("%s(): sdram refresh rate = %#0x\n", __func__,
sdram_refresh_rate);
debugf2("%s(): DRAMC register = %#0x\n", __func__, dramcr);
edac_dbg(2, "sdram refresh rate = %#0x\n", sdram_refresh_rate);
edac_dbg(2, "DRAMC register = %#0x\n", dramcr);
layers[0].type = EDAC_MC_LAYER_CHIP_SELECT;
layers[0].size = R82600_NR_CSROWS;
layers[0].is_virt_csrow = true;
@ -295,8 +294,8 @@ static int r82600_probe1(struct pci_dev *pdev, int dev_idx)
if (mci == NULL)
return -ENOMEM;
debugf0("%s(): mci = %p\n", __func__, mci);
mci->dev = &pdev->dev;
edac_dbg(0, "mci = %p\n", mci);
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_RDDR | MEM_FLAG_DDR;
mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_EC | EDAC_FLAG_SECDED;
/* FIXME try to work out if the chip leads have been used for COM2
@ -311,8 +310,8 @@ static int r82600_probe1(struct pci_dev *pdev, int dev_idx)
if (ecc_enabled(dramcr)) {
if (scrub_disabled)
debugf3("%s(): mci = %p - Scrubbing disabled! EAP: "
"%#0x\n", __func__, mci, eapr);
edac_dbg(3, "mci = %p - Scrubbing disabled! EAP: %#0x\n",
mci, eapr);
} else
mci->edac_cap = EDAC_FLAG_NONE;
@ -329,15 +328,14 @@ static int r82600_probe1(struct pci_dev *pdev, int dev_idx)
* type of memory controller. The ID is therefore hardcoded to 0.
*/
if (edac_mc_add_mc(mci)) {
debugf3("%s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "failed edac_mc_add_mc()\n");
goto fail;
}
/* get this far and it's successful */
if (disable_hardware_scrub) {
debugf3("%s(): Disabling Hardware Scrub (scrub on error)\n",
__func__);
edac_dbg(3, "Disabling Hardware Scrub (scrub on error)\n");
pci_write_bits32(pdev, R82600_EAP, BIT(31), BIT(31));
}
@ -352,7 +350,7 @@ static int r82600_probe1(struct pci_dev *pdev, int dev_idx)
__func__);
}
debugf3("%s(): success\n", __func__);
edac_dbg(3, "success\n");
return 0;
fail:
@ -364,7 +362,7 @@ fail:
static int __devinit r82600_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
/* don't need to call pci_enable_device() */
return r82600_probe1(pdev, ent->driver_data);
@ -374,7 +372,7 @@ static void __devexit r82600_remove_one(struct pci_dev *pdev)
{
struct mem_ctl_info *mci;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
if (r82600_pci)
edac_pci_release_generic_ctl(r82600_pci);

View File

@ -381,8 +381,8 @@ static inline int numrank(u32 mtr)
int ranks = (1 << RANK_CNT_BITS(mtr));
if (ranks > 4) {
debugf0("Invalid number of ranks: %d (max = 4) raw value = %x (%04x)",
ranks, (unsigned int)RANK_CNT_BITS(mtr), mtr);
edac_dbg(0, "Invalid number of ranks: %d (max = 4) raw value = %x (%04x)\n",
ranks, (unsigned int)RANK_CNT_BITS(mtr), mtr);
return -EINVAL;
}
@ -394,8 +394,8 @@ static inline int numrow(u32 mtr)
int rows = (RANK_WIDTH_BITS(mtr) + 12);
if (rows < 13 || rows > 18) {
debugf0("Invalid number of rows: %d (should be between 14 and 17) raw value = %x (%04x)",
rows, (unsigned int)RANK_WIDTH_BITS(mtr), mtr);
edac_dbg(0, "Invalid number of rows: %d (should be between 13 and 18) raw value = %x (%04x)\n",
rows, (unsigned int)RANK_WIDTH_BITS(mtr), mtr);
return -EINVAL;
}
@ -407,8 +407,8 @@ static inline int numcol(u32 mtr)
int cols = (COL_WIDTH_BITS(mtr) + 10);
if (cols > 12) {
debugf0("Invalid number of cols: %d (max = 4) raw value = %x (%04x)",
cols, (unsigned int)COL_WIDTH_BITS(mtr), mtr);
edac_dbg(0, "Invalid number of cols: %d (max = 12) raw value = %x (%04x)\n",
cols, (unsigned int)COL_WIDTH_BITS(mtr), mtr);
return -EINVAL;
}
@ -475,8 +475,8 @@ static struct pci_dev *get_pdev_slot_func(u8 bus, unsigned slot,
if (PCI_SLOT(sbridge_dev->pdev[i]->devfn) == slot &&
PCI_FUNC(sbridge_dev->pdev[i]->devfn) == func) {
debugf1("Associated %02x.%02x.%d with %p\n",
bus, slot, func, sbridge_dev->pdev[i]);
edac_dbg(1, "Associated %02x.%02x.%d with %p\n",
bus, slot, func, sbridge_dev->pdev[i]);
return sbridge_dev->pdev[i];
}
}
@ -523,45 +523,45 @@ static int get_dimm_config(struct mem_ctl_info *mci)
pci_read_config_dword(pvt->pci_br, SAD_CONTROL, &reg);
pvt->sbridge_dev->node_id = NODE_ID(reg);
debugf0("mc#%d: Node ID: %d, source ID: %d\n",
pvt->sbridge_dev->mc,
pvt->sbridge_dev->node_id,
pvt->sbridge_dev->source_id);
edac_dbg(0, "mc#%d: Node ID: %d, source ID: %d\n",
pvt->sbridge_dev->mc,
pvt->sbridge_dev->node_id,
pvt->sbridge_dev->source_id);
pci_read_config_dword(pvt->pci_ras, RASENABLES, &reg);
if (IS_MIRROR_ENABLED(reg)) {
debugf0("Memory mirror is enabled\n");
edac_dbg(0, "Memory mirror is enabled\n");
pvt->is_mirrored = true;
} else {
debugf0("Memory mirror is disabled\n");
edac_dbg(0, "Memory mirror is disabled\n");
pvt->is_mirrored = false;
}
pci_read_config_dword(pvt->pci_ta, MCMTR, &pvt->info.mcmtr);
if (IS_LOCKSTEP_ENABLED(pvt->info.mcmtr)) {
debugf0("Lockstep is enabled\n");
edac_dbg(0, "Lockstep is enabled\n");
mode = EDAC_S8ECD8ED;
pvt->is_lockstep = true;
} else {
debugf0("Lockstep is disabled\n");
edac_dbg(0, "Lockstep is disabled\n");
mode = EDAC_S4ECD4ED;
pvt->is_lockstep = false;
}
if (IS_CLOSE_PG(pvt->info.mcmtr)) {
debugf0("address map is on closed page mode\n");
edac_dbg(0, "address map is on closed page mode\n");
pvt->is_close_pg = true;
} else {
debugf0("address map is on open page mode\n");
edac_dbg(0, "address map is on open page mode\n");
pvt->is_close_pg = false;
}
pci_read_config_dword(pvt->pci_ddrio, RANK_CFG_A, &reg);
if (IS_RDIMM_ENABLED(reg)) {
/* FIXME: Can also be LRDIMM */
debugf0("Memory is registered\n");
edac_dbg(0, "Memory is registered\n");
mtype = MEM_RDDR3;
} else {
debugf0("Memory is unregistered\n");
edac_dbg(0, "Memory is unregistered\n");
mtype = MEM_DDR3;
}
@ -576,7 +576,7 @@ static int get_dimm_config(struct mem_ctl_info *mci)
i, j, 0);
pci_read_config_dword(pvt->pci_tad[i],
mtr_regs[j], &mtr);
debugf4("Channel #%d MTR%d = %x\n", i, j, mtr);
edac_dbg(4, "Channel #%d MTR%d = %x\n", i, j, mtr);
if (IS_DIMM_PRESENT(mtr)) {
pvt->channel[i].dimms++;
@ -588,10 +588,10 @@ static int get_dimm_config(struct mem_ctl_info *mci)
size = (rows * cols * banks * ranks) >> (20 - 3);
npages = MiB_TO_PAGES(size);
debugf0("mc#%d: channel %d, dimm %d, %d Mb (%d pages) bank: %d, rank: %d, row: %#x, col: %#x\n",
pvt->sbridge_dev->mc, i, j,
size, npages,
banks, ranks, rows, cols);
edac_dbg(0, "mc#%d: channel %d, dimm %d, %d Mb (%d pages) bank: %d, rank: %d, row: %#x, col: %#x\n",
pvt->sbridge_dev->mc, i, j,
size, npages,
banks, ranks, rows, cols);
dimm->nr_pages = npages;
dimm->grain = 32;
@ -629,8 +629,7 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
tmp_mb = (1 + pvt->tolm) >> 20;
mb = div_u64_rem(tmp_mb, 1000, &kb);
debugf0("TOLM: %u.%03u GB (0x%016Lx)\n",
mb, kb, (u64)pvt->tolm);
edac_dbg(0, "TOLM: %u.%03u GB (0x%016Lx)\n", mb, kb, (u64)pvt->tolm);
/* Address range is already 45:25 */
pci_read_config_dword(pvt->pci_sad1, TOHM,
@ -639,8 +638,7 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
tmp_mb = (1 + pvt->tohm) >> 20;
mb = div_u64_rem(tmp_mb, 1000, &kb);
debugf0("TOHM: %u.%03u GB (0x%016Lx)",
mb, kb, (u64)pvt->tohm);
edac_dbg(0, "TOHM: %u.%03u GB (0x%016Lx)\n", mb, kb, (u64)pvt->tohm);
/*
* Step 2) Get SAD range and SAD Interleave list
@ -663,13 +661,13 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
tmp_mb = (limit + 1) >> 20;
mb = div_u64_rem(tmp_mb, 1000, &kb);
debugf0("SAD#%d %s up to %u.%03u GB (0x%016Lx) %s reg=0x%08x\n",
n_sads,
get_dram_attr(reg),
mb, kb,
((u64)tmp_mb) << 20L,
INTERLEAVE_MODE(reg) ? "Interleave: 8:6" : "Interleave: [8:6]XOR[18:16]",
reg);
edac_dbg(0, "SAD#%d %s up to %u.%03u GB (0x%016Lx) Interleave: %s reg=0x%08x\n",
n_sads,
get_dram_attr(reg),
mb, kb,
((u64)tmp_mb) << 20L,
INTERLEAVE_MODE(reg) ? "8:6" : "[8:6]XOR[18:16]",
reg);
prv = limit;
pci_read_config_dword(pvt->pci_sad0, interleave_list[n_sads],
@ -679,8 +677,8 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
if (j > 0 && sad_interl == sad_pkg(reg, j))
break;
debugf0("SAD#%d, interleave #%d: %d\n",
n_sads, j, sad_pkg(reg, j));
edac_dbg(0, "SAD#%d, interleave #%d: %d\n",
n_sads, j, sad_pkg(reg, j));
}
}
@ -697,16 +695,16 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
tmp_mb = (limit + 1) >> 20;
mb = div_u64_rem(tmp_mb, 1000, &kb);
debugf0("TAD#%d: up to %u.%03u GB (0x%016Lx), socket interleave %d, memory interleave %d, TGT: %d, %d, %d, %d, reg=0x%08x\n",
n_tads, mb, kb,
((u64)tmp_mb) << 20L,
(u32)TAD_SOCK(reg),
(u32)TAD_CH(reg),
(u32)TAD_TGT0(reg),
(u32)TAD_TGT1(reg),
(u32)TAD_TGT2(reg),
(u32)TAD_TGT3(reg),
reg);
edac_dbg(0, "TAD#%d: up to %u.%03u GB (0x%016Lx), socket interleave %d, memory interleave %d, TGT: %d, %d, %d, %d, reg=0x%08x\n",
n_tads, mb, kb,
((u64)tmp_mb) << 20L,
(u32)TAD_SOCK(reg),
(u32)TAD_CH(reg),
(u32)TAD_TGT0(reg),
(u32)TAD_TGT1(reg),
(u32)TAD_TGT2(reg),
(u32)TAD_TGT3(reg),
reg);
prv = limit;
}
@ -722,11 +720,11 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
&reg);
tmp_mb = TAD_OFFSET(reg) >> 20;
mb = div_u64_rem(tmp_mb, 1000, &kb);
debugf0("TAD CH#%d, offset #%d: %u.%03u GB (0x%016Lx), reg=0x%08x\n",
i, j,
mb, kb,
((u64)tmp_mb) << 20L,
reg);
edac_dbg(0, "TAD CH#%d, offset #%d: %u.%03u GB (0x%016Lx), reg=0x%08x\n",
i, j,
mb, kb,
((u64)tmp_mb) << 20L,
reg);
}
}
@ -747,12 +745,12 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
tmp_mb = RIR_LIMIT(reg) >> 20;
rir_way = 1 << RIR_WAY(reg);
mb = div_u64_rem(tmp_mb, 1000, &kb);
debugf0("CH#%d RIR#%d, limit: %u.%03u GB (0x%016Lx), way: %d, reg=0x%08x\n",
i, j,
mb, kb,
((u64)tmp_mb) << 20L,
rir_way,
reg);
edac_dbg(0, "CH#%d RIR#%d, limit: %u.%03u GB (0x%016Lx), way: %d, reg=0x%08x\n",
i, j,
mb, kb,
((u64)tmp_mb) << 20L,
rir_way,
reg);
for (k = 0; k < rir_way; k++) {
pci_read_config_dword(pvt->pci_tad[i],
@ -761,12 +759,12 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
tmp_mb = RIR_OFFSET(reg) << 6;
mb = div_u64_rem(tmp_mb, 1000, &kb);
debugf0("CH#%d RIR#%d INTL#%d, offset %u.%03u GB (0x%016Lx), tgt: %d, reg=0x%08x\n",
i, j, k,
mb, kb,
((u64)tmp_mb) << 20L,
(u32)RIR_RNK_TGT(reg),
reg);
edac_dbg(0, "CH#%d RIR#%d INTL#%d, offset %u.%03u GB (0x%016Lx), tgt: %d, reg=0x%08x\n",
i, j, k,
mb, kb,
((u64)tmp_mb) << 20L,
(u32)RIR_RNK_TGT(reg),
reg);
}
}
}
@ -853,16 +851,16 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
if (sad_way > 0 && sad_interl == sad_pkg(reg, sad_way))
break;
sad_interleave[sad_way] = sad_pkg(reg, sad_way);
debugf0("SAD interleave #%d: %d\n",
sad_way, sad_interleave[sad_way]);
edac_dbg(0, "SAD interleave #%d: %d\n",
sad_way, sad_interleave[sad_way]);
}
debugf0("mc#%d: Error detected on SAD#%d: address 0x%016Lx < 0x%016Lx, Interleave [%d:6]%s\n",
pvt->sbridge_dev->mc,
n_sads,
addr,
limit,
sad_way + 7,
interleave_mode ? "" : "XOR[18:16]");
edac_dbg(0, "mc#%d: Error detected on SAD#%d: address 0x%016Lx < 0x%016Lx, Interleave [%d:6]%s\n",
pvt->sbridge_dev->mc,
n_sads,
addr,
limit,
sad_way + 7,
interleave_mode ? "" : "XOR[18:16]");
if (interleave_mode)
idx = ((addr >> 6) ^ (addr >> 16)) & 7;
else
@ -884,8 +882,8 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
return -EINVAL;
}
*socket = sad_interleave[idx];
debugf0("SAD interleave index: %d (wayness %d) = CPU socket %d\n",
idx, sad_way, *socket);
edac_dbg(0, "SAD interleave index: %d (wayness %d) = CPU socket %d\n",
idx, sad_way, *socket);
/*
* Move to the proper node structure, in order to access the
@ -972,16 +970,16 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
offset = TAD_OFFSET(tad_offset);
debugf0("TAD#%d: address 0x%016Lx < 0x%016Lx, socket interleave %d, channel interleave %d (offset 0x%08Lx), index %d, base ch: %d, ch mask: 0x%02lx\n",
n_tads,
addr,
limit,
(u32)TAD_SOCK(reg),
ch_way,
offset,
idx,
base_ch,
*channel_mask);
edac_dbg(0, "TAD#%d: address 0x%016Lx < 0x%016Lx, socket interleave %d, channel interleave %d (offset 0x%08Lx), index %d, base ch: %d, ch mask: 0x%02lx\n",
n_tads,
addr,
limit,
(u32)TAD_SOCK(reg),
ch_way,
offset,
idx,
base_ch,
*channel_mask);
/* Calculate channel address */
/* Remove the TAD offset */
@ -1017,11 +1015,11 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
limit = RIR_LIMIT(reg);
mb = div_u64_rem(limit >> 20, 1000, &kb);
debugf0("RIR#%d, limit: %u.%03u GB (0x%016Lx), way: %d\n",
n_rir,
mb, kb,
limit,
1 << RIR_WAY(reg));
edac_dbg(0, "RIR#%d, limit: %u.%03u GB (0x%016Lx), way: %d\n",
n_rir,
mb, kb,
limit,
1 << RIR_WAY(reg));
if (ch_addr <= limit)
break;
}
@ -1042,12 +1040,12 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
&reg);
*rank = RIR_RNK_TGT(reg);
debugf0("RIR#%d: channel address 0x%08Lx < 0x%08Lx, RIR interleave %d, index %d\n",
n_rir,
ch_addr,
limit,
rir_way,
idx);
edac_dbg(0, "RIR#%d: channel address 0x%08Lx < 0x%08Lx, RIR interleave %d, index %d\n",
n_rir,
ch_addr,
limit,
rir_way,
idx);
return 0;
}
@ -1064,14 +1062,14 @@ static void sbridge_put_devices(struct sbridge_dev *sbridge_dev)
{
int i;
debugf0(__FILE__ ": %s()\n", __func__);
edac_dbg(0, "\n");
for (i = 0; i < sbridge_dev->n_devs; i++) {
struct pci_dev *pdev = sbridge_dev->pdev[i];
if (!pdev)
continue;
debugf0("Removing dev %02x:%02x.%d\n",
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
edac_dbg(0, "Removing dev %02x:%02x.%d\n",
pdev->bus->number,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
pci_dev_put(pdev);
}
}
@ -1177,10 +1175,9 @@ static int sbridge_get_onedevice(struct pci_dev **prev,
return -ENODEV;
}
debugf0("Detected dev %02x:%d.%d PCI ID %04x:%04x\n",
bus, dev_descr->dev,
dev_descr->func,
PCI_VENDOR_ID_INTEL, dev_descr->dev_id);
edac_dbg(0, "Detected dev %02x:%d.%d PCI ID %04x:%04x\n",
bus, dev_descr->dev, dev_descr->func,
PCI_VENDOR_ID_INTEL, dev_descr->dev_id);
/*
* As stated on drivers/pci/search.c, the reference count for
@ -1297,10 +1294,10 @@ static int mci_bind_devs(struct mem_ctl_info *mci,
goto error;
}
debugf0("Associated PCI %02x.%02d.%d with dev = %p\n",
sbridge_dev->bus,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),
pdev);
edac_dbg(0, "Associated PCI %02x.%02d.%d with dev = %p\n",
sbridge_dev->bus,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),
pdev);
}
/* Check if everything were registered */
@ -1435,8 +1432,7 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
* to the group of dimm's where the error may be happening.
*/
snprintf(msg, sizeof(msg),
"count:%d%s%s area:%s err_code:%04x:%04x socket:%d channel_mask:%ld rank:%d",
core_err_cnt,
"%s%s area:%s err_code:%04x:%04x socket:%d channel_mask:%ld rank:%d",
overflow ? " OVERFLOW" : "",
(uncorrected_error && recoverable) ? " recoverable" : "",
area_type,
@ -1445,20 +1441,20 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
channel_mask,
rank);
debugf0("%s", msg);
edac_dbg(0, "%s\n", msg);
/* FIXME: need support for channel mask */
/* Call the helper to output message */
edac_mc_handle_error(tp_event, mci,
edac_mc_handle_error(tp_event, mci, core_err_cnt,
m->addr >> PAGE_SHIFT, m->addr & ~PAGE_MASK, 0,
channel, dimm, -1,
optype, msg, m);
optype, msg);
return;
err_parsing:
edac_mc_handle_error(tp_event, mci, 0, 0, 0,
edac_mc_handle_error(tp_event, mci, core_err_cnt, 0, 0, 0,
-1, -1, -1,
msg, "", m);
msg, "");
}
@ -1592,8 +1588,7 @@ static void sbridge_unregister_mci(struct sbridge_dev *sbridge_dev)
struct sbridge_pvt *pvt;
if (unlikely(!mci || !mci->pvt_info)) {
debugf0("MC: " __FILE__ ": %s(): dev = %p\n",
__func__, &sbridge_dev->pdev[0]->dev);
edac_dbg(0, "MC: dev = %p\n", &sbridge_dev->pdev[0]->dev);
sbridge_printk(KERN_ERR, "Couldn't find mci handler\n");
return;
@ -1601,13 +1596,13 @@ static void sbridge_unregister_mci(struct sbridge_dev *sbridge_dev)
pvt = mci->pvt_info;
debugf0("MC: " __FILE__ ": %s(): mci = %p, dev = %p\n",
__func__, mci, &sbridge_dev->pdev[0]->dev);
edac_dbg(0, "MC: mci = %p, dev = %p\n",
mci, &sbridge_dev->pdev[0]->dev);
/* Remove MC sysfs nodes */
edac_mc_del_mc(mci->dev);
edac_mc_del_mc(mci->pdev);
debugf1("%s: free mci struct\n", mci->ctl_name);
edac_dbg(1, "%s: free mci struct\n", mci->ctl_name);
kfree(mci->ctl_name);
edac_mc_free(mci);
sbridge_dev->mci = NULL;
@ -1638,8 +1633,8 @@ static int sbridge_register_mci(struct sbridge_dev *sbridge_dev)
if (unlikely(!mci))
return -ENOMEM;
debugf0("MC: " __FILE__ ": %s(): mci = %p, dev = %p\n",
__func__, mci, &sbridge_dev->pdev[0]->dev);
edac_dbg(0, "MC: mci = %p, dev = %p\n",
mci, &sbridge_dev->pdev[0]->dev);
pvt = mci->pvt_info;
memset(pvt, 0, sizeof(*pvt));
@ -1670,12 +1665,11 @@ static int sbridge_register_mci(struct sbridge_dev *sbridge_dev)
get_memory_layout(mci);
/* record ptr to the generic device */
mci->dev = &sbridge_dev->pdev[0]->dev;
mci->pdev = &sbridge_dev->pdev[0]->dev;
/* add this new MC control structure to EDAC's list of MCs */
if (unlikely(edac_mc_add_mc(mci))) {
debugf0("MC: " __FILE__
": %s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(0, "MC: failed edac_mc_add_mc()\n");
rc = -EINVAL;
goto fail0;
}
@ -1722,7 +1716,8 @@ static int __devinit sbridge_probe(struct pci_dev *pdev,
mc = 0;
list_for_each_entry(sbridge_dev, &sbridge_edac_list, list) {
debugf0("Registering MC#%d (%d of %d)\n", mc, mc + 1, num_mc);
edac_dbg(0, "Registering MC#%d (%d of %d)\n",
mc, mc + 1, num_mc);
sbridge_dev->mc = mc++;
rc = sbridge_register_mci(sbridge_dev);
if (unlikely(rc < 0))
@ -1752,7 +1747,7 @@ static void __devexit sbridge_remove(struct pci_dev *pdev)
{
struct sbridge_dev *sbridge_dev;
debugf0(__FILE__ ": %s()\n", __func__);
edac_dbg(0, "\n");
/*
* we have a trouble here: pdev value for removal will be wrong, since
@ -1801,7 +1796,7 @@ static int __init sbridge_init(void)
{
int pci_rc;
debugf2("MC: " __FILE__ ": %s()\n", __func__);
edac_dbg(2, "\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -1825,7 +1820,7 @@ static int __init sbridge_init(void)
*/
static void __exit sbridge_exit(void)
{
debugf2("MC: " __FILE__ ": %s()\n", __func__);
edac_dbg(2, "\n");
pci_unregister_driver(&sbridge_driver);
mce_unregister_decode_chain(&sbridge_mce_dec);
}

View File

@ -69,12 +69,12 @@ static void tile_edac_check(struct mem_ctl_info *mci)
/* Check if the current error count is different from the saved one. */
if (mem_error.sbe_count != priv->ce_count) {
dev_dbg(mci->dev, "ECC CE err on node %d\n", priv->node);
dev_dbg(mci->pdev, "ECC CE err on node %d\n", priv->node);
priv->ce_count = mem_error.sbe_count;
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
0, 0, 0,
0, 0, -1,
mci->ctl_name, "", NULL);
mci->ctl_name, "");
}
}
@ -84,10 +84,10 @@ static void tile_edac_check(struct mem_ctl_info *mci)
*/
static int __devinit tile_edac_init_csrows(struct mem_ctl_info *mci)
{
struct csrow_info *csrow = &mci->csrows[0];
struct csrow_info *csrow = mci->csrows[0];
struct tile_edac_priv *priv = mci->pvt_info;
struct mshim_mem_info mem_info;
struct dimm_info *dimm = csrow->channels[0].dimm;
struct dimm_info *dimm = csrow->channels[0]->dimm;
if (hv_dev_pread(priv->hv_devhdl, 0, (HV_VirtAddr)&mem_info,
sizeof(struct mshim_mem_info), MSHIM_MEM_INFO_OFF) !=
@ -149,7 +149,7 @@ static int __devinit tile_edac_mc_probe(struct platform_device *pdev)
priv->node = pdev->id;
priv->hv_devhdl = hv_devhdl;
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_DDR2;
mci->edac_ctl_cap = EDAC_FLAG_SECDED;

View File

@ -103,10 +103,10 @@ static int how_many_channel(struct pci_dev *pdev)
pci_read_config_byte(pdev, X38_CAPID0 + 8, &capid0_8b);
if (capid0_8b & 0x20) { /* check DCD: Dual Channel Disable */
debugf0("In single channel mode.\n");
edac_dbg(0, "In single channel mode\n");
x38_channel_num = 1;
} else {
debugf0("In dual channel mode.\n");
edac_dbg(0, "In dual channel mode\n");
x38_channel_num = 2;
}
@ -151,7 +151,7 @@ static void x38_clear_error_info(struct mem_ctl_info *mci)
{
struct pci_dev *pdev;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
/*
* Clear any error bits.
@ -172,7 +172,7 @@ static void x38_get_and_clear_error_info(struct mem_ctl_info *mci,
struct pci_dev *pdev;
void __iomem *window = mci->pvt_info;
pdev = to_pci_dev(mci->dev);
pdev = to_pci_dev(mci->pdev);
/*
* This is a mess because there is no atomic way to read all the
@ -215,26 +215,26 @@ static void x38_process_error_info(struct mem_ctl_info *mci,
return;
if ((info->errsts ^ info->errsts2) & X38_ERRSTS_BITS) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 0, 0, 0,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0,
-1, -1, -1,
"UE overwrote CE", "", NULL);
"UE overwrote CE", "");
info->errsts = info->errsts2;
}
for (channel = 0; channel < x38_channel_num; channel++) {
log = info->eccerrlog[channel];
if (log & X38_ECCERRLOG_UE) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
0, 0, 0,
eccerrlog_row(channel, log),
-1, -1,
"x38 UE", "", NULL);
"x38 UE", "");
} else if (log & X38_ECCERRLOG_CE) {
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci,
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
0, 0, eccerrlog_syndrome(log),
eccerrlog_row(channel, log),
-1, -1,
"x38 CE", "", NULL);
"x38 CE", "");
}
}
}
@ -243,7 +243,7 @@ static void x38_check(struct mem_ctl_info *mci)
{
struct x38_error_info info;
debugf1("MC%d: %s()\n", mci->mc_idx, __func__);
edac_dbg(1, "MC%d\n", mci->mc_idx);
x38_get_and_clear_error_info(mci, &info);
x38_process_error_info(mci, &info);
}
@ -331,7 +331,7 @@ static int x38_probe1(struct pci_dev *pdev, int dev_idx)
bool stacked;
void __iomem *window;
debugf0("MC: %s()\n", __func__);
edac_dbg(0, "MC:\n");
window = x38_map_mchbar(pdev);
if (!window)
@ -352,9 +352,9 @@ static int x38_probe1(struct pci_dev *pdev, int dev_idx)
if (!mci)
return -ENOMEM;
debugf3("MC: %s(): init mci\n", __func__);
edac_dbg(3, "MC: init mci\n");
mci->dev = &pdev->dev;
mci->pdev = &pdev->dev;
mci->mtype_cap = MEM_FLAG_DDR2;
mci->edac_ctl_cap = EDAC_FLAG_SECDED;
@ -378,7 +378,7 @@ static int x38_probe1(struct pci_dev *pdev, int dev_idx)
*/
for (i = 0; i < mci->nr_csrows; i++) {
unsigned long nr_pages;
struct csrow_info *csrow = &mci->csrows[i];
struct csrow_info *csrow = mci->csrows[i];
nr_pages = drb_to_nr_pages(drbs, stacked,
i / X38_RANKS_PER_CHANNEL,
@ -388,7 +388,7 @@ static int x38_probe1(struct pci_dev *pdev, int dev_idx)
continue;
for (j = 0; j < x38_channel_num; j++) {
struct dimm_info *dimm = csrow->channels[j].dimm;
struct dimm_info *dimm = csrow->channels[j]->dimm;
dimm->nr_pages = nr_pages / x38_channel_num;
dimm->grain = nr_pages << PAGE_SHIFT;
@ -402,12 +402,12 @@ static int x38_probe1(struct pci_dev *pdev, int dev_idx)
rc = -ENODEV;
if (edac_mc_add_mc(mci)) {
debugf3("MC: %s(): failed edac_mc_add_mc()\n", __func__);
edac_dbg(3, "MC: failed edac_mc_add_mc()\n");
goto fail;
}
/* get this far and it's successful */
debugf3("MC: %s(): success\n", __func__);
edac_dbg(3, "MC: success\n");
return 0;
fail:
@ -423,7 +423,7 @@ static int __devinit x38_init_one(struct pci_dev *pdev,
{
int rc;
debugf0("MC: %s()\n", __func__);
edac_dbg(0, "MC:\n");
if (pci_enable_device(pdev) < 0)
return -EIO;
@ -439,7 +439,7 @@ static void __devexit x38_remove_one(struct pci_dev *pdev)
{
struct mem_ctl_info *mci;
debugf0("%s()\n", __func__);
edac_dbg(0, "\n");
mci = edac_mc_del_mc(&pdev->dev);
if (!mci)
@ -472,7 +472,7 @@ static int __init x38_init(void)
{
int pci_rc;
debugf3("MC: %s()\n", __func__);
edac_dbg(3, "MC:\n");
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
@ -486,14 +486,14 @@ static int __init x38_init(void)
mci_pdev = pci_get_device(PCI_VENDOR_ID_INTEL,
PCI_DEVICE_ID_INTEL_X38_HB, NULL);
if (!mci_pdev) {
debugf0("x38 pci_get_device fail\n");
edac_dbg(0, "x38 pci_get_device fail\n");
pci_rc = -ENODEV;
goto fail1;
}
pci_rc = x38_init_one(mci_pdev, x38_pci_tbl);
if (pci_rc < 0) {
debugf0("x38 init fail\n");
edac_dbg(0, "x38 init fail\n");
pci_rc = -ENODEV;
goto fail1;
}
@ -513,7 +513,7 @@ fail0:
static void __exit x38_exit(void)
{
debugf3("MC: %s()\n", __func__);
edac_dbg(3, "MC:\n");
pci_unregister_driver(&x38_driver);
if (!x38_registered) {

View File

@ -13,9 +13,11 @@
#define _LINUX_EDAC_H_
#include <linux/atomic.h>
#include <linux/device.h>
#include <linux/kobject.h>
#include <linux/completion.h>
#include <linux/workqueue.h>
#include <linux/debugfs.h>
struct device;
@ -49,7 +51,19 @@ static inline void opstate_init(void)
#define EDAC_MC_LABEL_LEN 31
#define MC_PROC_NAME_MAX_LEN 7
/* memory devices */
/**
* enum dev_type - describe the type of DRAM chips used on the memory stick
* @DEV_UNKNOWN: Can't be determined, or MC doesn't support detecting it
* @DEV_X1: 1 bit for data
* @DEV_X2: 2 bits for data
* @DEV_X4: 4 bits for data
* @DEV_X8: 8 bits for data
* @DEV_X16: 16 bits for data
* @DEV_X32: 32 bits for data
* @DEV_X64: 64 bits for data
*
* Typical values are x4 and x8.
*/
enum dev_type {
DEV_UNKNOWN = 0,
DEV_X1,
@ -167,18 +181,30 @@ enum mem_type {
#define MEM_FLAG_DDR3 BIT(MEM_DDR3)
#define MEM_FLAG_RDDR3 BIT(MEM_RDDR3)
/* chipset Error Detection and Correction capabilities and mode */
/**
* enum edac_type - Error Detection and Correction capabilities and mode
* @EDAC_UNKNOWN: Unknown if ECC is available
* @EDAC_NONE: Doesn't support ECC
* @EDAC_RESERVED: Reserved ECC type
* @EDAC_PARITY: Detects parity errors
* @EDAC_EC: Error Checking - no correction
* @EDAC_SECDED: Single bit error correction, Double detection
* @EDAC_S2ECD2ED: Chipkill x2 devices - do these exist?
* @EDAC_S4ECD4ED: Chipkill x4 devices
* @EDAC_S8ECD8ED: Chipkill x8 devices
* @EDAC_S16ECD16ED: Chipkill x16 devices
*/
enum edac_type {
EDAC_UNKNOWN = 0, /* Unknown if ECC is available */
EDAC_NONE, /* Doesn't support ECC */
EDAC_RESERVED, /* Reserved ECC type */
EDAC_PARITY, /* Detects parity errors */
EDAC_EC, /* Error Checking - no correction */
EDAC_SECDED, /* Single bit error correction, Double detection */
EDAC_S2ECD2ED, /* Chipkill x2 devices - do these exist? */
EDAC_S4ECD4ED, /* Chipkill x4 devices */
EDAC_S8ECD8ED, /* Chipkill x8 devices */
EDAC_S16ECD16ED, /* Chipkill x16 devices */
EDAC_UNKNOWN = 0,
EDAC_NONE,
EDAC_RESERVED,
EDAC_PARITY,
EDAC_EC,
EDAC_SECDED,
EDAC_S2ECD2ED,
EDAC_S4ECD4ED,
EDAC_S8ECD8ED,
EDAC_S16ECD16ED,
};
#define EDAC_FLAG_UNKNOWN BIT(EDAC_UNKNOWN)
@ -191,18 +217,30 @@ enum edac_type {
#define EDAC_FLAG_S8ECD8ED BIT(EDAC_S8ECD8ED)
#define EDAC_FLAG_S16ECD16ED BIT(EDAC_S16ECD16ED)
/* scrubbing capabilities */
/**
* enum scrub_type - scrubbing capabilities
* @SCRUB_UNKNOWN: Unknown if scrubber is available
* @SCRUB_NONE: No scrubber
* @SCRUB_SW_PROG: SW progressive (sequential) scrubbing
* @SCRUB_SW_SRC: Software scrub only errors
* @SCRUB_SW_PROG_SRC: Progressive software scrub from an error
* @SCRUB_SW_TUNABLE: Software scrub frequency is tunable
* @SCRUB_HW_PROG: HW progressive (sequential) scrubbing
* @SCRUB_HW_SRC: Hardware scrub only errors
* @SCRUB_HW_PROG_SRC: Progressive hardware scrub from an error
* @SCRUB_HW_TUNABLE: Hardware scrub frequency is tunable
*/
enum scrub_type {
SCRUB_UNKNOWN = 0, /* Unknown if scrubber is available */
SCRUB_NONE, /* No scrubber */
SCRUB_SW_PROG, /* SW progressive (sequential) scrubbing */
SCRUB_SW_SRC, /* Software scrub only errors */
SCRUB_SW_PROG_SRC, /* Progressive software scrub from an error */
SCRUB_SW_TUNABLE, /* Software scrub frequency is tunable */
SCRUB_HW_PROG, /* HW progressive (sequential) scrubbing */
SCRUB_HW_SRC, /* Hardware scrub only errors */
SCRUB_HW_PROG_SRC, /* Progressive hardware scrub from an error */
SCRUB_HW_TUNABLE /* Hardware scrub frequency is tunable */
SCRUB_UNKNOWN = 0,
SCRUB_NONE,
SCRUB_SW_PROG,
SCRUB_SW_SRC,
SCRUB_SW_PROG_SRC,
SCRUB_SW_TUNABLE,
SCRUB_HW_PROG,
SCRUB_HW_SRC,
SCRUB_HW_PROG_SRC,
SCRUB_HW_TUNABLE
};
#define SCRUB_FLAG_SW_PROG BIT(SCRUB_SW_PROG)
@ -374,7 +412,44 @@ struct edac_mc_layer {
#define EDAC_MAX_LAYERS 3
/**
* EDAC_DIMM_PTR - Macro responsible to find a pointer inside a pointer array
* EDAC_DIMM_OFF - Macro responsible to get a pointer offset inside a pointer array
* for the element given by [layer0,layer1,layer2] position
*
* @layers: a struct edac_mc_layer array, describing how many elements
* were allocated for each layer
* @n_layers: Number of layers at the @layers array
* @layer0: layer0 position
* @layer1: layer1 position. Unused if n_layers < 2
* @layer2: layer2 position. Unused if n_layers < 3
*
* For 1 layer, this macro returns &var[layer0] - &var
* For 2 layers, this macro behaves as if a bi-dimensional array had been
* allocated, returning "&var[layer0][layer1] - &var"
* For 3 layers, it behaves as if a tri-dimensional array had been
* allocated, returning "&var[layer0][layer1][layer2] - &var"
*
* A loop could make this more generic but, as we only have 3 layers,
* this is a little faster.
* By design, the number of layers can never be 0 or more than 3. If that
* ever happens, EDAC_DIMM_OFF evaluates to -EINVAL (and EDAC_DIMM_PTR to
* NULL), making the memory allocation routine fail and pointing the
* developer at the mistake.
*/
#define EDAC_DIMM_OFF(layers, nlayers, layer0, layer1, layer2) ({ \
int __i; \
if ((nlayers) == 1) \
__i = layer0; \
else if ((nlayers) == 2) \
__i = (layer1) + ((layers[1]).size * (layer0)); \
else if ((nlayers) == 3) \
__i = (layer2) + ((layers[2]).size * ((layer1) + \
((layers[1]).size * (layer0)))); \
else \
__i = -EINVAL; \
__i; \
})
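To make the index arithmetic concrete, here is a minimal sketch of EDAC_DIMM_OFF() for a made-up two-layer geometry (2 channels x 4 slots); the layer types, sizes and positions are illustrative only, assuming the struct edac_mc_layer fields and the EDAC_MC_LAYER_* layer-type constants that this header defines:

#include <linux/kernel.h>	/* ARRAY_SIZE() */
#include <linux/edac.h>

/* Hypothetical geometry: 2 channels x 4 slots => 8 entries in the flat array. */
static int example_dimm_off(void)
{
	struct edac_mc_layer layers[2] = {
		{ .type = EDAC_MC_LAYER_CHANNEL, .size = 2, .is_virt_csrow = false },
		{ .type = EDAC_MC_LAYER_SLOT,    .size = 4, .is_virt_csrow = true  },
	};

	/* Element at channel 1, slot 2: offset = 2 + 4 * 1 == 6 */
	return EDAC_DIMM_OFF(layers, ARRAY_SIZE(layers), 1, 2, 0);
}

With two layers only layer0 and layer1 are used, so the trailing layer2 argument is ignored by the macro.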
/**
* EDAC_DIMM_PTR - Macro to get a pointer inside a pointer array
* for the element given by [layer0,layer1,layer2] position
*
* @layers: a struct edac_mc_layer array, describing how many elements
@@ -391,30 +466,20 @@ struct edac_mc_layer {
* and to return "&var[layer0][layer1]"
* For 3 layers, this macro is similar to allocate a tri-dimensional array
* and to return "&var[layer0][layer1][layer2]"
*
* A loop could be used here to make it more generic, but, as we only have
* 3 layers, this is a little faster.
* By design, layers can never be 0 or more than 3. If that ever happens,
* a NULL is returned, causing an OOPS during the memory allocation routine,
* with would point to the developer that he's doing something wrong.
*/
#define EDAC_DIMM_PTR(layers, var, nlayers, layer0, layer1, layer2) ({ \
typeof(var) __p; \
if ((nlayers) == 1) \
__p = &var[layer0]; \
else if ((nlayers) == 2) \
__p = &var[(layer1) + ((layers[1]).size * (layer0))]; \
else if ((nlayers) == 3) \
__p = &var[(layer2) + ((layers[2]).size * ((layer1) + \
((layers[1]).size * (layer0))))]; \
else \
typeof(*var) __p; \
int ___i = EDAC_DIMM_OFF(layers, nlayers, layer0, layer1, layer2); \
if (___i < 0) \
__p = NULL; \
else \
__p = (var)[___i]; \
__p; \
})
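As a companion sketch (again with invented positions, and assuming an mci already allocated with the two-layer setup above), EDAC_DIMM_PTR() is what a driver would typically use to reach one struct dimm_info entry inside the flat mci->dimms pointer array:

/* Sketch: label the DIMM at channel 1, slot 2; the label string is made up. */
static void example_label_dimm(struct mem_ctl_info *mci)
{
	struct dimm_info *dimm;

	dimm = EDAC_DIMM_PTR(mci->layers, mci->dimms, mci->n_layers, 1, 2, 0);
	if (dimm)	/* NULL only if the layer count were ever invalid */
		snprintf(dimm->label, sizeof(dimm->label), "CHAN_1_SLOT_2");
}

Note that with the new scheme mci->dimms is an array of pointers, so EDAC_DIMM_PTR() now returns the pointer stored at the computed index rather than the address of the i-th element.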
/* FIXME: add the proper per-location error counts */
struct dimm_info {
struct device dev;
char label[EDAC_MC_LABEL_LEN + 1]; /* DIMM label on motherboard */
/* Memory location data */
@@ -456,6 +521,8 @@ struct rank_info {
};
struct csrow_info {
struct device dev;
/* Used only by edac_mc_find_csrow_by_page() */
unsigned long first_page; /* first page number in csrow */
unsigned long last_page; /* last page number in csrow */
@@ -469,44 +536,26 @@ struct csrow_info {
struct mem_ctl_info *mci; /* the parent */
struct kobject kobj; /* sysfs kobject for this csrow */
/* channel information for this csrow */
u32 nr_channels;
struct rank_info *channels;
struct rank_info **channels;
};
struct mcidev_sysfs_group {
const char *name; /* group name */
const struct mcidev_sysfs_attribute *mcidev_attr; /* group attributes */
};
struct mcidev_sysfs_group_kobj {
struct list_head list; /* list for all instances within a mc */
struct kobject kobj; /* kobj for the group */
const struct mcidev_sysfs_group *grp; /* group description table */
struct mem_ctl_info *mci; /* the parent */
};
/* mcidev_sysfs_attribute structure
* used for driver sysfs attributes and in mem_ctl_info
* sysfs top level entries
/*
* struct errcount_attribute_data - used to store the various error counts
*/
struct mcidev_sysfs_attribute {
/* It should use either attr or grp */
struct attribute attr;
const struct mcidev_sysfs_group *grp; /* Points to a group of attributes */
/* Ops for show/store values at the attribute - not used on group */
ssize_t (*show)(struct mem_ctl_info *,char *);
ssize_t (*store)(struct mem_ctl_info *, const char *,size_t);
struct errcount_attribute_data {
int n_layers;
int pos[EDAC_MAX_LAYERS];
int layer0, layer1, layer2;
};
/* MEMORY controller information structure
*/
struct mem_ctl_info {
struct device dev;
struct bus_type bus;
struct list_head link; /* for global list of mem_ctl_info structs */
struct module *owner; /* Module owner of this control struct */
@@ -548,10 +597,18 @@ struct mem_ctl_info {
unsigned long (*ctl_page_to_phys) (struct mem_ctl_info * mci,
unsigned long page);
int mc_idx;
struct csrow_info *csrows;
struct csrow_info **csrows;
unsigned nr_csrows, num_cschannel;
/* Memory Controller hierarchy */
/*
* Memory Controller hierarchy
*
* There are basically two types of memory controller: those that see
* memory sticks ("DIMMs") and those that see memory ranks. Older
* memory controllers enumerate memory per rank, while most recent
* drivers enumerate memory per DIMM instead.
* When the memory controller is per rank, mem_is_per_rank is true
* (see the layer-setup sketch below, after this header).
*/
unsigned n_layers;
struct edac_mc_layer *layers;
bool mem_is_per_rank;
@@ -560,14 +617,14 @@ struct mem_ctl_info {
* DIMM info. The csrow-based information will eventually be removed
*/
unsigned tot_dimms;
struct dimm_info *dimms;
struct dimm_info **dimms;
/*
* FIXME - what about controllers on other busses? - IDs must be
* unique. dev pointer should be sufficiently unique, but
* BUS:SLOT.FUNC numbers may not be unique.
*/
struct device *dev;
struct device *pdev;
const char *mod_name;
const char *mod_ver;
const char *ctl_name;
@@ -586,12 +643,6 @@ struct mem_ctl_info {
struct completion complete;
/* edac sysfs device control */
struct kobject edac_mci_kobj;
/* list for all grp instances within a mc */
struct list_head grp_kobj_list;
/* Additional top controller level attributes, but specified
* by the low level driver.
*
@@ -609,6 +660,13 @@ struct mem_ctl_info {
/* the internal state of this controller instance */
int op_state;
#ifdef CONFIG_EDAC_DEBUG
struct dentry *debugfs;
u8 fake_inject_layer[EDAC_MAX_LAYERS];
u32 fake_inject_ue;
u16 fake_inject_count;
#endif
};
#endif
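As the "Memory Controller hierarchy" comment above notes, a driver describes its controller to the core through the layers array, whether it enumerates memory per DIMM or per rank. The following is only a rough sketch of a per-DIMM, two-layer setup, assuming the edac_mc_alloc(mc_num, n_layers, layers, sz_pvt) allocator introduced by this series; the sizes, the zero private-data size and the probe function name are invented:

/* In-tree drivers get edac_mc_alloc() from the private "edac_core.h". */
static int example_probe(void)
{
	struct edac_mc_layer layers[2];
	struct mem_ctl_info *mci;

	layers[0].type = EDAC_MC_LAYER_CHANNEL;	/* 4 channels */
	layers[0].size = 4;
	layers[0].is_virt_csrow = false;
	layers[1].type = EDAC_MC_LAYER_SLOT;	/* 2 DIMM slots per channel */
	layers[1].size = 2;
	layers[1].is_virt_csrow = true;		/* slots double as virtual csrows */

	mci = edac_mc_alloc(0, ARRAY_SIZE(layers), layers, 0);
	if (!mci)
		return -ENOMEM;

	/*
	 * A real driver would now fill the dimm_info entries, set up its
	 * error handling and call edac_mc_add_mc(mci).
	 */
	return 0;
}

A per-rank driver would instead typically describe its chip selects with an EDAC_MC_LAYER_CHIP_SELECT layer rather than a slot layer.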

102
include/ras/ras_event.h Normal file

@@ -0,0 +1,102 @@
#undef TRACE_SYSTEM
#define TRACE_SYSTEM ras
#define TRACE_INCLUDE_FILE ras_event
#if !defined(_TRACE_HW_EVENT_MC_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_HW_EVENT_MC_H
#include <linux/tracepoint.h>
#include <linux/edac.h>
#include <linux/ktime.h>
/*
* Hardware Events Report
*
* Those events are generated when the hardware detects a corrected or
* uncorrected error, and are meant to replace the current error-reporting
* APIs defined by both the EDAC and MCE subsystems.
*
* FIXME: Add events for handling memory errors originating from the
* MCE subsystem.
*/
/*
* Hardware-independent Memory Controller specific events
*/
/*
* Default error mechanisms for Memory Controller errors (CE and UE)
*/
TRACE_EVENT(mc_event,
TP_PROTO(const unsigned int err_type,
const char *error_msg,
const char *label,
const int error_count,
const u8 mc_index,
const s8 top_layer,
const s8 mid_layer,
const s8 low_layer,
unsigned long address,
const u8 grain_bits,
unsigned long syndrome,
const char *driver_detail),
TP_ARGS(err_type, error_msg, label, error_count, mc_index,
top_layer, mid_layer, low_layer, address, grain_bits,
syndrome, driver_detail),
TP_STRUCT__entry(
__field( unsigned int, error_type )
__string( msg, error_msg )
__string( label, label )
__field( u16, error_count )
__field( u8, mc_index )
__field( s8, top_layer )
__field( s8, middle_layer )
__field( s8, lower_layer )
__field( long, address )
__field( u8, grain_bits )
__field( long, syndrome )
__string( driver_detail, driver_detail )
),
TP_fast_assign(
__entry->error_type = err_type;
__assign_str(msg, error_msg);
__assign_str(label, label);
__entry->error_count = error_count;
__entry->mc_index = mc_index;
__entry->top_layer = top_layer;
__entry->middle_layer = mid_layer;
__entry->lower_layer = low_layer;
__entry->address = address;
__entry->grain_bits = grain_bits;
__entry->syndrome = syndrome;
__assign_str(driver_detail, driver_detail);
),
TP_printk("%d %s error%s:%s%s on %s (mc:%d location:%d:%d:%d address:0x%08lx grain:%d syndrome:0x%08lx%s%s)",
__entry->error_count,
(__entry->error_type == HW_EVENT_ERR_CORRECTED) ? "Corrected" :
((__entry->error_type == HW_EVENT_ERR_FATAL) ?
"Fatal" : "Uncorrected"),
__entry->error_count > 1 ? "s" : "",
((char *)__get_str(msg))[0] ? " " : "",
__get_str(msg),
__get_str(label),
__entry->mc_index,
__entry->top_layer,
__entry->middle_layer,
__entry->lower_layer,
__entry->address,
1 << __entry->grain_bits,
__entry->syndrome,
((char *)__get_str(driver_detail))[0] ? " " : "",
__get_str(driver_detail))
);
#endif /* _TRACE_HW_EVENT_MC_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
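To show the reporting side of this tracepoint, here is a hedged sketch of a call to the generated trace_mc_event() helper; in this series the real call site lives in the EDAC core's edac_mc_handle_error() path, and every concrete value below (label, address, layer positions, 32-byte grain) is invented for illustration:

/*
 * Exactly one compilation unit defines CREATE_TRACE_POINTS before
 * including the trace header (the EDAC core does this, possibly with an
 * additional TRACE_INCLUDE_PATH setting).
 */
#define CREATE_TRACE_POINTS
#include <ras/ras_event.h>

static void example_report_ce(void)
{
	trace_mc_event(HW_EVENT_ERR_CORRECTED,	/* err_type */
		       "ECC error",		/* error_msg */
		       "DIMM_A1",		/* label (invented) */
		       1,			/* error_count */
		       0,			/* mc_index */
		       1, 2, -1,		/* top/mid/low layer */
		       0x12345000,		/* address */
		       5,			/* grain_bits: 2^5 = 32 bytes */
		       0,			/* syndrome */
		       "");			/* driver_detail */
}

TP_printk() turns grain_bits back into bytes with 1 << grain_bits, so the record above would be printed with "grain:32".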