dect/linux-2.6
Commit Graph

502 Commits

Author SHA1 Message Date
Paul Mundt e2fcf74f3d sh: nommu: use 32-bit phys mode.
The nommu code has regressed somewhat in that 29BIT gets set for the
SH-2/2A configs regardless of the fact that they are really 32BIT sans
MMU or PMB. This does a bit of tidying to get nommu properly selecting
32BIT as it was before.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-11-04 12:32:24 +09:00
Paul Mundt 667b279baa sh: lockless get_user_pages_fast()
Implement get_user_pages_fast without locking in the fastpath on sh.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-10-27 16:43:08 +09:00
Linus Torvalds 1dfd166e93 Merge git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6: (110 commits)
  sh: i2c-sh7760: Replace from ctrl_* to __raw_*
  sh: clkfwk: Shuffle around to match the intc split up.
  sh: clkfwk: modify for_each_frequency end condition
  sh: fix clk_get() error handling
  sh: clkfwk: Fix fault in frequency iterator.
  sh: clkfwk: Add a helper for rate rounding by divisor ranges.
  sh: clkfwk: Abstract rate rounding helper.
  sh: clkfwk: support clock remapping.
  sh: pci: Convert to upper/lower_32_bits() helpers.
  sh: mach-sdk7786: Add support for the FPGA SRAM.
  sh: Provide a generic SRAM pool for tiny memories.
  sh: pci: Support secondary FPGA-driven PCIe clocks on SDK7786.
  sh: pci: Support slot 4 routing on SDK7786.
  sh: Fix up PMB locking.
  sh: mach-sdk7786: Add support for fpga gpios.
  sh: use pr_fmt for clock framework, too.
  sh: remove name and id from struct clk
  sh: free-without-alloc fix for sh_mobile_lcdcfb
  sh: perf: Set up perf_max_events.
  sh: perf: Support SH-X3 hardware counters.
  ...

Fix up trivial conflicts (perf_max_events got removed) in arch/sh/kernel/perf_event.c
2010-10-25 07:51:49 -07:00
Paul Mundt c993487ec8 sh: Provide a generic SRAM pool for tiny memories.
This sets up a generic SRAM pool for CPUs and platform code to insert
their otherwise unused memories into. A simple alloc/free interface is
provided (lifted from avr32) for generic code.

This only applies to tiny SRAMs that are otherwise unmanaged, and does
not take into account the more complex SRAMs sitting behind transfer
engines, or that employ an I/D split.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-10-15 02:09:00 +09:00
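
For illustration, a minimal sketch of such a tiny-SRAM pool built on the
kernel's genalloc API; the base address, size, and allocation granularity
below are made-up example values rather than the actual arch/sh code.

  #include <linux/genalloc.h>
  #include <linux/init.h>
  #include <linux/errno.h>

  static struct gen_pool *sram_pool;

  static int __init sram_pool_init(void)
  {
          /* 32-byte minimum allocation granularity, no NUMA node. */
          sram_pool = gen_pool_create(5, -1);
          if (!sram_pool)
                  return -ENOMEM;

          /* Hand an otherwise unmanaged on-chip SRAM to the pool
           * (example address/size). */
          return gen_pool_add(sram_pool, 0xe5200000, 0x4000, -1);
  }

  /* Simple alloc/free wrappers in the spirit of the avr32 interface. */
  static inline void *sram_alloc(size_t size)
  {
          return (void *)gen_pool_alloc(sram_pool, size);
  }

  static inline void sram_free(void *addr, size_t size)
  {
          gen_pool_free(sram_pool, (unsigned long)addr, size);
  }
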
Paul Mundt f7fcec93b6 sh: Fix up PMB locking.
This first converts the PMB locking over to raw spinlocks, and secondly
fixes up a nested locking issue that was triggering lockdep early on:

 swapper/0 is trying to acquire lock:
  (&pmbe->lock){......}, at: [<806be9bc>] pmb_init+0xf4/0x4dc

 but task is already holding lock:
  (&pmbe->lock){......}, at: [<806be98e>] pmb_init+0xc6/0x4dc

 other info that might help us debug this:
 1 lock held by swapper/0:
  #0:  (&pmbe->lock){......}, at: [<806be98e>] pmb_init+0xc6/0x4dc

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-10-14 03:49:15 +09:00
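
A rough sketch of the two ingredients described above: raw spinlocks in
the entries, plus an explicit nesting annotation so lockdep accepts
taking a second lock of the same class while working on linked entries.
The structure and helper below are simplified illustrations, not the
actual pmb.c code.

  #include <linux/spinlock.h>
  #include <linux/lockdep.h>

  struct pmb_entry_example {
          raw_spinlock_t lock;            /* was a regular spinlock_t */
          struct pmb_entry_example *link;
  };

  static void pmb_adjust_pair(struct pmb_entry_example *pmbe)
  {
          unsigned long flags;

          raw_spin_lock_irqsave(&pmbe->lock, flags);

          /* Locking a second entry of the same lock class needs a
           * nesting annotation, otherwise lockdep emits the splat
           * quoted above. */
          if (pmbe->link)
                  raw_spin_lock_nested(&pmbe->link->lock,
                                       SINGLE_DEPTH_NESTING);

          /* ... merge or adjust the two entries here ... */

          if (pmbe->link)
                  raw_spin_unlock(&pmbe->link->lock);
          raw_spin_unlock_irqrestore(&pmbe->lock, flags);
  }
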
Yinghai Lu c7fc2de0c8 memblock, bootmem: Round pfn properly for memory and reserved regions
We need to round memory regions correctly -- specifically, we need to
round reserved region in the more expansive direction (lower limit
down, upper limit up) whereas usable memory regions need to be rounded
in the more restrictive direction (lower limit up, upper limit down).

This introduces two sets of inlines:

	memblock_region_memory_base_pfn()
	memblock_region_memory_end_pfn()
	memblock_region_reserved_base_pfn()
	memblock_region_reserved_end_pfn()

Although they are antisymmetric (and therefore technically duplicates),
the use of the different inlines explicitly documents the programmer's
intention.

The lack of proper rounding caused a bug on ARM, which was then found
to also affect other architectures.

Reported-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4CB4CDFD.4020105@kernel.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-10-12 15:37:51 -07:00
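
A sketch of the asymmetric rounding being described, expressed with the
kernel's PFN_UP()/PFN_DOWN() helpers; treat it as a reconstruction of the
intent rather than the literal inlines from the patch.

  #include <linux/pfn.h>
  #include <linux/memblock.h>

  /* Usable memory shrinks to whole pages: base rounds up, end rounds down. */
  static inline unsigned long
  memblock_region_memory_base_pfn(const struct memblock_region *reg)
  {
          return PFN_UP(reg->base);
  }

  static inline unsigned long
  memblock_region_memory_end_pfn(const struct memblock_region *reg)
  {
          return PFN_DOWN(reg->base + reg->size);
  }

  /* Reserved memory grows to whole pages: base rounds down, end rounds up. */
  static inline unsigned long
  memblock_region_reserved_base_pfn(const struct memblock_region *reg)
  {
          return PFN_DOWN(reg->base);
  }

  static inline unsigned long
  memblock_region_reserved_end_pfn(const struct memblock_region *reg)
  {
          return PFN_UP(reg->base + reg->size);
  }
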
Paul Mundt 3f224f4e05 sh: provide generic arch_debugfs_dir.
While sh previously had its own debugfs root, there now exists a
common arch_debugfs_dir prototype, so we switch everything over to
that.  Presumably once more architectures start making use of this
we'll be able to just kill off the stub kdebugfs wrapper.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-09-24 04:04:26 +09:00
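
A minimal sketch of what providing the common root looks like; the
directory name follows the /sys/kernel/debug/sh path mentioned later in
this log, and the initcall level is an assumption.

  #include <linux/debugfs.h>
  #include <linux/module.h>
  #include <linux/init.h>
  #include <linux/errno.h>

  struct dentry *arch_debugfs_dir;
  EXPORT_SYMBOL(arch_debugfs_dir);

  static int __init arch_kdebugfs_init(void)
  {
          /* Everything that used the old sh-specific root now hangs
           * off /sys/kernel/debug/sh through this common pointer. */
          arch_debugfs_dir = debugfs_create_dir("sh", NULL);
          if (!arch_debugfs_dir)
                  return -ENOMEM;

          return 0;
  }
  arch_initcall(arch_kdebugfs_init);
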
matt mooney a234ca0faa sh: change to new flag variable
Replace EXTRA_CFLAGS with ccflags-y.

Signed-off-by: matt mooney <mfm@muteddisk.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-09-23 16:18:18 +09:00
Paul Mundt a8dc49b51a sh: stub __flush_tlb_global() definition for nommu.
This fixes up the nommu build with a stub definition for
__flush_tlb_global(), now used by the reboot code.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-08-16 14:53:01 +09:00
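
The stub in question is essentially an empty function so the reboot path
still links on nommu; a sketch:

  #include <asm/tlbflush.h>

  /* No MMU means no TLB state to flush, but the reboot code can
   * still call this unconditionally. */
  void __flush_tlb_global(void)
  {
  }
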
Andrew Murray 57682827b9 sh: Use __GFP_ZERO for dma_generic_alloc_coherent().
This follows the x86 change away from memset() and over to an
unconditional __GFP_ZERO, which gets us the optimized page clearing by
way of clear_highpage().

Signed-off-by: Andrew Murray <amurray@mpc-data.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-08-04 16:38:35 +09:00
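
To illustrate the shape of the change (simplified, not the exact sh
implementation): request zeroed pages from the allocator instead of
memset()ing them afterwards.

  #include <linux/dma-mapping.h>
  #include <linux/gfp.h>
  #include <asm/io.h>

  void *dma_generic_alloc_coherent(struct device *dev, size_t size,
                                   dma_addr_t *dma_handle, gfp_t gfp)
  {
          void *ret;
          int order = get_order(size);

          gfp |= __GFP_ZERO;      /* replaces the explicit memset(ret, 0, size) */

          ret = (void *)__get_free_pages(gfp, order);
          if (!ret)
                  return NULL;

          *dma_handle = virt_to_phys(ret);
          return ret;
  }
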
Paul Mundt baea90ea14 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 2010-08-04 13:52:34 +09:00
Benjamin Herrenschmidt 64106ca61c memblock/sh: Use new accessors
CC: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-08-04 14:38:59 +10:00
Yinghai Lu 95f72d1ed4 lmb: rename to memblock
via the following scripts

      FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

      sed -i \
        -e 's/lmb/memblock/g' \
        -e 's/LMB/MEMBLOCK/g' \
        $FILES

      for N in $(find . -name lmb.[ch]); do
        M=$(echo $N | sed 's/lmb/memblock/g')
        mv $N $M
      done

and remove some incorrect changes, such as those touching lmbench and dlmb.

also move memblock.c from lib/ to mm/

Suggested-by: Ingo Molnar <mingo@elte.hu>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-07-14 17:14:00 +10:00
Paul Mundt 59615ecdb5 sh: Provide a global TLB flush for U/I-TLB clear.
This provides a sledgehammer approach for clearing the TLBs, only to be
used in cases where we know we will never want to use the mappings again
and have no interest in preserving state. This also destroys wired
entries.

The primary use for this is when we are either entering or exiting the
kernel completely, in the latter case as a precursor for CPU reset by
MMU.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-07-02 15:44:09 +09:00
Paul Mundt 598ee698d9 sh: Fix up PUD trampling in ranged page table init for X2TLB.
page_table_range_init() presently allocates a PUD page for the 3-level
page table case on X2 TLB configurations on each successive call. This
results in the previous PUD page being trampled when PMDs with an
overlapping PUD are initialized. This case was triggered by putting
persistent kmaps immediately below the fixmap range for highmem.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-06-21 16:26:27 +09:00
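
A hedged sketch of the kind of fix this implies: only populate a PUD slot
when it is still empty, rather than unconditionally allocating a fresh
page on every pass (the helper name and the bootmem allocation are
assumptions).

  #include <linux/bootmem.h>
  #include <asm/pgalloc.h>

  static pmd_t * __init one_pmd_table_init(pud_t *pud)
  {
          if (pud_none(*pud)) {
                  pmd_t *pmd = alloc_bootmem_pages(PAGE_SIZE);

                  pud_populate(&init_mm, pud, pmd);
                  BUG_ON(pmd != pmd_offset(pud, 0));
          }

          return pmd_offset(pud, 0);
  }
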
Julia Lawall 0e6f989ba8 arch/sh/mm: Eliminate a double lock
The function begins and ends with a read_lock.  The latter is changed to a
read_unlock.

A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)

// <smpl>
@locked@
expression E1;
position p;
@@

read_lock(E1@p,...);

@r exists@
expression x <= locked.E1;
expression locked.E1;
expression E2;
identifier lock;
position locked.p,p1,p2;
@@

*lock@p1 (E1@p,...);
... when != E1
    when != \(x = E2\|&x\)
*lock@p2 (E1,...);
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Acked-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-06-21 13:46:53 +09:00
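
The C-level shape of the bug the semantic patch matches looks roughly
like this (illustrative only, not the actual arch/sh/mm function):

  #include <linux/spinlock.h>
  #include <linux/sched.h>

  static void buggy_reader(void)
  {
          read_lock(&tasklist_lock);

          /* ... walk some shared state ... */

          read_lock(&tasklist_lock);      /* bug: should be read_unlock() */
  }
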
Paul Mundt 06225c08ec sh: Fix up the NUMA build for recent LMB changes.
Now that the node 0 initialization code has been overhauled, kill off the
now obsolete setup_memory() bits.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-05-18 18:15:44 +09:00
Paul Mundt c77b29db74 sh: fix up CONFIG_KEXEC=n build.
The reserve_crashkernel() definition is in asm/kexec.h which is only
dragged in via linux/kexec.h if CONFIG_KEXEC is set. Just switch over to
asm/kexec.h unconditionally to fix up the build.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-05-18 14:53:23 +09:00
Paul Mundt dfbca89987 sh: Reject small mappings for PMB bolting.
The minimum section size for the PMB is 16M, so just always error
out early if the specified size is too small. This permits us to
unconditionally call into pmb_bolt_mapping() with variable sizes
without wasting a TLB and cache flush for the range.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-05-11 13:50:29 +09:00
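
In other words, something along these lines at the top of the bolting
path (sketch; the function signature is approximated and the size
constant assumes the usual SZ_16M definition):

  #include <asm/sizes.h>

  int pmb_bolt_mapping(unsigned long vaddr, phys_addr_t phys,
                       unsigned long size, pgprot_t prot)
  {
          /* PMB sections start at 16MB; reject anything smaller before
           * doing any TLB or cache flushing for the range. */
          if (size < SZ_16M)
                  return -EINVAL;

          /* ... establish the bolted mapping ... */
          return 0;
  }
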
Paul Mundt 4bc277ac9c sh: bootmem refactoring.
This reworks much of the bootmem setup and initialization code allowing
us to get rid of duplicate work between the NUMA and non-NUMA cases. The
end result is that we end up with a much more flexible interface for
supporting more complex topologies (fake NUMA, highmem, etc, etc.) which
is entirely LMB backed. This is an incremental step for more NUMA work as
well as gradually enabling migration off of bootmem entirely.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-05-11 13:32:19 +09:00
Paul Mundt 19d8f84f86 sh: enable LMB region setup via machvec.
This plugs in a memory init callback in the machvec to permit boards to
wire up various bits of memory directly in to LMB. A generic machvec
implementation is provided that simply wraps around the normal
Kconfig-derived memory start/size.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-05-10 15:39:05 +09:00
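
Conceptually, the generic fallback just feeds the Kconfig-derived window
into LMB, with boards able to override it through the machvec; a sketch
in which the callback and machvec field names are assumptions.

  #include <linux/lmb.h>
  #include <asm/machvec.h>
  #include <asm/page.h>

  /* Generic implementation: register the Kconfig-derived memory window. */
  static void __init generic_mem_init(void)
  {
          lmb_add(__MEMORY_START, __MEMORY_SIZE);
  }

  void __init plat_lmb_setup(void)
  {
          if (sh_mv.mv_mem_init)
                  sh_mv.mv_mem_init();    /* board wires up its own regions */
          else
                  generic_mem_init();
  }
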
Paul Mundt 364b97d9e2 sh: Kill off dangling goto labels from oom-killer rework.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-26 16:15:17 +09:00
Paul Mundt e19553427c Merge branch 'sh/stable-updates'
Conflicts:
	arch/sh/kernel/dwarf.c
	drivers/dma/shdma.c

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-26 16:08:27 +09:00
Paul Mundt 35f6cd4a06 Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/mfleming/sh-2.6
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/mfleming/sh-2.6:
  sh: Use correct mask when comparing PMB DATA array values
  sh: Do not try merging two 128MB PMB mappings
  sh: Fix zImage load address when CONFIG_32BIT=y
  sh: Fix address to decompress at when CONFIG_32BIT=y
  sh: Assembly friendly __pa and __va definitions
2010-04-26 15:54:48 +09:00
Nick Piggin 6b6b18e62c sh: invoke oom-killer from page fault
As explained in commit 1c0fe6e3bd, we want to call the
architecture-independent OOM killer when getting an unexplained OOM from
handle_mm_fault, rather than simply killing current.

Cc: linux-sh@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-26 15:47:01 +09:00
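
The relevant branch inside a fault handler then takes roughly this shape
(excerpt-style sketch; the variable names are assumed from a typical
do_page_fault()):

  fault = handle_mm_fault(mm, vma, address,
                          writeaccess ? FAULT_FLAG_WRITE : 0);
  if (unlikely(fault & VM_FAULT_OOM)) {
          up_read(&mm->mmap_sem);
          /* Let the generic OOM killer pick a victim rather than
           * unconditionally killing current. */
          pagefault_out_of_memory();
          return;
  }
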
Matt Fleming c7b03fa0bd sh: Do not try merging two 128MB PMB mappings
There is a logic error in pmb_merge() that means we will incorrectly try
to merge two 128MB PMB mappings into one mapping. However, 256MB isn't a
valid PMB map size and pmb_merge() will actually drop the second 128MB
mapping.

This patch allows my SDK7786 board to boot when configured with
CONFIG_MEMORY_SIZE=0x10000000.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-04-25 20:44:23 +01:00
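
For reference, the hardware constraint behind this: PMB entries only come
in a few fixed sizes, so a merge is only legal when the combined size is
itself one of them. A sketch of such a validity check (the table layout
is illustrative):

  #include <linux/kernel.h>
  #include <linux/types.h>
  #include <asm/sizes.h>

  static const unsigned long pmb_valid_sizes[] = {
          SZ_16M, SZ_64M, SZ_128M, SZ_512M,
  };

  /* 128MB + 128MB = 256MB is not in the table, so that merge has to be
   * rejected instead of silently dropping the second mapping. */
  static bool pmb_size_valid(unsigned long size)
  {
          int i;

          for (i = 0; i < ARRAY_SIZE(pmb_valid_sizes); i++)
                  if (pmb_valid_sizes[i] == size)
                          return true;

          return false;
  }
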
Paul Mundt 88253e8459 sh: Zero out aliases counter when using SH-X3 hardware assistance.
This zeroes out the number of cache aliases in the cache info descriptors
when hardware alias avoidance is enabled. This cuts down on the amount of
flushing taken care of by common code, and also permits coherency control
to be disabled for the single CPU and 4k page size case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-20 15:37:23 +09:00
Paul Mundt 3cf6fa1e33 sh: Enable SH-X3 hardware synonym avoidance handling.
This enables support for the hardware synonym avoidance handling on SH-X3
CPUs for the case where dcache aliases are possible. icache handling is
retained, but we flip on broadcasting of the block invalidations due to
the lack of coherency otherwise on SMP.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-19 17:27:17 +09:00
Paul Mundt 94a46d3cde Merge branch 'sh/stable-updates' 2010-04-05 12:21:09 +09:00
Tejun Heo 336f5899d2 Merge branch 'master' into export-slabh 2010-04-05 11:37:28 +09:00
Paul Mundt be97d758e5 sh: Fix up the SH-3 build for recent TLB changes.
While the MMUCR.URB and ITLB/UTLB differentiation works fine for all SH-4
and later TLBs, these features are absent on SH-3. This splits out
local_flush_tlb_all() into SH-4 and PTEAEX copies while restoring the
old SH-3 one, subsequently fixing up the build.

This will probably want some further reordering and tidying in the
future, but that's out of scope at present.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-02 16:13:27 +09:00
Tejun Heo 5a0e3ad6af include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability.  As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there, i.e. if only gfp is used,
  gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
  blocks and tries to put the new include such that its order conforms
  to its surroundings.  It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints out
  an error message indicating which .h file needs to be added to the
  file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some didn't need the inclusion,
   some needed manual addition while adding it to implementation .h or
   embedding .c file was more appropriate for others.  This step added
   inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
   editing them as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
   widely available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that they could be applied as
   a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-30 22:02:32 +09:00
Matt Fleming 6ae6650232 sh: tlb debugfs support.
Export the status of the utlb and itlb entries through debugfs.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-29 15:24:54 +09:00
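
A skeleton of exporting such state through debugfs with a seq_file show
routine; the file name, permissions, and the dump loop itself are
assumptions.

  #include <linux/debugfs.h>
  #include <linux/seq_file.h>
  #include <linux/init.h>

  extern struct dentry *arch_debugfs_dir;

  static int tlb_seq_show(struct seq_file *file, void *iter)
  {
          /* ... read out the UTLB/ITLB arrays and seq_printf() each entry ... */
          return 0;
  }

  static int tlb_debugfs_open(struct inode *inode, struct file *file)
  {
          return single_open(file, tlb_seq_show, inode->i_private);
  }

  static const struct file_operations tlb_debugfs_fops = {
          .open    = tlb_debugfs_open,
          .read    = seq_read,
          .llseek  = seq_lseek,
          .release = single_release,
  };

  static int __init tlb_debugfs_init(void)
  {
          debugfs_create_file("tlb", S_IRUSR, arch_debugfs_dir,
                              NULL, &tlb_debugfs_fops);
          return 0;
  }
  late_initcall(tlb_debugfs_init);
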
Matt Fleming 4539282dbc sh: update the TLB replacement counter for entry wiring.
Presently the TLB wiring code depends on MMUCR.URB for working out where
to place the wired entry, but fails to take the replacement counter into
consideration. This fixes up the wiring logic and ensures that wired
entries remain so.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-26 11:37:16 +09:00
Matt Fleming 3fe0f36c7e sh: Fix build after dynamic PMB rework
set_pmb_entry() is now only used by a function that is wrapped in #ifdef
CONFIG_PM, so wrap set_pmb_entry() in CONFIG_PM too.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-23 13:37:03 +09:00
Matt Fleming b5b6c7eea1 sh: Replace unsafe manipulation of MMUCR
Setting the TI in MMUCR causes all the TLB bits in MMUCR to be
cleared. Unfortunately, the TLB wired bits are also cleared when setting
the TI bit, causing any wired TLB entries to become unwired.

Use local_flush_tlb_all() which implements TLB flushing in a safer
manner by using the memory-mapped TLB registers. As each CPU has its own
PMB the modifications in pmb_init() only affect the local CPU, so only
flush the local CPU's TLB.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-23 13:36:21 +09:00
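
In spirit, the change replaces a direct register poke with the safer
helper; a sketch assuming the usual __raw_readl()/__raw_writel()
accessors and the MMUCR/MMUCR_TI definitions named in the message.

  /* Before: setting MMUCR.TI invalidates the TLB but also clobbers
   * the wired-entry (URB) state, unwiring wired entries. */
  __raw_writel(__raw_readl(MMUCR) | MMUCR_TI, MMUCR);

  /* After: flush through the memory-mapped TLB registers, and only
   * for the local CPU since each CPU has its own PMB. */
  local_flush_tlb_all();
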
Matt Fleming a9eb4f6d1a sh: Flush ITLB too in PTEAEX's flush_tlb_page()
flush_tlb_page() can be used to flush TLB entries that map executable
pages. Therefore, we need to ensure that the ITLB is also flushed in
local_flush_tlb_page().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-23 13:36:15 +09:00
Paul Mundt 5b34d1ee1e sh: Export uncached helper symbols.
oprofile and others need to get at these, so provide symbol exports.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-10 16:46:58 +09:00
Paul Mundt 40d1f00482 sh: Fix up uncached offset for legacy 29-bit mode.
The uncached_start was being set up properly for 32-bit but managed to
break 29-bit in the process, fix it up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-08 21:03:21 +09:00
Pawel Moll 62c8cbbfc2 sh: Move PMB debugfs entry initialization to later stage
... so the "sh_debugfs_root" is already available. Previously it
wasn't, and as a result its path was "/sys/kernel/debug/pmb" instead of
"/sys/kernel/debug/sh/pmb".

Signed-off-by: Pawel Moll <pawel.moll@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-05 01:11:31 +09:00
Paul Mundt 281983d6ff sh: fix up MMU reset with variable PMB mapping sizes.
Presently we run into issues with the MMU resetting the CPU when
variable-sized mappings are employed. This takes a slightly more
aggressive approach to keeping the TLB and cache state sane before
establishing the mappings in order to cut down on races observed on
SMP configurations.

At the same time, we bump the VMA range up to the 0xb000...0xc000 range,
as there still seems to be some undocumented behaviour in setting up
variable mappings in the 0xa000...0xb000 range, resulting in reset by the
TLB.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-04 16:44:20 +09:00
Paul Mundt 09e1172317 sh: establish PMB mappings for NUMA nodes.
In the case of NUMA emulation, when in-range PPNs are being used for
secondary nodes, we need to make sure that the PMB has a mapping for it
before setting up the pgdat. This prevents the MMU from resetting.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-03 13:16:31 +09:00
Paul Mundt a1042aa248 sh: check for existing mappings for bolted PMB entries.
When entries are being bolted unconditionally it's possible that the boot
loader has established in-range mappings that we don't want to clobber.
Perform some basic validation to ensure that the new mapping
is out of range before allowing the entry setup to take place.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-03 13:13:25 +09:00
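
A sketch of the kind of validation this describes: scan the already
established entries and refuse to bolt a mapping that overlaps one of
them. The entry table, flag, and field names below are illustrative
assumptions.

  static bool pmb_range_is_mapped(phys_addr_t phys, unsigned long size)
  {
          int i;

          for (i = 0; i < NR_PMB_ENTRIES; i++) {
                  struct pmb_entry *pmbe = &pmb_entry_table[i];

                  if (!(pmbe->flags & PMB_V))     /* entry not in use */
                          continue;

                  if (phys < pmbe->ppn + pmbe->size &&
                      phys + size > pmbe->ppn)
                          return true;            /* overlap: don't clobber */
          }

          return false;
  }
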
Paul Mundt 6eb3c735d2 sh: fixed virt/phys mapping helpers for PMB.
This moves the pmb_remap_caller() mapping logic out into
pmb_bolt_mapping(), which enables us to establish fixed mappings in
places such as the NUMA code.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02 17:22:29 +09:00
Paul Mundt 4cfa8e75d6 sh: make pmb iomapping configurable.
This plugs in an early_param for permitting transparent PMB-backed
ioremapping to be enabled/disabled. For the time being, we use a
default-disabled policy.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02 16:49:50 +09:00
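
A sketch of such a default-off early_param knob; the parameter name and
variable are assumptions, while the early_param() plumbing is the
standard kernel mechanism.

  #include <linux/init.h>
  #include <linux/string.h>

  static int pmb_iomapping_enabled;       /* default-disabled policy */

  static int __init early_pmb_iomap(char *str)
  {
          if (!str)
                  return 0;

          if (!strcmp(str, "on") || !strcmp(str, "enable"))
                  pmb_iomapping_enabled = 1;

          return 0;
  }
  early_param("pmb_iomap", early_pmb_iomap);
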
Paul Mundt 90e7d649d8 sh: reworked dynamic PMB mapping.
This implements a fairly significant overhaul of the dynamic PMB mapping
code. The primary change here is that the PMB gets its own VMA that
follows the uncached mapping and we attempt to be a bit more intelligent
with dynamic sizing, multi-entry mapping, and so forth.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02 16:40:06 +09:00
Paul Mundt 9adae97209 Merge branches 'sh/dmaengine', 'sh/hw-breakpoints' and 'sh/trivial' 2010-03-02 11:49:25 +09:00
Linus Torvalds ac0f6f927d Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (100 commits)
  ARM: Eliminate decompressor -Dstatic= PIC hack
  ARM: 5958/1: ARM: U300: fix inverted clk round rate
  ARM: 5956/1: misplaced parentheses
  ARM: 5955/1: ep93xx: move timer defines into core.c and document
  ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
  ARM: 5953/1: ep93xx: fix broken build of clock.c
  ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
  ARM: 5949/1: NUC900 add gpio virtual memory map
  ARM: 5948/1: Enable timer0 to time4 clock support for nuc910
  ARM: 5940/2: ARM: MMCI: remove custom DBG macro and printk
  ARM: make_coherent(): fix problems with highpte, part 2
  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
  ARM: 5945/1: ep93xx: include correct irq.h in core.c
  ARM: 5933/1: amba-pl011: support hardware flow control
  ARM: 5930/1: Add PKMAP area description to memory.txt.
  ARM: 5929/1: Add checks to detect overlap of memory regions.
  ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
  ARM: 5927/1: Make delimiters of DMA area globally visibly.
  ARM: 5926/1: Add "Virtual kernel memory..." printout.
  ARM: 5920/1: OMAP4: Enable L2 Cache
  ...

Fix up trivial conflict in arch/arm/mach-mx25/clock.c
2010-03-01 09:15:15 -08:00
Robert P. J. Day 4b62c0f1e7 sh: No need to explicitly include <linux/rwlock.h>.
Since <linux/spinlock.h> already includes <linux/rwlock.h>, and the
latter file will warn about not having included the former file
anyway, there is no value in including rwlock.h explicitly.

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-01 11:57:32 +09:00
Paul Mundt 94ea5e449a sh: wire up SET/GET_UNALIGN_CTL.
This hooks up the SET/GET_UNALIGN_CTL knobs cribbing the bulk of it from
the PPC and ia64 implementations. The thread flags happen to be the
logical inverse of what the global fault mode is set to, so this works
out pretty cleanly. By default the global fault mode is used, with tasks
now being able to override their own settings via prctl().

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-23 12:56:30 +09:00
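
From userspace, the new knobs are reached through the standard prctl()
interface; a minimal usage sketch:

  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          unsigned int mode;

          /* Request SIGBUS on unaligned accesses for this task only,
           * overriding the global fault mode. */
          if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS, 0, 0, 0))
                  perror("PR_SET_UNALIGN");

          if (prctl(PR_GET_UNALIGN, (unsigned long)&mode, 0, 0, 0) == 0)
                  printf("unaligned access mode: %#x\n", mode);

          return 0;
  }
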