dect/linux-2.6 (archived)
Commit Graph

393 Commits

Author SHA1 Message Date
Al Viro e77414e0aa fix broken aliasing checks for MAP_FIXED on sparc32, mips, arm and sh
We want addr - (pgoff << PAGE_SHIFT) consistently coloured...

Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2009-12-11 06:44:59 -05:00
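
A minimal sketch of the shape of this fix, as it lands in an arch's arch_get_unmapped_area(); shm_align_mask is the colour mask used on sh, and the surrounding function is assumed context rather than the verbatim diff:

  /* MAP_FIXED: a shared mapping is only acceptable if the address and
   * the file offset land on the same cache colour, i.e. if
   * addr - (pgoff << PAGE_SHIFT) is consistently coloured. */
  if (flags & MAP_FIXED) {
          if ((flags & MAP_SHARED) &&
              ((addr - (pgoff << PAGE_SHIFT)) & shm_align_mask))
                  return -EINVAL;
          return addr;
  }
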
Matt Fleming a781d1e5ff sh: Drop associative writes for SH-4 cache flushes.
When flushing/invalidating the icache/dcache via the memory-mapped IC/OC
address arrays, the associative bit should only be used in conjunction with
virtual addresses. However, we currently flush cache lines based on physical
address, so stop using the associative bit.

It is a better strategy to use non-associative writes (and physical tags) for
flushing the caches anyway, because flushing by virtual address (as with the
A-bit set) requires a valid TLB entry for that virtual address. If one does not
exist in the TLB no exception is generated and the flush is silently ignored.

This is also future-proofing for SH-4A parts which are gradually phasing out
associative writes to the cache array due to the aforementioned case of certain
flushes silently turning into nops.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-04 16:18:11 +09:00
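
A hedged sketch of what a non-associative write to the SH-4 OC address array looks like; the constants follow the SH-4 manual's layout but should be treated as illustrative:

  #define CACHE_OC_ADDRESS_ARRAY  0xf4000000UL    /* OC address array base */
  #define SH_CACHE_ASSOC          (1 << 3)        /* A-bit: avoided here */

  static inline void oc_flush_line(unsigned long entry, unsigned long phys)
  {
          /* The entry is selected by the index bits of the store
           * address (A-bit clear), and the data carries the physical
           * tag with the valid/dirty bits clear, writing the line back
           * and invalidating it. Unlike an A-bit flush by virtual
           * address, this cannot be silently ignored on a TLB miss. */
          __raw_writel(phys & 0x1ffffc00,
                       CACHE_OC_ADDRESS_ARRAY | (entry << 5));
  }
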
Paul Mundt 7e01c94998 sh: Partial revert of copy/clear_user_highpage() optimizations.
These still require more testing, so revert them for now. We keep the
off-by-1 in the fixmap colouring and drop the rest.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-04 15:14:52 +09:00
Stuart Menefy 39ac11c160 sh: Improve performance of SH4 versions of copy/clear_user_highpage
The previous implementation of clear_user_highpage and copy_user_highpage
checked to see if there was a D-cache aliasing issue between the user
and kernel mappings of a page; if there was, they always did a
flush with writeback on the dirtied kernel alias.

However as we now have the ability to map a page into kernel space
with the same cache colour as the user mapping, there is no need to
write back this data.

Currently we also invalidate the kernel alias as a precaution, however
I'm not sure if this is actually required.

Also correct the definition of FIX_CMAP_END so that the mappings created
by kmap_coherent() are actually at the correct colour.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-24 17:13:35 +09:00
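
A hedged sketch of the idea behind the optimization; kmap_coherent() is the sh colour-aware mapper, and the exact kmap_coherent()/kunmap_coherent() signatures of this era are approximate:

  void clear_user_highpage(struct page *page, unsigned long vaddr)
  {
          /* map at a kernel address with the same cache colour as the
           * user mapping, so no writeback is needed afterwards */
          void *kaddr = kmap_coherent(page, vaddr);

          clear_page(kaddr);

          /* precautionary invalidate of the kernel alias (possibly
           * unnecessary, as noted above) */
          __flush_invalidate_region(kaddr, PAGE_SIZE);
          kunmap_coherent(kaddr);
  }
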
Paul Mundt 3af539e59c sh64: Fix up reworked cache op build.
This gets the build fixed up for the sh64 cache enabled case.
Disabling still needs further abstraction for independent I/D-cache
disabling.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-12 17:03:28 +09:00
Paul Mundt a4d9d0b8a8 sh: Enable PMB support for all SH-4A CPUs.
Presently the PMB option is limited to a number of CPU subtypes it was
tested with, but PMB is generally available on all SH-4A CPUs, so just
drop the subtype conditionals.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-11 10:56:13 +09:00
Paul Mundt 76d2318020 Merge branch 'sh/stable-updates' 2009-11-09 10:55:36 +09:00
Matt Fleming a9d244a2ff sh: Account for cache aliases in flush_icache_range()
The icache may also contain aliases so we must account for them just
like we do when manipulating the dcache. We usually get away with
aliases in the icache because the instructions that are read from memory
are read-only, i.e. they never change. However, the place where this
bites us is when the code has been modified.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-09 10:45:30 +09:00
Roel Kluin 9016332014 sh: Make sure indexes are positive
The indexes are signed; make sure they are not negative
before we read array elements.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-04 11:48:07 +09:00
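
The general shape of this class of fix, illustratively (lookup_index() and table are hypothetical):

  int idx = lookup_index(key);            /* may return -1 on failure */

  if (idx < 0 || idx >= ARRAY_SIZE(table))
          return -EINVAL;
  value = table[idx];                     /* safe: idx validated above */
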
Matt Fleming eb3118f652 sh: Do not apply virt_to_phys() to a physical address
The variable 'phys' already contains the physical address to flush. It
is not a virtual address and should not be passed to virt_to_phys().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-30 11:53:22 +09:00
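
The bug pattern, with a hypothetical flush helper:

  flush_cache_line(virt_to_phys(phys));   /* wrong: double translation */
  flush_cache_line(phys);                 /* right: already physical */
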
Paul Mundt 9b3b21f788 Merge branch 'sh/stable-updates' 2009-10-27 17:10:24 +09:00
Paul Mundt 94c285108e sh: Bump up dma_ops initialization far earlier in the boot process.
Presently this was tacked on to the dma debug init bits from
fs_initcall(), which is far too late for devices setting up their own
per-device coherent areas.

Throw this in the beginning of mem_init(), as per the x86 iommu
allocation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-27 17:07:45 +09:00
Paul Mundt 0a993b0a29 sh64: cache flush symbol exports.
These were previously hidden in sh_ksyms_32, despite also being needed
for sh64 now that the cache.c code is shared.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-27 10:51:35 +09:00
Paul Mundt ffb4a73d89 sh: Fix hugetlbfs dependencies for SH-3 && MMU configurations.
The hugetlb dependencies presently depend on SUPERH && MMU while the
hugetlb page size definitions depend on CPU_SH4 or CPU_SH5. This
unfortunately allows SH-3 + MMU configurations to enable hugetlbfs
without a corresponding HPAGE_SHIFT definition, resulting in the build
blowing up.

As SH-3 doesn't support variable page sizes, we tighten up the
dependencies a bit to prevent hugetlbfs from being enabled. These days
we also have a shiny new SYS_SUPPORTS_HUGETLBFS, so switch to using
that rather than adding to the list of corner cases in fs/Kconfig.

Reported-by: Kristoffer Ericson <kristoffer.ericson@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-27 07:22:37 +09:00
Paul Mundt f32154c9b5 sh: Add dma-mapping support for dma_alloc/free_coherent() overrides.
This moves the current dma_alloc/free_coherent() calls to a generic
variant and plugs them in for the nommu default. Other variants can
override the defaults in the dma mapping ops directly.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-26 09:50:51 +09:00
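
A hedged sketch of the override mechanism; the field names follow the dma_map_ops structure of this era, and the generic helper names are assumptions based on the description above:

  struct dma_map_ops nommu_dma_ops = {
          .alloc_coherent = dma_generic_alloc_coherent,
          .free_coherent  = dma_generic_free_coherent,
          /* .map_page, .sync_* and friends as before; platform
           * variants can install their own hooks instead */
  };
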
Paul Mundt 73c926bee0 sh: Convert to asm-generic/dma-mapping-common.h
This converts the old DMA mapping support to the new generic
dma-mapping-common.h abstraction.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-20 12:55:56 +09:00
Paul Mundt 896f0c0e8e sh: Support SCHED_MC for SH-X3 multi-cores.
This enables SCHED_MC support for SH-X3 multi-cores. Presently this is
just a simple wrapper around the possible map, but this allows for
tying in support for some of the more exotic NUMA clusters where we can
actually do something with the topology.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 18:00:02 +09:00
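
A hedged sketch of the topology hook, presently just returning the possible map as the message says:

  const struct cpumask *cpu_coregroup_mask(unsigned int cpu)
  {
          /* simple wrapper for now; a real topology can be wired in
           * later for the more exotic NUMA clusters */
          return cpu_possible_mask;
  }
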
Paul Mundt abeaf33a41 Merge branch 'sh/stable-updates'
Conflicts:
	arch/sh/mm/cache-sh4.c
2009-10-16 15:14:50 +09:00
Magnus Damm 5fb80ae8bd sh: disabled cache handling fix.
Add code to handle the cache disabled case. This fixes breakage introduced
by 37443ef3f0 ("sh: Migrate SH-4 cacheflush ops to function pointers.").
Without this patch, configuring caches off with CONFIG_CACHE_OFF=y makes
kfr2r09 and migo-r lock up in fbdev deferred io or early user space.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 14:38:48 +09:00
Valentin Sitdikov a7a7c0e1d1 sh: Fix up single page flushing to use PAGE_SIZE.
Presently the SH-4 cache flushing code uses flush_cache_4096() for most
of the real flushing work, which breaks down to a fixed 4096-byte unroll
and increment. Not only is this sub-optimal for larger page sizes, it has
also uncovered a bug in sh4_flush_dcache_page() when large page sizes are
used and we have no cache aliases -- resulting in only part of the page's
D-cache lines being written back.

Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 14:15:38 +09:00
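
A hedged sketch of the corrected loop: walk the full page a cache line at a time instead of a fixed 4096-byte unroll (ocbwb is the SH-4 "operand cache block write-back" instruction):

  static void flush_dcache_one_page(unsigned long start)
  {
          unsigned long addr;

          for (addr = start; addr < start + PAGE_SIZE;
               addr += L1_CACHE_BYTES)
                  __asm__ __volatile__("ocbwb @%0" : : "r" (addr)
                                       : "memory");
  }
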
Paul Mundt 95019b48ad Merge branch 'sh/stable-updates' 2009-10-13 11:27:08 +09:00
Paul Mundt 964f7e5a56 sh: force dcache flush if dcache_dirty bit set.
This too follows the ARM change, given that the issue at hand applies to
all platforms that implement lazy D-cache writeback.

This fixes up the case when a page mapping disappears between the
flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
the update_mmu_cache() call -- such as in the case of swap cache being
freed early. This kills off the mapping test in update_mmu_cache() and
switches to simply testing for PG_dcache_dirty.

Reported-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 11:18:34 +09:00
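
A hedged sketch of the new update_mmu_cache() test; the flush helper name is approximate:

  if (pfn_valid(pfn)) {
          struct page *page = pfn_to_page(pfn);

          /* flush purely on the dirty flag, with no page_mapping()
           * check that can race with the mapping disappearing */
          if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
                  __flush_dcache_page(page);
  }
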
Matt Fleming 20b5014b3e sh: Fold fixed-PMB support into dynamic PMB support
The initialisation process differs for CONFIG_PMB and for
CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
entries that were allocated by the bootloader.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:34 +09:00
Matt Fleming ef269b3276 sh: Fix the offset from P1SEG/P2SEG where we map RAM
We need to map the gap between 0x00000000 and __MEMORY_START in the PMB,
as well as RAM.

With this change my 7785LCR board can switch to 32bit MMU mode at
runtime.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:26 +09:00
Matt Fleming 3105121949 sh: Remap physical memory into P1 and P2 in pmb_init()
Eventually we'll have complete control over what physical memory gets
mapped where and we can probably do other interesting things. For now
though, when the MMU is in 32-bit mode, we map physical memory into the
P1 and P2 virtual address ranges with the same semantics as they have in
29-bit mode.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:03 +09:00
Matt Fleming edd7de803c sh: Get rid of the kmem cache code
Unfortunately, at the point during boot when we want to be setting up
the PMB entries, the kmem subsystem hasn't been initialised yet.

We now match pmb_map slots with pmb_entry_list slots. When we find an
empty slot in pmb_map, we set the bit, thereby acquiring the
corresponding pmb_entry_list entry. There is a benefit in using this
static array of struct pmb_entry's; we don't need to acquire any locks
in order to traverse the list of struct pmb_entry's.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:47 +09:00
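
A hedged sketch of the lock-free slot allocator described above; a set bit in pmb_map owns the matching pmb_entry_list slot:

  static DECLARE_BITMAP(pmb_map, NR_PMB_ENTRIES);
  static struct pmb_entry pmb_entry_list[NR_PMB_ENTRIES];

  static struct pmb_entry *pmb_alloc_entry(void)
  {
          int pos;

  repeat:
          pos = find_first_zero_bit(pmb_map, NR_PMB_ENTRIES);
          if (pos >= NR_PMB_ENTRIES)
                  return NULL;            /* PMB is full */
          if (test_and_set_bit(pos, pmb_map))
                  goto repeat;            /* lost a race; retry */
          return &pmb_entry_list[pos];
  }
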
Matt Fleming 8386aebb9e sh: Make most PMB functions static
There's no need to export the internal PMB functions for allocating,
freeing and modifying PMB entries, etc. This way we can restrict the
interface for PMB.

Also remove the static from pmb_init() so that we have more freedom in
setting up the initial PMB entries and turning on MMU 32bit mode.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:37 +09:00
Matt Fleming b336f124b1 sh: CONFIG_PMB doesn't mean the MMU is in 32bit mode
CONFIG_PMB will eventually allow the MMU to be switched between 29-bit
and 32-bit mode dynamically at runtime.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:23 +09:00
Matt Fleming 1f69b6af91 sh: Prepare for dynamic PMB support
To allow the MMU to be switched between 29-bit and 32-bit mode at runtime,
some constants need to be swapped for functions that return a runtime
value.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:12 +09:00
Matt Fleming 8bd642b17b sh: Obliterate the P1 area macros
Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
idea that all addresses in P1SEG are untranslated, so we can access an
address's physical page as an offset from P1SEG. This doesn't work for
CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
for PMB mappings and so can be translated to any physical address.

Likewise, replace a P1SEGADDR() use with virt_to_phys().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:02 +09:00
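
The replacement pattern, illustratively:

  paddr = PHYSADDR(kaddr);        /* old: P1SEG offset arithmetic */
  paddr = __pa(kaddr);            /* new: correct under PMB translation */
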
Matt Fleming 067784f623 sh: Allocate PMB entry slot earlier
Simplify set_pmb_entry() by removing the possibility of not finding a
free slot in the PMB. Instead we now allocate a slot in pmb_alloc() so
that if there are no free slots we fail at allocation time, rather than
in set_pmb_entry().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:49:57 +09:00
Paul Mundt 5e3679c594 Merge branch 'sh/cachetlb' 2009-10-10 21:36:53 +09:00
Matt Fleming a2767cfb1d sh: Don't allocate smaller sized mappings on every iteration
Currently, we've got the less than ideal situation where if we need to
allocate a 256MB mapping we'll allocate four entries like so,

	 entry 1: 128MB
	 entry 2:  64MB
	 entry 3:  16MB
	 entry 4:  16MB

This is because as we execute the loop in pmb_remap() we will
progressively try mapping the remaining address space with smaller and
smaller sizes. This isn't good because the size we use on one iteration
may be the perfect size to use on the next iteration, for instance when
the initial size is divisible by one of the PMB mapping sizes.

With this patch, we now only need two entries in the PMB to map 256MB of
address space,

	  entry 1: 128MB
	  entry 2: 128MB

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:26:35 +09:00
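
A hedged sketch of the fixed sizing loop in pmb_remap(); map_pmb_entry() is hypothetical, and the sketch assumes the size is a multiple of the smallest PMB entry (16MB):

  static const unsigned long pmb_sizes[] = {
          512 << 20, 128 << 20, 64 << 20, 16 << 20,
  };

  while (size) {
          unsigned int i;

          /* always rescan from the largest size that still fits,
           * rather than continuing downward from the last size used */
          for (i = 0; i < ARRAY_SIZE(pmb_sizes); i++)
                  if (pmb_sizes[i] <= size)
                          break;

          map_pmb_entry(vaddr, phys, pmb_sizes[i]);
          vaddr += pmb_sizes[i];
          phys  += pmb_sizes[i];
          size  -= pmb_sizes[i];
  }
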
Matt Fleming 2bea7ea7d5 sh: Try PMB mapping based on physical address, not mapping size
We should favour PMB mappings when the physical address cannot be
reached with 29 bits.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:25:10 +09:00
Matt Fleming fc2bdefdde sh: Plug PMB alloc memory leak
If we fail to allocate a PMB entry in pmb_remap() we must remember to
clear and free any PMB entries that we may have previously allocated,
e.g. if we were allocating a multiple entry mapping.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:24:09 +09:00
Matt Fleming a6325247f5 sh: Sprinkle __uses_jump_to_uncached
Fix some callers of jump_to_uncached() and back_to_cached() that were
not annotated with __uses_jump_to_uncached.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:23:57 +09:00
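
The annotation pattern being sprinkled; example_cache_op() is hypothetical:

  static void __uses_jump_to_uncached example_cache_op(void)
  {
          jump_to_uncached();
          /* ... work that must run from the uncached alias ... */
          back_to_cached();
  }
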
KAMEZAWA Hiroyuki 3089aa1b0c kcore: use registered physmem information
For /proc/kcore, each arch registers its memory ranges by kclist_add().
Usually,

	- the range of physical memory
	- the range of the vmalloc area
	- text, etc...

are registered, but the "range of physical memory" entry has some
troubles.  It isn't updated at memory hotplug and it tends to include
unnecessary memory holes.  Now, /proc/iomem (kernel/resource.c) includes
the required physical memory range information and is properly updated at
memory hotplug.  So it's better to avoid duplicating that information in
arch code and to rebuild the kclist for physical memory based on
/proc/iomem.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
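
A hedged sketch of rebuilding the RAM kclists from the resource tree rather than arch-private tables; helper naming is approximate:

  static int kclist_add_ram(unsigned long pfn, unsigned long nr_pages,
                            void *arg)
  {
          struct kcore_list *ent = kmalloc(sizeof(*ent), GFP_KERNEL);

          if (!ent)
                  return -ENOMEM;
          kclist_add(ent, __va(pfn << PAGE_SHIFT),
                     nr_pages << PAGE_SHIFT, KCORE_RAM);
          return 0;
  }

  /* walk every registered System RAM range */
  walk_system_ram_range(0, max_pfn, NULL, kclist_add_ram);
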
KAMEZAWA Hiroyuki a0614da88b kcore: register vmalloc area in generic way
For /proc/kcore, vmalloc areas are registered per arch, but all of them
register the same range, [VMALLOC_START...VMALLOC_END).  This patch
unifies them; with this, arches which have no kclist_add() hooks can see
the vmalloc area correctly.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
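
A hedged sketch of the unified registration in fs/proc/kcore.c:

  static struct kcore_list kcore_vmalloc;

  kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
             VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
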
KAMEZAWA Hiroyuki c30bb2a25f kcore: add kclist types
Presently, kclist_add() only takes a start address and size as its
arguments.  To make kclists dynamically reconfigurable, it's necessary to
know which kclists are for System RAM and which are not.

This patch adds kclist types:
  KCORE_RAM
  KCORE_VMALLOC
  KCORE_TEXT
  KCORE_OTHER

This "type" is used in a following patch for detecting KCORE_RAM.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
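
A hedged sketch of the extended list entry; the field layout is approximate for this era:

  struct kcore_list {
          struct kcore_list *next;
          unsigned long addr;
          size_t size;
          int type;       /* KCORE_RAM, KCORE_VMALLOC, KCORE_TEXT, ... */
  };
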
Geert Uytterhoeven cc013a8890 arches: drop superfluous casts in nr_free_pages() callers
Commit 9617729941 ("Drop free_pages()")
modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
int'.  This made the casts to 'unsigned long' in most callers superfluous,
so remove them.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <zankel@tensilica.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-22 07:17:34 -07:00
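
The cleanup pattern, illustratively:

  printk(KERN_INFO "%luk free\n",
         (unsigned long)nr_free_pages() << (PAGE_SHIFT - 10));  /* old */
  printk(KERN_INFO "%luk free\n",
         nr_free_pages() << (PAGE_SHIFT - 10));                 /* new */
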
Ingo Molnar cdd6c482c9 perf: Do the big rename: Performance Counters -> Performance Events
Bye-bye Performance Counters, welcome Performance Events!

In the past few months the perfcounters subsystem has outgrown its
initial role of counting hardware events, and has become (and is
becoming) a much broader generic event enumeration, reporting, logging,
monitoring, and analysis facility.

Naming its core object 'perf_counter' and naming the subsystem
'perfcounters' has become more and more of a misnomer. With pending
code like hw-breakpoints support the 'counter' name is less and
less appropriate.

All in all, we've decided to rename the subsystem to 'performance
events' and to propagate this rename through all fields, variables
and API names (in an ABI-compatible fashion).

The word 'event' is also a bit shorter than 'counter' - which makes
it slightly more convenient to write/handle as well.

Thanks goes to Stephane Eranian who first observed this misnomer and
suggested a rename.

User-space tooling and ABI compatibility is not affected - this patch
should be function-invariant. (Also, defconfigs were not touched to
keep the size down.)

This patch has been generated via the following script:

  FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

  sed -i \
    -e 's/PERF_EVENT_/PERF_RECORD_/g' \
    -e 's/PERF_COUNTER/PERF_EVENT/g' \
    -e 's/perf_counter/perf_event/g' \
    -e 's/nb_counters/nb_events/g' \
    -e 's/swcounter/swevent/g' \
    -e 's/tpcounter_event/tp_event/g' \
    $FILES

  for N in $(find . -name perf_counter.[ch]); do
    M=$(echo $N | sed 's/perf_counter/perf_event/g')
    mv $N $M
  done

  FILES=$(find . -name perf_event.*)

  sed -i \
    -e 's/COUNTER_MASK/REG_MASK/g' \
    -e 's/COUNTER/EVENT/g' \
    -e 's/\<event\>/event_id/g' \
    -e 's/counter/event/g' \
    -e 's/Counter/Event/g' \
    $FILES

... to keep it as correct as possible. This script can also be
used by anyone who has pending perfcounters patches - it converts
a Linux kernel tree over to the new naming. We tried to time this
change to the point in time where the amount of pending patches
is the smallest: the end of the merge window.

Namespace clashes were fixed up in a preparatory patch - and some
stylistic fallout will be fixed up in a subsequent patch.

( NOTE: 'counters' are still the proper terminology when we deal
  with hardware registers - and these sed scripts are a bit
  over-eager in renaming them. I've undone some of that, but
  in case there's something left where 'counter' would be
  better than 'event' we can undo that on an individual basis
  instead of touching an otherwise nicely automated patch. )

Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-21 14:28:04 +02:00
Paul Mundt c8c2df9055 sh: Fix up sh7705 flush_dcache_page() build.
A type mismatch caused the page deref to blow up; fix it up as per the
sh4 change.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-15 09:47:35 +09:00
Paul Mundt f9e2bdfdbb sh: Factor in cpu id for selection of cache colour fixmap.
In the SMP VIPT case the page copy/clear ops still perform colouring,
care needs to be taken that CPUs don't end up stepping on each other,
so we give them a bit of room to work with.

At the same time, we reduce the worst-case colouring given that these
pages are always consumed.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 17:14:19 +09:00
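
A hedged sketch of the colour fixmap selection with the CPU id factored in, so CPUs get disjoint windows of slots:

  enum fixed_addresses idx = FIX_CMAP_BEGIN +
          FIX_N_COLOURS * smp_processor_id() +
          ((vaddr >> PAGE_SHIFT) & (FIX_N_COLOURS - 1));
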
Paul Mundt c4845a4b22 sh: Fix up redundant cache flushing for PAGE_SIZE > 4k.
If PAGE_SIZE is presently over 4k we do a lot of extra flushing given
that we purge the cache 4k at a time. Make it explicitly 4k per
iteration, rather than iterating for PAGE_SIZE before looping over again.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 17:13:07 +09:00
Paul Mundt deaef20e97 sh: Rework sh4_flush_cache_page() for coherent kmap mapping.
This builds on top of the MIPS r4k code that does roughly the same thing.
This permits the use of kmap_coherent() for mapped pages with dirty
dcache lines and falls back on kmap_atomic() otherwise.

This also fixes up a problem with the alias check and defers to
shm_align_mask directly.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 16:06:39 +09:00
Paul Mundt bd6df57481 sh: Kill off segment-based d-cache flushing on SH-4.
This kills off the unrolled segment based flushers on SH-4 and switches
over to a generic unrolled approach derived from the writethrough segment
flusher.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:22:15 +09:00
Paul Mundt 31c9efde78 sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page().
PHYSADDR() runs in to issues in 32-bit mode when we do not have the
legacy P1/P2 areas mapped, as such, we need to use page_to_phys()
directly, which also happens to do the right thing in legacy 29-bit mode.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:10:28 +09:00
Paul Mundt 654d364e26 sh: sh4_flush_cache_mm() optimizations.
The i-cache flush in the case of VM_EXEC was added way back when as a
sanity measure, and in practice we only care about evicting aliases from
the d-cache. As a result, it's possible to drop the i-cache flush
completely here.

After careful profiling it's also come up that all of the work associated
with hunting down aliases and doing ranged flushing ends up generating
more overhead than simply blasting away the entire dcache, particularly
if there are many mm's that need to be iterated over. As a result of
that, just move back to flush_dcache_all() in these cases, which restores
the old behaviour, and vastly simplifies the path.

Additionally, on platforms without aliases at all, this can simply be
nopped out. Presently we have the alias check in the SH-4 specific
version, but this is true for all of the platforms, so move the check up
to a generic location. This cuts down quite a bit on superfluous cacheop
IPIs.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:04:06 +09:00
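
A hedged sketch of the simplified path: blast the whole dcache rather than hunting aliases per-mm:

  static void sh4_flush_cache_mm(void *arg)
  {
          struct mm_struct *mm = arg;

          if (cpu_context(smp_processor_id(), mm) == NO_CONTEXT)
                  return;

          flush_dcache_all();
  }
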
Paul Mundt 682f88ab74 sh: Cleanup whitespace damage in sh4_flush_icache_range().
There was quite a lot of tab->space damage done here from a former patch;
clean it up once and for all.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 13:19:46 +09:00
Paul Mundt 6e4154d4c2 sh: Use more aggressive dcache purging in kmap teardown.
This fixes up a number of outstanding issues observed with old mappings
on the same colour hanging around. This requires some more optimal
handling, but is a safe fallback until all of the corner cases have been
handled.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-08 16:21:00 +09:00