
Merge Linus' tree.

Russell King 2006-01-09 19:18:33 +00:00 committed by Russell King
commit 0a3a98f6dd
590 changed files with 16766 additions and 7749 deletions


@ -3203,7 +3203,7 @@ N: Eugene Surovegin
E: ebs@ebshome.net
W: http://kernel.ebshome.net/
P: 1024D/AE5467F1 FF22 39F1 6728 89F6 6E6C 2365 7602 F33D AE54 67F1
D: Embedded PowerPC 4xx: I2C, PIC and random hacks/fixes
D: Embedded PowerPC 4xx: EMAC, I2C, PIC and random hacks/fixes
S: Sunnyvale, California 94085
S: USA


@ -31,8 +31,6 @@ al espa
Eine deutsche Version dieser Datei finden Sie unter
<http://www.stefan-winter.de/Changes-2.4.0.txt>.
Last updated: October 29th, 2002
Chris Ricker (kaboom@gatech.edu or chris.ricker@genetics.utah.edu).
Current Minimal Requirements
@ -48,7 +46,7 @@ necessary on all systems; obviously, if you don't have any ISDN
hardware, for example, you probably needn't concern yourself with
isdn4k-utils.
o Gnu C 2.95.3 # gcc --version
o Gnu C 3.2 # gcc --version
o Gnu make 3.79.1 # make --version
o binutils 2.12 # ld -v
o util-linux 2.10o # fdformat --version
@ -74,26 +72,7 @@ GCC
---
The gcc version requirements may vary depending on the type of CPU in your
computer. The next paragraph applies to users of x86 CPUs, but not
necessarily to users of other CPUs. Users of other CPUs should obtain
information about their gcc version requirements from another source.
The recommended compiler for the kernel is gcc 2.95.x (x >= 3), and it
should be used when you need absolute stability. You may use gcc 3.0.x
instead if you wish, although it may cause problems. Later versions of gcc
have not received much testing for Linux kernel compilation, and there are
almost certainly bugs (mainly, but not exclusively, in the kernel) that
will need to be fixed in order to use these compilers. In any case, using
pgcc instead of plain gcc is just asking for trouble.
The Red Hat gcc 2.96 compiler subtree can also be used to build this tree.
You should ensure you use gcc-2.96-74 or later. gcc-2.96-54 will not build
the kernel correctly.
In addition, please pay attention to compiler optimization. Anything
greater than -O2 may not be wise. Similarly, if you choose to use gcc-2.95.x
or derivatives, be sure not to use -fstrict-aliasing (which, depending on
your version of gcc 2.95.x, may necessitate using -fno-strict-aliasing).
computer.
Make
----
@ -322,9 +301,9 @@ Getting updated software
Kernel compilation
******************
gcc 2.95.3
----------
o <ftp://ftp.gnu.org/gnu/gcc/gcc-2.95.3.tar.gz>
gcc
---
o <ftp://ftp.gnu.org/gnu/gcc/>
Make
----


@ -344,7 +344,7 @@ Remember: if another thread can find your data structure, and you don't
have a reference count on it, you almost certainly have a bug.
Chapter 11: Macros, Enums, Inline functions and RTL
Chapter 11: Macros, Enums and RTL
Names of macros defining constants and labels in enums are capitalized.
@ -429,7 +429,35 @@ from void pointer to any other pointer type is guaranteed by the C programming
language.
Chapter 14: References
Chapter 14: The inline disease
There appears to be a common misperception that gcc has a magic "make me
faster" speedup option called "inline". While the use of inlines can be
appropriate (for example as a means of replacing macros, see Chapter 11), it
very often is not. Abundant use of the inline keyword leads to a much bigger
kernel, which in turn slows the system as a whole down, due to a bigger
icache footprint for the CPU and simply because there is less memory
available for the pagecache. Just think about it; a pagecache miss causes a
disk seek, which easily takes 5 milliseconds. There are a LOT of cpu cycles
that can go into these 5 milliseconds.
A reasonable rule of thumb is to not put inline on functions that have more
than 3 lines of code in them. An exception to this rule is the case where
a parameter is known to be a compile-time constant, and as a result of this
constantness you *know* the compiler will be able to optimize most of your
function away at compile time. For a good example of this latter case, see
the kmalloc() inline function.
Often people argue that adding inline to functions that are static and used
only once is always a win since there is no space tradeoff. While this is
technically correct, gcc is capable of inlining these automatically without
help, and the maintenance issue of removing the inline when a second user
appears outweighs the potential value of the hint that tells gcc to do
something it would have done anyway.
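As a hedged illustration of the compile-time-constant exception described
above (not taken from the kernel source; pick_pool() is a made-up helper):

#include <linux/types.h>

/* More than 3 lines, but still a reasonable inline: whenever "bytes" is a
 * compile-time constant the whole if-chain collapses to a single constant,
 * so almost all of the function is optimized away at compile time. */
static inline int pick_pool(size_t bytes)
{
	if (bytes <= 32)
		return 0;
	if (bytes <= 128)
		return 1;
	return 2;
}

By contrast, a large static function with a single caller is best left
without the keyword; as noted above, gcc will inline it on its own when that
is profitable.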
Chapter 15: References
The C Programming Language, Second Edition
by Brian W. Kernighan and Dennis M. Ritchie.
@ -444,10 +472,13 @@ ISBN 0-201-61586-X.
URL: http://cm.bell-labs.com/cm/cs/tpop/
GNU manuals - where in compliance with K&R and this text - for cpp, gcc,
gcc internals and indent, all available from http://www.gnu.org
gcc internals and indent, all available from http://www.gnu.org/manual/
WG14 is the international standardization working group for the programming
language C, URL: http://std.dkuug.dk/JTC1/SC22/WG14/
language C, URL: http://www.open-std.org/JTC1/SC22/WG14/
Kernel CodingStyle, by greg@kroah.com at OLS 2002:
http://www.kroah.com/linux/talks/ols_2002_kernel_codingstyle_talk/html/
--
Last updated on 16 February 2004 by a community effort on LKML.
Last updated on 30 December 2005 by a community effort on LKML.


@ -1,74 +1,67 @@
Refcounter framework for elements of lists/arrays protected by
RCU.
Refcounter design for elements of lists/arrays protected by RCU.
Refcounting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores are straight forward as in:
1. 2.
add() search_and_reference()
{ {
alloc_object read_lock(&list_lock);
... search_for_element
atomic_set(&el->rc, 1); atomic_inc(&el->rc);
write_lock(&list_lock); ...
add_element read_unlock(&list_lock);
... ...
write_unlock(&list_lock); }
1. 2.
add() search_and_reference()
{ {
alloc_object read_lock(&list_lock);
... search_for_element
atomic_set(&el->rc, 1); atomic_inc(&el->rc);
write_lock(&list_lock); ...
add_element read_unlock(&list_lock);
... ...
write_unlock(&list_lock); }
}
3. 4.
release_referenced() delete()
{ {
... write_lock(&list_lock);
atomic_dec(&el->rc, relfunc) ...
... delete_element
} write_unlock(&list_lock);
...
if (atomic_dec_and_test(&el->rc))
kfree(el);
...
... write_lock(&list_lock);
atomic_dec(&el->rc, relfunc) ...
... delete_element
} write_unlock(&list_lock);
...
if (atomic_dec_and_test(&el->rc))
kfree(el);
...
}
If this list/array is made lock free using rcu as in changing the
write_lock in add() and delete() to spin_lock and changing read_lock
in search_and_reference to rcu_read_lock(), the rcuref_get in
in search_and_reference to rcu_read_lock(), the atomic_get in
search_and_reference could potentially hold reference to an element which
has already been deleted from the list/array. rcuref_lf_get_rcu takes
has already been deleted from the list/array. atomic_inc_not_zero takes
care of this scenario. search_and_reference should look as follows:
1. 2.
add() search_and_reference()
{ {
alloc_object rcu_read_lock();
... search_for_element
atomic_set(&el->rc, 1); if (rcuref_inc_lf(&el->rc)) {
write_lock(&list_lock); rcu_read_unlock();
return FAIL;
add_element }
... ...
write_unlock(&list_lock); rcu_read_unlock();
alloc_object rcu_read_lock();
... search_for_element
atomic_set(&el->rc, 1); if (atomic_inc_not_zero(&el->rc)) {
write_lock(&list_lock); rcu_read_unlock();
return FAIL;
add_element }
... ...
write_unlock(&list_lock); rcu_read_unlock();
} }
3. 4.
release_referenced() delete()
{ {
... write_lock(&list_lock);
rcuref_dec(&el->rc, relfunc) ...
... delete_element
} write_unlock(&list_lock);
...
if (rcuref_dec_and_test(&el->rc))
call_rcu(&el->head, el_free);
...
... write_lock(&list_lock);
atomic_dec(&el->rc, relfunc) ...
... delete_element
} write_unlock(&list_lock);
...
if (atomic_dec_and_test(&el->rc))
call_rcu(&el->head, el_free);
...
}
Sometimes, a reference to the element needs to be obtained in the
update (write) stream. In such cases, rcuref_inc_lf might be an overkill
since the spinlock serialising list updates are held. rcuref_inc
update (write) stream. In such cases, atomic_inc_not_zero might be an
overkill since the spinlock serialising list updates are held. atomic_inc
is to be used in such cases.
For arches which do not have cmpxchg rcuref_inc_lf
api uses a hashed spinlock implementation and the same hashed spinlock
is acquired in all rcuref_xxx primitives to preserve atomicity.
Note: Use rcuref_inc api only if you need to use rcuref_inc_lf on the
refcounter atleast at one place. Mixing rcuref_inc and atomic_xxx api
might lead to races. rcuref_inc_lf() must be used in lockfree
RCU critical sections only.
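For illustration, here is a minimal, compilable-style C sketch of the
lock-free lookup described above, using the atomic_inc_not_zero() form this
text now documents; struct el, its key field and the header choices are
assumptions, not part of the original document:

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <asm/atomic.h>

struct el {
	struct list_head list;
	long key;
	atomic_t rc;		/* reference count, set to 1 in add() */
	struct rcu_head head;
};

/* Returns the matching element with a reference held, or NULL if it is
 * absent or already dying.  atomic_inc_not_zero() returns 0 (and takes no
 * reference) once the count has dropped to zero, i.e. delete() won the race. */
static struct el *search_and_reference(struct list_head *elements, long key)
{
	struct el *p;

	rcu_read_lock();
	list_for_each_entry_rcu(p, elements, list) {
		if (p->key == key && atomic_inc_not_zero(&p->rc)) {
			rcu_read_unlock();
			return p;
		}
	}
	rcu_read_unlock();
	return NULL;
}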


@ -27,18 +27,17 @@ Who To Submit Drivers To
------------------------
Linux 2.0:
No new drivers are accepted for this kernel tree
No new drivers are accepted for this kernel tree.
Linux 2.2:
No new drivers are accepted for this kernel tree.
Linux 2.4:
If the code area has a general maintainer then please submit it to
the maintainer listed in MAINTAINERS in the kernel file. If the
maintainer does not respond or you cannot find the appropriate
maintainer then please contact the 2.2 kernel maintainer:
Marc-Christian Petersen <m.c.p@wolk-project.de>.
Linux 2.4:
The same rules apply as 2.2. The final contact point for Linux 2.4
submissions is Marcelo Tosatti <marcelo.tosatti@cyclades.com>.
maintainer then please contact Marcelo Tosatti
<marcelo.tosatti@cyclades.com>.
Linux 2.6:
The same rules apply as 2.4 except that you should follow linux-kernel
@ -53,6 +52,7 @@ Licensing: The code must be released to us under the
of exclusive GPL licensing, and if you wish the driver
to be useful to other communities such as BSD you may well
wish to release under multiple licenses.
See accepted licenses at include/linux/module.h
Copyright: The copyright owner must agree to use of GPL.
It's best if the submitter and copyright owner
@ -143,5 +143,13 @@ KernelNewbies:
http://kernelnewbies.org/
Linux USB project:
http://sourceforge.net/projects/linux-usb/
http://linux-usb.sourceforge.net/
How to NOT write kernel driver by arjanv@redhat.com
http://people.redhat.com/arjanv/olspaper.pdf
Kernel Janitor:
http://janitor.kernelnewbies.org/
--
Last updated on 17 Nov 2005.


@ -78,7 +78,9 @@ Randy Dunlap's patch scripts:
http://www.xenotime.net/linux/scripts/patching-scripts-002.tar.gz
Andrew Morton's patch scripts:
http://www.zip.com.au/~akpm/linux/patches/patch-scripts-0.20
http://www.zip.com.au/~akpm/linux/patches/
Instead of these scripts, quilt is the recommended patch management
tool (see above).
@ -97,7 +99,7 @@ need to split up your patch. See #3, next.
3) Separate your changes.
Separate each logical change into its own patch.
Separate _logical changes_ into a single patch file.
For example, if your changes include both bug fixes and performance
enhancements for a single driver, separate those changes into two
@ -112,6 +114,10 @@ If one patch depends on another patch in order for a change to be
complete, that is OK. Simply note "this patch depends on patch X"
in your patch description.
If you cannot condense your patch set into a smaller set of patches,
then only post say 15 or so at a time and wait for review and integration.
4) Select e-mail destination.
@ -124,6 +130,10 @@ your patch to the primary Linux kernel developer's mailing list,
linux-kernel@vger.kernel.org. Most kernel developers monitor this
e-mail list, and can comment on your changes.
Do not send more than 15 patches at once to the vger mailing lists!!!
Linus Torvalds is the final arbiter of all changes accepted into the
Linux kernel. His e-mail address is <torvalds@osdl.org>. He gets
a lot of e-mail, so typically you should do your best to -avoid- sending
@ -149,6 +159,9 @@ USB, framebuffer devices, the VFS, the SCSI subsystem, etc. See the
MAINTAINERS file for a mailing list that relates specifically to
your change.
Majordomo lists of VGER.KERNEL.ORG at:
<http://vger.kernel.org/vger-lists.html>
If changes affect userland-kernel interfaces, please send
the MAN-PAGES maintainer (as listed in the MAINTAINERS file)
a man-pages patch, or at least a notification of the change,
@ -373,27 +386,14 @@ a diffstat, to show what files have changed, and the number of inserted
and deleted lines per file. A diffstat is especially useful on bigger
patches. Other comments relevant only to the moment or the maintainer,
not suitable for the permanent changelog, should also go here.
Use diffstat options "-p 1 -w 70" so that filenames are listed from the
top of the kernel source tree and don't use too much horizontal space
(easily fit in 80 columns, maybe with some indentation).
See more details on the proper patch format in the following
references.
13) More references for submitting patches
Andrew Morton, "The perfect patch" (tpp).
<http://www.zip.com.au/~akpm/linux/patches/stuff/tpp.txt>
Jeff Garzik, "Linux kernel patch submission format."
<http://linux.yyz.us/patch-format.html>
Greg KH, "How to piss off a kernel subsystem maintainer"
<http://www.kroah.com/log/2005/03/31/>
Kernel Documentation/CodingStyle
<http://sosdg.org/~coywolf/lxr/source/Documentation/CodingStyle>
Linus Torvald's mail on the canonical patch format:
<http://lkml.org/lkml/2005/4/7/183>
-----------------------------------
@ -466,3 +466,30 @@ and 'extern __inline__'.
Don't try to anticipate nebulous future cases which may or may not
be useful: "Make it as simple as you can, and no simpler."
----------------------
SECTION 3 - REFERENCES
----------------------
Andrew Morton, "The perfect patch" (tpp).
<http://www.zip.com.au/~akpm/linux/patches/stuff/tpp.txt>
Jeff Garzik, "Linux kernel patch submission format."
<http://linux.yyz.us/patch-format.html>
Greg Kroah, "How to piss off a kernel subsystem maintainer".
<http://www.kroah.com/log/2005/03/31/>
<http://www.kroah.com/log/2005/07/08/>
<http://www.kroah.com/log/2005/10/19/>
NO!!!! No more huge patch bombs to linux-kernel@vger.kernel.org people!.
<http://marc.theaimsgroup.com/?l=linux-kernel&m=112112749912944&w=2>
Kernel Documentation/CodingStyle
<http://sosdg.org/~coywolf/lxr/source/Documentation/CodingStyle>
Linus Torvald's mail on the canonical patch format:
<http://lkml.org/lkml/2005/4/7/183>
--
Last updated on 17 Nov 2005.


@ -2,7 +2,8 @@
Applying Patches To The Linux Kernel
------------------------------------
(Written by Jesper Juhl, August 2005)
Original by: Jesper Juhl, August 2005
Last update: 2005-12-02
@ -118,7 +119,7 @@ wrong.
When patch encounters a change that it can't fix up with fuzz it rejects it
outright and leaves a file with a .rej extension (a reject file). You can
read this file to see exactely what change couldn't be applied, so you can
read this file to see exactly what change couldn't be applied, so you can
go fix it up by hand if you wish.
If you don't have any third party patches applied to your kernel source, but
@ -127,7 +128,7 @@ and have made no modifications yourself to the source files, then you should
never see a fuzz or reject message from patch. If you do see such messages
anyway, then there's a high risk that either your local source tree or the
patch file is corrupted in some way. In that case you should probably try
redownloading the patch and if things are still not OK then you'd be advised
re-downloading the patch and if things are still not OK then you'd be advised
to start with a fresh tree downloaded in full from kernel.org.
Let's look a bit more at some of the messages patch can produce.
@ -180,9 +181,11 @@ wish to apply.
Are there any alternatives to `patch'?
---
Yes there are alternatives. You can use the `interdiff' program
(http://cyberelk.net/tim/patchutils/) to generate a patch representing the
differences between two patches and then apply the result.
Yes there are alternatives.
You can use the `interdiff' program (http://cyberelk.net/tim/patchutils/) to
generate a patch representing the differences between two patches and then
apply the result.
This will let you move from something like 2.6.12.2 to 2.6.12.3 in a single
step. The -z flag to interdiff will even let you feed it patches in gzip or
bzip2 compressed form directly without the use of zcat or bzcat or manual
@ -197,7 +200,7 @@ do the additional steps since interdiff can get things wrong in some cases.
Another alternative is `ketchup', which is a python script for automatic
downloading and applying of patches (http://www.selenic.com/ketchup/).
Other nice tools are diffstat which shows a summary of changes made by a
Other nice tools are diffstat which shows a summary of changes made by a
patch, lsdiff which displays a short listing of affected files in a patch
file, along with (optionally) the line numbers of the start of each patch
and grepdiff which displays a list of the files modified by a patch where
@ -258,7 +261,7 @@ $ patch -p1 -R < ../patch-2.6.11.1 # revert the 2.6.11.1 patch
# source dir is now 2.6.11
$ patch -p1 < ../patch-2.6.12 # apply new 2.6.12 patch
$ cd ..
$ mv linux-2.6.11.1 inux-2.6.12 # rename source dir
$ mv linux-2.6.11.1 linux-2.6.12 # rename source dir
The 2.6.x.y kernels
@ -433,7 +436,11 @@ $ cd ..
$ mv linux-2.6.12-mm1 linux-2.6.13-rc3-mm3 # rename the source dir
This concludes this list of explanations of the various kernel trees and I
hope you are now crystal clear on how to apply the various patches and help
testing the kernel.
This concludes this list of explanations of the various kernel trees.
I hope you are now clear on how to apply the various patches and help testing
the kernel.
Thank you's to Randy Dunlap, Rolf Eike Beer, Linus Torvalds, Bodo Eggert,
Johannes Stezenbach, Grant Coady, Pavel Machek and others that I may have
forgotten for their reviews and contributions to this document.


@ -0,0 +1,82 @@
Block layer statistics in /sys/block/<dev>/stat
===============================================
This file documents the contents of the /sys/block/<dev>/stat file.
The stat file provides several statistics about the state of block
device <dev>.
Q. Why are there multiple statistics in a single file? Doesn't sysfs
normally contain a single value per file?
A. By having a single file, the kernel can guarantee that the statistics
represent a consistent snapshot of the state of the device. If the
statistics were exported as multiple files containing one statistic
each, it would be impossible to guarantee that a set of readings
represent a single point in time.
The stat file consists of a single line of text containing 11 decimal
values separated by whitespace. The fields are summarized in the
following table, and described in more detail below.
Name units description
---- ----- -----------
read I/Os requests number of read I/Os processed
read merges requests number of read I/Os merged with in-queue I/O
read sectors sectors number of sectors read
read ticks milliseconds total wait time for read requests
write I/Os requests number of write I/Os processed
write merges requests number of write I/Os merged with in-queue I/O
write sectors sectors number of sectors written
write ticks milliseconds total wait time for write requests
in_flight requests number of I/Os currently in flight
io_ticks milliseconds total time this block device has been active
time_in_queue milliseconds total wait time for all requests
read I/Os, write I/Os
=====================
These values increment when an I/O request completes.
read merges, write merges
=========================
These values increment when an I/O request is merged with an
already-queued I/O request.
read sectors, write sectors
===========================
These values count the number of sectors read from or written to this
block device. The "sectors" in question are the standard UNIX 512-byte
sectors, not any device- or filesystem-specific block size. The
counters are incremented when the I/O completes.
read ticks, write ticks
=======================
These values count the number of milliseconds that I/O requests have
waited on this block device. If there are multiple I/O requests waiting,
these values will increase at a rate greater than 1000/second; for
example, if 60 read requests wait for an average of 30 ms, the read_ticks
field will increase by 60*30 = 1800.
in_flight
=========
This value counts the number of I/O requests that have been issued to
the device driver but have not yet completed. It does not include I/O
requests that are in the queue but not yet issued to the device driver.
io_ticks
========
This value counts the number of milliseconds during which the device has
had I/O requests queued.
time_in_queue
=============
This value counts the number of milliseconds that I/O requests have waited
on this block device. If there are multiple I/O requests waiting, this
value will increase as the product of the number of milliseconds times the
number of requests waiting (see "read ticks" above for an example).
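As an illustration only (not part of the documented interface), a small
userspace C program that reads and labels the 11 fields; /sys/block/sda/stat
is just an example device path:

#include <stdio.h>

int main(void)
{
	/* Field names in the same order as the table above. */
	static const char *name[11] = {
		"read I/Os", "read merges", "read sectors", "read ticks",
		"write I/Os", "write merges", "write sectors", "write ticks",
		"in_flight", "io_ticks", "time_in_queue"
	};
	unsigned long long v[11];
	FILE *f = fopen("/sys/block/sda/stat", "r");
	int i;

	if (!f)
		return 1;
	for (i = 0; i < 11; i++)
		if (fscanf(f, "%llu", &v[i]) != 1)
			return 1;
	fclose(f);
	for (i = 0; i < 11; i++)
		printf("%-15s %llu\n", name[i], v[i]);
	return 0;
}

Because all 11 values come from a single read of the file, they form the
consistent snapshot described above.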


@ -0,0 +1,357 @@
CPU hotplug Support in Linux(tm) Kernel
Maintainers:
CPU Hotplug Core:
Rusty Russell <rusty@rustycorp.com.au>
Srivatsa Vaddagiri <vatsa@in.ibm.com>
i386:
Zwane Mwaikambo <zwane@arm.linux.org.uk>
ppc64:
Nathan Lynch <nathanl@austin.ibm.com>
Joel Schopp <jschopp@austin.ibm.com>
ia64/x86_64:
Ashok Raj <ashok.raj@intel.com>
Authors: Ashok Raj <ashok.raj@intel.com>
Lots of feedback: Nathan Lynch <nathanl@austin.ibm.com>,
Joel Schopp <jschopp@austin.ibm.com>
Introduction
Modern advances in system architectures have introduced advanced error
reporting and correction capabilities in processors. CPU architectures permit
partitioning support, where compute resources of a single CPU could be made
available to virtual machine environments. There are a couple of OEMs that
support NUMA hardware which is hot pluggable as well, where physical
node insertion and removal require support for CPU hotplug.
Such advances require CPUs available to a kernel to be removed either for
provisioning reasons, or for RAS purposes to keep an offending CPU off
system execution path. Hence the need for CPU hotplug support in the
Linux kernel.
A more novel use of CPU-hotplug support is its use today in suspend
resume support for SMP. Dual-core and HT support makes even
a laptop run SMP kernels which didn't support these methods. SMP support
for suspend/resume is a work in progress.
General Stuff about CPU Hotplug
--------------------------------
Command Line Switches
---------------------
maxcpus=n Restrict boot time cpus to n. Say if you have 4 cpus, using
maxcpus=2 will only boot 2. You can choose to bring the
other cpus later online, read FAQ's for more info.
additional_cpus=n [x86_64 only] use this to limit hotpluggable cpus.
This option sets
cpu_possible_map = cpu_present_map + additional_cpus
CPU maps and such
-----------------
[More on cpumaps and primitive to manipulate, please check
include/linux/cpumask.h that has more descriptive text.]
cpu_possible_map: Bitmap of possible CPUs that can ever be available in the
system. This is used to allocate some boot time memory for per_cpu variables
that aren't designed to grow/shrink as CPUs are made available or removed.
Once set during boot time discovery phase, the map is static, i.e no bits
are added or removed anytime. Trimming it accurately for your system needs
upfront can save some boot time memory. See below for how we use heuristics
in x86_64 case to keep this under check.
cpu_online_map: Bitmap of all CPUs currently online. It's set in __cpu_up()
after a cpu is available for kernel scheduling and ready to receive
interrupts from devices. It's cleared when a cpu is brought down using
__cpu_disable(), before which all OS services including interrupts are
migrated to another target CPU.
cpu_present_map: Bitmap of CPUs currently present in the system. Not all
of them may be online. When physical hotplug is processed by the relevant
subsystem (e.g. ACPI), the map can change, and a bit is either added to or
removed from it depending on whether the event is a hot-add or a hot-remove.
There are currently no locking rules. Typical usage is to init topology during boot,
at which time hotplug is disabled.
You really don't need to manipulate any of the system cpu maps. They should
be read-only for most use. When setting up per-cpu resources almost always use
cpu_possible_map/for_each_cpu() to iterate.
Never use anything other than cpumask_t to represent bitmap of CPUs.
#include <linux/cpumask.h>
for_each_cpu - Iterate over cpu_possible_map
for_each_online_cpu - Iterate over cpu_online_map
for_each_present_cpu - Iterate over cpu_present_map
for_each_cpu_mask(x,mask) - Iterate over some random collection of cpu mask.
#include <linux/cpu.h>
lock_cpu_hotplug() and unlock_cpu_hotplug():
The above calls are used to inhibit cpu hotplug operations. While holding the
cpucontrol mutex, cpu_online_map will not change. If you merely need to avoid
cpus going away, you could also use preempt_disable() and preempt_enable()
for those sections. Just remember the critical section cannot call any
function that can sleep or schedule this process away. The preempt_disable()
will work as long as stop_machine_run() is used to take a cpu down.
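A brief kernel-side sketch combining the iterators above with the hotplug
lock; my_per_cpu_setup() is a hypothetical per-cpu initialisation routine,
not an existing kernel function:

#include <linux/cpu.h>
#include <linux/cpumask.h>

static void my_per_cpu_setup(int cpu)
{
	/* hypothetical per-cpu initialisation work */
}

static void my_setup_all_online(void)
{
	int cpu;

	lock_cpu_hotplug();		/* cpu_online_map cannot change here */
	for_each_online_cpu(cpu)
		my_per_cpu_setup(cpu);
	unlock_cpu_hotplug();
}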
CPU Hotplug - Frequently Asked Questions.
Q: How do I enable my kernel to support CPU hotplug?
A: When doing make defconfig, Enable CPU hotplug support
"Processor type and Features" -> Support for Hotpluggable CPUs
Make sure that you have CONFIG_HOTPLUG, and CONFIG_SMP turned on as well.
You would need to enable CONFIG_HOTPLUG_CPU for SMP suspend/resume support
as well.
Q: What architectures support CPU hotplug?
A: As of 2.6.14, the following architectures support CPU hotplug.
i386 (Intel), ppc, ppc64, parisc, s390, ia64 and x86_64
Q: How to test if hotplug is supported on the newly built kernel?
A: You should now notice an entry in sysfs.
Check if sysfs is mounted, using the "mount" command. You should notice
an entry as shown below in the output.
....
none on /sys type sysfs (rw)
....
if this is not mounted, do the following.
#mkdir /sys
#mount -t sysfs sys /sys
now you should see entries for all present cpus; the following is an example
on an 8-way system.
#pwd
#/sys/devices/system/cpu
#ls -l
total 0
drwxr-xr-x 10 root root 0 Sep 19 07:44 .
drwxr-xr-x 13 root root 0 Sep 19 07:45 ..
drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu0
drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu1
drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu2
drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu3
drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu4
drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu5
drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu6
drwxr-xr-x 3 root root 0 Sep 19 07:48 cpu7
Under each directory you would find an "online" file which is the control
file to logically online/offline a processor.
Q: Does hot-add/hot-remove refer to physical add/remove of cpus?
A: The terms hot-add/remove are not used very consistently in the code.
CONFIG_CPU_HOTPLUG enables logical online/offline capability in the kernel.
To support physical addition/removal, one would need some BIOS hooks and
the platform should have something like an attention button in PCI hotplug.
CONFIG_ACPI_HOTPLUG_CPU enables ACPI support for physical add/remove of CPUs.
Q: How do I logically offline a CPU?
A: Do the following.
#echo 0 > /sys/devices/system/cpu/cpuX/online
once the logical offline is successful, check
#cat /proc/interrupts
you should now not see the CPU that you removed. Also, the online file will
report the state as 0 when a cpu is offline and 1 when it's online.
#To display the current cpu state.
#cat /sys/devices/system/cpu/cpuX/online
Q: Why can't I remove CPU0 on some systems?
A: Some architectures may have some special dependency on a certain CPU.
For example, IA64 platforms have the ability to send platform interrupts to the
OS, a.k.a. Corrected Platform Error Interrupts (CPEI). Current ACPI
specifications don't provide a way to change the target CPU. Hence if the
current ACPI version doesn't support such re-direction, we disable that CPU
by making it not-removable.
In such cases you will also notice that the online file is missing under cpu0.
Q: How do I find out if a particular CPU is not removable?
A: Depending on the implementation, some architectures may show this by the
absence of the "online" file. This is done if it can be determined ahead of
time that this CPU cannot be removed.
In some situations, this can be a run time check, i.e if you try to remove the
last CPU, this will not be permitted. You can find such failures by
investigating the return value of the "echo" command.
Q: What happens when a CPU is being logically offlined?
A: The following happen, listed in no particular order :-)
- A notification is sent to in-kernel registered modules by sending an event
CPU_DOWN_PREPARE
- All processes are migrated away from this outgoing CPU to a new CPU
- All interrupts targeted to this CPU are migrated to a new CPU
- timers/bottom halves/tasklets are also migrated to a new CPU
- Once all services are migrated, kernel calls an arch specific routine
__cpu_disable() to perform arch specific cleanup.
- Once this is successful, an event for successful cleanup is sent by an event
CPU_DEAD.
"It is expected that each service cleans up when the CPU_DOWN_PREPARE
notifier is called; when CPU_DEAD is called it's expected there is nothing
running on behalf of this CPU that was offlined"
Q: If I have some kernel code that needs to be aware of CPU arrival and
departure, how do I arrange for proper notification?
A: This is what you would need in your kernel code to receive notifications.
#include <linux/cpu.h>
static int __cpuinit foobar_cpu_callback(struct notifier_block *nfb,
unsigned long action, void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;
switch (action) {
case CPU_ONLINE:
foobar_online_action(cpu);
break;
case CPU_DEAD:
foobar_dead_action(cpu);
break;
}
return NOTIFY_OK;
}
static struct notifier_block foobar_cpu_notifier =
{
.notifier_call = foobar_cpu_callback,
};
In your init function,
register_cpu_notifier(&foobar_cpu_notifier);
You can fail PREPARE notifiers if something doesn't work to prepare resources.
This will stop the activity and send a following CANCELED event back.
CPU_DEAD should not be failed, it's just a goodness indication, but bad
things will happen if a notifier in the path sends a BAD notify code.
Q: I don't see my action being called for all CPUs already up and running?
A: Yes, CPU notifiers are called only when new CPUs are on-lined or offlined.
If you need to perform some action for each cpu already in the system, then
for_each_online_cpu(i) {
foobar_cpu_callback(&foobar_cpu_notifier, CPU_UP_PREPARE, i);
foobar_cpu_callback(&foobar_cpu_notifier, CPU_ONLINE, i);
}
Q: If I would like to develop cpu hotplug support for a new architecture,
what do I need at a minimum?
A: The following are what is required for CPU hotplug infrastructure to work
correctly.
- Make sure you have an entry in Kconfig to enable CONFIG_HOTPLUG_CPU
- __cpu_up() - Arch interface to bring up a CPU
- __cpu_disable() - Arch interface to shutdown a CPU, no more interrupts
can be handled by the kernel after the routine
returns. This includes shutting down local
APIC timers etc.
- __cpu_die() - This is actually supposed to ensure death of the CPU.
Look at some example code in other arches
that implement CPU hotplug. The processor is taken
down from the idle() loop for that specific
architecture. __cpu_die() typically waits for some
per_cpu state to be set, to positively ensure the
processor's dead routine has been called.
Q: I need to ensure that a particular cpu is not removed when some
work specific to this cpu is in progress.
A: First switch the current thread context to the preferred cpu:
int my_func_on_cpu(int cpu)
{
cpumask_t saved_mask, new_mask = CPU_MASK_NONE;
int curr_cpu, err = 0;
saved_mask = current->cpus_allowed;
cpu_set(cpu, new_mask);
err = set_cpus_allowed(current, new_mask);
if (err)
return err;
/*
* If we got scheduled out just after the return from
* set_cpus_allowed() before running the work, this ensures
* we stay locked.
*/
curr_cpu = get_cpu();
if (curr_cpu != cpu) {
err = -EAGAIN;
goto ret;
} else {
/*
* Do work : But can't sleep, since get_cpu() disables preempt
*/
}
ret:
put_cpu();
set_cpus_allowed(current, saved_mask);
return err;
}
Q: How do we determine how many CPUs are available for hotplug?
A: There is no clear spec-defined way from ACPI that can give us that
information today. Based on some input from Natalie of Unisys,
the ACPI MADT (Multiple APIC Description Table) marks those possible
CPUs in a system with disabled status.
Andi implemented some simple heuristics that count the number of disabled
CPUs in MADT as hotpluggable CPUS. In the case there are no disabled CPUS
we assume 1/2 the number of CPUs currently present can be hotplugged.
Caveat: Today's ACPI MADT can only provide 256 entries since the apicid field
in MADT is only 8 bits.
User Space Notification
Hotplug support for devices is common in Linux today. It's being used to
support automatic configuration of network, usb and pci devices. A hotplug
event can be used to invoke an agent script to perform the configuration task.
You can add /etc/hotplug/cpu.agent to handle hotplug notification user space
scripts.
#!/bin/bash
# $Id: cpu.agent
# Kernel hotplug params include:
#ACTION=%s [online or offline]
#DEVPATH=%s
#
cd /etc/hotplug
. ./hotplug.functions
case $ACTION in
online)
echo `date` ":cpu.agent" add cpu >> /tmp/hotplug.txt
;;
offline)
echo `date` ":cpu.agent" remove cpu >>/tmp/hotplug.txt
;;
*)
debug_mesg CPU $ACTION event not supported
exit 1
;;
esac


@ -14,7 +14,10 @@ CONTENTS:
1.1 What are cpusets ?
1.2 Why are cpusets needed ?
1.3 How are cpusets implemented ?
1.4 How do I use cpusets ?
1.4 What are exclusive cpusets ?
1.5 What does notify_on_release do ?
1.6 What is memory_pressure ?
1.7 How do I use cpusets ?
2. Usage Examples and Syntax
2.1 Basic Usage
2.2 Adding/removing cpus
@ -49,29 +52,6 @@ its cpus_allowed vector, and the kernel page allocator will not
allocate a page on a node that is not allowed in the requesting tasks
mems_allowed vector.
If a cpuset is cpu or mem exclusive, no other cpuset, other than a direct
ancestor or descendent, may share any of the same CPUs or Memory Nodes.
A cpuset that is cpu exclusive has a sched domain associated with it.
The sched domain consists of all cpus in the current cpuset that are not
part of any exclusive child cpusets.
This ensures that the scheduler load balacing code only balances
against the cpus that are in the sched domain as defined above and not
all of the cpus in the system. This removes any overhead due to
load balancing code trying to pull tasks outside of the cpu exclusive
cpuset only to be prevented by the tasks' cpus_allowed mask.
A cpuset that is mem_exclusive restricts kernel allocations for
page, buffer and other data commonly shared by the kernel across
multiple users. All cpusets, whether mem_exclusive or not, restrict
allocations of memory for user space. This enables configuring a
system so that several independent jobs can share common kernel
data, such as file system pages, while isolating each jobs user
allocation in its own cpuset. To do this, construct a large
mem_exclusive cpuset to hold all the jobs, and construct child,
non-mem_exclusive cpusets for each individual job. Only a small
amount of typical kernel memory, such as requests from interrupt
handlers, is allowed to be taken outside even a mem_exclusive cpuset.
User level code may create and destroy cpusets by name in the cpuset
virtual file system, manage the attributes and permissions of these
cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
@ -192,9 +172,15 @@ containing the following files describing that cpuset:
- cpus: list of CPUs in that cpuset
- mems: list of Memory Nodes in that cpuset
- memory_migrate flag: if set, move pages to cpusets nodes
- cpu_exclusive flag: is cpu placement exclusive?
- mem_exclusive flag: is memory placement exclusive?
- tasks: list of tasks (by pid) attached to that cpuset
- notify_on_release flag: run /sbin/cpuset_release_agent on exit?
- memory_pressure: measure of how much paging pressure in cpuset
In addition, the root cpuset only has the following file:
- memory_pressure_enabled flag: compute memory_pressure?
New cpusets are created using the mkdir system call or shell
command. The properties of a cpuset, such as its flags, allowed
@ -228,7 +214,108 @@ exclusive cpuset. Also, the use of a Linux virtual file system (vfs)
to represent the cpuset hierarchy provides for a familiar permission
and name space for cpusets, with a minimum of additional kernel code.
1.4 How do I use cpusets ?
1.4 What are exclusive cpusets ?
--------------------------------
If a cpuset is cpu or mem exclusive, no other cpuset, other than
a direct ancestor or descendent, may share any of the same CPUs or
Memory Nodes.
A cpuset that is cpu_exclusive has a scheduler (sched) domain
associated with it. The sched domain consists of all CPUs in the
current cpuset that are not part of any exclusive child cpusets.
This ensures that the scheduler load balancing code only balances
against the CPUs that are in the sched domain as defined above and
not all of the CPUs in the system. This removes any overhead due to
load balancing code trying to pull tasks outside of the cpu_exclusive
cpuset only to be prevented by the tasks' cpus_allowed mask.
A cpuset that is mem_exclusive restricts kernel allocations for
page, buffer and other data commonly shared by the kernel across
multiple users. All cpusets, whether mem_exclusive or not, restrict
allocations of memory for user space. This enables configuring a
system so that several independent jobs can share common kernel data,
such as file system pages, while isolating each jobs user allocation in
its own cpuset. To do this, construct a large mem_exclusive cpuset to
hold all the jobs, and construct child, non-mem_exclusive cpusets for
each individual job. Only a small amount of typical kernel memory,
such as requests from interrupt handlers, is allowed to be taken
outside even a mem_exclusive cpuset.
1.5 What does notify_on_release do ?
------------------------------------
If the notify_on_release flag is enabled (1) in a cpuset, then whenever
the last task in the cpuset leaves (exits or attaches to some other
cpuset) and the last child cpuset of that cpuset is removed, then
the kernel runs the command /sbin/cpuset_release_agent, supplying the
pathname (relative to the mount point of the cpuset file system) of the
abandoned cpuset. This enables automatic removal of abandoned cpusets.
The default value of notify_on_release in the root cpuset at system
boot is disabled (0). The default value of other cpusets at creation
is the current value of their parents notify_on_release setting.
1.6 What is memory_pressure ?
-----------------------------
The memory_pressure of a cpuset provides a simple per-cpuset metric
of the rate that the tasks in a cpuset are attempting to free up in-use
memory on the nodes of the cpuset to satisfy additional memory
requests.
This enables batch managers monitoring jobs running in dedicated
cpusets to efficiently detect what level of memory pressure that job
is causing.
This is useful both on tightly managed systems running a wide mix of
submitted jobs, which may choose to terminate or re-prioritize jobs that
are trying to use more memory than allowed on the nodes assigned them,
and with tightly coupled, long running, massively parallel scientific
computing jobs that will dramatically fail to meet required performance
goals if they start to use more memory than allowed to them.
This mechanism provides a very economical way for the batch manager
to monitor a cpuset for signs of memory pressure. It's up to the
batch manager or other user code to decide what to do about it and
take action.
==> Unless this feature is enabled by writing "1" to the special file
/dev/cpuset/memory_pressure_enabled, the hook in the rebalance
code of __alloc_pages() for this metric reduces to simply noticing
that the cpuset_memory_pressure_enabled flag is zero. So only
systems that enable this feature will compute the metric.
Why a per-cpuset, running average:
Because this meter is per-cpuset, rather than per-task or mm,
the system load imposed by a batch scheduler monitoring this
metric is sharply reduced on large systems, because a scan of
the tasklist can be avoided on each set of queries.
Because this meter is a running average, instead of an accumulating
counter, a batch scheduler can detect memory pressure with a
single read, instead of having to read and accumulate results
for a period of time.
Because this meter is per-cpuset rather than per-task or mm,
the batch scheduler can obtain the key information, memory
pressure in a cpuset, with a single read, rather than having to
query and accumulate results over all the (dynamically changing)
set of tasks in the cpuset.
A per-cpuset simple digital filter (requires a spinlock and 3 words
of data per-cpuset) is kept, and updated by any task attached to that
cpuset, if it enters the synchronous (direct) page reclaim code.
A per-cpuset file provides an integer number representing the recent
(half-life of 10 seconds) rate of direct page reclaims caused by
the tasks in the cpuset, in units of reclaims attempted per second,
times 1000.
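As a sketch of how a batch manager might sample this metric from userspace
(the /dev/cpuset mount point follows the convention used in this document;
the function name and error handling are illustrative):

#include <stdio.h>

/* Returns the recent reclaim rate for one cpuset (attempted reclaims per
 * second, times 1000, over a ~10 second half-life), or -1 on error. */
int read_memory_pressure(const char *cpuset)
{
	char path[256];
	FILE *f;
	int pressure = -1;

	snprintf(path, sizeof(path), "/dev/cpuset/%s/memory_pressure", cpuset);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &pressure) != 1)
		pressure = -1;
	fclose(f);
	return pressure;
}

A monitor would call this periodically for each job cpuset and act only when
the value stays high.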
1.7 How do I use cpusets ?
--------------------------
In order to minimize the impact of cpusets on critical kernel
@ -277,6 +364,30 @@ rewritten to the 'tasks' file of its cpuset. This is done to avoid
impacting the scheduler code in the kernel with a check for changes
in a tasks processor placement.
Normally, once a page is allocated (given a physical page
of main memory) then that page stays on whatever node it
was allocated, so long as it remains allocated, even if the
cpusets memory placement policy 'mems' subsequently changes.
If the cpuset flag file 'memory_migrate' is set true, then when
tasks are attached to that cpuset, any pages that task had
allocated to it on nodes in its previous cpuset are migrated
to the tasks new cpuset. Depending on the implementation,
this migration may either be done by swapping the page out,
so that the next time the page is referenced, it will be paged
into the tasks new cpuset, usually on the node where it was
referenced, or this migration may be done by directly copying
the pages from the tasks previous cpuset to the new cpuset,
where possible to the same node, relative to the new cpuset,
as the node that held the page, relative to the old cpuset.
Also if 'memory_migrate' is set true, then if that cpusets
'mems' file is modified, pages allocated to tasks in that
cpuset, that were on nodes in the previous setting of 'mems',
will be moved to nodes in the new setting of 'mems.' Again,
depending on the implementation, this might be done by swapping,
or by direct copying. In either case, pages that were not in
the tasks prior cpuset, or in the cpusets prior 'mems' setting,
will not be moved.
There is an exception to the above. If hotplug functionality is used
to remove all the CPUs that are currently assigned to a cpuset,
then the kernel will automatically update the cpus_allowed of all


@ -22,6 +22,11 @@ journal=inum When a journal already exists, this option is
the inode which will represent the ext3 file
system's journal file.
journal_dev=devnum When the external journal device's major/minor numbers
have changed, this option allows to specify the new
journal location. The journal device is identified
through its new major/minor numbers encoded in devnum.
noload Don't load the journal on mounting.
data=journal All data are committed into the journal prior


@ -1302,6 +1302,23 @@ VM has token based thrashing control mechanism and uses the token to prevent
unnecessary page faults in thrashing situation. The unit of the value is
second. The value would be useful to tune thrashing behavior.
drop_caches
-----------
Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.
To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches
As this is a non-destructive operation and dirty objects are not freeable, the
user should run `sync' first.
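The same sequence, done from a C program rather than the shell, as a small
sketch (equivalent to running `sync' and then echoing 3 as shown above):

#include <stdio.h>
#include <unistd.h>

int drop_all_caches(void)
{
	FILE *f;

	sync();			/* dirty objects are not freeable; flush them first */
	f = fopen("/proc/sys/vm/drop_caches", "w");
	if (!f)
		return -1;
	fputs("3\n", f);	/* 3 = pagecache plus dentries and inodes */
	fclose(f);
	return 0;
}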
2.5 /proc/sys/dev - Device specific parameters
----------------------------------------------


@ -143,12 +143,26 @@ as the following example:
dir /mnt 755 0 0
file /init initramfs/init.sh 755 0 0
Run "usr/gen_init_cpio" (after the kernel build) to get a usage message
documenting the above file format.
One advantage of the text file is that root access is not required to
set permissions or create device nodes in the new archive. (Note that those
two example "file" entries expect to find files named "init.sh" and "busybox" in
a directory called "initramfs", under the linux-2.6.* directory. See
Documentation/early-userspace/README for more details.)
The kernel does not depend on external cpio tools, gen_init_cpio is created
from usr/gen_init_cpio.c which is entirely self-contained, and the kernel's
boot-time extractor is also (obviously) self-contained. However, if you _do_
happen to have cpio installed, the following command line can extract the
generated cpio image back into its component files:
cpio -i -d -H newc -F initramfs_data.cpio --no-absolute-filenames
Contents of initramfs:
----------------------
If you don't already understand what shared libraries, devices, and paths
you need to get a minimal root filesystem up and running, here are some
references:
@ -161,13 +175,69 @@ designed to be a tiny C library to statically link early userspace
code against, along with some related utilities. It is BSD licensed.
I use uClibc (http://www.uclibc.org) and busybox (http://www.busybox.net)
myself. These are LGPL and GPL, respectively.
myself. These are LGPL and GPL, respectively. (A self-contained initramfs
package is planned for the busybox 1.2 release.)
In theory you could use glibc, but that's not well suited for small embedded
uses like this. (A "hello world" program statically linked against glibc is
over 400k. With uClibc it's 7k. Also note that glibc dlopens libnss to do
name lookups, even when otherwise statically linked.)
Why cpio rather than tar?
-------------------------
This decision was made back in December, 2001. The discussion started here:
http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1538.html
And spawned a second thread (specifically on tar vs cpio), starting here:
http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1587.html
The quick and dirty summary version (which is no substitute for reading
the above threads) is:
1) cpio is a standard. It's decades old (from the AT&T days), and already
widely used on Linux (inside RPM, Red Hat's device driver disks). Here's
a Linux Journal article about it from 1996:
http://www.linuxjournal.com/article/1213
It's not as popular as tar because the traditional cpio command line tools
require _truly_hideous_ command line arguments. But that says nothing
either way about the archive format, and there are alternative tools,
such as:
http://freshmeat.net/projects/afio/
2) The cpio archive format chosen by the kernel is simpler and cleaner (and
thus easier to create and parse) than any of the (literally dozens of)
various tar archive formats. The complete initramfs archive format is
explained in buffer-format.txt, created in usr/gen_init_cpio.c, and
extracted in init/initramfs.c. All three together come to less than 26k
total of human-readable text.
3) The GNU project standardizing on tar is approximately as relevant as
Windows standardizing on zip. Linux is not part of either, and is free
to make its own technical decisions.
4) Since this is a kernel internal format, it could easily have been
something brand new. The kernel provides its own tools to create and
extract this format anyway. Using an existing standard was preferable,
but not essential.
5) Al Viro made the decision (quote: "tar is ugly as hell and not going to be
supported on the kernel side"):
http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1540.html
explained his reasoning:
http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1550.html
http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1638.html
and, most importantly, designed and implemented the initramfs code.
Future directions:
------------------


@ -44,30 +44,41 @@ relayfs can operate in a mode where it will overwrite data not yet
collected by userspace, and not wait for it to consume it.
relayfs itself does not provide for communication of such data between
userspace and kernel, allowing the kernel side to remain simple and not
impose a single interface on userspace. It does provide a separate
helper though, described below.
userspace and kernel, allowing the kernel side to remain simple and
not impose a single interface on userspace. It does provide a set of
examples and a separate helper though, described below.
klog, relay-app & librelay
==========================
klog and relay-apps example code
================================
relayfs itself is ready to use, but to make things easier, two
additional systems are provided. klog is a simple wrapper to make
writing formatted text or raw data to a channel simpler, regardless of
whether a channel to write into exists or not, or whether relayfs is
compiled into the kernel or is configured as a module. relay-app is
the kernel counterpart of userspace librelay.c, combined these two
files provide glue to easily stream data to disk, without having to
bother with housekeeping. klog and relay-app can be used together,
with klog providing high-level logging functions to the kernel and
relay-app taking care of kernel-user control and disk-logging chores.
relayfs itself is ready to use, but to make things easier, a couple
simple utility functions and a set of examples are provided.
It is possible to use relayfs without relay-app & librelay, but you'll
have to implement communication between userspace and kernel, allowing
both to convey the state of buffers (full, empty, amount of padding).
The relay-apps example tarball, available on the relayfs sourceforge
site, contains a set of self-contained examples, each consisting of a
pair of .c files containing boilerplate code for each of the user and
kernel sides of a relayfs application; combined these two sets of
boilerplate code provide glue to easily stream data to disk, without
having to bother with mundane housekeeping chores.
The 'klog debugging functions' patch (klog.patch in the relay-apps
tarball) provides a couple of high-level logging functions to the
kernel which allow writing formatted text or raw data to a channel,
regardless of whether a channel to write into exists or not, or
whether relayfs is compiled into the kernel or is configured as a
module. These functions allow you to put unconditional 'trace'
statements anywhere in the kernel or kernel modules; only when there
is a 'klog handler' registered will data actually be logged (see the
klog and kleak examples for details).
It is of course possible to use relayfs from scratch i.e. without
using any of the relay-apps example code or klog, but you'll have to
implement communication between userspace and kernel, allowing both to
convey the state of buffers (full, empty, amount of padding).
klog and the relay-apps examples can be found in the relay-apps
tarball on http://relayfs.sourceforge.net
klog, relay-app and librelay can be found in the relay-apps tarball on
http://relayfs.sourceforge.net
The relayfs user space API
==========================
@ -125,6 +136,8 @@ Here's a summary of the API relayfs provides to in-kernel clients:
relay_reset(chan)
relayfs_create_dir(name, parent)
relayfs_remove_dir(dentry)
relayfs_create_file(name, parent, mode, fops, data)
relayfs_remove_file(dentry)
channel management typically called on instigation of userspace:
@ -141,6 +154,8 @@ Here's a summary of the API relayfs provides to in-kernel clients:
subbuf_start(buf, subbuf, prev_subbuf, prev_padding)
buf_mapped(buf, filp)
buf_unmapped(buf, filp)
create_buf_file(filename, parent, mode, buf, is_global)
remove_buf_file(dentry)
helper functions:
@ -320,6 +335,71 @@ forces a sub-buffer switch on all the channel buffers, and can be used
to finalize and process the last sub-buffers before the channel is
closed.
Creating non-relay files
------------------------
relay_open() automatically creates files in the relayfs filesystem to
represent the per-cpu kernel buffers; it's often useful for
applications to be able to create their own files alongside the relay
files in the relayfs filesystem as well e.g. 'control' files much like
those created in /proc or debugfs for similar purposes, used to
communicate control information between the kernel and user sides of a
relayfs application. For this purpose the relayfs_create_file() and
relayfs_remove_file() API functions exist. For relayfs_create_file(),
the caller passes in a set of user-defined file operations to be used
for the file and an optional void * to a user-specified data item,
which will be accessible via inode->u.generic_ip (see the relay-apps
tarball for examples). The file_operations are a required parameter
to relayfs_create_file() and thus the semantics of these files are
completely defined by the caller.
See the relay-apps tarball at http://relayfs.sourceforge.net for
examples of how these non-relay files are meant to be used.
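A rough sketch of such a control file, following the
relayfs_create_file(name, parent, mode, fops, data) form listed earlier; the
names, the fops and the way the data pointer is used here are illustrative
only and may need adjusting to the kernel version at hand:

#include <linux/fs.h>
#include <linux/relayfs_fs.h>

static ssize_t my_ctrl_read(struct file *filp, char __user *buffer,
			    size_t count, loff_t *ppos)
{
	/* The pointer passed as the "data" argument to relayfs_create_file()
	 * is reachable via filp->f_dentry->d_inode->u.generic_ip. */
	return 0;
}

static struct file_operations my_ctrl_fops = {
	.read = my_ctrl_read,
};

/* In channel setup code, alongside relay_open():
 *
 *	ctrl = relayfs_create_file("control", dir, S_IRUGO,
 *				   &my_ctrl_fops, my_app_data);
 */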
Creating relay files in other filesystems
-----------------------------------------
By default of course, relay_open() creates relay files in the relayfs
filesystem. Because relay_file_operations is exported, however, it's
also possible to create and use relay files in other pseudo-filesystems
such as debugfs.
For this purpose, two callback functions are provided,
create_buf_file() and remove_buf_file(). create_buf_file() is called
once for each per-cpu buffer from relay_open() to allow the client to
create a file to be used to represent the corresponding buffer; if
this callback is not defined, the default implementation will create
and return a file in the relayfs filesystem to represent the buffer.
The callback should return the dentry of the file created to represent
the relay buffer. Note that the parent directory passed to
relay_open() (and passed along to the callback), if specified, must
exist in the same filesystem the new relay file is created in. If
create_buf_file() is defined, remove_buf_file() must also be defined;
it's responsible for deleting the file(s) created in create_buf_file()
and is called during relay_close().
The create_buf_file() implementation can also be defined in such a way
as to allow the creation of a single 'global' buffer instead of the
default per-cpu set. This can be useful for applications interested
mainly in seeing the relative ordering of system-wide events without
the need to bother with saving explicit timestamps for the purpose of
merging/sorting per-cpu files in a postprocessing step.
To have relay_open() create a global buffer, the create_buf_file()
implementation should set the value of the is_global outparam to a
non-zero value in addition to creating the file that will be used to
represent the single buffer. In the case of a global buffer,
create_buf_file() and remove_buf_file() will be called only once. The
normal channel-writing functions e.g. relay_write() can still be used
- writes from any cpu will transparently end up in the global buffer -
but since it is a global buffer, callers should make sure they use the
proper locking for such a buffer, either by wrapping writes in a
spinlock, or by copying a write function from relayfs_fs.h and
creating a local version that internally does the proper locking.
See the 'exported-relayfile' examples in the relay-apps tarball for
examples of creating and using relay files in debugfs.
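A condensed sketch of these two callbacks placing the buffer files in
debugfs; the callback prototypes follow the argument lists summarized above
and may differ slightly between kernel versions:

#include <linux/debugfs.h>
#include <linux/relayfs_fs.h>

static struct dentry *my_create_buf_file(const char *filename,
					 struct dentry *parent, int mode,
					 struct rchan_buf *buf,
					 int *is_global)
{
	/* Leave *is_global untouched for the default per-cpu buffers, or set
	 * it non-zero here to get a single global buffer instead. */
	return debugfs_create_file(filename, mode, parent, buf,
				   &relay_file_operations);
}

static int my_remove_buf_file(struct dentry *dentry)
{
	debugfs_remove(dentry);
	return 0;
}

Both functions would then be hooked into the callback structure handed to
relay_open().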
Misc
----


@ -56,10 +56,12 @@ A request proceeds in the following manner:
(4) request_key() then forks and executes /sbin/request-key with a new session
keyring that contains a link to auth key V.
(5) /sbin/request-key execs an appropriate program to perform the actual
(5) /sbin/request-key assumes the authority associated with key U.
(6) /sbin/request-key execs an appropriate program to perform the actual
instantiation.
(6) The program may want to access another key from A's context (say a
(7) The program may want to access another key from A's context (say a
Kerberos TGT key). It just requests the appropriate key, and the keyring
search notes that the session keyring has auth key V in its bottom level.
@ -67,19 +69,19 @@ A request proceeds in the following manner:
UID, GID, groups and security info of process A as if it was process A,
and come up with key W.
(7) The program then does what it must to get the data with which to
(8) The program then does what it must to get the data with which to
instantiate key U, using key W as a reference (perhaps it contacts a
Kerberos server using the TGT) and then instantiates key U.
(8) Upon instantiating key U, auth key V is automatically revoked so that it
(9) Upon instantiating key U, auth key V is automatically revoked so that it
may not be used again.
(9) The program then exits 0 and request_key() deletes key V and returns key
(10) The program then exits 0 and request_key() deletes key V and returns key
U to the caller.
This also extends further. If key W (step 5 above) didn't exist, key W would be
created uninstantiated, another auth key (X) would be created [as per step 3]
and another copy of /sbin/request-key spawned [as per step 4]; but the context
This also extends further. If key W (step 7 above) didn't exist, key W would be
created uninstantiated, another auth key (X) would be created (as per step 3)
and another copy of /sbin/request-key spawned (as per step 4); but the context
specified by auth key X will still be process A, as it was in auth key V.
This is because process A's keyrings can't simply be attached to
@ -138,8 +140,8 @@ until one succeeds:
(3) The process's session keyring is searched.
(4) If the process has a request_key() authorisation key in its session
keyring then:
(4) If the process has assumed the authority associated with a request_key()
authorisation key then:
(a) If extant, the calling process's thread keyring is searched.

View File

@ -308,6 +308,8 @@ process making the call:
KEY_SPEC_USER_KEYRING -4 UID-specific keyring
KEY_SPEC_USER_SESSION_KEYRING -5 UID-session keyring
KEY_SPEC_GROUP_KEYRING -6 GID-specific keyring
KEY_SPEC_REQKEY_AUTH_KEY -7 assumed request_key()
authorisation key
The main syscalls are:
@ -498,7 +500,11 @@ The keyctl syscall functions are:
keyring is full, error ENFILE will result.
The link procedure checks the nesting of the keyrings, returning ELOOP if
it appears to deep or EDEADLK if the link would introduce a cycle.
it appears too deep or EDEADLK if the link would introduce a cycle.
Any links within the keyring to keys that match the new key in terms of
type and description will be discarded from the keyring as the new one is
added.
(*) Unlink a key or keyring from another keyring:
@ -628,6 +634,41 @@ The keyctl syscall functions are:
there is one, otherwise the user default session keyring.
(*) Set the timeout on a key.
long keyctl(KEYCTL_SET_TIMEOUT, key_serial_t key, unsigned timeout);
This sets or clears the timeout on a key. The timeout can be 0 to clear
the timeout or a number of seconds to set the expiry time that far into
the future.
The process must have attribute modification access on a key to set its
timeout. Timeouts may not be set with this function on negative, revoked
or expired keys.
(*) Assume the authority granted to instantiate a key
long keyctl(KEYCTL_ASSUME_AUTHORITY, key_serial_t key);
This assumes or divests the authority required to instantiate the
specified key. Authority can only be assumed if the thread has the
authorisation key associated with the specified key in its keyrings
somewhere.
Once authority is assumed, searches for keys will also search the
requester's keyrings using the requester's security label, UID, GID and
groups.
If the requested authority is unavailable, error EPERM will be returned,
likewise if the authority has been revoked because the target key is
already instantiated.
If the specified key is 0, then any assumed authority will be divested.
The assumed authoritative key is inherited across fork and exec.
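As an illustrative sketch only (not normative; it assumes the userspace keyutils library's <keyutils.h>, its keyctl() wrapper and key_serial_t type), an instantiation helper run by /sbin/request-key might combine the two calls above like this:

    #include <keyutils.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
            key_serial_t key;

            if (argc < 2)
                    return 1;
            key = (key_serial_t) atol(argv[1]);   /* key to instantiate */

            /* take on the requester's authority for keyring searches */
            if (keyctl(KEYCTL_ASSUME_AUTHORITY, key) < 0)
                    return 1;

            /* ... request supporting keys and instantiate 'key' here ... */

            /* optionally give the key a ten-minute expiry
             * (needs setattr permission on the key) */
            keyctl(KEYCTL_SET_TIMEOUT, key, 600);

            /* divest the assumed authority again */
            keyctl(KEYCTL_ASSUME_AUTHORITY, 0);
            return 0;
    }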
===============
KERNEL SERVICES
===============

View File

@ -26,12 +26,13 @@ Currently, these files are in /proc/sys/vm:
- min_free_kbytes
- laptop_mode
- block_dump
- drop-caches
==============================================================
dirty_ratio, dirty_background_ratio, dirty_expire_centisecs,
dirty_writeback_centisecs, vfs_cache_pressure, laptop_mode,
block_dump, swap_token_timeout:
block_dump, swap_token_timeout, drop-caches:
See Documentation/filesystems/proc.txt
@ -102,3 +103,20 @@ This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a pages_min
value for each lowmem zone in the system. Each lowmem zone gets
a number of reserved free pages based proportionally on its size.
==============================================================
percpu_pagelist_fraction
This is the fraction of pages in each zone that may, at most, be allocated to
each per-cpu page list (the list's high water mark, pcp->high).  The minimum
value for this is 8, meaning that no more than 1/8th of the pages in each zone
may be allocated to any single per-cpu page list.  This entry only changes the
value of hot per-cpu page lists.  A user can specify a number such as 100 to
allocate 1/100th of each zone to each per-cpu page list.
The batch value of each per-cpu page list is also updated as a result.  It is
set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8).
The initial value is zero.  The kernel does not use this value at boot time to
set the high water marks for each per-cpu page list.
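As a worked example with purely illustrative numbers: setting the value to 100
on a zone of 1,048,576 pages would give each per-cpu page list a high mark of
about 10,485 pages; the recomputed batch value of high/4 = 2,621 would then be
clamped to the (PAGE_SHIFT * 8) upper limit, i.e. 96 with 4 KiB pages.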

View File

@ -927,7 +927,6 @@ S: Maintained
FARSYNC SYNCHRONOUS DRIVER
P: Kevin Curtis
M: kevin.curtis@farsite.co.uk
M: kevin.curtis@farsite.co.uk
W: http://www.farsite.co.uk/
S: Supported

7
README
View File

@ -183,11 +183,8 @@ CONFIGURING the kernel:
COMPILING the kernel:
- Make sure you have gcc 2.95.3 available.
gcc 2.91.66 (egcs-1.1.2), and gcc 2.7.2.3 are known to miscompile
some parts of the kernel, and are *no longer supported*.
Also remember to upgrade your binutils package (for as/ld/nm and company)
if necessary. For more information, refer to Documentation/Changes.
- Make sure you have at least gcc 3.2 available.
For more information, refer to Documentation/Changes.
Please note that you can still run a.out user programs with this kernel.

View File

@ -18,9 +18,6 @@ config MMU
bool
default y
config UID16
bool
config RWSEM_GENERIC_SPINLOCK
bool

View File

@ -43,6 +43,11 @@
#include "proto.h"
#include "pci_impl.h"
/*
* Power off function, if any
*/
void (*pm_power_off)(void) = machine_power_off;
void
cpu_idle(void)
{

View File

@ -265,30 +265,16 @@ do_sys_ptrace(long request, long pid, long addr, long data,
lock_kernel();
DBG(DBG_MEM, ("request=%ld pid=%ld addr=0x%lx data=0x%lx\n",
request, pid, addr, data));
ret = -EPERM;
if (request == PTRACE_TRACEME) {
/* are we already being traced? */
if (current->ptrace & PT_PTRACED)
goto out_notsk;
ret = security_ptrace(current->parent, current);
if (ret)
goto out_notsk;
/* set the ptrace bit in the process ptrace flags. */
current->ptrace |= PT_PTRACED;
ret = 0;
ret = ptrace_traceme();
goto out_notsk;
}
if (pid == 1) /* you may not mess with init */
goto out_notsk;
ret = -ESRCH;
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
if (child)
get_task_struct(child);
read_unlock(&tasklist_lock);
if (!child)
child = ptrace_get_task_struct(pid);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
goto out_notsk;
}
if (request == PTRACE_ATTACH) {
ret = ptrace_attach(child);

View File

@ -46,10 +46,6 @@ config MCA
<file:Documentation/mca.txt> (and especially the web page given
there) before attempting to build an MCA bus kernel.
config UID16
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
default y

View File

@ -13,6 +13,7 @@
#include <linux/device.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/platform_device.h>
#include <asm/io.h>
#include <asm/hardware/scoop.h>

View File

@ -23,20 +23,15 @@
#error Sorry, your compiler targets APCS-26 but this kernel requires APCS-32
#endif
/*
* GCC 2.95.1, 2.95.2: ignores register clobber list in asm().
* GCC 3.0, 3.1: general bad code generation.
* GCC 3.2.0: incorrect function argument offset calculation.
* GCC 3.2.x: miscompiles NEW_AUX_ENT in fs/binfmt_elf.c
* (http://gcc.gnu.org/PR8896) and incorrect structure
* initialisation in fs/jffs2/erase.c
*/
#if __GNUC__ < 2 || \
(__GNUC__ == 2 && __GNUC_MINOR__ < 95) || \
(__GNUC__ == 2 && __GNUC_MINOR__ == 95 && __GNUC_PATCHLEVEL__ != 0 && \
__GNUC_PATCHLEVEL__ < 3) || \
(__GNUC__ == 3 && __GNUC_MINOR__ < 3)
#if (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
#error Your compiler is too buggy; it is known to miscompile kernels.
#error Known good compilers: 2.95.3, 2.95.4, 2.96, 3.3
#error Known good compilers: 3.3
#endif
/* Use marker if you need to separate the values later */

View File

@ -684,8 +684,12 @@ int setup_irq(unsigned int irq, struct irqaction *new)
spin_lock_irqsave(&irq_controller_lock, flags);
p = &desc->action;
if ((old = *p) != NULL) {
/* Can't share interrupts unless both agree to */
if (!(old->flags & new->flags & SA_SHIRQ)) {
/*
* Can't share interrupts unless both agree to and are
* the same type.
*/
if (!(old->flags & new->flags & SA_SHIRQ) ||
(~old->flags & new->flags) & SA_TRIGGER_MASK) {
spin_unlock_irqrestore(&irq_controller_lock, flags);
return -EBUSY;
}
@ -705,6 +709,12 @@ int setup_irq(unsigned int irq, struct irqaction *new)
desc->running = 0;
desc->pending = 0;
desc->disable_depth = 1;
if (new->flags & SA_TRIGGER_MASK) {
unsigned int type = new->flags & SA_TRIGGER_MASK;
desc->chip->set_type(irq, type);
}
if (!desc->noautoenable) {
desc->disable_depth = 0;
desc->chip->unmask(irq);

View File

@ -601,6 +601,7 @@ EXPORT_SYMBOL(gpio_lock);
EXPORT_SYMBOL(gpio_modify_op);
EXPORT_SYMBOL(gpio_modify_io);
EXPORT_SYMBOL(cpld_modify);
EXPORT_SYMBOL(gpio_read);
/*
* Initialise any other hardware after we've got the PCI bus

View File

@ -96,7 +96,8 @@ static struct rtc_ops rtc_ops = {
.set_alarm = rtc_set_alarm,
};
static irqreturn_t rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
static irqreturn_t arm_rtc_interrupt(int irq, void *dev_id,
struct pt_regs *regs)
{
writel(0, rtc_base + RTC_EOI);
return IRQ_HANDLED;
@ -124,7 +125,7 @@ static int rtc_probe(struct amba_device *dev, void *id)
xtime.tv_sec = __raw_readl(rtc_base + RTC_DR);
ret = request_irq(dev->irq[0], rtc_interrupt, SA_INTERRUPT,
ret = request_irq(dev->irq[0], arm_rtc_interrupt, SA_INTERRUPT,
"rtc-pl030", dev);
if (ret)
goto map_out;

View File

@ -252,9 +252,8 @@ static void __init omap_serial_set_port_wakeup(int gpio_nr)
return;
}
omap_set_gpio_direction(gpio_nr, 1);
set_irq_type(OMAP_GPIO_IRQ(gpio_nr), IRQT_RISING);
ret = request_irq(OMAP_GPIO_IRQ(gpio_nr), &omap_serial_wake_interrupt,
0, "serial wakeup", NULL);
SA_TRIGGER_RISING, "serial wakeup", NULL);
if (ret) {
omap_free_gpio(gpio_nr);
printk(KERN_ERR "No interrupt for UART wake GPIO: %i\n",

View File

@ -213,15 +213,14 @@ static int corgi_mci_init(struct device *dev, irqreturn_t (*corgi_detect_int)(in
corgi_mci_platform_data.detect_delay = msecs_to_jiffies(250);
err = request_irq(CORGI_IRQ_GPIO_nSD_DETECT, corgi_detect_int, SA_INTERRUPT,
"MMC card detect", data);
err = request_irq(CORGI_IRQ_GPIO_nSD_DETECT, corgi_detect_int,
SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
"MMC card detect", data);
if (err) {
printk(KERN_ERR "corgi_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
return -1;
}
set_irq_type(CORGI_IRQ_GPIO_nSD_DETECT, IRQT_BOTHEDGE);
return 0;
}

View File

@ -146,15 +146,14 @@ static int poodle_mci_init(struct device *dev, irqreturn_t (*poodle_detect_int)(
poodle_mci_platform_data.detect_delay = msecs_to_jiffies(250);
err = request_irq(POODLE_IRQ_GPIO_nSD_DETECT, poodle_detect_int, SA_INTERRUPT,
"MMC card detect", data);
err = request_irq(POODLE_IRQ_GPIO_nSD_DETECT, poodle_detect_int,
SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
"MMC card detect", data);
if (err) {
printk(KERN_ERR "poodle_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
return -1;
}
set_irq_type(POODLE_IRQ_GPIO_nSD_DETECT, IRQT_BOTHEDGE);
return 0;
}

View File

@ -296,15 +296,14 @@ static int spitz_mci_init(struct device *dev, irqreturn_t (*spitz_detect_int)(in
spitz_mci_platform_data.detect_delay = msecs_to_jiffies(250);
err = request_irq(SPITZ_IRQ_GPIO_nSD_DETECT, spitz_detect_int, SA_INTERRUPT,
"MMC card detect", data);
err = request_irq(SPITZ_IRQ_GPIO_nSD_DETECT, spitz_detect_int,
SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING,
"MMC card detect", data);
if (err) {
printk(KERN_ERR "spitz_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
return -1;
}
set_irq_type(SPITZ_IRQ_GPIO_nSD_DETECT, IRQT_BOTHEDGE);
return 0;
}

View File

@ -13,6 +13,7 @@
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/smp.h>
#include <linux/jiffies.h>
#include <asm/mach/time.h>
#include <asm/hardware/arm_twd.h>

View File

@ -84,13 +84,13 @@ static void usb_simtec_enableoc(struct s3c2410_hcd_info *info, int on)
int ret;
if (on) {
ret = request_irq(IRQ_USBOC, usb_simtec_ocirq, SA_INTERRUPT,
ret = request_irq(IRQ_USBOC, usb_simtec_ocirq,
SA_INTERRUPT | SA_TRIGGER_RISING |
SA_TRIGGER_FALLING,
"USB Over-current", info);
if (ret != 0) {
printk(KERN_ERR "failed to request usb oc irq\n");
}
set_irq_type(IRQ_USBOC, IRQT_BOTHEDGE);
} else {
free_irq(IRQ_USBOC, info);
}

View File

@ -34,10 +34,6 @@ config FORCE_MAX_ZONEORDER
int
default 9
config UID16
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
default y

View File

@ -25,13 +25,6 @@
#if defined(__APCS_32__) && defined(CONFIG_CPU_26)
#error Sorry, your compiler targets APCS-32 but this kernel requires APCS-26
#endif
#if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 95)
#error Sorry, your compiler is known to miscompile kernels. Only use gcc 2.95.3 and later.
#endif
#if __GNUC__ == 2 && __GNUC_MINOR__ == 95
/* shame we can't detect the .1 or .2 releases */
#warning GCC 2.95.2 and earlier miscompiles kernels.
#endif
/* Use marker if you need to separate the values later */

View File

@ -9,10 +9,6 @@ config MMU
bool
default y
config UID16
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
default y

View File

@ -274,6 +274,11 @@ config GPREL_DATA_NONE
endchoice
config FRV_ONCPU_SERIAL
bool "Use on-CPU serial ports"
select SERIAL_8250
default y
config PCI
bool "Use PCI"
depends on MB93090_MB00
@ -305,23 +310,7 @@ config RESERVE_DMA_COHERENT
source "drivers/pci/Kconfig"
config PCMCIA
tristate "Use PCMCIA"
help
Say Y here if you want to attach PCMCIA- or PC-cards to your FR-V
board. These are credit-card size devices such as network cards,
modems or hard drives often used with laptops computers. There are
actually two varieties of these cards: the older 16 bit PCMCIA cards
and the newer 32 bit CardBus cards. If you want to use CardBus
cards, you need to say Y here and also to "CardBus support" below.
To use your PC-cards, you will need supporting software from David
Hinds pcmcia-cs package (see the file <file:Documentation/Changes>
for location). Please also read the PCMCIA-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
To compile this driver as modules, choose M here: the
modules will be called pcmcia_core and ds.
source "drivers/pcmcia/Kconfig"
#config MATH_EMULATION
# bool "Math emulation support (EXPERIMENTAL)"

View File

@ -2,32 +2,10 @@ menu "Kernel hacking"
source "lib/Kconfig.debug"
config EARLY_PRINTK
bool "Early printk"
depends on EMBEDDED && DEBUG_KERNEL
default n
help
Write kernel log output directly into the VGA buffer or to a serial
port.
This is useful for kernel debugging when your machine crashes very
early before the console code is initialized. For normal operation
it is not recommended because it looks ugly and doesn't cooperate
with klogd/syslogd or the X server. You should normally N here,
unless you want to debug such a crash.
config DEBUG_STACKOVERFLOW
bool "Check for stack overflows"
depends on DEBUG_KERNEL
config DEBUG_PAGEALLOC
bool "Page alloc debugging"
depends on DEBUG_KERNEL
help
Unmap pages from the kernel linear mapping after free_pages().
This results in a large slowdown, but helps to find certain types
of memory corruptions.
config GDBSTUB
bool "Remote GDB kernel debugging"
depends on DEBUG_KERNEL

View File

@ -109,10 +109,10 @@ bootstrap:
$(Q)$(MAKEBOOT) bootstrap
archmrproper:
$(Q)$(MAKE) -C arch/frv/boot mrproper
$(Q)$(MAKE) $(build)=arch/frv/boot mrproper
archclean:
$(Q)$(MAKE) -C arch/frv/boot clean
$(Q)$(MAKE) $(build)=arch/frv/boot clean
archdep: scripts/mkdep symlinks
$(Q)$(MAKE) -C arch/frv/boot dep
$(Q)$(MAKE) $(build)=arch/frv/boot dep

View File

@ -21,3 +21,4 @@ obj-$(CONFIG_PM) += pm.o cmode.o
obj-$(CONFIG_MB93093_PDK) += pm-mb93093.o
obj-$(CONFIG_SYSCTL) += sysctl.o
obj-$(CONFIG_FUTEX) += futex.o
obj-$(CONFIG_MODULES) += module.o

View File

@ -16,10 +16,11 @@
#include <asm/semaphore.h>
#include <asm/checksum.h>
#include <asm/hardirq.h>
#include <asm/current.h>
#include <asm/cacheflush.h>
extern void dump_thread(struct pt_regs *, struct user *);
extern long __memcpy_user(void *dst, const void *src, size_t count);
extern long __memset_user(void *dst, const void *src, size_t count);
/* platform dependent support */
@ -50,7 +51,11 @@ EXPORT_SYMBOL(disable_irq);
EXPORT_SYMBOL(__res_bus_clock_speed_HZ);
EXPORT_SYMBOL(__page_offset);
EXPORT_SYMBOL(__memcpy_user);
EXPORT_SYMBOL(flush_dcache_page);
EXPORT_SYMBOL(__memset_user);
EXPORT_SYMBOL(frv_dcache_writeback);
EXPORT_SYMBOL(frv_cache_invalidate);
EXPORT_SYMBOL(frv_icache_invalidate);
EXPORT_SYMBOL(frv_cache_wback_inv);
#ifndef CONFIG_MMU
EXPORT_SYMBOL(memory_start);
@ -72,6 +77,9 @@ EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL(memscan);
EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(__outsl_ns);
EXPORT_SYMBOL(__insl_ns);
EXPORT_SYMBOL(get_wchan);
#ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
@ -80,14 +88,13 @@ EXPORT_SYMBOL(atomic_test_and_OR_mask);
EXPORT_SYMBOL(atomic_test_and_XOR_mask);
EXPORT_SYMBOL(atomic_add_return);
EXPORT_SYMBOL(atomic_sub_return);
EXPORT_SYMBOL(__xchg_8);
EXPORT_SYMBOL(__xchg_16);
EXPORT_SYMBOL(__xchg_32);
EXPORT_SYMBOL(__cmpxchg_8);
EXPORT_SYMBOL(__cmpxchg_16);
EXPORT_SYMBOL(__cmpxchg_32);
#endif
EXPORT_SYMBOL(__debug_bug_printk);
EXPORT_SYMBOL(__delay_loops_MHz);
/*
* libgcc functions - functions that are used internally by the
* compiler... (prototypes are not correct though, but that
@ -101,6 +108,8 @@ extern void __divdi3(void);
extern void __lshrdi3(void);
extern void __moddi3(void);
extern void __muldi3(void);
extern void __mulll(void);
extern void __umulll(void);
extern void __negdi2(void);
extern void __ucmpdi2(void);
extern void __udivdi3(void);
@ -116,8 +125,10 @@ EXPORT_SYMBOL(__ashrdi3);
EXPORT_SYMBOL(__lshrdi3);
//EXPORT_SYMBOL(__moddi3);
EXPORT_SYMBOL(__muldi3);
EXPORT_SYMBOL(__mulll);
EXPORT_SYMBOL(__umulll);
EXPORT_SYMBOL(__negdi2);
//EXPORT_SYMBOL(__ucmpdi2);
EXPORT_SYMBOL(__ucmpdi2);
//EXPORT_SYMBOL(__udivdi3);
//EXPORT_SYMBOL(__udivmoddi4);
//EXPORT_SYMBOL(__umoddi3);

View File

@ -32,6 +32,7 @@
#include <linux/irq.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/module.h>
#include <asm/atomic.h>
#include <asm/io.h>
@ -178,6 +179,8 @@ void disable_irq_nosync(unsigned int irq)
spin_unlock_irqrestore(&level->lock, flags);
}
EXPORT_SYMBOL(disable_irq_nosync);
/**
* disable_irq - disable an irq and wait for completion
* @irq: Interrupt to disable
@ -204,6 +207,8 @@ void disable_irq(unsigned int irq)
#endif
}
EXPORT_SYMBOL(disable_irq);
/**
* enable_irq - enable handling of an irq
* @irq: Interrupt to enable
@ -268,6 +273,8 @@ void enable_irq(unsigned int irq)
spin_unlock_irqrestore(&level->lock, flags);
}
EXPORT_SYMBOL(enable_irq);
/*****************************************************************************/
/*
* handles all normal device IRQ's
@ -425,6 +432,8 @@ int request_irq(unsigned int irq,
return retval;
}
EXPORT_SYMBOL(request_irq);
/**
* free_irq - free an interrupt
* @irq: Interrupt line to free
@ -496,6 +505,8 @@ void free_irq(unsigned int irq, void *dev_id)
}
}
EXPORT_SYMBOL(free_irq);
/*
* IRQ autodetection code..
*
@ -519,6 +530,8 @@ unsigned long probe_irq_on(void)
return 0;
}
EXPORT_SYMBOL(probe_irq_on);
/*
* Return a mask of triggered interrupts (this
* can handle only legacy ISA interrupts).
@ -542,6 +555,8 @@ unsigned int probe_irq_mask(unsigned long xmask)
return 0;
}
EXPORT_SYMBOL(probe_irq_mask);
/*
* Return the one interrupt that triggered (this can
* handle any interrupt source).
@ -571,6 +586,8 @@ int probe_irq_off(unsigned long xmask)
return -1;
}
EXPORT_SYMBOL(probe_irq_off);
/* this was setup_x86_irq but it seems pretty generic */
int setup_irq(unsigned int irq, struct irqaction *new)
{

80
arch/frv/kernel/module.c Normal file
View File

@ -0,0 +1,80 @@
/* module.c: FRV specific module loading bits
*
* Copyright (C) 2006 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
* - Derived from arch/i386/kernel/module.c, Copyright (C) 2001 Rusty Russell.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/moduleloader.h>
#include <linux/elf.h>
#include <linux/vmalloc.h>
#include <linux/fs.h>
#include <linux/string.h>
#include <linux/kernel.h>
#if 0
#define DEBUGP printk
#else
#define DEBUGP(fmt...)
#endif
void *module_alloc(unsigned long size)
{
if (size == 0)
return NULL;
return vmalloc_exec(size);
}
/* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region)
{
vfree(module_region);
/* FIXME: If module_region == mod->init_region, trim exception
table entries. */
}
/* We don't need anything special. */
int module_frob_arch_sections(Elf_Ehdr *hdr,
Elf_Shdr *sechdrs,
char *secstrings,
struct module *mod)
{
return 0;
}
int apply_relocate(Elf32_Shdr *sechdrs,
const char *strtab,
unsigned int symindex,
unsigned int relsec,
struct module *me)
{
printk(KERN_ERR "module %s: ADD RELOCATION unsupported\n", me->name);
return -ENOEXEC;
}
int apply_relocate_add(Elf32_Shdr *sechdrs,
const char *strtab,
unsigned int symindex,
unsigned int relsec,
struct module *me)
{
printk(KERN_ERR "module %s: ADD RELOCATION unsupported\n", me->name);
return -ENOEXEC;
}
int module_finalize(const Elf_Ehdr *hdr,
const Elf_Shdr *sechdrs,
struct module *me)
{
return 0;
}
void module_arch_cleanup(struct module *mod)
{
}

View File

@ -13,6 +13,7 @@
#include <linux/config.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/pm.h>
#include <linux/pm_legacy.h>
#include <linux/sched.h>
@ -27,6 +28,7 @@
#include "local.h"
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
extern void frv_change_cmode(int);

View File

@ -787,6 +787,7 @@ void __init setup_arch(char **cmdline_p)
#endif
/* register those serial ports that are available */
#ifdef CONFIG_FRV_ONCPU_SERIAL
#ifndef CONFIG_GDBSTUB_UART0
__reg(UART0_BASE + UART_IER * 8) = 0;
early_serial_setup(&__frv_uart0);
@ -795,6 +796,7 @@ void __init setup_arch(char **cmdline_p)
__reg(UART1_BASE + UART_IER * 8) = 0;
early_serial_setup(&__frv_uart1);
#endif
#endif
#if defined(CONFIG_CHR_DEV_FLASH) || defined(CONFIG_BLK_DEV_FLASH)
/* we need to initialize the Flashrom device here since we might

View File

@ -189,6 +189,8 @@ void do_gettimeofday(struct timeval *tv)
tv->tv_usec = usec;
}
EXPORT_SYMBOL(do_gettimeofday);
int do_settimeofday(struct timespec *tv)
{
time_t wtm_sec, sec = tv->tv_sec;
@ -218,6 +220,7 @@ int do_settimeofday(struct timespec *tv)
clock_was_set();
return 0;
}
EXPORT_SYMBOL(do_settimeofday);
/*

View File

@ -19,6 +19,7 @@
#include <linux/string.h>
#include <linux/linkage.h>
#include <linux/init.h>
#include <linux/module.h>
#include <asm/setup.h>
#include <asm/fpu.h>
@ -250,6 +251,8 @@ void dump_stack(void)
show_stack(NULL, NULL);
}
EXPORT_SYMBOL(dump_stack);
void show_stack(struct task_struct *task, unsigned long *sp)
{
}

View File

@ -10,6 +10,7 @@
*/
#include <linux/mm.h>
#include <linux/module.h>
#include <asm/uaccess.h>
/*****************************************************************************/
@ -58,8 +59,11 @@ long strncpy_from_user(char *dst, const char *src, long count)
memset(p, 0, count); /* clear remainder of buffer [security] */
return err;
} /* end strncpy_from_user() */
EXPORT_SYMBOL(strncpy_from_user);
/*****************************************************************************/
/*
* Return the size of a string (including the ending 0)
@ -92,4 +96,7 @@ long strnlen_user(const char *src, long count)
}
return p - src + 1; /* return length including NUL */
} /* end strnlen_user() */
EXPORT_SYMBOL(strnlen_user);

View File

@ -112,6 +112,7 @@ SECTIONS
#endif
)
SCHED_TEXT
LOCK_TEXT
*(.fixup)
*(.gnu.warning)
*(.exitcall.exit)

View File

@ -3,6 +3,6 @@
#
lib-y := \
__ashldi3.o __lshrdi3.o __muldi3.o __ashrdi3.o __negdi2.o \
__ashldi3.o __lshrdi3.o __muldi3.o __ashrdi3.o __negdi2.o __ucmpdi2.o \
checksum.o memcpy.o memset.o atomic-ops.o \
outsl_ns.o outsl_sw.o insl_ns.o insl_sw.o cache.o

45
arch/frv/lib/__ucmpdi2.S Normal file
View File

@ -0,0 +1,45 @@
/* __ucmpdi2.S: 64-bit unsigned compare
*
* Copyright (C) 2006 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
.text
.p2align 4
###############################################################################
#
# int __ucmpdi2(unsigned long long a [GR8:GR9],
# unsigned long long b [GR10:GR11])
#
# - returns 0, 1, or 2 as a <, =, > b respectively.
#
###############################################################################
.globl __ucmpdi2
.type __ucmpdi2,@function
__ucmpdi2:
or.p gr8,gr0,gr4
subcc gr8,gr10,gr0,icc0
setlos.p #0,gr8
bclr icc0,#2 ; a.msw < b.msw
setlos.p #2,gr8
bhilr icc0,#0 ; a.msw > b.msw
subcc.p gr9,gr11,gr0,icc1
setlos #0,gr8
setlos.p #2,gr9
setlos #1,gr7
cknc icc1,cc6
cor.p gr9,gr0,gr8, cc6,#1
cckls icc1,cc4, cc6,#1
andcr cc6,cc4,cc4
cor gr7,gr0,gr8, cc4,#1
bralr
.size __ucmpdi2, .-__ucmpdi2

View File

@ -127,48 +127,6 @@ atomic_sub_return:
.size atomic_sub_return, .-atomic_sub_return
###############################################################################
#
# uint8_t __xchg_8(uint8_t i, uint8_t *v)
#
###############################################################################
.globl __xchg_8
.type __xchg_8,@function
__xchg_8:
or.p gr8,gr8,gr10
0:
orcc gr0,gr0,gr0,icc3 /* set ICC3.Z */
ckeq icc3,cc7
ldub.p @(gr9,gr0),gr8 /* LD.P/ORCR must be atomic */
orcr cc7,cc7,cc3 /* set CC3 to true */
cstb.p gr10,@(gr9,gr0) ,cc3,#1
corcc gr29,gr29,gr0 ,cc3,#1 /* clear ICC3.Z if store happens */
beq icc3,#0,0b
bralr
.size __xchg_8, .-__xchg_8
###############################################################################
#
# uint16_t __xchg_16(uint16_t i, uint16_t *v)
#
###############################################################################
.globl __xchg_16
.type __xchg_16,@function
__xchg_16:
or.p gr8,gr8,gr10
0:
orcc gr0,gr0,gr0,icc3 /* set ICC3.Z */
ckeq icc3,cc7
lduh.p @(gr9,gr0),gr8 /* LD.P/ORCR must be atomic */
orcr cc7,cc7,cc3 /* set CC3 to true */
csth.p gr10,@(gr9,gr0) ,cc3,#1
corcc gr29,gr29,gr0 ,cc3,#1 /* clear ICC3.Z if store happens */
beq icc3,#0,0b
bralr
.size __xchg_16, .-__xchg_16
###############################################################################
#
# uint32_t __xchg_32(uint32_t i, uint32_t *v)
@ -190,56 +148,6 @@ __xchg_32:
.size __xchg_32, .-__xchg_32
###############################################################################
#
# uint8_t __cmpxchg_8(uint8_t *v, uint8_t test, uint8_t new)
#
###############################################################################
.globl __cmpxchg_8
.type __cmpxchg_8,@function
__cmpxchg_8:
or.p gr8,gr8,gr11
0:
orcc gr0,gr0,gr0,icc3
ckeq icc3,cc7
ldub.p @(gr11,gr0),gr8
orcr cc7,cc7,cc3
sub gr8,gr9,gr7
sllicc gr7,#24,gr0,icc0
bne icc0,#0,1f
cstb.p gr10,@(gr11,gr0) ,cc3,#1
corcc gr29,gr29,gr0 ,cc3,#1
beq icc3,#0,0b
1:
bralr
.size __cmpxchg_8, .-__cmpxchg_8
###############################################################################
#
# uint16_t __cmpxchg_16(uint16_t *v, uint16_t test, uint16_t new)
#
###############################################################################
.globl __cmpxchg_16
.type __cmpxchg_16,@function
__cmpxchg_16:
or.p gr8,gr8,gr11
0:
orcc gr0,gr0,gr0,icc3
ckeq icc3,cc7
lduh.p @(gr11,gr0),gr8
orcr cc7,cc7,cc3
sub gr8,gr9,gr7
sllicc gr7,#16,gr0,icc0
bne icc0,#0,1f
csth.p gr10,@(gr11,gr0) ,cc3,#1
corcc gr29,gr29,gr0 ,cc3,#1
beq icc3,#0,0b
1:
bralr
.size __cmpxchg_16, .-__cmpxchg_16
###############################################################################
#
# uint32_t __cmpxchg_32(uint32_t *v, uint32_t test, uint32_t new)

View File

@ -33,6 +33,7 @@
#include <net/checksum.h>
#include <asm/checksum.h>
#include <linux/module.h>
static inline unsigned short from32to16(unsigned long x)
{
@ -115,34 +116,52 @@ unsigned int csum_partial(const unsigned char * buff, int len, unsigned int sum)
return result;
}
EXPORT_SYMBOL(csum_partial);
/*
* this routine is used for miscellaneous IP-like checksums, mainly
* in icmp.c
*/
unsigned short ip_compute_csum(const unsigned char * buff, int len)
{
return ~do_csum(buff,len);
return ~do_csum(buff, len);
}
EXPORT_SYMBOL(ip_compute_csum);
/*
* copy from fs while checksumming, otherwise like csum_partial
*/
unsigned int
csum_partial_copy_from_user(const char *src, char *dst, int len, int sum, int *csum_err)
csum_partial_copy_from_user(const char __user *src, char *dst,
int len, int sum, int *csum_err)
{
if (csum_err) *csum_err = 0;
memcpy(dst, src, len);
int rem;
if (csum_err)
*csum_err = 0;
rem = copy_from_user(dst, src, len);
if (rem != 0) {
if (csum_err)
*csum_err = -EFAULT;
memset(dst + len - rem, 0, rem);
len = rem;
}
return csum_partial(dst, len, sum);
}
EXPORT_SYMBOL(csum_partial_copy_from_user);
/*
* copy from ds while checksumming, otherwise like csum_partial
*/
unsigned int
csum_partial_copy(const char *src, char *dst, int len, int sum)
{
memcpy(dst, src, len);
return csum_partial(dst, len, sum);
}
EXPORT_SYMBOL(csum_partial_copy);

View File

@ -3,7 +3,7 @@
#
ifeq "$(CONFIG_PCI)" "y"
obj-y := pci-frv.o pci-irq.o pci-vdk.o
obj-y := pci-frv.o pci-irq.o pci-vdk.o pci-iomap.o
ifeq "$(CONFIG_MMU)" "y"
obj-y += pci-dma.o

View File

@ -83,6 +83,8 @@ void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_hand
return NULL;
}
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
{
struct dma_alloc_record *rec;
@ -102,6 +104,8 @@ void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_
BUG();
}
EXPORT_SYMBOL(dma_free_coherent);
/*
* Map a single buffer of the indicated size for DMA in streaming mode.
* The 32-bit bus address to use is returned.
@ -120,6 +124,8 @@ dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
return virt_to_bus(ptr);
}
EXPORT_SYMBOL(dma_map_single);
/*
* Map a set of buffers described by scatterlist in streaming
* mode for DMA. This is the scather-gather version of the
@ -150,3 +156,5 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
return nents;
}
EXPORT_SYMBOL(dma_map_sg);

View File

@ -28,11 +28,15 @@ void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_hand
return ret;
}
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
{
consistent_free(vaddr);
}
EXPORT_SYMBOL(dma_free_coherent);
/*
* Map a single buffer of the indicated size for DMA in streaming mode.
* The 32-bit bus address to use is returned.
@ -51,6 +55,8 @@ dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
return virt_to_bus(ptr);
}
EXPORT_SYMBOL(dma_map_single);
/*
* Map a set of buffers described by scatterlist in streaming
* mode for DMA. This is the scather-gather version of the
@ -96,6 +102,8 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
return nents;
}
EXPORT_SYMBOL(dma_map_sg);
dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction)
{
@ -103,3 +111,5 @@ dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long off
flush_dcache_page(page);
return (dma_addr_t) page_to_phys(page) + offset;
}
EXPORT_SYMBOL(dma_map_page);

View File

@ -0,0 +1,29 @@
/* pci-iomap.c: description
*
* Copyright (C) 2006 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/pci.h>
#include <linux/module.h>
void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
{
unsigned long start = pci_resource_start(dev, bar);
unsigned long len = pci_resource_len(dev, bar);
unsigned long flags = pci_resource_flags(dev, bar);
if (!len || !start)
return NULL;
if ((flags & IORESOURCE_IO) || (flags & IORESOURCE_MEM))
return (void __iomem *) start;
return NULL;
}
EXPORT_SYMBOL(pci_iomap);

View File

@ -11,6 +11,7 @@
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/module.h>
#include <asm/pgalloc.h>
/*****************************************************************************/
@ -38,6 +39,8 @@ void flush_dcache_page(struct page *page)
} /* end flush_dcache_page() */
EXPORT_SYMBOL(flush_dcache_page);
/*****************************************************************************/
/*
* ICI takes a virtual address and the page may not currently have one
@ -64,3 +67,5 @@ void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
}
} /* end flush_icache_user_range() */
EXPORT_SYMBOL(flush_icache_user_range);

View File

@ -43,7 +43,7 @@ static inline unsigned long search_one_table(const struct exception_table_entry
*/
unsigned long search_exception_table(unsigned long pc)
{
unsigned long ret = 0;
const struct exception_table_entry *extab;
/* determine if the fault lay during a memcpy_user or a memset_user */
if (__frame->lr == (unsigned long) &__memset_user_error_lr &&
@ -55,9 +55,10 @@ unsigned long search_exception_table(unsigned long pc)
*/
return (unsigned long) &__memset_user_error_handler;
}
else if (__frame->lr == (unsigned long) &__memcpy_user_error_lr &&
(unsigned long) &memcpy <= pc && pc < (unsigned long) &__memcpy_end
) {
if (__frame->lr == (unsigned long) &__memcpy_user_error_lr &&
(unsigned long) &memcpy <= pc && pc < (unsigned long) &__memcpy_end
) {
/* the fault occurred in a protected memset
* - we search for the return address (in LR) instead of the program counter
* - it was probably during a copy_to/from_user()
@ -65,27 +66,10 @@ unsigned long search_exception_table(unsigned long pc)
return (unsigned long) &__memcpy_user_error_handler;
}
#ifndef CONFIG_MODULES
/* there is only the kernel to search. */
ret = search_one_table(__start___ex_table, __stop___ex_table - 1, pc);
return ret;
extab = search_exception_tables(pc);
if (extab)
return extab->fixup;
#else
/* the kernel is the last "module" -- no need to treat it special */
unsigned long flags;
struct module *mp;
return 0;
spin_lock_irqsave(&modlist_lock, flags);
for (mp = module_list; mp != NULL; mp = mp->next) {
if (mp->ex_table_start == NULL || !(mp->flags & (MOD_RUNNING | MOD_INITIALIZING)))
continue;
ret = search_one_table(mp->ex_table_start, mp->ex_table_end - 1, pc);
if (ret)
break;
}
spin_unlock_irqrestore(&modlist_lock, flags);
return ret;
#endif
} /* end search_exception_table() */

View File

@ -9,6 +9,7 @@
* 2 of the License, or (at your option) any later version.
*/
#include <linux/highmem.h>
#include <linux/module.h>
void *kmap(struct page *page)
{
@ -18,6 +19,8 @@ void *kmap(struct page *page)
return kmap_high(page);
}
EXPORT_SYMBOL(kmap);
void kunmap(struct page *page)
{
if (in_interrupt())
@ -27,7 +30,12 @@ void kunmap(struct page *page)
kunmap_high(page);
}
EXPORT_SYMBOL(kunmap);
struct page *kmap_atomic_to_page(void *ptr)
{
return virt_to_page(ptr);
}
EXPORT_SYMBOL(kmap_atomic_to_page);

View File

@ -21,10 +21,6 @@ config FPU
bool
default n
config UID16
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
default y

View File

@ -29,10 +29,6 @@ config MMU
config SBUS
bool
config UID16
bool
default y
config GENERIC_ISA_DMA
bool
default y
@ -630,10 +626,6 @@ config REGPARM
and passes the first three arguments of a function call in registers.
This will probably break binary only modules.
This feature is only enabled for gcc-3.0 and later - earlier compilers
generate incorrect output with certain kernel constructs when
-mregparm=3 is used.
config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode"
depends on PROC_FS
@ -703,7 +695,7 @@ depends on PM && !X86_VISWS
config APM
tristate "APM (Advanced Power Management) BIOS support"
depends on PM && PM_LEGACY
depends on PM
---help---
APM is a BIOS specification for saving power using several different
techniques. This is mostly useful for battery powered laptops with

View File

@ -37,10 +37,7 @@ CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
# CPU-specific tuning. Anything which can be shared with UML should go here.
include $(srctree)/arch/i386/Makefile.cpu
# -mregparm=3 works ok on gcc-3.0 and later
#
GCC_VERSION := $(call cc-version)
cflags-$(CONFIG_REGPARM) += $(shell if [ $(GCC_VERSION) -ge 0300 ] ; then echo "-mregparm=3"; fi ;)
cflags-$(CONFIG_REGPARM) += -mregparm=3
# Disable unit-at-a-time mode, it makes gcc use a lot more stack
# due to the lack of sharing of stacklots.

View File

@ -1,7 +1,7 @@
# CPU tuning section - shared with UML.
# Must change only cflags-y (or [yn]), not CFLAGS! That makes a difference for UML.
#-mtune exists since gcc 3.4, and some -mcpu flavors didn't exist in gcc 2.95.
#-mtune exists since gcc 3.4
HAS_MTUNE := $(call cc-option-yn, -mtune=i386)
ifeq ($(HAS_MTUNE),y)
tune = $(call cc-option,-mtune=$(1),)
@ -14,7 +14,7 @@ cflags-$(CONFIG_M386) += -march=i386
cflags-$(CONFIG_M486) += -march=i486
cflags-$(CONFIG_M586) += -march=i586
cflags-$(CONFIG_M586TSC) += -march=i586
cflags-$(CONFIG_M586MMX) += $(call cc-option,-march=pentium-mmx,-march=i586)
cflags-$(CONFIG_M586MMX) += -march=pentium-mmx
cflags-$(CONFIG_M686) += -march=i686
cflags-$(CONFIG_MPENTIUMII) += -march=i686 $(call tune,pentium2)
cflags-$(CONFIG_MPENTIUMIII) += -march=i686 $(call tune,pentium3)
@ -23,8 +23,8 @@ cflags-$(CONFIG_MPENTIUM4) += -march=i686 $(call tune,pentium4)
cflags-$(CONFIG_MK6) += -march=k6
# Please note, that patches that add -march=athlon-xp and friends are pointless.
# They make zero difference whatsosever to performance at this time.
cflags-$(CONFIG_MK7) += $(call cc-option,-march=athlon,-march=i686 $(align)-functions=4)
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,$(call cc-option,-march=athlon,-march=i686 $(align)-functions=4))
cflags-$(CONFIG_MK7) += -march=athlon
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
cflags-$(CONFIG_MCRUSOE) += -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
@ -37,5 +37,5 @@ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
cflags-$(CONFIG_X86_ELAN) += -march=i486
# Geode GX1 support
cflags-$(CONFIG_MGEODEGX1) += $(call cc-option,-march=pentium-mmx,-march=i486)
cflags-$(CONFIG_MGEODEGX1) += -march=pentium-mmx

View File

@ -11,7 +11,7 @@
#include <linux/linkage.h>
#include <linux/vmalloc.h>
#include <linux/tty.h>
#include <linux/screen_info.h>
#include <asm/io.h>
#include <asm/page.h>

View File

@ -4,10 +4,10 @@
extra-y := head.o init_task.o vmlinux.lds
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o vm86.o \
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \
ptrace.o time.o ioport.o ldt.o setup.o i8259.o sys_i386.o \
pci-dma.o i386_ksyms.o i387.o dmi_scan.o bootflag.o \
doublefault.o quirks.o i8237.o
quirks.o i8237.o
obj-y += cpu/
obj-y += timers/
@ -33,6 +33,8 @@ obj-y += sysenter.o vsyscall.o
obj-$(CONFIG_ACPI_SRAT) += srat.o
obj-$(CONFIG_HPET_TIMER) += time_hpet.o
obj-$(CONFIG_EFI) += efi.o efi_stub.o
obj-$(CONFIG_DOUBLEFAULT) += doublefault.o
obj-$(CONFIG_VM86) += vm86.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
EXTRA_AFLAGS := -traditional

View File

@ -2291,7 +2291,9 @@ static int __init apm_init(void)
apm_info.disabled = 1;
return -ENODEV;
}
#ifdef CONFIG_PM_LEGACY
pm_active = 1;
#endif
/*
* Set up a segment that references the real mode segment 0x40
@ -2382,7 +2384,9 @@ static void __exit apm_exit(void)
exit_kapmd = 1;
while (kapmd_running)
schedule();
#ifdef CONFIG_PM_LEGACY
pm_active = 0;
#endif
}
module_init(apm_init);

View File

@ -609,8 +609,10 @@ void __devinit cpu_init(void)
load_TR_desc();
load_LDT(&init_mm.context);
#ifdef CONFIG_DOUBLEFAULT
/* Set up doublefault TSS pointer in the GDT */
__set_tss_desc(cpu, GDT_ENTRY_DOUBLEFAULT_TSS, &doublefault_tss);
#endif
/* Clear %fs and %gs. */
asm volatile ("xorl %eax, %eax; movl %eax, %fs; movl %eax, %gs");

View File

@ -323,6 +323,7 @@ work_notifysig: # deal with pending signals and
ALIGN
work_notifysig_v86:
#ifdef CONFIG_VM86
pushl %ecx # save ti_flags for do_notify_resume
call save_v86_state # %eax contains pt_regs pointer
popl %ecx
@ -330,6 +331,7 @@ work_notifysig_v86:
xorl %edx, %edx
call do_notify_resume
jmp resume_userspace
#endif
# perform syscall exit tracing
ALIGN

View File

@ -42,5 +42,5 @@ EXPORT_SYMBOL(init_task);
* per-CPU TSS segments. Threads are completely 'soft' on Linux,
* no more per-task TSS's.
*/
DEFINE_PER_CPU(struct tss_struct, init_tss) ____cacheline_maxaligned_in_smp = INIT_TSS;
DEFINE_PER_CPU(struct tss_struct, init_tss) ____cacheline_internodealigned_in_smp = INIT_TSS;

View File

@ -19,7 +19,7 @@
#include <linux/cpu.h>
#include <linux/delay.h>
DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_maxaligned_in_smp;
DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_internodealigned_in_smp;
EXPORT_PER_CPU_SYMBOL(irq_stat);
#ifndef CONFIG_X86_LOCAL_APIC

View File

@ -48,6 +48,7 @@
#include <asm/processor.h>
#include <asm/i387.h>
#include <asm/desc.h>
#include <asm/vm86.h>
#ifdef CONFIG_MATH_EMULATION
#include <asm/math_emu.h>
#endif

View File

@ -293,3 +293,4 @@ ENTRY(sys_call_table)
.long sys_inotify_init
.long sys_inotify_add_watch
.long sys_inotify_rm_watch
.long sys_migrate_pages

View File

@ -259,8 +259,6 @@ __setup("hpet=", hpet_setup);
#include <linux/mc146818rtc.h>
#include <linux/rtc.h>
extern irqreturn_t rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs);
#define DEFAULT_RTC_INT_FREQ 64
#define RTC_NUM_INTS 1

View File

@ -37,10 +37,6 @@ $(error Sorry, you need a newer version of the assember, one that is built from
ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz)
endif
ifneq ($(shell if [ $(GCC_VERSION) -lt 0300 ] ; then echo "bad"; fi ;),)
$(error Sorry, your compiler is too old. GCC v2.96 is known to generate bad code.)
endif
ifeq ($(GCC_VERSION),0304)
cflags-$(CONFIG_ITANIUM) += -mtune=merced
cflags-$(CONFIG_MCKINLEY) += -mtune=mckinley

View File

@ -1761,21 +1761,15 @@ sys32_ptrace (int request, pid_t pid, unsigned int addr, unsigned int data)
lock_kernel();
if (request == PTRACE_TRACEME) {
ret = sys_ptrace(request, pid, addr, data);
ret = ptrace_traceme();
goto out;
}
ret = -ESRCH;
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
if (child)
get_task_struct(child);
read_unlock(&tasklist_lock);
if (!child)
child = ptrace_get_task_struct(pid);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
goto out;
ret = -EPERM;
if (pid == 1) /* no messing around with init! */
goto out_tsk;
}
if (request == PTRACE_ATTACH) {
ret = sys_ptrace(request, pid, addr, data);

View File

@ -247,6 +247,32 @@ typedef struct kern_memdesc {
static kern_memdesc_t *kern_memmap;
#define efi_md_size(md) (md->num_pages << EFI_PAGE_SHIFT)
static inline u64
kmd_end(kern_memdesc_t *kmd)
{
return (kmd->start + (kmd->num_pages << EFI_PAGE_SHIFT));
}
static inline u64
efi_md_end(efi_memory_desc_t *md)
{
return (md->phys_addr + efi_md_size(md));
}
static inline int
efi_wb(efi_memory_desc_t *md)
{
return (md->attribute & EFI_MEMORY_WB);
}
static inline int
efi_uc(efi_memory_desc_t *md)
{
return (md->attribute & EFI_MEMORY_UC);
}
static void
walk (efi_freemem_callback_t callback, void *arg, u64 attr)
{
@ -595,8 +621,8 @@ efi_get_iobase (void)
return 0;
}
u32
efi_mem_type (unsigned long phys_addr)
static efi_memory_desc_t *
efi_memory_descriptor (unsigned long phys_addr)
{
void *efi_map_start, *efi_map_end, *p;
efi_memory_desc_t *md;
@ -610,55 +636,117 @@ efi_mem_type (unsigned long phys_addr)
md = p;
if (phys_addr - md->phys_addr < (md->num_pages << EFI_PAGE_SHIFT))
return md->type;
return md;
}
return 0;
}
static int
efi_memmap_has_mmio (void)
{
void *efi_map_start, *efi_map_end, *p;
efi_memory_desc_t *md;
u64 efi_desc_size;
efi_map_start = __va(ia64_boot_param->efi_memmap);
efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
efi_desc_size = ia64_boot_param->efi_memdesc_size;
for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
md = p;
if (md->type == EFI_MEMORY_MAPPED_IO)
return 1;
}
return 0;
}
u32
efi_mem_type (unsigned long phys_addr)
{
efi_memory_desc_t *md = efi_memory_descriptor(phys_addr);
if (md)
return md->type;
return 0;
}
u64
efi_mem_attributes (unsigned long phys_addr)
{
void *efi_map_start, *efi_map_end, *p;
efi_memory_desc_t *md;
u64 efi_desc_size;
efi_memory_desc_t *md = efi_memory_descriptor(phys_addr);
efi_map_start = __va(ia64_boot_param->efi_memmap);
efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
efi_desc_size = ia64_boot_param->efi_memdesc_size;
for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
md = p;
if (phys_addr - md->phys_addr < (md->num_pages << EFI_PAGE_SHIFT))
return md->attribute;
}
if (md)
return md->attribute;
return 0;
}
EXPORT_SYMBOL(efi_mem_attributes);
/*
* Determines whether the memory at phys_addr supports the desired
* attribute (WB, UC, etc). If this returns 1, the caller can safely
* access *size bytes at phys_addr with the specified attribute.
*/
static int
efi_mem_attribute_range (unsigned long phys_addr, unsigned long *size, u64 attr)
{
efi_memory_desc_t *md = efi_memory_descriptor(phys_addr);
unsigned long md_end;
if (!md || (md->attribute & attr) != attr)
return 0;
do {
md_end = efi_md_end(md);
if (phys_addr + *size <= md_end)
return 1;
md = efi_memory_descriptor(md_end);
if (!md || (md->attribute & attr) != attr) {
*size = md_end - phys_addr;
return 1;
}
} while (md);
return 0;
}
/*
* For /dev/mem, we only allow read & write system calls to access
* write-back memory, because read & write don't allow the user to
* control access size.
*/
int
valid_phys_addr_range (unsigned long phys_addr, unsigned long *size)
{
void *efi_map_start, *efi_map_end, *p;
efi_memory_desc_t *md;
u64 efi_desc_size;
return efi_mem_attribute_range(phys_addr, size, EFI_MEMORY_WB);
}
efi_map_start = __va(ia64_boot_param->efi_memmap);
efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
efi_desc_size = ia64_boot_param->efi_memdesc_size;
/*
* We allow mmap of anything in the EFI memory map that supports
* either write-back or uncacheable access. For uncacheable regions,
* the supported access sizes are system-dependent, and the user is
* responsible for using the correct size.
*
* Note that this doesn't currently allow access to hot-added memory,
* because that doesn't appear in the boot-time EFI memory map.
*/
int
valid_mmap_phys_addr_range (unsigned long phys_addr, unsigned long *size)
{
if (efi_mem_attribute_range(phys_addr, size, EFI_MEMORY_WB))
return 1;
for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
md = p;
if (efi_mem_attribute_range(phys_addr, size, EFI_MEMORY_UC))
return 1;
if (phys_addr - md->phys_addr < (md->num_pages << EFI_PAGE_SHIFT)) {
if (!(md->attribute & EFI_MEMORY_WB))
return 0;
/*
* Some firmware doesn't report MMIO regions in the EFI memory map.
* The Intel BigSur (a.k.a. HP i2000) has this problem. In this
* case, we can't use the EFI memory map to validate mmap requests.
*/
if (!efi_memmap_has_mmio())
return 1;
if (*size > md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - phys_addr)
*size = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - phys_addr;
return 1;
}
}
return 0;
}
@ -707,32 +795,6 @@ efi_uart_console_only(void)
return 0;
}
#define efi_md_size(md) (md->num_pages << EFI_PAGE_SHIFT)
static inline u64
kmd_end(kern_memdesc_t *kmd)
{
return (kmd->start + (kmd->num_pages << EFI_PAGE_SHIFT));
}
static inline u64
efi_md_end(efi_memory_desc_t *md)
{
return (md->phys_addr + efi_md_size(md));
}
static inline int
efi_wb(efi_memory_desc_t *md)
{
return (md->attribute & EFI_MEMORY_WB);
}
static inline int
efi_uc(efi_memory_desc_t *md)
{
return (md->attribute & EFI_MEMORY_UC);
}
/*
* Look for the first granule aligned memory descriptor memory
* that is big enough to hold EFI memory map. Make sure this

View File

@ -1600,5 +1600,6 @@ sys_call_table:
data8 sys_inotify_init
data8 sys_inotify_add_watch
data8 sys_inotify_rm_watch
data8 sys_migrate_pages // 1280
.org sys_call_table + 8*NR_syscalls // guard against failures to increase NR_syscalls

View File

@ -1060,7 +1060,7 @@ SET_REG(b5);
* the clobber lists for spin_lock() in include/asm-ia64/spinlock.h.
*/
#if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
#if (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
GLOBAL_ENTRY(ia64_spinlock_contention_pre3_4)
.prologue

View File

@ -103,7 +103,7 @@ EXPORT_SYMBOL(unw_init_running);
#ifdef ASM_SUPPORTED
# ifdef CONFIG_SMP
# if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
# if (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
/*
* This is not a normal routine and we don't want a function descriptor for it, so we use
* a fake declaration here.

View File

@ -1422,14 +1422,7 @@ sys_ptrace (long request, pid_t pid, unsigned long addr, unsigned long data)
lock_kernel();
ret = -EPERM;
if (request == PTRACE_TRACEME) {
/* are we already being traced? */
if (current->ptrace & PT_PTRACED)
goto out;
ret = security_ptrace(current->parent, current);
if (ret)
goto out;
current->ptrace |= PT_PTRACED;
ret = 0;
ret = ptrace_traceme();
goto out;
}

View File

@ -32,7 +32,7 @@ typedef struct
u64 *prev_pfs_loc; /* state for WAR for old spinlock ool code */
} ia64_backtrace_t;
#if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
#if (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
/*
* Returns non-zero if the PC is in the spinlock contention out-of-line code
* with non-standard calling sequence (on older compilers).

View File

@ -50,6 +50,10 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
* Powermanagement idle function, if any..
*/
void (*pm_idle)(void) = NULL;
EXPORT_SYMBOL(pm_idle);
void (*pm_power_off)(void) = NULL;
EXPORT_SYMBOL(pm_power_off);
void disable_hlt(void)
{

View File

@ -762,28 +762,16 @@ asmlinkage long sys_ptrace(long request, long pid, long addr, long data)
int ret;
lock_kernel();
ret = -EPERM;
if (request == PTRACE_TRACEME) {
/* are we already being traced? */
if (current->ptrace & PT_PTRACED)
goto out;
/* set the ptrace bit in the process flags. */
current->ptrace |= PT_PTRACED;
ret = 0;
ret = ptrace_traceme();
goto out;
}
ret = -ESRCH;
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
if (child)
get_task_struct(child);
read_unlock(&tasklist_lock);
if (!child)
goto out;
ret = -EPERM;
if (pid == 1) /* you may not mess with init */
child = ptrace_get_task_struct(pid);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
goto out;
}
if (request == PTRACE_ATTACH) {
ret = ptrace_attach(child);

View File

@ -10,10 +10,6 @@ config MMU
bool
default y
config UID16
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
default y

View File

@ -17,10 +17,6 @@ config FPU
bool
default n
config UID16
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
default y

View File

@ -57,30 +57,16 @@ asmlinkage int sys32_ptrace(int request, int pid, int addr, int data)
(unsigned long) data);
#endif
lock_kernel();
ret = -EPERM;
if (request == PTRACE_TRACEME) {
/* are we already being traced? */
if (current->ptrace & PT_PTRACED)
goto out;
if ((ret = security_ptrace(current->parent, current)))
goto out;
/* set the ptrace bit in the process flags. */
current->ptrace |= PT_PTRACED;
ret = 0;
ret = ptrace_traceme();
goto out;
}
ret = -ESRCH;
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
if (child)
get_task_struct(child);
read_unlock(&tasklist_lock);
if (!child)
goto out;
ret = -EPERM;
if (pid == 1) /* you may not mess with init */
goto out_tsk;
child = ptrace_get_task_struct(pid);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
goto out;
}
if (request == PTRACE_ATTACH) {
ret = ptrace_attach(child);

View File

@ -11,6 +11,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/signal.h> /* for SIGBUS */
#include <linux/sched.h> /* schow_regs(), force_sig() */
#include <asm/module.h>
#include <asm/sn/addrs.h>

View File

@ -19,9 +19,6 @@ config MMU
config STACK_GROWSUP
def_bool y
config UID16
bool
config RWSEM_GENERIC_SPINLOCK
def_bool y

View File

@ -26,9 +26,6 @@ config MMU
bool
default y
config UID16
bool
config GENERIC_HARDIRQS
bool
default y

View File

@ -45,33 +45,19 @@ long compat_sys_ptrace(int request, int pid, unsigned long addr,
unsigned long data)
{
struct task_struct *child;
int ret = -EPERM;
int ret;
lock_kernel();
if (request == PTRACE_TRACEME) {
/* are we already being traced? */
if (current->ptrace & PT_PTRACED)
goto out;
ret = security_ptrace(current->parent, current);
if (ret)
goto out;
/* set the ptrace bit in the process flags. */
current->ptrace |= PT_PTRACED;
ret = 0;
ret = ptrace_traceme();
goto out;
}
ret = -ESRCH;
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
if (child)
get_task_struct(child);
read_unlock(&tasklist_lock);
if (!child)
goto out;
ret = -EPERM;
if (pid == 1) /* you may not mess with init */
goto out_tsk;
child = ptrace_get_task_struct(pid);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
goto out;
}
if (request == PTRACE_ATTACH) {
ret = ptrace_attach(child);

View File

@ -8,9 +8,6 @@ config MMU
bool
default y
config UID16
bool
config GENERIC_HARDIRQS
bool
default y

View File

@ -712,35 +712,18 @@ sys_ptrace(long request, long pid, long addr, long data)
int ret;
lock_kernel();
if (request == PTRACE_TRACEME) {
/* are we already being traced? */
ret = -EPERM;
if (current->ptrace & PT_PTRACED)
goto out;
ret = security_ptrace(current->parent, current);
if (ret)
goto out;
/* set the ptrace bit in the process flags. */
current->ptrace |= PT_PTRACED;
ret = ptrace_traceme();
goto out;
}
child = ptrace_get_task_struct(pid);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
goto out;
}
ret = -EPERM;
if (pid == 1) /* you may not mess with init */
goto out;
ret = -ESRCH;
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
if (child)
get_task_struct(child);
read_unlock(&tasklist_lock);
if (!child)
goto out;
ret = do_ptrace(child, request, addr, data);
put_task_struct(child);
out:
unlock_kernel();

View File

@ -14,10 +14,6 @@ config SUPERH
gaming console. The SuperH port has a home page at
<http://www.linux-sh.org/>.
config UID16
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
default y

View File

@ -417,7 +417,7 @@ static __init unsigned int get_cpu_hz(void)
/*
** Regardless the toolchain, force the compiler to use the
** arbitrary register r3 as a clock tick counter.
** NOTE: r3 must be in accordance with rtc_interrupt()
** NOTE: r3 must be in accordance with sh64_rtc_interrupt()
*/
register unsigned long long __rtc_irq_flag __asm__ ("r3");
@ -482,7 +482,8 @@ static __init unsigned int get_cpu_hz(void)
#endif
}
static irqreturn_t rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
static irqreturn_t sh64_rtc_interrupt(int irq, void *dev_id,
struct pt_regs *regs)
{
ctrl_outb(0, RCR1); /* Disable Carry Interrupts */
regs->regs[3] = 1; /* Using r3 */
@ -491,7 +492,7 @@ static irqreturn_t rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
}
static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL};
static struct irqaction irq1 = { rtc_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "rtc", NULL, NULL};
static struct irqaction irq1 = { sh64_rtc_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "rtc", NULL, NULL};
void __init time_init(void)
{

View File

@ -9,10 +9,6 @@ config MMU
bool
default y
config UID16
bool
default y
config HIGHMEM
bool
default y

View File

@ -286,40 +286,17 @@ asmlinkage void do_ptrace(struct pt_regs *regs)
s, (int) request, (int) pid, addr, data, addr2);
}
#endif
if (request == PTRACE_TRACEME) {
int my_ret;
/* are we already being traced? */
if (current->ptrace & PT_PTRACED) {
pt_error_return(regs, EPERM);
goto out;
}
my_ret = security_ptrace(current->parent, current);
if (my_ret) {
pt_error_return(regs, -my_ret);
goto out;
}
/* set the ptrace bit in the process flags. */
current->ptrace |= PT_PTRACED;
ret = ptrace_traceme();
pt_succ_return(regs, 0);
goto out;
}
#ifndef ALLOW_INIT_TRACING
if (pid == 1) {
/* Can't dork with init. */
pt_error_return(regs, EPERM);
goto out;
}
#endif
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
if (child)
get_task_struct(child);
read_unlock(&tasklist_lock);
if (!child) {
pt_error_return(regs, ESRCH);
child = ptrace_get_task_struct(pid);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
pt_error_return(regs, -ret);
goto out;
}

View File

@ -309,11 +309,6 @@ config COMPAT
depends on SPARC32_COMPAT
default y
config UID16
bool
depends on SPARC32_COMPAT
default y
config BINFMT_ELF32
tristate "Kernel support for 32-bit ELF binaries"
depends on SPARC32_COMPAT

View File

@ -198,39 +198,15 @@ asmlinkage void do_ptrace(struct pt_regs *regs)
}
#endif
if (request == PTRACE_TRACEME) {
int ret;
/* are we already being traced? */
if (current->ptrace & PT_PTRACED) {
pt_error_return(regs, EPERM);
goto out;
}
ret = security_ptrace(current->parent, current);
if (ret) {
pt_error_return(regs, -ret);
goto out;
}
/* set the ptrace bit in the process flags. */
current->ptrace |= PT_PTRACED;
ret = ptrace_traceme();
pt_succ_return(regs, 0);
goto out;
}
#ifndef ALLOW_INIT_TRACING
if (pid == 1) {
/* Can't dork with init. */
pt_error_return(regs, EPERM);
goto out;
}
#endif
read_lock(&tasklist_lock);
child = find_task_by_pid(pid);
if (child)
get_task_struct(child);
read_unlock(&tasklist_lock);
if (!child) {
pt_error_return(regs, ESRCH);
child = ptrace_get_task_struct(pid);
if (IS_ERR(child)) {
ret = PTR_ERR(child);
pt_error_return(regs, -ret);
goto out;
}

Some files were not shown because too many files have changed in this diff.