dect / linux-2.6

Merge commit 'v2.6.37-rc3' into sched/core

Merge reason: Pick up latest fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar 2010-11-26 15:03:27 +01:00
commit 22a867d817
412 changed files with 2470 additions and 1325 deletions

View File

@ -16,7 +16,7 @@
</orgname>
<address>
<email>hjk@linutronix.de</email>
<email>hjk@hansjkoch.de</email>
</address>
</affiliation>
</author>
@ -114,7 +114,7 @@ GPL version 2.
<para>If you know of any translations for this document, or you are
interested in translating it, please email me
<email>hjk@linutronix.de</email>.
<email>hjk@hansjkoch.de</email>.
</para>
</sect1>
@ -171,7 +171,7 @@ interested in translating it, please email me
<title>Feedback</title>
<para>Find something wrong with this document? (Or perhaps something
right?) I would love to hear from you. Please email me at
<email>hjk@linutronix.de</email>.</para>
<email>hjk@hansjkoch.de</email>.</para>
</sect1>
</chapter>

View File

@ -154,7 +154,7 @@ The stages that a patch goes through are, generally:
inclusion, it should be accepted by a relevant subsystem maintainer -
though this acceptance is not a guarantee that the patch will make it
all the way to the mainline. The patch will show up in the maintainer's
subsystem tree and into the staging trees (described below). When the
subsystem tree and into the -next trees (described below). When the
process works, this step leads to more extensive review of the patch and
the discovery of any problems resulting from the integration of this
patch with work being done by others.
@ -236,7 +236,7 @@ finding the right maintainer. Sending patches directly to Linus is not
normally the right way to go.
2.4: STAGING TREES
2.4: NEXT TREES
The chain of subsystem trees guides the flow of patches into the kernel,
but it also raises an interesting question: what if somebody wants to look
@ -250,7 +250,7 @@ changes land in the mainline kernel. One could pull changes from all of
the interesting subsystem trees, but that would be a big and error-prone
job.
The answer comes in the form of staging trees, where subsystem trees are
The answer comes in the form of -next trees, where subsystem trees are
collected for testing and review. The older of these trees, maintained by
Andrew Morton, is called "-mm" (for memory management, which is how it got
started). The -mm tree integrates patches from a long list of subsystem
@ -275,7 +275,7 @@ directory at:
Use of the MMOTM tree is likely to be a frustrating experience, though;
there is a definite chance that it will not even compile.
The other staging tree, started more recently, is linux-next, maintained by
The other -next tree, started more recently, is linux-next, maintained by
Stephen Rothwell. The linux-next tree is, by design, a snapshot of what
the mainline is expected to look like after the next merge window closes.
Linux-next trees are announced on the linux-kernel and linux-next mailing
@ -303,12 +303,25 @@ volatility of linux-next tends to make it a difficult development target.
See http://lwn.net/Articles/289013/ for more information on this topic, and
stay tuned; much is still in flux where linux-next is involved.
Besides the mmotm and linux-next trees, the kernel source tree now contains
the drivers/staging/ directory and many sub-directories for drivers or
filesystems that are on their way to being added to the kernel tree
proper, but they remain in drivers/staging/ while they still need more
work.
2.4.1: STAGING TREES
The kernel source tree now contains the drivers/staging/ directory, where
many sub-directories for drivers or filesystems that are on their way to
being added to the kernel tree live. They remain in drivers/staging while
they still need more work; once complete, they can be moved into the
kernel proper. This is a way to keep track of drivers that aren't
up to Linux kernel coding or quality standards, but people may want to use
them and track development.
Greg Kroah-Hartman currently (as of 2.6.36) maintains the staging tree.
Drivers that still need work are sent to him, with each driver having
its own subdirectory in drivers/staging/. Along with the driver source
files, a TODO file should be present in the directory as well. The TODO
file lists the pending work that the driver needs for acceptance into
the kernel proper, as well as a list of people that should be Cc'd for any
patches to the driver. Staging drivers that don't currently build should
have their config entries depend upon CONFIG_BROKEN. Once they can
be successfully built without outside patches, CONFIG_BROKEN can be removed.
2.5: TOOLS

View File

@ -89,7 +89,7 @@ static ssize_t childless_storeme_write(struct childless *childless,
char *p = (char *) page;
tmp = simple_strtoul(p, &p, 10);
if (!p || (*p && (*p != '\n')))
if ((*p != '\0') && (*p != '\n'))
return -EINVAL;
if (tmp > INT_MAX)

View File

@ -617,6 +617,16 @@ and have the following read/write attributes:
is configured as an output, this value may be written;
any nonzero value is treated as high.
If the pin can be configured as interrupt-generating
and if it has been configured to generate interrupts (see the
description of "edge"), you can poll(2) on that file and
poll(2) will return whenever the interrupt was triggered. If
you use poll(2), set the events POLLPRI and POLLERR. If you
use select(2), set the file descriptor in exceptfds. After
poll(2) returns, either lseek(2) to the beginning of the sysfs
file and read the new value or close the file and re-open it
to read the value.
"edge" ... reads as either "none", "rising", "falling", or
"both". Write these strings to select the signal edge(s)
that will make poll(2) on the "value" file return.
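
As a rough userspace illustration of the poll(2) flow described above (not part of this patch; the pin number and path are placeholders, and "edge" is assumed to have already been set), the read/poll/lseek/re-read loop might look like:

	/* Minimal sketch of the poll(2) flow described above. */
	#include <fcntl.h>
	#include <poll.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[8];
		struct pollfd pfd;
		int fd = open("/sys/class/gpio/gpio23/value", O_RDONLY);

		if (fd < 0)
			return 1;

		read(fd, buf, sizeof(buf));	/* consume the initial value */

		for (;;) {
			pfd.fd = fd;
			pfd.events = POLLPRI | POLLERR;
			if (poll(&pfd, 1, -1) < 0)	/* blocks until an edge fires */
				break;
			lseek(fd, 0, SEEK_SET);		/* rewind the sysfs file ... */
			read(fd, buf, sizeof(buf));	/* ... and read the new value */
			printf("value: %c\n", buf[0]);
		}
		close(fd);
		return 0;
	}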

View File

@ -11,7 +11,7 @@ Authors:
Mark M. Hoffman <mhoffman@lightlink.com>
Ported to 2.6 by Eric J. Bowersox <ericb@aspsys.com>
Adapted to 2.6.20 by Carsten Emde <ce@osadl.org>
Modified for mainline integration by Hans J. Koch <hjk@linutronix.de>
Modified for mainline integration by Hans J. Koch <hjk@hansjkoch.de>
Module Parameters
-----------------

View File

@ -8,7 +8,7 @@ Supported chips:
Datasheet: http://pdfserv.maxim-ic.com/en/ds/MAX6650-MAX6651.pdf
Authors:
Hans J. Koch <hjk@linutronix.de>
Hans J. Koch <hjk@hansjkoch.de>
John Morris <john.morris@spirentcom.com>
Claus Gindhart <claus.gindhart@kontron.com>

View File

@ -37,6 +37,9 @@ Typical usage of the OPP library is as follows:
SoC framework -> modifies on required cases certain OPPs -> OPP layer
-> queries to search/retrieve information ->
Architectures that provide a SoC framework for OPP should select ARCH_HAS_OPP
to make the OPP layer available.
OPP layer expects each domain to be represented by a unique device pointer. SoC
framework registers a set of initial OPPs per device with the OPP layer. This
list is expected to be optimally small, typically around five OPPs per device.
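
To make the registration step concrete, here is a minimal sketch of how a SoC framework might seed one device's OPP list (not part of this patch; the frequency/voltage table is invented, and the opp_add() helper with a (dev, freq_in_Hz, u_volt) signature is an assumption based on the OPP library this document describes):

	#include <linux/device.h>
	#include <linux/kernel.h>
	#include <linux/opp.h>

	/* Invented table: frequencies in Hz, voltages in microvolts. */
	static const struct {
		unsigned long freq;
		unsigned long u_volt;
	} soc_mpu_opps[] = {
		{  300000000, 1012500 },
		{  600000000, 1200000 },
		{  800000000, 1325000 },
		{ 1000000000, 1375000 },
	};

	/* Called once by the SoC framework for the device representing the domain. */
	static int soc_register_mpu_opps(struct device *dev)
	{
		int i, ret;

		for (i = 0; i < ARRAY_SIZE(soc_mpu_opps); i++) {
			ret = opp_add(dev, soc_mpu_opps[i].freq,
				      soc_mpu_opps[i].u_volt);
			if (ret)
				return ret;
		}
		return 0;
	}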

View File

@ -1829,6 +1829,13 @@ W: http://www.chelsio.com
S: Supported
F: drivers/net/cxgb4vf/
STMMAC ETHERNET DRIVER
M: Giuseppe Cavallaro <peppe.cavallaro@st.com>
L: netdev@vger.kernel.org
W: http://www.stlinux.com
S: Supported
F: drivers/net/stmmac/
CYBERPRO FB DRIVER
M: Russell King <linux@arm.linux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -2008,6 +2015,7 @@ F: drivers/hwmon/dme1737.c
DOCBOOK FOR DOCUMENTATION
M: Randy Dunlap <rdunlap@xenotime.net>
S: Maintained
F: scripts/kernel-doc
DOCKING STATION DRIVER
M: Shaohua Li <shaohua.li@intel.com>
@ -2018,6 +2026,7 @@ F: drivers/acpi/dock.c
DOCUMENTATION
M: Randy Dunlap <rdunlap@xenotime.net>
L: linux-doc@vger.kernel.org
T: quilt oss.oracle.com/~rdunlap/kernel-doc-patches/current/
S: Maintained
F: Documentation/

View File

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 37
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Flesh-Eating Bats with Fangs
# *DOCUMENTATION*

View File

@ -7,7 +7,6 @@
*/
#include <linux/module.h>
#include <linux/smp_lock.h>
#include <linux/unistd.h>
#include <linux/user.h>
#include <linux/uaccess.h>

View File

@ -16,7 +16,6 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>

View File

@ -28,7 +28,6 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>

View File

@ -202,7 +202,7 @@ simscsi_readwrite10 (struct scsi_cmnd *sc, int mode)
}
static int
simscsi_queuecommand (struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
simscsi_queuecommand_lck (struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
{
unsigned int target_id = sc->device->id;
char fname[MAX_ROOT_LEN+16];
@ -326,6 +326,8 @@ simscsi_queuecommand (struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *))
return 0;
}
static DEF_SCSI_QCMD(simscsi_queuecommand)
static int
simscsi_host_reset (struct scsi_cmnd *sc)
{

View File

@ -18,7 +18,6 @@
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>

View File

@ -19,7 +19,6 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>

View File

@ -14,7 +14,6 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>

View File

@ -28,7 +28,6 @@
#include <linux/namei.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/syscalls.h>
#include <linux/utsname.h>
#include <linux/vfs.h>

View File

@ -20,7 +20,6 @@
#include <linux/times.h>
#include <linux/time.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/sem.h>
#include <linux/msg.h>
#include <linux/shm.h>

View File

@ -4,6 +4,10 @@ config PPC32
bool
default y if !PPC64
config 32BIT
bool
default y if PPC32
config 64BIT
bool
default y if PPC64

View File

@ -33,9 +33,10 @@ __div64_32:
cntlzw r0,r5 # we are shifting the dividend right
li r10,-1 # to make it < 2^32, and shifting
srw r10,r10,r0 # the divisor right the same amount,
add r9,r4,r10 # rounding up (so the estimate cannot
addc r9,r4,r10 # rounding up (so the estimate cannot
andc r11,r6,r10 # ever be too large, only too small)
andc r9,r9,r10
addze r9,r9
or r11,r5,r11
rotlw r9,r9,r0
rotlw r11,r11,r0

View File

@ -337,7 +337,7 @@ char *dbg_get_reg(int regno, void *mem, struct pt_regs *regs)
/* FP registers 32 -> 63 */
#if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_SPE)
if (current)
memcpy(mem, current->thread.evr[regno-32],
memcpy(mem, &current->thread.evr[regno-32],
dbg_reg_def[regno].size);
#else
/* fp registers not used by kernel, leave zero */
@ -362,7 +362,7 @@ int dbg_set_reg(int regno, void *mem, struct pt_regs *regs)
if (regno >= 32 && regno < 64) {
/* FP registers 32 -> 63 */
#if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_SPE)
memcpy(current->thread.evr[regno-32], mem,
memcpy(&current->thread.evr[regno-32], mem,
dbg_reg_def[regno].size);
#else
/* fp registers not used by kernel, leave zero */

View File

@ -497,9 +497,8 @@ static void __init emergency_stack_init(void)
}
/*
* Called into from start_kernel, after lock_kernel has been called.
* Initializes bootmem, which is unsed to manage page allocation until
* mem_init is called.
* Called into from start_kernel this initializes bootmem, which is used
* to manage page allocation until mem_init is called.
*/
void __init setup_arch(char **cmdline_p)
{

View File

@ -23,7 +23,6 @@
#include <linux/resource.h>
#include <linux/times.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/sem.h>
#include <linux/msg.h>
#include <linux/shm.h>

View File

@ -1123,7 +1123,7 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
else
#endif /* CONFIG_PPC_HAS_HASH_64K */
rc = __hash_page_4K(ea, access, vsid, ptep, trap, local, ssize,
subpage_protection(pgdir, ea));
subpage_protection(mm, ea));
/* Dump some info in case of hash insertion failure, they should
* never happen so it is really useful to know if/when they do

View File

@ -138,8 +138,11 @@
cmpldi cr0,r15,0 /* Check for user region */
std r14,EX_TLB_ESR(r12) /* write crazy -1 to frame */
beq normal_tlb_miss
li r11,_PAGE_PRESENT|_PAGE_BAP_SX /* Base perm */
oris r11,r11,_PAGE_ACCESSED@h
/* XXX replace the RMW cycles with immediate loads + writes */
1: mfspr r10,SPRN_MAS1
mfspr r10,SPRN_MAS1
cmpldi cr0,r15,8 /* Check for vmalloc region */
rlwinm r10,r10,0,16,1 /* Clear TID */
mtspr SPRN_MAS1,r10

View File

@ -585,6 +585,6 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
ppc64_rma_size = min_t(u64, first_memblock_size, 0x40000000);
/* Finally limit subsequent allocations */
memblock_set_current_limit(ppc64_memblock_base + ppc64_rma_size);
memblock_set_current_limit(first_memblock_base + ppc64_rma_size);
}
#endif /* CONFIG_PPC64 */

View File

@ -47,6 +47,12 @@ config LPARCFG
config PPC_PSERIES_DEBUG
depends on PPC_PSERIES && PPC_EARLY_DEBUG
bool "Enable extra debug logging in platforms/pseries"
help
Say Y here if you want the pseries core to produce a bunch of
debug messages to the system log. Select this if you are having a
problem with the pseries core and want to see more of what is
going on. This does not enable debugging in lpar.c, which must
be manually done due to its verbosity.
default y
config PPC_SMLPAR

View File

@ -21,8 +21,6 @@
* Please address comments and feedback to Linas Vepstas <linas@austin.ibm.com>
*/
#undef DEBUG
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/list.h>

View File

@ -25,8 +25,6 @@
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#undef DEBUG
#include <linux/pci.h>
#include <asm/pci-bridge.h>
#include <asm/ppc-pci.h>

View File

@ -6,6 +6,18 @@ config TRACE_IRQFLAGS_SUPPORT
source "lib/Kconfig.debug"
config STRICT_DEVMEM
def_bool y
prompt "Filter access to /dev/mem"
---help---
This option restricts access to /dev/mem. If this option is
disabled, you allow userspace access to all memory, including
kernel and userspace memory. Accidental memory access is likely
to be disastrous.
Memory access is required for experts who want to debug the kernel.
If you are unsure, say Y.
config DEBUG_STRICT_USER_COPY_CHECKS
bool "Strict user copy size checks"
---help---

View File

@ -130,6 +130,11 @@ struct page;
void arch_free_page(struct page *page, int order);
void arch_alloc_page(struct page *page, int order);
static inline int devmem_is_allowed(unsigned long pfn)
{
return 0;
}
#define HAVE_ARCH_FREE_PAGE
#define HAVE_ARCH_ALLOC_PAGE

View File

@ -25,7 +25,6 @@
#include <linux/resource.h>
#include <linux/times.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/sem.h>
#include <linux/msg.h>
#include <linux/shm.h>

View File

@ -30,6 +30,7 @@
#include <asm/sections.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/hardirq.h>
DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@ -212,7 +213,7 @@ static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
/* Set the PER control regs, turns on single step for this address */
__ctl_load(kprobe_per_regs, 9, 11);
regs->psw.mask |= PSW_MASK_PER;
regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK);
regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT);
}
static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
@ -239,7 +240,7 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
__get_cpu_var(current_kprobe) = p;
/* Save the interrupt and per flags */
kcb->kprobe_saved_imask = regs->psw.mask &
(PSW_MASK_PER | PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK);
(PSW_MASK_PER | PSW_MASK_IO | PSW_MASK_EXT);
/* Save the control regs that govern PER */
__ctl_store(kcb->kprobe_saved_ctl, 9, 11);
}
@ -316,8 +317,6 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
return 1;
ss_probe:
if (regs->psw.mask & (PSW_MASK_PER | PSW_MASK_IO))
local_irq_disable();
prepare_singlestep(p, regs);
kcb->kprobe_status = KPROBE_HIT_SS;
return 1;
@ -350,6 +349,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
struct hlist_node *node, *tmp;
unsigned long flags, orig_ret_address = 0;
unsigned long trampoline_address = (unsigned long)&kretprobe_trampoline;
kprobe_opcode_t *correct_ret_addr = NULL;
INIT_HLIST_HEAD(&empty_rp);
kretprobe_hash_lock(current, &head, &flags);
@ -372,10 +372,32 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
/* another task is sharing our hash bucket */
continue;
if (ri->rp && ri->rp->handler)
ri->rp->handler(ri, regs);
orig_ret_address = (unsigned long)ri->ret_addr;
if (orig_ret_address != trampoline_address)
/*
* This is the real return address. Any other
* instances associated with this task are for
* other calls deeper on the call stack
*/
break;
}
kretprobe_assert(ri, orig_ret_address, trampoline_address);
correct_ret_addr = ri->ret_addr;
hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
if (ri->task != current)
/* another task is sharing our hash bucket */
continue;
orig_ret_address = (unsigned long)ri->ret_addr;
if (ri->rp && ri->rp->handler) {
ri->ret_addr = correct_ret_addr;
ri->rp->handler(ri, regs);
}
recycle_rp_inst(ri, &empty_rp);
if (orig_ret_address != trampoline_address) {
@ -387,7 +409,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
break;
}
}
kretprobe_assert(ri, orig_ret_address, trampoline_address);
regs->psw.addr = orig_ret_address | PSW_ADDR_AMODE;
reset_current_kprobe();
@ -465,8 +487,6 @@ static int __kprobes post_kprobe_handler(struct pt_regs *regs)
goto out;
}
reset_current_kprobe();
if (regs->psw.mask & (PSW_MASK_PER | PSW_MASK_IO))
local_irq_enable();
out:
preempt_enable_no_resched();
@ -482,7 +502,7 @@ out:
return 1;
}
int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
static int __kprobes kprobe_trap_handler(struct pt_regs *regs, int trapnr)
{
struct kprobe *cur = kprobe_running();
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
@ -508,8 +528,6 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
restore_previous_kprobe(kcb);
else {
reset_current_kprobe();
if (regs->psw.mask & (PSW_MASK_PER | PSW_MASK_IO))
local_irq_enable();
}
preempt_enable_no_resched();
break;
@ -553,6 +571,18 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
return 0;
}
int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
{
int ret;
if (regs->psw.mask & (PSW_MASK_IO | PSW_MASK_EXT))
local_irq_disable();
ret = kprobe_trap_handler(regs, trapnr);
if (regs->psw.mask & (PSW_MASK_IO | PSW_MASK_EXT))
local_irq_restore(regs->psw.mask & ~PSW_MASK_PER);
return ret;
}
/*
* Wrapper routine to for handling exceptions.
*/
@ -560,8 +590,12 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data)
{
struct die_args *args = (struct die_args *)data;
struct pt_regs *regs = args->regs;
int ret = NOTIFY_DONE;
if (regs->psw.mask & (PSW_MASK_IO | PSW_MASK_EXT))
local_irq_disable();
switch (val) {
case DIE_BPT:
if (kprobe_handler(args->regs))
@ -572,16 +606,17 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
ret = NOTIFY_STOP;
break;
case DIE_TRAP:
/* kprobe_running() needs smp_processor_id() */
preempt_disable();
if (kprobe_running() &&
kprobe_fault_handler(args->regs, args->trapnr))
if (!preemptible() && kprobe_running() &&
kprobe_trap_handler(args->regs, args->trapnr))
ret = NOTIFY_STOP;
preempt_enable();
break;
default:
break;
}
if (regs->psw.mask & (PSW_MASK_IO | PSW_MASK_EXT))
local_irq_restore(regs->psw.mask & ~PSW_MASK_PER);
return ret;
}
@ -595,6 +630,7 @@ int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
/* setup return addr to the jprobe handler routine */
regs->psw.addr = (unsigned long)(jp->entry) | PSW_ADDR_AMODE;
regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT);
/* r14 is the function return address */
kcb->jprobe_saved_r14 = (unsigned long)regs->gprs[14];

View File

@ -20,18 +20,17 @@
static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
unsigned long end, int write, struct page **pages, int *nr)
{
unsigned long mask, result;
unsigned long mask;
pte_t *ptep, pte;
struct page *page;
result = write ? 0 : _PAGE_RO;
mask = result | _PAGE_INVALID | _PAGE_SPECIAL;
mask = (write ? _PAGE_RO : 0) | _PAGE_INVALID | _PAGE_SPECIAL;
ptep = ((pte_t *) pmd_deref(pmd)) + pte_index(addr);
do {
pte = *ptep;
barrier();
if ((pte_val(pte) & mask) != result)
if ((pte_val(pte) & mask) != 0)
return 0;
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);

View File

@ -12,7 +12,6 @@
#include <linux/sched.h>
#include <linux/threads.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/interrupt.h>
#include <linux/kernel_stat.h>
#include <linux/init.h>

View File

@ -17,7 +17,6 @@
#include <linux/resource.h>
#include <linux/times.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/sem.h>
#include <linux/msg.h>
#include <linux/shm.h>

View File

@ -19,7 +19,6 @@
#include <linux/mman.h>
#include <linux/utsname.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/ipc.h>
#include <asm/uaccess.h>

View File

@ -16,7 +16,6 @@
#include <asm/system.h>
#include <asm/uaccess.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/perf_event.h>
enum direction {

View File

@ -9,7 +9,6 @@
#include <linux/string.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <asm/uaccess.h>

View File

@ -21,7 +21,6 @@
#include <linux/kdev_t.h>
#include <linux/fs.h>
#include <linux/fcntl.h>
#include <linux/smp_lock.h>
#include <linux/uaccess.h>
#include <linux/signal.h>
#include <asm/syscalls.h>

View File

@ -15,7 +15,6 @@
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/kernel.h>
#include <linux/signal.h>
#include <linux/errno.h>

View File

@ -16,7 +16,6 @@
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/kernel.h>
#include <linux/signal.h>
#include <linux/errno.h>

View File

@ -18,7 +18,6 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/kernel_stat.h>
#include <linux/smp_lock.h>
#include <linux/bootmem.h>
#include <linux/notifier.h>
#include <linux/cpu.h>

View File

@ -20,7 +20,6 @@
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/syscalls.h>
#include <linux/mman.h>
#include <linux/file.h>

View File

@ -24,7 +24,6 @@
#include <linux/mman.h>
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/tty.h>

View File

@ -21,7 +21,6 @@
#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/pagemap.h>
#include <linux/smp_lock.h>
#include <linux/slab.h>
#include <linux/err.h>
#include <linux/sysctl.h>

View File

@ -5,7 +5,6 @@
#include "linux/stddef.h"
#include "linux/fs.h"
#include "linux/smp_lock.h"
#include "linux/ptrace.h"
#include "linux/sched.h"
#include "linux/slab.h"

View File

@ -28,7 +28,6 @@
#include <linux/syscalls.h>
#include <linux/times.h>
#include <linux/utsname.h>
#include <linux/smp_lock.h>
#include <linux/mm.h>
#include <linux/uio.h>
#include <linux/poll.h>

View File

@ -33,7 +33,6 @@
#include <linux/init.h>
#include <linux/poll.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/device.h>

View File

@ -315,14 +315,18 @@ static void kgdb_remove_all_hw_break(void)
if (!breakinfo[i].enabled)
continue;
bp = *per_cpu_ptr(breakinfo[i].pev, cpu);
if (bp->attr.disabled == 1)
if (!bp->attr.disabled) {
arch_uninstall_hw_breakpoint(bp);
bp->attr.disabled = 1;
continue;
}
if (dbg_is_early)
early_dr7 &= ~encode_dr7(i, breakinfo[i].len,
breakinfo[i].type);
else
arch_uninstall_hw_breakpoint(bp);
bp->attr.disabled = 1;
else if (hw_break_release_slot(i))
printk(KERN_ERR "KGDB: hw bpt remove failed %lx\n",
breakinfo[i].addr);
breakinfo[i].enabled = 0;
}
}

View File

@ -30,7 +30,6 @@
#include <linux/init.h>
#include <linux/poll.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/device.h>

View File

@ -3395,6 +3395,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
load_host_msrs(vcpu);
kvm_load_ldt(ldt_selector);
loadsegment(fs, fs_selector);
#ifdef CONFIG_X86_64
load_gs_index(gs_selector);
@ -3402,7 +3403,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
#else
loadsegment(gs, gs_selector);
#endif
kvm_load_ldt(ldt_selector);
reload_tss(vcpu);

View File

@ -821,10 +821,9 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
#endif
#ifdef CONFIG_X86_64
if (is_long_mode(&vmx->vcpu)) {
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
if (is_long_mode(&vmx->vcpu))
wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
}
#endif
for (i = 0; i < vmx->save_nmsrs; ++i)
kvm_set_shared_msr(vmx->guest_msrs[i].index,
@ -839,23 +838,23 @@ static void __vmx_load_host_state(struct vcpu_vmx *vmx)
++vmx->vcpu.stat.host_state_reload;
vmx->host_state.loaded = 0;
if (vmx->host_state.fs_reload_needed)
loadsegment(fs, vmx->host_state.fs_sel);
#ifdef CONFIG_X86_64
if (is_long_mode(&vmx->vcpu))
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
#endif
if (vmx->host_state.gs_ldt_reload_needed) {
kvm_load_ldt(vmx->host_state.ldt_sel);
#ifdef CONFIG_X86_64
load_gs_index(vmx->host_state.gs_sel);
wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
#else
loadsegment(gs, vmx->host_state.gs_sel);
#endif
}
if (vmx->host_state.fs_reload_needed)
loadsegment(fs, vmx->host_state.fs_sel);
reload_tss();
#ifdef CONFIG_X86_64
if (is_long_mode(&vmx->vcpu)) {
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
}
wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
#endif
if (current_thread_info()->status & TS_USEDFPU)
clts();

View File

@ -8,7 +8,6 @@
#include <linux/hdreg.h>
#include <linux/slab.h>
#include <linux/syscalls.h>
#include <linux/smp_lock.h>
#include <linux/types.h>
#include <linux/uaccess.h>

View File

@ -5,7 +5,6 @@
#include <linux/hdreg.h>
#include <linux/backing-dev.h>
#include <linux/buffer_head.h>
#include <linux/smp_lock.h>
#include <linux/blktrace_api.h>
#include <asm/uaccess.h>

View File

@ -3166,8 +3166,8 @@ static inline int __ata_scsi_queuecmd(struct scsi_cmnd *scmd,
/**
* ata_scsi_queuecmd - Issue SCSI cdb to libata-managed device
* @shost: SCSI host of command to be sent
* @cmd: SCSI command to be sent
* @done: Completion function, called when command is complete
*
* In some cases, this function translates SCSI commands into
* ATA taskfiles, and queues the taskfiles to be sent to
@ -3177,37 +3177,36 @@ static inline int __ata_scsi_queuecmd(struct scsi_cmnd *scmd,
* ATA and ATAPI devices appearing as SCSI devices.
*
* LOCKING:
* Releases scsi-layer-held lock, and obtains host lock.
* ATA host lock
*
* RETURNS:
* Return value from __ata_scsi_queuecmd() if @cmd can be queued,
* 0 otherwise.
*/
int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
int ata_scsi_queuecmd(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
{
struct ata_port *ap;
struct ata_device *dev;
struct scsi_device *scsidev = cmd->device;
struct Scsi_Host *shost = scsidev->host;
int rc = 0;
unsigned long irq_flags;
ap = ata_shost_to_port(shost);
spin_unlock(shost->host_lock);
spin_lock(ap->lock);
spin_lock_irqsave(ap->lock, irq_flags);
ata_scsi_dump_cdb(ap, cmd);
dev = ata_scsi_find_dev(ap, scsidev);
if (likely(dev))
rc = __ata_scsi_queuecmd(cmd, done, dev);
rc = __ata_scsi_queuecmd(cmd, cmd->scsi_done, dev);
else {
cmd->result = (DID_BAD_TARGET << 16);
done(cmd);
cmd->scsi_done(cmd);
}
spin_unlock(ap->lock);
spin_lock(shost->host_lock);
spin_unlock_irqrestore(ap->lock, irq_flags);
return rc;
}

View File

@ -538,7 +538,7 @@ static int vt8251_prepare_host(struct pci_dev *pdev, struct ata_host **r_host)
return 0;
}
static void svia_configure(struct pci_dev *pdev)
static void svia_configure(struct pci_dev *pdev, int board_id)
{
u8 tmp8;
@ -577,7 +577,7 @@ static void svia_configure(struct pci_dev *pdev)
}
/*
* vt6421 has problems talking to some drives. The following
* vt6420/1 has problems talking to some drives. The following
* is the fix from Joseph Chan <JosephChan@via.com.tw>.
*
* When host issues HOLD, device may send up to 20DW of data
@ -596,8 +596,9 @@ static void svia_configure(struct pci_dev *pdev)
*
* https://bugzilla.kernel.org/show_bug.cgi?id=15173
* http://article.gmane.org/gmane.linux.ide/46352
* http://thread.gmane.org/gmane.linux.kernel/1062139
*/
if (pdev->device == 0x3249) {
if (board_id == vt6420 || board_id == vt6421) {
pci_read_config_byte(pdev, 0x52, &tmp8);
tmp8 |= 1 << 2;
pci_write_config_byte(pdev, 0x52, tmp8);
@ -652,7 +653,7 @@ static int svia_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (rc)
return rc;
svia_configure(pdev);
svia_configure(pdev, board_id);
pci_set_master(pdev);
return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,

View File

@ -475,20 +475,33 @@ End:
*/
void dpm_resume_noirq(pm_message_t state)
{
struct device *dev;
struct list_head list;
ktime_t starttime = ktime_get();
INIT_LIST_HEAD(&list);
mutex_lock(&dpm_list_mtx);
transition_started = false;
list_for_each_entry(dev, &dpm_list, power.entry)
while (!list_empty(&dpm_list)) {
struct device *dev = to_device(dpm_list.next);
get_device(dev);
if (dev->power.status > DPM_OFF) {
int error;
dev->power.status = DPM_OFF;
mutex_unlock(&dpm_list_mtx);
error = device_resume_noirq(dev, state);
mutex_lock(&dpm_list_mtx);
if (error)
pm_dev_err(dev, state, " early", error);
}
if (!list_empty(&dev->power.entry))
list_move_tail(&dev->power.entry, &list);
put_device(dev);
}
list_splice(&list, &dpm_list);
mutex_unlock(&dpm_list_mtx);
dpm_show_time(starttime, state, "early");
resume_device_irqs();
@ -789,20 +802,33 @@ End:
*/
int dpm_suspend_noirq(pm_message_t state)
{
struct device *dev;
struct list_head list;
ktime_t starttime = ktime_get();
int error = 0;
INIT_LIST_HEAD(&list);
suspend_device_irqs();
mutex_lock(&dpm_list_mtx);
list_for_each_entry_reverse(dev, &dpm_list, power.entry) {
while (!list_empty(&dpm_list)) {
struct device *dev = to_device(dpm_list.prev);
get_device(dev);
mutex_unlock(&dpm_list_mtx);
error = device_suspend_noirq(dev, state);
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " late", error);
put_device(dev);
break;
}
dev->power.status = DPM_OFF_IRQ;
if (!list_empty(&dev->power.entry))
list_move(&dev->power.entry, &list);
put_device(dev);
}
list_splice_tail(&list, &dpm_list);
mutex_unlock(&dpm_list_mtx);
if (error)
dpm_resume_noirq(resume_event(state));

View File

@ -62,8 +62,8 @@ static int cciss_scsi_proc_info(
int length, /* length of data in buffer */
int func); /* 0 == read, 1 == write */
static int cciss_scsi_queue_command (struct scsi_cmnd *cmd,
void (* done)(struct scsi_cmnd *));
static int cciss_scsi_queue_command (struct Scsi_Host *h,
struct scsi_cmnd *cmd);
static int cciss_eh_device_reset_handler(struct scsi_cmnd *);
static int cciss_eh_abort_handler(struct scsi_cmnd *);
@ -1406,7 +1406,7 @@ static void cciss_scatter_gather(ctlr_info_t *h, CommandList_struct *c,
static int
cciss_scsi_queue_command (struct scsi_cmnd *cmd, void (* done)(struct scsi_cmnd *))
cciss_scsi_queue_command_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
{
ctlr_info_t *h;
int rc;
@ -1504,6 +1504,8 @@ cciss_scsi_queue_command (struct scsi_cmnd *cmd, void (* done)(struct scsi_cmnd
return 0;
}
static DEF_SCSI_QCMD(cciss_scsi_queue_command)
static void cciss_unregister_scsi(ctlr_info_t *h)
{
struct cciss_scsi_adapter_data_t *sa;

View File

@ -36,7 +36,6 @@
#include <linux/memcontrol.h>
#include <linux/mm_inline.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/pkt_sched.h>
#define __KERNEL_SYSCALLS__
#include <linux/unistd.h>

View File

@ -26,7 +26,6 @@
#include <linux/module.h>
#include <linux/drbd.h>
#include <linux/sched.h>
#include <linux/smp_lock.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/memcontrol.h>

View File

@ -39,7 +39,6 @@
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/smp_lock.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include "agp.h"

View File

@ -81,7 +81,6 @@ static char *serial_version = "4.30";
#include <linux/mm.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/init.h>
#include <linux/bitops.h>
#include <linux/platform_device.h>

View File

@ -6,7 +6,6 @@
#include <linux/module.h>
#include <linux/smp_lock.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/tty.h>

View File

@ -14,7 +14,6 @@
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/smp_lock.h>
#include <linux/types.h>
#include <linux/miscdevice.h>
#include <linux/major.h>

View File

@ -37,7 +37,6 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/smp_lock.h>
#include <linux/init.h>
#include <linux/miscdevice.h>
#include <linux/delay.h>

View File

@ -21,7 +21,6 @@
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/interrupt.h>
#include <linux/tty.h>
#include <linux/tty_flip.h>

View File

@ -52,7 +52,6 @@
#include <linux/interrupt.h>
#include <linux/serial.h>
#include <linux/serialP.h>
#include <linux/smp_lock.h>
#include <linux/string.h>
#include <linux/fcntl.h>
#include <linux/ptrace.h>

View File

@ -87,7 +87,6 @@
#include <linux/tty_flip.h>
#include <linux/mm.h>
#include <linux/serial.h>
#include <linux/smp_lock.h>
#include <linux/fcntl.h>
#include <linux/major.h>
#include <linux/delay.h>

View File

@ -40,7 +40,6 @@
#include <linux/stallion.h>
#include <linux/ioport.h>
#include <linux/init.h>
#include <linux/smp_lock.h>
#include <linux/device.h>
#include <linux/delay.h>
#include <linux/ctype.h>

View File

@ -216,7 +216,6 @@
#include <linux/eisa.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/init.h>
#include <linux/miscdevice.h>
#include <linux/bitops.h>

View File

@ -23,7 +23,6 @@
#include <linux/interrupt.h>
#include <linux/time.h>
#include <linux/math64.h>
#include <linux/smp_lock.h>
#include <asm/genapic.h>
#include <asm/uv/uv_hub.h>

View File

@ -1468,7 +1468,7 @@ static int sbp2_map_scatterlist(struct sbp2_command_orb *orb,
/* SCSI stack integration */
static int sbp2_scsi_queuecommand(struct scsi_cmnd *cmd, scsi_done_fn_t done)
static int sbp2_scsi_queuecommand_lck(struct scsi_cmnd *cmd, scsi_done_fn_t done)
{
struct sbp2_logical_unit *lu = cmd->device->hostdata;
struct fw_device *device = target_device(lu->tgt);
@ -1534,6 +1534,8 @@ static int sbp2_scsi_queuecommand(struct scsi_cmnd *cmd, scsi_done_fn_t done)
return retval;
}
static DEF_SCSI_QCMD(sbp2_scsi_queuecommand)
static int sbp2_scsi_slave_alloc(struct scsi_device *sdev)
{
struct sbp2_logical_unit *lu = sdev->hostdata;

View File

@ -37,7 +37,6 @@
#include "drmP.h"
#include <linux/poll.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
/* from BKL pushdown: note that nothing else serializes idr_find() */
DEFINE_MUTEX(drm_global_mutex);

View File

@ -150,7 +150,8 @@ static const struct intel_device_info intel_ironlake_d_info = {
static const struct intel_device_info intel_ironlake_m_info = {
.gen = 5, .is_mobile = 1,
.need_gfx_hws = 1, .has_fbc = 1, .has_rc6 = 1, .has_hotplug = 1,
.need_gfx_hws = 1, .has_rc6 = 1, .has_hotplug = 1,
.has_fbc = 0, /* disabled due to buggy hardware */
.has_bsd_ring = 1,
};

View File

@ -1045,6 +1045,8 @@ void i915_gem_clflush_object(struct drm_gem_object *obj);
int i915_gem_object_set_domain(struct drm_gem_object *obj,
uint32_t read_domains,
uint32_t write_domain);
int i915_gem_object_flush_gpu(struct drm_i915_gem_object *obj,
bool interruptible);
int i915_gem_init_ringbuffer(struct drm_device *dev);
void i915_gem_cleanup_ringbuffer(struct drm_device *dev);
int i915_gem_do_init(struct drm_device *dev, unsigned long start,

View File

@ -547,6 +547,19 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
struct drm_i915_gem_object *obj_priv;
int ret = 0;
if (args->size == 0)
return 0;
if (!access_ok(VERIFY_WRITE,
(char __user *)(uintptr_t)args->data_ptr,
args->size))
return -EFAULT;
ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr,
args->size);
if (ret)
return -EFAULT;
ret = i915_mutex_lock_interruptible(dev);
if (ret)
return ret;
@ -564,23 +577,6 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
goto out;
}
if (args->size == 0)
goto out;
if (!access_ok(VERIFY_WRITE,
(char __user *)(uintptr_t)args->data_ptr,
args->size)) {
ret = -EFAULT;
goto out;
}
ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr,
args->size);
if (ret) {
ret = -EFAULT;
goto out;
}
ret = i915_gem_object_get_pages_or_evict(obj);
if (ret)
goto out;
@ -981,7 +977,20 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
struct drm_i915_gem_pwrite *args = data;
struct drm_gem_object *obj;
struct drm_i915_gem_object *obj_priv;
int ret = 0;
int ret;
if (args->size == 0)
return 0;
if (!access_ok(VERIFY_READ,
(char __user *)(uintptr_t)args->data_ptr,
args->size))
return -EFAULT;
ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr,
args->size);
if (ret)
return -EFAULT;
ret = i915_mutex_lock_interruptible(dev);
if (ret)
@ -994,30 +1003,12 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
}
obj_priv = to_intel_bo(obj);
/* Bounds check destination. */
if (args->offset > obj->size || args->size > obj->size - args->offset) {
ret = -EINVAL;
goto out;
}
if (args->size == 0)
goto out;
if (!access_ok(VERIFY_READ,
(char __user *)(uintptr_t)args->data_ptr,
args->size)) {
ret = -EFAULT;
goto out;
}
ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr,
args->size);
if (ret) {
ret = -EFAULT;
goto out;
}
/* We can only do the GTT pwrite on untiled buffers, as otherwise
* it would end up going through the fenced access, and we'll get
* different detiling behavior between reading and writing.
@ -2907,6 +2898,20 @@ i915_gem_object_set_to_display_plane(struct drm_gem_object *obj,
return 0;
}
int
i915_gem_object_flush_gpu(struct drm_i915_gem_object *obj,
bool interruptible)
{
if (!obj->active)
return 0;
if (obj->base.write_domain & I915_GEM_GPU_DOMAINS)
i915_gem_flush_ring(obj->base.dev, NULL, obj->ring,
0, obj->base.write_domain);
return i915_gem_object_wait_rendering(&obj->base, interruptible);
}
/**
* Moves a single object to the CPU read, and possibly write domain.
*

View File

@ -34,6 +34,25 @@
#include "i915_drm.h"
#include "i915_drv.h"
/* Here's the desired hotplug mode */
#define ADPA_HOTPLUG_BITS (ADPA_CRT_HOTPLUG_PERIOD_128 | \
ADPA_CRT_HOTPLUG_WARMUP_10MS | \
ADPA_CRT_HOTPLUG_SAMPLE_4S | \
ADPA_CRT_HOTPLUG_VOLTAGE_50 | \
ADPA_CRT_HOTPLUG_VOLREF_325MV | \
ADPA_CRT_HOTPLUG_ENABLE)
struct intel_crt {
struct intel_encoder base;
bool force_hotplug_required;
};
static struct intel_crt *intel_attached_crt(struct drm_connector *connector)
{
return container_of(intel_attached_encoder(connector),
struct intel_crt, base);
}
static void intel_crt_dpms(struct drm_encoder *encoder, int mode)
{
struct drm_device *dev = encoder->dev;
@ -129,7 +148,7 @@ static void intel_crt_mode_set(struct drm_encoder *encoder,
dpll_md & ~DPLL_MD_UDI_MULTIPLIER_MASK);
}
adpa = 0;
adpa = ADPA_HOTPLUG_BITS;
if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)
adpa |= ADPA_HSYNC_ACTIVE_HIGH;
if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)
@ -157,53 +176,44 @@ static void intel_crt_mode_set(struct drm_encoder *encoder,
static bool intel_ironlake_crt_detect_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct intel_crt *crt = intel_attached_crt(connector);
struct drm_i915_private *dev_priv = dev->dev_private;
u32 adpa, temp;
u32 adpa;
bool ret;
bool turn_off_dac = false;
temp = adpa = I915_READ(PCH_ADPA);
/* The first time through, trigger an explicit detection cycle */
if (crt->force_hotplug_required) {
bool turn_off_dac = HAS_PCH_SPLIT(dev);
u32 save_adpa;
if (HAS_PCH_SPLIT(dev))
turn_off_dac = true;
crt->force_hotplug_required = 0;
adpa &= ~ADPA_CRT_HOTPLUG_MASK;
if (turn_off_dac)
adpa &= ~ADPA_DAC_ENABLE;
save_adpa = adpa = I915_READ(PCH_ADPA);
DRM_DEBUG_KMS("trigger hotplug detect cycle: adpa=0x%x\n", adpa);
/* disable HPD first */
I915_WRITE(PCH_ADPA, adpa);
(void)I915_READ(PCH_ADPA);
adpa |= ADPA_CRT_HOTPLUG_FORCE_TRIGGER;
if (turn_off_dac)
adpa &= ~ADPA_DAC_ENABLE;
adpa |= (ADPA_CRT_HOTPLUG_PERIOD_128 |
ADPA_CRT_HOTPLUG_WARMUP_10MS |
ADPA_CRT_HOTPLUG_SAMPLE_4S |
ADPA_CRT_HOTPLUG_VOLTAGE_50 | /* default */
ADPA_CRT_HOTPLUG_VOLREF_325MV |
ADPA_CRT_HOTPLUG_ENABLE |
ADPA_CRT_HOTPLUG_FORCE_TRIGGER);
I915_WRITE(PCH_ADPA, adpa);
DRM_DEBUG_KMS("pch crt adpa 0x%x", adpa);
I915_WRITE(PCH_ADPA, adpa);
if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0,
1000))
DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0,
1000))
DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
if (turn_off_dac) {
/* Make sure hotplug is enabled */
I915_WRITE(PCH_ADPA, temp | ADPA_CRT_HOTPLUG_ENABLE);
(void)I915_READ(PCH_ADPA);
if (turn_off_dac) {
I915_WRITE(PCH_ADPA, save_adpa);
POSTING_READ(PCH_ADPA);
}
}
/* Check the status to see if both blue and green are on now */
adpa = I915_READ(PCH_ADPA);
adpa &= ADPA_CRT_HOTPLUG_MONITOR_MASK;
if ((adpa == ADPA_CRT_HOTPLUG_MONITOR_COLOR) ||
(adpa == ADPA_CRT_HOTPLUG_MONITOR_MONO))
if ((adpa & ADPA_CRT_HOTPLUG_MONITOR_MASK) != 0)
ret = true;
else
ret = false;
DRM_DEBUG_KMS("ironlake hotplug adpa=0x%x, result %d\n", adpa, ret);
return ret;
}
@ -277,13 +287,12 @@ static bool intel_crt_ddc_probe(struct drm_i915_private *dev_priv, int ddc_bus)
return i2c_transfer(&dev_priv->gmbus[ddc_bus].adapter, msgs, 1) == 1;
}
static bool intel_crt_detect_ddc(struct drm_encoder *encoder)
static bool intel_crt_detect_ddc(struct intel_crt *crt)
{
struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
struct drm_i915_private *dev_priv = encoder->dev->dev_private;
struct drm_i915_private *dev_priv = crt->base.base.dev->dev_private;
/* CRT should always be at 0, but check anyway */
if (intel_encoder->type != INTEL_OUTPUT_ANALOG)
if (crt->base.type != INTEL_OUTPUT_ANALOG)
return false;
if (intel_crt_ddc_probe(dev_priv, dev_priv->crt_ddc_pin)) {
@ -291,7 +300,7 @@ static bool intel_crt_detect_ddc(struct drm_encoder *encoder)
return true;
}
if (intel_ddc_probe(intel_encoder, dev_priv->crt_ddc_pin)) {
if (intel_ddc_probe(&crt->base, dev_priv->crt_ddc_pin)) {
DRM_DEBUG_KMS("CRT detected via DDC:0x50 [EDID]\n");
return true;
}
@ -300,9 +309,9 @@ static bool intel_crt_detect_ddc(struct drm_encoder *encoder)
}
static enum drm_connector_status
intel_crt_load_detect(struct drm_crtc *crtc, struct intel_encoder *intel_encoder)
intel_crt_load_detect(struct drm_crtc *crtc, struct intel_crt *crt)
{
struct drm_encoder *encoder = &intel_encoder->base;
struct drm_encoder *encoder = &crt->base.base;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
@ -434,7 +443,7 @@ static enum drm_connector_status
intel_crt_detect(struct drm_connector *connector, bool force)
{
struct drm_device *dev = connector->dev;
struct intel_encoder *encoder = intel_attached_encoder(connector);
struct intel_crt *crt = intel_attached_crt(connector);
struct drm_crtc *crtc;
int dpms_mode;
enum drm_connector_status status;
@ -443,28 +452,31 @@ intel_crt_detect(struct drm_connector *connector, bool force)
if (intel_crt_detect_hotplug(connector)) {
DRM_DEBUG_KMS("CRT detected via hotplug\n");
return connector_status_connected;
} else
} else {
DRM_DEBUG_KMS("CRT not detected via hotplug\n");
return connector_status_disconnected;
}
}
if (intel_crt_detect_ddc(&encoder->base))
if (intel_crt_detect_ddc(crt))
return connector_status_connected;
if (!force)
return connector->status;
/* for pre-945g platforms use load detect */
if (encoder->base.crtc && encoder->base.crtc->enabled) {
status = intel_crt_load_detect(encoder->base.crtc, encoder);
crtc = crt->base.base.crtc;
if (crtc && crtc->enabled) {
status = intel_crt_load_detect(crtc, crt);
} else {
crtc = intel_get_load_detect_pipe(encoder, connector,
crtc = intel_get_load_detect_pipe(&crt->base, connector,
NULL, &dpms_mode);
if (crtc) {
if (intel_crt_detect_ddc(&encoder->base))
if (intel_crt_detect_ddc(crt))
status = connector_status_connected;
else
status = intel_crt_load_detect(crtc, encoder);
intel_release_load_detect_pipe(encoder,
status = intel_crt_load_detect(crtc, crt);
intel_release_load_detect_pipe(&crt->base,
connector, dpms_mode);
} else
status = connector_status_unknown;
@ -536,17 +548,17 @@ static const struct drm_encoder_funcs intel_crt_enc_funcs = {
void intel_crt_init(struct drm_device *dev)
{
struct drm_connector *connector;
struct intel_encoder *intel_encoder;
struct intel_crt *crt;
struct intel_connector *intel_connector;
struct drm_i915_private *dev_priv = dev->dev_private;
intel_encoder = kzalloc(sizeof(struct intel_encoder), GFP_KERNEL);
if (!intel_encoder)
crt = kzalloc(sizeof(struct intel_crt), GFP_KERNEL);
if (!crt)
return;
intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
if (!intel_connector) {
kfree(intel_encoder);
kfree(crt);
return;
}
@ -554,20 +566,20 @@ void intel_crt_init(struct drm_device *dev)
drm_connector_init(dev, &intel_connector->base,
&intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
drm_encoder_init(dev, &intel_encoder->base, &intel_crt_enc_funcs,
drm_encoder_init(dev, &crt->base.base, &intel_crt_enc_funcs,
DRM_MODE_ENCODER_DAC);
intel_connector_attach_encoder(intel_connector, intel_encoder);
intel_connector_attach_encoder(intel_connector, &crt->base);
intel_encoder->type = INTEL_OUTPUT_ANALOG;
intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
(1 << INTEL_ANALOG_CLONE_BIT) |
(1 << INTEL_SDVO_LVDS_CLONE_BIT);
intel_encoder->crtc_mask = (1 << 0) | (1 << 1);
crt->base.type = INTEL_OUTPUT_ANALOG;
crt->base.clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT |
1 << INTEL_ANALOG_CLONE_BIT |
1 << INTEL_SDVO_LVDS_CLONE_BIT);
crt->base.crtc_mask = (1 << 0) | (1 << 1);
connector->interlace_allowed = 1;
connector->doublescan_allowed = 0;
drm_encoder_helper_add(&intel_encoder->base, &intel_crt_helper_funcs);
drm_encoder_helper_add(&crt->base.base, &intel_crt_helper_funcs);
drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);
drm_sysfs_connector_add(connector);
@ -577,5 +589,22 @@ void intel_crt_init(struct drm_device *dev)
else
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
/*
* Configure the automatic hotplug detection stuff
*/
crt->force_hotplug_required = 0;
if (HAS_PCH_SPLIT(dev)) {
u32 adpa;
adpa = I915_READ(PCH_ADPA);
adpa &= ~ADPA_CRT_HOTPLUG_MASK;
adpa |= ADPA_HOTPLUG_BITS;
I915_WRITE(PCH_ADPA, adpa);
POSTING_READ(PCH_ADPA);
DRM_DEBUG_KMS("pch crt adpa set to 0x%x\n", adpa);
crt->force_hotplug_required = 1;
}
dev_priv->hotplug_supported_mask |= CRT_HOTPLUG_INT_STATUS;
}

View File

@ -1611,6 +1611,18 @@ intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
wait_event(dev_priv->pending_flip_queue,
atomic_read(&obj_priv->pending_flip) == 0);
/* Big Hammer, we also need to ensure that any pending
* MI_WAIT_FOR_EVENT inside a user batch buffer on the
* current scanout is retired before unpinning the old
* framebuffer.
*/
ret = i915_gem_object_flush_gpu(obj_priv, false);
if (ret) {
i915_gem_object_unpin(to_intel_framebuffer(crtc->fb)->obj);
mutex_unlock(&dev->struct_mutex);
return ret;
}
}
ret = intel_pipe_set_base_atomic(crtc, crtc->fb, x, y,

View File

@ -160,7 +160,7 @@ intel_gpio_create(struct drm_i915_private *dev_priv, u32 pin)
};
struct intel_gpio *gpio;
if (pin < 1 || pin > 7)
if (pin >= ARRAY_SIZE(map_pin_to_reg) || !map_pin_to_reg[pin])
return NULL;
gpio = kzalloc(sizeof(struct intel_gpio), GFP_KERNEL);
@ -172,7 +172,8 @@ intel_gpio_create(struct drm_i915_private *dev_priv, u32 pin)
gpio->reg += PCH_GPIOA - GPIOA;
gpio->dev_priv = dev_priv;
snprintf(gpio->adapter.name, I2C_NAME_SIZE, "GPIO%c", "?BACDEF?"[pin]);
snprintf(gpio->adapter.name, sizeof(gpio->adapter.name),
"i915 GPIO%c", "?BACDE?F"[pin]);
gpio->adapter.owner = THIS_MODULE;
gpio->adapter.algo_data = &gpio->algo;
gpio->adapter.dev.parent = &dev_priv->dev->pdev->dev;
@ -349,7 +350,7 @@ int intel_setup_gmbus(struct drm_device *dev)
"panel",
"dpc",
"dpb",
"reserved"
"reserved",
"dpd",
};
struct drm_i915_private *dev_priv = dev->dev_private;
@ -366,8 +367,8 @@ int intel_setup_gmbus(struct drm_device *dev)
bus->adapter.owner = THIS_MODULE;
bus->adapter.class = I2C_CLASS_DDC;
snprintf(bus->adapter.name,
I2C_NAME_SIZE,
"gmbus %s",
sizeof(bus->adapter.name),
"i915 gmbus %s",
names[i]);
bus->adapter.dev.parent = &dev->pdev->dev;

View File

@ -31,6 +31,7 @@
*/
#include <linux/backlight.h>
#include <linux/acpi.h>
#include "drmP.h"
#include "nouveau_drv.h"
@ -136,6 +137,14 @@ int nouveau_backlight_init(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
#ifdef CONFIG_ACPI
if (acpi_video_backlight_support()) {
NV_INFO(dev, "ACPI backlight interface available, "
"not registering our own\n");
return 0;
}
#endif
switch (dev_priv->card_type) {
case NV_40:
return nouveau_nv40_backlight_init(dev);

View File

@ -6829,7 +6829,7 @@ nouveau_bios_posted(struct drm_device *dev)
struct drm_nouveau_private *dev_priv = dev->dev_private;
unsigned htotal;
if (dev_priv->chipset >= NV_50) {
if (dev_priv->card_type >= NV_50) {
if (NVReadVgaCrtc(dev, 0, 0x00) == 0 &&
NVReadVgaCrtc(dev, 0, 0x1a) == 0)
return false;

View File

@ -143,8 +143,10 @@ nouveau_bo_new(struct drm_device *dev, struct nouveau_channel *chan,
nvbo->no_vm = no_vm;
nvbo->tile_mode = tile_mode;
nvbo->tile_flags = tile_flags;
nvbo->bo.bdev = &dev_priv->ttm.bdev;
nouveau_bo_fixup_align(dev, tile_mode, tile_flags, &align, &size);
nouveau_bo_fixup_align(dev, tile_mode, nouveau_bo_tile_layout(nvbo),
&align, &size);
align >>= PAGE_SHIFT;
nouveau_bo_placement_set(nvbo, flags, 0);
@ -176,6 +178,31 @@ set_placement_list(uint32_t *pl, unsigned *n, uint32_t type, uint32_t flags)
pl[(*n)++] = TTM_PL_FLAG_SYSTEM | flags;
}
static void
set_placement_range(struct nouveau_bo *nvbo, uint32_t type)
{
struct drm_nouveau_private *dev_priv = nouveau_bdev(nvbo->bo.bdev);
if (dev_priv->card_type == NV_10 &&
nvbo->tile_mode && (type & TTM_PL_FLAG_VRAM)) {
/*
* Make sure that the color and depth buffers are handled
* by independent memory controller units. Up to a 9x
* speed up when alpha-blending and depth-test are enabled
* at the same time.
*/
int vram_pages = dev_priv->vram_size >> PAGE_SHIFT;
if (nvbo->tile_flags & NOUVEAU_GEM_TILE_ZETA) {
nvbo->placement.fpfn = vram_pages / 2;
nvbo->placement.lpfn = ~0;
} else {
nvbo->placement.fpfn = 0;
nvbo->placement.lpfn = vram_pages / 2;
}
}
}
void
nouveau_bo_placement_set(struct nouveau_bo *nvbo, uint32_t type, uint32_t busy)
{
@ -190,6 +217,8 @@ nouveau_bo_placement_set(struct nouveau_bo *nvbo, uint32_t type, uint32_t busy)
pl->busy_placement = nvbo->busy_placements;
set_placement_list(nvbo->busy_placements, &pl->num_busy_placement,
type | busy, flags);
set_placement_range(nvbo, type);
}
int
@ -525,7 +554,8 @@ nv50_bo_move_m2mf(struct nouveau_channel *chan, struct ttm_buffer_object *bo,
stride = 16 * 4;
height = amount / stride;
if (new_mem->mem_type == TTM_PL_VRAM && nvbo->tile_flags) {
if (new_mem->mem_type == TTM_PL_VRAM &&
nouveau_bo_tile_layout(nvbo)) {
ret = RING_SPACE(chan, 8);
if (ret)
return ret;
@ -546,7 +576,8 @@ nv50_bo_move_m2mf(struct nouveau_channel *chan, struct ttm_buffer_object *bo,
BEGIN_RING(chan, NvSubM2MF, 0x0200, 1);
OUT_RING (chan, 1);
}
if (old_mem->mem_type == TTM_PL_VRAM && nvbo->tile_flags) {
if (old_mem->mem_type == TTM_PL_VRAM &&
nouveau_bo_tile_layout(nvbo)) {
ret = RING_SPACE(chan, 8);
if (ret)
return ret;
@ -753,7 +784,8 @@ nouveau_bo_vm_bind(struct ttm_buffer_object *bo, struct ttm_mem_reg *new_mem,
if (dev_priv->card_type == NV_50) {
ret = nv50_mem_vm_bind_linear(dev,
offset + dev_priv->vm_vram_base,
new_mem->size, nvbo->tile_flags,
new_mem->size,
nouveau_bo_tile_layout(nvbo),
offset);
if (ret)
return ret;
@ -894,7 +926,8 @@ nouveau_ttm_fault_reserve_notify(struct ttm_buffer_object *bo)
* nothing to do here.
*/
if (bo->mem.mem_type != TTM_PL_VRAM) {
if (dev_priv->card_type < NV_50 || !nvbo->tile_flags)
if (dev_priv->card_type < NV_50 ||
!nouveau_bo_tile_layout(nvbo))
return 0;
}

View File

@ -281,7 +281,7 @@ detect_analog:
nv_encoder = find_encoder_by_type(connector, OUTPUT_ANALOG);
if (!nv_encoder && !nouveau_tv_disable)
nv_encoder = find_encoder_by_type(connector, OUTPUT_TV);
if (nv_encoder) {
if (nv_encoder && force) {
struct drm_encoder *encoder = to_drm_encoder(nv_encoder);
struct drm_encoder_helper_funcs *helper =
encoder->helper_private;
@ -641,11 +641,28 @@ nouveau_connector_get_modes(struct drm_connector *connector)
return ret;
}
static unsigned
get_tmds_link_bandwidth(struct drm_connector *connector)
{
struct nouveau_connector *nv_connector = nouveau_connector(connector);
struct drm_nouveau_private *dev_priv = connector->dev->dev_private;
struct dcb_entry *dcb = nv_connector->detected_encoder->dcb;
if (dcb->location != DCB_LOC_ON_CHIP ||
dev_priv->chipset >= 0x46)
return 165000;
else if (dev_priv->chipset >= 0x40)
return 155000;
else if (dev_priv->chipset >= 0x18)
return 135000;
else
return 112000;
}
static int
nouveau_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct drm_nouveau_private *dev_priv = connector->dev->dev_private;
struct nouveau_connector *nv_connector = nouveau_connector(connector);
struct nouveau_encoder *nv_encoder = nv_connector->detected_encoder;
struct drm_encoder *encoder = to_drm_encoder(nv_encoder);
@ -663,11 +680,9 @@ nouveau_connector_mode_valid(struct drm_connector *connector,
max_clock = 400000;
break;
case OUTPUT_TMDS:
if ((dev_priv->card_type >= NV_50 && !nouveau_duallink) ||
!nv_encoder->dcb->duallink_possible)
max_clock = 165000;
else
max_clock = 330000;
max_clock = get_tmds_link_bandwidth(connector);
if (nouveau_duallink && nv_encoder->dcb->duallink_possible)
max_clock *= 2;
break;
case OUTPUT_ANALOG:
max_clock = nv_encoder->dcb->crtconf.maxfreq;
@ -709,44 +724,6 @@ nouveau_connector_best_encoder(struct drm_connector *connector)
return NULL;
}
void
nouveau_connector_set_polling(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct drm_crtc *crtc;
bool spare_crtc = false;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
spare_crtc |= !crtc->enabled;
connector->polled = 0;
switch (connector->connector_type) {
case DRM_MODE_CONNECTOR_VGA:
case DRM_MODE_CONNECTOR_TV:
if (dev_priv->card_type >= NV_50 ||
(nv_gf4_disp_arch(dev) && spare_crtc))
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
break;
case DRM_MODE_CONNECTOR_DVII:
case DRM_MODE_CONNECTOR_DVID:
case DRM_MODE_CONNECTOR_HDMIA:
case DRM_MODE_CONNECTOR_DisplayPort:
case DRM_MODE_CONNECTOR_eDP:
if (dev_priv->card_type >= NV_50)
connector->polled = DRM_CONNECTOR_POLL_HPD;
else if (connector->connector_type == DRM_MODE_CONNECTOR_DVID ||
spare_crtc)
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
break;
default:
break;
}
}
static const struct drm_connector_helper_funcs
nouveau_connector_helper_funcs = {
.get_modes = nouveau_connector_get_modes,
@ -872,6 +849,7 @@ nouveau_connector_create(struct drm_device *dev, int index)
dev->mode_config.scaling_mode_property,
nv_connector->scaling_mode);
}
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
/* fall-through */
case DCB_CONNECTOR_TV_0:
case DCB_CONNECTOR_TV_1:
@ -888,11 +866,16 @@ nouveau_connector_create(struct drm_device *dev, int index)
dev->mode_config.dithering_mode_property,
nv_connector->use_dithering ?
DRM_MODE_DITHERING_ON : DRM_MODE_DITHERING_OFF);
if (dcb->type != DCB_CONNECTOR_LVDS) {
if (dev_priv->card_type >= NV_50)
connector->polled = DRM_CONNECTOR_POLL_HPD;
else
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
}
break;
}
nouveau_connector_set_polling(connector);
drm_sysfs_connector_add(connector);
dcb->drm = connector;
return dcb->drm;

View File

@ -52,9 +52,6 @@ static inline struct nouveau_connector *nouveau_connector(
struct drm_connector *
nouveau_connector_create(struct drm_device *, int index);
void
nouveau_connector_set_polling(struct drm_connector *);
int
nouveau_connector_bpp(struct drm_connector *);


@ -100,6 +100,9 @@ struct nouveau_bo {
int pin_refcnt;
};
#define nouveau_bo_tile_layout(nvbo) \
((nvbo)->tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK)
static inline struct nouveau_bo *
nouveau_bo(struct ttm_buffer_object *bo)
{
@ -304,6 +307,7 @@ struct nouveau_fifo_engine {
void (*destroy_context)(struct nouveau_channel *);
int (*load_context)(struct nouveau_channel *);
int (*unload_context)(struct drm_device *);
void (*tlb_flush)(struct drm_device *dev);
};
struct nouveau_pgraph_object_method {
@ -336,6 +340,7 @@ struct nouveau_pgraph_engine {
void (*destroy_context)(struct nouveau_channel *);
int (*load_context)(struct nouveau_channel *);
int (*unload_context)(struct drm_device *);
void (*tlb_flush)(struct drm_device *dev);
void (*set_region_tiling)(struct drm_device *dev, int i, uint32_t addr,
uint32_t size, uint32_t pitch);
@ -485,13 +490,13 @@ enum nv04_fp_display_regs {
};
struct nv04_crtc_reg {
unsigned char MiscOutReg; /* */
unsigned char MiscOutReg;
uint8_t CRTC[0xa0];
uint8_t CR58[0x10];
uint8_t Sequencer[5];
uint8_t Graphics[9];
uint8_t Attribute[21];
unsigned char DAC[768]; /* Internal Colorlookuptable */
unsigned char DAC[768];
/* PCRTC regs */
uint32_t fb_start;
@ -539,43 +544,9 @@ struct nv04_output_reg {
};
struct nv04_mode_state {
uint32_t bpp;
uint32_t width;
uint32_t height;
uint32_t interlace;
uint32_t repaint0;
uint32_t repaint1;
uint32_t screen;
uint32_t scale;
uint32_t dither;
uint32_t extra;
uint32_t fifo;
uint32_t pixel;
uint32_t horiz;
int arbitration0;
int arbitration1;
uint32_t pll;
uint32_t pllB;
uint32_t vpll;
uint32_t vpll2;
uint32_t vpllB;
uint32_t vpll2B;
struct nv04_crtc_reg crtc_reg[2];
uint32_t pllsel;
uint32_t sel_clk;
uint32_t general;
uint32_t crtcOwner;
uint32_t head;
uint32_t head2;
uint32_t cursorConfig;
uint32_t cursor0;
uint32_t cursor1;
uint32_t cursor2;
uint32_t timingH;
uint32_t timingV;
uint32_t displayV;
uint32_t crtcSync;
struct nv04_crtc_reg crtc_reg[2];
};
enum nouveau_card_type {
@ -613,6 +584,12 @@ struct drm_nouveau_private {
struct work_struct irq_work;
struct work_struct hpd_work;
struct {
spinlock_t lock;
uint32_t hpd0_bits;
uint32_t hpd1_bits;
} hpd_state;
struct list_head vbl_waiting;
struct {
@ -1045,6 +1022,7 @@ extern int nv50_fifo_create_context(struct nouveau_channel *);
extern void nv50_fifo_destroy_context(struct nouveau_channel *);
extern int nv50_fifo_load_context(struct nouveau_channel *);
extern int nv50_fifo_unload_context(struct drm_device *);
extern void nv50_fifo_tlb_flush(struct drm_device *dev);
/* nvc0_fifo.c */
extern int nvc0_fifo_init(struct drm_device *);
@ -1122,6 +1100,8 @@ extern int nv50_graph_load_context(struct nouveau_channel *);
extern int nv50_graph_unload_context(struct drm_device *);
extern void nv50_graph_context_switch(struct drm_device *);
extern int nv50_grctx_init(struct nouveau_grctx *);
extern void nv50_graph_tlb_flush(struct drm_device *dev);
extern void nv86_graph_tlb_flush(struct drm_device *dev);
/* nvc0_graph.c */
extern int nvc0_graph_init(struct drm_device *);
@ -1239,7 +1219,6 @@ extern u16 nouveau_bo_rd16(struct nouveau_bo *nvbo, unsigned index);
extern void nouveau_bo_wr16(struct nouveau_bo *nvbo, unsigned index, u16 val);
extern u32 nouveau_bo_rd32(struct nouveau_bo *nvbo, unsigned index);
extern void nouveau_bo_wr32(struct nouveau_bo *nvbo, unsigned index, u32 val);
extern int nouveau_bo_sync_gpu(struct nouveau_bo *, struct nouveau_channel *);
/* nouveau_fence.c */
struct nouveau_fence;


@ -249,6 +249,7 @@ alloc_semaphore(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_semaphore *sema;
int ret;
if (!USE_SEMA(dev))
return NULL;
@ -257,10 +258,14 @@ alloc_semaphore(struct drm_device *dev)
if (!sema)
goto fail;
ret = drm_mm_pre_get(&dev_priv->fence.heap);
if (ret)
goto fail;
spin_lock(&dev_priv->fence.lock);
sema->mem = drm_mm_search_free(&dev_priv->fence.heap, 4, 0, 0);
if (sema->mem)
sema->mem = drm_mm_get_block(sema->mem, 4, 0);
sema->mem = drm_mm_get_block_atomic(sema->mem, 4, 0);
spin_unlock(&dev_priv->fence.lock);
if (!sema->mem)
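
The added drm_mm_pre_get() call and the switch to drm_mm_get_block_atomic() follow the usual drm_mm pattern for allocating while holding a spinlock: node bookkeeping is pre-reserved while sleeping is still allowed, so nothing inside the lock needs to allocate. A generic sketch of that pattern (names here are placeholders, not taken from the patch):

	struct drm_mm_node *node;
	int ret;

	ret = drm_mm_pre_get(&mm);          /* may sleep: pre-reserve bookkeeping nodes */
	if (ret)
		return ret;
	spin_lock(&lock);                   /* no sleeping allowed from here on */
	node = drm_mm_search_free(&mm, size, align, 0);
	if (node)
		node = drm_mm_get_block_atomic(node, size, align);
	spin_unlock(&lock);
	if (!node)
		return -ENOMEM;
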


@ -107,23 +107,29 @@ nouveau_gem_info(struct drm_gem_object *gem, struct drm_nouveau_gem_info *rep)
}
static bool
nouveau_gem_tile_flags_valid(struct drm_device *dev, uint32_t tile_flags) {
switch (tile_flags) {
case 0x0000:
case 0x1800:
case 0x2800:
case 0x4800:
case 0x7000:
case 0x7400:
case 0x7a00:
case 0xe000:
break;
default:
NV_ERROR(dev, "bad page flags: 0x%08x\n", tile_flags);
return false;
nouveau_gem_tile_flags_valid(struct drm_device *dev, uint32_t tile_flags)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
if (dev_priv->card_type >= NV_50) {
switch (tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK) {
case 0x0000:
case 0x1800:
case 0x2800:
case 0x4800:
case 0x7000:
case 0x7400:
case 0x7a00:
case 0xe000:
return true;
}
} else {
if (!(tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK))
return true;
}
return true;
NV_ERROR(dev, "bad page flags: 0x%08x\n", tile_flags);
return false;
}
int


@ -519,11 +519,11 @@ nouveau_hw_fix_bad_vpll(struct drm_device *dev, int head)
struct pll_lims pll_lim;
struct nouveau_pll_vals pv;
uint32_t pllreg = head ? NV_RAMDAC_VPLL2 : NV_PRAMDAC_VPLL_COEFF;
enum pll_types pll = head ? PLL_VPLL1 : PLL_VPLL0;
if (get_pll_limits(dev, pllreg, &pll_lim))
if (get_pll_limits(dev, pll, &pll_lim))
return;
nouveau_hw_get_pllvals(dev, pllreg, &pv);
nouveau_hw_get_pllvals(dev, pll, &pv);
if (pv.M1 >= pll_lim.vco1.min_m && pv.M1 <= pll_lim.vco1.max_m &&
pv.N1 >= pll_lim.vco1.min_n && pv.N1 <= pll_lim.vco1.max_n &&
@ -536,7 +536,7 @@ nouveau_hw_fix_bad_vpll(struct drm_device *dev, int head)
pv.M1 = pll_lim.vco1.max_m;
pv.N1 = pll_lim.vco1.min_n;
pv.log2P = pll_lim.max_usable_log2p;
nouveau_hw_setpll(dev, pllreg, &pv);
nouveau_hw_setpll(dev, pll_lim.reg, &pv);
}
/*


@ -415,6 +415,25 @@ nv_fix_nv40_hw_cursor(struct drm_device *dev, int head)
NVWriteRAMDAC(dev, head, NV_PRAMDAC_CU_START_POS, curpos);
}
static inline void
nv_set_crtc_base(struct drm_device *dev, int head, uint32_t offset)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
NVWriteCRTC(dev, head, NV_PCRTC_START, offset);
if (dev_priv->card_type == NV_04) {
/*
* Hilarious, the 24th bit doesn't want to stick to
* PCRTC_START...
*/
int cre_heb = NVReadVgaCrtc(dev, head, NV_CIO_CRE_HEB__INDEX);
NVWriteVgaCrtc(dev, head, NV_CIO_CRE_HEB__INDEX,
(cre_heb & ~0x40) | ((offset >> 18) & 0x40));
}
}
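
For clarity on the shift-and-mask in nv_set_crtc_base(): bit 24 of the framebuffer offset has to be carried in bit 6 (0x40) of the CRE_HEB register, and (offset >> 18) & 0x40 does exactly that. A quick arithmetic check (illustration only):

	/*   (0x01000000 >> 18) & 0x40 == 0x40   -> offset bit 24 set, CRE_HEB bit 6 set     */
	/*   (0x00ffffff >> 18) & 0x40 == 0x00   -> offset bit 24 clear, CRE_HEB bit 6 clear */
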
static inline void
nv_show_cursor(struct drm_device *dev, int head, bool show)
{


@ -256,7 +256,7 @@ nouveau_i2c_find(struct drm_device *dev, int index)
if (index >= DCB_MAX_NUM_I2C_ENTRIES)
return NULL;
if (dev_priv->chipset >= NV_50 && (i2c->entry & 0x00000100)) {
if (dev_priv->card_type >= NV_50 && (i2c->entry & 0x00000100)) {
uint32_t reg = 0xe500, val;
if (i2c->port_type == 6) {


@ -42,6 +42,13 @@
#include "nouveau_connector.h"
#include "nv50_display.h"
static DEFINE_RATELIMIT_STATE(nouveau_ratelimit_state, 3 * HZ, 20);
static int nouveau_ratelimit(void)
{
return __ratelimit(&nouveau_ratelimit_state);
}
void
nouveau_irq_preinstall(struct drm_device *dev)
{
@ -53,6 +60,7 @@ nouveau_irq_preinstall(struct drm_device *dev)
if (dev_priv->card_type >= NV_50) {
INIT_WORK(&dev_priv->irq_work, nv50_display_irq_handler_bh);
INIT_WORK(&dev_priv->hpd_work, nv50_display_irq_hotplug_bh);
spin_lock_init(&dev_priv->hpd_state.lock);
INIT_LIST_HEAD(&dev_priv->vbl_waiting);
}
}
@ -202,8 +210,8 @@ nouveau_fifo_irq_handler(struct drm_device *dev)
}
if (status & NV_PFIFO_INTR_DMA_PUSHER) {
u32 get = nv_rd32(dev, 0x003244);
u32 put = nv_rd32(dev, 0x003240);
u32 dma_get = nv_rd32(dev, 0x003244);
u32 dma_put = nv_rd32(dev, 0x003240);
u32 push = nv_rd32(dev, 0x003220);
u32 state = nv_rd32(dev, 0x003228);
@ -213,16 +221,18 @@ nouveau_fifo_irq_handler(struct drm_device *dev)
u32 ib_get = nv_rd32(dev, 0x003334);
u32 ib_put = nv_rd32(dev, 0x003330);
NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%02x%08x "
if (nouveau_ratelimit())
NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%02x%08x "
"Put 0x%02x%08x IbGet 0x%08x IbPut 0x%08x "
"State 0x%08x Push 0x%08x\n",
chid, ho_get, get, ho_put, put, ib_get, ib_put,
state, push);
chid, ho_get, dma_get, ho_put,
dma_put, ib_get, ib_put, state,
push);
/* METHOD_COUNT, in DMA_STATE on earlier chipsets */
nv_wr32(dev, 0x003364, 0x00000000);
if (get != put || ho_get != ho_put) {
nv_wr32(dev, 0x003244, put);
if (dma_get != dma_put || ho_get != ho_put) {
nv_wr32(dev, 0x003244, dma_put);
nv_wr32(dev, 0x003328, ho_put);
} else
if (ib_get != ib_put) {
@ -231,10 +241,10 @@ nouveau_fifo_irq_handler(struct drm_device *dev)
} else {
NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%08x "
"Put 0x%08x State 0x%08x Push 0x%08x\n",
chid, get, put, state, push);
chid, dma_get, dma_put, state, push);
if (get != put)
nv_wr32(dev, 0x003244, put);
if (dma_get != dma_put)
nv_wr32(dev, 0x003244, dma_put);
}
nv_wr32(dev, 0x003228, 0x00000000);
@ -266,8 +276,9 @@ nouveau_fifo_irq_handler(struct drm_device *dev)
}
if (status) {
NV_INFO(dev, "PFIFO_INTR 0x%08x - Ch %d\n",
status, chid);
if (nouveau_ratelimit())
NV_INFO(dev, "PFIFO_INTR 0x%08x - Ch %d\n",
status, chid);
nv_wr32(dev, NV03_PFIFO_INTR_0, status);
status = 0;
}
@ -544,13 +555,6 @@ nouveau_pgraph_intr_notify(struct drm_device *dev, uint32_t nsource)
nouveau_graph_dump_trap_info(dev, "PGRAPH_NOTIFY", &trap);
}
static DEFINE_RATELIMIT_STATE(nouveau_ratelimit_state, 3 * HZ, 20);
static int nouveau_ratelimit(void)
{
return __ratelimit(&nouveau_ratelimit_state);
}
static inline void
nouveau_pgraph_intr_error(struct drm_device *dev, uint32_t nsource)


@ -33,9 +33,9 @@
#include "drmP.h"
#include "drm.h"
#include "drm_sarea.h"
#include "nouveau_drv.h"
#define MIN(a,b) a < b ? a : b
#include "nouveau_drv.h"
#include "nouveau_pm.h"
/*
* NV10-NV40 tiling helpers
@ -175,11 +175,10 @@ nv50_mem_vm_bind_linear(struct drm_device *dev, uint64_t virt, uint32_t size,
}
}
}
dev_priv->engine.instmem.flush(dev);
nv50_vm_flush(dev, 5);
nv50_vm_flush(dev, 0);
nv50_vm_flush(dev, 4);
dev_priv->engine.instmem.flush(dev);
dev_priv->engine.fifo.tlb_flush(dev);
dev_priv->engine.graph.tlb_flush(dev);
nv50_vm_flush(dev, 6);
return 0;
}
@ -209,11 +208,10 @@ nv50_mem_vm_unbind(struct drm_device *dev, uint64_t virt, uint32_t size)
pte++;
}
}
dev_priv->engine.instmem.flush(dev);
nv50_vm_flush(dev, 5);
nv50_vm_flush(dev, 0);
nv50_vm_flush(dev, 4);
dev_priv->engine.instmem.flush(dev);
dev_priv->engine.fifo.tlb_flush(dev);
dev_priv->engine.graph.tlb_flush(dev);
nv50_vm_flush(dev, 6);
}
@ -653,6 +651,7 @@ nouveau_mem_gart_init(struct drm_device *dev)
void
nouveau_mem_timing_init(struct drm_device *dev)
{
/* cards < NVC0 only */
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
struct nouveau_pm_memtimings *memtimings = &pm->memtimings;
@ -719,14 +718,14 @@ nouveau_mem_timing_init(struct drm_device *dev)
tUNK_19 = 1;
tUNK_20 = 0;
tUNK_21 = 0;
switch (MIN(recordlen,21)) {
case 21:
switch (min(recordlen, 22)) {
case 22:
tUNK_21 = entry[21];
case 20:
case 21:
tUNK_20 = entry[20];
case 19:
case 20:
tUNK_19 = entry[19];
case 18:
case 19:
tUNK_18 = entry[18];
default:
tUNK_0 = entry[0];
@ -756,24 +755,30 @@ nouveau_mem_timing_init(struct drm_device *dev)
timing->reg_100228 = (tUNK_12 << 16 | tUNK_11 << 8 | tUNK_10);
if(recordlen > 19) {
timing->reg_100228 += (tUNK_19 - 1) << 24;
} else {
}/* I cannot back-up this else-statement right now
else {
timing->reg_100228 += tUNK_12 << 24;
}
}*/
/* XXX: reg_10022c */
timing->reg_10022c = tUNK_2 - 1;
timing->reg_100230 = (tUNK_20 << 24 | tUNK_21 << 16 |
tUNK_13 << 8 | tUNK_13);
/* XXX: +6? */
timing->reg_100234 = (tRAS << 24 | (tUNK_19 + 6) << 8 | tRC);
if(tUNK_10 > tUNK_11) {
timing->reg_100234 += tUNK_10 << 16;
} else {
timing->reg_100234 += tUNK_11 << 16;
timing->reg_100234 += max(tUNK_10,tUNK_11) << 16;
/* XXX; reg_100238, reg_10023c
* reg: 0x00??????
* reg_10023c:
* 0 for pre-NV50 cards
* 0x????0202 for NV50+ cards (empirical evidence) */
if(dev_priv->card_type >= NV_50) {
timing->reg_10023c = 0x202;
}
/* XXX; reg_100238, reg_10023c */
NV_DEBUG(dev, "Entry %d: 220: %08x %08x %08x %08x\n", i,
timing->reg_100220, timing->reg_100224,
timing->reg_100228, timing->reg_10022c);
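
Two details in the timing-table parsing above are easy to miss: the switch now uses the kernel's min() instead of the unparenthesized local MIN() macro removed in the include hunk above, and the case labels move up by one, so entry[21] is only read when the record is at least 22 bytes long, which appears to be the actual fix. A worked example of the fallthrough, assuming a 20-byte record (illustration only):

	/* min(20, 22) == 20, so execution enters at "case 20": */
	switch (min(recordlen, 22)) {
	case 22: tUNK_21 = entry[21];   /* skipped for recordlen == 20 */
	case 21: tUNK_20 = entry[20];   /* skipped */
	case 20: tUNK_19 = entry[19];   /* executed, then falls through */
	case 19: tUNK_18 = entry[18];   /* executed */
	default: tUNK_0  = entry[0];    /* mandatory fields, always read */
	}
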


@ -129,7 +129,7 @@ nouveau_gpuobj_new(struct drm_device *dev, struct nouveau_channel *chan,
if (ramin == NULL) {
spin_unlock(&dev_priv->ramin_lock);
nouveau_gpuobj_ref(NULL, &gpuobj);
return ret;
return -ENOMEM;
}
ramin = drm_mm_get_block_atomic(ramin, size, align);


@ -284,6 +284,7 @@ nouveau_sysfs_fini(struct drm_device *dev)
}
}
#ifdef CONFIG_HWMON
static ssize_t
nouveau_hwmon_show_temp(struct device *d, struct device_attribute *a, char *buf)
{
@ -395,10 +396,12 @@ static struct attribute *hwmon_attributes[] = {
static const struct attribute_group hwmon_attrgroup = {
.attrs = hwmon_attributes,
};
#endif
static int
nouveau_hwmon_init(struct drm_device *dev)
{
#ifdef CONFIG_HWMON
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
struct device *hwmon_dev;
@ -425,13 +428,14 @@ nouveau_hwmon_init(struct drm_device *dev)
}
pm->hwmon = hwmon_dev;
#endif
return 0;
}
static void
nouveau_hwmon_fini(struct drm_device *dev)
{
#ifdef CONFIG_HWMON
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
@ -439,6 +443,7 @@ nouveau_hwmon_fini(struct drm_device *dev)
sysfs_remove_group(&pm->hwmon->kobj, &hwmon_attrgroup);
hwmon_device_unregister(pm->hwmon);
}
#endif
}
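
The new CONFIG_HWMON guards keep nouveau building when hwmon support is not configured; with the option unset the two functions above effectively collapse to stubs (illustrative, not literal preprocessor output):

	static int  nouveau_hwmon_init(struct drm_device *dev) { return 0; }
	static void nouveau_hwmon_fini(struct drm_device *dev) { }
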
int


@ -153,26 +153,42 @@ nouveau_ramht_insert(struct nouveau_channel *chan, u32 handle,
return -ENOMEM;
}
static struct nouveau_ramht_entry *
nouveau_ramht_remove_entry(struct nouveau_channel *chan, u32 handle)
{
struct nouveau_ramht *ramht = chan ? chan->ramht : NULL;
struct nouveau_ramht_entry *entry;
unsigned long flags;
if (!ramht)
return NULL;
spin_lock_irqsave(&ramht->lock, flags);
list_for_each_entry(entry, &ramht->entries, head) {
if (entry->channel == chan &&
(!handle || entry->handle == handle)) {
list_del(&entry->head);
spin_unlock_irqrestore(&ramht->lock, flags);
return entry;
}
}
spin_unlock_irqrestore(&ramht->lock, flags);
return NULL;
}
static void
nouveau_ramht_remove_locked(struct nouveau_channel *chan, u32 handle)
nouveau_ramht_remove_hash(struct nouveau_channel *chan, u32 handle)
{
struct drm_device *dev = chan->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_instmem_engine *instmem = &dev_priv->engine.instmem;
struct nouveau_gpuobj *ramht = chan->ramht->gpuobj;
struct nouveau_ramht_entry *entry, *tmp;
unsigned long flags;
u32 co, ho;
list_for_each_entry_safe(entry, tmp, &chan->ramht->entries, head) {
if (entry->channel != chan || entry->handle != handle)
continue;
nouveau_gpuobj_ref(NULL, &entry->gpuobj);
list_del(&entry->head);
kfree(entry);
break;
}
spin_lock_irqsave(&chan->ramht->lock, flags);
co = ho = nouveau_ramht_hash_handle(chan, handle);
do {
if (nouveau_ramht_entry_valid(dev, ramht, co) &&
@ -184,7 +200,7 @@ nouveau_ramht_remove_locked(struct nouveau_channel *chan, u32 handle)
nv_wo32(ramht, co + 0, 0x00000000);
nv_wo32(ramht, co + 4, 0x00000000);
instmem->flush(dev);
return;
goto out;
}
co += 8;
@ -194,17 +210,22 @@ nouveau_ramht_remove_locked(struct nouveau_channel *chan, u32 handle)
NV_ERROR(dev, "RAMHT entry not found. ch=%d, handle=0x%08x\n",
chan->id, handle);
out:
spin_unlock_irqrestore(&chan->ramht->lock, flags);
}
void
nouveau_ramht_remove(struct nouveau_channel *chan, u32 handle)
{
struct nouveau_ramht *ramht = chan->ramht;
unsigned long flags;
struct nouveau_ramht_entry *entry;
spin_lock_irqsave(&ramht->lock, flags);
nouveau_ramht_remove_locked(chan, handle);
spin_unlock_irqrestore(&ramht->lock, flags);
entry = nouveau_ramht_remove_entry(chan, handle);
if (!entry)
return;
nouveau_ramht_remove_hash(chan, entry->handle);
nouveau_gpuobj_ref(NULL, &entry->gpuobj);
kfree(entry);
}
struct nouveau_gpuobj *
@ -265,23 +286,19 @@ void
nouveau_ramht_ref(struct nouveau_ramht *ref, struct nouveau_ramht **ptr,
struct nouveau_channel *chan)
{
struct nouveau_ramht_entry *entry, *tmp;
struct nouveau_ramht_entry *entry;
struct nouveau_ramht *ramht;
unsigned long flags;
if (ref)
kref_get(&ref->refcount);
ramht = *ptr;
if (ramht) {
spin_lock_irqsave(&ramht->lock, flags);
list_for_each_entry_safe(entry, tmp, &ramht->entries, head) {
if (entry->channel != chan)
continue;
nouveau_ramht_remove_locked(chan, entry->handle);
while ((entry = nouveau_ramht_remove_entry(chan, 0))) {
nouveau_ramht_remove_hash(chan, entry->handle);
nouveau_gpuobj_ref(NULL, &entry->gpuobj);
kfree(entry);
}
spin_unlock_irqrestore(&ramht->lock, flags);
kref_put(&ramht->refcount, nouveau_ramht_del);
}
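
The split into nouveau_ramht_remove_entry() and nouveau_ramht_remove_hash() means the list unlink and the hash-slot clear each take ramht->lock themselves, while dropping the gpuobj reference and kfree() now happen with the lock released; a handle of 0 acts as a wildcard so nouveau_ramht_ref() can drain every entry owned by a channel. Schematically (mirroring the loop above, illustration only):

	while ((entry = nouveau_ramht_remove_entry(chan, 0))) {    /* 0 matches any handle     */
		nouveau_ramht_remove_hash(chan, entry->handle);    /* clears the HT slot       */
		nouveau_gpuobj_ref(NULL, &entry->gpuobj);          /* done outside ramht->lock */
		kfree(entry);
	}
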


@ -120,8 +120,8 @@ nouveau_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem)
dev_priv->engine.instmem.flush(nvbe->dev);
if (dev_priv->card_type == NV_50) {
nv50_vm_flush(dev, 5); /* PGRAPH */
nv50_vm_flush(dev, 0); /* PFIFO */
dev_priv->engine.fifo.tlb_flush(dev);
dev_priv->engine.graph.tlb_flush(dev);
}
nvbe->bound = true;
@ -162,8 +162,8 @@ nouveau_sgdma_unbind(struct ttm_backend *be)
dev_priv->engine.instmem.flush(nvbe->dev);
if (dev_priv->card_type == NV_50) {
nv50_vm_flush(dev, 5);
nv50_vm_flush(dev, 0);
dev_priv->engine.fifo.tlb_flush(dev);
dev_priv->engine.graph.tlb_flush(dev);
}
nvbe->bound = false;
@ -224,7 +224,11 @@ nouveau_sgdma_init(struct drm_device *dev)
int i, ret;
if (dev_priv->card_type < NV_50) {
aper_size = (64 * 1024 * 1024);
if(dev_priv->ramin_rsvd_vram < 2 * 1024 * 1024)
aper_size = 64 * 1024 * 1024;
else
aper_size = 512 * 1024 * 1024;
obj_size = (aper_size >> NV_CTXDMA_PAGE_SHIFT) * 4;
obj_size += 8; /* ctxdma header */
} else {


@ -354,6 +354,15 @@ static int nouveau_init_engine_ptrs(struct drm_device *dev)
engine->graph.destroy_context = nv50_graph_destroy_context;
engine->graph.load_context = nv50_graph_load_context;
engine->graph.unload_context = nv50_graph_unload_context;
if (dev_priv->chipset != 0x86)
engine->graph.tlb_flush = nv50_graph_tlb_flush;
else {
/* From what I can see, NVIDIA does this on every
* pre-NVA3 board except NVAC, but we've only
* ever seen problems on NV86.
*/
engine->graph.tlb_flush = nv86_graph_tlb_flush;
}
engine->fifo.channels = 128;
engine->fifo.init = nv50_fifo_init;
engine->fifo.takedown = nv50_fifo_takedown;
@ -365,6 +374,7 @@ static int nouveau_init_engine_ptrs(struct drm_device *dev)
engine->fifo.destroy_context = nv50_fifo_destroy_context;
engine->fifo.load_context = nv50_fifo_load_context;
engine->fifo.unload_context = nv50_fifo_unload_context;
engine->fifo.tlb_flush = nv50_fifo_tlb_flush;
engine->display.early_init = nv50_display_early_init;
engine->display.late_takedown = nv50_display_late_takedown;
engine->display.create = nv50_display_create;
@ -1041,6 +1051,9 @@ int nouveau_ioctl_getparam(struct drm_device *dev, void *data,
case NOUVEAU_GETPARAM_PTIMER_TIME:
getparam->value = dev_priv->engine.timer.read(dev);
break;
case NOUVEAU_GETPARAM_HAS_BO_USAGE:
getparam->value = 1;
break;
case NOUVEAU_GETPARAM_GRAPH_UNITS:
/* NV40 and NV50 versions are quite different, but register
* address is the same. User is supposed to know the card
@ -1051,7 +1064,7 @@ int nouveau_ioctl_getparam(struct drm_device *dev, void *data,
}
/* FALLTHRU */
default:
NV_ERROR(dev, "unknown parameter %lld\n", getparam->param);
NV_DEBUG(dev, "unknown parameter %lld\n", getparam->param);
return -EINVAL;
}
@ -1066,7 +1079,7 @@ nouveau_ioctl_setparam(struct drm_device *dev, void *data,
switch (setparam->param) {
default:
NV_ERROR(dev, "unknown parameter %lld\n", setparam->param);
NV_DEBUG(dev, "unknown parameter %lld\n", setparam->param);
return -EINVAL;
}


@ -191,7 +191,7 @@ nv40_temp_get(struct drm_device *dev)
int offset = sensor->offset_mult / sensor->offset_div;
int core_temp;
if (dev_priv->chipset >= 0x50) {
if (dev_priv->card_type >= NV_50) {
core_temp = nv_rd32(dev, 0x20008);
} else {
core_temp = nv_rd32(dev, 0x0015b4) & 0x1fff;


@ -158,7 +158,6 @@ nv_crtc_dpms(struct drm_crtc *crtc, int mode)
{
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct drm_connector *connector;
unsigned char seq1 = 0, crtc17 = 0;
unsigned char crtc1A;
@ -213,10 +212,6 @@ nv_crtc_dpms(struct drm_crtc *crtc, int mode)
NVVgaSeqReset(dev, nv_crtc->index, false);
NVWriteVgaCrtc(dev, nv_crtc->index, NV_CIO_CRE_RPC1_INDEX, crtc1A);
/* Update connector polling modes */
list_for_each_entry(connector, &dev->mode_config.connector_list, head)
nouveau_connector_set_polling(connector);
}
static bool
@ -831,7 +826,7 @@ nv04_crtc_do_mode_set_base(struct drm_crtc *crtc,
/* Update the framebuffer location. */
regp->fb_start = nv_crtc->fb.offset & ~3;
regp->fb_start += (y * drm_fb->pitch) + (x * drm_fb->bits_per_pixel / 8);
NVWriteCRTC(dev, nv_crtc->index, NV_PCRTC_START, regp->fb_start);
nv_set_crtc_base(dev, nv_crtc->index, regp->fb_start);
/* Update the arbitration parameters. */
nouveau_calc_arb(dev, crtc->mode.clock, drm_fb->bits_per_pixel,

Some files were not shown because too many files have changed in this diff.