path: root/arch/s390/lib
Commit message (author, age, files changed, lines removed/added)
* [S390] 4level-fixup cleanup (Martin Schwidefsky, 2007-10-22, 1 file, -1/+6)
  Get independent from asm-generic/4level-fixup.h.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Introduce follow_table in uaccess_pt.c (Martin Schwidefsky, 2007-10-22, 1 file, -63/+22)
  Define and use follow_table inline in uaccess_pt.c to simplify the code.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
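
  For illustration, a minimal sketch of what a follow_table-style helper does: walk the page tables for a user address and hand back the pte, or NULL when there is no valid translation. Names, return convention and the exact walk are assumptions, not the literal s390 code.

      static pte_t *follow_table(struct mm_struct *mm, unsigned long addr)
      {
              pgd_t *pgd;
              pmd_t *pmd;

              pgd = pgd_offset(mm, addr);                /* top-level entry for addr */
              if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
                      return NULL;
              pmd = pmd_offset(pgd, addr);               /* segment-table entry */
              if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
                      return NULL;
              return pte_offset_map(pmd, addr);          /* page-table entry */
      }

  Every access in the page-table-walk uaccess variant can then funnel through one helper instead of open-coding the walk.
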
* pid namespaces: define is_global_init() and is_container_init() (Serge E. Hallyn, 2007-10-19, 1 file, -1/+1)
  is_init() is an ambiguous name for the pid==1 check. Split it into is_global_init() and is_container_init().

  A cgroup init has its tsk->pid == 1. A global init also has its tsk->pid == 1 and its active pid namespace is the init_pid_ns. But rather than check the active pid namespace, compare the task structure with 'init_pid_ns.child_reaper', which is initialized during boot to the /sbin/init process and never changes.

  Changelog:

  2.6.22-rc4-mm2-pidns1:
  - Use 'init_pid_ns.child_reaper' to determine if a given task is the global init (/sbin/init) process. This would improve performance and remove dependence on the task_pid().

  2.6.21-mm2-pidns2:
  - [Sukadev Bhattiprolu] Changed is_container_init() calls in {powerpc,ppc,avr32}/traps.c for the _exception() call to is_global_init(). This way, we kill only the cgroup if the cgroup's init has a bug rather than force a kernel panic.

  [akpm@linux-foundation.org: fix comment]
  [sukadev@us.ibm.com: Use is_global_init() in arch/m32r/mm/fault.c]
  [bunk@stusta.de: kernel/pid.c: remove unused exports]
  [sukadev@us.ibm.com: Fix capability.c to work with threaded init]
  Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
  Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
  Acked-by: Pavel Emelianov <xemul@openvz.org>
  Cc: Eric W. Biederman <ebiederm@xmission.com>
  Cc: Cedric Le Goater <clg@fr.ibm.com>
  Cc: Dave Hansen <haveblue@us.ibm.com>
  Cc: Herbert Poetzel <herbert@13thfloor.at>
  Cc: Kirill Korotaev <dev@sw.ru>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
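
  As a rough sketch of the distinction described above (it assumes the init_pid_ns.child_reaper pointer this series relies on; the container-init check is simplified here, not the exact merged code):

      /* Global init: the boot-time /sbin/init, identified by comparing task
       * structures rather than by pid number. */
      static inline int is_global_init(struct task_struct *tsk)
      {
              return tsk == init_pid_ns.child_reaper;
      }

      /* Container init: pid 1 as seen from inside its own pid namespace. */
      static inline int is_container_init(struct task_struct *tsk)
      {
              return tsk->pid == 1;
      }
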
* mm: fault feedback #2 (Nick Piggin, 2007-07-19, 1 file, -12/+11)
  This patch completes Linus's wish that the fault return codes be made into bit flags, which I agree makes everything nicer. This requires all handle_mm_fault callers to be modified (possibly the modifications should go further and do things like fault accounting in handle_mm_fault -- however that would be for another patch).

  [akpm@linux-foundation.org: fix alpha build]
  [akpm@linux-foundation.org: fix s390 build]
  [akpm@linux-foundation.org: fix sparc build]
  [akpm@linux-foundation.org: fix sparc64 build]
  [akpm@linux-foundation.org: fix ia64 build]
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Cc: Richard Henderson <rth@twiddle.net>
  Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
  Cc: Russell King <rmk@arm.linux.org.uk>
  Cc: Ian Molton <spyro@f2s.com>
  Cc: Bryan Wu <bryan.wu@analog.com>
  Cc: Mikael Starvik <starvik@axis.com>
  Cc: David Howells <dhowells@redhat.com>
  Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
  Cc: "Luck, Tony" <tony.luck@intel.com>
  Cc: Hirokazu Takata <takata@linux-m32r.org>
  Cc: Geert Uytterhoeven <geert@linux-m68k.org>
  Cc: Roman Zippel <zippel@linux-m68k.org>
  Cc: Greg Ungerer <gerg@uclinux.org>
  Cc: Matthew Wilcox <willy@debian.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
  Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
  Cc: Richard Curnow <rc@rc0.org.uk>
  Cc: William Lee Irwin III <wli@holomorphy.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Jeff Dike <jdike@addtoit.com>
  Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
  Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
  Cc: Chris Zankel <chris@zankel.net>
  Acked-by: Kyle McMartin <kyle@mcmartin.ca>
  Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
  Acked-by: Ralf Baechle <ralf@linux-mips.org>
  Acked-by: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  [ Still apparently needs some ARM and PPC loving - Linus ]
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
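
  A hedged sketch of the bit-flag scheme this refers to, as it might look inside an architecture fault handler; the flag values and names here are illustrative, not necessarily the merged definitions:

      #define VM_FAULT_OOM    0x0001
      #define VM_FAULT_SIGBUS 0x0002
      #define VM_FAULT_MAJOR  0x0004
      #define VM_FAULT_ERROR  (VM_FAULT_OOM | VM_FAULT_SIGBUS)

      fault = handle_mm_fault(mm, vma, address, write_access);
      if (unlikely(fault & VM_FAULT_ERROR)) {
              if (fault & VM_FAULT_OOM)
                      goto out_of_memory;
              if (fault & VM_FAULT_SIGBUS)
                      goto do_sigbus;
              BUG();
      }
      /* callers can test individual conditions instead of switching on a
       * single enumerated return value */
      if (fault & VM_FAULT_MAJOR)
              tsk->maj_flt++;
      else
              tsk->min_flt++;
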
* [S390] Bogomips calculation for 64 bit. (Martin Schwidefsky, 2007-07-10, 1 file, -2/+2)
  The bogomips calculation triggered via reading from /proc/cpuinfo can return incorrect values if the qrnnd assembly is called with a pointer in %r2 with any of the upper 32 bits set. Fix this by using the 64 bit division / remainder operations provided by gcc instead of calling the assembly.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390]: Fix build on 31-bit. (David S. Miller, 2007-04-25, 2 files, -3/+1)
  Allow s390 to properly override the generic __div64_32() implementation by:

  1) Using obj-y for div64.o in s390's makefile instead of lib-y
  2) Adding the weak attribute to the generic implementation.

  Signed-off-by: David S. Miller <davem@davemloft.net>
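
  For illustration, a sketch of the two pieces described above; the exact file contents are assumptions, not quotes from the patch. The generic copy is marked weak, and the architecture builds its own div64.o with obj-y so it is always linked in and overrides the weak symbol:

      /* lib/div64.c (generic fallback, sketch) -- weak, so an architecture
       * may provide a stronger definition.  The body here is simplified;
       * the real generic version avoids relying on 64-bit division support. */
      uint32_t __attribute__((weak)) __div64_32(uint64_t *n, uint32_t base)
      {
              uint32_t rem = *n % base;

              *n /= base;
              return rem;
      }

      /* arch/s390/lib/Makefile (sketch): use obj-y instead of lib-y, i.e.
       *     obj-y += div64.o
       * so the s390 object is always linked; a lib-y object would only be
       * pulled from lib.a if the symbol were otherwise undefined, and the
       * weak generic copy already satisfies it. */
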
* [S390] prevent softirqs if delay is called disabled (Martin Schwidefsky, 2007-02-21, 1 file, -0/+7)
  The new delay implementation uses the clock comparator and an external interrupt even if it is called with interrupts disabled. To do this, all external interrupt sources except the clock comparator are switched off before external interrupts are enabled. The external interrupt at the end of the delay period may not execute softirqs, or we can end up in a deadlock.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Avoid excessive inlining. (Heiko Carstens, 2007-02-05, 1 file, -5/+5)
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Calibrate delay and bogomips. (Martin Schwidefsky, 2007-02-05, 2 files, -1/+78)
  Preset the bogomips number to the cpu capacity value reported by store system information in SYSIB 1.2.2. This value is constant for a particular machine model and can be used to determine relative performance differences between machines.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] ETR support. (Martin Schwidefsky, 2007-02-05, 1 file, -14/+34)
  This patch adds support for clock synchronization to an external time reference (ETR). The external time reference sends an oscillator signal and a synchronization signal every 2^20 microseconds to keep the TOD clocks of all connected servers in sync. For availability two ETR units can be connected to a machine. If the clock deviates for more than the sync-check tolerance all cpus get a machine check that indicates that the clock is out of sync. For the lovely details how to get the clock back in sync see the code below.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] noexec protection (Gerald Schaefer, 2007-02-05, 2 files, -1/+372)
  This provides a noexec protection on s390 hardware. Our hardware does not have any bits left in the pte for a hw noexec bit, so this is a different approach using shadow page tables and a special addressing mode that allows separate address spaces for code and data.

  As a special feature of our "secondary-space" addressing mode, separate page tables can be specified for the translation of data addresses (storage operands) and instruction addresses. The shadow page table is used for the instruction addresses and the standard page table for the data addresses. The shadow page table is linked to the standard page table by a pointer in page->lru.next of the struct page corresponding to the page that contains the standard page table (since page->private is not really private with the pte_lock and the page table pages are not in the LRU list).

  Depending on the software bits of a pte, it is either inserted into both page tables or just into the standard (data) page table. Pages of a vma that does not have the VM_EXEC bit set get mapped only in the data address space. Any attempt to execute code on such a page will cause a page translation exception. The standard reaction to this is a SIGSEGV with two exceptions: the two system call opcodes 0x0a77 (sys_sigreturn) and 0x0aad (sys_rt_sigreturn) are allowed. They are stored by the kernel to the signal stack frame. Unfortunately, the signal return mechanism cannot be modified to use an SA_RESTORER because the exception unwinding code depends on the system call opcode stored behind the signal stack frame.

  This feature requires that user space is executed in secondary-space mode and the kernel in home-space mode, which means that the addressing modes need to be switched and that the noexec protection only works for user space. After switching the addressing modes, we cannot use the mvcp/mvcs instructions anymore to copy between kernel and user space. A new mvcos instruction has been added to the z9 EC/BC hardware which allows copying between arbitrary address spaces, but on older hardware the page tables need to be walked manually.

  Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Get rid of a lot of sparse warnings. (Heiko Carstens, 2007-02-05, 4 files, -27/+47)
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] don't call handle_mm_fault() if in an atomic context. (Heiko Carstens, 2007-01-09, 2 files, -3/+3)
  There are several places in the futex code where a spin_lock is held and still uaccesses happen. Deadlocks are avoided by increasing the preempt count. The pagefault handler will then not take any locks but will immediately search the fixup tables.
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] uaccess_pt: add missing down_read() and convert to is_init(). (Heiko Carstens, 2006-12-08, 1 file, -2/+3)
  Doesn't seem to be a good idea to duplicate code :)
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [PATCH] mm: pagefault_{disable,enable}() (Peter Zijlstra, 2006-12-07, 1 file, -3/+3)
  Introduce pagefault_{disable,enable}() and use these where previously we did manual preempt increments/decrements to make the pagefault handler do the atomic thing. Currently they still rely on the increased preempt count, but do not rely on the disabled preemption, this might go away in the future.

  (NOTE: the extra barrier() in pagefault_disable might fix some holes on machines which have too many registers for their own good)

  [heiko.carstens@de.ibm.com: s390 fix]
  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Acked-by: Nick Piggin <npiggin@suse.de>
  Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
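
  A sketch of the helpers being introduced, assuming the preempt-count based implementation the message describes (details may differ from the merged code):

      static inline void pagefault_disable(void)
      {
              inc_preempt_count();
              /* make sure the raised count is visible before any user access
               * that may fault */
              barrier();
      }

      static inline void pagefault_enable(void)
      {
              /* order the user accesses before the count is dropped again */
              barrier();
              dec_preempt_count();
              barrier();
              preempt_check_resched();
      }

  With the count raised, the fault handler takes the atomic path: no locks, straight to the exception fixup tables.
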
* [S390] Add dynamic size check for usercopy functions. (Gerald Schaefer, 2006-12-04, 4 files, -59/+190)
  Use a wrapper for copy_to/from_user to choose the best usercopy method. The mvcos instruction is better for sizes greater than 256 bytes; if mvcos is not available, a page table walk is better for sizes greater than 1024 bytes. Also removed the redundant copy_to/from_user_std_small functions.
  Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
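
  Roughly, the dispatch described above could look like the following sketch; the helper names mirror the std/mvcos/page-table-walk variants mentioned in this series, but the wrapper itself is an assumption:

      static size_t copy_from_user_dynamic(size_t n, const void __user *from, void *to)
      {
              if (n <= 256)                          /* small copy: standard mvcp/mvcs path */
                      return copy_from_user_std(n, from, to);
              if (MACHINE_HAS_MVCOS)                 /* z9 and newer */
                      return copy_from_user_mvcos(n, from, to);
              if (n > 1024)                          /* older hardware, large copy */
                      return copy_from_user_pt(n, from, to);
              return copy_from_user_std(n, from, to);
      }
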
* [PATCH] Directed yield: direct yield of spinlocks for s390. (Martin Schwidefsky, 2006-10-01, 1 file, -23/+39)
  Use the new diagnose 0x9c in the spinlock implementation for s390. It yields the remaining timeslice of the virtual cpu that tries to acquire a lock to the virtual cpu that is the current holder of the lock.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
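
  For illustration only, a sketch of a directed-yield helper along these lines; the operand encoding and the feature check shown here are assumptions, not quoted from the patch:

      static inline void _raw_yield_cpu(int cpu)
      {
              if (MACHINE_HAS_DIAG9C)
                      /* hand the rest of our timeslice to the cpu holding the lock */
                      asm volatile("diag %0,0,0x9c" : : "d" (cpu));
              else
                      /* fall back to the old diagnose 0x44: yield to anyone */
                      asm volatile("diag 0,0,0x44");
      }
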
* [S390] Inline assembly cleanup. (Martin Schwidefsky, 2006-09-28, 1 file, -7/+4)
  Major cleanup of all s390 inline assemblies. They now have a common coding style. Quite a few have been shortened, mainly by using register asm variables. Use of the EX_TABLE macro helps as well. The atomic ops, bit ops and locking inlines now use the Q-constraint if a newer gcc is used. That results in slightly better code. Thanks to Christian Borntraeger for proof reading the changes.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] user readable uninitialised kernel memory. (Martin Schwidefsky, 2006-09-28, 2 files, -16/+42)
  A user space program can read uninitialised kernel memory by appending to a file from a bad address and then reading the result back. The cause is the copy_from_user function that does not clear the remaining bytes of the kernel buffer after it got a fault on the user space address.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
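
  In sketch form (names are illustrative, including the underlying primitive), the fix amounts to zeroing whatever part of the kernel buffer a faulting copy never filled:

      static size_t copy_from_user_fixed(void *to, const void __user *from, size_t n)
      {
              size_t left = __copy_from_user_raw(to, from, n);   /* hypothetical primitive */

              /* a fault left 'left' bytes uncopied at the end of the buffer;
               * clear them so stale kernel memory cannot be read back later
               * through the file the caller is writing */
              if (unlikely(left))
                      memset(to + (n - left), 0, left);
              return left;
      }
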
* [S390] __div64_32 for 31 bit. (Martin Schwidefsky, 2006-09-28, 2 files, -0/+152)
  The clocksource infrastructure introduced with commit ad596171ed635c51a9eef829187af100cbf8dcf7 broke 31 bit s390. The reason is that the do_div() primitive for 31 bit always had a restriction: it could only divide an unsigned 64 bit integer by an unsigned 31 bit integer. The clocksource code now uses do_div() with a base value that has the most significant bit set. The result is that clock->cycle_interval has a funny value which causes the linux time to jump around like mad. The solution is "obvious": implement a proper __div64_32 function for 31 bit s390.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
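
  The contract such an implementation has to satisfy, written here with plain C division purely to show the semantics (the real 31-bit version is assembly and cannot itself rely on full 64-bit division support):

      /* divide *n in place by base and return the 32-bit remainder,
       * with no restriction on the magnitude of base */
      uint32_t __div64_32(uint64_t *n, uint32_t base)
      {
              uint64_t dividend = *n;

              *n = dividend / base;
              return dividend % base;
      }
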
* [S390] Use alternative user-copy operations for new hardware. (Gerald Schaefer, 2006-09-20, 2 files, -0/+157)
  This introduces new user-copy operations which are optimized for copying more than 256 bytes on new hardware.
  Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Make user-copy operations run-time configurable. (Gerald Schaefer, 2006-09-20, 4 files, -420/+341)
  Introduces a struct uaccess_ops which allows setting user-copy operations at run-time.
  Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
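
  A plausible shape for such an ops structure, sketched from the one-line description above; the field list and the wrapper are assumptions rather than the exact merged interface:

      struct uaccess_ops {
              size_t (*copy_from_user)(size_t n, const void __user *from, void *to);
              size_t (*copy_to_user)(size_t n, void __user *to, const void *from);
              size_t (*copy_in_user)(size_t n, void __user *to, const void __user *from);
              size_t (*clear_user)(size_t n, void __user *to);
              size_t (*strnlen_user)(size_t count, const char __user *str);
              size_t (*strncpy_from_user)(size_t count, const char __user *src, char *dst);
      };

      extern struct uaccess_ops uaccess;      /* chosen once during boot */

      #define __copy_from_user(to, from, n) uaccess.copy_from_user((n), (from), (to))
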
* [S390] broken copy_in_user function. (Martin Schwidefsky, 2006-08-30, 2 files, -33/+35)
  The copy_in_user primitive does not work as advertised. If the source and target areas are available, copy_in_user copies one byte too much. If one of the memory areas is not available, it does not copy as much data as it can, but up to 257 bytes less.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* [S390] Fix sparse warnings. (Heiko Carstens, 2006-07-12, 1 file, -2/+2)
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30, 1 file, -1/+0)
  Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [PATCH] s390: Increase spinlock retry code performance (Christian Ehrhardt, 2006-03-09, 1 file, -2/+13)
  Currently the code tries up to spin_retry times to grab a lock using the cs instruction. The cs instruction has exclusive access to a memory region and therefore invalidates the appropriate cache line of all other cpus. If there is contention on a lock this leads to cache line thrashing. This can be avoided if we first check whether a cs instruction is likely to succeed before the instruction actually gets executed.
  Signed-off-by: Christian Ehrhardt <ehrhardt@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
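
  The idea, as a hedged sketch (the lock layout and helper names are assumptions): do a cheap ordinary load first and only issue the serialising cs when the lock actually looks free:

      static inline int try_once(raw_spinlock_t *lp, unsigned int cpu)
      {
              /* plain read: no exclusive cache-line access, so polling here
               * does not disturb the current lock holder */
              if (lp->owner_cpu != 0)
                      return 0;
              /* cs: atomically replace 0 with our cpu id; fails if someone
               * else grabbed the lock in the meantime */
              return _raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0;
      }
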
* [PATCH] s390: fix strnlen_user return value (Gerald Schaefer, 2006-03-08, 2 files, -6/+6)
  strnlen_user is supposed to return the length count + 1 if no terminating \0 is found, and it should return 0 on exception. Found by David Howells <dhowells@redhat.com>.
  Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Acked-By: David Howells <dhowells@redhat.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] s390: fix __delay implementation (Heiko Carstens, 2006-02-14, 1 file, -1/+1)
  Fix __delay implementation. Called with an argument "1" or "0" it would loop nearly forever (since (1/2)-1 = 0xffffffff).
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] s390: Remove CVS generated information (Heiko Carstens, 2006-02-01, 1 file, -1/+1)
  - Remove all CVS generated information like e.g. revision IDs from drivers/s390 and include/asm-s390 (none present in arch/s390).
  - Add newline at end of arch/s390/lib/Makefile to avoid diff message.

  Acked-by: Andreas Herrmann <aherrman@de.ibm.com>
  Acked-by: Frank Pavlic <pavlic@de.ibm.com>
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] s390: spinlock fixes (Martin Schwidefsky, 2006-01-14, 2 files, -8/+2)
  Remove useless spin_retry_counter and fix compilation for UP kernels.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] s390: cleanup Kconfig (Martin Schwidefsky, 2006-01-06, 2 files, -4/+3)
  Sanitize some s390 Kconfig options. We have ARCH_S390, ARCH_S390X, ARCH_S390_31, 64BIT, S390_SUPPORT and COMPAT. Replace these 6 options by S390, 64BIT and COMPAT.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] spinlock consolidation (Ingo Molnar, 2005-09-10, 1 file, -6/+6)
  This patch (written by me and also containing many suggestions of Arjan van de Ven) does a major cleanup of the spinlock code. It does the following things:

  - consolidates and enhances the spinlock/rwlock debugging code
  - simplifies the asm/spinlock.h files
  - encapsulates the raw spinlock type and moves generic spinlock features (such as ->break_lock) into the generic code.
  - cleans up the spinlock code hierarchy to get rid of the spaghetti.

  Most notably there's now only a single variant of the debugging code, located in lib/spinlock_debug.c. (previously we had one SMP debugging variant per architecture, plus a separate generic one for UP builds)

  Also, I've enhanced the rwlock debugging facility, it will now track write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too. All locks have lockup detection now, which will work for both soft and hard spin/rwlock lockups.

  The arch-level include files now only contain the minimally necessary subset of the spinlock code - all the rest that can be generalized now lives in the generic headers:

    include/asm-i386/spinlock_types.h   | 16
    include/asm-x86_64/spinlock_types.h | 16

  I have also split up the various spinlock variants into separate files, making it easier to see which does what. The new layout is:

    SMP                        | UP
    ---------------------------|---------------------------------
    asm/spinlock_types_smp.h   | linux/spinlock_types_up.h
    linux/spinlock_types.h     | linux/spinlock_types.h
    asm/spinlock_smp.h         | linux/spinlock_up.h
    linux/spinlock_api_smp.h   | linux/spinlock_api_up.h
    linux/spinlock.h           | linux/spinlock.h

  /*
   * here's the role of the various spinlock/rwlock related include files:
   *
   * on SMP builds:
   *
   *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
   *                        initializers
   *
   *  linux/spinlock_types.h:
   *                        defines the generic type and initializers
   *
   *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
   *                        implementations, mostly inline assembly code
   *
   *   (also included on UP-debug builds:)
   *
   *  linux/spinlock_api_smp.h:
   *                        contains the prototypes for the _spin_*() APIs.
   *
   *  linux/spinlock.h:     builds the final spin_*() APIs.
   *
   * on UP builds:
   *
   *  linux/spinlock_type_up.h:
   *                        contains the generic, simplified UP spinlock type.
   *                        (which is an empty structure on non-debug builds)
   *
   *  linux/spinlock_types.h:
   *                        defines the generic type and initializers
   *
   *  linux/spinlock_up.h:
   *                        contains the __raw_spin_*()/etc. version of UP
   *                        builds. (which are NOPs on non-debug, non-preempt
   *                        builds)
   *
   *   (included on UP-non-debug builds:)
   *
   *  linux/spinlock_api_up.h:
   *                        builds the _spin_*() APIs.
   *
   *  linux/spinlock.h:     builds the final spin_*() APIs.
   */

  All SMP and UP architectures are converted by this patch. arm, i386, ia64, ppc, ppc64, s390/s390x, x64 were build-tested via crosscompilers. m32r, mips, sh, sparc have not been tested yet, but should be mostly fine.

  From: Grant Grundler <grundler@parisc-linux.org>
  Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU). Builds 32-bit SMP kernel (not booted or tested). I did not try to build non-SMP kernels. That should be trivial to fix up later if necessary.

  I converted bit ops atomic_hash lock to raw_spinlock_t. Doing so avoids some ugly nesting of linux/*.h and asm/*.h files. Those particular locks are well tested and contained entirely inside arch specific code. I do NOT expect any new issues to arise with them. If someone does ever need to use debug/metrics with them, then they will need to unravel this hairball between spinlocks, atomic ops, and bit ops that exist only because parisc has exactly one atomic instruction: LDCW (load and clear word).

  From: "Luck, Tony" <tony.luck@intel.com>
  ia64 fix

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
  Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
  Cc: Matthew Wilcox <willy@debian.org>
  Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
  Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
  Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* kbuild: m68k,parisc,ppc,ppc64,s390,xtensa use generic asm-offsets.h support (Sam Ravnborg, 2005-09-09, 2 files, -2/+2)
  Delete obsoleted parts from arch makefiles and rename to asm-offsets.h.
  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
* [PATCH] s390: spin lock retry (Martin Schwidefsky, 2005-07-27, 2 files, -2/+135)
  Split the spin lock and r/w lock implementation into a single try which is done inline and an out of line function that repeatedly tries to get the lock before doing the cpu_relax(). Add a system control to set the number of retries before a cpu is yielded. The reason for the spin lock retry is that the diagnose 0x44 that is used to give up the virtual cpu is quite expensive. For spin locks that are held only for a short period of time the cost of the diagnose outweighs the savings for spin locks that are held for a longer time. The default retry count is 1000.
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
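
  In outline (names and the lock layout are assumptions, not the literal patch): the fast path is a single inline compare-and-swap, and only a failed attempt drops into the out-of-line retry loop that eventually yields the virtual cpu:

      static inline void _raw_spin_lock(raw_spinlock_t *lp)
      {
              unsigned int cpu = smp_processor_id() + 1;   /* 0 means "unlocked" */

              if (likely(_raw_compare_and_swap(&lp->owner_cpu, 0, cpu) == 0))
                      return;                  /* uncontended: taken inline */
              /* out of line: retry up to the configurable spin_retry count,
               * then use diagnose 0x44 to yield the virtual cpu */
              _raw_spin_lock_wait(lp, cpu);
      }
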
* Linux-2.6.12-rc2 (Linus Torvalds, 2005-04-16, 5 files, -0/+857)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!