The comment in the code was asking "Do we have to do this?", and according
to x86 and s390 the answer is no: the scheduler will do it before calling
the arch hook.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
A single full sync (mb()) is required to order the mmio to the qirr
register with the set or clear of the message word. However,
test_and_clear_bit has the effect of smp_mb(), and we are not doing any
other I/O from here, so we don't need an mb() per bit processed.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
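A minimal sketch of the receive path this describes, assuming a hypothetical qirr_ack() helper for the qirr MMIO access, a do_message() dispatcher, and the usual PPC_MSG_* message numbers; the single mb() orders the qirr access against the message word, and each test_and_clear_bit() already implies a full barrier on powerpc:

    /* Kernel-context sketch; qirr_ack() and do_message() are illustrative stand-ins. */
    static unsigned long xics_ipi_message[NR_CPUS];

    static irqreturn_t xics_ipi_action_sketch(int irq, void *dev_id)
    {
        int cpu = smp_processor_id();

        qirr_ack(cpu);  /* MMIO (or hcall) write to the qirr register */
        mb();           /* one full sync: order the qirr write vs. the message word */

        while (xics_ipi_message[cpu]) {
            /* test_and_clear_bit() acts as a full barrier, so no mb() per bit */
            if (test_and_clear_bit(PPC_MSG_CALL_FUNCTION, &xics_ipi_message[cpu]))
                do_message(PPC_MSG_CALL_FUNCTION);
            if (test_and_clear_bit(PPC_MSG_RESCHEDULE, &xics_ipi_message[cpu]))
                do_message(PPC_MSG_RESCHEDULE);
        }
        return IRQ_HANDLED;
    }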
Several printk format strings were split at word boundaries to keep
source lines short. Some even referred to old function names. Using
__func__ and slightly rewording the messages allows each format to fit
on a single line.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
It is physically per-cpu, and we want the irq layer to treat it that way.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
EOI normally has the side effect of returning the cpu to the base
priority so it can receive the next interrupt. This is actually
controlled by the top byte of the xirr register. When we are exiting the
kernel in kexec we must EOI the IPI for the next kernel because we never
return from the handler, but we want to leave interrupt delivery blocked
until the next kernel takes action.
Since the hardware IPI vector is fixed, it's easiest to just do the EOI
explicitly.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
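A sketch of that explicit EOI, assuming XICS_IPI is the fixed IPI source number and using a hypothetical xirr_write() stand-in for the platform's MMIO or hcall path; writing the vector back with the CPPR byte held at 0x00 completes the EOI while keeping delivery blocked:

    #define XICS_IPI 2      /* fixed interrupt source number used for IPIs */

    /* Sketch only: xirr_write() stands in for the direct-MMIO or LPAR hcall access. */
    static void xics_kexec_eoi_sketch(void)
    {
        /* EOI the IPI we are running in: write the vector back into the xirr,
         * leaving the CPPR (top byte) at 0x00 so this cpu keeps rejecting
         * interrupts until the next kernel raises its priority. */
        xirr_write((0x00 << 24) | XICS_IPI);
    }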
This factors out the code for processors joining and leaving the Global
Interrupt Queue into a separate function.
There is a bit of math to calculate the arguments to RTAS for joining or
leaving the global interrupt queue, plus a warning on failure afterwards.
Make a helper for the three callers.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
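A hedged sketch of what such a helper might look like; the helper name, the interrupt_server_size variable and the exact RTAS interface are assumptions based on the description above, not the committed code:

    #define GLOBAL_INTERRUPT_QUEUE 9005     /* RTAS indicator for GIQ membership */

    /* Sketch: interrupt_server_size is assumed to hold ibm,interrupt-server#-size. */
    static void xics_set_cpu_giq_sketch(unsigned int gserver, unsigned int join)
    {
        int index, status;

        /* The GIQ indicator index counts down from the all-ones server number. */
        index = (1UL << interrupt_server_size) - 1 - gserver;

        status = rtas_set_indicator(GLOBAL_INTERRUPT_QUEUE, index, join);
        WARN_ON(status < 0);    /* warn here so the callers don't have to */
    }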
We only need to check the ibm,interrupt-server#-size property once, not
once per global server and thread.
We can use !CONFIG_SMP cpu masks and hard_smp_processor_id() to avoid an ifdef.
Drop the device node reference (of_node_put) when breaking out of the loop on LPAR systems.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Trim unneeded includes from xics.c. We don't use signals or gfp
flags, we use only OF functions and don't need prom, and the 8259
is now handled by our caller.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The xirr is 32 bits in hardware, but the hypervisor requires the upper
bits of the register to be clear on the hcall. By changing the type from
signed to unsigned int we can drop the masking back to 32 bits.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
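A small user-space illustration of why the type change removes the masking: with the CPPR in the top byte, bit 31 of the xirr is often set, so widening a signed 32-bit value to the 64-bit register used for the hcall sign-extends into the upper bits, which then have to be masked off; an unsigned int zero-extends instead.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t  signed_xirr   = (int32_t)0xff000050u;  /* CPPR 0xff, vector 0x50 */
        uint32_t unsigned_xirr = 0xff000050u;

        uint64_t as_signed   = (uint64_t)(int64_t)signed_xirr;  /* sign-extends */
        uint64_t as_unsigned = unsigned_xirr;                   /* zero-extends */

        printf("signed:   %#018llx (upper bits set, must be masked)\n",
               (unsigned long long)as_signed);
        printf("unsigned: %#018llx (already clean for the hcall)\n",
               (unsigned long long)as_unsigned);
        return 0;
    }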
Now that xics_update_irq_servers is called only from init and hotplug
code, it becomes possible to clean up the ordering of functions in the
file, grouping them by the interfaces they implement.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
xics supports only one IPI per cpu, and expects software to use some
queue to know why the interrupt was sent. In Linux, we use an array of
bitmaps indexed by cpu to identify the message. Currently the bits are
set in smp.c and decoded in xics.c, with the data structure in a header
file. Consolidate the code in xics.c, similar to mpic and other
interrupt controllers.
Also, while making the array static, drop the volatile qualifier on the
message word (set_bit and test_and_clear_bit take care of the ordering
for us) and put it under #ifdef CONFIG_SMP.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
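A sketch of the shape this consolidation gives, with illustrative names (qirr_kick() stands in for the qirr MMIO/hcall write): the message words become a static, non-volatile array private to xics.c under CONFIG_SMP, and senders set a bit before poking the target's qirr.

    #ifdef CONFIG_SMP
    /* Private to xics.c: set_bit()/test_and_clear_bit() provide the ordering,
     * so the array needs neither a volatile qualifier nor a header declaration. */
    static unsigned long xics_ipi_message[NR_CPUS] __cacheline_aligned;

    static void xics_cause_ipi_sketch(int target_cpu, int msg)
    {
        set_bit(msg, &xics_ipi_message[target_cpu]);
        mb();                   /* make the message visible before raising the IPI */
        qirr_kick(target_cpu);  /* illustrative stand-in for the qirr write */
    }
    #endif /* CONFIG_SMP */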
Previously the FDT header field boot_cpuid_phys wasn't actually used
on ppc32. Instead the physical boot cpuid was assumed to be 0 for
!CONFIG_SMP.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
There is an old workaround in the sysdev/Makefile for dealing
with arch/ppc vs. arch/powerpc compiles. This is no longer
needed as arch/ppc is dead.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Currently, every time we determine which irq server to use, we check
whether default_server, which is the id of the boot cpu, is still
online. But default_server is a hardware cpu number, not the logical
cpu id needed to index cpu_online_map.
Since the default server can only go offline during a cpu hotplug event,
explicitly check the default server and choose the new one when we move
irqs away from the cpu being offlined.
This has the added benefit of only needing the boot_cpuid to be updated
and not relying on the cpu being marked offline during migrate_irqs_away.
Also, since xics_update_irq_servers only reads device tree information, we
can call it before xics_init_host in xics_init_IRQ and then default_server
will always be valid when we can reach get_irq_server via the host ops.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
When receiving an irq vector that does not have a Linux mapping, the
kernel prints a message and calls RTAS to disable the irq source.
Previously the kernel did not EOI the interrupt, leaving the source
thinking it is still being processed by software. While this does add
an additional layer of protection against interrupt storms should RTAS
fail to disable the source, it also prevents the interrupt from working
when a driver later enables it. (We could alternatively send an EOI on
startup, but that strategy would likely fail on an emulated xics.)
All interrupts should be disabled when the kernel starts, but this can
be observed if a driver does not shut down an interrupt in its reboot
hook before starting a new kernel with kexec.
Michael reports this can be reproduced trivially by banging the keyboard
while kexec'ing on a P5 LPAR: even though the hvc_console driver
requests the console irq later in boot, the console is non-functional
because we receive no console interrupts.
Reported-By: Michael Ellerman
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
A mutex_unlock(&gang->aff_mutex) in spufs_create_context() is missing
in case spufs_context_open() fails. As a result, the spu_create syscall
and spu_get_idle() may block.
This patch adds the mutex_unlock.
Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Andre Detsch <adetsch@br.ibm.com>
Style change: use inc_nlink instead of incrementing i_nlink directly
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Currently, an empty spufs root inode has an nlink count of 1. However,
the directory has two links: / -> spu and /spu/ -> .
This change increments the link count of the root inode in spufs.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
If there are multiple reserved memory blocks via lmb_reserve() that are
contiguous addresses and on different NUMA nodes, we lose track of which
address ranges to reserve in bootmem on which node. I discovered this
when I recently got to try 16GB huge pages on a system with more than
two nodes.
When scanning the device tree in early boot we call lmb_reserve() with
the addresses of the 16G pages that we find so that the memory doesn't
get used for something else. For example, the addresses for the pages
could be 4000000000, 4400000000, 4800000000, 4C00000000, etc. - 8 pages,
one on each of eight nodes. After all the pages have been reserved, the
lmb will look something like the following:
lmb_dump_all:
memory.cnt = 0x2
memory.size = 0x3e80000000
memory.region[0x0].base = 0x0
.size = 0x1e80000000
memory.region[0x1].base = 0x4000000000
.size = 0x2000000000
reserved.cnt = 0x5
reserved.size = 0x3e80000000
reserved.region[0x0].base = 0x0
.size = 0x7b5000
reserved.region[0x1].base = 0x2a00000
.size = 0x78c000
reserved.region[0x2].base = 0x328c000
.size = 0x43000
reserved.region[0x3].base = 0xf4e8000
.size = 0xb18000
reserved.region[0x4].base = 0x4000000000
.size = 0x2000000000
The reserved.region[0x4] entry contains the 16G pages. In
arch/powerpc/mm/numa.c:do_init_bootmem() we loop through each of the
node numbers looking for the reserved regions that belong to the
particular node. That loop cannot identify region 0x4 as being part of
each of the 8 nodes; it assumes a reserved region sits on a single
node.
This patch takes out the reserved region loop from inside
the loop that goes over each node. It looks up the active region containing
the start of the reserved region. If it extends past that active region then
it adjusts the size and gets the next active region containing it.
Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Commit 9b09c6d909dfd8de96b99b9b9c808b94b0a71614 ("powerpc: Change the
default link address for pSeries zImage kernels") changed the
real-base value in the CHRP note added by the addnote program from
12MB to 32MB to give more space for Open Firmware to load the zImage.
(The real-base value says where we want OF to position itself in
memory.) However, this change was ineffective on most pSeries
machines, because the RPA note added by addnote has the "ignore me"
flag set to 1. This was intended to tell OF to ignore just the RPA
note, but has the side effect of also making OF ignore the CHRP note
(at least on most pSeries machines).
To solve this we have to set the "ignore me" flag to 0 in the RPA
note. (We can't just omit the RPA note because that is equivalent to
having an RPA note with default values, and the default values are not
what we want.) However, then we have to make sure the values in the
zImage's RPA note match up with the values that the kernel supplies
later in prom_init.c with either the ibm,client-architecture-support
call or the process-elf-header call in prom_send_capabilities().
So this sets the "ignore me" flag in the RPA note in addnote to 0, and
adjusts the RPA note values in addnote.c and in prom_init.c to be
consistent with each other and with the values in ibm_architecture_vec.
However, since the wrapper is independent of the kernel, this doesn't
ensure that the notes will stay consistent. To ensure that, this adds
code to addnote.c so that it can extract the kernel's RPA note from
the kernel binary and put that in the zImage. To that end, we put the
kernel's fake ELF header (which contains the kernel's RPA note) into
its own section, and arrange for the wrapper to pull out that section with
objcopy and pass it to addnote, which then extracts the RPA note from
it and transfers it to the zImage.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The powerpc 32-bit and 64-bit kernel_thread functions don't properly
propagate errors being returned by the clone syscall. (In the case of
error, the syscall exit code returns a positive errno in r3 and sets
the CR0[SO] bit.)
This patch fixes that by negating r3 if CR0[SO] is set after the syscall.
Signed-off-by: Josh Poimboeuf <jpoimboe@us.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Offset is unsigned, and when an address isn't found in the vma map
vma_map_lookup() returns the vma physical address + 0x10000000.
vma_map_lookup() used to return 0xffffffff on a failed lookup, but a
change was made to return the vma physical address + 0x10000000 instead.
There are two callers of vma_map_lookup(): one of them correctly deals
with this new return value, but the other (below) did not.
Signed-off-by: Roel Kluin <12o3l@tiscali.nl>
Acked-by: Maynard Johnson <maynardj@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Testing hotplug memory remove has revealed that we can oops in
pseries_lmb_remove(). The incorrect shift causes a NULL pointer
dereference in the page_zone() inline routine.
I have only been able to reproduce the oops on kernels with large pages
enabled.
Tested on Power5 and Power6 with and without large pages enabled.
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
When manipulating 64-bit PCI addresses, the code would lose the top
32 bits in a couple of places when shifting a pfn, due to a missing cast
from the 32-bit pfn to a 64-bit resource type before the shift.
This breaks using newer X servers, for example on 440 machines with
PCI space above the 32-bit boundary.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
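A plain C illustration of the bug class, not the driver code itself: shifting a 32-bit pfn left by PAGE_SHIFT is done in 32-bit arithmetic and silently drops the top bits, so the pfn has to be widened to the 64-bit resource type before the shift.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    typedef uint64_t resource_size_t;   /* 64-bit resources, as with PCI above 4GB */

    int main(void)
    {
        uint32_t pfn = 0x480000;    /* a page at physical 0x4_8000_0000, above 4GB */

        resource_size_t wrong = pfn << PAGE_SHIFT;                  /* 32-bit shift: truncates */
        resource_size_t right = (resource_size_t)pfn << PAGE_SHIFT; /* widen first */

        printf("wrong: %#llx\n", (unsigned long long)wrong);    /* 0x80000000 */
        printf("right: %#llx\n", (unsigned long long)right);    /* 0x480000000 */
        return 0;
    }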
rtas_log_read() doesn't check the file flags for O_NONBLOCK, and so
blocks non-blocking readers of /proc/ppc64/rtas/error_log when there is
no data available. This fixes it.
Also, rtas_log_read() now returns ENODATA to avoid suspending a process
in wait_event_interruptible() when the logging facility has been
switched off and the log is already empty.
Signed-off-by: Vitaly Mayatskikh <v.mayatskih@gmail.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
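A hedged sketch of the two checks described above; rtas_log_empty(), logging_enabled() and the rtas_log_wait wait queue are illustrative stand-ins for the driver's own state:

    static ssize_t rtas_log_read_sketch(struct file *file, char __user *buf,
                                        size_t count, loff_t *ppos)
    {
        while (rtas_log_empty()) {
            if (!logging_enabled())
                return -ENODATA;    /* logging is off and the log is empty */
            if (file->f_flags & O_NONBLOCK)
                return -EAGAIN;     /* don't block a non-blocking reader */
            if (wait_event_interruptible(rtas_log_wait, !rtas_log_empty()))
                return -ERESTARTSYS;
        }
        /* ... copy one logged event out to buf ... */
        return 0;
    }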
Fix various defconfigs for Freescale chip based boards to remove
CONFIG_PPC_PMAC or CONFIG_PPC_CHRP, which crept in due to those options
being default y.
Signed-off-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
powerpc uses CONFIG_FORCE_MAX_ZONEORDER, and some things depend on it
being at least 10 when 64k pages are not configured (notably the dart
iommu code with CONFIG_PM). The defaults are fine, but when going from a
64K pages config to one without 64K pages, MAX_ORDER stays at 9 which is
too low for 4K pages.
This patch makes the Kconfig enforce at least the defaults.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Acked-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
A bug in my initial 64-bit hibernation code breaks it when using
page sizes that aren't 4K.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Commit 8b150478 ("ppc: make phys_mem_access_prot() work with pfns
instead of addresses") fixed page_is_ram() in arch/ppc to avoid overflow
for addresses above 4G on 32-bit kernels. However arch/powerpc's
page_is_ram() is missing the same fix -- it computes a physical address
by doing pfn << PAGE_SHIFT, which overflows if pfn corresponds to a page
above 4G.
In particular this causes pages above 4G to be mapped with the wrong
caching attribute; for example many ppc440-based SoCs have PCI space
above 4G, and mmap()ing MMIO space may end up with a mapping that has
caching enabled.
Fix this by working with the pfn and avoiding the conversion to a
physical address that causes the overflow. This patch compares the
pfn to max_pfn, which is a semantic change from the old code -- that
code compared the physical address to high_memory, which corresponds
to max_low_pfn. However, I think that was another bug, since highmem
pages are still RAM.
Reported-by: vb <vb@vsbe.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
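A minimal sketch of the fixed check on 32-bit, assuming max_pfn is the intended upper bound; the point is that the comparison stays in pfn space, so nothing is shifted into (and truncated out of) 32 bits:

    /* Old shape (overflows above 4GB on 32-bit):
     *     unsigned long paddr = pfn << PAGE_SHIFT;
     *     return paddr < __pa(high_memory);
     */
    int page_is_ram_sketch(unsigned long pfn)
    {
        /* Compare pfns directly; max_pfn also covers highmem, which is still RAM. */
        return pfn < max_pfn;
    }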
Add a .gitignore in arch/powerpc/kernel to ignore the generated
vmlinux.lds.
Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Add a defconfig for the AMCC Arches evaluation board
Signed-off-by: Victor Gallardo <vgallardo@amcc.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Basic functionality for the AMCC Arches evaluation board.
Signed-off-by: Victor Gallardo <vgallardo@amcc.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
The Arches evaluation board is based on the AMCC 460GT SoC. This board
is a dual-processor board, with each processor providing independent
resources for RapidIO, Gigabit Ethernet, and serial communications.
Each 460GT has its own 512MB DDR2 memory, 32MB NOR FLASH, UART, EEPROM
and temperature sensor, along with a shared debug port.
The two 460GTs communicate with each other via shared memory, Gigabit
Ethernet and x1 PCI Express.
Signed-off-by: Victor Gallardo <vgallardo@amcc.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Add support for the phy types found on the Arches and other
PowerPC 460 based boards.
Signed-off-by: Victor Gallardo <vgallardo@amcc.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
This patch allows the 4xx (conventional) PCI bridge to be disabled
via the device tree. This is needed for 4xx PCI adapter hardware.
Use the PCI node's status property to disable the PCI bridge.
Signed-off-by: Matthias Fuchs <matthias.fuchs@esd-electronics.com>
Acked-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
The PowerPC 405EZ SoC has some differences in the interrupt layout and
handling for the MAL. The SERR, TXDE, and RXDE interrupts are OR'd into
a single interrupt. Also, due to the possibility of interrupt coalescing,
the TXEOB and RXEOB interrupts require an interrupt bit to be cleared in
the ICINTSTAT SDR.
This sets the proper MAL feature bits for 405EZ boards, and adds a common
shared handler for SERR, TXDE, and RXDE. The defines for the ICINTSTAT DCR
are added to the proper header file as well.
This has been adapted from code originally written by Stefan Roese.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
This rearranges a bit of code and adds support for 36-bit physical
addressing for configs that use a hashed page table. The 36-bit
physical support is not enabled by default on any config - it must be
explicitly enabled via the config system.
This patch *only* expands the page table code to accommodate large
physical addresses on 32-bit systems and enables the PHYS_64BIT config
option for 86xx. It does *not* allow you to boot a board with more than
about 3.5GB of RAM - for that, SWIOTLB support is also required (and
coming soon).
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Implement _PAGE_SPECIAL and pte_special() for 32-bit powerpc. This bit
will be used by the fast get_user_pages() to differentiate PTEs that
correspond to a valid struct page from special mappings that don't, such
as IO mappings obtained via io_remap_pfn_range().
We currently only implement this on the sub-arches that support SMP or
will do so in the future (6xx, 44x, FSL BookE), and not on 8xx or 40x.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
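A sketch of the kind of accessors this provides, following the usual pte_val()/__pte() helpers; exactly where the bit lives differs per sub-arch:

    /* _PAGE_SPECIAL is a software-only PTE bit on the sub-arches that have one spare. */
    static inline int pte_special(pte_t pte)
    {
        return pte_val(pte) & _PAGE_SPECIAL;
    }

    static inline pte_t pte_mkspecial(pte_t pte)
    {
        return __pte(pte_val(pte) | _PAGE_SPECIAL);
    }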
There are some minor issues with supporting 64-bit PTEs on a 32-bit
processor when dealing with SMP.
* We need to order the stores in set_pte_at to make sure the flag word
is set second (a sketch of this ordering follows below).
* Change pte_clear to use pte_update so only the flag word is cleared.
* Added a WARN_ON to set_pte_at to ensure the pte isn't already present
for the 64-bit pte/SMP case (to verify our assumption of this fact).
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Becky Bruce <becky.bruce@freescale.com>
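A C-level sketch of the ordering requirement in the first point (the real code may use inline assembly and a different barrier); it assumes a big-endian layout in which word 1, the least-significant half of the 64-bit PTE, carries the flags including _PAGE_PRESENT:

    static inline void set_pte_at_sketch(pte_t *ptep, pte_t pte)
    {
        u32 *dst = (u32 *)ptep;
        u32 *src = (u32 *)&pte;

        WARN_ON(pte_val(*ptep) & _PAGE_PRESENT);    /* must not already be present */

        dst[0] = src[0];    /* upper physical-address bits first ... */
        smp_wmb();          /* ... ordered before ... */
        dst[1] = src[1];    /* ... the flag word that makes the PTE visible */
    }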
Introduce a new set of low level tlb invalidate functions that do not
broadcast invalidates on the bus:
_tlbil_all - invalidate all
_tlbil_pid - invalidate based on process id (or mm context)
_tlbil_va - invalidate based on virtual address (ea + pid)
On non-SMP configs _tlbil_all should be functionally equivalent to
_tlbia, and _tlbil_va should be functionally equivalent to _tlbie.
The intent of this change is to handle SMP-based invalidates via IPIs
instead of broadcasts, as that mechanism scales better for larger
numbers of cores.
On e500 (FSL BookE MMU) based cores, move to using MMUCSR for
invalidate-all and tlbsx/tlbwe for invalidating by virtual address.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
We essentially adopt the 64-bit dma code, with some changes to support
32-bit systems, including HIGHMEM. dma functions on 32-bit are now
invoked via accessor functions which call the correct op for a device based
on archdata dma_ops. If there is no archdata dma_ops, this defaults
to dma_direct_ops.
In addition, the dma_map/unmap_page functions are added to dma_ops
because we can't just fall back on map/unmap_single when HIGHMEM is
enabled. In the case of dma_direct_*, we stop using map/unmap_single
and just use the page version - this saves a lot of ugly
ifdeffing. We leave map/unmap_single in the dma_ops definition,
though, because they are needed by the iommu code, which does not
implement map/unmap_page. Ideally, going forward, we will completely
eliminate map/unmap_single and just have map/unmap_page, if it's
workable for 64-bit.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
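A sketch of the accessor pattern this describes; the archdata.dma_ops field and dma_direct_ops follow the powerpc convention of the time, but the op signature shown here is illustrative rather than exact:

    /* Pick the per-device ops if present, otherwise fall back to the direct ops. */
    static inline struct dma_mapping_ops *get_dma_ops_sketch(struct device *dev)
    {
        if (dev && dev->archdata.dma_ops)
            return dev->archdata.dma_ops;
        return &dma_direct_ops;
    }

    /* map_page takes a struct page, so it also covers HIGHMEM pages that have no
     * permanent kernel mapping, unlike a map_single path that needs a vaddr. */
    static inline dma_addr_t dma_map_page_sketch(struct device *dev, struct page *page,
                                                 unsigned long offset, size_t size,
                                                 enum dma_data_direction dir)
    {
        return get_dma_ops_sketch(dev)->map_page(dev, page, offset, size, dir);
    }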
Use the struct device's numa_node instead; use accessor functions
to get/set numa_node.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
32-bit platforms are about to start using dma.c; move the iommu
dma ops into their own file to make this a bit cleaner.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
This is in preparation for the merge of the 32 and 64-bit
dma code in arch/powerpc.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
The math emulation code is centered around a set of generic macros that
provide the core of the emulation and are shared by the various
architectures and other projects (like glibc). Each arch implements its
own sfp-machine.h to specify various arch-specific details.
For historic reasons that are now lost, the powerpc math-emu code had
its own version of the common headers. This moves us to using the
generic kernel version and thus picking up fixes when those are updated.
Also cleaned up exception/error reporting from the FP emulation
functions.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>