path: root/arch/mips/lib
* MIPS: tlb-r3k: Move CP0.Wired register initialisation to `tlb_init'
  Maciej W. Rozycki, 2015-06-21 (1 file changed, -2/+0)

  Move the initialisation of the CP0.Wired register implemented by Toshiba
  TX3922 and TX3927 processors from `tx39_cache_init' to `tlb_init' where it
  belongs, correcting code structure and making sure initialisation does not
  rely on `tx39_cache_init' being called before `tlb_init' to work correctly.
  Make `r3k_have_wired_reg' static as it's no longer externally referred to;
  remove a stale declaration too.

  Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
  Cc: James Hogan <james.hogan@imgtec.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/10195/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: dump_tlb: Take XPA into account
  James Hogan, 2015-06-21 (1 file changed, -5/+13)

  XPA extends the physical addresses on MIPS32, including the EntryLo
  registers. Update dump_tlb() to concatenate the PFNX field from the high
  end of the EntryLo registers (as read by mfhc0). The printed widths of
  physical and virtual addresses are also decoupled, so that only 8 nibbles
  of virtual address are shown, but 11 nibbles of physical address with XPA.

  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Cc: Steven J. Hill <Steven.Hill@imgtec.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/10077/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: dump_tlb: Take RI/XI bits into account
  James Hogan, 2015-06-21 (1 file changed, -7/+20)

  The RI/XI bits, when present, are above the PFN field in the EntryLo
  registers: at bits 63,62 when read with dmfc0, and bits 31,30 when read
  with mfc0. This makes them appear as part of the physical address, since
  the other bits are masked with PAGE_MASK, for example:

    Index: 253 pgmask=16kb va=77b18000 asid=75
	    [pa=1000744000 c=5 d=1 v=1 g=0] [pa=100134c000 c=5 d=1 v=1 g=0]

  The physical addresses have bit 36 set, which corresponds to bit 30 of
  EntryLo1, the XI bit.

  Explicitly mask off the RI and XI bits from the printed physical address,
  and print the RI and XI bits separately if they exist, giving output more
  like this:

    Index: 226 pgmask=16kb va=77be0000 asid=79
	    [ri=0 xi=1 pa=01288000 c=5 d=1 v=1 g=0] [ri=0 xi=0 pa=010e4000 c=5 d=0 v=1 g=0]

  Cc: linux-mips@linux-mips.org
  Cc: James Hogan <james.hogan@imgtec.com>
  Cc: David Daney <ddaney@caviumnetworks.com>
  Patchwork: https://patchwork.linux-mips.org/patch/10080/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
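As a rough C sketch of the masking just described, using the 32-bit EntryLo
layout given above (G/V/D at bits 0-2, cache attribute at bits 3-5, PFN from
bit 6, XI/RI at bits 30/31); the names and print format are illustrative, not
the actual dump_tlb() code:

	#include <stdio.h>

	#define ENTRYLO_XI	(1u << 30)
	#define ENTRYLO_RI	(1u << 31)

	static void print_entrylo(unsigned long long entrylo)
	{
		int ri = (entrylo & ENTRYLO_RI) != 0;
		int xi = (entrylo & ENTRYLO_XI) != 0;
		unsigned long long pa;

		/* Drop RI/XI before forming the PA, then shift the PFN into
		 * place: PA = PFN * 4K = (EntryLo >> 6) << 12 = EntryLo << 6,
		 * clearing the C/D/V/G residue left in the low 12 bits. */
		entrylo &= ~(unsigned long long)(ENTRYLO_RI | ENTRYLO_XI);
		pa = (entrylo << 6) & ~0xfffULL;

		printf("[ri=%d xi=%d pa=%08llx c=%llu d=%llu v=%llu g=%llu]\n",
		       ri, xi, pa,
		       (entrylo >> 3) & 7, (entrylo >> 2) & 1,
		       (entrylo >> 1) & 1, entrylo & 1);
	}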
* MIPS: dump_tlb: Take EHINV bit into account
  James Hogan, 2015-06-21 (1 file changed, -0/+3)

  The EHINV bit in EntryHi allows a TLB entry to be properly marked invalid,
  so that EntryHi doesn't have to be set to a unique value to avoid machine
  check exceptions due to multiple matching entries. Unfortunately
  dump_tlb() doesn't take this into account, so it will print all the
  uninteresting invalid TLB entries if the current ASID happens to be 00.
  Therefore add a condition to skip entries which are marked invalid with
  the EHINV bit.

  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/10076/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
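The added skip amounts to a condition of this shape inside the dump loop (a
sketch; the MIPS_ENTRYHI_EHINV bit name and the cpu_has_tlbinv guard are
assumed spellings):

	/* EHINV-invalidated entries map nothing; skip them before any
	 * ASID comparison so they are never printed. */
	if (cpu_has_tlbinv && (entryhi & MIPS_ENTRYHI_EHINV))
		continue;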
* MIPS: dump_tlb: Take global bit into account
  James Hogan, 2015-06-21 (2 files changed, -3/+12)

  The TLB only matches the ASID when the global bit isn't set, so dump_tlb()
  shouldn't really be skipping global entries just because the ASID doesn't
  match. Fix the condition to read the TLB entry's global bit from EntryLo0.
  Note that after a TLB read the global bits in both EntryLo registers
  reflect the same global bit in the TLB entry.

  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Cc: Maciej W. Rozycki <macro@linux-mips.org>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/10079/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
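In outline, the corrected match looks like this sketch (mask names
illustrative):

	/* Global entries ignore the ASID: match when G is set OR the ASIDs
	 * agree. After tlbr, EntryLo0.G and EntryLo1.G both reflect the
	 * entry's single G bit, so testing EntryLo0 suffices. */
	if ((entrylo0 & ENTRYLO_G) || (entryhi & ASID_MASK) == asid)
		dump_entry(idx);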
* MIPS: dump_tlb: Make use of EntryLo bit definitions
  James Hogan, 2015-06-21 (2 files changed, -12/+12)

  Make use of the recently added EntryLo bit definitions in mipsregs.h when
  dumping TLB contents.

  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Cc: Maciej W. Rozycki <macro@linux-mips.org>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/10075/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: dump_tlb: Refactor TLB matching
  James Hogan, 2015-06-21 (1 file changed, -30/+35)

  Refactor the TLB matching code in dump_tlb() slightly so that the
  conditions which can cause a TLB entry to be skipped can be more easily
  extended. This should prevent the match condition from getting unwieldy
  once it is updated to take further conditions into account.

  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/10081/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: dump_tlb: Use tlbr hazard macros
  James Hogan, 2015-06-21 (1 file changed, -8/+3)

  Use the new TLB read hazard macros from <asm/hazards.h> rather than the
  local BARRIER() macro, which uses 7 ops regardless of the kernel
  configuration. We use mtc0_tlbr_hazard for the hazard between mtc0 to the
  index register and the tlbr, and tlb_read_hazard for the hazard between
  the tlbr and the mfc0 of the TLB registers written by tlbr.

  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/10074/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: strnlen_user.S: Fix a CPU_DADDI_WORKAROUNDS regression
  Maciej W. Rozycki, 2015-05-29 (1 file changed, -2/+13)

  Correct a regression introduced with 8453eebd [MIPS: Fix strnlen_user()
  return value in case of overlong strings.] causing assembler warnings and
  broken code generated in __strnlen_kernel_nocheck_asm:

    arch/mips/lib/strnlen_user.S: Assembler messages:
    arch/mips/lib/strnlen_user.S:64: Warning: Macro instruction expanded
    into multiple instructions in a branch delay slot

  with the CPU_DADDI_WORKAROUNDS option set, resulting in the function
  looping indefinitely upon mounting NFS root.

  Use conditional assembly to avoid a microMIPS code size regression. Using
  $at unconditionally would cause such a regression as there are no 16-bit
  instruction encodings available for ALU operations using this register.
  Using $v1 unconditionally would produce short microMIPS encodings, but
  would prevent this register from being used across calls to this function.

  The extra LI operation introduced is free, replacing a NOP originally
  scheduled into the delay slot of the branch that follows.

  Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/10205/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: csum_partial: Improve instruction parallelism.
  Chen Jie, 2015-04-01 (1 file changed, -19/+19)

  Computing the sum introduces true data dependencies. This patch removes
  some of these true data dependencies and hence increases instruction level
  parallelism. It brings up to a 50% csum performance gain on Loongson 3a.

  One example of how this patch works is in CSUM_BIGCHUNK1:

	// ** original **		// ** patch applied **
	ADDC(sum, t0)			ADDC(t0, t1)
	ADDC(sum, t1)			ADDC(t2, t3)
	ADDC(sum, t2)			ADDC(sum, t0)
	ADDC(sum, t3)			ADDC(sum, t2)

  In the original implementation, each ADDC(sum, ...) depends on the sum
  value updated by the previous ADDC (as a source operand). With this patch
  applied, the first two ADDC operations are independent, hence can be
  executed simultaneously if possible.

  Another example is in the "copy and sum calculating chunk":

	// ** original **		// ** patch applied **
	STORE(t0, UNIT(0) ...		STORE(t0, UNIT(0) ...
	ADDC(sum, t0)			ADDC(t0, t1)
	STORE(t1, UNIT(1) ...		STORE(t1, UNIT(1) ...
	ADDC(sum, t1)			ADDC(sum, t0)
	STORE(t2, UNIT(2) ...		STORE(t2, UNIT(2) ...
	ADDC(sum, t2)			ADDC(t2, t3)
	STORE(t3, UNIT(3) ...		STORE(t3, UNIT(3) ...
	ADDC(sum, t3)			ADDC(sum, t2)

  With this patch applied, each ADDC is independent of the ADDC after next.

  Signed-off-by: chenj <chenj@lemote.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/9608/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
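The same transformation in portable C, as a hedged illustration of the
principle rather than the actual assembly: folding independent pairs first
shortens the serial dependency chain through `sum'.

	#include <stddef.h>
	#include <stdint.h>

	/* Add with end-around carry, the way the ADDC macro does. */
	static uint32_t addc(uint32_t sum, uint32_t v)
	{
		uint64_t t = (uint64_t)sum + v;

		return (uint32_t)t + (uint32_t)(t >> 32);
	}

	uint32_t csum_words(const uint32_t *p, size_t n, uint32_t sum)
	{
		size_t i;

		for (i = 0; i + 4 <= n; i += 4) {
			uint32_t a = addc(p[i], p[i + 1]);	/* independent of... */
			uint32_t b = addc(p[i + 2], p[i + 3]);	/* ...this one */

			sum = addc(sum, a);
			sum = addc(sum, b);
		}
		for (; i < n; i++)
			sum = addc(sum, p[i]);
		return sum;
	}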
* MIPS: lib: memset: Add MIPS R6 support
  Leonid Yegoshin, 2015-02-17 (1 file changed, -0/+47)

  MIPS R6 dropped the unaligned load and store instructions, so we need to
  re-write this part of the code for R6 to store one byte at a time.

  Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: memcpy: Add MIPS R6 support
  Leonid Yegoshin, 2015-02-17 (1 file changed, -0/+23)

  MIPS R6 does not support the unaligned load and store instructions, so we
  add a special MIPS R6 case to copy one byte at a time if we need to
  read/write to unaligned memory addresses.

  Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
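A hedged C sketch of the strategy (the real implementation is MIPS assembly
in arch/mips/lib/memcpy.S; this only illustrates falling back to byte
accesses where alignment is unknown):

	#include <stddef.h>
	#include <stdint.h>

	void *r6_memcpy(void *dst, const void *src, size_t n)
	{
		uint8_t *d = dst;
		const uint8_t *s = src;

		/* Word-copy only when both pointers share alignment. */
		if (((uintptr_t)d & 3) == ((uintptr_t)s & 3)) {
			while (((uintptr_t)d & 3) && n) {	/* align head */
				*d++ = *s++;
				n--;
			}
			while (n >= 4) {			/* aligned body */
				*(uint32_t *)d = *(const uint32_t *)s;
				d += 4;
				s += 4;
				n -= 4;
			}
		}
		while (n--)	/* misaligned pair, or the tail: bytes only */
			*d++ = *s++;
		return dst;
	}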
* MIPS: asm: irqflags: Add MIPS R6 related definitions
  Markos Chandras, 2015-02-17 (1 file changed, -1/+1)

  Add the MIPS R6 related definitions to the IRQ related macros.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: Use generic checksum functions for MIPS R6
  Markos Chandras, 2015-02-17 (1 file changed, -0/+1)

  The following instructions have been removed from MIPS R6: ulw, ulh, swl,
  lwr, lwl, swr. However, all of them are used in the MIPS specific checksum
  implementation. As a result, we use the generic checksum implementation on
  MIPS R6.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: memset: Clean up some MIPS{EL,EB} ifdefery
  Markos Chandras, 2014-11-24 (1 file changed, -4/+2)

  The toolchain defines exactly one of __MIPSEB__ and __MIPSEL__. As a
  result, simplify the ifdefery a little bit.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/8522/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
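Illustrative before/after of the simplification meant here; the macro name is
hypothetical, not taken from memset.S:

	/* Before: one guard per endianness, although the toolchain always
	 * defines exactly one of __MIPSEB__ / __MIPSEL__. */
	#ifdef __MIPSEB__
	#define FIRST_BYTE_SHIFT 24
	#endif
	#ifdef __MIPSEL__
	#define FIRST_BYTE_SHIFT 0
	#endif

	/* After: a single #else covers the other case. */
	#ifdef __MIPSEB__
	#define FIRST_BYTE_SHIFT 24
	#else
	#define FIRST_BYTE_SHIFT 0
	#endif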
* MIPS: iomap: Use __mem_{read,write}{b,w,l} for MMIO
  Markos Chandras, 2014-11-24 (1 file changed, -9/+9)

  Using the __raw_{read,write}{b,w,l} functions to perform repeatable MMIO
  could result in problems if the host bus does not match the endianness of
  the PCI/ISA. This problem is visible on big-endian SEAD3 configurations
  after commit 2925f6c0c7af32720dcbadc586463aeceb6baa22 "net: smc911x: use
  io{read,write}*_rep accessors", which effectively moved away from using
  the __mem_* variants to the __raw_* ones and causes a kernel bug as
  follows:

    CPU 0 Unable to handle kernel paging request at virtual address
    00000000, epc == 00000000, ra == 8012b3b0
    Oops[#1]:
    Cpu 0
    $ 0   : 00000000 00000065 00000000 00000004
    $ 4   : 00000000 00000000 9a82dd60 00000000
    $ 8   : 00000000 00000000 a00ae278 00000007
    $12   : 0000000e 00000011 804c4228 ffff9411
    $16   : 00000100 00000000 80560000 807fc6d0
    $20   : 807fc8d0 807fcad0 807fbec0 00000100
    $24   : 00009150 80109be0
    $28   : 9a82c000 9a82dd28 00000001 8012b3b0
    Hi    : 00000000
    Lo    : 00000000
    epc   : 00000000 (null)
    Not tainted
    ra    : 8012b3b0 call_timer_fn.isra.39+0x24/0x84
    Status: 10009503 KERNEL EXL IE
    Cause : 00800808
    BadVA : 00000000
    PrId  : 00019c20 (MIPS M14Kc)
    Modules linked in:
    Process swapper (pid: 1, threadinfo=9a82c000, task=9a82ba18,
    tls=00000000)
    Stack : 00000040 00000000 00000007 8056732c 80580000 00000001 9a82dd60 00200200
	    80560000 8012b598 8056732c 80580000 00000001 00000000 9a82dd60 9a82dd60
	    00000000 807fbd44 807fbd40 805664e0 0000000a 80800000 00000004 80125924
	    0000fda0 000007f0 80000000 00000001 80800000 007f0000 00200140 80166338
	    00000000 8100fda0 0000fda0 000007f0 80000000 00000001 80800000 007f0000
	    ...
    Call Trace:
    [<8012b598>] run_timer_softirq+0x188/0x1f4
    [<80125924>] __do_softirq+0xc4/0x18c
    [<80166338>] handle_percpu_irq+0x54/0x84
    [<80125aa4>] do_softirq+0x68/0x70
    [<80103b50>] do_IRQ+0x18/0x28
    [<80125d1c>] irq_exit+0x94/0xc0
    [<80125aa4>] do_softirq+0x68/0x70
    [<80102130>] ret_from_irq+0x0/0x4
    [<80102130>] ret_from_irq+0x0/0x4
    [<80125d1c>] irq_exit+0x94/0xc0
    [<803165b0>] __bzero+0xd4/0x164
    [<80346d0c>] mem32_serial_out+0x0/0x1c
    [<8010d4ac>] free_init_pages+0x98/0xfc
    [<80180a08>] free_hot_cold_page+0x2c/0x1c4
    [<80180bd8>] __free_pages+0x38/0x98
    [<8010d4a0>] free_init_pages+0x8c/0xfc
    [<8010d4ac>] free_init_pages+0x98/0xfc
    [<8049fb04>] kernel_init+0x28/0x15c
    [<80147484>] schedule_tail+0x1c/0x60
    [<8049fadc>] kernel_init+0x0/0x15c
    [<80102178>] ret_from_kernel_thread+0x14/0x1c
    [<8040a06f>] skb_pad+0xe7/0x13c

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
  Cc: Steve Glendinning <steve.glendinning@shawell.net>
  Cc: Ben Boeckel <mathstuf@gmail.com>
  Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Cc: David S. Miller <davem@davemloft.net>
  Cc: netdev@vger.kernel.org
  Cc: Jeffrey Deans <Jeffrey.Deans@imgtec.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/6672/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: lib: mips-atomic.c: Remove obsolete ifdefery
  Markos Chandras, 2014-11-24 (1 file changed, -20/+0)

  Having #ifdefs just to guard comments is not really helpful, so drop them.
  Moreover, the code wasn't really reached anyway, since there is a #ifndef
  CONFIG_CPU_MIPSR2 at the top of the file.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/8513/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: R3000: Remove redundant parentheses
  Isamu Mogi, 2014-11-24 (1 file changed, -1/+1)

  Signed-off-by: Isamu Mogi <isamu@leafytree.jp>
  Cc: linux-mips@linux-mips.org
  Cc: linux-kernel@vger.kernel.org
  Patchwork: https://patchwork.linux-mips.org/patch/8292/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: R3000: Replace magic numbers with macros
  Isamu Mogi, 2014-11-24 (1 file changed, -5/+6)

  Also include asm/mmu_context.h for ASID_MASK.

  Signed-off-by: Isamu Mogi <isamu@leafytree.jp>
  Cc: linux-mips@linux-mips.org
  Cc: linux-kernel@vger.kernel.org
  Patchwork: https://patchwork.linux-mips.org/patch/8291/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: Remove __strlen_user().
  Ralf Baechle, 2014-11-24 (1 file changed, -3/+0)

  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: lib: memcpy: Restore NOP on delay slot before returning to caller
  Markos Chandras, 2014-11-19 (1 file changed, -0/+1)

  Commit cf62a8b8134dd3 ("MIPS: lib: memcpy: Use macro to build the
  copy_user code") switched to a macro in order to build the memcpy symbols
  in preparation for the EVA support. However, this commit also removed the
  NOP instruction after the 'jr ra' when returning back to the caller. This
  had no visible side-effects since the next instruction was a load to the
  t0 register which was already in the clobbered list, but it may have
  undesired effects in the future if some other code is introduced in
  between the .Ldone and the .Ll_exc_copy labels.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
  Cc: <stable@vger.kernel.org> # v3.15+
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/8512/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: R3000: Fix debug output for Virtual page number
  Isamu Mogi, 2014-11-06 (1 file changed, -2/+2)

  The virtual page number of the R3000 in EntryHi is 20 bits wide, counted
  from the MSB. But in dump_tlb(), the bit mask used to read it from EntryHi
  covers only 19 bits (0xffffe000). The patch fixes the mask to 0xfffff000.

  Signed-off-by: Isamu Mogi <isamu@leafytree.jp>
  Cc: linux-mips@linux-mips.org
  Cc: linux-kernel@vger.kernel.org
  Patchwork: https://patchwork.linux-mips.org/patch/8290/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: Fix strnlen_user() return value in case of overlong strings.
  Ralf Baechle, 2014-11-04 (1 file changed, -2/+4)

  We were returning maxlen, like the userland strnlen(), if no '\0'
  character was encountered, while the kernel version is expected to return
  a value larger than maxlen in that case. Fixed to return maxlen + 1.

  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
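A sketch of what the corrected contract means for a caller (illustrative; the
0-on-fault convention is the usual kernel one, assumed here):

	#include <linux/errno.h>
	#include <linux/uaccess.h>

	static long check_user_string(const char __user *ustr, long maxlen)
	{
		long n = strnlen_user(ustr, maxlen);

		if (n == 0)
			return -EFAULT;		/* faulted while reading */
		if (n > maxlen)
			return -ENAMETOOLONG;	/* no '\0' within maxlen */
		return n;			/* length including '\0' */
	}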
* MIPS: Use WSBH/DSBH/DSHD on Loongson 3A
  Chen Jie, 2014-09-22 (1 file changed, -2/+8)

  Signed-off-by: chenj <chenj@lemote.com>
  Cc: linux-mips@linux-mips.org
  Cc: chenhc@lemote.com
  Patchwork: https://patchwork.linux-mips.org/patch/7542/
  Patchwork: https://patchwork.linux-mips.org/patch/7550/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
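For context, a hedged sketch of the kind of two-instruction byte swap these
instructions enable (MIPS R2-style encodings, which this commit lets
Loongson 3A use; the kernel's real implementation sits behind feature tests
in <asm/swab.h>, and this compiles only for a MIPS target):

	#include <stdint.h>

	static inline uint32_t swab32_r2(uint32_t x)
	{
		uint32_t t;

		__asm__("wsbh	%0, %1\n\t"	/* AABBCCDD -> BBAADDCC */
			"rotr	%0, %0, 16"	/* BBAADDCC -> DDCCBBAA */
			: "=r" (t)
			: "r" (x));
		return t;
	}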
* MIPS: __delay ABI-dependent subtraction simplification
  Maciej W. Rozycki, 2014-05-30 (1 file changed, -5/+3)

  This small update to the previous fix to __delay removes a conditional
  around the ABI-dependent subtraction operation within an inline asm in
  favor of the standard <asm/asm.h> LONG_SUBU macro. No change in code
  produced.

  Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/6703/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: MT: Remove SMTC support
  Ralf Baechle, 2014-05-24 (1 file changed, -40/+6)

  Nobody is maintaining SMTC anymore and there also seems to be no userbase.
  Which is a pity - the SMTC technology primarily developed by Kevin D.
  Kissell <kevink@paralogos.com> is an ingenious demonstration of the MT
  ASE's power and elegance.

  Based on Markos Chandras <Markos.Chandras@imgtec.com> patch
  https://patchwork.linux-mips.org/patch/6719/ which, while very similar, no
  longer applied cleanly when I tried to merge it, plus some additional
  post-SMTC cleanup - SMTC was a feature as tricky to remove as it was to
  merge once upon a time.

  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: csum_partial.S CPU_DADDI_WORKAROUNDS bug fix
  Maciej W. Rozycki, 2014-05-13 (1 file changed, -0/+9)

  This change reverts most of commit
  60724ca59eda766a30be57aec6b49bc3e2bead91 [MIPS: IP checksums: Remove
  unncessary .set pseudos] that introduced warnings with the
  CPU_DADDI_WORKAROUNDS option set:

    arch/mips/lib/csum_partial.S: Assembler messages:
    arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
    arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
    [...]
    arch/mips/lib/csum_partial.S:577: Warning: used $3 with ".set at=$3"
    arch/mips/lib/csum_partial.S:601: Warning: used $3 with ".set at=$3"
    [and so on, and so on...]

  The warnings are benign and good code is produced regardless because no
  macros that'd use the assembler's temporary register are involved, however
  the `.set noat' directives removed by the commit referred are crucial to
  guarantee this is still going to be the case after any changes in the
  future. Therefore they need to be brought back to place, which this change
  does.

  Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/6686/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: __strncpy_from_user_asm CPU_DADDI_WORKAROUNDS bug fix
  Maciej W. Rozycki, 2014-05-13 (1 file changed, -7/+6)

  This corrects assembler warnings and broken code generated in
  __strncpy_from_user_asm:

    arch/mips/lib/strncpy_user.S: Assembler messages:
    arch/mips/lib/strncpy_user.S:52: Warning: Macro instruction expanded
    into multiple instructions in a branch delay slot

  with the CPU_DADDI_WORKAROUNDS option set. The function schedules delay
  slots manually where there is really no need to, as GAS is happy to do it
  all itself, so undo it all and remove `.set noreorder'.

  Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/6685/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: __delay CPU_DADDI_WORKAROUNDS bug fix
  Maciej W. Rozycki, 2014-05-13 (1 file changed, -4/+10)

  With CPU_DADDI_WORKAROUNDS enabled __delay assembles with a macro in a
  branch delay slot:

    {standard input}: Assembler messages:
    {standard input}:18: Warning: Macro instruction expanded into multiple
    instructions in a branch delay slot

  and broken code results:

    0000000000000000 <__delay>:
       0:	1480ffff	bnez	a0,0 <__delay>
       4:	24010001	li	at,1
       8:	0081202f	dsubu	a0,a0,at
       c:	03e00008	jr	ra
      10:	00000000	nop
      14:	00000000	nop

  Consequently the function loops indefinitely, showing up prominently as a
  hang in the delay loop calibration at bootstrap. This change corrects the
  problem by forcing the immediate 1 into a register while keeping code
  produced identical where CPU_DADDI_WORKAROUNDS is disabled.

  Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/6669/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
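The shape of the fix, as a hedged sketch modelled on arch/mips/lib/delay.c:
with the workaround enabled, the constant 1 is passed through an 'r'
(register) constraint rather than an 'I' (immediate) one, so the subtraction
in the delay slot can never expand into a multi-instruction macro. The real
code abstracts the constraint as GCC_DADDI_IMM_ASM() and the opcode as
LONG_SUBU; the 64-bit form is shown here.

	void __delay(unsigned long loops)
	{
		__asm__ __volatile__ (
		"	.set	noreorder	\n"
		"	.align	3		\n"
		"1:	bnez	%0, 1b		\n"
		"	 dsubu	%0, %1		\n"	/* delay slot */
		"	.set	reorder		\n"
		: "=r" (loops)
	#ifdef CONFIG_CPU_DADDI_WORKAROUNDS
		: "r" (1),		/* 1 lives in a register: one insn */
	#else
		: "I" (1),		/* immediate form is a single insn */
	#endif
		  "0" (loops));
	}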
* MIPS: lib: csum_partial: Add EVA support
  Markos Chandras, 2014-03-26 (1 file changed, -0/+25)

  Use EVA specific functions to read and write data to user address space.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: csum_partial: Add macro to build csum_partial symbols
  Markos Chandras, 2014-03-26 (1 file changed, -92/+108)

  In preparation for EVA support, we use a macro to build the
  __csum_partial_copy_user main code so it can be shared across multiple
  implementations. EVA uses the same code, but it replaces the
  load/store/prefetch instructions with the EVA specific ones, therefore
  using a macro avoids unnecessary code duplication.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: csum_partial: Merge EXC and load/store macros
  Markos Chandras, 2014-03-26 (1 file changed, -69/+91)

  Each load/store macro always adds an entry to the __ex_table using the EXC
  macro. There are cases where a load instruction may never fail, such as
  when we are sure the load happens in the kernel address space. Therefore,
  we merge the EXC and LOADX/STOREX macros into a single one. We also expand
  the argument list in the EXC macro to make the macro more flexible. The
  extra 'type' argument is not used by this commit, but it will be used when
  EVA support is added to memcpy.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
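In sketch form, the merged macro couples each access with its __ex_table
fixup, and the 'type' selector is reserved for the EVA variants; the wrapper
and type names below are illustrative, modelled on the arch/mips/lib pattern
rather than copied from it:

	/* One macro emits both the access and its exception-table entry. */
	#define EXC(insn, type, reg, addr, handler)		\
	9:	insn	reg, addr;				\
		.section __ex_table, "a";			\
		PTR	9b, handler;				\
		.previous

	/* Accessors become thin wrappers that name their 'type': */
	#define LOADB(reg, addr, handler)  EXC(lb, LD_INSN, reg, addr, handler)
	#define STOREB(reg, addr, handler) EXC(sb, ST_INSN, reg, addr, handler)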
* MIPS: checksum: Split the 'copy_user' symbol
  Markos Chandras, 2014-03-26 (1 file changed, -3/+6)

  The 'copy_user' symbol can be used to copy from or to userland, so we will
  use two different symbols for these operations. This makes no difference
  in the existing code, but when the core is operating in EVA mode,
  different instructions need to be used to read and write to userland
  address space. The old function has also been renamed to 'copy_kernel' to
  denote that it is suitable for copying data to and from kernel space.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: memset: Add EVA support for the __bzero function.
  Markos Chandras, 2014-03-26 (1 file changed, -4/+23)

  Build the __bzero function using the EVA load/store instructions when
  operating in the EVA mode. This function is only used when accessing user
  code so there is no need to build two distinct symbols for user and kernel
  operations respectively.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: memset: Use macro to build the __bzero symbol
  Markos Chandras, 2014-03-26 (1 file changed, -35/+60)

  Build the __bzero symbol using a macro. In EVA mode we will need to use
  similar code to do the userspace load operations, so it is better to use a
  macro to avoid code duplication.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: memset: Whitespace fixes
  Markos Chandras, 2014-03-26 (1 file changed, -15/+15)

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: memcpy: Add EVA support
  Markos Chandras, 2014-03-26 (1 file changed, -0/+77)

  Add copy_{to,from,in}_user when the CPU operates in EVA mode. This is
  necessary so the EVA specific instructions can be used to perform the
  virtual to physical translation for user space addresses. We will use the
  non-EVA functions to read from kernel if needed.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
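The instruction selection reduces to something like the following sketch,
continuing the EXC pattern above (lbe/sbe are the EVA byte load/store
instructions; the wrapper and type names are illustrative):

	#ifdef CONFIG_EVA
	/* EVA: user addresses need the *e instructions, which translate
	 * through the user mapping even while the kernel runs in its own
	 * extended address space. */
	#define LOADB_USER(reg, addr, handler)  EXC(lbe, USER_LD, reg, addr, handler)
	#define STOREB_USER(reg, addr, handler) EXC(sbe, USER_ST, reg, addr, handler)
	#else
	/* Non-EVA: user and kernel accesses use the same instructions. */
	#define LOADB_USER	LOADB
	#define STOREB_USER	STOREB
	#endif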
* MIPS: lib: memcpy: Use macro to build the copy_user code
  Markos Chandras, 2014-03-26 (1 file changed, -110/+143)

  The code can be shared between EVA and non-EVA configurations, therefore
  use a macro to build it to avoid code duplication.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: memcpy: Split source and destination prefetch macros
  Markos Chandras, 2014-03-26 (1 file changed, -14/+22)

  In preparation for EVA support, the PREF macro is split into two separate
  macros, PREFS and PREFD, for source and destination data prefetching
  respectively.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: memcpy: Merge EXC and load/store macros
  Markos Chandras, 2014-03-26 (1 file changed, -69/+89)

  Each load/store macro always adds an entry to the __ex_table using the EXC
  macro. Therefore, these load/store macros are now merged with the EXC one.
  The argument list is also expanded in order to make the macro more
  flexible. The extra 'type' argument is not used by this commit, but it
  will be used when the EVA support is added to the memcpy.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: strncpy_user: Add EVA support
  Markos Chandras, 2014-03-26 (1 file changed, -0/+19)

  In non-EVA mode, strncpy_from_user* aliases are used for the
  strncpy_from_kernel* symbols since the code is identical. In EVA mode, new
  strncpy_from_user* symbols are used which use the EVA specific
  instructions to load values from userspace.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: strncpy_user: Use macro to build the strncpy_from_user symbol
  Markos Chandras, 2014-03-26 (1 file changed, -8/+13)

  Build the __strncpy_from_user symbol using a macro. In EVA mode we will
  need to use similar code to do the userspace load operations, so it is
  better to use a macro to avoid code duplication.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: strlen_user: Add EVA support
  Markos Chandras, 2014-03-26 (1 file changed, -0/+20)

  In non-EVA mode, strlen_user* aliases are used for the strlen_kernel*
  symbols since the code is identical. In EVA mode, new strlen_user* symbols
  are used which use the EVA specific instructions to load values from
  userspace.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: strlen_user: Use macro to build the strlen_user symbol
  Markos Chandras, 2014-03-26 (1 file changed, -6/+10)

  Build the __strlen_user symbol using a macro. In EVA mode we will need to
  use similar code to do the userspace load operations, so it is better to
  use a macro to avoid code duplication.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: strnlen_user: Add EVA support
  Markos Chandras, 2014-03-26 (1 file changed, -0/+20)

  In non-EVA mode, a strnlen_user* alias is used for the strnlen_kernel*
  symbols since the code is identical. In EVA mode, a new strnlen_user*
  symbol is used which uses the EVA specific instructions to load values
  from userspace.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: lib: strnlen_user: Use macro to build the strnlen_user symbol
  Markos Chandras, 2014-03-26 (1 file changed, -6/+10)

  Build the __strnlen_user symbol using a macro. In EVA mode we will need to
  use similar code to do the userspace load operations, so it is better to
  use a macro to avoid code duplication.

  Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* mips: delete non-required instances of include <linux/init.h>
  Paul Gortmaker, 2014-01-24 (1 file changed, -1/+0)

  None of these files are actually using any __init type directives and
  hence don't need to include <linux/init.h>. Most are just left over from
  the __devinit and __cpuinit removal, or simply due to code getting copied
  from one driver to the next.

  Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Patchwork: http://patchwork.linux-mips.org/patch/6320/
* MIPS: Delete __cpuinit/__CPUINIT usage from MIPS code
  Paul Gortmaker, 2013-07-14 (1 file changed, -1/+1)

  commit 3747069b25e419f6b51395f48127e9812abc3596 upstream.

  The __cpuinit type of throwaway sections might have made sense some time
  ago when RAM was more constrained, but now the savings do not offset the
  cost and complications. For example, the fix in commit 5e427ec2d0 ("x86:
  Fix bit corruption at CPU resume time") is a good example of the nasty
  type of bugs that can be created with improper use of the various __init
  prefixes.

  After a discussion on LKML[1] it was decided that cpuinit should go the
  way of devinit and be phased out. Once all the users are gone, we can then
  finally remove the macros themselves from linux/init.h.

  Note that some harmless section mismatch warnings may result, since
  notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and
  are flagged as __cpuinit -- so if we remove the __cpuinit from the arch
  specific callers, we will also get section mismatch warnings. As an
  intermediate step, we intend to turn the linux/init.h cpuinit related
  content into no-ops as early as possible, since that will get rid of these
  warnings. In any case, they are temporary and harmless.

  Here, we remove all the MIPS __cpuinit from C code and __CPUINIT from asm
  files. MIPS is interesting in this respect, because there are also uasm
  users hiding behind their own renamed versions of the __cpuinit macros.

  [1] https://lkml.org/lkml/2013/5/20/589

  [ralf@linux-mips.org: Folded in Paul's followup fix.]

  Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  Cc: linux-mips@linux-mips.org
  Patchwork: https://patchwork.linux-mips.org/patch/5494/
  Patchwork: https://patchwork.linux-mips.org/patch/5495/
  Patchwork: https://patchwork.linux-mips.org/patch/5509/
  Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* Revert "MIPS: Allow ASID size to be determined at boot time."David Daney2013-05-162-7/+5
| | | | | | | | | | | | | | | | | | | | | | | | This reverts commit d532f3d26716a39dfd4b88d687bd344fbe77e390. The original commit has several problems: 1) Doesn't work with 64-bit kernels. 2) Calls TLBMISS_HANDLER_SETUP() before the code is generated. 3) Calls TLBMISS_HANDLER_SETUP() twice in per_cpu_trap_init() when only one call is needed. [ralf@linux-mips.org: Also revert the bits of the ASID patch which were hidden in the KVM merge.] Signed-off-by: David Daney <david.daney@cavium.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: "Steven J. Hill" <Steven.Hill@imgtec.com> Cc: David Daney <david.daney@cavium.com> Patchwork: https://patchwork.linux-mips.org/patch/5242/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* Merge branch 'mti-next' of git://git.linux-mips.org/pub/scm/sjhill/linux-sjhill into mips-for-linux-next
  Ralf Baechle, 2013-05-09 (6 files changed, -55/+84)