- Apr 15, 2022
-
-
Omar Sandoval authored
Commit 3ee48b6a ("mm, x86: Saving vmcore with non-lazy freeing of vmas") introduced set_iounmap_nonlazy(), which sets vmap_lazy_nr to lazy_max_pages() + 1, ensuring that any future vunmaps() immediately purge the vmap areas instead of doing it lazily. Commit 690467c8 ("mm/vmalloc: Move draining areas out of caller context") moved the purging from the vunmap() caller to a worker thread. Unfortunately, set_iounmap_nonlazy() can cause the worker thread to spin (possibly forever). For example, consider the following scenario:

1. A thread reads from /proc/vmcore. This eventually calls __copy_oldmem_page() -> set_iounmap_nonlazy(), which sets vmap_lazy_nr to lazy_max_pages() + 1.

2. It then calls free_vmap_area_noflush() (via iounmap()), which adds 2 pages (one page plus the guard page) to the purge list and vmap_lazy_nr. vmap_lazy_nr is now lazy_max_pages() + 3, so the drain_vmap_work is scheduled.

3. The thread returns from the kernel and is scheduled out.

4. The worker thread is scheduled in and calls drain_vmap_area_work(). It frees the 2 pages on the purge list. vmap_lazy_nr is now lazy_max_pages() + 1.

5. This is still over the threshold, so it tries to purge areas again, but doesn't find anything.

6. Repeat 5.

If the system is running with only one CPU (which is typical for kdump) and preemption is disabled, then this will never make forward progress: there aren't any more pages to purge, so it hangs. If there is more than one CPU or preemption is enabled, then the worker thread will spin forever in the background. (Note that if there were already pages to be purged at the time that set_iounmap_nonlazy() was called, this bug is avoided.)

This can be reproduced with anything that reads from /proc/vmcore multiple times, e.g. vmcore-dmesg /proc/vmcore.

It turns out that improvements to vmap() over the years have obsoleted the need for this "optimization". I benchmarked `dd if=/proc/vmcore of=/dev/null` with 4k and 1M read sizes on a system with a 32GB vmcore. The test was run on 5.17, 5.18-rc1 with a fix that avoided the hang, and 5.18-rc1 with set_iounmap_nonlazy() removed entirely:

	  |5.17  |5.18+fix|5.18+removal
	4k|40.86s|  40.09s|      26.73s
	1M|24.47s|  23.98s|      21.84s

The removal was the fastest (by a wide margin with 4k reads). This patch removes set_iounmap_nonlazy().

Link: https://lkml.kernel.org/r/52f819991051f9b865e9ce25605509bfdbacadcd.1649277321.git.osandov@fb.com
Fixes: 690467c8 ("mm/vmalloc: Move draining areas out of caller context")
Signed-off-by: Omar Sandoval <osandov@fb.com>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
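For illustration, a simplified sketch of the worker loop that ends up spinning, based on the 5.18-rc1 behaviour the message describes (not the actual patch):

	static void drain_vmap_area_work(struct work_struct *work)
	{
		unsigned long nr_lazy;

		do {
			mutex_lock(&vmap_purge_lock);
			/* frees whatever is on the purge list */
			__purge_vmap_area_lazy(ULONG_MAX, 0);
			mutex_unlock(&vmap_purge_lock);

			/*
			 * Recheck; after set_iounmap_nonlazy() this stays above
			 * the threshold even with an empty purge list, so the
			 * loop never terminates.
			 */
			nr_lazy = atomic_long_read(&vmap_lazy_nr);
		} while (nr_lazy > lazy_max_pages());
	}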
-
- Apr 14, 2022
-
-
Linus Walleij authored
The Gemini defconfig needs to be updated due to DSA driver Kconfig changes in the v5.18 merge window: CONFIG_NET_DSA_REALTEK_SMI is now behind CONFIG_NET_DSA_REALTEK, and the desired DSA switch needs to be selected explicitly with CONFIG_NET_DSA_REALTEK_RTL8366RB. Take this opportunity to update some other minor config options:

- CONFIG_MARVELL_PHY moved around because of Kconfig changes.
- CONFIG_SENSORS_DRIVETEMP should be selected since it regulates the system critical alert temperature on some devices, which is good to have handled even if the initramfs or root filesystem fails to mount.

Fixes: 319a70a5 ("net: dsa: realtek-smi: move to subdirectory")
Fixes: 765c39a4 ("net: dsa: realtek: convert subdrivers into modules")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Hans Ulli Kroll <ulli.kroll@googlemail.com>
Cc: Luiz Angelo Daros de Luca <luizluca@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
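An illustrative defconfig fragment matching the description above (the exact option set for the board may differ):

	CONFIG_NET_DSA_REALTEK=y
	CONFIG_NET_DSA_REALTEK_SMI=y
	CONFIG_NET_DSA_REALTEK_RTL8366RB=y
	CONFIG_MARVELL_PHY=y
	CONFIG_SENSORS_DRIVETEMP=y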
-
Rob Herring authored
Boolean properties in DT are present or not present and don't take a value. A property such as 'foo = <0>;' evaluates to true. IOW, the value doesn't matter. It may have been intended that 0 values are false, but there is no change in behavior with this patch.

Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Andy Gross <agross@kernel.org>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Krzysztof Kozlowski <krzk+dt@kernel.org>
Cc: linux-arm-msm@vger.kernel.org
Link: https://lore.kernel.org/r/20220407225254.2178644-1-robh@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
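A minimal illustration of why the value is ignored, using the generic OF helper:

	#include <linux/of.h>

	/*
	 * of_property_read_bool() only tests for presence; it returns
	 * true for 'foo;' and for 'foo = <0>;' alike.
	 */
	bool foo_enabled = of_property_read_bool(np, "foo");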
-
Krzysztof Kozlowski authored
The node names should be generic, and the SPI NOR dtschema expects "flash".

Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://lore.kernel.org/r/20220407143027.294678-1-krzysztof.kozlowski@linaro.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Rob Herring authored
Boolean properties in DT are present or not present and don't take a value. A property such as 'foo = <0>;' evaluates to true. IOW, the value doesn't matter. It may have been intended that 0 values are false, but there is no change in behavior with this patch.

Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Krzysztof Kozlowski <krzk+dt@kernel.org>
Cc: Nicolas Ferre <nicolas.ferre@microchip.com>
Cc: Alexandre Belloni <alexandre.belloni@bootlin.com>
Cc: Claudiu Beznea <claudiu.beznea@microchip.com>
Cc: Shawn Guo <shawnguo@kernel.org>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Pengutronix Kernel Team <kernel@pengutronix.de>
Cc: Fabio Estevam <festevam@gmail.com>
Cc: NXP Linux Team <linux-imx@nxp.com>
Cc: "Benoît Cousson" <bcousson@baylibre.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Andy Gross <agross@kernel.org>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-omap@vger.kernel.org
Cc: linux-arm-msm@vger.kernel.org
Link: https://lore.kernel.org/r/20220407225107.2175958-1-robh@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- Apr 12, 2022
-
-
Mikulas Patocka authored
struct stat (defined in arch/x86/include/uapi/asm/stat.h) has 32-bit st_dev and st_rdev; struct compat_stat (defined in arch/x86/include/asm/compat.h) has 16-bit st_dev and st_rdev followed by a 16-bit padding. This patch fixes struct compat_stat to match struct stat.

[ Historical note: the old x86 'struct stat' did have that 16-bit field that the compat layer had kept around, but it was changed back in 2003 by "struct stat - support larger dev_t": https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git/commit/?id=e95b2065677fe32512a597a79db94b77b90c968d and back in those days, the x86_64 port was still new, and separate from the i386 code, and had already picked up the old version with a 16-bit st_dev field ]

Note that we can't change compat_dev_t because it is used by compat_loop_info. Also, if the st_dev and st_rdev values are 32-bit, we don't have to use old_valid_dev to test if the value fits into them. This fixes -EOVERFLOW on filesystems that are on NVMe, because NVMe uses major number 259.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
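A field-level sketch of the change described above (only the relevant members shown; the full structure has many more fields):

	/* before: 16-bit device numbers, inherited from the old i386 ABI */
	struct compat_stat {
		compat_dev_t	st_dev;		/* 16-bit */
		u16		__pad1;
		/* ... */
		compat_dev_t	st_rdev;	/* 16-bit */
		u16		__pad2;
		/* ... */
	};

	/* after: 32-bit, matching the native struct stat */
	struct compat_stat {
		u32		st_dev;
		/* ... */
		u32		st_rdev;
		/* ... */
	};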
-
Sven Schnelle authored
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Sven Schnelle authored
s390 defines current_stack_pointer as a function while all other architectures use 'register unsigned long asm("<stackptr reg>")'. This makes code like the following from check_stack_object() fail:

	if (IS_ENABLED(CONFIG_STACK_GROWSUP)) {
		if ((void *)current_stack_pointer < obj + len)
			return BAD_STACK;
	} else {
		if (obj < (void *)current_stack_pointer)
			return BAD_STACK;
	}

because this would compare the address of current_stack_pointer() and not the stack pointer value.

Reported-by: Karsten Graul <kgraul@linux.ibm.com>
Fixes: 2792d84e ("usercopy: Check valid lifetime via stack depth")
Cc: Kees Cook <keescook@chromium.org>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
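A sketch of the conventional definition the fix moves to (on s390 the stack pointer lives in r15; illustrative):

	/* evaluates to the stack pointer value, not a function address */
	register unsigned long current_stack_pointer asm("r15");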
-
- Apr 11, 2022
-
-
Vitaly Kuznetsov authored
The following WARN is triggered from kvm_vm_ioctl_set_clock():

	WARNING: CPU: 10 PID: 579353 at arch/x86/kvm/../../../virt/kvm/kvm_main.c:3161 mark_page_dirty_in_slot+0x6c/0x80 [kvm]
	...
	CPU: 10 PID: 579353 Comm: qemu-system-x86 Tainted: G W O 5.16.0.stable #20
	Hardware name: LENOVO 20UF001CUS/20UF001CUS, BIOS R1CET65W(1.34 ) 06/17/2021
	RIP: 0010:mark_page_dirty_in_slot+0x6c/0x80 [kvm]
	...
	Call Trace:
	 <TASK>
	 ? kvm_write_guest+0x114/0x120 [kvm]
	 kvm_hv_invalidate_tsc_page+0x9e/0xf0 [kvm]
	 kvm_arch_vm_ioctl+0xa26/0xc50 [kvm]
	 ? schedule+0x4e/0xc0
	 ? __cond_resched+0x1a/0x50
	 ? futex_wait+0x166/0x250
	 ? __send_signal+0x1f1/0x3d0
	 kvm_vm_ioctl+0x747/0xda0 [kvm]
	 ...

The WARN was introduced by commit 03c0304a86bc ("KVM: Warn if mark_page_dirty() is called without an active vCPU") but the change seems to be correct (unlike the Hyper-V TSC page update mechanism). In fact, there's no real need to actually write to guest memory to invalidate the TSC page; this can be done by the first vCPU which goes through kvm_guest_time_update().

Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220407201013.963226-1-vkuznets@redhat.com>
-
Suravee Suthikulpanit authored
Since the current AVIC implementation cannot support encrypted memory, inhibit AVIC for SEV-enabled guests.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Message-Id: <20220408133710.54275-1-suravee.suthikulpanit@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
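A sketch of the shape such an inhibit takes (symbol names assumed from KVM's APICv inhibit mechanism and may differ from the actual patch):

	/* encrypted guest memory cannot back AVIC pages */
	if (sev_guest(kvm))
		kvm_request_apicv_update(kvm, false, APICV_INHIBIT_REASON_SEV);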
-
Pawan Gupta authored
A microcode update on some Intel processors causes all TSX transactions to always abort by default[*]. Microcode also added functionality to re-enable TSX for development purposes. With this microcode loaded, if tsx=on was passed on the cmdline and TSX development mode was already enabled before the kernel boot, it may make the system vulnerable to TSX Asynchronous Abort (TAA). To be on the safe side, unconditionally disable TSX development mode during boot. If a viable use case appears, this can be revisited later.

[*]: Intel TSX Disable Update for Selected Processors, doc ID: 643557

[ bp: Drop unstable web link, massage heavily. ]

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/3...
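A hedged sketch of what "disable TSX development mode" amounts to (MSR and bit names assumed from the Intel document cited above; illustrative, not the actual patch):

	#define MSR_IA32_MCU_OPT_CTRL	0x00000123
	#define RTM_ALLOW		BIT(1)	/* TSX development mode */

	static void tsx_dev_mode_disable(void)
	{
		u64 mcu_opt_ctrl;

		rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_opt_ctrl);
		if (mcu_opt_ctrl & RTM_ALLOW) {
			mcu_opt_ctrl &= ~RTM_ALLOW;
			wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_opt_ctrl);
		}
	}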
-
Pawan Gupta authored
tsx_clear_cpuid() uses MSR_TSX_FORCE_ABORT to clear CPUID.RTM and CPUID.HLE. Not all CPUs support MSR_TSX_FORCE_ABORT; alternatively use MSR_IA32_TSX_CTRL when supported.

[ bp: Document how and why TSX gets disabled. ]

Fixes: 29364930 ("x86/tsx: Clear CPUID bits when TSX always force aborts")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/5b323e77e251a9c8bcdda498c5cc0095be1e1d3c.1646943780.git.pawan.kumar.gupta@linux.intel.com
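A sketch of the fallback described above (simplified; the in-tree helper may be structured differently):

	if (boot_cpu_has(X86_FEATURE_MSR_TSX_CTRL)) {
		u64 msr;

		/* TSX_CTRL can also hide RTM/HLE from CPUID */
		rdmsrl(MSR_IA32_TSX_CTRL, msr);
		msr |= TSX_CTRL_CPUID_CLEAR;
		wrmsrl(MSR_IA32_TSX_CTRL, msr);
	}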
-
- Apr 10, 2022
-
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- Apr 08, 2022
-
-
Heiko Stuebner authored
vcpu_fp uses the riscv_isa_extension mechanism which gets defined in hwcap.h, but doesn't include that header file. While it seems to work in most cases, in certain conditions this can lead to build failures like:

	../arch/riscv/kvm/vcpu_fp.c: In function ‘kvm_riscv_vcpu_fp_reset’:
	../arch/riscv/kvm/vcpu_fp.c:22:13: error: implicit declaration of function ‘riscv_isa_extension_available’ [-Werror=implicit-function-declaration]
	   22 |  if (riscv_isa_extension_available(&isa, f) ||
	      |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	../arch/riscv/kvm/vcpu_fp.c:22:49: error: ‘f’ undeclared (first use in this function)
	   22 |  if (riscv_isa_extension_available(&isa, f) ||

Fix this by simply including the necessary header.

Fixes: 0a86512d ("RISC-V: KVM: Factor-out FP virtualization into separate sources")
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Anup Patel <anup@brainfault.org>
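The fix is the one-line include (path taken from the mechanism's location named above):

	#include <asm/hwcap.h>	/* riscv_isa_extension_available() */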
-
Anup Patel authored
We might have RISC-V systems (such as QEMU) where VMID is not part of the TLB entry tag, so these systems will have to flush all TLB entries upon any change in hgatp.VMID. Currently, we zero-out the hgatp CSR in kvm_arch_vcpu_put() and re-program the hgatp CSR in kvm_arch_vcpu_load(). For the systems described above, this will flush all TLB entries whenever a VCPU exits to user-space, hence reducing performance. This patch fixes the performance issue by not clearing the hgatp CSR in kvm_arch_vcpu_put().

Fixes: 34bde9d8 ("RISC-V: KVM: Implement VCPU world-switch")
Cc: stable@vger.kernel.org
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
-
Chanho Park authored
Add the MIDR part number info for the Arm Cortex-A78AE[1] and add it to the spectre-BHB affected list[2].

[1]: https://developer.arm.com/Processors/Cortex-A78AE
[2]: https://developer.arm.com/Arm%20Security%20Center/Spectre-BHB

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Link: https://lore.kernel.org/r/20220407091128.8700-1-chanho61.park@samsung.com
Signed-off-by: Will Deacon <will@kernel.org>
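A sketch of what such an addition looks like in cputype.h (the part number value here is read from the Arm documentation and should be treated as illustrative):

	#define ARM_CPU_PART_CORTEX_A78AE	0xD42
	#define MIDR_CORTEX_A78AE \
		MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)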
-
Guo Ren authored
These patch_text implementations use the stop_machine_cpuslocked infrastructure with an atomic cpu_count. The original idea: when the master CPU runs patch_text(), the others should wait for it. But the current implementation uses the first CPU as the master, which cannot guarantee that the remaining CPUs are waiting. This patch makes the last CPU the master to solve the potential risk.

Fixes: ae164807 ("arm64: introduce interfaces to hotpatch kernel and module code")
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20220407073323.743224-2-guoren@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
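A sketch of the last-CPU-as-master scheme (structure and names simplified; illustrative):

	static int patch_text_cb(void *arg)
	{
		struct patch_args *args = arg;	/* hypothetical container */

		/*
		 * The CPU that arrives last becomes the master: by then every
		 * other CPU is provably inside this callback, spinning below.
		 */
		if (atomic_inc_return(&args->cpu_count) == num_online_cpus()) {
			do_patch(args);
			atomic_inc(&args->cpu_count);	/* release the waiters */
		} else {
			while (atomic_read(&args->cpu_count) <= num_online_cpus())
				cpu_relax();
		}
		return 0;
	}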
-
- Apr 07, 2022
-
-
Reto Buerki authored
The x86 MSI message data is 32 bits in total and is either in compatibility or remappable format; see Intel Virtualization Technology for Directed I/O, section 5.1.2.

Fixes: 6285aa50 ("x86/msi: Provide msi message shadow structs")
Co-developed-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Reto Buerki <reet@codelabs.ch>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20220407110647.67372-1-reet@codelabs.ch
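One way to keep such shadow structs honest is a size assertion against the raw 32-bit value; a hypothetical sketch (the field layout is invented for illustration and is not the actual shadow struct):

	union msi_msg_data {
		u32 raw;
		struct {
			u32 vector	:  8;
			u32 reserved	: 24;
		} compat;
	};
	static_assert(sizeof(union msi_msg_data) == sizeof(u32));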
-
Rob Herring authored
Boolean properties in DT are present or not present and don't take a value. A property such as 'foo = <0>;' evaluates to true. IOW, the value doesn't matter. It may have been intended that 0 values are false, but there is no change in behavior with this patch.

Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/Yk3m92Sj26/v1mLG@robh.at.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Rob Herring authored
The common touchscreen properties are all 32-bit, not 16-bit. These properties must not be too important, as they are all ignored in case of an error reading them.

Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/Yk3moe6Hz8ELM0iS@robh.at.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
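For reference, the generic touchscreen helpers read these properties as 32-bit values (one representative property shown; illustrative):

	u32 val;

	/* a /bits/ 16 property here would fail to read or yield garbage */
	device_property_read_u32(dev, "touchscreen-size-x", &val);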
-
Rob Herring authored
Boolean properties in DT are present or not present and don't take a value. A property such as 'foo = <0>;' evaluates to true. IOW, the value doesn't matter. It may have been intended that 0 values are false, but there is no change in behavior with this patch.

Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/Yk3mR5yae3gCkKhp@robh.at.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Rob Herring authored
Boolean properties in DT are present or not present and don't take a value. A property such as 'foo = <0>;' evaluates to true. IOW, the value doesn't matter. It may have been intended that 0 values are false, but there is no change in behavior with this patch.

Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/Yk3nShkFzNJaI3/Z@robh.at.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Rob Herring authored
Boolean properties in DT are present or not present and don't take a value. A property such as 'foo = <0>;' evaluates to true. IOW, the value doesn't matter. It may have been intended that 0 values are false, but there is no change in behavior with this patch.

Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Link: https://lore.kernel.org/r/Yk3leykDEKGBN8rk@robh.at.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Jonathan Cameron authored
The defconfig was missed when removing the board files, causing failures in builds using this defconfig.

Fixes: 28f74201 ("ARM: pxa: remove Intel Imote2 and Stargate 2 boards")
Reported-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20220405135252.10283-1-Jonathan.Cameron@huawei.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Alexander Sverdlin authored
Fix sparse warning:

	arch/arm/mach-ep93xx/clock.c:210:35: sparse: sparse: Using plain integer as NULL pointer

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Alexander Sverdlin <alexander.sverdlin@gmail.com>
Link: https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org/thread/TLFJ6D7WGMDJSQ6XK7UZE4XR2PLRZJSV/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
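The warning pattern and its usual fix, in miniature (illustrative):

	struct clk_ops *ops = 0;	/* sparse: Using plain integer as NULL pointer */
	struct clk_ops *ops2 = NULL;	/* the idiomatic spelling for pointers */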
-
Alexander Sverdlin authored
	arch/arm/mach-ep93xx/clock.c:154:2: warning: Use of memory after it is freed [clang-analyzer-unix.Malloc]
	arch/arm/mach-ep93xx/clock.c:151:2: note: Taking true branch
	        if (IS_ERR(clk))
	        ^
	arch/arm/mach-ep93xx/clock.c:152:3: note: Memory is released
	                kfree(psc);
	                ^~~~~~~~~~
	arch/arm/mach-ep93xx/clock.c:154:2: note: Use of memory after it is freed
	        return &psc->hw;
	        ^      ~~~~~~~~

Fixes: 9645ccc7 ("ep93xx: clock: convert in-place to COMMON_CLK")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Alexander Sverdlin <alexander.sverdlin@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org/thread/B5YCO2NJEXINCYE26Y255LCVMO55BGWW/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
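Per the analyzer output above, the error path frees psc and then falls through to returning a pointer into it; a sketch of the corrected shape (illustrative):

	if (IS_ERR(clk)) {
		kfree(psc);
		return ERR_CAST(clk);	/* return the error, not the freed &psc->hw */
	}
	return &psc->hw;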
-
Heiko Carstens authored
Just use absolute_pointer() like e.g. in commit 545c2722 ("alpha: Silence -Warray-bounds warnings") to get rid of this warning:

	arch/s390/kernel/machine_kexec.c:59:9: warning: ‘memcpy’ offset [0, 511] is out of the bounds [0, 0] [-Warray-bounds]

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
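absolute_pointer() launders a fixed address so GCC's object-size tracking no longer assumes a zero-sized object; a minimal sketch (address and length invented for illustration):

	#include <linux/compiler.h>
	#include <linux/string.h>

	/* copy a fixed 512-byte region at an absolute address */
	memcpy(buf, absolute_pointer(0x500), 512);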
-
Sudeep Holla authored
There are more kernel-doc build warnings as below than the ones reported by kernel test robot recently for this file:

	| arch/arm/mach-vexpress/spc.c:125: warning: missing initial short description on line:
	|  * ve_spc_global_wakeup_irq()
	| arch/arm/mach-vexpress/spc.c:131: warning: contents before sections
	| arch/arm/mach-vexpress/spc.c:148: warning: missing initial short description on line:
	|  * ve_spc_cpu_wakeup_irq()
	| arch/arm/mach-vexpress/spc.c:154: warning: contents before sections
	| arch/arm/mach-vexpress/spc.c:203: warning: missing initial short description on line:
	|  * ve_spc_powerdown()
	| arch/arm/mach-vexpress/spc.c:209: warning: contents before sections
	| arch/arm/mach-vexpress/spc.c:231: warning: missing initial short description on line:
	|  * ve_spc_cpu_in_wfi()
	| 7 warnings

Fix all these warnings.

Link: https://lore.kernel.org/r/20220404130207.1162445-2-sudeep.holla@arm.com
Cc: Liviu Dudau <liviu.dudau@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
-
Sudeep Holla authored
Kbuild bot reported the following kernel-doc build warning:

	| arch/arm/mach-versatile/spc.c:231: warning: This comment starts with '/**', but isn't a kernel-doc comment.
	| Refer Documentation/doc-guide/kernel-doc.rst
	|  * ve_spc_cpu_in_wfi(u32 cpu, u32 cluster)

Fix the issue by dropping the parameters specified in the kernel-doc line.

Link: https://lore.kernel.org/linux-doc/202204031026.4ogKxt89-lkp@intel.com
Link: https://lore.kernel.org/r/20220404130207.1162445-1-sudeep.holla@arm.com
Cc: Liviu Dudau <liviu.dudau@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
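For reference, a kernel-doc comment of the shape the checker expects (parameter descriptions assumed for illustration):

	/**
	 * ve_spc_cpu_in_wfi() - check if the specified CPU is in WFI
	 * @cpu: CPU number within the cluster
	 * @cluster: cluster number
	 */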
-
Nick Desaulniers authored
Bernardo reported an error that Nathan bisected down to (x86_64) defconfig+LTO_CLANG_FULL+X86_PMEM_LEGACY:

	LTO vmlinux.o
	ld.lld: error: <instantiation>:1:13: redefinition of 'found'
	.set found, 0
	            ^
	<inline asm>:29:1: while in macro instantiation
	extable_type_reg reg=%eax, type=(17 | ((0) << 16))
	^

This appears to be another LTO-specific issue similar to what was folded into commit 4b5305de ("x86/extable: Extend extable functionality"), where the `.set found, 0` in DEFINE_EXTABLE_TYPE_REG in arch/x86/include/asm/asm.h conflicts with the symbol for the static function `found` in arch/x86/kernel/pmem.c. The assembler's .set directive declares symbols with global visibility, so the assembler may not rename such symbols in the event of a conflict. LTO could rename static functions if there was a conflict in C sources, but it cannot see into symbols defined in inline asm. The symbols are also retained in the symbol table, regardless of LTO. Give the symbols .L prefixes, making them locally visible, so that they may be renamed for LTO to avoid conflicts, and to drop them from the symbol table regardless of LTO.

Fixes: 4b5305de ("x86/extable: Extend extable functionality")
Reported-by: Bernardo Meurer Costa <beme@google.com>
Debugged-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20220329202148.2379697-1-ndesaulniers@google.com
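A miniature of the collision and the fix (the two definitions live in different translation units and only meet at LTO link time; illustrative):

	/* pmem.c */
	static int found(void) { return 1; }	/* C symbol "found" */

	/* asm.h macro, expanded elsewhere */
	asm(".set found, 0");	/* redefinition of 'found' at LTO link */
	asm(".set .Lfound, 0");	/* assembler-local: renamable, not in symtab */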
-
Peter Zijlstra authored
Clang can inline emit_indirect_jump() and then fold constants, which results in:

	| vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x6a4: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
	| vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x67d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
	| vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x386: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
	| vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x35d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20

Suppress the optimization such that it must emit a code reference to the __x86_indirect_thunk_array[] base.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lkml.kernel.org/r/20220405075531.GB30877@worktop.programming.kicks-ass.net
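One way to suppress such folding is to hide the array index from the optimizer; a sketch assuming that is the mechanism used here:

	static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
	{
		/* keep clang from folding reg into a constant thunk address */
		OPTIMIZER_HIDE_VAR(reg);
		emit_jump(pprog, &__x86_indirect_thunk_array[reg], ip);
	}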
-
- Apr 06, 2022
-
-
Kefeng Wang authored
This reverts commit 602946ec. If CONFIG_HIGHMEM is enabled, no highmem will be added with max_mapnr set to max_low_pfn; see mem_init():

	for (pfn = highmem_mapnr; pfn < max_mapnr; ++pfn) {
		...
		free_highmem_page();
	}

Now that virt_addr_valid() has been fixed in the previous commit, we can revert the change to max_mapnr.

Fixes: 602946ec ("powerpc: Set max_mapnr correctly")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reported-by: Erhard F. <erhard_f@mailbox.org>
[mpe: Update change log to reflect series reordering]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220406145802.538416-2-mpe@ellerman.id.au
-
Kefeng Wang authored
mpe: On 64-bit Book3E vmalloc space starts at 0x8000000000000000. Because of the way __pa() works we have:

	__pa(0x8000000000000000) == 0

and therefore virt_to_pfn(0x8000000000000000) == 0, and therefore virt_addr_valid(0x8000000000000000) == true. Which is wrong: virt_addr_valid() should be false for vmalloc space. In fact all vmalloc addresses that alias with a valid PFN will return true from virt_addr_valid(). That can cause bugs with hardened usercopy as described below by Kefeng Wang:

When running ethtool eth0 on 64-bit Book3E, a BUG occurred:

	usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
	kernel BUG at mm/usercopy.c:99
	...
	usercopy_abort+0x64/0xa0 (unreliable)
	__check_heap_object+0x168/0x190
	__check_object_size+0x1a0/0x200
	dev_ethtool+0x2494/0x2b20
	dev_ioctl+0x5d0/0x770
	sock_do_ioctl+0xf0/0x1d0
	sock_ioctl+0x3ec/0x5a0
	__se_sys_ioctl+0xf0/0x160
	system_call_exception+0xfc/0x1f0
	system_call_common+0xf8/0x200

The code is shown below:

	data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
	copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))

The data is allocated by vmalloc(), and virt_addr_valid(ptr) will return true on 64-bit Book3E, which leads to the panic.

As commit 4dd7554a ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses") does, make sure the virt addr is above PAGE_OFFSET in the virt_addr_valid() for 64-bit; also add an upper limit check to make sure the virt is below high_memory. Meanwhile, for 32-bit PAGE_OFFSET is the virtual address of the start of lowmem and high_memory is the upper low virtual address, so the check is suitable for 32-bit; this will fix the issue mentioned in commit 602946ec ("powerpc: Set max_mapnr correctly") too.

On 32-bit there is a similar problem with high memory, which was fixed in commit 602946ec ("powerpc: Set max_mapnr correctly"), but that commit breaks highmem and needs to be reverted. We can't easily fix __pa(); we have code that relies on its current behaviour. So for now add extra checks to virt_addr_valid(). For 64-bit Book3S the extra checks are not necessary; the combination of virt_to_pfn() and pfn_valid() should yield the correct result, but they are harmless.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Add additional change log detail]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220406145802.538416-1-mpe@ellerman.id.au
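A sketch of the strengthened check described above (macro shape assumed; illustrative):

	#define virt_addr_valid(vaddr) ({					\
		unsigned long _addr = (unsigned long)(vaddr);			\
		_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
		pfn_valid(virt_to_pfn(_addr));					\
	})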
-
Linus Walleij authored
This is effectively a revert of the temporary disablement patch. Battery charging now works! We also enable static battery data for the Samsung SDI batteries as used by the U8500 Samsung phones.

Cc: Lee Jones <lee.jones@linaro.org>
Fixes: a1149ae9 ("ARM: ux500: Disable Power Supply and Battery Management by default")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
Reiji Watanabe authored
KVM allows userspace to configure either all EL1 32bit or 64bit vCPUs for a guest. At vCPU reset, vcpu_allowed_register_width() checks if the vCPU's register width is consistent with all other vCPUs'. Since the check is done even against vCPUs that are not initialized (KVM_ARM_VCPU_INIT has not been done) yet, the uninitialized vCPUs are erroneously treated as 64bit vCPUs, which causes the function to incorrectly detect a mixed-width VM.

Introduce KVM_ARCH_FLAG_EL1_32BIT and KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED bits for kvm->arch.flags. The EL1_32BIT bit indicates that the guest needs to be configured with all 32bit or 64bit vCPUs, and the REG_WIDTH_CONFIGURED bit indicates whether the EL1_32BIT bit is valid (already set up). Values in those bits are set at the first KVM_ARM_VCPU_INIT for the guest, based on the KVM_ARM_VCPU_EL1_32BIT configuration for the vCPU. Check the vCPU's register width against those new bits at the vCPU's KVM_ARM_VCPU_INIT (instead of against other vCPUs' register width).

Fixes: 66e94d5c ("KVM: arm64: Prevent mixed-width VM creation")
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220329031924.619453-2-reijiw@google.com
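A sketch of the first-init logic the two flags enable (plumbing simplified; the real code also needs locking/ordering between the two bit updates):

	static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
	{
		struct kvm *kvm = vcpu->kvm;
		bool is32bit = test_bit(KVM_ARM_VCPU_EL1_32BIT,
					vcpu->arch.features);

		/* the first initialized vCPU records the VM-wide width */
		if (!test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED,
			      &kvm->arch.flags)) {
			if (is32bit)
				set_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
			set_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED,
				&kvm->arch.flags);
			return true;
		}

		/* later vCPUs must match the recorded width */
		return is32bit == test_bit(KVM_ARCH_FLAG_EL1_32BIT,
					   &kvm->arch.flags);
	}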
-
Heiko Carstens authored
Add config and compile options which allow compiling with z16 optimizations if the compiler supports it.

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Add detection for machine types 0x3931 and 0x3932 and set the ELF platform name to z16.

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
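A sketch of the detection (the surrounding switch shape is assumed from how earlier machine types are handled; illustrative):

	switch (cpu_id.machine) {
	/* ... earlier machine types ... */
	case 0x3931:
	case 0x3932:
		strcpy(elf_platform, "z16");
		break;
	}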
-
Joey Gouly authored
The alternatives code must be `noinstr` such that it does not patch itself, as the cache invalidation is only performed after all the alternatives have been applied. Mark patch_alternative() as `noinstr`. Mark branch_insn_requires_update() and get_alt_insn() with `__always_inline` since they are both only called through patch_alternative().

Booting a kernel in QEMU TCG with KCSAN=y and ARM64_USE_LSE_ATOMICS=y caused a boot hang:

	[ 0.241121] CPU: All CPU(s) started at EL2

The alternatives code was patching the atomics in __tsan_read4() from LL/SC atomics to LSE atomics. The following fragment is using LL/SC atomics in the .text section:

	| <__tsan_unaligned_read4+304>: ldxr x6, [x2]
	| <__tsan_unaligned_read4+308>: add x6, x6, x5
	| <__tsan_unaligned_read4+312>: stxr w7, x6, [x2]
	| <__tsan_unaligned_read4+316>: cbnz w7, <__tsan_unaligned_read4+304>

This LL/SC atomic sequence was to be replaced with LSE atomics. However, since the alternatives code was instrumentable, __tsan_read4() was being called after only the first instruction was replaced, which led to the following code in memory:

	| <__tsan_unaligned_read4+304>: ldadd x5, x6, [x2]
	| <__tsan_unaligned_read4+308>: add x6, x6, x5
	| <__tsan_unaligned_read4+312>: stxr w7, x6, [x2]
	| <__tsan_unaligned_read4+316>: cbnz w7, <__tsan_unaligned_read4+304>

This caused an infinite loop as the `stxr` instruction never completed successfully, so `w7` was always 0.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220405104733.11476-1-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
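The annotations described above, in sketch form (signatures abbreviated; illustrative):

	/* never instrumented, so it cannot end up patching itself mid-flight */
	static void noinstr patch_alternative(struct alt_instr *alt,
					      __le32 *origptr, __le32 *updptr,
					      int nr_inst)
	{ /* ... */ }

	/* folded into patch_alternative(), never emitted as standalone code */
	static __always_inline u32 get_alt_insn(struct alt_instr *alt,
						__le32 *insnptr,
						__le32 *altinsnptr)
	{ /* ... */ }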
-
Yu Zhe authored
Remove unnecessary casts.

Signed-off-by: Yu Zhe <yuzhe@nfschina.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220329102059.268983-1-yuzhe@nfschina.com
-
Oliver Upton authored
It is possible to take a stage-2 permission fault on a page larger than PAGE_SIZE. For example, when running a guest backed by 2M HugeTLB, KVM eagerly maps at the largest possible block size. When dirty logging is enabled on a memslot, KVM does *not* eagerly split these 2M stage-2 mappings and instead clears the write bit on the pte. Since dirty logging is always performed at PAGE_SIZE granularity, KVM lazily splits these 2M block mappings down to PAGE_SIZE in the stage-2 fault handler. This operation must be done under the write lock. Since commit f783ef1c ("KVM: arm64: Add fast path to handle permission relaxation during dirty logging"), the stage-2 fault handler conditionally takes the read lock on permission faults with dirty logging enabled. To that end, it is possible to split a 2M block mapping while only holding the read lock.

The problem is demonstrated by running kvm_page_table_test with 2M anonymous HugeTLB, which splats like so:

	WARNING: CPU: 5 PID: 15276 at arch/arm64/kvm/hyp/pgtable.c:153 stage2_map_walk_leaf+0x124/0x158
	[...]
	Call trace:
	 stage2_map_walk_leaf+0x124/0x158
	 stage2_map_walker+0x5c/0xf0
	 __kvm_pgtable_walk+0x100/0x1d4
	 __kvm_pgtable_walk+0x140/0x1d4
	 __kvm_pgtable_walk+0x140/0x1d4
	 kvm_pgtable_walk+0xa0/0xf8
	 kvm_pgtable_stage2_map+0x15c/0x198
	 user_mem_abort+0x56c/0x838
	 kvm_handle_guest_abort+0x1fc/0x2a4
	 handle_exit+0xa4/0x120
	 kvm_arch_vcpu_ioctl_run+0x200/0x448
	 kvm_vcpu_ioctl+0x588/0x664
	 __arm64_sys_ioctl+0x9c/0xd4
	 invoke_syscall+0x4c/0x144
	 el0_svc_common+0xc4/0x190
	 do_el0_svc+0x30/0x8c
	 el0_svc+0x28/0xcc
	 el0t_64_sync_handler+0x84/0xe4
	 el0t_64_sync+0x1a4/0x1a8

Fix the issue by only acquiring the read lock if the guest faulted on a PAGE_SIZE granule with dirty logging enabled. Add a WARN to catch locking bugs in future changes.

Fixes: f783ef1c ("KVM: arm64: Add fast path to handle permission relaxation during dirty logging")
Cc: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Oliver Upton <oupton@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220401194652.950240-1-oupton@google.com
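A sketch of the tightened locking condition in user_mem_abort() (variable names simplified; illustrative):

	bool use_read_lock = write_fault && logging_active &&
			     fault_status == FSC_PERM &&
			     fault_granule == PAGE_SIZE;

	if (use_read_lock)
		read_lock(&kvm->mmu_lock);
	else
		write_lock(&kvm->mmu_lock);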
-