- Apr 02, 2022
-
-
Beau Belgrave authored
Make sure the event_mutex is properly held during the dyn_event_add() call. This is required when adding dynamic events. Link: https://lkml.kernel.org/r/20220328223225.1992-1-beaub@linux.microsoft.com Reported-by:
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by:
Beau Belgrave <beaub@linux.microsoft.com> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
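A minimal sketch of the locking rule described above; the event object passed to dyn_event_add() is hypothetical, but event_mutex is the trace subsystem's real lock:

  /* Adding a dynamic event requires event_mutex to be held. */
  mutex_lock(&event_mutex);
  ret = dyn_event_add(&user->devent);  /* hypothetical event object */
  mutex_unlock(&event_mutex);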
-
- Mar 22, 2022
-
-
Huang Ying authored
With the advent of various new memory types, some machines will have multiple types of memory, e.g. DRAM and PMEM (persistent memory). The memory subsystem of such machines can be called a memory tiering system, because the performance of the different types of memory usually differs. In such a system, because the memory access pattern changes over time, some pages in the slow memory may become hot globally. So in this patch, the NUMA balancing mechanism is enhanced to dynamically optimize page placement among the different memory types according to hot/cold.

In a typical memory tiering system, there are CPUs, fast memory and slow memory in each physical NUMA node. The CPUs and the fast memory will be put in one logical node (called the fast memory node), while the slow memory will be put in another (faked) logical node (called the slow memory node). That is, the fast memory is regarded as local while the slow memory is regarded as remote. So it's possible for recently accessed pages in the slow memory node to be promoted to the fast memory node via the existing NUMA balancing mechanism.

The original NUMA balancing mechanism will stop migrating pages if the free memory of the target node falls below the high watermark. This is a reasonable policy if there's only one memory type, but it renders the original NUMA balancing mechanism almost useless for optimizing page placement among different memory types. Details are as follows.

It is common for the working-set size of the workload to be larger than the size of the fast memory nodes; otherwise, it would be unnecessary to use the slow memory at all. So there are almost never enough free pages in the fast memory nodes, and the globally hot pages in the slow memory node cannot be promoted to the fast memory node. To solve the issue, we have two choices:

a. Ignore the free pages watermark checking when promoting hot pages from the slow memory node to the fast memory node. This will create some memory pressure in the fast memory node, thus triggering memory reclaim, so that the cold pages in the fast memory node will be demoted to the slow memory node.

b. Define a new watermark called wmark_promo which is higher than wmark_high, and have kswapd reclaim pages until free pages reach such watermark. The scenario is as follows: when we want to promote hot pages from slow memory to fast memory, but fast memory's free pages would drop below the high watermark with such promotion, we wake up kswapd with the wmark_promo watermark in order to demote cold pages and free up some space. So, next time we want to promote hot pages we might have a chance of doing so.

Choice "a" may create high memory pressure in the fast memory node. If the memory pressure of the workload is high, it may become so high that the memory allocation latency of the workload is affected, e.g. direct reclaim may be triggered. Choice "b" works much better in this respect: if the memory pressure of the workload is high, hot page promotion will stop earlier because its allocation watermark is higher than that of normal memory allocation. So in this patch, choice "b" is implemented: a new zone watermark (WMARK_PROMO) is added, which is larger than the high watermark and can be controlled via watermark_scale_factor.
In addition to the original page placement optimization among sockets, the NUMA balancing mechanism is extended to optimize page placement according to hot/cold among different memory types. So the sysctl user space interface (numa_balancing) is extended in a backward compatible way as follows, so that users can enable/disable these functionalities individually. The sysctl is converted from a Boolean value to a bit field. The flags are defined as:

- 0: NUMA_BALANCING_DISABLED
- 1: NUMA_BALANCING_NORMAL
- 2: NUMA_BALANCING_MEMORY_TIERING

We have tested the patch with the pmbench memory accessing benchmark, using an 80:20 read/write ratio and a Gauss access address distribution, on a 2-socket Intel server with Optane DC Persistent Memory. The test results show that the pmbench score can improve by up to 95.9%. Thanks to Andrew Morton for helping fix the document format error. Link: https://lkml.kernel.org/r/20220221084529.1052339-3-ying.huang@intel.com Signed-off-by:
"Huang, Ying" <ying.huang@intel.com> Tested-by:
Baolin Wang <baolin.wang@linux.alibaba.com> Reviewed-by:
Baolin Wang <baolin.wang@linux.alibaba.com> Acked-by:
Johannes Weiner <hannes@cmpxchg.org> Reviewed-by:
Oscar Salvador <osalvador@suse.de> Reviewed-by:
Yang Shi <shy828301@gmail.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Rik van Riel <riel@surriel.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Zi Yan <ziy@nvidia.com> Cc: Wei Xu <weixugc@google.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Feng Tang <feng.tang@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
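A small sketch of the new bit-field semantics; the flag names and values come straight from the message, while the helper is purely illustrative:

  /* numa_balancing sysctl modes (values from the message). */
  #define NUMA_BALANCING_DISABLED       0x0
  #define NUMA_BALANCING_NORMAL         0x1
  #define NUMA_BALANCING_MEMORY_TIERING 0x2

  /* Illustrative helper: is hot-page promotion between tiers enabled? */
  static inline bool numa_promotion_tiered_enabled(unsigned int mode)
  {
          return mode & NUMA_BALANCING_MEMORY_TIERING;
  }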
-
David Hildenbrand authored
Patch series "mm: enforce pageblock_order < MAX_ORDER". Having pageblock_order >= MAX_ORDER seems to be able to happen in corner cases and some parts of the kernel are not prepared for it. For example, Aneesh has shown [1] that such kernels can be compiled on ppc64 with 64k base pages by setting FORCE_MAX_ZONEORDER=8, which will run into a WARN_ON_ONCE(order >= MAX_ORDER) in comapction code right during boot. We can get pageblock_order >= MAX_ORDER when the default hugetlb size is bigger than the maximum allocation granularity of the buddy, in which case we are no longer talking about huge pages but instead gigantic pages. Having pageblock_order >= MAX_ORDER can only make alloc_contig_range() of such gigantic pages more likely to succeed. Reliable use of gigantic pages either requires boot time allcoation or CMA, no need to overcomplicate some places in the kernel to optimize for corner cases that are broken in other areas of the kernel. This patch (of 2): Let's enforce pageblock_order < MAX_ORDER and simplify. Especially patch #1 can be regarded a cleanup before: [PATCH v5 0/6] Use pageblock_order for cma and alloc_contig_range alignment. [2] [1] https://lkml.kernel.org/r/87r189a2ks.fsf@linux.ibm.com [2] https://lkml.kernel.org/r/20220211164135.1803616-1-zi.yan@sent.com Link: https://lkml.kernel.org/r/20220214174132.219303-2-david@redhat.com Signed-off-by:
David Hildenbrand <david@redhat.com> Reviewed-by:
Zi Yan <ziy@nvidia.com> Acked-by:
Rob Herring <robh@kernel.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Frank Rowand <frowand.list@gmail.com> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: John Garry via iommu <iommu@lists.linux-foundation.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
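One plausible shape of the enforcement, assuming the hugetlb configuration clamps pageblock_order against the buddy's maximum order (a sketch, not necessarily the exact hunk):

  /* Huge pages: group by HUGETLB_PAGE_ORDER, but never let
   * pageblock_order exceed what the buddy can serve (MAX_ORDER - 1). */
  #define pageblock_order min_t(unsigned int, HUGETLB_PAGE_ORDER, MAX_ORDER - 1)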
-
Huang, Ying authored
Qian Cai reported a boot crash on arm64 systems, caused by: 0fb3978b ("sched/numa: Fix NUMA topology for systems with CPU-less nodes") The bug is that node_state() must be supplied a valid node_states[] array index, but in task_numa_placement() the max_nid search can fail with NUMA_NO_NODE, which is not a valid index. Fix it by checking that max_nid is a valid index. [ mingo: Added changelog. ] Fixes: 0fb3978b ("sched/numa: Fix NUMA topology for systems with CPU-less nodes") Reported-by:
Qian Cai <quic_qiancai@quicinc.com> Tested-by:
Qian Cai <quic_qiancai@quicinc.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
"Huang, Ying" <ying.huang@intel.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org>
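A sketch of the guard described above; the node_states[] index used here (N_CPU) is illustrative:

  /* max_nid may be NUMA_NO_NODE if the max_faults search found nothing;
   * only a real node id may be passed to node_state(). */
  if (max_nid != NUMA_NO_NODE && node_state(max_nid, N_CPU))
          sched_setnuma(p, max_nid);  /* illustrative use of the node */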
-
- Mar 21, 2022
-
-
Matthew Wilcox (Oracle) authored
Instead of declaring a struct page_vma_mapped_walk directly, use these helpers to allow us to transition to a PFN approach in the following patches. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org>
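The helpers are not named in this excerpt; assuming they are the DEFINE_PAGE_VMA_WALK()-style initializer macros, usage would look roughly like:

  /* Declare the walk via the helper instead of open-coding the
   * struct page_vma_mapped_walk initializer (macro name assumed). */
  DEFINE_PAGE_VMA_WALK(pvmw, page, vma, address, 0);

  while (page_vma_mapped_walk(&pvmw)) {
          /* ... examine each place the page is mapped ... */
  }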
-
Matthew Wilcox (Oracle) authored
invalidate_inode_page() is the only caller of invalidate_complete_page() and inlining it reveals that the first check is unnecessary (because we hold the page locked, and we just retrieved the mapping from the page). Actually, it does make a difference, in that tail pages no longer fail at this check, so it's now possible to remove a tail page from a mapping. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
John Hubbard <jhubbard@nvidia.com> Reviewed-by:
Christoph Hellwig <hch@lst.de>
-
David Howells authored
free_watch() does everything barring actually freeing the watch object. Fix this by adding the missing kfree. kmemleak produces a report something like the following. Note that as an address can be seen in the first word, the watch would appear to have gone through call_rcu().

  BUG: memory leak
  unreferenced object 0xffff88810ce4a200 (size 96):
    comm "syz-executor352", pid 3605, jiffies 4294947473 (age 13.720s)
    hex dump (first 32 bytes):
      e0 82 48 0d 81 88 ff ff 00 00 00 00 00 00 00 00  ..H.............
      80 a2 e4 0c 81 88 ff ff 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<ffffffff8214e6cc>] kmalloc include/linux/slab.h:581 [inline]
      [<ffffffff8214e6cc>] kzalloc include/linux/slab.h:714 [inline]
      [<ffffffff8214e6cc>] keyctl_watch_key+0xec/0x2e0 security/keys/keyctl.c:1800
      [<ffffffff8214ec84>] __do_sys_keyctl+0x3c4/0x490 security/keys/keyctl.c:2016
      [<ffffffff84493a25>] do_syscall_x64 arch/x86/entry/common.c:50 [inline]
      [<ffffffff84493a25>] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
      [<ffffffff84600068>] entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: c73be61c ("pipe: Add general notification queue support") Reported-and-tested-by:
<syzbot+6e2de48f06cdb2884bfc@syzkaller.appspotmail.com> Signed-off-by:
David Howells <dhowells@redhat.com>
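A sketch of the fix's shape, assuming free_watch() is the call_rcu() destructor for a struct watch (teardown details elided):

  static void free_watch(struct rcu_head *rcu)
  {
          struct watch *watch = container_of(rcu, struct watch, rcu);

          /* ... drop the queue and cred references ... */
          kfree(watch);  /* the previously missing free */
  }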
-
David Howells authored
In watch_queue_set_size(), the error cleanup code doesn't take account of the fact that __free_page() can't handle a NULL pointer when trying to free up buffer pages that did get allocated. Fix this by only calling __free_page() on the pages actually allocated. Without the fix, this can lead to something like the following:

  BUG: KASAN: null-ptr-deref in __free_pages+0x1f/0x1b0 mm/page_alloc.c:5473
  Read of size 4 at addr 0000000000000034 by task syz-executor168/3599
  ...
  Call Trace:
   <TASK>
   __dump_stack lib/dump_stack.c:88 [inline]
   dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
   __kasan_report mm/kasan/report.c:446 [inline]
   kasan_report.cold+0x66/0xdf mm/kasan/report.c:459
   check_region_inline mm/kasan/generic.c:183 [inline]
   kasan_check_range+0x13d/0x180 mm/kasan/generic.c:189
   instrument_atomic_read include/linux/instrumented.h:71 [inline]
   atomic_read include/linux/atomic/atomic-instrumented.h:27 [inline]
   page_ref_count include/linux/page_ref.h:67 [inline]
   put_page_testzero include/linux/mm.h:717 [inline]
   __free_pages+0x1f/0x1b0 mm/page_alloc.c:5473
   watch_queue_set_size+0x499/0x630 kernel/watch_queue.c:275
   pipe_ioctl+0xac/0x2b0 fs/pipe.c:632
   vfs_ioctl fs/ioctl.c:51 [inline]
   __do_sys_ioctl fs/ioctl.c:874 [inline]
   __se_sys_ioctl fs/ioctl.c:860 [inline]
   __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:860
   do_syscall_x64 arch/x86/entry/common.c:50 [inline]
   do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
   entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: c73be61c ("pipe: Add general notification queue support") Reported-and-tested-by:
<syzbot+d55757faa9b80590767b@syzkaller.appspotmail.com> Signed-off-by:
David Howells <dhowells@redhat.com> Reviewed-by:
Fabio M. De Francesco <fmdefrancesco@gmail.com>
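A sketch of the corrected error path, assuming pages[] is filled in order so only slots below the failure index hold real pages:

  error_p:
          /* Free only the pages that were actually allocated. */
          while (--i >= 0)
                  __free_page(pages[i]);
          kfree(pages);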
-
- Mar 20, 2022
-
-
Steven Rostedt (Google) authored
When an enum is used in the visible parts of a trace event that is exported to user space, user space applications like perf and trace-cmd do not have a way to know what the value of the enum is. To solve this, at boot up (or module load) the printk formats are modified to replace the enum with its numeric value in the string output. Array fields of the event are defined by [<nr-elements>] in the type portion of the format file so that the user space parsers can correctly parse the array into the appropriate size chunks. But in some trace events, an enum is used in defining the size of the array, which once again breaks the parsing of user space tooling. This was solved the same way as the print formats were, but it modified the type strings of the trace event. This caused crashes on some architectures because, as opposed to the print string, the type string is a const string value. This was not detected on x86, as it appears that const strings are still writable (at least during boot up), but on other architectures this is not the case, and writing to a const string will cause a kernel fault. To fix this, use kstrdup() to copy the type before modifying it. If the trace event is for the core kernel, there's no need to free it because the string will be in use for as long as the machine is online. For modules, create a linked list to store all the strings being allocated for modules, and when the module is removed, free them. Link: https://lore.kernel.org/all/yt9dr1706b4i.fsf@linux.ibm.com/ Link: https://lkml.kernel.org/r/20220318153432.3984b871@gandalf.local.home Tested-by:
Marc Zyngier <maz@kernel.org> Tested-by:
Sven Schnelle <svens@linux.ibm.com> Reported-by:
Sven Schnelle <svens@linux.ibm.com> Fixes: b3bc8547 ("tracing: Have TRACE_DEFINE_ENUM affect trace event types as well") Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
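A minimal sketch of the kstrdup() approach described above (the field layout and rewrite step are assumptions):

  /* Copy the type string before editing it: the original may live in
   * rodata, which faults on write on some architectures. */
  char *type = kstrdup(field->type, GFP_KERNEL);

  if (!type)
          return;
  /* ... rewrite the enum name in 'type' to its numeric value ... */
  field->type = type;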
-
- Mar 17, 2022
-
-
Qian Cai authored
Previously, I failed to realize that Kees' patch [1] had not been merged into the mainline yet, and dropped DEBUG_INFO=y too eagerly from the mainline. As a result, "make debug.config" won't be able to flip DEBUG_INFO=n from the existing .config. This should close the gap of a few weeks before Kees' patch lands, and work regardless of its merging status anyway. Link: https://lore.kernel.org/all/20220125075126.891825-1-keescook@chromium.org/ [1] Link: https://lkml.kernel.org/r/20220308153524.8618-1-quic_qiancai@quicinc.com Signed-off-by:
Qian Cai <quic_qiancai@quicinc.com> Reported-by:
Daniel Thompson <daniel.thompson@linaro.org> Reviewed-by:
Daniel Thompson <daniel.thompson@linaro.org> Cc: Kees Cook <keescook@chromium.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Mar 15, 2022
-
-
Beau Belgrave authored
Tracefs by default is locked down heavily. System operators can open up some files, such as user_events, to a broader set of users. These users do not have access within tracefs beyond just the user_event files. Due to this restriction, the trace_add_event_call/remove calls will silently fail since the caller does not have permissions to create directories. To fix this, trace_add_event_call/remove calls are issued with the creds of the global root UID overridden. Creds are reverted immediately afterward. Link: https://lkml.kernel.org/r/20220308222807.2040-1-beaub@linux.microsoft.com Signed-off-by:
Beau Belgrave <beaub@linux.microsoft.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
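A hedged sketch of the override pattern; the wrapper name is hypothetical and the message does not say exactly which cred object is used:

  #include <linux/cred.h>

  /* Hypothetical wrapper: register the event call as global root. */
  static int user_event_trace_add(struct trace_event_call *call)
  {
          const struct cred *old = override_creds(&init_cred);
          int ret = trace_add_event_call(call);

          revert_creds(old);  /* creds reverted immediately afterward */
          return ret;
  }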
-
Ingo Molnar authored
This header is not (yet) standalone. Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
- Mar 11, 2022
-
-
Steven Rostedt (Google) authored
kzalloc virtual addresses do not work with SetPageReserved; use actual page virtual addresses instead, obtained via alloc_pages. The issue is reported when booting with user_events and DEBUG_VM_PGFLAGS=y. Also make the number of events depend on the allocation ORDER. Link: https://lore.kernel.org/all/CADYN=9+xY5Vku3Ws5E9S60SM5dCFfeGeRBkmDFbcxX0ZMoFing@mail.gmail.com/ Link: https://lore.kernel.org/all/20220311223028.1865-1-beaub@linux.microsoft.com/ Cc: Beau Belgrave <beaub@linux.microsoft.com> Reported-by:
Anders Roxell <anders.roxell@linaro.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
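A sketch of the allocation change, assuming one higher-order allocation backs the register page (the names and the ORDER constant are illustrative):

  /* SetPageReserved() needs a real struct page, which kzalloc()
   * addresses do not reliably map back to. */
  struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO, ORDER);

  if (!page)
          return -ENOMEM;
  SetPageReserved(page);
  register_page_data = page_address(page);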
-
David Howells authored
watch_queue_clear() has a comment stating that setting ->defunct to true prevents new additions as well as preventing notifications. Whilst the latter is true, the former is superfluous since, at the time this function is called, the pipe cannot be accessed to add new event sources. Remove the "new additions" bit from the comment. Fixes: c73be61c ("pipe: Add general notification queue support") Reported-by:
Jann Horn <jannh@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
David Howells authored
There's nothing to synchronise post_one_notification() versus pipe_read(). Whilst posting is done under pipe->rd_wait.lock, the reader only takes pipe->mutex which cannot bar notification posting as that may need to be made from contexts that cannot sleep. Fix this by setting pipe->head with a barrier in post_one_notification() and reading pipe->head with a barrier in pipe_read(). If that's not sufficient, the rd_wait.lock will need to be taken, possibly in a ->confirm() op so that it only applies to notifications. The lock would, however, have to be dropped before copy_page_to_iter() is invoked. Fixes: c73be61c ("pipe: Add general notification queue support") Reported-by:
Jann Horn <jannh@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
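A sketch of the publish/consume pairing described above, with the store side in post_one_notification() and the load side in pipe_read():

  /* Writer: fill the slot, then publish the new head. */
  smp_store_release(&pipe->head, head + 1);

  /* Reader: only observe slot contents after seeing the new head. */
  head = smp_load_acquire(&pipe->head);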
-
David Howells authored
Free the watch_queue note allocation bitmap when the watch_queue is destroyed. Fixes: c73be61c ("pipe: Add general notification queue support") Reported-by:
Jann Horn <jannh@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
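The essence of the fix, assuming the bitmap pointer lives in the watch_queue (field name taken from this patch series):

  bitmap_free(wqueue->notes_bitmap);  /* free the note bitmap too */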
-
David Howells authored
Currently, watch_queue_set_size() sets the number of notes available in wqueue->nr_notes according to the number of notes allocated, but sets the size of the bitmap to the unrounded number of notes originally asked for. Fix this by setting the bitmap size to the number of notes we're actually going to make available (ie. the number allocated). Fixes: c73be61c ("pipe: Add general notification queue support") Reported-by:
Jann Horn <jannh@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Christophe JAILLET authored
Use bitmap_alloc() to simplify code, improve the semantic and reduce some open-coded arithmetic in allocator arguments. Also change a memset(0xff) into an equivalent bitmap_fill() to keep consistency. Signed-off-by:
Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
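A sketch of the conversion; bitmap_alloc() hides the BITS_TO_LONGS() arithmetic and bitmap_fill() replaces the memset(0xff):

  unsigned long *bitmap = bitmap_alloc(nr_notes, GFP_KERNEL);

  if (!bitmap)
          return -ENOMEM;
  bitmap_fill(bitmap, nr_notes);  /* all notes start out free */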
-
David Howells authored
The pipe ring size must always be a power of 2 as the head and tail pointers are masked off by AND'ing with the size of the ring - 1. watch_queue_set_size(), however, lets you specify any number of notes between 1 and 511. This number is passed through to pipe_resize_ring() without checking/forcing its alignment. Fix this by rounding the number of slots required up to the nearest power of two. The request is meant to guarantee that at least that many notifications can be generated before the queue is full, so rounding down isn't an option, but, alternatively, it may be better to give an error if we aren't allowed to allocate that much ring space. Fixes: c73be61c ("pipe: Add general notification queue support") Reported-by:
Jann Horn <jannh@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
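A sketch of the rounding, using the kernel's real helper:

  /* The pipe ring size must be a power of two; round up so at least
   * the requested number of notes still fits. */
  nr_notes = roundup_pow_of_two(nr_notes);
  ret = pipe_resize_ring(pipe, nr_notes);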
-
David Howells authored
When a pipe ring descriptor points to a notification message, the refcount on the backing page is incremented by the generic get function, but the release function, which marks the bitmap, doesn't drop the page ref. Fix this by calling generic_pipe_buf_release() at the end of watch_queue_pipe_buf_release(). Fixes: c73be61c ("pipe: Add general notification queue support") Reported-by:
Jann Horn <jannh@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
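A sketch of the corrected release hook; generic_pipe_buf_release() drops the page reference taken by the generic get function:

  static void watch_queue_pipe_buf_release(struct pipe_inode_info *pipe,
                                           struct pipe_buffer *buf)
  {
          /* ... mark the note free in the bitmap ... */
          generic_pipe_buf_release(pipe, buf);  /* drop the page ref */
  }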
-
David Howells authored
In watch_queue_set_filter(), there are a couple of places where we check that the filter type value does not exceed what the type_filter bitmap can hold. One place calculates the number of bits by:

  if (tf[i].type >= sizeof(wfilter->type_filter) * 8)

which is fine, but the second does:

  if (tf[i].type >= sizeof(wfilter->type_filter) * BITS_PER_LONG)

which is not. This can lead to a couple of out-of-bounds writes due to a too-large type: (1) __set_bit() on wfilter->type_filter (2) Writing more elements in wfilter->filters[] than we allocated. Fix this by just using the proper WATCH_TYPE__NR instead, which is the number of types we actually know about. The bug may cause an oops looking something like:

  BUG: KASAN: slab-out-of-bounds in watch_queue_set_filter+0x659/0x740
  Write of size 4 at addr ffff88800d2c66bc by task watch_queue_oob/611
  ...
  Call Trace:
   <TASK>
   dump_stack_lvl+0x45/0x59
   print_address_description.constprop.0+0x1f/0x150
   ...
   kasan_report.cold+0x7f/0x11b
   ...
   watch_queue_set_filter+0x659/0x740
   ...
   __x64_sys_ioctl+0x127/0x190
   do_syscall_64+0x43/0x90
   entry_SYSCALL_64_after_hwframe+0x44/0xae

  Allocated by task 611:
   kasan_save_stack+0x1e/0x40
   __kasan_kmalloc+0x81/0xa0
   watch_queue_set_filter+0x23a/0x740
   __x64_sys_ioctl+0x127/0x190
   do_syscall_64+0x43/0x90
   entry_SYSCALL_64_after_hwframe+0x44/0xae

  The buggy address belongs to the object at ffff88800d2c66a0 which belongs
  to the cache kmalloc-32 of size 32. The buggy address is located 28 bytes
  inside of the 32-byte region [ffff88800d2c66a0, ffff88800d2c66c0)

Fixes: c73be61c ("pipe: Add general notification queue support") Reported-by:
Jann Horn <jannh@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
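A sketch of the corrected bound, using the enum count named in the message:

  /* WATCH_TYPE__NR is the number of known notification types, which is
   * what type_filter and filters[] are actually sized for. */
  if (tf[i].type >= WATCH_TYPE__NR)
          continue;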
-
Steven Rostedt (Google) authored
Add ftrace_boot_snapshot kernel parameter that will take a snapshot at the end of boot up just before switching over to user space (it happens during the kernel freeing of init memory). This is useful when there's interesting data that can be collected from kernel start up, but gets overridden by user space start up code. With this option, the ring buffer content from the boot up traces gets saved in the snapshot at the end of boot up. This trace can be read from: /sys/kernel/tracing/snapshot Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
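A hedged usage sketch: append the parameter to the kernel command line, then read the preserved boot trace once user space is up (the snapshot path is from the message):

  ftrace_boot_snapshot          (appended to the kernel command line)

  # cat /sys/kernel/tracing/snapshot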
-
Steven Rostedt (Google) authored
The macro TRACE_DEFINE_ENUM is used to convert enums in the kernel to their actual value when they are exported to user space via the trace event format file. Currently only the enums in the "print fmt" (TP_printk in the TRACE_EVENT macro) have the enums converted. But the enums can also be used to denote an array size:

  field:unsigned int fc_ineligible_rc[EXT4_FC_REASON_MAX]; offset:12; size:36; signed:0;

The EXT4_FC_REASON_MAX has no meaning to user space, but user space needs that information to know how to parse the array. Have the array indexes be parsed as well. Link: https://lore.kernel.org/all/cover.1646922487.git.riteshh@linux.ibm.com/ Reported-by:
Ritesh Harjani <riteshh@linux.ibm.com> Tested-by:
Ritesh Harjani <riteshh@linux.ibm.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
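For illustration, the existing macro that exports such an enum to the format parser; EXT4_FC_REASON_MAX is the value from the example above:

  /* Lets user space resolve the array bound in the format file. */
  TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX);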
-
Tom Zanussi authored
0-day reported the strncpy error below:

  ../kernel/trace/trace_events_synth.c: In function 'last_cmd_set':
  ../kernel/trace/trace_events_synth.c:65:9: warning: 'strncpy' specified bound depends on the length of the source argument [-Wstringop-truncation]
     65 |         strncpy(last_cmd, str, strlen(str) + 1);
        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ../kernel/trace/trace_events_synth.c:65:32: note: length computed here
     65 |         strncpy(last_cmd, str, strlen(str) + 1);
        |                                ^~~~~~~~~~~

There's no reason to use strncpy here; in fact there's no reason to do anything but a simple kstrdup() (note we don't even need to check for failure since last_cmd is expected to be either the last cmd string or NULL, and the containing function is a void return). Link: https://lkml.kernel.org/r/77deca8cbfd226981b3f1eab203967381e9b5bd9.camel@kernel.org Fixes: 27c888da ("tracing: Remove size restriction on synthetic event cmd error logging") Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Tom Zanussi <zanussi@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
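The essence of the replacement (shape assumed from the message):

  kfree(last_cmd);
  last_cmd = kstrdup(str, GFP_KERNEL);  /* NULL on failure is acceptable here */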
-
Beau Belgrave authored
Always look up user_events while under the event_mutex, and take a ref count on the user_event before leaving the lock. This ensures that all paths under the event_mutex that check the ref counts will be synchronized. The ioctl add/delete paths are protected by the reg_mutex. However, dyn_event is only protected by the event_mutex. The dyn_event delete path cannot acquire the reg_mutex, since that could deadlock against the ioctl delete case, which acquires the event_mutex after acquiring the reg_mutex. Link: https://lkml.kernel.org/r/20220310001141.1660-1-beaub@linux.microsoft.com Signed-off-by:
Beau Belgrave <beaub@linux.microsoft.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
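A sketch of the pattern described above; the lookup helper and field names are illustrative:

  mutex_lock(&event_mutex);
  user = find_user_event(name);        /* hypothetical lookup */
  if (user)
          atomic_inc(&user->refcnt);   /* ref taken before dropping the lock */
  mutex_unlock(&event_mutex);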
-
- Mar 10, 2022
-
-
Steven Rostedt (Google) authored
Allow custom events to be added to the events directory in the tracefs file system. For example, a module could be installed that attaches to an event and wants to be enabled and disabled via the tracefs file system. It would use trace_add_event_call() to add the event to the tracefs directory, and trace_remove_event_call() to remove it. Make both those functions EXPORT_SYMBOL_GPL(). Link: https://lkml.kernel.org/r/20220303220625.186988045@goodmis.org Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Joel Fernandes <joel@joelfernandes.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Tom Zanussi <zanussi@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
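A sketch of the module-side usage the export enables; my_event_call stands in for a module-defined trace_event_call:

  static int __init my_event_init(void)
  {
          return trace_add_event_call(&my_event_call);
  }

  static void __exit my_event_exit(void)
  {
          trace_remove_event_call(&my_event_call);
  }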
-
Steven Rostedt (Google) authored
Using strncat(dest, str, n) is confusing, as the size of dest must be at least strlen(dest) + n + 1. Even more confusing, using sizeof(string constant) gives you strlen(string constant) + 1 and not just strlen(). These two together made using strncat() with a constant string a bit off in the calculations, as we have:

  len = sizeof(HIST_PREFIX) + strlen(str) + 1;
  kfree(last_cmd);
  last_cmd = kzalloc(len, GFP_KERNEL);
  strcpy(last_cmd, HIST_PREFIX);
  len -= sizeof(HIST_PREFIX) + 1;
  strncat(last_cmd, str, len);

The above works if we s/sizeof/strlen/ with HIST_PREFIX (which is defined as "hist:"), but because sizeof(HIST_PREFIX) is equal to strlen(HIST_PREFIX) + 1, we can drop the +1 in the code. But at least comment that we are doing so. Link: https://lore.kernel.org/all/202203082112.Iu7tvFl4-lkp@intel.com/ Fixes: 9f8e5aee ("tracing: Fix allocation of last_cmd in last_cmd_set()") Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
-
Beau Belgrave authored
Ensure name is initialized to NULL by default to prevent possible edge cases where it could be used while uninitialized. Add an explicit check for a NULL name to cover those edge cases. Link: https://lore.kernel.org/bpf/20220224105334.GA2248@kili/ Link: https://lore.kernel.org/linux-trace-devel/20220224181637.2129-1-beaub@linux.microsoft.com Signed-off-by:
Beau Belgrave <beaub@linux.microsoft.com> Reported-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
-
- Mar 09, 2022
-
-
Jiapeng Chong authored
Clean up the following clang-w1 warnings:

  kernel/trace/ftrace.c:7827: warning: Function parameter or member 'ops' not described in 'unregister_ftrace_function'.
  kernel/trace/ftrace.c:7805: warning: Function parameter or member 'ops' not described in 'register_ftrace_function'.

Link: https://lkml.kernel.org/r/20220307004303.26399-1-jiapeng.chong@linux.alibaba.com Reported-by:
Abaci Robot <abaci@linux.alibaba.com> Signed-off-by:
Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
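The fix presumably adds kernel-doc for the parameter, along these lines:

  /**
   * unregister_ftrace_function - unregister a function for profiling
   * @ops: the ops structure that holds the function to unregister
   * ...
   */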
-
Nicolas Saenz Julienne authored
At the moment, running osnoise on a nohz_full CPU or at uncontested FIFO priority with a PREEMPT_RCU kernel might have the side effect of extending grace periods too much. This will entice RCU to force a context switch on the wayward CPU to end the grace period, all while introducing unwarranted noise into the tracer. This behaviour is unavoidable, as overly extending grace periods might exhaust the system's memory. This same exact problem is what extended quiescent states (EQS) were created for; conversely, rcu_momentary_dyntick_idle() emulates them by performing a zero duration EQS. So let's make use of it. In the common case rcu_momentary_dyntick_idle() is fairly inexpensive: atomically incrementing a local per-CPU counter and doing a store. So it shouldn't affect osnoise's measurements (which have a 1us granularity), so we'll call it unconditionally. The uncommon case involves calling rcu_momentary_dyntick_idle() after having the osnoise process:

- Receive an expedited quiescent state IPI with preemption disabled or during an RCU critical section (activates the rdp->cpu_no_qs.b.exp code-path).

- Be preempted within an RCU critical section and have the subsequent outermost rcu_read_unlock() called with interrupts disabled (t->rcu_read_unlock_special.b.blocked code-path).

Neither of those is possible at the moment, and they are unlikely to be in the future given the design of the osnoise loop. On top of this, the noise generated by the situations described above is unavoidable, and if not exposed by rcu_momentary_dyntick_idle() it will eventually be seen in subsequent rcu_read_unlock() calls or schedule operations. Link: https://lkml.kernel.org/r/20220307180740.577607-1-nsaenzju@redhat.com Cc: stable@vger.kernel.org Fixes: bce29ac9 ("trace: Add osnoise tracer") Signed-off-by:
Nicolas Saenz Julienne <nsaenzju@redhat.com> Acked-by:
Paul E. McKenney <paulmck@kernel.org> Acked-by:
Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
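A sketch of where the call could sit in the sampling loop (the loop shape is assumed, not taken from the patch):

  while (osnoise_should_run()) {          /* hypothetical loop condition */
          run_osnoise_sample();           /* hypothetical sampling step */
          rcu_momentary_dyntick_idle();   /* zero-duration EQS for RCU */
  }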
-
Daniel Bristot de Oliveira authored
Nicolas reported that using:

  # trace-cmd record -e all -M 10 -p osnoise --poll

resulted in the following kernel warning:

  ------------[ cut here ]------------
  WARNING: CPU: 0 PID: 1217 at kernel/tracepoint.c:404 tracepoint_probe_unregister+0x280/0x370
  [...]
  CPU: 0 PID: 1217 Comm: trace-cmd Not tainted 5.17.0-rc6-next-20220307-nico+ #19
  RIP: 0010:tracepoint_probe_unregister+0x280/0x370
  [...]
  CR2: 00007ff919b29497 CR3: 0000000109da4005 CR4: 0000000000170ef0
  Call Trace:
   <TASK>
   osnoise_workload_stop+0x36/0x90
   tracing_set_tracer+0x108/0x260
   tracing_set_trace_write+0x94/0xd0
   ? __check_object_size.part.0+0x10a/0x150
   ? selinux_file_permission+0x104/0x150
   vfs_write+0xb5/0x290
   ksys_write+0x5f/0xe0
   do_syscall_64+0x3b/0x90
   entry_SYSCALL_64_after_hwframe+0x44/0xae
  RIP: 0033:0x7ff919a18127
  [...]
  ---[ end trace 0000000000000000 ]---

The warning complains about an attempt to unregister an unregistered tracepoint. This happens on trace-cmd because it first stops tracing, and then switches the tracer to nop. Which is equivalent to:

  # cd /sys/kernel/tracing/
  # echo osnoise > current_tracer
  # echo 0 > tracing_on
  # echo nop > current_tracer

The osnoise tracer stops the workload when no trace instance is actually collecting data. This can be caused both by disabling tracing or disabling the tracer itself. To avoid unregistering events twice, use the existing trace_osnoise_callback_enabled variable to check if the events (and the workload) are actually active before trying to deactivate them. Link: https://lore.kernel.org/all/c898d1911f7f9303b7e14726e7cc9678fbfb4a0e.camel@redhat.com/ Link: https://lkml.kernel.org/r/938765e17d5a781c2df429a98f0b2e7cc317b022.1646823913.git.bristot@kernel.org Cc: stable@vger.kernel.org Cc: Marcelo Tosatti <mtosatti@redhat.com> Fixes: 2fac8d64 ("tracing/osnoise: Allow multiple instances of the same tracer") Reported-by:
Nicolas Saenz Julienne <nsaenzju@redhat.com> Signed-off-by:
Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
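A sketch of the guard the message describes (surrounding teardown elided):

  static void osnoise_workload_stop(void)
  {
          /* Events already unhooked by a previous stop? */
          if (!trace_osnoise_callback_enabled)
                  return;
          trace_osnoise_callback_enabled = false;
          /* ... unregister tracepoints and stop the workload ... */
  }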
-
- Mar 08, 2022
-
-
K Prateek Nayak authored
While investigating the sparse warning reported by the LKP bot [1], we observed a redundant variable "top" in the function build_sched_domains() that was introduced in the recent commit e496132e ("sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans multiple LLCs"). The existing variable "sd" suffices, which allows us to remove the redundant variable "top" while annotating the other variable "top_p" with the "__rcu" annotation to silence the sparse warning. [1] https://lore.kernel.org/lkml/202202170853.9vofgC3O-lkp@intel.com/ Fixes: e496132e ("sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans multiple LLCs") Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Valentin Schneider <valentin.schneider@arm.com> Link: https://lore.kernel.org/r/20220218162743.1134-1-kprateek.nayak@amd.com
-
Dietmar Eggemann authored
The `struct rq *rq` parameter isn't used. Remove it. Signed-off-by:
Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Juri Lelli <juri.lelli@redhat.com> Link: https://lore.kernel.org/r/20220302183433.333029-7-dietmar.eggemann@arm.com
-
Dietmar Eggemann authored
The need_pull_[rt|dl]_task() and pull_[rt|dl]_task() functions are not used on a !CONFIG_SMP system. Remove them. Signed-off-by:
Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Juri Lelli <juri.lelli@redhat.com> Link: https://lore.kernel.org/r/20220302183433.333029-6-dietmar.eggemann@arm.com
-
Dietmar Eggemann authored
Deploy __node_2_pdl(node), __node_2_dle(node) and rb_first_cached() consistently throughout the sched class source file which makes the code at least easier to read. Signed-off-by:
Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Juri Lelli <juri.lelli@redhat.com> Link: https://lore.kernel.org/r/20220302183433.333029-5-dietmar.eggemann@arm.com
-
Dietmar Eggemann authored
Both functions do almost the same thing, namely check whether admission control is still respected. With exclusive cpusets, dl_task_can_attach() checks if the destination cpuset (i.e. its root domain) has enough CPU capacity to accommodate the task. dl_cpu_busy() checks if there is enough CPU capacity in the cpuset in case the CPU is hot-plugged out. dl_task_can_attach() is used to check if a task can be admitted, while dl_cpu_busy() is used to check if a CPU can be hot-plugged out. Make dl_cpu_busy() able to deal with a task and use it instead of dl_task_can_attach() in task_can_attach(). Signed-off-by:
Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Juri Lelli <juri.lelli@redhat.com> Link: https://lore.kernel.org/r/20220302183433.333029-4-dietmar.eggemann@arm.com
-
Dietmar Eggemann authored
Move the deadline bandwidth management (admission control) functions __dl_add(), __dl_sub() and __dl_overflow() as well as the bandwidth reclaim function __dl_update() from private task scheduler header file to the deadline sched class source file. The functions are only used internally so they don't have to be exported. Signed-off-by:
Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Juri Lelli <juri.lelli@redhat.com> Link: https://lore.kernel.org/r/20220302183433.333029-3-dietmar.eggemann@arm.com
-
Dietmar Eggemann authored
Since commit 1724813d ("sched/deadline: Remove the sysctl_sched_dl knobs") the default deadline bandwidth control structure has no purpose. Remove it. Signed-off-by:
Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Juri Lelli <juri.lelli@redhat.com> Link: https://lore.kernel.org/r/20220302183433.333029-2-dietmar.eggemann@arm.com
-
- Mar 07, 2022
-
-
Frederic Weisbecker authored
RCU_SOFTIRQ used to be special in that it could be raised on purpose within the idle path to prevent the tick from stopping. Some code still guards against unnecessary warnings related to this specific behaviour when entering dynticks-idle mode. However, the nohz layout has changed quite a bit in ten years, and the removal of CONFIG_RCU_FAST_NO_HZ was the final straw for this special dispensation: the RCU_SOFTIRQ vector is now expected to be raised from sane places. A remaining corner case is admitted, though, when the vector is invoked in the fragile hotplug path. Signed-off-by:
Frederic Weisbecker <frederic@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Paul Menzel <pmenzel@molgen.mpg.de>
-
Frederic Weisbecker authored
With the removal of CONFIG_RCU_FAST_NO_HZ, the parameters in rcu_needs_cpu() are not necessary anymore. Simply remove them. Signed-off-by:
Frederic Weisbecker <frederic@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Paul Menzel <pmenzel@molgen.mpg.de>
-