  1. Feb 17, 2021
  2. Jan 27, 2021
  3. Sep 25, 2020
  4. Oct 29, 2019
    • sched/fair/util_est: Implement faster ramp-up EWMA on utilization increases · b8c96361
      Patrick Bellasi authored
      
      The estimated utilization for a task:
      
          util_est = max(util_avg, est.enqueued, est.ewma)
      
      is defined based on:
      
       - util_avg: the PELT defined utilization
       - est.enqueued: the util_avg at the end of the last activation
        - est.ewma:     an exponential moving average of the est.enqueued samples
      
      According to this definition, when a task suddenly changes its bandwidth
      requirements from small to big, the EWMA will need to collect multiple
      samples before converging up to track the new big utilization.
      
       This slow convergence towards bigger utilization values is not
       aligned with the default scheduler behavior, which is to optimize for
       performance. Moreover, the est.ewma component fails to compensate for
       temporary utilization drops which span just a few est.enqueued samples.
      
      To let util_est do a better job in the scenario depicted above, change
       its definition so that util_est directly follows upward motion and
       the est.ewma is decayed only on downward motion.
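
       A minimal sketch of the asymmetric update described above, written as
       standalone C; the helper name and the 1/4 EWMA weight are illustrative
       assumptions rather than the exact kernel code:

          #define UTIL_EST_WEIGHT_SHIFT  2   /* EWMA weight: 1/4 */

          struct util_est { unsigned int enqueued; unsigned int ewma; };

          static void util_est_dequeue_update(struct util_est *ue, unsigned int sample)
          {
                  ue->enqueued = sample;
                  if (sample >= ue->ewma) {
                          /* Upward motion: track the new sample directly. */
                          ue->ewma = sample;
                          return;
                  }
                  /* Downward motion: decay as before, ewma += (sample - ewma) / 4. */
                  ue->ewma -= (ue->ewma - sample) >> UTIL_EST_WEIGHT_SHIFT;
          }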
      
       Signed-off-by: Patrick Bellasi <patrick.bellasi@matbug.com>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
       Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Douglas Raillard <douglas.raillard@arm.com>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Perret <qperret@google.com>
      Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20191023205630.14469-1-patrick.bellasi@matbug.net
      
      
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b8c96361
  5. Jun 03, 2019
  6. Oct 02, 2018
    • sched/fair: Disable LB_BIAS by default · fdf5f315
      Dietmar Eggemann authored
       LB_BIAS allows adjusting how conservatively load is balanced.
      
      The rq->cpu_load[idx] array is used for this functionality. It contains
      weighted CPU load decayed average values over different intervals
      (idx = 1..4). Idx = 0 is the weighted CPU load itself.
      
      The values are updated during scheduler_tick, before idle balance and at
      nohz exit.
      
      There are 5 different types of idx's per sched domain (sd). Each of them
      is used to index into the rq->cpu_load[idx] array in a specific scenario
      (busy, idle and newidle for load balancing, forkexec for wake-up
      slow-path load balancing and wake for affine wakeup based on weight).
      Only the sd idx's for busy and idle load balancing are set to 2,3 or 1,2
      respectively. All the other sd idx's are set to 0.
      
      Conservative load balancing is achieved for sd idx's >= 1 by using the
      min/max (source_load()/target_load()) value between the current weighted
      CPU load and the rq->cpu_load[sd idx -1] for the busiest(idlest)/local
      CPU load in load balancing or vice versa in the wake-up slow-path load
      balancing.
      There is no conservative balancing for sd idx = 0 since only current
      weighted CPU load is used in this case.
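
       In code, the bias has roughly the following shape (a simplified sketch of
       source_load()/target_load(), not the verbatim kernel implementation):

          static unsigned long source_load(struct rq *rq, int idx)
          {
                  unsigned long total = weighted_cpuload(rq);

                  if (idx == 0 || !sched_feat(LB_BIAS))
                          return total;

                  /* Busiest/source side: report the smaller value. */
                  return min(rq->cpu_load[idx - 1], total);
          }

          static unsigned long target_load(struct rq *rq, int idx)
          {
                  unsigned long total = weighted_cpuload(rq);

                  if (idx == 0 || !sched_feat(LB_BIAS))
                          return total;

                  /* Local/target side: report the larger value, biasing against migration. */
                  return max(rq->cpu_load[idx - 1], total);
          }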
      
      It is very likely that LB_BIAS' influence on load balancing can be
      neglected (see test results below). This is further supported by:
      
      (1) Weighted CPU load today is by itself a decayed average value (PELT)
          (cfs_rq->avg->runnable_load_avg) and not the instantaneous load
          (rq->load.weight) it was when LB_BIAS was introduced.
      
       (2) Sd imbalance_pct is used for CPU_NEWLY_IDLE and CPU_NOT_IDLE (related
           to sd's newidle and busy idx) in find_busiest_group() when comparing
          busiest and local avg load to make load balancing even more
          conservative.
      
      (3) The sd forkexec and newidle idx are always set to 0 so there is no
          adjustment on how conservatively load balancing is done here.
      
      (4) Affine wakeup based on weight (wake_affine_weight()) will not be
          impacted since the sd wake idx is always set to 0.
      
      Let's disable LB_BIAS by default for a few kernel releases to make sure
      that no workload and no scheduler topology is affected. The benefit of
      being able to remove the LB_BIAS dependency from source_load() and
      target_load() is that the entire rq->cpu_load[idx] code could be removed
      in this case.
      
      It is really hard to say if there is no regression w/o testing this with
      a lot of different workloads on a lot of different platforms, especially
      NUMA machines.
      The following 104 LKP (Linux Kernel Performance) tests were run by the
      0-Day guys mostly on multi-socket hosts with a larger number of logical
      cpus (88, 192).
       The base for the test was commit b3dae109 ("sched/swait: Rename to
       exclusive") (tip/sched/core v4.18-rc1).
      Only 2 out of the 104 tests had a significant change in one of the
      metrics (fsmark/1x-1t-1HDD-btrfs-nfsv4-4M-60G-NoSync-performance +7%
      files_per_sec, unixbench/300s-100%-syscall-performance -11% score).
      Tests which showed a change in one of the metrics are marked with a '*'
      and this change is listed as well.
      
      (a) lkp-bdw-ep3:
            88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz 64G
      
          dd-write/10m-1HDD-cfq-btrfs-100dd-performance
          fsmark/1x-1t-1HDD-xfs-nfsv4-4M-60G-NoSync-performance
        * fsmark/1x-1t-1HDD-btrfs-nfsv4-4M-60G-NoSync-performance
            7.50  7%  8.00  ±  6%  fsmark.files_per_sec
          fsmark/1x-1t-1HDD-btrfs-nfsv4-4M-60G-fsyncBeforeClose-performance
          fsmark/1x-1t-1HDD-btrfs-4M-60G-NoSync-performance
          fsmark/1x-1t-1HDD-btrfs-4M-60G-fsyncBeforeClose-performance
          kbuild/300s-50%-vmlinux_prereq-performance
          kbuild/300s-200%-vmlinux_prereq-performance
          kbuild/300s-50%-vmlinux_prereq-performance-1HDD-ext4
          kbuild/300s-200%-vmlinux_prereq-performance-1HDD-ext4
      
      (b) lkp-skl-4sp1:
            192 threads Intel(R) Xeon(R) Platinum 8160 768G
      
          dbench/100%-performance
          ebizzy/200%-100x-10s-performance
          hackbench/1600%-process-pipe-performance
          iperf/300s-cs-localhost-tcp-performance
          iperf/300s-cs-localhost-udp-performance
          perf-bench-numa-mem/2t-300M-performance
          perf-bench-sched-pipe/10000000ops-process-performance
          perf-bench-sched-pipe/10000000ops-threads-performance
          schbench/2-16-300-30000-30000-performance
          tbench/100%-cs-localhost-performance
      
      (c) lkp-bdw-ep6:
            88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz 128G
      
          stress-ng/100%-60s-pipe-performance
          unixbench/300s-1-whetstone-double-performance
          unixbench/300s-1-shell1-performance
          unixbench/300s-1-shell8-performance
          unixbench/300s-1-pipe-performance
        * unixbench/300s-1-context1-performance
            312  315  unixbench.score
          unixbench/300s-1-spawn-performance
          unixbench/300s-1-syscall-performance
          unixbench/300s-1-dhry2reg-performance
          unixbench/300s-1-fstime-performance
          unixbench/300s-1-fsbuffer-performance
          unixbench/300s-1-fsdisk-performance
          unixbench/300s-100%-whetstone-double-performance
          unixbench/300s-100%-shell1-performance
          unixbench/300s-100%-shell8-performance
          unixbench/300s-100%-pipe-performance
          unixbench/300s-100%-context1-performance
          unixbench/300s-100%-spawn-performance
        * unixbench/300s-100%-syscall-performance
            3571  ±  3%  -11%  3183  ±  4%  unixbench.score
          unixbench/300s-100%-dhry2reg-performance
          unixbench/300s-100%-fstime-performance
          unixbench/300s-100%-fsbuffer-performance
          unixbench/300s-100%-fsdisk-performance
          unixbench/300s-1-execl-performance
          unixbench/300s-100%-execl-performance
        * will-it-scale/brk1-performance
            365004  360387  will-it-scale.per_thread_ops
        * will-it-scale/dup1-performance
            432401  437596  will-it-scale.per_thread_ops
          will-it-scale/eventfd1-performance
          will-it-scale/futex1-performance
          will-it-scale/futex2-performance
          will-it-scale/futex3-performance
          will-it-scale/futex4-performance
          will-it-scale/getppid1-performance
          will-it-scale/lock1-performance
          will-it-scale/lseek1-performance
          will-it-scale/lseek2-performance
        * will-it-scale/malloc1-performance
            47025  45817  will-it-scale.per_thread_ops
            77499  76529  will-it-scale.per_process_ops
          will-it-scale/malloc2-performance
        * will-it-scale/mmap1-performance
            123399  120815  will-it-scale.per_thread_ops
            152219  149833  will-it-scale.per_process_ops
        * will-it-scale/mmap2-performance
            107327  104714  will-it-scale.per_thread_ops
            136405  133765  will-it-scale.per_process_ops
          will-it-scale/open1-performance
        * will-it-scale/open2-performance
            171570  168805  will-it-scale.per_thread_ops
            532644  526202  will-it-scale.per_process_ops
          will-it-scale/page_fault1-performance
          will-it-scale/page_fault2-performance
          will-it-scale/page_fault3-performance
          will-it-scale/pipe1-performance
          will-it-scale/poll1-performance
        * will-it-scale/poll2-performance
            176134  172848  will-it-scale.per_thread_ops
            281361  275053  will-it-scale.per_process_ops
          will-it-scale/posix_semaphore1-performance
          will-it-scale/pread1-performance
          will-it-scale/pread2-performance
          will-it-scale/pread3-performance
          will-it-scale/pthread_mutex1-performance
          will-it-scale/pthread_mutex2-performance
          will-it-scale/pwrite1-performance
          will-it-scale/pwrite2-performance
          will-it-scale/pwrite3-performance
        * will-it-scale/read1-performance
            1190563  1174833  will-it-scale.per_thread_ops
        * will-it-scale/read2-performance
            1105369  1080427  will-it-scale.per_thread_ops
          will-it-scale/readseek1-performance
        * will-it-scale/readseek2-performance
            261818  259040  will-it-scale.per_thread_ops
          will-it-scale/readseek3-performance
        * will-it-scale/sched_yield-performance
            2408059  2382034  will-it-scale.per_thread_ops
          will-it-scale/signal1-performance
          will-it-scale/unix1-performance
          will-it-scale/unlink1-performance
          will-it-scale/unlink2-performance
        * will-it-scale/write1-performance
            976701  961588  will-it-scale.per_thread_ops
        * will-it-scale/writeseek1-performance
            831898  822448  will-it-scale.per_thread_ops
        * will-it-scale/writeseek2-performance
            228248  225065  will-it-scale.per_thread_ops
        * will-it-scale/writeseek3-performance
            226670  224058  will-it-scale.per_thread_ops
          will-it-scale/context_switch1-performance
          aim7/performance-fork_test-2000
        * aim7/performance-brk_test-3000
            74869  76676  aim7.jobs-per-min
          aim7/performance-disk_cp-3000
          aim7/performance-disk_rd-3000
          aim7/performance-sieve-3000
          aim7/performance-page_test-3000
          aim7/performance-creat-clo-3000
          aim7/performance-mem_rtns_1-8000
          aim7/performance-disk_wrt-8000
          aim7/performance-pipe_cpy-8000
          aim7/performance-ram_copy-8000
      
      (d) lkp-avoton3:
            8 threads Intel(R) Atom(TM) CPU C2750 @ 2.40GHz 16G
      
          netperf/ipv4-900s-200%-cs-localhost-TCP_STREAM-performance
      
       Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Li Zhijian <zhijianx.li@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180809135753.21077-1-dietmar.eggemann@arm.com
      
      
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fdf5f315
  7. Mar 20, 2018
    • sched/fair: Update util_est only on util_avg updates · d519329f
      Patrick Bellasi authored
      
      The estimated utilization of a task is currently updated every time the
      task is dequeued. However, to keep overheads under control, PELT signals
      are effectively updated at maximum once every 1ms.
      
       Thus, for really short running tasks, it can happen that their util_avg
       value has not been updated since their last enqueue.  If such tasks are
      also frequently running tasks (e.g. the kind of workload generated by
      hackbench) it can also happen that their util_avg is updated only every
      few activations.
      
       This means that updating util_est at every dequeue potentially introduces
       unnecessary overheads, and it's also conceptually wrong if the util_avg
      signal has never been updated during a task activation.
      
      Let's introduce a throttling mechanism on task's util_est updates
      to sync them with util_avg updates. To make the solution memory
      efficient, both in terms of space and load/store operations, we encode a
      synchronization flag into the LSB of util_est.enqueued.
       This makes util_est an even-values-only metric, which is still
       considered good enough for its purpose.
      The synchronization bit is (re)set by __update_load_avg_se() once the
      PELT signal of a task has been updated during its last activation.
      
       Such a throttling mechanism keeps util_est overheads in the wakeup hot
       path under control, making it suitable to be enabled also on
       high-intensity workload systems.
       Thus, the utilization estimation scheduler feature is now switched on
       by default.
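
       The LSB encoding can be sketched as follows; the flag name and polarity
       here are illustrative assumptions, the kernel code differs in detail:

          #define UTIL_EST_SYNC_FLAG  0x1   /* LSB of util_est.enqueued */

          /* __update_load_avg_se(): PELT was updated during this activation. */
          se->avg.util_est.enqueued |= UTIL_EST_SYNC_FLAG;

          /* Dequeue path: skip the util_est update if util_avg never moved. */
          if (!(se->avg.util_est.enqueued & UTIL_EST_SYNC_FLAG))
                  return;

          /* Store the new sample with bit 0 cleared: util_est stays even. */
          se->avg.util_est.enqueued = task_util(p) & ~UTIL_EST_SYNC_FLAG;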
      
       Suggested-by: Chris Redpath <chris.redpath@arm.com>
       Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
      Cc: Steve Muckle <smuckle@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Todd Kjos <tkjos@android.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Link: http://lkml.kernel.org/r/20180309095245.11071-5-patrick.bellasi@arm.com
      
      
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d519329f
    • sched/fair: Add util_est on top of PELT · 7f65ea42
      Patrick Bellasi authored
      
      The util_avg signal computed by PELT is too variable for some use-cases.
      For example, a big task waking up after a long sleep period will have its
      utilization almost completely decayed. This introduces some latency before
      schedutil will be able to pick the best frequency to run a task.
      
       The same issue can affect task placement. Indeed, since the task
       utilization is already decayed at wakeup, when the task is enqueued on a
       CPU, that CPU can temporarily be represented as almost empty even though
       it is running a big task. This leads to a race condition where other
       tasks can potentially be allocated on a CPU which just started to run a
       big task that slept for a relatively long period.
      
      Moreover, the PELT utilization of a task can be updated every [ms], thus
      making it a continuously changing value for certain longer running
      tasks. This means that the instantaneous PELT utilization of a RUNNING
      task is not really meaningful to properly support scheduler decisions.
      
      For all these reasons, a more stable signal can do a better job of
      representing the expected/estimated utilization of a task/cfs_rq.
      Such a signal can be easily created on top of PELT by still using it as
      an estimator which produces values to be aggregated on meaningful
      events.
      
      This patch adds a simple implementation of util_est, a new signal built on
      top of PELT's util_avg where:
      
          util_est(task) = max(task::util_avg, f(task::util_avg@dequeue))
      
       This makes it possible to remember how big a task was reported to be by
       PELT in its previous activations via f(task::util_avg@dequeue), which is
       the new _task_util_est(struct task_struct*) function added by this patch.
      
      If a task should change its behavior and it runs longer in a new
      activation, after a certain time its util_est will just track the
      original PELT signal (i.e. task::util_avg).
      
       The estimated utilization of a cfs_rq is defined only for root ones.
       That's because the only sensible consumers of this signal are the
       scheduler and schedutil, when looking for the overall CPU utilization
       due to FAIR tasks.
      
      For this reason, the estimated utilization of a root cfs_rq is simply
      defined as:
      
          util_est(cfs_rq) = max(cfs_rq::util_avg, cfs_rq::util_est::enqueued)
      
      where:
      
          cfs_rq::util_est::enqueued = sum(_task_util_est(task))
                                       for each RUNNABLE task on that root cfs_rq
      
       It's worth noting that the estimated utilization is tracked only for
       objects of interest, specifically:
       
        - Tasks: to better support task placement decisions
        - root cfs_rqs: to better support both task placement decisions as
                        well as frequency selection
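
       The two definitions above can be sketched as simple max() helpers
       (illustrative C following the formulas, not the exact kernel code):

          static unsigned long task_util_est(struct task_struct *p)
          {
                  /* util_est(task) = max(task::util_avg, f(task::util_avg@dequeue)) */
                  return max(task_util(p), _task_util_est(p));
          }

          static unsigned long cpu_util_est(struct cfs_rq *cfs_rq)
          {
                  /* util_est(cfs_rq) = max(cfs_rq::util_avg, cfs_rq::util_est::enqueued) */
                  return max(READ_ONCE(cfs_rq->avg.util_avg),
                             READ_ONCE(cfs_rq->avg.util_est.enqueued));
          }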
      
       Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
       Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
      Cc: Steve Muckle <smuckle@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Todd Kjos <tkjos@android.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Link: http://lkml.kernel.org/r/20180309095245.11071-2-patrick.bellasi@arm.com
      
      
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7f65ea42
  8. Nov 02, 2017
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Greg Kroah-Hartman authored
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
       shorthand, which can be used instead of the full boilerplate text.
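
       For example, a C source file covered by this cleanup starts with a single
       comment line such as:

          // SPDX-License-Identifier: GPL-2.0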
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
        - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard...
      b2441318
  9. Oct 10, 2017
    • sched/core: Address more wake_affine() regressions · f2cdd9cc
      Peter Zijlstra authored
      
      The trivial wake_affine_idle() implementation is very good for a
      number of workloads, but it comes apart at the moment there are no
       idle CPUs left, IOW, the overloaded case.
      
      hackbench:
      
      		NO_WA_WEIGHT		WA_WEIGHT
      
      hackbench-20  : 7.362717561 seconds	6.450509391 seconds
      
      (win)
      
      netperf:
      
      		  NO_WA_WEIGHT		WA_WEIGHT
      
      TCP_SENDFILE-1	: Avg: 54524.6		Avg: 52224.3
      TCP_SENDFILE-10	: Avg: 48185.2          Avg: 46504.3
      TCP_SENDFILE-20	: Avg: 29031.2          Avg: 28610.3
      TCP_SENDFILE-40	: Avg: 9819.72          Avg: 9253.12
      TCP_SENDFILE-80	: Avg: 5355.3           Avg: 4687.4
      
      TCP_STREAM-1	: Avg: 41448.3          Avg: 42254
      TCP_STREAM-10	: Avg: 24123.2          Avg: 25847.9
      TCP_STREAM-20	: Avg: 15834.5          Avg: 18374.4
      TCP_STREAM-40	: Avg: 5583.91          Avg: 5599.57
      TCP_STREAM-80	: Avg: 2329.66          Avg: 2726.41
      
      TCP_RR-1	: Avg: 80473.5          Avg: 82638.8
      TCP_RR-10	: Avg: 72660.5          Avg: 73265.1
      TCP_RR-20	: Avg: 52607.1          Avg: 52634.5
      TCP_RR-40	: Avg: 57199.2          Avg: 56302.3
      TCP_RR-80	: Avg: 25330.3          Avg: 26867.9
      
      UDP_RR-1	: Avg: 108266           Avg: 107844
      UDP_RR-10	: Avg: 95480            Avg: 95245.2
      UDP_RR-20	: Avg: 68770.8          Avg: 68673.7
      UDP_RR-40	: Avg: 76231            Avg: 75419.1
      UDP_RR-80	: Avg: 34578.3          Avg: 35639.1
      
      UDP_STREAM-1	: Avg: 64684.3          Avg: 66606
      UDP_STREAM-10	: Avg: 52701.2          Avg: 52959.5
      UDP_STREAM-20	: Avg: 30376.4          Avg: 29704
      UDP_STREAM-40	: Avg: 15685.8          Avg: 15266.5
      UDP_STREAM-80	: Avg: 8415.13          Avg: 7388.97
      
      (wins and losses)
      
      sysbench:
      
      		    NO_WA_WEIGHT		WA_WEIGHT
      
      sysbench-mysql-2  :  2135.17 per sec.		 2142.51 per sec.
      sysbench-mysql-5  :  4809.68 per sec.            4800.19 per sec.
      sysbench-mysql-10 :  9158.59 per sec.            9157.05 per sec.
      sysbench-mysql-20 : 14570.70 per sec.           14543.55 per sec.
      sysbench-mysql-40 : 22130.56 per sec.           22184.82 per sec.
      sysbench-mysql-80 : 20995.56 per sec.           21904.18 per sec.
      
      sysbench-psql-2   :  1679.58 per sec.            1705.06 per sec.
      sysbench-psql-5   :  3797.69 per sec.            3879.93 per sec.
      sysbench-psql-10  :  7253.22 per sec.            7258.06 per sec.
      sysbench-psql-20  : 11166.75 per sec.           11220.00 per sec.
      sysbench-psql-40  : 17277.28 per sec.           17359.78 per sec.
      sysbench-psql-80  : 17112.44 per sec.           17221.16 per sec.
      
      (increase on the top end)
      
      tbench:
      
      NO_WA_WEIGHT
      
      Throughput 685.211 MB/sec   2 clients   2 procs  max_latency=0.123 ms
      Throughput 1596.64 MB/sec   5 clients   5 procs  max_latency=0.119 ms
      Throughput 2985.47 MB/sec  10 clients  10 procs  max_latency=0.262 ms
      Throughput 4521.15 MB/sec  20 clients  20 procs  max_latency=0.506 ms
      Throughput 9438.1  MB/sec  40 clients  40 procs  max_latency=2.052 ms
      Throughput 8210.5  MB/sec  80 clients  80 procs  max_latency=8.310 ms
      
      WA_WEIGHT
      
      Throughput 697.292 MB/sec   2 clients   2 procs  max_latency=0.127 ms
      Throughput 1596.48 MB/sec   5 clients   5 procs  max_latency=0.080 ms
      Throughput 2975.22 MB/sec  10 clients  10 procs  max_latency=0.254 ms
      Throughput 4575.14 MB/sec  20 clients  20 procs  max_latency=0.502 ms
      Throughput 9468.65 MB/sec  40 clients  40 procs  max_latency=2.069 ms
      Throughput 8631.73 MB/sec  80 clients  80 procs  max_latency=8.605 ms
      
      (increase on the top end)
      
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-kernel@vger.kernel.org
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f2cdd9cc
    • sched/core: Fix wake_affine() performance regression · d153b153
      Peter Zijlstra authored
      Eric reported a sysbench regression against commit:
      
         3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      
      Similarly, Rik was looking at the NAS-lu.C benchmark, which regressed
      against his v3.10 enterprise kernel.
      
      PRE (current tip/master):
      
       ivb-ep sysbench:
      
         2: [30 secs]     transactions:                        64110  (2136.94 per sec.)
         5: [30 secs]     transactions:                        143644 (4787.99 per sec.)
        10: [30 secs]     transactions:                        274298 (9142.93 per sec.)
        20: [30 secs]     transactions:                        418683 (13955.45 per sec.)
        40: [30 secs]     transactions:                        320731 (10690.15 per sec.)
        80: [30 secs]     transactions:                        355096 (11834.28 per sec.)
      
       hsw-ex NAS:
      
       OMP_PROC_BIND/lu.C.x_threads_144_run_1.log: Time in seconds =                    18.01
       OMP_PROC_BIND/lu.C.x_threads_144_run_2.log: Time in seconds =                    17.89
       OMP_PROC_BIND/lu.C.x_threads_144_run_3.log: Time in seconds =                    17.93
       lu.C.x_threads_144_run_1.log: Time in seconds =                   434.68
       lu.C.x_threads_144_run_2.log: Time in seconds =                   405.36
       lu.C.x_threads_144_run_3.log: Time in seconds =                   433.83
      
      POST (+patch):
      
       ivb-ep sysbench:
      
         2: [30 secs]     transactions:                        64494  (2149.75 per sec.)
         5: [30 secs]     transactions:                        145114 (4836.99 per sec.)
        10: [30 secs]     transactions:                        278311 (9276.69 per sec.)
        20: [30 secs]     transactions:                        437169 (14571.60 per sec.)
        40: [30 secs]     transactions:                        669837 (22326.73 per sec.)
        80: [30 secs]     transactions:                        631739 (21055.88 per sec.)
      
       hsw-ex NAS:
      
       lu.C.x_threads_144_run_1.log: Time in seconds =                    23.36
       lu.C.x_threads_144_run_2.log: Time in seconds =                    22.96
       lu.C.x_threads_144_run_3.log: Time in seconds =                    22.52
      
      This patch takes out all the shiny wake_affine() stuff and goes back to
      utter basics. Between the two CPUs involved with the wakeup (the CPU
      doing the wakeup and the CPU we ran on previously) pick the CPU we can
      run on _now_.
      
       This recovers much of the regression against the older kernels,
      but leaves some ground in the overloaded case. The default-enabled
      WA_WEIGHT (which will be introduced in the next patch) is an attempt
      to address the overloaded situation.
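
       A stripped-down sketch of that "utter basics" decision (illustrative
       only; the real helper also considers cache sharing and the sync hint):

          static int wake_affine_idle(int this_cpu, int prev_cpu)
          {
                  if (idle_cpu(this_cpu))
                          return this_cpu;        /* the waking CPU is free now */

                  if (idle_cpu(prev_cpu))
                          return prev_cpu;        /* the previous CPU is free now */

                  return nr_cpumask_bits;         /* no affine preference */
          }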
      
       Reported-by: Eric Farman <farman@linux.vnet.ibm.com>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: jinpuwang@gmail.com
      Cc: vcaputo@pengaru.com
       Fixes: 3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d153b153
  10. Jun 08, 2017
    • sched/core: Implement new approach to scale select_idle_cpu() · 1ad3aaf3
      Peter Zijlstra authored
      Hackbench recently suffered a bunch of pain, first by commit:
      
        4c77b18c ("sched/fair: Make select_idle_cpu() more aggressive")
      
      and then by commit:
      
         c743f0a5 ("sched/fair, cpumask: Export for_each_cpu_wrap()")
      
      which fixed a bug in the initial for_each_cpu_wrap() implementation
      that made select_idle_cpu() even more expensive. The bug was that it
       would skip over CPUs when bits were consecutive in the bitmask.
      
      This however gave me an idea to fix select_idle_cpu(); where the old
      scheme was a cliff-edge throttle on idle scanning, this introduces a
       more gradual approach. Instead of stopping the scan entirely, we limit
       how many CPUs we scan.
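
       A rough sketch of the gradual limit (simplified; the cost model details
       and the floor of 4 CPUs are illustrative, not the exact kernel code):

          static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
          {
                  u64 span_avg = sd->span_weight * this_rq()->avg_idle;
                  u64 avg_cost = sd->avg_scan_cost;
                  int nr = INT_MAX, cpu;

                  if (sched_feat(SIS_PROP)) {
                          /* Scan a number of CPUs proportional to the expected
                           * idle time relative to the average scan cost. */
                          nr = (span_avg > 4 * avg_cost) ?
                                  div_u64(span_avg, avg_cost) : 4;
                  }

                  for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
                          if (!--nr)
                                  return -1;      /* scan budget exhausted */
                          if (idle_cpu(cpu))
                                  return cpu;
                  }
                  return -1;
          }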
      
      Initial benchmarks show that it mostly recovers hackbench while not
      hurting anything else, except Mason's schbench, but not as bad as the
      old thing.
      
      It also appears to recover the tbench high-end, which also suffered like
      hackbench.
      
       Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: hpa@zytor.com
      Cc: kitsunyan <kitsunyan@inbox.ru>
      Cc: linux-kernel@vger.kernel.org
      Cc: lvenanci@redhat.com
      Cc: riel@redhat.com
      Cc: xiaolong.ye@intel.com
      Link: http://lkml.kernel.org/r/20170517105350.hk5m4h4jb6dfr65a@hirez.programming.kicks-ass.net
      
      
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1ad3aaf3
  11. May 15, 2017
    • sched/topology: Remove FORCE_SD_OVERLAP · af85596c
      Peter Zijlstra authored
      
       It's an obsolete debug mechanism, and future code wants to rely on
      properties this undermines.
      
      Namely, it would be good to assume that SD_OVERLAP domains have
      children, but if we build the entire hierarchy with SD_OVERLAP this is
      obviously false.
      
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      af85596c
  12. Mar 16, 2017
  13. Mar 02, 2017
    • sched/fair: Make select_idle_cpu() more aggressive · 4c77b18c
      Peter Zijlstra authored
      Kitsunyan reported desktop latency issues on his Celeron 887 because
      of commit:
      
         1b568f0a ("sched/core: Optimize SCHED_SMT")
      
      ... even though his CPU doesn't do SMT.
      
      The effect of running the SMT code on a !SMT part is basically a more
      aggressive select_idle_cpu(). Removing the avg condition fixed things
      for him.
      
      I also know FB likes this test gone, even though other workloads like
      having it.
      
      For now, take it out by default, until we get a better idea.
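
       Concretely, the avg condition in select_idle_cpu() is kept but placed
       behind a scheduler feature that now defaults to off (sketch; the exact
       scaling factor is illustrative):

          /* Bail out of the idle scan only when the feature is enabled and the
           * expected idle time is too small to pay for a full scan. */
          if (sched_feat(SIS_AVG_CPU) && avg_idle / 512 < avg_cost)
                  return -1;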
      
       Reported-by: kitsunyan <kitsunyan@inbox.ru>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4c77b18c
  14. Sep 13, 2015
  15. Jul 07, 2015
  16. Mar 23, 2015
    • sched/rt: Use IPI to trigger RT task push migration instead of pulling · b6366f04
      Steven Rostedt authored
      When debugging the latencies on a 40 core box, where we hit 300 to
      500 microsecond latencies, I found there was a huge contention on the
      runqueue locks.
      
      Investigating it further, running ftrace, I found that it was due to
      the pulling of RT tasks.
      
      The test that was run was the following:
      
       cyclictest --numa -p95 -m -d0 -i100
      
      This created a thread on each CPU, that would set its wakeup in iterations
      of 100 microseconds. The -d0 means that all the threads had the same
      interval (100us). Each thread sleeps for 100us and wakes up and measures
      its latencies.
      
      cyclictest is maintained at:
       git://git.kernel.org/pub/scm/linux/kernel/git/clrkwllms/rt-tests.git
      
      
      
      What happened was another RT task would be scheduled on one of the CPUs
      that was running our test, when the other CPU tests went to sleep and
      scheduled idle. This caused the "pull" operation to execute on all
      these CPUs. Each one of these saw the RT task that was overloaded on
      the CPU of the test that was still running, and each one tried
      to grab that task in a thundering herd way.
      
      To grab the task, each thread would do a double rq lock grab, grabbing
       its own lock as well as the rq of the overloaded CPU. As the sched
       domains on this box were rather flat for its size, I saw up to 12 CPUs
       block on this lock at once. This caused a ripple effect with the
       rq locks, especially since the taking was done via a double rq lock, which
       means that several of the CPUs had their own rq locks held while trying
       to take this rq lock. As these locks were blocked, any wakeups or load
       balancing on these CPUs would also block on these locks, and the wait
      time escalated.
      
      I've tried various methods to lessen the load, but things like an
       atomic counter to only let one CPU grab the task won't work, because
      the task may have a limited affinity, and we may pick the wrong
      CPU to take that lock and do the pull, to only find out that the
      CPU we picked isn't in the task's affinity.
      
      Instead of doing the PULL, I now have the CPUs that want the pull to
      send over an IPI to the overloaded CPU, and let that CPU pick what
      CPU to push the task to. No more need to grab the rq lock, and the
      push/pull algorithm still works fine.
      
      With this patch, the latency dropped to just 150us over a 20 hour run.
      Without the patch, the huge latencies would trigger in seconds.
      
      I've created a new sched feature called RT_PUSH_IPI, which is enabled
      by default.
      
      When RT_PUSH_IPI is not enabled, the old method of grabbing the rq locks
      and having the pulling CPU do the work is implemented. When RT_PUSH_IPI
      is enabled, the IPI is sent to the overloaded CPU to do a push.
      
       To enable or disable this at run time:
      
       # mount -t debugfs nodev /sys/kernel/debug
       # echo RT_PUSH_IPI > /sys/kernel/debug/sched_features
      or
       # echo NO_RT_PUSH_IPI > /sys/kernel/debug/sched_features
      
      Update: This original patch would send an IPI to all CPUs in the RT overload
      list. But that could theoretically cause the reverse issue. That is, there
      could be lots of overloaded RT queues and one CPU lowers its priority. It would
      then send an IPI to all the overloaded RT queues and they could then all try
      to grab the rq lock of the CPU lowering its priority, and then we have the
      same problem.
      
      The latest design sends out only one IPI to the first overloaded CPU. It tries to
      push any tasks that it can, and then looks for the next overloaded CPU that can
      push to the source CPU. The IPIs stop when all overloaded CPUs that have pushable
      tasks that have priorities greater than the source CPU are covered. In case the
      source CPU lowers its priority again, a flag is set to tell the IPI traversal to
      restart with the first RT overloaded CPU after the source CPU.
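
       A simplified sketch of that IPI chain (helper and field names are
       illustrative, patterned on the push/pull code rather than copied from it):

          /* Source CPU lowered its priority: kick the first overloaded CPU. */
          static void tell_cpu_to_push(struct rq *rq)
          {
                  int cpu = rto_next_cpu(rq);     /* first overloaded CPU, or -1 */

                  if (cpu >= 0)
                          irq_work_queue_on(&rq->rd->rto_push_work, cpu);
          }

          /* Runs on an overloaded CPU via IPI: push what we can, pass it on. */
          static void rto_push_irq_work_func(struct irq_work *work)
          {
                  struct rq *rq = this_rq();
                  int cpu;

                  push_rt_tasks(rq);              /* push pushable RT tasks */

                  cpu = rto_next_cpu(rq);         /* next overloaded CPU, or -1 */
                  if (cpu >= 0)
                          irq_work_queue_on(&rq->rd->rto_push_work, cpu);
          }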
      
       Parts-suggested-by: Peter Zijlstra <peterz@infradead.org>
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Joern Engel <joern@purestorage.com>
      Cc: Clark Williams <williams@redhat.com>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150318144946.2f3cc982@gandalf.local.home
      
      
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b6366f04
  17. Jun 05, 2014
    • sched: Rename capacity related flags · 5d4dfddd
      Nicolas Pitre authored
      
      It is better not to think about compute capacity as being equivalent
      to "CPU power".  The upcoming "power aware" scheduler work may create
      confusion with the notion of energy consumption if "power" is used too
      liberally.
      
      Let's rename the following feature flags since they do relate to capacity:
      
      	SD_SHARE_CPUPOWER  -> SD_SHARE_CPUCAPACITY
      	ARCH_POWER         -> ARCH_CAPACITY
      	NONTASK_POWER      -> NONTASK_CAPACITY
      
       Signed-off-by: Nicolas Pitre <nico@linaro.org>
       Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: linaro-kernel@lists.linaro.org
      Cc: Andy Fleming <afleming@freescale.com>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Grant Likely <grant.likely@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Cc: Rob Herring <robh+dt@kernel.org>
      Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: devicetree@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Link: http://lkml.kernel.org/n/tip-e93lpnxb87owfievqatey6b5@git.kernel.org
      
      
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5d4dfddd
  18. Oct 09, 2013
  19. Apr 19, 2013
    • mutex: Move mutex spinning code from sched/core.c back to mutex.c · 41fcb9f2
      Waiman Long authored
      
      As mentioned by Ingo, the SCHED_FEAT_OWNER_SPIN scheduler
      feature bit was really just an early hack to make with/without
      mutex-spinning testable. So it is no longer necessary.
      
       This patch removes the SCHED_FEAT_OWNER_SPIN feature bit and
       moves the mutex spinning code from kernel/sched/core.c back to
       kernel/mutex.c, which is where it belongs.
      
       Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Chandramouleeswaran Aswin <aswin@hp.com>
      Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
      Cc: Norton Scott J <scott.norton@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Clark Williams <williams@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1366226594-5506-2-git-send-email-Waiman.Long@hp.com
      
      
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      41fcb9f2
  20. Dec 11, 2012
    • mm: sched: numa: Delay PTE scanning until a task is scheduled on a new node · 5bca2303
      Mel Gorman authored
      
       Because migrations are driven by the CPU a task is running on, there is
       no point tracking NUMA faults until one task runs on a new
      node. This patch tracks the first node used by an address space. Until
      it changes, PTE scanning is disabled and no NUMA hinting faults are
      trapped. This should help workloads that are short-lived, do not care
      about NUMA placement or have bound themselves to a single node.
      
      This takes advantage of the logic in "mm: sched: numa: Implement slow
      start for working set sampling" to delay when the checks are made. This
      will take advantage of processes that set their CPU and node bindings
      early in their lifetime. It will also potentially allow any initial load
      balancing to take place.
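
       The gate can be pictured roughly as follows; the field and sentinel names
       here are hypothetical, for illustration only:

          /* Called before arming the NUMA PTE scan for this address space. */
          static bool numa_scan_allowed(struct mm_struct *mm)
          {
                  int nid = numa_node_id();

                  if (mm->first_nid == NUMA_SCAN_UNSET)
                          mm->first_nid = nid;            /* remember the first node */

                  if (mm->first_nid != NUMA_SCAN_ACTIVE) {
                          if (nid == mm->first_nid)
                                  return false;           /* still on the first node */
                          mm->first_nid = NUMA_SCAN_ACTIVE;  /* new node seen: enable */
                  }
                  return true;
          }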
      
       Signed-off-by: Mel Gorman <mgorman@suse.de>
      5bca2303
    • mm: sched: numa: Control enabling and disabling of NUMA balancing · 1a687c2e
      Mel Gorman authored
      
      This patch adds Kconfig options and kernel parameters to allow the
       enabling and disabling of automatic NUMA balancing. The existence
      of such a switch was and is very important when debugging problems
      related to transparent hugepages and we should have the same for
      automatic NUMA placement.
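
       The switches take roughly the following form (shown for illustration,
       with the names used in mainline):

          CONFIG_NUMA_BALANCING=y                  # build-time Kconfig switch
          CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y  # default state at boot
          numa_balancing=disable                   # kernel command-line override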
      
       Signed-off-by: Mel Gorman <mgorman@suse.de>
      1a687c2e
    • mm: numa: Add fault driven placement and migration · cbee9f88
      Peter Zijlstra authored
      
      NOTE: This patch is based on "sched, numa, mm: Add fault driven
      	placement and migration policy" but as it throws away all the policy
      	to just leave a basic foundation I had to drop the signed-offs-by.
      
      This patch creates a bare-bones method for setting PTEs pte_numa in the
      context of the scheduler that when faulted later will be faulted onto the
      node the CPU is running on.  In itself this does nothing useful but any
      placement policy will fundamentally depend on receiving hints on placement
      from fault context and doing something intelligent about it.
      
       Signed-off-by: Mel Gorman <mgorman@suse.de>
       Acked-by: Rik van Riel <riel@redhat.com>
      cbee9f88
  21. Oct 16, 2012
  22. Sep 13, 2012
  23. Sep 04, 2012
  24. Apr 26, 2012
    • sched: Fix more load-balancing fallout · eb95308e
      Peter Zijlstra authored
       Commits 367456c7 ("sched: Ditch per cgroup task lists for
       load-balancing") and 5d6523eb ("sched: Fix load-balance wreckage")
       left some more wreckage.
      
       By setting loop_max unconditionally to ->nr_running, load-balancing
       could take a lot of time on very long runqueues (hackbench!). So keep
       the sysctl as a max limit on the number of tasks we'll iterate.
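
       In code this amounts to roughly (sketch):

          /* Cap the number of tasks iterated per balance pass: */
          env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);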
      
      Furthermore, the min load filter for migration completely fails with
      cgroups since inequality in per-cpu state can easily lead to such
      small loads :/
      
      Furthermore the change to add new tasks to the tail of the queue
      instead of the head seems to have some effect.. not quite sure I
      understand why.
      
      Combined these fixes solve the huge hackbench regression reported by
      Tim when hackbench is ran in a cgroup.
      
       Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
       Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
       Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Mo...
      eb95308e
  25. Dec 06, 2011
  26. Nov 17, 2011
  27. Nov 14, 2011
  28. Aug 14, 2011
  29. Jul 20, 2011
    • sched: Allow for overlapping sched_domain spans · e3589f6c
      Peter Zijlstra authored
      
      Allow for sched_domain spans that overlap by giving such domains their
      own sched_group list instead of sharing the sched_groups amongst
      each-other.
      
      This is needed for machines with more than 16 nodes, because
      sched_domain_node_span() will generate a node mask from the
       16 nearest nodes without regard to whether these masks have any overlap.
      
      Currently sched_domains have a sched_group that maps to their child
      sched_domain span, and since there is no overlap we share the
      sched_group between the sched_domains of the various CPUs. If however
      there is overlap, we would need to link the sched_group list in
      different ways for each cpu, and hence sharing isn't possible.
      
      In order to solve this, allocate private sched_groups for each CPU's
      sched_domain but have the sched_groups share a sched_group_power
      structure such that we can uniquely track the power.
      
       Reported-and-tested-by: Anton Blanchard <anton@samba.org>
       Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/n/tip-08bxqw9wis3qti9u5inifh3y@git.kernel.org
      
      
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e3589f6c
  30. Jul 14, 2011
    • sched: adjust scheduler cpu power for stolen time · 095c0aa8
      Glauber Costa authored
      
      This patch makes update_rq_clock() aware of steal time.
      The mechanism of operation is not different from irq_time,
      and follows the same principles. This lives in a CONFIG
      option itself, and can be compiled out independently of
      the rest of steal time reporting. The effect of disabling it
      is that the scheduler will still report steal time (that cannot be
      disabled), but won't use this information for cpu power adjustments.
      
       Every time update_rq_clock_task() is invoked, we query information
       about how much time was stolen since the last call, and feed it into
      sched_rt_avg_update().
      
      Although steal time reporting in account_process_tick() keeps
      track of the last time we read the steal clock, in prev_steal_time,
       this patch does it independently using another field,
      prev_steal_time_rq. This is because otherwise, information about time
      accounted in update_process_tick() would never reach us in update_rq_clock().
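
       Roughly, the per-update step looks like this (a simplified sketch; the
       real code also guards it with a static key and handles clock details):

          #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
                  steal = paravirt_steal_clock(cpu_of(rq));
                  steal -= rq->prev_steal_time_rq;       /* stolen since last call */
                  if (steal > delta)
                          steal = delta;
                  rq->prev_steal_time_rq += steal;

                  delta -= steal;                        /* don't charge stolen time */
                  sched_rt_avg_update(rq, steal);        /* scale down cpu power */
          #endif
                  rq->clock_task += delta;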
      
       Signed-off-by: Glauber Costa <glommer@redhat.com>
       Acked-by: Rik van Riel <riel@redhat.com>
       Acked-by: Peter Zijlstra <peterz@infradead.org>
       Tested-by: Eric B Munson <emunson@mgebm.net>
      CC: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      CC: Anthony Liguori <aliguori@us.ibm.com>
       Signed-off-by: Avi Kivity <avi@redhat.com>
      095c0aa8
  31. Apr 14, 2011
  32. Nov 18, 2010