depot_init_pool has two call sites:
1. In depot_alloc_stack with a potentially NULL prealloc.
2. In __stack_depot_save with a non-NULL prealloc.
At the same time, depot_init_pool can only return false when prealloc is
NULL.
As the second call site makes sure that prealloc is not NULL, the WARN_ON
there can never trigger. Thus, drop the WARN_ON and also move the prealloc
check from depot_init_pool to its first call site.
Also change the return type of depot_init_pool to void as it now always
returns true.
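For illustration, a minimal sketch of the resulting call pattern (hypothetical names, heavily simplified, not the actual lib/stackdepot.c code): the caller validates prealloc, so the callee no longer needs a boolean return value or a WARN_ON.

    static void *current_pool;    /* stands in for the depot pool state */

    /* Was: returned bool, false only when prealloc was NULL. */
    static void pool_init(void **prealloc)
    {
        if (current_pool)
            return;                   /* pool already set up */
        current_pool = *prealloc;     /* consume the preallocated page */
        *prealloc = NULL;
    }

    /* First call site: prealloc may be NULL, so the check lives here now. */
    static void *alloc_record(void **prealloc)
    {
        if (!current_pool) {
            if (!*prealloc)
                return NULL;          /* check moved out of pool_init() */
            pool_init(prealloc);
        }
        return current_pool;
    }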
Link: https://lkml.kernel.org/r/ce149f9bdcbc80a92549b54da67eafb27f846b7b.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Use "pool" instead of "slab" for naming memory regions stack depot
uses to store stack traces. Using "slab" is confusing, as stack depot
pools have nothing to do with the slab allocator.
Also give better names to pool-related global variables: change
"depot_" prefix to "pool_" to point out that these variables are
related to stack depot pools.
Also rename the slabindex (poolindex) field in handle_parts to pool_index
to align its name with the pool_index global variable.
No functional changes.
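For illustration, the resulting naming scheme looks roughly like this (the bit widths and array size below are placeholders, not the real lib/stackdepot.c definitions):

    union handle_parts {
        u32 handle;
        struct {
            u32 pool_index : 16;      /* was: slabindex (poolindex) */
            u32 offset     : 10;
            u32 valid      : 1;
            u32 extra      : 5;
        };
    };

    static void *stack_pools[1024];   /* was: stack_slabs  */
    static int pool_index;            /* was: depot_index  */
    static size_t pool_offset;        /* was: depot_offset */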
Link: https://lkml.kernel.org/r/923c507edb350c3b6ef85860f36be489dfc0ad21.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Give more meaningful names to hash table-related constants and variables:
1. Rename STACK_HASH_SCALE to STACK_HASH_TABLE_SCALE to point out that it
is related to scaling the hash table.
2. Rename STACK_HASH_ORDER_MIN/MAX to STACK_BUCKET_NUMBER_ORDER_MIN/MAX
to point out that they are related to the number of hash table buckets.
3. Rename stack_hash_order to stack_bucket_number_order for the same
reason as #2.
No functional changes.
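For illustration only (the numeric values here are placeholders; only the names change):

    #define STACK_HASH_TABLE_SCALE 14               /* was: STACK_HASH_SCALE     */
    #define STACK_BUCKET_NUMBER_ORDER_MIN 12        /* was: STACK_HASH_ORDER_MIN */
    #define STACK_BUCKET_NUMBER_ORDER_MAX 20        /* was: STACK_HASH_ORDER_MAX */

    static unsigned int stack_bucket_number_order;  /* was: stack_hash_order     */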
Link: https://lkml.kernel.org/r/f166dd6f3cb2378aea78600714393dd568c33ee9.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Pull misc fixes from Andrew Morton:
"Twelve hotfixes, mostly against mm/.
Five of these fixes are cc:stable"
* tag 'mm-hotfixes-stable-2023-02-13-13-50' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
of: reserved_mem: Have kmemleak ignore dynamically allocated reserved mem
scripts/gdb: fix 'lx-current' for x86
lib: parser: optimize match_NUMBER apis to use local array
mm: shrinkers: fix deadlock in shrinker debugfs
mm: hwpoison: support recovery from ksm_might_need_to_copy()
kasan: fix Oops due to missing calls to kasan_arch_is_ready()
revert "squashfs: harden sanity check in squashfs_read_xattr_id_table"
fsdax: dax_unshare_iter() should return a valid length
mm/gup: add folio to list when folio_isolate_lru() succeed
aio: fix mremap after fork null-deref
mailmap: add entry for Alexander Mikhalitsyn
mm: extend max struct page size for kmsan
Include support for enumerating and provisioning RAM regions for v6.3.
This also includes a default policy change for RAM / volatile device-dax
instances, assigning them to the dax_kmem driver by default.
Add the gnu_printf (__printf()) attribute to the
kunit_fail_current_test() implementation in
__kunit_fail_current_test_impl(). While it's not actually useful here,
as this function is never called directly, it was nevertheless
triggering -Wsuggest-attribute=format warnings, so add it to reduce the
noise.
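For reference, the annotation boils down to something like the following sketch (the (3, 4) positions assume the format string is the third parameter and the variadic arguments start at the fourth):

    #include <linux/compiler.h>  /* __printf() maps to the gnu_printf format attribute */

    void __printf(3, 4) __kunit_fail_current_test_impl(const char *file, int line,
                                                       const char *fmt, ...);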
Fixes: cc3ed2fe5c93 ("kunit: Add "hooks" to call into KUnit when it's built as a module")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Rae Moar <rmoar@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Add a KUnit test for the kernel hashtable implementation in
include/linux/hashtable.h.
Note that this version does not yet test the RCU alternative versions
of these functions.
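A hypothetical minimal case in the spirit of this test (the names here are illustrative, not necessarily the ones used in the test file):

    #include <kunit/test.h>
    #include <linux/hashtable.h>

    struct ht_entry {
        int key;
        struct hlist_node node;
    };

    static void hashtable_test_add_find(struct kunit *test)
    {
        DEFINE_HASHTABLE(ht, 3);  /* 2^3 buckets */
        struct ht_entry a = { .key = 1 };
        struct ht_entry *cur;
        bool found = false;

        hash_add(ht, &a.node, a.key);
        hash_for_each_possible(ht, cur, node, a.key)
            if (cur->key == a.key)
                found = true;

        KUNIT_EXPECT_TRUE(test, found);
    }

    static struct kunit_case hashtable_test_cases[] = {
        KUNIT_CASE(hashtable_test_add_find),
        {}
    };

    static struct kunit_suite hashtable_test_suite = {
        .name = "hashtable",
        .test_cases = hashtable_test_cases,
    };
    kunit_test_suite(hashtable_test_suite);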
Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Add a simple way of redirecting calls to functions by including a
special prologue in the "real" function which checks to see if the
replacement function should be called (and, if so, calls it).
To redirect calls to a function, make the first (non-declaration) line
of the function:
KUNIT_STATIC_STUB_REDIRECT(function_name, [function arguments]);
(This will compile away to nothing if KUnit is not enabled, otherwise it
will check if a redirection is active, call the replacement function,
and return. This check is protected by a static branch, so has very
little overhead when there are no KUnit tests running.)
Calls to the real function can be redirected to a replacement using:
kunit_activate_static_stub(test, real_fn, replacement_fn);
The redirection will only affect calls made from within the kthread of
the current test, and will be automatically disabled when the test
completes. It can also be manually disabled with
kunit_deactivate_static_stub().
The 'example' KUnit test suite has a more complete example.
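For instance, a test built on this could look roughly like the following (hypothetical function names, modelled on the usage described above):

    #include <kunit/static_stub.h>
    #include <kunit/test.h>

    static int real_add_one(int i)
    {
        KUNIT_STATIC_STUB_REDIRECT(real_add_one, i);
        return i + 1;
    }

    static int fake_add_one(int i)
    {
        return 100;  /* replacement behaviour for the test */
    }

    static void add_one_stub_test(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, real_add_one(1), 2);

        kunit_activate_static_stub(test, real_add_one, fake_add_one);
        KUNIT_EXPECT_EQ(test, real_add_one(1), 100);

        kunit_deactivate_static_stub(test, real_add_one);
        KUNIT_EXPECT_EQ(test, real_add_one(1), 2);
    }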
Co-developed-by: Daniel Latypov <dlatypov@google.com>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
KUnit has several macros and functions intended for use from non-test
code. These hooks, currently the kunit_get_current_test() and
kunit_fail_current_test() macros, didn't work when CONFIG_KUNIT=m.
In order to support this case, the required functions and static data
need to be available unconditionally, even when KUnit itself is not
built-in. The new 'hooks.c' file is therefore always included, and has
both the static key required for kunit_get_current_test(), and a table
of function pointers in struct kunit_hooks_table. This is filled in with
the real implementations by kunit_install_hooks(), which is kept in
hooks-impl.h and called when the kunit module is loaded.
This can be extended for future features which require similar
"hook" behaviour, such as static stubs, by simply adding new entries to
the struct, and the appropriate code to set them.
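Sketched out, the mechanism has roughly this shape (simplified from the description above; the real declarations in the KUnit headers may differ):

    struct kunit_hooks_table {
        void (*fail_current_test)(const char *file, int line,
                                  const char *fmt, ...);
        /* future hooks, e.g. for static stubs, become further pointers */
    };

    /* Lives in the always-built hooks.c, even when CONFIG_KUNIT=m. */
    extern struct kunit_hooks_table kunit_hooks;

    /* Kept in hooks-impl.h; called when the kunit module is loaded to
     * fill in the real implementations. */
    void kunit_install_hooks(void);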
Fixed white-space errors during commit:
Shuah Khan <skhan@linuxfoundation.org>
Resolved merge conflicts with:
db105c37a4 ("kunit: Export kunit_running()")
This patch supersedes the above.
Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Rae Moar <rmoar@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Now that we have an iterator-based alternative for a very common case
of using cpumask_local_spread() for all CPUs in a row, it's worth
mentioning it in the comment to cpumask_local_spread().
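Assuming the iterator referred to here is for_each_numa_hop_mask() from this series, the pattern the comment points at looks roughly like this sketch (instead of calling cpumask_local_spread(i, node) for every i):

    #include <linux/cpumask.h>
    #include <linux/topology.h>

    static void visit_cpus_in_numa_order(int node)
    {
        const struct cpumask *mask, *prev = cpu_none_mask;
        unsigned int cpu;

        rcu_read_lock();
        for_each_numa_hop_mask(mask, node) {
            for_each_cpu_andnot(cpu, mask, prev)
                ;  /* use @cpu here; CPUs arrive in NUMA-locality order */
            prev = mask;
        }
        rcu_read_unlock();
    }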
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Now that all NUMA logic has been moved into sched_numa_find_nth_cpu(),
the else-branch of cpumask_local_spread() is just a function call, and
we can simplify the logic by using a ternary operator.
While here, replace BUG() with WARN_ON().
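After this change the function looks roughly like the sketch below (close to, but not guaranteed to match, the upstream code):

    unsigned int cpumask_local_spread(unsigned int i, int node)
    {
        unsigned int cpu;

        /* Wrap: we always want a cpu. */
        i %= num_online_cpus();

        cpu = (node == NUMA_NO_NODE) ?
            cpumask_nth(i, cpu_online_mask) :
            sched_numa_find_nth_cpu(cpu_online_mask, i, node);

        WARN_ON(cpu >= nr_cpu_ids);  /* was: BUG() */
        return cpu;
    }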
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Peter Lafreniere <peter@n8pjl.ca>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
In the following patches the function is used to implement in-place
bitmap traversal without storing intermediate results in temporary
bitmaps.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Peter Lafreniere <peter@n8pjl.ca>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Pull perf fix from Borislav Petkov:
- Lock the proper critical section when dealing with perf event context
* tag 'perf_urgent_for_v6.2_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf: Fix perf_event_pmu_context serialization
Pull misc fixes from Andrew Morton:
"25 hotfixes, mainly for MM. 13 are cc:stable"
* tag 'mm-hotfixes-stable-2023-02-02-19-24-2' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (26 commits)
mm: memcg: fix NULL pointer in mem_cgroup_track_foreign_dirty_slowpath()
Kconfig.debug: fix the help description in SCHED_DEBUG
mm/swapfile: add cond_resched() in get_swap_pages()
mm: use stack_depot_early_init for kmemleak
Squashfs: fix handling and sanity checking of xattr_ids count
sh: define RUNTIME_DISCARD_EXIT
highmem: round down the address passed to kunmap_flush_on_unmap()
migrate: hugetlb: check for hugetlb shared PMD in node migration
mm: hugetlb: proc: check for hugetlb shared PMD in /proc/PID/smaps
mm/MADV_COLLAPSE: catch !none !huge !bad pmd lookups
Revert "mm: kmemleak: alloc gray object for reserved region with direct map"
freevxfs: Kconfig: fix spelling
maple_tree: should get pivots boundary by type
.mailmap: update e-mail address for Eugen Hristev
mm, mremap: fix mremap() expanding for vma's with vm_ops->close()
squashfs: harden sanity check in squashfs_read_xattr_id_table
ia64: fix build error due to switch case label appearing next to declaration
mm: multi-gen LRU: fix crash during cgroup migration
Revert "mm: add nodes= arg to memory.reclaim"
zsmalloc: fix a race with deferred_handles storing
...
Patch series "lib/zlib: Set of s390 DFLTCC related patches for kernel zlib".
Patches 1-7 are a set of fixes and enhancements related to s390 zlib
hardware support (DFLTCC), integrated from the zlib-ng repository
(https://github.com/zlib-ng/zlib-ng) and relevant to the kernel zlib.
Since the official zlib repository never got the DFLTCC support code
merged, all the patches have been picked from the zlib-ng fork (zlib
data compression library for next-generation systems). This repo
contains new optimizations and fixes that are not implemented in the
official zlib repository, and it falls under the same zlib License.
All of the original patches from zlib-ng were authored by Ilya
Leoshkevich <iii@linux.ibm.com>. Coding style has been preserved for
future maintainability.
Patches 1-2 should have no effect on the kernel zlib but make the code
closer to zlib-ng for future maintainability.
Only Patch 3 touches common zlib_deflate code; the other patches are
relevant to the s390 tree only.
Patch 8 is separate and intends to resolve an issue with the kernel PPP
driver, which can use kernel zlib for packet compression. Without this
patch, PPP decompression can fail due to an error code returned by the
hardware (dfltcc_inflate), and PPP then disables zlib compression for
further packets.
This patch (of 8):
This commit is based on:
https://github.com/zlib-ng/zlib-ng/commit/d8b67f5
Link: https://lkml.kernel.org/r/20230126131428.1222214-1-zaslonko@linux.ibm.com
Link: https://lkml.kernel.org/r/20230126131428.1222214-2-zaslonko@linux.ibm.com
Signed-off-by: Mikhail Zaslonko <zaslonko@linux.ibm.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com> (s390)
Cc: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Use try_cmpxchg instead of cmpxchg(*ptr, old, new) == old in
{set,clear}_bits_ll. The x86 CMPXCHG instruction returns success in the
ZF flag, so this change saves a compare after cmpxchg (and the related
move instruction in front of cmpxchg).
Also, try_cmpxchg implicitly assigns the old *ptr value to "old" when
cmpxchg fails.
Note that the value from *ptr should be read using READ_ONCE to prevent
the compiler from merging, refetching or reordering the read.
The patch also declares these two functions inline, to ensure inlining.
No functional change intended.
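The resulting pattern for set_bits_ll() is roughly the following sketch (clear_bits_ll() is symmetrical; details may differ from the actual lib/genalloc.c change):

    static inline int set_bits_ll(unsigned long *addr, unsigned long mask_to_set)
    {
        unsigned long val = READ_ONCE(*addr);

        do {
            if (val & mask_to_set)
                return -EBUSY;
            cpu_relax();
        } while (!try_cmpxchg(addr, &val, val | mask_to_set));

        return 0;
    }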
Link: https://lkml.kernel.org/r/20230118150703.4024-1-ubizjak@gmail.com
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
When working on SoC bring-up, (a full) userspace may not be available,
making it hard to benchmark the CPU performance of the system under
development. Still, one may want to have a rough idea of the (relative)
performance of one or more CPU cores, especially when working on e.g. the
clock driver that controls the CPU core clock(s).
Hence make the classical Dhrystone 2.1 benchmark available as a Linux
kernel test module, based on [1].
When built-in, this benchmark can be run without any userspace present.
Parallel runs (run on multiple CPU cores) are supported; just kick the
"run" file multiple times.
Note that the actual figures depend on the configuration options that
control compiler optimization (e.g. CONFIG_CC_OPTIMIZE_FOR_SIZE vs.
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE), and on the compiler options used when
building the kernel in general. Hence numbers may differ from those
obtained by running similar benchmarks in userspace.
[1] https://github.com/qris/dhrystone-deb.git
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lkml.kernel.org/r/4d07ad990740a5f1e426ce4566fb514f60ec9bdd.1670509558.git.geert+renesas@glider.be
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: David Gow <davidgow@google.com>
[geert+renesas@glider.be: fix uninitialized use of ret]
Link: https://lkml.kernel.org/r/alpine.DEB.2.22.394.2212190857310.137329@ramsan.of.borg
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>