Commit Graph

Mahesh Cherukumilli 5018b12b0d Release notes for 9.0.0 GA
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2025-10-21 09:05:42 -07:00
Binbin 28cf7ba66a Initialize the lua attributes of the luaFunction script (#2750)
This was introduced in #1826. It creates an `Uninitialised value was
created by a heap allocation` error in the CI.

Signed-off-by: Binbin <binloveplay1314@qq.com>
(cherry picked from commit 5d3cb3d04c)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Jacob Murphy c87e47ed81 Fix invalid memory address caused by hashtable shrinking during safe iteration (#2753)
Safe iterators pause rehashing, but don't pause auto shrinking. This
allows stale bucket references, which then cause a use-after-free (in
this case, via compactBucketChain on a deleted bucket).

This problem is easily reproducible via atomic slot migration, where we
call delKeysInSlot which relies on calling delete within a safe
iterator. After the fix, it no longer causes a crash.

Since all cases where rehashing is paused expect auto shrinking to also
be paused, I am making this happen automatically as part of pausing
rehashing.
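A minimal sketch of the idea, with illustrative field and function names
rather than the actual hashtable internals: the pause counter for
rehashing now also bumps a pause counter for auto shrinking, so a safe
iterator can never observe buckets freed by a concurrent shrink.

```c
/* Illustrative sketch; these are not the real hashtable identifiers. */
typedef struct hashtable {
    int pause_rehash;      /* > 0: rehashing is paused */
    int pause_auto_shrink; /* > 0: automatic shrinking is paused */
} hashtable;

static void hashtablePauseRehashing(hashtable *ht) {
    ht->pause_rehash++;
    ht->pause_auto_shrink++; /* new: shrink is paused alongside rehash */
}

static void hashtableResumeRehashing(hashtable *ht) {
    ht->pause_rehash--;
    ht->pause_auto_shrink--;
}
```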

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
(cherry picked from commit 1cf0df9fc3)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Jacob Murphy 3f243ff2a7 Fix incorrect kvstore size and BIT accounting after completed migration (#2749)
When working on #2635 I erroneously duplicated the
setSlotImportingStateInAllDbs call for successful imports, which
doubled the key count in the kvstore. This results in DBSIZE
reporting an incorrect sum, and also causes BIT corruption that can
eventually result in a crash.

The solution is:

1. Only call setSlotImportingStateInAllDbs once (in our
finishSlotMigrationJob function)
2. Make setSlotImportingStateInAllDbs idempotent by checking if the
delete from the kvstore importing hashtable is a no-op (see the sketch
below)

This also fixes a bug where the number of importing keys was not
lowered after the migration, but this is less critical since the count
is only used when resizing the dictionary on RDB load. However, it could
result in unloadable RDBs if the importing key count gets large enough.
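A rough sketch of the idempotency check from item 2 above, with
hypothetical helper names standing in for the real kvstore calls: the
accounting is only adjusted when the delete actually removed an entry,
so a second call is a harmless no-op.

```c
#include <stdbool.h>

/* Hypothetical helpers; the real functions live in the kvstore code. */
extern bool importingTableDelete(int db, int slot); /* false on no-op */
extern void foldImportedKeysIntoDbCount(int db, int slot);

void clearSlotImportingState(int db, int slot) {
    /* A no-op delete means the state was already cleared; skip the
     * accounting so the key count cannot be doubled. */
    if (!importingTableDelete(db, slot)) return;
    foldImportedKeysIntoDbCount(db, slot);
}
```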

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
(cherry picked from commit 2a914aa521)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Harkrishn Patro 937976c053 Bump old engine version(s) for compatibility test (#2741)
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
(cherry picked from commit 95154feaa1)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Roshan Khatri 4e113d28ce Deflake Psync established within grace period (#2743)
Increased the wait time to a total of 10 seconds when checking the log
for the `Done loading RDB` message.

Fixes https://github.com/valkey-io/valkey/issues/2694

CI run (100 times):
https://github.com/roshkhatri/valkey/actions/runs/18576201712/job/52961907806

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
(cherry picked from commit 898172bc9c)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Binbin 791b792d78 FUNCTION FLUSH re-create lua VM, fix flush not gc, fix flush async + load crash (#1826)
This test exposes two issues:
```
test {FUNCTION - test function flush} {
    for {set i 0} {$i < 10000} {incr i} {
        r function load [get_function_code LUA test_$i test_$i {return 'hello'}]
    }
    set before_flush_memory [s used_memory_vm_functions]
    r function flush sync
    set after_flush_memory [s used_memory_vm_functions]
    puts "flush sync, before_flush_memory: $before_flush_memory, after_flush_memory: $after_flush_memory"

    for {set i 0} {$i < 10000} {incr i} {
        r function load [get_function_code LUA test_$i test_$i {return 'hello'}]
    }
    set before_flush_memory [s used_memory_vm_functions]
    r function flush async
    set after_flush_memory [s used_memory_vm_functions]
    puts "flush async, before_flush_memory: $before_flush_memory, after_flush_memory: $after_flush_memory"

    for {set i 0} {$i < 10000} {incr i} {
        r function load [get_function_code LUA test_$i test_$i {return 'hello'}]
    }
    puts "Test done"
}
```

The first issue is visible in the test output: after executing FUNCTION
FLUSH, used_memory_vm_functions has not changed at all:
```
flush sync, before_flush_memory: 2962432, after_flush_memory: 2962432
flush async, before_flush_memory: 4504576, after_flush_memory: 4504576
```

The second issue is a crash when loading functions during the async
flush:
```
=== VALKEY BUG REPORT START: Cut & paste starting from here ===
 # valkey 255.255.255 crashed by signal: 11, si_code: 2
 # Accessing address: 0xe0429b7100000a3c
 # Crashed running the instruction at: 0x102e0b09c

------ STACK TRACE ------
EIP:
0   valkey-server                       0x0000000102e0b09c luaH_getstr + 52

Backtrace:
0   libsystem_platform.dylib            0x000000018b066584 _sigtramp + 56
1   valkey-server                       0x0000000102e01054 luaD_precall + 96
2   valkey-server                       0x0000000102e01b10 luaD_call + 104
3   valkey-server                       0x0000000102e00d1c luaD_rawrunprotected + 76
4   valkey-server                       0x0000000102e01e3c luaD_pcall + 60
5   valkey-server                       0x0000000102dfc630 lua_pcall + 300
6   valkey-server                       0x0000000102f77770 luaEngineCompileCode + 708
7   valkey-server                       0x0000000102f71f50 scriptingEngineCallCompileCode + 104
8   valkey-server                       0x0000000102f700b0 functionsCreateWithLibraryCtx + 2088
9   valkey-server                       0x0000000102f70898 functionLoadCommand + 312
10  valkey-server                       0x0000000102e3978c call + 416
11  valkey-server                       0x0000000102e3b5b8 processCommand + 3340
12  valkey-server                       0x0000000102e563cc processInputBuffer + 520
13  valkey-server                       0x0000000102e55808 readQueryFromClient + 92
14  valkey-server                       0x0000000102f696e0 connSocketEventHandler + 180
15  valkey-server                       0x0000000102e20480 aeProcessEvents + 372
16  valkey-server                       0x0000000102e4aad0 main + 26412
17  dyld                                0x000000018acab154 start + 2476

------ STACK TRACE DONE ------
```

The reason is that, in the old implementation (introduced in 7.0),
FUNCTION FLUSH uses lua_unref to remove the script from the Lua VM.
lua_unref does not trigger the GC, so we cannot effectively reclaim
memory after the FUNCTION FLUSH.

The other issue is that, since we don't re-create the Lua VM in
FUNCTION FLUSH, loading functions during a FUNCTION FLUSH ASYNC results
in a crash because the Lua engine state is not thread-safe.

The correct solution is to re-create a new Lua VM, just like SCRIPT
FLUSH does.
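A minimal sketch of that approach using the standard Lua C API (the
wiring into the functions engine is omitted, and the function name is
illustrative): closing the old state frees everything it owns in one
step, and a fresh state replaces it.

```c
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

/* Sketch: tear down the whole VM instead of lua_unref'ing each
 * function, mirroring what SCRIPT FLUSH does for EVAL scripts. */
static lua_State *functionsVmReset(lua_State *old_vm) {
    if (old_vm) lua_close(old_vm);   /* frees all memory the VM owns */
    lua_State *vm = luaL_newstate(); /* brand-new, empty engine state */
    luaL_openlibs(vm);
    return vm;
}
```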

---------

Signed-off-by: Binbin <binloveplay1314@qq.com>
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
Co-authored-by: Ricardo Dias <ricardo.dias@percona.com>
(cherry picked from commit b4c93cc9c2)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Viktor Söderqvist 11f47a107c Fix double MOVED reply on unblock at failover (#2734)
#2329 introduced a bug that causes a blocked client in cluster mode to
receive two MOVED redirects instead of one. This was not seen in tests,
except in the reply schema validator.

The fix makes sure the client's pending command is cleared after sending
the MOVED redirect, to prevent it from being reprocessed.

Fixes #2676.

---------

Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
(cherry picked from commit 54da8344c1)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Jacob Murphy 6bb61d175b Stop using DEBUG LOADAOF on replica in ASM tests (#2719)
DEBUG LOADAOF sometimes works, but it results in -LOADING responses to
the primary, so there are lots of race conditions. It isn't something we
should be doing anyway. To test, I just disconnect the replica before
loading the AOF, then reconnect it.

Fixes #2712

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
(cherry picked from commit dbcf022480)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Jacob Murphy 626b3653f2 Deflake atomic slot migration client flag test (#2720)
This test was failing and causing the next test to throw an exception.
It was failing because we never waited for the slot migration to connect
before proceeding.

Fixes #2692

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
(cherry picked from commit 19474c867a)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Jacob Murphy b2dc84b07e Fix crash that occurs sometimes when aborting a slot migration while child snapshot is active (#2721)
The race condition causes the client to be used and subsequently double
freed by the slot migration read pipe handler. The order of events is:

1. We kill the slot migration child process during CANCELSLOTMIGRATIONS
2. We then free the associated client to the target node
3. Although we kill the child process, it is not guaranteed that the
pipe from child to parent will be empty
4. If the pipe is not empty, we later read that data out in the
slotMigrationPipeReadHandler
5. In the pipe read handler, we attempt to write to the connection. If
writing to the connection fails, we will attempt to free the client
6. However, the client was already freed, so this is a double free

Notably, the slot migration being aborted doesn't need to be triggered
by `CANCELSLOTMIGRATIONS`, it can be any failure.

To solve this, we simply:

1. Set the slot migration pipe connection to NULL whenever it is
unlinked
2. Bail out early in slot migration pipe read handler if the connection
is NULL
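A sketch of that guard, with an illustrative job struct rather than the
actual slot migration structures: the unlink path sets the connection to
NULL, and the read handler drops pipe data instead of touching a dead
client.

```c
#include <stddef.h>
#include <unistd.h>

/* Illustrative stand-in for the real slot migration job state. */
struct slotMigrationJob {
    void *conn;       /* NULLed when the client is unlinked */
    int pipe_read_fd; /* child-to-parent snapshot pipe */
};

static void slotMigrationPipeReadHandler(struct slotMigrationJob *job) {
    if (job->conn == NULL) return; /* client already freed: bail out */
    char buf[4096];
    ssize_t n = read(job->pipe_read_fd, buf, sizeof(buf));
    (void)n; /* ...otherwise forward the data to job->conn as before... */
}
```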

I also consolidated the killSlotMigrationChild call to one code path,
which is executed on client unlink. Before, there were two code paths
that would do this (once on slot migration job finish, and once on
client unlink), sending the signal twice. That is fine, but inefficient.

Also, add a test to cancel during the slot migration snapshot to make
sure this case is covered (we only caught it during the module test).

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
(cherry picked from commit 28e5dcce2c)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Ran Shidlansik e76143329b HSETEX with FXX should not create an object if it does not exist (#2716)
When the hash object does not exist, FXX should simply fail the check
without creating the object, while FNX should trivially succeed.

Note - this also fixes a potential compilation warning on some compilers
that do constant folding of a variable-length array when its size is a
constant expression.
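A small sketch of the intended check order, with illustrative names
rather than the actual Valkey identifiers: the FXX condition must fail
before any hash object is created.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct hashobj hashobj;
extern hashobj *lookupHashForWrite(const char *key); /* assumed helper */

typedef enum { COND_NONE, COND_FNX, COND_FXX } field_cond;

/* FXX on a missing hash fails without creating anything; FNX on a
 * missing hash trivially passes (every field is new). The object is
 * created lazily later, only when the command will actually write. */
bool hsetexConditionPasses(const char *key, field_cond cond) {
    hashobj *o = lookupHashForWrite(key);
    if (o == NULL) return cond != COND_FXX;
    /* existing hash: per-field FNX/FXX checks happen elsewhere */
    return true;
}
```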

Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
(cherry picked from commit 8182f4a0b9)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Harkrishn Patro 8253c38ce2 Add compatibility test with Valkey 7.2/8.0 (#2342)
* Add cross version compatibility test to run with Valkey 7.2 and 8.0
* Add mechanism in TCL test to skip tests dynamically - #2711

---------

Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Signed-off-by: Harkrishn Patro <bunty.hari@gmail.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
(cherry picked from commit 18214be490)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Harkrishn Patro 010fa64bcf Fix memory leak with CLIENT LIST/KILL duplicate filters (#2362)
With #1401, we introduced additional filters to the CLIENT LIST/KILL
subcommands. The intended behavior was to pick the last value of each
filter. However, we introduced a memory leak for all the preceding
filter values.

Before this change:
```
> CLIENT LIST IP 127.0.0.1 IP 127.0.0.1
id=4 addr=127.0.0.1:37866 laddr=127.0.0.1:6379 fd=10 name= age=0 idle=0 flags=N capa= db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=21 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=16989 events=r cmd=client|list user=default redir=-1 resp=2 lib-name= lib-ver= tot-net-in=49 tot-net-out=0 tot-cmds=0
```
Leak:
```
Direct leak of 11 byte(s) in 1 object(s) allocated from:
    #0 0x7f2901aa557d in malloc (/lib64/libasan.so.4+0xd857d)
    #1 0x76db76 in ztrymalloc_usable_internal /workplace/harkrisp/valkey/src/zmalloc.c:156
    #2 0x76db76 in zmalloc_usable /workplace/harkrisp/valkey/src/zmalloc.c:200
    #3 0x4c4121 in _sdsnewlen.constprop.230 /workplace/harkrisp/valkey/src/sds.c:113
    #4 0x4dc456 in parseClientFiltersOrReply.constprop.63 /workplace/harkrisp/valkey/src/networking.c:4264
    #5 0x4bb9f7 in clientListCommand /workplace/harkrisp/valkey/src/networking.c:4600
    #6 0x641159 in call /workplace/harkrisp/valkey/src/server.c:3772
    #7 0x6431a6 in processCommand /workplace/harkrisp/valkey/src/server.c:4434
    #8 0x4bfa9b in processCommandAndResetClient /workplace/harkrisp/valkey/src/networking.c:3571
    #9 0x4bfa9b in processInputBuffer /workplace/harkrisp/valkey/src/networking.c:3702
    #10 0x4bffa3 in readQueryFromClient /workplace/harkrisp/valkey/src/networking.c:3812
    #11 0x481015 in callHandler /workplace/harkrisp/valkey/src/connhelpers.h:79
    #12 0x481015 in connSocketEventHandler.lto_priv.394 /workplace/harkrisp/valkey/src/socket.c:301
    #13 0x7d3fb3 in aeProcessEvents /workplace/harkrisp/valkey/src/ae.c:486
    #14 0x7d4d44 in aeMain /workplace/harkrisp/valkey/src/ae.c:543
    #15 0x453925 in main /workplace/harkrisp/valkey/src/server.c:7319
    #16 0x7f2900cd7139 in __libc_start_main (/lib64/libc.so.6+0x21139)
```

Note: For the ID / NOT-ID filters we group all the options and perform
filtering, whereas for the remaining filters we only pick the last
option.

---------

Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
(cherry picked from commit 155b0bb821)
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Mahesh Cherukumilli f667f4ec98 Bump version to 9.0.0 GA
Signed-off-by: cherukum-amazon <cherukum@amazon.com>
2025-10-21 09:05:42 -07:00
Sarthak Aggarwal 6700272f31 Deflake replica selection test by relaxing cluster configurations (#2672)
We have relaxed the `cluster-ping-interval` and `cluster-node-timeout`
so that the cluster has enough time to stabilize and propagate changes.

Fixes this test's occasional failure when running with valgrind:

[err]: Node #10 should eventually replicate node #5 in
tests/unit/cluster/slave-selection.tcl
    #10 didn't became slave of #5

Backported to the 9.0 branch in #2731.

Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
2025-10-13 21:47:32 +02:00
Jacob Murphy 128f30bab7 Reduce flakiness of atomic slot migration AOF test (#2705)
If we don't wait for the replica to resync, the migration may be
cancelled by the time the replica resyncs, resulting in a test failure
when we can't find the migration on the replica.

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2025-10-08 13:19:23 -07:00
Jacob Murphy 40a257d3ac Release notes for 9.0.0-rc3
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Jacob Murphy 32e4a0bbfb Use correct arguments in LOLWUT test (#2708)
Seeing test failures due to this on the 9.0.0 branch:

```
[exception]: Executing test client: ERR Syntax error. Use: LOLWUT [columns rows] [real imaginary].
ERR Syntax error. Use: LOLWUT [columns rows] [real imaginary]
```

It turns out we were just providing the version as an argument, instead
of specifying which version to run.

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Jacob Murphy b90ac09720 Introduce SYNCSLOTS CAPA for forwards compatibility (#2688)
For now, introduce this and have it do nothing. Eventually, we can use
this to negotiate supported capabilities on either end. Right now, there
is nothing to send or support, so it just accepts it and doesn't reply.

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Jacob Murphy 68a666c698 Prevent exposure of importing keys on replicas during atomic slot migration (#2635)
# Problem

In the current slot migration design, replicas are completely unaware of
the slot migration. Because of this, they do not know to hide importing
keys, which results in exposure of these keys to commands like KEYS,
SCAN, RANDOMKEY, and DBSIZE.

# Design

The main part of the design is that we will now listen for and process
the `SYNCSLOTS ESTABLISH` command on the replica. When a `SYNCSLOTS
ESTABLISH` command is received from the primary, we begin tracking a new
slot import in a special `SLOT_IMPORT_OCCURRING_ON_PRIMARY` state.
Replicas use this state to track the import, and await a future
`SYNCSLOTS FINISH` message that tells them whether the import succeeded
or failed.

## Success Case

```
     Source                                          Target                         Target Replica
       |                                                |                                 |
       |------------ SYNCSLOTS ESTABLISH -------------->|                                 |
       |                                                |----- SYNCSLOTS ESTABLISH ------>|
       |<-------------------- +OK ----------------------|                                 |
       |                                                |                                 |
       |~~~~~~~~~~~~~~ snapshot as AOF ~~~~~~~~~~~~~~~~>|                                 |
       |                                                |~~~~~~ forward snapshot ~~~~~~~~>|
       |----------- SYNCSLOTS SNAPSHOT-EOF ------------>|                                 |
       |                                                |                                 |
       |<----------- SYNCSLOTS REQUEST-PAUSE -----------|                                 |
       |                                                |                                 |
       |~~~~~~~~~~~~ incremental changes ~~~~~~~~~~~~~~>|                                 |
       |                                                |~~~~~~ forward changes ~~~~~~~~~>|
       |--------------- SYNCSLOTS PAUSED -------------->|                                 |
       |                                                |                                 |
       |<---------- SYNCSLOTS REQUEST-FAILOVER ---------|                                 |
       |                                                |                                 |
       |---------- SYNCSLOTS FAILOVER-GRANTED --------->|                                 |
       |                                                |                                 |
       |                                            (performs takeover &                  |
       |                                             propagates topology)                 |
       |                                                |                                 |
       |                                                |------- SYNCSLOTS FINISH ------->|
 (finds out about topology                              |                                 |
  change & marks migration done)                        |                                 |
       |                                                |                                 |
```

## Failure Case
```
     Source                                          Target                         Target Replica
       |                                                |                                 |
       |------------ SYNCSLOTS ESTABLISH -------------->|                                 |
       |                                                |----- SYNCSLOTS ESTABLISH ------>|
       |<-------------------- +OK ----------------------|                                 |
     ...                                              ...                               ...
       |                                                |                                 |
       |                                             <FAILURE>                            |
       |                                                |                                 |
       |                                      (performs cleanup)                          |
       |                                                | ~~~~~~ UNLINK <key> ... ~~~~~~~>|
       |                                                |                                 |
       |                                                | ------ SYNCSLOTS FINISH ------->|
       |                                                |                                 |
```

## Full Sync, Partial Sync, and RDB

In order to ensure replicas that resync during the import are still
aware of the import, the slot import is serialized to a new
`cluster-slot-imports` aux field. The encoding includes the job name,
the source node name, and the slot ranges being imported. Upon loading
an RDB with the `cluster-slot-imports` aux field, replicas will add a
new migration in the `SLOT_IMPORT_OCCURRING_ON_PRIMARY` state.

It's important to note that a previously saved RDB file can be used as
the basis for partial sync with a primary. Because of this, whenever we
load an RDB file with the `cluster-slot-imports` aux field, even from
disk, we will still add a new migration to track the import. If after
loading the RDB, the Valkey node is a primary, it will cancel the slot
migration. Having this tracking state loaded on primaries will ensure
that replicas partial syncing to a restarted primary still get their
`SYNCSLOTS FINISH` message in the replication stream.

## AOF

Since AOF cannot be used as the basis for a partial sync, we don't
necessarily need to persist the `SYNCSLOTS ESTABLISH` and `FINISH`
commands to the AOF.

However, considering there is work to change this (#59, #1901), this
design doesn't make any assumptions about it.

We will propagate the `ESTABLISH` and `FINISH` commands to the AOF, and
ensure that they can be properly replayed on AOF load to get to the
right state. Similar to RDB, if there are any pending "ESTABLISH"
commands that don't have a "FINISH" afterwards upon becoming primary, we
will make sure to fail those in `verifyClusterConfigWithData`.

Additionally, there was a bug in the existing slot migration where slot
import clients were not having their commands persisted to AOF. This has
been fixed by ensuring we still propagate to AOF even for slot import
clients.

## Promotion & Demotion

Since the primary is solely responsible for cleaning up unowned slots,
primaries that are demoted will not clean up previously active slot
imports. The promoted replica will be responsible for both cleaning up
the slot (`verifyClusterConfigWithData`) and sending a `SYNCSLOTS
FINISH`.

# Other Options Considered

I also considered tracking "dirty" slots rather than using the slot
import state machine. In this setup, primaries and replicas would simply
mark each slot's hashtable in the kvstore as dirty when something is
written to it and we do not currently own that slot.

This approach is simpler, but has a problem in that modules loaded on
the replica would still not get slot migration start/end notifications.
If the modules on the replica do not get such notifications, they will
not be able to properly contain these dirty keys during slot migration
events.

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Madelyn Olson b25f87be77 Fix format issues with CVE fix (#2679)
The CVE fixes had a formatting and external test issue that wasn't
caught because private branches don't run those CI steps.

Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Madelyn Olson 61cac56d0c Merge commit from fork
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Jacob Murphy 21a70f6eea Add slot migration client flags and module context flags (#2639)
New client flags reported by CLIENT INFO and CLIENT LIST:

* `i` for atomic slot migration importing client
* `E` for atomic slot migration exporting client

New flags in return value of `ValkeyModule_GetContextFlags`:

* `VALKEYMODULE_CTX_FLAGS_SLOT_IMPORT_CLIENT`: Indicates that the client
attached to this context is the slot import client.
* `VALKEYMODULE_CTX_FLAGS_SLOT_EXPORT_CLIENT`: Indicates that the client
attached to this context is the slot export client.

Users could use this to monitor the underlying client info of the slot
migration, and more clearly understand why they see extra clients during
the migration.

Modules can use these to detect keyspace notifications on import
clients. I am also adding export flags for symmetry, although there
should not be keyspace notifications. But they would potentially be
visible in command filters or in server events triggered by that client.
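A short sketch of how a module might consume the new flags;
ValkeyModule_GetContextFlags and ValkeyModule_Log are the existing
module API, while the handler itself is illustrative.

```c
#include "valkeymodule.h"

/* Sketch: inside, e.g., a keyspace notification callback, a module can
 * check whether the triggering client is a slot migration client. */
static void noteSlotMigrationClient(ValkeyModuleCtx *ctx) {
    int flags = ValkeyModule_GetContextFlags(ctx);
    if (flags & VALKEYMODULE_CTX_FLAGS_SLOT_IMPORT_CLIENT)
        ValkeyModule_Log(ctx, "notice", "event from the slot import client");
    else if (flags & VALKEYMODULE_CTX_FLAGS_SLOT_EXPORT_CLIENT)
        ValkeyModule_Log(ctx, "notice", "event from the slot export client");
}
```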

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Alina Liu 35e052ee5e Defrag if slab 1/8 full to fix defrag didn't stop issue (#2656)
**Issue History:**
1. The flaky test issue "defrag didn't stop" was originally detected in
February 2025: https://github.com/valkey-io/valkey/issues/1746
Solution for 1746: https://github.com/valkey-io/valkey/pull/1762
2. Similar issue occurred recently:
https://github.com/valkey-io/valkey/actions/runs/16585350083/job/46909359496#step:5:7640

**Investigation:**
1. First, the issue occurs specifically in the Active Defrag stream test
in cluster mode.
2. After investigating `test_stream` in `memefficiency.tcl`, I found the
root cause is in the defrag logic rather than the test itself - tests
still failed with the same error even when I tried different parameters
for the test.
3. Then I looked at malloc-stats and identified potential defrag issues,
particularly in the 80B bin, where utilization only reaches ~75% after
defrag instead of the expected near 100%, while other bins show proper
defrag behavior - 80B is the size of a new stream (confirmed in
`t_stream.c`) that we add during the test.
4. For 80B, after adding 200000 streams and fragmenting, `curregs` =
100030; after a lot of defrag cycles, there are still 122 nonfull slabs
out of 511 slabs, with the remaining 446 items not defragged (average
~4 per nonfull slab).
**Detailed malloc-stats:**
- Total slabs: 511
- Non-full slabs: 122
- Full slabs: 511-122=389
- Theoretical maximum per slab: 256 items
- Allocated items in non-full slabs: 100030-389*256=446
- Average items per non-full slab: 446/122=3.66

**Root Cause:**
**There are some immovable items which prevent complete defrag**

**Problems in old defrag logic:**
1. With the previous condition (we don't defrag if slab utilization >
avg utilization * 1.125), the 12.5% threshold doesn't work well at low
utilizations.

- Imagine we have 446 items in 122 nonfull slabs (avg 3.66 items per
nonfull slab); say, e.g., we have 81 slabs with 5 items each + 41 slabs
with 1 item each.
- 12.5% threshold: 3.66*1.125=4.11
- If those 41 single items are immovable, they actually lower the
average, so the remaining 81 slabs will be above the threshold (5>4.11)
and will not be defragged - defrag didn't stop.

2. The distribution of immovable items across slabs was causing
inconsistent defragmentation and flaky test outcomes.

- If those 41 single items are movable, they will be moved and the avg
will be 5; the 12.5% threshold becomes 5*1.125=5.625, so the remaining
81 slabs will fall below the threshold (5<5.625) and will be defragged -
defrag succeeds.
- This explains why we got flaky defrag tests.

**Final solution :**
1. Add one more condition before the old logic in `makeDefragDecision`
to trigger defragmentation when a slab is less than 1/8 full (the 1/8
threshold (12.5%) was chosen to align with the existing utilization
threshold factor); see the sketch below. This ensures no low-utilization
slabs are left undefragmented and stabilizes the defrag behavior.
2. The reason why we have immovable items, and how to handle them, will
be investigated later.
3. Be sure to rebuild Valkey before testing it.
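A compact sketch of the amended decision from item 1 (function and
parameter names are illustrative): a very sparse slab is always
defragged, and only otherwise does the old relative threshold apply.

```c
#include <stdbool.h>

/* slab_util and avg_util are utilization fractions in [0, 1]. */
static bool makeDefragDecision(double slab_util, double avg_util) {
    if (slab_util < 1.0 / 8) return true; /* new: slab less than 1/8 full */
    /* old rule: don't defrag if slab utilization > avg utilization * 1.125 */
    return slab_util <= avg_util * 1.125;
}
```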

**Local test result:**
- Before fix: pass rate 80.8% (63/78)
- After fix:
  - Testing only the stream test: pass rate 100% (200/200)
  - Testing the whole memefficiency.tcl: pass rate 100% (100/100)

Resolves #2398, the "defrag didn't stop" issue, with help from @JimB123
and @madolson.

---------

Signed-off-by: Alina Liu <liusalisa6363@gmail.com>
Signed-off-by: asagegeLiu <liusalisa6363@gmail.com>
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Madelyn Olson 83592593fa Implement a lolwut for version 9 (#2646)
As requested, here is a version of lolwut for 9 that visualizes a Julia
set with ASCII art.

Example:
```
127.0.0.1:6379> lolwut version 9

                                     .............
                                 ......................
                              ............................
                           ......:::--:::::::::::::::.......
                         .....:::=+*@@@=--------=+===--::....
                        ....:::-+@@@@&*+=====+%@@@@@@@@=-::....
                      .....:::-=+@@@@@%%*++*@@@@@@@@@&*=--::....
                     .....::--=++#@@@@@@@@##@@@@@@@@@@@@@@=::....
                    ......:-=@&#&@@@@@@@@@@@@@@@@@@@@@@@@@%-::...
                   ......::-+@@@@@@@@@@@@@@@@@@&&@@@#%#&@@@-::....
                  .......::-=+%@@@@@@@@@@@@@@@@#%%*+++++%@+-:.....
                  .......::-=@&@@@@@@@@@@@@@@@@&*++=====---::.....
                 .......:::--*@@@@@@@@@@@@@@@@@%++===----::::.....
                ........::::-=+*%&@@@@@@@@@&&&%*+==----:::::......
                ........::::--=+@@@@@@@@@@&##%*++==---:::::.......
                .......:::::---=+#@@@@@@@@&&&#%*+==---:::::.......
               ........:::::---=++*%%#&&@@@@@@@@@+=---::::........
               .......:::::----=++*%##&@@@@@@@@@@%+=--::::.......
               ......::::-----==++#@@@@@@@@@@@@@&%*+=-:::........
               ......:::---====++*@@@@@@@@@@@@@@@@@@+-:::.......
               .....::-=++==+++**%@@@@@@@@@@@@@@@@#*=--::.......
                ....:-%@@%****%###&@@@@@@@@@@@@@@@@&+--:.......
                ....:-=@@@@@&@@@@@@@@@@@@@@@@@@@@@@@@=::......
                 ...::+@@@@@@@@@@@@@@@&&@@@@@@@@%**@+-::.....
                 ....::-=+%#@@@@@@@@@&%%%&@@@@@@*==-:::.....
                  ....::--+%@@@@@@@%++==++*#@@@@&=-:::....
                   ....:::-*@**@@+==----==*%@@@@+-:::....
                     .....:::---::::::::--=+@=--::.....
                       .........::::::::::::::.......
                         .........................
                             ..................
                                    ...

Ascii representation of Julia set with constant 0.41 + 0.29i
Don't forget to have fun! Valkey ver. 255.255.255
```

You can pass in arbitrary rows and columns (it's best when rows is 2x
the number of columns) and an arbitrary Julia constant, so it is
repeatable. Worst case it takes about ~100us on my M2 MacBook, which
should be fine to make sure it's not taking too many system resources.
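For reference, a sketch of the escape-time iteration that drives art
like this (not the actual lolwut code): iterate z = z^2 + c and count
the steps until |z| escapes, then map the count onto a brightness ramp
of glyphs such as " .:-=+*#%@".

```c
#include <complex.h>

/* c = 0.41 + 0.29i is the constant shown in the example output. */
static int juliaEscapeSteps(double complex z, double complex c, int max_iter) {
    int i;
    for (i = 0; i < max_iter && cabs(z) <= 2.0; i++) z = z * z + c;
    return i; /* higher counts map to denser glyphs */
}
```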

---------

Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Ran Shidlansik aab4af58f7 Fix module key memory usage accounting (#2661)
Make objectComputeSize account for the key size as well when the key is
a module datatype

fixes: https://github.com/valkey-io/valkey/issues/2660

---------

Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Jacob Murphy 5bda19a575 Fix atomic slot migration snapshot never proceeding with hz 1 (#2636)
The problem is that ACKs run on a fixed loop (once every second), and
with hz 1 this will happen on every cron loop.

Instead, we can send the ACK after running the main logic. We can also
add an optimization where we don't send an ACK from source to target if
we already sent some data this cron loop, since the target resets the
ack timer on any data received over the connection.

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
cjx-zar d3a6c6c819 Redirect blocked clients after failover (#2329)
In standalone mode, after a switchover, a command that was originally
blocking on the primary returns -REDIRECT instead of -UNBLOCKED when the
client has the redirect capability.

Similarly, in cluster mode, after a switchover, blocked commands receive
a -MOVED redirect instead of -UNBLOCKED.

After this fix, the handling of blocked connections during a switchover
in standalone and cluster modes is nearly identical. It can be
summarized as follows:

Standalone:

1. A client with the redirect capability that is blocked on a key on the
primary node will receive a -REDIRECT after the switchover completes
instead of -UNBLOCKED.
2. Readonly clients blocked on the primary or replica node will remain
blocked throughout the switchover.

Cluster:

1. A client blocked on a key served by the primary node will receive a
-MOVED instead of a probabilistic -UNBLOCKED error.
2. Readonly clients blocked on a key served by the primary or replica
node will remain blocked throughout the switchover.

---------

Signed-off-by: cjx-zar <56825069+cjx-zar@users.noreply.github.com>
Co-authored-by: Simon Baatz <gmbnomis@gmail.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Binbin bb44da0885 Minor fix for dual rdb channel connection conn error log (#2658)
This should be server.repl_rdb_transfer_s

Signed-off-by: Binbin <binloveplay1314@qq.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Jacob Murphy 9e02deebac Add atomic slot migration test for unblock on migration complete (#2637)
This is already handled by `clusterRedirectBlockedClientIfNeeded`. With
the work we are doing in #2329, it makes sense to have an explicit test here
to prevent regression.

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Signed-off-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Sarthak Aggarwal 4e8cebae7c Increasing retries to allow successful meet in Valgrind (#2644)
There is a daily test failure in valgrind, which looks like an issue related to
slowness in valgrind mode.

Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
Signed-off-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
chzhoo 5d16af8a40 Optimize skiplist random level generation logic (#2631)
Each insertion of a skiplist node requires generating a random level
(via the `zslRandomLevel` function), and some commands (such as
`zunionstore`) call the `zslRandomLevel` function multiple times.
Therefore, optimizing `zslRandomLevel` can significantly improve the
performance of these commands.

The main optimization approach is as follows:

1. Original logic: Each iteration called the `random` function, with a
0.25 probability of continuing to call `random` again. In the worst-case
scenario, it required up to 32 calls (though the probability of this
occurring is extremely low).
2. Optimized logic: We only need to call the `genrand64_int64` function
once. Each iteration uses only 2 bits of randomness, effectively
achieving the equivalent of 32 iterations in the original algorithm.
3. Additionally, the introduction of `__builtin_clzll`, which compiles
to a single, highly efficient CPU instruction (e.g., LZCNT on x86, CLZ
on ARM) on supported hardware platforms, significantly reduces CPU
usage.
4. Although I've explained a lot, the actual code changes are quite
minimal, so just look at the code directly (a sketch of the idea
follows below).
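A sketch of the trick, assuming `genrand64_int64` is the 64-bit
generator the commit mentions (the real code lives in t_zset.c): one
draw yields 32 two-bit trials, each promoting a level with probability
1/4, and a single CLZ finds how many consecutive trials succeeded.

```c
#include <stdint.h>

#define ZSKIPLIST_MAXLEVEL 32

extern uint64_t genrand64_int64(void); /* assumed 64-bit RNG */

static int zslRandomLevelSketch(void) {
    uint64_t r = genrand64_int64();
    /* Squash each 2-bit pair into one flag on the even bits: bit 2i is
     * set iff pair i is non-zero (probability 3/4 per pair). */
    uint64_t pairs = (r | (r >> 1)) & 0x5555555555555555ULL;
    if (pairs == 0) return ZSKIPLIST_MAXLEVEL; /* all 32 trials promoted */
    /* Each leading zero pair is one promotion (probability 1/4 each). */
    return 1 + (__builtin_clzll(pairs) >> 1);
}
```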

---------

Signed-off-by: chzhoo <czawyx@163.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
korjeek 0f1eab89e4 Adding unit tests for sha256 (#2632)
Adding comprehensive unit tests for the SHA-256 implementation.

These tests verify:
1. Basic functionality with known test vectors (e.g., "abc")
2. Handling of large input data (4KB repeated 1000 times)
3. Edge case with repeated single-byte input (1 million 'a' characters)

The tests ensure compatibility with standard SHA-256 implementations and
will help detect regressions during future code changes.
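A sketch of the first known-answer test, assuming a vendored Brad
Conte-style API (SHA256_CTX, BYTE, sha256_init/update/final); the digest
of "abc" is the standard FIPS 180-2 test vector.

```c
#include <assert.h>
#include <string.h>
#include "sha256.h" /* assumed vendored header */

static void test_sha256_abc(void) {
    static const BYTE expected[32] = {
        0xba, 0x78, 0x16, 0xbf, 0x8f, 0x01, 0xcf, 0xea,
        0x41, 0x41, 0x40, 0xde, 0x5d, 0xae, 0x22, 0x23,
        0xb0, 0x03, 0x61, 0xa3, 0x96, 0x17, 0x7a, 0x9c,
        0xb4, 0x10, 0xff, 0x61, 0xf2, 0x00, 0x15, 0xad};
    BYTE digest[32];
    SHA256_CTX ctx;
    sha256_init(&ctx);
    sha256_update(&ctx, (const BYTE *)"abc", 3);
    sha256_final(&ctx, digest);
    assert(memcmp(digest, expected, sizeof(expected)) == 0);
}
```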

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-10-08 13:19:23 -07:00
Ricardo Dias ba9a1049f8 Valkey release 9.0.0-rc2
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
2025-09-23 12:30:50 +01:00
Sarthak Aggarwal 1c4ff1b638 Fix closing slot migration pipe read (#2630)
We should close the correct `slot_migration_pipe_read`. This should
resolve the valgrind errors.

Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
2025-09-23 12:30:50 +01:00
Ricardo Dias 7612f134c9 Fix test that checks `extended-redis-compatibility` config deprecation rules (#2629)
Following the decision in #2189, we need to fix this test because the
`extended-redis-compatibility` config option is not going to be
deprecated in 9.0.

This commit changes the test to postpone the deprecation of
`extended-redis-compatibility` until 10.0 release.

Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
2025-09-23 12:30:50 +01:00
Sarthak Aggarwal 69b397f718 Fix flaky cluster flush slot test (#2626)
The reason is that the replication stream may not have reached the
replica for execution yet. We could have added a wait_for_condition, but
we decided instead to replace those assert calls with
assert_replication_stream, verifying the contents of the replication
stream rather than the commandstats.
```
*** [err]: Flush slot command propagated to replica in tests/unit/cluster/cluster-flush-slot.tcl
```

Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
Signed-off-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
2025-09-23 12:30:50 +01:00
Roshan Khatri 8b16f18c91 Update automated benchmarking configs (#2625)
Reduce the requests and warmup time so the run finishes within 6 hours,
as the GitHub workflow times out after 6 hours.

---------

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
2025-09-23 12:30:50 +01:00
Binbin d759a7b4ea Separate RDB snapshotting from atomic slot migration (#2533)
When we added atomic slot migration in #1949, we reused a lot of the RDB
save code. It was an easier way to implement ASM at first, but it comes
with some side effects. For example, we use CHILD_TYPE_RDB to do the
fork and the rdb.c/rdb.h functions to save the snapshot; these mess up
the logs (we print logs saying we are doing RDB work) and the info
fields (we report rdb_bgsave_in_progress when we are actually doing slot
migration).

In addition, it makes the code difficult to maintain. The rdb_save
method uses a lot of rdb_* variables, but we are actually doing slot
migration. If we want to support one fork with multiple target nodes, we
need to rewrite this code for a better cleanup.

Note that the changes to rdb.c/rdb.h revert previous changes from when
we were reusing this code for slot migration. The slot migration
snapshot logic is similar to the previous diskless replication. We use a
pipe to transfer the snapshot data from the child process to the parent
process.

Interface changes:
- New slot_migration_fork_in_progress info field.
- New cow_size field in CLUSTER GETSLOTMIGRATIONS command.
- Also add slot migration fork to the cluster class trace latency.

Signed-off-by: Binbin <binloveplay1314@qq.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Co-authored-by: Jacob Murphy <jkmurphy@google.com>
2025-09-23 12:30:50 +01:00
uriyage 9b8ac85a48 Fix memory leak in deferred reply buffer (#2615)
Set a free method for the deferred_reply list to properly clean up
ClientReplyValue objects when the list is destroyed.
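The fix follows the usual adlist pattern (listCreate and
listSetFreeMethod are the existing list API; the value type and
destructor here are illustrative): once a free method is registered,
destroying the list also frees every stored value.

```c
#include "adlist.h"

typedef struct ClientReplyValue ClientReplyValue;
extern void freeClientReplyValue(void *val); /* assumed destructor */

list *createDeferredReplyList(void) {
    list *l = listCreate();
    /* listRelease()/listEmpty() will now call the destructor per node,
     * instead of leaking the ClientReplyValue objects. */
    listSetFreeMethod(l, freeClientReplyValue);
    return l;
}
```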

Signed-off-by: Uri Yagelnik <uriy@amazon.com>
2025-09-23 12:30:50 +01:00
Roshan Khatri b4e71024c5 Adds io-threads configs to PR-perf tests (#2598)
- Adds io-thread enabled perf-tests for PRs
- Changes the server and benchmark client CPU ranges so they are on
separate NUMA nodes of the metal machine.
- Also kills any servers that are still active on the metal machine if
anything fails.
- Adds a benchmark workflow to benchmark versions and publish the
results on a provided issue ID:
<img width="340" height="449" alt="Screenshot 2025-09-11 at 12 14 28 PM"
src="https://github.com/user-attachments/assets/04f6a781-e163-4d6b-9b70-deedad15c9ef"
/>

- Comments on the issue with the full comparison like this:
 
<img width="936" height="1152" alt="Screenshot 2025-09-11 at 12 15
35 PM"
src="https://github.com/user-attachments/assets/e1584c8e-25dc-433f-a4d4-5b08d7548ddf"
/>

https://github.com/roshkhatri/valkey/pull/3#issuecomment-3282289440

---------

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
2025-09-23 12:30:50 +01:00
Sarthak Aggarwal ad4f09e79f Increase wait time condition for New Master down consecutively test (#2612)
With #2604 merged, the `Node #10 should eventually replicate node #5`
started passing successfully with valgrind, but I guess we are seeing a
new daily failure from a `New Master down consecutively` test that runs
shortly after.

Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
2025-09-23 12:30:50 +01:00
Sarthak Aggarwal 7471252a48 Fix accounting for dual channel RDB bytes in replication stats (#2602)
Resolves #2545 

Followed the steps to reproduce the issue, and was able to get non-zero
`total_net_repl_output_bytes`.

```
(base) ~/workspace/valkey git:[fix-bug-2545]
src/valkey-cli INFO | grep total_net_repl_output_bytes
total_net_repl_output_bytes:1788
```

---------

Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
2025-09-23 12:30:50 +01:00
Vitali a36c9904f6 Expand wait condition time for slave selection test (#2604)
## Summary
- extend replication wait time in `slave-selection` test

```
*** [err]: Node #10 should eventually replicate node #5 in tests/unit/cluster/slave-selection.tcl
#10 didn't became slave of #5
```

## Testing
- `./runtest --single unit/cluster/slave-selection`
- `./runtest --single unit/cluster/slave-selection --valgrind`

Signed-off-by: Vitali Arbuzov <Vitali.Arbuzov@proton.me>
Signed-off-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Harkrishn Patro <bunty.hari@gmail.com>
2025-09-23 12:30:50 +01:00
Jacob Murphy cab4fa5c7f Make modules opt-in to atomic slot migration and add server events (#2593)
As discussed in https://github.com/valkey-io/valkey/issues/2579

Notably, I am exposing this feature as "Atomic Slot Migration" to
modules. If we want to call it something else, we should consider that
now (e.g., "Replication-Based Slot Migration"?).

Also note: I am exposing both target and source node in the event. It is
not guaranteed that either target or source would be the node the event
fires on (e.g. replicas will fire the event after replica containment is
introduced). Even though it could be potentially inferred from CLUSTER
SLOTS, it should help modules parse it this way. Modules should be able
to infer whether it is occurring on primary/replica from `ctx` flags, so
not duplicating that here.

Closes #2579

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
2025-09-23 12:30:50 +01:00
Sarthak Aggarwal 99a14c8ff7 Evict client only when limit is breached (#2596)
I believe we should evict the clients when the client eviction limit is
breached instead of _at_ the breach. I came across this function in the
failed [daily
test](https://github.com/valkey-io/valkey/actions/runs/17521272806/job/49765359298#step:6:7770),
which could possibly be related.

Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
2025-09-23 12:30:50 +01:00
Ran Shidlansik 9709974efa Increase frequency of time check during fields active expiration (#2595)
When we introduced the new hash field expiration functionality,
we decided to combine the active expiration job for generic keys and
hash fields.
During that job we run a tight loop. In each loop iteration we scan over
a maximum of 20 keys (with the default expire effort) and try to
"expire" them.
For the hash field expiration job, "expiring" a hash key means expiring
a maximum of 80 fields (with the default expire effort).
The problem is that we might do much more work per iteration of the hash
field expiration job.
The current code is shared between the two jobs, and currently we only
perform the time check every 16 iterations.
As a result, the CPU usage of field active expiration can spike and
consume a much higher CPU% than the current 25% bound allows.

Example:

Before this PR

| Scenario | AVG CPU | Time to expire all fields |
|----------------------------------------------------|---------|---------------------------|
| Expiring 10M volatile fields from a single hash | 20.18% | 26 seconds |
| Expiring 10M volatile fields from 10K hash objects | 32.72% | 7 seconds |

After this PR

| Scenario | AVG CPU | Time to expire all fields |
|----------------------------------------------------|---------|---------------------------|
| Expiring 10M volatile fields from a single hash | 20.23% | 26 seconds |
| Expiring 10M volatile fields from 10K hash objects | 20.76% | 11 seconds |

*NOTE*
The change introduced here makes the field job check the time every
iteration. We offer a compile-time option to use an efficient time check
using TSC (x86) or VCR (ARM) on most modern CPUs, so the impact is
expected to be low. Still, in order to avoid degradation for existing
workloads, the code change was made so it will not impact the existing
generic keys active expiration job.

---------

Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2025-09-23 12:30:50 +01:00
Zhijun Liao b0b4fad130 valkey-cli: Add word-jump navigation (Alt/Option/Ctrl + ←/→) (#2583)
Interactive use of valkey-cli often involves working with long keys
(e.g. MY:INCREDIBLY:LONG:keythattakesalongtimetotype). In shells like
bash, zsh, or psql, users can quickly move the cursor word by word with
**Alt/Option+Left/Right**, **Ctrl+Left/Right** or **Alt/Option+b/f**.
This makes editing long commands much more efficient.

Until now, valkey-cli (via linenoise) only supported single-character
cursor moves, which is painful for frequent key editing.

This patch adds such support, with simple code changes in linenoise. It
now supports both the Meta (Alt/Option) style and CSI (control sequence
introducer) style:

| | Meta style | CSI style (Ctrl) | CSI style (Alt) |
| --------------- | ---------- | ---------------- | --------- |
| move word left  | ESC b      | ESC [1;5D        | ESC [1;3D |
| move word right | ESC f      | ESC [1;5C        | ESC [1;3C |

Notice that I handle these two styles differently since people have
different preferences on the definition of "what is a word".
Specifically, I define:
- "sub-word": just letters and digits. For example, "my:namespace:key"
has 3 sub-words. This is handled by the Meta style.
- "big-word": any character that is not a space. For example,
"my:namespace:key" is just one single big-word. This is handled by the
CSI style.
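A minimal sketch of the sub-word (Meta style) motion, assuming a
linenoise-like state with the line in `buf` and the cursor index in
`pos`; the real patch also handles the CSI big-word variant.

```c
#include <ctype.h>
#include <stddef.h>

/* ESC b: skip separators leftwards, then skip one run of letters and
 * digits (one "sub-word"), leaving the cursor at its start. */
static void moveWordLeft(const char *buf, size_t *pos) {
    while (*pos > 0 && !isalnum((unsigned char)buf[*pos - 1])) (*pos)--;
    while (*pos > 0 && isalnum((unsigned char)buf[*pos - 1])) (*pos)--;
}
```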


## How I verified

I'm using the macOS default terminal (`$TERM = xterm-256color`). I
customized the terminal keyboard settings to map Option+Left to `\033b`
and Ctrl+Left to `\033[1;5D` so that I can produce both the Meta style
and the CSI style. This code change should also work for Linux/BSD/other
terminal users.

Now the valkey-cli works like the following. `|` shows where the cursor
is currently at.

Press Alt + left (escape sequence `ESC b` ):

```
set cache:item itemid
                   |

set cache:item itemid
               |

set cache:item itemid
          |

set cache:item itemid
    |

set cache:item itemid
|
```

Press Ctrl + left (escape sequence `ESC [1;5D` ):

```
set cache:item itemid
                    |

set cache:item itemid
               |

set cache:item itemid
    |

set cache:item itemid
|
```

Press Alt + right (escape sequence `ESC f` ):

```
set cache:item itemid
|

set cache:item itemid
    |

set cache:item itemid
          |	

set cache:item itemid
               |

set cache:item itemid
                     |
```

Press Ctrl + right  (escape sequence `ESC [1;5C` ):

```
set cache:item itemid
|

set cache:item itemid
   |
    
set cache:item itemid
              |    

set cache:item itemid
                     |
```

---------

Signed-off-by: Zhijun <dszhijun@gmail.com>
2025-09-23 12:30:50 +01:00
Marvin Rösch a893f809d7 Add cluster-announce-client-(port|tls-port) configs (#2429)
New config options:

 * cluster-announce-client-port
 * cluster-announce-client-tls-port

If set, clients will always see the configured port for a node instead
of the internally announced port(s), the same way that
`cluster-announce-client-ipv4` and `cluster-announce-client-ipv6` work.
Cluster-internal communication uses the non-client variants of these
options.

The configuration is propagated throughout the cluster using new ping
extensions.

Closes #2377

---------

Signed-off-by: Marvin Rösch <marvinroesch99@gmail.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
2025-09-23 12:30:50 +01:00