After introducing dual channel replication in #60, we decided in #915
not to add a new configuration item to limit the replica's local replication
buffer; instead, the "client-output-buffer-limit replica hard" limit is used.
We need to document this behavior and mention that once the limit is reached,
all further data accumulates on the primary side.
Signed-off-by: Binbin <binloveplay1314@qq.com>
In dual channel replication, when the rdb channel client finishes
the RDB transfer, it enters the REPLICA_STATE_RDB_TRANSMITTED
state. During this time, there is a brief window in which we are
not able to see the connection in INFO REPLICATION.
In the worst case, we might not see the connection for
DEFAULT_WAIT_BEFORE_RDB_CLIENT_FREE seconds. There is no
harm in listing this state; showing connected_slaves without showing
the corresponding connection is confusing when troubleshooting.
Note that this also affects the `valkey-cli --rdb` and `--functions-rdb`
options. While the client is in the `rdb_transmitted` state, before it
is released, it now shows up in the info output (see the example later).
Before, the replica info is not shown:
```
role:master
connected_slaves:1
```
After, for dual channel replication:
```
role:master
connected_slaves:1
slave0:ip=xxx,port=xxx,state=rdb_transmitted,offset=0,lag=0,type=rdb-channel
```
After, for valkey-cli --rdb-only and --functions-rdb:
```
role:master
connected_slaves:1
slave0:ip=xxx,port=xxx,state=rdb_transmitted,offset=0,lag=0,type=replica
```
Signed-off-by: Binbin <binloveplay1314@qq.com>
This commit adds script function flags to the module API, which allows
function scripts to specify the function flags programmatically.
When the scripting engine compiles the script, it can extract the flags
from the code and set them on the compiled function objects.
---------
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
The original test code only checks:
1. wait_for_cluster_size 4, which calls cluster_size_consistent for every node.
Inside that function, for each node, cluster_size_consistent queries cluster_known_nodes,
which is calculated as (unsigned long long)dictSize(server.cluster->nodes). However, when
a new node is added to the cluster, it is first created in the HANDSHAKE state, and
clusterAddNode adds it to the nodes hash table. Therefore, it is possible for the new
node to still be in HANDSHAKE status (processed asynchronously) even though it appears
that all nodes “know” there are 4 nodes in the cluster.
2. cluster_state for every node, but when a new node is added, server.cluster->state remains FAIL.
Some handshake processes may not have completed yet, which likely causes the flakiness.
To address this, we added a `--cluster check` to ensure that the config state is consistent.
Fixes #2693.
Signed-off-by: Hanxi Zhang <hanxizh@amazon.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
Fix a small omission in the "Hash field TTL and active expiry propagates
correctly through chain replication" test in `hashexpire.tcl`.
The test did not wait for the initial sync of the chained replica, which made it flaky.
Signed-off-by: Arad Zilberstein <aradz@amazon.com>
Addresses: https://github.com/valkey-io/valkey/issues/2588
## Overview
Previously we called `emptyData()` during a full sync before validating
that the RDB version is compatible.
This change adds an RDB flag that allows us to flush the database from
within `rdbLoadRioWithLoadingCtx`. This provides the option to only
flush the data if the RDB has a valid version and signature. In the case
of an invalid version or signature, we don't call `emptyData`,
so if a full sync fails for that reason the replica can still serve stale
data instead of clients experiencing cache misses.
## Changes
- Added a new flag `RDBFLAGS_EMPTY_DATA` that signals to flush the
database after rdb validation
- Added logic to call `emptyData` in `rdbLoadRioWithLoadingCtx` in
`rdb.c`
- Added logic to not clear data if the RDB validation fails in
`replication.c` using new return type `RDB_INCOMPATIBLE`
- Modified the signature of `rdbLoadRioWithLoadingCtx` to return RDB
success codes and updated all calling sites.
## Testing
Added a tcl test that uses the debug command `reload nosave` to load
from an RDB that has a future version number. This triggers the same
code path that full syncs use, and verifies that we don't flush
the data until after the validation is complete.
A test already exists that checks that the data is flushed:
https://github.com/valkey-io/valkey/blob/unstable/tests/integration/replication.tcl#L1504
---------
Signed-off-by: Venkat Pamulapati <pamuvenk@amazon.com>
Signed-off-by: Venkat Pamulapati <33398322+ChiliPaneer@users.noreply.github.com>
Co-authored-by: Venkat Pamulapati <pamuvenk@amazon.com>
Co-authored-by: Harkrishn Patro <bunty.hari@gmail.com>
Test SCAN consistency by alternating SCAN
calls between the primary and the replica.
We cannot rely on the exact order of the elements or on the returned
cursor number.
---------
Signed-off-by: yzc-yzc <96833212+yzc-yzc@users.noreply.github.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
By default, when the number of elements in a zset exceeds 128, the
underlying data structure adopts a skiplist. We can reduce memory usage
by embedding elements into the skiplist nodes. Change the `zskiplistNode`
memory layout as follows:
```
Before
+-------------+
+-----> | element-sds |
| +-------------+
|
+------------------+-------+------------------+---------+-----+---------+
| element--pointer | score | backward-pointer | level-0 | ... | level-N |
+------------------+-------+------------------+---------+-----+---------+
After
+-------+------------------+---------+-----+---------+-------------+
+ score | backward-pointer | level-0 | ... | level-N | element-sds |
+-------+------------------+---------+-----+---------+-------------+
```
Immediately before the embedded SDS representation, we include one byte
holding the size of the SDS header, i.e. the offset into the SDS
representation where the actual string starts.
The memory saving is therefore one pointer minus one byte = 7 bytes per
element, regardless of other factors such as element size or number of
elements.
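To make the embedding concrete, here is a minimal, self-contained sketch with toy types (this is not the actual `zskiplistNode` definition, and the header-size byte is omitted): the element bytes live in the same allocation, right after the level array, so the per-element pointer disappears.
```
/* Toy illustration of embedding the element after the level array.
 * Not the real zskiplistNode; it only shows the single-allocation idea. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct toy_level {
    struct toy_node *forward;
    unsigned long span;
};

struct toy_node {
    double score;
    struct toy_node *backward;
    struct toy_level level[]; /* level[0..levels-1], then the element bytes */
};

/* One malloc covers the node, its levels and the embedded element. */
static struct toy_node *toy_node_create(int levels, double score, const char *elem) {
    size_t elem_len = strlen(elem) + 1;
    struct toy_node *n = malloc(sizeof(*n) + levels * sizeof(struct toy_level) + elem_len);
    n->score = score;
    n->backward = NULL;
    memcpy((char *)&n->level[levels], elem, elem_len); /* element lives after level[] */
    return n;
}

static const char *toy_node_elem(const struct toy_node *n, int levels) {
    return (const char *)&n->level[levels];
}

int main(void) {
    struct toy_node *n = toy_node_create(2, 1.5, "member:42");
    printf("score=%.1f elem=%s\n", n->score, toy_node_elem(n, 2));
    free(n);
    return 0;
}
```
In the real layout the embedded element is an SDS string, and the extra byte described above records the SDS header size so the start of the actual string can be located.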
### Benchmark step
I generated the test data using the following Lua script and CLI command,
and checked memory usage with the `info` command.
**lua script**
```
local start_idx = tonumber(ARGV[1])
local end_idx = tonumber(ARGV[2])
local elem_count = tonumber(ARGV[3])
for i = start_idx, end_idx do
local key = "zset:" .. string.format("%012d", i)
local members = {}
for j = 0, elem_count - 1 do
table.insert(members, j)
table.insert(members, "member:" .. j)
end
redis.call("ZADD", key, unpack(members))
end
return "OK: Created " .. (end_idx - start_idx + 1) .. " zsets"
```
**valkey-cli command**
`valkey-cli EVAL "$(cat create_zsets.lua)" 0 0 100000 ${ZSET_ELEMENT_NUM}`
### Benchmark result
| number of elements in a zset | memory usage before optimization | memory usage after optimization | change |
|-------|-------|-------|-------|
| 129 | 1047MB | 943MB | -9.9% |
| 256 | 2010MB | 1803MB | -10.3% |
| 512 | 3904MB | 3483MB | -10.8% |
---------
Signed-off-by: chzhoo <czawyx@163.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
We need to verify that the total duration was at least 2 seconds; the
elapsed time can be too variable to check an upper bound.
Fixes https://github.com/valkey-io/valkey/issues/2843
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
After #2829, valgrind reported a test failure; it seems the time is
not enough to trigger the COB limit under valgrind.
Signed-off-by: Binbin <binloveplay1314@qq.com>
PSYNC_FULLRESYNC_DUAL_CHANNEL is also a full sync, so, as the comment says,
we need to allow it. We have not yet identified the exact edge case
that leads to this line, but during a failover there should be no difference
between the sync strategies.
Signed-off-by: Binbin <binloveplay1314@qq.com>
Related test failures:
https://github.com/valkey-io/valkey/actions/runs/19282092345/job/55135193394
https://github.com/valkey-io/valkey/actions/runs/19200556305/job/54887767594
> *** [err]: scan family consistency with configured hash seed in
tests/integration/scan-family-consistency.tcl
> Expected '5 {k:1 k:25 z k:11 k:18 k:27 k:45 k:7 k:12 k:19 k:29 k:40
k:41 k:43}' to be equal to '5 {k:1 k:25 k:11 z k:18 k:27 k:45 k:7 k:12
k:19 k:29 k:40 k:41 k:43}' (context: type eval line 26 cmd {assert_equal
$primary_cursor_next $replica_cursor_next} proc ::start_server)
The reason is that the RDB part of the primary-replica synchronization
affects the resize policy of the hashtable.
See
b835463a73/src/server.c (L807-L818)
Signed-off-by: yzc-yzc <96833212+yzc-yzc@users.noreply.github.com>
Consider this scenario:
1. Replica starts loading the RDB using the rdb connection
2. Replica finishes loading the RDB before the replica main connection has
initiated the PSYNC request
3. Replica stops replicating after receiving replicaof no one
4. Primary can't know that the replica main connection will never ask for
PSYNC, so it keeps the reference to the replica's replication buffer block
5. Primary has a shutdown-timeout configured and must wait for the rdb
connection to close before it can shut down.
The current 60-second wait time (DEFAULT_WAIT_BEFORE_RDB_CLIENT_FREE) is excessive
and leads to prolonged resource retention in edge cases. Reducing this timeout to
5 seconds would provide adequate time for legitimate PSYNC requests while mitigating
the issue described above.
Signed-off-by: Binbin <binloveplay1314@qq.com>
This commit fixes the cluster slot stats for scripts executed by
scripting engines when the scripts access cross-slot keys.
This was not a bug in the Lua scripting engine, but `VM_Call` was missing a
call to invalidate the script caller client's slot to prevent the
accumulation of stats.
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
It's handy to be able to automatically do a warmup and/or test by
duration rather than request count. 🙂
I changed the real-time output a bit - not sure if that's wanted or not.
(Like, would it break people's weird scripts? It'll break my weird
scripts, but I know the price of writing weird fragile scripts.)
```
Prepended "Warming up " when in warmup phase:
Warming up SET: rps=69211.2 (overall: 69747.5) avg_msec=0.425 (overall: 0.393) 3.8 seconds
^^^^^^^^^^
Appended running request counter when based on -n:
SET: rps=70892.0 (overall: 69878.1) avg_msec=0.385 (overall: 0.398) 612482 requests
^^^^^^^^^^^^^^^
Appended running second counter when in warmup or based on --duration:
SET: rps=61508.0 (overall: 61764.2) avg_msec=0.430 (overall: 0.426) 4.8 seconds
^^^^^^^^^^^
```
To be clear, the report at the end remains unchanged.
---------
Signed-off-by: Rain Valentine <rsg000@gmail.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
This PR fixes the FreeBSD daily job that has been failing consistently
for the last few days with the error "pkg: No packages available to install
matching 'lang/tclx' have been found in the repositories".
The package name is corrected from `lang/tclx` to `lang/tclX`. The
lowercase version worked previously but appears to have stopped working
in an update of FreeBSD's pkg tool to 2.4.x.
Example of failed job:
https://github.com/valkey-io/valkey/actions/runs/19282092345/job/55135193499
Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
As we can see, we expected to get hexpire, but we got hexpired instead,
which means the expiration time had already passed during execution.
```
*** [err]: HGETEX EXAT keyspace notifications for active expiry in tests/unit/hashexpire.tcl
Expected 'pmessage __keyevent@* __keyevent@9__:hexpired myhash' to match 'pmessage __keyevent@* __keyevent@*:hexpire myhash'
```
We should remove the EXAT and PXAT cases from these fixtures. We already
have dedicated tests that verify we get 'expired' when EX/PX are set to 0
or EXAT/PXAT are in the past.
Signed-off-by: Binbin <binloveplay1314@qq.com>
## Description
This change introduces the ability for modules to check ACL permissions
against a key prefix. The update adds a dedicated `prefixmatchlen` helper
and extends the core ACL selector logic to support a prefix‑matching
mode.
The new API `ValkeyModule_ACLCheckPrefixPermissions` is registered and
exposed to modules, and a corresponding implementation is added in
`module.c`. Existing internal callers that already perform prefix checks
(e.g., `VM_ACLCheckKeyPermissions`) are updated to use the new flag,
while all legacy paths remain unchanged.
The change also extends the `aclcheck` test module to exercise the
new prefix‑checking API, ensuring that read/write operations are
correctly allowed or denied based on the ACL configuration.
Key areas touched:
* ACL logic
* Module API
* Testing
## Motivation
The search module presently makes costly calls to verify index
permissions
(see https://github.com/valkey-io/valkey-search/blob/main/src/acl.cc#L295).
This PR introduces a more efficient approach for that.
---------
Signed-off-by: Eran Ifrah <eifrah@amazon.com>
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Ran Shidlansik <ranshid@amazon.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
After a network failure, nodes that come back to the cluster do not always
send and/or receive messages from other nodes in the shard. This fix avoids
using light-weight messages on links whose bidirectional setup is not ready yet.
When a light message arrives before any normal message, the cluster
link is freed, because on the just established connection link->node
is not assigned yet. It is assigned in getNodeFromLinkAndMsg right after
the `if (is_light)` condition.
So on a cluster with heavy pubsub load a long loop of disconnects is
possible:
1. node A establishes cluster link to node B
2. node A propagates PUBLISH to node B
3. node B frees cluster link because of link->node == null as it has not
received non-light messages yet
4. go to 1.
During this loop, subscribers of node B do not receive any messages
published to node A.
So here we want to make sure that PING was sent (and link->node was
initialized) on this connection before using lightweight messages.
---------
Signed-off-by: Daniil Kashapov <daniil.kashapov.ykt@gmail.com>
Co-authored-by: Harkrishn Patro <bunty.hari@gmail.com>
GEOADD was allocating and destroying a string object for "ZADD"
on every call. Create a shared string object once and reuse it instead.
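For illustration, here is a minimal sketch of that pattern with toy types (not Valkey's actual `robj`): the string object is created once at startup, the way `createSharedObjects()` does for other shared strings, and merely borrowed on each call.
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct toyobj {
    int refcount;
    char *ptr;
} toyobj;

static toyobj *toyCreateStringObject(const char *s) {
    toyobj *o = malloc(sizeof(*o));
    o->refcount = 1;
    o->ptr = strdup(s);
    return o;
}

/* Shared, long-lived objects created once at startup. */
static struct { toyobj *zadd; } shared;

static void createSharedObjectsLike(void) {
    shared.zadd = toyCreateStringObject("ZADD");
}

static void geoaddLikeCommand(void) {
    /* Before: toyobj *zadd = toyCreateStringObject("ZADD"); ... free it after.
     * After:  borrow the shared object; no per-call allocation. */
    printf("rewriting GEOADD as %s ...\n", shared.zadd->ptr);
}

int main(void) {
    createSharedObjectsLike();
    geoaddLikeCommand();
    geoaddLikeCommand();
    return 0;
}
```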
Signed-off-by: Jim Brunner <brunnerj@amazon.com>
This increases the number of times we check the logs from 20 to 40.
I found that every `wait-for` check takes about 1.5 to 1.75 milliseconds,
so when we were checking 2000 times with a 1 ms delay we were actually
spending (2000 * 1) + (2000 * 1.75) = 5500 ms waiting.
This can be seen below: the extra checks took about 35 ms more in total,
which works out to around 1.75 ms per check:
```
Execution time: 2034 ms (failed)
[err]: 20 100 - Test dual-channel: primary tracking replica backlog refcount - start with empty backlog in tests/integration/dual-channel-replication-flaky.tcl
```
That is why increasing it to 40 100 will check for approximately 4,070 ms,
which is still less than the original 5500 ms but should pass every single
time here:
https://github.com/roshkhatri/valkey/actions/runs/19279424967/job/55126976882
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
The AOF preamble mechanism replaces the traditional AOF base file with
an RDB snapshot during rewrite operations, which reduces I/O overhead
and improves loading performance.
However, when valkey loads the RDB-formatted preamble from the base AOF
file, it does not process the replication ID (replid) information within
the RDB AUX fields. This omission has two limitations:
* On a primary, it prevents the primary from accepting PSYNC continue
requests after restarting with a preamble-enabled AOF file.
* On a replica, it prevents the replica from successfully performing
partial sync requests (avoiding full sync) after restarting with a
preamble-enabled AOF file.
To resolve this, this commit aligns the AOF preamble handling with the
logic used for standalone RDB files, by storing the replication ID and
replication offset in the AOF preamble and restoring them when loading
the AOF file.
Resolves #2677
---------
Signed-off-by: arthur.lee <liziang.arthur@bytedance.com>
Signed-off-by: Arthur Lee <arthurkiller@users.noreply.github.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
This commit extends the Module API to expose the `SIMPLE_STRING` and
`ARRAY_NULL` reply types to modules, by passing the new flag `X` to
the `ValkeyModule_Call` function.
By only exposing the new reply types behind a flag we keep
backward compatibility with existing module implementations and
allow new modules to work with these reply types, which are
required for scripts to correctly process the reply types of commands
called inside scripts.
Before this change, commands like `PING` or `SET`, which return `"OK"`
as a simple string reply, would be returned as string replies to
scripts.
To allow the support of the Lua engine as an external module, we need to
distinguish between simple string and string replies to keep backward
compatibility.
---------
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
The new module context flag `VALKEYMODULE_CTX_SCRIPT_EXECUTION` denotes
that the module API function is being called in the context of a
scripting engine execution.
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
clientReplyBlock stores the size of the actual allocation in its size
field (minus the header size). This can be used for more efficient
deallocation with zfree_with_size.
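A compile-only sketch of the idea, assuming Valkey's `zmalloc` API (`zfree_with_size(ptr, size)`) and a `clientReplyBlock`-like layout; this is illustrative, not the actual patch, and it only links inside the Valkey tree.
```
#include <stddef.h>

/* Assumed to come from Valkey's zmalloc.h when built in-tree. */
void *zmalloc(size_t size);
void zfree_with_size(void *ptr, size_t size);

/* Same shape as clientReplyBlock: `size` is the usable bytes in buf[],
 * i.e. the allocation size minus the header. */
typedef struct replyBlockLike {
    size_t size;
    size_t used;
    char buf[];
} replyBlockLike;

void freeReplyBlockLike(replyBlockLike *b) {
    /* A plain zfree() would have to look the allocation size up again;
     * since we already know it, pass it to the allocator directly. */
    zfree_with_size(b, b->size + sizeof(replyBlockLike));
}
```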
Signed-off-by: Vadym Khoptynets <vadymkh@amazon.com>
Resolves: https://github.com/valkey-io/valkey/issues/2695
Increase the wait time for the periodic log check of RDB load time. Also,
increase the delay between log checks.
---------
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Signed-off-by: Roshan Khatri <117414976+roshkhatri@users.noreply.github.com>
Co-authored-by: Harkrishn Patro <bunty.hari@gmail.com>
Related test failures:
*** [err]: Replica importing key containment (slot 0 from node 0 to 2) - DBSIZE command excludes importing keys in tests/unit/cluster/cluster-migrateslots.tcl
Expected '1' to match '0' (context: type eval line 2 cmd {assert_match "0" [R $node_idx DBSIZE]} proc ::test)
The reason is that we don't wait for the primary-replica synchronization
to complete before starting the next testcase.
---------
Signed-off-by: yzc-yzc <96833212+yzc-yzc@users.noreply.github.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
In this commit we add a new context for ACL log entries that is used
to log ACL failures that occur during script execution. To maintain
backward compatibility we keep the "lua" context for failures
that happen when running Lua scripts. For other scripting engines the
context description will be just "script".
---------
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
The `README.md` file is currently missing a section on building Valkey with
`fast_float`, which was introduced in Valkey 8.1 as an optional
dependency (#1260).
Signed-off-by: hieu2102 <hieund2102@gmail.com>
Introduce a new config `hash-seed` which can be set only at startup and
controls the hash seed for the server. This includes all hash tables.
This change makes it so that both primaries and replicas will return the
same results for SCAN/HSCAN/ZSCAN/SSCAN cursors. This is useful in order
to make sure SCAN behaves correctly after a failover.
Resolves #4
---------
Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
Signed-off-by: Sarthak Aggarwal <sarthakaggarwal97@gmail.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
In `entry.c`, the `entry` is a block of memory with variable contents.
The structure can be difficult to understand. A new header comment more
clearly documents the contents/layout of the `entry`.
Also, in `entry.h`, the `entry` was defined by `typedef void entry`.
This allows blind casting to and from the `entry` type and defeats
compiler type checking.
Even though the `entry` has a variable layout, we can define `entry`
as a legitimate type, which allows the compiler to perform type checking.
With `typedef struct _entry entry`, the `entry` is now understood to be
an incomplete structure type, and `entry *` a pointer to it. We can
pass such a pointer and the compiler can type-check it. (Of course we
can't dereference it, because we haven't actually defined the struct.)
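A self-contained illustration of this opaque-typedef pattern (the function names and fields here are placeholders, not the real `entry.h` API):
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* --- entry.h (interface): the struct tag is declared but never defined --- */
typedef struct _entry entry;           /* incomplete type: pointer-only */
entry *entryCreate(const char *data);  /* placeholder API for illustration */
const char *entryData(const entry *e);
void entryFree(entry *e);

/* --- entry.c (implementation): only this file knows the actual layout --- */
struct _entry {
    size_t len;
    char data[];
};

entry *entryCreate(const char *data) {
    size_t len = strlen(data);
    entry *e = malloc(sizeof(*e) + len + 1);
    e->len = len;
    memcpy(e->data, data, len + 1);
    return e;
}

const char *entryData(const entry *e) { return e->data; }

void entryFree(entry *e) { free(e); }

/* --- caller: gets type checking on entry *, but cannot peek inside --- */
int main(void) {
    entry *e = entryCreate("abc");
    printf("%s\n", entryData(e));
    /* e->len;  <-- would not compile in a file that only includes entry.h */
    entryFree(e);
    return 0;
}
```
Compared to `typedef void entry`, an unrelated pointer no longer converts silently to or from `entry *`.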
Signed-off-by: Jim Brunner <brunnerj@amazon.com>
### Summary
Addresses https://github.com/valkey-io/valkey/issues/2619.
This PR extends the `HSETEX` command to support optional key-level `NX`
and `XX` flags, allowing operations conditional on the existence of the
hash key.
### Changes
- Updated `hsetex.json` and regenerated `commands.def`.
- Extended argument parsing for NX/XX.
- Added key-level `NX`/`XX` support in `HSETEX`.
- Added tests covering all four NX/XX scenarios.
---------
Signed-off-by: Hanxi Zhang <hanxizh@amazon.com>
Co-authored-by: Ran Shidlansik <ranshid@amazon.com>
Since Valkey Sentinel 9.0, sentinel tries to abort an ongoing failover
when changing the role of a monitored instance. Since the result of the
command is ignored, the "FAILOVER ABORT" command is sent irrespective of
the actual failover status.
However, when using the documented pre 9.0 ACLs for a dedicated sentinel
user, the FAILOVER command is not allowed and _all_ failover cases fail.
(Additionally, the necessary ACL adaptation was not communicated well.)
Address this by:
- Updating the documentation in "sentinel.conf" to reflect the need for
an adapted ACL
- Only aborting a failover when sentinel has detected an ongoing (probably
stuck) failover. This means that standard failover and manual failover
continue to work with unchanged pre 9.0 ACLs. Only the new "SENTINEL
FAILOVER COORDINATED" requires adapting the ACL on all Valkey nodes.
- Actually using a dedicated sentinel user and ACLs when testing standard
failover, manual failover, and manual coordinated failover.
Fixes #2779
Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
There’s an issue with the LTRIM command. When LTRIM does not actually
modify the key — for example, with `LTRIM key 0 -1` — the server.dirty
counter is not updated because both ltrim and rtrim values are 0. As a
result, the command is not propagated. However, `signalModifiedKey` is
still called regardless of whether server.dirty changes. This behavior
is unexpected and can cause a mismatch between the source and target
during propagation, since the LTRIM command is not sent.
Signed-off-by: Harry Lin <harrylhl@amazon.com>
Co-authored-by: Harry Lin <harrylhl@amazon.com>
Just setting the authenticated flag actually authenticates to the
default user in this case. The default user may be granted no permission
to use CLUSTER SYNCSLOTS.
Instead, we now authenticate as the NULL/internal user, which grants
access to all commands. This is the same as what we do for replication:
864de555ce/src/replication.c (L4717)
Add a test for this case as well.
Closes #2783
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Note: these changes are part of the effort to run Lua engine as an
external scripting engine module.
The new function `ValkeyModule_ReplyWithCustomErrorFormat` is being
added to the module API to allow scripting engines to return errors that
originated from running commands within the script code, without being
counted twice in the error stats counters.
More details on why this is needed by scripting engines can be found in
the message of the older commit aa856b39f2.
This PR also adds a new test to ensure the correctness of the newly
added function.
---------
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
Fixes an assert crash in _writeToClient():
serverAssert(c->io_last_written.data_len == 0 ||
c->io_last_written.buf == c->buf);
The issue occurs when clientsCronResizeOutputBuffer() grows or
reallocates c->buf while io_last_written still points to the old buffer
and data_len is non-zero. On the next write, both conditions in the
assertion become false.
Reset io_last_written when resizing the output buffer to prevent stale
pointers and keep state consistent.
fixes https://github.com/valkey-io/valkey/issues/2769
Signed-off-by: xbasel <103044017+xbasel@users.noreply.github.com>
Note: these changes are another step towards being able to run Lua
engine as an external scripting engine module.
In this commit we improve the `ValkeyModule_Call` API function code to
match the validations and behavior of the `scriptCall` function,
currently used by the Lua engine to run commands via the `server.call`
Valkey Lua API.
The changes made are backward compatible. The new behavior/validations
are only enabled when calling `ValkeyModule_Call` while running a script
using `EVAL` or `FCALL`.
To test these changes, we improved the `HELLO` dummy scripting engine
module to support calling commands, and compared the behavior with
calling the same command from a Lua script.
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
This commit adds a new `INFO` section called "Scripting Engines" that
shows the information of the current scripting engines available in the
server.
Here's an output example:
```
> info scriptingengines
# Scripting Engines
engines_count:2
engines_total_used_memory:68608
engines_total_memory_overhead:56
engine_0:name=LUA,module=built-in,abi_version=4,used_memory=68608,memory_overhead=16
engine_1:name=HELLO,module=helloengine,abi_version=4,used_memory=0,memory_overhead=40
```
---------
Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
Skip IPv6 tests automatically when IPv6 is not available.
This fixes the problem that tests fail when IPv6 is not available on the
system, which can worry users when they run `make test`.
IPv6 availability is detected by opening a dummy server socket and
trying to connect to it using a client socket over IPv6.
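For reference, here is a minimal, self-contained C sketch of the same probe (the test suite itself does this in Tcl): bind a listening socket on ::1, connect to it over IPv6, and report availability based on whether the connect succeeds.
```
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int ipv6_available(void) {
    int ok = 0;
    int srv = socket(AF_INET6, SOCK_STREAM, 0);
    if (srv < 0) return 0;

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr = in6addr_loopback;
    addr.sin6_port = 0; /* let the kernel pick a free port */

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) == 0 && listen(srv, 1) == 0) {
        socklen_t len = sizeof(addr);
        if (getsockname(srv, (struct sockaddr *)&addr, &len) == 0) {
            int cli = socket(AF_INET6, SOCK_STREAM, 0);
            if (cli >= 0) {
                ok = connect(cli, (struct sockaddr *)&addr, sizeof(addr)) == 0;
                close(cli);
            }
        }
    }
    close(srv);
    return ok;
}

int main(void) {
    printf("IPv6 %s\n", ipv6_available() ? "available" : "not available");
    return 0;
}
```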
Fixes #2643
---------
Signed-off-by: diego-ciciani01 <diego.ciciani@gmail.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Currently, monotonic clock initialization relies on the model name field
from /proc/cpuinfo to retrieve the clock speed. However, this is not
always present. When that field is missing, measure the clock tick rate
and use it instead.
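A minimal sketch of such a fallback measurement (illustrative only, not the exact upstream code; assumes x86 and GCC/Clang for `__rdtsc()`): sample the TSC across a short sleep and derive ticks per microsecond.
```
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <x86intrin.h> /* __rdtsc(), x86 only */

static uint64_t nanos_now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void) {
    uint64_t t0 = nanos_now();
    uint64_t c0 = __rdtsc();

    struct timespec delay = {0, 50 * 1000 * 1000}; /* 50 ms sample window */
    nanosleep(&delay, NULL);

    uint64_t t1 = nanos_now();
    uint64_t c1 = __rdtsc();

    /* ticks/us = ticks / elapsed_ns * 1000 */
    uint64_t ticks_per_us = (c1 - c0) * 1000 / (t1 - t0);
    printf("monotonic clock: X86 TSC @ %llu ticks/us (measured)\n",
           (unsigned long long)ticks_per_us);
    return 0;
}
```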
Before fix:
```
monotonic: x86 linux, unable to determine clock rate
```
After fix:
```
21695:M 25 Oct 2025 20:16:23.168 * monotonic clock: X86 TSC @ 2649 ticks/us
```
Fixes #2774
---------
Signed-off-by: Ken Nam <otherscase@gmail.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
Co-authored-by: Ran Shidlansik <ranshid@amazon.com>
- Moved `server-cpulist`, `bio-cpulist`, `aof-rewrite-cpulist`,
`bgsave-cpulist` configurations to ADVANCED CONFIG.
- Moved `ignore-warnings` configuration to ADVANCED CONFIG.
- Moved `availability-zone` configuration to GENERAL.
These configs were incorrectly placed at the end of the file in the
ACTIVE DEFRAGMENTATION section.
Fixes #2736
---------
Signed-off-by: ritoban23 <ankudutt101@gmail.com>
A super tiny change to optimize the function
`sentinelAskPrimaryStateToOtherSentinels` to return early when the
sentinel does not observe the primary as subjectively down.
Signed-off-by: Zhijun <dszhijun@gmail.com>