VerneMQ 2.1.1 in Kubernetes: container memory increases ~10% with every restart #2470

@deepakholalchikkanna-tomtom

Description

I have a VerneMQ 2.1.1 cluster (size: 3) deployed on a Kubernetes cluster. During long-duration performance benchmark testing I observed that container memory increases by 5-10% each time a VerneMQ node/pod restarts (restarts happen every 6 hours). Data is persisted in a PVC and is correctly reloaded after each restart.
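
For reference, this is how I observe the increase between restarts (a minimal sketch, assuming metrics-server is installed; the namespace and label selector below are placeholders for my setup):

kubectl top pod -n vernemq -l app=vernemq    # working-set memory per pod
kubectl get pods -n vernemq -l app=vernemq   # RESTARTS column shows the 6-hourly restarts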

Below are the simplified VerneMQ config details I am using. I have ~4 million retained messages and 10K concurrent clients connected.

# Retained message expiry cleanup interval (seconds) - 0 means disabled
# Disabled because I found that the cleanup process causes issues with message sync between VerneMQ nodes.
expire_retain_cache = 0  

persistent_client_expiration = 1h
max_online_messages = 1000
max_offline_messages = 100
max_message_rate = 500
listener.max_connections = 120000
listener.nr_of_acceptors = 200

metadata_plugin = vmq_swc
leveldb.maximum_memory.percent = 15  # container memory is 42 GiB
leveldb.compression.algorithm = lz4
leveldb.compression = on

allow_register_during_netsplit = on
allow_publish_during_netsplit = on
allow_subscribe_during_netsplit = on 
allow_unsubscribe_during_netsplit = on

queue_deliver_mode = balance
max_msgs_per_drain_step = 1000

outgoing_clustering_buffer_size
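
For context, leveldb.maximum_memory.percent = 15 with a 42 GiB container should cap the LevelDB memory budget at roughly 0.15 × 42 GiB ≈ 6.3 GiB per node.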

Is this container memory increase a known issue? Any suggestions on how to solve it?
Do you see any mistake in my configuration?

Thanks!
