CLUSTERDOWN Hash slot not served #1

Open · insekticid opened this issue Sep 4, 2017 · 6 comments

insekticid (Contributor) commented Sep 4, 2017

Hi Shuliyey,
you have done nice work on this package, but I am stuck on the error CLUSTERDOWN Hash slot not served when I try to write to the master:

root@redis-redis-sentinel-2:/data# redis-cli -h redis-sentinel -p 26379 info sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=10.42.234.56:6379,slaves=2,sentinels=4
telnet 10.42.234.56 6379
set lol 1;
-CLUSTERDOWN Hash slot not served

What am I doing wrong?

@insekticid (Contributor, Author)

Another bug: if the master goes down, the slaves won't reload to get the new master IP. If you do a manual reload, you can get stuck on:

4. 9. 2017 19:31:05 *** FATAL CONFIG FILE ERROR ***
4. 9. 2017 19:31:05 Reading the configuration file, at line 281
4. 9. 2017 19:31:05 >>> 'slaveof 10.42.12.91 6379'
4. 9. 2017 19:31:05 slaveof directive not allowed in cluster mode
4. 9. 2017 19:31:08 i am the leader

Now this container is the master, but the slave config remains.
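
For reference, the generated config ends up combining cluster mode with a replication directive, which Redis rejects at startup (a minimal, hypothetical reconstruction of the conflicting excerpt; the actual generated file may differ):

# redis.conf (excerpt)
cluster-enabled yes              # the node starts in cluster mode
...
slaveof 10.42.12.91 6379         # line 281: rejected, because replication in
                                 # cluster mode is configured via CLUSTER REPLICATE,
                                 # not the slaveof directive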

@insekticid (Contributor, Author)


########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, Redis Cluster nodes' address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following three options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# client port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usual.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380
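
A minimal sketch of how these options could be applied to a node behind Docker port mapping (the host IP and host ports here are hypothetical):

# map the client port and the cluster bus port (default bus port = client port + 10000),
# then announce the host-side address and ports to the rest of the cluster
docker run -d -p 7000:6379 -p 17000:16379 redis \
  redis-server --cluster-enabled yes \
               --cluster-announce-ip 10.1.1.5 \
               --cluster-announce-port 7000 \
               --cluster-announce-bus-port 17000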

Shuliyey (Owner) commented Nov 1, 2017

Hi @insekticid,

For the first issue, I believe it could be linked to the need to create the Redis cluster first:
https://stackoverflow.com/questions/36125071/redis-cluster-master-slave-not-able-to-add-key
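
A quick way to confirm this is to check whether any hash slots are assigned, and to create the cluster if they are not (a minimal sketch: the extra node addresses are hypothetical placeholders, and on Redis versions before 5 the cluster is created with the bundled redis-trib.rb script rather than redis-cli --cluster):

# CLUSTERDOWN Hash slot not served usually means no slots are assigned yet
redis-cli -h 10.42.234.56 -p 6379 cluster info    # look for cluster_state:fail, cluster_slots_assigned:0

# Redis 5+: create the cluster, assigning all 16384 slots across the nodes
redis-cli --cluster create 10.42.234.56:6379 <node2>:6379 <node3>:6379 --cluster-replicas 0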

For the second issue: @ahfeel raised this before, and I believe more changes need to be made to the sentinel container.

https://github.com/ahfeel/rancher-redis-cluster

I think @ahfeel did some great work on making this Rancher Redis cluster more tolerant when the master node goes down. I will try to give his repo a test when I get my Rancher server back (I lost my Rancher server, as my uni reclaimed our resources 😆).

ahfeel commented Nov 2, 2017

Hello @insekticid,

You can have a look at my images; it's actually a full rewrite to take into account all (hopefully, or at least most) of the down/edge cases. I have been running them in production for a while now and they are working fine so far.

Warning though: it's an automatic master/slave setup with failover etc., NOT A CLUSTER, because Redis Cluster doesn't support all the kinds of operations we needed. So I preferred to stick to an automatic and fully working master/slave solution.

We have a few high-traffic websites running on it and it's working like a breeze so far.

Cheers,
Jeremie

@insekticid (Contributor, Author)

@ahfeel yes, I have been using your version for some time ;) but I found a strange issue, dunno why.
When my Docker PHP container is on a different host than the Redis master node, it slows down 10x. If my Docker PHP container runs on the same host as the Redis master, it is super fast. All hosts are in the same region.

ahfeel commented Nov 2, 2017

Do you have experience running Redis on a different host than your PHP host? Network latency is actually a very tricky pain point. For example, on our website we fetch around 40 to 50 cache keys from Redis for a given web page. At the beginning we had the Redis server on the same host as the PHP server and things were running fast. When we distributed the infrastructure, we noticed a huge impact on our web requests. This is just the sad reality of network latency: 40-50 round trips × 1 ms each ≈ 50 ms.

The solution was to gather up the cache keys we needed and send a single MGET to fetch them all at once; everything went back to normal.
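
For illustration (the host name and key names here are hypothetical):

# 50 separate GETs cost ~50 network round trips
redis-cli -h redis-master GET cache:key1
redis-cli -h redis-master GET cache:key2
# ...

# a single MGET fetches all the keys in one round trip
redis-cli -h redis-master MGET cache:key1 cache:key2 cache:key3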

Let me know if this helps :)
