Replies: 7 comments 4 replies
- I've deleted it, but I'm not sure this method is good enough to repair the node :)
- It will automatically repair itself. If you are concerned, you can access all the data at once.
- The node wasn't repaired after one day; it looks like auto-repair doesn't work for me.
- Merging the branch https://github.com/rustfs/rustfs/tree/weisd/scan will fix this bug. If the updated PVC has no data, you can access the object and it will be repaired automatically.
- @loverustfs any news about it? Recently I upgraded rustfs from 79 to 80 and got a broken cluster.
- I've tried scaling the rustfs StatefulSet down to 0 replicas, then up to 1 to start the first node, then back up to the previous value of 4 for the other nodes, but saw no recovery activity afterwards (the PVC size is 4G, so recovery should be fast in theory). I think this is a serious blocker for using rustfs in a Kubernetes cluster. Current rustfs version: 1.0.0-alpha.80.
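For reference, the scale-down/scale-up sequence described above can be sketched with kubectl. The StatefulSet name and namespace (`rustfs`) are assumptions here; adjust them to match your Helm release:

```shell
# Assumed StatefulSet name "rustfs" in namespace "rustfs" -- adjust to your release.
# Scale everything down and wait for pods to terminate.
kubectl -n rustfs scale statefulset rustfs --replicas=0
kubectl -n rustfs rollout status statefulset rustfs

# Bring up the first node alone.
kubectl -n rustfs scale statefulset rustfs --replicas=1
kubectl -n rustfs rollout status statefulset rustfs

# Restore the original replica count.
kubectl -n rustfs scale statefulset rustfs --replicas=4
```

Because a StatefulSet scales pods up in ordinal order, this brings `rustfs-0` online first and then adds the remaining replicas one by one.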
- The issue was fixed after upgrading to version 1.0.0-alpha.81.
-
k8s version - 1.33
rustfs helm version - 0.0.76
rustfs setup - 4 replicas, distributed mode, one data PVC per pod, PVC size 1Gi
UI status

Let's recreate rustfs-0 pod
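Recreating the pod can be done by simply deleting it; the StatefulSet controller brings it back with the same identity and the same PVC. The namespace is an assumption:

```shell
# Deleting a StatefulSet pod is non-destructive: the controller recreates it
# with the same name (rustfs-0) and re-attaches the existing PVC.
kubectl -n rustfs delete pod rustfs-0
kubectl -n rustfs get pod rustfs-0 -w   # watch it come back
```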
UI status

rustfs-0 pod's log
I'm expecting the pod to be recreated successfully; the PVCs and the other pods are fine.
Why couldn't the rustfs-0 pod find its disk?
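To narrow down why the recreated pod can't see its disk, a few standard checks help. The namespace and the `/data` mount path are assumptions; use whatever your Helm values configure:

```shell
# Logs from the failed run, plus events and volume mounts.
kubectl -n rustfs logs rustfs-0 --previous
kubectl -n rustfs describe pod rustfs-0

# Confirm the PVC is still Bound to its PV.
kubectl -n rustfs get pvc

# Check whether the data directory is actually mounted and populated
# (assumed mount path /data).
kubectl -n rustfs exec rustfs-0 -- ls -la /data
```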