Self-hosted runners block scale-down of nodes in GKE #120949
Replies: 3 comments
-
I would probably post this over at https://github.com/actions/actions-runner-controller/issues
This comment was marked as off-topic.
-
🕒 Discussion Activity Reminder 🕒
This Discussion has been labeled as dormant by an automated system for having no activity in the last 60 days. Please consider one of the following actions:
1️⃣ Close as Out of Date: If the topic is no longer relevant, close the Discussion as outdated.
2️⃣ Provide More Information: Share additional details or context, or let the community know if you've found a solution on your own.
3️⃣ Mark a Reply as Answer: If your question has been answered by a reply, mark the most helpful reply as the solution.
Note: This dormant notification will only apply to Discussions with the dormant label.
Thank you for helping bring this Discussion to a resolution! 💬
-
We use the Actions Runner Controller on a GKE cluster. When nodes are underutilized, the cluster autoscaler wants to drain the node and move runners to another node.
Then we see the following error message:
Our workaround for now is to set `minRunners: 0`. This is not ideal, since we want to have a couple of runners online at all times. Another possible solution is to add the annotation `'cluster-autoscaler.kubernetes.io/safe-to-evict': 'true'` to the runner pods. This is also not ideal, because the autoscaler then evicts runner pods even while workflows are running. Any suggestions would be much appreciated, thank you!
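For reference, both workarounds described above can be expressed in the Helm values for an ARC runner scale set. This is a minimal sketch: the `minRunners` field is part of the gha-runner-scale-set chart, while the exact placement of the annotation under `template.metadata.annotations` is an assumption about how the chart propagates pod metadata.

```yaml
# values.yaml sketch for the ARC gha-runner-scale-set Helm chart.

# Workaround 1: keep no idle runners, so an underutilized node holds no
# runner pods and the cluster autoscaler can drain it.
minRunners: 0

# Workaround 2 (assumed field placement): mark runner pods as safe to
# evict. Caution: the autoscaler may then evict runner pods even while
# a workflow is running on them.
template:
  metadata:
    annotations:
      cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
```

Neither option is a complete fix on its own: the first trades away warm runners, the second trades away job reliability during scale-down.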