The root cause of the Kubernetes "System.IO.IOException: The configured user limit (#) on the number of inotify instances has been reached" error is the cluster's nodes running out of inotify resources at the OS level. These limits are controlled by the "fs.inotify.max_user_watches" and "fs.inotify.max_user_instances" kernel parameters (sysctls), not environment variables.

You can view the current limits on the Kubernetes cluster by opening a shell in any pod and running "sysctl fs.inotify". The output should look something like this:

root@pod-application-wew242w:/app#  sysctl fs.inotify
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 524288
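
If you prefer not to open an interactive shell, the same check can be run with kubectl exec (the pod name below is just an example):

kubectl exec pod-application-wew242w -- sysctl fs.inotify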

Now, to fix the issue, we are going to raise the limit. In this case, we will increase fs.inotify.max_user_instances to 524288, matching the default fs.inotify.max_user_watches value shown above. This will be done using the following DaemonSet, based on the solution here.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup
  namespace: kube-system
  labels:
    k8s-app: node-setup
spec:
  selector:
    matchLabels:
      name: node-setup
  template:
    metadata:
      labels:
        name: node-setup
    spec:
      containers:
      - name: node-setup
        image: ubuntu
        command: ["/bin/sh","-c"]
        args: ["/script/node-setup.sh; while true; do echo Sleeping && sleep 3600; done"]
        volumeMounts:
          - name: node-setup-script
            mountPath: /script
        securityContext:
          # privileged access is required to change host sysctls from the pod
          privileged: true
      volumes:
        - name: node-setup-script
          configMap:
            name: node-setup-script
            defaultMode: 0755
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-setup-script
  namespace: kube-system
data:
  node-setup.sh: |
    #!/bin/bash
    set -e

    # raise the inotify instance limit on this node to 524288

    # apply the new value to the running kernel (sysctl -w is not persistent
    # across reboots, which is why this runs as a DaemonSet)
    sysctl -w fs.inotify.max_user_instances=524288

    # check that the new value was applied
    cat /proc/sys/fs/inotify/max_user_instances

Applying this DaemonSet to your Kubernetes cluster will create a pod on each node and execute the commands from "node-setup.sh", increasing the "fs.inotify.max_user_instances" limit to 524288. You can use the same DaemonSet to raise "fs.inotify.max_user_watches" as well by changing "max_user_instances" to "max_user_watches" in the script, as shown below.
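
Assuming you saved the manifests above in a file called node-setup.yaml (the filename here is just an example), you would apply them with:

kubectl apply -f node-setup.yaml

And if you also want to raise "fs.inotify.max_user_watches", the sysctl section of "node-setup.sh" would become:

sysctl -w fs.inotify.max_user_instances=524288
sysctl -w fs.inotify.max_user_watches=524288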

After applying the DaemonSet, if you run "sysctl fs.inotify" in a pod again, you should see something like this:

root@pod-application-wew242w:/app#  sysctl fs.inotify
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
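
You can also confirm that a node-setup pod is running on every node; the label below comes from the pod template in the manifest above:

kubectl get pods -n kube-system -l name=node-setup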

That concludes this guide on solving the "System.IO.IOException: The configured user limit (#) on the number of inotify instances has been reached" error.

You can check out more of our blogs here.
