Cassandra error: Too many open files

Updated on March 11, 2021

The Cassandra process might crash with an error indicating that there are too many open files. Perform the following task to check the system's open file limits and correct the problem.

The root cause is that the Cassandra process has run into the system-imposed limit on the maximum number of open files. The following snippet shows an example of the error message:

Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException: 
mc_txn_flush_8bdc78f0-7d48-11e9-9b2e-0f78ea2b6c2b.log: Too many open files
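Before changing any limits, you can confirm that the process is actually approaching its limit by counting its open file descriptors. A minimal sketch, assuming a Linux system with /proc mounted; the `CassandraDaemon` process name used with `pgrep` is illustrative and may need adjusting for your installation:

```shell
# Find the Cassandra PID (process name is an assumption; adjust as needed)
pid=$(pgrep -f CassandraDaemon | head -n 1)

# Count the file descriptors the process currently holds open
ls /proc/"$pid"/fd | wc -l

# Compare against the per-process limit recorded by the kernel
grep 'Max open files' /proc/"$pid"/limits
```

If the open descriptor count is close to the "Max open files" value, the process is at risk of hitting the limit.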
  1. For Linux, enter the following commands in the Unix shell to check the limits on the number of open files:
    • To check the hard limit, enter ulimit -Hn

      Only the root user can raise this limit, but any process can lower it.

    • To check the soft limit, enter ulimit -Sn

      Any process can change this limit.

  2. Change the limit on the maximum number of open files, depending on your business needs.
    Do not raise the limit on open files above 100,000. For more information about changing open file limits, see the Apache Cassandra documentation.
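The checks and the session-level change in the steps above can be sketched as follows. Note that `ulimit` affects only the current shell session and its children; the `/etc/security/limits.conf` entry shown in the comment is a common convention for persistent limits on Linux, but the exact file and values depend on your distribution and business needs:

```shell
# Check the current limits for this shell session
ulimit -Hn    # hard limit: only root can raise it
ulimit -Sn    # soft limit: any process can change it, up to the hard limit

# Lower the soft limit for the current session only (value is illustrative)
ulimit -Sn 4096

# A persistent change is typically made in /etc/security/limits.conf,
# e.g. an entry for the user that runs Cassandra (value is illustrative):
#   cassandra  -  nofile  100000
```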