Memory Leak Processes

Pay special attention to the processes in this group, as they are the most probable cause of out-of-memory issues (sorted by server role):

Load Balancers #

Common recommendations #

Allocate more RAM to the corresponding node – the handled services may simply require more memory for normal operation.

Related processes #

  • varnishd – allocate more RAM to a node; the handled services may simply require more memory for normal operation

Application Servers #

The subsections below provide common recommendations on dealing with memory shortage issues for each supported programming language, as well as appropriate resolutions for the most demanding related processes:

Java #

Common recommendations #

Review the main memory management configurations of your Java Virtual Machine and, if required, adjust them according to your application needs, e.g.:

java -Xmx2048m -Xms256m

where

  • the -Xmx flag specifies the maximum heap size that can be allocated to the Java Virtual Machine (JVM)
  • the -Xms flag defines the initial heap size (memory allocation pool)

Refer to the official documentation for more information on the Java memory management system.

Tip: CirrusGrid also implements supplementary automated memory management for Java containers using the Garbage Collector. You can customize its settings to match your application specifics, avoiding OOM issues and achieving more efficient memory utilization.
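For instance, if you decide to tune the collector manually, standard HotSpot options can be passed alongside the heap flags. The line below is only a hedged illustration (G1 is assumed as the collector, and the pause target is an arbitrary example value), not a CirrusGrid-specific setting:

java -Xmx2048m -Xms256m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -verbose:gc

Here -XX:+UseG1GC selects the G1 collector, -XX:MaxGCPauseMillis sets a soft pause-time goal, and -verbose:gc makes the collector log its activity so you can check whether memory is actually being reclaimed.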

Also, keep in mind that the JVM needs more memory than just the heap – read through the Java Memory Structure reference for deeper insight.
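As a hedged illustration of that point (the values are arbitrary examples, not recommendations), the overall JVM footprint is roughly the heap plus metaspace, thread stacks, and direct buffers, each of which can be capped separately:

java -Xmx2048m -Xms256m -XX:MaxMetaspaceSize=256m -Xss512k -XX:MaxDirectMemorySize=128m

With such settings the process can legitimately grow well beyond the 2048 MB heap limit, so size the container RAM with this overhead in mind.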


Related processes #

  • java – check the -Xmx, -Xms, and -Xmn parameters of your Java Virtual Machine and configure them according to your application needs

PHP #

Common recommendations #

1. If the problem occurs with the httpd (httpd.itk) service, adjust the server memory management parameters as follows (an illustrative snippet follows this list):

  • check the average amount of RAM used by each httpd instance
  • remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
  • decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM

Note: If you notice persistent growth of memory usage per instance (a leak), decrease the MaxRequestsPerChild value (to around 1000-5000).

2. For the nginx process, connect to your container via SSH and check the size of the php-fpm instances (e.g. with the ps or top tools) – a sample check is shown after this list:

  • if all of them consume ~50-100 MB of RAM, disable CirrusGrid auto configuration and decrease the max_children parameter
  • if the instances' sizes vary greatly or exceed 200-300 MB, the process is probably leaking – inspect and optimize your code or, alternatively, disable CirrusGrid auto configuration and decrease the max_requests_per_child parameter
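To make the httpd steps more concrete, here is a hedged sketch. The numbers assume an example container with 8192 MB of RAM and roughly 60 MB per httpd instance, and the exact directives depend on your Apache version and MPM, so treat everything below purely as placeholders:

# average resident memory of the running httpd instances
ps -C httpd -o rss= | awk '{ sum += $1; n++ } END { if (n) printf "average RSS: %.0f MB over %d processes\n", sum/n/1024, n }'

# (8192 MB – 5%) / 60 MB ≈ 129 → rounded down for safety
# /etc/httpd/httpd.conf, after removing the CirrusGrid autoconfiguration mark:
ServerLimit           128
MaxClients            128
MaxRequestsPerChild   3000

Similarly, for the php-fpm check: list each worker's resident memory, then cap the pool. The directive names below are the stock php-fpm equivalents (pm.max_children, pm.max_requests) of the parameters mentioned above; the pool file path and the values are assumptions:

# per-worker resident memory, largest first
ps -C php-fpm -o rss=,cmd= --sort=-rss

# example pool settings (e.g. /etc/php-fpm.d/www.conf), after disabling CirrusGrid auto configuration
pm.max_children = 10
pm.max_requests = 1000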

Related processes #

  • httpd:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • lsyncd – allocate more RAM to a node; the handled services may simply require more memory for normal operation
  • httpd.itk:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • nginx – disable CirrusGrid auto configuration and adjust the appropriate parameters according to your application specifics
  • php – disable CirrusGrid auto configuration and adjust the appropriate parameters according to your application specifics
  • php-fpm – disable CirrusGrid auto configuration and adjust the appropriate parameters according to your application specifics
  • php-fpm7.0 – disable CirrusGrid auto configuration and adjust the appropriate parameters according to your application specifics
  • php7.0 – disable CirrusGrid auto configuration and adjust the appropriate parameters according to your application specifics

Ruby #

Common recommendations #

Memory leak issues are rather common for Ruby, so the first thing to do is to inspect and optimize your code. Alternatively, try increasing the RAM limit for the instance.
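As a rough, hedged way to confirm that a particular Ruby process is leaking rather than simply undersized, you can watch its resident memory over time (the interval and process name are arbitrary example values):

# print the resident memory, uptime, and command of ruby processes once a minute
while true; do ps -C ruby -o rss=,etime=,cmd=; sleep 60; done

Steadily growing RSS under a constant workload points to a leak worth profiling; a stable but high value suggests the instance genuinely needs more RAM.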


Related processes #

  • httpd:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • httpd.itk:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • bundle – allocate more RAM to a node; the handled services may simply require more memory for normal operation
  • gem – allocate more RAM to a node; the handled services may simply require more memory for normal operation
  • ruby – inspect and optimize your code, or add more RAM to the node

Python #

Common recommendations #

1. If the problem occurs with the httpd (httpd.itk) service, adjust the server memory management parameters as follows:

  • check the average amount of RAM used by each httpd instance
  • remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
  • decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM

Note: If you notice persistent growth of memory usage per instance (a leak), decrease the MaxRequestsPerChild value (to around 1000-5000).

2. Otherwise, allocate more RAM to the node – the main Python process may simply require more memory for normal operation.


Related processes #

  • httpd:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • lsyncd – allocate more RAM to a node; the handled application may simply require more memory for normal operation
  • httpd.itk:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • pip – may be caused by network issues (the download process gets stuck); otherwise, allocate more RAM to a node, as the handled application may simply require more memory for normal operation
  • python – allocate more RAM to a node; the handled application may simply require more memory for normal operation
  • python2.7 – allocate more RAM to a node; the handled application may simply require more memory for normal operation

Node.js #

Common recommendations #

Restart the container to restore the killed process(es). If the issue repeats, allocate more RAM to the node – the handled application may simply require more memory for normal operation.


Related processes #

  • lsyncd – allocate more RAM to a node; the handled application may simply require more memory for normal operation
  • grunt – allocate more RAM to a node; the handled application may simply require more memory for normal operation
  • node – allocate more RAM to a node; the handled application may simply require more memory for normal operation
  • npm – allocate more RAM to a node; the handled application may simply require more memory for normal operation
  • phantomjs – allocate more RAM to a node; the handled application may simply require more memory for normal operation

Database Servers #

The subsections below provide common recommendations on coping with OOM issues for each database stack, as well as resolutions for particular killed processes:

MySQL #

Common recommendations #

1. If using the InnoDB engine (the default since MySQL 5.5), check the buffer size with the following command:

SHOW ENGINE INNODB STATUS\G

If the buffer value is high (over 80% of the total container RAM), reduce the allowed pool size with the innodb_buffer_pool_size parameter in the /etc/my.cnf file (see the example after this list); otherwise, allocate more RAM to the server.

2. Also, check MySQL logs for warnings and recommendations.
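For illustration only – the value below is an assumption and should be derived from your container's actual RAM limit – the pool could be capped in /etc/my.cnf like this:

# /etc/my.cnf
[mysqld]
innodb_buffer_pool_size = 512M

Restart the mysqld service afterwards (on older MySQL versions this parameter is not dynamic) and verify the effective value with SHOW VARIABLES LIKE 'innodb_buffer_pool_size';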


Related processes #

  • httpd:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • mysqld:
    1. If using the InnoDB engine (the default for MySQL 5.5 and higher), check the buffer size with the SHOW ENGINE INNODB STATUS\G command. If the buffer value is high (over 80% of the total container RAM), reduce the allowed pool size with the innodb_buffer_pool_size parameter in the /etc/my.cnf file
    2. Check the MySQL logs for warnings and recommendations

MongoDB #

Common recommendations #

If the problem occurs with the httpd service, adjust the server memory management parameters as follows:

  • check the average amount of RAM used by each httpd instance
  • remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
  • decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM

Note: If you notice persistent growth of memory usage per instance (a leak), decrease the MaxRequestsPerChild value (to around 1000-5000).

Related processes #

  • httpd:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • mongod – allocate more RAM to a node; the handled services may simply require more memory for normal operation

PostgreSQL #

Common recommendations #

Allocate more RAM to the corresponding node – the handled services may simply require more memory for normal operation.


Related processes #

  • httpd:
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • postgres – allocate more RAM to a node; the handled services may simply require more memory for normal operation

Redis #

Common recommendations #

Allocate more RAM to the corresponding node – the handled services may simply require more memory for normal operation.


Related processes #

  • redis-server – allocate more RAM to a node; the handled services may simply require more memory for normal operation

Common Processes for Different-Type Stacks #

Common recommendations #

Processes in this section can be run and, subsequently, killed within different node types. Thus, the OOM resolutions for them vary and depend on the process itself – see the list below for the appropriate recommendations.

Related processes #

  • httpd (PHP, Ruby, Python, MySQL, MongoDB, PostgreSQL):
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • lsyncd (PHP, NodeJS, Python) – allocate more RAM to a node; the handled services may simply require more memory for normal operation
  • httpd.itk (PHP, Ruby, Python):
    1. Check the average amount of RAM used by each httpd instance
    2. Remove the CirrusGrid autoconfiguration mark within the /etc/httpd/httpd.conf file
    3. Decrease the ServerLimit and MaxClients values according to the formula: (Total_RAM – 5%) / Average_RAM
  • procmail (any stack) – restart the container in order to restore the process
  • vsftpd (any stack) – restart the container in order to restore the process
  • yum (any stack) – restart the container in order to restore the process
  • cc1 (3rd party) – allocate more RAM to a node; the handled services may simply require more memory for normal operation
  • clamd (3rd party) – allocate more RAM to a node; the handled services may simply require more memory for normal operation
  • ffmpeg (3rd party) – allocate more RAM to a node; the handled services may simply require more memory for normal operation
  • firefox (3rd party) – allocate more RAM to a node; the handled services may simply require more memory for normal operation
  • newrelic-daemon (3rd party) – restart the main stack service (nginx, tomcat, nodejs, etc.)
