Pay special attention to the processes in this group, as they are the most probable cause of your out-of-memory issues (sorted by server role):
Allocate more RAM to the corresponding node – the hosted services may simply require more memory for normal operation.
Click on the items in the list below to view common recommendations on dealing with memory shortage issues for the programming language in use, as well as resolutions for the most demanding related processes:
Review the main memory management settings of your Java Virtual Machine and, if required, adjust them according to your application needs, e.g.:
java -Xmx2048m -Xms256m
where -Xmx sets the maximum heap size (2048 MB in this example) and -Xms sets the initial heap size (256 MB).
Refer to the official documentation for more information on the Java memory management system.
Tip: CirrusGrid also implements supplementary automated memory management for Java containers via the Garbage Collector. You can customize its settings to match your application specifics, avoid OOM issues, and utilize memory more efficiently.
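For instance, a minimal sketch of GC-related startup flags could look as follows (the specific collector and pause target below are illustrative assumptions, not recommended values – tune them to your application):
java -Xmx2048m -XX:+UseG1GC -XX:MaxGCPauseMillis=200
Here G1 is selected as the collector and given a soft pause-time goal; any GC flags you set this way take precedence over the defaults applied to the container.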
Also, take into consideration that the JVM needs more memory than just the heap – read through the Java Memory Structure reference to get deeper insights.
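As a rough illustration (the limits below are assumptions only), non-heap areas such as the metaspace and per-thread stacks can be capped alongside the heap, so the total JVM footprint stays predictable:
java -Xmx2048m -Xms256m -XX:MaxMetaspaceSize=256m -Xss512k
When sizing the container, budget for the heap plus metaspace, thread stacks, and native/direct memory, not for the -Xmx value alone.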
1. If the problem occurs with the httpd (httpd.itk) service, adjust server memory management parameters as follows:
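A possible starting point (a sketch only – the directive values are assumptions to be adjusted to your container size) is limiting the prefork MPM in the Apache configuration, e.g. in /etc/httpd/conf/httpd.conf:
<IfModule prefork.c>
    # limit the number of simultaneously running worker processes
    StartServers          4
    MinSpareServers       4
    MaxSpareServers       8
    ServerLimit          50
    MaxClients           50
    # recycle workers periodically to release leaked memory
    MaxRequestsPerChild  3000
</IfModule>
The key idea is to keep MaxClients multiplied by the average httpd worker size below the container RAM limit; on Apache 2.4 the directive is named MaxRequestWorkers.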
2. For the nginx process, connect to your container via SSH and check the size of php-fpm instances (e.g. with ps or top tools):
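For example, the resident memory (RSS, reported in KB) of each php-fpm worker can be listed like this:
ps -o pid,rss,command -C php-fpm
Multiplying the typical per-worker RSS by the configured number of workers (pm.max_children in the php-fpm pool configuration) gives a rough estimate of whether the pool fits into the container RAM.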
Memory leak issues are rather common for Ruby, so, as a first step, consider inspecting and optimizing your code. Alternatively, try to increase the RAM limit for the instance.
Note: If you notice persistent growth of memory usage per instance (a leak), decrease the MaxRequestsPerChild value (to around 1000-5000).
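As a sketch (assuming an Apache-based Ruby application server; the exact configuration file location depends on the stack), the directive could be lowered like this:
# recycle each worker after it has served 2000 requests,
# so leaked memory is periodically released
MaxRequestsPerChild  2000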
2. Otherwise, allocate more RAM to the node – the main Python process may simply require more memory for normal operation.
Restart the container to restore the killed process(es). If the issue repeats, allocate more RAM to the node – the handled application may simply require more memory for normal operation.
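To confirm which process was terminated and why, the kernel log can be checked from inside the container, if it is accessible (a generic example – the exact message format varies by kernel version):
dmesg | grep -iE 'out of memory|killed process'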
Click on the required DB stack in the list below to reveal common recommendations on coping with OOM issues, as well as resolutions for particular killed processes:
1. If you use the InnoDB engine (the default since MySQL 5.5), check the buffer pool size with the following command:
SHOW ENGINE INNODB STATUS\G
If the buffer value is high (over 80% of the total container RAM), reduce the size of the allowed pool with the innodb_buffer_pool_size parameter in the /etc/my.cnf file; otherwise, allocate more RAM to the server.
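For example, the current value can be checked with the SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; query and then lowered in /etc/my.cnf (the 512M value below is purely illustrative – size it to your container):
[mysqld]
# cap the InnoDB buffer pool so it leaves room for other processes
innodb_buffer_pool_size = 512M
Restart the MySQL service afterwards for the new limit to take effect (older MySQL versions cannot resize the pool online).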
2. Also, check MySQL logs for warnings and recommendations.
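For example (the log location is an assumption – it varies per distribution and per the log_error setting in my.cnf):
grep -iE 'warning|error' /var/log/mysqld.log | tail -n 50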
If the problem occurs with the httpd service, adjust the server memory management parameters as follows:
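For instance, the current memory footprint of the httpd workers can be estimated first (a rough sketch – ps reports RSS in KB):
# average resident memory per httpd worker, in MB
ps -o rss= -C httpd | awk '{sum+=$1; n++} END {if (n) printf "workers: %d, avg RSS: %.1f MB\n", n, sum/n/1024}'
The MaxClients / MaxRequestWorkers limit can then be set so that the number of workers multiplied by the average RSS stays below the container RAM, as in the prefork example shown earlier.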
Processes in this section can run and, subsequently, be killed within different node types. Thus, the OOM resolutions for them vary and depend on the process itself – see the table below to find the appropriate recommendations.