SIGSEGV, also known as a segmentation violation or segmentation fault, is a signal used by Unix-based operating systems such as Linux. It indicates an attempt by a program to read or write outside its allocated memory, either due to a programming error, a software or hardware compatibility issue, or a malicious attack such as a buffer overflow.
SIGSEGV is indicated by the following codes:
- On Unix/Linux, SIGSEGV is operating system signal 11.
- In Docker containers, when a container terminates due to a SIGSEGV error, it throws exit code 139.
The default action for SIGSEGV is abnormal process termination. Additionally, the following may occur:
- A core file is typically generated to enable debugging.
- SIGSEGV signals can be logged in more detail for troubleshooting and security purposes.
- The operating system may perform platform-specific operations.
- The operating system may allow the process itself to handle the violation.
SIGSEGV faults are a common cause of container termination in Kubernetes. However, Kubernetes does not trigger SIGSEGV directly. To resolve the issue, you will need to debug the problematic container or the underlying host.
SIGSEGV (exit code 139) vs SIGABRT (exit code 134)
SIGSEGV and SIGABRT are two Unix signals that can cause a process to terminate.
SIGSEGV is triggered by the operating system, which detects that a process is carrying out a memory violation and may terminate it as a result.
SIGABRT (signal abort) is a signal triggered by a process itself. It abnormally terminates the process, closing and flushing any open streams. Once triggered, it cannot be blocked by the process (similar to SIGKILL, but different in that SIGKILL is triggered by the operating system).
A process can send itself the SIGABRT signal in two ways:
- By calling the abort() function in the libc library, which raises the SIGABRT signal. The process can then abort itself.
- By calling the assert() macro, which is used in debugging and aborts the program via SIGABRT if the assertion is false.
Exit codes 139 and 134 parallel SIGSEGV and SIGABRT in Docker containers:
- Docker exit code 139: means that the container received a SIGSEGV from the underlying operating system due to a memory violation.
- Docker exit code 134: means that the container triggered a SIGABRT and was abnormally terminated.
What causes SIGSEGV?
Modern general-purpose computer systems include memory management units (MMUs). An MMU allows memory protection in operating systems such as Linux, preventing different processes from accessing or modifying each other’s memory, except through a tightly controlled API. This simplifies troubleshooting and makes processes more resilient, as they are carefully isolated from each other.
A SIGSEGV signal, or segmentation error, occurs when a process tries to use a memory address that was not assigned to it by the MMU. This can happen for three common reasons:
- Coding error: Segmentation violations can occur if a process does not initialize properly or if it tries to access memory through a pointer to previously freed memory. This will result in a segmentation violation in a specific process or binary file under specific circumstances.
- Incompatibility between binaries and libraries: If a process executes a binary that is incompatible with a shared library, it can result in segmentation violations. For example, if a developer updates a library, changing its binary interface, but does not update the version number, an older binary may be loaded against the newer library version. This can cause the older binary to try to access inappropriate memory addresses.
- Hardware incompatibility or misconfiguration: If segmentation violations occur frequently across multiple libraries, without a repeating pattern, this may indicate a problem with the machine’s memory subsystems or incorrect low-level system configuration values.
On Unix-based operating systems, by default, a SIGSEGV signal results in the abnormal termination of the offending process.
Additional actions performed by the operating system
In addition to terminating the process, the operating system can generate core files to help with debugging, and can also perform other platform-dependent operations. For example, on Linux, you can use the grsecurity utility to log SIGSEGV signals in detail, in order to monitor for related security risks such as buffer overflow.
Allowing the process to handle SIGSEGV
On Linux and Windows, the operating system allows processes to handle their response to segmentation violations. For example, the program can collect a stack trace with information such as processor register values and memory addresses that were involved in the segmentation failure.
An example of this is segvcatch, a C++ library that supports multiple operating systems and can convert segmentation faults and other hardware-related exceptions into software exceptions. This makes it possible to handle "hard" errors like segmentation violations with simple try/catch code, allowing the software to identify a segmentation violation and correct it during program execution.
When troubleshooting segmentation errors
or testing programs to avoid these errors, it may be necessary to intentionally cause a segmentation violation to investigate its impact. Most operating systems allow SIGSEGV to be handled in such a way that the program can keep running even after the segmentation error occurs, to allow investigation and logging.
Troubleshooting common segmentation errors in Kubernetes
SIGSEGV errors are highly relevant to Kubernetes users and administrators. It is quite common for a container to fail due to a segmentation violation.
However, unlike other signals such as SIGTERM and SIGKILL, Kubernetes does not trigger a SIGSEGV signal directly. Rather, the host machine on a Kubernetes node can trigger SIGSEGV when a container is detected performing a memory violation. The container terminates, Kubernetes detects it, and you can try to restart it depending on the pod configuration.
When a Docker container ends with a SIGSEGV signal, it throws exit code 139. This may indicate:
- A problem with the application code in one of the libraries running in the container
- An incompatibility between different libraries running in the container
- An incompatibility between those libraries and the hardware on the host
- Problems with the host's memory management systems, or an incorrect memory configuration
To debug and resolve a SIGSEGV issue in a container, follow these steps:
- Gain root access to the host machine and review the logs for additional information about the failed container. A SIGSEGV error looks similar to the following in kubelet logs: [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1bdaed0]
- Try to identify at which layer of the container image the error occurs: it could be in your specific application code, or lower down in the container's base image.
- Run docker pull [image-id] to pull the image of the container terminated by SIGSEGV.
- Make sure you have debugging tools (for example, curl or vim) installed, or add them.
- Use kubectl to execute into the container and see if you can replicate the SIGSEGV error, to confirm which library is causing the problem.
- If you have identified the library or libraries causing the memory violation, try modifying the image to fix the offending library, or replace it with another library. Most often, upgrading a library to a newer version, or to a version that is compatible with the environment on the host, will resolve the problem.
- If you cannot identify a library that is consistently causing the error, the problem may be with the host. Check for problems with the host's memory configuration or memory hardware.
The above process can help you resolve simple SIGSEGV errors, but in many cases troubleshooting can become very complex and require nonlinear investigation involving multiple components. That’s exactly why we built Komodor to fix memory errors and other complex Kubernetes issues before they get out of control.
Troubleshooting Kubernetes container termination issues with Komodor
As a Kubernetes administrator or user, pods or containers that terminate unexpectedly can be a nuisance and can lead to serious production issues. Container termination can be the result of multiple problems in different components and can be difficult to diagnose. The troubleshooting process in Kubernetes is complex, and without the right tools, it can be stressful, inefficient, and time-consuming.
Some best practices can help minimize the chances of SIGSEGV or SIGABRT signals affecting your applications, but eventually something will go wrong, simply because it can.
This is why we created Komodor, a tool that helps development and operations teams stop wasting their precious time looking for needles in haystacks whenever things go wrong.
Acting as a single source of truth (SSOT) for all of your k8s troubleshooting needs, Komodor offers:
- Change intelligence: Every problem is the result of change. In a matter of seconds we can help you understand exactly who did what and when.
- In-depth visibility: A complete activity timeline, showing all code and configuration changes, deployments, alerts, code diffs, pod logs, and more. All within one pane of glass, with easy drill-down options.
- Information about service dependencies: An easy way to understand changes between services and visualize their ripple effects throughout the system.
- Seamless notifications: Direct integration with your existing communication channels (e.g. Slack) so you have all the information you need, when you need it.
If you’re interested in checking out Komodor, use this link to sign up for a free trial.