81 points by signa11 2 days ago | 22 comments
agilob 2 hours ago
An aircraft company discovered that it was cheaper to fly its planes with less fuel on board. The planes would be lighter and use less fuel and money was saved. On rare occasions however the amount of fuel was insufficient, and the plane would crash. This problem was solved by the engineers of the company by the development of a special OOF (out-of-fuel) mechanism. In emergency cases a passenger was selected and thrown out of the plane. (When necessary, the procedure was repeated.) A large body of theory was developed and many publications were devoted to the problem of properly selecting the victim to be ejected. Should the victim be chosen at random? Or should one choose the heaviest person? Or the oldest? Should passengers pay in order not to be ejected, so that the victim would be the poorest on board? And if for example the heaviest person was chosen, should there be a special exception in case that was the pilot? Should first class passengers be exempted? Now that the OOF mechanism existed, it would be activated every now and then, and eject passengers even when there was no fuel shortage. The engineers are still studying precisely how this malfunction is caused.

https://lwn.net/Articles/104185/

self_awareness 27 minutes ago
I'm wondering which overcommit strategy this example refers to.

Because if my bitcoin price checker built on Electron starts allocating all the memory on the machine, then some arbitrary process (e.g. systemd) can get a malloc error. But it's not systemd's fault the memory got eaten, so why is it being punished for the low-memory condition?

It's like choosing a random person to be ejected from the plane.

LordGrey 2 days ago
For anyone not familiar with the meaning of '2' in this context:

The Linux kernel supports the following overcommit handling modes

0 - Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.

1 - Always overcommit. Appropriate for some scientific applications. Classic example is code using sparse arrays and just relying on the virtual memory consisting almost entirely of zero pages.

2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap + a configurable amount (default is 50%) of physical RAM. Depending on the amount you use, in most situations this means a process will not be killed while accessing pages but will receive errors on memory allocation as appropriate. Useful for applications that want to guarantee their memory allocations will be available in the future without having to initialize every page.
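
A quick way to see where mode 2 would draw the line on a given box is to compare Committed_AS against CommitLimit in /proc/meminfo. A minimal sketch (standard /proc fields, error handling kept short):

  /* Print the accounting used for overcommit mode 2. Roughly:
     CommitLimit  = swap + RAM * overcommit_ratio / 100 (or vm.overcommit_kbytes),
     Committed_AS = address space currently committed across the system. */
  #include <stdio.h>
  #include <string.h>

  int main(void) {
      FILE *f = fopen("/proc/meminfo", "r");
      if (!f) { perror("fopen"); return 1; }
      char line[256];
      while (fgets(line, sizeof line, f)) {
          if (strncmp(line, "CommitLimit:", 12) == 0 ||
              strncmp(line, "Committed_AS:", 13) == 0)
              fputs(line, stdout);
      }
      fclose(f);
      return 0;
  }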

dbdr 12 hours ago
> exceed swap + a configurable amount (default is 50%) of physical RAM

Naive question: why is this default 50%, and more generally why is this not the entire RAM, what happens to the rest?

godelski 6 hours ago
There are a lot of options. If you want to go down the rabbit hole, try typing `sysctl -a | grep -E "^vm"` and that'll give you a lot of things to google ;)
vin10 11 hours ago
it's a (then-)safe default from the age when having 1GB of RAM and 2GB of swap was the norm: https://linux-kernel.vger.kernel.narkive.com/U64kKQbW/should...
dasil003 11 hours ago
Not sure if I understand your question but nothing "happens to the rest", overcommitting just means processes can allocate memory in excess of RAM + swap. The percentage is arbitrary, could be 50%, 100% or 1000%. Allocating additional memory is not a problem per se, it only becomes a problem when you try to actually write (and subsequently read) more than you have.
adastra22 1 hour ago
They’re talking about the never-overcommit setting.
crote 11 hours ago
Just a guess, but I reckon it doesn't account for things like kernel memory usage, such as caches and buffers. Assigning 100% of physical RAM to applications is probably going to have a Really Bad Outcome.
Wowfunhappy 5 hours ago
But the memory being used by the kernel has already been allocated by the kernel. So obviously that RAM isn't available.

I can understand leaving some amount free in case the kernel needs to allocate additional memory in the future, but anything near half seems like a lot!

sidewndr46 11 hours ago
Do any of the settings actually result in "malloc" or a similar function returning NULL?
LordGrey 11 hours ago
malloc() and friends may always return NULL. From the man page:

If successful, calloc(), malloc(), realloc(), reallocf(), valloc(), and aligned_alloc() functions return a pointer to allocated memory. If there is an error, they return a NULL pointer and set errno to ENOMEM.

In practice, I find a lot of code that does not check for NULL, which is rather distressing.
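
For reference, honoring the documented contract at the call site is cheap; a minimal sketch:

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void) {
      size_t n = (size_t)1 << 20;
      char *buf = malloc(n);
      if (buf == NULL) {
          /* Per the man page: NULL return, errno set to ENOMEM. */
          fprintf(stderr, "malloc(%zu) failed: %s\n", n, strerror(errno));
          return 1;
      }
      memset(buf, 0, n);   /* with overcommit, this touch is what actually commits pages */
      free(buf);
      return 0;
  }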

johncolanduoni 3 hours ago
No non-embedded libc will actually return NULL. Very, very little practical C code relies only on behavior guaranteed by the standard and will work with literally any compliant C compiler on any architecture, so I don't find this particularly concerning.

Usefully handling allocation errors is very hard to do well, since it infects literally every error handling path in your codebase. Any error handling that calls a function that might return an indirect allocation error needs to not allocate itself. Even if you have a codepath that speculatively allocates and can fallback, the process is likely so close to ruin that some other function that allocates will fail soon.

It’s almost universally more effective (not to mention easier) to keep track of your large/variable allocations proactively, and then maintain a buffer for little “normal” allocations that should have an approximate constant bound.
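
One common shape of that idea, sketched with hypothetical names (mem_budget, alloc_large, reserve): account the big, variable allocations against an explicit budget up front, and keep a small emergency reserve so the shutdown path never has to allocate.

  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical budget for large/variable allocations (request buffers, caches). */
  static size_t mem_budget = 64 * 1024 * 1024;
  /* Emergency reserve grabbed at startup so the error path never allocates. */
  static void *reserve;

  void *alloc_large(size_t n) {
      if (n > mem_budget) return NULL;   /* reject before even asking malloc */
      void *p = malloc(n);
      if (p) mem_budget -= n;
      return p;
  }

  void free_large(void *p, size_t n) {
      free(p);
      mem_budget += n;
  }

  int main(void) {
      reserve = malloc(256 * 1024);
      void *big = alloc_large((size_t)128 * 1024 * 1024);  /* over budget on purpose */
      if (!big) {
          free(reserve);   /* give the shutdown path some headroom */
          fprintf(stderr, "large allocation rejected; shutting down cleanly\n");
          return 1;
      }
      free_large(big, (size_t)128 * 1024 * 1024);
      free(reserve);
      return 0;
  }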

jclulow 2 hours ago
> No non-embedded libc will actually return NULL

This is just a Linux ecosystem thing. Other full size operating systems do memory accounting differently, and are able to correctly communicate when more memory is not available.

johncolanduoni 1 hour ago
There are functions on many C allocators that are explicitly for non-trivial allocation scenarios, but what major operating system malloc implementation returns NULL? MSVC’s docs reserve the right to return NULL, but the actual code is not capable of doing so (because it would be a security nightmare).
qhwudbebd 18 minutes ago
I hack on various C projects on a linux/musl box, and I'm pretty sure I've seen musl's malloc() return 0, although possibly the only cases where I've triggered that fall into the 'unreasonably huge' category, where a typo made my enormous request fail some sanity check before even trying to allocate.
sidewndr46 11 hours ago
It's been a while, but while I agree the man page says that, my limited understanding is that the typical libc on Linux won't really return NULL under any sane scenario, even when the memory can't be backed.
LordGrey 10 hours ago
I think you're right, but "typical" is the key word. Embedded systems, systems where overcommit is disabled, bumping into low ulimit -v settings, etc. can all trigger an immediate failure with malloc(). Those are edge cases, to be sure, but some of them could be applied to a typical Linux system and I, as a coder, won't be aware of it.

As an aside: to me, checking malloc() for NULL is easier than having to handle a failure at the first use of the returned pointer, which is effectively what overcommit leaves you with.

nextaccountic 6 hours ago
Even with overcommit enabled, malloc may fail if there is no contiguous address space available. Not a problem on 64-bit, but it may occasionally happen on 32-bit.
Bjartr 4 hours ago
But why would you want to violate the docs on something as fundamental as malloc? Why risk relying on implementation specific quirks in the first place?
im3w1l 1 hour ago
Because it's orders of magnitude easier not to handle it. It's really as simple as that.
themafia 4 hours ago
malloc() is an interface. There are many implementations.
kentonv 5 hours ago
When your system is out of memory, you do not want to return an error to the next process that allocates memory. That might be an important process, it might have nothing to do with the reason the system is out of memory, and it might not be able to gracefully handle allocation failure (realistically, most programs can't).

Instead, you want to kill the process that's hogging all the memory.

The OOM killer heuristic is not perfect, but it will generally avoid killing critical processes and is fairly good at identifying memory hogs.

And if you agree that using the OOM killer is better than returning failure to a random unlucky process, then there's no reason not to use overcommit.

Besides, overcommit is useful. Virtual-memory-based copy-on-write, allocate-on-write, sparse arrays, etc. are all useful and widely-used.

c0l0 12 hours ago
I realize this is mostly tangential to the article, but a word of warning for those who are about to mess with overcommit for the first time: In my experience, the extreme stance of "always do [thing] with overcommit" is just not defensible, because most (yes, also "server") software is just not written under the assumption that being able to deal with allocation failures in a meaningful way is a necessity. At best, there's a "malloc() or die"-like stanza in the source, and that's that.

You can and maybe even should disable overcommit this way when running postgres on the server (and only a minimum of what you would these days call sidecar processes (monitoring and backup agents, etc.) on the same host/kernel), but once you have a typical zoo of stuff using dynamic languages living there, you WILL blow someone's leg off.

bawolff 22 minutes ago
> At best, there's a "malloc() or die"-like stanza in the source, and that's that.

In fairness, I don't know what else general-purpose software is supposed to do here other than die. It's not like there is a graceful way to handle having insufficient memory to run the program.
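
In practice that's exactly what the usual "malloc() or die" wrapper does; a minimal sketch (xmalloc is a hypothetical name):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Log and exit: for most general-purpose programs this is the only
     graceful thing left to do when an allocation fails. */
  void *xmalloc(size_t n) {
      void *p = malloc(n);
      if (p == NULL && n != 0) {
          fprintf(stderr, "out of memory (%zu bytes): %s\n", n, strerror(errno));
          exit(EXIT_FAILURE);
      }
      return p;
  }

  int main(void) {
      char *buf = xmalloc(4096);
      buf[0] = 0;
      free(buf);
      return 0;
  }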

kg 12 hours ago
I run my development VM with overcommit disabled and the way stuff fails when it runs out of memory is really confusing and mysterious sometimes. It's useful for flushing out issues that would otherwise cause system degradation w/overcommit enabled, so I keep it that way, but yeah... doing it in production with a bunch of different applications running is probably asking for trouble.
Tuna-Fish 11 hours ago
The fundamental problem is that your machine is running software from a thousand different projects or libraries just to provide the basic system, and most of them do not handle allocation failure gracefully. If program A allocates too much memory and overcommit is off, that doesn't necessarily mean that A gets an allocation failure. It might also mean that code in library B in background process C gets the failure, and fails in a way that puts the system in a state that's not easily recoverable, and is possibly very different every time it happens.

For cleanly surfacing errors, overcommit=2 is a bad choice. For most servers, it's much better to leave overcommit on, but make the OOM killer always target your primary service/container, using oom-score-adj, and/or memory.oom.group to take out the whole cgroup. This way, you get to cleanly combine your OOM condition handling with the general failure case and can restart everything from a known foundation, instead of trying to soldier on while possibly lacking some piece of support infrastructure that is necessary but usually invisible.
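
A minimal sketch of the oom_score_adj half of that, assuming the service opts itself in (the /proc interface takes values from -1000 to 1000; 1000 makes the process the preferred victim, while lowering the value requires privilege):

  #include <stdio.h>

  int main(void) {
      /* Make this process the OOM killer's preferred target. */
      FILE *f = fopen("/proc/self/oom_score_adj", "w");
      if (!f) { perror("fopen"); return 1; }
      fputs("1000\n", f);
      fclose(f);
      /* ... run the actual service here ... */
      return 0;
  }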

MrDrMcCoy 5 hours ago
There's also cgroup resource controls to separately govern max memory and swap usage. Thanks to systemd and systemd-run, you can easily apply and adjust them on arbitrary processes. The manpages you want are systemd.resource-control and systemd.exec. I haven't found any other equivalent tools that expose these cgroup features to the extent that systemd does.
b112 1 hour ago
I really dislike systemd, and its monolithic mass of over-engineered, all-encompassing code. So I have to hang a comment here, showing just how easy this is to manage in a simple startup script, and how these features are always exposed anyway.

Taken from a SO post:

  # Create a cgroup
  mkdir /sys/fs/cgroup/memory/my_cgroup
  # Add the process to it
  echo $PID > /sys/fs/cgroup/memory/my_cgroup/cgroup.procs
  
  # Set the limit to 40MB
  echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/memory/my_cgroup/memory.limit_in_bytes
Linux is so beautiful. Unix is. Systemd is like a person with makeup plastered 1" thick all over their face. It detracts, obscures the natural beauty, and is just a lot of work for no reason.
ece 9 hours ago
This is a better explanation and fix than others I've seen. There will be differences between desktop and server uses, but misbehaving applications and libraries exist on both.
vin10 12 hours ago
> the way stuff fails when it runs out of memory is really confusing

Have you checked what your `vm.overcommit_ratio` is? If it's < 100%, then you will get OOM kills even if plenty of RAM is free, since the default is 50, i.e. 50% of RAM can be COMMITTED and no more.

curious what kind of failures you are alluding to.

kg 7 hours ago
The main scenario that caused me a lot of grief is temporary RAM usage spikes, like a single process run during a build that uses ~8GB of RAM or more for a mere few seconds and then exits. In some cases the OOM killer was reaping the wrong process, or the build was just failing cryptically, and if I examined stuff like top I wouldn't see any issue: plenty of free RAM. The tooling for examining this historical memory usage is pretty bad; my only option was to look at the OOM killer logs and hope that eventually the culprit would show up.

Thanks for the tip about vm.overcommit_ratio though, I think it's set to the default.

PunchyHamster 6 hours ago
You can get statistics off cgroups to get an idea of what it was (assuming it's a service and not something the user ran), but that requires probing often enough.
EdiX 11 hours ago
This is completely wrong. First, disabling overcommit is wasteful because of fork and because of the way thread stacks are allocated. Sorry, you don't get exact memory accounting with C; not even Windows will do exact accounting of thread stacks.

Secondly, memory is a global resource, so you don't get local failures when it's exhausted: whoever allocates first after memory has been exhausted will get an error; they might be the application responsible for the exhaustion or they might not be. They might crash on the error, or they might "handle it", keep going, and render the system completely unusable.

No, exact accounting is not a solution. Ulimits and configuring the OOM killer are solutions.

inkyoto 5 hours ago
Without going into a discussion about whether this is right or wrong, how is fork(2) wasteful?

fork(2) has been copy-on-write for decades and does not copy the entire process address space. Thread stacks are a non sequitur as well: the stack uses data pages, thread stacks are rather small in most scenarios, and hence the thread stacks are also subject to copy-on-write.

The only overhead that the use of fork(2) incurs is an extra copy of the process's memory descriptor pages, which is a drop in the ocean for modern systems with large amounts of RAM.

barchar 5 hours ago
Unless you reserve on fork you're still overcommitting, because after the fork a write to basically any page in either process will trigger memory commitment.

Thread stacks come up because reserving them completely ahead of time would incur large amounts of memory usage. Typically they start small and grow when you touch the guards. This is a form of overcommit. Even windows dynamically grows stacks like this

inkyoto 4 hours ago
Also,

> Thread stacks come up because reserving them completely ahead of time would incur large amounts of memory usage. Typically they start small and grow when you touch the guards. This is a form of overcommit.

Ahead-of-time memory reservation entails a page entry being allocated in the process's page tables («logical» allocation), and the page «sits» dormant until it is accessed and causes a memory access fault; that is the moment when the physical allocation takes place. So copying reserved but not-yet-accessed pages has zero effect on the physical memory consumption of the process.

What actually happens to the thread stacks depends on the actual number of active threads. In modern designs, threads are consumed from thread pools that implement some sort of a run queue where the threads sit idle until they get assigned a unit of work. So if a thread is idle, it does not use its own thread stack and, consequently, there is no side effect on the child's COW address space.

Granted, if the child was copied with a large number of active threads, the impact will be very different.

> Even windows dynamically grows stacks like this

Windows employs a distinct process/thread design, making the UNIX concept of a process foreign. Threads are the primary construct in Windows and the kernel is highly optimised for thread management rather than processes. Cygwin has outlined significant challenges in supporting fork(2) semantics on Windows and has extensively documented the associated difficulties. However, I am veering off-topic.

barchar 2 hours ago
I am aware reserving excess memory doesn't commit said memory. But it does reserve memory, which is what we were talking about. The point was that, because you can have a lot of threads and restricting reserved stacks to some small value is annoying, all systems overcommit stack. Windows initially commits some memory (reserving space in the page file/RAM) for each but will dynamically commit more when you touch the guard. This is overcommit. Linux does similarly.

Idle threads do increase the amount of committed stack. Once their stack grows it stays grown, it's not common to unmap the end and shrink the stacks. In a system without overcommit these stacks will contribute to total reserved phys/swap in the child, though ofc the pages will be cow.

> Windows employs a distinct process/thread design, making the UNIX concept of a process foreign. Threads are the primary construct in Windows and the kernel is highly optimised for thread management rather than processes. Cygwin has outlined significant challenges in supporting fork(2) semantics on Windows and has extensively documented the associated difficulties. However, I am veering off-topic.

The nt kernel actually works similarly to Linux w.r.t. processes and threads. Internally they are the same thing. The userspace is what makes process creation slow. Actually thread creation is also much slower than on Linux, but it's better than processes. Defender also contributes to the problems here.

Windows can do cow mappings, fork might even be implementable with undocumented APIs. Exec is essentially impossible though. You can't change the identity of a process like that without changing the PID and handles.

Fun fact: the clone syscall will let you create a new task that both shares VM and keeps the same stack as the parent. Chaos results, but it is fun. You used to be able to share your PID with the parent too, which also caused much destruction.

inkyoto 5 hours ago
> […] because after the fork writes to basically any page in either process will trigger memory commitment.

This is largely not true for most processes. For a child process to start writing into its own data pages en masse, there has to exist a specific code path that causes such behaviour. Processes do not randomly modify their own data space – it is either a bug or a peculiar workload that causes it.

You would have a stronger case if you mentioned, e.g., the JVM, which has a high complexity garbage collector (rather, multiple types of garbage collectors – each with its own behaviour), but the JVM ameliorates the problem by attempting to lock in the entire heap size at startup or bailing if it fails to do so.

In most scenarios, forking a process has a negligible effect on the overall memory consumption in the system.

minitech 3 hours ago
> This is largely not true for most processes.

> In most scenarios, forking a process has a negligible effect on the overall memory consumption in the system.

Yes, that’s what they’re getting at. It’s good overcommitment. It’s still overcommitment, because the OS has no way of knowing whether the process has the kind of rare path you’re talking about for the purposes of memory accounting. They said that disabling overcommit is wasteful, not that fork is wasteful.

barchar 2 hours ago
Yep. If you aren't overcommitting on fork it's quite wasteful, and if you are overcommitting on fork then you've already given up on not having to handle oom conditions after malloc has returned.
otabdeveloper4 11 hours ago
> because of fork and because of the way thread stacks are allocated

For modern (post-x86_64) memory allocators a common strategy is to allocate hundreds of gigabytes of virtual memory and let the kernel deal with actually mapping in physical memory pages upon use.

This way you can partition the virtual memory space into arenas as you like. This works really well.

zrm 5 hours ago
Which is a major way turning off overcommit can cause problems. The expectation for disabling it is that if you request memory you're going to use it, which is frequently not true. So if you turn it off, your memory requirements go from, say, 64GB to 512GB.

Obviously you don't want to have to octuple your physical memory for pages that will never be used, especially these days, so the typical way around that is to allocate a lot of swap. Then the allocations that aren't actually used can be backed by swap instead of RAM.

Except then you've essentially reimplemented overcommit. Allocations report success because you have plenty of swap but if you try to really use that much the system grinds to a halt.

themafia 3 hours ago
> your memory requirements go from, say, 64GB to 512GB.

Then your memory requirements always were potentially 512GB. It may just happen that even with that amount allocated you only need 64GB of actual physical storage; however, there is clearly a path for your application to suddenly require 512GB of storage, perhaps when it's under attack or facing misconfigured clients.

If your failure strategy is "just let the server fall over under pressure" then this might be fine for you.

zrm 2 hours ago
> Then your memory requirements always were potentially 512GB. It may just happen that even with that amount allocated you only need 64GB of actual physical storage; however, there is clearly a path for your application to suddenly require 512GB of storage.

If an allocator unconditionally maps in 512GB at once to minimize expensive reallocations, that doesn't inherently have any relationship to the maximum that could actually be used in the program.

Or suppose a generic library uses buffers that are ten times bigger than the maximum message supported by your application. Your program would deterministically never access 90% of the memory pages the library allocated.

> If your failure strategy is "just let the server fall over under pressure" then this might be fine for you.

The question is, what do you intend to happen when there is memory pressure?

If you start denying allocations, even if your program is designed to deal with that, so many others aren't that your system is likely to crash, or worse, take a trip down rarely-exercised code paths into the land of eldritch bugs.

kccqzy 3 hours ago
Yeah but these only request address space. The memory is neither readable nor writable until a subsequent mprotect call. Ideally reserving address space only shouldn’t be counted as overcommit.
Skunkleton 2 hours ago
Overcommit is a design choice, and it is a design choice that is pretty core to Linux. Basic stuff like fork(), for example, gets wasteful when you don't overcommit. Less obvious stuff like buffer caches also gets less effective. There are certainly places where you would rather fail at allocation time, but that isn't everywhere and it doesn't belong as a default.
Asmod4n 1 hour ago
There are some situations where you can somewhat handle malloc returning NULL.

One would be where you have frequent large mallocs which get freed fast. Another would be where you have written a garbage collected language in C/C++.

When calling free or delete, or letting your GC do that for you, the memory isn't actually given back to the OS immediately; glibc has malloc_trim(0) for that, which tries its best to give back as much unused memory to the OS as possible.

Then you can retry your call to malloc, see if it still fails, and then just let your supervisor restart your service/host/whatever (or not).
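
A minimal sketch of that retry idea on glibc (try_alloc is a hypothetical name; malloc_trim is glibc-specific):

  #include <malloc.h>   /* malloc_trim() */
  #include <stdlib.h>

  /* Retry-once allocator: if malloc fails, ask glibc to hand freed-but-retained
     heap pages back to the kernel, then try again. */
  void *try_alloc(size_t n) {
      void *p = malloc(n);
      if (p == NULL) {
          malloc_trim(0);   /* release as much unused heap to the OS as possible */
          p = malloc(n);    /* still NULL? let the supervisor restart us */
      }
      return p;
  }

  int main(void) {
      void *p = try_alloc(64 * 1024 * 1024);
      if (!p) return 1;   /* supervisor takes it from here */
      free(p);
      return 0;
  }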

vin10 12 hours ago
For anyone feeling brave enough to disable overcommit after reading this, be mindful that the default `vm.overcommit_ratio` is 50%, which means that if no swap is available, then on a system with 2GB of total RAM more than 1GB can't be allocated, and requests will fail with preemptive OOMs. (e.g. PostgreSQL servers typically disable overcommit)

- https://github.com/torvalds/linux/blob/master/mm/util.c#L753
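
Worked out for that example (no swap, 2 GB RAM, default ratio of 50):

  CommitLimit = swap + RAM * overcommit_ratio / 100
              = 0    + 2 GB * 50 / 100
              = 1 GB of total commit across the whole system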

barchar 5 hours ago
Fwiw you can use pressure stall information to load-shed. This is superior to disabling overcommit and then praying that the first allocation to fail is in the process you actually want to respond to the resource starvation.

The fact is that by the time small allocations are failing, you are almost no better off handling the NULL than you would be handling segfaults and the kills from the OOM killer.

Often for servers performance will fall off a cliff long before the oom killer is needed, too.

wmf 2 days ago
This doesn't address the fact that forking large processes requires either overcommit or a lot of swap. That may be the source of the Redis problem.
PunchyHamster 5 hours ago
The author is just ignorant of the technicals and laser-focused on some particular cases that he thinks are a problem but are not.

The redis engineers KNOW that fork-to-save will at most result in a few tens of MBs of extra memory used in the vast majority of cases, with the benefit of seamless saving. Like, there is a theoretical case where it uses double the memory, but it would require all the data in the database to be replaced during the short interval while the snapshot is saved, and that's just unrealistic.

loeg 12 hours ago
Why? Most COWed pages will remain untouched. They only need to allocate when touched.
pm215 11 hours ago
Because the point of forbidding overcommit is to ensure that the only time you can discover you're out of memory is when you make a syscall that tries (explicitly or implicitly) to allocate more memory. If you don't account the COW pages to both the parent and the child process, you have a situation where you can discover the out of memory condition when the process tries to dirty the RAM and there's no page available to do that with...
inkyoto 5 hours ago
The described scenario (and, consequently, the concern) is mostly a philosophical question, or a real concern only for a very specific workload.

Memory allocation is a highly non-deterministic process that depends heavily on the code path, and it is generally impossible to predict how the child will handle its own memory space; it can be little or it can be more (relative to the parent), and it is usually somewhere in the middle. Most daemons, for example, consume next to zero extra memory after forking.

The Ruby «mark-and-sweep» garbage collector (old versions of Ruby, 1.8 and 1.9) and Python reference counting (the Instagram case) are prime examples of pathological cases where a child would walk over its data pages, dirtying each one and causing a system collapse, but the bugs have been fixed or workarounds have been applied. An honourable mention goes to Redis in a situation where THP (transparent huge pages) are enabled.

No heuristics exist out there that would turn memory allocation into a deterministic process.

mahkoh 11 hours ago
The point of disabling overcommit, as per the article, is that all pages in virtual memory must be backed by physical memory at all times. Therefore all virtual memory must reserve physical memory at the time of the fork call, even if the contents of the pages only get copied when they are touched.
loeg 10 hours ago
Surely e.g. shared memory segments that are mapped by multiple processes are not double-counted? So it's only COW memory in particular that gets this treatment? Linux could just not do that.
masklinn 51 minutes ago
COW is not shared memory, it’s an optimised copy. There is no way to guarantee that the optimisation will hold forever thus it is a form of overcommit (and indeed the reason most unices overcommit in the first place): most children will not touch most of the virtual memory they inherited but any can, so if you require precise memory accounting you have to account for that in the same way you account for large anonymous maps.
kibwen 10 hours ago
But forking duplicates the process space and has to assume that a write might happen, so it has to defensively reserve enough for the new process if overcommit is off.
Tuna-Fish 11 hours ago
If you have overcommit on, that happens. But if you have it off, it has to assume the worst case, otherwise there can be a failure when someone writes to a page.
laurencerowe 11 hours ago
Disabling overcommit on V8 servers like Deno will be incredibly inefficient. Your process might only need ~100MB of memory or so but V8's cppgc caged heap requires a 64GB allocation in order to get a 32GB aligned area in which to contain its pointers. This is a security measure to prevent any possibility of out of cage access.
silon42 11 hours ago
Maybe it should use MAP_NORESERVE ?
laurencerowe 8 hours ago
I expect it does already, but I don’t think it would help here:

> In mode 2 the MAP_NORESERVE flag is ignored.

https://www.kernel.org/doc/Documentation/vm/overcommit-accou...

simscitizen 3 hours ago
There's already a popular OS that disables overcommit by default (Windows). The problem with this is that disallowing overcommit (especially with software that doesn't expect that) can mean you don't get anywhere close to actually using all the RAM that's installed on your system.
masklinn 49 minutes ago
Windows also splits memory allocations between allocating the virtual space and committing real memory. So you can allocate a large VM when you need one and use piecemeal commits within that space.

POSIX not so much.

machinationu 47 minutes ago
I am regularly getting close to 100% RAM usage on Windows doing data processing in Python/numpy
Animats 12 hours ago
Setting 2 is still pretty generous. It means "Kernel does not allow allocations that exceed swap + (RAM × overcommit_ratio / 100)." It's not a "never swap or overcommit" setting. You can still get into thrashing by memory overload.

We may be entering an era when everyone in computing has to get serious about resource consumption. NVidia says GPUs are going to get more expensive for the next five years. DRAM prices are way up, and Samsung says it's not getting better for the next few years. Bulk electricity prices are up due to all those AI data centers. We have to assume for planning purposes that computing gets a little more expensive each year through at least 2030.

Somebody may make a breakthrough, but there's nothing in the fab pipeline likely to pay off before 2030, if then.

silon42 12 hours ago
For me, on the desktop, thrashing overload is the most common way the Linux system effectively crashes... (I've left it overnight a few times, sometimes it recovered, but not always).

I'm not disabling overcommit for now, but maybe I should.

minitech 41 minutes ago
SysRq+F to trigger the OOM killer manually might help (has to be enabled with the kernel.sysrq sysctl, see https://docs.kernel.org/admin-guide/sysrq.html#how-do-i-enab...).
webstrand 4 hours ago
That thrashing is probably executable pages getting evicted, and then having to be reloaded from disk when the process resumes. Even with no swap and overcommit disabled, you'll still get thrashing before the OOM killer gets triggered.

I recommend everyone enable Linux's new multi-generational LRU, which can be configured to trigger the OOM killer when the working set of the last N deciseconds doesn't fit in memory. And <https://github.com/hakavlad/nohang> has some more suggestions.

PunchyHamster 5 hours ago
It's not gonna change anything. But you might be interested in software like earlyoom and similar, which basically tries to preempt the OOM killer and kill something before the system gets to a sluggish state.
Tuna-Fish 11 hours ago
Disabling overcommit does not fix thrashing. Reducing the size of your swap does.
silon42 11 hours ago
Yes, but not fully; it may still thrash on mmapped files (especially read-only ones).
renehsz 3 days ago
Strongly agree with this article. It highlights really well why overcommit is so harmful.

Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.

But it feels like a lost cause these days...

So much software breaks once you turn off overcommit, even in situations where you're nowhere close to running out of physical memory.

What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory. Large virtual memory buffers that aren't fully committed can be very useful in certain situations. But it should be something a program has to ask for, not the default behavior.

charcircuit 4 hours ago
>terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software

Having an assumption that your process will never crash is not safe. There will always be freak things like CPUs taking the wrong branch or bits randomly flipping. Part of designing a robust system is being tolerant of things like this.

Another point, also mentioned in this thread, is that by the time you run out of memory the system is already going to be in a bad state, and now you probably don't have enough memory to even get out of it. Memory should have been freed already, by telling programs to lighten up on their memory usage or by killing them and reclaiming the resources.

PunchyHamster 5 hours ago
It's not harmful. It's necessary for modern systems that are not "an ECU in a car"

> Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.

Big software is not written that way. In fact, writing software that way means you will have to sacrifice performance, memory usage, or both, because you either need to allocate exactly what you need at any moment and free it as soon as usage shrinks (if you want to keep the memory footprint similar), which adds latency, or over-allocate and waste RAM.

And you'd end up with MORE memory-related issues, not fewer. Writing an app where every allocation can fail is just a nightmarish waste of time for the 99% of apps that are not "the onboard computer of a spaceship/plane".

barchar 5 hours ago
Even besides the aforementioned fork problems, not having overcommit doesn't mean you can handle OOM correctly just by handling errors from malloc!
201984 11 hours ago
> What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory.

mmap with PROT_NONE is such a reservation and doesn't count towards the commit limit. A later mmap with MAP_FIXED and PROT_READ | PROT_WRITE can commit parts of the reserved region, and mmap calls with PROT_NONE and MAP_FIXED will decommit.
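
A minimal sketch of that reserve/commit/decommit dance (sizes are arbitrary, error handling trimmed):

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void) {
      size_t reserve_sz = (size_t)1 << 30;   /* reserve 1 GiB of address space */
      size_t commit_sz  = (size_t)1 << 20;   /* commit the first 1 MiB of it   */

      /* PROT_NONE: reserves addresses only; not charged against the commit limit. */
      char *base = mmap(NULL, reserve_sz, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (base == MAP_FAILED) { perror("reserve"); return 1; }

      /* Commit part of the reservation in place; this is what gets accounted. */
      if (mmap(base, commit_sz, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
          perror("commit");
          return 1;
      }
      memset(base, 0xab, commit_sz);

      /* Decommit: map the range back to PROT_NONE with another MAP_FIXED mmap. */
      mmap(base, commit_sz, PROT_NONE,
           MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

      munmap(base, reserve_sz);
      return 0;
  }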

hparadiz 12 hours ago
That's a normal failure state that happens occasionally. Out-of-memory errors come up all the time when writing robust async job queues. There are a lot of other reasons a failure could happen; running out of memory is just one of them. Sure, I could force the system to use swap, but that would degrade performance for everything else, so it's better to let it die, log the result, and check your dead-letter queue after.
deathanatos 12 hours ago
This is quite the bold statement to make with RAM prices sky high.

I want to agree with the locality-of-errors argument, and while in simple cases, yes, it holds true, it isn't necessarily true. If we don't overcommit, the allocation that kills us is simply the one that fails. Whether this allocation is the problematic one is a different question: if we have a slow leak that allocates and leaks every 10k allocations, we're probably (9999/10k, assuming spherical allocations) going to fail on one that isn't the problem. We get about as much info as the oom-killer would have anyway: this program is allocating too much.

jleyank 1 day ago
As I recall, this appeared in the 90’s and it was a real pain debugging then as well. Having errors deferred added a Heisenbug component to what should have been a quick, clean crash.

Has malloc ever returned zero since then? Or has somebody undone this, erm, feature at times?

baq 12 hours ago
This is exactly what the article’s title does
charcircuit 11 hours ago
>Would you rather debug a crash at the allocation site

The allocation site is not necessarily what is leaking memory. What you actually want in either case is a memory dump where you can tell what is leaking or using the memory.

blibble 12 hours ago
redis uses the copy-on-write property of fork() to implement saving

which is elegant and completely legitimate

ycombinatrix 12 hours ago
How does fork() work with vm.overcommit=2?

A forked process would assume memory is already allocated, but I guess it would fail when writing to it as if vm.overcommit is set to 0 or 1.

pm215 11 hours ago
I believe (per the stuff at the bottom of https://www.kernel.org/doc/Documentation/vm/overcommit-accou... ) that the kernel does the accounting of how much memory the new child process needs and will fail the fork() if there isn't enough. All the COW pages should be in the "shared anonymous" category so get counted once per user (i.e. once for the parent process, once for the child), ensuring that the COW copy can't fail if the fork succeeded.
toast0 11 hours ago
As pm215 states, it doubles your memory commit. It's somewhat common for large programs/runtimes that may fork at runtime to spawn an intermediary process during startup to use for runtime forks, to avoid the cost of CoW on memory and mappings etc. where the CoW isn't needed or desirable; but redis has to fork the actual service process because it uses CoW to effectively snapshot memory.
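
A sketch of how that surfaces, assuming strict accounting (the 4 GiB below is an arbitrary example; pick something over half your commit limit to see the effect): the fork() can fail even though the child would touch almost nothing.

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      size_t n = (size_t)4 << 30;        /* 4 GiB */
      char *buf = malloc(n);
      if (!buf) { perror("malloc"); return 1; }
      memset(buf, 1, n);                 /* actually commit the pages */

      pid_t pid = fork();                /* commit charge roughly doubles here */
      if (pid < 0) {
          /* Under overcommit=2 this is where a redis-style fork-to-save fails. */
          fprintf(stderr, "fork failed: %s\n", strerror(errno));
          return 1;
      }
      if (pid == 0) _exit(0);            /* child touches nothing and exits */
      waitpid(pid, NULL, 0);
      free(buf);
      return 0;
  }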
loeg 10 hours ago
It seems like a wrong accounting to count CoWed pages twice.
toast0 9 hours ago
It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time and in the meantime the whole cache in the parent is rewritten. In that case, even though the cache is fixed size / no new allocations, all of the pages are touched and so the total used memory is double from before the fork. If you want to guarantee allocation failures rather than demand paging failures, and you don't have enough ram/swap to back twice the allocations, you must fail the fork.

On the other hand, if you have a pretty good idea that the child will finish persisting and exit before the cache is fully rewritten, double is too much. There's not really a mechanism for that though. Even if you could set an optimistic multiplier for multiple mapped CoW pages, you're back to demand paging failures --- although maybe it's still worthwhile.

PunchyHamster 5 hours ago
> It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time and in the meantime the whole cache in the parent is rewritten.

It's wrong 99.99999% of the time, because the alternative is either "make it take double and waste half the RAM" or "write the in-memory data in a way that allows for snapshotting, throwing a bunch of performance into the trash".

kibwen 9 hours ago
Not if your goal is to make it such that OOM can only occur during allocation failure, and not during an arbitrary later write, as the OP purports to want.
loeg 11 hours ago
Can you elaborate on how this comment is connected to the article?
blibble 11 hours ago
Did you read the article? There's a large section on redis.

the author says it's bad design, but has entirely missed WHY it wants overcommit

loeg 10 hours ago
You haven't made a connection, though. What does fork have to do with overcommit? You didn't connect the dots.
Spivak 6 hours ago
If you turn overcommit off, then when you fork you double the memory commitment. The pages are CoW, but for accounting purposes it counts as double, because writes could require allocating memory, and that's not allowed to fail since it's not a malloc. So the kernel has to count it as reserved.
pizlonator 11 hours ago
This is such an old debate. The real answer, as with all such things, is "it depends".

Two reasons why overcommit is a good idea:

- It lets you reserve memory and use the dirtying of that memory to be the thing that commits it. Some algorithms and data structures rely on this strongly (i.e. you would have to use a significantly different algorithm, which is demonstrably slower or more memory intensive, if you couldn't rely on overcommit).

- Many applications have no story for out-of-memory other than halting. You can scream and yell at them to do better, but that won't help, because those apps that find themselves in that supposedly-bad situation ended up there for complex and well-considered reasons. My favorite: having complex OOM error handling paths is the worst kind of attack surface, since it's hard to get test coverage for them. So it's better to just have the program killed instead, because that nixes the untested code path. For those programs, there's zero value in having the memory allocator be able to report OOM conditions other than by asserting in prod that mmap/madvise always succeed, which then means that the value of not overcommitting is much smaller.

Are there server apps where the value of gracefully handling out of memory errors outweighs the perf benefits of overcommit and the attack surface mitigation of halting on OOM? Yeah! But I bet that not all server apps fall into that bucket

PunchyHamster 5 hours ago
It's also performance: since there is no penalty for asking for more RAM than you need right now, you can reduce the number of allocation calls without sacrificing memory usage (as you would have to without overcommit).
pizlonator 5 hours ago
That's what I mean by my first reason
PunchyHamster 6 hours ago
Sure, if you don't like your stuff to work well. 0 is the default for a reason, and "my specific workload is buggy with 0" is not a problem with it, just the reason there are other options.

Advocating for 2 with "but apps should handle it" is utter ignorance, and the redis example shows that: the database is using the COW fork feature for basically the reason it exists, as do many, many servers, and the warning is pretty much tailored for people who think they are clever but don't understand the memory subsystem.

jcalvinowens 12 hours ago
There's a reason nobody does this: RAM is expensive. Disabling overcommit on your typical server workload will waste a great deal of it. TFA completely ignores this.

This is one of those classic money vs idealism things. In my experience, the money always wins this particular argument: nobody is going to buy more RAM for you so you can do this.

toast0 12 hours ago
Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.

The difference is that you'll fail allocations, where there's a reasonable interface for errors, rather than failing at demand paging when writing to previously unused pages where there's not a good interface.

Of course, there are many software patterns where excessive allocations are made without any intent of touching most of the pages; that's fine with overcommit, but it will lead to allocation failures when you disable overcommit.

Disabling overcommit does make fork in a large process tricky; I don't think the rant about redis in the article is totally on target; fork to persist is a pretty good solution, copy on write is a reasonable cost to pay while dumping the data to disk and then it returns to normal when the dump is done. But without overcommit, it doubles the memory commitment while the dump is running, and that's likely to cause issues if redis is large relative to memory and that's worth checking for and warning about. The linked jemalloc issue seems like it could be problematic too, but I only skimmed; seems like that's worth warning about as well.

For the fork path, it might be nice if you could request overcommit in certain circumstances... fork but only commit X% rather than the whole memory space.

PunchyHamster 5 hours ago
> Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.

Doesn't really change the point. The RAM might not be completely wasted, but given that nearly every app will over-allocate and just use the pooled memory, you will waste memory that could otherwise be used to run more stuff.

And it can be quite significant; it's pretty common for server apps to start a big process and then have a COWed thread per connection in a pool, so your apache2 eating maybe 60MB per thread in the pool is now in the gigabytes range at very small pool sizes.

The blog is essentially a call to "let's make apps like we did in the DOS era", which is ridiculous.

jcalvinowens 12 hours ago
You're correct it doesn't prefault the mappings, but that's irrelevant: it accounts them as allocated, and a later allocation which goes over the limit will immediately fail.

Remember, the limit is artificial and defined by the user with overcommit=2, by overcommit_ratio and user_reserve_kbytes. Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).

toast0 11 hours ago
> Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).

The RAM is not unusable, it will be used. Some portion of ram may be unallocatable, but that doesn't mean it's wasted.

There's a tradeoff. With overcommit disabled, you will get allocation failure rather than OOM killer. But you'll likely get allocation failures at memory pressure below that needed to trigger the OOM killer. And if you're running a wide variety of software, you'll run into problems because overcommit is the mainstream default for Linux, so many things are only widely tested with it enabled.

jcalvinowens 11 hours ago
> The RAM is not unusable, it will be used. Some portion of ram may be unallocatable

I think that's a meaningless distinction: if userspace can't allocate it, it is functionally wasted.

I completely agree with your second paragraph, but again, some portion of RAM obtainable with overcommit=0 will be unobtainable with overcommit=2.

Maybe a better way to say it is that a system with overcommit=2 will fail at a lower memory pressure than one with overcommit=0. Additional RAM would have to be added to the former system to successfully run the same workload. That RAM is waste.

PunchyHamster 5 hours ago
It's absolutely wasted if the apps on the server don't use the disk (the disk cache is pretty much the only thing that can use that reserved memory).

You can have a simple web server that took less than 100MB of RAM take gigabytes, just because it spawned a few COWed threads.

loeg 11 hours ago
If the overcommit ratio is 1, there is no portion rendered unusable? This seems to contradict your "necessarily" wastes RAM claim?
jcalvinowens 11 hours ago
Read the comment again, that wasn't the only one I mentioned.
loeg 10 hours ago
Please point out what you're talking about, because the comment is short and I read it fully multiple times now.
wmf 12 hours ago
If you have enough swap there's no waste.
jcalvinowens 12 hours ago
Wasted swap is still waste, and the swapping costs cycles.
wmf 12 hours ago
With overcommit off the swap isn't used; it's only necessary for accounting purposes. I agree that it's a waste of disk space.
loeg 12 hours ago
How does disabling overcommit waste RAM?
PunchyHamster 5 hours ago
overcommit 0:

- Apache2 runs.
- Apache2 takes 50MB.
- Apache2 spawns 32 threads.
- Apache2 takes 50MB + (per-thread vars * 32)

overcommit 2:

- Apache2 runs.
- Apache2 takes 50MB.
- Apache2 spawns 32 threads.
- Apache2 takes 50MB + (50 * 32) + (per-thread vars * 32)

Boom. Now your simple apache server serving some static files can't fit on a 512MB VM and needs in excess of 1.7GB of memory just for its allocations.

jcalvinowens 11 hours ago
Because userspace rarely actually faults in all the pages it allocates.
loeg 11 hours ago
Surely the source of the waste here is the userspace program not using the memory it allocated, rather than whether or not the kernel overcommits memory. Attributing this to overcommit behavior is invalid.
PunchyHamster 5 hours ago
The waste comes with an asterisk.

That "waste" (which overcommit turns into "not a waste") means you can do far fewer allocation calls: with overcommit you can just allocate a bunch of memory and use it gradually, instead of having to malloc() every time you need a bit of memory and free() every time you're done with it.

You'd also increase memory fragmentation that way, possibly hurting performance.

It's also pretty much required for GCed languages to work sensibly

jcalvinowens 11 hours ago
Obviously. But all programs do that and have done it forever; it's literally the very reason overcommit exists.
userbinator 5 hours ago
Only the poorly-written ones, which are unfortunately the majority of them.
nickelpro 11 hours ago
Reading COW memory doesn't cause a fault. It doesn't mean unused literally.

And even if it's not COW, there's nothing wrong or inefficient about opportunistically allocating pages ahead of time to avoid syscall latency. Or mmapping files and deciding halfway through you don't need the whole thing.

There are plenty of reasons overcommit is the default.