In comp.lang.c Kaz Kylheku <firstname.lastname@example.org> wrote:
> ["Followup-To:" header set to comp.lang.c.]
> On 2012-02-17, Keith Thompson <email@example.com> wrote:
> > If you call malloc() and it overcommits, it won't crash the
> > program until you access the allocated memory. (The rationale for
> > overcommitting is that most programs don't actually use most of
> > the memory they allocate. I find that odd.)
> Odd or not, it is borne out empirically. Applications are physically
> smaller than their virtual footprints.
Supposedly, and that was measured years ago, when application developers
were chummy with the kernel folks.
> It may be the case that C programs that malloc something usually use
> the whole block.
> But overcommitting is not implemented at the level of malloc, but
> at the level of a lower level allocator like mmap.
> If the system maps a large block to give you a smaller one, that large
> block will not be all used immediately.
No, but it's not like allocators are mmap'ing large fractions of available
memory. The trend is toward more and smaller mmap'd backing blocks to, e.g.,
improve address randomization.
> Another example is thread stacks. If you give each thread a one megabyte
> stack and make 100 threads, that's 100 megs of virtual space. But that one
> megabyte is a worst case that few, if any, of the threads will hit.
Fortunately GCC already supports segmented stacks. That support would
likely have arrived sooner had overcommit not reduced the apparent demand
for it. glibc, though, doesn't seem too eager to jump aboard. Perhaps
they feel that a 5% performance hit is too high a price for getting rid
of applications randomly crashing under high loads.