Why is MongoDB's local database almost completely in memory?
I have a replicated MongoDB setup and I'm seeing a lot of page faults. I started investigating and found out (through vmmap) that the entire local database is in memory (that is, part of the working set). The collection of significance there is, of course, oplog.rs, which is used for replication. Looking at the queries being run, the ones on the oplog data are a lot closer to the tail than the head of the oplog. Why is the entire thing still in memory? Surely it should be swapped out, given the large number of faults.

Am I misunderstanding something here? Am I reading the vmmap information incorrectly? Or is something else going wrong?
Note that this is a testing setup and there are other mongod instances running on the same hardware, so the total amount of memory used here is not the sum total in the machine. Overall, memory usage is at ~100%, though.
Mongo delegates page management to the kernel: since it uses memory-mapped files, it relies on the kernel to decide what to page out. The local database is being touched on every write, or on every read received from another member. Because the oplog is a capped collection, it is always modifying a fixed space in the data files (and thus a fixed space in RAM), which should keep it constantly touched and not high on the priority list for being paged out.
As for the high number of page faults, is it possible that this is simple cache warming? Mongo isn't going to load its working set into memory when it's freshly started; it'll take a bit of querying things to warm up and pull stuff off of disk and into memory.
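Warming is nothing magical: the first time data is read, it faults in from disk; after that, reads hit the page cache. A rough sketch of the effect, assuming nothing MongoDB-specific, just a sequential read that pulls a file into the cache:

```python
def warm(path, chunk=1 << 20):
    """Read a file end to end in 1 MiB chunks.

    The first pass after a cold start triggers page faults as the
    kernel loads the file; subsequent passes are served from the
    page cache. Returns the number of bytes read.
    """
    total = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            total += len(buf)
    return total
```

A freshly started mongod goes through exactly this kind of fault-heavy phase before its working set is resident, which would explain a burst of faults that tapers off.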
Don't forget to account for caches and buffers. Memory usage might be reading at 100%, but the kernel is going to expire caches and buffers before it pages out anything else. So while the system is reporting close to 100% usage, a significant chunk of that may be caches and buffers that will be flushed as needed, meaning mongo's working set never has to be paged out at all.
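To make that concrete, here is a sketch of the arithmetic using made-up `/proc/meminfo`-style figures (the numbers are illustrative, not from your system):

```python
# Illustrative memory figures in MiB, shaped like /proc/meminfo fields.
meminfo = {"MemTotal": 8192, "MemFree": 120, "Buffers": 300, "Cached": 5400}

# Naive "used" percentage, the kind a monitoring graph reports:
used_pct = 100 * (meminfo["MemTotal"] - meminfo["MemFree"]) / meminfo["MemTotal"]

# Memory the kernel can hand back without swapping anything out:
# free pages plus reclaimable buffers and page cache.
reclaimable = meminfo["MemFree"] + meminfo["Buffers"] + meminfo["Cached"]
```

With these numbers the box looks ~99% "used", yet roughly 5.7 GiB of the 8 GiB is free or reclaimable cache, so nothing in mongo's working set would need to be evicted.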
You might be able to test this by running a program designed to eat more and more memory (like this one) and seeing how mongo behaves once the system starts hitting swap.
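A trivial memory-eater is easy to sketch; this is just a hypothetical stand-in (the `limit_mb` cap is there so you don't actually wedge the box while experimenting):

```python
def eat(limit_mb):
    """Allocate 1 MiB blocks until limit_mb is reached.

    bytearray is zero-filled, so each block touches real pages rather
    than staying as untouched virtual reservations. Keep the returned
    list referenced so the memory stays allocated while you watch
    mongod's resident set under pressure.
    """
    blocks = []
    while len(blocks) < limit_mb:
        blocks.append(bytearray(1024 * 1024))
    return blocks

hog = eat(16)  # hold the reference; raise the limit carefully on a test box
```

Run it alongside mongod, raise the limit gradually, and watch whether the kernel drops caches and buffers first or starts paging out mongod's mapped files.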