On system memory… specifically the difference between `tmpfs`, `shm`, and `hugepages`…



























I've been curious lately about the various Linux kernel memory-based filesystems.



Note: As far as I'm concerned, the questions below should be considered more or less optional compared with a better understanding of the question posed in the title. I ask them because I believe answering them can help me understand the differences, but as my understanding is admittedly limited, it follows that others may know better. I am prepared to accept any answer that enriches my understanding of the differences between the three filesystems mentioned in the title.



Ultimately I think I'd like to mount a usable filesystem with hugepages, though some light research (and still lighter tinkering) has led me to believe that a rewritable hugepage mount is not an option. Am I mistaken? What are the mechanics at play here?



Also regarding hugepages:



    uname -a
    3.13.3-1-MANJARO #1 SMP PREEMPT x86_64 GNU/Linux

    tail -n8 /proc/meminfo
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:     8223772 kB
    DirectMap2M:    16924672 kB
    DirectMap1G:     2097152 kB


(Here are full-text versions of /proc/meminfo and /proc/cpuinfo.)



What's going on in the above? Am I already allocating hugepages? Is there a difference between DirectMap memory pages and hugepages?



Update: After a bit of a nudge from @Gilles, I've added 4 more lines above, and it seems there must be a difference, though I'd never heard of DirectMap before pulling that tail yesterday... maybe DMI or something?



Only a little more...



Failing any success with the hugepages endeavor, and assuming hard-disk backups of any image files, what are the risks of mounting loops from tmpfs? Is my filesystem being swapped the worst-case scenario? I understand tmpfs is mounted filesystem cache - can my mounted loop file be pressured out of memory? Are there mitigating actions I can take to avoid this?
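
Concretely, the sort of setup I have in mind looks something like this (the paths and sizes here are just placeholders):

    # make a tmpfs, put a filesystem image on it, then loop-mount that image
    mount -t tmpfs -o size=4G tmpfs /mnt/tmp
    dd if=/dev/zero of=/mnt/tmp/fs.img bs=1M count=2048
    mkfs.ext4 -F /mnt/tmp/fs.img
    mount -o loop /mnt/tmp/fs.img /mnt/loop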



Last - exactly what is shm, anyway? How does it differ from or include either hugepages or tmpfs?










Tags: linux, filesystems, memory, tmpfs, shared-memory






asked Mar 20 '14 at 5:11 by mikeserv
edited Feb 10 at 19:15 by Rui F Ribeiro








  • What about the previous lines in /proc/meminfo that contain HugePage (or does your kernel version not have these)? What architecture is this on (x86_64 I suppose)? – Gilles Mar 20 '14 at 23:14

  • I'll add them. I was just worried about it being too long. – mikeserv Mar 20 '14 at 23:15

  • @Gilles - I've linked to plain text above. I hope that's ok. Thanks for asking - I should have included it in the first place - I don't know how I missed that. – mikeserv Mar 20 '14 at 23:30
















3 Answers


















12 votes, +100 bounty









There is no difference between tmpfs and shm. tmpfs is the new name for shm; shm stands for SHared Memory.



See: Linux tmpfs.
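
A quick way to see this on a running system (assuming util-linux's findmnt is available; the exact options will vary by distro) is to ask what is mounted at /dev/shm:

    findmnt -no SOURCE,FSTYPE,OPTIONS /dev/shm
    # typically prints something like:  tmpfs  tmpfs  rw,nosuid,nodev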



The main reason tmpfs is even used today is this comment in my /etc/fstab on my Gentoo box. BTW, Chromium won't build with the line missing:



# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for 
# POSIX shared memory (shm_open, shm_unlink).
shm /dev/shm tmpfs nodev,nosuid,noexec 0 0


which came out of the Linux kernel documentation.



Quoting:




tmpfs has the following uses:

1) There is always a kernel internal mount which you will not see at all. This is used for shared anonymous mappings and SYSV shared memory.

   This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not set, the user visible part of tmpfs is not built. But the internal mechanisms are always present.

2) glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for POSIX shared memory (shm_open, shm_unlink). Adding the following line to /etc/fstab should take care of this:

       tmpfs /dev/shm tmpfs defaults 0 0

   Remember to create the directory that you intend to mount tmpfs on if necessary.

   This mount is not needed for SYSV shared memory. The internal mount is used for that. (In the 2.3 kernel versions it was necessary to mount the predecessor of tmpfs (shm fs) to use SYSV shared memory.)

3) Some people (including me) find it very convenient to mount it e.g. on /tmp and /var/tmp and have a big swap partition. And now loop mounts of tmpfs files do work, so mkinitrd shipped by most distributions should succeed with a tmpfs /tmp.

4) And probably a lot more I do not know about :-)

tmpfs has three mount options for sizing:

    size:       The limit of allocated bytes for this tmpfs instance. The
                default is half of your physical RAM without swap. If you
                oversize your tmpfs instances the machine will deadlock
                since the OOM handler will not be able to free that memory.
    nr_blocks:  The same as size, but in blocks of PAGE_CACHE_SIZE.
    nr_inodes:  The maximum number of inodes for this instance. The default
                is half of the number of your physical RAM pages, or (on a
                machine with highmem) the number of lowmem RAM pages,
                whichever is the lower.
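
As a quick sketch of how those sizing options are used in practice (the mount point and numbers here are only examples):

    mkdir -p /mnt/mytmpfs
    mount -t tmpfs -o size=2G,nr_inodes=100000 tmpfs /mnt/mytmpfs
    df -h /mnt/mytmpfs    # shows a 2G filesystem backed by RAM/swap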




From the Transparent Hugepage Kernel Doc:




Transparent Hugepage Support maximizes the usefulness of free memory
if compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable
entities). It doesn't require reservation to prevent hugepage
allocation failures to be noticeable from userland. It allows paging
and all other advanced VM features to be available on the hugepages.
It requires no modifications for applications to take advantage of it.



Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by far not mandatory and khugepaged already can take care of long
lived page allocations even for hugepage unaware applications that
deals with large amounts of memory.
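
On kernels built with THP, its current mode can be checked and changed through sysfs; a minimal sketch (madvise is just one of the documented settings):

    cat /sys/kernel/mm/transparent_hugepage/enabled
    # e.g.  [always] madvise never
    echo madvise > /sys/kernel/mm/transparent_hugepage/enabled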






New comment after doing some calculations:

HugePage size: 2 MB
HugePages used: none/off, as evidenced by the all-0 counters, but enabled as per the 2 MB above.
DirectMap4k: 8.03 GB
DirectMap2M: 16.5 GB
DirectMap1G: 2 GB

Using the paragraph above regarding optimization in THS, it looks as though 8 GB of your memory is being used by applications that operate using mallocs of 4k, while 16.5 GB has been requested by applications using mallocs of 2M. The applications using mallocs of 2M are mimicking HugePage support by offloading the 2M sections to the kernel. This is the preferred method, because once the malloc is released by the kernel, the memory is released to the system, whereas mounting tmpfs using hugepages wouldn't result in a full cleanup until the system was rebooted. Lastly, the easy one: you had 2 programs open/running that requested a malloc of 1 GB.

For those of you reading who don't know, malloc is the standard C memory-allocation call (Memory ALLOCation). These calculations suggest that the OP's correlation between DirectMapping and THS may be correct. Also note that mounting a HUGEPAGE-ONLY fs would only result in gains in increments of 2 MB, whereas letting the system manage memory using THS occurs mostly in 4k blocks, meaning that in terms of memory management every malloc call saves the system 2044k (2048k - 4k) for some other process to use.






answered Apr 14 '14 at 21:28 by eyoung100 (edited Nov 19 '14 at 20:38)





















  • This is really good - is the THS my DirectMap? – mikeserv Apr 15 '14 at 3:17

  • That I can't answer, as I googled DirectMapping and found nothing related to tmpfs etc. The only thing I could find was how to configure HugeMem support for Oracle databases running on their flavor of Linux, which means they are using HugePages instead of the THS I referred to. All kernels in the 2.6 branch support THS though. As a hunch, though, see my new comment above. – eyoung100 Apr 15 '14 at 14:29

  • Yeah, I turned up very little as well. I have done some reading on HP, THP. I'm pretty intrigued by your comment. This is really shaping up, man. This last part - HP only - should I interpret this to mean that I can mount a read/write filesystem atop a hugepage mount? Like, an image file loop-mounted from a hugepage mount? Writable? – mikeserv Apr 16 '14 at 3:30

  • Yes, and it is writable when mounted properly, but be aware: 1. Since you mounted it, you're in charge of cleanup. 2. It's wasteful: using your example, let's say your loop only contained a text file with the characters "Hello, my name is Mike." Assuming each character is 1k, that file will save as 23k. You've wasted 2025k, as the hugepage gave you 2 MB. That wasteful behavior is why memory management was built into the kernel. It also prevents us from needing a wrapper DLL like kernel32. – eyoung100 Apr 16 '14 at 4:02

  • And lastly, 3. You lose your mount upon reboot or crash. – eyoung100 Apr 16 '14 at 4:10



















4 votes














To address the "DirectMap" issue: the kernel has a linear ("direct") mapping of physical memory, separate from the virtual mappings allocated to each user process.



The kernel uses the largest possible pages for this mapping to cut down on TLB pressure.



DirectMap1G is visible if your CPU supports 1 GB pages (Barcelona onwards; some virtual environments disable them), and if it is enabled in the kernel - the default is on for 2.6.29+.
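
If you want to check your own box, something like this should do it (pdpe1gb is the usual x86 cpuinfo flag name for 1 GB page support):

    grep -m1 -o pdpe1gb /proc/cpuinfo   # prints the flag if the CPU advertises 1 GB pages
    grep DirectMap /proc/meminfo        # how the kernel's direct map is split by page size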






answered Sep 20 '14 at 13:04 by GreenReaper































2 votes














There's no difference between shm and tmpfs (actually, tmpfs is only the new name of the former shmfs). hugetlbfs is a tmpfs-based filesystem that allocates its space from kernel huge pages and needs some additional configuration effort (how to use it is explained in Documentation/vm/hugetlbpage.txt).
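
For reference, the basic recipe from that document looks roughly like this (the page count and mount point are just examples):

    echo 512 > /proc/sys/vm/nr_hugepages   # reserve 512 x 2 MB huge pages
    grep HugePages_ /proc/meminfo          # verify the reservation took
    mkdir -p /mnt/huge
    mount -t hugetlbfs none /mnt/huge      # files created here are backed by huge pages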






answered Mar 24 '14 at 11:49 by Andreas Wiese
























    • This was a good try, and I had read those docs, of course. Or maybe not of course - but I think I'm going to put this out for a 100-rep bounty, though before I do, I will offer it to you if you can expand on this. So far you've yet to enrich my understanding - I already knew most of it, except that the two were merely synonyms. In any case, if you can make this a better answer by tomorrow morning the 100-rep bounty is yours. Especially interesting to me is that I find no mention of DirectMap at all in the procfs man page. How come? – mikeserv Apr 11 '14 at 16:45

    • @mikeserv - I found this diff that shows what function the DirectMaps are calculated from: lkml.org/lkml/2008/11/6/163 – slm Apr 17 '14 at 7:57










