Zephyr API Documentation
2.7.0-rc2
A Scalable Open Source RTOS
#include <sys/util.h>
#include <toolchain.h>
#include <stdint.h>
#include <stddef.h>
#include <inttypes.h>
#include <sys/__assert.h>
#include <syscalls/mem_manage.h>
Data Structures

    struct k_mem_paging_stats_t
    struct k_mem_paging_histogram_t

Macros

    #define K_MEM_CACHE_NONE 2
    #define K_MEM_CACHE_WT 1
    #define K_MEM_CACHE_WB 0
    #define K_MEM_CACHE_MASK (BIT(3) - 1)
    #define K_MEM_PERM_RW BIT(3)
    #define K_MEM_PERM_EXEC BIT(4)
    #define K_MEM_PERM_USER BIT(5)
    #define K_MEM_MAP_UNINIT BIT(16)
        The mapped region is not guaranteed to be zeroed.
    #define K_MEM_MAP_LOCK BIT(17)
    #define K_MEM_MAP_GUARD __DEPRECATED_MACRO BIT(18)
#define K_MEM_CACHE_MASK (BIT(3) - 1)

Reserved bits for cache modes in k_map() flags argument.
#define K_MEM_CACHE_NONE 2

No caching. Most drivers want this.
#define K_MEM_CACHE_WB 0

Full write-back caching. Any mapped RAM wants this.
#define K_MEM_CACHE_WT 1

Write-through caching. Used by certain drivers.
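The cache-mode values occupy the bits below K_MEM_PERM_RW and are combined with the permission bits by bitwise OR; K_MEM_CACHE_MASK recovers the mode from a composed flags word. A minimal sketch under those assumptions (the helper names are hypothetical):

    #include <stdint.h>
    #include <sys/mem_manage.h>

    /* Hypothetical helper: flags for an uncached, read/write mapping,
     * as a device driver would typically want. */
    static inline uint32_t uncached_rw_flags(void)
    {
        return K_MEM_CACHE_NONE | K_MEM_PERM_RW;
    }

    /* Hypothetical helper: extract the cache mode from a flags word.
     * Works because the mode sits entirely below BIT(3). */
    static inline uint32_t cache_mode_of(uint32_t flags)
    {
        return flags & K_MEM_CACHE_MASK;
    }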
#define K_MEM_MAP_GUARD __DEPRECATED_MACRO BIT(18)

An unmapped virtual guard page will be placed in memory immediately preceding the mapped region. This page will still be noted as being used by the virtual memory manager. The total size of the allocation will be the requested size plus the size of this guard page. The returned address pointer will not include the guard page immediately below it. The typical use case is downward-growing thread stacks.

Zephyr treats page faults on this guard page as a fatal K_ERR_STACK_CHK_FAIL if it determines that the page immediately precedes a stack buffer; this is implemented in the architecture layer.

DEPRECATED: k_mem_map() will always allocate guard pages, so this bit no longer has any effect.
#define K_MEM_MAP_LOCK BIT(17)

Region will be pinned in memory and never paged out.

Such memory is guaranteed to never produce a page fault due to page-outs or copy-on-write once the mapping call has returned. Physical page frames will be pre-fetched as necessary and pinned.
#define K_MEM_MAP_UNINIT BIT(16)

The mapped region is not guaranteed to be zeroed.

This may improve performance. The associated page frames may contain indeterminate data, zeroes, or even sensitive information.

This may not be used with K_MEM_PERM_USER as there are no circumstances where this is safe.
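A minimal sketch combining both mapping-behavior flags for a pinned scratch buffer in kernel mode, assuming CONFIG_MMU=y and this header; the helper name is hypothetical and error handling is illustrative:

    #include <zephyr.h>
    #include <sys/mem_manage.h>

    /* Hypothetical helper: a pinned, uninitialized kernel-only buffer. */
    static void *alloc_pinned_scratch(size_t pages)
    {
        size_t size = pages * CONFIG_MMU_PAGE_SIZE;

        /* K_MEM_MAP_LOCK: frames are pre-fetched and pinned, so the
         * region never faults due to page-outs.
         * K_MEM_MAP_UNINIT: skips zeroing; acceptable only because
         * K_MEM_PERM_USER is not set, so no user thread can observe
         * the indeterminate contents. */
        void *buf = k_mem_map(size, K_MEM_PERM_RW | K_MEM_MAP_LOCK |
                                    K_MEM_MAP_UNINIT);

        return buf; /* NULL on failure; contents indeterminate until written */
    }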
#define K_MEM_PERM_EXEC BIT(4)

Region will be executable (normally forbidden).
#define K_MEM_PERM_RW BIT(3)

Region will have read/write access (and not read-only).
#define K_MEM_PERM_USER BIT(5)

Region will be accessible to user mode (normally supervisor-only).
size_t k_mem_free_get(void)

Return the amount of free memory available.
The returned value will reflect how many free RAM page frames are available. If demand paging is enabled, it may still be possible to allocate more.
The information reported by this function may go stale immediately if concurrent memory mappings or page-ins take place.
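A minimal sketch treating the value as advisory, e.g. for logging; because the snapshot can go stale immediately, it must not be used to guarantee that a subsequent mapping will succeed:

    #include <zephyr.h>
    #include <sys/mem_manage.h>
    #include <sys/printk.h>

    void log_free_ram(void)
    {
        size_t free_bytes = k_mem_free_get();

        /* Advisory snapshot only; may be stale by the time it prints. */
        printk("free RAM: %zu bytes (%zu page frames)\n",
               free_bytes, free_bytes / CONFIG_MMU_PAGE_SIZE);
    }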
void *k_mem_map(size_t size, uint32_t flags)

Map anonymous memory into Zephyr's address space.
This function effectively increases the data space available to Zephyr. The kernel will choose a base virtual address and return it to the caller. The memory will have access permissions for all contexts set per the provided flags argument.
If user thread access control needs to be managed in any way, do not set the K_MEM_PERM_USER flag here; instead manage the region's permissions with memory domain APIs after the mapping has been established. Setting K_MEM_PERM_USER here will allow all user threads to access this memory, which is usually undesirable.
Unless K_MEM_MAP_UNINIT is used, the returned memory will be zeroed.
The mapped region is not guaranteed to be physically contiguous in memory. Physically contiguous buffers should be allocated statically and pinned at build time.
Pages mapped in this way have write-back cache settings.
The returned virtual memory pointer will be page-aligned. The size parameter, and any base address for re-mapping purposes, must be page-aligned.
Note that the allocation includes two guard pages immediately before and after the requested region. The total size of the allocation will be the requested size plus the size of these two guard pages.
The K_MEM_MAP_* control flags alter the behavior of this function; see the documentation of each flag for details.
Parameters
    size     Size of the memory mapping. This must be page-aligned.
    flags    K_MEM_PERM_*, K_MEM_MAP_* control flags.
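A minimal sketch of the basic call pattern, assuming CONFIG_MMU=y; the mapping here uses the default write-back cache settings and is zeroed because K_MEM_MAP_UNINIT is not set:

    #include <zephyr.h>
    #include <sys/mem_manage.h>

    void anonymous_map_example(void)
    {
        /* One page of zeroed, read/write memory at a kernel-chosen
         * virtual address; the size must be page-aligned. */
        uint8_t *buf = k_mem_map(CONFIG_MMU_PAGE_SIZE, K_MEM_PERM_RW);

        if (buf == NULL) {
            return; /* no virtual address space or page frames left */
        }

        buf[0] = 0xAA; /* safe: region was zeroed when mapped */
    }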
size_t k_mem_region_align(uintptr_t *aligned_addr, size_t *aligned_size, uintptr_t addr, size_t size, size_t align)
Given an arbitrary region, provide an aligned region that covers it.
The returned region will have both its base address and size aligned to the provided alignment value.
Parameters
    aligned_addr    [out] Aligned address
    aligned_size    [out] Aligned region size
    addr            Region base address
    size            Region size
    align           What to align the address and size to

Returns
    The offset between aligned_addr and addr
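A minimal sketch with concrete numbers, aligning a 100-byte region that starts at 0x20001234 to 4 KiB boundaries:

    #include <sys/mem_manage.h>

    void align_example(void)
    {
        uintptr_t aligned_addr;
        size_t aligned_size;
        size_t offset;

        offset = k_mem_region_align(&aligned_addr, &aligned_size,
                                    (uintptr_t)0x20001234, 100, 4096);

        /* aligned_addr == 0x20001000 (rounded down to a 4 KiB boundary),
         * offset       == 0x234 (so aligned_addr + offset == addr),
         * aligned_size == 0x1000 (covers all 100 original bytes). */
        (void)offset;
    }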
void k_mem_unmap(void *addr, size_t size)

Un-map mapped memory.

This removes a memory mapping for the provided page-aligned region. Associated page frames will be freed and the kernel may re-use the associated virtual address region. Any paged-out data pages may be discarded.
Calling this function on a region which was not mapped to begin with is undefined behavior.
Parameters
    addr    Page-aligned memory region base virtual address
    size    Page-aligned memory region size
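A minimal sketch pairing k_mem_map() with k_mem_unmap(); the pointer returned by k_mem_map() and a page-multiple size satisfy the alignment requirements:

    #include <zephyr.h>
    #include <sys/mem_manage.h>

    void map_then_unmap(void)
    {
        size_t size = 4 * CONFIG_MMU_PAGE_SIZE;
        uint8_t *region = k_mem_map(size, K_MEM_PERM_RW);

        if (region == NULL) {
            return;
        }

        /* ... use the region ... */

        /* Page frames are freed and the virtual range may be re-used,
         * so the pointer must not be dereferenced after this call. */
        k_mem_unmap(region, size);
    }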