Zephyr API Documentation  2.7.0-rc2
A Scalable Open Source RTOS
mem_manage.h File Reference
#include <sys/util.h>
#include <toolchain.h>
#include <stdint.h>
#include <stddef.h>
#include <inttypes.h>
#include <sys/__assert.h>
#include <syscalls/mem_manage.h>


Data Structures

struct  k_mem_paging_stats_t
 
struct  k_mem_paging_histogram_t
 

Macros

#define K_MEM_CACHE_NONE   2
 
#define K_MEM_CACHE_WT   1
 
#define K_MEM_CACHE_WB   0
 
#define K_MEM_CACHE_MASK   (BIT(3) - 1)
 
#define K_MEM_PERM_RW   BIT(3)
 
#define K_MEM_PERM_EXEC   BIT(4)
 
#define K_MEM_PERM_USER   BIT(5)
 
#define K_MEM_MAP_UNINIT   BIT(16)
 The mapped region is not guaranteed to be zeroed. More...
 
#define K_MEM_MAP_LOCK   BIT(17)
 
#define K_MEM_MAP_GUARD   __DEPRECATED_MACRO BIT(18)
 

Functions

size_t k_mem_free_get (void)
 
void * k_mem_map (size_t size, uint32_t flags)
 
void k_mem_unmap (void *addr, size_t size)
 
size_t k_mem_region_align (uintptr_t *aligned_addr, size_t *aligned_size, uintptr_t addr, size_t size, size_t align)
 
int k_mem_page_out (void *addr, size_t size)
 
void k_mem_page_in (void *addr, size_t size)
 
void k_mem_pin (void *addr, size_t size)
 
void k_mem_unpin (void *addr, size_t size)
 
void k_mem_paging_stats_get (struct k_mem_paging_stats_t *stats)
 
void k_mem_paging_thread_stats_get (struct k_thread *thread, struct k_mem_paging_stats_t *stats)
 
void k_mem_paging_histogram_eviction_get (struct k_mem_paging_histogram_t *hist)
 
void k_mem_paging_histogram_backing_store_page_in_get (struct k_mem_paging_histogram_t *hist)
 
void k_mem_paging_histogram_backing_store_page_out_get (struct k_mem_paging_histogram_t *hist)
 
struct z_page_frame * k_mem_paging_eviction_select (bool *dirty)
 
void k_mem_paging_eviction_init (void)
 
int k_mem_paging_backing_store_location_get (struct z_page_frame *pf, uintptr_t *location, bool page_fault)
 
void k_mem_paging_backing_store_location_free (uintptr_t location)
 
void k_mem_paging_backing_store_page_out (uintptr_t location)
 
void k_mem_paging_backing_store_page_in (uintptr_t location)
 
void k_mem_paging_backing_store_page_finalize (struct z_page_frame *pf, uintptr_t location)
 
void k_mem_paging_backing_store_init (void)
 

Macro Definition Documentation

◆ K_MEM_CACHE_MASK

#define K_MEM_CACHE_MASK   (BIT(3) - 1)

Reserved bits for cache modes in the k_mem_map() flags argument.

◆ K_MEM_CACHE_NONE

#define K_MEM_CACHE_NONE   2

No caching. Most drivers want this.

◆ K_MEM_CACHE_WB

#define K_MEM_CACHE_WB   0

Full write-back caching. Any mapped RAM wants this.

◆ K_MEM_CACHE_WT

#define K_MEM_CACHE_WT   1

Write-through caching. Used by certain drivers.

◆ K_MEM_MAP_GUARD

#define K_MEM_MAP_GUARD   __DEPRECATED_MACRO BIT(18)

An un-mapped virtual guard page will be placed in memory immediately preceding the mapped region. This page will still be noted as being used by the virtual memory manager. The total size of the allocation will be the requested size plus the size of this guard page. The returned address will point to the mapped region, not to the guard page immediately below it. The typical use case is downward-growing thread stacks.

Zephyr treats page faults on this guard page as a fatal K_ERR_STACK_CHK_FAIL if it determines that the page immediately precedes a stack buffer; this check is implemented in the architecture layer.

DEPRECATED: k_mem_map() will always allocate guard pages, so this bit no longer has any effect.

◆ K_MEM_MAP_LOCK

#define K_MEM_MAP_LOCK   BIT(17)

Region will be pinned in memory and never paged out.

Such memory is guaranteed to never produce a page fault due to page-outs or copy-on-write once the mapping call has returned. Physical page frames will be pre-fetched as necessary and pinned.

◆ K_MEM_MAP_UNINIT

#define K_MEM_MAP_UNINIT   BIT(16)

The mapped region is not guaranteed to be zeroed.

This may improve performance. The associated page frames may contain indeterminate data, zeroes, or even sensitive information.

This may not be used with K_MEM_PERM_USER as there are no circumstances where this is safe.

◆ K_MEM_PERM_EXEC

#define K_MEM_PERM_EXEC   BIT(4)

Region will be executable (normally forbidden)

◆ K_MEM_PERM_RW

#define K_MEM_PERM_RW   BIT(3)

Region will have read/write access (and not read-only)

◆ K_MEM_PERM_USER

#define K_MEM_PERM_USER   BIT(5)

Region will be accessible to user mode (normally supervisor-only)
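
For illustration only (not part of this header), a minimal sketch of composing the macros above into a k_mem_map() flags argument, assuming applications can include the header as <sys/mem_manage.h>; the helper name is hypothetical:

#include <zephyr.h>
#include <sys/mem_manage.h>

/* Illustrative helper (not a Zephyr API): write-back cached, read/write,
 * not zero-filled. K_MEM_MAP_UNINIT must not be combined with
 * K_MEM_PERM_USER.
 */
static inline uint32_t scratch_map_flags(void)
{
        uint32_t flags = K_MEM_CACHE_WB | K_MEM_PERM_RW | K_MEM_MAP_UNINIT;

        /* The cache mode occupies the low bits covered by K_MEM_CACHE_MASK
         * and can be recovered from a composed flags word.
         */
        __ASSERT_NO_MSG((flags & K_MEM_CACHE_MASK) == K_MEM_CACHE_WB);

        return flags;
}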

Function Documentation

◆ k_mem_free_get()

size_t k_mem_free_get ( void  )

Return the amount of free memory available

The returned value will reflect how many free RAM page frames are available. If demand paging is enabled, it may still be possible to allocate more.

The information reported by this function may go stale immediately if concurrent memory mappings or page-ins take place.

Returns
Free physical RAM, in bytes
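
A minimal usage sketch (illustrative only; assumes the header is reachable as <sys/mem_manage.h> and the helper name is hypothetical):

#include <zephyr.h>
#include <sys/mem_manage.h>
#include <sys/printk.h>

/* Illustrative helper: report free physical RAM, keeping in mind the
 * value may already be stale when printed.
 */
void report_free_ram(void)
{
        size_t free_bytes = k_mem_free_get();

        printk("Free physical RAM: %zu bytes\n", free_bytes);
}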

◆ k_mem_map()

void * k_mem_map ( size_t  size,
uint32_t  flags 
)

Map anonymous memory into Zephyr's address space

This function effectively increases the data space available to Zephyr. The kernel will choose a base virtual address and return it to the caller. The memory will have access permissions for all contexts set per the provided flags argument.

If user thread access control needs to be managed in any way, do not set K_MEM_PERM_USER here; instead manage the region's permissions with memory domain APIs after the mapping has been established. Setting K_MEM_PERM_USER here will allow all user threads to access this memory, which is usually undesirable.

Unless K_MEM_MAP_UNINIT is used, the returned memory will be zeroed.

The mapped region is not guaranteed to be physically contiguous in memory. Physically contiguous buffers should be allocated statically and pinned at build time.

Pages mapped in this way have write-back cache settings.

The returned virtual memory pointer will be page-aligned. The size parameter, and any base address for re-mapping purposes, must be page-aligned.

Note that the allocation includes two guard pages immediately before and after the requested region. The total size of the allocation will be the requested size plus the size of these two guard pages.

The K_MEM_MAP_* control flags alter the behavior of this function; see the documentation of each flag for details.

Parameters
size: Size of the memory mapping. This must be page-aligned.
flags: K_MEM_PERM_*, K_MEM_MAP_* control flags.
Returns
The mapped memory location, or NULL if insufficient virtual address space, insufficient physical memory to establish the mapping, or insufficient memory for paging structures.
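
A minimal sketch of a typical call (illustrative only; CONFIG_MMU_PAGE_SIZE is assumed to give the page size on MMU targets, and the helper name is hypothetical):

#include <zephyr.h>
#include <sys/mem_manage.h>

/* Map four pages of zero-filled, read/write anonymous memory. */
void *map_scratch_buffer(void)
{
        size_t size = 4 * CONFIG_MMU_PAGE_SIZE; /* page-aligned by construction */
        void *buf = k_mem_map(size, K_MEM_PERM_RW);

        if (buf == NULL) {
                /* Out of virtual address space, physical memory,
                 * or memory for paging structures.
                 */
                return NULL;
        }

        return buf;
}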

◆ k_mem_region_align()

size_t k_mem_region_align ( uintptr_t *  aligned_addr,
size_t *  aligned_size,
uintptr_t  addr,
size_t  size,
size_t  align 
)

Given an arbitrary region, provide an aligned region that covers it.

The returned region will have both its base address and size aligned to the provided alignment value.

Parameters
aligned_addr: [out] Aligned address
aligned_size: [out] Aligned region size
addr: Region base address
size: Region size
align: What to align the address and size to
Return values
offset: Offset between aligned_addr and addr
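
A minimal sketch (illustrative only; the 4096-byte alignment and the helper name are assumptions for the example):

#include <zephyr.h>
#include <sys/mem_manage.h>

/* Illustrative helper: compute the smallest 4 KiB-aligned region that
 * fully covers an arbitrary buffer.
 */
void cover_buffer(uintptr_t buf_addr, size_t buf_size)
{
        uintptr_t aligned_addr;
        size_t aligned_size;
        size_t offset;

        offset = k_mem_region_align(&aligned_addr, &aligned_size,
                                    buf_addr, buf_size, 4096);

        /* offset is the distance from aligned_addr up to buf_addr;
         * aligned_addr <= buf_addr and
         * aligned_addr + aligned_size >= buf_addr + buf_size.
         */
        ARG_UNUSED(offset);
}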

◆ k_mem_unmap()

void k_mem_unmap ( void *  addr,
size_t  size 
)

Un-map mapped memory

This removes a memory mapping for the provided page-aligned region. Associated page frames will be freed, and the kernel may re-use the associated virtual address region. Any paged-out data pages may be discarded.

Calling this function on a region which was not mapped to begin with is undefined behavior.

Parameters
addr: Page-aligned memory region base virtual address
size: Page-aligned memory region size
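
A minimal sketch pairing this with k_mem_map() (illustrative only; the size must match what was mapped, CONFIG_MMU_PAGE_SIZE is assumed to give the page size, and the helper name is hypothetical):

#include <zephyr.h>
#include <sys/mem_manage.h>

/* Illustrative round trip: map a page-aligned region, use it, unmap it. */
void scratch_round_trip(void)
{
        size_t size = 2 * CONFIG_MMU_PAGE_SIZE;
        void *buf = k_mem_map(size, K_MEM_PERM_RW);

        if (buf == NULL) {
                return;
        }

        /* ... use buf ... */

        /* Frees the page frames; the virtual region may be re-used by the
         * kernel and any paged-out contents discarded.
         */
        k_mem_unmap(buf, size);
}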