Preface
A brief introduction to MMC cards, eMMC, SD cards, Flash, TF cards, and DDR.
MMC card
(Multi Media Card): a "multimedia card", a type of non-volatile storage device.
eMMC
(embedded Multi Media Card): used to store the operating system, applications, and user data. An embedded-storage standard defined by the MMC Association, aimed mainly at products such as mobile phones and tablets. The package integrates a controller and exposes a standard interface for managing the memory: eMMC = NAND flash + controller + standard interface.
SD card
(Secure Digital Card): a "secure digital card", removable non-volatile storage. Based on the MMC protocol, it is generally used in larger electronic devices such as computers and cameras.
Flash
NAND Flash: slow to read, large capacity, low cost. It does not use RAM-style random access; reads are performed one block at a time, typically 512 bytes per read. The interface must follow the NAND protocol, reads work differently from SRAM, and code cannot execute directly from it.
NOR Flash: fast to read, small capacity, expensive. Reads work like ordinary SDRAM: the interface is SRAM-like, addresses can be accessed directly, and code stored in NOR Flash can be executed in place.
TF card
T-Flash, also called a micro SD card, i.e. a miniature SD card, based on NAND Flash technology. Removable non-volatile storage, typically used in mobile phones.
DDR
(Double Data Rate): provides fast working memory that holds running programs and temporary data. It belongs to the SDRAM (synchronous dynamic random-access memory) family; its high-speed data transfer, large capacity, and low power consumption give modern electronic devices strong temporary storage and data-access capability. It is widely embedded in servers, workstations, PCs, consumer electronics, automotive, and other system designs.
Zephyr file system overview
A file system is the part of the operating system software that manages files.
The Zephyr SDK currently supports two file systems: LittleFS and FatFS.
See the official documentation for details: https://docs.zephyrproject.org/latest/services/file_system/index.html
LittleFS architecture diagram:
FatFS architecture diagram:
Logical flow:
Rolling-log file system scheme
Zephyr currently does not support pointing LittleFS at a single partition of an eMMC/SD (block) device via a partition label or partition node; the application has to agree on the offset/size by convention. Flash devices, by contrast, can be addressed per partition: Zephyr officially recommends defining the partitions in the devicetree (fixed-partitions) and using FIXED_PARTITION_ID(label)
or DEVICE_DT_GET(DT_NODELABEL(xxx))
as fs_mount_t.storage_dev
(a minimal sketch follows below).
Official documentation: https://docs.zephyrproject.org/latest/services/storage/flash_map/flash_map.html
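For the flash-device case just mentioned, this is roughly what the recommended fixed-partition mount looks like. It is only a sketch for comparison with the disk-access approach used in this article, and it assumes a flash fixed-partition node labelled storage_partition and a mount point of "/lfs" (both hypothetical here):
#include <zephyr/fs/fs.h>
#include <zephyr/fs/littlefs.h>
#include <zephyr/storage/flash_map.h>
/* Default LittleFS configuration backed by the flash area API. */
FS_LITTLEFS_DECLARE_DEFAULT_CONFIG(lfs_data);
static struct fs_mount_t lfs_flash_mnt = {
	.type = FS_LITTLEFS,
	.fs_data = &lfs_data,
	/* Flash-map ID of the fixed-partition node labelled storage_partition. */
	.storage_dev = (void *)FIXED_PARTITION_ID(storage_partition),
	.mnt_point = "/lfs",
};
/* Later, e.g. in main(): fs_mount(&lfs_flash_mnt); */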
The rolling-log scheme combines LittleFS's power-loss protection with an index-scanning approach that keeps file-system start-up time from growing after a large amount of writing. The eMMC disk is split into two regions: one dedicated to LittleFS for storing the file index, and a larger one used as a circular data log. This article does not describe the scheme in detail; it is only mentioned briefly here.
What is disk partition management
Disk partition management is the process of dividing a physical disk into several independent logical regions (partitions) and managing them; each partition is used independently, which simplifies management and can improve performance.
This article uses a software-defined convention: Zephyr's devicetree defines the eMMC disk partitions, one partition is mounted with LittleFS and exercised with read/write tests, the other is exercised with the raw API, and the test checks whether operations on the two partitions affect each other. The verification shows that the approach works.
LittleFS
: A lightweight file system designed for embedded systems. It provides the standard file-system operations such as creating, reading, writing, and deleting files. Advantages: several data-protection mechanisms, such as a log-structured layout and metadata checksums, effectively prevent data corruption on power loss and similar faults. Drawback: after a large number of write operations, the system has to load and process all of the file index information at start-up, which slows it down.
Official documentation: https://docs.zephyrproject.org/latest/samples/subsys/fs/littlefs/README.html#littlefs
Raw API
: A low-level interface that reads and writes the storage device directly; in some scenarios it offers higher performance and lower overhead.
API: https://docs.zephyrproject.org/latest/doxygen/html/group__disk__access__interface.html#gaba3fead8c0ce65945b440bf824fd4144
Key terms: dynamic partitioning, storage partitioning, partition management.
Structure
zephyr/
└── samples/emmc_dual_api/
├── src/main.c
├── boards/xxx.overlay
├── prj.conf
└── CMakeLists.txt
xxx.overlay
/**
* Copyright (c) 2024 SYSTech Co.
* SPDX-License-Identifier: Apache-2.0
*/
#include <mem.h>
&mmc0 {
status = "okay";
non-removable;
mmc {
compatible = "zephyr,mmc-disk";
status = "okay";
bus-width = <4>;
disk-name = "MMC";
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
// LittleFS
storage_partition: partition@0 {
label = "filesystem";
reg = <0x00000000 DT_SIZE_M(1)>; // 1MB @ 0 offset
};
// RAW API
data_partition: partition@100000 {
label = "raw_data";
reg = <0x00100000 DT_SIZE_M(1)>; // 1MB @ 1MB offset
};
};
};
};
non-removable
: marks the eMMC device as non-removable.
compatible = "zephyr,mmc-disk";
: binds the node to Zephyr's MMC disk driver.
bus-width = <4>
: the eMMC bus width is 4 bits.
disk-name = "MMC";
: names the disk "MMC"; the code accesses the device through this name.
partitions {...}
: defines the fixed partitions on the eMMC.
Execution flow chart:
main.c
The full source code is given at the end of this article.
The data structures and APIs used by the program are listed below; see the corresponding header files for details.
Data structures:
fs_mount_t
: The LittleFS file system is mounted on the storage device named "MMC" at the mount point "/MMC:", and the underlying storage is accessed via disk_access.
#define MMC_DEVICE_NAME "MMC"
#define MOUNT_POINT "/MMC:"
static struct fs_littlefs lfsfs;
static struct fs_mount_t lfs_mount_point = {
.type = FS_LITTLEFS,
.fs_data = &lfsfs,
.storage_dev = (void *)MMC_DEVICE_NAME,
.mnt_point = MOUNT_POINT,
.flags = FS_MOUNT_FLAG_USE_DISK_ACCESS,
};
partition_info
: Holds a disk partition's offset and size.
struct partition_info {
uint32_t offset;
uint32_t size;
};
Devicetree macros
DT_NODELABEL
: gets a devicetree node by its node label.
DT_REG_ADDR
: gets the node's register address (here, the partition offset).
DT_REG_SIZE
: gets the node's register size (here, the partition size).
#define LFS_PART_NODE DT_NODELABEL(storage_partition)
info->offset = DT_REG_ADDR(LFS_PART_NODE);
info->size = DT_REG_SIZE(LFS_PART_NODE);
#define RAW_PART_NODE DT_NODELABEL(data_partition)
info->offset = DT_REG_ADDR(RAW_PART_NODE);
info->size = DT_REG_SIZE(RAW_PART_NODE);
Disk access API
Operates directly on the storage device, performing low-level sector reads and writes. A minimal usage sketch follows the list.
disk_access_init
: initialize the disk device
disk_access_ioctl
: disk control operations (get the sector size, sector count, etc.)
disk_access_write
: write disk sectors
disk_access_read
: read disk sectors
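The following is a quick orientation only, not part of the sample: a minimal sketch of the usual call order (initialize the disk, query its geometry via ioctl, then read), assuming the disk name "MMC" and a 512-byte sector size.
#include <zephyr/kernel.h>
#include <zephyr/storage/disk_access.h>
static int read_first_sector(void)
{
	static uint8_t buf[512];   /* assumes a 512-byte sector size */
	uint32_t sector_count;
	int ret;
	ret = disk_access_init("MMC");   /* bring up the "MMC" disk */
	if (ret != 0) {
		return ret;
	}
	ret = disk_access_ioctl("MMC", DISK_IOCTL_GET_SECTOR_COUNT, &sector_count);
	if (ret != 0) {
		return ret;
	}
	return disk_access_read("MMC", buf, 0, 1);   /* one sector starting at sector 0 */
}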
File system API (a minimal usage sketch follows the list)
fs_file_t_init
: initialize a file struct
fs_dir_t_init
: initialize a directory struct
fs_opendir
: open a directory
fs_readdir
: read a directory entry
fs_closedir
: close a directory
fs_mount
: mount a file system
fs_unmount
: unmount a file system
fs_open
: open a file
fs_write
: write to a file
fs_close
: close a file
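Again only as a sketch, tying a few of these calls together: open a hypothetical file under the "/MMC:" mount point used in this sample, write to it, and close it. It assumes the file system has already been mounted.
#include <zephyr/fs/fs.h>
static int write_demo_file(void)
{
	struct fs_file_t file;
	int ret;
	fs_file_t_init(&file);   /* the handle must be initialized before use */
	ret = fs_open(&file, "/MMC:/demo.txt", FS_O_CREATE | FS_O_WRITE);
	if (ret < 0) {
		return ret;
	}
	ret = fs_write(&file, "hello", 5);   /* returns bytes written, or < 0 on error */
	fs_close(&file);
	return (ret < 0) ? ret : 0;
}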
Initializing disk access:
The eMMC device is initialized with the disk access API, then ioctl calls into the driver fetch the device's sector size and sector count, which are printed to check that the devicetree disk setup took effect.
/**
* @brief Initialize disk access system
*
* @return 0 on success, negative on error
*/
static int disk_access_init_system(void)
{
int ret;
uint32_t sector_size, sector_count;
/* Initialize disk subsystem */
ret = disk_access_init(MMC_DEVICE_NAME);
if (ret) {
LOG_ERR("Failed to initialize disk access for %s: %d", MMC_DEVICE_NAME, ret);
return ret;
}
/* Check if the disk is ready */
ret = disk_access_ioctl(MMC_DEVICE_NAME, DISK_IOCTL_GET_SECTOR_SIZE, &sector_size);
if (ret) {
LOG_ERR("Failed to get sector size: %d", ret);
return ret;
}
ret = disk_access_ioctl(MMC_DEVICE_NAME, DISK_IOCTL_GET_SECTOR_COUNT, &sector_count);
if (ret) {
LOG_ERR("Failed to get sector count: %d", ret);
return ret;
}
LOG_INF(" - Sector size: %u bytes, Sector count: %u, Total capacity: %u MB", sector_size,
sector_count, (sector_count * sector_size) / (1024 * 1024));
return 0;
}
Writing data with the raw API:
A get_partition_info
function takes a char
pointer to a partition label and a pointer to a partition_info structure; it matches the label with the C library function strcmp
and then uses the devicetree macros to fill in the offset and size of the corresponding partition node.
The raw write is wrapped in raw_api_write
, which calls get_partition_info
to get the partition information and defines the starting sector sector_start
, the total sector count total_sectors
, the maximum number of sectors per write sectors_per_write
, and the running index written_sectors
.
A while loop keeps writing until the partition is full; when the remaining sectors are fewer than one full buffer's worth, only the remainder is written, so the loop never writes past the end of the partition.
/**
* @brief Get partition info
*
* @param label Partition label to look up
* @param info Pointer to partition_info structure to fill
*/
static void get_partition_info(const char *label, struct partition_info *info)
{
if (strcmp(label, "filesystem") == 0) {
#define LFS_PART_NODE DT_NODELABEL(storage_partition)
info->offset = DT_REG_ADDR(LFS_PART_NODE);
info->size = DT_REG_SIZE(LFS_PART_NODE);
} else if (strcmp(label, "raw_data") == 0) {
#define RAW_PART_NODE DT_NODELABEL(data_partition)
info->offset = DT_REG_ADDR(RAW_PART_NODE);
info->size = DT_REG_SIZE(RAW_PART_NODE);
} else {
LOG_ERR("Unknown partition label: %s", label);
info->offset = 0;
info->size = 0;
}
}
/**
* @brief Write data to raw partition using disk API
*
* @param value Value to fill the buffer with
* @return 0 on success, negative on error
*/
static int raw_api_write(uint8_t value)
{
int ret;
struct partition_info raw_info;
get_partition_info("raw_data", &raw_info);
uint32_t sector_start = raw_info.offset / SECTOR_SIZE;
uint32_t total_sectors = raw_info.size / SECTOR_SIZE;
uint32_t sectors_per_write = TEST_BUFFER_SIZE / SECTOR_SIZE;
uint32_t written_sectors = 0;
memset(test_buffer, value, TEST_BUFFER_SIZE);
while (written_sectors < total_sectors) {
uint32_t write_now = sectors_per_write;
if (written_sectors + write_now > total_sectors) {
write_now = total_sectors - written_sectors;
}
ret = disk_access_write(MMC_DEVICE_NAME, test_buffer,
sector_start + written_sectors, write_now);
if (ret) {
LOG_ERR("Failed to write to raw partition at sector %u: %d",
sector_start + written_sectors, ret);
return ret;
}
written_sectors += write_now;
}
LOG_INF(" ■ Successfully wrote %u bytes to raw partition", raw_info.size);
return 0;
}
LittleFS initialization and mounting:
Call get_partition_info
to get the LittleFS partition information, print it to check that the partitioning is correct, and then mount the file system with fs_mount
.
/**
* @brief Initialize and mount LittleFS
*
* @return 0 on success, negative on error
*/
static int littlefs_init_and_mount(void)
{
static struct partition_info lfs_info;
get_partition_info("filesystem", &lfs_info);
lfsfs.cfg.block_size = SECTOR_SIZE;
lfsfs.cfg.block_count = lfs_info.size / SECTOR_SIZE;
LOG_INF("LittleFS block_size=%u, block_count=%u, total=%u bytes", lfsfs.cfg.block_size,
lfsfs.cfg.block_count, lfsfs.cfg.block_size * lfsfs.cfg.block_count);
return fs_mount(&lfs_mount_point);
}
LittleFS cleanup:
Define a directory handle dir
and a directory-entry variable entry
so the directory and its entries can be worked with.
The file-system API fs_dir_t_init
initializes the directory handle, fs_opendir
opens the directory, the first character entry.name[0]
indicates whether all entries have been read, snprintf
joins the mount-point path and the entry name into a full path, and fs_unlink()
deletes the file.
static int littlefs_clean_all(void)
{
struct fs_dir_t dir;
struct fs_dirent entry;
char path[128];
int ret;
fs_dir_t_init(&dir);
ret = fs_opendir(&dir, MOUNT_POINT);
if (ret) {
LOG_ERR("fs_opendir failed: %d", ret);
return ret;
}
while (fs_readdir(&dir, &entry) == 0) {
if (entry.name[0] == 0) {
break;
}
snprintf(path, sizeof(path), "%s/%s", MOUNT_POINT, entry.name);
if (entry.type == FS_DIR_ENTRY_FILE) {
ret = fs_unlink(path);
if (ret) {
LOG_WRN("Failed to delete file %s: %d", path, ret);
}
} else if (entry.type == FS_DIR_ENTRY_DIR) {
LOG_WRN("Subdir %s not deleted (not implemented)", path);
}
}
fs_closedir(&dir);
return 0;
}
LittleFS test-file creation:
Initialize the file handle with the file-system API, fill a buffer with data, open the file bigfile.bin
in create-and-write mode,
write the buffer to the file, close the file, and return 0 on success.
/**
* @brief Create test files in LittleFS
*
* @return 0 on success, negative on error
*/
static int littlefs_create_test_files(void)
{
struct fs_file_t file;
int ret = 0;
const size_t big_file_size = 900 * 1024; /* 900KB */
static char big_data[900 * 1024];
memset(big_data, 0x5A, sizeof(big_data));
fs_file_t_init(&file);
ret = fs_open(&file, MOUNT_POINT "/bigfile.bin", FS_O_CREATE | FS_O_WRITE);
if (ret) {
LOG_ERR("Failed to create bigfile.bin: %d", ret);
return ret;
}
ssize_t written = fs_write(&file, big_data, big_file_size);
if (written != big_file_size) {
LOG_ERR("Failed to write bigfile.bin: %zd/%zu", written, big_file_size);
fs_close(&file);
return -EIO;
}
fs_close(&file);
LOG_INF(" - Created bigfile.bin (%zu bytes)", big_file_size);
return 0;
}
Subsequent steps:
Read back the raw partition with the raw API and verify its integrity.
Unmount LittleFS.
Read back the raw partition with the raw API and verify its integrity again.
Remount LittleFS and verify that the file is intact.
main.c full source code
/*
* Copyright (c) 2025 SYSTech Co.
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/kernel.h>
#include <zephyr/device.h>
#include <zephyr/devicetree.h>
#include <zephyr/logging/log.h>
#include <zephyr/fs/fs.h>
#include <zephyr/fs/littlefs.h>
#include <zephyr/storage/disk_access.h>
#include <string.h>
#include <stdio.h>
LOG_MODULE_REGISTER(emmc_dual_api, LOG_LEVEL_DBG);
/* Test data values */
#define INITIAL_VALUE_A5 0xA5
/* Buffer sizes */
#define SECTOR_SIZE 512
#define TEST_BUFFER_SIZE 4096
/* LittleFS mount point */
#define MOUNT_POINT "/MMC:"
/* Device names */
#define MMC_DEVICE_NAME "MMC"
/* LittleFS configuration */
static struct fs_littlefs lfsfs;
/* Global variables */
static struct fs_mount_t lfs_mount_point = {
.type = FS_LITTLEFS,
.fs_data = &lfsfs,
.storage_dev = (void *)MMC_DEVICE_NAME,
.mnt_point = MOUNT_POINT,
.flags = FS_MOUNT_FLAG_USE_DISK_ACCESS,
};
static uint8_t test_buffer[TEST_BUFFER_SIZE];
static uint8_t read_buffer[TEST_BUFFER_SIZE];
struct partition_info {
uint32_t offset;
uint32_t size;
};
/**
* @brief Get partition info
*
* @param label Partition label to look up
* @param info Pointer to partition_info structure to fill
*/
static void get_partition_info(const char *label, struct partition_info *info)
{
if (strcmp(label, "filesystem") == 0) {
#define LFS_PART_NODE DT_NODELABEL(storage_partition)
info->offset = DT_REG_ADDR(LFS_PART_NODE);
info->size = DT_REG_SIZE(LFS_PART_NODE);
} else if (strcmp(label, "raw_data") == 0) {
#define RAW_PART_NODE DT_NODELABEL(data_partition)
info->offset = DT_REG_ADDR(RAW_PART_NODE);
info->size = DT_REG_SIZE(RAW_PART_NODE);
} else {
LOG_ERR("Unknown partition label: %s", label);
info->offset = 0;
info->size = 0;
}
}
/**
* @brief Initialize disk access system
*
* @return 0 on success, negative on error
*/
static int disk_access_init_system(void)
{
int ret;
uint32_t sector_size, sector_count;
/* Initialize disk subsystem */
ret = disk_access_init(MMC_DEVICE_NAME);
if (ret) {
LOG_ERR("Failed to initialize disk access for %s: %d", MMC_DEVICE_NAME, ret);
return ret;
}
/* Check if the disk is ready */
ret = disk_access_ioctl(MMC_DEVICE_NAME, DISK_IOCTL_GET_SECTOR_SIZE, &sector_size);
if (ret) {
LOG_ERR("Failed to get sector size: %d", ret);
return ret;
}
ret = disk_access_ioctl(MMC_DEVICE_NAME, DISK_IOCTL_GET_SECTOR_COUNT, &sector_count);
if (ret) {
LOG_ERR("Failed to get sector count: %d", ret);
return ret;
}
LOG_INF(" - Sector size: %u bytes, Sector count: %u, Total capacity: %u MB", sector_size,
sector_count, (sector_count * sector_size) / (1024 * 1024));
return 0;
}
/**
* @brief Write data to raw partition using disk API
*
* @param value Value to fill the buffer with
* @return 0 on success, negative on error
*/
static int raw_api_write(uint8_t value)
{
int ret;
struct partition_info raw_info;
get_partition_info("raw_data", &raw_info);
uint32_t sector_start = raw_info.offset / SECTOR_SIZE;
uint32_t total_sectors = raw_info.size / SECTOR_SIZE;
uint32_t sectors_per_write = TEST_BUFFER_SIZE / SECTOR_SIZE;
uint32_t written_sectors = 0;
memset(test_buffer, value, TEST_BUFFER_SIZE);
while (written_sectors < total_sectors) {
uint32_t write_now = sectors_per_write;
if (written_sectors + write_now > total_sectors) {
write_now = total_sectors - written_sectors;
}
ret = disk_access_write(MMC_DEVICE_NAME, test_buffer,
sector_start + written_sectors, write_now);
if (ret) {
LOG_ERR("Failed to write to raw partition at sector %u: %d",
sector_start + written_sectors, ret);
return ret;
}
written_sectors += write_now;
}
LOG_INF(" ■ Successfully wrote %u bytes to raw partition", raw_info.size);
return 0;
}
/**
* @brief Read data from raw partition using disk API
*
* @param expected_value Expected value to compare against
* @return 0 on success, negative on error
*/
static int raw_api_read_and_verify(uint8_t expected_value)
{
int ret;
struct partition_info raw_info;
get_partition_info("raw_data", &raw_info);
uint32_t sector_start = raw_info.offset / SECTOR_SIZE;
uint32_t total_sectors = raw_info.size / SECTOR_SIZE;
uint32_t sectors_per_read = TEST_BUFFER_SIZE / SECTOR_SIZE;
uint32_t read_sectors = 0;
while (read_sectors < total_sectors) {
uint32_t read_now = sectors_per_read;
if (read_sectors + read_now > total_sectors) {
read_now = total_sectors - read_sectors;
}
memset(read_buffer, 0, TEST_BUFFER_SIZE);
ret = disk_access_read(MMC_DEVICE_NAME, read_buffer, sector_start + read_sectors,
read_now);
if (ret) {
LOG_ERR("Failed to read from raw partition at sector %u: %d",
sector_start + read_sectors, ret);
return ret;
}
for (int i = 0; i < read_now * SECTOR_SIZE; i++) {
if (read_buffer[i] != expected_value) {
LOG_ERR("Data mismatch at offset %u: expected 0x%02X, got 0x%02X",
read_sectors * SECTOR_SIZE + i, expected_value,
read_buffer[i]);
return -EIO;
}
}
read_sectors += read_now;
}
LOG_INF(" ■ Raw partition data verification successful - all %u bytes match 0x%02X",
raw_info.size, expected_value);
return 0;
}
/**
* @brief Initialize and mount LittleFS
*
* @return 0 on success, negative on error
*/
static int littlefs_init_and_mount(void)
{
static struct partition_info lfs_info;
get_partition_info("filesystem", &lfs_info);
lfsfs.cfg.block_size = SECTOR_SIZE;
lfsfs.cfg.block_count = lfs_info.size / SECTOR_SIZE;
LOG_INF("LittleFS block_size=%u, block_count=%u, total=%u bytes", lfsfs.cfg.block_size,
lfsfs.cfg.block_count, lfsfs.cfg.block_size * lfsfs.cfg.block_count);
return fs_mount(&lfs_mount_point);
}
/**
* @brief Create test files in LittleFS
*
* @return 0 on success, negative on error
*/
static int littlefs_create_test_files(void)
{
struct fs_file_t file;
int ret = 0;
const size_t big_file_size = 900 * 1024; /* 900KB */
static char big_data[900 * 1024];
memset(big_data, 0x5A, sizeof(big_data));
fs_file_t_init(&file);
ret = fs_open(&file, MOUNT_POINT "/bigfile.bin", FS_O_CREATE | FS_O_WRITE);
if (ret) {
LOG_ERR("Failed to create bigfile.bin: %d", ret);
return ret;
}
ssize_t written = fs_write(&file, big_data, big_file_size);
if (written != big_file_size) {
LOG_ERR("Failed to write bigfile.bin: %zd/%zu", written, big_file_size);
fs_close(&file);
return -EIO;
}
fs_close(&file);
LOG_INF(" - Created bigfile.bin (%zu bytes)", big_file_size);
return 0;
}
/**
* @brief Verify test files in LittleFS
*
* @return 0 on success, negative on error
*/
static int littlefs_verify_test_files(void)
{
struct fs_file_t file;
struct fs_dirent entry;
int ret;
char read_buf[256];
/* List directory contents */
LOG_INF(" -> Directory listing of %s:", MOUNT_POINT);
struct fs_dir_t dir;
fs_dir_t_init(&dir);
ret = fs_opendir(&dir, MOUNT_POINT);
if (ret) {
LOG_ERR("Failed to open directory %s: %d", MOUNT_POINT, ret);
return ret;
}
while (fs_readdir(&dir, &entry) == 0) {
if (entry.name[0] == 0) {
break;
}
LOG_INF(" - %s (%s, size: %zu)", entry.name,
(entry.type == FS_DIR_ENTRY_FILE) ? "file" : "dir", entry.size);
}
fs_closedir(&dir);
const char *bigfile = MOUNT_POINT "/bigfile.bin";
fs_file_t_init(&file);
ret = fs_open(&file, bigfile, FS_O_READ);
if (ret) {
LOG_ERR("Failed to open file %s: %d", bigfile, ret);
return ret;
}
memset(read_buf, 0, sizeof(read_buf));
ret = fs_read(&file, read_buf, sizeof(read_buf) - 1);
if (ret < 0) {
LOG_ERR("Failed to read file %s: %d", bigfile, ret);
fs_close(&file);
return ret;
}
fs_close(&file);
return 0;
}
/**
* @brief Unmount LittleFS
*
* @return 0 on success, negative on error
*/
static int littlefs_unmount(void)
{
int ret;
ret = fs_unmount(&lfs_mount_point);
if (ret) {
LOG_ERR("Failed to unmount LittleFS: %d", ret);
return ret;
}
return 0;
}
static int littlefs_clean_all(void)
{
struct fs_dir_t dir;
struct fs_dirent entry;
char path[128];
int ret;
fs_dir_t_init(&dir);
ret = fs_opendir(&dir, MOUNT_POINT);
if (ret) {
LOG_ERR("fs_opendir failed: %d", ret);
return ret;
}
while (fs_readdir(&dir, &entry) == 0) {
if (entry.name[0] == 0) {
break;
}
snprintf(path, sizeof(path), "%s/%s", MOUNT_POINT, entry.name);
if (entry.type == FS_DIR_ENTRY_FILE) {
ret = fs_unlink(path);
if (ret) {
LOG_WRN("Failed to delete file %s: %d", path, ret);
}
} else if (entry.type == FS_DIR_ENTRY_DIR) {
LOG_WRN("Subdir %s not deleted (not implemented)", path);
}
}
fs_closedir(&dir);
return 0;
}
/**
* @brief Remount LittleFS and verify data persistence
*
* @return 0 on success, negative on error
*/
static int littlefs_remount_and_verify(void)
{
int ret;
/* Remount the filesystem */
ret = fs_mount(&lfs_mount_point);
if (ret) {
LOG_ERR("Failed to remount LittleFS: %d", ret);
return ret;
}
/* Verify files still exist and contain correct data */
ret = littlefs_verify_test_files();
if (ret) {
LOG_ERR("File verification failed after remount");
return ret;
}
LOG_INF(" ■ All files verified successfully after remount - data persistence confirmed");
return 0;
}
/**
* @brief Main application entry point
*/
int main(void)
{
int ret;
LOG_INF("====== eMMC Dual API Partition Isolation Test ======");
/* ===== STEP 0: Initialize disk access system ===== */
ret = disk_access_init_system();
if (ret) {
LOG_ERR("Step 0 failed: %d", ret);
return ret;
}
/* ===== STEP 1: Write initial data to raw partition ===== */
ret = raw_api_write(INITIAL_VALUE_A5);
if (ret) {
LOG_ERR("Step 1 failed: %d", ret);
return ret;
}
/* ===== STEP 2: Initialize LittleFS and create test files ===== */
ret = littlefs_init_and_mount();
if (ret) {
LOG_ERR("Step 2a failed: %d", ret);
return ret;
}
/* ===== Clear the LittleFS partition ===== */
ret = littlefs_clean_all();
if (ret) {
LOG_ERR("LittleFS clean failed: %d", ret);
return ret;
}
LOG_INF(" ■ LittleFS partition cleaned successfully");
ret = littlefs_create_test_files();
if (ret) {
LOG_ERR("Step 2b failed: %d", ret);
return ret;
}
/* ===== STEP 3: Verify raw partition data integrity ===== */
ret = raw_api_read_and_verify(INITIAL_VALUE_A5);
if (ret) {
LOG_ERR(" PARTITION ISOLATION FAILED: Raw partition data was corrupted by "
"LittleFS!");
return ret;
}
/* ===== STEP 4: Test remount persistence ===== */
/* Unmount LittleFS */
ret = littlefs_unmount();
if (ret) {
LOG_ERR("Step 4a failed: %d", ret);
return ret;
}
LOG_INF(" ■ LittleFS successfully unmounted");
/* Verify raw partition is still intact after unmount */
ret = raw_api_read_and_verify(INITIAL_VALUE_A5);
if (ret) {
LOG_ERR(" Raw partition data corrupted after LittleFS unmount!");
return ret;
}
/* Remount and verify LittleFS data */
ret = littlefs_remount_and_verify();
if (ret) {
LOG_ERR("Step 4b failed: %d", ret);
return ret;
}
/* Final verification of raw partition */
ret = raw_api_read_and_verify(INITIAL_VALUE_A5);
if (ret) {
LOG_ERR(" Raw partition data corrupted after LittleFS remount!");
return ret;
}
/* ===== FINAL SUCCESS MESSAGE ===== */
LOG_INF("\n====== TEST COMPLETED SUCCESSFULLY ======");
LOG_INF(" ■ Partition isolation verified");
LOG_INF(" ■ Raw API partition remains unaffected by LittleFS operations");
LOG_INF(" ■ LittleFS data persists across mount/unmount cycles");
LOG_INF(" ■ Both partitions operate independently without interference");
return 0;
}
prj.conf
# Logging configuration for detailed debug output
CONFIG_LOG=y
# Basic system configuration
CONFIG_MAIN_STACK_SIZE=32768
# Additional system safety configurations
CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE=16384
# Disk access and eMMC configuration
CONFIG_DISK_ACCESS=y
CONFIG_DISK_DRIVER_MMC=y
CONFIG_MMC_STACK=y
CONFIG_MMC_VOLUME_NAME="MMC"
# File system configuration
CONFIG_FILE_SYSTEM=y
CONFIG_FILE_SYSTEM_LITTLEFS=y
# LittleFS backend configuration - use block device mode with partition awareness
CONFIG_FS_LITTLEFS_BLK_DEV=y
CONFIG_FS_LITTLEFS_HEAP_PER_ALLOC_OVERHEAD_SIZE=128
# Memory pool configuration - increase for LittleFS stability
CONFIG_HEAP_MEM_POOL_SIZE=524288
# LittleFS specific configurations for stability
CONFIG_FS_LITTLEFS_CACHE_SIZE=512
CONFIG_FS_LITTLEFS_LOOKAHEAD_SIZE=128
CONFIG_FS_LITTLEFS_BLOCK_CYCLES=512
# Logging configuration
CONFIG_SERIAL=y
CONFIG_LOG_MODE_IMMEDIATE=y
CONFIG_LOG_BACKEND_UART=y
CMakeLists.txt
cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(emmc_dual_api)
target_sources(app PRIVATE src/main.c)
LOG
*** Booting Zephyr OS build rbs2530/v0.29-8-g2ddb1e82f3dd ***
[00:00:00.008,000] <inf> emmc_dual_api: ====== eMMC Dual API Partition Isolation Test ======
[00:00:00.221,000] <wrn> sdhc_dw: Command(8) timeout
[00:00:00.236,000] <inf> sd: Card does not support CMD8, assuming legacy card
[00:00:00.300,000] <inf> sd: CID decoding not supported for MMC
[00:00:00.309,000] <inf> sd: Card block count is 7618560, block size is 512
[00:00:00.317,000] <inf> emmc_dual_api: - Sector size: 512 bytes, Sector count: 7618560, Total capacity: 3720 MB
[00:00:00.594,000] <inf> emmc_dual_api: ■ Successfully wrote 1048576 bytes to raw partition
[00:00:00.603,000] <inf> emmc_dual_api: LittleFS block_size=512, block_count=2048, total=1048576 bytes
[00:00:00.613,000] <inf> littlefs: LittleFS version 2.9, disk version 2.1
[00:00:00.620,000] <inf> littlefs: FS at MMC: is 7618560 0x200-byte blocks with 512 cycle
[00:00:00.628,000] <inf> littlefs: sizes: rd 512 ; pr 512 ; ca 512 ; la 2048
[00:00:00.641,000] <inf> emmc_dual_api: ■ LittleFS partition cleaned successfully
[00:00:05.891,000] <inf> emmc_dual_api: - Created bigfile.bin (921600 bytes)
[00:00:06.009,000] <inf> emmc_dual_api: ■ Raw partition data verification successful - all 1048576 bytes match 0xA5
[00:00:06.020,000] <inf> littlefs: /MMC: unmounted
[00:00:06.025,000] <inf> emmc_dual_api: ■ LittleFS successfully unmounted
[00:00:06.144,000] <inf> emmc_dual_api: ■ Raw partition data verification successful - all 1048576 bytes match 0xA5
[00:00:06.155,000] <inf> littlefs: LittleFS version 2.9, disk version 2.1
[00:00:06.162,000] <inf> littlefs: FS at MMC: is 7618560 0x200-byte blocks with 512 cycle
[00:00:06.171,000] <inf> littlefs: sizes: rd 512 ; pr 512 ; ca 512 ; la 2048
[00:00:06.179,000] <inf> emmc_dual_api: -> Directory listing of /MMC::
[00:00:06.186,000] <inf> emmc_dual_api: - bigfile.bin (file, size: 921600)
[00:00:06.194,000] <inf> emmc_dual_api: ■ All files verified successfully after remount - data persistence confirmed
[00:00:06.316,000] <inf> emmc_dual_api: ■ Raw partition data verification successful - all 1048576 bytes match 0xA5
[00:00:06.327,000] <inf> emmc_dual_api:
====== TEST COMPLETED SUCCESSFULLY ======
[00:00:06.335,000] <inf> emmc_dual_api: ■ Partition isolation verified
[00:00:06.342,000] <inf> emmc_dual_api: ■ Raw API partition remains unaffected by LittleFS operations
[00:00:06.352,000] <inf> emmc_dual_api: ■ LittleFS data persists across mount/unmount cycles
[00:00:06.361,000] <inf> emmc_dual_api: ■ Both partitions operate independently without interference