Reputation: 361
I'm working on a kernel live-patch; some of the code in the live-patch module looks like this:
void list_checker(void)
{
    struct list_head *head, *iter;

    head = (struct list_head *)kallsyms_lookup_name("module_name:symbol_name");
    for (iter = head->next; iter != head; iter = iter->next) {
        // do something.
    }
}
This code gets the address of a kernel symbol (which has type struct list_head) and tries to iterate the list. But for some reason some nodes in the list may be broken, so the next pointer of a node can be invalid (NULL, 0xABABABAB, or some other random value), and dereferencing that next pointer may crash the kernel.
So, is there a way to check whether a pointer is safe to access?
I have checked two previous answers:
How to check a memory address is valid inside Linux kernel?
How can I tell if a virtual address has valid mapping in ARM linux kernel?
They tell me to use virt_addr_valid(). However, I have addresses that are certainly accessible, such as 0xFFFFFFFFA032C040, for which virt_addr_valid() always returns false, so I cannot use it to distinguish "accessible" from "non-accessible" addresses in my live-patch module.
Upvotes: 0
Views: 2978
Reputation: 676
It took me a while, but I found that kern_addr_valid(addr) in source/arch/x86/mm/init_64.c does the trick. It walks the page tables, accounting for large pages, to make sure the address is actually mapped.
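For illustration, here is a minimal sketch of how the list walk from the question could be guarded with it. Everything beyond the question's code is an assumption: checked_list_walk() and addr_ok are made-up names, "module_name:symbol_name" is the question's placeholder, and because kern_addr_valid() is generally not exported to modules, the sketch resolves it at run time via kallsyms_lookup_name() (a livepatch built with klp relocations may be able to call it directly). Note also that recent kernels have removed kern_addr_valid() entirely.

#include <linux/kallsyms.h>
#include <linux/list.h>

/* Hypothetical helper: walk a possibly corrupted list, validating each
 * node's address range with kern_addr_valid() before dereferencing it. */
static void checked_list_walk(void)
{
    typedef int (*kern_addr_valid_t)(unsigned long);
    kern_addr_valid_t addr_ok;
    struct list_head *head, *iter;

    addr_ok = (kern_addr_valid_t)kallsyms_lookup_name("kern_addr_valid");
    head = (struct list_head *)kallsyms_lookup_name("module_name:symbol_name");
    if (!addr_ok || !head)
        return;

    for (iter = head->next; iter != head; iter = iter->next) {
        /* Refuse to touch a node unless its whole struct is mapped. */
        if (!addr_ok((unsigned long)iter) ||
            !addr_ok((unsigned long)iter + sizeof(*iter) - 1))
            break;
        /* do something with the node */
    }
}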
Upvotes: 0
Reputation: 361
In my case, the memory address I want to check should have been allocated with kmalloc(), but it may have been polluted (i.e., overwritten with random values) by some bug.
virt_addr_valid() checks whether the address resides in the "kernel code area" or in the "direct mapping area" of the kernel (check this link for the x86_64 memory layout). Memory allocated by kmalloc() resides in the "direct mapping area", so virt_addr_valid() on kmalloc'ed memory is always true (a minimal demo of this is sketched below). On the other hand, in my experiments some addresses pass virt_addr_valid() yet are not actually accessible, and dereferencing them can still crash the machine. So I also need to make sure the address is correctly mapped in the page tables in order not to crash the machine.
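A minimal, self-contained sketch of that claim, for illustration only (the vav_demo module name is made up):

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/mm.h>

static int __init vav_demo_init(void)
{
    void *p = kmalloc(64, GFP_KERNEL);

    if (!p)
        return -ENOMEM;
    /* kmalloc memory lives in the direct mapping area, so this prints 1. */
    pr_info("virt_addr_valid(kmalloc ptr) = %d\n", virt_addr_valid(p));
    kfree(p);
    return 0;
}

static void __exit vav_demo_exit(void)
{
}

module_init(vav_demo_init);
module_exit(vav_demo_exit);
MODULE_LICENSE("GPL");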
So the solution contains two steps; make sure that:
1. virt_addr_valid() returns true for every address in the range, and
2. the range is actually mapped in the page tables.
Since the memory mapping in the "virt_addr_valid-is-true" area does not change, there is no need to hold a lock while walking the page tables.
Below is the code for x86_64 with 4-level page tables.
#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/pgtable.h>

/* Walk the page tables (x86_64, 4-level paging) and check that every page
 * covering [addr, addr + size) is present.  Kernel mappings are shared by
 * all page tables, so any live mm will do; current->mm is NULL for kernel
 * threads, so fall back to active_mm in that case. */
static bool page_mapping_exist(unsigned long addr, size_t size)
{
    pgd_t *pgd;
    pud_t *pud;
    pmd_t *pmd;
    pte_t *pte;
    struct mm_struct *mm = current->mm ? current->mm : current->active_mm;
    unsigned long end_addr;

    pgd = pgd_offset(mm, addr);
    if (unlikely(!pgd) || unlikely(pgd_none(*pgd)) || unlikely(!pgd_present(*pgd)))
        return false;

    pud = pud_offset(pgd, addr);
    if (unlikely(!pud) || unlikely(pud_none(*pud)) || unlikely(!pud_present(*pud)))
        return false;
    if (pud_large(*pud)) {
        /* 1 GiB page: the whole PUD range is mapped. */
        end_addr = (((addr >> PUD_SHIFT) + 1) << PUD_SHIFT) - 1;
        goto end;
    }

    pmd = pmd_offset(pud, addr);
    if (unlikely(!pmd) || unlikely(pmd_none(*pmd)) || unlikely(!pmd_present(*pmd)))
        return false;
    if (pmd_large(*pmd)) {
        /* 2 MiB page: the whole PMD range is mapped. */
        end_addr = (((addr >> PMD_SHIFT) + 1) << PMD_SHIFT) - 1;
        goto end;
    }

    /* pte_offset_kernel() needs no pte_unmap() pairing for kernel addresses. */
    pte = pte_offset_kernel(pmd, addr);
    if (unlikely(!pte) || unlikely(!pte_present(*pte)))
        return false;
    /* Last address of the 4 KiB page containing addr. */
    end_addr = (((addr >> PAGE_SHIFT) + 1) << PAGE_SHIFT) - 1;

end:
    if (end_addr >= addr + size - 1)
        return true;
    /* The range spans more than one page: check the remainder as well. */
    return page_mapping_exist(end_addr + 1, size - (end_addr - addr + 1));
}

static bool addr_valid(unsigned long addr, size_t size)
{
    size_t i;

    /* Step 1: virt_addr_valid() must hold for every byte in the range. */
    for (i = 0; i < size; i++) {
        if (!virt_addr_valid(addr + i))
            return false;
    }
    /* Step 2: every page covering the range must actually be mapped. */
    if (!page_mapping_exist(addr, size))
        return false;
    return true;
}
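To tie this back to the question, here is a sketch of how addr_valid() could guard the original loop. It assumes the includes from the question's snippet plus the code above, keeps the question's "module_name:symbol_name" placeholder, and validating sizeof(struct list_head) bytes per node is an assumption about what the loop touches. The list head itself is a static symbol rather than kmalloc'ed memory, so it is not passed to addr_valid() (it would fail the virt_addr_valid() step):

void list_checker(void)
{
    struct list_head *head, *iter;

    head = (struct list_head *)kallsyms_lookup_name("module_name:symbol_name");
    if (!head)
        return;

    for (iter = head->next; iter != head; iter = iter->next) {
        /* Stop before dereferencing a node whose memory is polluted/unmapped. */
        if (!addr_valid((unsigned long)iter, sizeof(*iter)))
            break;
        /* do something with the node */
    }
}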
Upvotes: 1