linux - Some kernel ARM code
I was reading through the ARM kernel sources when I stumbled upon the following function:
```
314 #define __get_user_asm_byte(x, addr, err)               \
315 __asm__ __volatile__(                                   \
316 "1: " TUSER(ldrb) " %1,[%2],#0\n"                       \
317 "2:\n"                                                  \
318 "   .pushsection .fixup,\"ax\"\n"                       \
319 "   .align  2\n"                                        \
320 "3: mov %0, %3\n"                                       \
321 "   mov %1, #0\n"                                       \
322 "   b   2b\n"                                           \
323 "   .popsection\n"                                      \
324 "   .pushsection __ex_table,\"a\"\n"                    \
325 "   .align  3\n"                                        \
326 "   .long   1b, 3b\n"                                   \
327 "   .popsection"                                        \
328 : "+r" (err), "=&r" (x)                                 \
329 : "r" (addr), "i" (-EFAULT)                             \
330 : "cc")
```
The calling context seems to be as follows:
```
299 #define __get_user_err(x, ptr, err)                             \
300 do {                                                            \
301     unsigned long __gu_addr = (unsigned long)(ptr);             \
302     unsigned long __gu_val;                                     \
303     __chk_user_ptr(ptr);                                        \
304     might_fault();                                              \
305     switch (sizeof(*(ptr))) {                                   \
306     case 1: __get_user_asm_byte(__gu_val, __gu_addr, err); break; \
```
Now I have a few doubts about the ARM assembly above that I'd like to clear up:

- What is the function __get_user_asm_byte doing? I can see r3 being copied to r0 and the value 0 being moved into r1, after which it branches to offset 0x2b? What is the function trying to do?
- What does "+r" (err), "=&r" (x) (line 328 onward) mean?
- What are .pushsection and .popsection?
- Why is the ARM assembly written in the kernel syntactically different (what's the assembler used? why is there %<regno> instead of r<regno>?)
First, as the comments have noted, the syntax is the standard GCC inline assembly syntax (the +r, =&r and %<arg> parts).
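If you have not seen GCC extended asm before, here is a minimal, stand-alone sketch (a made-up helper, not kernel code, and it only builds for 32-bit ARM targets) showing what those pieces mean: "+r" marks a read-write register operand, "=&r" marks a write-only early-clobber register operand, and %0/%1/%2 refer to the operands by position so that GCC, rather than the programmer, picks the actual r<regno> registers.

```
/* Toy example of GCC extended inline assembly on ARM (illustrative
 * only; the function name is made up). */
static inline int accumulate(int acc, int v)
{
        int tmp;

        __asm__ __volatile__(
        "       mov     %1, #1\n"       /* tmp = 1 (written early)      */
        "       add     %1, %1, %2\n"   /* tmp = tmp + v                */
        "       add     %0, %0, %1\n"   /* acc = acc + tmp              */
        : "+r" (acc), "=&r" (tmp)       /* %0 read-write; %1 write-only and
                                           early-clobber, since it is written
                                           before the input %2 is consumed  */
        : "r" (v)                       /* %2 input register            */
        : "cc");                        /* flags declared clobbered, mirroring
                                           the kernel macro (harmless here) */

        return acc;
}
```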
The rest is kernel magic designed to handle page faults. The point of __get_user_asm_byte is to pull a byte from user space. However, in pulling data from user space, two special situations need to be accommodated:
1. A legitimate user-space address that is not present at the moment (i.e. because it has been paged out).
2. An illegal user-space address.
Either will cause a page fault. For (1), the desired behavior is to restore the user page (read it back in from swap space or whatever), re-execute the load instruction, and eventually succeed; for (2), the desired behavior is to fail the operation without retrying, returning an -EFAULT error to the caller.
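As a concrete illustration of how this looks from the caller's side, here is a sketch of a hypothetical driver helper (read_user_byte is a made-up name): with a valid but paged-out address the call simply succeeds once the fault has been serviced, while an illegal address makes it return -EFAULT with the destination zeroed.

```
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical helper: copy one byte from user space.
 * get_user() returns 0 on success or -EFAULT when the fixup path fires;
 * on ARM the unchecked __get_user() variant is the one that expands
 * through __get_user_err/__get_user_asm_byte shown above, and it behaves
 * the same way from the caller's perspective. */
static int read_user_byte(const u8 __user *uptr, u8 *out)
{
        u8 val;

        if (get_user(val, uptr))
                return -EFAULT;         /* illegal address: fixup fired   */

        *out = val;                     /* valid address: byte retrieved  */
        return 0;
}
```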
Without special handling, the normal behavior of a page fault in kernel mode is an "oops". The special sections coordinate with the page-fault handling code and make it possible to recover correctly. If you want to understand the details of how this works, search for __ex_table in the kernel source.
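In rough terms, each ".long 1b, 3b" emits a (faulting-instruction address, fixup address) pair into the __ex_table section, and the page-fault handler searches that table for the faulting PC; if it finds a match, execution resumes at the fixup instead of oopsing. The sketch below is purely illustrative (the real kernel uses search_exception_tables(), sorted tables and, on many architectures, relative offsets), but it shows the idea:

```
/* Illustrative only: mirrors the classic absolute-address entry layout
 * produced by ".long 1b, 3b" above. */
struct exception_table_entry {
        unsigned long insn;     /* address of the instruction that may fault (1:) */
        unsigned long fixup;    /* where to resume if it does fault (3:)          */
};

/* Pretend fault-handler helper: return the fixup address for a faulting
 * kernel PC, or 0 if there is none (in which case the kernel oopses). */
static unsigned long find_fixup(const struct exception_table_entry *table,
                                unsigned long nentries,
                                unsigned long fault_pc)
{
        unsigned long i;

        for (i = 0; i < nentries; i++)
                if (table[i].insn == fault_pc)
                        return table[i].fixup;

        return 0;
}
```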
Also, check out the kernel documentation at: Documentation/x86/exception-tables.txt
The ARM-specific item is the use of either ldrt or MMU domains; this is conditional in domain.h. Modern ARM CPUs support domains. The translated load/store variants apply a user-mode access on older CPUs.