Memory Management: Address Translation Efficiency and Protection

Published on 19 Mar 2019, 18:31
The efficiency of virtual memory support has received considerable attention as modern architectures and operating systems adapt to workloads with growing instruction and data footprints. Even on modern processors with ever larger translation lookaside buffers (TLBs), a variety of widely used applications on desktop, server, and mobile platforms suffer from high address translation overhead. Most of the attention so far has focused on data. However, our analysis shows that modern applications spend up to 12% of execution time servicing instruction address translation: they have increasingly large main executables, invoke a considerable number of shared libraries, and their instruction and data address translations compete for space in the shared TLB.

In this talk, I will introduce an OS-level approach to improving the instruction address translation performance of Android applications. Although the physical memory of shared libraries is shared, in today's OSes each application maintains private page-table and TLB entries for them. Because of the Android process creation model, the address translation information for commonly used shared libraries is identical across all Android applications, resulting in duplication. Our design eliminates this duplication by sharing page-table and TLB entries for shared libraries across all Android applications, yielding, for example, a 2.1x speedup for executing fork. Address translation can also be leveraged to launch powerful side-channel attacks: a compromised operating system can track changes to page tables and thereby infer the memory access behavior of its victims. I will also describe our defenses against page table and last-level cache side-channel attacks launched by a compromised OS against code running in a trusted execution environment.
