SLUBStick: Arbitrary Memory Writes through Practical Software Cross-Cache
Attacks within the Linux Kernel
Lukas Maar
Graz University of Technology
Stefan Gast
Graz University of Technology
Martin Unterguggenberger
Graz University of Technology
Mathias Oberhuber
Graz University of Technology
Stefan Mangard
Graz University of Technology
Abstract
While the number of vulnerabilities in the Linux kernel has
increased significantly in recent years, most have limited capa-
bilities, such as corrupting a few bytes in restricted allocator
caches. To elevate their capabilities, security researchers have
proposed software cross-cache attacks, exploiting the mem-
ory reuse of the kernel allocator. However, such cross-cache
attacks are impractical due to their low success rate of only 40 %, with failure scenarios often resulting in a system crash.
In this paper, we present SLUBStick, a novel kernel ex-
ploitation technique elevating a limited heap vulnerability to
an arbitrary memory read-and-write primitive. SLUBStick
operates in multiple stages: Initially, it exploits a timing side
channel of the allocator to perform a cross-cache attack reli-
ably. Concretely, exploiting the side-channel leakage pushes
the success rate to above 99 % for frequently used generic
caches. SLUBStick then exploits code patterns prevalent in
the Linux kernel to convert a limited heap vulnerability into
a page table manipulation, thereby granting the capability to
read and write memory arbitrarily. We demonstrate the appli-
cability of SLUBStick by systematically analyzing two Linux
kernel versions, v5.19 and v6.2. Lastly, we evaluate SLUB-
Stick with a synthetic vulnerability and 9 real-world CVEs,
showcasing privilege escalation and container escape in the
Linux kernel with state-of-the-art kernel defenses enabled.
1 Introduction
Operating system kernels, such as Linux, are susceptible to
memory safety vulnerabilities due to their size and complexity.
However, most of these vulnerabilities have limited capabil-
ities, such as corrupting a few bytes in restricted allocator
caches. These limitations make exploitation difficult in prac-
tice. To make these vulnerabilities even more difficult to
exploit, researchers and kernel developers have included de-
fenses such as SMAP, KASLR, and kCFI [37]. In addition, the
kernel’s allocator is designed to restrict exploits that propagate
from heap vulnerabilities. One particular hardening strategy
is to enforce coarse-grained heap separation. This separation
places objects in distinct allocator caches that maintain blocks
of adjacent pages, called slabs, and separate security-critical
objects from frequently used objects. Hence, vulnerabilities
in frequently used caches cannot be directly exploited to ma-
nipulate security-critical objects, such as credentials.
To circumvent coarse-grained heap separation, security re-
searchers [50] presented software cross-cache attacks, which
have been used by several kernel exploits [2, 13, 17, 19, 28–30, 47]. Software cross-cache attacks exploit the memory reuse of
the kernel allocator as follows: Initially, an adversary triggers
a heap vulnerability to obtain and hold on to a write capability
for a victim object. They then free all memory slots on the
slab page containing the victim object and allocate a different
(sensitive) object type. This triggers the allocation of new
slab pages, presumably reclaiming the previously freed and
recycled slab page. The adversary then continues to overwrite
the victim object, which now resides in the same memory
location as the newly allocated sensitive object, corrupting it.
The Linux kernel has two types of allocator caches: ded-
icated and generic caches. While dedicated caches can be
reliably exploited for cross-cache attacks [2, 17, 19, 28, 47],
generic caches cannot [28, 50]. In particular, exploitation of generic caches has a success rate of only 40 % [50], with failure scenarios often leading to system crashes. To increase the reliability of generic cache exploitation, security experts [13, 29, 30, 47] have used stabilization objects, e.g., msg_msg or pipe_buffer. However, these objects cannot be used in newer kernel versions due to more refined heap separation, i.e., v5.14 introduced kmalloc-cg-*. Therefore, for newer kernel versions, cross-cache attacks on generic caches do not provide the reliability required in practice [28, 50].
In this paper, we present SLUBStick, a novel kernel ex-
ploitation technique that converts a limited kernel heap vul-
nerability into an arbitrary read-and-write primitive. At its
core, SLUBStick exploits timing side-channel leakage of the
kernel’s allocator to reliably trigger the recycling and reclaim-
ing process for a specific memory target. Exploiting this side-
channel leakage significantly enhances the success rate of software cross-cache attacks, exceeding 99 % for generic caches with a single slab page and 82 % for multiple slab pages.
With this substantial increase, our approach overcomes the
prior unreliability and makes cross-cache attacks practical
for exploitation. Using our reliable side-channel supported
approach, SLUBStick performs a cross-cache attack to recy-
cle a slab page that contains a write capability. SLUBStick
then reclaims the slab page as a page table, i.e., Page Upper
Directory (PUD), used for userspace address translation. By
triggering the write capability, SLUBStick overwrites page
table entries, obtaining arbitrary read and write capabilities.
To perform SLUBStick, we overcome the following tech-
nical challenges: First, we present reliably exploitable prim-
itives for our timing side channel that are accessible to un-
privileged users. Second, hardly any kernel heap vulnerabil-
ities provide the capability to modify kernel data directly.
Therefore, we present techniques that exploit code patterns
prevalent in the Linux kernel. These techniques convert heap
vulnerabilities before the recycling phase to allow a write ca-
pability after reclamation as a page table. Third, manipulating
page table entries to obtain an arbitrary memory read-and-
write primitive is challenging because the physical memory
layout is randomized due to KASLR, and we do not assume
address information leakage. Hence, we introduce a reliable
solution that obtains such a primitive from an overwrite.
We conduct a systematic analysis for two Linux kernel
versions, v5.19 and v6.2, providing a comprehensive list of
primitives to successfully execute SLUBStick for all generic
caches from kmalloc-8 to kmalloc-4096. We also evaluate SLUBStick with a synthetic vulnerability as well as with 9 real-world CVEs for both kernel versions on x86_64 as well
as aarch64, demonstrating its architecture and kernel version
independence. Based on these findings, we conclude that
SLUBStick poses a significant threat to kernel security.
Contributions.
The main contributions of SLUBStick are:
(1) Side-Channel Supported Recycling and Reclaiming: We present a novel approach to reliably trigger the recycling process of a specific memory target and reclaim it by using a software timing side channel. Our approach shows success rates exceeding 99 % for frequently used generic caches, making cross-cache attacks practical.
(2) Novel Exploitation Method: Leveraging our reliable side-channel supported recycling and reclaiming approach, we present a novel exploitation technique to convert kernel heap vulnerabilities with limited capabilities into an arbitrary memory read-and-write primitive with state-of-the-art kernel defenses enabled.
(3) Comprehensive Analysis and Attack Evaluation: We systematically analyze two Linux kernel versions, v5.19 and v6.2, showing that SLUBStick can be executed for all generic caches from kmalloc-8 to kmalloc-4096. We also evaluate SLUBStick using a synthetic vulnerability and 9 real-world CVEs to escalate privileges.
Outline. Section 2 describes the background and threat model. Section 3 presents SLUBStick. Section 4 introduces our reliable recycling and reclaiming process. Section 5 describes pivoting heap vulnerabilities. Section 6 details how to gain arbitrary read-and-write capabilities. Section 7 comprehensively evaluates our attack. Section 8 discusses valuable insights and kernel defenses. Section 9 concludes this work.

Figure 1: kmem_cache layout for the SLUB implementation. (The figure depicts the structures below, with slabs A-E linked via the partial lists and per-slab free lists; its legend marks free-list next pointers, freed objects, and allocated objects.)

kmem_cache {
    kmem_cache_cpu *c __per_cpu;
    ...
    kmem_cache_node *n[];
}
kmem_cache_cpu {
    void **freelist;
    ...
    slab *slab;
    ...
    slab *partial;
}
kmem_cache_node {
    ...
    list_head partial;
    ...
}
2 Background and Threat Model
2.1 Buddy and SLUB Allocator
Linux’s page allocator is based on the Binary Buddy Allocator [23], mainly referred to as the buddy allocator. It allocates physically contiguous memory in chunks of page-order size, i.e., 2^n · PAGE_SIZE, where n is the page order. Moreover, it combines this page-order allocation with free chunk merging.
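As a quick check of the sizing rule above, the byte size of a buddy chunk follows directly from the page order. A minimal sketch, where the 4 KiB PAGE_SIZE is our assumption (the actual value is architecture- and configuration-dependent):

```c
#include <stddef.h>

#define PAGE_SIZE 4096UL /* assumption: typical x86_64 page size */

/* Byte size of a buddy-allocator chunk of the given page order:
 * 2^n * PAGE_SIZE, matching the formula above. */
static size_t buddy_chunk_size(unsigned int order)
{
    return ((size_t)1 << order) * PAGE_SIZE;
}
```

An order-0 chunk is thus a single 4 KiB page, while, for example, order 3 yields a 32 KiB chunk of eight physically contiguous pages.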
SLUB allocator.
As the buddy allocator only provides
page-order allocations, the slab allocator caches available ob-
jects with a predefined size in a multi-level free-list hierarchy,
using pages obtained from the buddy allocator. There are
three main implementations: SLUB is the default choice for
several Linux distributions [22], while SLOB has become
obsolete, and SLAB will be deprecated soon [8].
In the Linux kernel, the SLUB allocator provides two primary types of allocator caches: dedicated and generic caches. Dedicated caches are employed for frequently used fixed-size objects, such as cred or task_struct. Generic caches are utilized for generic object allocation and deallocation or for objects whose sizes are not known at compile time, e.g., elastic objects [5]. Both types of caches utilize kmem_cache, with each dedicated cache having its own kmem_cache, while generic caches have multiple kmem_caches matched to different sizes. When allocating memory from a generic cache, the kernel matches the requested size to one of these caches and allocates an object from the corresponding kmem_cache.
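The size-matching step can be sketched in userspace. The size table below is a simplified assumption covering only the power-of-two generic caches (the kernel additionally has, e.g., kmalloc-96 and kmalloc-192, and performs the real lookup internally):

```c
#include <stddef.h>

/* Simplified model of the power-of-two generic caches,
 * kmalloc-8 .. kmalloc-4096. */
static const size_t kmalloc_sizes[] = {
    8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096
};

/* Return the object size of the generic cache a request of
 * `size` bytes would be matched to, or 0 if it exceeds
 * kmalloc-4096 (such requests fall through to the page
 * allocator). */
static size_t match_generic_cache(size_t size)
{
    size_t n = sizeof(kmalloc_sizes) / sizeof(kmalloc_sizes[0]);
    for (size_t i = 0; i < n; i++)
        if (size <= kmalloc_sizes[i])
            return kmalloc_sizes[i];
    return 0;
}
```

For example, a 24-byte request lands in kmalloc-32, which is why objects of different types but similar sizes end up sharing the same generic cache.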
Figure 2: SLUB allocation of an object, where the terms c and n refer to the kmem_cache_cpu and kmem_cache_node, respectively. The free lists (i.e., c->freelist and c->slab->freelist) and slab lists (i.e., c->partial and n->partial) are checked to be either empty or partial. (Flowchart, in order of increasing allocation time: ① get object from c->freelist; ② move c->slab to c->freelist; ③ move c->partial to c->slab; ④ move n->partial to c->partial; ⑤ allocate memory chunk from the buddy allocator.)
Figure 3: SLUB deallocation of an object, where the term slab refers to the slab that contains the object to be freed. The term c represents the kmem_cache_cpu associated with this slab. The slab is either active (i.e., slab stored as c->slab), or stored in the CPU partial slab list (i.e., slab located within c->partial) or node partial slab list (i.e., slab located within n->partial). (Flowchart, in order of increasing deallocation time, with decision nodes (is slab == c->slab? was slab empty? is slab full? is CPU partial slab list full? is node partial slab list full?) leading to the actions: ① put object to c->freelist; ② put object to slab; ③ put slab into CPU partial slab list; ④ move slabs to node partial slab list; ⑤ discard memory chunks of slabs.)
The Linux kernel incorporates cache aliasing to optimize memory management. Cache aliasing merges distinct kmem_caches with similar characteristics, e.g., object size and allocation properties. For security reasons, kmem_caches of dedicated or generic caches considered security-critical are marked as accounted to prevent aliasing. Essentially, these accounted kmem_caches separate accounted objects from the non-accounted ones. Security-critical caches include those that store sensitive information, such as cred, and objects commonly used for exploitation (allocated using kmalloc-cg-*), such as elastic objects [5].
Architecture. The architecture of a kmem_cache [22], shown in Figure 1, includes a kmem_cache_cpu for each logical CPU and an array of kmem_cache_nodes. The kmem_cache_cpu comprises various free lists: a CPU free list (c->freelist), a slab free list (slab->freelist), and additional free lists of partial slabs (partial->freelist, maintained as a singly-linked list). Despite each slab having its own free list, the separate CPU free list allows lockless allocation, improving performance. The kmem_cache_node has a doubly-linked list of slabs (partial) also containing freed objects. In the context of this work, we refer to a list (i.e., a free list, or a singly- or doubly-linked list) as full when it reaches its capacity of objects. It is considered empty when no object is present in it. A list is classified as partial when it is neither full nor empty.
Allocation and deallocation. kmem_cache stores objects in a multi-level free-list hierarchy. As shown in Figure 2, the allocation process starts by searching for an available object in the lower free-list levels [22]. This process continues throughout the hierarchy until an available object is found. These levels include the CPU free list ①, slab free list ②, CPU partial slab list ③, and node partial slab list ④, with each level taking more allocation time. If no object is available in any of these free lists, the SLUB allocator falls back to the buddy allocator ⑤, which allocates a memory chunk.
When deallocating, the SLUB allocator attempts to place the object in the lower free-list levels, e.g., the CPU free list ① [22], as shown in Figure 3. Upon deallocation, the kernel may check the number of free slabs, i.e., the number of slabs with a full free list stored in the node partial slab list ③. If this number exceeds a particular capacity (see Table 4), the SLUB allocator deallocates the slab's memory chunks ⑤, returning them to the buddy allocator. Memory chunks returned in such a recycling phase are reused for future allocations.
Timing attacks on allocation. Lee et al. [26] demonstrated with PSPRAY the feasibility of performing a timing side channel on the SLUB allocator. PSPRAY deduces when the allocator allocates a fresh memory chunk (see ⑤ in Figure 2). This insight increases the likelihood of successful kernel heap exploitation, which primarily relied on heap spraying, i.e., for Use-After-Free (UAF) and Double-Free (DF), or heap grooming, i.e., for Out-Of-Bounds (OOB) [4, 26, 53].

However, their method relies on a precise measurement primitive that is no longer available in recent kernel versions. Their primary proposed primitive uses msg_msg. Since it is allocated via the segregated kmalloc-cg-* for kernel versions v5.14 or higher, it is limited to scenarios where the vulnerable object is also allocated from the segregated generic cache. Other proposed primitives are limited because the computational overhead of non-allocation tasks primarily masks allocation timing (e.g., read). Furthermore, their approach fails to identify suitable measurement primitives, e.g., for kmalloc-8/16. For other identified primitives, we could not reproduce the allocation of a single data object (e.g., fchown), or the primitives are privileged and therefore unusable (e.g., kexec_load). We contacted the authors about the applicability of using their identified syscalls (apart from msg_msg) to determine the timing difference. They confirmed that the overhead of the syscalls they identified limits their applicability. In summary, while their work demonstrates feasibility, its applicability is limited to older kernel versions.

Figure 4: Software cross-cache attack with an initial state, where a write capability refers to a freed object. An attacker enforces a recycling ❶ of slab A’s memory chunk by freeing obj1/2. By allocating sensitive objects, the attacker presumably reclaims ❷ the chunk for a sensitive slab B, resulting in the write capability referring to obj3. Lastly, obj3 is overwritten. (The figure depicts slab A holding freed objects and obj1/obj2 with pointers ptr1/ptr2; after recycling ❶, the freed slab A; after reclaiming ❷, the sensitive slab B holding obj3-5, with the retained write capability now pointing at obj3.)
2.2 Software Cross-Cache Attacks
When the SLUB allocator frees memory chunks using the
buddy allocator, as shown with
⑤
in Figure 3, these chunks
are reused. Classic cross-cache attacks [2,17,19,28,29,47,50]
exploit this reusing behavior. Initially, an adversary compels
the SLUB allocator to recycle a memory chunk containing a
write capability due to vulnerabilities. Subsequently, they allo-
cate numerous sensitive objects from another allocator cache,
hoping to reclaim the previously freed chunk. If successful,
the memory that was previously occupied with the write ca-
pability will now be occupied by sensitive objects. Lastly,
they trigger the write capability to this memory, corrupting
a sensitive data object. The recycling
❶
and reclaiming
❷
phases are shown in a simplified setting in Figure 4.
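The recycling-and-reclaiming effect can be mimicked with an ordinary userspace heap allocator. The sketch below is only an analogy for the memory-reuse behavior; the kernel attack operates on whole slab pages moving between kmem_caches via the buddy allocator:

```c
#include <stdlib.h>
#include <string.h>

/* Userspace analogy of recycling (1) and reclaiming (2): after a
 * chunk is freed, a later same-sized allocation may receive the
 * very same memory, so a pointer retained across the free now
 * aliases the new object. Returns 1 if the memory was reused,
 * which is likely with size-class allocators such as glibc's,
 * though not guaranteed by the C standard. */
static int demo_recycle_reclaim(void)
{
    void *victim = malloc(64);
    void *stale = victim;          /* retained "write capability" */
    free(victim);                  /* (1) recycling */
    void *sensitive = malloc(64);  /* (2) reclaiming */
    int reused = (sensitive == stale);
    if (reused)
        memset(stale, 0x41, 64);   /* stale pointer now corrupts the new object */
    free(sensitive);
    return reused;
}
```

The kernel-level attack adds the hard parts this analogy hides: the adversary must empty an entire slab page, get the buddy allocator to hand that page to a different cache, and do so without uncontrolled allocations interfering.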
Xu et al. [50] demonstrated the feasibility of cross-cache attacks, and subsequent research [28, 29] has further explored their impact. However, executing such attacks is notably challenging, particularly for frequently used generic caches. One significant hurdle is the introduction of noise through uncontrolled allocations, making it difficult to achieve state ⑤ in Figure 3. For example, unknown allocations from a kernel thread can thwart the freeing of the slab's memory chunk, thereby preventing the reclamation [28]. Adding to the complexity, the unpredictable occurrence of both phases, recycling ❶ and reclaiming ❷, introduces instability to the exploit.
In summary, mounting cross-cache attacks is complex and fraught with challenges. Although these attacks are compelling, in practice they have a limited success rate, as low as 40 % [50]. Importantly, this percentage only represents the success rate of the cross-cache attack, excluding additional stages of an end-to-end exploit, e.g., vulnerability triggering and memory manipulation, which further reduce the overall success rate. The process of repeatedly triggering vulnerabilities carries its own risks. Traces left in the kernel often make it difficult to trigger the same vulnerability again. For instance, an OOB write may corrupt lists when triggered. Hence, repeated activation of the vulnerability can result in a crash, severely limiting the attack's practicality.
2.3 Threat Model
We assume that an unprivileged user has code execution. Ad-
ditionally, we consider the presence of a heap vulnerability
in the Linux kernel. We assume that the Linux kernel incor-
porates all defense mechanisms available in version 6.4, the
most recent Linux kernel version when we started our work.
These mechanisms include features such as W^X, KASLR,
SMAP, and kCFI [37]. We do not assume any microarchitec-
tural vulnerabilities, e.g., transient execution [24, 31], fault
injection [43], or hardware side channels [3, 51].
In this work, we primarily focus on heap vulnerabilities (the most common type of software vulnerability according to Microsoft [36]) that result in a Double-Free (DF), a Use-After-Free (UAF), or an Out-Of-Bounds (OOB) write allowing for a limited write capability. For instance, CVE-2023-21400 enables the double free of an object within the kmalloc-32 generic cache, while CVE-2023-3609 permits a write operation at offset 0x18 on an object allocated from kmalloc-64.
3 Technical Overview and Challenges
This section outlines SLUBStick’s capability to overcome
several technical challenges when exploiting a limited heap
vulnerability to obtain an arbitrary read-and-write primitive.
3.1 Overview
Obtaining an arbitrary read-and-write primitive with SLUB-
Stick involves three stages, as depicted in Figure 5. In the
first stage (see Figure 5a), SLUBStick exploits a heap vulner-
ability to acquire a Memory Write Primitive (MWP). This