From e24882a961e2d85cc4c8319a56734a0d7c7867fc Mon Sep 17 00:00:00 2001
From: Jann Horn <jannh@google.com>
Date: Fri, 3 Jan 2025 19:39:38 +0100
Subject: x86/mm: Fix flush_tlb_range() when used for zapping normal PMDs

On the following path, flush_tlb_range() can be used for zapping normal
PMD entries (PMD entries that point to page tables) together with the PTE
entries in the pointed-to page table:

  collapse_pte_mapped_thp
    pmdp_collapse_flush
      flush_tlb_range

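For context, the generic pmdp_collapse_flush() (roughly the
mm/pgtable-generic.c version; architectures may override it) clears the
PMD entry and then flushes the whole PMD-sized range through
flush_tlb_range(), so that flush must cover the zapped PMD entry itself,
not just last-level translations. A simplified sketch, not the exact
upstream source:

    pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
                              unsigned long address, pmd_t *pmdp)
    {
            pmd_t pmd;

            /* Clear the PMD entry that points to the PTE page table. */
            pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
            /* Flush the zapped PMD entry and the PTEs it used to map. */
            flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
            return pmd;
    }
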
The arm64 version of flush_tlb_range() has a comment describing that it can
be used for page table removal, and does not use any last-level
invalidation optimizations. Fix the X86 version by making it behave the
same way.

Currently, X86 only uses this information for the following two purposes,
which I think means the issue doesn't have much impact:

 - In native_flush_tlb_multi() for checking if lazy TLB CPUs need to be
   IPI'd to avoid issues with speculative page table walks.
 - In Hyper-V TLB paravirtualization, again for lazy TLB stuff.

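The first purpose is the important one here: roughly, native_flush_tlb_multi()
only IPIs CPUs that are lazily using the mm when the flush is marked as
having freed page tables. This is a sketch of that decision, not the exact
upstream code; the condition helper has been renamed across kernel versions:

    if (info->freed_tables) {
            /*
             * Page tables were freed: IPI every CPU in the mask, even
             * CPUs in lazy TLB mode, since they may still walk the freed
             * page tables speculatively.
             */
            on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
    } else {
            /*
             * Only leaf translations changed: CPUs in lazy TLB mode can
             * skip the IPI and catch up when they switch back to the mm.
             */
            on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
                                  (void *)info, true, cpumask);
    }
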
The patch "x86/mm: only invalidate final translations with INVLPGB", which
is currently under review (see
<https://lore.kernel.org/all/20241230175550.4046587-13-riel@surriel.com/>),
would probably make the impact of this a lot worse.

Fixes: 016c4d92cd16 ("x86/mm/tlb: Add freed_tables argument to flush_tlb_mm_range")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20250103-x86-collapse-flush-fix-v1-1-3c521856cfa6@google.com
---
 arch/x86/include/asm/tlbflush.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -311,7 +311,7 @@ static inline bool mm_in_asid_transition
 	flush_tlb_mm_range((vma)->vm_mm, start, end,		\
 			   ((vma)->vm_flags & VM_HUGETLB)	\
 			   ? huge_page_shift(hstate_vma(vma))	\
-			   : PAGE_SHIFT, false)
+			   : PAGE_SHIFT, true)
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,