From 0d3ceb3058201868765ff3aa1126685f3f7f9ecc Mon Sep 17 00:00:00 2001
From: Andrew Calvano <calvano@fb.com>
Date: Fri, 17 Nov 2023 17:29:04 +0000
Subject: [PATCH] Fix for PyTorch mobile flatbuffer loader out of bounds reads
 (#110162)

Summary:
The mobile_ivalue_size field in the mobile_bytecode flatbuffer schema can be larger than the ivalues vector. This introduces the potential for memory corruption (out-of-bounds reads) when parsing the mobile_bytecode Module.

This diff fixes the issue by ensuring that mobile_ivalue_size does not exceed the size of the ivalues vector.

Test Plan: contbuild & OSS CI

Differential Revision: D49687548

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110162
Approved by: https://github.com/malfet
---
 torch/csrc/jit/mobile/flatbuffer_loader.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/torch/csrc/jit/mobile/flatbuffer_loader.cpp b/torch/csrc/jit/mobile/flatbuffer_loader.cpp
index 2fb12a4f..2069330b 100644
--- a/torch/csrc/jit/mobile/flatbuffer_loader.cpp
+++ b/torch/csrc/jit/mobile/flatbuffer_loader.cpp
@@ -302,7 +302,7 @@ mobile::Module FlatbufferLoader::parseModule(
   storage_loaded_.resize(module->storage_data_size(), false);
 
   mobile_ivalue_size_ = module_->mobile_ivalue_size();
-  if (mobile_ivalue_size_ == 0) {
+  if (mobile_ivalue_size_ == 0 || mobile_ivalue_size_ > ivalues->size()) {
    mobile_ivalue_size_ = ivalues->size();
  }
 
--
2.43.0
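
The following is a minimal, self-contained C++ sketch (not PyTorch code; the names UntrustedModule, declared_ivalue_count, and safe_ivalue_count are hypothetical) illustrating the clamping pattern this patch applies: a size field read from untrusted serialized data is trusted only if it does not exceed the number of elements actually present in the container.

// Sketch of validating an untrusted size field before iterating a vector.
#include <cstdint>
#include <cstdio>
#include <vector>

struct UntrustedModule {
  // Size field as declared in the serialized payload; may be corrupted or malicious.
  uint32_t declared_ivalue_count;
  // The values actually present in the payload.
  std::vector<int> ivalues;
};

// Returns the number of ivalues that is safe to iterate over.
static size_t safe_ivalue_count(const UntrustedModule& m) {
  size_t count = m.declared_ivalue_count;
  // Mirrors the patched condition: fall back to the real vector size when the
  // declared count is zero or larger than what the payload actually contains.
  if (count == 0 || count > m.ivalues.size()) {
    count = m.ivalues.size();
  }
  return count;
}

int main() {
  UntrustedModule m{/*declared_ivalue_count=*/1000, /*ivalues=*/{1, 2, 3}};
  const size_t n = safe_ivalue_count(m);
  for (size_t i = 0; i < n; ++i) {
    // Never reads past ivalues.size(), regardless of the declared count.
    std::printf("ivalue[%zu] = %d\n", i, m.ivalues[i]);
  }
  return 0;
}

Clamping rather than rejecting the oversized value appears to mirror the loader's existing handling of a zero-valued field, where the actual vector size is used instead.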