| author | Adrian Prantl <aprantl@apple.com> | 2019-01-15 18:07:52 +0000 |
|---|---|---|
| committer | Adrian Prantl <aprantl@apple.com> | 2019-01-15 18:07:52 +0000 |
| commit | d963a7c39891ae33e87500502d5199946af22bde (patch) | |
| tree | 1a5d123b49e1e9e039a62b08a9011cb91560883f /lldb/source/DataFormatters | |
| parent | 5e54bc18e27b7fa2af240ff93f1771faa425c319 (diff) | |
| download | bcm5719-llvm-d963a7c39891ae33e87500502d5199946af22bde.tar.gz bcm5719-llvm-d963a7c39891ae33e87500502d5199946af22bde.zip | |
Make CompilerType::getBitSize() / getByteSize() return an optional result. NFC
The code in LLDB assumes that CompilerType and friends use the size 0
as a sentinel value to signal an error. This works for C++, where no
zero-sized type exists, but in many other programming languages
(including, I believe, C) zero-sized types are possible and even
common. This is a particular pain point in swift-lldb, where extra
code exists in various locations to double-check that a size of zero
really means a zero-sized type and not an error.
To remedy this situation, this patch starts by converting
CompilerType::getBitSize() and getByteSize() to return an optional
result. To avoid wasting space, I hand-rolled my own optional data
type, assuming that no type is larger than what fits into 63
bits. Follow-up patches will make similar changes to the ValueObject
hierarchy.
rdar://problem/47178964
Differential Revision: https://reviews.llvm.org/D56688
llvm-svn: 351214
Diffstat (limited to 'lldb/source/DataFormatters')
| -rw-r--r-- | lldb/source/DataFormatters/TypeFormat.cpp | 18 |
| -rw-r--r-- | lldb/source/DataFormatters/VectorType.cpp | 13 |
2 files changed, 18 insertions, 13 deletions
```diff
diff --git a/lldb/source/DataFormatters/TypeFormat.cpp b/lldb/source/DataFormatters/TypeFormat.cpp
index a7520300647..e9a8c3b8953 100644
--- a/lldb/source/DataFormatters/TypeFormat.cpp
+++ b/lldb/source/DataFormatters/TypeFormat.cpp
@@ -94,16 +94,18 @@ bool TypeFormatImpl_Format::FormatObject(ValueObject *valobj,
         return false;
       }
+      ExecutionContextScope *exe_scope =
+          exe_ctx.GetBestExecutionContextScope();
+      auto size = compiler_type.GetByteSize(exe_scope);
+      if (!size)
+        return false;
       StreamString sstr;
-      ExecutionContextScope *exe_scope(
-          exe_ctx.GetBestExecutionContextScope());
       compiler_type.DumpTypeValue(
-          &sstr,       // The stream to use for display
-          GetFormat(), // Format to display this type with
-          data,        // Data to extract from
-          0,           // Byte offset into "m_data"
-          compiler_type.GetByteSize(
-              exe_scope),               // Byte size of item in "m_data"
+          &sstr,       // The stream to use for display
+          GetFormat(), // Format to display this type with
+          data,        // Data to extract from
+          0,           // Byte offset into "m_data"
+          *size,       // Byte size of item in "m_data"
           valobj->GetBitfieldBitSize(),   // Bitfield bit size
           valobj->GetBitfieldBitOffset(), // Bitfield bit offset
           exe_scope);
diff --git a/lldb/source/DataFormatters/VectorType.cpp b/lldb/source/DataFormatters/VectorType.cpp
index bf22e351af8..5ab438e6abb 100644
--- a/lldb/source/DataFormatters/VectorType.cpp
+++ b/lldb/source/DataFormatters/VectorType.cpp
@@ -174,10 +174,10 @@ static size_t CalculateNumChildren(
   auto container_size = container_type.GetByteSize(exe_scope);
   auto element_size = element_type.GetByteSize(exe_scope);
-  if (element_size) {
-    if (container_size % element_size)
+  if (container_size && element_size && *element_size) {
+    if (*container_size % *element_size)
       return 0;
-    return container_size / element_size;
+    return *container_size / *element_size;
   }
   return 0;
 }
@@ -197,8 +197,11 @@ public:
   lldb::ValueObjectSP GetChildAtIndex(size_t idx) override {
     if (idx >= CalculateNumChildren())
-      return lldb::ValueObjectSP();
-    auto offset = idx * m_child_type.GetByteSize(nullptr);
+      return {};
+    auto size = m_child_type.GetByteSize(nullptr);
+    if (!size)
+      return {};
+    auto offset = idx * *size;
     StreamString idx_name;
     idx_name.Printf("[%" PRIu64 "]", (uint64_t)idx);
     ValueObjectSP child_sp(m_backend.GetSyntheticChildAtOffset(
```

