path: root/llvm/lib/DebugInfo/CodeView
Commit history for this path. Each entry lists the commit message, author, date, and the files/lines changed.

* [llvm-pdbutil] Add support for dumping lines and inlinee lines. (Zachary Turner, 2017-06-15; 1 file, -0/+7)
  llvm-svn: 305529

* Resubmit "[llvm-pdbutil] rewrite the "raw" output style." (Zachary Turner, 2017-06-15; 4 files, -77/+87)
  This resubmits commit c0c249e9f2ef83e1d1e5f166b50673d92f3579d7. It was broken due to some weird template issues, which have since been fixed.
  llvm-svn: 305517

* Revert "[llvm-pdbutil] rewrite the "raw" output style." (Zachary Turner, 2017-06-15; 4 files, -87/+77)
  This reverts commit 83ea17ebf2106859a51fbc2a86031b44d33696ad. This is failing due to some strange template problems, so reverting until it can be straightened out.
  llvm-svn: 305505

* [llvm-pdbutil] rewrite the "raw" output style. (Zachary Turner, 2017-06-15; 4 files, -77/+87)
  After some internal discussions, we agreed that the raw output style had outlived its usefulness. It was originally created before we had even thought of dumping to YAML, and it was intended to give us some insight into the internals of a PDB file. Now we have YAML mode, which does almost exactly this but is more powerful in that it can round-trip back to a PDB, which the raw mode could not do. So the raw mode had become purely a maintenance burden.

  One option was to just delete it. However, its original goal was to be as readable as possible while staying close to the "metal" - i.e. presenting the output in a way that maps directly to the underlying file format. We don't actually need that last requirement anymore since it's covered by the yaml mode, so we could repurpose "raw" mode to actually just be as readable as possible.

  This patch implements about 80% of the functionality previously in raw mode, but in a completely different style that is more akin to what cvdump outputs. Records are very compressed, often appearing on just one line. One nice thing about this is that it makes full record matching easier, because you can often grep for indices, names, and leaf types on a single line. See the tests for some examples of what the new output looks like.

  Note that this patch actually regresses the functionality of raw mode in a few areas, but only because the patch was already unreasonably large and going 100% would have been even worse. Specifically, this patch is missing:
    - The ability to dump module debug subsections (checksums, lines, etc.)
    - The ability to dump section headers
  Aside from that, everything is here. While going through the tests fixing them all up, I found many duplicate tests. They've been deleted. In subsequent patches I will go through and re-add the missing functionality.

  Differential Revision: https://reviews.llvm.org/D34191
  llvm-svn: 305495

* Resubmit "[codeview] Make obj2yaml/yaml2obj support .debug$S..." (Zachary Turner, 2017-06-14; 5 files, -36/+59)
  This was originally reverted because of some non-deterministic failures on certain buildbots. Luckily ASAN eventually caught this as a stack-use-after-scope, so the fix is included in this patch.
  llvm-svn: 305393

* Revert "[codeview] Make obj2yaml/yaml2obj support .debug$S..." (Zachary Turner, 2017-06-14; 5 files, -59/+36)
  This is causing failures on linux bots with an invalid stream read. It doesn't repro in any configuration on Windows, so reverting until I have a chance to investigate on Linux.
  llvm-svn: 305371

* [codeview] Make obj2yaml/yaml2obj support .debug$S/T sections. (Zachary Turner, 2017-06-14; 5 files, -36/+59)
  This allows us to use yaml2obj and obj2yaml to round-trip CodeView symbol and type information without having to manually specify the bytes of the section. This makes for much easier to maintain tests. See the tests under lld/COFF in this patch for example. Before they just said
    SectionData: <blob>
  whereas now we can use meaningful record descriptions. Note that it still supports the SectionData yaml field, which could be useful for initializing a section to invalid bytes for testing, for example.
  Differential Revision: https://reviews.llvm.org/D34127
  llvm-svn: 305366

* Same expressions on both sides of the return (Sylvestre Ledru, 2017-06-12; 1 file, -1/+1)
  Summary: I guess we want PointerToMemberFunction & PointerToDataMember.
  Fix coverity cid 1376038.
  Reviewers: zturner
  Reviewed By: zturner
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34110
  llvm-svn: 305219

* [pdb] Support CoffSymbolRVA debug subsection. (Zachary Turner, 2017-06-09; 3 files, -0/+39)
  llvm-svn: 305108

* Allow VarStreamArray to use stateful extractors. (Zachary Turner, 2017-06-09; 5 files, -16/+15)
  Previously extractors tried to be stateless, with any additional context information needed in order to parse items being passed in via the extraction method. This led to quite cumbersome implementation challenges and awkwardness of use. This patch brings back support for stateful extractors, making the implementation and usage simpler.
  llvm-svn: 305093

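  As a standalone illustration of the stateful-extractor idea above (not the actual VarStreamArray interface; the entry layout and all names here are invented), the context an extractor needs can live in the extractor object itself instead of being threaded through every call:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <vector>

    // Hypothetical context an extractor might need: a string table used to
    // resolve name indices while entries are parsed.
    struct StringTableRef { std::vector<std::string> Strings; };

    struct FileChecksumEntry { std::string Filename; uint32_t Crc = 0; };

    // Stateful extractor: the context is stored once at construction instead
    // of being passed into every extraction call.
    class ChecksumExtractor {
    public:
      explicit ChecksumExtractor(const StringTableRef &Strings) : Strings(Strings) {}

      // Parse one fixed-size entry laid out as { uint32 NameIndex; uint32 Crc; }
      // (an invented layout). Returns bytes consumed, or 0 on error (buffer too
      // small or name index out of range).
      size_t operator()(const uint8_t *Buf, size_t Size, FileChecksumEntry &Out) const {
        if (Size < 8)
          return 0;
        uint32_t NameIndex = 0, Crc = 0;
        std::memcpy(&NameIndex, Buf, 4);   // host byte order, for brevity
        std::memcpy(&Crc, Buf + 4, 4);
        if (NameIndex >= Strings.Strings.size())
          return 0;
        Out.Filename = Strings.Strings[NameIndex];
        Out.Crc = Crc;
        return 8;
      }

    private:
      const StringTableRef &Strings;
    };
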
* [codeview] use 32-bit integer for RelocOffset in DebugLinesSubsection (Bob Haarman, 2017-06-09; 1 file, -1/+1)
  Summary: RelocOffset is a 32-bit value, but we previously truncated it to 16 bits. Fixes PR33335.
  Reviewers: zturner, hiraditya!
  Reviewed By: zturner
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33968
  llvm-svn: 305043

* [pdb] Don't crash on unknown debug subsections. (Zachary Turner, 2017-06-09; 1 file, -13/+0)
  More and more unknown debug subsection kinds are being discovered so we should make it possible to dump these and display the bytes.
  llvm-svn: 305041

* [CodeView] Support remaining debug subsection types (Zachary Turner, 2017-06-09; 3 files, -17/+41)
  This adds support for Symbols, StringTable, and FrameData subsection types. Even though these subsections rarely if ever appear in a PDB file (they are usually in object files), there's no theoretical reason why they *couldn't* appear in a PDB.

  The real issue though is that in order to add support for dumping and writing them (which will be useful for object files), we need a way to test them. And since there is no support for reading and writing them to / from object files yet, making PDB support them is the best way to both add support for the underlying format and add support for tests at the same time. Later, when we go to add support for reading / writing them from object files, we'll need only minimal changes in the underlying read/write code.
  llvm-svn: 305037

* [llvm-pdbdump] Support native ordering of subsections in raw mode. (Zachary Turner, 2017-06-08; 2 files, -6/+42)
  This is the same change for the YAML output style applied to the raw output style. Previously we would queue up all subsections until every one had been read, and then output them in a predetermined order. This was because some subsections need to be read first in order to properly dump later subsections. This patch allows them to be dumped in the order they appear.
  Differential Revision: https://reviews.llvm.org/D34015
  llvm-svn: 305034

* [CodeView] Fix endianness bug. (Zachary Turner, 2017-06-05; 1 file, -1/+3)
  We should be outputting in little endian, but we were writing in host endianness.
  llvm-svn: 304741

* [CodeView] Handle Cross Module Imports and Exports. (Zachary Turner, 2017-06-05; 5 files, -4/+162)
  While it's not entirely clear why a compiler or linker might put this information into an object or PDB file, one has been spotted in the wild which was causing llvm-pdbdump to crash. This patch adds support for reading and writing these sections. Since I don't know how to get one of the native tools to generate this kind of debug info, the only test here is one in which we feed YAML into the tool to produce a PDB and then spit out YAML from the resulting PDB and make sure that it matches.
  llvm-svn: 304738

* [CodeView] Support CodeView subsections in any order. (Zachary Turner, 2017-06-02; 2 files, -10/+10)
  Previously we would expect certain subsections to appear in a certain order because some subsections would reference other subsections, but in practice we need to support arbitrary orderings since some object file and PDB file producers generate them this way. This also paves the way for supporting Yaml <-> Object File conversion of CodeView, since Object Files typically have quite a large number of subsections in their debug info.
  Differential Revision: https://reviews.llvm.org/D33807
  llvm-svn: 304588

* Fix 2 more -Wreorder warnings. (Zachary Turner, 2017-06-01; 1 file, -4/+4)
  llvm-svn: 304494

* [CodeView] Properly align symbol records on read/write. (Zachary Turner, 2017-06-01; 5 files, -14/+39)
  Object files have symbol records not aligned to any particular boundary (e.g. 1-byte aligned), while PDB files have symbol records padded to 4-byte aligned boundaries. Since they share the same reading / writing code, we have to provide an option to specify the alignment and propagate it up to the producer or consumer who knows what the alignment is supposed to be for the given container type. Added a test for this by modifying the existing PDB -> YAML -> PDB round-tripping code to round trip symbol records as well as types.
  Differential Revision: https://reviews.llvm.org/D33785
  llvm-svn: 304484

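  A minimal sketch of the padding rule the entry above describes; alignRecord is a local helper invented here, not an LLVM API:

    #include <cstddef>

    // Pad a record size up to the container's alignment (a power of two):
    // object files keep symbol records 1-byte aligned, PDBs pad to 4 bytes.
    constexpr std::size_t alignRecord(std::size_t Size, std::size_t Alignment) {
      return (Size + Alignment - 1) & ~(Alignment - 1);
    }

    static_assert(alignRecord(13, 1) == 13, "object-file container: no padding");
    static_assert(alignRecord(13, 4) == 16, "PDB container: pad to 4-byte boundary");
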
* [CodeView] Move CodeView YAML code to ObjectYAML. (Zachary Turner, 2017-05-30; 5 files, -8/+8)
  This is the beginning of an effort to move the codeview yaml reader / writer into ObjectYAML so that it can be shared. Currently the only consumer / producer of CodeView YAML is llvm-pdbdump, but CodeView can exist outside of PDB files, and indeed is put into object files and passed to the linker to produce PDB files. Furthermore, there are subtle differences in the types of records that show up in object file CodeView vs PDB file CodeView, but they are otherwise 99% the same.

  By having this code in ObjectYAML, we can have llvm-pdbdump reuse this code, while teaching obj2yaml and yaml2obj to use this syntax for dealing with object files that can contain CodeView.

  This patch only adds support for CodeView type information to ObjectYAML. Subsequent patches will add support for CodeView symbol information.
  llvm-svn: 304248

* [CodeView] Add more DebugSubsection implementations. (Zachary Turner, 2017-05-30; 9 files, -28/+120)
  This adds implementations for Symbols and FrameData, and renames the existing codeview::StringTable class to conform to the DebugSectionStringTable convention.
  llvm-svn: 304222

* [CodeView] Rename ModuleDebugFragment -> DebugSubsection. (Zachary Turner, 2017-05-30; 10 files, -219/+210)
  This is more concise, and matches the terminology used in other parts of the codebase more closely.
  llvm-svn: 304218

* Remove unused member. (Zachary Turner, 2017-05-25; 1 file, -2/+0)
  llvm-svn: 303942

* [CV Type Merging] Find nested type indices faster. (Zachary Turner, 2017-05-25; 3 files, -347/+413)
  Merging two type streams is one of the most time consuming parts of generating a PDB, and as such it needs to be as fast as possible. The visitor abstractions used for interoperating nicely with many different types of inputs and outputs have been used widely and help greatly for testability and implementing tools, but the abstractions build up and get in the way of performance.

  This patch removes all of the visitation stuff from the type stream merger, essentially re-inventing the leaf / member switch and loop, but at a very low level. This allows us many other optimizations, such as not actually deserializing *any* records (even member records which don't describe their own length), as the operation of "figure out how long this record is" is somewhat faster than "figure out how long this record *and* get all its fields out". Furthermore, whereas before we had to deserialize, re-write type indices, then re-serialize, now we don't have to do any of those 3 steps. We just find out where the type indices are and pull them directly out of the byte stream and re-write them.

  This is worth a 50-60% performance increase. On top of all other optimizations that have been applied this week, I now get the following numbers when linking lld.exe and lld.pdb:
    MSVC: 25.67s
    Before This Patch: 18.59s
    After This Patch: 8.92s
  So this is a huge performance win.
  Differential Revision: https://reviews.llvm.org/D33564
  llvm-svn: 303935

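  A rough standalone sketch of the low-level loop this describes, assuming a length-prefixed record layout (a 16-bit length covering everything after the length field, then a 16-bit kind); forEachRecord is hypothetical and is not the code from the patch:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <functional>

    // Walk a buffer of length-prefixed type records without deserializing them.
    inline void forEachRecord(
        uint8_t *Buf, size_t Size,
        const std::function<void(uint16_t Kind, uint8_t *Payload, size_t PayloadSize)> &Fn) {
      size_t Offset = 0;
      while (Offset + 4 <= Size) {
        uint16_t Len = 0, Kind = 0;
        std::memcpy(&Len, Buf + Offset, 2);       // host byte order, for brevity
        std::memcpy(&Kind, Buf + Offset + 2, 2);
        if (Len < 2 || Offset + 2 + Len > Size)
          break;                                  // malformed record; stop walking
        Fn(Kind, Buf + Offset + 4, Len - 2);      // hand out the raw payload bytes
        Offset += 2 + Len;                        // skip straight to the next record
      }
    }

  With something like this, rewriting a type index reduces to a 4-byte memcpy at a known offset inside the payload, with no deserialize / re-serialize round trip.
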
* [CodeView Type Merging] Don't keep re-allocating temp serializer. (Zachary Turner, 2017-05-25; 2 files, -9/+19)
  Previously, every time we wanted to serialize a field list record, we would create a new copy of FieldListRecordBuilder, which would in turn create a temporary instance of TypeSerializer, which itself had a std::vector<> that was about 128K in size. So this 128K allocation was happening every time. We can re-use the same instance over and over, we just have to clear its internal hash table and seen records list between each run. This saves us from the constant re-allocations.

  This is worth an ~18.5% speed increase (3.75s -> 3.05s) in my tests.
  Differential Revision: https://reviews.llvm.org/D33506
  llvm-svn: 303919

* [CodeView Type Merging] Avoid record deserialization when possible. (Zachary Turner, 2017-05-25; 3 files, -144/+243)
  A profile shows the majority of time doing type merging is spent deserializing records from sequences of bytes into friendly C++ structures that we can easily access members of in order to find the type indices to re-write.

  Records are prefixed with their length, however, and most records have type indices that appear at fixed offsets in the record. For these records, we can save some cycles by just looking at the right place in the byte sequence and re-writing the value, then skipping the record in the type stream. This saves us from the costly deserialization of examining every field, including potentially null terminated strings which are the slowest, even though it was unnecessary to begin with.

  In addition, we apply another optimization. Previously, after deserializing a record and re-writing its type indices, we would unconditionally re-serialize it in order to compute the hash of the re-written record. This would result in an alloc and memcpy for every record. If no type indices were re-written, however, this was an unnecessary allocation. In this patch re-writing is made two phase. The first phase discovers the indices that need to be rewritten and their new values. This information is passed through to the de-duplication code, which only copies and re-writes type indices in the serialized byte sequence if at least one type index is different.

  Some records have type indices which only appear after variable length strings, or which have lists of type indices, or various other situations that can make it tricky to make this optimization. While I'm not giving up on optimizing these cases as well, for now we can get the easy cases out of the way and lay the groundwork for more complicated cases later.

  This patch yields another 50% speedup on top of the already large speedups submitted over the past 2 days. In two tests I have run, I went from 9 seconds to 3 seconds, and from 16 seconds to 8 seconds.
  Differential Revision: https://reviews.llvm.org/D33480
  llvm-svn: 303914

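  A standalone sketch of the two-phase rewrite described above, with invented names (the real patch operates on LLVM's own stream and allocator types):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Phase 1 result for one serialized record: byte offsets (within the record)
    // that hold a 32-bit type index, paired with the remapped value for each.
    struct IndexPatch { size_t Offset; uint32_t NewIndex; };

    // Phase 2: copy and patch the record only if at least one index actually
    // changed; otherwise the caller keeps reusing the original bytes unmodified.
    inline std::vector<uint8_t> applyPatches(const uint8_t *Record, size_t Size,
                                             const std::vector<IndexPatch> &Patches,
                                             bool &Changed) {
      Changed = false;
      for (const IndexPatch &P : Patches) {
        uint32_t Old = 0;
        std::memcpy(&Old, Record + P.Offset, 4);
        if (Old != P.NewIndex) {
          Changed = true;
          break;
        }
      }
      if (!Changed)
        return {};                        // no allocation, no memcpy of the record
      std::vector<uint8_t> Copy(Record, Record + Size);
      for (const IndexPatch &P : Patches)
        std::memcpy(Copy.data() + P.Offset, &P.NewIndex, 4);
      return Copy;
    }
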
* Don't do a full scan of the type stream before processing records. (Zachary Turner, 2017-05-24; 1 file, -11/+11)
  LazyRandomTypeCollection is designed for random access, and in order to provide this it lazily indexes ranges of types. In the case of types from an object file, there is no partial index to build off of, so it has to index the full stream up front. However, merging types only requires sequential access, and when that is needed, this extra work is simply wasted. Changing the algorithm to work on sequential arrays of types rather than random access type collections eliminates this up front scan.
  llvm-svn: 303707

* [CodeView] Eliminate redundant hashes and allocations. (Zachary Turner, 2017-05-23; 1 file, -2/+3)
  When writing field list records, we would construct a temporary type serializer that shared a bump ptr allocator with the rest of the application, so anything allocated from here would live forever. Furthermore, this temporary serializer had all the properties of a full blown serializer including record hashing and de-duplication. These features are required when you're merging multiple type streams into each other, because different streams may contain identical records, but records from the same type stream will never collide with each other. So all of this hashing was unnecessary.

  To solve this, two fixes are made:
  1) The temporary serializer keeps its own bump ptr allocator instead of sharing a global one. When it's finished, all of its memory is freed.
  2) Instead of using the same temporary serializer for the life of an entire type stream, we use it only for the life of a single field list record and delete it when the field list record is completed. This way the hash table will not grow as other records from the same type stream get inserted. Further improvements could eliminate hashing entirely from this codepath.

  This reduces the link time by 85% in my test, from 1 minute to 9 seconds.
  llvm-svn: 303676

* Speculative build fix for non-Windows (Reid Kleckner, 2017-05-23; 1 file, -0/+2)
  llvm-svn: 303667

* [PDB] Hash types up front when merging types instead of using StringMap (Reid Kleckner, 2017-05-23; 2 files, -66/+120)
  Summary: First, StringMap uses llvm::HashString, which is only good for short identifiers and really bad for large blobs of binary data like type records. Moving to `DenseMap<StringRef, TypeIndex>` with some tricks for memory allocation fixes that. Unfortunately, that didn't buy very much performance. Profiling showed that we spend a long time during DenseMap growth rehashing existing entries. Also, in general, DenseMap is faster when the keys are small.

  This change takes that to the logical conclusion by introducing a small wrapper value type around a pointer to key data. The key data contains a precomputed hash, the original record data (pointer and size), and the type index, which is the "value" of our original map.

  This reduces the time to produce llvm-as.exe and llvm-as.pdb from ~15s on my machine to 3.5s, which is about a 4x improvement.
  Reviewers: zturner, inglorion, ruiu
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33428
  llvm-svn: 303665

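  A standalone sketch of the "precomputed hash key" idea using the standard library rather than DenseMap; the names are invented and the layout is only an approximation of the patch:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <unordered_map>

    // Small handle stored in the hash table: a precomputed hash plus a pointer
    // to externally-owned record bytes. Rehashing during growth only re-reads
    // Hash; it never re-hashes the (potentially large) record data.
    struct HashedTypeKey {
      uint64_t Hash;
      const uint8_t *Data;
      size_t Size;
    };

    struct KeyHash {
      size_t operator()(const HashedTypeKey &K) const { return size_t(K.Hash); }
    };

    struct KeyEqual {
      // A full byte comparison is only needed on a hash collision.
      bool operator()(const HashedTypeKey &A, const HashedTypeKey &B) const {
        return A.Size == B.Size && std::memcmp(A.Data, B.Data, A.Size) == 0;
      }
    };

    // Maps a serialized type record to the type index it was assigned.
    using TypeDedupMap = std::unordered_map<HashedTypeKey, uint32_t, KeyHash, KeyEqual>;
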
* Revert "Make TypeSerializer's StringMap use the same allocator." (Zachary Turner, 2017-05-23; 2 files, -13/+32)
  This reverts commit e34ccb7b57da25cc89ded913d8638a2906d1110a. This is causing failures on the ASAN bots.
  llvm-svn: 303640

* Implement various flavors of type merging. (Zachary Turner, 2017-05-22; 1 file, -76/+171)
  The previous algorithm assumed that types and ids are in a single unified stream. For inputs that come from object files, this is the case. But if the input is already a PDB, or is the result of a previous merge, then the types and ids will already have been split up, in which case we need an algorithm that can operate on independent streams of types and ids that refer to each other across stream boundaries.
  Differential Revision: https://reviews.llvm.org/D33417
  llvm-svn: 303577

* Make TypeSerializer's StringMap use the same allocator. (Zachary Turner, 2017-05-22; 2 files, -32/+13)
  llvm-svn: 303576

* Resubmit "[CodeView] Provide a common interface for type collections." (Zachary Turner, 2017-05-19; 11 files, -252/+535)
  This was originally reverted because it was breaking a bunch of bots and the breakage was not surfacing on Windows. After much head-scratching this was ultimately traced back to a bug in the lit test runner related to its pipe handling. Now that the bug in lit is fixed, Windows correctly reports these test failures, and as such I have finally (hopefully) fixed all of them in this patch.
  llvm-svn: 303446

* Revert "[CodeView] Provide a common interface for type collections." (Zachary Turner, 2017-05-19; 11 files, -532/+252)
  This is a squash of ~5 reverts of, well, pretty much everything I did today. Something is seriously broken with lit on Windows right now, and as a result assertions that fire in tests are triggering failures. I've been breaking non-Windows bots all day, which has seriously confused me because all my tests have been passing, and after running lit with -a to view the output even on successful runs, I find out that the tool is crashing and yet lit is still reporting it as a success! At this point I don't even know where to start, so rather than leave the tree broken for who knows how long, I will get this back to green, and then once lit is fixed on Windows, hopefully fix the remaining set of problems for real.
  llvm-svn: 303409

* Don't crash if someone tries to visit an empty type stream. (Zachary Turner, 2017-05-19; 1 file, -0/+3)
  llvm-svn: 303408

* [CodeView] Reduce memory usage in TypeSerializer. (Zachary Turner, 2017-05-19; 1 file, -5/+28)
  We were using a BumpPtrAllocator to allocate stable storage for a record, then trying to insert that into a hash table. If a collision occurred, the bytes were never inserted and the allocation was unnecessary. At the cost of an extra hash computation, check first if it exists, and only if it does not exist do we allocate and insert.
  llvm-svn: 303407

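  A standalone sketch of the check-before-allocate pattern, using standard containers in place of the BumpPtrAllocator-backed storage; the names are invented:

    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <string>
    #include <unordered_map>

    // Intern a serialized record: look it up first, and only make the long-lived
    // copy (here a deque entry, standing in for bump-allocated storage that is
    // never freed) when the record has not been seen before.
    class RecordInterner {
    public:
      uint32_t intern(const uint8_t *Data, size_t Size) {
        std::string Key(reinterpret_cast<const char *>(Data), Size); // transient copy
        auto It = Index.find(Key);
        if (It != Index.end())
          return It->second;               // duplicate: no long-lived allocation
        Stable.push_back(Key);             // stable storage is allocated only now
        uint32_t Id = static_cast<uint32_t>(Stable.size() - 1);
        Index.emplace(std::move(Key), Id);
        return Id;
      }

    private:
      std::deque<std::string> Stable;      // long-lived copies of unique records
      std::unordered_map<std::string, uint32_t> Index;
    };
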
* Fix crasher in CodeView test. (Zachary Turner, 2017-05-19; 1 file, -1/+1)
  Apparently this was always broken, but previously we were more graceful about it and we would print "unknown udt" if we couldn't find the type index, whereas now we just segfault because we assume it's valid. But this exposed a real bug, which is that we weren't looking in the right place. So fix that, and also fix this crash at the same time.
  llvm-svn: 303397

* Fix some build errors and warnings. (Zachary Turner, 2017-05-18; 1 file, -1/+1)
  llvm-svn: 303391

* [CodeView] Raise the source to ID map out of the TypeStreamMerger. (Zachary Turner, 2017-05-18; 1 file, -4/+8)
  This map will be needed to rewrite symbol streams after re-writing the corresponding type streams.
  llvm-svn: 303390

* [CodeView] Provide a common interface for type collections. (Zachary Turner, 2017-05-18; 11 files, -251/+528)
  Right now we have multiple notions of things that represent collections of types. Most commonly used are TypeDatabase, which is supposed to keep mappings from TypeIndex to type name when reading a type stream, which happens when reading PDBs. And also TypeTableBuilder, which is used to build up a collection of types dynamically which we will later serialize (i.e. when writing PDBs).

  But often you just want to do some operation on a collection of types, and you may want to do the same operation on any kind of collection. For example, you might want to merge two TypeTableBuilders or you might want to merge two type streams that you loaded from various files.

  This dichotomy between reading and writing is responsible for a lot of the existing code duplication and overlapping responsibilities in the existing CodeView library classes. For example, after building up a TypeTableBuilder with a bunch of type records, if we want to dump it we have to re-invent a bunch of extra glue because our dumper takes a TypeDatabase or a CVTypeArray, which are both incompatible with TypeTableBuilder.

  This patch introduces an abstract base class called TypeCollection which is shared between the various type-collection-like things. Wherever we previously stored a TypeDatabase& in some common class, we now store a TypeCollection&.

  The advantage of this is that all the details of how the collection is implemented, such as lazy deserialization of partial type streams, are completely transparent, and you can just treat any collection of types the same regardless of where it came from.
  Differential Revision: https://reviews.llvm.org/D33293
  llvm-svn: 303388

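  A sketch of what such a shared interface can look like; this is an invented approximation, not the actual TypeCollection API:

    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Both readers of existing type streams and in-memory table builders can
    // implement an interface of this general shape, so dumpers and mergers can
    // accept either one without extra glue.
    class TypeCollectionLike {
    public:
      virtual ~TypeCollectionLike() = default;
      virtual bool contains(uint32_t TypeIndex) = 0;
      virtual std::string name(uint32_t TypeIndex) = 0;            // may deserialize lazily
      virtual const uint8_t *recordData(uint32_t TypeIndex) = 0;   // raw serialized bytes
      virtual size_t recordSize(uint32_t TypeIndex) = 0;
      virtual uint32_t size() = 0;                                 // records known so far
    };
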
* [CodeView] Simplify the use of visiting type records & streams. (Zachary Turner, 2017-05-17; 5 files, -47/+104)
  There is often a lot of boilerplate code required to visit a type record or type stream. The #1 use case is that you have a sequence of bytes that represent one or more records, and you want to deserialize each one, switch on it, and call a callback with the deserialized record that the user can examine. Currently this requires at least 6 lines of code:

    codeview::TypeVisitorCallbackPipeline Pipeline;
    Pipeline.addCallbackToPipeline(Deserializer);
    Pipeline.addCallbackToPipeline(MyCallbacks);
    codeview::CVTypeVisitor Visitor(Pipeline);
    consumeError(Visitor.visitTypeRecord(Record));

  With this patch, it becomes one line of code:

    consumeError(codeview::visitTypeRecord(Record, MyCallbacks));

  This is done by having the deserialization happen internally inside of the visitTypeRecord function. Since this is occasionally not desirable, the function provides a 3rd parameter that can be used to change this behavior. Hopefully this can significantly reduce the barrier to entry to using the visitation infrastructure.
  Differential Revision: https://reviews.llvm.org/D33245
  llvm-svn: 303271

* [CodeView] Add a random access type visitor. (Zachary Turner, 2017-05-12; 6 files, -88/+159)
  This adds a visitor that is capable of accessing type records randomly and caching intermediate results that it learns about during partial linear scans. This yields amortized O(1) access to a type stream even though type streams cannot normally be indexed.
  Differential Revision: https://reviews.llvm.org/D33009
  llvm-svn: 302936

* Removing a file that is not necessary (and was causing link diagnostics with MSVC 2015); NFC. (Aaron Ballman, 2017-05-09; 2 files, -11/+0)
  llvm-svn: 302531

* [CodeView] Add support for random access type visitors. (Zachary Turner, 2017-05-08; 3 files, -31/+136)
  Previously type visitation was done strictly sequentially, and TypeIndexes were computed by incrementing the TypeIndex of the last visited record. This works fine for situations like dumping, but not when you want to visit types in random order. For example, in a debug session someone might look up a symbol by name, find that it has TypeIndex 10,000, and then want to go straight to TypeIndex 10,000.

  In order to make this work, the visitation framework needs a mode where it can plumb TypeIndices through the callback pipeline. This patch adds such a mode. In doing so, it is necessary to provide an alternative implementation of TypeDatabase that supports random access, so that is done as well.

  Nothing actually uses these random access capabilities yet, but this will be done in subsequent patches.
  Differential Revision: https://reviews.llvm.org/D32928
  llvm-svn: 302454

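  A standalone sketch of the caching idea behind the amortized O(1) access mentioned above, assuming a 16-bit length prefix on each record; the class and its details are invented for illustration:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <map>

    // Records in a type stream are variable-length, so the only way to find
    // the Nth record is to walk length prefixes. This sketch caches every
    // (type index -> byte offset) pair it learns, so a later lookup resumes
    // from the closest record already seen instead of rescanning the stream.
    class LazyTypeOffsetIndex {
    public:
      // Buf/Size describe the raw stream; FirstIndex is the index of record 0.
      LazyTypeOffsetIndex(const uint8_t *Buf, size_t Size, uint32_t FirstIndex)
          : Buf(Buf), Size(Size) {
        Known[FirstIndex] = 0;
      }

      // Byte offset of the record with the given index (assumed >= FirstIndex).
      // Returns Size if the index lies past the end of the stream.
      size_t offsetOf(uint32_t Index) {
        auto It = Known.upper_bound(Index);   // first cached entry after Index...
        --It;                                 // ...so the one before is <= Index
        uint32_t CurIndex = It->first;
        size_t Offset = It->second;
        while (CurIndex < Index && Offset + 2 <= Size) {
          uint16_t Len = 0;                   // 16-bit prefix: bytes after the prefix
          std::memcpy(&Len, Buf + Offset, 2);
          Offset += 2 + Len;
          ++CurIndex;
          Known[CurIndex] = Offset;           // remember what this scan just learned
        }
        return CurIndex == Index ? Offset : Size;
      }

    private:
      const uint8_t *Buf;
      size_t Size;
      std::map<uint32_t, size_t> Known;       // type index -> byte offset, sorted
    };
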
* [CodeView] Reserve TypeDatabase records up front. (Zachary Turner, 2017-05-05; 1 file, -0/+5)
  Most of the time we know exactly how many type records we have in a list, and we want to use the visitor to deserialize them into actual records in a database. Previously we were just using push_back() every time without reserving the space up front in the vector. This is obviously terrible from a performance standpoint, and it's not uncommon to have PDB files with half a million type records, where the performance degradation was quite noticeable.
  llvm-svn: 302302

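  A minimal sketch of the reserve-up-front pattern; TypeEntry and buildDatabase are placeholders, not LLVM types:

    #include <cstddef>
    #include <vector>

    struct TypeEntry { /* deserialized record fields */ };

    // When the record count is known up front, reserve once instead of letting
    // push_back repeatedly grow (and re-copy) the vector for ~500k records.
    inline std::vector<TypeEntry> buildDatabase(size_t KnownRecordCount) {
      std::vector<TypeEntry> Records;
      Records.reserve(KnownRecordCount);   // one allocation up front
      for (size_t I = 0; I < KnownRecordCount; ++I)
        Records.push_back(TypeEntry{});    // no reallocation inside the loop
      return Records;
    }
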
* Remove unused private field. (Zachary Turner, 2017-05-03; 1 file, -3/+2)
  llvm-svn: 302069

* [CodeView] Remove constructor initialization of a removed field. (Davide Italiano, 2017-05-03; 1 file, -2/+2)
  I should've staged this with my last commit.
  llvm-svn: 302059

* [CodeView] Use actual strings for dealing with checksums and lines. (Zachary Turner, 2017-05-03; 4 files, -17/+41)
  The raw CodeView format references strings by "offsets", but it's confusing what table the offset refers to. In the case of line number information, it's an offset into a buffer of records, and an indirection is required to get another offset into a different table to find the final string. And in the case of checksum information, there is no indirection, and the offset refers directly to the location of the string in another buffer.

  This would be less confusing if we always just referred to the strings by their value, and had the library be smart enough to correctly resolve the offsets on its own from the right location. This patch makes that possible. When either reading or writing, all the user deals with are strings, and the library does the appropriate translations behind the scenes.
  llvm-svn: 302053

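  A standalone sketch of the "strings in, strings out" idea, with offsets kept as an internal detail; this is not the LLVM string table class:

    #include <cstdint>
    #include <map>
    #include <string>

    // Callers insert and read strings by value; byte offsets into the flat
    // buffer are only used internally, e.g. at serialization time.
    class StringTableSketch {
    public:
      uint32_t insert(const std::string &S) {
        auto It = OffsetOf.find(S);
        if (It != OffsetOf.end())
          return It->second;                     // already present: reuse its offset
        uint32_t Offset = static_cast<uint32_t>(Buffer.size());
        OffsetOf.emplace(S, Offset);
        Buffer += S;
        Buffer.push_back('\0');                  // entries are NUL-terminated
        return Offset;
      }

      // Resolve an offset handed back by insert(); reads up to the next NUL.
      std::string getString(uint32_t Offset) const {
        return std::string(Buffer.c_str() + Offset);
      }

    private:
      std::string Buffer;                        // concatenated NUL-terminated strings
      std::map<std::string, uint32_t> OffsetOf;
    };
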
* [llvm-readobj] Update readobj to re-use parsing code. (Zachary Turner, 2017-05-03; 2 files, -18/+19)
  llvm-readobj hand rolls some CodeView parsing code for string tables, so this patch updates it to re-use some of the newly introduced parsing code in LLVMDebugInfoCodeView.
  Differential Revision: https://reviews.llvm.org/D32772
  llvm-svn: 302052