| author | Adrian Prantl <aprantl@apple.com> | 2018-04-30 16:49:04 +0000 |
|---|---|---|
| committer | Adrian Prantl <aprantl@apple.com> | 2018-04-30 16:49:04 +0000 |
| commit | 05097246f352eca76207c9ebb08656c88bdf751a (patch) | |
| tree | bfc4ec8250a939aaf4ade6fc6c528726183e5367 /lldb/source/Breakpoint/BreakpointResolverFileLine.cpp | |
| parent | add59c052dd6768fd54431e6a3bf045e7f25cb59 (diff) | |
| download | bcm5719-llvm-05097246f352eca76207c9ebb08656c88bdf751a.tar.gz bcm5719-llvm-05097246f352eca76207c9ebb08656c88bdf751a.zip | |
Reflow paragraphs in comments.
This is intended as a clean up after the big clang-format commit
(r280751), which unfortunately resulted in many of the comment
paragraphs in LLDB being very hard to read.
FYI, the script I used was:
import textwrap
import commands
import os
import sys
import re
tmp = "%s.tmp"%sys.argv[1]
out = open(tmp, "w+")
with open(sys.argv[1], "r") as f:
header = ""
text = ""
comment = re.compile(r'^( *//) ([^ ].*)$')
special = re.compile(r'^((([A-Z]+[: ])|([0-9]+ )).*)|(.*;)$')
for line in f:
match = comment.match(line)
if match and not special.match(match.group(2)):
# skip intentionally short comments.
if not text and len(match.group(2)) < 40:
out.write(line)
continue
if text:
text += " " + match.group(2)
else:
header = match.group(1)
text = match.group(2)
continue
if text:
filled = textwrap.wrap(text, width=(78-len(header)),
break_long_words=False)
for l in filled:
out.write(header+" "+l+'\n')
text = ""
out.write(line)
os.rename(tmp, sys.argv[1])
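For reference, a minimal driver is sketched below showing how a script like the one above could be run over a source tree. It is not part of the commit; the `reflow.py` filename, the `python2` interpreter name (the script imports the Python 2-only `commands` module), the default root directory, and the extension list are assumptions made for illustration.

```python
# Hypothetical driver for the reflow script above (not part of the commit).
# Assumes the script was saved as reflow.py and rewrites one file per run.
import os
import subprocess
import sys

root = sys.argv[1] if len(sys.argv) > 1 else "lldb"
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        # Limit the rewrite to C/C++ sources and headers (assumed extensions).
        if name.endswith((".cpp", ".h", ".mm")):
            path = os.path.join(dirpath, name)
            # reflow.py rewrites the file in place via os.rename; it targets
            # Python 2, so the interpreter name here is an assumption.
            subprocess.check_call(["python2", "reflow.py", path])
```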
Differential Revision: https://reviews.llvm.org/D46144
llvm-svn: 331197
Diffstat (limited to 'lldb/source/Breakpoint/BreakpointResolverFileLine.cpp')
| -rw-r--r-- | lldb/source/Breakpoint/BreakpointResolverFileLine.cpp | 40 |
1 file changed, 20 insertions, 20 deletions
diff --git a/lldb/source/Breakpoint/BreakpointResolverFileLine.cpp b/lldb/source/Breakpoint/BreakpointResolverFileLine.cpp
index ca69ec9e4ef..ecef88eb998 100644
--- a/lldb/source/Breakpoint/BreakpointResolverFileLine.cpp
+++ b/lldb/source/Breakpoint/BreakpointResolverFileLine.cpp
@@ -110,10 +110,10 @@ BreakpointResolverFileLine::SerializeToStructuredData() {
// Filter the symbol context list to remove contexts where the line number was
// moved into a new function. We do this conservatively, so if e.g. we cannot
-// resolve the function in the context (which can happen in case of
-// line-table-only debug info), we leave the context as is. The trickiest part
-// here is handling inlined functions -- in this case we need to make sure we
-// look at the declaration line of the inlined function, NOT the function it was
+// resolve the function in the context (which can happen in case of line-table-
+// only debug info), we leave the context as is. The trickiest part here is
+// handling inlined functions -- in this case we need to make sure we look at
+// the declaration line of the inlined function, NOT the function it was
// inlined into.
void BreakpointResolverFileLine::FilterContexts(SymbolContextList &sc_list,
                                                bool is_relative) {
@@ -133,8 +133,8 @@ void BreakpointResolverFileLine::FilterContexts(SymbolContextList &sc_list,
      // relative parts of the path match the path from support files
      auto sc_dir = sc.line_entry.file.GetDirectory().GetStringRef();
      if (!sc_dir.endswith(relative_path)) {
-        // We had a relative path specified and the relative directory
-        // doesn't match so remove this one
+        // We had a relative path specified and the relative directory doesn't
+        // match so remove this one
        LLDB_LOG(log, "removing not matching relative path {0} since it "
                 "doesn't end with {1}", sc_dir, relative_path);
        sc_list.RemoveContextAtIndex(i);
@@ -199,20 +199,20 @@ BreakpointResolverFileLine::SearchCallback(SearchFilter &filter,
  assert(m_breakpoint != NULL);

  // There is a tricky bit here. You can have two compilation units that
-  // #include the same file, and in one of them the function at m_line_number is
-  // used (and so code and a line entry for it is generated) but in the other it
-  // isn't. If we considered the CU's independently, then in the second
-  // inclusion, we'd move the breakpoint to the next function that actually
-  // generated code in the header file. That would end up being confusing. So
-  // instead, we do the CU iterations by hand here, then scan through the
-  // complete list of matches, and figure out the closest line number match, and
-  // only set breakpoints on that match.
-
-  // Note also that if file_spec only had a file name and not a directory, there
-  // may be many different file spec's in the resultant list. The closest line
-  // match for one will not be right for some totally different file. So we go
-  // through the match list and pull out the sets that have the same file spec
-  // in their line_entry and treat each set separately.
+  // #include the same file, and in one of them the function at m_line_number
+  // is used (and so code and a line entry for it is generated) but in the
+  // other it isn't. If we considered the CU's independently, then in the
+  // second inclusion, we'd move the breakpoint to the next function that
+  // actually generated code in the header file. That would end up being
+  // confusing. So instead, we do the CU iterations by hand here, then scan
+  // through the complete list of matches, and figure out the closest line
+  // number match, and only set breakpoints on that match.
+
+  // Note also that if file_spec only had a file name and not a directory,
+  // there may be many different file spec's in the resultant list. The
+  // closest line match for one will not be right for some totally different
+  // file. So we go through the match list and pull out the sets that have the
+  // same file spec in their line_entry and treat each set separately.
  FileSpec search_file_spec = m_file_spec;
  const bool is_relative = m_file_spec.IsRelative();
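
The reflowed comment in the last hunk summarizes the resolver's strategy: collect all matches, group them by the file spec in their line entry, and within each group keep only the contexts closest to the requested line. Below is a rough illustration of that idea only; it is not LLDB code, and the `Match` type, its fields, and `pick_breakpoint_lines` are invented for the sketch.

```python
# Illustrative sketch of "closest line match per file" as described in the
# comment above. Not LLDB's implementation; Match and its fields are invented.
from collections import defaultdict
from typing import List, NamedTuple

class Match(NamedTuple):
    file: str   # file spec recorded in the line entry
    line: int   # line where the compiler actually generated code

def pick_breakpoint_lines(matches: List[Match], requested_line: int) -> List[Match]:
    by_file = defaultdict(list)
    for m in matches:
        if m.line >= requested_line:   # ignore code before the requested line
            by_file[m.file].append(m)
    picked = []
    for group in by_file.values():
        closest = min(m.line for m in group)   # nearest line at or after the request
        picked.extend(m for m in group if m.line == closest)
    return picked

# Two CUs include the same header: one emits code at line 12, the other only
# at line 25. Treating all matches for that header together keeps the
# breakpoint at 12 instead of also landing on 25, while a different file
# ("b.h") is still handled as its own set.
print(pick_breakpoint_lines(
    [Match("a.h", 12), Match("a.h", 25), Match("b.h", 30)], requested_line=10))
```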

