Regular expression to remove Python comments

Published 2024-09-27 07:23:59


I want to remove all the comments from a Python file. The file looks like this:

---------------comment.py------------------

# this is comment line.
age = 18  # comment in line
msg1 = "I'm #1."  # comment. there's a # in code.
msg2 = 'you are #2. ' + 'He is #3'  # strange sign ' # ' in comment. 
print('Waiting your answer')

I have written quite a few regular expressions to extract all the comments. Some of them are shown below:

(?(?<=['"])(?<=['"])\s*#.*$|\s*#.*$)
it matches:  #1."  # comment. there's a # in code.

(?<=('|")[^\1]*\1)\s*#.*$|\s*#.*$
invalid: Python's re only allows fixed-width patterns inside the lookbehind (?<=..)

But they do not work correctly. What is the right regular expression? Can you help me?
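
A quick check with the re module shows why these patterns go wrong: the first '#' they find is the one inside the string literal. A minimal sketch of the failure (using the msg1 line from the file above):

import re

line = 'msg1 = "I\'m #1."  # comment. there\'s a # in code.'
print(re.search(r'#.*$', line).group())
# prints: #1."  # comment. there's a # in code.
# the match starts at the '#' inside the string, not at the real comment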


2 Answers

You can try using tokenize instead of regex. As @OlvinRoght said, parsing code with regex is probably a bad idea in this case. As you can see here, you can try something like the following to detect the comments:

import tokenize
fileObj = open(r'yourpath\comment.py', 'r')
for toktype, tok, start, end, line in tokenize.generate_tokens(fileObj.readline):
    # we can also use token.tok_name[toktype] instead of 'COMMENT'
    # from the token module 
    if toktype == tokenize.COMMENT:
        print('COMMENT' + " " + tok)

Output:

COMMENT # -*- coding: utf-8 -*-
COMMENT # this is comment line.
COMMENT # comment in line
COMMENT # comment. there's a # in code.
COMMENT # strange sign ' # ' in comment.

Then, to get the expected result, i.e. the Python file without the comments, you can try something like this:

nocomments = []
fileObj.seek(0)  # rewind: the first loop already consumed the file
for toktype, tok, start, end, line in tokenize.generate_tokens(fileObj.readline):
    if toktype != tokenize.COMMENT:
        nocomments.append(tok)

print(' '.join(nocomments))

Output:

 age = 18 
 msg1 = "I'm #1." 
 msg2 = 'you are #2. ' + 'He is #3' 
 print ( 'Waiting your answer' )  
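
If you also want to keep the original spacing instead of re-joining the tokens with single spaces, a minimal sketch along the same lines is to slice each COMMENT token off its line using the token's start column (the helper strip_comments below is hypothetical, not part of the code above):

import io
import tokenize

def strip_comments(source_text):
    # Hypothetical helper: drop COMMENT tokens but keep the original layout.
    lines = source_text.splitlines(keepends=True)
    tokens = tokenize.generate_tokens(io.StringIO(source_text).readline)
    for toktype, tok, (srow, scol), (erow, ecol), _ in tokens:
        if toktype == tokenize.COMMENT:
            # A comment never spans lines, so cut it off its own line.
            kept = lines[srow - 1][:scol].rstrip()
            if lines[srow - 1].endswith('\n'):
                kept += '\n'
            lines[srow - 1] = kept
    return ''.join(lines)

with open('comment.py') as f:
    print(strip_comments(f.read()))

This leaves an empty line where a whole-line comment used to be, so line numbers stay the same.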

Credits: https://gist.github.com/BroHui/aca2b8e6e6bdf3cb4af4b246c9837fa3

This will do the trick. It uses tokenize. You can modify this code to suit your use case:

""" Strip comments and docstrings from a file.
"""

import sys, token, tokenize

def do_file(fname):
    """ Run on just one file.
    """
    source = open(fname)
    mod = open(fname + ",strip", "w")

    prev_toktype = token.INDENT
    first_line = None
    last_lineno = -1
    last_col = 0

    tokgen = tokenize.generate_tokens(source.readline)
    for toktype, ttext, (slineno, scol), (elineno, ecol), ltext in tokgen:
        if 0:   # Change to if 1 to see the tokens fly by.
            print("%10s %-14s %-20r %r" % (
                tokenize.tok_name.get(toktype, toktype),
                "%d.%d-%d.%d" % (slineno, scol, elineno, ecol),
                ttext, ltext
                ))
        if slineno > last_lineno:
            last_col = 0
        if scol > last_col:
            mod.write(" " * (scol - last_col))
        if toktype == token.STRING and prev_toktype == token.INDENT:
            # Docstring
            mod.write("# ")
        elif toktype == tokenize.COMMENT:
            # Comment
            mod.write("\n")
        else:
            mod.write(ttext)
        prev_toktype = toktype
        last_col = ecol
        last_lineno = elineno

if __name__ == '__main__':
    do_file("text.txt")

text.txt:

# this is comment line.
age = 18  # comment in line
msg1 = "I'm #1."  # comment. there's a # in code.
msg2 = 'you are #2. ' + 'He is #3'  # strange sign ' # ' in comment. 
print('Waiting your answer')

Output (written to text.txt,strip):

age = 18  

msg1 = "I'm #1."  

msg2 = 'you are #2. ' + 'He is #3'  

print('Waiting your answer')

Input:

msg1 = "I'm #1."  # comment. there's a # in code.  the regex #.*$ will match #1."  # comment. there's a # in code. . Right match should be # comment. there's a # in code.

Output:

msg1 = "I'm #1."  
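
To see how tokenize tells the two '#' signs apart on that line, a small check (again only a sketch, tokenizing the single line from a string) prints just the real comment and the column it starts at:

import io
import tokenize

line = 'msg1 = "I\'m #1."  # comment. there\'s a # in code.\n'
for toktype, tok, start, end, _ in tokenize.generate_tokens(io.StringIO(line).readline):
    if toktype == tokenize.COMMENT:
        print(start, tok)
# prints: (1, 18) # comment. there's a # in code.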
