I am currently learning Scala and am looking for an elegant solution to a problem that is easily solved with coroutines.
Since coroutines are not enabled by default in Scala, I assume they are at least not a widely accepted best practice, so I would like to write the code without them.
A convincing argument that coroutines/continuations are best practice would also be an acceptable answer.
I want to write a function that searches for files under a base directory. The match and descend criteria should be supplied by an instance of a class with a PathMatcher trait (at least I think that is the Scala way).
The PathMatcher can be used to decide whether fs_item_path matches, and whether the search should descend into a directory (if fs_item_path is the path of a directory).
The Python implementation below is only meant to illustrate the functionality I want.
I would like to write this code "the Scala way".
My goal is a solution with the following properties:
I assume the solution will involve lazily evaluated streams, but I have not been able to assemble the streams in an efficient way.
I have also read that lazy streams, if used incorrectly, keep copies of "old values" alive. The solution I am looking for should not do that.
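The "old values" concern is usually head retention: a memoizing stream keeps every cell it has forced reachable for as long as something holds a reference to its head. A minimal sketch of the difference (Scala 2.13+ LazyList; the names are only illustrative):

```scala
object HeadRetentionDemo extends App {
  // Naming the head pins forced cells: everything forced through `pinned`
  // stays reachable (and thus in memory) for as long as `pinned` is alive.
  val pinned = LazyList.from(1)
  pinned.take(3).foreach(_ => ())   // cells 1..3 are now memoized via `pinned`

  // Consuming without retaining a reference to the head lets each forced
  // cell become garbage as soon as the traversal moves past it.
  LazyList.from(1).take(5).foreach(println)   // prints 1 through 5
}
```

The second form is what a streaming directory search should look like: the stream is built and consumed in one expression, so no outer `val` keeps old elements alive.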
The function's arguments are:
the absolute path of the directory where the search starts,
a list of directory names indicating how far we have descended into subdirectories of base_abs_path,
an instance of a class with the PathMatcher trait.
In the example below I use a regex-based implementation, but I do not want to restrict usage to regular expressions.
Here is a complete Python program (tested with Python 3.4) that includes a Python version of generate_all_matching_paths.
The program searches "d:\Projects" for file system paths ending in "json", analyzes the indentation style each file uses, and then prints the results.
If a path contains the substring "python_portable", the search does not descend into that directory.
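Such a PathMatcher trait might look like this in Scala. This is only a sketch; all names here (PathMatcher, MatchResult, RegexPathMatcher) are my own invention, not an existing API:

```scala
// Hypothetical Scala counterpart of the Python matcher protocol below.
final case class MatchResult(isMatch: Boolean, doDescend: Boolean)

trait PathMatcher {
  def matches(relPath: String): MatchResult
}

// One regex-backed implementation, mirroring MyMatcher from the Python
// example; any other matching strategy could implement the trait instead.
final class RegexPathMatcher(
    relPathRegex: scala.util.matching.Regex,
    abortDescendRegexes: List[scala.util.matching.Regex]
) extends PathMatcher {
  def matches(relPath: String): MatchResult = {
    // pattern.matcher(...).matches() requires a full match, roughly like
    // Python's re.match with a pattern anchored at both ends.
    val isMatch = relPathRegex.pattern.matcher(relPath).matches()
    val doDescend =
      !abortDescendRegexes.exists(_.pattern.matcher(relPath).matches())
    MatchResult(isMatch, doDescend)
  }
}
```

Keeping the match result a plain case class means callers can pattern-match on it, and new matcher strategies only have to implement one method.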
import os
import re
import codecs
#
# this is the bespoke function I want to port to Scala
#
def generate_all_matching_paths(
        base_dir_abs_path,
        rel_ancestor_dir_list,
        rel_path_matcher
):
    rooted_ancestor_dir_list = [base_dir_abs_path] + rel_ancestor_dir_list
    current_dir_abs_path = os.path.join(*rooted_ancestor_dir_list)
    dir_listing = os.listdir(current_dir_abs_path)
    for fs_item_name in dir_listing:
        fs_item_abs_path = os.path.join(
            current_dir_abs_path,
            fs_item_name
        )
        fs_item_rel_ancestor_list = rel_ancestor_dir_list + [fs_item_name]
        fs_item_rel_path = os.path.join(*fs_item_rel_ancestor_list)
        result = rel_path_matcher.match(fs_item_rel_path)
        if result.is_match:
            yield fs_item_abs_path
        if result.do_descend and os.path.isdir(fs_item_abs_path):
            child_ancestor_dir_list = rel_ancestor_dir_list + [fs_item_name]
            for r in generate_all_matching_paths(
                base_dir_abs_path,
                child_ancestor_dir_list,
                rel_path_matcher
            ):
                yield r
#
# all following code is only a context giving example of how generate_all_matching_paths might be used
#
class MyMatchResult:
    def __init__(
        self,
        is_match,
        do_descend
    ):
        self.is_match = is_match
        self.do_descend = do_descend
# in Scala this should implement the PathMatcher trait
class MyMatcher:
    def __init__(
        self,
        rel_path_regex,
        abort_dir_descend_regex_list
    ):
        self.rel_path_regex = rel_path_regex
        self.abort_dir_descend_regex_list = abort_dir_descend_regex_list

    def match(self, path):
        rel_path_match = self.rel_path_regex.match(path)
        is_match = rel_path_match is not None
        do_descend = True
        for abort_dir_descend_regex in self.abort_dir_descend_regex_list:
            abort_match = abort_dir_descend_regex.match(path)
            if abort_match:
                do_descend = False
                break
        r = MyMatchResult(is_match, do_descend)
        return r
def leading_whitespace(file_path):
    b_leading_spaces = False
    b_leading_tabs = False
    with codecs.open(file_path, "r", "utf-8") as f:
        for line in f:
            for c in line:
                if c == '\t':
                    b_leading_tabs = True
                elif c == ' ':
                    b_leading_spaces = True
                else:
                    break
            if b_leading_tabs and b_leading_spaces:
                break
    return b_leading_spaces, b_leading_tabs
def print_paths(path_list):
    for path in path_list:
        print(path)
def main():
    leading_spaces_file_path_list = []
    leading_tabs_file_path_list = []
    leading_mixed_file_path_list = []
    leading_none_file_path_list = []
    base_dir_abs_path = r'd:\Projects'
    rel_path_regex = re.compile('.*json$')
    abort_dir_descend_regex_list = [
        re.compile('^.*python_portable.*$')
    ]
    rel_patch_matcher = MyMatcher(rel_path_regex, abort_dir_descend_regex_list)
    ancestor_dir_list = []
    for fs_item_path in generate_all_matching_paths(
        base_dir_abs_path,
        ancestor_dir_list,
        rel_patch_matcher
    ):
        if os.path.isfile(fs_item_path):
            b_leading_spaces, b_leading_tabs = leading_whitespace(fs_item_path)
            if b_leading_spaces and b_leading_tabs:
                leading_mixed_file_path_list.append(fs_item_path)
            elif b_leading_spaces:
                leading_spaces_file_path_list.append(fs_item_path)
            elif b_leading_tabs:
                leading_tabs_file_path_list.append(fs_item_path)
            else:
                leading_none_file_path_list.append(fs_item_path)
    print('space indentation:')
    print_paths(leading_spaces_file_path_list)
    print('tab indentation:')
    print_paths(leading_tabs_file_path_list)
    print('mixed indentation:')
    print_paths(leading_mixed_file_path_list)
    print('no indentation:')
    print_paths(leading_none_file_path_list)
    print('space: {}'.format(len(leading_spaces_file_path_list)))
    print('tab: {}'.format(len(leading_tabs_file_path_list)))
    print('mixed: {}'.format(len(leading_mixed_file_path_list)))
    print('none: {}'.format(len(leading_none_file_path_list)))

if __name__ == '__main__':
    main()
Here is another way to implement this in Scala (again using Streams):
You are right that you would usually replace the Python yield with some kind of lazy evaluation. Here is a proof of concept; it uses a case class to represent a directory so that this example avoids file IO. You can essentially treat the stream as if it held the whole file system, but internally it only fetches entries as they are needed and discards them just as quickly, unless you keep references to them somewhere else.
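Against the real file system, the same idea can be sketched with java.io.File and a LazyList (Scala 2.13+; use Stream with #:: on older versions). This is a proof of concept, not a definitive implementation: MatchResult and PathMatcher restate the hypothetical trait from the question, and they are not standard library types.

```scala
import java.io.File

object MatchingPaths {
  final case class MatchResult(isMatch: Boolean, doDescend: Boolean)

  trait PathMatcher {
    def matches(relPath: String): MatchResult
  }

  def generateAllMatchingPaths(
      baseDirAbsPath: String,
      relAncestorDirList: List[String],
      matcher: PathMatcher
  ): LazyList[String] = {
    val currentDir =
      new File((baseDirAbsPath :: relAncestorDirList).mkString(File.separator))
    // listFiles returns null for unreadable paths; treat that as empty.
    val entries =
      Option(currentDir.listFiles).map(_.to(LazyList)).getOrElse(LazyList.empty)
    entries.flatMap { entry =>
      val relPath = (relAncestorDirList :+ entry.getName).mkString(File.separator)
      val result = matcher.matches(relPath)
      val hit =
        if (result.isMatch) LazyList(entry.getAbsolutePath)
        else LazyList.empty[String]
      // The right-hand side of #::: is by-name, so the recursive descent
      // is not evaluated until the caller actually reaches those elements.
      hit #::: (
        if (result.doDescend && entry.isDirectory)
          generateAllMatchingPaths(baseDirAbsPath,
                                   relAncestorDirList :+ entry.getName,
                                   matcher)
        else LazyList.empty[String]
      )
    }
  }
}
```

A caller can then iterate lazily, e.g. `generateAllMatchingPaths(base, Nil, matcher).foreach(println)`; as long as no reference to the stream's head is retained, visited entries become garbage as the traversal moves on, which addresses the "old values" concern from the question.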