I ran into a demo problem while testing an auto-scaling Dask Distributed setup on Kubernetes and AWS, and I'm not sure I'm tackling it correctly.
My scenario: given the md5 hash of a string (representing a password), find the original string. I hit three main problems.
A) The parameter space is large. Trying to create a dask bag with 2.8211099e+12 members caused memory problems (hence the 'explode' function you will see in the sample code below).
B) Exiting cleanly on early discovery. I think using take(1, npartitions=-1) achieves this, but I'm not sure. Originally I raised an exception, raise Exception('%s is your answer' % test_str), which worked but felt "dirty".
C) Given that this is a long-running process and workers or AWS machines sometimes die, what is the best way to store progress?
Sample code:
import distributed
import math
import dask.bag as db
import hashlib
import dask
import os
if os.environ.get('SCHED_URL', False):
    sched_url = os.environ['SCHED_URL']
    client = distributed.Client(sched_url)
    versions = client.get_versions(True)
    dask.set_options(get=client.get)
difficulty = 'easy'
settings = {
    'hard': (hashlib.md5('welcome1'.encode('utf-8')).hexdigest(), 'abcdefghijklmnopqrstuvwxyz1234567890', 8),
    'mid-hard': (hashlib.md5('032abgh'.encode('utf-8')).hexdigest(), 'abcdefghijklmnop1234567890', 7),
    'mid': (hashlib.md5('b08acd'.encode('utf-8')).hexdigest(), '0123456789abcdef', 6),
    'easy': (hashlib.md5('0812'.encode('utf-8')).hexdigest(), '0123456789', 4)
}
hashed_pw, keyspace, max_guess_length = settings[difficulty]
def is_pw(guess):
    return hashlib.md5(guess.encode('utf-8')).hexdigest() == hashed_pw

def guess(n):
    guess = ''
    size = len(keyspace)
    while n > 0:
        n -= 1
        guess += keyspace[n % size]
        n = math.floor(n / size)
    return guess
def make_exploder(num_partitions, max_val):
    """Creates a function that maps an int to a range, based on the maximum
    value aimed for and the number of partitions expected.
    Used in this code with map and flatten to take a short list,
    e.g. 1->1e6, to a large one, 1->1e20, in dask rather than on the host machine."""
    steps = math.ceil(max_val / num_partitions)
    def explode(partition):
        return range(partition * steps, partition * steps + steps)
    return explode
max_val = len(keyspace) ** max_guess_length  # how many possible password permutations there are
partitions = math.floor(max_val / 100)
partitions = partitions if partitions < 100000 else 100000  # split into at most 100,000 partitions; too many partitions caused issues, memory I think
exploder = make_exploder(partitions, max_val)  # sort of the opposite of a reduce. make_exploder(10, 100)(3) => [30, 31, ..., 39]. Expands the problem back into the full problem space.
print("max val: %s, partitions: %s" % (max_val, partitions))

search = db.from_sequence(range(partitions), npartitions=partitions).map(exploder).flatten().filter(lambda i: i <= max_val).map(guess).filter(is_pw)
search.take(1, npartitions=-1)
I found that 'easy' runs fine locally and 'mid-hard' runs well on our 6-to-8 x m4.2xlarge AWS cluster, but so far 'hard' has not worked.
This depends a lot on how you put your elements into the bag. If each element is in its own partition then yes, this will certainly kill everything; 1e12 partitions is very expensive. I recommend keeping the number of partitions in the thousands or tens of thousands.
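A minimal sketch of what that could look like, reusing guess(), is_pw(), and max_val from the question's code (n_chunks is an illustrative value, and scan() is a hypothetical helper, not a Dask API): each bag element is a (start, stop) range that one task scans locally, so the bag itself only ever holds a few thousand elements.

import math
import dask.bag as db

n_chunks = 10000                           # thousands of partitions, not 1e12
chunk = math.ceil(max_val / n_chunks)      # candidates covered by each chunk

def scan(bounds):
    # Scan a contiguous slice of the keyspace inside a single task.
    start, stop = bounds
    hits = []
    for n in range(start, min(stop, max_val)):
        g = guess(n)
        if is_pw(g):
            hits.append(g)
    return hits

ranges = [(i * chunk, (i + 1) * chunk) for i in range(n_chunks)]
matches = db.from_sequence(ranges, npartitions=n_chunks).map(scan).flatten()
print(matches.take(1, npartitions=-1))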
If you want early stopping, I recommend not using dask bag but instead using the concurrent.futures interface, in particular the as_completed iterator.
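A hedged sketch of the same search through the futures API, assuming the client and the scan()/ranges helpers from the previous sketch: as_completed yields futures as they finish, so the client can react to the first hit and cancel the rest.

from distributed import as_completed

futures = client.map(scan, ranges)         # one future per chunk
for future in as_completed(futures):
    hits = future.result()
    if hits:
        print('%s is your answer' % hits[0])
        client.cancel(futures)             # drop the remaining queued work
        break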
Dask should be resilient to this as long as you can guarantee that the scheduler survives. If you use the concurrent futures interface rather than dask bag, you can also track intermediate results on the client process.
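One way to do that, sketched under the same assumptions as above (the progress.log file name and resume logic are illustrative, not a Dask feature): checkpoint finished chunk indices on the client, and on restart skip any chunk already recorded.

import os

done = set()
if os.path.exists('progress.log'):         # recover state from a previous run
    with open('progress.log') as f:
        done = {int(line) for line in f}

todo = {client.submit(scan, ranges[i]): i
        for i in range(len(ranges)) if i not in done}
for future in as_completed(todo):
    i = todo[future]
    hits = future.result()
    with open('progress.log', 'a') as f:
        f.write('%d\n' % i)                # checkpoint: chunk i is finished
    if hits:
        print('%s is your answer' % hits[0])
        client.cancel(list(todo))
        break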