7:00 – 3:00 ASRC GOES
- Dissertation
- Usability study! Done!
- Discussion. This is going to take some framing. I want to tie it back to earlier navigation, particularly the transition from stories and mappaemundi to isotropic maps of Ptolemy and Mercator.
- Sent Don and Danilo the SQL file
- Start satellite component list
- Evolver
- Adding threads to handle the GPU. This looks like what I want (from here):
    import logging
    import concurrent.futures
    import threading
    import time

    def thread_function(name):
        logging.info("Task %s: starting on thread %s", name, threading.current_thread().name)
        time.sleep(2)
        logging.info("Task %s: finishing on thread %s", name, threading.current_thread().name)

    if __name__ == "__main__":
        num_tasks = 5
        num_gpus = 1
        format = "%(asctime)s: %(message)s"
        logging.basicConfig(format=format, level=logging.INFO, datefmt="%H:%M:%S")

        with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
            result = executor.map(thread_function, range(num_tasks))

        logging.info("Main : all done")
As you can see, it’s possible to have a thread for each GPU while having the pool iterate over a larger set of tasks. Now I need to extract the GPU name from each worker’s thread name. In other words, ThreadPoolExecutor-0_0 needs to map to gpu:1.
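Roughly, that mapping should just be a regex over the worker’s thread name. A minimal sketch, assuming the default ThreadPoolExecutor-N_M naming (gpu_from_worker_name is just a throwaway helper):

    import re

    # Sketch: map a default worker thread name like "ThreadPoolExecutor-0_0" to a gpu string.
    # The trailing number is the worker index, so worker 0 -> gpu:1, worker 1 -> gpu:2, etc.
    def gpu_from_worker_name(thread_name: str) -> str:
        idx = int(re.search(r'(\d+)(?!.*\d)', thread_name).group(0))  # last number in the string
        return "gpu:{}".format(idx + 1)

    print(gpu_from_worker_name("ThreadPoolExecutor-0_0"))  # gpu:1
    print(gpu_from_worker_name("ThreadPoolExecutor-0_1"))  # gpu:2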
- Ok, this seems to do everything I need, with less cruft:
    import concurrent.futures
    import threading
    import time
    import re
    from typing import Dict

    # matches the last number in a string, e.g. the worker index at the end of "ThreadPoolExecutor-0_1"
    last_num_in_str_re = r'(\d+)(?!.*\d)'
    prog = re.compile(last_num_in_str_re)

    def thread_function(args: Dict):
        num = prog.search(threading.current_thread().name)  # get the last number in the thread name
        gpu_str = "gpu:{}".format(int(num.group(0)) + 1)
        print("{}: starting on {}".format(args["name"], gpu_str))
        time.sleep(args["value"])
        print("{}: finishing on {}, after sleeping {} seconds".format(args["name"], gpu_str, args["value"]))

    if __name__ == "__main__":
        num_tasks = 5
        num_gpus = 2  # one worker thread per gpu
        task_list = []
        for i in range(num_tasks):
            task = {"name": "task_{}".format(i), "value": 2 + (i / 10)}
            task_list.append(task)

        with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
            result = executor.map(thread_function, task_list)

        print("Finished Main")
And that gives me:
    task_0: starting on gpu:1
    task_1: starting on gpu:2
    task_0: finishing on gpu:1, after sleeping 2.0 seconds
    task_2: starting on gpu:1
    task_1: finishing on gpu:2, after sleeping 2.1 seconds
    task_3: starting on gpu:2
    task_2: finishing on gpu:1, after sleeping 2.2 seconds
    task_4: starting on gpu:1
    task_3: finishing on gpu:2, after sleeping 2.3 seconds
    task_4: finishing on gpu:1, after sleeping 2.4 seconds
    Finished Main
So the only thing left is to integrate this into TimeSeriesMl2.
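The integration will probably be a thin wrapper: each worker derives its gpu string from its thread name and passes it down to the training entry point. A rough sketch of the shape (run_experiment and the task fields are placeholders, not actual TimeSeriesMl2 calls):

    import concurrent.futures
    import re
    import threading
    from typing import Dict

    def run_experiment(task: Dict, gpu_str: str):
        # placeholder for the real TimeSeriesMl2 training call; the only requirement is
        # that it accepts the task parameters plus the device string chosen by the worker
        print("{}: running on {}".format(task["name"], gpu_str))

    def worker(task: Dict):
        # same last-number trick as above to turn the worker thread name into a gpu string
        idx = int(re.search(r'(\d+)(?!.*\d)', threading.current_thread().name).group(0))
        run_experiment(task, "gpu:{}".format(idx + 1))

    if __name__ == "__main__":
        tasks = [{"name": "task_{}".format(i)} for i in range(5)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
            executor.map(worker, tasks)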