A Tutorial on Python's concurrent.futures, with a First Cut at Its Source Code

Small Talk

It's been ages since I last blogged, and I figured I couldn't keep slacking off, so I set myself a goal: write something about concurrent.futures. Hence this post, a chat about the concurrent.futures module added in Python 3.2.

Main Text

Asynchronous Processing in Python

Xiao Ming, a Python developer, gets handed this task in the middle of an interview: fetch a handful of websites and collect their data. Easy, he thinks, and rattles off the following code:

import multiprocessing
import time


def request_url(query_url: str):
    time.sleep(3)  # request-handling logic goes here


if __name__ == "__main__":
    url_list = ["abc.com", "xyz.com"]
    task_list = [multiprocessing.Process(target=request_url, args=(url,)) for url in url_list]
    [task.start() for task in task_list]
    [task.join() for task in task_list]

Easy. OK, says the interviewer, new requirement: we want the result of each request. What now? Xiao Ming thinks a bit and comes up with this:

import multiprocessing
import time


def request_url(query_url: str, result_dict: dict):
    time.sleep(3)  # request-handling logic goes here
    result_dict[query_url] = {}  # store the result


if __name__ == "__main__":
    process_manager = multiprocessing.Manager()
    result_dict = process_manager.dict()
    url_list = ["abc.com", "xyz.com"]
    task_list = [multiprocessing.Process(target=request_url, args=(url, result_dict)) for url in url_list]
    [task.start() for task in task_list]
    [task.join() for task in task_list]
    print(result_dict)

Looks decent, says the interviewer, but let me tweak the problem once more: the main process must not block, and it needs to check each task's state (finished or not) and pick up the corresponding result promptly. How do you change the code? Xiao Ming ponders: maybe have every task raise a signal to the parent process when it finishes, and brute-force it from there? Is there a simpler way? Apparently not. The interviewer thinks "naive" to himself, smiles without a word, and tells Xiao Ming to go home and wait for the call.
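
To feel the pain, here is a sketch of what Xiao Ming's hand-rolled answer might look like with bare multiprocessing (the busy-poll loop and the 0.1-second sleep are my own illustrative choices, not anything from the interview):

import multiprocessing
import time


def request_url(query_url: str, result_dict: dict):
    time.sleep(3)  # request-handling logic goes here
    result_dict[query_url] = {}  # store the result


if __name__ == "__main__":
    manager = multiprocessing.Manager()
    result_dict = manager.dict()
    tasks = {url: multiprocessing.Process(target=request_url, args=(url, result_dict))
             for url in ["abc.com", "xyz.com"]}
    for task in tasks.values():
        task.start()

    pending = dict(tasks)
    while pending:  # the main process never blocks on join()...
        for url, task in list(pending.items()):
            if not task.is_alive():  # ...but it has to hand-roll its own "done" check
                task.join()
                print(url, "->", result_dict[url])
                del pending[url]
        time.sleep(0.1)  # busy-waiting: exactly the boilerplate we'd like to avoid

It works, but every piece of bookkeeping (status checks, result plumbing, polling) lands on us.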

Xiao Ming's predicament points at a real problem: multiprocessing and threading, the two modules we reach for most, are a little unfriendly for asynchronous-task scenarios; we usually need extra plumbing before an asynchronous requirement comes out clean. To get us out of this bind, Brian Quinlan proposed PEP 3148 in October 2009, suggesting a further wrapper around the familiar multiprocessing and threading modules so that asynchronous operations would be well supported. The proposal was accepted into Python 3.2, as the concurrent.futures module we are talking about today.

The Future Pattern

Before we formally start on the new module, we need some background on the Future pattern.

First of all, what is the Future pattern?

A Future is really an extension of the producer-consumer model. In the producer-consumer model, the producer doesn't care when the consumer finishes processing the data, nor about the result of that processing. For example, we often write code like this:

import multiprocessing
import os
from time import sleep
from random import randint


class Producer(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            self.queue.put("one product")
            print(multiprocessing.current_process().name + str(os.getpid()) +
                  " produced one product, the no of queue now is: %d" % self.queue.qsize())
            sleep(randint(1, 3))


class Consumer(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            d = self.queue.get(1)
            if d is not None:
                print(multiprocessing.current_process().name + str(os.getpid()) +
                      " consumed %s, the no of queue now is: %d" % (d, self.queue.qsize()))
                sleep(randint(1, 4))
                continue
            else:
                break


# create the queue
queue = multiprocessing.Queue(40)

if __name__ == "__main__":
    print("Excited!")
    # create processes
    processed = []
    for i in range(3):
        processed.append(Producer(queue))
        processed.append(Consumer(queue))

    # start processes
    for i in range(len(processed)):
        processed[i].start()

    # join processes
    for i in range(len(processed)):
        processed[i].join()

That's a simple implementation of the producer-consumer model: a Queue from multiprocessing serves as the communication channel, our producer puts data onto the queue, and the consumer takes data off the queue and processes it. But, as said above, in this pattern the producer neither learns when the consumer is done nor sees the result. With a Future, on the other hand, the producer can wait for the processing to finish and, if needed, fetch the result of the computation.

For example, take a look at the following piece of Java code:

package concurrent;

import java.util.concurrent.Callable;

public class DataProcessThread implements Callable<String> {

    @Override
    public String call() throws Exception {
        Thread.sleep(10000); // simulate data processing
        return "data ready";
    }

}

This is the code in charge of processing the data.

package concurrent;

import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class MainThread {

    public static void main(String[] args) throws InterruptedException,
            ExecutionException {
        DataProcessThread dataProcessThread = new DataProcessThread();
        FutureTask<String> future = new FutureTask<String>(dataProcessThread);

        ExecutorService executor = Executors.newFixedThreadPool(1);
        executor.submit(future);

        Thread.sleep(10000); // simulate doing other work of our own meanwhile
        while (true) {
            if (future.isDone()) {
                System.out.println(future.get());
                break;
            }
        }
        executor.shutdown();
    }

}

And this is our main thread. As you can see, we can very conveniently check the state of the data-processing task, and fetch its result as well.

concurrent.futures in Python

As mentioned above, from Python 3.2 onward concurrent.futures ships in the standard library, so we can use it directly.

Note: if you need concurrent.futures on Python 2.7, install the backport with pip: pip install futures

Alright, with everything in place, let's see how to use the thing:

from concurrent.futures import ProcessPoolExecutor
import time


def return_future_result(message):
    time.sleep(2)
    return message


if __name__ == "__main__":
    pool = ProcessPoolExecutor(max_workers=2)  # create a pool with at most 2 worker processes
    future1 = pool.submit(return_future_result, ("hello"))  # submit a task to the pool
    future2 = pool.submit(return_future_result, ("world"))  # submit another task to the pool
    print(future1.done())  # check whether task 1 has finished
    time.sleep(3)
    print(future2.done())  # check whether task 2 has finished
    print(future1.result())  # fetch the result returned by task 1
    print(future2.result())  # fetch the result returned by task 2

First, from concurrent.futures import ProcessPoolExecutor imports ProcessPoolExecutor from concurrent.futures; this process pool will handle the work that follows. (concurrent.futures provides two kinds of Executor: the ProcessPoolExecutor we use here and a ThreadPoolExecutor. They expose exactly the same methods, so pick whichever fits your actual needs.)
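
To show how interchangeable they are, here is the same toy example assuming a ThreadPoolExecutor; only the class name changes:

from concurrent.futures import ThreadPoolExecutor
import time


def return_future_result(message):
    time.sleep(2)
    return message


if __name__ == "__main__":
    # The interface is identical to ProcessPoolExecutor: submit/done/result/map.
    with ThreadPoolExecutor(max_workers=2) as pool:
        future = pool.submit(return_future_result, "hello")
        print(future.result())  # blocks until the task finishes, then prints "hello"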

Next we initialize a pool with at most two worker processes, then call the pool's submit method to hand it a task. Now it gets interesting: calling submit gives us back a special object, an instance of the Future class, which represents an operation to be completed in the future. In other words, when submit returns the Future instance, our task may not have finished yet; we can call the Future's done method to query the task's current state, and once it has finished, fetch the return value with result. If, while running our later logic, we want to call a task off for some reason, we can cancel it via the cancel method (which only succeeds as long as the task has not started running).
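
Here is a small sketch of cancel in action. With a single worker, the second task is still sitting in the pending queue and can be cancelled, while the first one, already running, normally cannot (the 0.5-second sleep is an assumption to give the first task time to start):

from concurrent.futures import ProcessPoolExecutor
import time


def slow(x):
    time.sleep(5)
    return x


if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=1) as pool:
        f1 = pool.submit(slow, 1)  # grabs the only worker almost immediately
        f2 = pool.submit(slow, 2)  # queued behind f1, stays PENDING
        time.sleep(0.5)
        print(f1.cancel())     # False: already running, too late to cancel
        print(f2.cancel())     # True: still pending, so cancellation succeeds
        print(f2.cancelled())  # True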

Now a new question: what if we want to submit a whole batch of tasks? concurrent.futures provides the map method for exactly that.

import concurrent.futures
import requests

task_url = [("http://www.baidu.com", 40), ("http://example.com/", 40), ("https://www.github.com/", 40)]


def load_url(params: tuple):
    return requests.get(params[0], timeout=params[1]).text


if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(max_workers=3) as executor:
        for url, data in zip(task_url, executor.map(load_url, task_url)):
            print("%r page is %d bytes" % (url, len(data)))

Yep, the map method offered by the thread/process pools in concurrent.futures is used just like the built-in map function.

Dissecting concurrent.futures

Having seen how to use concurrent.futures, we're naturally curious how it implements the Future pattern, and how it ties a task to its result inside. Let's start from the submit method and take a quick look at how ProcessPoolExecutor is implemented.

First, when a ProcessPoolExecutor is instantiated, its __init__ method initializes some key attributes.

class ProcessPoolExecutor(_base.Executor):
    def __init__(self, max_workers=None):
        """Initializes a new ProcessPoolExecutor instance.

        Args:
            max_workers: The maximum number of processes that can be used to
                execute the given calls. If None or not given then as many
                worker processes will be created as the machine has processors.
        """
        _check_system_limits()

        if max_workers is None:
            self._max_workers = os.cpu_count() or 1
        else:
            if max_workers <= 0:
                raise ValueError("max_workers must be greater than 0")

            self._max_workers = max_workers

        # Make the call queue slightly larger than the number of processes to
        # prevent the worker processes from idling. But don't make it too big
        # because futures in the call queue cannot be cancelled.
        self._call_queue = multiprocessing.Queue(self._max_workers +
                                                 EXTRA_QUEUED_CALLS)
        # Killed worker processes can produce spurious "broken pipe"
        # tracebacks in the queue's own worker thread. But we detect killed
        # processes anyway, so silence the tracebacks.
        self._call_queue._ignore_epipe = True
        self._result_queue = SimpleQueue()
        self._work_ids = queue.Queue()
        self._queue_management_thread = None
        # Map of pids to processes
        self._processes = {}

        # Shutdown is a two-step process.
        self._shutdown_thread = False
        self._shutdown_lock = threading.Lock()
        self._broken = False
        self._queue_count = 0
        self._pending_work_items = {}

Now for today's entry point, the submit method:

def submit(self, fn, *args, **kwargs):
    with self._shutdown_lock:
        if self._broken:
            raise BrokenProcessPool('A child process terminated '
                                    'abruptly, the process pool is not usable anymore')
        if self._shutdown_thread:
            raise RuntimeError('cannot schedule new futures after shutdown')

        f = _base.Future()
        w = _WorkItem(f, fn, args, kwargs)

        self._pending_work_items[self._queue_count] = w
        self._work_ids.put(self._queue_count)
        self._queue_count += 1
        # Wake up queue management thread
        self._result_queue.put(None)

        self._start_queue_management_thread()
        return f

The incoming parameter fn is our processing function, and args and kwargs are the arguments to be passed to fn. At the top of submit, the values of _broken and _shutdown_thread are used to judge the state of the pool's worker processes and of the pool itself. If a worker process died abruptly, or the pool has already been shut down, an exception is raised to announce that no new submit calls are accepted.
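
The _shutdown_thread branch is easy to see from the outside; a minimal sketch:

from concurrent.futures import ProcessPoolExecutor

if __name__ == "__main__":
    pool = ProcessPoolExecutor(max_workers=1)
    pool.shutdown()
    try:
        pool.submit(print, "too late")  # the pool refuses new work now
    except RuntimeError as e:
        print(e)  # cannot schedule new futures after shutdown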

If those checks pass, the Future class is instantiated first; that instance, together with the processing function and its arguments, is used to construct a _WorkItem, and the instance w is stored into _pending_work_items with _queue_count as the key. Then _start_queue_management_thread is called to start the pool's management thread. Let's look at that code now.
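
For reference, _WorkItem itself is just a thin container tying the Future to the function and its arguments, essentially as in the CPython source of this era (comments mine):

class _WorkItem(object):
    def __init__(self, future, fn, args, kwargs):
        self.future = future  # the Future that submit() hands back to the caller
        self.fn = fn          # the user's processing function
        self.args = args
        self.kwargs = kwargs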

def _start_queue_management_thread(self):
    # When the executor gets lost, the weakref callback will wake up
    # the queue management thread.
    def weakref_cb(_, q=self._result_queue):
        q.put(None)

    if self._queue_management_thread is None:
        # Start the processes so that their sentinels are known.
        self._adjust_process_count()
        self._queue_management_thread = threading.Thread(
                target=_queue_management_worker,
                args=(weakref.ref(self, weakref_cb),
                      self._processes,
                      self._pending_work_items,
                      self._work_ids,
                      self._call_queue,
                      self._result_queue))
        self._queue_management_thread.daemon = True
        self._queue_management_thread.start()
        _threads_queues[self._queue_management_thread] = self._result_queue

This part is simple: first run _adjust_process_count, then start a daemon thread that runs _queue_management_worker. Let's check _adjust_process_count first.

def _adjust_process_count(self):
    for _ in range(len(self._processes), self._max_workers):
        p = multiprocessing.Process(
                target=_process_worker,
                args=(self._call_queue,
                      self._result_queue))
        p.start()
        self._processes[p.pid] = p

It spawns as many processes as the _max_workers set in __init__, each of them running the _process_worker function.

So, following the vine, let's look at _process_worker first.

def _process_worker(call_queue, result_queue):
    """Evaluates calls from call_queue and places the results in result_queue.

    This worker is run in a separate process.

    Args:
        call_queue: A multiprocessing.Queue of _CallItems that will be read and
            evaluated by the worker.
        result_queue: A multiprocessing.Queue of _ResultItems that will be
            written to by the worker.
        shutdown: A multiprocessing.Event that will be set as a signal to the
            worker that it should exit when call_queue is empty.
    """
    while True:
        call_item = call_queue.get(block=True)
        if call_item is None:
            # Wake up queue management thread
            result_queue.put(os.getpid())
            return
        try:
            r = call_item.fn(*call_item.args, **call_item.kwargs)
        except BaseException as e:
            exc = _ExceptionWithTraceback(e, e.__traceback__)
            result_queue.put(_ResultItem(call_item.work_id, exception=exc))
        else:
            result_queue.put(_ResultItem(call_item.work_id,
                                         result=r))

First off, there's an infinite loop. In it, the worker blocks on call_queue to get a _CallItem; if the value it gets is None, that proves no new tasks are coming, so it puts its own pid onto the result queue and exits the process.

If it did receive a task, it executes it. Whether the execution raises an exception or produces a final result, the outcome is wrapped into a _ResultItem and put onto the result queue.
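
The two message types that cross the queues are equally thin containers, essentially as defined in the CPython source (comments mine):

class _CallItem(object):
    def __init__(self, work_id, fn, args, kwargs):
        self.work_id = work_id  # lets the pool match the call back to its _WorkItem
        self.fn = fn
        self.args = args
        self.kwargs = kwargs


class _ResultItem(object):
    def __init__(self, work_id, exception=None, result=None):
        self.work_id = work_id
        self.exception = exception  # set if the call raised
        self.result = result        # set if the call succeeded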

Good. Now back to the _start_queue_management_thread function we left half-read.


Once _adjust_process_count has run, the pool's _processes attribute (a dict) holds the worker processes. We then start a background daemon thread running _queue_management_worker and pass it a few things: _processes is our pid-to-process map, _pending_work_items holds the tasks waiting to be processed, plus _call_queue and _result_queue. One argument may puzzle you, though: this weakref.ref(self, weakref_cb) thing.

Python is a garbage-collected language. Having GC (Garbage Collection) means that most of the time we don't need to worry about allocating and reclaiming memory. In Python, when an object gets reclaimed is decided by its reference count: when the count reaches 0, the object is collected. In some situations, because of cyclic references or other reasons, the count never drops to 0, the object can never be reclaimed, and we get a memory leak. So, distinct from ordinary references, Python also offers a mechanism called weak references: a weak reference lets a variable hold an object without increasing that object's reference count. For most purposes, then, weakref.ref(self, weakref_cb) is equivalent to self. (Why a weak reference is needed here is something we'll leave for a dedicated post.)
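A minimal demonstration of the difference (the Pool class here is just a stand-in):

import weakref


class Pool:
    pass


obj = Pool()
ref = weakref.ref(obj)  # does NOT bump obj's reference count
print(ref() is obj)     # True: calling the weakref returns the referent
del obj                 # the last strong reference is gone
print(ref())            # None: the object has been collected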

With that covered, let's see how _queue_management_worker is implemented:

def _queue_management_worker(executor_reference,
                             processes,
                             pending_work_items,
                             work_ids_queue,
                             call_queue,
                             result_queue):
    """Manages the communication between this process and the worker processes.

    This function is run in a local thread.

    Args:
        executor_reference: A weakref.ref to the ProcessPoolExecutor that owns
            this thread. Used to determine if the ProcessPoolExecutor has been
            garbage collected and that this function can exit.
        process: A list of the multiprocessing.Process instances used as
            workers.
        pending_work_items: A dict mapping work ids to _WorkItems e.g.
            {5: <_WorkItem...>, 6: <_WorkItem...>, ...}
        work_ids_queue: A queue.Queue of work ids e.g. Queue([5, 6, ...]).
        call_queue: A multiprocessing.Queue that will be filled with _CallItems
            derived from _WorkItems for processing by the process workers.
        result_queue: A multiprocessing.Queue of _ResultItems generated by the
            process workers.
    """
    executor = None

    def shutting_down():
        return _shutdown or executor is None or executor._shutdown_thread

    def shutdown_worker():
        # This is an upper bound
        nb_children_alive = sum(p.is_alive() for p in processes.values())
        for i in range(0, nb_children_alive):
            call_queue.put_nowait(None)
        # Release the queue's resources as soon as possible.
        call_queue.close()
        # If .join() is not called on the created processes then
        # some multiprocessing.Queue methods may deadlock on Mac OS X.
        for p in processes.values():
            p.join()

    reader = result_queue._reader

    while True:
        _add_call_item_to_queue(pending_work_items,
                                work_ids_queue,
                                call_queue)

        sentinels = [p.sentinel for p in processes.values()]
        assert sentinels
        ready = wait([reader] + sentinels)
        if reader in ready:
            result_item = reader.recv()
        else:
            # Mark the process pool broken so that submits fail right now.
            executor = executor_reference()
            if executor is not None:
                executor._broken = True
                executor._shutdown_thread = True
                executor = None
            # All futures in flight must be marked failed
            for work_id, work_item in pending_work_items.items():
                work_item.future.set_exception(
                    BrokenProcessPool(
                        "A process in the process pool was "
                        "terminated abruptly while the future was "
                        "running or pending."
                    ))
                # Delete references to object. See issue16284
                del work_item
            pending_work_items.clear()
            # Terminate remaining workers forcibly: the queues or their
            # locks may be in a dirty state and block forever.
            for p in processes.values():
                p.terminate()
            shutdown_worker()
            return
        if isinstance(result_item, int):
            # Clean shutdown of a worker using its PID
            # (avoids marking the executor broken)
            assert shutting_down()
            p = processes.pop(result_item)
            p.join()
            if not processes:
                shutdown_worker()
                return
        elif result_item is not None:
            work_item = pending_work_items.pop(result_item.work_id, None)
            # work_item can be None if another process terminated (see above)
            if work_item is not None:
                if result_item.exception:
                    work_item.future.set_exception(result_item.exception)
                else:
                    work_item.future.set_result(result_item.result)
                # Delete references to object. See issue16284
                del work_item
        # Check whether we should start shutting down.
        executor = executor_reference()
        # No more work items can be added if:
        #   - The interpreter is shutting down OR
        #   - The executor that owns this worker has been collected OR
        #   - The executor that owns this worker has been shutdown.
        if shutting_down():
            try:
                # Since no new work items can be added, it is safe to shutdown
                # this thread if there are no pending work items.
                if not pending_work_items:
                    shutdown_worker()
                    return
            except Full:
                # This is not a problem: we will eventually be woken up (in
                # result_queue.get()) and be able to send a sentinel again.
                pass
        executor = None

The familiar big loop again. Step one of each iteration uses the _add_call_item_to_queue function to move tasks from the pending collection into the call queue. Let's look at that part first.

def _add_call_item_to_queue(pending_work_items,
                            work_ids,
                            call_queue):
    """Fills call_queue with _WorkItems from pending_work_items.

    This function never blocks.

    Args:
        pending_work_items: A dict mapping work ids to _WorkItems e.g.
            {5: <_WorkItem...>, 6: <_WorkItem...>, ...}
        work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids
            are consumed and the corresponding _WorkItems from
            pending_work_items are transformed into _CallItems and put in
            call_queue.
        call_queue: A multiprocessing.Queue that will be filled with _CallItems
            derived from _WorkItems.
    """
    while True:
        if call_queue.full():
            return
        try:
            work_id = work_ids.get(block=False)
        except queue.Empty:
            return
        else:
            work_item = pending_work_items[work_id]

            if work_item.future.set_running_or_notify_cancel():
                call_queue.put(_CallItem(work_id,
                                         work_item.fn,
                                         work_item.args,
                                         work_item.kwargs),
                               block=True)
            else:
                del pending_work_items[work_id]
                continue

First it checks whether the call queue is already full; if so, it gives up on this round. Otherwise it takes a work_id off the queue, then fetches the corresponding _WorkItem from the pending items. Next it calls set_running_or_notify_cancel on the Future bound to that instance to set the task's state, and then drops a _CallItem onto the call queue.

def set_running_or_notify_cancel(self):
    """Mark the future as running or process any cancel notifications.

    Should only be used by Executor implementations and unit tests.

    If the future has been cancelled (cancel() was called and returned
    True) then any threads waiting on the future completing (though calls
    to as_completed() or wait()) are notified and False is returned.

    If the future was not cancelled then it is put in the running state
    (future calls to running() will return True) and True is returned.

    This method should be called by Executor implementations before
    executing the work associated with this future. If this method returns
    False then the work should not be executed.

    Returns:
        False if the Future was cancelled, True otherwise.

    Raises:
        RuntimeError: if this method was already called or if set_result()
            or set_exception() was called.
    """
    with self._condition:
        if self._state == CANCELLED:
            self._state = CANCELLED_AND_NOTIFIED
            for waiter in self._waiters:
                waiter.add_cancelled(self)
            # self._condition.notify_all() is not necessary because
            # self.cancel() triggers a notification.
            return False
        elif self._state == PENDING:
            self._state = RUNNING
            return True
        else:
            LOGGER.critical('Future %s in unexpected state: %s',
                            id(self),
                            self._state)
            raise RuntimeError('Future in unexpected state')

This part is very simple: if the instance is still pending, it is marked running and True is returned; if it has been cancelled, False is returned, and back in _add_call_item_to_queue the cancelled _WorkItem is removed from the pending items.
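
You can poke at this little state machine directly on a bare Future, which is exactly what the docstring warns only Executor implementations should do:

from concurrent.futures import Future

f = Future()  # a freshly created future is PENDING
print(f.set_running_or_notify_cancel())  # True: PENDING -> RUNNING

g = Future()
print(g.cancel())  # True: a pending future can be cancelled
print(g.set_running_or_notify_cancel())  # False: it was cancelled, don't run it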

OK, let's head back into _queue_management_worker once more.

First, two lines in its main loop probably raise some questions:

sentinels = [p.sentinel for p in processes.values()]
assert sentinels
ready = wait([reader] + sentinels)

What on earth is this wait, and what is reader? One step at a time. Earlier, reader = result_queue._reader may also have raised an eyebrow: our result_queue is the SimpleQueue from multiprocessing, and surely that has no _reader attribute?

class SimpleQueue(object):
    def __init__(self, *, ctx):
        self._reader, self._writer = connection.Pipe(duplex=False)
        self._rlock = ctx.Lock()
        self._poll = self._reader.poll
        if sys.platform == 'win32':
            self._wlock = None
        else:
            self._wlock = ctx.Lock()

The excerpt above is part of SimpleQueue's code. You can see very clearly that a SimpleQueue essentially uses a Pipe for inter-process communication, and _reader is the connection object for reading from that Pipe.

Note: this would be a good moment to review the other ways of doing inter-process communication.
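
Since SimpleQueue boils down to a Pipe, here is a minimal refresher on using one directly. With duplex=False, the first connection returned is the read end, exactly like the _reader above:

from multiprocessing import Pipe, Process


def child(conn):
    conn.send("hello from the write end")
    conn.close()


if __name__ == "__main__":
    reader, writer = Pipe(duplex=False)  # one-way pipe: reader <- writer
    p = Process(target=child, args=(writer,))
    p.start()
    print(reader.recv())  # blocks until the child writes
    p.join()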

With that understood, let's look at the wait method.

def wait(object_list, timeout=None):
    '''
    Wait till an object in object_list is ready/readable.

    Returns list of those objects in object_list which are ready/readable.
    '''
    with _WaitSelector() as selector:
        for obj in object_list:
            selector.register(obj, selectors.EVENT_READ)

        if timeout is not None:
            deadline = time.time() + timeout

        while True:
            ready = selector.select(timeout)
            if ready:
                return [key.fileobj for (key, events) in ready]
            else:
                if timeout is not None:
                    timeout = deadline - time.time()
                    if timeout < 0:
                        return ready

This code is simple: it first registers every object we want to read from, and then, when timeout is None, simply waits until at least one object can be read, returning the list of ready objects.
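
The wait we are reading is multiprocessing.connection.wait. A quick sketch of its behavior, blocking until at least one registered object is readable and returning exactly those:

from multiprocessing import Pipe
from multiprocessing.connection import wait

r1, w1 = Pipe(duplex=False)
r2, w2 = Pipe(duplex=False)

w2.send("data on the second pipe")
ready = wait([r1, r2], timeout=1)  # r2 becomes readable, r1 stays silent
print(r1 in ready, r2 in ready)    # False True
print(ready[0].recv())             # data on the second pipe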

Right, back once more to the _queue_management_worker function, and this piece of code:

ready = wait([reader] + sentinels)
if reader in ready:
    result_item = reader.recv()
else:
    # Mark the process pool broken so that submits fail right now.
    executor = executor_reference()
    if executor is not None:
        executor._broken = True
        executor._shutdown_thread = True
        executor = None
    # All futures in flight must be marked failed
    for work_id, work_item in pending_work_items.items():
        work_item.future.set_exception(
            BrokenProcessPool(
                "A process in the process pool was "
                "terminated abruptly while the future was "
                "running or pending."
            ))
        # Delete references to object. See issue16284
        del work_item
    pending_work_items.clear()
    # Terminate remaining workers forcibly: the queues or their
    # locks may be in a dirty state and block forever.
    for p in processes.values():
        p.terminate()
    shutdown_worker()
    return

We call wait to watch a collection of objects. Since no timeout is set, by the time we get back the list of readable objects: if result_queue._reader is not in that list, it means some worker process was closed abruptly, and the statements that follow carry out the pool's shutdown procedure. If it is in the list, we read from it and obtain the result_item variable.
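
You can trigger this broken-pool branch yourself by killing a worker mid-task. A Unix-only sketch (signal.SIGKILL does not exist on Windows):

import os
import signal
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool


def die():
    os.kill(os.getpid(), signal.SIGKILL)  # the worker vanishes abruptly


if __name__ == "__main__":
    pool = ProcessPoolExecutor(max_workers=1)
    future = pool.submit(die)
    try:
        future.result()
    except BrokenProcessPool as e:
        print("pool marked broken:", e)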

Next, look at the following code:

if isinstance(result_item, int):
    # Clean shutdown of a worker using its PID
    # (avoids marking the executor broken)
    assert shutting_down()
    p = processes.pop(result_item)
    p.join()
    if not processes:
        shutdown_worker()
        return
elif result_item is not None:
    work_item = pending_work_items.pop(result_item.work_id, None)
    # work_item can be None if another process terminated (see above)
    if work_item is not None:
        if result_item.exception:
            work_item.future.set_exception(result_item.exception)
        else:
            work_item.future.set_result(result_item.result)
        # Delete references to object. See issue16284
        del work_item

First, what if result_item is an int? You may remember this bit of logic in _process_worker:

call_item = call_queue.get(block=True)
if call_item is None:
    # Wake up queue management thread
    result_queue.put(os.getpid())
    return

When a worker pulls the None sentinel off the call queue (it is sent during shutdown, once no new tasks are coming), it puts its own pid into result_queue. So an int-valued result_item means one of our workers has finished its processing work cleanly; the management thread joins it and, once no workers are left, cleans up and closes the pool.

If result_item is neither an int nor None, it can only be a _ResultItem instance. We pop the matching _WorkItem by its work_id and bind the exception or the value it carries to the _WorkItem's Future instance (yes, the very object we got back from submit).
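
The instant set_result (or set_exception) runs in the management thread, anyone blocked in result() wakes up and any registered callbacks fire. A small sketch:

from concurrent.futures import ProcessPoolExecutor


def square(x):
    return x * x


def on_done(future):
    # Runs once the management thread has bound the result to the future.
    print("result bound:", future.result())


if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=1) as pool:
        f = pool.submit(square, 4)
        f.add_done_callback(on_done)  # eventually prints "result bound: 16"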

Finally, that work_item is deleted. Done, and that's a wrap.

Closing Words

So there it is, a long and rambling write-up; I hope you don't mind. What we can see is that the implementation of concurrent.futures involves no deep black magic, yet its details repay careful tasting, so let's stop here for now. If there's a chance later, we'll go look at the remaining parts of the concurrent.futures code; there's plenty worth savoring there too.

References

1. Python 3 multiprocessing

2. Python 3 weakref

3. The Future pattern in concurrent programming (並發編程之Future模式, in Chinese)

4. Thread pools and process pools in Python concurrent programming (Python並發編程之線程池/進程池, in Chinese)

5. The Future pattern in detail (Future 模式詳解(並發使用), in Chinese)

