Rosetta 2021.16
#include <JobExtractor.hh>

Public Types | |
| typedef std::list< core::Size > | SizeList |
| typedef std::map< core::Size, LarvalJobOP > | JobMap |
| typedef utility::pointer::shared_ptr < JobMap > | JobMapOP |
| typedef std::set< core::Size > | JobSet |
| typedef utility::pointer::shared_ptr < JobSet > | JobSetOP |
| typedef std::map< core::Size, JobSetOP > | OutstandingJobsForDigraphNodeMap |
| typedef std::map< core::Size, core::Size > | DigraphNodeForJobMap |
| typedef std::map< core::Size, core::Size > | WorkerNodeForJobMap |
Public Member Functions | |
| JobExtractor () | |
| ~JobExtractor () override | |
| void | set_job_queen (JobQueenOP queen) |
| dummy for master/slave version More... | |
| void | set_maximum_jobs_to_hold_in_memory (core::Size max_njobs_at_once) |
| JobDigraphOP | get_initial_job_dag_and_queue () |
| bool | job_queue_empty () const |
| LarvalJobOP | pop_job_from_queue () |
| void | push_job_to_front_of_queue (LarvalJobOP job) |
| The JD is allowed to pull jobs out of the queue and then to reinsert them back into the queue, e.g., as it might do when it has recovered from a checkpoint and needs to re-launch jobs. More... | |
| void | note_job_no_longer_running (core::Size job_id) |
| bool | retrieve_and_reset_node_recently_completed () |
| Did we just declare a node complete? Returns true if so, and sets the internal tracking variable to false. More... | |
| bool | not_done () |
| Should the JobDistributor keep going based on there being jobs in the job queue, or outstanding jobs that have not completed, or nodes that have not yet been marked as completed, or the JobQueen providing new nodes in the JobDAG. More... | |
| bool | jobs_remain () |
| Are there any jobs that have not yet been executed, but perhaps are not ready to be submitted because the JobDirectedNode they belong to is downstream of another Node whose jobs are still running? The JobQueen would not have told the JobDistributor about these jobs yet. Perhaps the JobQueen has not even told the JobDistributor about the JobDirectedNodes yet. Basically, we must say "yes" as long as there are jobs that have not yet completed unless we've emptied the digraph_nodes_ready_to_be_run_ queue and then asked the JobQueen to update the job DAG, and she has declined to add any new nodes. More... | |
| LarvalJobOP | running_job (core::Size job_index) const |
| bool | complete () const |
Private Member Functions | |
| void | query_job_queen_for_more_jobs_for_current_node () |
| void | mark_node_as_complete (core::Size digraph_node) |
| void | find_jobs_for_next_node () |
| void | queue_initial_digraph_nodes_and_jobs () |
Private Attributes | |
| JobQueenOP | job_queen_ |
| JobDigraphOP | job_dag_ |
| SizeList | digraph_nodes_ready_to_be_run_ |
| SizeList | worker_nodes_waiting_for_jobs_ |
| JobDAGNodeID | current_digraph_node_ |
| The digraph node for which we are currently assigning new jobs – it is possible for multiple digraph nodes to have their jobs running concurrently. More... | |
| LarvalJobs | jobs_for_current_digraph_node_ |
| JobMap | running_jobs_ |
| DigraphNodeForJobMap | digraph_node_for_job_ |
| OutstandingJobsForDigraphNodeMap | jobs_running_for_digraph_nodes_ |
| bool | first_call_to_determine_job_list_ |
| bool | node_recently_completed_ |
| numeric::DiscreteIntervalEncodingTree < core::Size > | job_indices_seen_ |
| bool | complete_ |
| core::Size | maximum_jobs_to_hold_in_memory_ |
Member Typedef Documentation | |
| typedef std::map< core::Size, core::Size > protocols::jd3::job_distributors::JobExtractor::DigraphNodeForJobMap |
| typedef std::map< core::Size, LarvalJobOP > protocols::jd3::job_distributors::JobExtractor::JobMap |
| typedef utility::pointer::shared_ptr< JobMap > protocols::jd3::job_distributors::JobExtractor::JobMapOP |
| typedef std::set< core::Size > protocols::jd3::job_distributors::JobExtractor::JobSet |
| typedef utility::pointer::shared_ptr< JobSet > protocols::jd3::job_distributors::JobExtractor::JobSetOP |
| typedef std::map< core::Size, JobSetOP > protocols::jd3::job_distributors::JobExtractor::OutstandingJobsForDigraphNodeMap |
| typedef std::list< core::Size > protocols::jd3::job_distributors::JobExtractor::SizeList |
| typedef std::map< core::Size, core::Size > protocols::jd3::job_distributors::JobExtractor::WorkerNodeForJobMap |
Constructor & Destructor Documentation | |
| protocols::jd3::job_distributors::JobExtractor::JobExtractor | ( | ) |
| protocols::jd3::job_distributors::JobExtractor::~JobExtractor | ( | ) | override default |
Member Function Documentation | |
| void protocols::jd3::job_distributors::JobExtractor::find_jobs_for_next_node | ( | ) | private |
We need to find the next set of jobs to run, so we look at the nodes in the digraph_nodes_ready_to_be_run_ queue. Pop one of the nodes off and ask the JobQueen whether there are any jobs for this node. It is entirely possible that the JobQueen will return an empty list of jobs for this node (which we detect by looking at the current_digraph_node_ index), in which case we need to mark the node as complete, which could in turn repopulate the digraph_nodes_ready_to_be_run_ queue.
The query_job_queen_for_more_jobs_for_current_node function may itself call find_jobs_for_next_node; infinite recursion is avoided by two facts: 1) if the digraph_nodes_ready_to_be_run_ queue is not empty, we decrease its size by one by popping an element off of it, and 2) each node is put into the digraph_nodes_ready_to_be_run_ queue only a single time.
References current_digraph_node_, digraph_nodes_ready_to_be_run_, jobs_for_current_digraph_node_, mark_node_as_complete(), query_job_queen_for_more_jobs_for_current_node(), and protocols::jd3::job_distributors::TR().
Referenced by mark_node_as_complete(), not_done(), query_job_queen_for_more_jobs_for_current_node(), and queue_initial_digraph_nodes_and_jobs().
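The recursion-avoidance argument above can be sketched with simplified stand-in types. Everything here (`Extractor`, `NodeJobs`) is hypothetical scaffolding, not the Rosetta API; a `std::map` plays the role of the JobQueen handing out jobs per digraph node:

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <map>
#include <vector>

// Hypothetical stand-in: maps each digraph node to the jobs the "queen"
// would hand out for it; an empty vector models a node with no jobs.
using NodeJobs = std::map<std::size_t, std::vector<int>>;

struct Extractor {
    std::list<std::size_t> nodes_ready_;  // digraph_nodes_ready_to_be_run_
    std::list<int> job_queue_;            // jobs_for_current_digraph_node_
    NodeJobs const * queen_ = nullptr;

    // Mirrors the shape of find_jobs_for_next_node(): pop a ready node and
    // ask for its jobs; if the node yields none, treat it as complete and
    // recurse to try the next node. Termination: each call pops one element
    // off nodes_ready_, and no node is ever re-inserted, so the recursion
    // depth is bounded by the queue size.
    void find_jobs_for_next_node() {
        if (nodes_ready_.empty()) return;
        std::size_t node = nodes_ready_.front();
        nodes_ready_.pop_front();
        auto it = queen_->find(node);
        if (it == queen_->end() || it->second.empty()) {
            // "mark_node_as_complete" for an empty node, then keep looking.
            find_jobs_for_next_node();
            return;
        }
        for (int j : it->second) job_queue_.push_back(j);
    }
};
```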
| JobDigraphOP protocols::jd3::job_distributors::JobExtractor::get_initial_job_dag_and_queue | ( | ) |
References job_dag_, job_queen_, and queue_initial_digraph_nodes_and_jobs().
| bool protocols::jd3::job_distributors::JobExtractor::job_queue_empty | ( | ) | const |
References jobs_for_current_digraph_node_.
| bool protocols::jd3::job_distributors::JobExtractor::jobs_remain | ( | ) |
Are there any jobs that have not yet been executed, but perhaps are not ready to be submitted because the JobDirectedNode they belong to is downstream of another Node whose jobs are still running? The JobQueen would not have told the JobDistributor about these jobs yet. Perhaps the JobQueen has not even told the JobDistributor about the JobDirectedNodes yet. Basically, we must say "yes" as long as there are jobs that have not yet completed unless we've emptied the digraph_nodes_ready_to_be_run_ queue and then asked the JobQueen to update the job DAG, and she has declined to add any new nodes.
This function relies on the not_done() function to have asked the JobQueen to update the JobDigraph, and then to check the jobs_running_for_digraph_nodes_ map to see if it's still empty.
References complete_.
| void protocols::jd3::job_distributors::JobExtractor::mark_node_as_complete | ( | core::Size | digraph_node | ) | private |
Once the JobQueen has informed the JobExtractor that no more jobs remain for a particular node, then we are in a position where we need to check the Job DAG to see if there were nodes waiting for this particular node to complete. So we iterate across all of the edges leaving the completed node, and for each node downstream, we look at all of its upstream parents. If each of the upstream parents has completed (all of its jobs have completed), then the node is ready to be queued.
References protocols::jd3::JobDirectedNode::all_jobs_completed(), digraph_nodes_ready_to_be_run_, find_jobs_for_next_node(), job_dag_, jobs_running_for_digraph_nodes_, node_recently_completed_, and protocols::jd3::job_distributors::TR().
Referenced by find_jobs_for_next_node(), and note_job_no_longer_running().
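The readiness check described above, in which a downstream node is queued only once every one of its upstream parents has finished, can be sketched over a plain adjacency-list digraph. `MiniDigraph` is a stand-in, not the Rosetta JobDigraph API:

```cpp
#include <cassert>
#include <list>
#include <map>
#include <set>
#include <vector>

struct MiniDigraph {
    // edges[u] = nodes downstream of u; parents[v] = nodes upstream of v.
    std::map<int, std::vector<int>> edges, parents;
    std::set<int> completed;  // nodes whose jobs have all finished
};

// Mirrors mark_node_as_complete(): walk the edges leaving the completed
// node, and report each downstream node whose upstream parents are all done.
std::list<int> nodes_ready_after_completing(MiniDigraph & g, int node) {
    g.completed.insert(node);
    std::list<int> ready;
    for (int down : g.edges[node]) {
        bool all_parents_done = true;
        for (int up : g.parents[down]) {
            if (!g.completed.count(up)) { all_parents_done = false; break; }
        }
        if (all_parents_done) ready.push_back(down);
    }
    return ready;
}
```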
| bool protocols::jd3::job_distributors::JobExtractor::not_done | ( | ) |
Should the JobDistributor keep going based on there being jobs in the job queue, or outstanding jobs that have not completed, or nodes that have not yet been marked as completed, or the JobQueen providing new nodes in the JobDAG.
References complete_, digraph_nodes_ready_to_be_run_, find_jobs_for_next_node(), job_dag_, job_queen_, jobs_for_current_digraph_node_, protocols::jd3::JobDigraphUpdater::orig_num_nodes(), and running_jobs_.
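Taken together, not_done(), job_queue_empty(), and pop_job_from_queue() suggest the shape of a JobDistributor driver loop. The sketch below uses a toy mock with the same contract, not the real JobExtractor; jobs "finish" immediately, where a real distributor would launch them and report completions asynchronously:

```cpp
#include <cassert>
#include <list>

// Minimal mock with the same contract as JobExtractor: not_done() stays
// true while jobs remain queued or running. Names mirror the real class
// but the implementation is a toy.
struct MockExtractor {
    std::list<int> queue_;
    std::list<int> running_;
    bool job_queue_empty() const { return queue_.empty(); }
    bool not_done() const { return !queue_.empty() || !running_.empty(); }
    int pop_job_from_queue() {
        int j = queue_.front();
        queue_.pop_front();
        running_.push_back(j);
        return j;
    }
    void note_job_no_longer_running(int j) { running_.remove(j); }
};

// The driver-loop shape: keep going while the extractor says so, launch
// queued jobs, and report each completion back to the extractor.
int run_all(MockExtractor & ex) {
    int launched = 0;
    while (ex.not_done()) {
        if (!ex.job_queue_empty()) {
            int job = ex.pop_job_from_queue();
            ++launched;                          // "launch" the job
            ex.note_job_no_longer_running(job);  // it finishes immediately
        }
    }
    return launched;
}
```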
| void protocols::jd3::job_distributors::JobExtractor::note_job_no_longer_running | ( | core::Size | job_id | ) |
| LarvalJobOP protocols::jd3::job_distributors::JobExtractor::pop_job_from_queue | ( | ) |
| void protocols::jd3::job_distributors::JobExtractor::push_job_to_front_of_queue | ( | LarvalJobOP | job | ) |
The JD is allowed to pull jobs out of the queue and then to reinsert them back into the queue, e.g., as it might do when it has recovered from a checkpoint and needs to re-launch jobs.
References jobs_for_current_digraph_node_.
| void protocols::jd3::job_distributors::JobExtractor::query_job_queen_for_more_jobs_for_current_node | ( | ) | private |
We have run out of jobs for the digraph node indicated; ask the JobQueen for more jobs, and if she doesn't give us any, then consider the node in the digraph exhausted.
References current_digraph_node_, find_jobs_for_next_node(), first_call_to_determine_job_list_, job_dag_, job_indices_seen_, job_queen_, jobs_for_current_digraph_node_, maximum_jobs_to_hold_in_memory_, core::id::to_string(), and protocols::jd3::job_distributors::TR().
Referenced by find_jobs_for_next_node(), and pop_job_from_queue().
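The interplay of maximum_jobs_to_hold_in_memory_ and job_indices_seen_ can be sketched as a capped request: accept at most the remaining capacity, and record each new job index so a duplicate handed back by the queen is detected. This is a simplified stand-in (a `std::set` replaces the DiscreteIntervalEncodingTree, and `CappedRequester` is hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <set>
#include <vector>

struct CappedRequester {
    std::size_t max_in_memory_ = 4;  // maximum_jobs_to_hold_in_memory_
    std::list<int> held_jobs_;       // jobs waiting in memory
    std::set<int> indices_seen_;     // stand-in for job_indices_seen_

    // Accept at most the remaining capacity, skipping any index we have
    // already seen (a duplicate from the queen would be a bookkeeping bug).
    std::size_t accept_jobs(std::vector<int> const & offered) {
        std::size_t room = max_in_memory_ > held_jobs_.size()
            ? max_in_memory_ - held_jobs_.size() : 0;
        std::size_t taken = 0;
        for (int idx : offered) {
            if (taken == room) break;
            if (!indices_seen_.insert(idx).second) continue;  // duplicate
            held_jobs_.push_back(idx);
            ++taken;
        }
        return taken;
    }
};
```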
| void protocols::jd3::job_distributors::JobExtractor::queue_initial_digraph_nodes_and_jobs | ( | ) | private |
References digraph_nodes_ready_to_be_run_, find_jobs_for_next_node(), and job_dag_.
Referenced by get_initial_job_dag_and_queue().
| bool protocols::jd3::job_distributors::JobExtractor::retrieve_and_reset_node_recently_completed | ( | ) |
Did we just declare a node complete? Returns true if so, and sets the internal tracking variable to false.
References node_recently_completed_.
| LarvalJobOP protocols::jd3::job_distributors::JobExtractor::running_job | ( | core::Size | job_index | ) | const |
References running_jobs_.
| void protocols::jd3::job_distributors::JobExtractor::set_job_queen | ( | JobQueenOP | queen | ) |
dummy for master/slave version
References job_queen_.
| void protocols::jd3::job_distributors::JobExtractor::set_maximum_jobs_to_hold_in_memory | ( | core::Size | max_njobs_at_once | ) |
References maximum_jobs_to_hold_in_memory_.
Member Data Documentation | |
| bool protocols::jd3::job_distributors::JobExtractor::complete_ | private |
Referenced by complete(), jobs_remain(), and not_done().
| JobDAGNodeID protocols::jd3::job_distributors::JobExtractor::current_digraph_node_ | private |
The digraph node for which we are currently assigning new jobs – it is possible for multiple digraph nodes to have their jobs running concurrently.
Referenced by find_jobs_for_next_node(), note_job_no_longer_running(), and query_job_queen_for_more_jobs_for_current_node().
| DigraphNodeForJobMap protocols::jd3::job_distributors::JobExtractor::digraph_node_for_job_ | private |
Referenced by note_job_no_longer_running(), and pop_job_from_queue().
| SizeList protocols::jd3::job_distributors::JobExtractor::digraph_nodes_ready_to_be_run_ | private |
| bool protocols::jd3::job_distributors::JobExtractor::first_call_to_determine_job_list_ | private |
Referenced by query_job_queen_for_more_jobs_for_current_node().
| JobDigraphOP protocols::jd3::job_distributors::JobExtractor::job_dag_ | private |
| numeric::DiscreteIntervalEncodingTree< core::Size > protocols::jd3::job_distributors::JobExtractor::job_indices_seen_ | private |
Referenced by query_job_queen_for_more_jobs_for_current_node().
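job_indices_seen_ is a discrete interval encoding tree: runs of consecutive job indices collapse into [lo, hi] intervals, so remembering millions of sequentially numbered jobs costs only a handful of nodes. A hedged sketch of the idea over `std::map` (not the numeric:: implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// Toy discrete interval encoding tree: key = interval start, value =
// interval end (inclusive). Inserting an index merges it with adjacent
// intervals so consecutive indices are stored as a single entry.
class IntervalSet {
    std::map<std::size_t, std::size_t> iv_;
public:
    bool contains(std::size_t x) const {
        auto it = iv_.upper_bound(x);   // first interval starting after x
        if (it == iv_.begin()) return false;
        --it;                           // interval starting at or before x
        return x <= it->second;
    }
    void insert(std::size_t x) {
        if (contains(x)) return;
        std::size_t lo = x, hi = x;
        auto next = iv_.find(x + 1);    // merge with the interval just above
        if (next != iv_.end()) { hi = next->second; iv_.erase(next); }
        if (x > 0) {                    // merge with the interval just below
            auto prev = iv_.upper_bound(x - 1);
            if (prev != iv_.begin()) {
                --prev;
                if (prev->second + 1 == x) { lo = prev->first; iv_.erase(prev); }
            }
        }
        iv_[lo] = hi;
    }
    std::size_t interval_count() const { return iv_.size(); }
};
```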
| JobQueenOP protocols::jd3::job_distributors::JobExtractor::job_queen_ | private |
| LarvalJobs protocols::jd3::job_distributors::JobExtractor::jobs_for_current_digraph_node_ | private |
| OutstandingJobsForDigraphNodeMap protocols::jd3::job_distributors::JobExtractor::jobs_running_for_digraph_nodes_ | private |
Referenced by mark_node_as_complete(), note_job_no_longer_running(), and pop_job_from_queue().
| core::Size protocols::jd3::job_distributors::JobExtractor::maximum_jobs_to_hold_in_memory_ | private |
Referenced by query_job_queen_for_more_jobs_for_current_node(), and set_maximum_jobs_to_hold_in_memory().
| bool protocols::jd3::job_distributors::JobExtractor::node_recently_completed_ | private |
Referenced by mark_node_as_complete(), and retrieve_and_reset_node_recently_completed().
| JobMap protocols::jd3::job_distributors::JobExtractor::running_jobs_ | private |
Referenced by not_done(), note_job_no_longer_running(), pop_job_from_queue(), and running_job().
| SizeList protocols::jd3::job_distributors::JobExtractor::worker_nodes_waiting_for_jobs_ | private |