Tools for the submission of Tasks.

class ScriptEditor[source]

Bases: object

Simple editor for writing shell scripts.


Add a comment.


Add an empty line.

declare_var(key, val)[source]

Declare an environment variable. If val is None, the variable is unset.


Declare the variables defined in the dictionary d.

export_envar(key, val)[source]

Export an environment variable.


Export the environment variables contained in the dict env.


Return a string with the script and reset the editor if reset is True.


Load the list of specified modules.


Reset the editor.


Add the shebang line.
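The editor methods above can be sketched with a minimal stand-in. This is an illustrative re-implementation under assumptions, not the real ScriptEditor: only declare_var and export_envar appear with their documented signatures, and the other method names (shebang, add_comment, get_script_str) are hypothetical paraphrases of the descriptions above.

```python
# Minimal sketch of a shell-script editor in the spirit of ScriptEditor.
# Illustrative stand-in only; method names other than declare_var and
# export_envar are hypothetical.

class MiniScriptEditor:
    def __init__(self):
        self._lines = []

    def shebang(self, interpreter="/bin/bash"):
        """Add the shebang line."""
        self._lines.append("#!" + interpreter)

    def add_comment(self, text):
        """Add a comment."""
        self._lines.append("# " + text)

    def declare_var(self, key, val):
        """Declare an env variable; if val is None the variable is unset."""
        if val is None:
            self._lines.append("unset " + key)
        else:
            self._lines.append(f"{key}={val}")

    def export_envar(self, key, val):
        """Export an environment variable."""
        self._lines.append(f"export {key}={val}")

    def get_script_str(self, reset=True):
        """Return the script; reset the editor if reset is True."""
        s = "\n".join(self._lines)
        if reset:
            self._lines = []
        return s

se = MiniScriptEditor()
se.shebang()
se.add_comment("Environment setup")
se.declare_var("OMP_NUM_THREADS", 1)
se.export_envar("ABI_PSPDIR", "/path/to/pseudos")
print(se.get_script_str())
```

The reset-on-read behavior lets a launcher reuse one editor instance to build one script per task.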

class PyLauncher(flow, **kwargs)[source]

Bases: object

This object handles the submission of the tasks contained in a Flow.

Initialize the object

  • flow – Flow object

  • max_njobs_inqueue – The launcher will stop submitting jobs when the number of jobs in the queue is >= max_njobs_inqueue.


alias of PyLauncherError


Return the list of tasks that can be submitted. Empty list if no task has been found.

rapidfire(max_nlaunch=-1, max_loops=1, sleep_time=5)[source]

Keeps submitting Tasks until we are out of jobs or no job is ready to run.

  • max_nlaunch – Maximum number of launches (DEFAULT: -1, i.e. no limit).

  • max_loops – Maximum number of loops

  • sleep_time – seconds to sleep between rapidfire loop iterations


The number of tasks launched.
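The rapidfire semantics described above (keep launching ready tasks, bounded by max_nlaunch and max_loops, sleeping between loop iterations) can be sketched as follows. This is an illustrative sketch under assumptions, not the PyLauncher source: get_ready_tasks and launch are hypothetical callables standing in for the launcher's internals.

```python
import time

def rapidfire_sketch(get_ready_tasks, launch,
                     max_nlaunch=-1, max_loops=1, sleep_time=5):
    """Illustrative sketch of rapidfire semantics: keep submitting tasks
    until none are ready, or until max_nlaunch / max_loops is reached.
    get_ready_tasks() -> list of submittable tasks (hypothetical helper);
    launch(task) submits a single task (hypothetical helper).
    Returns the number of tasks launched."""
    nlaunch = 0
    for loop in range(max_loops):
        if loop > 0:
            time.sleep(sleep_time)
        tasks = get_ready_tasks()
        if not tasks:
            break  # no job is ready to run
        for task in tasks:
            launch(task)
            nlaunch += 1
            if max_nlaunch != -1 and nlaunch >= max_nlaunch:
                return nlaunch
    return nlaunch
```

Re-polling between loops matters because launching one task may make its dependents ready on the next iteration.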


Run the first Task that is ready for execution.


Number of jobs launched.

class PyFlowScheduler(**kwargs)[source]

Bases: object

This object schedules the submission of the tasks in a Flow. There are two types of errors that might occur during the execution of the jobs:

  1. Python exceptions

  2. Errors in the ab-initio code

Python exceptions are easy to detect and are usually due to a bug in the python code or random errors such as IOError. The set of errors in the ab-initio code is much broader. It includes wrong input data, segmentation faults, problems with the resource manager, etc. The flow tries to handle the most common cases but there’s still a lot of room for improvement. Note, in particular, that PyFlowScheduler will shut down automatically in the following cases:

  1. The number of python exceptions is > max_num_pyexcs

  2. The number of task errors (i.e. the number of tasks whose status is S_ERROR) is > max_num_abierrs

  3. The number of jobs launched becomes greater than (safety_ratio * total_number_of_tasks).

  4. The scheduler will send an email to the user (specified by mailto) every remindme_s seconds. If the mail cannot be sent, the scheduler will shut down automatically. This check prevents the scheduler from being trapped in an infinite loop.
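The first three shutdown conditions can be sketched as a simple predicate over the scheduler's counters. This is an illustrative sketch, not the PyFlowScheduler source; the limits mirror the max_num_pyexcs, max_num_abierrs and safety_ratio options documented below.

```python
def should_shutdown(num_pyexcs, num_abierrs, num_launched, num_tasks,
                    max_num_pyexcs=0, max_num_abierrs=0, safety_ratio=5):
    """Illustrative sketch of the scheduler's shutdown checks.
    Returns (True, reason) if a shutdown condition is met, else (False, None)."""
    # 1. Too many python exceptions.
    if num_pyexcs > max_num_pyexcs:
        return True, "too many python exceptions"
    # 2. Too many tasks whose status is S_ERROR.
    if num_abierrs > max_num_abierrs:
        return True, "too many errored tasks"
    # 3. More launches than safety_ratio * total number of tasks.
    if num_launched > safety_ratio * num_tasks:
        return True, "safety ratio exceeded"
    return False, None
```

The safety-ratio check is a guard against pathological restart loops: a flow that keeps resubmitting failing tasks will eventually trip it even if no single error is fatal.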

  • weeks – number of weeks to wait (DEFAULT: 0).

  • days – number of days to wait (DEFAULT: 0).

  • hours – number of hours to wait (DEFAULT: 0).

  • minutes – number of minutes to wait (DEFAULT: 0).

  • seconds – number of seconds to wait (DEFAULT: 0).

  • mailto – The scheduler will send an email to mailto every remindme_s seconds. (DEFAULT: None i.e. not used).

  • verbose – (int) verbosity level. (DEFAULT: 0)

  • use_dynamic_manager – “yes” if the TaskManager must be re-initialized from file before launching the jobs. (DEFAULT: “no”)

  • max_njobs_inqueue – Limit on the number of jobs that can be present in the queue. (DEFAULT: 200)

  • max_ncores_used – Maximum number of cores that can be used by the scheduler.

  • remindme_s – The scheduler will send an email to the user specified by mailto every remindme_s seconds. (int, DEFAULT: 1 day).

  • max_num_pyexcs – The scheduler will exit if the number of python exceptions is > max_num_pyexcs (int, DEFAULT: 0)

  • max_num_abierrs – The scheduler will exit if the number of errored tasks is > max_num_abierrs (int, DEFAULT: 0)

  • safety_ratio – The scheduler will exit if the number of jobs launched becomes greater than safety_ratio * total_number_of_tasks_in_flow. (int, DEFAULT: 5)

  • max_nlaunches – Maximum number of tasks launched in a single iteration of the scheduler. (DEFAULT: -1 i.e. no limit)

  • debug – Debug level. Use 0 for production (int, DEFAULT: 0)

  • fix_qcritical – “yes” if the launcher should try to fix QCritical Errors (DEFAULT: “no”)

  • rmflow – If “yes”, the scheduler will remove the flow directory if the calculation completed successfully. (DEFAULT: “no”)

  • killjobs_if_errors – “yes” if the scheduler should try to kill all the running jobs before exiting due to an error. (DEFAULT: “yes”)


alias of PyFlowSchedulerError

USER_CONFIG_DIR = '/Users/shyuepingong/.abinit/abipy'
YAML_FILE = 'scheduler.yml'

Add a Flow flow to the scheduler.

classmethod autodoc()[source]

The function that will be executed by the scheduler.


Cleanup routine: remove the pid file and save the pickle database.



classmethod from_file(filepath)[source]

Read the configuration parameters from a YAML file.

classmethod from_string(s)[source]

Create an instance from string s containing a YAML dictionary.

classmethod from_user_config()[source]

Initialize the PyFlowScheduler from the YAML file ‘scheduler.yml’. Search first in the working directory and then in the configuration directory of abipy.
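For reference, a minimal ‘scheduler.yml’ might look like the fragment below. The keys come from the option list documented above; the values are illustrative examples, not defaults.

```yaml
# Illustrative scheduler.yml; values are examples, not defaults.
seconds: 30               # poll the flow every 30 seconds
max_njobs_inqueue: 200
max_num_pyexcs: 2
max_num_abierrs: 0
safety_ratio: 5
#mailto: user@example.com  # uncomment to enable email reminders
#remindme_s: 86400
```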




Return a timedelta object representing the elapsed time.


Number of exceptions raised so far.


The pid of the process associated with the scheduler.


Absolute path of the file with the pid. The file is located in the workdir of the flow.

send_email(msg, tag=None)[source]

Send an e-mail before completing the shutdown. Returns 0 on success.


Shutdown the scheduler.


Start the scheduler in a new thread. Returns 0 on success. In standalone mode, this method will block until there are no more scheduled jobs.