zfit.util.execution module

class zfit.util.execution.RunManager(n_cpu='auto')[source]

Bases: object

Handle the resources and runtime-specific options. The run method is equivalent to sess.run.
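
A minimal usage sketch (zfit exposes a module-level instance of this class as zfit.run, as used throughout this page):

    import tensorflow as tf
    import zfit

    vals = tf.sqrt(tf.constant([1.0, 4.0, 9.0]))
    zfit.run(vals)  # evaluates the tensor and returns a numpy array, like sess.run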

DEFAULT_MODE = {'autograd': True, 'graph': 'auto'}
property mode
property chunksize
property n_cpu
set_n_cpu(n_cpu='auto', strict=False)[source]

Set the number of cpus to be used by zfit. For more control, use set_cpus_explicit.

Parameters
  • n_cpu (Union[str, int]) – Number of cpus, will be the number for inter-op parallelism

  • strict (bool) – If strict, sets intra parallelism to 1

Return type

None
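
A hedged sketch of typical calls:

    import zfit

    zfit.run.set_n_cpu(8)               # inter-op parallelism over up to 8 CPUs
    zfit.run.set_n_cpu(4, strict=True)  # additionally pins intra-op parallelism to 1
    zfit.run.set_n_cpu()                # back to automatic detection ('auto')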

set_cpus_explicit(intra, inter)[source]

Set the number of threads (cpus) used for inter-op and intra-op parallelism.

Parameters
  • intra (int) – Number of threads used to perform a single operation. For larger operations, e.g. on large Tensors, it is usually beneficial to set this >= 2.

  • inter (int) – Parallelization on the level of ops. This is beneficial if many operations can be computed independently in parallel.

Return type

None
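
A short sketch of a possible configuration; the optimal split between intra and inter is workload dependent:

    import zfit

    # e.g. on an 8-core machine: 4 threads per operation, 2 operations in parallel
    zfit.run.set_cpus_explicit(intra=4, inter=2)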

aquire_cpu(max_cpu=-1)[source]
Return type

List[str]

static experimental_enable_eager(eager=False)[source]

DEPRECATED! Enabling eager makes TensorFlow run like Numpy. Useful for debugging. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use set_graph_mode(False)

Do NOT directly mix it with Numpy (and if you do, also enable the numerical gradient).

This can BREAK in the future.

set_graph_mode(graph)[source]

Set the policy for graph building and the usage of automatic vs numerical gradients.

zfit runs on top of TensorFlow, a modern, powerful computing engine very similar in design to Numpy. An interactive tutorial can be found at https://github.com/zfit/zfit-tutorials

Graph building

TensorFlow can be run in two ways; the first (eager) is the default mode everywhere except inside a function() decorated function. Setting the mode allows controlling the behavior of decorated functions so that they do not always trigger graph building.

  • numpy-like/eager: in this mode, the syntax slightly differs from pure Numpy but remains similar, e.g. tf.sqrt, tf.math.log. The return values are EagerTensors that represent “wrapped Numpy arrays” and can directly be used with any Numpy function. They can explicitly be converted to a Numpy array with zfit.run(EagerTensor), which also takes care of nested structures and already existing np.ndarrays, or with the .numpy() method. The difference to Numpy is that TensorFlow tries to optimize the calculation slightly beforehand and may also execute on the GPU. This results in a slight performance penalty for very small computations compared to Numpy, but improved performance for larger ones.
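
    A minimal sketch of this mode (assuming default eager execution):

        import numpy as np
        import tensorflow as tf
        import zfit

        x = tf.sqrt(tf.constant([1.0, 4.0, 9.0]))  # EagerTensor, computed immediately
        np.sum(x)    # EagerTensors can be passed directly to Numpy functions
        x.numpy()    # explicit conversion to a np.ndarray
        zfit.run(x)  # same, but also handles nested structures and np.ndarrays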

  • graph: a function can be decorated with function(), which will not execute its content immediately but first trace it and build a graph. This is done by recording all `tf.*` operations and adding them to the graph, while any Python operation, e.g. `np.random.*`, is added to the graph as a fixed value. Building a graph greatly reduces flexibility, since only `tf.*` operations can express dynamic behavior, but it can greatly increase performance. When the graph is built, it is cached (for later re-use), optimized and then executed. Calling a `tf.function` decorated function therefore makes no actual difference for the caller, but it does change how the function behaves.

    @z.function
    def add_rnd(x):
        res1 = x + np.random.uniform()  # returns a Python scalar. This exact scalar will be constant
        res2 = x + z.random.uniform(shape=())  # returns a tf.Tensor. This will be flexible
        return res1, res2
    
    res_np1, res_tf1 = add_rnd(5)
    res_np2, res_tf2 = add_rnd(5)
    
    assert res_np1 == res_np2  # they will be the same!
    assert res_tf1 != res_tf2  # these differ
    

    While writing TensorFlow is just like writing Numpy, if we build a graph, only `tf.*` dynamics “survive”. Important: while values are usually constant, changing a zfit.Parameter value with set_value() will change the value in the graph as well.


    @z.function
    def add(x, param):
        return x + param

    param = zfit.Parameter('param1', 36)
    assert add(5, param) == 41
    param.set_value(6)
    assert add(5, param) == 42  # the value changed!

    Every graph generation takes some additional time, and the stored graphs consume memory and slow down the overall execution process. To clear all caches and force a rebuild of the graph, zfit.run.clear_graph_cache() can be used.

    If a function is not decorated with z.function, this does not guarantee that it is executed eagerly, as an outer function may use the decorator. A typical case is the loss, which is decorated. Therefore, any Model called inside it will first be evaluated by building a graph.

    When to use what:
    • Any repeated call (as a typical call to the loss function in the minimization process) is usually better suited within a z.function.

    • A single call (e.g. for plotting) or repeated calls with different arguments should rather be run without a graph built first

    • Debugging is usually way easier without graph building. Therefore, set the graph mode to False

    • If the minimization fails but the pdf works without a graph, the graph mode can be switched on for everything, so that the pdf behaves the same as when the loss is called.

Parameters

graph (Union[bool, str, dict]) –

Policy for when to build a graph with which function. Currently allowed values are:

  • True: this will make all zfit.z.function() decorated functions be traced. Useful to have a consistent behavior overall, as e.g. a PDF may not be traced if pdf or integrate is called, but may be traced when inside a loss.

  • False: this will make everything execute immediately, like Numpy (this alone is not enough to be fully Numpy compatible; see also the `autograd` option).

  • ’auto’: something in between, where sampling (currently) and the loss build a graph, but all model methods, such as pdf and integrate (except for sample), do not and are executed eagerly.

  • (advanced and experimental!) a dictionary mapping the string identifier of a wrapped function (see also function() for more information) to a boolean that explicitly switches graph building on or off for that type of decorated function.
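
A short sketch of the simple policy values (the dictionary form is advanced and experimental, see above):

    import zfit

    zfit.run.set_graph_mode(True)    # trace every z.function decorated function
    zfit.run.set_graph_mode(False)   # execute everything eagerly, Numpy-like
    zfit.run.set_graph_mode('auto')  # default: graph for the loss and sampling only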

set_autograd_mode(autograd)[source]

Use automatic or numerical gradients.

zfit runs on top of TensorFlow, a modern, powerful computing engine very similar in design to Numpy. An interactive tutorial can be found at https://github.com/zfit/zfit-tutorials

automatic gradient

A strong feature of TensorFlow is the possibility to derive an analytic expression for the gradient by successively applying the chain rule to all of its operations. This works independently of whether the code is run in graph or eager execution, but it requires all dynamic operations to be tf.* operations. For example, multiplying by a constant (constant as in never changing) does not require the constant to be a tf.constant(…); a Python scalar is fine. It is also fine to use a fixed template shape built with Numpy (SciPy etc.), as the template shape stays constant (this requires z.py_function to work though, but that is another story, about graph mode or not).

To allow dynamic Numpy operations in a component, preferably wrapped with z.py_function instead of forced eager execution, and to still retrieve a meaningful gradient, a numerical gradient has to be used. In general, this can be achieved by setting autograd to False. Any derivative requested will then be computed numerically. Furthermore, some minimizers (e.g. Minuit) have their own way of calculating gradients, which can be faster. Disabling autograd and using zfit's built-in numerical calculation of gradients and Hessians can be less stable and may raise errors.

Parameters

autograd – Whether the automatic gradient feature of TensorFlow should be used or a numerical procedure instead. If any non-constant Python (numpy, scipy, …) code is used inside the model, autograd should be switched off in favor of the numerical procedure.
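
A minimal sketch:

    import zfit

    # the model contains non-constant Python (numpy/scipy) code, e.g. wrapped
    # with z.py_function, so fall back to numerical gradients:
    zfit.run.set_autograd_mode(False)

    zfit.run.set_autograd_mode(True)  # default: TensorFlow automatic gradients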

set_mode(graph=None, autograd=None)[source]

DEPRECATED! Use set_graph_mode or set_autograd_mode. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use set_graph_mode or set_autograd_mode.

get_graph_mode()[source]

Return the current policy for graph building.

Return type

Union[bool, str]

Returns

The current policy. For more information, check set_mode().

current_policy_graph()[source]

DEPRECATED! Use get_graph_mode instead. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use get_graph_mode instead.

Return type

Union[bool, str]

get_autograd_mode()[source]

Return the current policy for using the automatic gradient or falling back to the numerical one.

Return type

bool

Returns

If autograd is being used.

current_policy_autograd()[source]

DEPRECATED! Use get_autograd_mode instead. (deprecated)

Return type

bool

set_mode_default()[source]

Reset the mode to the default of graph = ‘auto’ and autograd = True.

clear_graph_cache()[source]

Clear all generated graphs and effectively reset. Should not affect execution, only performance.

In a simple fit scenario, this is not needed. But if several fits are performed with different Python objects, for example in a scan over a range (changing the norm_range and creating a new dataset each time), the minimization invokes the loss (by default building a graph) and leaves the graphs in the cache, even though the already scanned ranges are not needed anymore.

To clean up, this function can be invoked. The only effect should be a speedup; there should not be any side effects beyond that.
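
A hedged sketch of the scan scenario above; the loop body is schematic:

    import zfit

    for norm_range in ((0., 1.), (1., 2.), (2., 3.)):
        ...  # build the dataset and loss for this range, then minimize
        zfit.run.clear_graph_cache()  # drop the graphs of the finished range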

assert_executing_eagerly()[source]

Assert that the execution is eager and Python side effects are taken into account.

This can be placed inside a model in case Python side effects are necessary and no other way is possible.
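
A sketch of where such an assertion could live; the method shown is illustrative, not part of the API:

    import zfit

    def _unnormalized_pdf(self, x):  # hypothetical model method with side effects
        zfit.run.assert_executing_eagerly()  # fails if traced into a graph
        ...  # Python side effects are safe here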

property experimental_is_eager

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use current_policy_graph() is False

experimental_clear_caches()[source]

DEPRECATED! Use clear_graph_cache instead. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use clear_graph_cache instead.

zfit.util.execution.eval_object(obj)[source]
Return type

object