utils package

Submodules

utils.checkpoint_utils module

utils.checkpoint_utils.get_model_state_dict(model: Module) → Dict[source]

Returns state_dict of a given model.

Parameters:

model – A torch model (it can also be a wrapped model, e.g., one wrapped with DDP).

Returns:

state_dict of the model. If model is an EMA instance, the state_dict corresponding to the EMA parameters is returned.

utils.checkpoint_utils.load_state_dict(model: Module, state_dict: Dict, strict: bool = True) → Module[source]

Load the given state_dict into the model.

Parameters:
  • model – A torch model (it can be also a wrapped model, e.g., with DDP).

  • state_dict – A state dict dictionary to load model parameters from.

  • strict – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict function. Default: True.

Returns:

Model loaded with parameters from the given state_dict.
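
A minimal usage sketch of the two functions above (the toy nn.Linear model is an illustrative stand-in for any nn.Module, including DDP-wrapped ones):

>>> import torch.nn as nn
>>> from utils.checkpoint_utils import get_model_state_dict, load_state_dict
>>> model = nn.Linear(8, 2)  # stand-in model
>>> state_dict = get_model_state_dict(model)
>>> fresh_model = load_state_dict(nn.Linear(8, 2), state_dict, strict=True)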

utils.checkpoint_utils.average_ckpts(ckpt_loc_list: List[str]) → Dict[source]

Compute averaged parameters from a list of checkpoints.

Parameters:

ckpt_loc_list – List of paths to model checkpoints to be averaged.

Returns:

state_dict corresponding to the averaged parameters.
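
For example, to average several saved checkpoints and load the result back (a sketch; the checkpoint paths and the pre-existing model are illustrative assumptions):

>>> avg_state = average_ckpts([
...     "results/checkpoint_1.pt",
...     "results/checkpoint_2.pt",
...     "results/checkpoint_3.pt",
... ])
>>> model = load_state_dict(model, avg_state)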

utils.checkpoint_utils.avg_and_save_k_checkpoints(model_state: Dict, best_metric: float, k_best_checkpoints: int, max_ckpt_metric: bool, ckpt_str: str) → None[source]

Save top-k checkpoints and their average.

Parameters:
  • model_state – state_dict containing model parameters.

  • best_metric – Best observed value of the tracking validation metric, e.g., the best top-1 validation accuracy observed up to the current iteration.

  • k_best_checkpoints – An integer k determining the number of top checkpoints (ranked by the validation metric) to keep. If k_best_checkpoints is smaller than 1, only the best checkpoint is stored.

  • max_ckpt_metric – A boolean indicating whether higher values of the tracking validation metric are better (True) or lower values are better (False).

  • ckpt_str – String determining the path prefix for checkpoints to be saved.

utils.checkpoint_utils.save_interval_checkpoint(iterations: int, epoch: int, model: Module, optimizer: BaseOptim | Optimizer, best_metric: float, save_dir: str, gradient_scaler: GradScaler, model_ema: Module | None = None, *args, **kwargs) → None[source]

Save current iteration training checkpoint.

Parameters:
  • iterations – An integer denoting the training iteration number. Each iteration corresponds to forward-backward passes on a batch with all GPUs.

  • epoch – An integer denoting epoch number.

  • model – The model being trained.

  • optimizer – Optimizer object, which possibly stores training optimization state variables.

  • best_metric – Best observed value of the tracking validation metric, e.g., the best top-1 validation accuracy observed up to the current iteration.

  • save_dir – Path to a directory to save checkpoints.

  • gradient_scaler – GradScaler object storing the required automatic mixed precision state.

  • model_ema – EMA model to be stored in the checkpoint.

utils.checkpoint_utils.get_training_state(iterations: int, epoch: int, model: Module, optimizer: BaseOptim | Optimizer, best_metric: float, gradient_scaler: GradScaler, model_ema: Module | None = None) → Dict[source]

Create a checkpoint dictionary that includes all required states to resume the training from its current state.

Parameters:
  • iterations – An integer denoting the training iteration number. Each iteration corresponds to forward-backward passes on a batch with all GPUs.

  • epoch – An integer denoting epoch number.

  • model – The model being trained.

  • optimizer – Optimizer object, which possibly stores training optimization state variables.

  • best_metric – Best observed value of the tracking validation metric, e.g., the best top-1 validation accuracy observed up to the current iteration.

  • gradient_scaler – GradScaler object storing the required automatic mixed precision state.

  • model_ema – EMA model to be stored in the checkpoint.

Returns:

A dictionary that includes all required states to resume the training from its current state.
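
A sketch of capturing and saving the training state (argument values are illustrative; model, optimizer, and gradient_scaler are assumed from the surrounding training setup):

>>> import torch
>>> state = get_training_state(
...     iterations=10000, epoch=5, model=model, optimizer=optimizer,
...     best_metric=76.2, gradient_scaler=gradient_scaler, model_ema=None,
... )
>>> torch.save(state, "results/checkpoint_last.pt")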

utils.checkpoint_utils.save_checkpoint(iterations: int, epoch: int, model: Module, optimizer: BaseOptim | Optimizer, best_metric: float, is_best: bool, save_dir: str, gradient_scaler: GradScaler, model_ema: Module | None = None, is_ema_best: bool = False, ema_best_metric: float | None = None, max_ckpt_metric: bool = False, k_best_checkpoints: int = -1, save_all_checkpoints: bool = False, *args, **kwargs) → None[source]

Save checkpoints corresponding to the current state of the training.

Parameters:
  • iterations – An integer denoting the training iteration number. Each iteration corresponds to forward-backward passes on a batch with all GPUs.

  • epoch – An integer denoting epoch number.

  • model – The model being trained.

  • optimizer – Optimizer object, which possibly stores training optimization state variables.

  • best_metric – Best observed value of the tracking validation metric, e.g., the best top-1 validation accuracy observed up to the current iteration.

  • is_best – A boolean indicating whether the current model obtains the best validation metric compared to the previously saved checkpoints.

  • save_dir – Path to a directory to save checkpoints.

  • gradient_scaler – GradScaler object storing the required automatic mixed precision state.

  • model_ema – EMA model to be stored in the checkpoint.

  • is_ema_best – A boolean indicating whether the current EMA model obtains the best validation metric compared to the previously saved checkpoints.

  • ema_best_metric – Best observed value of the tracking validation metric by the EMA model.

  • max_ckpt_metric – A boolean indicating whether higher values of the tracking validation metric are better (True) or lower values are better (False).

  • k_best_checkpoints – An integer k determining the number of top checkpoints (ranked by the validation metric) to keep. If k_best_checkpoints is smaller than 1, only the best checkpoint is stored.

  • save_all_checkpoints – If True, model_state checkpoints (for the main model and its EMA) are saved at every epoch.

utils.checkpoint_utils.load_checkpoint(opts: Namespace, model: Module, optimizer: BaseOptim | Optimizer, gradient_scaler: GradScaler, model_ema: Module | None = None) → Tuple[Module, BaseOptim | Optimizer, GradScaler, int, int, float, Module | None][source]

Load a training checkpoint to resume training.

Parameters:
  • opts – Input arguments.

  • model – The model to be loaded with model_state_dict from the checkpoint.

  • optimizer – Optimizer object to be loaded with optim_state_dict from the checkpoint.

  • gradient_scaler – A GradScaler object to be loaded with gradient_scaler_state_dict from the checkpoint.

  • model_ema – (Optional) EMA model to be loaded with ema_state_dict from the checkpoint.

Returns:

(model, optimizer, gradient_scaler, start_epoch, start_iteration, best_metric, model_ema)

Return type:

Tuple of loaded objects and values.
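
A resume sketch that unpacks the documented return tuple (opts, model, optimizer, gradient_scaler, and model_ema are assumed from the training setup):

>>> (model, optimizer, gradient_scaler, start_epoch, start_iteration,
...  best_metric, model_ema) = load_checkpoint(
...     opts, model, optimizer, gradient_scaler, model_ema)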

utils.checkpoint_utils.load_model_state(opts: Namespace, model: Module, model_ema: Module | None = None) → Tuple[Module, Module | None][source]

Load the model (and optionally the EMA model) for finetuning.

Parameters:
  • opts – Input arguments.

  • model – The model to be loaded with the checkpoint at common.finetune.

  • model_ema – The EMA model to be loaded with the checkpoint at common.finetune_ema.

Returns:

Tuple of loaded model and EMA model. The second returned value is None when model_ema is not passed.

utils.checkpoint_utils.copy_weights(model_src: Module, model_tgt: Module) → Module[source]

Copy state_dict from source model to target model.

Parameters:
  • model_src – The source model.

  • model_tgt – The target model.

Returns:

Target model with state_dict loaded from model_src.

utils.color_map module

class utils.color_map.Colormap(n: int | None = 256, normalized: bool | None = False)[source]

Bases: object

Generate colormap for visualizing segmentation masks or bounding boxes.

This is based on the MATLAB code in the PASCAL VOC repository:

http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html#devkit

__init__(n: int | None = 256, normalized: bool | None = False)[source]
static get_bit_at_idx(val, idx)[source]
get_color_map() → ndarray[source]
get_box_color_codes() → List[source]
get_color_map_list() → List[source]
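
A usage sketch (the one-row-per-class-index RGB layout of the returned array is an assumption based on the PASCAL VOC reference):

>>> from utils.color_map import Colormap
>>> cmap = Colormap(n=256, normalized=False)
>>> color_map = cmap.get_color_map()       # ndarray of RGB colors
>>> box_colors = cmap.get_box_color_codes()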

utils.common_utils module

utils.common_utils.unwrap_model_fn(model: Module) → Module[source]

Helper function to unwrap the model.

Parameters:

model – An instance of torch.nn.Module.

Returns:

Unwrapped instance of torch.nn.Module.
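
For example, recovering the underlying module from a wrapped model (a sketch; DistributedDataParallel requires an initialized process group, and model is assumed to exist):

>>> import torch.nn as nn
>>> wrapped = nn.parallel.DistributedDataParallel(model)
>>> core = unwrap_model_fn(wrapped)  # the inner nn.Module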

utils.common_utils.check_compatibility() → None[source]
utils.common_utils.check_frozen_norm_layer(model: Module) → Tuple[bool, int][source]
utils.common_utils.device_setup(opts)[source]

Helper function for setting up the device.

utils.common_utils.create_directories(dir_path: str, is_master_node: bool) → None[source]

Helper function to create directories.

utils.common_utils.move_to_device(opts, x: Any, device: str | None = 'cpu', non_blocking: bool | None = True, *args, **kwargs) → Any[source]

Helper function to move data to a device.

utils.common_utils.is_coreml_conversion(opts) → bool[source]

utils.ddp_utils module

utils.ddp_utils.is_master(opts) → bool[source]
utils.ddp_utils.dist_barrier()[source]
utils.ddp_utils.dist_monitored_barrier(timeout: float | None = None, wait_all_ranks: bool | None = False, group: Optional = None)[source]
utils.ddp_utils.is_start_rank_node(opts) → bool[source]
utils.ddp_utils.get_world_size()[source]
utils.ddp_utils.get_node_rank()[source]
utils.ddp_utils.distributed_init(opts) → int[source]

utils.dict_utils module

utils.dict_utils.filter_keys(d: Dict, whitelist: Collection[str] | None = None) → Dict[source]

Returns a copy of the input dict @d, with a subset of keys that are in @whitelist.

Parameters:
  • d – Input dictionary that will be copied with a subset of keys.

  • whitelist – List of keys to keep in the output (if they exist in the input dict).
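
For example (output shown assuming kept keys preserve the input dict's order):

>>> from utils.dict_utils import filter_keys
>>> filter_keys({"lr": 0.1, "momentum": 0.9, "debug": True}, whitelist=["lr", "momentum"])
{'lr': 0.1, 'momentum': 0.9}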

utils.download_utils module

utils.download_utils_base module

utils.download_utils_base.get_basic_local_path(opts: Namespace, path: str, cache_loc: str = '/tmp/cvnets', force_delete: bool | None = None, use_start_rank: bool = True, sync_ranks: bool = True, *args, **kwargs) → str[source]

If the file name is a URL, download it to TMP_CACHE_LOC and then return the local path. Otherwise, return the path unchanged.

utils.import_utils module

utils.import_utils.import_modules_from_folder(folder_name: str, extra_roots: Sequence[str] = ()) → None[source]

Automatically imports all modules from the public library root folder, in addition to the @extra_roots directories.

The @folder_name directory must exist in LIBRARY_ROOT, but existence in @extra_roots is optional.

Parameters:
  • folder_name – Name of the folder to search for its internal and public modules.

  • extra_roots – By default, this function only imports from LIBRARY_ROOT/{folder_name}/**/*.py. For any extra_root provided, it will also import LIBRARY_ROOT/{extra_root}/{folder_name}/**/*.py modules.

utils.logger module

utils.logger.get_curr_time_stamp() → str[source]
utils.logger.error(message: str) → None[source]
utils.logger.color_text(in_text: str) → str[source]
utils.logger.log(message: str, end='\n') → None[source]
utils.logger.warning(message: str | Warning) → None[source]
utils.logger.ignore_exception_with_warning(message: str) → None[source]

After catching a tolerable exception E1 (e.g., when Model.forward() fails during profiling and is caught by a try/except), it is helpful to log the exception for future investigation. But printing the error stack trace as-is can be confusing when an uncaught (non-tolerable) exception E2 is raised down the road: the log will then contain stack traces for both E1 and E2, and users looking for errors in the logs should look for E2 but may find E1 instead.

This function appends "(WARNING)" to the end of all lines of the E1 traceback, so that the user can distinguish E1 from the uncaught exception E2.

Parameters:

message – Extra explanation and context for debugging. (Note: the exception object is fetched automatically from Python; there is no need to pass it as an argument or include it in message.)
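
A usage sketch (profile_model is a hypothetical helper standing in for any operation that may tolerably fail):

>>> from utils import logger
>>> try:
...     profile_model(model)  # hypothetical helper that may fail tolerably
... except Exception:
...     logger.ignore_exception_with_warning(
...         "Profiling failed; continuing without profiling results."
...     )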

utils.logger.info(message: str, print_line: bool | None = False) → None[source]
utils.logger.debug(message: str) → None[source]
utils.logger.double_dash_line(dashes: int | None = 75) → None[source]
utils.logger.singe_dash_line(dashes: int | None = 67) → None[source]
utils.logger.print_header(header: str) → None[source]
utils.logger.print_header_minor(header: str) → None[source]
utils.logger.disable_printing()[source]
utils.logger.enable_printing()[source]

utils.math_utils module

utils.math_utils.make_divisible(v: float | int, divisor: int | None = 8, min_value: float | int | None = None) → float | int[source]

This function is taken from the original TensorFlow repo. It ensures that all layers have a channel number that is divisible by divisor (8 by default). It can be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
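
For example (a sketch; the rounding behavior is inferred from the referenced TensorFlow implementation, which rounds to the nearest multiple and never returns less than 90% of v):

>>> from utils.math_utils import make_divisible
>>> make_divisible(37, divisor=8)
40
>>> make_divisible(32, divisor=8)
32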

utils.math_utils.bound_fn(min_val: float | int, max_val: float | int, value: float | int) → float | int[source]

utils.object_utils module

utils.object_utils.is_iterable(x)[source]
utils.object_utils.apply_recursively(x, cb, *args, **kwargs)[source]
utils.object_utils.flatten_to_dict(x, name: str, dict_sep: str = '/', list_sep: str = '_') → Dict[str, Number][source]
utils.object_utils.is_pytest_environment() → bool[source]

Helper function to check whether we are running in a pytest environment.

utils.object_utils_test module

utils.object_utils_test.test_apply_on_values()[source]
utils.object_utils_test.test_flatten_to_dict()[source]

utils.pytorch_to_coreml module

utils.pytorch_to_coreml.convert_pytorch_to_coreml(opts, pytorch_model: Module, jit_model_only: bool | None = False, *args, **kwargs) → Dict[source]

Convert a PyTorch model to CoreML.

Parameters:
  • opts – Arguments

  • pytorch_model – PyTorch model that needs to be converted to JIT or CoreML.

  • input_tensor – Input tensor, usually a 4-dimensional tensor of shape Batch x 3 x Height x Width

Returns:

CoreML model or package

utils.pytorch_to_coreml.assertion_check(py_out: Tensor | Dict | Tuple, jit_out: Tensor | Dict | Tuple) → None[source]

utils.registry module

class utils.registry.Registry(registry_name: str, base_class: type | None = None, separator: str | None = ':', lazy_load_dirs: List[str] | None = None, internal_dirs: Sequence[str] = ())[source]

Bases: object

A key/object registry class. This class is used in CVNets to do Dependency Injection in configs, so when you write "resnet" in a config, the library knows which module to load. You can optionally provide a base_class to ensure that all items in the registry are of type base_class.

Registry also allows for passing arguments to a registered item, for example: "top1" -> "top1(pred=logits)".

Usage:
>>> my_registry = Registry("registry_name")
>>> @my_registry.register("awesome_class_or_func")
... def my_awesome_class_or_func():
...     pass
>>> assert "awesome_class_or_func" in my_registry

It allows for vanilla key/object definition as well as functional argument injection:
>>> reg = Registry("registry_name")
>>> reg.register("awesome_dict")(dict)
>>> reg["awesome_dict(name=hello, type=fifo)"]()
{'name': 'hello', 'type': 'fifo'}

__init__(registry_name: str, base_class: type | None = None, separator: str | None = ':', lazy_load_dirs: List[str] | None = None, internal_dirs: Sequence[str] = ()) → None[source]
Parameters:
  • registry_name – registry name, used for debugging and error messages

  • base_class – If provided, will ensure that all items inside the registry are of type base_class.

  • separator – Separator between name and type in register function.

  • lazy_load_dirs – If provided, all modules under these directories will be loaded when inspecting for the modules of the registry.

items() → List[Tuple[str, RegistryItem]][source]
keys() → List[str][source]
register(name: str, type: str = '') → Callable[source]
all_arguments(parser: ArgumentParser) → ArgumentParser[source]

Iterates through all items and fetches their arguments.

Note: make sure that all items are already registered before calling this method.

parse_key(key: str) → Tuple[str, Dict[str, str]][source]

Parses a key that can contain arguments in the form: <key_name>(arg1=value1, arg2=value2, …)

Returns:

(base_name: str, parameters: dict)

Return type:

Tuple
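
For example (a sketch of the documented (base_name, parameters) return; exact whitespace handling of parsed values is an assumption):

>>> from utils.registry import Registry
>>> reg = Registry("metrics")
>>> reg.parse_key("top1(pred=logits, target=labels)")
('top1', {'pred': 'logits', 'target': 'labels'})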

utils.registry_test module

utils.registry_test.test_functional_registry() None[source]
utils.registry_test.test_basic_registration() None[source]

utils.resources module

utils.resources.cpu_count()

Returns the number of CPUs in the system.

utils.tensor_utils module

utils.tensor_utils.image_size_from_opts(opts) → Tuple[int, int][source]
utils.tensor_utils.video_size_from_opts(opts) → Tuple[int, int, int][source]
utils.tensor_utils.create_rand_tensor(opts, device: str | None = 'cpu', batch_size: int | None = 1) → Tensor[source]
utils.tensor_utils.reduce_tensor(inp_tensor: Tensor) → Tensor[source]
utils.tensor_utils.reduce_tensor_sum(inp_tensor: Tensor) → Tensor[source]
utils.tensor_utils.all_gather_list(data: List | Tensor | Dict[str, Tensor])[source]
utils.tensor_utils.gather_all_features(features: Tensor, dim=0)[source]
utils.tensor_utils.tensor_to_python_float(inp_tensor: int | float | Tensor, is_distributed: bool, reduce_op: str = 'mean') → int | float | ndarray[source]

Given a number or a Tensor (potentially in a distributed setting), returns its float value. If is_distributed is true, the Tensor is first aggregated across workers (see reduce_op).

Parameters:
  • inp_tensor – the input tensor

  • is_distributed – indicates whether we are in distributed mode

  • reduce_op – Reduce operation for aggregation. If equal to "mean", reduces using the mean; otherwise, uses the sum operation.
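
For instance, turning a scalar loss tensor into a Python float, averaged across workers when running distributed (loss and world_size are assumed from the surrounding training loop):

>>> loss_value = tensor_to_python_float(loss, is_distributed=world_size > 1, reduce_op="mean")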

utils.tensor_utils.to_numpy(img_tensor: Tensor) → ndarray[source]

utils.visualization_utils module

utils.visualization_utils.visualize_boxes_xyxy(image: ndarray, boxes: ndarray) → ndarray[source]

Utility function to draw bounding boxes of objects on a given image.

utils.visualization_utils.create_colored_mask(mask: ndarray, num_classes: int, *args, **kwargs) → ndarray[source]

Create a colored mask with random colors.

utils.visualization_utils.draw_bounding_boxes(image: ndarray, boxes: ndarray, labels: ndarray, scores: ndarray, masks: ndarray | None = None, color_map: Optional = None, object_names: List | None = None, is_bgr_format: bool | None = False, save_path: str | None = None, num_classes: int | None = 81) → None[source]

Utility function to draw bounding boxes of objects along with their labels and scores on a given image.

utils.visualization_utils.convert_to_cityscape_format(img: Tensor) → Tensor[source]

Utility to map predicted segmentation labels to the Cityscapes format.

Module contents