cvnets.misc package
Submodules
cvnets.misc.averaging_utils module
- class cvnets.misc.averaging_utils.EMA(model: Module, ema_momentum: float | None = 0.0005, device: str | None = 'cpu', *args, **kwargs)[source]
Bases: object
For a given model, this class computes the exponential moving average of its weights.
- Parameters:
model (torch.nn.Module) – Model
ema_momentum (Optional[float]) – Momentum value that determines the contribution of the current iteration's weights to the moving average. Default: 0.0005
device (Optional[str]) – Device (CPU or GPU) on which model resides. Default: cpu
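A minimal usage sketch is below. The update_parameters method name and the ema_model attribute are assumptions based on common EMA APIs and are not documented on this page; the comment shows the standard EMA update rule.

from torch import nn
from cvnets.misc.averaging_utils import EMA

model = nn.Linear(10, 2)
ema = EMA(model, ema_momentum=0.0005, device="cpu")

for step in range(100):  # hypothetical training loop
    # ... forward pass, backward pass, optimizer.step() ...
    # Standard EMA update: w_ema = (1 - momentum) * w_ema + momentum * w
    ema.update_parameters(model)  # assumed method name, not documented above

eval_model = ema.ema_model  # assumed attribute holding the averaged copy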
cvnets.misc.box_utils module
- cvnets.misc.box_utils.convert_locations_to_boxes(pred_locations: Tensor, anchor_boxes: Tensor, center_variance: float, size_variance: float) Tensor [source]
This is the inverse of the convert_boxes_to_locations function (Eq. (2) in the SSD paper).
- Parameters:
pred_locations (Tensor) – Predicted locations from the detector
anchor_boxes (Tensor) – Anchor (prior) boxes in center form
center_variance (float) – Variance value for the centers (c_x and c_y)
size_variance (float) – Variance value for the size (height and width)
- Returns:
Predicted boxes tensor in center form
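For reference, here is a self-contained sketch of the decoding described above, written directly from Eq. (2) of the SSD paper; it is an illustration, not the library's implementation:

import torch

def decode_locations(pred_locations, anchor_boxes, center_variance, size_variance):
    # Centers: scale the predicted offsets by the variance and the anchor size,
    # then shift by the anchor center.
    centers = pred_locations[..., :2] * center_variance * anchor_boxes[..., 2:] + anchor_boxes[..., :2]
    # Sizes: undo the log-ratio encoding by exponentiating and rescaling.
    sizes = torch.exp(pred_locations[..., 2:] * size_variance) * anchor_boxes[..., 2:]
    return torch.cat([centers, sizes], dim=-1)  # boxes in center form (cx, cy, w, h)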
- cvnets.misc.box_utils.convert_boxes_to_locations(gt_boxes: Tensor, prior_boxes: Tensor, center_variance: float, size_variance: float)[source]
This function implements Eq. (2) in the SSD paper.
- Parameters:
gt_boxes (Tensor) – Ground truth boxes in center form (cx, cy, w, h)
prior_boxes (Tensor) – Prior boxes in center form (cx, cy, w, h)
center_variance (float) – Variance value for the centers (c_x and c_y)
size_variance (float) – Variance value for the size (height and width)
- Returns:
Encoded boxes tensor (regression targets) for training
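A matching sketch of the forward encoding (again an illustration of Eq. (2), not the library source); applying the decoding sketch above to its output with the same variances recovers gt_boxes:

import torch

def encode_boxes(gt_boxes, prior_boxes, center_variance, size_variance):
    # Center offsets, normalized by the prior size and the center variance.
    centers = (gt_boxes[..., :2] - prior_boxes[..., :2]) / (prior_boxes[..., 2:] * center_variance)
    # Log size ratios, normalized by the size variance.
    sizes = torch.log(gt_boxes[..., 2:] / prior_boxes[..., 2:]) / size_variance
    return torch.cat([centers, sizes], dim=-1)  # regression targets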
cvnets.misc.common module
- cvnets.misc.common.clean_strip(obj: str | List[str], sep: str | None = ',', strip: bool = True) List[str] [source]
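No docstring is provided. Judging from the signature alone, the function appears to split a string on sep and strip whitespace from each element; the line below is an educated guess at its behavior, not documented output:

names = clean_strip("conv1, layer1, layer2")  # assumed result: ["conv1", "layer1", "layer2"]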
- cvnets.misc.common.load_pretrained_model(model: Module, wt_loc: str, opts: Namespace, *args, **kwargs) Module [source]
Helper function to load pre-trained weights.
- Parameters:
model (torch.nn.Module) – Model whose weights will be loaded.
wt_loc (str) – Path to the file to load the state_dict from.
opts (Namespace) – Input arguments.
- Returns:
The model loaded with the given weights.
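A minimal usage sketch; the checkpoint path is a placeholder, and opts would normally be the parsed command-line arguments:

import argparse

opts = argparse.Namespace()  # placeholder for the parsed CLI options
model = load_pretrained_model(model=model, wt_loc="/path/to/checkpoint.pt", opts=opts)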
- cvnets.misc.common.parameter_list(named_parameters, weight_decay: float | None = 0.0, no_decay_bn_filter_bias: bool | None = False, *args, **kwargs) List[Dict] [source]
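No docstring is provided. The List[Dict] return type suggests optimizer parameter groups; the sketch below assumes named_parameters is passed as the model's bound named_parameters method and that the returned dicts are consumable by torch.optim:

import torch
from torch import nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
param_groups = parameter_list(
    model.named_parameters,        # assumed: bound method, called internally
    weight_decay=1e-4,
    no_decay_bn_filter_bias=True,  # assumed: exempts BN/bias parameters from decay
)
optimizer = torch.optim.SGD(param_groups, lr=0.1)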
- cvnets.misc.common.freeze_module(module: Module, force_eval: bool = True) Module [source]
Sets requires_grad = False on all parameters of the given module, and puts the module in eval mode. By default, it also overrides the module's train method to make sure that the module always stays in eval mode (i.e., calling module.train(mode=True) executes module.train(mode=False)).
>>> module = nn.Linear(10, 20).train()
>>> module.training
True
>>> module.weight.requires_grad
True
>>> freeze_module(module).train().training
False
>>> module.weight.requires_grad
False
- cvnets.misc.common.freeze_modules_based_on_opts(opts: Namespace, model: Module, verbose: bool = True) Module [source]
Allows for freezing immediate modules and parameters of the model using --model.freeze-modules.
--model.freeze-modules should be a list of strings or a comma-separated list of regex expressions.
- Examples of --model.freeze-modules:
"conv.*" # freezes all (top-level) conv layers; see the example below
"^((?!classifier).)*$" # freezes everything except "classifier"; useful for linear probing
"conv1,layer1,layer2,layer3" # freezes all layers up to layer3
>>> model = nn.Sequential(OrderedDict([
...     ('conv1', nn.Conv2d(1, 20, 5)),
...     ('relu1', nn.ReLU()),
...     ('conv2', nn.Conv2d(20, 64, 5)),
...     ('relu2', nn.ReLU())
... ]))
>>> opts = argparse.Namespace(**{"model.freeze_modules": "conv1"})
>>> _ = freeze_modules_based_on_opts(opts, model)
INFO - Freezing module: conv1
>>> model.train()
>>> model.conv1.training
False
>>> model.conv2.training
True
cvnets.misc.init_utils module
- cvnets.misc.init_utils.initialize_conv_layer(module, init_method: str | None = 'kaiming_normal', std_val: float | None = 0.01) None [source]
Helper function to initialize convolution layers
- cvnets.misc.init_utils.initialize_fc_layer(module, init_method: str | None = 'normal', std_val: float | None = 0.01) None [source]
Helper function to initialize fully-connected layers
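A usage sketch covering both helpers, applied through Module.apply; only the default init_method values shown in the signatures above are assumed to be valid:

from torch import nn

def _init_weights(m: nn.Module) -> None:
    # Dispatch on layer type and delegate to the cvnets helpers.
    if isinstance(m, nn.Conv2d):
        initialize_conv_layer(m, init_method="kaiming_normal")
    elif isinstance(m, nn.Linear):
        initialize_fc_layer(m, init_method="normal", std_val=0.01)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(), nn.Linear(16 * 30 * 30, 10))
model.apply(_init_weights)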