# About the Package
## Summary
- This tutorial describes the three important aspects of the `Parameterized Transforms` package:
  - Why do we need this package?
  - What does the package provide?
  - How to use the package?
 
- NOTE: We will be using the terms Augmentation (noun) / Augment (verb) interchangeably with Transform (noun) / Transform (verb) throughout the tutorials. 
## The Why Aspect
- Augmentation strategies are important in computer vision research for improving the performance of deep learning approaches. 
- Popular libraries like `torchvision` and `kornia` provide implementations of widely used and important transforms.
- Many recent research ideas revolve around using the information of augmentation parameters in order to learn better representations. In this context, the popular libraries have different pros and cons (a short snippet after this list illustrates the `torchvision` limitation):
  - For instance, most recent deep learning approaches define their augmentation stacks in terms of `torchvision`-based transforms, experiment with them, and report the best-performing stacks. However, `torchvision`-based transforms do NOT provide access to their parameters, thereby limiting research aimed at extracting the information provided by augmentation parameters to learn better data representations.
  - On the other hand, although `kornia`-based augmentation stacks do provide access to the parameters of the augmentations, reproducing results obtained with `torchvision` stacks using `kornia`-based augmentations is difficult due to differences in their implementations.
 
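To make the `torchvision` limitation concrete, the snippet below (using only standard `torchvision` and `PIL` calls) shows that calling a transform returns just the augmented image, so the parameters that produced a particular output are not recoverable afterwards:

```python
import torchvision.transforms as T
from PIL import Image

# A dummy image purely for illustration.
image = Image.new("RGB", (224, 224), color=(128, 128, 128))

jitter = T.ColorJitter(brightness=0.4, contrast=0.4)

# The call returns only the augmented image; the brightness/contrast factors
# sampled internally are discarded and cannot be recovered afterwards.
augmented = jitter(image)

# get_params() can sample a fresh set of parameters, but it is a new random
# draw and is not tied to the augmented image produced above.
params = T.ColorJitter.get_params(
    jitter.brightness, jitter.contrast, jitter.saturation, jitter.hue
)
```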
- Ideally, we want transform implementations with the following desired properties:
  - they provide access to their parameters by exposing them,
  - they allow reproducible augmentations by enabling application of the transform defined by given parameters,
  - they are easy to subclass and extend in order to tweak their functionality, and
  - they have implementations that match those of the transforms used in obtaining state-of-the-art results (mostly, `torchvision`).
 
- This is very difficult to achieve with any of the currently existing libraries. 
## The What Aspect
- This package provides a modular, uniform, and easily extendable skeleton with a re-implementation of `torchvision`-based transforms that gives you access to their augmentation parameters and allows reproducible augmentations.
- In particular, these transforms can perform two crucial tasks associated with exposing their parameters (see the sketch after this list):
  - Given an image, the transform can return an augmented image along with the parameters used for the augmentation.
  - Given an image and well-defined augmentation parameters, the transform can return the corresponding augmented image.
 
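Here is a minimal sketch of what these two tasks could look like in code. The import path, transform name, `(image, params)` return convention, and the replay call are assumptions made for illustration, not the package's confirmed API; the later tutorials describe the exact interface.

```python
# Illustrative sketch only: the names marked "assumed" below are not verified
# against the package's actual API.
from PIL import Image
import parameterized_transforms.transforms as ptx  # assumed module path

image = Image.new("RGB", (224, 224), color=(128, 128, 128))

transform = ptx.ColorJitter(brightness=0.4, contrast=0.4)  # assumed transform name

# Task 1: augment an image and also get the parameters that were used.
augmented_image, params = transform(image)  # assumed (image, params) return value

# Task 2: reproduce the same augmentation from the stored parameters.
replayed_image = transform.apply_transform(image, params)  # assumed method name
```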
- The uniform template for all transforms and the modular re-implementation mean that you can easily subclass the transforms and tweak their functionality.
- In addition, you can write your own custom transforms using the provided templates and combine them seamlessly with other custom or package-defined transforms for your experimentation. 
## The How Aspect
- To start using the package, we recommend the following:
  - Read through the Prerequisites listed below and get well-acquainted with them.
  - Install the package as described in the linked installation instructions.
  - Read through the Tutorial Series.
- After that, you should be ready to write and experiment with parameterized transforms!
 
## Prerequisites
Here are the prerequisites for this package (a quick self-check snippet follows the list):
- `numpy`: being comfortable with `numpy` arrays and operations,
- `PIL`: being comfortable with basic `PIL` operations and the `PIL.Image.Image` class,
- `torch`: being comfortable with `torch` tensors and operations, and
- `torchvision`: being comfortable with `torchvision` transforms and operations.
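As a quick self-check, the following snippet exercises only the kind of basic, standard operations these prerequisites assume (array creation, PIL conversion, a `torchvision` transform, and tensor conversion):

```python
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image

# numpy: create a random uint8 image-like array.
array = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# PIL: wrap the array as a PIL.Image.Image instance.
pil_image = Image.fromarray(array)

# torchvision: apply a standard transform to the PIL image.
resized = T.Resize((128, 128))(pil_image)

# torch: the result is a torch.Tensor of shape (C, H, W) with values in [0, 1].
tensor = T.ToTensor()(resized)
assert isinstance(tensor, torch.Tensor) and tensor.shape == (3, 128, 128)
```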
## A Preview of All Tutorials
- Here is an overview of the tutorials in this series and the topics they cover:

| Title | Contents |
|---|---|
| 1. About the Package | An overview of the package |
| 2. The Structure of Parameterized Transforms | Explanation of the base classes |
| 3. | A walk-through of writing custom transforms, illustrated with an atomic transform |
| 4. | Information about all the transforms provided in this package; visualization of augmentations produced by the custom transforms |
| 5. Migrate From `torchvision` | Instructions to easily modify code written with `torchvision` transforms |
## Credits
If you find our work useful in your research, you can use the following BibTeX entry to cite us:
```bibtex
@software{Dhekane_Parameterized_Transforms_2025,
    author = {Dhekane, Eeshan Gunesh},
    month = {2},
    title = {{Parameterized Transforms}},
    url = {https://github.com/apple/parameterized-transforms},
    version = {1.0.0},
    year = {2025}
}
```
## About the Next Tutorial
- In the next tutorial, `001-The-Structure-of-Parametrized-Transforms.md`, we will describe the core structure of parameterized transforms.
- We will see two different types of transforms, Atomic and Composing, and describe them in detail.