Albumentations
Docs | Discord | Twitter | LinkedIn
Albumentations is a Python library for image augmentation. Image augmentation is used in deep learning and computer vision tasks to increase the quality of trained models. The purpose of image augmentation is to create new training samples from the existing data.
Here is an example of how you can apply some pixel-level augmentations from Albumentations to create new images from the original one:
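(The original README illustrates this with a before/after image. As a minimal sketch instead, the snippet below reads an assumed input file named image.jpg and applies a few pixel-level transforms chosen for illustration; each call yields a new variant of the same image.)
import albumentations as A
import cv2

# Read the source image and convert it from BGR (OpenCV's default) to RGB
image = cv2.imread("image.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# A few pixel-level augmentations; each one produces a new training sample from the same image
pixel_augmentations = [
    A.RandomBrightnessContrast(p=1.0),
    A.HueSaturationValue(p=1.0),
    A.GaussNoise(p=1.0),
    A.Blur(p=1.0),
]
augmented_images = [aug(image=image)["image"] for aug in pixel_augmentations]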
Why Albumentations
- Albumentations supports all common computer vision tasks such as classification, semantic segmentation, instance segmentation, object detection, and pose estimation.
- The library provides a simple unified API to work with all data types: images (RGB, grayscale, multispectral), segmentation masks, bounding boxes, and keypoints.
- The library contains more than 70 different augmentations to generate new training samples from the existing data.
- Albumentations is fast. We benchmark each new release to ensure that augmentations provide maximum speed.
- It works with popular deep learning frameworks such as PyTorch and TensorFlow. By the way, Albumentations is a part of the PyTorch ecosystem.
- Written by experts. The authors have experience both working on production computer vision systems and participating in competitive machine learning. Many core team members are Kaggle Masters and Grandmasters.
- The library is widely used in industry, deep learning research, machine learning competitions, and open source projects.
Sponsors
Table of contents
- Albumentations
- Why Albumentations
- Sponsors
- Table of contents
- Authors
- Current Maintainer
- Emeritus Core Team Members
- Installation
- Documentation
- A simple example
- Getting started
- I am new to image augmentation
- I want to use Albumentations for a specific task such as classification or segmentation
- I want to know how to use Albumentations with deep learning frameworks
- I want to explore augmentations and see Albumentations in action
- Who is using Albumentations
- See also
- List of augmentations
- Pixel-level transforms
- Spatial-level transforms
- A few more examples of augmentations
- Semantic segmentation on the Inria dataset
- Medical imaging
- Object detection and semantic segmentation on the Mapillary Vistas dataset
- Keypoints augmentation
- Benchmarking results
- System Information
- Benchmark Parameters
- Library Versions
- Performance Comparison
- Contributing
- Community and Support
- Comments
- Citing
Authors
Current Maintainer
Vladimir I. Iglovikov | Kaggle Grandmaster
Emeritus Core Team Members
Mikhail Druzhinin | Kaggle Expert
Alexander Buslaev | Kaggle Master
Eugene Khvedchenya | Kaggle Grandmaster
Installation
Albumentations requires Python 3.9 or higher. To install the latest version from PyPI:
pip install -U albumentations
Other installation options are described in the documentation.
Documentation
The full documentation is available at https://albumentations.ai/docs/.
A simple example
import albumentations as A
import cv2
# Declare an augmentation pipeline
transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])
# Read an image with OpenCV and convert it to the RGB colorspace
image = cv2.imread("image.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Augment an image
transformed = transform(image=image)
transformed_image = transformed["image"]
Getting started
I am new to image augmentation
Please start with the introduction articles about why image augmentation is important and how it helps to build better models.
I want to use Albumentations for a specific task such as classification or segmentation
If you want to use Albumentations for a specific task such as classification, segmentation, or object detection, refer to the set of articles that describe these tasks in depth. We also have a list of examples of applying Albumentations to different use cases.
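For instance, detection pipelines declare bbox_params so that bounding boxes are transformed together with the image. A minimal sketch, using a placeholder image and made-up box coordinates and labels purely for illustration:
import albumentations as A
import numpy as np

# Placeholder data for illustration only
image = np.zeros((480, 640, 3), dtype=np.uint8)
bboxes = [[50, 60, 200, 220]]   # one box in pascal_voc format: [x_min, y_min, x_max, y_max]
class_labels = ["cat"]

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.2),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

transformed = transform(image=image, bboxes=bboxes, class_labels=class_labels)
transformed_image = transformed["image"]
transformed_bboxes = transformed["bboxes"]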
I want to know how to use Albumentations with deep learning frameworks
We have examples of using Albumentations along with PyTorch and TensorFlow.
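As one possible PyTorch integration, here is a minimal sketch of a Dataset that runs an Albumentations pipeline and converts the result to a tensor with ToTensorV2; the ImageDataset class and its arguments are illustrative, not part of the library:
import albumentations as A
import cv2
from albumentations.pytorch import ToTensorV2
from torch.utils.data import Dataset

transform = A.Compose([
    A.Resize(256, 256),
    A.HorizontalFlip(p=0.5),
    A.Normalize(),   # converts to float and normalizes (ImageNet mean/std by default)
    ToTensorV2(),    # HWC numpy array -> CHW torch.Tensor
])

class ImageDataset(Dataset):
    # Illustrative dataset: applies the Albumentations pipeline to each image
    def __init__(self, image_paths, labels, transform=None):
        self.image_paths = image_paths
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = cv2.imread(self.image_paths[idx])
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        if self.transform is not None:
            image = self.transform(image=image)["image"]
        return image, self.labels[idx]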
I want to explore augmentations and see Albumentations in action
Check the online demo of the library. With it, you can apply augmentations to different images and see the result. Also, we have a list of all available augmentations and their targets.
Who is using Albumentations
See also
- A list of papers that cite Albumentations.
- A list of teams that used Albumentations and placed highly in machine learning competitions.
- Open source projects that use Albumentations.
List of augmentations
Pixel-level transforms
Pixel-level transforms change only the input image and leave any additional targets such as masks, bounding boxes, and keypoints unchanged. The list of pixel-level transforms:
- AdvancedBlur
- Blur
- CLAHE
- ChannelDropout
- ChannelShuffle
- ChromaticAberration
- ColorJitter
- Defocus
- Downscale
- Emboss
- Equalize
- FDA
- FancyPCA
- FromFloat
- GaussNoise
- GaussianBlur
- GlassBlur
- HistogramMatching
- HueSaturationValue
- ISONoise
- ImageCompression
- InvertImg
- MedianBlur
- MotionBlur
- MultiplicativeNoise
- Normalize
- PixelDistributionAdaptation
- PlanckianJitter
- Posterize
- RGBShift
- RandomBrightnessContrast
- RandomFog
- RandomGamma
- RandomGravel
- RandomGrayscale
- RandomJPEG
- RandomRain
- RandomShadow
- RandomSnow
- RandomSunFlare
- RandomToneCurve
- RingingOvershoot
- Sharpen
- ShotNoise
- Solarize
- Spatter
- Superpixels
- TemplateTransform
- TextImage
- ToFloat
- ToGray
- ToRGB
- ToSepia
- UnsharpMask
- ZoomBlur
Spatial-level transforms
Spatial-level transforms simultaneously change both the input image and additional targets such as masks, bounding boxes, and keypoints. The documentation includes a table showing which additional targets each spatial-level transform supports.
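To make the distinction concrete, a small sketch with a synthetic image and mask: the pixel-level transform returns the mask untouched, while the spatial-level transform flips the image and the mask together.
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)  # synthetic image
mask = np.zeros((100, 100), dtype=np.uint8)
mask[25:75, 25:75] = 1                                            # synthetic segmentation mask

# Pixel-level: only the image is modified, the mask passes through unchanged
pixel = A.Compose([A.RandomBrightnessContrast(p=1.0)])(image=image, mask=mask)
assert np.array_equal(pixel["mask"], mask)

# Spatial-level: the image and the mask are flipped together
spatial = A.Compose([A.HorizontalFlip(p=1.0)])(image=image, mask=mask)
assert np.array_equal(spatial["mask"], mask[:, ::-1])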
A few more examples of augmentations
Semantic segmentation on the Inria dataset
Medical imaging
Object detection and semantic segmentation on the Mapillary Vistas dataset
Keypoints augmentation
Benchmarking results
System Information
- Platform: macOS-15.0.1-arm64-arm-64bit
- Processor: arm
- CPU Count: 10
- Python Version: 3.12.7
Benchmark Parameters
- Number of images: 1000
- Runs per transform: 10
- Max warmup iterations: 1000
Library Versions
- albumentations: 1.4.20
- augly: 1.0.0
- imgaug: 0.4.0
- kornia: 0.7.3
- torchvision: 0.20.0
Performance Comparison
Throughput in images per second (higher is better):
Transform | albumentations 1.4.20 | augly 1.0.0 | imgaug 0.4.0 | kornia 0.7.3 | torchvision 0.20.0 |
---|---|---|---|---|---|
HorizontalFlip | 8618 ± 1233 | 4807 ± 818 | 6042 ± 788 | 390 ± 106 | 914 ± 67 |
VerticalFlip | 22847 ± 2031 | 9153 ± 1291 | 10931 ± 1844 | 1212 ± 402 | 3198 ± 200 |
Rotate | 1146 ± 79 | 1119 ± 41 | 1136 ± 218 | 143 ± 11 | 181 ± 11 |
Affine | 682 ± 192 | - | 774 ± 97 | 147 ± 9 | 130 ± 12 |
Equalize | 892 ± 61 | - | 581 ± 54 | 152 ± 19 | 479 ± 12 |
RandomCrop80 | 47341 ± 20523 | 25272 ± 1822 | 11503 ± 441 | 1510 ± 230 | 32109 ± 1241 |
ShiftRGB | 2349 ± 76 | - | 1582 ± 65 | - | - |
Resize | 2316 ± 166 | 611 ± 78 | 1806 ± 63 | 232 ± 24 | 195 ± 4 |
RandomGamma | 8675 ± 274 | - | 2318 ± 269 | 108 ± 13 | - |
Grayscale | 3056 ± 47 | 2720 ± 932 | 1681 ± 156 | 289 ± 75 | 1838 ± 130 |
RandomPerspective | 412 ± 38 | - | 554 ± 22 | 86 ± 11 | 96 ± 5 |
GaussianBlur | 1728 ± 89 | 242 ± 4 | 1090 ± 65 | 176 ± 18 | 79 ± 3 |
MedianBlur | 868 ± 60 | - | 813 ± 30 | 5 ± 0 | - |
MotionBlur | 4047 ± 67 | - | 612 ± 18 | 73 ± 2 | - |
Posterize | 9094 ± 301 | - | 2097 ± 68 | 430 ± 49 | 3196 ± 185 |
JpegCompression | 918 ± 23 | 778 ± 5 | 459 ± 35 | 71 ± 3 | 625 ± 17 |
GaussianNoise | 166 ± 12 | 67 ± 2 | 206 ± 11 | 75 ± 1 | - |
Elastic | 201 ± 5 | - | 235 ± 20 | 1 ± 0 | 2 ± 0 |
Clahe | 454 ± 22 | - | 335 ± 43 | 94 ± 9 | - |
CoarseDropout | 13368 ± 744 | - | 671 ± 38 | 536 ± 87 | - |
Blur | 5267 ± 543 | 246 ± 3 | 3807 ± 325 | - | - |
ColorJitter | 628 ± 55 | 255 ± 13 | - | 55 ± 18 | 46 ± 2 |
Brightness | 8956 ± 300 | 1163 ± 86 | - | 472 ± 101 | 429 ± 20 |
Contrast | 8879 ± 1426 | 736 ± 79 | - | 425 ± 52 | 335 ± 35 |
RandomResizedCrop | 2828 ± 186 | - | - | 287 ± 58 | 511 ± 10 |
Normalize | 1196 ± 56 | - | - | 626 ± 40 | 519 ± 12 |
PlanckianJitter | 2204 ± 385 | - | - | 813 ± 211 | - |
Contributing
To create a pull request to the repository, follow the documentation at CONTRIBUTING.md
Community and Support
Comments
On some systems, in multi-GPU setups, PyTorch may deadlock the DataLoader if OpenCV was compiled with OpenCL optimizations. Adding the following lines before importing the library may help. For more details, see https://github.com/pytorch/pytorch/issues/1355
import cv2

cv2.setNumThreads(0)
cv2.ocl.setUseOpenCL(False)
Citing
If you find this library useful for your research, please consider citing Albumentations: Fast and Flexible Image Augmentations:
@Article{info11020125,
AUTHOR = {Buslaev, Alexander and Iglovikov, Vladimir I. and Khvedchenya, Eugene and Parinov, Alex and Druzhinin, Mikhail and Kalinin, Alexandr A.},
TITLE = {Albumentations: Fast and Flexible Image Augmentations},
JOURNAL = {Information},
VOLUME = {11},
YEAR = {2020},
NUMBER = {2},
ARTICLE-NUMBER = {125},
URL = {https://www.mdpi.com/2078-2489/11/2/125},
ISSN = {2078-2489},
DOI = {10.3390/info11020125}
}