imgtools.balance

This module includes:

  • Chromatic adaptation

  • White balance

  • Low-light compensation

  • Correlated color temperature estimation



Functions

imgtools.balance.balance_by_scaling(img: Tensor, scaled_max: int | float | Tensor, ret_factors: bool = False) → Tensor | tuple[Tensor, Tensor]

The “wrong” von Kries transform. Multiplies each channel of an image by

coeff_channel = scaled_max / maximum_of_channel.

Parameters:
img : torch.Tensor

An image in RGB space with shape (*, C, H, W).

scaled_max : int | float | torch.Tensor

The maximum(s) after scaling.

  • A single number: the same maximum for all channels.

  • Tensor with shape (C,): the maximum for each channel.

ret_factors : bool, default=False

If true, also returns the scaling factors.

Returns:
balanced : torch.Tensor

An image with shape (*, C, H, W).

factors : torch.Tensor

Scaling factors with shape (C,). factors is returned only if ret_factors is true.

Examples

>>> import torch
>>> from imgtools.balance import balance_by_scaling
>>>
>>> rgb = torch.rand((3, 512, 512))
>>> maxi = torch.tensor((1.0, 1.0, 0.95))
>>> balanced, factors = balance_by_scaling(rgb, maxi, ret_factors=True)
>>> factors.reshape(3)  # tensor([1.0000, 1.0000, 0.9500])
imgtools.balance.cheng_pca_balance(rgb: Tensor, adaptation: str = 'von kries', rgb_spec: str = 'srgb', white: str = 'D65', obs: str | int = 10) → Tensor

White balance by Cheng’s PCA method [1]. Estimates the illuminant and applies a chromatic adaptation transform.

Parameters:
rgb : torch.Tensor

An RGB image in the range [0, 1] with shape (*, C, H, W).

adaptation : Literal['rgb', 'von kries'], default='von kries'

Chromatic adaptation method.

  • 'rgb': scales the illuminant to 1.

  • 'von kries': applies the von Kries transformation.

rgb_spec : RGBSpec, default='srgb'

The name of the RGB specification. Case-insensitive. Only used when adaptation='von kries'.

white : StandardIlluminants, default='D65'

White point. Case-insensitive. Only used when adaptation='von kries'.

obs : {2, '2', 10, '10'}, default=10

The degree of the observer. Only used when adaptation='von kries'.

Returns:
torch.Tensor

A balanced image with shape (*, C, H, W).

Raises:
ValueError

When adaptation is not in ('rgb', 'von kries').

References

[1] Cheng, Dongliang, Dilip K. Prasad, and Michael S. Brown. “Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution.” JOSA A 31.5 (2014): 1049-1058.

Examples

>>> import torch
>>> from imgtools.balance import cheng_pca_balance
>>>
>>> rgb = torch.rand((3, 512, 512))
>>> balanced = cheng_pca_balance(rgb)
imgtools.balance.clipping_balance(img: Tensor, dark_percent: float = 0.0, light_percent: float = 0.0) → Tensor

Clips the lowest dark_percent% and the highest light_percent% of values, then normalizes to [0, 1].

Parameters:
img : torch.Tensor

An image with shape (*, C, H, W).

dark_percent : float, default=0.0

The percentage of the lowest values to clip.

light_percent : float, default=0.0

The percentage of the highest values to clip.

Returns:
torch.Tensor

A balanced image with shape (*, C, H, W).
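No example is given for this function; the core idea can be sketched in plain Python on a flat list of values. The helper name `clip_and_normalize` and its index-based percentile handling are illustrative assumptions, not the library implementation:

```python
def clip_and_normalize(values, dark_percent=0.0, light_percent=0.0):
    """Clip the lowest dark_percent% and highest light_percent% of
    values, then rescale the result to [0, 1]."""
    s = sorted(values)
    n = len(s)
    lo = s[int(n * dark_percent / 100.0)]           # lower clipping threshold
    hi = s[n - 1 - int(n * light_percent / 100.0)]  # upper clipping threshold
    span = (hi - lo) or 1.0                         # guard against a flat image
    # Clip each value into [lo, hi], then map that interval onto [0, 1].
    return [(min(max(v, lo), hi) - lo) / span for v in values]
```

The tensor version presumably applies the same thresholds per channel; the sketch only shows the scalar logic.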

imgtools.balance.get_von_kries_transform_matrix(xyz_white: Tensor, xyz_target_white: Tensor, method: str = 'bradford') → Tensor

Returns a transformation matrix for von Kries adaptation, which converts colors from one illuminant to another.

Parameters:
xyz_white : torch.Tensor

The source white point in CIE XYZ space. Shape (*, 3).

xyz_target_white : torch.Tensor

The target white point in CIE XYZ space. Shape (*, 3).

method : CATMethod, default='bradford'

Chromatic adaptation method.

Returns:
torch.Tensor

A matrix with shape (*, 3, 3). Same dtype and device as xyz_white.

Examples

>>> import torch
>>> from imgtools.balance import get_von_kries_transform_matrix
>>> from imgtools.color import get_rgb_to_xyz_matrix, rgb_to_xyz, xyz_to_rgb
>>> from imgtools.utils import matrix_transform
>>>
>>> rgb = torch.tensor((0.75, 0.1, 0.23)).reshape(3, 1, 1)
>>> xyz, mat = rgb_to_xyz(rgb, 'srgb', 'D65', ret_matrix=True)
>>> white_d65 = mat.sum(1)
>>> white_d50 = get_rgb_to_xyz_matrix('srgb', 'D50').sum(1)
>>>
>>> mat_adap = get_von_kries_transform_matrix(white_d65, white_d50)
>>> new_xyz = matrix_transform(xyz, mat_adap)
>>> # Equivalent to: new_xyz = von_kries_transform(xyz, white_d65, white_d50)
>>> new_rgb = xyz_to_rgb(new_xyz, 'srgb', 'D50')  # tensor([0.6935, 0.1019, 0.2713])
imgtools.balance.gray_edge_balance(rgb: Tensor, edge: Tensor, ret_factors: bool = False) → Tensor | tuple[Tensor, Tensor]

White balance by the gray-edge algorithm. Multiplies each channel by

coeff_channel = mean_of_gradient / mean_of_gradient_of_channel.

Parameters:
rgb : torch.Tensor

An image in RGB space with shape (*, C, H, W).

edge : torch.Tensor

The edge map of the image with shape (*, C, H, W).

ret_factors : bool, default=False

If true, also returns the scaling factors.

Returns:
balanced : torch.Tensor

An image with shape (*, C, H, W).

factors : torch.Tensor

Scaling factors with shape (C,). factors is returned only if ret_factors is true.

Examples

>>> import torch
>>> from imgtools.balance import gray_edge_balance
>>> from imgtools.filter import laplacian
>>>
>>> rgb = torch.rand((3, 512, 512))
>>> edge = laplacian(rgb)
>>> balanced, factors = gray_edge_balance(rgb, edge, ret_factors=True)
>>> factors.reshape(3)  # tensor([1.0094, 0.9822, 1.0089])
imgtools.balance.gray_world_balance(rgb: Tensor, ret_factors: bool = False) → Tensor | tuple[Tensor, Tensor]

White balance by the gray-world algorithm. Multiplies each channel by

coeff_channel = mean / mean_of_channel.

Parameters:
rgb : torch.Tensor

An image in RGB space with shape (*, C, H, W).

ret_factors : bool, default=False

If true, also returns the scaling factors.

Returns:
balanced : torch.Tensor

An image with shape (*, C, H, W).

factors : torch.Tensor

Scaling factors with shape (C,). factors is returned only if ret_factors is true.

Examples

>>> import torch
>>> from imgtools.balance import gray_world_balance
>>>
>>> rgb = torch.rand((3, 512, 512))
>>> balanced, factors = gray_world_balance(rgb, ret_factors=True)
>>> factors.reshape(3)  # tensor([1.0003, 1.0013, 0.9984])
imgtools.balance.von_kries_transform(xyz: Tensor, xyz_white: Tensor, xyz_target_white: Tensor, method: str = 'bradford', ret_matrix: bool = False) → Tensor | tuple[Tensor, Tensor]

Applies chromatic adaptation transformation to an image in CIE XYZ space with given source and target white points.

If method is set to ‘xyz’, the transformation matrix between XYZ and LMS is the identity matrix; the result is then a “wrong” von Kries transformation.

Parameters:
xyz : torch.Tensor

An image in CIE XYZ space with shape (*, 3, H, W).

xyz_white : torch.Tensor

The source white point in CIE XYZ space. A tensor with numel = 3.

xyz_target_white : torch.Tensor

The target white point in CIE XYZ space. A tensor with numel = 3.

method : CATMethod, default='bradford'

Chromatic adaptation method. If method is a tensor, it is used as the transformation matrix (XYZ -> LMS -> scaled LMS -> XYZ).

ret_matrix : bool, default=False

If false, only the image is returned. If true, also returns the transformation matrix.

Returns:
new_xyz : torch.Tensor

An image in CIE XYZ space with shape (*, 3, H, W).

mat : torch.Tensor

A chromatic adaptation matrix. mat is returned only if ret_matrix is true.

Examples

>>> import torch
>>> from imgtools.balance import von_kries_transform
>>> from imgtools.color import get_rgb_to_xyz_matrix, rgb_to_xyz, xyz_to_rgb
>>> from imgtools.utils import matrix_transform
>>>
>>> rgb = torch.tensor((0.75, 0.1, 0.23)).reshape(3, 1, 1)
>>> xyz, mat = rgb_to_xyz(rgb, 'srgb', 'D65', ret_matrix=True)
>>> white_d65 = mat.sum(1)
>>> white_d50 = get_rgb_to_xyz_matrix('srgb', 'D50').sum(1)
>>>
>>> new_xyz = von_kries_transform(xyz, white_d65, white_d50)
>>> # Equivalent to:
>>> # mat_adap = get_von_kries_transform_matrix(white_d65, white_d50)
>>> # new_xyz = matrix_transform(xyz, mat_adap)
>>> new_rgb = xyz_to_rgb(new_xyz, 'srgb', 'D50')  # tensor([0.6935, 0.1019, 0.2713])
imgtools.balance.white_patch_balance(rgb: Tensor, q: int | float | Tensor = 1.0, ret_factors: bool = False) → Tensor | tuple[Tensor, Tensor]

White balance by the generalized white-patch algorithm. Multiplies each channel of an RGB image by

coeff_channel = q_quantile_of_image / q_quantile_of_channel.

When q = 1.0, this is the standard white-patch balance, equivalent to balance_by_scaling with scaled_max = 1.

Parameters:
rgb : torch.Tensor

An RGB image in the range [0, 1] with shape (*, C, H, W). If ndim > 3, the quantile is computed across images, and all images are scaled by the same factors.

q : int | float | torch.Tensor, default=1.0

The q-quantile. Values are clipped to [0, 1].

  • A single number: the quantile for all channels.

  • Tensor with shape (3,): the quantile for each channel.

ret_factors : bool, default=False

If false, only the image is returned. If true, also returns the scaling factors.

Returns:
balanced : torch.Tensor

An image with shape (*, C, H, W).

factors : torch.Tensor

Scaling factors with shape (C,). factors is returned only if ret_factors is true.

Examples

>>> import torch
>>> from imgtools.balance import white_patch_balance
>>>
>>> rgb = torch.rand((3, 512, 512))
>>> balanced, factors = white_patch_balance(rgb, 0.9, ret_factors=True)
>>> factors.reshape(3)  # tensor([1.0008, 0.9999, 1.0005])
imgtools.balance.hernandez_andre_approximation(xy: Tensor) → Tensor

Calculates CCT from xy chromaticity coordinates by the Hernández-Andrés approximation.

Parameters:
xy : torch.Tensor

Chromaticity coordinates, a tensor with shape (2, *).

Returns:
torch.Tensor

Correlated color temperature in Kelvin.
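No example is given for this function. The Hernández-Andrés approximation (Hernández-Andrés, Romero, and Lee, 1999) is an exponential fit in n = (x - x_e) / (y - y_e); a standalone scalar sketch using the published mid-range (about 3 000 K to 50 000 K) coefficients, independent of the library's tensor implementation:

```python
import math

def hernandez_andres_cct(x, y):
    """Approximate CCT (in Kelvin) from CIE xy chromaticity using the
    exponential fit of Hernandez-Andres et al. (mid-range coefficients)."""
    n = (x - 0.3366) / (y - 0.1735)
    return (-949.86315
            + 6253.80338 * math.exp(-n / 0.92159)
            + 28.70599 * math.exp(-n / 0.20039)
            + 0.00004 * math.exp(-n / 0.07125))

# D65 (x=0.3127, y=0.3290) should come out near 6500 K.
```

The library function presumably vectorizes this over a (2, *) tensor; only the scalar formula is shown here.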

imgtools.balance.mccamy_approximation(xy: Tensor) → Tensor

Calculates CCT from xy chromaticity coordinates by McCamy’s approximation.

Parameters:
xy : torch.Tensor

Chromaticity coordinates, a tensor with shape (2, *).

Returns:
torch.Tensor

Correlated color temperature in Kelvin.
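No example is given for this function. McCamy's approximation is a published cubic in n = (x - 0.3320) / (0.1858 - y); a standalone scalar sketch, independent of the library's tensor implementation:

```python
def mccamy_cct(x, y):
    """Approximate CCT (in Kelvin) from CIE xy chromaticity using
    McCamy's cubic approximation."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 (x=0.3127, y=0.3290) should come out near 6500 K.
```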

imgtools.balance.estimate_illuminant_cheng(img: Tensor, n_selected: int | float = 3.5) → Tensor

Estimates the illuminant in the image by Cheng’s PCA method [1].

Parameters:
img : torch.Tensor

An RGB image in the range [0, 1] with shape (*, C, H, W).

n_selected : int | float, default=3.5

The percentage of top and bottom points to select; (2 * n_selected)% of points are selected in total.

Returns:
torch.Tensor

The illuminant of the image. An RGB value with shape (*, C).

References

[1] Cheng, Dongliang, Dilip K. Prasad, and Michael S. Brown. “Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution.” JOSA A 31.5 (2014): 1049-1058.