Functions:
int vips_conv()
int vips_convf()
int vips_convi()
int vips_conva()
int vips_convsep()
int vips_convasep()
int vips_compass()
int vips_gaussblur()
int vips_sharpen()
int vips_spcor()
int vips_fastcor()
int vips_sobel()
int vips_canny()
These operations convolve an image in some way, or are operations based on simple convolution, or are useful with convolution.
int vips_conv (VipsImage *in, VipsImage **out, VipsImage *mask, ...);
Optional arguments:
precision: VipsPrecision, calculation accuracy
layers: gint, number of layers for approximation
cluster: gint, cluster lines closer than this distance
Convolution.
Perform a convolution of in with mask.
Each output pixel is calculated as:

sigma[i]{pixel[i] * mask[i]} / scale + offset

where scale and offset are part of mask.
By default, precision is VIPS_PRECISION_FLOAT. The output image is always VIPS_FORMAT_FLOAT unless in is VIPS_FORMAT_DOUBLE, in which case out is also VIPS_FORMAT_DOUBLE.
If precision is VIPS_PRECISION_INTEGER, then elements of mask are converted to integers before convolution, using rint(), and the output image always has the same VipsBandFormat as the input image.
For VIPS_FORMAT_UCHAR images and VIPS_PRECISION_INTEGER precision, vips_conv() uses a fast vector path based on fixed-point arithmetic. This can produce slightly different results. Disable the vector path with --vips-novector or VIPS_NOVECTOR or vips_vector_set_enabled().
If precision is VIPS_PRECISION_APPROXIMATE then, like VIPS_PRECISION_INTEGER, mask is converted to int before convolution, and the output image always has the same VipsBandFormat as the input image.
Larger values for layers give more accurate results, but are slower. As layers approaches the mask radius, the accuracy will become close to exact convolution and the speed will drop to match. For many large masks, such as Gaussian, layers need be only 10% of this value and accuracy will still be good.
Smaller values of cluster will give more accurate results, but be slower and use more memory. 10% of the mask radius is a good rule of thumb.
See also: vips_convsep().
[method]
| in | input image | |
| out | output image. | [out] | 
| mask | convolve with this mask | |
| ... | NULL-terminated list of optional named arguments | |
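As a minimal sketch (not part of the reference text), a complete program that convolves a file with a hand-built 3x3 mask at integer precision. The mask values, scale and command-line handling are illustrative only:

/* compile with: gcc conv.c `pkg-config vips --cflags --libs` */
#include <vips/vips.h>

int
main(int argc, char **argv)
{
    VipsImage *in, *mask, *out;

    if (VIPS_INIT(argv[0]))
        vips_error_exit(NULL);
    if (argc != 3)
        vips_error_exit("usage: %s IN OUT", argv[0]);

    if (!(in = vips_image_new_from_file(argv[1], NULL)))
        vips_error_exit(NULL);

    /* A 3x3 mask; "scale" and "offset" are carried as metadata on the
     * mask image and are picked up by vips_conv().
     */
    mask = vips_image_new_matrixv(3, 3,
        -1.0, -1.0, -1.0,
        -1.0, 16.0, -1.0,
        -1.0, -1.0, -1.0);
    vips_image_set_double(mask, "scale", 8.0);
    vips_image_set_double(mask, "offset", 0.0);

    /* Integer precision keeps the output format equal to the input and
     * enables the fast vector path for uchar images.
     */
    if (vips_conv(in, &out, mask,
        "precision", VIPS_PRECISION_INTEGER,
        NULL))
        vips_error_exit(NULL);

    if (vips_image_write_to_file(out, argv[2], NULL))
        vips_error_exit(NULL);

    g_object_unref(in);
    g_object_unref(mask);
    g_object_unref(out);

    return 0;
}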
int vips_convf (VipsImage *in, VipsImage **out, VipsImage *mask, ...);
Convolution. This is a low-level operation, see vips_conv() for something
more convenient. 
Perform a convolution of in with mask. Each output pixel is calculated as sigma[i]{pixel[i] * mask[i]} / scale + offset, where scale and offset are part of mask.
The convolution is performed with floating-point arithmetic. The output image is always VIPS_FORMAT_FLOAT unless in is VIPS_FORMAT_DOUBLE, in which case out is also VIPS_FORMAT_DOUBLE.
See also: vips_conv().
[method]
| in | input image | |
| out | output image. | [out] | 
| mask | convolve with this mask | |
| ... | NULL-terminated list of optional named arguments | |
int vips_convi (VipsImage *in, VipsImage **out, VipsImage *mask, ...);
Integer convolution. This is a low-level operation, see vips_conv() for 
something more convenient. 
mask is converted to an integer mask with rint() of each element, rint() of scale and rint() of offset. Each output pixel is then calculated as:

sigma[i]{pixel[i] * mask[i]} / scale + offset
The output image always has the same VipsBandFormat as the input image.
For VIPS_FORMAT_UCHAR images, vips_convi() uses a fast vector path based on
half-float arithmetic. This can produce slightly different results. 
Disable the vector path with --vips-novector or VIPS_NOVECTOR or
vips_vector_set_enabled().
See also: vips_conv().
[method]
| in | input image | |
| out | output image. | [out] | 
| mask | convolve with this mask | |
| ... | NULL-terminated list of optional named arguments | |
int vips_conva (VipsImage *in, VipsImage **out, VipsImage *mask, ...);
Optional arguments:
layers: gint, number of layers for approximation
cluster: gint, cluster lines closer than this distance
Perform an approximate integer convolution of in with mask. This is a low-level operation, see vips_conv() for something more convenient.
The output image always has the same VipsBandFormat as the input image. Elements of mask are converted to integers before convolution.
Larger values for layers give more accurate results, but are slower. As layers approaches the mask radius, the accuracy will become close to exact convolution and the speed will drop to match. For many large masks, such as Gaussian, layers need be only 10% of this value and accuracy will still be good.
Smaller values of cluster will give more accurate results, but be slower and use more memory. 10% of the mask radius is a good rule of thumb.
See also: vips_conv().
[method]
| in | input image | |
| out | output image. | [out] | 
| mask | convolution mask | |
| ... | NULL-terminated list of optional named arguments | |
int vips_convsep (VipsImage *in, VipsImage **out, VipsImage *mask, ...);
Optional arguments:
precision: VipsPrecision, calculation accuracy
layers: gint, number of layers for approximation
cluster: gint, cluster lines closer than this distance
Perform a separable convolution of in with mask.
See vips_conv() for a detailed description.
The mask must be 1xn or nx1 elements.
The image is convolved twice: once with mask and then again with mask rotated by 90 degrees. This is much faster for certain types of mask (gaussian blur, for example) than doing a full 2D convolution.
See also: vips_conv(), vips_gaussmat().
[method]
| in | input image | |
| out | output image. | [out] | 
| mask | convolution mask | |
| ... | NULL-terminated list of optional named arguments | |
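As a sketch (not from the reference text), a separable gaussian blur built from vips_gaussmat() and vips_convsep(), assuming in has already been loaded; the "separable" option asks vips_gaussmat() for a 1xn mask:

VipsImage *mask, *out;

/* 1xn gaussian mask with sigma 3, cutting the curve off at 0.2. */
if (vips_gaussmat(&mask, 3.0, 0.2, "separable", TRUE, NULL) ||
    vips_convsep(in, &out, mask, NULL))
    vips_error_exit(NULL);

g_object_unref(mask);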
int vips_convasep (VipsImage *in, VipsImage **out, VipsImage *mask, ...);
Optional arguments:
layers: gint, number of layers for approximation
Approximate separable integer convolution. This is a low-level operation, see vips_convsep() for something more convenient.
The image is convolved twice: once with mask and then again with mask rotated by 90 degrees.
mask must be 1xn or nx1 elements.
Elements of mask are converted to integers before convolution.
Larger values for layers give more accurate results, but are slower. As layers approaches the mask radius, the accuracy will become close to exact convolution and the speed will drop to match. For many large masks, such as Gaussian, layers need be only 10% of this value and accuracy will still be good.
The output image always has the same VipsBandFormat as the input image.
See also: vips_convsep().
[method]
| in | input image | |
| out | output image. | [out] | 
| mask | convolve with this mask | |
| ... | NULL-terminated list of optional named arguments | |
int vips_compass (VipsImage *in, VipsImage **out, VipsImage *mask, ...);
Optional arguments:
times: gint, how many times to rotate and convolve
angle: VipsAngle45, rotate mask by this much between convolutions
combine: VipsCombine, combine results like this
precision: VipsPrecision, precision for blur, default float
layers: gint, number of layers for approximation
cluster: gint, cluster lines closer than this distance
This convolves in with mask times times, rotating mask by angle each time. By default, it convolves twice, rotating by 90 degrees, taking the maximum result.
See also: vips_conv().
[method]
| in | input image | |
| out | output image. | [out] | 
| mask | convolve with this mask | |
| ... | NULL-terminated list of optional named arguments | |
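An illustrative fragment (assuming in is already loaded): convolve with a simple directional mask at eight 45-degree rotations and keep the maximum response at each pixel.

VipsImage *mask, *out;

mask = vips_image_new_matrixv(3, 3,
    -1.0, -1.0, -1.0,
     0.0,  0.0,  0.0,
     1.0,  1.0,  1.0);

if (vips_compass(in, &out, mask,
    "times", 8,
    "angle", VIPS_ANGLE45_D45,
    "combine", VIPS_COMBINE_MAX,
    NULL))
    vips_error_exit(NULL);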
int vips_gaussblur (VipsImage *in, VipsImage **out, double sigma, ...);
Optional arguments:
precision: VipsPrecision, precision for blur, default int
min_ampl: minimum amplitude, default 0.2
This operator runs vips_gaussmat() and vips_convsep() for you on an image.
Set min_ampl smaller to generate a larger, more accurate mask. Set sigma larger to make the blur more blurry.
See also: vips_gaussmat(), vips_convsep().
[method]
| in | input image | |
| out | output image. | [out] | 
| sigma | how large a mask to use | |
| ... | NULL-terminated list of optional named arguments | |
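A minimal fragment (assuming in is already loaded), blurring with sigma 2 and a smaller min_ampl for a larger, more accurate mask:

VipsImage *out;

if (vips_gaussblur(in, &out, 2.0,
    "min_ampl", 0.05,
    NULL))
    vips_error_exit(NULL);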
int vips_sharpen (VipsImage *in, VipsImage **out, ...);
Optional arguments:
sigma: sigma of gaussian
x1: flat/jaggy threshold
y2: maximum amount of brightening
y3: maximum amount of darkening
m1: slope for flat areas
m2: slope for jaggy areas
Selectively sharpen the L channel of a LAB image. The input image is transformed to VIPS_INTERPRETATION_LABS.
The operation performs a gaussian blur and subtracts from in to generate a high-frequency signal. This signal is passed through a lookup table formed from the five parameters and added back to in.
The lookup table is formed like this:
                     ^
                  y2 |- - - - - -----------
                     |         /
                     |        / slope m2
                     |    .../
             -x1     | ...   |
 -------------------...---------------------->
             |   ... |      x1
             |... slope m1
             /       |
            / m2     |
           /         |
          /          |
         /           |
        /            |
 ______/ _ _ _ _ _ _ | -y3
                     |
For screen output, we suggest the following settings (the defaults):
sigma == 0.5
x1 == 2
y2 == 10   (don't brighten by more than 10 L*)
y3 == 20   (can darken by up to 20 L*)
m1 == 0    (no sharpening in flat areas)
m2 == 3    (some sharpening in jaggy areas)
If you want more or less sharpening, we suggest you just change the m2 parameter.
The sigma parameter changes the width of the fringe and can be adjusted according to the output printing resolution. As an approximate guideline, use 0.5 for 4 pixels/mm (display resolution), 1.0 for 12 pixels/mm and 1.5 for 16 pixels/mm (300 dpi == 12 pixels/mm). These figures refer to the image raster, not the half-tone resolution.
See also: vips_conv().
[method]
| in | input image | |
| out | output image. | [out] | 
| ... | NULL-terminated list of optional named arguments | |
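An illustrative fragment (assuming in is already loaded), using the default settings above but with a stronger m2 for more sharpening:

VipsImage *out;

if (vips_sharpen(in, &out,
    "sigma", 0.5,
    "m2", 5.0,
    NULL))
    vips_error_exit(NULL);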
int vips_spcor (VipsImage *in, VipsImage *ref, VipsImage **out, ...);
Calculate a correlation surface.
ref is placed at every position in in and the correlation coefficient calculated.
The output image is the same size as the input. Extra input edge pixels are made by copying the existing edges outwards.
The correlation coefficient is calculated as:
                  sumij (ref(i,j)-mean(ref))(inkl(i,j)-mean(inkl))
c(k,l) = -------------------------------------------------------------------
         sqrt(sumij (ref(i,j)-mean(ref))^2) * sqrt(sumij (inkl(i,j)-mean(inkl))^2)
where inkl is the area of in centred at position (k,l).
From Niblack, "An Introduction to Digital Image Processing", Prentice/Hall, p. 138.
If the number of bands differs, one of the images must have one band. In this case, an n-band image is formed from the one-band image by joining n copies of the one-band image together, and then the two n-band images are operated upon.
The output image is always float, unless either of the two inputs is double, in which case the output is also double.
See also: vips_fastcor().
[method]
| in | input image | |
| ref | reference image | |
| out | output image. | [out] | 
| ... | NULL-terminated list of optional named arguments | |
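An illustrative fragment (assuming in and ref are already loaded): compute the surface, then use vips_max() to find the position of the best match.

VipsImage *surface;
double peak;
int x, y;

if (vips_spcor(in, ref, &surface, NULL) ||
    vips_max(surface, &peak, "x", &x, "y", &y, NULL))
    vips_error_exit(NULL);

printf("best match at %d x %d, correlation %g\n", x, y, peak);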
int vips_fastcor (VipsImage *in, VipsImage *ref, VipsImage **out, ...);
Calculate a fast correlation surface.
ref is placed at every position in in and the sum of squares of differences calculated.
The output image is the same size as the input. Extra input edge pixels are made by copying the existing edges outwards.
If the number of bands differs, one of the images must have one band. In this case, an n-band image is formed from the one-band image by joining n copies of the one-band image together, and then the two n-band images are operated upon.
The output type is uint if both inputs are integer, float if both are float or complex, and double if either is double or double complex. In other words, the output type is just large enough to hold the whole range of possible values.
See also: vips_spcor().
[method]
| in | input image | |
| ref | reference image | |
| out | output image. | [out] | 
| ... | NULL-terminated list of optional named arguments | |
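An illustrative fragment (assuming in and ref are already loaded): with vips_fastcor() the best match is the minimum of the sum-of-squares surface, so use vips_min() to locate it.

VipsImage *surface;
double lowest;
int x, y;

if (vips_fastcor(in, ref, &surface, NULL) ||
    vips_min(surface, &lowest, "x", &x, "y", &y, NULL))
    vips_error_exit(NULL);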
int vips_sobel (VipsImage *in, VipsImage **out, ...);
Simple Sobel edge detector.
See also: vips_canny().
[method]
| in | input image | |
| out | output image. | [out] | 
| ... | NULL-terminated list of optional named arguments | |
int vips_canny (VipsImage *in, VipsImage **out, ...);
Optional arguments:
sigma: gdouble, sigma for gaussian blur
precision: VipsPrecision, calculation accuracy
Find edges by Canny's method: The maximum of the derivative of the gradient in the direction of the gradient. Output is float, except for uchar input, where output is uchar, and double input, where output is double. Non-complex images only.
Use sigma to control the scale over which gradient is measured. 1.4 is usually a good value.
Use precision to set the precision of edge detection. For uchar images, setting this to VIPS_PRECISION_INTEGER will make edge detection much faster, but sacrifice some sensitivity.
You will probably need to process the output further to eliminate weak edges.
See also: vips_sobel().
[method]
| in | input image | |
| out | output image. | [out] | 
| sigma | how large a mask to use | |
| ... | NULL-terminated list of optional named arguments | |
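An illustrative fragment (assuming in is already loaded): run Canny at sigma 1.4, then threshold to remove weak edges. The cut-off of 50 is arbitrary and for illustration only.

VipsImage *edges, *strong;

if (vips_canny(in, &edges, "sigma", 1.4, NULL) ||
    vips_relational_const1(edges, &strong,
        VIPS_OPERATION_RELATIONAL_MORE, 50, NULL))
    vips_error_exit(NULL);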