This chapter describes SVG's declarative filter effects feature set, which, when combined with SVG's 2D graphics capabilities, can describe much of the common artwork on the Web in such a way that client-side generation and alteration can be performed easily. In addition, the ability to apply filter effects to SVG graphics elements and container elements helps to maintain the semantic structure of the document, instead of resorting to images, which, aside from generally being fixed-resolution, tend to obscure the original semantics of the elements they replace. This is especially true for effects applied to text.
A filter effect consists of a series of graphics operations that are applied to a given source graphic to produce a modified graphical result. The result of the filter effect is rendered to the target device instead of the original source graphic.
Filter effects are defined by ‘filter’ elements. To apply a filter effect to a graphics element or a container element, you set the value of the ‘filter’ property on the given element such that it references the filter effect.
Each ‘filter’ element contains a set of filter primitives as its children. Each filter primitive performs a single fundamental graphical operation (e.g., a blur or a lighting effect) on one or more inputs, producing a graphical result. Because most of the filter primitives represent some form of image processing, in most cases the output from a filter primitive is a single RGBA image.
The original source graphic or the result from a filter primitive can be used as input into one or more other filter primitives. A common application is to use the source graphic multiple times. For example, a simple filter could replace one graphic with two by adding a black copy of the original source graphic, offset to create a drop shadow. In effect, there are now two layers of graphics, both based on the same original source graphic.
When applied to container elements such as ‘g’, the ‘filter’ property applies to the contents of the group as a whole. The group's children do not render to the screen directly; instead, the graphics commands necessary to render the children are stored temporarily. Typically, the graphics commands are executed as part of the processing of the referenced ‘filter’ element via use of the keywords SourceGraphic or SourceAlpha. Filter effects can be applied to container elements with no content (e.g., an empty ‘g’ element), in which case SourceGraphic and SourceAlpha consist of a transparent black rectangle that is the size of the filter effects region.
Sometimes filter primitives result in undefined pixels. For example, the filter primitive ‘feOffset’ can shift an image down and to the right, leaving undefined pixels at the top and left. In these cases, the undefined pixels are set to transparent black.
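For example, a minimal filter that blurs the element it is applied to might look like the following sketch (the id and numeric values are illustrative only):
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <defs>
    <filter id="simpleBlur">
      <feGaussianBlur in="SourceGraphic" stdDeviation="3"/>
    </filter>
  </defs>
  <!-- The rectangle is rendered through the blur instead of directly. -->
  <rect x="20" y="20" width="160" height="60" fill="green" filter="url(#simpleBlur)"/>
</svg>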
15.2 An example
The following shows an example of a filter effect.
Example filters01 - introducing filter effects.
The filter effect used in the example above consists of six filter primitives, numbered 1 to 6 in the discussion below; the filter's description reads "Produces a 3D lighting effect."
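The original markup for that filter is not reproduced here; the following sketch, with assumed numeric values, shows a filter whose six primitives match the description that follows (reference numbers appear as comments):
<filter id="MyFilter" filterUnits="userSpaceOnUse" x="0" y="0" width="200" height="120">
  <desc>Produces a 3D lighting effect.</desc>
  <feGaussianBlur in="SourceAlpha" stdDeviation="4" result="blur"/>          <!-- 1 -->
  <feOffset in="blur" dx="4" dy="4" result="offsetBlur"/>                    <!-- 2 -->
  <feSpecularLighting in="blur" surfaceScale="5" specularConstant=".75"
                      specularExponent="20" lighting-color="#bbbbbb"
                      result="specOut">                                      <!-- 3 -->
    <fePointLight x="-5000" y="-10000" z="20000"/>
  </feSpecularLighting>
  <feComposite in="specOut" in2="SourceAlpha" operator="in"
               result="specOut"/>                                            <!-- 4 -->
  <feComposite in="SourceGraphic" in2="specOut" operator="arithmetic"
               k1="0" k2="1" k3="1" k4="0" result="litPaint"/>               <!-- 5 -->
  <feMerge>                                                                  <!-- 6 -->
    <feMergeNode in="offsetBlur"/>
    <feMergeNode in="litPaint"/>
  </feMerge>
</filter>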
The following pictures show the intermediate image results from each of the six filter elements:
Source graphic
After filter primitive 1
After filter primitive 2
After filter primitive 3
After filter primitive 4
After filter primitive 5
After filter primitive 6
Filter primitive ‘feGaussianBlur’ takes input SourceAlpha, which is the alpha channel of the source graphic. The result is stored in a temporary buffer named "blur". Note that "blur" is used as input to both filter primitives 2 and 3.
Filter primitive ‘feOffset’ takes buffer "blur", shifts the result in a positive direction in both x and y, and creates a new buffer named "offsetBlur". The effect is that of a drop shadow.
Filter primitive ‘feSpecularLighting’ uses buffer "blur" as a model of a surface elevation and generates a lighting effect from a single point source. The result is stored in buffer "specOut".
Filter primitive ‘feComposite’ masks out the result of filter primitive 3 by the original source graphic's alpha channel so that the intermediate result is no bigger than the original source graphic.
Filter primitive ‘feComposite’ composites the result of the specular lighting with the original source graphic.
Filter primitive ‘feMerge’ composites two layers together. The lower layer consists of the drop shadow result from filter primitive 2. The upper layer consists of the specular lighting result from filter primitive 5.
15.3 The ‘filter’ element
The description of the ‘filter’ element follows:
‘filter’
Categories:
None
Content model:
Any number of the following elements, in any order:
Attribute definitions:
primitiveUnits = "userSpaceOnUse | objectBoundingBox"
Specifies the coordinate system for the various length values within the filter primitives and for the attributes that define the filter primitive subregion.
If primitiveUnits="userSpaceOnUse", any length values within the filter definitions represent values in the current user coordinate system in place at the time when the ‘filter’ element is referenced (i.e., the user coordinate system for the element referencing the ‘filter’ element via a ‘filter’ property).
If primitiveUnits="objectBoundingBox", then any length values within the filter definitions represent fractions or percentages of the bounding box on the referencing element (see Object bounding box units). Note that if only one number was specified in a <number-optional-number> value, this number is expanded out before the ‘primitiveUnits’ computation takes place.
If attribute ‘primitiveUnits’ is not specified, then the effect is as if a value of userSpaceOnUse were specified.
Animatable: yes.
x = "<coordinate>"
See Filter effects region.
y = "<coordinate>"
See Filter effects region.
width = "<length>"
See Filter effects region.
height = "<length>"
See Filter effects region.
filterRes = "<number-optional-number>"
See Filter effects region.
xlink:href = "<iri>"
An IRI reference to another ‘filter’ element within the current SVG document fragment. Any attributes which are defined on the referenced ‘filter’ element which are not defined on this element are inherited by this element. If this element has no defined filter nodes, and the referenced element has defined filter nodes (possibly due to its own ‘xlink:href’ attribute), then this element inherits the filter nodes defined from the referenced ‘filter’ element. Inheritance can be indirect to an arbitrary level; thus, if the referenced ‘filter’ element inherits attributes or its filter node specification due to its own ‘xlink:href’ attribute, then the current element can inherit those attributes or filter node specifications.
Animatable: yes.
Properties inherit into the ‘filter’ element from its ancestors; properties do not inherit from the element referencing the ‘filter’ element.
‘filter’ elements are never rendered directly; their only usage is as something that can be referenced using the ‘filter’ property. The ‘display’ property does not apply to the ‘filter’ element; thus, ‘filter’ elements are not directly rendered even if the ‘display’ property is set to a value other than none, and ‘filter’ elements are available for referencing even when the ‘display’ property on the ‘filter’ element or any of its ancestors is set to none.
15.4 The ‘filter’ property
The description of the ‘filter’ property is as follows:
‘filter’
Value:
<funciri> | none | inherit
Initial:
none
Applies to:
container elements (except ‘mask’) and graphics elements
Inherited:
no
Percentages:
N/A
Media:
visual
Animatable:
yes
<funciri>
A functional IRI reference to a ‘filter’ element which defines the filter effects that shall be applied to this element.
none
Do not apply any filter effects to this element.
15.5 Filter effects region
A ‘filter’ element can define a region on the canvas to which a given filter effect applies and can provide a resolution for any intermediate continuous tone images used to process any raster-based filter primitives. The ‘filter’ element has the following attributes which work together to define the filter effects region:
‘filterUnits’
Defines the coordinate system for attributes ‘x’, ‘y’, ‘width’ and ‘height’.
If filterUnits="userSpaceOnUse", ‘x’, ‘y’, ‘width’ and ‘height’ represent values in the current user coordinate system in place at the time when the ‘filter’ is referenced (i.e., the user coordinate system for the element referencing the ‘filter’ via a ‘filter’ property).
If filterUnits="objectBoundingBox", then ‘x’, ‘y’, ‘width’ and ‘height’ represent fractions or percentages of the bounding box on the referencing element (see Object bounding box units).
If attribute ‘filterUnits’ is not specified, then the effect is as if a value of objectBoundingBox were specified.
Animatable: yes.
‘x’, ‘y’, ‘width’ and ‘height’
These attributes define a rectangular region on the canvas to which this filter applies.
The amount of memory and processing time required to apply the filter are related to the size of this rectangle and the ‘filterRes’ attribute of the filter.
The coordinate system for these attributes depends on the value for attribute ‘filterUnits’.
Negative values for ‘width’ or ‘height’ are an error (see Error processing). Zero values disable rendering of the element which referenced the filter.
The bounds of this rectangle act as a hard clipping region for each filter primitive included with a given ‘filter’ element; thus, if the effect of a given filter primitive would extend beyond the bounds of the rectangle (this sometimes happens when using a ‘feGaussianBlur’ filter primitive with a very large ‘stdDeviation’), parts of the effect will get clipped.
If ‘x’ or ‘y’ is not specified, the effect is as if a value of -10% were specified.
If ‘width’ or ‘height’ is not specified, the effect is as if a value of 120% were specified.
Animatable: yes.
‘filterRes’
This attribute takes the form x-pixels [y-pixels], and indicates the width and height of the intermediate images in pixels. If not provided, then the user agent will use reasonable values to produce a high-quality result on the output device.
Care should be taken when assigning a non-default value to this attribute. Too small a value may result in unwanted pixelation in the result. Too large a value may result in slow processing and large memory usage.
Negative values are an error (see Error processing). Zero values disable rendering of the element which referenced the filter.
Non-integer values are truncated, i.e., rounded to the closest integer value towards zero.
Animatable: yes.
Note that both of the two possible values for ‘filterUnits’ (i.e., objectBoundingBox and userSpaceOnUse) result in a filter region whose coordinate system has its X-axis and Y-axis each parallel to the X-axis and Y-axis, respectively, of the user coordinate system for the element to which the filter will be applied.
Sometimes implementers can achieve faster performance when the filter region can be mapped directly to device pixels; thus, for best performance on display devices, it is suggested that authors define their region such that the SVG user agent can align the filter region pixel-for-pixel with the background. In particular, for best filter effects performance, avoid rotating or skewing the user coordinate system. Explicit values for attribute ‘filterRes’ can either help or harm performance. If ‘filterRes’ is smaller than the automatic (i.e., default) filter resolution, then the filter effect might have faster performance (usually at the expense of quality). If ‘filterRes’ is larger than the automatic (i.e., default) filter resolution, then filter effects performance will usually be slower.
It is often necessary to provide padding space because the filter effect might impact bits slightly outside the tight-fitting bounding box on a given object. For these purposes, it is possible to provide negative percentage values for ‘x’ and ‘y’, and percentage values greater than 100% for ‘width’ and ‘height’. This, for example, is why the defaults for the filter effects region are x="-10%" y="-10%" width="120%" height="120%".
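For instance, a filter built around a large blur might enlarge its region beyond the defaults so the blur is not clipped, and could optionally pin the intermediate resolution; the id and values below are illustrative only:
<filter id="bigBlur" filterUnits="objectBoundingBox"
        x="-25%" y="-25%" width="150%" height="150%" filterRes="400 400">
  <feGaussianBlur in="SourceGraphic" stdDeviation="12"/>
</filter>
<circle cx="100" cy="100" r="50" fill="navy" filter="url(#bigBlur)"/>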
15.6 Accessing the background image
Two possible pseudo input images for filter effects are BackgroundImage and BackgroundAlpha, which each represent an image snapshot of the canvas under the filter region at the time that the ‘filter’ element is invoked. BackgroundImage represents both the color values and alpha channel of the canvas (i.e., RGBA pixel values), whereas BackgroundAlpha represents only the alpha channel.
Implementations of SVG user agents often will need to maintain supplemental background image buffers in order to support the BackgroundImage and BackgroundAlpha pseudo input images. Sometimes, the background image buffers will contain an in-memory copy of the accumulated painting operations on the current canvas.
Because in-memory image buffers can take up significant system resources, SVG content must explicitly indicate to the SVG user agent that the document needs access to the background image before BackgroundImage and BackgroundAlpha pseudo input images can be used. The property which enables access to the background image is ‘enable-background’, defined below:
‘enable-background’
Value:
accumulate | new [ <x> <y> <width> <height> ] | inherit
Initial:
accumulate
Applies to:
container elements
Inherited:
no
Percentages:
N/A
Media:
visual
Animatable:
no
‘enable-background’ is only applicable to container elements and specifies how the SVG user agent manages the accumulation of the background image.
A value of new indicates two things:
It enables the ability of children of the current container element to access the background image.
It indicates that a new (i.e., initially transparent black) background image canvas is established and that (in effect) all children of the current container element shall be rendered into the new background image canvas in addition to being rendered onto the target device.
The meaning of enable-background: accumulate (the initial/default value) depends on context:
If an ancestor container element has a property value of enable-background: new, then all graphics elements within the current container element are rendered both onto the parent container element's background image canvas and onto the target device.
Otherwise, there is no current background image canvas, so it is only necessary to render graphics elements onto the target device. (There is no need to render to the background image canvas.)
If a filter effect specifies either the BackgroundImage or the BackgroundAlpha pseudo input images and no ancestor container element has a property value of enable-background: new, then the background image request is technically in error. Processing will proceed without interruption (i.e., no error message) and a transparent black image shall be provided in response to the request.
The optional <x>, <y>, <width> and <height> parameters on the new value indicate the subregion of the container element's user space where access to the background image is allowed to happen. These parameters enable the SVG user agent potentially to allocate smaller temporary image buffers than the default values. Thus, the values <x>, <y>, <width>, <height> act as a clipping rectangle on the background image canvas. Negative values for <width> or <height> are an error (see Error processing). If more than zero but less than four of the values <x>, <y>, <width> and <height> are specified, or if zero values are specified for <width> or <height>, BackgroundImage and BackgroundAlpha are processed as if background image processing were not enabled.
Assume you have an element E in the document and that E has a series of ancestors A1 (its immediate parent), A2, etc. (Note: A0 is E.) Each ancestor Ai will have a corresponding temporary background image offscreen buffer BUFi. The contents of the background image available to a ‘filter’ referenced by E is defined as follows:
Find the element Ai with the smallest subscript i (including A0=E) for which the ‘enable-background’ property has the value new. (Note: if there is no such ancestor element, then there is no background image available to E, in which case a transparent black image will be used as E's background image.)
For each Ai (from i=n to 1), initialize BUFi to transparent black. Render all children of Ai up to but not including Ai-1 into BUFi. The children are painted, then filtered, clipped, masked and composited using the various painting, filtering, clipping, masking and object opacity settings on the given child. Any filter effects, masking and group opacity that might be set on Ai do not apply when rendering the children of Ai into BUFi. (Note that for the case of A0=E, the graphical contents of E are not rendered into BUF1 and thus are not part of the background image available to E. Instead, the graphical contents of E are available via the SourceGraphic and SourceAlpha pseudo input images.)
Then, for each Ai (from i=1 to n-1), composite BUFi into BUFi+1.
The accumulated result (i.e., BUFn) represents the background image available to E.
Example enable-background-01 illustrates the rules for background image processing.
The example above contains five parts, described as follows:
The first set is the reference graphic. The reference graphic consists of a red rectangle followed by a 50% transparent ‘g’ element. Inside the ‘g’ is a green circle that partially overlaps the rectangle and a blue triangle that partially overlaps the circle. The three objects are then outlined by a rectangle stroked with a thin blue line. No filters are applied to the reference graphic.
The second set enables background image processing and adds an empty ‘g’ element which invokes the ShiftBGAndBlur filter. This filter takes the current accumulated background image (i.e., the entire reference graphic) as input, shifts it downward in its offscreen, blurs it, and then writes the result to the canvas. Note that the offscreen for the filter is initialized to transparent black, which allows the already rendered rectangle, circle and triangle to show through after the filter renders its own result to the canvas.
The third set enables background image processing and instead invokes the ShiftBGAndBlur filter on the inner ‘g’ element. The accumulated background at the time the filter is applied contains only the red rectangle. Because the children of the inner ‘g’ (i.e., the circle and triangle) are not part of the inner ‘g’ element's background and because ShiftBGAndBlur ignores SourceGraphic, the children of the inner ‘g’ do not appear in the result.
The fourth set enables background image processing and invokes ShiftBGAndBlur on the ‘polygon’ element that draws the triangle. The accumulated background at the time the filter is applied contains the red rectangle plus the green circle, ignoring the effect of the ‘opacity’ property on the inner ‘g’ element. (Note that the blurred green circle at the bottom does not let the red rectangle show through on its left side. This is due to ignoring the effect of the ‘opacity’ property.) Because the triangle itself is not part of the accumulated background and because ShiftBGAndBlur ignores SourceGraphic, the triangle does not appear in the result.
The fifth set is the same as the fourth except that the filter ShiftBGAndBlur_WithSourceGraphic is invoked instead of ShiftBGAndBlur. ShiftBGAndBlur_WithSourceGraphic performs the same effect as ShiftBGAndBlur, but then renders the SourceGraphic on top of the shifted, blurred background image. In this case, SourceGraphic is the blue triangle; thus, the result is the same as in the fourth case except that the blue triangle now appears.
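The markup of that example is not reproduced above; a sketch in the same spirit, with an assumed filter name, geometry and values, could be written as follows (BackgroundImage access requires the enable-background: new on an ancestor, as described above):
<filter id="ShiftBlurBG" filterUnits="userSpaceOnUse" x="0" y="0" width="500" height="300">
  <feOffset in="BackgroundImage" dx="0" dy="60" result="shifted"/>
  <feGaussianBlur in="shifted" stdDeviation="4"/>
</filter>
<g enable-background="new">
  <rect x="25" y="25" width="100" height="100" fill="red"/>
  <!-- An empty 'g' invokes the filter; BackgroundImage is the rectangle above. -->
  <g filter="url(#ShiftBlurBG)"/>
</g>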
15.7 Filter primitives overview
15.7.1 Overview
This section describes the various filter primitives that can be assembled to achieve a particular filter effect.
Unless otherwise stated, all image filters operate on premultiplied RGBA samples. Filters which work more naturally on non-premultiplied data (feColorMatrix and feComponentTransfer) will temporarily undo and redo premultiplication as specified. All raster effect filtering operations take 1 to N input RGBA images, additional attributes as parameters, and produce a single output RGBA image.
The RGBA result from each filter primitive will be clamped into the allowable ranges for colors and opacity values. Thus, for example, the result from a given filter primitive will have any negative color values or opacity values adjusted up to color/opacity of zero.
The color space in which a particular filter primitive performs its operations is determined by the value of property ‘color-interpolation-filters’ on the given filter primitive. A different property, ‘color-interpolation’, determines the color space for other color operations. Because these two properties have different initial values (‘color-interpolation-filters’ has an initial value of linearRGB whereas ‘color-interpolation’ has an initial value of sRGB), in some cases to achieve certain results (e.g., when coordinating gradient interpolation with a filtering operation) it will be necessary to explicitly set ‘color-interpolation’ to linearRGB or ‘color-interpolation-filters’ to sRGB on particular elements. Note that the examples below do not explicitly set either ‘color-interpolation’ or ‘color-interpolation-filters’, so the initial values for these properties apply to the examples.
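For example, to make a filter operate in the sRGB color space rather than the default linearRGB, ‘color-interpolation-filters’ can be set explicitly on the ‘filter’ element or on an individual primitive (the id below is hypothetical):
<filter id="softGlow" color-interpolation-filters="sRGB">
  <feGaussianBlur in="SourceGraphic" stdDeviation="2"/>
</filter>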
15.7.2 Common attributes
With the exception of the ‘in’ attribute, all of the following attributes are available on all filter primitive elements:
Attribute definitions:
x = "<coordinate>"
The minimum x coordinate for the subregion which restricts calculation and rendering of the given filter primitive. See filter primitive subregion.
Animatable: yes.
y = "<coordinate>"
The minimum y coordinate for the subregion which restricts calculation and rendering of the given filter primitive. See filter primitive subregion.
Animatable: yes.
width = "<length>"
The width of the subregion which restricts calculation and rendering of the given filter primitive. See filter primitive subregion.
A negative value is an error (see Error processing). A value of zero disables the effect of the given filter primitive (i.e., the result is a transparent black image).
Animatable: yes.
height = "<length>"
The height of the subregion which restricts calculation and rendering of the given filter primitive. See filter primitive subregion.
A negative value is an error (see Error processing). A value of zero disables the effect of the given filter primitive (i.e., the result is a transparent black image).
Animatable: yes.
result = "<filter-primitive-reference>"
Assigned name for this filter primitive. If supplied, then graphics that result from processing this filter primitive can be referenced by an ‘in’ attribute on a subsequent filter primitive within the same ‘filter’ element. If no value is provided, the output will only be available for re-use as the implicit input into the next filter primitive if that filter primitive provides no value for its ‘in’ attribute.
Note that a <filter-primitive-reference> is not an XML ID; instead, a <filter-primitive-reference> is only meaningful within a given ‘filter’ element and thus has only local scope. It is legal for the same <filter-primitive-reference> to appear multiple times within the same ‘filter’ element. When referenced, the <filter-primitive-reference> will use the closest preceding filter primitive with the given result.
Animatable: yes.
in = "SourceGraphic | SourceAlpha | BackgroundImage | BackgroundAlpha | FillPaint | StrokePaint | <filter-primitive-reference>"
Identifies input for the given filter primitive. The value can be either one of six keywords or can be a string which matches a previous ‘result’ attribute value within the same ‘filter’ element. If no value is provided and this is the first filter primitive, then this filter primitive will use SourceGraphic as its input. If no value is provided and this is a subsequent filter primitive, then this filter primitive will use the result from the previous filter primitive as its input.
If the value for ‘result’ appears multiple times within a given ‘filter’ element, then a reference to that result will use the closest preceding filter primitive with the given value for attribute ‘result’. Forward references to results are an error.
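The following sketch (hypothetical id and values) illustrates these chaining rules: the ‘feOffset’ omits ‘in’ and therefore consumes the preceding result, while the named results are referenced explicitly later:
<filter id="chainedShadow">
  <feGaussianBlur in="SourceAlpha" stdDeviation="3" result="blurred"/>
  <!-- No 'in' attribute: implicitly uses "blurred", the previous result. -->
  <feOffset dx="5" dy="5" result="shadow"/>
  <feMerge>
    <feMergeNode in="shadow"/>
    <feMergeNode in="SourceGraphic"/>
  </feMerge>
</filter>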
Definitions for the six keywords:
SourceGraphic
This keyword represents the graphics elements that were the original input into the ‘filter’ element. For raster effects filter primitives, the graphics elements will be rasterized into an initially clear RGBA raster in image space. Pixels left untouched by the original graphic will be left clear. The image is specified to be rendered in linear RGBA pixels. The alpha channel of this image captures any anti-aliasing specified by SVG. (Since the raster is linear, the alpha channel of this image will represent the exact percent coverage of each pixel.)
SourceAlpha
This keyword represents the graphics elements that were the original input into the ‘filter’ element. SourceAlpha has all of the same rules as SourceGraphic except that only the alpha channel is used. The input image is an RGBA image consisting of implicitly black color values for the RGB channels, but whose alpha channel is the same as SourceGraphic. If this option is used, then some implementations might need to rasterize the graphics elements in order to extract the alpha channel.
BackgroundImage
This keyword represents an image snapshot of the canvas under the filter region at the time that the ‘filter’ element was invoked. See Accessing the background image.
BackgroundAlpha
Same as BackgroundImage except only the alpha channel is used. See SourceAlpha and Accessing the background image.
FillPaint
This keyword represents the value of the ‘fill’ property on the target element for the filter effect. The FillPaint image has conceptually infinite extent. Frequently this image is opaque everywhere, but it might not be if the "paint" itself has alpha, as in the case of a gradient or pattern which itself includes transparent or semi-transparent parts.
StrokePaint
This keyword represents the value of the ‘stroke’ property on the target element for the filter effect. The StrokePaint image has conceptually infinite extent. Frequently this image is opaque everywhere, but it might not be if the "paint" itself has alpha, as in the case of a gradient or pattern which itself includes transparent or semi-transparent parts.
The ‘in’ attribute is available on all filter primitive elements that require an input.
Animatable: yes.
15.7.3 Filter primitive subregion
All filter primitives have attributes ‘x’, ‘y’, ‘width’ and ‘height’ which identify a subregion which restricts calculation and rendering of the given filter primitive. These attributes are defined according to the same rules as other filter primitives' coordinate and length attributes and thus represent values in the coordinate system established by attribute ‘primitiveUnits’ on the ‘filter’ element.
‘x’, ‘y’, ‘width’ and ‘height’ default to the union (i.e., tightest fitting bounding box) of the subregions defined for all referenced nodes. If there are no referenced nodes (e.g., for ‘feImage’ or ‘feTurbulence’), or one or more of the referenced nodes is a standard input (one of SourceGraphic, SourceAlpha, BackgroundImage, BackgroundAlpha, FillPaint or StrokePaint), or for ‘feTile’ (which is special because its principal function is to replicate the referenced node in X and Y and thereby produce a usually larger result), the default subregion is 0%, 0%, 100%, 100%, where as a special case the percentages are relative to the dimensions of the filter region, thus making the default filter primitive subregion equal to the filter region.
‘x’, ‘y’, ‘width’ and ‘height’ act as a hard clipping rectangle on both the filter primitive's input image(s) and the filter primitive result.
All intermediate offscreens are defined to not exceed the intersection of ‘x’, ‘y’, ‘width’ and ‘height’ with the filter region. The filter region and any of the ‘x’, ‘y’, ‘width’ and ‘height’ subregions are to be set up such that all offscreens are made big enough to accommodate any pixels which even partly intersect with either the filter region or the x, y, width, height subregions.
‘feTile’ references a previous filter primitive and then stitches the tiles together based on the ‘x’, ‘y’, ‘width’ and ‘height’ values of the referenced filter primitive in order to fill its own filter primitive subregion.
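As an illustration (hypothetical id and values), a small ‘feFlood’ subregion can be replicated across the whole filter region by ‘feTile’:
<filter id="goldTiles" filterUnits="userSpaceOnUse" primitiveUnits="userSpaceOnUse"
        x="0" y="0" width="200" height="200">
  <!-- The flood is restricted to a 20x20 filter primitive subregion... -->
  <feFlood x="10" y="10" width="20" height="20" flood-color="gold" result="tile"/>
  <!-- ...and feTile replicates that subregion across its own (default) subregion,
       i.e. the whole filter region. -->
  <feTile in="tile"/>
</filter>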
Example primitive-subregion-01 demonstrates the effect of specifying a filter primitive subregion:
In the example above there are three rects that each have a cross and a circle in them. The circle element in each one has a different filter applied, but with the same filter primitive subregion. The filter output should be limited to the filter primitive subregion, so you should never see the circles themselves, just the rects that make up the filter primitive subregion.
The upper left rect shows an ‘feFlood’ with flood-opacity="75%", so the cross should be visible through the green rect in the middle.
The lower left rect shows an ‘feMerge’ that merges SourceGraphic with FillPaint. Since the circle has fill-opacity="0.5" it will also be transparent so that the cross is visible through the green rect in the middle.
The upper right rect shows an ‘feBlend’ that has mode="multiply". Since the circle in this case isn't transparent the result is totally opaque. The rect should be dark green and the cross should not be visible through it.
15.8 Light source elements and properties
15.8.1 Introduction
The following sections define the elements that define a light source, ‘feDistantLight’, ‘fePointLight’ and ‘feSpotLight’, and property ‘lighting-color’, which defines the color of the light.
15.8.2 Light source ‘feDistantLight’
‘feDistantLight’
Categories:
Light source element
Content model:
Any number of the following elements, in any order:
azimuth = "<number>"
Direction angle for the light source on the XY plane (clockwise), in degrees from the x axis.
If the attribute is not specified, then the effect is as if a value of 0 were specified.
Animatable: yes.
elevation = "<number>"
Direction angle for the light source from the XY plane towards the z axis, in degrees. Note the positive Z-axis points towards the viewer of the content.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
The following diagram illustrates the angles which ‘azimuth’ and ‘elevation’ represent in an XYZ coordinate system.
15.8.3 Light source ‘fePointLight’
‘fePointLight’
Categories:
Light source element
Content model:
Any number of the following elements, in any order:
x = "<number>"
X location for the light source in the coordinate system established by attribute ‘primitiveUnits’ on the ‘filter’ element.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
y = "<number>"
Y location for the light source in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
z = "<number>"
Z location for the light source in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element, assuming that, in the initial coordinate system, the positive Z-axis comes out towards the person viewing the content and assuming that one unit along the Z-axis equals one unit in X and Y.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
15.8.4 Light source ‘feSpotLight’
‘feSpotLight’
Categories:
Light source element
Content model:
Any number of the following elements, in any order:
x = "<number>"
X location for the light source in the coordinate system established by attribute ‘primitiveUnits’ on the ‘filter’ element.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
y = "<number>"
Y location for the light source in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
z = "<number>"
Z location for the light source in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element, assuming that, in the initial coordinate system, the positive Z-axis comes out towards the person viewing the content and assuming that one unit along the Z-axis equals one unit in X and Y.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
pointsAtX = "<number>"
X location in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element of the point at which the light source is pointing.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
pointsAtY = "<number>"
Y location in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element of the point at which the light source is pointing.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
pointsAtZ = "<number>"
Z location in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element of the point at which the light source is pointing, assuming that, in the initial coordinate system, the positive Z-axis comes out towards the person viewing the content and assuming that one unit along the Z-axis equals one unit in X and Y.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
specularExponent = "<number>"
Exponent value controlling the focus for the light source.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
limitingConeAngle = "<number>"
A limiting cone which restricts the region where the light is projected. No light is projected outside the cone.
‘limitingConeAngle’ represents the angle in degrees between the spot light axis (i.e., the axis between the light source and the point at which it is pointing) and the spot light cone. User agents should apply a smoothing technique such as anti-aliasing at the boundary of the cone.
If no value is specified, then no limiting cone will be applied.
Animatable: yes.
15.8.5 The ‘lighting-color’ property
The ‘lighting-color’ property defines the color of the light source for filter primitives ‘feDiffuseLighting’ and ‘feSpecularLighting’.
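As an illustration (hypothetical id and values), a spot-lit specular highlight whose color is set with ‘lighting-color’ might be written as:
<filter id="spotlight">
  <feSpecularLighting in="SourceAlpha" surfaceScale="3" specularConstant="1"
                      specularExponent="10" lighting-color="white" result="highlight">
    <feSpotLight x="40" y="20" z="120" pointsAtX="100" pointsAtY="100" pointsAtZ="0"
                 specularExponent="4" limitingConeAngle="30"/>
  </feSpecularLighting>
  <!-- Clip the highlight to the shape of the source before further use. -->
  <feComposite in="highlight" in2="SourceAlpha" operator="in"/>
</filter>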
15.9 Filter primitive ‘feBlend’
This filter composites two objects together using commonly used imaging software blending modes. It performs a pixel-wise combination of two input images.
‘feBlend’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
mode = "normal | multiply | screen | darken | lighten"
One of the image blending modes (see table below). If attribute ‘mode’ is not specified, then the effect is as if a value of normal were specified.
Animatable: yes.
in2 = "(see ‘in’ attribute)"
The second input image to the blending operation. This attribute can take on the same values as the
‘in’attribute.
Animatable: yes.
For all feBlend modes, the result opacity is computed as follows:
qr = 1 - (1-qa)*(1-qb)
For the compositing formulas below, the following definitions apply:
cr = Result color (RGB) - premultiplied
qa = Opacity value at a given pixel for image A
qb = Opacity value at a given pixel for image B
ca = Color (RGB) at a given pixel for image A - premultiplied
cb = Color (RGB) at a given pixel for image B - premultiplied
The following table provides the list of available image blending modes:
Image Blending Mode    Formula for computing result color
normal                 cr = (1 - qa) * cb + ca
multiply               cr = (1 - qa) * cb + (1 - qb) * ca + ca * cb
screen                 cr = cb + ca - ca * cb
darken                 cr = Min ((1 - qa) * cb + ca, (1 - qb) * ca + cb)
lighten                cr = Max ((1 - qa) * cb + ca, (1 - qb) * ca + cb)
The 'normal' blend mode is equivalent to operator="over" on the ‘feComposite’ filter primitive, matches the blending method used by ‘feMerge’ and matches the simple alpha compositing technique used in SVG for all compositing outside of filter effects.
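A short sketch (hypothetical id and color): multiplying the source graphic with a flooded color layer supplied through ‘in2’:
<filter id="tint">
  <feFlood flood-color="#4080ff" result="wash"/>
  <feBlend in="SourceGraphic" in2="wash" mode="multiply"/>
</filter>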
Example feBlend shows examples of the five blend modes.
15.10 Filter primitive ‘feColorMatrix’
This filter applies a matrix transformation:
| R' |   | a00 a01 a02 a03 a04 |   | R |
| G' |   | a10 a11 a12 a13 a14 |   | G |
| B' | = | a20 a21 a22 a23 a24 | * | B |
| A' |   | a30 a31 a32 a33 a34 |   | A |
| 1  |   |  0   0   0   0   1  |   | 1 |
on the RGBA color and alpha values of every pixel on the input graphics to produce a result with a new set of RGBA color and alpha values.
The calculations are performed on non-premultiplied color values. If the input graphics consists of premultiplied color values, those values are automatically converted into non-premultiplied color values for this operation.
These matrices often perform an identity mapping in the alpha channel. If that is the case, an implementation can avoid the costly undoing and redoing of the premultiplication for all pixels with A = 1.
‘feColorMatrix’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
type = "matrix | saturate | hueRotate | luminanceToAlpha"
Indicates the type of matrix operation. The keyword matrix indicates that a full 5x4 matrix of values will be provided. The other keywords represent convenience shortcuts to allow commonly used color operations to be performed without specifying a complete matrix. If attribute ‘type’ is not specified, then the effect is as if a value of matrix were specified.
Animatable: yes.
values = "list of <number>s"
The contents of ‘values’ depends on the value of attribute ‘type’:
For type="matrix", ‘values’ is a list of 20 matrix values (a00 a01 a02 a03 a04 a10 a11 ... a34), separated by whitespace and/or a comma. For example, the identity matrix could be expressed as:
type="matrix"
values="1 0 0 0 0  0 1 0 0 0  0 0 1 0 0  0 0 0 1 0"
If the attribute is not specified, then the default behavior depends on the value of attribute ‘type’. If type="matrix", then this attribute defaults to the identity matrix. If type="saturate", then this attribute defaults to the value 1, which results in the identity matrix. If type="hueRotate", then this attribute defaults to the value 0, which results in the identity matrix.
Animatable: yes.
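For instance, a simple desaturation filter (hypothetical id and value) uses the saturate shortcut instead of a full matrix:
<filter id="desaturate">
  <feColorMatrix in="SourceGraphic" type="saturate" values="0.2"/>
</filter>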
Example feColorMatrix shows examples of the four types of feColorMatrix operations.
15.11 Filter primitive ‘feComponentTransfer’
This filter primitive performs component-wise remapping of data as follows:
R' = feFuncR( R )
G' = feFuncG( G )
B' = feFuncB( B )
A' = feFuncA( A )
for every pixel. It allows operations like brightness adjustment, contrast adjustment, color balance or thresholding.
The calculations are performed on non-premultiplied color values. If the input graphics consists of premultiplied color values, those values are automatically converted into non-premultiplied color values for this operation. (Note that the undoing and redoing of the premultiplication can be avoided if feFuncA is the identity transform and all alpha values on the source graphic are set to 1.)
‘feComponentTransfer’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
The child elements of a ‘feComponentTransfer’ element specify the transfer functions for the four channels:
‘feFuncR’ — transfer function for the red component of the input graphic
‘feFuncG’ — transfer function for the green component of the input graphic
‘feFuncB’ — transfer function for the blue component of the input graphic
‘feFuncA’ — transfer function for the alpha component of the input graphic
The following rules apply to the processing of the ‘feComponentTransfer’ element:
If more than one transfer function element of the same kind is specified, the last occurrence is to be used.
If any of the transfer function elements are unspecified, the ‘feComponentTransfer’ must be processed as if those transfer function elements were specified with their ‘type’ attributes set to 'identity'.
‘feFuncR’, ‘feFuncG’, ‘feFuncB’, ‘feFuncA’
Categories:
None
Content model:
Any number of the following elements, in any order:
transfer function element attributes—‘type’,‘tableValues’,‘slope’,‘intercept’,‘amplitude’,‘exponent’,‘offset’
DOM Interfaces:
SVGFEFuncRElement, SVGFEFuncGElement, SVGFEFuncBElement, SVGFEFuncAElement
The attributes below are the transfer function element attributes, which apply to sub-elements ‘feFuncR’, ‘feFuncG’, ‘feFuncB’ and ‘feFuncA’ that define the transfer functions.
type = "identity | table | discrete | linear | gamma"
Indicates the type of component transfer function. The type of function determines the applicability of the other attributes.
In the following, C is the initial component (e.g., ‘feFuncR’) and C' is the remapped component; both are in the closed interval [0,1].
For identity:
C' = C
For table, the function is defined by linear interpolation between values given in the attribute ‘tableValues’. The table has n+1 values (i.e., v0 to vn) specifying the start and end values for n evenly sized interpolation regions. Interpolations use the following formula:
For a value C < 1 find k such that:
k/n <= C < (k+1)/n
The result C' is given by:
C' = vk + (C - k/n)*n * (vk+1 - vk)
If C = 1 then:
C' = vn.
For discrete, the function is defined by the step function given in the attribute ‘tableValues’, which provides a list of n values (i.e., v0 to vn-1) in order to identify a step function consisting of n steps. The step function is defined by the following formula:
For a value C < 1 find k such that:
k/n <= C < (k+1)/n
The result C' is given by:
C' = vk
If C = 1 then:
C' = vn-1.
For linear, the function is defined by the following linear equation:
C' = slope * C + intercept
For gamma, the function is defined by the following exponential function:
C' = amplitude * pow(C, exponent) + offset
Animatable: yes.
tableValues = "(list of <number>s)"
When type="table", the list of <number>s v0, v1, ... vn, separated by white space and/or a comma, which define the lookup table. An empty list results in an identity transfer function. If the attribute is not specified, then the effect is as if an empty list were provided.
Animatable: yes.
slope = "<number>"
When
type="linear", the slope of the linear function.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
intercept = "<number>"
When
type="linear", the intercept of the linear function.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
amplitude = "<number>"
When
type="gamma", the amplitude of the gamma function.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
exponent = "<number>"
When
type="gamma", the exponent of the gamma function.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
offset = "<number>"
When
type="gamma", the offset of the gamma function.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
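As an illustration (hypothetical id and values), the following remaps the color channels while leaving alpha untouched; the omitted ‘feFuncA’ is treated as identity:
<filter id="boostContrast">
  <feComponentTransfer in="SourceGraphic">
    <feFuncR type="linear" slope="1.5" intercept="-0.25"/>
    <feFuncG type="linear" slope="1.5" intercept="-0.25"/>
    <feFuncB type="table" tableValues="0 0.5 1"/>
    <!-- 'feFuncA' omitted: processed as type="identity". -->
  </feComponentTransfer>
</filter>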
Example feComponentTransfer shows examples of the four types of feComponentTransfer operations.
15.12 Filter primitive ‘feComposite’
This filter performs the combination of the two input images pixel-wise in image space using one of the Porter-Duff [PORTERDUFF] compositing operations: over, in, atop, out, xor [SVG-COMPOSITING]. Additionally, a component-wise arithmetic operation (with the result clamped between [0..1]) can be applied.
The arithmetic operation is useful for combining the output from the ‘feDiffuseLighting’ and ‘feSpecularLighting’ filters with texture data. It is also useful for implementing dissolve. If the arithmetic operation is chosen, each result pixel is computed using the following formula:
result = k1*i1*i2 + k2*i1 + k3*i2 + k4
where:
i1 and i2 indicate the corresponding pixel channel values of the input images, which map to ‘in’ and ‘in2’ respectively
k1, k2, k3 and k4 indicate the values of the attributes with the same name (a short example follows these definitions)
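For example, an even 50/50 dissolve between the source graphic and a flooded color (hypothetical id and values) sets k2 and k3 to 0.5 and leaves k1 and k4 at 0:
<filter id="halfDissolve">
  <feFlood flood-color="orange" result="wash"/>
  <!-- result = 0*i1*i2 + 0.5*i1 + 0.5*i2 + 0: an even mix of the two inputs. -->
  <feComposite in="SourceGraphic" in2="wash" operator="arithmetic"
               k1="0" k2="0.5" k3="0.5" k4="0"/>
</filter>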
For this filter primitive, the extent of the resulting image might grow as described in the section that describes the filter primitive subregion.
‘feComposite’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
operator = "over | in | out | atop | xor | arithmetic"
The compositing operation that is to be performed. All of the ‘operator’ types except arithmetic match the corresponding operation as described in [PORTERDUFF]. The arithmetic operator is described above. If attribute ‘operator’ is not specified, then the effect is as if a value of over were specified.
Animatable: yes.
k1 = "<number>"
Only applicable if
operator="arithmetic".
If the attribute is not specified, the effect is as if a value of
0were specified.
Animatable: yes.
k2 = "<number>"
Only applicable if
operator="arithmetic".
If the attribute is not specified, the effect is as if a value of
0were specified.
Animatable: yes.
k3 = "<number>"
Only applicable if
operator="arithmetic".
If the attribute is not specified, the effect is as if a value of
0were specified.
Animatable: yes.
k4 = "<number>"
Only applicable if
operator="arithmetic".
If the attribute is not specified, the effect is as if a value of
0were specified.
Animatable: yes.
in2 = "(see ‘in’ attribute)"
The second input image to the compositing operation. This attribute can take on the same values as the
‘in’attribute.
Animatable: yes.
Example feComposite shows examples of the six types of feComposite operations. It also shows two different techniques for using BackgroundImage as part of the compositing operation.
The first two rows render bluish triangles into the background. A filter is applied which composites reddish triangles into the bluish triangles using one of the compositing operations. The result from compositing is drawn onto an opaque white temporary surface, and then that result is written to the canvas. (The opaque white temporary surface obliterates the original bluish triangle.)
The last two rows apply the same compositing operations of reddish triangles into bluish triangles. However, the compositing result is directly blended into the canvas (the opaque white temporary surface technique is not used). In some cases, the results are different than when a temporary opaque white surface is used. The original bluish triangle from the background shines through wherever the compositing operation results in completely transparent pixel. In other cases, the result from compositing is blended into the bluish triangle, resulting in a different final color value.
15.13 Filter primitive ‘feConvolveMatrix’
feConvolveMatrix applies a matrix convolution filter effect. A convolution combines pixels in the input image with neighboring pixels to produce a resulting image. A wide variety of imaging operations can be achieved through convolutions, including blurring, edge detection, sharpening, embossing and beveling.
A matrix convolution is based on an n-by-m matrix (the convolution kernel) which describes how a given pixel value in the input image is combined with its neighboring pixel values to produce a resulting pixel value. Each result pixel is determined by applying the kernel matrix to the corresponding source pixel and its neighboring pixels. The basic convolution formula which is applied to each color value for a given pixel is:
COLORX,Y = ( SUM I=0 to [orderY-1] { SUM J=0 to [orderX-1] { SOURCE X-targetX+J, Y-targetY+I * kernelMatrix orderX-J-1, orderY-I-1 } } ) / divisor + bias
where "orderX" and "orderY" represent the X and Y values for the ‘order’ attribute, "targetX" represents the value of the ‘targetX’ attribute, "targetY" represents the value of the ‘targetY’ attribute, "kernelMatrix" represents the value of the ‘kernelMatrix’ attribute, "divisor" represents the value of the ‘divisor’ attribute, and "bias" represents the value of the ‘bias’ attribute.
Note in the above formulas that the values in the kernel matrix are applied such that the kernel matrix is rotated 180 degrees relative to the source and destination images in order to match convolution theory as described in many computer graphics textbooks.
To illustrate, suppose you have an input image which is 5 pixels by 5 pixels, whose color values for one of the color channels are as follows:
and you define a 3-by-3 convolution kernel as follows:
1 2 3
4 5 6
7 8 9
Let's focus on the color value at the second row and second column of the image (source pixel value is 120). Assuming the simplest case (where the input image's pixel grid aligns perfectly with the kernel's pixel grid) and assuming default values for attributes ‘divisor’, ‘targetX’ and ‘targetY’, then the resulting color value will be:
Because they operate on pixels, matrix convolutions are inherently resolution-dependent. To make ‘feConvolveMatrix’ produce resolution-independent results, an explicit value should be provided for either the ‘filterRes’ attribute on the ‘filter’ element and/or attribute ‘kernelUnitLength’.
‘kernelUnitLength’, in combination with the other attributes, defines an implicit pixel grid in the filter effects coordinate system (i.e., the coordinate system established by the ‘primitiveUnits’ attribute). If the pixel grid established by ‘kernelUnitLength’ is not scaled to match the pixel grid established by attribute ‘filterRes’ (implicitly or explicitly), then the input image will be temporarily rescaled to match its pixels with ‘kernelUnitLength’. The convolution happens on the resampled image. After applying the convolution, the image is resampled back to the original resolution.
When the image must be resampled to match the coordinate system defined by ‘kernelUnitLength’ prior to convolution, or resampled to match the device coordinate system after convolution, it is recommended that high quality viewers make use of appropriate interpolation techniques, for example bilinear or bicubic. Depending on the speed of the available interpolants, this choice may be affected by the ‘image-rendering’ property setting. Note that implementations might choose approaches that minimize or eliminate resampling when not necessary to produce proper results, such as when the document is zoomed out such that ‘kernelUnitLength’ is considerably smaller than a device pixel.
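As an illustration (hypothetical id and kernel values), a 3-by-3 embossing kernel could be specified as:
<filter id="emboss">
  <!-- Kernel entries sum to 1, so the default divisor of 1 applies. -->
  <feConvolveMatrix in="SourceGraphic" order="3" edgeMode="duplicate"
                    kernelMatrix="-2 -1  0
                                  -1  1  1
                                   0  1  2"/>
</filter>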
‘feConvolveMatrix’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
order = "<number-optional-number>"
Indicates the number of cells in each dimension for ‘kernelMatrix’. The values provided must be <integer>s greater than zero. The first number, <orderX>, indicates the number of columns in the matrix. The second number, <orderY>, indicates the number of rows in the matrix. If <orderY> is not provided, it defaults to <orderX>.
A typical value is order="3". It is recommended that only small values (e.g., 3) be used; higher values may result in very high CPU overhead and usually do not produce results that justify the impact on performance.
If the attribute is not specified, the effect is as if a value of 3 were specified.
Animatable: yes.
kernelMatrix = "<list of numbers>"
The list of <number>s that make up the kernel matrix for the convolution. Values are separated by space characters and/or a comma. The number of entries in the list must equal <orderX> times <orderY>.
Animatable: yes.
divisor = "<number>"
After applying the ‘kernelMatrix’ to the input image to yield a number, that number is divided by ‘divisor’ to yield the final destination color value. A divisor that is the sum of all the matrix values tends to have an evening effect on the overall color intensity of the result. It is an error to specify a divisor of zero. The default value is the sum of all values in kernelMatrix, with the exception that if the sum is zero, then the divisor is set to 1.
Animatable: yes.
bias = "<number>"
After applying the ‘kernelMatrix’ to the input image to yield a number and applying the ‘divisor’, the ‘bias’ attribute is added to each component. One application of ‘bias’ is when it is desirable to have a .5 gray value be the zero response of the filter. The bias property shifts the range of the filter. This allows representation of values that would otherwise be clamped to 0 or 1. If ‘bias’ is not specified, then the effect is as if a value of 0 were specified.
Animatable: yes.
targetX = "<integer>"
Determines the positioning in X of the convolution matrix relative to a given target pixel in the input image. The leftmost column of the matrix is column number zero. The value must be such that: 0 <= targetX < orderX. By default, the convolution matrix is centered in X over each pixel of the input image (i.e., targetX = floor ( orderX / 2 )).
Animatable: yes.
targetY = "<integer>"
Determines the positioning in Y of the convolution matrix relative to a given target pixel in the input image. The topmost row of the matrix is row number zero. The value must be such that: 0 <= targetY < orderY. By default, the convolution matrix is centered in Y over each pixel of the input image (i.e., targetY = floor ( orderY / 2 )).
Animatable: yes.
edgeMode= "
duplicate | wrap | none"
Determines how to extend the input image as necessary with color values so that the matrix operations can be applied when the kernel is positioned at or near the edge of the input image.
"duplicate" indicates that the input image is extended along each of its borders as necessary by duplicating the color values at the given edge of the input image.
"wrap" indicates that the input image is extended by taking the color values from the opposite edge of the image.
"none" indicates that the input image is extended with pixel values of zero for R, G, B and A.
If attribute ‘edgeMode’ is not specified, then the effect is as if a value of duplicate were specified.
Animatable: yes.
kernelUnitLength = "<number-optional-number>"
The first number is the <dx> value. The second number is the <dy> value. If the <dy> value is not specified, it defaults to the same value as <dx>. Indicates the intended distance in current filter units (i.e., units as determined by the value of attribute ‘primitiveUnits’) between successive columns and rows, respectively, in the ‘kernelMatrix’. By specifying value(s) for ‘kernelUnitLength’, the kernel becomes defined in a scalable, abstract coordinate system. If ‘kernelUnitLength’ is not specified, the default value is one pixel in the offscreen bitmap, which is a pixel-based coordinate system, and thus potentially not scalable. For some level of consistency across display media and user agents, it is necessary that a value be provided for at least one of ‘filterRes’ and ‘kernelUnitLength’. In some implementations, the most consistent results and the fastest performance will be achieved if the pixel grid of the temporary offscreen images aligns with the pixel grid of the kernel.
A negative or zero value is an error (see Error processing).
Animatable: yes.
preserveAlpha= "
false | true"
A value of false indicates that the convolution will apply to all channels, including the alpha channel. In this case the ALPHAX,Y of the convolution formula for a given pixel is:
ALPHAX,Y = ( SUM I=0 to [orderY-1] { SUM J=0 to [orderX-1] { SOURCE X-targetX+J, Y-targetY+I * kernelMatrix orderX-J-1, orderY-I-1 } } ) / divisor + bias
A value of true indicates that the convolution will only apply to the color channels. In this case, the filter will temporarily unpremultiply the color component values, apply the kernel, and then re-premultiply at the end. In this case the ALPHAX,Y of the convolution formula for a given pixel is:
ALPHAX,Y = SOURCEX,Y
If ‘preserveAlpha’ is not specified, then the effect is as if a value of false were specified.
Animatable: yes.
15.14 Filter primitive ‘feDiffuseLighting’
This filter primitive lights an image using the alpha channel as a bump map. The resulting image is an RGBA opaque image based on the light color with alpha = 1.0 everywhere. The lighting calculation follows the standard diffuse component of the Phong lighting model. The resulting image depends on the light color, light position and surface geometry of the input bump map.
The light map produced by this filter primitive can be combined with a texture image using the multiply term of the arithmetic ‘feComposite’ compositing method. Multiple light sources can be simulated by adding several of these light maps together before applying the result to the texture image.
The formulas below make use of 3x3 filters. Because they operate on pixels, such filters are inherently resolution-dependent. To make ‘feDiffuseLighting’ produce resolution-independent results, an explicit value should be provided for either the ‘filterRes’ attribute on the ‘filter’ element and/or attribute ‘kernelUnitLength’.
‘kernelUnitLength’, in combination with the other attributes, defines an implicit pixel grid in the filter effects coordinate system (i.e., the coordinate system established by the ‘primitiveUnits’ attribute). If the pixel grid established by ‘kernelUnitLength’ is not scaled to match the pixel grid established by attribute ‘filterRes’ (implicitly or explicitly), then the input image will be temporarily rescaled to match its pixels with ‘kernelUnitLength’. The 3x3 filters are applied to the resampled image. After applying the filter, the image is resampled back to its original resolution.
When the image must be resampled, it is recommended that high quality viewers make use of appropriate interpolation techniques, for example bilinear or bicubic. Depending on the speed of the available interpolants, this choice may be affected by the ‘image-rendering’ property setting. Note that implementations might choose approaches that minimize or eliminate resampling when not necessary to produce proper results, such as when the document is zoomed out such that ‘kernelUnitLength’ is considerably smaller than a device pixel.
For the formulas that follow, the Norm(Ax,Ay,Az) function is defined as:
Norm(Ax,Ay,Az) = sqrt(Ax^2+Ay^2+Az^2)
The resulting RGBA image is computed as follows:
Dr = kd * N.L * Lr
Dg = kd * N.L * Lg
Db = kd * N.L * Lb
Da = 1.0
where
kd = diffuse lighting constant
N = surface normal unit vector, a function of x and y
L = unit vector pointing from surface to light, a function of x and y in the point and spot light cases
Lr, Lg, Lb = RGB components of light, a function of x and y in the spot light case
N is a function of x and y and depends on the surface gradient as follows:
The surface described by the input alpha image I(x,y) is:
Z (x,y) = surfaceScale * I(x,y)
Surface normal is calculated using the Sobel gradient 3x3 filter. Different filter kernels are used depending on whether the given pixel is on the interior or an edge. For each case, the formula is:
In these formulas, thedxanddyvalues (e.g.,I(x-dx,y-dy)), represent deltas relative to a given(x,y)position for the purpose of estimating the slope of the surface at that point. These deltas are determined by the value (explicit or implicit) of attribute‘kernelUnitLength’.
If L.S is positive, no light is present (Lr = Lg = Lb = 0). If ‘limitingConeAngle’ is specified, -L.S < cos(limitingConeAngle) also indicates that no light is present.
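As an informative illustration only, the per-pixel diffuse term can be sketched as follows. The Vec3 type, the dot3() helper, and the pre-computed surface normal N and unit light vector L are assumptions; clamping N.L at zero is an implementation choice here that simply avoids negative color values when the light points away from the surface.
/* kd is 'diffuseConstant'; (Lr, Lg, Lb) is the resolved 'lighting-color',
   possibly attenuated by a spot light. */
typedef struct { double x, y, z; } Vec3;

static double dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

void diffuse_pixel(Vec3 N, Vec3 L, double kd,
                   double Lr, double Lg, double Lb, double rgba[4])
{
    double NdotL = dot3(N, L);
    if (NdotL < 0.0) NdotL = 0.0;
    rgba[0] = kd * NdotL * Lr;   /* Dr = kd * N.L * Lr */
    rgba[1] = kd * NdotL * Lg;   /* Dg = kd * N.L * Lg */
    rgba[2] = kd * NdotL * Lb;   /* Db = kd * N.L * Lb */
    rgba[3] = 1.0;               /* Da = 1.0 everywhere */
}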
‘feDiffuseLighting’
Categories:
Filter primitive element
Content model:
Any number of
descriptive elementsand exactly one
light source element, in any order.
surfaceScale = "<number>"
height of surface when Ain = 1.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
diffuseConstant = "<number>"
kd in Phong lighting model. In SVG, this can be any non-negative number.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
kernelUnitLength = "<number-optional-number>"
The first number is the <dx> value. The second number is the <dy> value. If the <dy> value is not specified, it defaults to the same value as <dx>. Indicates the intended distance in current filter units (i.e., units as determined by the value of attribute
‘primitiveUnits’) for
dxand
dy, respectively, in the surface normal calculation formulas. By specifying value(s) for
‘kernelUnitLength’, the kernel becomes defined in a scalable, abstract coordinate system. If
‘kernelUnitLength’is not specified, the
dxand
dyvalues should represent very small deltas relative to a given
(x,y)position, which might be implemented in some cases as one pixel in the intermediate image offscreen bitmap, which is a pixel-based coordinate system, and thus potentially not scalable. For some level of consistency across display media and user agents, it is necessary that a value be provided for at least one of
‘filterRes’and
‘kernelUnitLength’. Discussion of intermediate images appears in the Introduction and in the description of attribute
‘filterRes’.
A negative or zero value is an error (see Error processing).
Animatable: yes.
The light source is defined by one of the child elements‘feDistantLight’,‘fePointLight’or‘feSpotLight’. The light color is specified by property‘lighting-color’.
15.15 Filter primitive‘feDisplacementMap’
This filter primitive uses the pixel values from the image from‘in2’to spatially displace the image from‘in’. This is the transformation to be performed:
P'(x,y) <- P( x + scale * (XC(x,y) - .5), y + scale * (YC(x,y) - .5))
where P(x,y) is the input image,‘in’, and P'(x,y) is the destination. XC(x,y) and YC(x,y) are the component values of the channel designated by the xChannelSelector and yChannelSelector. For example, to use the R component of‘in2’to control displacement in x and the G component of‘in2’to control displacement in y, set xChannelSelector to "R" and yChannelSelector to "G".
The displacement map defines the inverse of the mapping performed.
The input image‘in’is to remain premultiplied for this filter primitive. The calculations using the pixel values from‘in2’are performed using non-premultiplied color values. If the image from‘in2’consists of premultiplied color values, those values are automatically converted into non-premultiplied color values before performing this operation.
This filter can have arbitrary non-localized effect on the input which might require substantial buffering in the processing pipeline. However with this formulation, any intermediate buffering needs can be determined byscalewhich represents the maximum range of displacement in either x or y.
When applying this filter, the source pixel location will often lie between several source pixels. In this case it is recommended that high quality viewers apply an interpolent on the surrounding pixels, for example bilinear or bicubic, rather than simply selecting the nearest source pixel. Depending on the speed of the available interpolents, this choice may be affected by the‘image-rendering’property setting.
The‘color-interpolation-filters’property only applies to the‘in2’source image and does not apply to the‘in’source image. The‘in’source image must remain in its current color space.
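The transformation above can be illustrated with a short informative sketch. The Pixel type and the channel_value() and sample() callbacks are hypothetical helpers standing in for a non-premultiplied read of the selected channel of ‘in2’ and an interpolated premultiplied read of ‘in’; pixels displaced from outside the input are taken to be transparent black.
typedef struct { double r, g, b, a; } Pixel;

Pixel displace_pixel(int x, int y, double scale,
                     double (*channel_value)(int x, int y, char sel),
                     Pixel (*sample)(double x, double y),
                     char xChannelSelector, char yChannelSelector)
{
    double xc = channel_value(x, y, xChannelSelector);  /* XC(x,y), in [0,1] */
    double yc = channel_value(x, y, yChannelSelector);  /* YC(x,y), in [0,1] */
    double sx = x + scale * (xc - 0.5);
    double sy = y + scale * (yc - 0.5);
    return sample(sx, sy);                              /* P'(x,y) <- P(sx, sy) */
}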
‘feDisplacementMap’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
scale = "<number>"
Displacement scale factor. The amount is expressed in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element.
When the value of this attribute is
0, this operation has no effect on the source image.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
xChannelSelector= "
R | G | B | A"
Indicates which channel from
‘in2’to use to displace the pixels in
‘in’along the x-axis. If attribute
‘xChannelSelector’is not specified, then the effect is as if a value of
Awere specified.
Animatable: yes.
yChannelSelector= "
R | G | B | A"
Indicates which channel from
‘in2’to use to displace the pixels in
‘in’along the y-axis. If attribute
‘yChannelSelector’is not specified, then the effect is as if a value of
Awere specified.
Animatable: yes.
in2= "
(see‘in’attribute)"
The second input image, which is used to displace the pixels in the image from attribute
‘in’. This attribute can take on the same values as the
‘in’attribute.
Animatable: yes.
15.16 Filter primitive‘feFlood’
This filter primitive creates a rectangle filled with the color and opacity values from properties‘flood-color’and‘flood-opacity’. The rectangle is as large as the filter primitive subregion established by the‘x’,‘y’,‘width’and‘height’attributes on the‘feFlood’element.
‘feFlood’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
The‘flood-color’property indicates what color to use to flood the current filter primitive subregion. The keywordcurrentColorand ICC colors can be specified in the same manner as within a specification for the‘fill’and‘stroke’properties.
‘flood-color’
Value:
currentColor | <color> [<icccolor>] | inherit
Initial:
black
Applies to:
‘feFlood’elements
Inherited:
no
Percentages:
N/A
Media:
visual
Animatable:
yes
The‘flood-opacity’property defines the opacity value to use across the entire filter primitive subregion.
‘flood-opacity’
Value:
<opacity-value> | inherit
Initial:
1
Applies to:
‘feFlood’elements
Inherited:
no
Percentages:
N/A
Media:
visual
Animatable:
yes
15.17 Filter primitive‘feGaussianBlur’
This filter primitive performs a Gaussian blur on the input image.
The Gaussian blur kernel is an approximation of the normalized convolution:
G(x,y) = H(x)I(y)
where
H(x) = exp(-x^2 / (2s^2)) / sqrt(2 * pi * s^2)
and
I(y) = exp(-y^2 / (2t^2)) / sqrt(2 * pi * t^2)
with 's' being the standard deviation in the x direction and 't' being the standard deviation in the y direction, as specified by‘stdDeviation’.
The value of‘stdDeviation’can be either one or two numbers. If two numbers are provided, the first number represents a standard deviation value along the x-axis of the current coordinate system and the second value represents a standard deviation in Y. If one number is provided, then that value is used for both X and Y.
Even if only one value is provided for‘stdDeviation’, this can be implemented as a separable convolution.
For larger values of 's' (s >= 2.0), an approximation can be used: Three successive box-blurs build a piece-wise quadratic convolution kernel, which approximates the Gaussian kernel to within roughly 3%.
let d = floor(s * 3*sqrt(2*pi)/4 + 0.5)
... if d is odd, use three box-blurs of size 'd', centered on the output pixel.
... if d is even, two box-blurs of size 'd' (the first one centered on the pixel boundary between the output pixel and the one to the left, the second one centered on the pixel boundary between the output pixel and the one to the right) and one box blur of size 'd+1' centered on the output pixel.
Note: the approximation formula also applies correspondingly to 't'.
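Purely as an informative sketch of this approximation, the following C fragment chooses the three box-blur passes for one axis. box_blur() is a hypothetical 1-D box filter whose last argument shifts the box centre by the stated fraction of a pixel; it is declared here only to make the sketch self-contained.
#include <math.h>

/* Assumed helper: in-place 1-D box filter of the given size, with its centre
   shifted by center_offset pixels. */
void box_blur(double *line, int length, int size, double center_offset);

void approximate_gaussian_1d(double *line, int length, double s)
{
    const double pi = 3.14159265358979323846;
    int d = (int)floor(s * 3.0 * sqrt(2.0 * pi) / 4.0 + 0.5);
    if (d % 2 == 1) {
        /* odd d: three box-blurs of size d, centred on the output pixel */
        box_blur(line, length, d, 0.0);
        box_blur(line, length, d, 0.0);
        box_blur(line, length, d, 0.0);
    } else {
        /* even d: two box-blurs of size d centred on the pixel boundaries to the
           left and right, plus one box-blur of size d+1 centred on the pixel */
        box_blur(line, length, d, -0.5);
        box_blur(line, length, d, +0.5);
        box_blur(line, length, d + 1, 0.0);
    }
}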
Frequently this operation will take place on alpha-only images, such as that produced by the built-in input, SourceAlpha. The implementation may notice this and optimize the single channel case. If the input has infinite extent and is constant (e.g., FillPaint where the fill is a solid color), this operation has no effect. If the input has infinite extent and the filter result is the input to an‘feTile’, the filter is evaluated with periodic boundary conditions.
‘feGaussianBlur’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
stdDeviation = "<number-optional-number>"
The standard deviation for the blur operation. If two <number>s are provided, the first number represents a standard deviation value along the x-axis of the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element. The second value represents a standard deviation in Y. If one number is provided, then that value is used for both X and Y.
A negative value is an error (see Error processing). A value of zero disables the effect of the given filter primitive (i.e., the result is the filter input image). If
‘stdDeviation’is
0in only one of X or Y, then the effect is that the blur is only applied in the direction that has a non-zero value.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
The example at the start of this chapter makes use of the‘feGaussianBlur’filter primitive to create a drop shadow effect.
15.18 Filter primitive‘feImage’
This filter primitive refers to a graphic external to this filter element, which is loaded or rendered into an RGBA raster and becomes the result of the filter primitive.
This filter primitive can refer to an external image or can be a reference to another piece of SVG. It produces an image similar to the built-in image sourceSourceGraphicexcept that the graphic comes from an external source.
If the‘xlink:href’references a stand-alone image resource such as a JPEG, PNG or SVG file, then the image resource is rendered according to the behavior of the‘image’element; otherwise, the referenced resource is rendered according to the behavior of the‘use’element. In either case, the current user coordinate system depends on the value of attribute‘primitiveUnits’on the‘filter’element. The processing of the‘preserveAspectRatio’attribute on the‘feImage’element is identical to that of the‘image’element.
When the referenced image must be resampled to match the device coordinate system, it is recommended that high quality viewers make use of appropriate interpolation techniques, for example bilinear or bicubic. Depending on the speed of the available interpolents, this choice may be affected by the‘image-rendering’property setting.
‘feImage’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
If attribute‘preserveAspectRatio’is not specified, then the effect is as if a value ofxMidYMid meetwere specified.
Animatable: yes.
Example feImageillustrates how images are placed relative to an object. From left to right:
The default placement of an image. Note that the image is centered in thefilter regionand has the maximum size that will fit in the region consistent with preserving the aspect ratio.
The image stretched to fit the bounding box of an object.
The image placed using user coordinates. Note that the image is first centered in a box the size of thefilter regionand has the maximum size that will fit in the box consistent with preserving the aspect ratio. This box is then shifted by the given 'x' and 'y' values relative to the viewport the object is in.
Example feImage
View this example as SVG (SVG-enabled browsers only)
15.19 Filter primitive‘feMerge’
This filter primitive composites input image layers on top of each other using theoveroperator withInput1(corresponding to the first‘feMergeNode’child element) on the bottom and the last specified input,InputN(corresponding to the last‘feMergeNode’child element), on top.
Many effects produce a number of intermediate layers in order to create the final output image. This filter allows us to collapse those into a single image. Although this could be done by using n-1 Composite-filters, it is more convenient to have this common operation available in this form, and offers the implementation some additional flexibility.
Each‘feMerge’element can have any number of‘feMergeNode’subelements, each of which has an‘in’attribute.
The canonical implementation of feMerge is to render the entire effect into one RGBA layer, and then render the resulting layer on the output device. In certain cases (in particular if the output device itself is a continuous tone device), and since merging is associative, it might be a sufficient approximation to evaluate the effect one layer at a time and render each layer individually onto the output device bottom to top.
If the topmost image input isSourceGraphicand this‘feMerge’is the last filter primitive in the filter, the implementation is encouraged to render the layers up to that point, and then render theSourceGraphicdirectly from its vector description on top.
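The canonical single-buffer approach described above might look like the following informative sketch, which assumes the inputs have already been rendered into premultiplied RGBA arrays of doubles, with layer 0 corresponding to the first ‘feMergeNode’.
void fe_merge(double *result, double **layers, int n, int w, int h)
{
    for (int p = 0; p < w * h * 4; p++)
        result[p] = 0.0;                       /* start from transparent black */
    for (int layer = 0; layer < n; layer++) {  /* later feMergeNodes composite on top */
        const double *src = layers[layer];
        for (int p = 0; p < w * h; p++) {
            double sa = src[4*p + 3];
            for (int c = 0; c < 4; c++)        /* premultiplied 'over': d = s + d*(1 - sa) */
                result[4*p + c] = src[4*p + c] + result[4*p + c] * (1.0 - sa);
        }
    }
}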
‘feMerge’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
The example at the start of this chapter makes use of the‘feMerge’filter primitive to composite two intermediate filter results together.
15.20 Filter primitive‘feMorphology’
This filter primitive performs "fattening" or "thinning" of artwork. It is particularly useful for fattening or thinning an alpha channel.
The dilation (or erosion) kernel is a rectangle with a width of 2*x-radius and a height of 2*y-radius. In dilation, the output pixel is the individual component-wise maximum of the corresponding R,G,B,A values in the input image's kernel rectangle. In erosion, the output pixel is the individual component-wise minimum of the corresponding R,G,B,A values in the input image's kernel rectangle.
Frequently this operation will take place on alpha-only images, such as that produced by the built-in input,SourceAlpha. In that case, the implementation might want to optimize the single channel case.
If the input has infinite extent and is constant (e.g., FillPaint where the fill is a solid color), this operation has no effect. If the input has infinite extent and the filter result is the input to an‘feTile’, the filter is evaluated with periodic boundary conditions.
Because‘feMorphology’operates on premultiplied color values, it will always result in color values less than or equal to the alpha channel.
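An informative sketch of the per-pixel operation follows. It treats the kernel rectangle as the (2*rx+1) by (2*ry+1) neighbourhood around the output pixel, which is one straightforward reading of the rectangle described above, and treats pixels outside the image as transparent black; rx and ry are the radii after resolving ‘radius’ against ‘primitiveUnits’.
void morph_pixel(const double *src, double *dst, int w, int h,
                 int x, int y, int rx, int ry, int dilate)
{
    double out[4];
    for (int c = 0; c < 4; c++)
        out[c] = dilate ? 0.0 : 1.0;
    for (int j = y - ry; j <= y + ry; j++) {
        for (int i = x - rx; i <= x + rx; i++) {
            double p[4] = {0, 0, 0, 0};       /* transparent black outside the image */
            if (i >= 0 && i < w && j >= 0 && j < h)
                for (int c = 0; c < 4; c++) p[c] = src[4*(j*w + i) + c];
            for (int c = 0; c < 4; c++)       /* component-wise max (dilate) or min (erode) */
                out[c] = dilate ? (p[c] > out[c] ? p[c] : out[c])
                                : (p[c] < out[c] ? p[c] : out[c]);
        }
    }
    for (int c = 0; c < 4; c++)
        dst[4*(y*w + x) + c] = out[c];
}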
‘feMorphology’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
operator = "erode | dilate"
A keyword indicating whether to erode (i.e., thin) or dilate (fatten) the source graphic. If attribute
‘operator’is not specified, then the effect is as if a value of
erodewere specified.
Animatable: yes.
radius = "<number-optional-number>"
The radius (or radii) for the operation. If two <number>s are provided, the first number represents an x-radius and the second value represents a y-radius. If one number is provided, then that value is used for both X and Y. The values are in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element.
A negative value is an error (see Error processing). A value of zero disables the effect of the given filter primitive (i.e., the result is a transparent black image).
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
Example feMorphologyshows examples of the four types of feMorphology operations.
Example feMorphology
View this example as SVG (SVG-enabled browsers only)
15.21 Filter primitive‘feOffset’
This filter primitive offsets the input image relative to its current position in the image space by the specified vector.
This is important for effects like drop shadows.
When applying this filter, the destination location may be offset by a fraction of a pixel in device space. In this case a high quality viewer should make use of appropriate interpolation techniques, for example bilinear or bicubic. This is especially recommended for dynamic viewers where this interpolation provides visually smoother movement of images. For static viewers this is less of a concern. Close attention should be paid to the‘image-rendering’property setting to determine the author's intent.
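In other words, each result pixel is simply the input sampled at (x - dx, y - dy). The informative sketch below assumes a hypothetical sample_bilinear() helper that interpolates between input pixels and returns transparent black outside the input.
typedef struct { double r, g, b, a; } Pixel;

Pixel offset_pixel(Pixel (*sample_bilinear)(double x, double y),
                   int x, int y, double dx, double dy)
{
    /* dx and dy are the resolved offsets in device pixels; a fractional
       offset is handled by the interpolating read. */
    return sample_bilinear(x - dx, y - dy);
}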
‘feOffset’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
dx = "<number>"
The amount to offset the input graphic along the x-axis. The offset amount is expressed in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
dy = "<number>"
The amount to offset the input graphic along the y-axis. The offset amount is expressed in the coordinate system established by attribute
‘primitiveUnits’on the
‘filter’element.
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
The example at the start of this chapter makes use of the‘feOffset’filter primitive to offset the drop shadow from the original source graphic.
15.22 Filter primitive‘feSpecularLighting’
This filter primitive lights a source graphic using the alpha channel as a bump map. The resulting image is an RGBA image based on the light color. The lighting calculation follows the standard specular component of the Phong lighting model. The resulting image depends on the light color, light position and surface geometry of the input bump map. The result of the lighting calculation is added. The filter primitive assumes that the viewer is at infinity in the z direction (i.e., the unit vector in the eye direction is (0,0,1) everywhere).
This filter primitive produces an image which contains the specular reflection part of the lighting calculation. Such a map is intended to be combined with a texture using theaddterm of thearithmetic‘feComposite’method. Multiple light sources can be simulated by adding several of these light maps before applying it to the texture image.
The resulting RGBA image is computed as follows:
Sr = ks * pow(N.H, specularExponent) * Lr
Sg = ks * pow(N.H, specularExponent) * Lg
Sb = ks * pow(N.H, specularExponent) * Lb
Sa = max(Sr, Sg, Sb)
where
ks = specular lighting constant
N = surface normal unit vector, a function of x and y
H = "halfway" unit vector between eye unit vector and light unit vector
Lr, Lg, Lb = RGB components of light
See‘feDiffuseLighting’for definition of N and (Lr, Lg, Lb).
The definition of H reflects our assumption of the constant eye vector E = (0,0,1):
H = (L + E) / Norm(L+E)
where L is the light unit vector.
Unlike the‘feDiffuseLighting’, the‘feSpecularLighting’filter produces a non-opaque image. This is due to the fact that the specular result (Sr,Sg,Sb,Sa) is meant to be added to the textured image. The alpha channel of the result is the max of the color components, so that where the specular light is zero, no additional coverage is added to the image and a fully white highlight will add opacity.
The‘feDiffuseLighting’and‘feSpecularLighting’filters will often be applied together. An implementation may detect this and calculate both maps in one pass, instead of two.
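An informative per-pixel sketch of the formulas above follows, assuming N and L have already been computed as for ‘feDiffuseLighting’ and that the eye vector is fixed at E = (0,0,1). The clamp of N.H at zero is an implementation choice here, made only to keep pow() well defined.
#include <math.h>

typedef struct { double x, y, z; } Vec3;

void specular_pixel(Vec3 N, Vec3 L, double ks, double specularExponent,
                    double Lr, double Lg, double Lb, double rgba[4])
{
    Vec3 H = { L.x, L.y, L.z + 1.0 };               /* H = (L + E) / Norm(L + E) */
    double n = sqrt(H.x*H.x + H.y*H.y + H.z*H.z);
    H.x /= n; H.y /= n; H.z /= n;
    double NdotH = N.x*H.x + N.y*H.y + N.z*H.z;
    if (NdotH < 0.0) NdotH = 0.0;
    double term = ks * pow(NdotH, specularExponent);
    rgba[0] = term * Lr;                            /* Sr */
    rgba[1] = term * Lg;                            /* Sg */
    rgba[2] = term * Lb;                            /* Sb */
    rgba[3] = rgba[0];                              /* Sa = max(Sr, Sg, Sb) */
    if (rgba[1] > rgba[3]) rgba[3] = rgba[1];
    if (rgba[2] > rgba[3]) rgba[3] = rgba[2];
}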
‘feSpecularLighting’
Categories:
Filter primitive element
Content model:
Any number of
descriptive elementsand exactly one
light source element, in any order.
surfaceScale = "<number>"
height of surface when Ain = 1.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
specularConstant = "<number>"
ks in Phong lighting model. In SVG, this can be any non-negative number.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
specularExponent = "<number>"
Exponent for specular term, larger is more "shiny". Range 1.0 to 128.0.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
kernelUnitLength = "<number-optional-number>"
The first number is the <dx> value. The second number is the <dy> value. If the <dy> value is not specified, it defaults to the same value as <dx>. Indicates the intended distance in current filter units (i.e., units as determined by the value of attribute
‘primitiveUnits’) for
dxand
dy, respectively, in the surface normal calculation formulas. By specifying value(s) for
‘kernelUnitLength’, the kernel becomes defined in a scalable, abstract coordinate system. If
‘kernelUnitLength’is not specified, the
dxand
dyvalues should represent very small deltas relative to a given
(x,y)position, which might be implemented in some cases as one pixel in the intermediate image offscreen bitmap, which is a pixel-based coordinate system, and thus potentially not scalable. For some level of consistency across display media and user agents, it is necessary that a value be provided for at least one of
‘filterRes’and
‘kernelUnitLength’. Discussion of intermediate images appears in the Introduction and in the description of attribute
‘filterRes’.
A negative or zero value is an error (see Error processing).
Animatable: yes.
The light source is defined by one of the child elements‘feDistantLight’,‘fePointLight’or‘feSpotLight’. The light color is specified by property‘lighting-color’.
The example at the start of this chapter makes use of the‘feSpecularLighting’filter primitive to achieve a highly reflective, 3D glowing effect.
15.23 Filter primitive‘feTile’
This filter primitive fills a target rectangle with a repeated, tiled pattern of an input image. The target rectangle is as large as the filter primitive subregion established by the‘x’,‘y’,‘width’and‘height’attributes on the‘feTile’element.
Typically, the input image has been defined with its own filter primitive subregion in order to define a reference tile.‘feTile’replicates the reference tile in both X and Y to completely fill the target rectangle. The top/left corner of each given tile is at location(x+i*width,y+j*height), where(x,y)represents the top/left of the input image's filter primitive subregion,widthandheightrepresent the width and height of the input image's filter primitive subregion, andiandjcan be any integer value. In most cases, the input image will have a smallerfilter primitive subregionthan the‘feTile’in order to achieve a repeated pattern effect.
Implementers must take appropriate measures in constructing the tiled image to avoid artifacts between tiles, particularly in situations where the user to device transform includes shear and/or rotation. Unless care is taken, interpolation can lead to edge pixels in the tile having opacity values lower or higher than expected due to the interaction of painting adjacent tiles which each have partial overlap with particular pixels.
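An informative sketch of the coordinate mapping: for each output pixel the implementation only has to find the corresponding point inside the reference tile, which is what the hypothetical wrap() helper below does (filtering across tile seams, discussed above, is ignored here).
#include <math.h>

/* (tileX, tileY, tileW, tileH) is the filter primitive subregion of the
   input image; wrap() reduces a coordinate into that subregion so the tile
   repeats in both directions. */
static double wrap(double v, double origin, double size)
{
    return origin + (v - origin) - size * floor((v - origin) / size);
}

void tile_source_point(double x, double y,
                       double tileX, double tileY, double tileW, double tileH,
                       double *srcX, double *srcY)
{
    *srcX = wrap(x, tileX, tileW);   /* equivalent to choosing i, j such that   */
    *srcY = wrap(y, tileY, tileH);   /* (x, y) falls inside tile (i, j)          */
}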
‘feTile’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
15.24 Filter primitive‘feTurbulence’
This filter primitive creates an image using the Perlin turbulence function. It allows the synthesis of artificial textures like clouds or marble. For a detailed description of the Perlin turbulence function, see "Texturing and Modeling", Ebert et al, AP Professional, 1994. The resulting image will fill the entire filter primitive subregion for this filter primitive.
It is possible to create bandwidth-limited noise by synthesizing only one octave.
The C code below shows the exact algorithm used for this filter effect.
For fractalSum, you get a turbFunctionResult that is aimed at a range of -1 to 1 (the actual result might exceed this range in some cases). To convert to a color value, use the formulacolorValue = ((turbFunctionResult * 255) + 255) / 2, then clamp to the range 0 to 255.
For turbulence, you get a turbFunctionResult that is aimed at a range of 0 to 1 (the actual result might exceed this range in some cases). To convert to a color value, use the formulacolorValue = (turbFunctionResult * 255), then clamp to the range 0 to 255.
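The two mappings can be summarised in a small informative helper; the clamping follows the text above, while the rounding at the end is an implementation choice and not prescribed by this specification.
static int to_color_value(double turbFunctionResult, int fractalSum)
{
    double v = fractalSum
        ? (turbFunctionResult * 255.0 + 255.0) / 2.0   /* result aimed at [-1, 1] */
        :  turbFunctionResult * 255.0;                 /* result aimed at [0, 1]  */
    if (v < 0.0)   v = 0.0;                            /* clamp to the range 0..255 */
    if (v > 255.0) v = 255.0;
    return (int)(v + 0.5);
}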
The following order is used for applying the pseudo random numbers. An initial seed value is computed based on attribute‘seed’. Then the implementation computes the lattice points for R, then continues getting additional pseudo random numbers relative to the last generated pseudo random number and computes the lattice points for G, and so on for B and A.
The generated color and alpha values are in the color space determined by the value of property‘color-interpolation-filters’:
/* Produces results in the range [1, 2**31 - 2].
Algorithm is: r = (a * r) mod m
where a = 16807 and m = 2**31 - 1 = 2147483647
See [Park & Miller], CACM vol. 31 no. 10 p. 1195, Oct. 1988
To test: the algorithm should produce the result 1043618065
as the 10,000th generated number if the original seed is 1.
*/
#define RAND_m 2147483647 /* 2**31 - 1 */
#define RAND_a 16807 /* 7**5; primitive root of m */
#define RAND_q 127773 /* m / a */
#define RAND_r 2836 /* m % a */
long setup_seed(long lSeed)
{
  if (lSeed <= 0) lSeed = -(lSeed % (RAND_m - 1)) + 1;
  if (lSeed > RAND_m - 1) lSeed = RAND_m - 1;
  return lSeed;
}
long random(long lSeed)
{
  long result;
  result = RAND_a * (lSeed % RAND_q) - RAND_r * (lSeed / RAND_q);
  if (result <= 0) result += RAND_m;
  return result;
}
#define BSize 0x100
#define BM 0xff
#define PerlinN 0x1000
#define NP 12 /* 2^PerlinN */
#define NM 0xfff
static int uLatticeSelector[BSize + BSize + 2];
static double fGradient[4][BSize + BSize + 2][2];
struct StitchInfo
{
  int nWidth; // How much to subtract to wrap for stitching.
  int nHeight;
  int nWrapX; // Minimum value to wrap.
  int nWrapY;
};
static void init(long lSeed)
{
  double s;
  int i, j, k;
  lSeed = setup_seed(lSeed);
  for(k = 0; k < 4; k++)
  {
    for(i = 0; i < BSize; i++)
    {
      uLatticeSelector[i] = i;
      for (j = 0; j < 2; j++)
        fGradient[k][i][j] = (double)(((lSeed = random(lSeed)) % (BSize + BSize)) - BSize) / BSize;
      s = double(sqrt(fGradient[k][i][0] * fGradient[k][i][0] + fGradient[k][i][1] * fGradient[k][i][1]));
      fGradient[k][i][0] /= s;
      fGradient[k][i][1] /= s;
    }
  }
  while(--i)
  {
    k = uLatticeSelector[i];
    uLatticeSelector[i] = uLatticeSelector[j = (lSeed = random(lSeed)) % BSize];
    uLatticeSelector[j] = k;
  }
  for(i = 0; i < BSize + 2; i++)
  {
    uLatticeSelector[BSize + i] = uLatticeSelector[i];
    for(k = 0; k < 4; k++)
      for(j = 0; j < 2; j++)
        fGradient[k][BSize + i][j] = fGradient[k][i][j];
  }
}
#define s_curve(t) ( t * t * (3. - 2. * t) )
#define lerp(t, a, b) ( a + t * (b - a) )
double noise2(int nColorChannel, double vec[2], StitchInfo *pStitchInfo)
{
  int bx0, bx1, by0, by1, b00, b10, b01, b11;
  double rx0, rx1, ry0, ry1, *q, sx, sy, a, b, t, u, v;
  register int i, j;
  t = vec[0] + PerlinN;
  bx0 = (int)t;
  bx1 = bx0+1;
  rx0 = t - (int)t;
  rx1 = rx0 - 1.0f;
  t = vec[1] + PerlinN;
  by0 = (int)t;
  by1 = by0+1;
  ry0 = t - (int)t;
  ry1 = ry0 - 1.0f;
  // If stitching, adjust lattice points accordingly.
  if(pStitchInfo != NULL)
  {
    if(bx0 >= pStitchInfo->nWrapX)
      bx0 -= pStitchInfo->nWidth;
    if(bx1 >= pStitchInfo->nWrapX)
      bx1 -= pStitchInfo->nWidth;
    if(by0 >= pStitchInfo->nWrapY)
      by0 -= pStitchInfo->nHeight;
    if(by1 >= pStitchInfo->nWrapY)
      by1 -= pStitchInfo->nHeight;
  }
  bx0 &= BM;
  bx1 &= BM;
  by0 &= BM;
  by1 &= BM;
  i = uLatticeSelector[bx0];
  j = uLatticeSelector[bx1];
  b00 = uLatticeSelector[i + by0];
  b10 = uLatticeSelector[j + by0];
  b01 = uLatticeSelector[i + by1];
  b11 = uLatticeSelector[j + by1];
  sx = double(s_curve(rx0));
  sy = double(s_curve(ry0));
  q = fGradient[nColorChannel][b00]; u = rx0 * q[0] + ry0 * q[1];
  q = fGradient[nColorChannel][b10]; v = rx1 * q[0] + ry0 * q[1];
  a = lerp(sx, u, v);
  q = fGradient[nColorChannel][b01]; u = rx0 * q[0] + ry1 * q[1];
  q = fGradient[nColorChannel][b11]; v = rx1 * q[0] + ry1 * q[1];
  b = lerp(sx, u, v);
  return lerp(sy, a, b);
}
double turbulence(int nColorChannel, double *point, double fBaseFreqX, double fBaseFreqY,
                  int nNumOctaves, bool bFractalSum, bool bDoStitching,
                  double fTileX, double fTileY, double fTileWidth, double fTileHeight)
{
  StitchInfo stitch;
  StitchInfo *pStitchInfo = NULL; // Not stitching when NULL.
  // Adjust the base frequencies if necessary for stitching.
  if(bDoStitching)
  {
    // When stitching tiled turbulence, the frequencies must be adjusted
    // so that the tile borders will be continuous.
    if(fBaseFreqX != 0.0)
    {
      double fLoFreq = double(floor(fTileWidth * fBaseFreqX)) / fTileWidth;
      double fHiFreq = double(ceil(fTileWidth * fBaseFreqX)) / fTileWidth;
      if(fBaseFreqX / fLoFreq < fHiFreq / fBaseFreqX)
        fBaseFreqX = fLoFreq;
      else
        fBaseFreqX = fHiFreq;
    }
    if(fBaseFreqY != 0.0)
    {
      double fLoFreq = double(floor(fTileHeight * fBaseFreqY)) / fTileHeight;
      double fHiFreq = double(ceil(fTileHeight * fBaseFreqY)) / fTileHeight;
      if(fBaseFreqY / fLoFreq < fHiFreq / fBaseFreqY)
        fBaseFreqY = fLoFreq;
      else
        fBaseFreqY = fHiFreq;
    }
    // Set up initial stitch values.
    pStitchInfo = &stitch;
    stitch.nWidth = int(fTileWidth * fBaseFreqX + 0.5f);
    stitch.nWrapX = fTileX * fBaseFreqX + PerlinN + stitch.nWidth;
    stitch.nHeight = int(fTileHeight * fBaseFreqY + 0.5f);
    stitch.nWrapY = fTileY * fBaseFreqY + PerlinN + stitch.nHeight;
  }
  double fSum = 0.0f;
  double vec[2];
  vec[0] = point[0] * fBaseFreqX;
  vec[1] = point[1] * fBaseFreqY;
  double ratio = 1;
  for(int nOctave = 0; nOctave < nNumOctaves; nOctave++)
  {
    if(bFractalSum)
      fSum += double(noise2(nColorChannel, vec, pStitchInfo) / ratio);
    else
      fSum += double(fabs(noise2(nColorChannel, vec, pStitchInfo)) / ratio);
    vec[0] *= 2;
    vec[1] *= 2;
    ratio *= 2;
    if(pStitchInfo != NULL)
    {
      // Update stitch values. Subtracting PerlinN before the multiplication and
      // adding it afterward simplifies to subtracting it once.
      stitch.nWidth *= 2;
      stitch.nWrapX = 2 * stitch.nWrapX - PerlinN;
      stitch.nHeight *= 2;
      stitch.nWrapY = 2 * stitch.nWrapY - PerlinN;
    }
  }
  return fSum;
}
‘feTurbulence’
Categories:
Filter primitive element
Content model:
Any number of the following elements, in any order:
baseFrequency = "<number-optional-number>"
The base frequency (frequencies) parameter(s) for the noise function. If two <number>s are provided, the first number represents a base frequency in the X direction and the second value represents a base frequency in the Y direction. If one number is provided, then that value is used for both X and Y.
A negative value for base frequency is an error (see Error processing).
If the attribute is not specified, then the effect is as if a value of
0were specified.
Animatable: yes.
numOctaves = "<integer>"
The numOctaves parameter for the noise function.
If the attribute is not specified, then the effect is as if a value of
1were specified.
Animatable: yes.
seed = "<number>"
The starting number for the pseudo random number generator.
If the attribute is not specified, then the effect is as if a value of
0were specified. When the seed number is handed over to the algorithm above it must first be truncated, i.e. rounded to the closest integer value towards zero.
Animatable: yes.
stitchTiles= "
stitch | noStitch"
If
stitchTiles="noStitch", no attempt it made to achieve smooth transitions at the border of tiles which contain a turbulence function. Sometimes the result will show clear discontinuities at the tile borders.
If
stitchTiles="stitch", then the user agent will automatically adjust baseFrequency-x and baseFrequency-y values such that the feTurbulence node's width and height (i.e., the width and height of the current subregion) contains an integral number of the Perlin tile width and height for the first octave. The baseFrequency will be adjusted up or down depending on which way has the smallest relative (not absolute) change as follows: Given the frequency, calculate
lowFreq = floor(width*frequency)/width and
hiFreq=ceil(width*frequency)/width. If frequency/lowFreq < hiFreq/frequency then use lowFreq, else use hiFreq. While generating turbulence values, generate lattice vectors as normal for Perlin Noise, except for those lattice points that lie on the right or bottom edges of the active area (the size of the resulting tile). In those cases, copy the lattice vector from the opposite edge of the active area.
If attribute‘stitchTiles’is not specified, then the effect is as if a value ofnoStitchwere specified.
Animatable: yes.
type= "
fractalNoise | turbulence"
Indicates whether the filter primitive should perform a noise or turbulence function. If attribute
‘type’is not specified, then the effect is as if a value of
turbulencewere specified.
Animatable: yes.
Example feTurbulenceshows the effects of various parameter settings for feTurbulence.
Example feTurbulence
View this example as SVG (SVG-enabled browsers only)
15.25 DOM interfaces
15.25.1 Interface SVGFilterElement
The SVGFilterElement interface corresponds to the
‘filter’element.
15.25.2 Interface SVGFilterPrimitiveStandardAttributes
width(readonly SVGAnimatedLength)
Corresponds to attribute
‘width’on the given element.
height(readonly SVGAnimatedLength)
Corresponds to attribute
‘height’on the given element.
result(readonly SVGAnimatedString)
Corresponds to attribute
‘result’on the given element.
15.25.3 Interface SVGFEBlendElement
The SVGFEBlendElement interface corresponds to the
‘feBlend’element.
interface SVGFEBlendElement : SVGElement,
                              SVGFilterPrimitiveStandardAttributes {
  // Blend Mode Types
  const unsigned short SVG_FEBLEND_MODE_UNKNOWN = 0;
  const unsigned short SVG_FEBLEND_MODE_NORMAL = 1;
  const unsigned short SVG_FEBLEND_MODE_MULTIPLY = 2;
  const unsigned short SVG_FEBLEND_MODE_SCREEN = 3;
  const unsigned short SVG_FEBLEND_MODE_DARKEN = 4;
  const unsigned short SVG_FEBLEND_MODE_LIGHTEN = 5;

  readonly attribute SVGAnimatedString in1;
  readonly attribute SVGAnimatedString in2;
  readonly attribute SVGAnimatedEnumeration mode;
};
Constants in group “Blend Mode Types”:
SVG_FEBLEND_MODE_UNKNOWN(unsigned short)
The type is not one of predefined types. It is invalid to attempt to define a new value of this type or to attempt to switch an existing value to this type.
SVG_FEBLEND_MODE_NORMAL(unsigned short)
Corresponds to value
'normal'.
SVG_FEBLEND_MODE_MULTIPLY(unsigned short)
Corresponds to value
'multiply'.
SVG_FEBLEND_MODE_SCREEN(unsigned short)
Corresponds to value
'screen'.
SVG_FEBLEND_MODE_DARKEN(unsigned short)
Corresponds to value
'darken'.
SVG_FEBLEND_MODE_LIGHTEN(unsigned short)
Corresponds to value
'lighten'.
Attributes:
in1(readonly SVGAnimatedString)
Corresponds to attribute
‘in’on the given
‘feBlend’element.
in2(readonly SVGAnimatedString)
Corresponds to attribute
‘in2’on the given
‘feBlend’element.
mode(readonly SVGAnimatedEnumeration)
Corresponds to attribute
‘mode’on the given
‘feBlend’element. Takes one of the SVG_FEBLEND_MODE_* constants defined on this interface.
15.25.4 Interface SVGFEColorMatrixElement
The SVGFEColorMatrixElement interface corresponds to the
‘feColorMatrix’element.
interface SVGFEColorMatrixElement : SVGElement,
                                    SVGFilterPrimitiveStandardAttributes {
  // Color Matrix Types
  const unsigned short SVG_FECOLORMATRIX_TYPE_UNKNOWN = 0;
  const unsigned short SVG_FECOLORMATRIX_TYPE_MATRIX = 1;
  const unsigned short SVG_FECOLORMATRIX_TYPE_SATURATE = 2;
  const unsigned short SVG_FECOLORMATRIX_TYPE_HUEROTATE = 3;
  const unsigned short SVG_FECOLORMATRIX_TYPE_LUMINANCETOALPHA = 4;

  readonly attribute SVGAnimatedString in1;
  readonly attribute SVGAnimatedEnumeration type;
  readonly attribute SVGAnimatedNumberList values;
};
Constants in group “Color Matrix Types”:
SVG_FECOLORMATRIX_TYPE_UNKNOWN(unsigned short)
The type is not one of predefined types. It is invalid to attempt to define a new value of this type or to attempt to switch an existing value to this type.
15.25.12 Interface SVGFEConvolveMatrixElement
The SVGFEConvolveMatrixElement interface corresponds to the‘feConvolveMatrix’element.
SVG_EDGEMODE_UNKNOWN(unsigned short)
The type is not one of predefined types. It is invalid to attempt to define a new value of this type or to attempt to switch an existing value to this type.
SVG_EDGEMODE_DUPLICATE(unsigned short)
Corresponds to value
'duplicate'.
SVG_EDGEMODE_WRAP(unsigned short)
Corresponds to value
'wrap'.
SVG_EDGEMODE_NONE(unsigned short)
Corresponds to value
'none'.
Attributes:
in1(readonly SVGAnimatedString)
Corresponds to attribute
‘in’on the given
‘feConvolveMatrix’element.
orderX(readonly SVGAnimatedInteger)
Corresponds to attribute
‘order’on the given
‘feConvolveMatrix’element.
orderY(readonly SVGAnimatedInteger)
Corresponds to attribute
‘order’on the given
‘feConvolveMatrix’element.
kernelMatrix(readonly SVGAnimatedNumberList)
Corresponds to attribute
‘kernelMatrix’on the given
‘feConvolveMatrix’element.
divisor(readonly SVGAnimatedNumber)
Corresponds to attribute
‘divisor’on the given
‘feConvolveMatrix’element.
bias(readonly SVGAnimatedNumber)
Corresponds to attribute
‘bias’on the given
‘feConvolveMatrix’element.
targetX(readonly SVGAnimatedInteger)
Corresponds to attribute
‘targetX’on the given
‘feConvolveMatrix’element.
targetY(readonly SVGAnimatedInteger)
Corresponds to attribute
‘targetY’on the given
‘feConvolveMatrix’element.
edgeMode(readonly SVGAnimatedEnumeration)
Corresponds to attribute
‘edgeMode’on the given
‘feConvolveMatrix’element. Takes one of the SVG_EDGEMODE_* constants defined on this interface.
kernelUnitLengthX(readonly SVGAnimatedNumber)
Corresponds to attribute
‘kernelUnitLength’on the given
‘feConvolveMatrix’element.
kernelUnitLengthY(readonly SVGAnimatedNumber)
Corresponds to attribute
‘kernelUnitLength’on the given
‘feConvolveMatrix’element.
preserveAlpha(readonly SVGAnimatedBoolean)
Corresponds to attribute
‘preserveAlpha’on the given
‘feConvolveMatrix’element.
15.25.13 Interface SVGFEDiffuseLightingElement
The SVGFEDiffuseLightingElement interface corresponds to the
‘feDiffuseLighting’element.
15.25.17 Interface SVGFEDisplacementMapElement
The SVGFEDisplacementMapElement interface corresponds to the‘feDisplacementMap’element.
SVG_CHANNEL_UNKNOWN(unsigned short)
The type is not one of predefined types. It is invalid to attempt to define a new value of this type or to attempt to switch an existing value to this type.
SVG_CHANNEL_R(unsigned short)
Corresponds to value
'R'.
SVG_CHANNEL_G(unsigned short)
Corresponds to value
'G'.
SVG_CHANNEL_B(unsigned short)
Corresponds to value
'B'.
SVG_CHANNEL_A(unsigned short)
Corresponds to value
'A'.
Attributes:
in1(readonly SVGAnimatedString)
Corresponds to attribute
‘in’on the given
‘feDisplacementMap’element.
in2(readonly SVGAnimatedString)
Corresponds to attribute
‘in2’on the given
‘feDisplacementMap’element.
scale(readonly SVGAnimatedNumber)
Corresponds to attribute
‘scale’on the given
‘feDisplacementMap’element.
xChannelSelector(readonly SVGAnimatedEnumeration)
Corresponds to attribute
‘xChannelSelector’on the given
‘feDisplacementMap’element. Takes one of the SVG_CHANNEL_* constants defined on this interface.
yChannelSelector(readonly SVGAnimatedEnumeration)
Corresponds to attribute
‘yChannelSelector’on the given
‘feDisplacementMap’element. Takes one of the SVG_CHANNEL_* constants defined on this interface.
15.25.18 Interface SVGFEFloodElement
The SVGFEFloodElement interface corresponds to the
‘feFlood’element.
15.25.19 Interface SVGFEGaussianBlurElement
The SVGFEGaussianBlurElement interface corresponds to the‘feGaussianBlur’element.
in1(readonly SVGAnimatedString)
Corresponds to attribute
‘in’on the given
‘feGaussianBlur’element.
stdDeviationX(readonly SVGAnimatedNumber)
Corresponds to attribute
‘stdDeviation’on the given
‘feGaussianBlur’element. Contains the X component of attribute
‘stdDeviation’.
stdDeviationY(readonly SVGAnimatedNumber)
Corresponds to attribute
‘stdDeviation’on the given
‘feGaussianBlur’element. Contains the Y component (possibly computed automatically) of attribute
‘stdDeviation’.
Operations:
void
setStdDeviation(in float
stdDeviationX, in float
stdDeviationY)
Sets the values for attribute
‘stdDeviation’.
Parameters
float
stdDeviationX
The X component of attribute
‘stdDeviation’.
float
stdDeviationY
The Y component of attribute
‘stdDeviation’.
Exceptions
DOMException, code NO_MODIFICATION_ALLOWED_ERR
Raised on an attempt to change the value of a read only attribute.
15.25.20 Interface SVGFEImageElement
The SVGFEImageElement interface corresponds to the
‘feImage’element.
15.25.23 Interface SVGFEMorphologyElement
The SVGFEMorphologyElement interface corresponds to the‘feMorphology’element.
SVG_MORPHOLOGY_OPERATOR_UNKNOWN(unsigned short)
The type is not one of predefined types. It is invalid to attempt to define a new value of this type or to attempt to switch an existing value to this type.
SVG_MORPHOLOGY_OPERATOR_ERODE(unsigned short)
Corresponds to value
'erode'.
SVG_MORPHOLOGY_OPERATOR_DILATE(unsigned short)
Corresponds to value
'dilate'.
Attributes:
in1(readonly SVGAnimatedString)
Corresponds to attribute
‘in’on the given
‘feMorphology’element.
operator(readonly SVGAnimatedEnumeration)
Corresponds to attribute
‘operator’on the given
‘feMorphology’element. Takes one of the SVG_MORPHOLOGY_OPERATOR_* constants defined on this interface.
radiusX(readonly SVGAnimatedNumber)
Corresponds to attribute
‘radius’on the given
‘feMorphology’element.
radiusY(readonly SVGAnimatedNumber)
Corresponds to attribute
‘radius’on the given
‘feMorphology’element.
15.25.24 Interface SVGFEOffsetElement
The SVGFEOffsetElement interface corresponds to the
‘feOffset’element.
15.25.27 Interface SVGFETurbulenceElement
The SVGFETurbulenceElement interface corresponds to the‘feTurbulence’element.
Constants in group “Turbulence Types”:
SVG_TURBULENCE_TYPE_UNKNOWN(unsigned short)
The type is not one of predefined types. It is invalid to attempt to define a new value of this type or to attempt to switch an existing value to this type.
SVG_TURBULENCE_TYPE_FRACTALNOISE(unsigned short)
Corresponds to value
'fractalNoise'.
SVG_TURBULENCE_TYPE_TURBULENCE(unsigned short)
Corresponds to value
'turbulence'.
Constants in group “Stitch Options”:
SVG_STITCHTYPE_UNKNOWN(unsigned short)
The type is not one of predefined types. It is invalid to attempt to define a new value of this type or to attempt to switch an existing value to this type.
SVG_STITCHTYPE_STITCH(unsigned short)
Corresponds to value
'stitch'.
SVG_STITCHTYPE_NOSTITCH(unsigned short)
Corresponds to value
'noStitch'.
Attributes:
baseFrequencyX(readonly SVGAnimatedNumber)
Corresponds to attribute
‘baseFrequency’on the given
‘feTurbulence’element. Contains the X component of the
‘baseFrequency’attribute.
baseFrequencyY(readonly SVGAnimatedNumber)
Corresponds to attribute
‘baseFrequency’on the given
‘feTurbulence’element. Contains the Y component of the (possibly computed automatically)
‘baseFrequency’attribute.
numOctaves(readonly SVGAnimatedInteger)
Corresponds to attribute
‘numOctaves’on the given
‘feTurbulence’element.
seed(readonly SVGAnimatedNumber)
Corresponds to attribute
‘seed’on the given
‘feTurbulence’element.
stitchTiles(readonly SVGAnimatedEnumeration)
Corresponds to attribute
‘stitchTiles’on the given
‘feTurbulence’element. Takes one of the SVG_STITCHTYPE_* constants defined on this interface.
type(readonly SVGAnimatedEnumeration)
Corresponds to attribute
‘type’on the given
‘feTurbulence’element. Takes one of the SVG_TURBULENCE_TYPE_* constants defined on this interface.
SVG 1.1 (Second Edition) – 16 August 2011