
three.js uses gpu to select objects and calculate intersection positions

angryTom
2019-11-29

Raycasting method

Selecting objects with the Raycaster that ships with three.js is very simple; the code is as follows:

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();

function onMouseMove(event) {
    // Convert the mouse position to normalized device coordinates;
    // each component ranges from -1 to 1.
    mouse.x = event.clientX / window.innerWidth * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}

function pick() {
    // Update the picking ray from the camera and mouse position
    raycaster.setFromCamera(mouse, camera);
    // Compute the objects intersected by the picking ray
    var intersects = raycaster.intersectObjects(scene.children);
}

window.addEventListener('mousemove', onMouseMove);
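A sketch of consuming the result: intersectObjects returns an array sorted by distance, so the first entry is the closest hit (the logging here is illustrative, not part of the original code):

// Hypothetical usage inside pick(): inspect the closest hit, if any
var hits = raycaster.intersectObjects(scene.children, true); // true = recurse into child objects
if (hits.length > 0) {
    console.log(hits[0].object.name, hits[0].point); // the hit object and the world-space intersection point
}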


Internally, it filters candidates by bounding box, then tests whether the picking ray intersects each triangle of the remaining objects.

However, when the model is very large, for example 400,000 faces, selecting objects and computing the collision point by traversal becomes very slow and the user experience suffers.

Picking on the GPU does not have this problem: no matter how large the scene and models are, the object under the mouse and the intersection point can be obtained within one frame.

Using the GPU to select objects

The implementation method is very simple:

1. Create a picking material and replace each model's material in the scene with a different solid color.

2. Read the pixel color at the mouse position and determine which object is under the mouse based on that color.

Specific implementation code:

1. Create the picking material, traverse the scene, and replace each model's material with one that renders a unique color.

let maxHexColor = 1;

// Swap in the picking material
scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    n.oldMaterial = n.material;
    if (n.pickMaterial) { // the picking material was already created
        n.material = n.pickMaterial;
        return;
    }
    let material = new THREE.ShaderMaterial({
        vertexShader: PickVertexShader,
        fragmentShader: PickFragmentShader,
        uniforms: {
            pickColor: {
                value: new THREE.Color(maxHexColor)
            }
        }
    });
    n.pickColor = maxHexColor;
    maxHexColor++;
    n.material = n.pickMaterial = material;
});
 
PickVertexShader:

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

PickFragmentShader:

uniform vec3 pickColor;

void main() {
    gl_FragColor = vec4(pickColor, 1.0);
}
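Why an incrementing integer works as an ID: THREE.Color(hex) splits the integer into three bytes (r = hex >> 16, g = (hex >> 8) & 0xff, b = hex & 0xff), which the fragment shader writes out verbatim. A minimal round-trip sketch in plain JavaScript, assuming three.js is loaded as THREE:

// Illustrative round trip for the ID encoding
const id = 511;                        // 0x0001FF → bytes (0, 1, 255)
const c = new THREE.Color(id);         // c.r = 0/255, c.g = 1/255, c.b = 255/255
// The shader outputs vec4(c, 1.0); reading the pixel back yields the bytes (0, 1, 255).
const decoded = 0 * 0x10000 + 1 * 0x100 + 255; // = 511, the original id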

2. Render the scene to a WebGLRenderTarget, read back the color at the mouse position, and determine the selected object.

let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

// Render and read back the pixel under the mouse
renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel); // read the color at the mouse position

// Restore the original materials and find the selected object
const currentColor = pixel[0] * 0x10000 + pixel[1] * 0x100 + pixel[2]; // decode the id from the three bytes (base 256, matching THREE.Color's byte layout)
let selected = null;

scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    if (n.pickMaterial && n.pickColor === currentColor) { // colors match
        selected = n; // the object under the mouse
    }
    if (n.oldMaterial) {
        n.material = n.oldMaterial;
        delete n.oldMaterial;
    }
});

Explanation: offsetX and offsetY give the mouse position, and height is the canvas height. The readRenderTargetPixels call reads a 1×1 pixel region at (offsetX, height - offsetY), i.e. the pixel under the mouse.

pixel is a Uint8Array(4) holding the four RGBA channels; each channel ranges from 0 to 255.
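A minimal sketch of wiring the two steps into an event handler; pickObject is a hypothetical helper wrapping the material swap, render, and read-back shown above:

// Hypothetical wiring (assumes renderer, scene, and camera already exist)
renderer.domElement.addEventListener('mousemove', event => {
    const selected = pickObject(event.offsetX, event.offsetY); // steps 1 and 2 above
    if (selected) {
        console.log('Object under mouse:', selected.name);
    }
});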

Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

Using the GPU to obtain the intersection position

The implementation method is also very simple:

1. Create a depth shader material and render the scene's depth to a WebGLRenderTarget.

2. Compute the depth at the mouse position, then derive the intersection position from the mouse position and that depth.

Specific implementation code:

1. Create a depth shader material that encodes the depth in a specific way, and render it to a WebGLRenderTarget.

Depth Material:

const depthMaterial = new THREE.ShaderMaterial({
    vertexShader: DepthVertexShader,
    fragmentShader: DepthFragmentShader,
    uniforms: {
        far: {
            value: camera.far
        }
    }
});

DepthVertexShader:

precision highp float;
uniform float far;
varying float depth;

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    depth = gl_Position.z / far;
}

DepthFragmentShader:

precision highp float;
varying float depth;

void main() {
    float hex = abs(depth) * 16777215.0; // 0xffffff
    float r = floor(hex / 65535.0);
    float g = floor((hex - r * 65535.0) / 255.0);
    float b = floor(hex - r * 65535.0 - g * 255.0);
    float a = sign(depth) >= 0.0 ? 1.0 : 0.0; // 1.0 when depth >= 0, otherwise 0.0
    gl_FragColor = vec4(r / 255.0, g / 255.0, b / 255.0, a);
}
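A plain-JavaScript sketch of the same encode/decode scheme, useful for sanity-checking the round trip (the 65535/255 factors mirror the shader above; quantization error is on the order of 1/0xffffff):

// Encode a depth in [0, 1] into three bytes, as the fragment shader does
function encodeDepth(depth) {
    const hex = Math.abs(depth) * 16777215; // 0xffffff
    const r = Math.floor(hex / 65535);
    const g = Math.floor((hex - r * 65535) / 255);
    const b = Math.floor(hex - r * 65535 - g * 255);
    return [r, g, b];
}

// Decode the three bytes back to a depth in [0, 1], as done after read-back
function decodeDepth([r, g, b]) {
    return (r * 65535 + g * 255 + b) / 16777215;
}

console.log(decodeDepth(encodeDepth(0.5))); // ≈ 0.5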

Important notes:

a. gl_Position.z is the depth in camera space; it is linear, ranging from cameraNear to cameraFar, so it can be passed to the fragment shader through a varying and interpolated directly.

b. Dividing gl_Position.z by far converts the value to the 0~1 range so that it can be output as a color.

c. You cannot use the screen-space depth: after perspective projection the depth lies in -1~1, with most values very close to 1 (above 0.9); it is not linear and barely changes with distance, so the output color would be almost constant and very imprecise (a quick numeric check follows after this list).

d. To obtain depth in the fragment shader: the screen-space depth is gl_FragCoord.z, and the camera-space depth can be recovered as gl_FragCoord.z / gl_FragCoord.w.

e. The statements above all refer to perspective projection. Under orthographic projection gl_Position.w is 1, so camera-space and screen-space depth are the same.

f. To output the depth as accurately as possible, all three RGB components are used to encode it: gl_Position.z/far lies in 0~1, is multiplied by 0xffffff, and is split into an RGB value where one unit of r represents 65535, one unit of g represents 255, and one unit of b represents 1.
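The numeric check promised in note c: a sketch using the standard perspective depth formula, with near = 0.1 and far = 1000 as assumed example values:

// NDC depth of a perspective camera: z_ndc = (f + n)/(f - n) + 2*f*n / ((f - n) * zEye)
const n = 0.1, f = 1000;
const ndcDepth = zEye => (f + n) / (f - n) + (2 * f * n) / ((f - n) * zEye);
console.log(ndcDepth(-1));   // ≈ 0.800
console.log(ndcDepth(-10));  // ≈ 0.980
console.log(ndcDepth(-100)); // ≈ 0.998 — nearly all of the range crowds toward 1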

Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

2. Read the color at the mouse position and convert it back to the camera-space depth value.

a. Draw the "encrypted" depth to the WebGLRenderTarget, then read back the color:

let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

scene.overrideMaterial = this.depthMaterial; // render everything with the depth material
renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel);
scene.overrideMaterial = null; // restore normal rendering

Description: offsetX and offsetY give the mouse position, and height is the canvas height. As before, readRenderTargetPixels reads the 1×1 pixel at (offsetX, height - offsetY), the pixel under the mouse.

pixel is a Uint8Array(4) holding the four RGBA channels; each channel ranges from 0 to 255.

b. "Decrypt" the "encrypted" camera space depth value to obtain the correct camera space depth value.

let cameraDepth;

if (pixel[2] !== 0 || pixel[1] !== 0 || pixel[0] !== 0) {
    // Reverse the encoding done in DepthFragmentShader
    let hex = (pixel[0] * 65535 + pixel[1] * 255 + pixel[2]) / 0xffffff;
    if (pixel[3] === 0) { // alpha 0 means the encoded depth was negative
        hex = -hex;
    }
    cameraDepth = -hex * camera.far; // depth of the point under the mouse in camera space (note: camera-space depth values are negative)
}

3. From the mouse's screen position and the camera-space depth, interpolate to recover the intersection's world coordinates.

let nearPosition = new THREE.Vector3(); // camera-space position of the mouse on one end of the view frustum
let farPosition = new THREE.Vector3(); // camera-space position of the mouse on the other end
let world = new THREE.Vector3(); // world coordinates computed by interpolation

// Normalized device coordinates
const deviceX = offsetX / width * 2 - 1;
const deviceY = -offsetY / height * 2 + 1;

// One end of the ray
nearPosition.set(deviceX, deviceY, 1); // NDC: (deviceX, deviceY, 1)
nearPosition.applyMatrix4(camera.projectionMatrixInverse); // camera space: z = -far

// The other end
farPosition.set(deviceX, deviceY, -1); // NDC: (deviceX, deviceY, -1)
farPosition.applyMatrix4(camera.projectionMatrixInverse); // camera space: z = -near
// note: NDC z = 1 maps to the far plane and z = -1 to the near plane; the linear interpolation below works either way

// In camera space, interpolate x and y proportionally according to the depth.
const t = (cameraDepth - nearPosition.z) / (farPosition.z - nearPosition.z);

// Convert the intersection from camera-space coordinates to world coordinates.
world.set(
    nearPosition.x + (farPosition.x - nearPosition.x) * t,
    nearPosition.y + (farPosition.y - nearPosition.y) * t,
    cameraDepth
);
world.applyMatrix4(camera.matrixWorld);
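A hypothetical use of the result, e.g. for the hover and placement effects discussed below (marker is an assumed THREE.Mesh already added to the scene):

// Snap a marker to the point under the mouse so it slides across model surfaces
marker.position.copy(world);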

Full code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

Related applications

Using the GPU to select objects and compute intersection positions is mostly useful where very high performance is required, for example:

1. The hover effect when the mouse moves over a 3D model.

2. When adding a model, the model follows the mouse and its placement in the scene is previewed in real time.

3. Distance and area measurement tools, where lines and polygons are previewed in real time as the mouse moves over a surface, and lengths and areas are computed.

4. Scenes and models so large that ray-casting selection is very slow and the user experience is poor.

Below is a screenshot of GPU picking used to implement a mouse hover effect: the red border is the selection effect, and the translucent yellow is the hover effect.

Don't understand it? Perhaps you are not familiar with the various projection operations in three.js. The projection formulas used in three.js are given below.

Projection operations in three.js

1. modelViewMatrix = camera.matrixWorldInverse * object.matrixWorld

2. viewMatrix = camera.matrixWorldInverse

3. modelMatrix = object.matrixWorld

4. project = applyMatrix4( camera.matrixWorldInverse ).applyMatrix4( camera.projectionMatrix )

5. unproject = applyMatrix4( camera.projectionMatrixInverse ).applyMatrix4( camera.matrixWorld )

6. gl_Position = projectionMatrix * modelViewMatrix * position = projectionMatrix * viewMatrix * modelMatrix * position
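A sketch of formulas 4 and 5 using three.js's built-in helpers, which apply exactly these matrix products (assumes a camera exists; the sample point is arbitrary):

// Round trip: world → NDC → world
const p = new THREE.Vector3(1, 2, 3);
const ndc = p.clone().project(camera);      // formula 4: world to normalized device coordinates
const back = ndc.clone().unproject(camera); // formula 5: back to world coordinates
// back ≈ p, up to floating-point error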

Reference materials:

1. Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

2. Open-source 3D scene editor based on three.js: https://github.com/tengge1/ShadowEditor

3. Draw the depth value in OpenGL using shaders: https://stackoverflow.com/questions/6408851/draw-the-depth-value-in-opengl-using-shaders

4. Getting the real fragment depth in GLSL: https://gamedev.stackexchange.com/questions/93055/getting-the-real-fragment-depth-in-glsl


Statement: this article is reproduced from cnblogs.com.