
three.js: using the GPU to select objects and calculate intersection positions

Raycasting method

Selecting objects with the Raycaster that ships with three.js is very simple; the code is as follows:

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
function onMouseMove(event) {
    // Convert the mouse position to normalized device coordinates,
    // where each component ranges from -1 to 1.
    mouse.x = event.clientX / window.innerWidth * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}
function pick() {
    // Update the picking ray from the camera and mouse position.
    raycaster.setFromCamera(mouse, camera);
    // Calculate the objects intersected by the picking ray.
    var intersects = raycaster.intersectObjects(scene.children);
}
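
The returned intersections are sorted by distance from the camera, so the first element is the closest hit. A minimal sketch of reading the result (the properties used are the standard ones returned by intersectObjects):

if (intersects.length > 0) {
    var intersection = intersects[0]; // the closest hit
    console.log(intersection.object);   // the picked mesh
    console.log(intersection.point);    // intersection point in world coordinates
    console.log(intersection.distance); // distance from the camera
}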


It works by bounding-box filtering, then testing whether the picking ray intersects each triangular face of the geometry.

However, when the model is very large, for example 400,000 faces, selecting objects and calculating the collision point by traversal is very slow and the user experience is poor.

Selecting objects with the GPU does not have this problem: no matter how large the scene and model are, the object under the mouse and the intersection position can be obtained within one frame.

Using the GPU to select objects

The implementation method is very simple:

1. Create a picking material and replace each model's material with it, so that every model is rendered with a different, unique color.

2. Read the pixel color at the mouse position and determine the object at the mouse position based on the color.

Specific implementation code:

1. Create the picking material, traverse the scene, and give each model in the scene a material that renders a unique color.

let maxHexColor = 1;

// Replace each mesh's material with a picking material.
scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    n.oldMaterial = n.material;
    if (n.pickMaterial) { // the picking material has already been created
        n.material = n.pickMaterial;
        return;
    }
    let material = new THREE.ShaderMaterial({
        vertexShader: PickVertexShader,
        fragmentShader: PickFragmentShader,
        uniforms: {
            pickColor: {
                value: new THREE.Color(maxHexColor)
            }
        }
    });
    n.pickColor = maxHexColor;
    maxHexColor++;
    n.material = n.pickMaterial = material;
});
 
PickVertexShader:

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

PickFragmentShader:

uniform vec3 pickColor;

void main() {
    gl_FragColor = vec4(pickColor, 1.0);
}

2. Render the scene to a WebGLRenderTarget, read the color at the mouse position, and determine the selected object.

let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

// Render the scene and read the pixel under the mouse.
renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel); // read the color at the mouse position

// Restore the original materials and find the selected object.
// The picking color was packed as 0xRRGGBB, so rebuild it from the rgb channels.
const currentColor = pixel[0] * 0x10000 + pixel[1] * 0x100 + pixel[2];
let selected = null;

scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    if (n.pickMaterial && n.pickColor === currentColor) { // colors match
        selected = n; // the object under the mouse
    }
    if (n.oldMaterial) {
        n.material = n.oldMaterial;
        delete n.oldMaterial;
    }
});

Explanation: offsetX and offsetY are the mouse position and height is the canvas height. The readRenderTargetPixels call reads the color of the 1×1 pixel at (offsetX, height - offsetY), i.e. the pixel under the mouse.

pixel is a Uint8Array(4) that holds the four rgba channels of that color; each channel ranges from 0 to 255.

Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

Using the GPU to obtain the intersection position

The implementation method is also very simple:

1. Create a depth shader material and render the scene depth to the WebGLRenderTarget.

2. Read the depth at the mouse position, then calculate the intersection position from the mouse position and that depth.

Specific implementation code:

1. Create a depth shader material, encode the depth information in a certain way, and render it to the WebGLRenderTarget.

Depth Material:

const depthMaterial = new THREE.ShaderMaterial({
    vertexShader: DepthVertexShader,
    fragmentShader: DepthFragmentShader,
    uniforms: {
        far: {
            value: camera.far
        }
    }
});
DepthVertexShader:

precision highp float;

uniform float far;
varying float depth;

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    depth = gl_Position.z / far;
}

DepthFragmentShader:

precision highp float;

varying float depth;

void main() {
    float hex = abs(depth) * 16777215.0; // 0xffffff
    float r = floor(hex / 65535.0);
    float g = floor((hex - r * 65535.0) / 255.0);
    float b = floor(hex - r * 65535.0 - g * 255.0);
    float a = sign(depth) >= 0.0 ? 1.0 : 0.0; // 1.0 when depth >= 0, 0.0 when depth < 0
    gl_FragColor = vec4(r / 255.0, g / 255.0, b / 255.0, a);
}

Important notes:

a. gl_Position.z is the depth in camera space; it is linear and ranges roughly from cameraNear to cameraFar, so it can be passed to the fragment shader through a varying and interpolated directly.

b. gl_Position.z is divided by far to bring the value into the 0~1 range so it can be output as a color.

c. You cannot use the screen-space depth. After perspective projection the depth lies in -1~1, most values are very close to 1 (above 0.9), and it is not linear and barely changes, so the output color would barely change and be very inaccurate.

d. How to obtain depth in the fragment shader: the screen-space depth is gl_FragCoord.z, and the camera-space depth can be obtained as gl_FragCoord.z / gl_FragCoord.w.

e. The above applies to perspective projection. In orthographic projection gl_Position.w is 1, and camera-space and screen-space depth are the same.

f. To output the depth as accurately as possible, all three rgb components are used: gl_Position.z/far lies in the 0~1 range, is multiplied by 0xffffff and converted into an rgb color value, where 1 in the r component represents 65535, 1 in the g component represents 255, and 1 in the b component represents 1.
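
To make note f concrete, here is a small JavaScript sketch (not part of the original implementation) that mirrors the shader's encoding and the decoding used later; the clamp emulates the 8-bit color buffer that gl_FragColor is written to:

// Sketch only: emulate DepthFragmentShader's encoding and the later JavaScript decoding.
function encodeDepth(depth) { // depth is gl_Position.z / far, in the range 0~1
    const hex = Math.abs(depth) * 0xffffff;
    const r = Math.floor(hex / 65535);
    const g = Math.floor((hex - r * 65535) / 255);
    const b = Math.floor(hex - r * 65535 - g * 255);
    const clamp = v => Math.min(255, Math.max(0, v)); // gl_FragColor channels are clamped to 8 bits
    return [clamp(r), clamp(g), clamp(b)];
}

function decodeDepth(r, g, b) {
    return (r * 65535 + g * 255 + b) / 0xffffff;
}

const [r, g, b] = encodeDepth(0.25);
console.log(decodeDepth(r, g, b)); // approximately 0.25, with a small quantization error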

Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

2. Read the color at the mouse position and restore the camera-space depth value from it.

a. Render the "encoded" depth to the WebGLRenderTarget and read the color:

let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

scene.overrideMaterial = depthMaterial; // render the whole scene with the depth material
renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel);

As before, offsetX and offsetY are the mouse position, height is the canvas height, and readRenderTargetPixels reads the rgba channels (0~255 each) of the 1×1 pixel at (offsetX, height - offsetY) into pixel.

b. "Decrypt" the "encrypted" camera space depth value to obtain the correct camera space depth value.

let cameraDepth = 0;

if (pixel[0] !== 0 || pixel[1] !== 0 || pixel[2] !== 0) {
    let hex = (pixel[0] * 65535 + pixel[1] * 255 + pixel[2]) / 0xffffff;
    if (pixel[3] === 0) {
        hex = -hex;
    }
    cameraDepth = -hex * camera.far; // depth of the point under the mouse in camera space (note: camera-space depth is negative)
}

3. From the mouse position on the screen and the camera-space depth, interpolate to recover the world coordinates of the intersection.

let nearPosition = new THREE.Vector3(); // camera-space position of the mouse at one end of the frustum (NDC z = 1)
let farPosition = new THREE.Vector3(); // camera-space position of the mouse at the other end (NDC z = -1)
let world = new THREE.Vector3(); // world coordinates obtained by interpolation

// Normalized device coordinates of the mouse.
const deviceX = offsetX / width * 2 - 1;
const deviceY = -offsetY / height * 2 + 1;

// One end point.
nearPosition.set(deviceX, deviceY, 1); // NDC: (deviceX, deviceY, 1)
nearPosition.applyMatrix4(camera.projectionMatrixInverse); // camera space: on the far clipping plane

// The other end point.
farPosition.set(deviceX, deviceY, -1); // NDC: (deviceX, deviceY, -1)
farPosition.applyMatrix4(camera.projectionMatrixInverse); // camera space: on the near clipping plane

// In camera space, compute the interpolation factor from the depth.
const t = (cameraDepth - nearPosition.z) / (farPosition.z - nearPosition.z);

// Interpolate x and y at that depth, then convert the point from camera space to world space.
world.set(
    nearPosition.x + (farPosition.x - nearPosition.x) * t,
    nearPosition.y + (farPosition.y - nearPosition.y) * t,
    cameraDepth
);
world.applyMatrix4(camera.matrixWorld);
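
As a simple usage example, the computed world position can be used to drop a marker at the intersection (a sketch, assuming the scene and world variables from the code above):

const marker = new THREE.Mesh(
    new THREE.SphereGeometry(0.05, 16, 16),
    new THREE.MeshBasicMaterial({ color: 0xff0000 })
);
marker.position.copy(world);
scene.add(marker);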

Full code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

Related applications

GPU-based object selection and intersection calculation is mostly used in situations where very high performance is required, for example:

1. A hover effect when the mouse moves over a 3D model.

2. When adding a model, having it follow the mouse so its placement in the scene can be previewed in real time.

3. Distance and area measurement tools, where lines and polygons can be previewed in real time as the mouse moves over a surface and lengths and areas are calculated.

4. Scenes and models so large that ray-casting selection is too slow and the user experience is poor.

Below is a screenshot of using GPU picking to implement a mouse hover effect: the red border is the selection effect, and the translucent yellow is the hover effect.
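
A minimal sketch of such a hover effect is shown below; pickObject is a hypothetical helper that wraps the GPU picking code above and returns the mesh under the mouse (or null):

let hovered = null;

renderer.domElement.addEventListener('mousemove', event => {
    const mesh = pickObject(event.offsetX, event.offsetY); // hypothetical helper wrapping the GPU pick
    if (hovered === mesh) {
        return;
    }
    // Restore the previously hovered mesh.
    if (hovered && hovered.material.emissive) {
        hovered.material.emissive.setHex(hovered.userData.oldEmissive);
    }
    hovered = mesh;
    // Highlight the newly hovered mesh (only works for materials with an emissive color).
    if (hovered && hovered.material.emissive) {
        hovered.userData.oldEmissive = hovered.material.emissive.getHex();
        hovered.material.emissive.setHex(0x444400);
    }
});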

Don't understand the code above? Perhaps you are not familiar with the projection operations in three.js. The projection formulas used by three.js are listed below.

Projection operations in three.js

1. modelViewMatrix = camera.matrixWorldInverse * object.matrixWorld

2. viewMatrix = camera.matrixWorldInverse

3. modelMatrix = object.matrixWorld

4. project = applyMatrix4( camera.matrixWorldInverse ).applyMatrix4( camera.projectionMatrix )

5. unproject = applyMatrix4( camera.projectionMatrixInverse ).applyMatrix4( camera.matrixWorld )

6. gl_Position = projectionMatrix * modelViewMatrix * position
               = projectionMatrix * viewMatrix * modelMatrix * position
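
For reference, a small standalone sketch (assuming a standard PerspectiveCamera) that checks formulas 4 and 5 against the built-in Vector3.project and Vector3.unproject:

import * as THREE from 'three';

const camera = new THREE.PerspectiveCamera(45, 1, 0.1, 1000);
camera.position.set(3, 4, 5);
camera.lookAt(0, 0, 0);
camera.updateMatrixWorld();

const v = new THREE.Vector3(1, 2, -3);

// Formula 4: project = applyMatrix4(camera.matrixWorldInverse).applyMatrix4(camera.projectionMatrix)
const a = v.clone().project(camera);
const b = v.clone()
    .applyMatrix4(camera.matrixWorldInverse)
    .applyMatrix4(camera.projectionMatrix);
console.log(a, b); // the two results should match

// Formula 5: unproject = applyMatrix4(camera.projectionMatrixInverse).applyMatrix4(camera.matrixWorld)
const back = a.clone().unproject(camera);
console.log(back); // should be close to the original v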

Reference materials:

1. Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

2. ShadowEditor, an open-source 3D scene editor based on three.js: https://github.com/tengge1/ShadowEditor

3. Drawing the depth value in OpenGL using shaders: https://stackoverflow.com/questions/6408851/draw-the-depth-value-in-opengl-using-shaders

4. Getting the real fragment depth in GLSL: https://gamedev.stackexchange.com/questions/93055/getting-the-real-fragment-depth-in-glsl
