
An initial introduction to web-based VR

伊谢尔伦
Released: 2017-01-24

The most exciting topic in the technology world in 2016 was how VR would change the world. Film has begun to experiment with VR, letting viewers not merely watch 3D images but stand inside the scene, an unprecedented immersive viewing experience; gaming, too, has started moving to VR, so players are no longer stuck with the single fixed viewpoint of a packaged game. These striking effects leave enormous room for the imagination, and VR is edging into everyday life. The reality, however, is that beyond the occasional taste of its black-technology wonders, VR has not truly caught on. Behind the enthusiasm of capital and hardware makers, doubts keep rising one after another.

At present, although VR hardware development is on the fast track, content remains thin. A VR film is expensive to produce, and VR games are no cheaper. The high cost of content creation keeps VR aloof; to shed that aristocratic splendor and fly into ordinary homes, VR still has to solve the problem of content supply. The evolution of Web technology, represented by HTML5, may break this deadlock. The latest Google Chrome and Mozilla Firefox browsers have added WebVR support on top of HTML5, and the parties involved are drafting and enriching the industry's emerging WebVR API standard. These web-based virtual reality standards will further lower the cost and the technical threshold of VR content creation, and will help HTML5 (JavaScript) developers, the world's largest developer community, enter the field of VR content creation. This is not only a significant breakthrough for Web technology; it also creates an opportunity for VR to take off.

Advantages of Web-side VR

The Web can lower the threshold for VR experiences

Web technology not only makes VR content cheaper to create, it also greatly lowers the technical threshold. Building on the rapid progress of WebGL, Web VR uses the GPU for computation and game-engine techniques to optimize chip-level APIs, improving graphics rendering performance and sharply reducing the barrier for developers entering VR. At the same time, Web VR combines well with cloud computing, which can supplement the computing power of VR terminals and enhance the interactive experience.

It is clear that the Web has broadened the scope of VR. Innovative cases have already appeared in advertising, marketing, and panoramic video, and much everyday content has been brought into VR creation, such as virtual tourism, news reporting, and virtual shopping; their presentation and interaction can be built easily with an HTML5 engine. This undoubtedly leaves plenty of room for imagination about its future development.

There is a huge base of Web developers

Beyond its technical advantages, the Web can bring enormous creative energy to VR because of its breadth of application and its huge base of developers and users. That base can help VR technology win a people's war, so that VR is no longer just a capital game for industry giants but enters every corner of users' daily lives.

I believe that, given time, VR applications will become as ubiquitous as today's apps. Large numbers of developers will pour in through the low threshold of web development, all kinds of wild ideas will emerge in an endless stream, and virtual reality will become a necessary business tool for e-commerce merchants and others. Once it reaches that stage, VR will not be far from real prosperity.

Develop Web-side VR content

Next, let us get hands-on and produce some Web-side VR content to experience the convenience of WebVR. Many VR experiences today are delivered as applications, meaning you must search for and download them before you can experience VR. Web VR changes that form: it moves the VR experience into the browser, Web + VR = WebVR. Before diving into practice, let us first review the state of WebVR implementation.

WebVR development methods

There are three ways to develop VR applications on the Web:

HTML5 + JavaScript + WebGL + WebVR API

Traditional Engine + Emscripten[1]

Third-party tools, such as A-Frame[2]

The first approach uses WebGL together with the WebVR API: starting from a conventional web-side 3D application, it talks to VR devices through the API to obtain the corresponding VR behavior. The second builds content on a traditional engine such as Unity or Unreal and uses Emscripten to transpile the C/C++ code to JavaScript, thereby bringing the VR experience to the Web. The third wraps the first approach so that ordinary users with no programming background can produce web-side VR content. In this article we mainly take the first and third approaches as examples.

WebVR Draft

WebVR is an early, experimental JavaScript API that provides access to VR devices such as the Oculus Rift, HTC Vive, and Google Cardboard. VR applications need high-precision, low-latency interfaces to deliver an acceptable experience. An interface such as DeviceOrientationEvent can supply shallow VR input, but it does not meet the accuracy requirements of high-quality VR. WebVR provides a dedicated interface to the VR hardware, allowing developers to build comfortable VR experiences.

The WebVR API is currently available on the Oculus Rift via Firefox Nightly, in an experimental build of Chrome, and in the Samsung Gear VR browser.
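To give a feel for the draft API's shape, here is a minimal sketch of enumerating displays and requesting presentation; `canvas` and `enterVRButton` are assumed to already exist on the page, and since the API is experimental the details may differ between browser builds.

// Feature-detect the WebVR draft API.
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then(function (displays) {
    if (displays.length === 0) { return; }
    var vrDisplay = displays[0];
    console.log('Found VR display: ' + vrDisplay.displayName);
    // Presenting must be triggered by a user gesture, e.g. a button click.
    enterVRButton.addEventListener('click', function () {
      vrDisplay.requestPresent([{ source: canvas }]);
    });
  });
}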

Using A-Frame to develop VR content

If you want to try WebVR development with a lower threshold, you can use the A-Frame framework from the MozVR team. A-Frame is an open-source WebVR framework for creating VR experiences with HTML. Scenes built with it run on smartphones, PCs, the Oculus Rift, and the HTC Vive. MozVR developed A-Frame to make building 3D/VR scenes easier and faster, so as to draw the web development community into the WebVR ecosystem: for WebVR to succeed it needs content, yet there are only a small number of WebGL developers compared with millions of web developers and designers. A-Frame wants to give everyone the power to create 3D/VR content. It has the following advantages and characteristics:

A-Frame reduces redundant code. Complex, repetitive boilerplate has been an obstacle for early adopters; A-Frame collapses it into single lines of HTML. Creating a scene takes just one tag.

A-Frame is designed for web developers. It is based on the DOM, so 3D/VR content can be manipulated like any other web application, and it can be used together with JavaScript libraries and frameworks such as d3 and React.

A-Frame keeps code structured. Three.js code tends to be loosely organized; A-Frame builds a declarative entity-component system on top of Three.js. Components can also be published and shared, so other developers can reuse them in HTML form.

The code is implemented as follows:

<!-- Include the A-Frame framework -->
<script src="./aframe.min.js"></script>
<a-scene>
  <!-- Define and create a sphere -->
  <a-sphere position="0 1 -1" radius="1" color="#EF2D5E"></a-sphere>
  <!-- Define and create a box -->
  <a-box width="1" height="1" rotation="0 45 0" depth="1" color="#4CC3D9" position="-1 0.5 1"></a-box>
  <!-- Define and create a cylinder -->
  <a-cylinder position="1 0.75 1" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
  <!-- Define and create the ground plane -->
  <a-plane rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
  <!-- Define and create a color-based sky box background -->
  <a-sky color="#ECECEC"></a-sky>
  <!-- Set the camera position -->
  <a-entity position="0 0 4">
    <a-camera></a-camera>
  </a-entity>
</a-scene>
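Because an A-Frame scene is ordinary DOM, it can be scripted like the rest of the page. Below is a minimal sketch against the scene above; the `spin` component name is invented for illustration.

<script>
  // Query and restyle an entity with the standard DOM API.
  var sphere = document.querySelector('a-sphere');
  sphere.setAttribute('color', '#00FF00');

  // Register a small component in A-Frame's entity-component style.
  AFRAME.registerComponent('spin', {
    tick: function (time, delta) {
      // Rotate the entity a little on every frame.
      this.el.object3D.rotation.y += delta / 1000;
    }
  });
  sphere.setAttribute('spin', ''); // attach the component to the sphere
</script>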

Using Three.js to develop VR content

Earlier we mentioned another way to produce WebVR content that sits closer to the metal and is more flexible: using WebGL plus the WebVR API directly. Compared with A-Frame, the advantage of this approach is that VR support can easily be added to our own Web3D engine, and there is more room for low-level optimization, especially in the rendering module, to improve runtime VR performance and experience.

It does not matter if you have no Web3D engine of your own; you can use a mature rendering framework directly, such as Three.js or Babylon.js, both popular and excellent Web3D rendering engines. Next we take Three.js as an example and explain how to produce WebVR content with it.

First of all, the three elements of any rendering program are the same: a scene, a renderer, and a camera. They are set up as follows:

// Create the Three.js renderer
var renderer = new THREE.WebGLRenderer({antialias: true});
renderer.setPixelRatio(window.devicePixelRatio);
document.body.appendChild(renderer.domElement);
// Create the Three.js scene
var scene = new THREE.Scene();
// Create the Three.js camera
var camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 10000);
// Create the WebVR API camera controller and bind it to the main camera
var controls = new THREE.VRControls(camera);
// Use the standing pose
controls.standing = true;
// Create the WebVR API render controller and bind it to the renderer
var effect = new THREE.VREffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);
// Create a global VR manager object and set its initial parameters
var params = {
  hideButton: false, // Default: false.
  isUndistorted: false // Default: false.
};
var manager = new WebVRManager(renderer, effect, params);
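Note that THREE.VRControls, THREE.VREffect, and WebVRManager are not part of the Three.js core: the controls and the effect ship with the Three.js examples, and the manager comes from the open-source webvr-boilerplate project. A plausible set of includes follows; the file names and paths are assumptions, so adjust them to your project layout.

<script src="./three.min.js"></script>
<!-- From the Three.js examples directory -->
<script src="./VRControls.js"></script>
<script src="./VREffect.js"></script>
<!-- Polyfill for browsers without native WebVR, plus the boilerplate manager -->
<script src="./webvr-polyfill.js"></script>
<script src="./webvr-manager.js"></script>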

The above code completes the initialization settings before rendering. Next, you need to add specific model objects to the scene. The main operations are as follows:

function onTextureLoaded(texture) {
  texture.wrapS = THREE.RepeatWrapping;
  texture.wrapT = THREE.RepeatWrapping;
  texture.repeat.set(boxSize, boxSize);  
  var geometry = new THREE.BoxGeometry(boxSize, boxSize, boxSize);  
  var material = new THREE.MeshBasicMaterial({
    map: texture,
    color: 0x01BE00,
    side: THREE.BackSide
  });  
  // Align the skybox to the floor (which is at y=0).
  skybox = new THREE.Mesh(geometry, material);
  skybox.position.y = boxSize/2;
  scene.add(skybox);  
  // For high end VR devices like Vive and Oculus, take into account the stage
  // parameters provided.
  setupStage();
}
// Create 3D objects.
var geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
var material = new THREE.MeshNormalMaterial();
var targetMesh = new THREE.Mesh(geometry, material);
var light = new THREE.DirectionalLight( 0xffffff, 1.5 );
light.position.set( 10, 10, 10 ).normalize();
scene.add( light );  
var ambientLight = new THREE.AmbientLight(0xffffff);
scene.add(ambientLight);
var loader = new THREE.ObjectLoader();
loader.load('./assets/scene.json', function (obj){
    mesh = obj;    
    // Add cube mesh to your three.js scene
    scene.add(mesh);
    mesh.traverse(function (node) {
        if (node instanceof THREE.Mesh) {
            node.geometry.computeVertexNormals();
        }
    });    
    // Scale the object
    mesh.scale.x = 0.2;
    mesh.scale.y = 0.2;
    mesh.scale.z = 0.2;
    targetMesh = mesh;    
    // Position target mesh to be right in front of you.
    targetMesh.position.set(0, controls.userHeight * 0.8, -1);
});
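The listing above assumes a few pieces declared elsewhere: boxSize, skybox, and mesh, plus a texture load that invokes onTextureLoaded and the setupStage function it calls. Here is a sketch of that glue, with a hypothetical texture path and a setupStage that simply reads the stage parameters mentioned in the comment:

// Assumed declarations and the texture load that drives onTextureLoaded.
var boxSize = 5;
var skybox, mesh;
var textureLoader = new THREE.TextureLoader();
textureLoader.load('./img/box.png', onTextureLoaded); // hypothetical path

// Query the connected displays for room-scale stage parameters.
function setupStage() {
  if (!navigator.getVRDisplays) { return; }
  navigator.getVRDisplays().then(function (displays) {
    if (displays.length > 0 && displays[0].stageParameters) {
      var stage = displays[0].stageParameters;
      console.log('Stage size: ' + stage.sizeX + 'm x ' + stage.sizeZ + 'm');
    }
  });
}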

The last step is to set up updates inside requestAnimationFrame. In the animate function, we continuously fetch the information returned by the HMD and update the camera.

// Request animation frame loop function
var lastRender = 0;
function animate(timestamp) {
  var delta = Math.min(timestamp - lastRender, 500);
  lastRender = timestamp;
  // Update the VR headset position and apply it to the camera.
  controls.update();
  // Render the scene through the manager (updates the camera and draws the scene).
  manager.render(scene, camera, timestamp);
  requestAnimationFrame(animate);
}
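A common companion to this loop, not shown in the original listing, is keeping the renderer and camera in step with the window size and actually starting the loop:

// Keep the stereo effect and the camera projection in sync with the window.
function onResize() {
  effect.setSize(window.innerWidth, window.innerHeight);
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
}
window.addEventListener('resize', onResize);

// Start the render loop.
requestAnimationFrame(animate);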

Lessons and experience

With the above, we can implement a web-side VR application with basic interaction, but that is only the first step; a purely technical prototype is still some distance from an engineered product. A user-facing product must consider far more than a prototype does: rendering quality, smoothness of interaction, depth of immersion, and so on ultimately decide whether users keep using the product and accept the service it provides. So there is still a great deal of optimization and polish to do before this technology is ready for production. Below are some personal lessons from building web-side VR applications, shared for the reader's reference.

Engine selection. If you use an existing WebGL engine, you can consult the documentation in [5] for VR SDK integration. The engine layer must be made compatible with the VR SDK layer, and VR mode must be integrated with the engine's tooling; the way desktop engines such as Unity3D and Unreal integrate VR SDKs is a useful reference. If you choose a third-party WebGL engine, Three.js or Babylon.js are good options; these mainstream WebGL engines have (at least partially) integrated VR SDK support.

Debugging devices. Debugging web VR applications requires support from actual VR devices. For desktop Web VR content, try to use a highly immersive device such as the HTC Vive or the Oculus Rift. For mobile web applications, browsers on the Android platform differ widely and performance is inconsistent, so it is advisable to develop and debug on iOS devices; before the final release, however, compatibility testing and optimization across more Android devices are still needed.

Performance optimization. For 3D drawing and rendering on the Web, performance is still the main bottleneck, so real-time rendering performance must be pushed as high as possible to leave more resources for the VR part. Unlike a desktop VR SDK, current WebVR cannot reach many low-level GPU interfaces for deep optimizations such as stereo rendering, so it still costs considerable performance.

Known issues. WebVR is still unstable and has many bugs; for example, device tracking can be lost in some situations, and efficiency is not high. Most WebVR applications are best treated as reserves and pre-research for later products; there is still a long way to go before we can ship products that real users can use smoothly.

Let’s take a look at the development process of a VR DEMO.

Here is a brief introduction to the development stages you may be curious about:

1. The time required to develop a VR demo of a certain scale

Sixty days ago, a book called "Unity from Beginner to Master" sat on the table in our conference room. We were not sure whether a development window like that was a good thing; it could easily look like yet another product of "China speed".

Later we switched to the UE engine. UE would presumably be pleased: look how efficient our engine is, how good the results are, how convenient the Blueprint system is, how usable the marketplace assets are...

It should also be said that these 60 days did not start from zero. Our team skews older, basically 30+, with about ten years of 3D art experience each and a long history of playing Party B, so our grasp of technical feasibility and schedules is solid. And because this time we were building something of our own, everyone was more fully invested. So I call it a strange and happy stretch of time, though the price was high: during this period we took no new business orders.

2. About what engine to use for VR development

Honestly, it does not matter much which engine you use. I have read various clever posts by the masters on Zhihu debating whether to use UE or Unity for VR development, with opinion almost one-sidedly in favor of Unity, along the lines of "UE will lead you into pits"... But after studying for a while we felt UE suited us, and we are still living with it fairly happily. Have you read the story of the little pony crossing the river?

It reminds me of the tools we used before: MAX and MAYA. Both are good; it depends on what you use them for. For architectural animation, use MAX, because it is fast; for character-based animated films, use MAYA, whose animation module is easy to use and suits team collaboration.

Besides, UE is working very hard now. Judging from its official tutorials, a lot of effort has gone into the marketplace, and the Blueprint tool is particularly suited to visually minded teams.

Yet it is poignant to see how few views UE's official public account gets each time, an interesting contrast with how hot the VR theme is in China. I really do not know what accounts people are following instead.

Another big problem is that relatively few people know UE, so we can only spend time exploring on our own; ready-made hires are hard to find. Call it a deadlock.

3. Advantages and disadvantages of a CG team developing VR

I think our advantage lies in visual design. For example, we believe the Vive controller, as an interaction tool, should look different under different scene themes, so a variety of beautiful controllers appeared; I think they could all go up on the official marketplace.

I think VR content developers in China can be divided by background into several categories:

The first is panoramic video, with a relatively low technical threshold. People with this background mostly come from live-action shooting; their previous title was usually "director".

The second is game companies. Their advantage is that their pipelines and workflows resemble VR's, so the technical transition is easy. The awkward part is that pursuing visual quality really calls for next-generation game craft, but in China most studios now focus on mobile games, and next-generation work is mostly handled as outsourcing.

The third comes from the CG industry, which is our kind. The advantage is a stronger pursuit of visuals; some foreign teams doing quite well have taken this route. It does require some technical upgrading and transformation, but fortunately the underlying principles are similar and can be grasped quickly.

The fourth is teams of programmers. Their advantage is that they can build more powerful code, write the shaders they want, and get the features they want; what takes effort is making things look good.

The last kind merely claims to be doing VR.

4. The UE engine's Blueprints

Blueprints are the visual scripting system UE developed specifically to improve productivity. They helped us enormously; without them our demo could not have been made.

Here is an example:

In the second half of the demo, your arms turn into wings. You flap them and fly into the sky, arriving over a salt lake where water and sky share one color, and your wings dissolve into feathers drifting everywhere... a romantic scene.

Realizing this takes Blueprints. Our annotations are very down-to-earth; those who can read them will certainly understand.

[Figure: the annotated Blueprint graph from the demo]

In short, this is a demo built entirely with Blueprints, without writing a single line of code. Doesn't that sound impressive? What, after all, are Blueprints for? Precisely to let you focus on what contributes most to the final result. The process that suits your team is the right process.

5. Another technical point: the PBR workflow

The PBR workflow is said to have been borrowed by the game industry from the CG industry, though I am ashamed to say that in the later years of our CG work, given the kinds of projects we did, we rarely even unwrapped UVs.

The PBR material system lets objects show more realistic surfaces and richer detail in the engine, and it is the standard material system of this generation of game engines. Artists can paint textures and tune materials with logic that matches real-world lighting. The Substance tool suite built on this system pulls texturing, materials, and shaders, previously spread across several collaborating tools, into one coherent pipeline; its flexible workflow saves a great deal of production time, makes serialized texture materials easy to iterate and update, and makes the texturing process more intuitive and efficient.

Using Substance Painter, this workflow beats the back-and-forth between PS and MAX for flattening textures that we endured in our early years.

Technology matters, but in the face of a new audio-visual medium, creativity matters too, and it is one of the hard parts of VR experience design.

7. On choosing a subject for a VR experience

The subject we chose is actually fairly unpopular; most VR you can see now is about shooting guns. We seemed more interested in cultural subjects, and several of our colleagues are aviation fans, so we chose this one.

We noticed a foreign team on Steam had made an Apollo moon-landing experience, so we downloaded and studied it, but we felt some of what it offered was the wrong kind of experience; the team struck me as a bunch of engineers so dull I could not bear it. Cultural expression is something we are relatively good at, which may have to do with our past work, and new technology should be used to do something meaningful. Fighting monsters is fine, but only fighting monsters would be a mistake.

My own conclusion: the best subject is one you love, have studied in depth, and that suits VR expression.

8. How will the narrative language of VR be different from before?

Creatively speaking, the director used to hold the camera and decide where you looked, with montage and the rest to keep you guessing; in the world of VR you are a participant on the scene, and the director has to guide you to see, and take part in, what he wants you to.

We have planned and executed many live events since 2008, and I think that experience is actually close to VR: in a large space full of noise and light, guiding the audience's line of sight and advancing the plot is a delicate and entertaining problem.

So I think the visual language of VR is not the language of film but that of live events and performance.

From the standpoint of technical process, the old compositing and editing roles have retired, though not entirely; they are still there, just on a new platform. UE now includes non-linear editing, and you can apply post-production filters and the like. It is genuinely fun.

9. About the process and division of labor

About 20 people were involved altogether. Most were concentrated on 3D art assets, a distribution similar to game development. There is a director, our upstanding teacher Pu An, who coordinates everything and keeps artistic control; the other colleagues and I push the technical process forward together.

Fundamentally, this subject may have been a bit large to start with. The upside is that the whole audio-visual language got discussed more fully and deeply; the downside is that it demands a lot of resources. Over the past decade we have become a team particularly good at producing large volumes of strong work, but within the limited time some assets may not be accurate enough; with more time it would be better. We were lucky not to hit any pit we could not climb out of, and disasters kept turning into good fortune. For example, I initially worried that so many objects would eventually drag things to a halt, but with sensible loading and hiding the frame rate held up; the engine's carrying capacity is actually better than we imagined.
