Recently, Hongxing Erke's donation of 50 million yuan worth of supplies to flood-stricken Henan genuinely moved netizens. A 50-million-yuan donation from an ordinary company might not stir such feeling, but Hongxing Erke's own situation makes it poignant: its 2020 revenue was 2.8 billion yuan against a loss of 200 million, and even its official Weibo account hadn't sprung for a paid membership. Donating 50 million so generously under those circumstances really struck a nerve.
Netizens likened Hongxing Erke to the savings the older generation scrimped together and tucked away in a tin box: the moment it heard the motherland needed help, out came the tin box, handed over without hesitation. And the most expensive shoes it could offer? A pair priced at 249 yuan.
Then I went to Hongxing Erke's official website to look at its shoes.
Good grief: the website took 55 seconds to open. It really does seem to have fallen into disrepair, which is sad. As a front-end developer, seeing that was hard to bear...
It happened to be the weekend, so I visited the Hongxing Erke store nearest to me and bought a pair of shoes for 136 yuan (really cheap, and most importantly, comfortable).
After bringing them home, I got to thinking: Adidas and Nike products on the Dewu (Poison) app can be viewed online in 360°. Could I build something like that for Hongxing Erke? As an engineer, it would be my small contribution.
Action
Once I had the idea, I got to work immediately. I roughly broke the task into the following steps:
1. Modeling
2. Use Three.js to create a scene
3. Import the model
4. Add Three.js controller
Since I had studied some Three.js before, I was fairly comfortable with the display side once a model exists, so the troublesome part was the modeling itself. The problem is getting a 3-dimensional object into the computer. For a 2-dimensional object this is trivial: take a picture with a camera. A 3-dimensional object is different; the extra dimension increases the difficulty enormously. So I began consulting various materials on how to build a model of a physical object.
I checked a lot of material on building a shoe model, and the approaches boil down to two:
1. Photogrammetry: taking photos and reconstructing a 3D model purely through algorithms; in graphics this is also called monocular reconstruction.
2. LiDAR scanning: scanning with a lidar sensor to capture a point cloud; this is the method also shown in Mr. He's latest video.
Here is an outline I put together; most of the entries are foreign websites/tools.
In my initial searches most people mentioned 123D Catch, and plenty of videos claimed it could build realistic models quickly. Digging one step further, though, I found the product line was merged in 2017, and its successor, ReMake, requires payment, so I didn't pursue it for cost reasons. (After all, this was only a demo attempt.)
Later I found an app called Polycam whose results looked excellent.
But when I tried to use it, I found it requires a LiDAR scanner, which means an iPhone 12 Pro or newer.
In the end I chose RealityCapture to create the model. It can synthesize a model from multiple photos, and after watching some videos on Bilibili I felt its output quality was good. However, it only runs on Windows and needs 8 GB of RAM. So I dug out my 7-year-old Windows machine... I didn't expect it to still be in service, a pleasant surprise.
Modeling
Now the real work begins. The protagonist is the pair of shoes I just bought (the pair from the beginning).
Then we started shooting. First I took a casual set of photos around the shoes, but the resulting model was really unsatisfactory...
Next I tried a white-screen setup with an extra background layer, but that still didn't work: the application kept recognizing the numbers in the background instead of the shoe.
Finally... with Nan Xi's help, I switched to a plain white background.
The hard work paid off: the final result was pretty good, and a basic point-cloud model came out. (This felt great, like black technology straight out of a movie.)
Below is what the model looks like. It's the best one I produced after a full day of runs (though it still has some rough spots).
To get the model as close to perfect as possible, the testing took an entire day, because the shooting angles have a huge impact on the reconstruction. In total I shot about 1 GB of pictures, roughly 500 photos (in the early stages I didn't know how to tune the model, so I tried many approaches).
Once we have the model, we can display it on the web. Three.js is used here. (Since many readers are not in this field, I'll keep the explanation fairly basic; experts, please bear with me.)
Building the application
It consists mainly of three parts: building the scene, loading the model, and adding the controller.
1. Build the 3D scene
First we load Three.js:
```html
<script type="module">
  import * as THREE from 'https://cdn.jsdelivr.net/npm/three@0.129.0/build/three.module.js';
</script>
```
Then create a WebGL renderer:

```javascript
const container = document.createElement('div');
document.body.appendChild(container);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight); // fill the viewport
container.appendChild(renderer.domElement);
```
Then add a scene and a camera:

```javascript
const scene = new THREE.Scene();
```
The camera constructor is `PerspectiveCamera(fov, aspect, near, far)`:
```javascript
// Create a perspective camera
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.25, 1000);
// Set the camera position
camera.position.set(0, 1.5, -30.0);
```
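As a side note on what the `fov` parameter means: the visible height of the view frustum at a given distance follows directly from the vertical field of view. A small standalone helper (hypothetical, not part of the original code) illustrates the relationship:

```javascript
// Visible height (in world units) of a perspective camera's view at a
// given distance, derived from the vertical field of view in degrees.
function visibleHeightAtDistance(fovDeg, distance) {
  const fovRad = (fovDeg * Math.PI) / 180;
  return 2 * distance * Math.tan(fovRad / 2);
}

// With fov = 45 and the camera about 30 units from the origin (as above),
// roughly 24.85 world units of height are visible at the target.
const h = visibleHeightAtDistance(45, 30);
```

This is handy for choosing a camera distance and model scale that keep the shoe nicely framed.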
Finally, have the renderer draw the scene from the camera's viewpoint:

```javascript
renderer.render(scene, camera);
```
2. Model loading
Our exported model is in OBJ format and is very large, so I compressed it into glTF/GLB format. Three.js ships a GLTF loader that we can use directly.
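For reference, the OBJ-to-Draco-compressed-GLB conversion can be scripted in Node.js. This is only a sketch assuming the `obj2gltf` and `gltf-pipeline` npm packages; the post doesn't say which tool was actually used, and the file names are illustrative:

```javascript
// Node.js sketch (assumption: `npm install obj2gltf gltf-pipeline`):
// convert the exported OBJ into a Draco-compressed GLB for the web.
const fs = require('fs');
const obj2gltf = require('obj2gltf');
const { gltfToGlb } = require('gltf-pipeline');

obj2gltf('er4-1.obj')
  .then((gltf) => gltfToGlb(gltf, {
    dracoOptions: { compressionLevel: 7 }, // Draco mesh compression
  }))
  .then((results) => fs.writeFileSync('er4-1.glb', results.glb));
```

Draco compression is what makes the `DRACOLoader` below necessary on the loading side.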
```javascript
import { GLTFLoader } from 'https://cdn.jsdelivr.net/npm/three@0.129.0/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'https://cdn.jsdelivr.net/npm/three@0.129.0/examples/jsm/loaders/DRACOLoader.js';

// Load the model
const gltfloader = new GLTFLoader();
const draco = new DRACOLoader();
draco.setDecoderPath('https://www.gstatic.com/draco/v1/decoders/');
gltfloader.setDRACOLoader(draco);
gltfloader.setPath('assets/obj4/');
gltfloader.load('er4-1.glb', function (gltf) {
  gltf.scene.scale.set(0.2, 0.2, 0.2); // set the scale
  gltf.scene.rotation.set(-Math.PI / 2, 0, 0); // set the orientation
  const Orbit = new THREE.Object3D(); // pivot wrapper so we can rotate around the shoe
  Orbit.add(gltf.scene);
  Orbit.rotation.set(0, Math.PI / 2, 0);
  scene.add(Orbit);
  render();
});
```
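Note that `rotation.set` expects radians, not degrees: the `-Math.PI / 2` above is a −90° turn. A tiny helper (hypothetical, not in the original code) makes such calls more readable:

```javascript
// Convert degrees to radians, since Three.js rotation angles are radians.
function degToRad(deg) {
  return (deg * Math.PI) / 180;
}

// degToRad(-90) is equivalent to the -Math.PI / 2 used in rotation.set above.
const quarterTurn = degToRad(-90);
```

Three.js also ships `THREE.MathUtils.degToRad` for the same purpose.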
But if we open the page with only the code above, everything is dark, because we haven't added any lighting yet. So let's add a light to illuminate the shoes.
```javascript
// Set up lighting. An ambient light illuminates the whole scene
// uniformly, so it has no meaningful position.
const ambientLight = new THREE.AmbientLight(0xffffff, 4);
scene.add(ambientLight);
```
Now we can clearly see the shoes, like seeing light in the dark. But we still can't manipulate them with the mouse or gestures; for that we need a Three.js controller to control the viewing angle of the model.
3. Add a controller
```javascript
import { OrbitControls } from 'https://cdn.jsdelivr.net/npm/three@0.129.0/examples/jsm/controls/OrbitControls.js';

const controls = new OrbitControls(camera, renderer.domElement);
controls.addEventListener('change', render);
controls.minDistance = 2; // limit zoom-in
controls.maxDistance = 10; // limit zoom-out
controls.target.set(0, 0, 0); // center of rotation
controls.update();
```
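Conceptually, `minDistance`/`maxDistance` simply clamp the camera's distance to the target. A standalone sketch of that behavior (a hypothetical helper, not the actual OrbitControls source):

```javascript
// Clamp a proposed camera-to-target distance to the configured zoom
// limits, mirroring what OrbitControls does with minDistance/maxDistance.
function clampDistance(distance, min, max) {
  return Math.max(min, Math.min(max, distance));
}
```

So with the settings above, zooming out past 10 units or in past 2 units has no effect.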
At this point we can view the shoes from every angle.
Done!
Online experience address: https://resume.mdedit.online/erke/
Open source address (including tools, workflow notes and the actual demo): https://github.com/hua1995116/360-sneakers-viewer
Follow-up plan
Due to limited time (it took a whole weekend day), I still haven't produced a truly perfect model. I'll keep exploring this, and will also look into whether the whole pipeline, from shooting to displaying the model, can be automated. And once we have the model, AR shoe try-on isn't far away. If you are interested, or have better ideas and suggestions, feel free to reach out.
Finally, many thanks to Nan Xi, who put aside other plans to help with the shooting and post-processing, and who spent a whole day wrangling the model with me. (Shooting under such limited conditions is really difficult.)
I also hope Hongxing Erke becomes an enduring company, keeps innovating, makes more and better sportswear, and retains the public affection it enjoys today.
Appendix
These shooting tips come from the official RealityCapture guidance:
1. Don't limit the number of images; RealityCapture can handle any number of pictures.
2. Use high-resolution images.
3. Each point in the scene surface should be clearly visible in at least two high-quality images.
4. Move around the object in a circular manner when taking pictures.
5. The angle between consecutive shots should not exceed 30 degrees.
6. Start by photographing the entire object, then move in and focus on the details, making sure they all stay roughly the same size in frame.
7. Complete the full circle. (Don't stop after going only halfway around.)
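Tips 4, 5, and 7 together imply a minimum photo count: a full 360° orbit at no more than 30° per step needs at least 12 shots per ring. A quick back-of-the-envelope sketch (my own helper, not from the official tips):

```javascript
// Minimum photos for full coverage: one complete orbit per elevation
// ring, stepping at most `maxStepDeg` degrees between consecutive shots.
function minPhotos(rings, maxStepDeg) {
  const perRing = Math.ceil(360 / maxStepDeg);
  return rings * perRing;
}

// e.g. three elevation rings at the 30-degree limit -> at least 36 photos
const needed = minPhotos(3, 30);
```

In practice far more shots help (I ended up with ~500), since overlap between images drives reconstruction quality.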