With the point-tracking method, the problem I encountered was that I couldn't fully track the movement of the points on the upper face, so I had to manually pull them back to the correct position in many places, which increased the workload considerably. With the Denoise and SmartVector approach, the problem was that the face underwent a large deformation that I could not restore to its original position, which caused the tracking to fail.
Use Denoise first, and note that it is better to place the analysis region on the darkest part of the frame, so the analysis captures the most information. Export to EXR to keep more information. Then use SmartVector to export motion vectors for the denoised footage to EXR (you can export directly without changing the channel settings).
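A minimal sketch of this pipeline in Nuke's Python API (the file path and frame range are placeholders of mine, and the node class names can differ between Nuke versions):

```python
import nuke

# Denoise the plate, then write it out as EXR so the extra data is kept.
read = nuke.createNode('Read')                    # the noisy plate
denoise = nuke.createNode('Denoise2')             # class may be 'Denoise' in older versions
denoise.setInput(0, read)                         # place its analysis box on the darkest area

write = nuke.createNode('Write')
write.setInput(0, denoise)
write['file'].setValue('/tmp/denoised.####.exr')  # placeholder path
write['file_type'].setValue('exr')
write['channels'].setValue('all')
nuke.execute(write, 1, 249)                       # placeholder frame range

# Read the denoised EXRs back and generate SmartVector motion vectors from them.
vectors = nuke.createNode('SmartVector')          # NukeX-only node
```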
The Keylight effect is easy to use and very good at handling reflections, translucent areas, and hair; it lets you precisely control the blue- or green-screen spill that remains on foreground objects and replace it with ambient light from the new composite background.
I encountered two problems with the green-screen keying. The first was that, after Keylight, the side of the machine still showed a little green, so I used a roto to fix it: I set it to affect the alpha so that the machine's original color is restored. The second was the boom pole crossing through the top of the frame. I also wanted to solve it with a roto, but after the roto that part became a black background, so I adjusted the roto's parameters. You can see that the two Bezier shapes use different modes on the color side.
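For the leftover green on the machine, one standard despill trick (not necessarily what Keylight does internally; this is the common "limit green" formula) clamps green to the average of red and blue inside the roto'd area. A minimal numpy sketch, with the roto matte as an assumed input:

```python
import numpy as np

def despill_green(img, mask):
    """Limit green to the average of red and blue where mask == 1.

    img:  (H, W, 3) float image; mask: (H, W) roto matte in [0, 1].
    This is the classic 'green limited by (r + b) / 2' despill, applied
    only inside the rotoscoped region around the machine.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    g_fixed = np.minimum(g, (r + b) / 2.0)
    out = img.copy()
    out[..., 1] = g * (1.0 - mask) + g_fixed * mask  # blend by the roto matte
    return out
```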
In this lesson we learned about AOV layered rendering and compositing.
This layering is object layering: for example, dividing a scene into independent layers, putting the character on one layer and the background on another, so the background is not rendered while rendering the character. If the camera does not move, you can sometimes render just one frame of the background, saving render time. AOV layering is channel layering: an object's channels are split into a lighting layer, color layer, reflection layer, diffuse layer, and so on, and then composited in Nuke; multi-channel compositing is also called deep compositing. The channels we covered:

Z — generally used for atmospheric fog and depth of field in the scene.
AO — ambient occlusion, used to layer in the contact shadows generated where objects touch, adding a sense of volume.
coat — the clear-coat layer, a second layer of specular on the material; glazed materials such as celadon have this, and so does car paint.
diffuse — the color layer with lighting information, containing nothing of the material other than its color.
direct — the direct-lighting layer: the effect of the scene's lights shining directly, without any photon-bounce contribution.
emission — literally self-illumination; this channel extracts the materials in the scene that have an emission parameter, making self-lit content easy to adjust later.
indirect — the counterpart of direct above: the lighting Arnold computes from photon bounces, i.e. indirect illumination. For example, when sunlight shines into a house, the places it does not hit directly still brighten; that is photon bounce, i.e. indirect lighting.
motionvector — the motion-vector channel, used later for motion blur. Because the scene had no animation, I used an animated ball for this example.
opacity — the transparency channel; anything with transparent properties is extracted here, making transparent materials easy to control.
specular — the highlight layer, used later to adjust the intensity of the highlights from this AOV.
sss — when an object has a subsurface, skin-like material, this channel makes it convenient to adjust the amount of SSS later.
transmission — the refraction layer, for materials with refractive properties such as water and glass; this channel makes it convenient to adjust the amount of refraction and related properties.

Finally, when outputting AOVs, choose the channels according to the content of the file. For example, if there is no refractive glass-like material in the scene, don't output transmission, and so on. Output only what you or the downstream department needs most, so you can reduce render time and improve efficiency.
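To see why these channels are useful: the beauty pass can be rebuilt (approximately) by plus-ing the light AOVs back together, which in Nuke is a chain of Merge nodes set to "plus". A minimal numpy sketch, with the AOV names following the Arnold set listed above:

```python
import numpy as np

# Rebuild the beauty pass from Arnold-style light AOVs: roughly the sum of
# diffuse, specular, coat, transmission, sss and emission. Any AOV that was
# not rendered (e.g. transmission when the scene has no glass) is skipped.
def rebuild_beauty(aovs):
    layers = ['diffuse', 'specular', 'coat', 'transmission', 'sss', 'emission']
    beauty = None
    for name in layers:
        if name not in aovs:
            continue
        beauty = aovs[name] if beauty is None else beauty + aovs[name]
    return beauty

# Example of the workflow: boost only the highlights, then recombine.
# aovs['specular'] *= 1.2
# beauty = rebuild_beauty(aovs)
```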
In this class we also learned about relighting. The basic principle is to first model the real scene to create a digital version of it. Then we sample points in different parts of the real scene, capturing an omnidirectional cube map at each point; from these we can derive the lighting response at each point in the scene and apply it to the digital scene. Finally, you can move the light source freely and still get a rendering that looks fine. In my understanding, this is more like a 2.5D technique. Below are the four methods of relighting.
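A minimal sketch of the core relighting lookup, using a lat-long panorama instead of a cube map for simplicity (my own simplification, not the exact method from class): given a direction, return the captured environment light.

```python
import numpy as np

def sample_environment(env, direction):
    """Look up incoming light from a captured panorama.

    env: (H, W, 3) lat-long environment image captured at a scene point.
    direction: unit 3-vector (x, y, z) to sample, e.g. a surface normal.
    """
    x, y, z = direction
    # Standard lat-long mapping: azimuth -> u, polar angle -> v.
    u = (np.arctan2(x, -z) / (2.0 * np.pi) + 0.5) * env.shape[1]
    v = (np.arccos(np.clip(y, -1.0, 1.0)) / np.pi) * env.shape[0]
    return env[int(v) % env.shape[0], int(u) % env.shape[1]]
```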
At first I kept failing: the cards would not follow the tracked movement successfully, so I had to spend a whole day fixing it. Summing up afterwards, I think the main reason for my failure was the wrong use of LensDistortion.
So here is my summary of the correct way to use it.
① This is the lens-distortion data provided with the pre-shoot footage, which we should make full use of. In Analysis, run Detect (the preview lets you see the detected lines), then Solve at keys1. ② This is copied from ①; because I missed this step, the track itself ended up with no distortion data. ③ The important thing is to connect the source to the ScanlineRender so it can be corrected (when I previously connected it to the bg card, nothing showed).
Also, take care to open the script with NukeX, otherwise the lens tools will not work. When tracking, pay attention to the timeline and check that all 249 frames were tracked (once I only tracked 100 frames, which caused the track to fail later).
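A sketch of the node wiring from steps ①–③ in Nuke's Python API (the LensDistortion class name varies between Nuke versions, and the ScanlineRender input order below — bg = 0, obj/scn = 1, cam = 2 — is how I understand the node's arrows):

```python
import nuke

plate = nuke.createNode('Read')             # the 249-frame plate
lens = nuke.createNode('LensDistortion2')   # may be 'LensDistortion' in older Nuke
lens.setInput(0, plate)

cam = nuke.createNode('Camera2')            # solved camera, with the copied distortion data
card = nuke.createNode('Card2')             # the card to stick into the scene
scene = nuke.createNode('Scene')
scene.setInput(0, card)

render = nuke.createNode('ScanlineRender')
render.setInput(0, lens)    # bg: connect the source here so it can be corrected (step ③)
render.setInput(1, scene)   # obj/scn
render.setInput(2, cam)     # cam
```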
(Screenshots: the three important LensDistortion steps, and a wrong way to use LensDistortion.)
Class note:
In this lesson, we learned about projection.
The first method uses a card projection, so the text needs to be mirrored first. The second uses the card itself after a roto, but the projection camera is moving: a FrameHold is set at frame 120; the roto is turned into a mask and the patch is extracted; another FrameHold is added to paste that image back in; and another FrameHold is added below for the tracked camera. The third is similar to the second, but the roto itself is projected. The fourth is equivalent to flattening the replacement image into a plane, then drawing on and deforming the object to be changed.
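A minimal sketch of the frozen-frame projection from the second method, in Nuke's Python API (frame 120 is from my notes above; the Project3D class name varies by version):

```python
import nuke

cam = nuke.createNode('Camera2')       # the tracked, moving camera
hold = nuke.createNode('FrameHold')
hold.setInput(0, cam)
hold['first_frame'].setValue(120)      # freeze the camera at frame 120

patch = nuke.createNode('Read')        # the cleaned patch to paste back
proj = nuke.createNode('Project3D2')   # may be 'Project3D' in older Nuke
proj.setInput(0, patch)                # image to project
proj.setInput(1, hold)                 # project through the frozen camera

card = nuke.createNode('Card2')        # the card that receives the projection
card.setInput(0, proj)
```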
Then we learned how to create point clouds. Here is a screenshot from class, to make sure I can still make point clouds afterwards.
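A sketch of the setup, assuming NukeX's PointCloudGenerator node; the analysis itself is run from the node's buttons in the UI, so this only wires up the inputs (the input order is my assumption from the node's arrow labels):

```python
import nuke

plate = nuke.createNode('Read')      # the tracked plate
cam = nuke.createNode('Camera2')     # the solved camera from CameraTracker
pcg = nuke.createNode('PointCloudGenerator')
pcg.setInput(0, plate)               # Source input (assumed index)
pcg.setInput(1, cam)                 # cam input (assumed index)
```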
In this lesson we learned about tracking in a 3D scene. Unlike the 2D tracking I learned last semester, 3D tracking needs to let the camera solve a real 3D space, and more tracking points are required. The process is divided into several steps.
1. Set a reasonable number of tracking points (say 300 to 500) and let the camera track automatically. Solve after the calculation finishes.
2. Delete the points that failed, since they would interfere with the solve; the red dots are the ones that didn't keep up. Make sure the error value is below a certain threshold (below 1 is best), then update the solve (see the sketch after this list).
3. Establish a ground plane. To ensure the camera solves a correct 3D space, select multiple points and tell the solver that they form a plane (choosing the floor is best, otherwise the space can end up flipped).
4. Done. Create a Scene+; the small arrow adds the whole series of associated nodes at once.
5. Verify it: place a plane in the scene to judge whether the track succeeded.
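The idea behind step 2, as a plain-Python sketch (the threshold of 1 is the error value mentioned above; the track data structure is hypothetical):

```python
# Keep only solved tracks whose reprojection error is acceptable; the solver
# would then be updated on the surviving set (step 2 above).
def filter_tracks(tracks, max_error=1.0):
    """tracks: list of (track_id, reprojection_error) tuples."""
    kept = [(tid, err) for tid, err in tracks if err <= max_error]
    rejected = len(tracks) - len(kept)
    print(f"rejected {rejected} tracks with error > {max_error}")
    return kept
```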
I think a hot-air balloon turning into a falling ice cube is rather absurd in itself, so I simply matched it with a plausible newscast. Besides the hot-air balloon itself, if you look closely at the subtitles you will also find some interesting content.
ZDefocus can use a single point to set the depth of field in the image, and switching its output to "focal plane setup" uses green, pink, and blue to show more clearly where the focal plane sits. In RotoPaint, adjusting a stroke's "write on start/end" turns written text into an animation.
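A minimal sketch in Nuke's Python API (the ZDefocus class name and the exact knob value string are from memory and may differ by version):

```python
import nuke

# Switch ZDefocus to its diagnostic view: green = in focus, pink/blue = in
# front of / behind the focal plane. Then pick the focus point in the viewer.
zd = nuke.createNode('ZDefocus2')            # may be 'ZDefocus' in some versions
zd['output'].setValue('focal plane setup')   # knob value string is an assumption
```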