Hi mford610,
In your example clip, the transitions happen at regular intervals and are clearly visible. I assume that is not what you are after here.
The production year of 1977 certainly indicates an analog process. As far as I can tell, there was no camera move at all, just images scaled and blended into one another. I assume that is not what you asked for, since it leaves out any perspective change.
As with anything in this area, the cinematography is the point. As any cinematographer will tell you, prefer a dolly over a zoom. A zoom is like scaling a flat image: the camera isn't moving, so there is no perspective change, just scale, which is a big no-no if we are talking about a good audience experience. The perspective change created by a camera move is what immerses the audience.
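To make that difference concrete, here is a minimal pinhole-camera sketch (the object sizes, distances, and focal lengths are arbitrary, purely for illustration): a zoom keeps the size ratio between a near and a far object constant, while a dolly changes it, and that change is the perspective shift.

```python
def projected_height(obj_height, distance, focal_length):
    # Pinhole projection: image height = focal_length * object height / distance
    return focal_length * obj_height / distance

# Two cubes, each 50 units tall: one 100 units away, one 400 units away.
near, far, h = 100.0, 400.0, 50.0

# Zoom in by doubling the focal length (36 -> 72); the camera stays put.
zoom_ratio = projected_height(h, near, 72.0) / projected_height(h, far, 72.0)

# Dolly in by moving the camera 50 units forward; focal length stays at 36.
dolly_ratio = projected_height(h, near - 50.0, 36.0) / projected_height(h, far - 50.0, 36.0)

print(zoom_ratio)   # 4.0 -- same near/far size ratio as before the zoom
print(dolly_ratio)  # 7.0 -- the near cube grows faster: perspective change
```

Both shots make the near cube bigger, but only the dolly changes the relationship between the cubes, which is what the eye reads as moving through space.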
https://www.cineversity.com/vidplaytut/cinematography_part_07
Reading along, I guess that is your target, with the video clip serving only as a demonstration of the scale range.
I would likewise split this into several overlapping dolly moves. The camera motion has to be set to linear (vs. spline) interpolation while keyframing it. The camera's field of view does not change; it only pulls in or out. Linear comes close to the video example, although it is technically not a perfect match.
The camera move would start by framing a reference cube that is maybe 1/10 or 1/100 the size of the cube at the end. The move runs a little longer than needed to allow for a good blend (head and tail). Each scene can use the Project settings to adjust scale, but since I don't know which objects (e.g., dynamics) you will use, this needs to be explored in detail.
As an example, you might have a head of 24 frames, then the main part (start to end) of 240 frames, then a tail of 24 frames, so you know your match points. For the movement before and after the keyframed range: Timeline > Functions > Track Before/After.
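The head/main/tail bookkeeping can be sketched like this (frame counts from the example above; how the scenes overlap on the master timeline is my assumption about the layout):

```python
head, main, tail = 24, 240, 24   # frames, as in the example above
scales = [1, 10, 100, 1000]      # one scene per order of magnitude

# Each scene's main section starts where the previous main ended; the
# head and tail frames overlap the neighbouring scenes for the cross-blend.
match_points = []
for i, scale in enumerate(scales):
    start = i * main
    match_points.append((scale, start - head, start, start + main, start + main + tail))

for scale, h0, m0, m1, t1 in match_points:
    print(f"scale {scale:>4}: head {h0}..{m0}, main {m0}..{m1}, tail {m1}..{t1}")
```

The negative head on the first scene is exactly what Track Before/After covers.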
Since the movement needs to change over time, the suggested setup handles this by multiplying the scale by 10 in each step (1, 10, 100, 1000; 240 frames each). However, a constantly increasing speed would come closer to the effect where the "image" of the universe appears to move as fast as the image in a microscope view.
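That constant perceived speed means the camera distance should grow exponentially rather than linearly. A quick sketch (the 1-to-1000 range and 240 frames are taken from the example above; the interpolation itself is the point):

```python
def dolly_distance(frame, frames=240, d0=1.0, d1=1000.0):
    """Exponential dolly: the same scale *ratio* every frame, so the
    on-screen growth rate looks constant at every order of magnitude."""
    t = frame / frames
    return d0 * (d1 / d0) ** t

# Compare with a plain linear move, which races through the small scales
# and then seems to crawl once the large cube dominates the frame.
for frame in (0, 120, 240):
    linear = 1.0 + (1000.0 - 1.0) * (frame / 240)
    print(frame, round(linear, 1), round(dolly_distance(frame), 2))
```

Halfway through, the linear move is already at distance 500, while the exponential one is only at about 31.6 (the geometric midpoint of 1 and 1000), which is what keeps the perceived speed uniform.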
If you leave the camera's field of view at the default, an object fills the frame when the camera's distance equals the object's size. So a 200-unit cube needs a distance of -200 to fill the screen.
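The reason the distance equals the size is the default lens: 36 mm focal length on a 36 mm sensor width gives a horizontal field of view of 2*atan(0.5), about 53.13 degrees, for which 2*tan(fov/2) = 1. A small sketch of the general formula (a simplification that ignores the cube's depth):

```python
import math

def fill_distance(obj_size, fov_deg):
    # Distance at which a flat object of the given size exactly
    # spans the field of view: size / (2 * tan(fov / 2))
    return obj_size / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Default 36 mm lens on a 36 mm sensor: fov = 2 * atan(18/36) = 53.13 deg
default_fov = math.degrees(2.0 * math.atan(18.0 / 36.0))
print(round(fill_distance(200.0, default_fov)))  # 200 -- distance matches size
```

With any other focal length the 1:1 shortcut no longer holds, and this formula gives the distance to use instead.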
Have a look here: (linear motion)
https://www.amazon.com/clouddrive/share/GujYgOyVgo8rHVFnm8c5p7gCKMjTSGWtwseVXh6Ngco
If you use motion blur, the blend will be much easier.
Starting with the smallest scene first gives you its rendered image, which can be camera-mapped into the next scene. Reflections might prevent this, as reflections change during a dolly move.
The camera move should take place inside an Environment Object to simulate sfumato (atmospheric depth perspective), which helps the visualization if the move is really long.
Once you have a clear idea of what each part contains, create previews and edit them together to see what needs attention.
As a side note:
If all of it needs to be done in just a few scenes, the first part of this tutorial might help.
https://www.cineversity.com/vidplaytut/use_fbx_to_export_lods_from_cinema_4d_to_unreal
Let me know if there is anything else, I’m happy to look into it.
Cheers