Jungle Book VR
Posted: 11 May 2016 02:34 PM
Total Posts:  20
Joined  2006-05-09

Has anyone seen the Jungle Book VR scene?
It’s posted as content when you open MILK VR in the Samsung Gear.

I was wondering if anyone had any idea how Disney can get the quality of the VR so good.
The resolution is 4K, just like what I’ve been rendering within C4D, but I can’t get anywhere near that quality.
My render just looks very pixelated when I view it in the Gear VR.

I just wanted to open it up for discussion on what they might have done differently from the CV-VR cam workflow that I’ve been using.

Mike

Posted: 11 May 2016 03:10 PM   [ # 1 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Hi Mike,

Is that the clip you have seen?
https://samsungmilkvr.com/view/17UckX9nZqH
From what I can see here on my screen (5K), it looks closer to SD than HD, but perhaps I have to check again later on a different device. With a UHD source, that SD-like perceived quality would be expected.

The site shares this:
<canvas width="1140" height="696" class="player-canvas" style="width: 1140px; height: 696px;"></canvas>
That would give a ~90º+ field of view. So far I can see they use a 2:1 ratio for mono and 1:1 for stereo but, way more importantly, note the specs they use:
https://samsungmilkvr.com/portal/content/content_specs

They list “Resolution: Minimum 3840x1920 (3840x3840 for stereoscopic)”, which means the source might be much higher; perhaps that is your answer already: MINIMUM.

Compare these specs with yours and you might find an answer already. I quote: “Due to the nature of spherical video, higher resolution should be favored over other factors. Reducing the resolution will not help bitrates or the perceived quality of your video.” I haven’t found anything that would indicate a 4K or UHD limitation.
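To put that minimum into perspective, here is a quick back-of-the-envelope check (my own arithmetic, not anything from the Milk VR specs; the ~96º FOV is an assumed Gear VR value): an equirectangular spans the full 360º, so the headset only ever samples a FOV-sized slice of its width.

```python
def visible_pixels(equirect_width_px, fov_deg):
    """Horizontal source pixels a headset viewport samples from an
    equirectangular panorama (the full width spans 360 degrees)."""
    return equirect_width_px / 360.0 * fov_deg

# The spec minimum, 3840 px wide, through a roughly 96-degree FOV:
print(round(visible_pixels(3840, 96)))    # 1024 source pixels across the view
# Doubling the panorama width doubles what the viewport sees:
print(round(visible_pixels(7680, 96)))    # 2048
```

So even at the spec minimum, each eye is looking at roughly SD-width imagery, which matches the pixelated impression in the Gear VR.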

The more interesting question would be, what is your workflow, your settings, etc.?

To my understanding (and AFAIK), automatic compression is always less effective than a monitored, manually optimized (studio-based) compression. What studios do exactly is not public; it is their secret sauce, so to speak, whether for Blu-ray™ or any streaming, for example.

The main data stream for this format has an adjustable upper limit, e.g., 6000 kb/s, which must be kept, e.g., for stable streaming. That means if a lot of data is wasted on secondary details, other areas will suffer. An automatic codec is not a cinematographer; to it, everything is just data, so it may well spend the bits on the wrong areas.
Typically, images of a forest cause more problems than images with a large portion of blue sky, to give an orientation. In this example there are a lot of diffuse and/or dark areas, and especially those areas have no motion at all. Of course I have no first-hand information about this clip, but I could imagine something like the mask technique in JPEG: determining high-frequency and low-frequency areas, or where the refresh rate can be lower and where it needs to be higher. Again, just a guess.
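A tiny illustration of that budget (the 6000 kb/s figure is just the example number above, not a known Milk VR value):

```python
def bits_per_pixel(bitrate_kbps, width, height, fps):
    """Average bits the codec can spend on each pixel of each frame
    before the stream exceeds its cap."""
    return bitrate_kbps * 1000.0 / (width * height * fps)

# A 6000 kb/s cap spread over a 3840x1920 equirect at 30 fps leaves
# well under a tenth of a bit per pixel, so the codec must constantly
# decide which regions deserve the data:
print(round(bits_per_pixel(6000, 3840, 1920, 30), 4))   # 0.0271
```

With that little to go around, noisy or high-frequency regions starve everything else, which is exactly the triage an automatic encoder can get wrong.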

All the best

 Signature 

Dr. Sassi V. Sassmannshausen Ph.D.
Cinema 4D Mentor since 2004
Maxon Master Trainer, VES, DCS

Photography For C4D Artists: 200 Free Tutorials.
https://www.youtube.com/user/DrSassiLA/playlists

NEW: Cineversity [CV4]

Posted: 11 May 2016 05:11 PM   [ # 2 ]
Total Posts:  20
Joined  2006-05-09

Thanks Sassi,
That is the clip; the one I have is 3840 x 2160.
I think you hit the nail on the head: they may be doing a lot of tweaking to the compression settings (or have already). They also could be rendering at a higher res and then reducing it. I’ll keep playing around and see if I can’t improve the quality.
Thanks,
Mike

Posted: 11 May 2016 05:31 PM   [ # 3 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Thanks for the reply, Mike.

I do quite a bit of 360ºx180º shooting for my own work (I have been doing it since the late ’90s). Recently I went from 16K to 25K as a minimum size for the equirectangulars. (For 3D I’m more in the 40K to 64K range, as a minimum.)
The quality I get from downsampling is just so much better, but I still often hit a limit when I re-project material, e.g., for my “Little Planets”. https://plus.google.com/collection/kV0vZ
You will see that I often use Cinema 4D to convert my “stuff”, as it is the best image machine I can think of ;o) Yes, I’m biased, of course.

Compression is a huge topic. I got so used to my 6K RED raw footage that I initially thought the 4K camera on my P4 drone was just bad. There is a huge learning curve; even after decades of reading about compression, it never stops. I can only encourage you to explore this field carefully, as you mentioned already, and not fall into the trap of relying on third-party information as your only source. Understanding what you have and what you want opens the door to exploration.

One thing I can say with certainty: re-compressing anything is a bad idea, so for any rendering, use the best output format possible, certainly not 8 bit/channel, but I guess that is common knowledge by now. Softening dark areas and especially de-noising the material helps, e.g., with Monte Carlo and other noisy engines. Noise produces a lot of data.
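As a small sketch of why rendering large and down-sampling tames that noise (a plain-Python stand-in for a real resampler: a 2x2 box filter, where averaging four roughly uncorrelated noise samples about halves the noise amplitude):

```python
import random
import statistics

def box_downsample_2x(img):
    """Average each 2x2 block of a grayscale image into one pixel."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

random.seed(1)
# A flat grey frame plus simulated render grain (e.g. Monte Carlo noise):
noisy = [[0.5 + random.gauss(0.0, 0.1) for _ in range(64)] for _ in range(64)]
small = box_downsample_2x(noisy)

flat = [p for row in noisy for p in row]
flat_small = [p for row in small for p in row]
# The down-sampled frame carries roughly half the noise amplitude,
# which in turn gives the codec far less "data" to waste bits on:
print(round(statistics.pstdev(flat), 3), round(statistics.pstdev(flat_small), 3))
```

A production resampler (Lanczos, etc.) does better than a box filter, but the noise-averaging effect is the same idea.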

My best wishes


Posted: 11 May 2016 06:14 PM   [ # 4 ]
Total Posts:  20
Joined  2006-05-09

Hi,
So let me make sure I understand this. For equirectangulars, you’re shooting 16K to 25K (i.e., 16,000 px x 16,000 px)?
For video, I’ve considered this, but the render times would just be astronomical. Beyond that, for playing back video in Milk VR, it will only support up to a max size. So (again, just so I understand where you’re going), you’d render out a 16K video or more, and then compress it down to your final 4K-8K size depending on what Milk VR will allow (or any player; I’m using Milk VR because I’m using the Samsung Gear for our VR).
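For what it’s worth, the “astronomical” part is easy to quantify, since pixel count grows with the square of the width (taking “16K” as 15360 px wide, a common 2:1 convention; my assumption, not a Milk VR figure):

```python
def relative_cost(base_width, target_width):
    """Pixel count, a rough proxy for render time and storage, scales
    with the square of the width for a fixed 2:1 equirect aspect."""
    return (target_width / base_width) ** 2

print(relative_cost(3840, 15360))   # 16.0: a 16K frame is 16x a 4K frame
print(relative_cost(3840, 7680))    # 4.0: 8K is "only" 4x
```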

Beyond that, I guess I will just follow your advice, and start going over some of that compression stuff you’ve mentioned again.
Thanks,
Mike

Posted: 11 May 2016 06:38 PM   [ # 5 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

Thanks for asking, Mike,

I will try to guide you through this line of thought; each part builds on the last.

I talked about photos, not renderings, hence the link.

When I use large equirectangulars, it is to then retrieve a still or a video from them. Example: around 17 years ago, I had to produce a 20-minute architectural visualization, deadline one month later, with six different objects (interior viz). So I rendered one equirectangular for each room, placed it on a Sky object (sphere), and from there made a camera animation (the camera stayed at the center!), much like watching an equirectangular still with a head-mounted display these days. It is fast and clean.

An equirectangular is typically 2:1, so only half as high as it is wide, not square. Keep in mind that the anti-aliasing doesn’t need to be that high if the resolution is higher than needed and you plan to down-sample. The results from photos are so much cleaner that way: no weird sharpening, which destroys quality if done globally. Note that renderings are typically clean and without noise, but many techniques introduce it, e.g., Monte Carlo.
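The AA-versus-resolution trade-off can be put in one line (simple sample counting, assuming the samples are spread uniformly; this is my sketch, not a C4D formula):

```python
def effective_samples(render_scale, aa_samples):
    """Samples that contribute to each final pixel when rendering at
    render_scale times the delivery resolution and then down-sampling."""
    return render_scale ** 2 * aa_samples

print(effective_samples(2, 1))   # 4: rendering at 2x with minimal AA
print(effective_samples(1, 4))   # 4: the same count as 4x AA at 1x
```

The down-sampled version wins on texture and edge detail, though, since its extra samples also carry real image information, not just anti-aliasing.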

For video, you could create a high-res equirectangular still and keep only the animated parts as 3D in the scene; the rendering would then be just those parts, merged later. This, again, is just a suggestion, not meant as the “best or only” practice.
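The merge step is plain “over” compositing; here is a minimal per-pixel sketch (straight alpha, float values in 0..1; any compositor or a multi-pass workflow does the same thing):

```python
def composite_over(fg, alpha, bg):
    """Straight-alpha 'over': lay a rendered foreground pixel (the
    animated part) onto the pre-rendered still background."""
    return tuple(f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg))

# Where the animated render is fully transparent, the still shows through:
print(composite_over((1.0, 0.0, 0.0), 0.0, (0.2, 0.2, 0.2)))   # (0.2, 0.2, 0.2)
```

The payoff is that only the small animated region has to be rendered per frame; the expensive high-res background is rendered once.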

If you go to
http://store.kolor.com/virtual-tour/panotour-pro.html
you will see options to mix still and motion parts seamlessly. Example there: the TV in the clip plays a movie, while the rest seems to be based on a still. Loading the “environment” once and streaming only the small animated parts seems to be so much less data. What you see in the initial example of your post might be only partially animated. Of course, I don’t know for sure how they did it.

However, I hope I was able to share some thoughts and examples. In your example link I did not find any mention of “three.js”, which is another option, usually created via Unreal or Unity as the authoring tool. Just to be clear, I’m not a web coder and haven’t done any web work in nearly two decades, so I’m not savvy here anymore at all. ;o)

Analyze your ideas, find the best way and options, and deliver based on what is possible. To just dump an 8K-by-4K or even 8K-by-8K file is certainly not the idea, I agree.

All the best


Posted: 11 May 2016 06:47 PM   [ # 6 ]
Total Posts:  20
Joined  2006-05-09

Interesting thoughts as always,
Thanks.

Mike

Posted: 11 May 2016 07:02 PM   [ # 7 ]
Administrator
Total Posts:  12043
Joined  2011-03-04

You’re welcome, Mike.

I read my first book about VR in the late ’80s [William Gibson, Cyberspace, Heyne, Munich 1986], and I’m amazed at how much the field has evolved every day since.

My first published work is here (just to show you how much I like it), translation is in the post as well: https://plus.google.com/+DrSassiLA/posts/BQJU3pS2Bpe


Enjoy!

