Hi mikehoium,
The compression of any footage depends heavily on its visual content, both per image and over time. The compression method one chooses is either a good fit for the material or works against it.
The key is to get the most data to YouTube, but in a way that it does not "recompress" already heavily compressed data. Uploading uncompressed often seems prohibitive, especially with 8K x 8K.
One of the best ways to get good quality is often to increase the frame rate, which is also the basis for a better viewing experience. In VR, where a head rotation results in a different image each frame, motion blur is out of the window.
But in normal cinema footage, motion blur is part of any fast-moving content (camera-based or object-based). Motion blur "eats" detail and therefore requires much less data.
Often a stream's bitrate is not adapted to the content; it is simply capped at a fixed maximum.
In these few lines I have reflected my personal understanding of it. In a nutshell: the problem of getting great quality comes down to how much data can be used per second, what frame rate is used, how large the image is, and how much change and detail is in the content.
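To make that trade-off concrete, one can estimate the average compressed data budget per pixel from bitrate, resolution, and frame rate. This is a minimal sketch; the bitrates and resolutions below are illustrative assumptions, not YouTube's actual targets:

```python
def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average compressed bits available per pixel per frame."""
    return (bitrate_mbps * 1_000_000) / (width * height * fps)

# Same 50 Mbit/s budget, spread over different resolutions and frame rates:
print(round(bits_per_pixel(50, 3840, 1920, 30), 3))  # 4K x 2K @ 30 fps -> 0.226
print(round(bits_per_pixel(50, 7680, 3840, 60), 3))  # 8K x 4K @ 60 fps -> 0.028
```

At the same capped bitrate, the 8K/60 stream has roughly an eighth of the data per pixel, which is why detailed, fast-changing content suffers first.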
If you now go and do your tests, use something that comes close to your intended content, and always use the same scene. After uploading, wait a while: the moment the movie already streams is (in my experience) not the moment YouTube is done with it. You can often see that HD is available while 4K (UHD) is not yet.
Whether to use 16:9 or 2:1 certainly depends on the material one has natively. If the camera setup (practically) gives you an equirectangular 2:1, one would lower the quality by changing it to 16:9, and vice versa.
It is like with any change, e.g., lens distortion correction: any pixel moved by less than a complete and precise pixel distance will result in less information (as in contrast, sharpness, etc.). Since that blurs the content to a certain degree, the compression has less work to do. If one then sharpens the content in post to get the visual quality (kind of) back, the pronounced edges one gets are not real content sharpness but visually forced sharpness. Such harm to the content is paid for with a higher bandwidth need, and since bandwidth is often already maxed out, the quality drops even further. If the content is oversampled, even smaller resolutions (here, pixel counts) can result in more bandwidth need, considering the Mbit-per-pixel ratio.
Here I see a reason why YouTube provides both ratio options and doesn't force you to convert and lower the quality.
Since we are talking about streaming, the end-user device and the decoder on the other side are in the game as well; results might vary drastically.
All of that is a formula for which I have no general answer, and even the presets from Adobe haven't satisfied me for all footage, as these presets have no idea about "my" content. So it is a little bit of trial and error.
Set up higher frame rates (yes, I know, more render time), or try to find compression settings where you can't see any loss, and perhaps go a little higher for delivery. Watch your videos (if possible) on a screen at 1:1. If you have only 2K or so, watch in After Effects at 100%. Often even 5K monitors are set to 2.5K to keep software interfaces readable, and QuickTime then can't show 4K content properly; it needs to be switched to 1:1 resolution.
I hope I could point you to some areas to look at, and to what matters. Since highly detailed content compresses differently than low-detail content (per image or over time), make your own tests. Keep in mind that one often sees up to (or even above) 90º, which is a quarter of the footage provided. If you use a 2K phone as the viewer, anything below 4K will fall short, but anything much larger might kill the stream. Tests, tests, tests.
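The 90º point can be checked with a quick calculation. A sketch, assuming equirectangular 2:1 footage and ignoring lens/projection distortion at the view edges; the resolutions are example values:

```python
def visible_pixels(equirect_width, fov_degrees=90):
    """Horizontal source pixels of an equirectangular 360 panorama
    covered by the viewer's horizontal field of view."""
    return int(equirect_width * fov_degrees / 360)

# A 90-degree field of view covers a quarter of the 360-degree width:
print(visible_pixels(3840))  # 4K equirect -> 960 px across the view
print(visible_pixels(7680))  # 8K equirect -> 1920 px across the view
```

So a roughly 2K-wide phone display would need something like an 8K-wide equirectangular source just to be fed 1:1 across a 90º view.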
Something similar was discussed here:
https://www.cineversity.com/forums/viewthread/1890/
All the best
P.S.: I checked [again] the availability of books on that topic, which I do frequently, but the latest book about compression I could find was from 2010: nothing newer. Assuming the research for that book was done in 2009 and before, its content is stone-age in a web-based age.
The articles found while googling the theme are mostly a year old, and the majority of them tried to sell software solutions, i.e., presumably biased information. The first book I bought about compression options and techniques was in the '90s, and even that book has seen no updates. In other words, the development in this area is obviously too fast for anything in print to stay current.