This sounds like a simple question to answer: just use the Bake Texture tag, enable the Reflection option without “Use Camera Angle”, and go (while having only this object visible, plus a white Sky object). But it’s certainly not that simple.
The first thing that comes to mind is the self-reflection. Well, in most cases a copy of the model, pushed out by a very tiny “Normal Shift”, would do the trick, with the copy getting only a white luminance material.
In most cases that should work, unless the inner edges are too tight for the shift. The small shift means only the bright copy gets reflected, and yes, colors (in the Reflection channel) will change the outcome. Which leads to:
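To illustrate the “Normal Shift” idea outside the Cinema 4D UI: the trick amounts to moving every vertex of a duplicate mesh a tiny distance along its averaged vertex normal, producing a slightly inflated reflector copy. The following is a minimal plain-Python sketch of that geometry operation; the mesh layout, the area-weighted normal averaging, and the shift amount are my own assumptions, not Cinema 4D internals.

```python
def shifted_copy(points, triangles, shift=0.01):
    """Return a copy of `points` moved `shift` units along averaged vertex normals."""
    normals = [[0.0, 0.0, 0.0] for _ in points]
    for a, b, c in triangles:
        pa, pb, pc = points[a], points[b], points[c]
        u = [pb[i] - pa[i] for i in range(3)]
        v = [pc[i] - pa[i] for i in range(3)]
        # face normal = u x v (left unnormalized, so larger faces weigh more)
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        for idx in (a, b, c):
            for i in range(3):
                normals[idx][i] += n[i]
    out = []
    for p, n in zip(points, normals):
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
        out.append(tuple(p[i] + shift * n[i] / length for i in range(3)))
    return out

# Example: one triangle in the XY plane; its normal points along +Z,
# so every vertex moves up by exactly `shift`.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
print(shifted_copy(pts, tris, shift=0.1))
```

With too large a shift, tight inner edges of the copy start to self-intersect, which is exactly where the trick breaks down as described above.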
The next idea would be to move the noise shader’s texture into the Luminance channel, bake the Luminance channel, and use the baked texture for the transfer. If one needs a mask for several reflections, or wants to limit the effect to certain areas, that has to be combined as well, either in the Luminance channel or later.
Which brings up the question: why not use a texture from the start and share it? Yes, I’m aware of the differences between shaders (especially 3D shaders) and textures (with their fixed resolution). But if one has to create a texture anyway, the question seems kind of natural.
As you wrote, “I don’t expect the output to be a perfect PBS conversion…”. How much real-world quality (e.g., BTF-level accuracy) these textures will give you is not clear to me; “physically based rendering” is a loosely used term, applied to anything from “looks cool” to a precise simulation of, e.g., metallic car paint with “subtle angular dependency but also significant spatial variation” [Uni Bonn].
This may sound very geeky, but it is only meant to second your expectations.
All the best