Has anyone tried it yet?
I did, here are the results.

Images taken the day after the recent eclipse. (The moon wasn’t red at the time of capture; apologies for that.)
The moon’s image is replaced in real time: when you tap to focus on the moon, it visibly changes, and when the picture is taken, another optimization pass is applied on top.
These are from zoom levels where pro res kicks in (two images are at the full 60x and one is lower, I don’t remember exactly). Being a data scientist, I do understand what the AI is doing.
But, but, but, hear me out.
There needs to be some sort of filtering on the optimization. My guess is that the captured image is passed through a pre-trained model that outputs the enhancements, something like the sketch below, and no doubt that works well in general scenes.
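To make the guess concrete, here is a minimal sketch of what such an on-device pipeline might look like. This is purely speculative: the `EnhancementNet` architecture, the residual-prediction design, and the input are all hypothetical stand-ins, not anything confirmed about the actual phone.

```python
# Purely a guess at the pipeline. EnhancementNet and its residual design
# are hypothetical stand-ins for whatever proprietary model is used.
import torch
import torch.nn as nn

class EnhancementNet(nn.Module):
    """Stand-in for a pre-trained detail-enhancement model."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual "optimization" and add it onto the raw frame.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

model = EnhancementNet().eval()

raw_frame = torch.rand(1, 3, 256, 256)   # stand-in for the sensor crop at 60x
with torch.no_grad():
    enhanced = model(raw_frame)           # the "replaced" moon you see on screen
```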
My solution to this problem would be to fine-tune the model on pairs of raw captures and hand-tuned images of the moon, given the true input. That would let the model learn from the manual tuning variations instead of hallucinating detail wholesale, but I don’t think this is in the team’s hands, as they might be using some third-party proprietary model.
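If they did have access to the weights, the fine-tune itself would be straightforward. A minimal sketch, reusing the hypothetical `EnhancementNet` from above and assuming a small dataset of (raw crop, hand-tuned reference) pairs, which here is just random placeholder data:

```python
# Minimal fine-tuning sketch, assuming access to the model weights and a
# small dataset of (raw sensor crop, hand-tuned reference) moon pairs.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data standing in for real raw/hand-tuned moon pairs.
raw = torch.rand(64, 3, 256, 256)
hand_tuned = torch.rand(64, 3, 256, 256)
loader = DataLoader(TensorDataset(raw, hand_tuned), batch_size=8, shuffle=True)

model = EnhancementNet()  # hypothetical model from the sketch above
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR: only nudge the weights
loss_fn = torch.nn.L1Loss()  # pixel-wise loss pulls output toward the hand-tuned target

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

The low learning rate is deliberate: the idea is to bias an already general model toward the hand-tuned look for moon-like inputs, not to retrain it from scratch.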
I know some Samsung phones did this previously, but they listened to feedback and corrected it to the point that the moon no longer looks replaced.
I’m not a pro photographer, so I’d love to hear some opinions on this.
That said, the captured images were a jaw-dropper for a few seconds, until my neurons fired. 😆