

DON’T TRUST SNAKES

“I know where I'm headed.”
ROGER THORNHILL



Monday, November 21, 2005

O wonder! How many goodly creatures are there here!

Stanford University computer scientists have built a digital camera that captures enough information about the light entering the lens that they can shoot a photograph, load it into a computer, and change the focus of the image afterward. [LINK] I think the application they are demonstrating is only the tip of the iceberg. Assuming it's possible to make accurate enough initial measurements, then, among other things, the following should be possible with sufficient computing power (a minimal refocusing sketch appears after the list):

  1. Virtual camera movements. In large-format photography, it is possible to tilt the lens or the film plane or both to change the orientation of the plane of sharp focus. Picture two planes: one running through the lens perpendicular to the optical axis (the "lens plane") and the other being the plane of the film (the "film plane"). The plane of sharp focus is then a third plane that meets the lens plane and the film plane along a single common line. This is called the Scheimpflug principle. In a "normal" camera, the third plane is parallel to the other two (they intersect at infinity, as we learned in Honors Geometry). But in a large-format camera (a "view camera"), you can tilt the lens and/or film plane so the plane of sharp focus sits at an angle and intersects the other two along a line. Thus, you could take a shot where some wildflowers in the foreground and mountains in the background were all in focus, or render a car parked diagonally sharp from end to end. There is no reason all of this could not be computed (a small numeric check of the geometry appears after this list).


  2. Virtual aperture changes. You could obtain the effect of stopping down the lens by computing the exclusion of rays that pass nearer to the edge of the lens (see the aperture sketch after this list). Since this is done by computation, you could avoid diffraction, the bending of light that passes the edge of any diaphragm in the real world. Diffraction limits how small an aperture you can use in a real camera: because it happens at the edge of the diaphragm, the smaller the aperture circle gets, the greater the ratio of the aperture's perimeter to its area, and the larger the percentage of image-forming light that gets diffracted and degrades the image.


  3. Virtual lenses. It seems to me that once you have captured the data through the lens that is actually on the camera, you could compute how the scene would be rendered by a different lens, even a hypothetical one. Within reason, you could probably achieve the effects of a longer or shorter lens. You could also achieve the effects of lenses that are not practical to manufacture or to use in the real world. For example, only recently has it become possible to manufacture lenses with so-called aspherical elements; before that, every element surface was a section of a sphere, because spherical surfaces are far easier to grind. With a computer, you could apply the effects of a lens with any number and type of aspherical elements you wanted. Although I think a human would still have to design the lens, you could perhaps have one-off lenses for particular images. In the real world there are limits to the number of elements a lens can have because, among other reasons, some light is lost at each air/glass interface (even with modern multicoatings). Before the development of coatings, people had designed better lenses that could not be used productively because they had too many air/glass interfaces; today those designs are standard. With a computed lens, you would eliminate any light loss. You could also use imaginary materials with any refractive index and any dispersion (spectrum-splitting) behavior you wanted. High-end lenses today rely on exotic glass types with so-called anomalous dispersion properties; in designing a virtual lens, you could just make up whatever you needed. Indeed, it might not even be productive to think in terms of real-world elements and glass types anymore. It could all just be equations.


  4. "Faster" lenses. You should be able to seamlessly add light with the computer that wasn't there in real life. As long as some light was recorded, the computer could pretend that more was recorded.

Whether there will be a market for any of this stuff is unclear. It's also unclear how professional photographers would react if photo editors and others began demanding files in a format that allowed the focus to be changed after the fact. Perhaps in terms of altering the photographer's "vision" this is no different from cropping, but it sure seems different.
