(with bonus W3 close-up attachment)
I have recently been experimenting with an optical device which presents identical visual information to both eyes, thus removing all binocular depth cues. It was invented by Moritz von Rohr in 1907 and patented the same year by Carl Zeiss as the Synopter, which Zeiss marketed as a visual aid to help the gallery-visiting public appreciate depth in paintings. Since then similar devices have variously been called the Perspectoscope, the Reflectoscope and, more recently, the Myopter and the Cyclopter. The basic optical variations as described by von Rohr are as follows:
As a figurative painter I have spent thousands of hours struggling to represent our binocular perception of space and form on a two-dimensional surface, so I was very interested in getting to grips with one of these devices, and in particular in viewing paintings and photographs through it. Here, then, are my subjective impressions of looking through a synopter.
Getting hold of one proved tricky, but I managed to make a serviceable version by cutting the back off an old charity-shop Bakelite bi-lens 35mm slide viewer (a later variation of the Perspectoscope), in which both eyes look at one 35mm slide by means of mirrors, one of which is a half-silvered beam splitter. With the lenses removed, this is more or less the arrangement in von Rohr's fig. A.
Here it is with the rear battery compartment removed:
I began by looking at the real world through it, and found the experience decidedly odd. Despite the input being effectively monocular, the effect definitely created more of a sensation of depth than using just one eye, yet there was no lateral disparity. To anyone used to stereoscopy, opening and closing alternate eyes with no shift in the relative background position is decidedly counterintuitive, if not downright weird. It was very apparent that close objects appeared magnified, and any lateral movement of the head created a slightly nauseating impression of flat planes sliding one across the other.
First impressions on viewing 2D images or prints were that some images worked better than others and some hardly at all, and it was not obvious which would or would not work without trying it out. I got good results looking at museum photographs of pottery shot against a plain background, especially those with a cast shadow. These weren't a matter of 'maybe it looks a bit 3D if I try': to me they appeared as fully formed stereo images, which I found promising, if a little disconcerting. Reproductions of carved relief (such as Assyrian bas-reliefs, or the Parthenon frieze) viewed from the right distance appeared very convincingly modelled indeed, almost as if seen with true stereopsis. It also worked similarly with photos of some Henry Moore bronzes.
The viewing distance, or the magnification with reading glasses, seemed critical to the best effect, as did spending time getting yourself into the 'zone', or state of receptivity. The more I looked, the more marked the effect. Photographs of highly textured fossils or crystal formations with unpredictable form were also beautifully realized, as were the craters and curvature of the moon, which appeared convincingly 3D. Engineering models photographed against simple backgrounds also produced a most extraordinarily stereoscopic effect, enabling me to sort out a muddle of pipes or make sense of the ambiguous spaces behind frames and cogs. Wheels at oblique angles receded very clearly, and I found I was able to pull extra detail out of dark areas.
My experience of looking at paintings (or reproductions) was much more hit and miss, and for no very obvious reasons. Renaissance paintings with strongly constructed spatial perspectives and vanishing points are obvious candidates, and mostly work well. Caravaggio's Supper at Emmaus also looks great, maybe due to his strange use of scaling (the hands in particular) and the way the form gets lost in shadow. Turning to a life-size figure painting in my studio of a girl sitting in a chair proved very intriguing. Her knees appeared to project forward in real space, the chair in which she was sitting wrapped very convincingly around her, and the wall stood some way behind. The floor sloped away correctly too. It seems the image requires a very well defined description of form with good surface texture for best results; I pay particular attention to both in my work, which probably helped. Shadows also seem to play a role in reinforcing the effect.
In an attempt to fool the process I looked at some large photos taken inside a tropical forest: the most formless and muddled mass of information I could think of, and one which would normally rely on binocular vision to make sense of. I was astonished to experience it rendered in very believable depth. It beat me how this could be, as there were no receding perspectives and no relative size cues; the trunks and leaves gave no scaling clues either, since there were large leaves in the distance and tiny ones up close, and vice versa. I also chose pictures with no perceivable atmospheric distancing. It was utterly baffling, and really rather delightful. There were a few insignificant errors in one or two pictures due to some obscure ambiguities (repeated vertical lines mostly), but it only took a few seconds to resolve them.
One of the charms of the device is the delicate, slightly dreamy and introverted nature of the sensation. It is more ephemeral than an optical effect, more akin to a stereoscopic 'thought', if you can imagine such a thing. It needs to be invited, but once seen it is impossible to deny. When it throws up errors of scaling, however, they can be quite radical and surprising. I took it to the trees and undergrowth at the bottom of the garden to see if it would sort real foliage into depth as effectively as it did with photographs. Everything looked fine until I saw what for a moment appeared to be a bird-sized hornet foraging amongst the leaves. It turned out to be a small bee or hover fly very close to the viewer! The effect was totally convincing and, for a moment, quite shocking.
What I imagine might be going on (although I'm probably straying way out of my depth) is that the part of the brain which merges the left and right images is using the unexpectedly identical information received from both eyes to compute, or hazard a guess at, some kind of 'virtual' depth before feeding it to the area which processes binocular input into a single sensation of depth. It then presents it to us as a kind of synthesised 3D image, rather in the way some hallucinations are believed to occur. I thought I'd test this by looking at one half of a random dot stereogram and trying to mentally project depressions onto it, but with no success. I could, however, feel my mind attempting to manipulate the dots into some kind of tangible, if unstable, surface contours.
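For anyone who wants to try the same test, one half of a Julesz-style random dot stereogram is easy to generate. Below is a minimal sketch (assuming numpy and Pillow are available, with arbitrary size and disparity values): either half on its own is pure noise, and the hidden square only emerges when the pair is properly fused.

```python
import numpy as np
from PIL import Image

# A minimal Julesz-style random dot stereogram: each half alone is pure
# noise, so any depth seen in a single half is synthesised by the viewer.
rng = np.random.default_rng(0)
size, shift = 256, 8                    # image size and disparity in pixels

left = rng.integers(0, 2, (size, size)) * 255
right = left.copy()

# Shift a central square sideways in the right image to create disparity,
# then refill the strip it uncovered with fresh random dots.
y0, y1, x0, x1 = 64, 192, 64, 192
right[y0:y1, x0 - shift:x1 - shift] = left[y0:y1, x0:x1]
right[y0:y1, x1 - shift:x1] = rng.integers(0, 2, (y1 - y0, shift)) * 255

Image.fromarray(left.astype(np.uint8)).save("rds_left.png")
Image.fromarray(right.astype(np.uint8)).save("rds_right.png")
```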
I have noticed that when looking at a distant mountain or wooded hill too far away to contain any perceptible binocular disparity (generally thought to be beyond about 30 metres), we don't in fact see it as flat; perhaps by using our experience of perspective and relative scale cues we understand the contours and 'see' the distant scene as a 3D form. If so, this surely implies that at the point in the distance where perceived binocular disparity ceases, a 'synthesized', or hypothetical, version kicks in. If we are predisposed to this, then looking through the synopter simulates the experience and triggers a similar response: the subliminal processes responsible are tricked into believing that near objects must require similar synthesizing. That the vergence on the device appears to be set as if looking to infinity may be a contributing factor.
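To put rough numbers on that cut-off: the disparity of an object at distance d, relative to a background at infinity, is approximately b/d radians for an interocular separation b, so the distance at which stereopsis 'runs out' is simply b divided by whatever disparity threshold you assume. The sketch below uses illustrative thresholds only; the oft-quoted 30 metres corresponds to a fairly coarse everyday threshold, while laboratory stereoacuity figures would push the limit out to hundreds of metres.

```python
import math

# Disparity of a point at distance d, relative to infinity, is ~ b/d radians
# for interocular separation b, so the range of stereopsis is b / threshold.
# The answer depends entirely on the threshold assumed.
b = 0.065                                  # interocular separation in metres

for threshold_arcmin in (0.33, 1.0, 7.5):  # illustrative disparity thresholds
    threshold_rad = math.radians(threshold_arcmin / 60)
    print(f"threshold {threshold_arcmin:5.2f} arcmin -> "
          f"useful out to ~{b / threshold_rad:.0f} m")
```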
In conclusion then, I often found myself asking during my experiments 'am I just imagining this?', which of course is to miss the whole point. Of course I was imagining it; the stereoscopic sensation, or 'cyclopean eye' (as Béla Julesz described it in his brilliant book Foundations of Cyclopean Perception), IS nothing more than a figment of the imagination. That is precisely why it is so hard to pin down and so hard to measure, but also of course why it is so endlessly fascinating to anyone with a visually enquiring mind. In the end, perhaps it's a little like Father Christmas: if you don't believe in it, it isn't going to come!
Having got this far, I suppose the next stage must be to take my synopter to the National Gallery (or indeed the Watts Gallery) and get arrested by the attendants for suspicious behaviour, peering oddly at their painting collection through it!
Accidental bonus of a close-up attachment for the Fuji W3
During my experiments looking through this curious device I thought I could detect a minute binocular disparity between the left and right images. It was so slight as to be difficult to confirm handheld, so I attached the device to a tripod and took two photos, one through each of the left and right paths. Sure enough, viewing these stereoscopically gave a very faint but just discernible sensation of depth in nearer objects. This is not an effect restricted to my device: in his fascinating paper The Synoptic Art Experience, published in the journal Art & Perception, M.W.A. Wijntjes includes photos taken through his own synopter exhibiting the same small disparity (which incidentally had escaped his notice). No one has been able to tell me why this happens, as it looked to me as if the light paths should merge into one; my best guess is that the thickness of the half-mirrored glass is somehow involved.
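That guess can at least be given a rough geometrical check. A tilted plane-parallel plate displaces a beam passing through it sideways, while the beam reflected off its front surface is not displaced, which would leave the two paths with a tiny residual baseline. Here is a back-of-envelope sketch using the standard plate-displacement formula, with assumed values for the glass thickness and refractive index; for these figures the offset comes out at around a millimetre, which feels like the right order of magnitude for such a faint effect.

```python
import math

# Lateral displacement of a ray through a tilted plane-parallel plate:
#   d = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin^2(theta)))
# The transmitted path through the 45-degree beam splitter is shifted by
# this amount; the reflected path is not, leaving a small residual baseline.
t = 0.003                 # assumed glass thickness in metres (3 mm)
n = 1.5                   # assumed refractive index
theta = math.radians(45)  # beam splitter tilt

s = math.sin(theta)
d = t * s * (1 - math.cos(theta) / math.sqrt(n**2 - s**2))
print(f"lateral offset ~{d * 1000:.2f} mm")  # ~0.99 mm for these values
```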
Anyway, the obvious next stage was to point a Fuji W3 through it. The mirror spacing proved just wide enough to accommodate the camera's lens separation, and further experiments made it clear that the stereo effect became more pronounced with closer objects. It took ten minutes to knock up a platform out of foam board, to which the camera was attached with a tripod screw; the viewer was tacked on in front with double-sided tape, as the positioning was critical and involved a little trial and error. At full zoom, and with the lenses set to macro, it focused cleanly at about 3-6 inches in front of the device, and the stereo effect was rather lovely. That was a bit of a surprise, but I wasn't going to argue with a quality stereo-macro device for a fiver!
There is an optimum distance for the best depth effect, and the image needs to be cropped a bit as the edges are cut off by the mirrors, but at the camera's highest resolution there is plenty of leeway for this. It was also clear that any vertical error in the sensor positioning was exaggerated, so while StereoPhoto Maker will correct this, it is better to do it via the optical axis control in the camera menu if you want effortless stereo viewing on the auto-stereo LCD screen. StereoPhoto Maker can also be used to even out the slight colour and exposure differences between the two images, the half-mirrored beam splitter adding a colour cast and soaking up some of the light on one side.
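For the curious, that last correction amounts to nudging one image's colour channels towards the other's. The sketch below is a deliberately crude version of the idea, matching only each channel's mean and spread (StereoPhoto Maker's auto colour adjustment is more sophisticated, and the file names here are hypothetical):

```python
import numpy as np
from PIL import Image

# Crude per-channel colour/exposure matching: shift and scale the right
# image so each RGB channel has the same mean and spread as the left,
# cancelling the beam splitter's colour cast and light loss on one side.
left = np.array(Image.open("left.jpg"), dtype=np.float64)
right = np.array(Image.open("right.jpg"), dtype=np.float64)

for c in range(3):
    l, r = left[..., c], right[..., c]
    right[..., c] = (r - r.mean()) * (l.std() / r.std()) + l.mean()

Image.fromarray(np.clip(right, 0, 255).astype(np.uint8)).save("right_matched.jpg")
```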

Copyright © The Stereoscopy Blog. All rights reserved.
A very fascinating issue, thanks so much for sharing your experiments and experience! A few comments…
1. I am curious: why would this be any different from looking through a stereoscope with one of the images being a perfect duplicate of the other? I would think the effects would be identical to what you experienced, as each retina would be fed the same retinal image.
2. Viewing through one of these devices, we no longer converge our eyes; this could explain the perceived differences versus unaided viewing of the same image. The exception would be if the unaided viewing distance were so great that our eyes would not converge. Running this experiment with a large billboard image might offer some insights.
3. The other issue at play is the FOV. When viewing through this device, the horizontal FOV based on your drawing is somewhere around 25-30 degrees. To compare with unaided viewing, you should adjust the viewing distance so it matches the FOV in the viewer (see the sketch after this list). This would provide an apples-to-apples comparison, thereby eliminating FOV as a variable.
4. The other possible issue, as you alluded to, is the optical components and/or distances creating a slight retinal variance in scale between the two images, which is NOT present in unaided viewing.
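Matching the unaided view to the device's FOV, as suggested in point 3, is simple trigonometry; a quick sketch with an assumed mid-range FOV and some example painting widths:

```python
import math

# To compare like with like, pick the unaided viewing distance so the
# painting subtends the same horizontal FOV as it does through the device:
#   distance = (width / 2) / tan(fov / 2). All values here are assumptions.
fov_deg = 27.5                    # assumed horizontal FOV through the device

for width_m in (0.5, 1.0, 2.0):   # example painting widths in metres
    d = (width_m / 2) / math.tan(math.radians(fov_deg) / 2)
    print(f"{width_m:.1f} m wide -> view unaided from ~{d:.2f} m")
```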
Thank you for these well-reasoned points, Bill. I too have wondered why two identical images in a viewer don't create the effect, but haven't reached a satisfactory explanation. The creators of the Perspectoscope ("Oh grandpa, it's just beautiful") obviously decided that it would, but I can't see the effect myself. I came to the conclusion that it had something to do with the mobility of the device: the way everything else in the scene around the painting or 2D image provides parallax and other depth cues. Perhaps similar to the way in which the live view through a camera obscura gives a rather magical effect, while a similar photograph simply doesn't. It's all very difficult to pin down though.
Thanks for the kind words…
It is true that a VERY wide retinal FoV offers far more depth sensation than a narrow one, regardless of whether the imagery is 2D or 3D. Camera obscura views were projected very wide, and people viewed the projection from relatively close. FoV, as you know, is a function of both image size and viewing distance. But the devices here surely do NOT have a wide FoV on the retina, often called the apparent FoV or AFOV. The FoV of the subject itself is not as relevant.
Achieving a wide AFOV via optics at high resolution is nearly impossible. I am interested to see whether the new round of VR headsets coming out next year has found digital solutions to overcome this shortcoming; based on some tech reviews I have read, I believe they have. The key is to maintain high foveal resolution (the roughly 2% of our FOV covered by central vision) with very low peripheral resolution, which is how our unaided vision works. Keeping the eye forward on the high-resolution portion of the display, and shifting the scene by moving your head while always keeping the periphery full, is the holy grail of 3D viewing. IMO this will replace the need for stereo capture, as 2D video will carry all the depth cues required. It might not be the same for still capture though; that remains to be seen, and a lot will be scene dependent.
Of all the 3D perceptions I have experienced in my life, the BEST ever was a 2D film shown in a 200-degree dome, roughly a three-quarter sphere. People stood inside the dome, so the entire retina was fully immersed in imagery, just as in unaided viewing of the real world. The depth was everywhere; it was overwhelming, the most intense depth I have ever experienced. A moving camera and moving subjects added to the effect by continually offering up fresh depth cues, which was intentional on the videographer's part. I think this is where VR is heading; fun times ahead. Quite an advance on the tools being discussed here.
Sorry for the slight tangent. Based on the information you have provided, my guess for the perceived depth is mismatched retinal image size. The slight variance in image size presented to each retina is a form of deviation (as opposed to the purely left/right deviation in stereo capture). The brain is very sensitive to slight changes in deviation and often interprets them as depth. In addition, the brain is familiar with many scenes, which then completes the depth puzzle.
As an example, when I cross-view L/R stereo pairs of commonplace subjects (i.e. intended to be viewed straight on: L image to L eye, and the same for the right), I see 95% the same depth as when they are viewed L/L and R/R, as intended. How can this be, when logic dictates I should perceive reversed depth? I attribute this to (at least "my") brain comprehending the scene through historical learning and thereby properly placing the deviated subjects. There is a small percentage of subjects in the scene that do appear reversed, as expected; obviously my brain is confused by these few, and their Z-axis placement is completely out of keeping with the rest of the scene. Some images appear perfect. My brain does have a lot of 3D training though; I am not sure a newcomer would experience this, but it would be an interesting experiment.
Another indicator of the brain's ability to decipher depth in 2D images is looking at an ordinary photograph. If the scene is recognizable to the brain, the average person can properly identify the relative depth of each subject in it, with NO deviation fed to the brain at all. Our brains can place the items in a scene with tremendous accuracy, demonstrating that the majority of depth cues do NOT come from horizontal stereo deviation. Deviation is only one of many depth cues, though it does appear to be the most absolute.
On the other hand, look at a 2D image that is completely unrecognizable, such as odd geometrical shapes in space, and the brain is unable to determine relative depth. Often, looking at the same image at different times will give you different depth relations between the shapes, as the brain is unsure; hence the randomness.
This same general premise of "variance in enlargement factor between the two retinal images" is also exploited in astronomy, where you fit a bino-viewer in place of a single eyepiece in a telescope. The bino-viewer splits the light from the objective lens into two paths, one for each eye, and the eyepieces are then inserted into the bino-viewer. Example here…
https://williamoptics.com/binoviewer-complete-package
This replicates the devices you are using, i.e. a single 2D view (the stars are far too distant to have any real depth, for sure 😉) broken into two separate retinal views. The 3D effect when looking at the stars through a bino-viewer is breathtaking. Of course, the depth is not the least bit accurate, as the deviation is caused by the mismatched focal lengths of the two eyepieces, and our brains have no idea what is accurate since the scene is not recognizable. As with all optics (magnifiers, telescopes, camera lenses, etc.), the focal length is "nominal", i.e. close to, but not exactly, its stated value. Some camera lenses I have tested varied by as much as 5-10% in focal length; much depends on the optical design, which dictates the tolerances of the as-built optics. The same is true of mirrors, prisms and so on: they are not identical, creating slightly different enlargement factors in each eye. Our foveal resolution is excellent, about 1 arc minute for 20/20 vision, so it can easily detect these slight differences in enlargement factor, creating binocular disparity, a cue for the brain to deliver depth. (In true stereo capture and viewing, the disparity is limited to the horizontal axis, whereas enlargement-factor variance creates disparity in both the horizontal and vertical axes.)
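To get a feel for the magnitudes involved: even a small enlargement mismatch turns points away from the image centre into disparities well above the 1 arcminute acuity just mentioned. A rough sketch with illustrative figures only:

```python
# A magnification mismatch m between the two eyes' images gives a point at
# angular eccentricity e (from the image centre) a radial disparity of
# roughly e * m, in both X and Y, unlike ordinary stereo capture.
for mismatch in (0.01, 0.02, 0.05):    # 1%, 2%, 5% enlargement mismatch
    for ecc_deg in (2.0, 5.0, 10.0):   # eccentricity from centre, degrees
        disparity_arcmin = ecc_deg * mismatch * 60
        print(f"{mismatch:.0%} mismatch at {ecc_deg:4.1f} deg eccentricity "
              f"-> ~{disparity_arcmin:.1f} arcmin disparity")
```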
Whether the makers of bino-viewers and of the devices you tested were aware of this would be interesting to know. My guess is that for bino-viewers the variance in short focal length eyepieces is so significant that the depth sensation is unavoidable. Your devices contain only mirrors or prisms, so less variance would be expected, but it takes only a small amount for the brain to detect it and deliver what it takes to be the proper depth. The brain only has experience of horizontal deviation (from unaided real-world viewing) and of no deviation (from viewing 2D imagery); enlargement-factor deviation is something it never encounters in the real world.
That’s my best guess on why you see depth in these devices. The makers of these devices worked off the slight enlargement mismatch premise to create deviation, stimulating the brain to use all its learned methods to properly deliver depth.
…thoughts?