With the clear-eyed understanding that this article is photographic clickbait/comment-bait...
Use gear that achieves the task at hand. Sometimes that needs an optically-correct lens, sometimes it doesn't. When Frank Thorp brings a Speed Graphic or a disposable camera carrying black-and-white film to the halls of Congress, it achieves a different way of seeing. The rest of the press corps brings us high-quality versions of more traditional images with modern cameras and glass. They all have important places.
For my photographic work, I'll generally prefer to work with lenses of higher optical-correctness, but understand when others don't. Bresson's “sharpness is a bourgeois concept" echoes in my mind at times as I work, reminding me that message trumps image quality most of the time.
Photography lends itself well to writing analogies -- you can write on paper with a crayon, a Sharpie, a Rotring, a word processor, a typewriter, fingerpaint, a Bic, LaTeX, a #2, a calligraphic brush, or chalk. They'll all write, perhaps even the same characters, in different ways. A master with any one of them will find ways to evoke the message they intend. So it is with lenses.
A modern $150 50mm f/1.8 at f/8 will out-resolve an older $1300 50mm f/1.2 at f/8, but holy poops, can that older design produce a rendering quality that is simply smooth and beautiful in an intangible way due to the lens designers' intentional tradeoffs.
If you find yourself making comparisons like this one, you're almost to the part of your photographic career where you may be about to start reducing the amount of gear you use, as you begin to find the tools that resonate for you.
Perhaps the older lens has more pleasing colors: it typically absorbs a bit more of the high-frequency (blue/violet) end, lending a comfy "orange sunset" effect and a warm atmosphere. Modern digital cameras tend to be "colder" in color, with sharp blue/violet.
To me, the copy of the EF 1.2L that I tested had a softness to its rendering that was very painterly. I shudder to use the term "microcontrast", but the lens appeared to have solid contrast and sharpness at broader spatial scales and a softness at the smallest scales.
It felt like a lens that rendered every scene with a deft touch, making everything just a little prettier than it might really be.
Someday, I may find a used one to call my own, but I can't quite justify its weight and expense today. The EF 1.4 largely fills that bill without the price and heft, but doesn't render things so instantaneously as art.
The Sigma still wins in the corners, but chromatic aberrations are largely absent for both lenses.
With new photographers, I often try to get them to take a 50mm f/1.8 through its paces early in their careers. One of the key milestones in the proposed shot-list is a series of images at f/8. It is eye-opening to see an inexpensive lens deliver top-notch results on even the least-expensive cameras. Beginners are often drawn to the selective depth of field of the wide aperture primes, but to me, the greatest joy of these timeless (and lightweight) double-gauss designs is their ability to render superlative quality at their diffraction-limited apertures.
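To put a rough number on "diffraction-limited": a back-of-the-envelope sketch in Python, where the wavelength, the pixel pitch, and the 2x-pixel-pitch criterion are all my own illustrative assumptions rather than anything from the article.

```python
# Back-of-the-envelope: at what aperture does diffraction start to matter?
# Assumes green light (550 nm) and the common Airy-disc criterion d = 2.44 * wavelength * f_number.

WAVELENGTH_UM = 0.55  # green light, micrometres

def airy_disc_diameter_um(f_number: float) -> float:
    """Diameter of the Airy disc (first dark ring) in micrometres."""
    return 2.44 * WAVELENGTH_UM * f_number

if __name__ == "__main__":
    pixel_pitch_um = 4.3  # e.g. a typical 24 MP APS-C sensor (assumed value)
    for f_number in (1.8, 2.8, 4, 5.6, 8, 11, 16):
        d = airy_disc_diameter_um(f_number)
        note = "diffraction dominates" if d > 2 * pixel_pitch_um else "lens aberrations dominate"
        print(f"f/{f_number:<4} Airy disc ≈ {d:4.1f} µm  ({note})")
```

On numbers like these, somewhere around f/5.6 to f/8 is where a decent 50mm stops being limited by its own aberrations, which is why the cheap prime looks so good there.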
They haven't taken that down. On the contrary: in their writing-instrument analogy, what would happen if all pens produced the very same "perfect output"?
Don't forget that the photography workflow also changed, with cheap and powerful postprocessing and whatnot. And it will only get better. So the gear has shifted toward capturing maximum information and dealing with it later. And entropy goes one way: you can always blur or add creative touches to a clinical picture. The other way is harder.
I feel that focusing on one piece of the flow and not looking at the whole could lead to somewhat off conclusions.
I think that this article is an example of a phenomenon wherein we subconsciously associate the flaws and artifacts of the medium with the medium itself -- and once those flaws and artifacts are removed, yielding a more accurate reproduction of the world or the artist's vision, we get the sense that something is missing and consider the result to be of lower quality rather than higher. Examples abound:
* You will never, ever get an old rockist audiophile to admit that digital formats yield better results than vinyl. To them, vinyl sounds better -- "richer, warmer", more like you're actually there listening to the band. One such person of my acquaintance went on about "fat synths" that CDs couldn't reproduce. Let alone the fact that physical properties of vinyl prevent records from having the dynamic range of uncompressed digital music on the first playthrough, let alone after the needle has worn imprecisions into those grooves. Let alone the fact that almost no band sounds live the way they do in a recording because you can't control the acoustic properties of a concert hall the way you can in a studio, or mix tracks or adjust the relative mix of various parts in quite the same way.
* In cinema, someone else mentioned the idea, beloved among cinephiles and even directors like Quentin Tarantino, that only 24fps celluloid with grain and scratches counts as "real film" and everything else looks "lifeless".
* In video games, there is the phenomenon of CRT purism. Some of this is justified; developers were able to achieve color and transparency effects on CRTs that haven't translated well to LCDs without additional filtering. I also understand that the lower latency of CRTs improved the situation for e.g. fighting or light gun games. But mostly, the qualities of the "truer image" people associate with CRTs were artifacts of grotty TVs and, specifically, NTSC composite video signals. A new arcade machine of the era would use a top-quality CRT engineered to deliver the crispest possible image -- closer to the LCD or OLED displays of today than to the smearfests of home systems played on spare TVs. And if you lived in Europe and could play on a PAL or, better yet, SCART set, many of those artifacts went away there as well.
* Oddly enough, no one seems to miss VHS tracking artifacts. But they are used for effect, to signify an aesthetic associated with a particular time and place, as in for example the game Katana Zero or the Local 58 YouTube channel. I guess that's what we should be aiming for here: the "richer sound", the NTSC color effects, the film grain, and the camera aberration can be used as tools to shape the audience's experience and evoke a time or mood. I just wish that people were honest about what they were doing, instead of being all like "old good, new bad". I like having high-quality Opus rips and sharp LCD monitors.
I work in the animation/VFX industry, and I’m constantly amused by how camera and lens companies are constantly trying to make images more and more perfect by removing imperfections from lenses and decreasing grain and noise, and meanwhile in CG we are working very hard to take our clean too-perfect CG renders and give them lens imperfections and chromatic aberration and film grain…
I wonder why animation hasn't enabled more experimental cinematography unhampered by physical limitations. So many possibilities: Long takes are exponentially more difficult to film IRL with increasing duration, yet come at no extra cost when the actors and camera are untiring and deterministic. Ray tracing allows for arbitrary virtual lenses, from prohibitively expensive to physically impossible. I saw this video demonstrating a lens that can continuously vary from endocentric to telecentric to hypercentric, where several comments are along the lines of,
> Now I really want to see a scene in a horror movie, where in camera, using a MASSIVE one of these lenses, we get to see the killer is behind the person in shot, by adjusting the camera like this. Can you imagine how freaky it would be, at they seem to appear and then grow to loom over them, out of nowhere? (https://www.youtube.com/watch?v=iJ4yL6kaV1A&lc=UgwgTANmFg7oz...)
Since telecentric lenses must be as large as their subjects, that won't be happening in the foreseeable future. Yet even Blender can render a working ray-traced camera.[1]
So why don't we see these things in CGI? Even if most films stick to conventions, there should be some art films, right? Why isn't there a single one-take animated short?
Addendum: CGI also allows exposure times longer than the time between frames (i.e. shutter angles greater than 360°). So, 60fps can have the same motion blur as 24fps if desired, which is something those who very vocally condemn high-framerate animation seem to want very much. I find it quite funny: shouldn't those most passionate about an art form be the most open to innovation?
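The shutter-angle arithmetic, for anyone who wants to check it; a tiny sketch, with 24fps/180° used only because it is the conventional starting point:

```python
# Quick arithmetic behind the shutter-angle point: to keep the same motion blur,
# keep the exposure time constant. Exposure = (angle / 360) / fps.

def exposure_time(fps: float, shutter_angle_deg: float) -> float:
    return (shutter_angle_deg / 360.0) / fps

def matching_angle(target_exposure: float, fps: float) -> float:
    """Shutter angle (degrees) that reproduces a given exposure time at a given frame rate."""
    return target_exposure * fps * 360.0

t_24 = exposure_time(24, 180)          # classic 24 fps, 180° shutter -> 1/48 s
angle_60 = matching_angle(t_24, 60)    # angle needed at 60 fps for the same blur
print(f"24 fps @ 180°: {t_24:.4f} s;  60 fps needs {angle_60:.0f}° (>360°, only possible in CG)")
```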
Chromatic aberration must be the most annoying effect I've been seeing in games recently. I don't want most of my games to look like photos or movies; that breaks immersion. I want them to look like reality!
I'd like to see a game or VR environment where the effect of extra long eyelashes affects the image. Or cataracts. Or dirty specs. That's your organic life right there.
It's interesting: I watched a "how it's made" type show about a car chase in a movie, and it turned out it was all real footage; they had a bunch of elaborate moving camera-crane setups to get footage from all different angles.
It was so elaborate that my brain just assumed it was CGI.
Our perceptions of what an image "should" look like are shaped by all of the images we have seen.
It may not be surprising to see mainstream CG place less emphasis on those imperfections in fifty years (perhaps especially lens flare, which modern coatings largely eliminate).
Maybe the future is to record perfectly and then add the imperfections later. That gives you the most flexibility and control. I think it is common for studios to record the raw signal from guitars and then route it through amplifiers and effects later, handled by a sound engineer.
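For what it's worth, a common software version of that guitar workflow is convolving the dry DI track with an impulse response of the desired amp/cabinet. A minimal sketch, assuming mono WAV files; the file names are made up:

```python
# Minimal sketch of "record clean, add character later": convolve a dry DI guitar
# track with an amp/cabinet impulse response. File names are placeholders; mono assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate_dry, dry = wavfile.read("guitar_di.wav")       # clean direct-input recording (assumed file)
rate_ir, ir = wavfile.read("amp_cabinet_ir.wav")    # impulse response of the desired rig (assumed file)
assert rate_dry == rate_ir, "resample one of the signals first"

wet = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))
wet /= np.max(np.abs(wet))                           # normalise to avoid clipping
wavfile.write("guitar_reamped.wav", rate_dry, (wet * 32767).astype(np.int16))
```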
I strongly disagree with this entire premise. I want cameras to capture the real world as faithfully as possible. The rest of us shouldn't all have to deal with unnecessary artifacts (which can't actually be perfectly removed in post) just because some people like their aesthetic. This reminds me of people who think we should abandon FLAC in favor of vinyl.
I also think it's silly that imperfections should be artificially maintained just to keep a "vintage" look, but when you think about it, nothing about modern cameras actually reflects the real world. It's all a judgment call by the sensor/algorithm/photographer/editor, who together decide how the published photo looks (a toy sketch after the list below illustrates how many knobs are involved):
- CCD layering & subpixel arrangements
- aperture openings & shapes
- vignetting & chromatic aberrations
- focal distance, white balance, ISO...
- shutter speed, aperture, depth of field, focus, composition...
- post-production...
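The toy sketch, assuming an RGGB Bayer layout and arbitrary constants I picked for illustration; this is not any real camera's pipeline:

```python
# Toy illustration of how many choices sit between sensor data and a JPEG.
# Every constant below (white-balance gains, gamma, the demosaic method) is a
# judgment call, not a property of "reality". Purely a sketch, not a real pipeline.
import numpy as np

def develop(raw_bayer: np.ndarray) -> np.ndarray:
    """raw_bayer: 2D array of sensor counts with an RGGB Bayer pattern (assumed)."""
    # 1. Naive demosaic: bin each 2x2 Bayer cell into one RGB pixel.
    r = raw_bayer[0::2, 0::2]
    g = (raw_bayer[0::2, 1::2] + raw_bayer[1::2, 0::2]) / 2.0
    b = raw_bayer[1::2, 1::2]
    rgb = np.stack([r, g, b], axis=-1).astype(np.float64)

    # 2. White balance: arbitrary "daylight-ish" gains chosen by someone.
    rgb *= np.array([2.0, 1.0, 1.5])

    # 3. Normalise and apply a display gamma (another aesthetic constant).
    rgb /= rgb.max()
    return rgb ** (1 / 2.2)
```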
The comparison to FLAC (a storage medium) is a bit flawed versus, say, the actual creative instruments (guitars, pianos). The article is more like the photography equivalent of lamenting the loss of Stradivarius's secret sauce.
Long before a photo even gets to RAW, a bunch of artistic decisions are made by the photographer and the engineers who put together the camera and lens (and their firmware). You can't turn off all the processing, and even if you could, the camera lens doesn't really work the same way the human eye does. A "lifelike" photo would have a huge field of view but be blurry anywhere except the center focal point, have a huge dynamic range but limited depth of field, etc.
And then after you go from RAW to whatever renderer makes it a JPEG, a bunch more decisions and corrections are made, with or without your input as a photographer/editor.
Even if you shoot with film, there is no "calibration to reality" per se, it's all an artistic choice in terms of body, lens, film, development techniques, etc.
Cameras are not "reality capture devices"; they are advanced signal processors that turn a gazillion rays of light into images that match a certain preprogrammed look (or one of several, like "natural", "vivid", "sepia", etc. -- all of those being just different parameters that tweak the algorithms).
> A "lifelike" photo would have a huge field of view but be blurry anywhere except the center focal point, have a huge dynamic range but limited depth of field, etc.
The trope of the two overlapping discs that tells you that someone is using binoculars could be replaced with this when the director wants to emphasize some subjective aspect.
Assuming we're not talking about technical photography: cameras don't capture the real world, and neither do they capture impressions. They capture flat projections of the scene, frozen in time and displayed on flat media with limited dynamic range and gamut. The human brain doesn't operate on pictures at all. It reconstructs a set of ideas about the scene, and relationships between entities, which makes it impossible to represent the scene in a single picture, because those ideas are usually contradictory.
Here's an example: perspective works completely differently in a photo. The brain "sees" a heavily nonlinear perspective in which focused objects seem closer, yet at the same time all straight lines are perceived as straight. This is impossible to represent in a photo: it has either linear or curvilinear perspective, but not both simultaneously. Non-photography artists such as painters or game developers often mix different perspective planes in one work to depict this, but that too is just a rough imitation of what they really see or imagine.
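If it helps, the two perspective families reduce to different mapping functions; a quick sketch where the focal length and angles are arbitrary example values:

```python
# The two projection families mentioned above, reduced to their mapping functions.
# Rectilinear keeps straight lines straight; equidistant (a common "curvilinear"
# fisheye mapping) compresses the edges instead. f is focal length, theta the
# angle off the optical axis.
import math

def rectilinear(theta_deg: float, f_mm: float = 24.0) -> float:
    return f_mm * math.tan(math.radians(theta_deg))

def equidistant(theta_deg: float, f_mm: float = 24.0) -> float:
    return f_mm * math.radians(theta_deg)

for theta in (10, 30, 50, 70):
    print(f"{theta:>2}° off-axis: rectilinear {rectilinear(theta):6.1f} mm, "
          f"fisheye {equidistant(theta):6.1f} mm from centre")
```

A single image has to commit to one of these mappings (or some fixed blend); the brain doesn't.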
Everything else works similarly. The dynamic range can't be represented in a camera+monitor system (or camera + printed media). The colors are impressions by definition, but are also skewed by illusions well known from the color theory. The motion requires you to use longer exposures, motion blur, and composition to convey in a static photo. Sense of depth, same story.
Nothing you see can be "faithfully" represented in a photo. The best you can do is to interpret the scene yourself and use crude artistic tools (tonemapping in post, composition, capture-time camera tools such as DoF, etc.) and hope that they induce more or less the same impressions when viewed. Painters have used extremely low-contrast media for centuries yet are able to convey the impression of a blinding sun by exploiting the artifacts of human perception.
You can see the non-ideal lens as one of those capture-time tools to convey a certain impression. It might be imperfect at that, it doesn't induce the same impression in everyone, it can be imitated in post with technical precision - sure.
People who make or listen to different mixes, who listen to music with their own preferred settings, who abandon FLAC in favor of vinyl, or who don't care about high fidelity at all outside of archiving all embrace this point as well.
You can't "capture the real world as faithfully as possible" in the same way that there is no map projection that "portrays the globe as faithfully as possible".
It's the impedance mismatch between the real world - which is incredibly complex and messy - and a snapshot of that encoded as a JPEG.
Photography is a wide field, and aesthetics is in the eye of the beholder, as usual. For example, if you do photography which goes towards technical fields, architecture, forensics, archaeology, reproduction, documentation in general of whatever, and also a good deal of landscape photography, chances are a more perfect rendering on behalf of the used lenses is one of the goals. Some of that doesn't even have to do with aesthetics to begin with, but it is almost a hard requirement.
Let me give a counterexample: the Leica Summilux 35/1.4 (FLE), or the Voigtländer Nokton 35/1.7, both with a very similar rendering, which includes very strong field curvature wide open. Namely, the focus "plane" bends toward farther-away points at the image corners. Used in the right way, these lenses can create a 3D effect, and weirdly enough, you can see it even in a viewfinder with a single eye, which conceptually makes no sense, but truly is like that. I've seen it on the A7/A7s and tested it with other people; they all report the effect. This rendering is so distinctive that even non-photographer friends could eventually tell apart images made by the Leica (at or near wide-open settings). In return, it doesn't have the nicest bokeh, or rather you have to keep an eye on the background, but I also picked it for that harsher, "grittier" rendering.
Another example: Leica Summarit 90/2.4 vs. Zeiss Loxia 85/2.4. On my travels I use the Leica, but otherwise I try to grab the Zeiss. Thing is, the Leica, like its other 90mm brethren, suffers from veiling flare (yes, even the hailed Summicron, and also the old Tele-Elmarit), a problem the Zeiss doesn't have. In return, the Zeiss is bulkier and has almost twice the weight, so for travels (multi-day hiking etc.) the Leica has advantages over the Zeiss, despite being the inferior lens from a rendering point of view. Read: Sometimes you have to compromise on "image quality" (very fuzzy term really) for other factors like usability and so on. A common theme with the more perfect lenses: Larger + heavier, and that is not always a good compromise to take.
Which means there are many factors which play into lens choice, as usual, driven by a wide range of preferences and requirements. That also means it is not really possible to make general statements for one or the other, but "depends".
Do you have any sample images that showcase the qualities you are talking about? I can probably search it up myself, but if there’s anything you would like to showcase. Thanks!
I don't have pictures online to show around. After using that lens for years now I think there are a number of factors contributing to this 3D effect:
* In the center of the image, things behind the subject are most strongly out of focus.
* With the focus "plane" wandering off to the rear towards the corners, you can put up some "visual pillars" on the sides which are somewhat in focus, but shouldn't be according to our own visual training. That seems to mentally push whatever is actually in focus in the center even further to the front. It also means that if you just isolate the subject without that context, most of the effect goes poof.
* On my A7s, through that field curvature, the corners are also somewhat smeared.
* The lens actually does have a lot of microcontrast in the center, i.e. is really sharp.
I mentioned this lens because it is in certain ways superior to the Leica, but overall has very similar visual qualities. It doesn't show in every wide-open picture; it requires this "context" to work its magic, and some of the examples show that.
Another lens with this effect is the Voigtländer 21/1.8, but because it suffers in general in the sharpness department wide open it is less pronounced. I think on the above linked website there should be some examples for that, too. More perfect lenses look ... different. They certainly work better if you don't want these "visual pillars", so it is not like that field curvature is always the perfect thing to have, quite the contrary. However, it is a tool, or can be used as such, like so many other things in photography.
Sorry for the late reply. And I have no idea what other lenses show this effect, but I wouldn't exclude that possibility. From what I gathered you will need a field curvature where towards the edges the focus runs away from the camera. It needs to be a somewhat faster lens, because stopped down the effect is less pronounced. And finally, it should be sharp enough to convey this "in focus" vs. "out of focus".
I didn't know about all this at the time I got the Leica, only that it had field curvature wide open, and learned these things through using it. But considering the pictures available online, there are maybe ways to figure the field curvature out, because most test/review sites are rather useless in that regard.
Once you have seen these "special" images and their look it also makes it easier skimming through images from other lenses if they show that here and there.
Roger also jokes about the 3D-pop at the end of the 2nd link, but considering that tilt-shift lenses can produce a miniature look by meddling with the "field curvature", getting specific other looks through usage of actual field curvature isn't too far fetched of a concept. It is some sort of an optical illusion in the end.
There's nothing more boring than perfectly clean, crisp, and uniformly perfect images, but you do have a point. The issue seems to be the industry converging and removing choice though?
Indeed. A similar thing happened in the HiFi loudspeaker industry: If manufacturers simply pursued higher fidelity they'd converge on flat (on/off-axis) frequency response, near perfect impulse response etc. making them sound more similar. This is a big problem for branding so instead you'll find intentional imperfections in speakers above a certain price point to produce a certain signature sound.
> I want cameras to capture the real world as faithfully as possible.
Zoom with your feet, then. (That is, go there and look.)
With digital, what you get is determined by the compromises made by the sensor designers, or (increasingly!) by software; with analogue, what you get is constrained by your choice of film.
You're really going to write an entire diatribe about how the image quality of new lenses is aesthetically inferior to old lenses and not include a single example comparison??
This sounds very much like "records are purer than digital", which to my knowledge has been shown to be nonsense. I get that's not a perfect analogy but:
If all lenses are identical, why should I buy your product: I could be wrong, but last time I checked there's a hell of a lot more to a camera than just the lens. There's a bunch of mechanics: the lens housing, the sensor, etc. But also simple device ergonomics, etc., which I suspect (as a non-photographer) are more important in reality.
The distortion due to non-cleanness or what have you: as long as I have been aware of photographers as a profession (vs. hey I take pictures from time to time) I have been aware of physical filters, and straight up vaseline on the lens. Is that not a thing anymore?
All the pictures will look the same: I mean yes, but that is essentially true - what distinguishes "I take pictures" from "I get paid for my pictures" is timing, composition, positioning, etc. People would not pay hundreds-thousands for a wedding photographer if they could borrow/rent a good camera and have a friend be the photographer.
It sounds very much like if a painter were to argue mass produced oil paints were undermining oil paints by removing the variation in paint vs artisanally produced hand blended linseed+random toxic chemical paints.
For the movie WALL-E, Pixar commissioned a space cinematographer to advise on how they might render aberration, lens flare, and other optical artifacts for the space scenes. The cinematographer remarked that Pixar wanted to replicate the very things he spent a career trying to get rid of.
Nah. Getting a high-bit-per-pixel raw image with minimum distortions maximizes your creative freedom to produce final prints or digital images. There are important caveats: cheaper gear you are comfortable paying for is probably still pretty good, maybe you want one-click effects with no need to edit on a PC or phone, or analogue art for art's sake. Also, some effects like polarizing filters can't be replicated with post-processing. But otherwise it's trivial to create dozens of automated workflows that replicate the effects of older lenses and films, whereas you would be hard pressed to carry all of those with you.
Disagree. Light is analog, your RAW files are probably 12 bit max.
A physical Black Pro Mist filter is always going to diffuse light better than an ADC'd emulation working on quantized data. It's noticeable if you look.
Analog is not everything, and any filters with distortions reduce the options for what can be done with those 12 bits later, unless we are talking about overcoming limitations of the sensor, like a polarizing filter or a darkening filter to photograph the sun.
Of course, if you got a filter that does exactly what you want and you are going to use it a lot, there is nothing wrong with that and you may get more precise effects than post processing. But the post is about lenses, which are bulky and more expensive than filters. To me it makes more sense to invest in 3 clean lenses (travel, telephoto and compact zoom) than ones with built in distortions that I may not want for every photo. If someone is sufficiently wealthy, athletic and dedicated to carry a suitcase for 10 lenses everywhere, hats off to them I guess. Even filters are like $80?
No. Light is quantized (photons), as is analog film (emulsion crystals remain clear or turn dark, never in between). Virtually all modern DSLRs and DSLMs have 14 bit sensors, which results in quantization noise below the noise level of the rest of the system.
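The arithmetic behind that claim, using the standard ideal-ADC formula; the 14-stop dynamic-range figure is an illustrative assumption for a very good sensor, not a measurement:

```python
# Rough numbers behind "quantization noise is below everything else".
# Ideal-ADC signal-to-quantization-noise ratio: SQNR ≈ 6.02*N + 1.76 dB.
# The sensor dynamic-range comparison is an illustrative assumption, not a spec.

def sqnr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (12, 14):
    print(f"{bits}-bit ADC: quantization noise ≈ {sqnr_db(bits):.1f} dB below full scale")

# A very good full-frame sensor spans roughly 14 stops of dynamic range,
# i.e. about 14 * 6.02 ≈ 84 dB, so a 14-bit ADC (~86 dB) is not the bottleneck.
print(f"~14 stops of sensor dynamic range ≈ {14 * 6.02:.0f} dB")
```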
This happened before in countless products where it used to matter who made it, then all players achieved more or less satisfying results and the product became a commodity - or took a completely irrational turn into emotions, retro and sentiments; wrist watches come to mind.
There used to be specialty high end manufacturers of nearly all everyday items but once something hits commodity status, it gets sent to China and to the lowest bidder.
Making lenses imperfect just to give retro character is going exactly where the watch industry went after Quartz.
It’s all about the imperfections. This is similar to the modern audio recording industry. There are a ton of new high-end clean condenser microphones that have extremely good and flat frequency response, and some of them are surprisingly inexpensive. All you have to do is to add some “character” in post-processing by applying an emulator that can mimic classic microphones like the Neumann U87. Heck you can even go so far as to emulate the recording console itself in order to reproduce characteristics of 60s,70s, and 80s mixing desks, compressors, limiters, preamps, and the like. These things add imperfections and distortions that are somehow pleasing to humans, and I am guessing that is similar to what these lenses do.
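As a toy example of the kind of "character" those emulations add, here's the textbook soft-clipping building block; this is not any particular microphone or console emulation's algorithm:

```python
# Toy version of "adding character in post": a tanh soft clipper adds the kind of
# low-order harmonic distortion people tend to find pleasing. Just the textbook
# building block, not any specific plugin.
import numpy as np

def soft_clip(signal: np.ndarray, drive: float = 2.0) -> np.ndarray:
    """signal: float samples in [-1, 1]; drive > 1 pushes more of the curve into saturation."""
    return np.tanh(drive * signal) / np.tanh(drive)

t = np.linspace(0, 1, 48_000, endpoint=False)
clean = 0.8 * np.sin(2 * np.pi * 440 * t)   # a plain 440 Hz tone
warm = soft_clip(clean)                      # same tone with added odd harmonics
```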
At the end of the day, you should probably just use what works. Ignore the hype, name-dropping, and brand snobbery. Plug in your shit and twist knobs until it sounds good. I usually avoid emulators and go this way, as it (usually) produces the best results for what I’m trying to achieve.
I’m definitely not a photographer, but I am assuming that it would be similar in that domain.
// Zeiss could’ve saved themselves by reissuing their older lenses that have nice character. But instead, they’ve more or less exited the photo industry.
Funny that I just read an article on Zeiss, ASML, and EUV lithography. My next smartphone with a 5nm SoC inside actually depends on Zeiss's pursuit of the perfectly clean lens.
The article ignores that you can shoot 30fps of 89MP stills for a $5,995 list price using a BMD, and purchase lenses to taste for less than well-heeled hobbyists spend on DSLRs. [0]
People are designing brand-new lenses with well-defined aberrative character. Orion is a new company enjoying mainstream Hollywood adoption. Key scenes in John Wick 2 were shot with Orions, in particular the car-park action that opens and sets up the movie. (Yes, I justified watching John Wick 2 for the optical research. Twice.)
Cooke, the British lens maker famous for "The Cooke Look" precisely because they hand-calculated [1] attractive character and, until very recently, what was considered minimal distortion consistently across their range, are right now selling brand-new lenses that retain a considerable amount of that same character while hammering undesirable optical traits down to negligible levels. Remember that these pictures are being enlarged (projected) tens of feet across in your nearest theater. Moving pictures do conceal imperfections better than stills, but everyone making pictures with these lenses is staring at freeze frames at the DIT station and in post, on equal or better projection, for extended periods of time, and distractingly flawed taking optics would simply be eschewed today, except possibly for effects cut scenes.
Contrary to the article's self-serving and probably just ignorant statement, the lens-making Zeiss consumer unit [2] is in fine health.
However, they're making their progress in cinema lenses, because that more immediately pays for and nurtures the necessary R&D, which has become expensive or impossible otherwise when trying to match or follow Sony, Nikon, Canon, and Fujinon.
Rather than lament fictional losses of the strongest historical names (I'd be mad as hell if my company were misrepresented in this way), we are gearing up in a renaissance period for optical designs for all applications, including enthusiast photography. I have to stop here because the next item on this agenda is manufacturing tolerances and capacity, which, driven by increasing resolution, has been pushing costs and logistics into divorce territory from the industry of merely ten years ago. That separation probably underlies so many of the unfounded and hysterical outbursts posing as nostalgia, such as this article.
[0] The BMD is a 12K / 89MP S35/APS-C sensor for which the highest-quality lenses can be obtained for as little as the price of a couple of, e.g., Canon 50/1.0 examples (that Canon being just such a lens as draws a very attractive picture with far-from-modern corrections).
[1] Cooke never seems to have been forthcoming about automated design aids; however, the human element, and its degree of involvement in the designs, isn't unique even for the hyper-corrected Sony and Nikon glass recently launched.
[2] Maybe it's revealing that Zeiss denotes $20,000+ lenses as consumer products, but we're informed by their semiconductor division. This price isn't an outlier in any specialist photography field, but the performance of this lens currently is an outlier.
Can the BMD really shoot 12K? At 12K you're limited to significantly compressed formats. With so much of the data missing, resolving power is noticeably degraded when you look at a crop.
I've heard that the tiny photosites also contribute to the sharpness issue, but I won't pretend to understand why.
There are many possible reasons for images not looking sharp. Off the top of my head:
- When not stopped down, most lenses are not that sharp, especially older ones.
- Most sensors employ an anti-aliasing (lowpass) filter that blurs the image.
- The smaller the photosites, the worse the signal-to-noise ratio (a toy calculation after this list illustrates this).
- Bayer interpolation.
- Less than optimal post-processing, like aggressive denoising or sharpening with too large a radius.
- Compression artifacts and color sub-sampling.
- Analog outputs are usually bandwidth-limited (lowpass filtered) for EMI compliance reasons. Not an issue with an all-digital workflow of course.
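For the photosite point above, a toy shot-noise calculation; the photons-per-area figure and the pitches are made-up illustrative numbers:

```python
# Why smaller photosites hurt SNR (the photosite bullet above): photon shot noise
# scales with the square root of the collected photons, so SNR ~ sqrt(photons),
# and photons collected scale with photosite area. All numbers are illustrative.
import math

def shot_noise_snr_db(photons: float) -> float:
    return 20 * math.log10(photons / math.sqrt(photons))  # equals 10*log10(photons)

full_well_per_um2 = 2000          # assumed photons collected per µm² at a given exposure
for pitch_um in (8.4, 5.9, 2.2):  # e.g. a large cine photosite vs a 12K-class photosite
    photons = full_well_per_um2 * pitch_um ** 2
    print(f"{pitch_um} µm pitch: SNR ≈ {shot_noise_snr_db(photons):.1f} dB")
```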
First, the knocks on post-production completely ignore the history of film development, editing, & printing. The time to an output photo product has only gone down. Drastically.
And this piece ignores the needs of the commercial photographers that high-end glass is designed for, who need not a toy but as few pieces of gear as possible to land as many jobs as possible. When you're working for clients, it's easier to own less glass and a few pieces of software that will make it look however you want.
uBO's generic cosmetic filtering is disabled by default in Firefox for Android -- those placeholders are probably normally removed by generic cosmetic filters. I will add specific cosmetic filters for that site so as to avoid having to rely on generic cosmetic filters.
I feel like a few pictures here, illustrating the relative differences between clinically clean lenses and older lenses, would be worth a thousand words.
Why isn't depth a way bigger deal? Cameras could capture depth and give you whatever virtual lens you want, in camera like a phone filter.
Lenses are fun and collectable and all, but I think my idea of a near perfect camera would be a one piece waterproof superzoom with depth capture and AI processing. Less to fuss with, and you can carry all your virtual lenses everywhere.
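The "virtual lens from depth" idea, reduced to a crude sketch: blur each depth layer by an amount that grows with distance from a chosen focal plane. Real phone pipelines are far more sophisticated; every parameter here is an assumption for illustration.

```python
# Crude synthetic depth-of-field from an RGB image plus a per-pixel depth map.
# Not a real camera pipeline; just the shape of the idea.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(rgb: np.ndarray, depth: np.ndarray, focus_depth: float,
                  max_sigma: float = 8.0, layers: int = 8) -> np.ndarray:
    """rgb: HxWx3 float image; depth: HxW in the same units as focus_depth."""
    out = np.zeros_like(rgb)
    edges = np.linspace(depth.min(), depth.max(), layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth <= hi)
        mid = (lo + hi) / 2
        # Blur grows with distance from the focal plane (a stand-in for a lens model).
        sigma = max_sigma * abs(mid - focus_depth) / (depth.max() - depth.min() + 1e-9)
        blurred = gaussian_filter(rgb, sigma=(sigma, sigma, 0))
        out[mask] = blurred[mask]
    return out
```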
Lightfield cameras were briefly a thing, but then I guess the engineers at Google and Apple figured they could fake it well enough with algorithms + depth sensors.
Smartphones had nothing to do with Lytro's death, though; DSLR users ultimately decided that they liked super sharp pictures more than the ability to focus in post.
This one is super interesting to me - clearly Snapchat, Google, Apple, and I assume Samsung have proved that this is viable.
While some features (e.g. portrait mode) exist to work around limitations that many dedicated lenses don't have (the inability to create a shallow DoF), others are more interesting, e.g. Snapchat filters.
Still, I worry that these are too difficult to reason about as compared to something like the exposure triangle, and the imperfections of things like portrait mode can be very obvious. Perhaps these are better left to post-processing tools?
In-camera-phone processing has the advantage of access to depth and accelerometer data recorded during potentially multiple sequential exposures of multiple cameras for stacking and motion blur deconvolution.
There's a parallel in the advent of stacking being a massive boon for amateur astrophotography. There, the natural format for unprocessed data is video, hours of which are recorded in-camera and processed afterwards. Perhaps someday a new RAW-over-time format will be developed to hold the raw data of multiple sensors, optical and otherwise, to enable smartphone processing after the fact.
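The core of stacking, stripped to a toy example; alignment and registration, the genuinely hard part, are waved away, and all numbers are illustrative:

```python
# Average N aligned frames and random noise drops by roughly sqrt(N)
# while the signal stays put. Alignment is assumed perfect here.
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[28:36, 28:36] = 1.0                      # a faint "star" / detail

frames = [truth + rng.normal(0, 0.5, truth.shape) for _ in range(64)]
single = frames[0]
stacked = np.mean(frames, axis=0)

print(f"single frame noise ≈ {np.std(single - truth):.3f}")
print(f"64-frame stack noise ≈ {np.std(stacked - truth):.3f}  (~1/8 of a single frame)")
```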
"Super resolution" may be the marketing term to search for to investigate your interest. However sequential image composite decononvolution hasn't solved for usually desired out of focus transition effects commonly hand waved as " bokeh" because the important contrast gradient limit of the Airy Disc is convolution.
Accelerometers are appearing even in the lowest-budget cinema cameras, whilst ARRI, the de facto market leader that used to be the left-field upstart (finishing The Bridge on the River Kwai, critically by being inexpensive, light, and ergonomic, of which only ergonomic is arguably retained by contemporary boxes), has just gotten around to putting one in a smaller-sensor camera that costs six times the bigger-sensor competition. (I'll ignore that the 3:2 vs. 4:3 ratio of the newest ARRI is wonderful for anamorphic squeeze and utterly wasted for spherical shooting, and ignore RED offering very usable ratio compromises.)