Small differences in flange distance matter less than is often thought in the photo-tech-gawking sphere, because flange distance doesn't per se limit the back-focal distance of a lens design; note how in most non-tele lenses the rear element sits inside the mount or protrudes out of it. Flange distance is merely a property of the mechanical interface, not an optical property. E.g. some people argue that Canon RF has to be worse than Nikon Z because the flange distance is 2 mm longer, but if you look at the available designs, both have lenses with a BFD of 10 or 11 mm.
The article mentions image-side telecentrism for reduced aberrations. This is another talking point often brought forth; "the big mount allows a rear element so big that it can cover the image sensor with a near-telecentric swath of rays". Again, if you do a reality check here you'll notice that the chief ray angle (the angle between the chief ray of a ray bundle at the image circle and the image plane; chief rays are those crossing the center of the aperture) in actual lens designs is nowhere near 0° and is instead somewhere between 35-50°. Part of that equation is certainly that microlenses are offset as you leave the center of the sensor, so essentially sensors are designed for a particular range of CRAs and will not do well if you go way out of that range, which is exactly what wide-angle rangefinder primes usually did: putting them on a digital camera results in strange color shifts towards the edges of the frame.
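As a back-of-the-envelope check of my own (not taken from any lens datasheet): the chief ray angle at a given image height follows from simple geometry, roughly atan(image height / exit-pupil distance), so it's the short exit-pupil distance in compact designs that drives the angle up, not the mount:

```python
import math

def chief_ray_angle_deg(image_height_mm: float, exit_pupil_distance_mm: float) -> float:
    """Approximate chief ray angle at a given image height.

    The chief ray passes through the center of the exit pupil, so its angle
    to the optical axis at the sensor is atan(h / d), where h is the image
    height and d is the exit-pupil-to-sensor distance.
    """
    return math.degrees(math.atan2(image_height_mm, exit_pupil_distance_mm))

# Full-frame corner sits ~21.6 mm off axis; the exit pupil distances below
# are illustrative values, not measurements of any real lens.
for d in (25, 40, 60, 100):
    print(f"exit pupil at {d} mm -> corner CRA {chief_ray_angle_deg(21.6, d):.1f} deg")
```

This also makes clear why telecentricity (CRA near 0°) would require an exit pupil far from the sensor, which compact lens designs simply don't have.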
I'm moderately convinced that this is all about manufacturing costs. SLRs are complex beasts with mechanical parts that need to be made to very tight tolerances and eventually wear out. The physicist in me knows that numerical aperture, the various inevitable trade-offs made in lens design, and ultimately sensor size are what really limit optical performance. Going mirrorless means that you can spend the budget on those components rather than on the mirror box, the pentaprism/pentamirror on top, and the associated viewfinder electronics. It's not necessarily a bad thing, but it is above all a business decision and not really a scientific one.
I personally like my [pentax] dSLR a lot -- the batteries last forever because most of the time all the power-hungry electrical components are off. The optical path through the viewfinder works perfectly with my eye, and I like the fact that, unlike an EVF, it gives me a human representation of how bright or dark a scene is. Put a long telephoto lens on, and watch everything get darker. I very much "feel" that the process of pressing the shutter button connects me with the physics of what I am doing in a very visceral way.
Manufacturing costs are a big part of it. It's safe to assume the AF sensors for a given segment (often as much a differentiator as megapixels in DSLRs) are fairly custom/proprietary widgets and thus more expensive, compared to image sensors, where costs are more often amortized (e.g. Nikon using Sony sensors).
And quite importantly: lighter, and longer battery life. Displays, CPU, RAM, flashlights and storage use a ton of power, anything that can save on that is useful.
DSLRs have longer battery life unless you're constantly taking pictures (and can consequently be lighter, since the battery can be smaller), because the sensor is typically off, whereas in a mirrorless camera it's constantly feeding the EVF.
The majority of the size of the camera is going to be in the lens anyway. Mirrorless is not going to make your telephoto any smaller.
The two advantages mirrorless really has over a DSLR (other than... lack of a mirror, which makes it cheaper) are faster frame rates, since you don't have to move the mirror, and the fact that the EVF shows the effects of ISO/exposure/etc. Modern DSLRs have live view anyway, though not through the viewfinder.
Ironically, those are all arguments against a mirrorless camera and for a DSLR, swapping an EVF's circuitry and screen for five lightweight mirrors. My Pentax is built like a brick and quite sturdy, but a modern low-end Canon DSLR is surprisingly light: the 250D is 419 g.
Interesting, yes, that is a good point if the camera is 'on' more than off between shots. The DSLRs that I'm familiar with all have whopping large batteries versus the mirrorless ones, which come with puny little cells.
Of course it is about costs. The shift was driven by the fact that on-sensor phase-detect autofocus got good enough that a separate phase-detect array was no longer required to achieve a high-performance camera (the DSLR's days were numbered when the Sony A6000 was released), and today the separate array would probably result in worse performance.
Shorter flange distances do mean that retrofocus designs can be avoided, and an equivalent optical setup can be made much smaller.
For example, a Voigtlander 35/1.2 lens for mirrorless full frame is about 60mm (diameter) x 60mm (length) while I don't know of any equivalent lens for an SLR that is less than 90x120mm.
Also, you can't have the rear element go too far behind the flange distance of an SLR without getting smacked by the mirror. With mirrorless this isn't an issue. But you don't want your ray angles to be too extreme or the microlenses won't focus well onto the photosite.
The largest market of photography dilettantes (like me) switched to phones. Most people just want a shitload of bokeh (real or software), sharp wide angle without a lot of distortion, and the ability to zoom without losing too much detail.
Besides, most Good Photographs are 90% technique/framing/instinct/editing, NOT hardware. (Just get closer to your subject.) "The best camera is the one you have with you", too.
So that only really leaves pro wedding, sports, and journalism photographers, and that just isn't nearly as big a market as the golden age of late-00s DSLR Flickr hobbyism.
This isn't about killing off interchangeable-lens cameras that look like DSLRs; it's about killing off actual DSLRs.
They're just getting rid of the inferior mirror technology, the thing that flips in front of the sensor, not conceding to phones.
The entire Sony A7 line, for example, is still going to be made.
It's kind of a troll discussion, but it's a good thing to happen, because SLRs have been associated with a look or form factor for too long, and it's time for that ambiguity and term to die.
Definitely true. I did photography as a hobby, but phones match or exceed the performance in many cases now. For example, low-light mode on phones outperforms older entry-level Canon DSLRs (T3/T6) by default, without having to twiddle with the settings. Of course, you could get better pictures with these cameras, but it takes time and a really good lens! So in other words, it's just not ideal for quick pictures around the house.
I think there are two areas where phone cameras have a lot of room for improvement: sharpness and landscape.
By sharpness, I mean that if you zoom into any phone picture, it falls apart very quickly. There is a lot of software optimization hiding how a small sensor can’t actually do that well in the details.
By landscape, I mean if you take a picture of a mountain or object in the far distance, you’ll barely be able to see it in the phone because the lenses are so wide. It really makes nature pictures underwhelming :)
> low-light mode on phones outperforms older entry-level canon DSLRs
Shouldn't you compare with modern entry-level DSLRs? Otherwise you are, of course, skipping all the technological improvements that DSLR cameras might have received over time.
There isn't really a fair comparison between low-light mode on phones and what the best SLRs can do at high ISO at night.
Phones are now using a combination of photo stacking and HDR in order to achieve pretty ridiculous low-light results.
You could certainly sit there with an SLR and bracket half a dozen shots and then stack/HDR them in post, but that is a lot more work and can't be uploaded to facebook/instagram seconds later ;)
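The noise benefit of stacking is easy to demonstrate. Here's a toy sketch of my own (it assumes the frames are already aligned, which is the hard part a phone's pipeline handles for you): averaging N frames cuts random noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_frames(frames):
    """Average a list of aligned exposures; shot noise falls ~sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)

# Simulate 8 noisy captures of a flat gray scene (noise std = 10).
truth = np.full((100, 100), 50.0)
frames = [truth + rng.normal(0, 10, truth.shape) for _ in range(8)]

single_noise = np.std(frames[0] - truth)
stacked_noise = np.std(stack_frames(frames) - truth)
print(f"single-frame noise ~{single_noise:.1f}, 8-frame stack ~{stacked_noise:.1f}")
```

Real phone pipelines add robust alignment and outlier rejection on top of this, but the core win is the same averaging.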
I only agree on the seconds later and that's where it ends. Unless you are shooting for social media even old dslrs with a good lens outperform any phone in terms of final output. It takes offline processing but the result is almost always better. Phone photos definitely look nice but generally look nice on phone screens, zoom in a bit and all the magic is gone.
I just try to keep things in perspective, it's pretty wild how many pictures I took with my iPhone 4S because it finally had a reliably good camera.
If I can't take high quality pictures with an iPhone 12, that's a creative issue, not hardware. Low-light capture is better than my DSLR ever was without a tripod, too.
Don’t get me wrong, it’s impressive that I can take a photo of my moving baby in dark situations and my iPhone will do image stacking to get an acceptable image - but it’s just that, acceptable. On a phone. Blow any of those up and they’re a smeary, noise-reduced mess.
I’d rather shoot at high ISO on one of my cameras and have some grain.
I think the point is more that someone who felt they needed an entry-level DSLR a few years ago can get better performance from their phone today, and doesn't need any camera.
>Shouldn't you compare with modern entry-level DSLRs?
Not if the quality of those older entry-level DSLRs was already good enough for most people, and even good enough to win photo awards, print and hang in galleries in many cases.
>Otherwise of course, you are possibly skipping all technological improvements that DSLR cameras might have received over time.
Considering the above, who cares, if their current phones already do a fine job as far as they are concerned?
From 2000-2005 up to around 2010 or so, the era of "potato camera phones", that wasn't yet the case, so casual Joes and Janes still bought the entry-level DSLRs of the time.
From 2000 to 2012-ish, casual Joes and Janes were buying point-and-shoots. Most folks only bought DSLRs if they were actually doing it as a hobby or decided to take photography as an elective in college.
At the time, DSLRs were a worse video experience for casual users, if they supported video at all. Point-and-shoots ranged from pocket-sized to 'DSLR'-sized, and typically (but not always) scaled accordingly in relative capability.
In any case, point-and-shoots are what got killed in the last decade, or rather got moved to niches. They're still the best option for certain things (e.g. superzooms like Sony's RX10 IV or Nikon's P1000). Whether DSLRs/MILCs get niched out remains to be seen.
Phones naturally get replaced every two years or so, for lots of reasons most of which have nothing to do with the cameras.
DSLRs are expected (by me, anyway) to last for decades. Mine was made in 2009 and still looks and functions just as if it was brand new. Since they last so long, lots and lots of people still have DSLRs as old as mine and want to know how they compare to the recent phone camera that they also have.
Yeah, that's always disappointing. Realistically, this is only an issue in the editor; no viewer can even zoom into a photo on Instagram etc. Even the sharpest Ansel Adams print would be subject to the same JPEG compression artifacts once uploaded. I'm not sure the modern hobbyist/entry-level crowd prints much these days.
It's still so nice to zoom deep into my crisp old shots.
Print? Who is even looking at photos after taking them?
But photos are also used as input for other graphics that can be printed - which is now also less common for reasons you have already mentioned.
By low light mode on phones I assume you mean image stacking. This requires a stationary subject.
Intrinsically, phones are still far behind simply because of the physics of having a tiny sensor. https://www.imaging-resource.com/IMCOMP/COMPS01.HTM has a comparison tool that includes both phones and DSLRs. The newest phone I found was the iPhone 11 from 2019. For comparison, I chose the Canon 40D from 2007 (4 years older than the models you list) as the oldest model with the same test scene available. It's a 2007 cropped sensor DSLR. The shot of the 40D at ISO3200 absolutely destroys the iPhone 11 at ISO2500 (try reading the bottle labels). The 50D from 2008 has a 12800 image available and it still easily beats the iPhone at 2500.
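To put the physics gap in rough numbers (my own back-of-the-envelope calculation; the sensor dimensions are approximate and I'm ignoring lens speed and sensor efficiency): at the same f-stop and angle of view, light gathered scales with sensor area, so the deficit in stops is just log2 of the area ratio.

```python
import math

# Approximate sensor dimensions in mm (illustrative figures, not exact specs).
sensors = {
    "full frame": (36.0, 24.0),
    "Canon APS-C": (22.3, 14.9),
    "1/2.55-inch phone main": (5.6, 4.2),
}

area_ff = 36.0 * 24.0
for name, (w, h) in sensors.items():
    area = w * h
    stops = math.log2(area_ff / area)
    print(f"{name}: {area:.0f} mm^2, ~{stops:.1f} stops behind full frame")
```

Roughly five stops between a phone main sensor and full frame is a gap that computational tricks can paper over in favorable conditions, but not erase.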
> By low light mode on phones I assume you mean image stacking. This requires a stationary subject.
Which is a misconception. The entire point of stacking in smartphones is to handle lots of movement in a robust way, and it's surprisingly good at that. You can actually perform similar stacking manually for a camera, the algorithms are well known and there are applications to do just that, for example [1]. There's less benefit on a normal-sized sensor since it can perform much better with a single shot as you pointed out, but there are still valid uses for it in certain scenes where even proper cameras struggle, see [2].
That's the low end of the market basically. It's no different than picking up a cheap compact camera a few decades ago. I had one (a Kodak) and I knew absolutely nothing about photography. Point, click, bring the film roll to the store, and whatever you got back was it. Not very different from giving full control to Apple/Google over the creative outcome. And they do a decent job of course. They do some amazing things with the very limited sensors and lenses that these cameras have. But there are limits to that of course.
Pro cameras start at a few thousand dollars. Clearly not for everyone, but they were always that expensive. And that's before you consider the lenses. The type of person that spends 2000 dollars on a lens that only works in very specific situations is never going to be satisfied by an iPhone.
Most amateurs with ambition fall into the pro-sumer segment that yearn for more than a cheap lens + sensor in a mobile phone can give you but are not necessarily interested in spending thousands of dollars on their hobby. There are plenty of options for those people. I bought a Fuji X-T30 in 2020; a typical entry level camera for people that like to have some level of control over the whole photo taking process.
It was about the same price as an iPhone and it definitely takes better photos if you know what you are doing (and probably even if you don't) because of basic physics. The lenses are bigger and better. Some of the lenses you can buy are actually way more expensive than a typical iPhone. Also, the sensor is bigger. It's just more light and information being gathered before destructive post-processing happens. And you have more control over the process. Not for everyone. But those people still exist.
You don't have to spend anywhere near that money, though.
When you're spending over 2000 dollars, you're at the point where you're either in a very niche category, or you're paying for convenience.
At $600, you can get a full-frame Sony A7ii (used). At that point, you can get a $30 adapter and use manual lenses, and you'll get a very competent collection from 24mm all the way to a 135mm 2.8 for an additional ~$500. This will last you forever, but you won't have any autofocus.
If you want autofocus, you can spend $150 on a used EF-E adapter, and buy a 50 1.8 for $100, the venerable 80-200 2.8 for $400 or so, and a normal zoom for $200-400 (I got the Tamron 28-75 for $200). At that point, you'll get essentially any image you could want, at stellar quality, for $1500 or so, with autofocus and image stabilization.
Past that point, you're either trying to do something extremely specific, or paying for more convenience. The gear itself will outlast you if you don't do anything stupid with it, and you will get more image quality than you can ask for, and absolute and complete control over every step of the photographic process, for less than the price of an iPhone 13 Pro Max. And if in ten years you get tired of it, you can sell it for at least half the price you paid for it.
>That's the low end of the market basically. It's no different than picking up a cheap compact camera a few decades ago.
It's quite different, market-wise, in that today that "end of the market" is 80% smartphones, the previous "low end of the market" for compacts is as good as dead, and even a big part of the enthusiasts who bought cheaper DSLRs in the 00s and 10s are now content with just a smartphone.
A dedicated camera is better in the situations where you're willing to lug it around, have the foresight to change lenses, etc. I was reminded of a photojournalist who took great shots in North Korea with an early iPhone https://www.npr.org/sections/pictureshow/2013/03/05/17349916... Fidelity doesn't matter that much.
Nearly 80% of the photos that I have taken since 2011 have been on my mirrorless camera that I've only had for nine months. The phone wins when it is all you have and there is something extraordinary to capture, but it has never compelled me to put effort into taking photos like the camera has.
Until you hate noise. Noise makes phones a non-starter (the sensor is too small). Or until you're in non-standard lighting conditions, like a dark night. A year ago I found a mantis on my balcony[1]. I tried to take photos of it with my phone and my compact camera, and all of them were crap due to noise. Several months later I bought myself a DSLR, and now I can take pretty good photos in very different lighting conditions. I plan to go further in that direction still and get myself a full-frame camera to get even better photos.
Also, stabilising a full-blown camera in hands is a lot easier than stabilising a smartphone.
[1]: Mantises aren't really a thing in this part of the world.
I was just reading the Master and Commander series of books. There is a great scene where Maturin (the surgeon/naturalist) takes notes while watching a pair of mantises mate; they continue while the female bites off the male's head.
Now that's the photo we hope you capture!
Completely agree. My DSLR is mostly gathering dust these days - the things I bought it for that phones could not do well at the time were primarily landscapes, low-light photography, and wildlife. My current phone has a wide angle lens that does a great job with landscapes. Phones over the last few years have gotten very good at low light. DSLRs still win here, but not by enough to make it worth lugging around (with a tripod, too).
Wildlife is my only remaining use case where the DSLR is still the only viable option as I generally can't get very close to the subject. This seems too niche to keep an entire market afloat, though.
I'll also still lug the DSLR out if I have a shot planned out that I want to nail or when I'm asked to take photos of someone's kid. Having the ability to work with raw gives a lot of options for fixing lighting and colors. Today's phones do a fantastic job of getting you 80% of the way there automatically, though.
Look into something like a Fujifilm X100V or Leica Q2 (depending on your budget)
A camera should be nice and compact, then it won't be a hassle to have it with you.
The problem with modern cameras is they are heavy, bulky, junk build quality, have awful menu systems and don't play nice with your phone or your computer. What a wombo-combo of pain, just to get a nice photo!
If you can afford a Q2, all of those problems are addressed.
Modern cameras are like writing corporate software in Java. Leica Q2 is like writing short programs in Python. Phones are like using Squarespace to build a website.
Hm, I quite like your analogy! I actually just got into photography not too long ago with an R5. Thinking about it now, it really does feel some sorta "corporate." I also considered the fujifilm xt4, all the reviews make me feel as if that'd be a Pythonic kind of camera.
After falling in love with the whole photography universe, I hope one day I can experience the COBOL-like world of film. I did grow up when film was around, in the 90s and all that, but my family was too poor and I only ever messed around with disposable cameras and Polaroids. Polaroids themselves are a whole other insanely awesome universe to explore.
I find the experience of going through 300 raw images and developing them the worst part of using a DSLR, and it's why I prefer modern phones with AI-based image processing.
But you can’t argue with DSLR results, though. And when we talk about pro sports photography, exactly the same demands (large aperture, long and variable focal lengths, high ISO with low noise, fast focusing, fast shutter speeds) also apply to us amateurs taking pictures of our kids at poorly lit indoor sports events.
If I spent 10 hrs a day for 365 days in a row I wouldn’t get through my backlog (easily way over 500k photos), but that’s not really the point. I don’t need to process and post every photo I take; many are either just for me or alternate takes that I might want someday. Looking through photos I took 10 years ago is one of my favorite activities, and I often find hidden gems that I had dismissed in the moment when originally going through them.
I do the same thing, I follow Johnny Harris' method[0] mostly. Basically I pick a random date (or an iOS provided "memory") and go through a bunch of photos related to that event. Then I delete all the useless ones, like the 99 takes I did of a sunset, and just keep the best ones.
I find this argument nonsensical. Cameras don't force you to take 300 images, nor shoot RAW. As someone who shoots on film, I hear this all the time from people who try to justify why they shoot on film (...and then blow through ten rolls of Portra).
I should have explained that I mostly shoot kids: birthdays, their sports and other events. Strictly at the amateur level, but for these kinds of occasions it's quite beneficial to take 300 shots, since it all happens in the moment and you'll have a lot of misses on the way to 30 keepers.
Also, JPEG straight out of camera can be OK if you're controlling the circumstances and have time to take test pictures and adjust parameters, and the entire scene isn't changing in seconds. Once I went to RAW I could never go back. Now you don't have to worry about white balance or even exposure in the heat of the moment. It lets you save a lot of shots which would be lost in JPEG. The price you pay is staring at 300 raw images afterwards.
I must admit this affects my desire to use a DSLR in the first place. I tend to use it only if absolutely necessary.
I dropped to a micro 4/3rds setup for walking around for this very reason.
The lightest body and 2-lens setup I could configure was about 5-6lb depending on how many spare batteries I wanted to carry as well.
My 4/3rds setup with flash, 5 lenses, and a few batteries was only 2.2lb. I don't even notice I'm wearing it.
I did this, 2013-2019ish. Unfortunately there has been so little development in m4/3 sensors over this time period that my iPhone has replaced it for all scenarios except occasional extreme-tele. The improvement in quality offered by even the best m43 camera and glass over the latest phones just isn't worth the effort to me anymore at focal lengths below 200mm-ish, or I want to do something that requires a traditional camera "hot shoe" like external lighting.
The latest and greatest m4/3 camera today doesn't really take still pictures any better than an Olympus E-M5 from 2013 (the increase from 16 to 20 MP makes little "real world" difference), while cellphones improved _enormously_ over the same period. With a modern iPhone, I get an extreme wide angle, a "standard", and a reasonable portrait focal length all in one device, with the files in Lightroom immediately after taking them. Yes, you can argue for days about "fokeh" vs the real thing, but m4/3 isn't the platform you want to be on if rich bokeh was your primary objective in the first place. I've also yet to find a three-lens kit, even in m4/3, that covers the range of a modern iPhone and fits in my jeans pocket!
I think in some cases many people just like the "ritual" of using an interchangeable-lens camera more than they practically get better images than with a decent phone, which is fine! For some reason it's not fashionable in photography circles to admit this; I guess the fear of looking like a dilettante, etc.
The 20mp 4/3 sensors have noticeable advantages in dynamic range and high-ISO noise compared to the 16mp sensors. It's not just more pixels. A new ~23mp sensor with further performance improvements should show up this year in the Panasonic GH6 and an unnamed camera from the company formerly known as Olympus, though it's not clear if or when it will make it to more pocket-friendly models.
Perhaps, but I have a Panasonic G9 (20 MP) and an E-M1 Mk1 (16 MP, from 2013), and the differences, while measurable to some degree in benchmarks, are really not that noticeable in day-to-day use. The m4/3 industry effectively relies just on Sony for sensors now, as Olympus/Panasonic do not make them, and there is not much commercial incentive for Sony to invest more; this is why we only get a sensor feature bump when the Sony sensor catalog finally gets a new installment in 17x13mm.
I noticed a big difference going from a Panasonic G7 to a GX9. It gave me a stop more usable ISO and a lot more ability to bring up the shadows when editing.
I think the problem here is general-purpose phone design and cheap optional cases. I would absolutely love a camera-like industrial design and a real/artificial shutter "snap". I hope this improves.
Personally I really enjoy doing analog photography. As the feedback loop is very long, technique is important. Also, since the number of pictures is limited, I always have to ask myself the question: is there actually a picture here?
Analog has all the tactile aspects I miss from DSLR and more, I definitely take more analog than I used to.
I also think there's a noticeable gulf between fake analog and real, even on screen. The most skillful Photoshop creations still look totally off to me.
Phone + an old analog camera is an incredible setup imo
It’s true that most people switched because convenience and marginal cost usually beat quality. That’s how we went from landlines to “can you hear me now.” But I have the latest iPhone and a low-end SLR, and the SLR blows it out of the water for photos of my kids. All but one of the photos on our wall are from the SLR, even though 90% of our overall photos are from the phone.
>Most people just want a shitload of bokeh (real or software)
Software bokeh isn't really that good though. It's especially apparent with things like hair, which is unfortunately a lot of the edge area if you're taking portraits.
It drives me crazy too. I think iPhone portrait mode has really rough results but people seem to use it. Honestly most DSLR bokeh photos were bad ten years ago too. (for different reasons)
I think one of the problems is they don't simulate it correctly. They segment the person with a neural net and then blur the rest by a constant arbitrary amount.
Ideally it should estimate the depth map of the scene and simulate different amounts of blur outside the DOF.
Here's a RealSense-based software bokeh I wrote that interpolates between 2 levels of computed bokeh. If I didn't have to render at 30fps I would actually just compute it on a per-pixel basis.
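Not the RealSense code referenced above, but a minimal pure-NumPy sketch of that per-pixel idea (all helper names are made up for illustration): derive a blur weight from each pixel's depth distance to the focus plane and blend between the sharp image and a blurred copy.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur; a stand-in for a proper lens-shaped bokeh kernel."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def fake_bokeh(img, depth, focus_depth, max_radius=8):
    """Blend sharp and blurred copies per pixel by depth distance from focus."""
    blurred = box_blur(img, max_radius)
    dist = np.abs(depth - focus_depth)
    w = dist / max(dist.max(), 1e-6)  # 0 at the focus plane, 1 at the far end
    return (1 - w) * img + w * blurred

# Toy scene: noise texture with depth increasing left to right, focus at the left edge.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
out = fake_bokeh(img, depth, focus_depth=0.0)
```

A real implementation would vary the kernel size continuously with the circle of confusion and handle occlusion boundaries, which is exactly where the constant-blur phone approach falls apart.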
And then people like me come along, point out how bad it looks and all the flaws present, and their view on the quality of software blur changes (at least for some time).
I agree. DSLR manufacturers never really took phone cameras seriously. This hit me when my SO decided to buy a new iPhone over a Sony Alpha for her YT channel.
And Canon and Nikon in particular were/are mostly hardware companies that catered to professionals desiring complete creative control. Sure, they stuck on all the various auto modes for casual users who wanted to use a "real" camera, which in all fairness gave them better results than the alternatives even if they weren't exercising a fraction of the camera's capabilities.
However, with the last few generations of smartphones, it's really pretty silly to be using a DSLR with a kit lens on full auto. Even food porn or snapshots in low light (one of the last day-to-day things that phones really didn't do well) is remarkably good on phones these days.
There's a huge segment of prosumers missing from your list - aspiring content creators. If you look at the numbers someone like Peter McKinnon is putting up on youtube, you realize there are millions of people out there learning how to use mirrorless + DSLR kits for things like twitch, youtube, ecommerce, etc.
Another big market for DSLRs is studio photography. I haven’t come across speedlites/studio strobes that can sync with a phone (tbh I haven’t looked either!)
A pro camera with a speedlite pointed at a white ceiling would produce amazing indoor pics every time, unlike a phone, which would be dependent on ambient light quality.
Phones have surely gotten better, but boy have they gotten bigger. The iPhone Pro Max is more than halfway to the weight of an X100V, and only 100 grams lighter than my Olympus. I still like having a small phone with me all the time and a camera sometimes. It's nice not to need the greatest camera phone available, because it's not all you're going to shoot with.
Plus, phones are just never going to manage decent shots on any focal length that's not wide as hell. I know they have long lenses now, but they are crazy slow and useless in anything but direct sunlight. I mostly take photos of people and almost never choose to shoot wide. Phones cut me right out of how I want to shoot. I've actually started putting a long prime on my camera and using my phone at the same time if I want a landscape. Works pretty darn well.
There is still also a niche market for underwater photographers. Housings are available for some smartphones now but the photos are mostly crap due to small sensors and poor integration with external strobes.
Nope. GoPro still images look terrible once you go deeper and lose a bit of natural light. They don't really work with external strobes either. But for video they do fine, as long as you have good lights.
The preferred hobby tier underwater still camera is now the Olympus TG-6 with a cheap plastic housing.
Sorry, but no: if by underwater you mean diving, and not dipping your hand 0.5 m underwater or snorkeling at a similar depth. The GoPro has a tiny sensor and a crappy lens, and it shows, even if you set the quality bar as low as phone-screen-only viewing.
You can end up with reasonably nice pictures from time to time; I've gotten dozens from all the activities I've mentioned (using a GoPro Hero 2 with a custom housing in the past, and newer variants now), but you will lose many more moments due to the device's rather poor performance.
The key word is niche. The area that these companies can ONLY afford to cater to. The Canon Rebel users of yore, taking slightly out-of-focus holiday snaps are totally lost to phones.
Don't forget wildlife photography. I have a Canon 100-400mm lens on a DSLR and wildlife photography with it is a really rewarding hobby. Don't use it for anything else really.
Yeah, I do wildlife photography as a hobby. Cellphone cameras will never catch up, especially for bird photography. That 100-400 lens will bring subjects 1/4 mile+ away right in front of you in great detail. Software can't make up for the gap in data that sensors and lenses bring.
I joke that the number of photos I take is inversely proportional to the expense of my camera system. I shoot most of my photos on my iPhone these days. I sometimes use my Fuji mirrorless system, which I have a few lenses for, if I want to be a little more serious on a trip or am taking photos for an event, and I have a full-frame Canon with both very wide and very long glass that I rarely use these days.
What exactly is a "good photograph"? Are we purely talking about the artistic aspects (composition, etc.) and disregarding the technical ones (sharpness, etc.)?
I've seen some amazing photos done with crappy cameras (or phones, e.g. an old iPhone 4). But I imagine the same photographer would take much better photos with a brand new DSLR (or mirrorless!) than with an iPhone 4.
...again, unless the term "better" used in the context of these kind of discussions is supposed to disregard the technical aspects of a photograph.
If I hired a top photographer big bucks to photograph my wedding, and he/she showed up with an iPhone4 saying that they are confident with it, I'd be very wary...
40x optical zoom point-and-shoot, baby! The only reason I have a dedicated camera anymore. I keep it in a little belt pouch when I'm traveling or even hiking. I want to capture moments, but I don't want it to be my hobby.
Agreed. I like phones for photography - the image quality, optimizations, and features are great - but they have shitty ergonomics compared to a real camera. The thing with the phone is that it's always on me, and that's what matters in the end.
I used many cameras, film and digital from compacts to one of the best 6x7 medium format rangefinder. Just pulled the trigger on a Leica DG 15 mm for micro four thirds, which will probably be my main lens from now on.
A phone camera is like a multitool. While it can replace a computer or a camera for 80% of the tasks, doing real work still requires a computer or a camera. There are also physical limitations, as in physics. You won't see too many sports or wildlife photographers doing their job with a phone, but for the majority of people who take selfies and snaps of their food, babies and cats, it's perfect. Also for the creative types and new media.
Dedicated cameras, with all their compromises, are just more fun to use.
Photography to me isn't just about the final output, but the experience getting that photo.
> the golden age of late 00s DSLR Flickr hobbyism
Even though I do miss parts of that era, the mirrorless cameras we get nowadays are great, and can do so much more. Even the APS-C cameras today can do just about anything you need a camera to do, and manage to be reasonably compact.
There was a point in time when people wanted this because it looked professional, but now bokeh is a dime a dozen.
I think these days people want it because it isolates a subject without them having to do work in thinking of a good composition that isolates the subject with other techniques (e.g. contrast, color, and so on).
It’s not always about work. When I’m shooting weddings f/1.4 is a staple because unless it’s the portrait portion of the day I have no control over the background, apart from my own positioning.
The room the girls get ready in is always a disaster, for instance.
I think your opinion is shared by a lot of people, but at the same time, "DSLR hobbyism" is alive and well, it's just shifted to Instagram.
That being said, as another dilettante, I take WAY MORE pictures with my very heavy camera system (~7000 a year) than with my phone. It's because phones can't come close to the focused experience of having a camera in your hand.
As far as Good Photographs being 90% technique/framing/instinct/editing, that's true. But phones don't let you get near the level that's possible in those categories with an interchangeable lens camera.
Let's say I want to do some street photography. On an interchangeable lens camera, I can go anywhere from 18mm to 300mm+ in focal length, and if I compensate with my position for the main subject, I can get very, very different framing. I don't have this latitude with a phone, where I can go from 13mm to 77mm, but where I have to sacrifice a lot of the little quality latitude I have outside of three discrete positions. And that's only in sunny situations. If you're inside or if it's a bit dark out, you can't use the telephoto lens, and you have a tiny native range of about 13-26mm on most phones, because the quality at longer focal lengths is so atrocious in less-than-stellar conditions that cropping a wide angle is better.
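(Editor's note: the framing argument comes down to the angle of view a focal length gives on a given sensor. A quick sketch of the standard rectilinear angle-of-view formula, assuming full-frame-equivalent focal lengths and a 36 mm sensor width; the numbers are illustrative, not measurements of any particular phone or lens.)

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view for a rectilinear lens (thin-lens approximation)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# The phone range mentioned above vs. a long telephoto,
# all as full-frame-equivalent focal lengths.
for f in (13, 26, 77, 300):
    print(f"{f:3d} mm -> {horizontal_fov_deg(f):5.1f} deg horizontal")
```

Going from 13 mm to 300 mm equivalent spans roughly 108° down to 7° of horizontal coverage, which is the "very, very different framing" latitude described above.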
As far as fake bokeh goes, it's great for simple portraits. However, if I want to use defocus more subtly or in a more complex way than sharp foreground/blurry background, tough luck. This isn't hypothetical: multiple layers of depth are a better way to get a sense of pop than just adding on more blur. Most of my all-time best photographs use mildly defocused elements in the foreground to create an organic frame for the subject, which is then followed by a slightly blurred background. This way, you get an organic framing of your subject without distracting from it, and an increased sense of depth.
You say "just get closer to your subject", but that removes a lot of compositions. Your position dictates the perspective you have, and your focal length dictates your framing options. So if I have to get really close to my subject, I just lost a lot of my creative power in how I want to frame my image!
Then there's instinct. When you are this limited in your freedom and creative choices, you can never fully develop your instincts.
Then there's editing. To get a usable image out of a phone camera, you need a massive amount of editing that simply cannot be done by a human. As a result, you can only get a fraction of the manual editing latitude of a bigger sensor camera. The lower actual resolution also hurts your ability to edit a lot.
So while I agree that Good Photos are only 10% hardware, using a phone camera means you also have to sacrifice a lot of the other 90%.
True - for any non-hobbyist, a smartphone is more than sufficient.
You take some pictures, do some editing, and post them to the family WhatsApp group or Instagram. Mission accomplished. All done on the same pocketable gadget. Let's see how the fancier Sony A7R IV or Hasselblad X1D handles that :)
You want bokeh, long exposure, HDR, etc.? Smartphones obviously won't compete in image sensor size, but they have powerful CPUs. Say hello to computational photography: https://vas3k.com/blog/computational_photography/
I completely agree that they need to go away, and that new lens tech needs to appear.
But I think that we are -just now- starting to get to the point where we can have truly excellent mirrorless cameras that actually can replace (and will surpass) DSLRs.
A lot of it has to do with on-chip computational photography. All the camera companies are coming up with custom image processing chips, and these are what makes the difference.
Lag time needs to be in the single- to mid-double-digit millisecond range. I don't remember the exact number, but the company I worked for did some research on the maximum allowable viewfinder lag, and I think it was around 50ms. That means you need to expose the chip, scan the image from it, do the A/D conversion, demosaic, gamma correct, color correct, denoise, correct vignetting, manage distortion, remove chromatic aberration (because cheap lenses), and do all the rest of the image processing, then display it on the screen, continuously, within 50ms per frame, all while not draining the battery. Pixel density also has a big impact on this performance: more pixels, more milliseconds, more battery drain.
Not a simple task. You can do tricks like scale down the image early on, and eschew some of the finer image processing tasks, but you still need that viewfinder image to be right on the money, and to the standards of a very picky photographer.
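(Editor's note: to put the 50ms figure in perspective, here is a toy latency budget. The stage names follow the comment above, but every per-stage timing is entirely made up for illustration; real numbers depend on the sensor and the processing ASIC.)

```python
# Hypothetical per-stage timings, in milliseconds, for one viewfinder frame.
# All values are illustrative, not measurements of any real camera.
STAGES_MS = {
    "exposure + readout": 16.0,
    "A/D conversion": 2.0,
    "demosaic": 8.0,
    "gamma + color correction": 4.0,
    "denoise": 10.0,
    "lens corrections (vignette, distortion, CA)": 6.0,
    "display": 3.0,
}

BUDGET_MS = 50.0

total = sum(STAGES_MS.values())
print(f"serial end-to-end latency: {total:.0f} ms (budget {BUDGET_MS:.0f} ms)")

# Pipelining the stages overlaps them, so throughput is limited by the
# slowest stage even though end-to-end latency stays at the serial total.
bottleneck = max(STAGES_MS.values())
print(f"pipelined throughput ceiling: {1000 / bottleneck:.0f} fps")
```

The point of the sketch: the 50ms budget bounds the whole serial chain, while the refresh rate of the viewfinder is bounded separately by the slowest pipeline stage.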
Sensor physical size also has a lot to do with image quality, bokeh, and what-have-you. A big issue with smartphone cameras (and point-and-shoot cameras) is that small sensors allow little control over depth of field (DoF). Apple has introduced artificial bokeh in software.
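(Editor's note: the DoF point can be made quantitative. For the same framing and f-number, a smaller sensor needs a proportionally shorter focal length, which deepens the depth of field roughly in proportion to the crop factor. A rough sketch using the standard thin-lens DoF approximation; the lens, crop factor, and circle-of-confusion values are conventional illustrative assumptions.)

```python
def dof_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Approximate total depth of field, valid when the subject distance
    is much shorter than the hyperfocal distance."""
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

subject = 2000.0  # 2 m portrait distance

# Same framing at f/2: full frame with a 50 mm lens vs. a phone-sized
# sensor with a ~6x crop factor, meaning an ~8.3 mm lens and a
# proportionally smaller circle of confusion. Numbers are illustrative.
ff = dof_mm(50, 2.0, subject, coc_mm=0.030)
phone = dof_mm(50 / 6, 2.0, subject, coc_mm=0.030 / 6)

print(f"full frame: {ff:.0f} mm total DoF")
print(f"phone:      {phone:.0f} mm total DoF")
```

Under these assumptions the phone's DoF comes out about 6x deeper at the same framing and f-number, which is why small sensors offer so little background separation without software tricks.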
Sensor size, and the length of the optical path, are what really drive lens size. They discuss that, in the article. Hasselblad and Phase One (if they still do that stuff) cameras have these monster sensors. They still need big lenses.
Yeah, I was vehemently anti-mirrorless until maybe just like 6 weeks ago. I tried to use every reason to stick with the latest Canon DSLR, but there are just too many advantages with the mirrorless R5.
And a lot of the advantages are the computational tech in the cameras that you speak of. The eye tracking and focus is absolutely incredible. Those who claim cellphone cameras will replace dedicated cameras have no idea of the advances taking place in cameras. Camera companies are not sitting idly by, waiting for cellphones to make them obsolete. They are making incredible strides. With the advantages of sensor size and lens physics, I'm not going to say cellphones will never catch up, but I find it hard to believe they will.
At that MP and that price, it's dipping its toe into medium format photography.
Ultimately, in the hardware sense, I think making medium format technology accessible to the masses is where Canon/Sony/Nikon are heading. It's going to be about striking a balance between camera size, sensor size, using software to fill in the compromises, and of course cost. The point is that while cellphones are catching up to cameras, cameras are also moving ahead.
My wife has the latest Pro Max. I have a mini. I also have one of the latest Canon mirrorlesses and pounds and pounds of lenses.
Give me my setup any day, but I'm looking at it for what I love to do - wildlife, sports, cityscapes. I don't care about portraits, I don't care about taking picture of food, I don't care about capturing life memories/events. Sure my use case is not what a regular consumer uses a camera for, but a cellphone can't come anywhere close to what I do.
My experience is similar, and I bet our experience is very common (among people who've tried both). My prosumer DSLR and medium-expensive lens is so much better than any phone for wildlife photography. For indoor low-light portraits, though, my Google Pixel 3 is actually better than my DSLR, and I say that as someone who very much expected the opposite.
In my experience even the cheapest (mirrorless) digital snapshot camera runs circles around any mobile phone's. I used to have a cheap Nikon camera, and its pictures still stand out compared to those of even high-end mobile phones (like my Samsung Galaxy S9), especially in low-light or imperfect lighting situations.
If you want to take great looking pictures, buy a camera. Don't settle for a mobile phone.
This feels a little like a decade-old maxim that needs a refresh. Most comparisons that I find of modern flagship handsets against even mid-range point-and-shoots show the dedicated cameras failing to dominate. Particularly with computational processing it gets really tough to honestly just wave a hand and scoff at the mobile as a primary choice for most casual photo situations (and some professional, as well!)
I too love my manual-operation DSLRs and mirrorless swappable cameras, and just bought a new lens for my go-to. But I've not found myself in years compelled to spend what I'd need to on a pocket camera that'd beat what my phone can do.
I agree. Light issues are like distance issues. It's about capturing data, which cellphones will never match given their sensor and lens limitations.
Sure, Apple and Samsung boast about how good their phones are in low-light situations these days. But people forget how awful they were in the past - cellphones were useless in low light even just a few years ago. They've made great strides, but the lower the light, the more clearly cameras win.
I read somewhere that photos from phones look great when viewed on phones. The moment you view them on a big screen, you immediately realize the picture is not as great as you thought. One could argue that the great screens and vivid colors of today's phones affect the perception of the image.
Absolutely. How you view photos absolutely comes into play. Digital? Print? Viewing them on a monitor, even a huge one, has different requirements than, say, taking a photo and making a wallpaper (as in a real wallpaper, on a real wall) out of it. I have terabytes of photos that look fine on screen, but printed as an 8x10 to hang up, they'd look like crap.
What really marks a pro photographer is composition and control of light.
My wife is a far better photographer than I am. We could be standing next to each other, shooting the same subject. When people look at my photos, they go “That’s great!”, but when they look at hers, they forget all about mine.
Different purposes of computational capabilities in modern cameras vs. phones place them reliably in different categories.
The former aim to computationally adjust settings and make it easier to capture raw scene data the way you intend. The “magic” happens on lower levels (e.g., tracking subject), and you generally feel in control of how it’s applied.
The latter focus on computationally interpreting raw data and generating a display-referred image. The “magic” happens throughout the process, and is mostly a black box (recall that case where iPhone camera replaced human head with a leaf on a group photo). If you use phone camera to capture raw image, phone software may still be able to assist to a degree, but challenges arising from sensor size and lens limitations would become more obvious.
(I would, however, watch out for the line becoming blurred if/when phones—hopefully not cameras!—start to capture computational raws. Potential improvements offered by applying extensive magic at unmosaiced raw signal level would for me be killed by that human-head-turned-to-leaf possibility, which would ruin phone photography for me.)
1. GCam, MotionCam, etc. were always able to output stacked raws. In the case of raws, "computational" is pretty much a marketing buzzword that means very little; it's just good old exposure fusion and stacking in the Bayer domain, with in-camera alignment robust enough to shoot a long stack handheld. They have pretty much the same exposure/ISO priority and controls you'd expect, to simplify capturing as much of the scene data as possible, unchanged. There's even experimental RAW video recording in MotionCam, however ridiculous that might sound.
2. RAW is also pretty much meaningless with modern mirrorless cameras, since it's not literally raw. They have also been doing a lot of stuff under the hood for years, including "computational" stuff like noise suppression or pixel shift/super-resolution in the Bayer domain.
So the line isn't just blurred, it doesn't exist at all. All this "computational" stuff is here precisely to capture more of the real data, not to draw something that never existed.
I don’t know what the things you reference in (1) are - some apps?
In (2), either you are speaking about consumer point-and-shoots, or you are speaking of features that can be turned off, or more typically have to be turned on. (Have you ever tried to use pixel shift, by the way? What kind of magic do you think does it do, except for capturing multiple pixel-shifted exposures?)
Mirrorless cameras I use get me a dumb array of raw light values, and I am free to apply (or not) noise reduction as I interpret the data. And if they were applying noise suppression, I would not put it anywhere in the same domain as computationally detecting and smoothening human skin or the like.
The line is there, and at this time it’s pretty sharp. If you don’t believe, try some of the latest “dumb” pro mirrorless cameras with the latest iPhone and compare their respective raw captures.
GCam is Google Camera, the app that started all this. Funny how everybody praises the iPhone when Apple has actually been very slow to notice this trend, and their camera apps really sucked for many years. I hate GCam specifically because it requires lots of fiddling with mods and ad-hoc setups if you want its full potential, but that doesn't change the fact that a "computational" camera outputs real scene data.
>computationally detecting and smoothening human skin or the like
Seems like there's some misunderstanding. I'm not talking about stuff like face detection or beautifying, computational bokeh, one-size-fits-all tonemapping etc. I'm talking about raw capture, which simply aligns and stacks many frames (discarding bogus ones) without demosaicing. There's really nothing more to it. Not much different than, say, Pentax' pixel shift. Similar process, same limitations and issues, no opaque magic that tries to think for you. I've used similar process for years with HDRMerge to shoot landscapes with my DSLRs.
>If you don’t believe, try some of the latest “dumb” pro mirrorless cameras with the latest iPhone and compare their respective raw captures.
I currently use the a7S III, but I fail to see how it's relevant. The difference is not due to them being dumb but because of having better sensors and optics, variable aperture, mechanical shutter etc. There's nothing smart or special in smartphone raw capture.
We seem to mostly agree. Here’s the line—on one side of it are phones, able to produce decent images despite hardware limitations by applying black-box magic when interpreting raw captures (but not faring that well when used to shoot raw); on the other side are cameras, employing advanced tech (and sometimes, indeed, magic) to assist you with capturing raw signal of much higher fidelity and yielding full control as to its further interpretation.
That’s what I meant by cameras being in a qualitatively different category, and why I wrote my comment arguing against the next iPhone somehow rendering mirrorless cameras useless.
(The only point where it might become more of a debate in my eyes is if phones start applying similar extensive processing that they do, but at raw levels. There could be genuine gains there—but also unwelcome unpredictable adulteration of DNG captures.)
>able to produce decent images despite hardware limitations by applying black-box magic when interpreting raw captures (but not faring that well when used to shoot raw)
Not really - the quality boost in recent smartphones comes exactly from Bayer-domain stacking, which is scene-referred capture. You can absolutely get precise photometric values from this process - as precise as your hardware and lighting allow, as long as you calibrate your device. Those apps contain profiles for specific sensors precisely for this reason, though you can do it yourself better, as many people do, just like with proper cameras. There's no black-box magic here: the output is the same ordinary DNG with the CFA photometric interpretation, but with a significantly lower noise floor and better dynamic range. Moreover, the pipeline is strictly separated into two parts: Bayer-domain operations, which give the quality boost, and post-demosaicing processing (auto white balance, tonemapping, skin smoothing, and other features aimed at point-and-shoot users - what you call magic).
I'm not claiming smartphones are somehow better, which would be silly. I'm objecting to a claim that apps do some opaque magic behind your back when writing RAWs. They don't, you can get entirely normal RAWs with all the quality benefits of stacking (and its limitations).
And to be honest, I feel like our discussion is creeping into no-true-Scotsman/audiophile territory a bit. There's nothing sacred or qualitatively different about single-shot capture; no modern normal-sized sensor gives you truly unprocessed internal bucket values, and even what counts as a single shot is often unclear (many sensors can capture multiple simultaneous exposures that require the same exposure fusion to get a result - is that single-shot or not?). What smartphones, and some cameras, do is essentially temporal denoising. It's a routine imaging technique employed by videographers and in technical imaging (which is as precise and scene-referred as you can get), and I'm not sure what's magical about it. Same with the temporal super-resolution that many cameras do.
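(Editor's note: the temporal-denoising claim is easy to check numerically. Averaging N aligned frames of a static scene cuts frame-independent noise by roughly sqrt(N). A toy sketch with synthetic single-channel "raw" frames; the scene, noise level, and frame counts are all made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

scene = rng.uniform(100, 1000, size=(64, 64))   # "true" photometric values
noise_sigma = 25.0                               # per-frame noise std (DN)

def capture(n_frames):
    """Simulate n noisy exposures of the same static scene and stack them
    by averaging, the way burst-stacking pipelines do before demosaicing."""
    frames = scene + rng.normal(0, noise_sigma, size=(n_frames, *scene.shape))
    return frames.mean(axis=0)

for n in (1, 4, 16):
    residual = capture(n) - scene
    print(f"{n:2d} frames -> residual noise std ~ {residual.std():.1f} DN")
```

The residual noise falls roughly as 25, 12.5, 6.25 DN for 1, 4, and 16 frames: the stacked output is still scene-referred data, just with a lower noise floor, which is the point being made above.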
Sure, some of the black box processing that phone camera does relies on known algorithms. The difference is that it also makes use of ML, which IMO either plays a significant role or will do so in near term.
> auto white balance, tonemapping, skin smoothing, other features
This, for example, I see as an opaque step with nondeterministic outcomes, which if you take away from phones and shoot raw reveals how far they are away from mirrorless cameras. (Cameras that shoot raw do not do this all that well, nor are they expected to.)
> There's nothing sacred or qualitatively different about single-shot capture
I stack exposures too. Whatever process works, works. The difference is that my flow is A -> N -> X, my camera’s built-in processing is A -> M -> Y, while ML-powered black-box compute in a phone does A -> ? -> R today or A -> ? -> Z tomorrow.
Due to its automatic nature, I see this processing as taking more of the creative part of photographer’s role, unlike compute in cameras where it is a capture-time assistant.
It will have to play a bigger and bigger role in how phone camera output improves in near future, as there is not that much space for enlarging the sensor or the lens; and as a user you either let it do the magic or you end up with a very mediocre camera.
It is not inherently better or worse, but it relates to photography in the sense that I appreciate it in kind of as Nvidia Canvas et al. relate to hand-created art: different tools, different goals.
It still seems we're talking about completely different things, or maybe you're trying to derail the conversation...
Both ML-based processing and tonemapping take place after demosaicing, to produce the final JPEG. RAW capture is just your typical Laplacian-pyramid stacking/exposure fusion with motion compensation and sensor-specific calibration. That's the main contributing factor that somewhat compensates for the sensor size, not some kind of ML processing. If you think ML is somehow involved here, I have zero idea where you're getting that from. The entire point of the pre-demosaicing pipeline is to robustly converge the captured stack into a neutral, scene-referred baseline with provably better SNR and precise photometric parameters. And that's what you get in the resulting DNG, not some neural tonemapped RGB image (which isn't even created at this point).
Perhaps you should look at the pipeline yourself, here's the source code for a typical "computational imaging" camera app that produces stacked RAWs, for example. [1] And here's what Google Camera does [2] [3] (you can ignore everything that comes after demosaicing). Google's particular method is also partially implemented for arbitrary RAWs as third-party software, so you can stack any burst capture from your camera with the same method smartphones do. [4] (there's much less benefit for normal sized sensors, for obvious reasons)
> It still seems we're talking about completely different things, or maybe you're trying to derail the conversation...
I am making literally the same point I made when replying to jhoechtl: phones and cameras are different beasts, and the next iPhone is unlikely to obsolete semi-pro mirrorless cameras. Judging by your tone, you are dead set on proving me wrong, but I can't see a meaningful argument.
Why do you keep coming back to ML involvement at the pre-demosaicing level? "Magic" computed raws are not something I claimed happens now, nor am I certain it ever will - I just speculated that we are logically headed there, and if we do get there, it may affect my view of phones vs. mirrorless.
So no, I'm not interested in a hypothetical typical "dumb" camera app or raw capture from a phone. I mean the computationally more advanced capture (like the one done by the bog-standard Camera app) that produces good images, which is what makes people think an iPhone can rival a mirrorless. If you take away (by shooting raw or using a dumber app) the semantic masks and all the magic it uses to obtain display-referred data for you, you just get a mediocre camera compared to a mirrorless, and that will remain the case for the next few iPhone releases.
Didn’t know this was debunked, my faith in iphoneography was restored a little!
Still, I occasionally notice artifacts of computational processing—most recently, wrong color of a building (it somehow decided it’s yellow, instead of reddish beige), other times some skin improvements, eye sharpening and so on. (I don’t shoot in raw on my phone though.)
> That means that you need to expose the chip, scan the image from it, do the A/D conversion, demosaic, gamma correct, color correct, denoise, vignette correct, distortion manage, remove chromatic aberration (because cheap lenses),
Can you explain what's the difference in gamma correction and color correction? New gamma correction uses perceptual mapping right (e.g. Dolby vision) - doesn't that already put you into a standard color space?
> then, display it on the screen, continuously, within 50ms per frame
That's 20 fps. Don't you need at least 24 to capture cinematic footage?
> That's 20 fps. Don't you need at least 24 to capture cinematic footage?
They're not talking about fps but latency: from light hitting the sensor to it being emitted again by the viewfinder, with all the processing in between.
Gamma is basically a nonlinear (power-law) light correction - a digital reflection of an originally analog process. Color correction is a much more intricate and subtle thing. Digital cameras have color bias (for example, they tend to like red, a lot, while film liked greens and blues). Color correction makes sure that the colors that come out can be accurately correlated to the ones that went in. You can also do some cool things by remapping into different color spaces (a lot of image processing is done by converting the data to a new color space and playing with the channels).
We’re talking about the viewfinder, so it is latency. It is also FPS. The viewfinder can be slightly relaxed, but pros will want the viewfinder to be highly representative of the final outcome. Latency is critical, because they use the viewfinder to decide when to snap the image, or to set the scene.
The final output can be slightly more relaxed, with post processing in a buffer. Video makes that a lot more intense.
That’s why you have a limit to how many frames you can have in a burst. They get fed into a buffer, for post processing.
Another big advance will be true electronic shutters. I’m not sure we’re there yet.
Even now it seems like the mirrorless bodies are only just reaching parity. They’ll get there, but for now they come with a high price tag and real downsides, while the promised advantages are real but more modest than once thought.
An R5 plus RF 2.8 24-70 weighs just about the same as a 5D IV with the same in EF.
In-body stabilization was promised to bring 6+ stops, but for supertelephotos I hear it’s delivering around 1 stop.
This is likely a difference between claimed performance on unstabilized lenses and stabilized ones. One additional stop on an already-stabilized lens is quite impressive!
It’s the march of progress, and I’m glad for every stop I can get. The point is mirrorless is delivering incremental rather than revolutionary benefit. The result is that it’s a gradual succession as the weaknesses are shored up, instead of a game-changing rout.
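(Editor's note: "stops" of stabilization compound as powers of two against the classic 1/focal-length handholding rule of thumb, so the gap between a claimed 6 stops and a delivered 1 stop is large. A quick sketch; the 500 mm focal length is a hypothetical supertelephoto.)

```python
def slowest_handheld_shutter(focal_mm, stabilization_stops):
    """Slowest usable shutter speed in seconds, taking the classic
    1/focal-length rule of thumb as the unstabilized baseline."""
    return (1.0 / focal_mm) * 2 ** stabilization_stops

# Hypothetical 500 mm supertelephoto
for stops in (0, 1, 6):
    t = slowest_handheld_shutter(500, stops)
    print(f"{stops} stops -> ~1/{round(1 / t)} s")
```

One stop only doubles the usable shutter time (1/500 s to 1/250 s), while six stops would multiply it by 64 (to roughly 1/8 s) - hence "incremental rather than revolutionary".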
I would add that lenses will never be the same, in that the latest mirrorless full-frame lenses, even the higher-end ones (which cost $1000-2500 and up), are increasingly made with a greater proportion of plastic than the DSLR lenses (Canon EF, Nikon F). Even the mount is attached with screws that first go through a plastic frame, which at best contains a partially metal assembly where it used to be fully metal.
I'm not familiar with how well modern plastics age, but I suspect not as well as the old metal lenses (1950s onwards), which are still usable. So it's more of a planned-obsolescence trend, in my opinion.
And there's the increasing reliance on digital post-processing to correct optical aberrations like distortion and chromatic aberration. Some Micro Four Thirds lenses' raw files are downright unusable without such corrections.
I don’t understand people’s obsession with plastic vs metal lenses. I have 10+ year old plastic and metal lenses and all are fine. I once dropped a plastic lens and it rolled about 20 feet down a steep gravel and stone driveway. No issues other than some scuffs on the exterior, and I’ve used it for years and years since.
For truly old lenses - 50 years or more like you mention - mildew and fogging seem more likely to be a problem than plastic aging. Plus, lenses of that age are often of limited real-world use anyway given the developments in cameras.
> I don’t understand people’s obsession with plastic vs metal lenses.
It's an issue with perceived quality. So much of the inner workings of things have been replaced with plastic, usually not for the better. When you spend as much money on lenses these days, you don't expect them to cheap out on a small ring.
Also, not all plastic is made equal. I have two lenses made in the last 20 years that have plastic that is starting to deteriorate and become sticky. It's a real shame as those lenses still work and do their job.
Plastic deterioration like this has ruined so many things I liked that are otherwise well designed. I've had it happen with computer mice, umbrellas, knives, camera tripods, shoes, flashlights. I don't know the chemistry behind why, but it's a terrible feeling to have something either disintegrate, or become terribly sticky, or turn into putty.
The lens I use the most is a 70-year-old Leica lens; it's all metal and glass and looks as if it came out of the factory yesterday. It still holds up on an M10 Monochrom sensor (40MP, without a Bayer array).
Plastic will always be inferior to metal no matter what, both in feel and durability. The only plastic lens I have is also the only lens with helicoid play... and it's not even a third the age of the Leica.
Plastic will also always be superior to metal in weight, and functionality is all over the map: a good lens (for one's purpose) dwarfs enclosure material.
It wouldn't be hard to pair you with someone mad at their old-school lens for helicoid play who has no complaints about their plastic. Especially in the last decade, it's not a useful filter, just one of many factors to consider in selecting gear.
I think the issue is, how well the plastics age. Some over time get brittle and less "flexible", and thus prone to fatigue.
I had a Minolta SLR as a teen (read: late 70s to early 80s). I gave it away 25+ years later. I'm not sure about the internals, but there were no fears about plastic fatigue. I believe it might still be in use.
I think it’s pretty clear at this point that well made plastic lenses hold up for at least 10-15 years. In my view, the vast majority of use of a typical lens is going to be in the first 10 years or so. After that, if the lenses haven’t been scratched, dropped in the ocean, etc., use is probably going to drop off anyway as better cameras and lenses come along.
Yes! I have one, and wide open it outresolves the 24MP sensor of my A7 II. It's so sharp that I have to turn down the default sharpening settings in Lightroom, because otherwise it looks unnaturally sharp.
The reason it's so good is that it doesn't have in-lens stabilization. It took 15 years to make an IS lens with similar quality to the legendary 80-200!
Metal breaks too. It entirely depends on how it's engineered. Cheapest plastic sucks a lot more ass than the cheapest metal, but when engineered for permanence, plastics can be superb materials.
I wrote that comment after the plastic buckles on my messenger bag just broke. Perfectly fine, hardly used, but it just broke because it was ten years old. Plastic sucks.
If you're interested in great-feeling, great-IQ lenses with a small footprint, look no further than the Zeiss Loxia line, the Sony 24mm F2.8 G / 40mm F2.5 G / 50mm F2.5 G, the new Sigma Contemporary line for both E and L mount, and Voigtländer lenses for both E and M mount. All of them have metal bodies, and some are even weather-sealed and have AF.
There are some Chinese lenses as well that seem to be very well built.
That's one of the reasons I mentioned those three.
I think we have to distinguish between the plastic that feels cheap and might crack even when exposed to cold, like many of the cheap DSLR lenses, and the "new" kind of plastic that is sturdier and more resistant to drops or scratches. The FE 24-105 is the latter, and it doesn't feel any lower quality than more expensive top zooms.
Plastic has its positives as well: it won't scratch as easily as metal and, especially for zooms, it's lighter. Imagine a zoom made of metal like a Voigtländer lens.
Soft plastics on my Nikon are an issue. I have noticed that they tend to deteriorate quickly on beach holidays. I suspect it's a combination of sun, sunscreen residue on the hands, and the salt in the air.
You mean the grip? Nikon rubber grips are notorious for falling apart. Which is part of why I sold all my digital Nikon gear when they were tone deaf and peddled DSLRs and the dumbed-down 1 system. Went with m43 instead. It's not an open standard, but it's considerably more open than others.
This is complete speculation on my part, but doesn't plastic tend to deform more than metal in harsh temperatures? Won't the lack of a metal frame to precisely position all the optical elements cause more aberrations over time?
There are a ton of games one can play. For example, one can mix something which expands by 1%, with something which contracts by 1% (which can also mean expanding in the opposite direction). Modern lenses do very well across temperatures.
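That mixing trick can be made concrete with a little arithmetic: if two spacer elements in series expand in opposite directions, their length changes can cancel. The materials and numbers below are invented for illustration, not taken from any real lens design:

```python
# Sketch of athermal design: two spacers in series whose thermal
# expansions cancel. Lengths in mm, CTEs (coefficients of thermal
# expansion) in 1/K. All values are made up for illustration.

def length_change(length_mm, cte_per_k, delta_t):
    """Linear expansion: dL = L * alpha * dT."""
    return length_mm * cte_per_k * delta_t

spacer_a = (8.0, +70e-6)   # hypothetical polymer that grows when heated
spacer_b = (7.0, -80e-6)   # hypothetical composite with negative CTE

dt = 30.0  # a 30 K temperature swing
total = length_change(*spacer_a, dt) + length_change(*spacer_b, dt)
print(f"net change over {dt} K: {total * 1000:.2f} um")  # cancels to ~0
```

The individual spacers each move by about 17 µm here, but the stack as a whole stays put, which is the point of the "games" described above.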
Over time? At least based on what I've seen, older plastic lenses have not aged well. Of course, that means more sold products. A new lens will blow away an older lens if the older one is plastic and slightly wobbly. Planned obsolescence.
Not at all. This is actually part of why telephoto lenses use a lot of plastic: you can formulate a plastic composite with ~0 thermal expansion, which is very hard to do with a metal, meaning better IQ.
Plastic is used to save weight. I have both new RF lenses and the old EF lenses. The RFs are better in every way like for like and are much lighter. Makes a ton of difference when carrying around.
Use Fujifilm X, Leica DG or Zeiss lenses if you're discontent with the build quality. Sigma also has a very good quality lens lineup. I only need two, a standard and a 28-30 mm equivalent. Could also do with the wider one. On the Ricoh GRD I never needed anything else.
I can’t quite RTFA because I have an ad blocker, but I will say — Full-frame mirrorless is a revolution in cheap lenses for anyone serious about artistic photography. Which I hope would include most cinematographers.
Last summer I bought a 40-year-old lens in so-so condition on Etsy for $20 and have made wonderful photos with it. I plan to spend more on old glass.
This is only possible because the mirrorless systems allow adapting these lenses where most DSLR systems did not. And a lot of the old glass makes nicer pictures — not sharper, but nicer.
Just search eBay for “canon FD lens Japan” and you will find the tip of this wonderful iceberg.
"And a lot of the old glass makes nicer pictures — not sharper, but nicer."
I hear this a lot, and every time I grow more skeptical. On every conceivable technical measure, new lenses are superior to older ones. The single advantage of old lenses is, as you say, they are cheap. (I got a metal Nikon 50mm f/1.4, manual focus, for less than $100 on eBay!)
Unless you can point to some objective property of older lenses in general that assists them in taking better pictures, I'm forced to conclude that this claim is just a photographic superstition.
For certain lenses, like the old Nikon 105 2.5, you might be able to do this! But for old lenses in general I think not.
(Another thought: maybe you find the extra distortion and tint of old lenses pleasing?)
Well in the case of my cheap lens, yes I’m saying that the “flaws” result in an aesthetically pleasing image. Plus the difference in how you shoot when the camera isn’t doing the work for you.
But in general I’m unconvinced that the Cult of Sharpness is giving us creative pictures that are more fun or interesting to look at. If the tools really are that much better, shouldn’t we be seeing better photography than ten years ago?
As for objective properties: with my 100-200mm f/?? I could easily club a mugger to death, so… self-defense advantage.
Purely in terms of image quality, we are indeed seeing better photography than 10 years ago (controlling for sensor size).
I also think comments about "the camera [...] doing the work for you" or about sharp lenses not necessarily leading to fun or interesting pictures sort of miss the point. If you're a terrible piano player, you're going to be just as bad on a $100K Steinway grand as you are on a used $5K upright. But there are difficult and beautiful things you can do on the Steinway that are impossible, or at least much more difficult, on the upright. Example: any piece requiring rapid soft passages, which is facilitated by the double escapement mechanism on the grand. There are amazing YouTube videos of pros playing virtuoso works on beaten-up street pianos, but I don't think any of them would deny their job would be easier on a well-maintained grand.
The same is true for photography.
Also, let's face up to the fact that judging additional distortion to be aesthetically pleasing is just nostalgia tripping. Back in the bad old days of film, lens designers worked like dogs to minimize it. (Same with film companies and grain – it was undesirable, not retro!) I'm not saying this means your aesthetic judgement is wrong or anything like that, just ... that's clearly the mechanism behind it. Someone without that cultural baggage is probably going to prefer the more correctly proportioned photo, all else being equal.
Sorry for being a little prickly here. But new lenses really are better! They're engineering marvels.
Now – for most people, taking photos that are going to be compressed (lossy) and shared online, and viewed on laptop or cell phone screens – can you really tell the difference between an old metal Nikon and a new Z series lens? Especially after distortion and tint correction in Photoshop? I think that is a much better question, and a much better argument in favor of eBaying old lenses. But again, it's an issue of choosing the right tool for the job. For big prints, you probably want the sharper, less distorted, less tinted lens. (But who among us routinely makes big prints? Not I.)
Technically, modern lenses are way more accurate than vintage ones, but photography is only partly about technicalities. Lenses are so sharp nowadays that you can spend hundreds of dollars on "black pro mist" filters, which introduce diffusion into the image. Seems silly, but people like the effect. Probably because most phone cameras today can produce sharper pictures than 40-year-old SLR lenses, and people have gotten used to it.
I think you're onto something at the end there! The commenter was almost certainly talking about subjectively "nicer" photos that have been made more interesting by the use of flawed lenses.
And this isn't an uncommon idea either. It is the idea behind Lensbaby, Lomography, and the slight resurgence of Polaroid.
My (superficial) understanding is something like: even with modern lens technology, certain design tradeoffs have to be made, e.g. between distortion and other aspects of lens performance. I have heard it said that accepting some distortion makes lenses that take pictures that look closer to those taken by "old school" film photography lenses, and also that Fujifilm deliberately favors relatively more distortion in their lens designs for exactly this reason. I think my original source for this is Thom Hogan's blog – I have not checked the technical specs to verify it (which you can easily do if you're curious by browsing lens review sites). But if it's true, maybe it's useful information to someone here!
(I tried reading the article without an ad blocker. When I was nearing the end, a full screen modal popped up. Closing it caused the browser to instantly return to the top of the article. Sigh...)
That's interesting about the old glass. Personally I'm into portable superzooms lately, because of the usual distance to fauna in my area, and also it's fun to ID and capture the big passenger jets at sunset. But I'd really like to explore the kinds of lenses you mentioned someday. You can definitely see a difference with the lens changes.
A few years back I bought a middle of the road Lumix micro four thirds camera instead of DSLR. The reason I went with the MFT camera is it could shoot 4k video without the sensor overheating. At that time, DSLRs would overheat in 10-15 minutes and 4k capable DSLR bodies were 2x more expensive than the MFT Lumix.
I really liked the mirrorless better than the last DSLR I owned. MFT sensors can't deliver as much bokeh as a 35mm DSLR, but the trade off was the body and lenses were a lot lighter and more compact. I'll often grab the Lumix when I would have left the bulkier DSLR home... and it really is a better camera than any mobile phone I've used.
Compared to the phone (Samsung Galaxy S20), shots are much more color accurate, it's easier to focus on exactly what you want, and you get better lines in wider angle shots. That said, I'm usually pretty happy with the phone, but I love being able to change lenses and the precision of the camera (compared to the phone).
Note that several camera makers make EF mount cameras and third-party lens makers make EF mount lenses, like Yongnuo, Samyang, Schneider, Sigma, Tamron, Tokina, Cosina and Carl Zeiss.
Be very careful investing in lenses that have a short flange focal distance. In other words, where the distance from the mount to the sensor is short. We went big on Sony E mount primes for shooting video. When we moved to Blackmagic Design Ursa and pocket cine cameras with Canon mounts we couldn’t adapt the lenses and had to get rid of them. The general point here is that as you grow as a shooter your needs will change and you’ll be far happier if you can adapt your old lenses somehow.
You can, for example, put an EF mount lens on a Sony E mount camera, but not the other way around.
What my media dept has done is to standardize on Canon EF mount lenses. We now have a great collection of L series and cine lenses that will work on everything from our 5Ds to BMD Ursa 12K to our pocket cine cameras.
Don’t get vendor locked in and make sure you retain a few options.
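The one-way adaptability mentioned above (EF onto E, but not the reverse) is just flange-distance arithmetic: a passive adapter tube needs positive thickness, so the lens's native flange distance must exceed the body's. A quick sketch using published flange focal distances:

```python
# Rule of thumb: a lens adapts to a body with a simple passive tube only
# when the lens mount's flange focal distance is longer than the body's,
# leaving physical room for the adapter.

FLANGE_MM = {  # published flange focal distances, in mm
    "Canon EF": 44.0,
    "Nikon F": 46.5,
    "Sony E": 18.0,
    "Canon RF": 20.0,
    "Nikon Z": 16.0,
    "Micro Four Thirds": 19.25,
}

def adapter_room(lens_mount, body_mount):
    """Thickness available for a passive adapter (negative = impossible)."""
    return FLANGE_MM[lens_mount] - FLANGE_MM[body_mount]

print(adapter_room("Canon EF", "Sony E"))   # 26.0 mm of room: adaptable
print(adapter_room("Sony E", "Canon EF"))   # -26.0: not adaptable
```

This is also why long-flange SLR mounts like EF and F adapt to nearly everything mirrorless, which is the standardization strategy described in the comment above.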
>Note that several camera makers make EF mount cameras and third party lens makers make EF mount lenses, like Yongnuo, Samyang, Schneider, Sigma, Tamron, Tokina, Cosina and Carl Zeiss.
They are being discontinued at these manufacturers as well. I was looking into validating a range of EF mount lenses for a specific application a couple years ago and for every vendor we talked to, they indicated that some of their lenses were either newly discontinued or soon to be discontinued. No matter what the lens manufacturers are saying, I'm confident that they will slowly be discontinuing their DSLR lines over the coming few years.
Do the EF lenses retain the autofocus capability on cine cameras or do you even need it in cinematography?
The only drawback of a longer flange distance is more complicated designs for wide-angle lenses. That didn't stop Canon from making a rectilinear 14 mm EF lens.
That's an annoying blog. I was half way through reading when a full page "enter your email address" pop-up appeared. I closed it, and was sent to the top of the page again. (Samsung S20, Chrome).
I recommend installing the Samsung Internet browser and using Adblock Fast to block ads; the browser has built-in support for ad blocking via Adblock Fast. And if that doesn't work, most of the time I just use reading mode to bypass the paywalls and ads.
Not dead so much as not being made anymore in the same quantities as before. I still shoot timelapses with my Nikon D700 and use it as a backup camera for some event photography. I often use my old Nikon 60mm f/2.8 macro lens I picked up years ago for a song.
I figure many people will be able to purchase amazing lenses (maybe not the legendary ones, those will always be collectible) for a very good price as more of the (admittedly shrinking) photographers of the world switch systems and get tired of adapting their old lenses.
Not sure about the lenses in the title - they state that most lenses will remain compatible with newer mounts and that some vintage lenses are even increasing in popularity. So, this is like it has always been: buy expensive lenses to keep over 30+ years; buy cameras every 5-10 years to get the new features. I have 15k worth of lenses and they stick with me across camera upgrades. I am not nearly as attached to my cameras.
Camera bodies have pretty much become annual upgrades like phones. There's nothing wrong with the one you have, but the new one just has that "one more thing" to them.
Not true at all. All camera makers take years (sometimes 5 or more) to update their sensors to newer versions.
And while feature updates like better EVFs or better video specs are a bit more frequent, they aren't yearly. Probably closer to 3 years for most brands other than Sony.
I wasn't meaning to imply each manufacturer releases a new camera per year. It's more of a leapfrogging, with each vendor releasing something better than the other vendors' previous releases.
It is less about image quality but more about what you can do with the cameras. Smaller? 60fps? 120fps? 360°? AI? 40MP? 100MP? All of this _can_ be an argument to upgrade, even if image quality has improved zero.
The A7r2 can do 42MP. It's not really practical to have any more megapixels.
The rest of the things are useless to 99.9% of people, so it's just a gimmick or convenience update. That's where we're at for cameras.
Phones, however, still see big image quality improvements, and you could take pictures that were previously impossible as they added and improved supplementary cameras.
We've already seen cameras running on Android. Eventually, we'll see cameras with all of the artificial image manipulation that you are seeing in phones. I'm not so sure that's a good thing, but it will be done. At that point, it will be this annual update cycle.
The A7r2 actually does run Android. You can even sideload APKs. It does some level of image manipulation, if you ask it to. The issue is that photographers want to be in control of their image, so they generally use RAW files. Because of that, the artificial image manipulation is left to your computer for best results (I often use the new AI features of Lightroom; they're great).
Sooner or later, I think DSLR is fundamentally "dead".
To understand my PoV, perhaps we need to go back to the 1940s/1950s.
Practically speaking, the most-used cameras at that time were rangefinders (including Leicas), which were mirrorless. A rangefinder is compact, but limited to shorter primes, like 50mm and wider.
SLRs don't have such a limitation. You want longer lenses like 200mm or 300mm? No problem. Or a zoom lens? Why not. In the 1960s, pro journalists started to adopt SLRs and there was a decline in rangefinder usage.
Now we are living in the digital world. Why do we still have to keep extra space for mirror box, which makes the camera body bigger? After all, there's no issue using tele or zoom lenses on mirrorless.
Please don't understand this as "yet another anti DSLR post". No. My main camera is a Nikon Df, equipped with some AI and pre-AI lenses. I'm happy with the result, it suits my shooting style. That said, if I have some cash for another camera system, it's obviously going to be a mirrorless. Probably medium format mirrorless :D
I was vehemently anti-mirrorless up until maybe a few months ago.
I went on a big wildlife trip and realized I had to finally upgrade my Canon 70D. The first thought was to jump to a 90D, but I had to review what the mirrorless options were.
I tried to find every reason in the world to go with the much cheaper 90D than mirrorless. In the end, I couldn't. For my primary use cases (wildlife, night cityscapes, and sports), the Canon R5 has so many game changing advantages, I couldn't ignore it. I lose nothing with the lens adapters, and they mostly solved or mitigated the issues with the first gen.
Battery life when using the EVF (vs viewfinder in SLR) seems to be the only downside currently with Canon (I think Sony's is better). Hopefully over time this will improve.
That's really the only disadvantage I came up with. I was also really nervous about lag and blackouts, but once I actually used one, even in the older R, I realized it was a non-issue.
It's still an incredibly strange concept for me, and admittedly I'm not sure I'll ever be fully comfortable with it. However from a practical standpoint, there's really no issue.
Excuse me for this very uninformed question, but (D)SLR, as far as I can tell, is about using a prism to split light, to allow having a viewfinder.
Why does a digital camera need a prism for it? Can't the digital sensor directly show the image on a screen (or in a viewfinder; do photographers still look through a viewfinder)?
Or does DSLR stand here for any camera with exchangeable lenses? Does the article mean professional cameras with exchangeable lenses are dead?
* Until fairly recently, cameras couldn't focus well using just the sensor. Simple sensor-based focusing is based on contrast, and the problem with the method is that the camera can't tell whether it's as focused as it can get, or in which direction focus should be changed. So the camera constantly "hunts". You can see that happening a lot on YouTube with people trying to show stuff from close up (e.g., Bosnian Bill and AvE very often have this problem).
* Phase focusing fixes this issue, but initially couldn't be incorporated into the sensor. The mirror in DSLRs bounces light into the better focus detectors, which are located on the bottom of the camera. Here's an illustration of the system: https://photographylife.com/how-phase-detection-autofocus-wo...
* Sensors get hot when they're constantly lit, and cameras refuse to keep taking pictures when the sensor overheats. A DSLR's sensor can cool off while the mirror is blocking it.
* Sometimes the viewfinder is the better way, eg, if the LCD can't overpower the sun. Making a good digital viewfinder is harder than just optics.
* An optical viewfinder has no lag and almost zero battery usage
But yes, mirrorless is definitely the future. The DSLR was a technical compromise, and in fact mirrorless has advantages of its own:
* As this article says, DSLRs require a large separation between the sensor and lens to make room for the mirror, and that makes some kinds of lenses hard to design. Mirrorless cameras can use whatever distance works best.
* Also, the mirror is a pretty delicate, very quickly moving piece that creates vibration, and this needs taking into account. It's common to incorporate a delay to allow for vibration to settle, or to use a special mode where the mirror is permanently kept up for the cases in which vibration needs to be minimized. Mirrorless cameras don't have a moving mirror, so this is not an issue.
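The contrast-detect "hunting" from the first bullet above can be sketched as a naive hill climb: the camera only sees scalar contrast readings, so it has to step past the peak before it knows to reverse and refine. A toy model (the contrast function and all numbers are invented):

```python
# Toy model of contrast-detect AF: contrast peaks at the in-focus
# position, but the camera only observes scalar contrast values, so it
# must step, compare, and reverse on a drop -- overshooting the peak.

def contrast(pos, focus=50.0):
    """Synthetic contrast metric that peaks at the true focus position."""
    return 1.0 / (1.0 + (pos - focus) ** 2)

def hunt(start=20.0, step=4.0):
    pos, direction = start, +1
    history = [pos]
    while step > 0.25:
        nxt = pos + direction * step
        if contrast(nxt) > contrast(pos):
            pos = nxt                               # improved: keep going
        else:
            direction, step = -direction, step / 2  # overshot: reverse, refine
        history.append(pos)
    return pos, history

final, path = hunt()
print(f"settled at {final:.2f} after {len(path)} probes")
```

Phase detection avoids this whole dance because a single measurement tells the camera both how far off focus is and in which direction, which is why DSLRs routed light to dedicated phase sensors.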
>Excuse me for this very uninformed question, but (D)SLR, as far as I can tell, is about using a prism to split light, to allow having a viewfinder.
They have a mirror that reflects light to the viewfinder. The mirror flips out of the way to let the light hit the sensor.
What you describe is called mirrorless, and it is the revolution the article is about. They've been around for a while now, but the combination of sensor, battery, and screen improvements has finally reached the point where the disadvantages are mostly gone.
The film SLR was the predecessor of DSLRs. Back then, you basically had two choices - preview through a different lens than you shoot with (twin-lens reflex, rangefinder), or preview through the same lens that you shoot with (single-lens reflex). In an SLR design, you need a way to swap between the light path to the film versus the light path to the viewfinder, and a mirror and prism is the most practical way to do so.
Yes, a DSLR can simply show in real time what is being captured on the sensor - this is called "live view" and was introduced some years after the first DSLR, as it's not a critical feature. But live view drains a lot of battery. It also had terribly slow and inaccurate autofocus using contrast detection, but was improved using phase detection (e.g. Canon DPAF). Another consideration is that live view takes a second or two to start up, whereas most DSLRs will let you pick up a camera from idle and shoot through the viewfinder in a fraction of a second.
> any camera with exchangeable lenses
These are called interchangeable-lens cameras (ILC), a category which has significant overlap with but isn't identical to DSLRs.
> professional cameras with exchangeable lenses are dead?
Mirrorless cameras with interchangeable lenses (MILC) are quickly becoming the new professional gear.
A screen on the back of a camera is lit up by the environment, so it needs to be extremely bright to work properly in direct sunlight while also having zero backlight bleed in near darkness. Viewfinders solve this issue.
The problem with live preview from a digital sensor is that the sensor needs to be able to stream the entire image constantly. Early sensors were not designed for this, and would often overheat quite quickly. It also needs to update often enough to not be jittery, and in low light conditions early digital sensors just weren't sensitive enough.
You also need an extremely high-resolution viewfinder screen to provide enough detail for things like focus. Those didn't exist yet either.
One thing to keep in mind is that it was definitely possible: consumer cameras did it quite early on! But the quality was quite poor: good enough for holiday snapshots, but not nearly good enough for professional photographers. Sticking to the DSLR approach was a no-brainer for them.
Also, a (D)SLR doesn't split light: a mirror in front of the sensor reflects the light up into the viewfinder. When you take an image the mirror flips up, allowing light to fall onto the sensor.
What you are describing is called a "mirrorless" camera, which is what the article says will be the future. They also have interchangeable lenses, but there are fewer constraints on the design since they don't have to interface with another optical system, just the digital sensor, which is the main point the article is trying to make.
> can't the digital sensor directly show the image on screen
A bunch of them do - https://en.wikipedia.org/wiki/Electronic_viewfinder - but a screen or viewfinder isn't going to have the same kind of resolution as your eyes when you're trying to judge focus/DoF/etc.
A digital camera doesn’t need a prism…anymore. When DSLRs first came to market two decades ago, they didn’t do video. Even if they had, the batteries at the time wouldn’t have held up for long shoots for weddings, sporting events, etc., if they were showing video on the screen.
As the article states, the move to mirrorless cameras is now pretty much a given. Canon announced their latest DSLR will be the last.
Photographers definitely still need viewfinders, but only recently did the pixel densities of digital ones reach the point where they are an acceptable replacement for looking directly through the lens. This also means mirrorless cameras can add features to the viewfinder like focus peaking (highlighting the areas that are focused).
>Or does DSLR stand here for any camera with exchangeable lenses? Does the article mean professional cameras with exchangeable lenses are dead?
No. By removing the mirror-prism system as mirrorless cameras with interchangeable lenses have, you can bring the lens a lot closer to the sensor which opens up design choices that couldn't happen during the SLR or DSLR era. That's exactly what the article is describing.
The article just does a fairly subtle job of explaining it:
>These new lenses and lens mounts differ in one special way from previous generations. They aren’t limited by the design of the camera. The distance between the sensor and lens can be whatever the manufacturer wants it to be for the best results. With that limitation out of the way, lenses can embrace the advantages of rangefinder lenses, without their disadvantages.
That being said, it's going to be a while before I declare DSLRs dead, because the great majority of people will be extraordinarily well served by a modern SLR, and I buy all my camera equipment used anyway.
I used to be an avid photographer, but then got drowned in an overkill of technology. I preferred the look of AA-filter sensors.
Modern lenses and modern cameras are so tack sharp that to me photos feel like they are beyond reality. Too much of everything. I have difficulty connecting to the image itself. It becomes much more of a craft you learn than an art you explore: keeping a piece of the world to carry with you, to revisit, to remember, to cherish.
Only Ricoh has managed a lineup that keeps me interested. It seems if the photo bug bites again, I'll be happy with old gear.
I'm a rather frequent photographer myself, but I shoot solely 35mm film and Fujifilm instant photos. There's still a thriving market for film and there are plenty of labs around to develop your films at. You can pick up various old cameras and lenses for fairly small amounts of money on eBay and similar (or you can spend a ridiculous amount, your call haha).
Film has a certain quality to it that I adore. You don't get insanely crisp images like you describe, you don't get to edit your photos, you just get what you pointed your camera at. And you get something meaningful and physical that you can pick up and hold when you're done. It's wonderful.
> you don't get to edit your photos, you just get what you pointed your camera at
There's still the whole aspect of developing and printing film, which itself can be an artform and very much change the end result. See for example how much the analog dodging and burning were used to change the tonality of an image, drawing attention to some aspects and reducing others. These final results are typically what we think of when we see classic film photography.
I have three cameras that I use - a Canon R5, Canon FT QL, and a Zenza Bronica S2A.
The first one is a joy for many reasons, and I use it for the things the latter two can't do very well - action and very low light. However, the latter two offer an incredible shooting experience. They're not forgiving, and film gives you a hazy look that is still poorly reproduced by software.
One of the nicer parts of mirrorless cameras the article touches on briefly is that the reduced flange distance makes it easy to adapt older lenses with just a cheap tube. Combine that with focus peaking in the viewfinder and in-body image stabilization and you have a great platform for messing around with cheaper vintage glass like the classic Helios lenses. I'm hoping to get a Sony A7 II as it has all these features, but it is hard to stomach watching the price go up over time on a 7-year-old camera!
I have an A7ii, and I did the same with cheap vintage glass. It is quite amazing! Eventually I upgraded to older Canon EF lenses with a Metabones adapter I found on local classifieds for $100. The AF isn't amazing, but it's good.
The almost exclusive reason I never bought a DSLR is that nobody ever saw the reason to add a GPS chip to encode location data into the EXIF of the pictures being taken. Sure, you could sometimes buy another device and plug it into the already large device you have to lug around to get that metadata. I just never wanted to deal with all that for something so fundamental to digital photography in my mind. I could see the trade-off of the larger form factor for the lenses, but I was not going to lug around and keep plugging in a GPS module.
To me, going back 20+ years now, the metadata is as important as the images themselves. I just got tired of looking into it every few years, getting disappointed, and trying to remember to go look again the next year. It seemed like phone cameras could deliver more of that baseline of what digital images should be, even underneath lesser lenses. When I look back on my images I don't want to guess where they were taken or hope that the timestamps were right. Using a DSLR was too risky for me: too much information about an image was lost, like the most critical bits, coordinates and accurate timestamps.
Canon 6D was one of the few DSLRs that had built-in GPS support, but the feature wasn’t that successful since most people preferred to keep it off to save battery life.
100% support this. It's unbelievable that my 100k+ DSLR photos over 15 years are not connected to their locations when it would have been so easy. (I would have gladly carried a multi-pound extra battery to support always-on GPS.)
For starters, consult the list at https://en.wikipedia.org/wiki/List_of_cameras_which_provide_... . Most photographers I've talked to have the opposite stance to yours - they care about grouping photos by event (vacation, portrait session, hiking, etc.) but don't care about the exact street. As you are in the minority, I can see why cameras are not reliably released with geotagging features.
I've used several cameras with geotagging capabilities and I'd like to describe their quirks:
* Canon EOS 6D: Built-in GPS receiver. Drains a lot of battery, especially if you set the update interval very short (1 second, 10, 60, etc.). Takes unreasonably long to first acquire location (easily 2~10 minutes). The weak receiver struggles with urban canyons and airplane windows - phones have much better receivers. Sometimes calculates completely wrong coordinates that are ~10 km away. As a result, some photos don't have geotags (slow acquisition) and some are wrong. But the extra transfer step is mildly annoying.
* Canon EOS M6: Make sure camera has the correct time set (can't be off by hours and days). Run the "Canon Camera Connect" phone app and enable continuous location logging. After shooting (e.g. end of day), use Wi-Fi to connect phone and camera, and use the app to apply geotags onto the photos after the fact. My phone is extremely accurate (GPS+GLONASS) and supports indoor geolocation through Wi-Fi, so essentially 100% of photos get tagged and mistagging is rare.
* Canon EOS RP: Use Bluetooth to pair the camera with the phone. Must run the "Canon Camera Connect" phone app in the background while shooting. Periodically check that the phone OS didn't evict the app due to inactivity, quite annoyingly. Every time you turn on the camera, wait 2~10 seconds for it to connect to the phone, and then you'll get geolocation data at the time of shooting. If the connection failed for whatever reason, you cannot go back and add geotags to photos after the fact.
Even though the Canon app can connect to all 3 aforementioned cameras over Wi-Fi, their geotagging functionalities are completely non-overlapping. You cannot send geotags to the 6D at all. You cannot geotag the M6 in real time. You cannot send geotags to the RP after a photo is shot.
By contrast, when you shoot a photo on a phone, it gets geotagged automatically with no fuss. The phone's accurate GPS+GLONASS+WiFi sensors are used (unlike the slow and inaccurate 6D), you don't have to do manual actions after shooting (unlike the M6 synchronization), and you don't need to wait because the phone geolocation service is always ready (unlike the 6D and RP).
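The M6-style "apply geotags after the fact" workflow boils down to matching each photo's timestamp against a phone-recorded track log. A rough sketch of that matching step (the data and function names here are illustrative, not any Canon API):

```python
# Assign each photo the track-log point nearest in time, skipping photos
# with no fix within a maximum time gap. Track data is made up.
import bisect
from datetime import datetime, timedelta

track = [  # (timestamp, latitude, longitude) from a hypothetical phone log
    (datetime(2021, 6, 1, 10, 0), 51.5007, -0.1246),
    (datetime(2021, 6, 1, 10, 5), 51.5014, -0.1419),
    (datetime(2021, 6, 1, 10, 15), 51.5033, -0.1196),
]

def geotag(photo_time, track, max_gap=timedelta(minutes=10)):
    times = [t for t, _, _ in track]
    i = bisect.bisect_left(times, photo_time)          # binary search
    candidates = [j for j in (i - 1, i) if 0 <= j < len(track)]
    best = min(candidates, key=lambda j: abs(times[j] - photo_time))
    if abs(times[best] - photo_time) > max_gap:
        return None  # no fix close enough in time: leave photo untagged
    return track[best][1], track[best][2]

print(geotag(datetime(2021, 6, 1, 10, 6), track))  # nearest: the 10:05 point
```

This also makes the camera-clock caveat above concrete: if the camera's time is off by hours, every lookup lands outside `max_gap` (or on the wrong point), which is why the M6 workflow insists on a correctly set clock.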
OK, silly non-specialist and even non-hobbyist question.
The only reason I own a DSLR (the cheapest Nikon I could find) is that I hate looking at an LCD. With how (D)SLRs are set up, what I see in that viewfinder is what I get in the photo.
Mirrorless cameras only have the LCD right? Is there any option that fixes that?
Note that there may be a camera category that I don't know about that fixes my problem. If yes, please tell me which it is :)
Lots of mirrorless cameras have electronic viewfinders - you have a conventional-looking viewfinder, but you look at a tiny screen, rather than a prism and a mirror. Because you are seeing an image captured by the sensor, it is precisely "what you see in that view finder is what you get in the photo" - including things like adjusting exposure compensation, which you would not see in an SLR viewfinder.
This also allows various helpful features to operate in the viewfinder, rather than just on the back LCD. I use magnification a lot, and there is also focus peaking and highlight and shadow indication. It can be really useful to have those a couple of button taps away while I'm composing a shot.
It's the smallest (usable) full-frame camera from Sony (there's another smaller one from Sigma, but it doesn't have a viewfinder). It weighs 509 g / 1.1 lb, has the same sensor as the Sony Alpha 7III, and has better autofocus (tracks humans as well as animals with eye detection, for photo and video).
Not even, actually. Optical viewfinders have a different optical path, and you will see a different image. For example, you'll get incorrect behaviour at large apertures.
No, it's also true for traditional SLR cameras, due to the nature of focusing screens and other issues you don't get accurate depth of field for very fast lenses, for example.
Actually, it is true for SLRs too. The pentaprism and other elements in the optical path are not large enough to transmit light at an aperture over f/2.8.
Stopping down isn't an issue. The issue is... not stopping up to the max aperture.
That's the key feature of the design - there is no _real_ optical viewfinder because there is no mirror & pentaprism, which would be needed to redirect the incoming light from the sensor to a viewfinder.
Some mirrorless cameras have viewfinders, but it's either a digital display of what the sensor sees, or an actual optical path that doesn't go through the lens system. In either case it's not actual light redirected from the lens. A few illustrations:
Oh, I looked at a few models and indeed some have what looks like a viewfinder. If there's an LCD inside instead of a direct optical path, I don't think it matters for my simple needs. As long as I don't have to hold the camera at arm's length and still not see anything on that external LCD because of the sun...
The one mirrorless I played with didn't have a viewfinder, so I assumed that's how they all are.
Looks like I just need to be careful what I buy. Thanks!
The viewfinders in mirrorless (at least in Canons) are all electronic. I know, it sounds weird, I know it sucks up battery life, I know there are concerns about lag. However, I resisted them for a long time until I actually used one.
They're pretty awesome. I recommend them to anyone; there are zero issues. And yet, I still don't feel right about them.
My Sony A6100 has a 120 Hz OLED viewfinder which is smooth and good enough that I don't believe the advantages of an optical viewfinder outweigh the benefits of a new mirrorless camera body.
DSLR manufacturers also actively withheld features, unlike phone makers, to enable market segmentation.
Open-source firmware add-ons for DSLRs, like Magic Lantern, prove they are just market manipulators. Example: every Canon DSLR is intentionally software-limited in features like time-lapse support, shorter exposure times, motion detection, bulb activation, and many more.
In general, the product space was left undeveloped for years so they could keep their short-term profits from massively marked-up high-end bodies; unlike the phone world, where every phone has the best features it possibly can, and there is no $2k or $5k+ iPhone which "really" has the best features. Canon and the others did not innovate, and this is what they should get. It's a shame considering what they might have been.
Entry-level DSLRs also only had 1 control wheel for a long time. This was always one of my biggest issues when looking for a decent, affordable DSLR. It really kneecaps usability. When in aperture priority, having only a single wheel for f-stop, exposure compensation, and ISO is an exercise in frustration.
Or how even on $2k+ DSLRs you cannot specify auto-ISO limits, i.e. AUTO means 50-400 but do NOT go over that. So you can't build your requirements into the camera: after verifying that quality up to a certain ISO is okay for your purpose, and above it is not okay, you have no way to force the camera to match your requirements.
I started with SLRs in high school photography and loved how you could see just what the camera was doing. Using the Pentax K1000 with a 50mm f/2.0 lens (at the time a super cheap combo kit) just inspired me to really learn more about photography. When I moved to DSLRs later, it was even better: I could now see the images and take a nearly endless number of photos.
But I started to get bogged down trying to take a full bag of lenses and a large DSLR to social events or traveling. I found that I was spending more time behind the lens with the technical side vs spending time capturing the moment. Sold off my kit and went with a Fuji X100T, a fixed-lens compact camera. It's not great for action but worked wonderfully capturing the kids, and with the fully silent mode, very well for weddings.
Once my phone camera quality started to approach my camera, I stopped using the Fuji as much and followed the rule of "the best camera you have is the one you carry". While I love having that dedicated camera, just having the live cloud backup, apps to edit, made my next decision to be an upgraded phone over a camera. It's hard to hear the news about Canon especially, the 70-200mm F/2.8 IS lens is simply amazing and I rented a 85mm f/1.2 which is the lens for portraits.
Likewise, when it came time to upgrade my camera drawer, I decided spending a bit more on the iPhone Pro made more sense than a new body or lens. Glad I did, as it’s always in my pocket.
I do still have a pair of Olympus u4/3 bodies and ~5 lenses for special occasions. Vacations, wildlife, etc.
The biggest reason why I held off on the camera was missing the fisheye lens, once that became a common feature, I was pretty much sold going with the phone over a camera.
My use case for the camera are wildlife (reach), low-light (phones can’t beat a moderately priced f1.8 lens), and macro (haven’t tried an iPhone 13 Pro yet).
General trend in technology during my 55 years on the planet: moving parts get replaced once the replacing tech is good enough, at that point looking back is just nostalgia.
In practice, the true effect of mirrorless cameras and their new lens mounts is in how they affect the optical formulas that manufacturers are choosing. No longer does an optical designer have to make the same compromises, when they can assume that distortion, vignetting, etc. can all be corrected in camera and in software. The shorter flange distance gives more options for optical designs, which may allow for smaller or more efficient lenses.
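As a toy example of the kind of correction being assumed away here: natural vignetting falls off roughly as cos⁴ of the ray angle, so a raw converter can undo it with a simple radial gain map. This is an illustrative model, not any manufacturer's actual pipeline, which uses per-lens calibration data:

```python
import math

def vignetting_gain(x, y, cx, cy, focal_px):
    """Gain that undoes natural (cos^4 law) vignetting for the pixel at
    (x, y), given the optical center (cx, cy) and focal length in pixels."""
    r = math.hypot(x - cx, y - cy)   # radial distance from the optical center
    theta = math.atan(r / focal_px)  # ray angle off the optical axis
    return 1.0 / math.cos(theta) ** 4

# Corner pixel of a 6000x4000 frame, focal length 4000 px (assumed numbers):
print(round(vignetting_gain(0, 0, 3000, 2000, 4000), 2))  # 3.29, i.e. ~1.7 stops
```

A real pipeline does the same per pixel with measured falloff curves; distortion correction is analogous, except it remaps pixel coordinates instead of scaling values.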
The other interesting development has been with the use of linear motors for moving lens elements which is allowing for significantly improved autofocus speeds on lenses that would traditionally have been expected to focus slowly.
I used to be a big photography hobbyist and I'm slowly getting back into it. My interests were a) street photography b) portrait lighting / controlled lighting with remote strobes (a.k.a. strobist).
I've owned Canon EF gear for decades, and the 'most recent' body, which is now old, is a 6D, and my best lens is a 35mm f/1.4L from the early 2000s which I love.
A few years ago I invested in Fuji gear, including the X100 series and an X-T4 with two prime lenses.
A couple of points.
1. DSLRs have big sensors and lenses. The 6D with lenses like the 35 f/1.4 which I have, and the 85 f/1.2 which I rented, have INSANE bokeh and low-light performance. I don't claim to have a science background, but it's pretty well established that a bigger sensor (the 6D has a 35mm-size sensor) and a large aperture contribute to low-light performance: the bigger aperture because you have a faster lens (it needs less light / you get away with a [edit] faster shutter speed), and less noise (the larger sensor has fewer noise artifacts). This gives you a fair amount of creative flexibility in your settings.
I thought about selling the 6D and the 35 but I'm having trouble 'letting go' even though I haven't used them in a while (and they are huge and heavy)
2. DSLRs have a hella lotta weight to them. When traveling, I used to travel with the 6D and a bunch of lenses (some I have now sold, including a 70-200 f/4L with which I took pictures I'm proud of and want to keep forever). Well, the stress of filling 30-50% of your luggage space with camera gear and nervously waiting for your suitcase at the airport conveyor belt is one thing I don't miss, nor is having to pick and choose which lenses to bring on a hike.
3. Mirrorless has practically unlimited shutter speed, since it's electronic. If you want to shoot against the sun, turn your shutter speed so high that the world is dark, and use controlled lighting with a strobe, you can't do that with a 6D due to the slower maximum shutter speed, since its shutter is made of physical curtains. This gets you cool effects outdoors.
4. Mirrorless are obviously better street cameras, since they fit in your pocket / hand and aren't as visible. Once I got into a concert where no cameras were allowed, because the rule was: no cameras with removable lenses (those are only for the paid pros at the concert).
5. Subjectively: I'm having a LOT of trouble steering myself away from the DSLR mindset, I'm finding myself reluctant to use the XT-4 for portrait type work. And I think I'm selling the X100F since I prefer 50 EFL instead of 35 EFL for the street.
I think the camera industry has yet to realize the amount of processing power that's available in modern chips.
Once they do, the very concept of lenses as we know them is likely going to evolve drastically: at the end of the day, the signal captured by the physical CCD need not look anything like the finished product (the final photo).
They've realized that for color already (CCD captures a signal that's heavily processed for color before final delivery, eg debayering + color correction).
Wait till they realize they can pull off the same trick with geometry, the entire discussion in this thread will become moot in a heartbeat.
Just look at iPhone bokeh (background blur), it's utterly disgusting. Nothing beats real-world light rays hitting a large sensor, the larger the better (that's why 4x5 cameras still shit on modern 150mpx sensors).
Or are you just enamored with an artifact produced by existing sub-par technology after having been exposed to it for long enough that you now have Stockholm syndrome?
Ever wondered why modern digital film is still shot at 24fps and a shutter speed of 1/48th of a second when a faster shutter and higher frame rate would produce a technically superior product? It’s because audiences are enamored with the feeling that subpar technology created decades ago, and want that feeling to continue. It’s also why guitarists use tube amps and digital tube-amp emulators are so popular. Art is context, and film and photo have over a century of context.
A faster shutter speed would not produce motion blur that looks similar to how human vision perceives it in reality. A higher framerate without changing the shutter speed (i.e. 48 FPS with a 1/48s shutter duration) produces a smoother result with the same motion blur, but may still be rejected by audiences because "that's not how movies should look".
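The 24 fps / 1/48 s pairing above is just the classic 180-degree shutter rule from film cameras: exposure time per frame is (shutter angle / 360) / frame rate. A quick sketch:

```python
def exposure_time(fps, shutter_angle_deg=180):
    """Per-frame exposure duration for a rotary shutter of the given angle."""
    return (shutter_angle_deg / 360.0) / fps

# The classic film look: 24 fps with a 180-degree shutter gives 1/48 s
print(round(1 / exposure_time(24)))       # 48 -> i.e. 1/48 s
# 48 fps keeping the same 1/48 s exposure implies a 360-degree shutter
print(round(1 / exposure_time(48, 360)))  # 48 -> same motion blur, smoother motion
```

This is why a higher frame rate at an unchanged shutter duration preserves the familiar blur while changing only the cadence.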
Yes. When your eye is focused on something close, point sources of light in the background will look larger and blurry. You'll also see two of them if you have two eyes open and have a sense of depth that a single lens projecting the scene onto a flat sensor cannot capture in the same way. 3D photography using two lenses has existed for a while, but viewing the result requires compromises relative to a print on paper or conventional screen technologies.
The ability to have a shallow depth of field is valuable as a compositional tool to focus the viewer's attention where the photographer wants it. Bokeh - how the defocused area looks is an artistic quality, and so far the digitally-created version I've seen produced by smartphones is not aesthetically pleasing to me. I imagine that will change as the algorithms get better.
I think some of the trends currently popular in photography using dedicated cameras are driven by what smartphones can't do, which includes (optical) shallow depth of field.
It's pure physics: every lens-based optical system has them, including your eyes.
Also, photography is art, bokeh is considered pleasing for portraits, that's a fact of life and it doesn't look like it's going away, it's so visually pleasing that phones are trying to replicate the effect.
The "data", as you call it, can come in all shapes and forms, as long as it contains enough information to rebuild what the camera "saw".
The "data" could come from 150 sensors arranged in a geometric pattern that completely alleviates the need for a traditional lens (see telescope arrays).
cameras are a dedicated tool for sampling visual data. they are very tightly designed to be good at that, and have been optimized to a nearly ideal form over the course of more than a century. cameras won't provide a decent ergonomic experience for anything else without significant change.
on the professional end, the camera will always remain strictly a sampling tool, and processing the sample is done with full attention, usually later. the level of detail and control that is already demanded by post-processing requires a complex interface, and often dedicated control surfaces. additional creative complexity will not be attempted at point of capture with worse control.
in every other use case, of course there is room for experimentation, but there is no reason to put that on the camera. every user is carrying a more powerful device with a bigger display and better touch surface in their pocket. the camera will remain an optimized sampling tool.
Strictly none of the things you said contradict what I said.
There are very many ways to conduct sampling, the current method just being one of them, and the past 100 years of optimizations were conducted under the rather punishing constraint of "no math allowed at the end of the pipeline".
DSLRs are dead because they can't do the stuff phone cameras can do on the software side. No DSLR can take 100 raw frames inside 100 milliseconds and do optical flow and stacking to get a higher-resolution, reduced-noise, nobody-blinks photo.
None have 3 lenses like an iphone to be able to generate a depth map and use it for smart stuff like software-adjustable focus after taking the photo.
The best you can hope for with DSLRs is 10 raw frames in 100 milliseconds, and a lot of effort in Lightroom or Photoshop to manually overlay and select frames with the least motion blur and the best face of each person.
Actually, mirrorless cameras CAN generate a depth map from their phase-detection sensors.
They also can take multiple frames and stack them automatically, either for higher resolution or lower noise.
But the thing is, there's no reason you would want 100 raw frames in 100ms (which no phone can do, that's 168Gbps of bandwidth, the sensor interface can only do around 8-12 Gbps), instead of a single 20ms exposure from a large, high resolution sensor.
A 35mm sensor will, in that 20ms exposure, accumulate around 20 times as much light as the phone (at a normal focal length, its best case) taking 100 pictures in 100ms. If you're using the wide-angle or the "telephoto" lens, the big sensor is now accumulating anywhere from 40 to 200 times more light.
The big sensor can have three times the resolution or more, and actually make full use of that resolution. At that quality, no amount of image stacking can improve the resolution unless you have a $2000 lens and a tripod to use sensor-shift mode - which a mirrorless camera can also do.
In the end, even if your algorithms are perfect, you will always have a much lower resolution and much higher noise on the phone, no matter how much image stacking you do. At that point, you improve by using AI to replace the imperfections with plausible looking generated replacements. You can also do that well on a computer, but in most cases this only makes it worse, because there aren't the imperfections to begin with.
If you decide to discard frames with motion blur and imperfect faces, now you go from 100 pictures to stack to maybe 10. At that point, the critical parts of your picture will only have 8ms or so of actual exposure vs 20ms on the MILC. But the 20ms exposure also has 50-300x more light in it, so now your phone camera is way behind.
So it ends up being better to just crank up the shutter speed to 1/200 and get a 5ms full sensor exposure, as the critical parts of your picture will have more light, less noise, and more detail than the stacked version you take on your phone.
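The arithmetic behind these claims is easy to sanity-check. Assuming a 12 MP, 14-bit phone sensor for the bandwidth figure, and typical sensor dimensions for the light-gathering ratio (both are assumptions for illustration; the 20x and 40-200x figures above additionally fold in lens and stacking details):

```python
# Readout bandwidth for "100 raw frames in 100 ms", i.e. 1000 fps:
pixels, bit_depth, fps = 12e6, 14, 1000
bandwidth_gbps = pixels * bit_depth * fps / 1e9
print(bandwidth_gbps)        # 168.0 Gbit/s, far beyond a phone sensor interface

# Light gathered scales with sensor area x exposure time (same f-number assumed):
full_frame_area = 36 * 24    # mm^2
phone_main_area = 9.8 * 7.3  # mm^2, roughly a large 1/1.3"-class phone sensor
print(round(full_frame_area / phone_main_area, 1))  # ~12x more light per unit time
```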
Software support for moving photos from SLR => disk/gphotos is nonexistent. Huge miss that DSLR companies haven't made this work. They just gave up and ceded the market for "take a picture and conveniently share it on social media".
The Camera Control API (CCAPI) which Canon provides is actually very good; it allows our company to create software which grabs the pictures off the camera and puts them where we like.
I think there was a peak in hobby photography after the introduction of consumer-grade DSLR cameras, which in my opinion is slowly fading away. The bulkiness of having a dedicated camera to take pictures stopped making sense to me. For special photography needs I can rent or borrow a friend's camera, like my parents used to do in the SLR days.
Like most other things in life, we are progressively choosing convenience over features, and photography just reached its limit to add new and useful features.
By "hobby photography" it sounds like you mean SLR hobby photography. Of course, hobby photography has boomed off the charts with phones.
Meanwhile, professional photography is nearly an extinct profession.
Over the last 20 years, as print publications were replaced by online, the professions of journalist and press photographer declined to the point where they now hardly exist.
Say you are buying a car in pre-Tesla days. There is plenty of competition in the market and there is no clear market leader. You need to buy because you have to have a car. The features never add much to anything, and you make your decision based on recommendations and reputation.
Now, say you are making cars and you want people to buy more cars. How can you do that? Upping the speed of the average consumer vehicle? Making it more mobile? Making generic upgrades in comfort features? Essentially, adding units to existing features isn't a viable sales strategy. You can do nothing to get an uptick in car sales by just adding units to the features you offer.
Now you have the entire camera industry. Since the advent of decent mobile photography, you don't have to buy a camera, so the manufacturers are already in a bad position. Then, as far as innovation goes, they are just adding units to existing features. More megapixels, better low-light capabilities, higher FPS, and that's about it. There isn't any effort or innovation in delivering something that is feature-wise a new concept. There isn't anything that warrants someone buying and carrying it.
This isn't about killing off interchangeable-lens cameras that look like DSLRs, it's about killing off actual DSLRs.
They're just getting rid of the inferior mirror technology, the thing that flips in front of the sensor, not conceding to phones.
The entire sony a7 line, for example, is still going to be made
It's kind of a troll discussion, but it's a good thing to happen, because SLRs have been associated with a look or form factor for too long, and it's time for that ambiguity and term to die.
I'm going to dispute, a little, that mirrors are inherently inferior.
1. It's only recently that the very best mirrorless cameras (Sony A1, Nikon Z9) have really reached the autofocus capabilities of the best SLRs…and arguably, they still aren’t quite there. It's close, but a Nikon D6 is still a bit better—but certainly not worse. The dedicated phase detection sensor is powerful. If only because you get more phase separation.
2. A DSLR, used like a film camera, has what feels like a perpetual motion machine inside of it. Just wild battery life. My old Nikon D700 had wayyyyy better battery life than a current camera, because if you kept the screen off, you were hardly touching the electronics in the camera. The imaging sensor was almost always asleep, and drawing next to no power.
Obviously mirrors have drawbacks. Phase detect sensor/imaging plane alignment issues, and they are big and clunky, but they have benefits.
Disagree. The best mirrorless cameras, say A7iv/A1, are a lot better at focusing than any DSLR. Not only do they focus accurately, but they have eye-detection AF. Yes, it will take slightly longer for them to give you the AF confirmation in some scenarios, but that's because they only give focus confirmation when it's actually perfectly sharp on the sensor. No DSLR can do that, because of the way dedicated phase detection sensors work.
As far as battery life, you're right, but now that battery banks are a thing I don't think it's a real issue for those days you're going to be shooting for 4+ hours, and otherwise they're just so convenient to charge by usb-c
I disagree with you on the autofocus front for most use. I have a D850 and Z6ii strapped to me for most jobs, and the only time I don’t find the Z6ii to be leagues better is in heavy backlighting or extremely dark scenes.
My hit rate for wedding dances goes way, way up when using the Z6ii. For regular portraiture they’re both fine, but only with the Z6ii can I get the front eye tack sharp 90% of the time at f/1.4.
Sports might be another matter, but I did some Polar Plunges last year mostly with the Z6ii and it did extremely well. That has people running and jumping in to water, and it usually kept their faces in focus until a splash obfuscated them.
The smartphone photography reminds me of TV-VCR combos of days bygone. It works ok for both purposes, it's convenient... yet it speeds up the obsolescence. Eventually, you're on the market again for either one or both components. Unlike with the dedicated devices, which may retain their use for longer (even if relegated to drawer life) due to their specialization by design.
It's a really, really, really bad idea. DSLR lenses are designed for sensors which are 24mm x 36mm. Cell phone sensors vary, but a largish one might be around 6x8mm.
That's not to mention issues like autofocus.
There were a few attempts to bridge this gap, which attach a large sensor to your phone. The best-known ones are the Sony QX1 and the Olympus Air. They never took off. I think that was due to very poor execution. I thought the Olympus Air was close, but it had a few complete showstopper issues which made it unusable. A v2 might have done well, but instead, Olympus gave up.
A few cameras are getting improved workflows with interfacing to cell phones, so perhaps we'll get there another way.
Back in the day, hobbyists looking for a cine look would hook up a kind of adapter to use SLR or cine lenses with a consumer camcorder.
Of course the camcorders had tiny sensors behind fixed lenses, so what was happening was the adapter had a bit of vibrating ground glass onto which the external lens was projecting its image, like a viewfinder, and the camcorder recorded that.
The effect looked very good. Manual focus of course, but this was considered desirable. I'm not suggesting this as a commercial solution, I just thought it was very inventive.
Honestly, smartphone workflows are just bad for different reasons. The best UX in digital photography is found in Leica rangefinders and Hasselblad / PhaseOne detachable medium format backs (not really the mirrorless medium format Hasselblad or Fuji cameras, which have the usual awful menu systems).
It depends on what you want out of a workflow. As I'm getting older, I'm finding I like taking photos, I like viewing photos, and I hate editing photos. For 99% of what I do, cell phone workflows would be great, if not for the privacy issues.
Google takes my photos, and sometimes shows me "this day 3 years ago" or a nice montage. I like that.
I like having raw files for that 1% of photos I will edit, but that's a much less important use.
Of course, yes. My issue with phones (besides the limitations in acuity, DR, etc.) is that they unpredictably decide a lot by themselves in an opaque fashion.
- Camera unpredictably decides a lot by itself in an opaque fashion. I get useful automation for family photos.
- I get a RAW file if I want to decide something different or if I'm printing and framing a photo
99% of the photos I shoot fit in the former category, but the 1% in the latter is about equally important (1 framed photo is worth a lot more than a random cell phone snapshot).
RAW+Jpeg with a modern mirrorless in Aperture priority and vivid presets does that for me. Send off all the pics into Amazon Photos or Google Photos, put the RAWs in my NAS, and edit them whenever I want to.
It is possible but the iphone has a tiny sensor with other lenses on top that you can't easily remove. You would also have issues with lenses that use electronic aperture control.
A cool trick, although it takes practice to get it right, is to shoot with a camera through a binocular lens. You can do that with SLR lenses too; similar issues: light leakage, stability, edge effects, etc.
But it's fun.
OK, I get that. In that case, it would be fantastic if someone created professional lenses for smartphones. The nearest I’ve seen come nowhere near telephoto capabilities.
The thing is, if you wanted to get a 200mm lens with equivalent quality to the main lens in the iPhone 13 Max, it would need to be 8 times wider and 8 times longer. That's around 5cm long and 3cm wide. Just doesn't work.
Not practical because of physics. It would ruin phone ergonomics and you'd end up with a brick, or maybe require a cumbersome adapter and lens assembly. New flagship phone camera assemblies are big and ugly enough as it is, and phones already carry 3 or 4 cameras because the consumer demands it.
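The scaling argument above is just equivalence math: to keep the same field of view and the same light-gathering, the physical entrance-pupil diameter must stay the same regardless of sensor size, which is what makes a long telephoto on a phone absurd. A sketch with assumed numbers:

```python
def equivalent_lens(ff_focal_mm, ff_fnumber, crop_factor):
    """What a small-sensor lens needs in order to match a full-frame lens's
    field of view and physical aperture (light-gathering / depth of field)."""
    focal = ff_focal_mm / crop_factor             # same field of view
    aperture_diameter = ff_focal_mm / ff_fnumber  # entrance pupil must match
    fnumber = focal / aperture_diameter           # resulting f-number
    return focal, aperture_diameter, fnumber

# Matching a full-frame 200mm f/2.8 on a ~7x-crop phone sensor (assumed crop):
focal, pupil, f = equivalent_lens(200, 2.8, 7)
print(round(focal, 1), round(pupil, 1), round(f, 2))  # 28.6 71.4 0.4
```

A ~71 mm entrance pupil is physically what "200 mm at f/2.8" means; shrinking the sensor shortens the focal length but cannot shrink the pupil without giving up light and background blur.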
In order to have nice bokeh (a function of the aperture of the lens), you also need to have a shallow enough depth of field at a fairly close range.
As the sensor gets smaller, and the field of view remains constant, the depth of field increases and so the opportunity for bokeh disappears.
For pleasant bokeh in a video call, you're going to need a larger sensor than the ones used in camera phones or laptop screens - probably significantly so. Additionally, the lens is going to be quite a bit larger to have good illumination across the entire sensor.
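A standard approximation makes the sensor-size effect concrete: total depth of field is roughly 2·N·c·s²/f², where N is the f-number, c the circle of confusion (which scales down with the sensor), s the subject distance, and f the focal length. Comparing identical framing on full frame and a phone, with illustrative rather than measured numbers:

```python
def depth_of_field_mm(focal_mm, f_number, subject_dist_mm, coc_mm):
    """Approximate total depth of field (thin-lens approximation, valid
    when the DoF is small relative to the subject distance)."""
    return 2 * f_number * coc_mm * subject_dist_mm ** 2 / focal_mm ** 2

# Head-and-shoulders framing at 1.5 m (assumed numbers):
# full frame: 50mm f/1.8, CoC ~0.030mm; phone: ~7mm lens f/1.8, CoC ~0.004mm
ff = depth_of_field_mm(50, 1.8, 1500, 0.030)
phone = depth_of_field_mm(7, 1.8, 1500, 0.004)
print(round(ff), round(phone))  # 97 661 (mm)
```

Same framing, same f-number: the phone's depth of field comes out roughly 7x deeper here, which is why its optical background blur is negligible and has to be synthesized.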
From a cost perspective, the cost of the sensor has to do with the yield of the wafer. It costs about the same to make a wafer no matter what you put on it (with certain caveats of the complexity of what you put on it).
If you're working from a full frame sensor, you can at best fit 24x 24mmx36mm chips on a standard 8" wafer. Going to a 13.2mm x 8.8mm sensor you can put 244 of them on the same wafer. Then there's the yield on top of that which further reduces it.
This gets it to the point where "to have a nice depth of field for a video call, you're going to need a camera that has similar costs and dimensions of a good digital camera." Adding that to a phone or laptop screen isn't possible.
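The gross-die count can be estimated with the standard formula: wafer area over die area, minus an edge-loss term for the partial dies along the rim. The exact counts quoted above depend on edge exclusion and scribe-line assumptions, so this estimate comes out a bit lower, but the order of magnitude is the point:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_w_mm, die_h_mm):
    """Classic gross-die-per-wafer estimate: wafer area / die area, minus
    an edge-loss correction for partial dies at the wafer rim."""
    d = wafer_diameter_mm
    die_area = die_w_mm * die_h_mm
    return int(math.pi * (d / 2) ** 2 / die_area
               - math.pi * d / math.sqrt(2 * die_area))

# Standard 8" (200 mm) wafer:
print(dies_per_wafer(200, 36, 24))     # ~21 full-frame sensors
print(dies_per_wafer(200, 13.2, 8.8))  # ~229 1"-type sensors
```

So a full-frame die consumes roughly 10x the wafer real estate of a 1"-type die before yield, and defects hit large dies disproportionately hard on top of that.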
That said, I wouldn't be unhappy to find an affordable usb capable video camera with a nice lens on it (and maybe paired with a ring light or off axis hot lights)... but that's probably easily in the $300-$400 range.
IME the remaining advantage of SLRs is the speed of phase focusing over contrast focusing - moving subjects are much easier/faster to track with phase focusing.
Modern mirrorless cameras (and some DSLRs) use phase detecting image sensors.
E.g. on Canon R5 every pixel on the sensor is microlensed into two sub-pixels, one that sees the left half of the aperture and one that sees the right half. The difference between the two is the phase information for auto-focusing.
The use of the image sensor for auto-focusing also means that the camera can use ML object detection to refine the focus point (e.g. eye detection), and the camera can use optical flow for servo tracking of moving subjects. Without a viewfinder prism in the way, the focusing sensor also receives more light, and it's not interrupted while a photo is being taken. So even though the on-sensor phase detectors are less good than the ones used in DSLRs, there are a lot of offsetting benefits.
The camera can even preserve the dual subpixel info in raw files, though there isn't yet much software to make use of the info in post-processing.
In manual focus mode the camera can highlight the portion of the image that's in-focus according to the phase detection. It's pretty @#$@ amazing, especially when using a tilted lens.
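The underlying computation is conceptually simple: the left-pupil and right-pupil sub-images are shifted copies of each other when the subject is out of focus, and the signed shift tells the camera which way and how far to drive the lens. A toy sketch of that matching step (the real implementation is far more elaborate and runs across many windows of the frame):

```python
def best_shift(left, right, max_shift=5):
    """Find the integer shift that best aligns two 1-D sub-pixel signals
    (mean squared difference over the overlapping region). The sign and
    magnitude of the shift indicate defocus direction and amount."""
    best, best_err = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s]) for i in range(n) if 0 <= i + s < n]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

# A defocused edge: the right-pupil view is displaced by 2 photosites
left  = [0, 0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0]
right = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0]
print(best_shift(left, right))  # 2 -> focus is off by a known, signed amount
```

In-focus subjects give a shift of zero; the sign flips with front- vs back-focus, which is what lets the camera drive the lens directly instead of hunting like contrast AF.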
Phase-detection AF on the image sensor has been here for more than a few years. I remember the 5D2 (or 5D3) had a "Dual pixel AF" system that does phase detection.
My A7m3 does phase detection if there is enough light and it is practically as quick as the DSLRs I used before.
From the reviews I've read, it sounds like contrast focusing is damn fast these days. A mirrorless system won't beat a DSLR under all conditions, but it will under most conditions, at the same price point.
They're not really even in the discussion when it comes to being real replacements for DSLRs. Their sensor size is significantly smaller than even a crop-sensor DSLR/mirrorless camera, and roughly a quarter of the size of a full-frame camera's.
This smaller sensor size means they're not going to be used at the prosumer or pro level, and that in turn means a limited selection of high end lenses, etc.
Interchangeable-lens cameras now all have video features, and increasingly most of the improvements are in that area. When SLRs are used for video, the mirror needs to be flipped up and the autofocus system that is used for photos can’t be used, so the camera needs another one on the sensor. In this case the mirror is redundant and the viewfinder can’t be used.
Tracking of fast-moving subjects is difficult with an SLR: the SLR cannot see the image in viewfinder mode, only a focus module can, which likely has only a few hundred focus points (or fewer), and those points often don’t reach the edge of the frame. Additionally, mirrorless cameras are able to track a subject's eye using AI and keep that in focus. An SLR cannot do this in viewfinder mode, as the focus sensor does not have nearly enough resolution to recognise a small item like an eye, or to know that it is an eye.
Burst shooting is also difficult on an SLR: for each shot the mirror needs to flip up and down, and the focus module only has a brief period in which to change focus. Canon is/was the leader in sports photography cameras. The highest-end Canon SLR can do 16fps with autofocus, but 20fps with the mirror up. The Sony a1 (mirrorless) can do 30fps. These fast shooting rates are only possible with mirrorless cameras.
Integrated sensors, really. Things like AF would run on separate sensors in a DSLR. Sony removed the mirror and put everything on the main sensor: fast AF tracking, and a shorter distance between sensor and lens, so better low-light capability.
I get why this is happening but it still makes me sad. There's really no better experience than being able to look through the viewfinder, track the target and have a pretty good idea of the composition (the viewfinder is typically 95-99% of the frame). Any mirrorless system has limitations on low light and latency (of reading the light and then displaying it on a separate screen). This has gotten better but it can never really compete with the sensitivity of a human eye and low latency of passing through a prism (or mirror).
The article mentioned compatibility and flange distance but didn't really explain why this is the case. Take the two big 35mm DSLR players of the 20th century, Nikon and Canon. Nikon introduced the F mount in 1959 [1]. These were manual-focus lenses; autofocus came much later. Nikon chose compatibility to the point that you can buy a DSLR and a manual-focus F-mount lens from 1960 and it will work.
Canon OTOH had a number of lens mounts (eg FD). When Canon introduced autofocus they bit the bullet and broke backwards compatibility to introduce a new mount, the EF [2] in 1987. Oddly, this situation kind of reminds me of how Java and C# introduced generics, each with a different philosophy.
But the point is that Canon learned from its "mistakes" and corrected them with EF and benefitted from it.
None of this really explains why DSLRs are dying though. This chart [3] does. Digital cameras in part funded DSLR development and phones killed the digital camera market (~85% drop in sales from the peak).
I like digital cameras. Generally speaking picture quality is better, optics are better, framerates are better and the form factor allows things that phones can't do (eg waterproof to 40m, shooting 1080p at ~1000fps, 100x optical zooms).
But they're a pain in the ass to use. I think this is a failing of camera manufacturers not to integrate seamlessly with phones. Like why can't I pair a camera with my phone and have photos just transfer to the phone automatically? I end up never using digital cameras because I simply can't be bothered with the steps required to do something with that photo. Phones solve that so much better.
Lastly, it's worth pointing out that at this point only two companies produce digital camera sensors at any kind of volume: Canon and Sony. Sony literally produces almost everything else. This volume and market hold gives them economies of scale that no one can compete with.
But Sony also makes phone camera sensors, so Sony really doesn't lose anything if you buy a camera instead of a phone or vice versa. This probably contributed to manufacturers not reacting to the smartphone revolution that ultimately destroyed that industry.
Mirrorless cameras now have much better low-light performance through the viewfinder than a DSLR. An optical viewfinder is limited to roughly f/2.8 worth of brightness, while a mirrorless EVF can show any aperture your lens has. The camera's sensor has around 3-4x the sensitivity of your eyes, so we're at the point where even running the sensor at a fourth of its capacity, binning or throwing out pixels at 120fps, is still better in low light than an OVF.
Lag is an issue, yes. But the 20-30ms latency EVFs now have is really close to the blackout lag of a DSLR.
As for Sony having no incentive to make you buy a camera, I don't think that's true. They make at most $1 profit on a smartphone sensor, and probably around $50 on a large sensor. If you buy one of their cameras, they make something like $100-200 in profit, and if you buy a lineup of their lenses, somewhere around $1,000. So they definitely want you to buy more cameras!
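Taking those rough numbers at face value (they're guesses, not Sony financials), the per-customer difference is stark:

```python
# Back-of-envelope using the profit guesses above; none of these
# figures are official, they're illustrative assumptions.
profit_phone_sensor = 1      # $ per smartphone sensor sold
profit_camera_body = 150     # $ midpoint of the $100-200 guess
profit_lens_lineup = 1000    # $ for a full set of lenses

camera_buyer_profit = profit_camera_body + profit_lens_lineup
# One committed camera buyer is worth on the order of a thousand
# phone-sensor sales under these assumptions.
print(camera_buyer_profit // profit_phone_sensor)
```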
As far as photos being seamlessly transferred, new Sony cameras do have that. As soon as you start the app, it will transfer images you take in the background, but I don't know if it works with RAW images.
> Any mirrorless system has limitations on low light and latency (of reading the light and then displaying it on a separate screen).
Surely there is more latency, but on the most recent mirrorless cameras like the Canon R5 it's not noticeable.
Low light there is absolutely no contest: R5's viewfinder is totally usable in situations where you'd see nothing but black in a DSLR viewfinder. Sometimes after going out at night with the R5 in the woods I find myself looking through the viewfinder just to get a better view of the path ahead.
The electronic viewfinder also makes it practical to build fixed-aperture, high-ratio lenses that would be too dark to use comfortably through an optical viewfinder much of the time. For example, this image (https://files.catbox.moe/fgxew8.jpg) is 1/60th of a second at ISO 5000 on a fixed f/11 lens. It would have been annoyingly dark in a DSLR viewfinder (and DSLR autofocus wouldn't function on an f/11 lens).
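For a sense of how dim that scene was, the standard exposure-value formula can convert those settings into an ISO 100 light level. The formula is standard; the verbal interpretation of the result is approximate.

```python
import math

# EV at ISO 100 implied by a given exposure:
#   EV100 = log2(N^2 / t) - log2(ISO / 100)
# where N is the f-number and t the shutter time in seconds.

def ev100(f_number: float, shutter_s: float, iso: float) -> float:
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# f/11, 1/60 s, ISO 5000 -> roughly EV 7 at ISO 100:
# around nighttime street-lighting levels, far below daylight.
print(round(ev100(11, 1 / 60, 5000), 1))
```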
So by removing the mirror you remove lag in the camera system, making future cameras more like human-eye capture units, which has implications for AI development.