I find this way of describing the iPhone cameras frustratingly misleading. It's not the author's fault; Apple does it in their official specs.
The "26mm" rating is "equivalent", meaning "this looks like a 26mm lens on a 35mm film camera". It basically tells you the FOV angle.
The "f/1.5" rating is not equivalent. It's very dishonest to mix and match equivalent and non-equivalent specs, because it makes the lens sound a lot better than it is to non-expert readers. If they were being honest, the equivalent aperture would be a lot smaller, like f/16 or something.
Every other camera mfg I am aware of will consistently use equivalent or non-equivalent for all specs, and make it clear which they are using.
For example, most mirrorless camera lenses are spec'd non-equivalently, meaning that a "26mm f/1.5" means "if you put this lens on a 35mm camera and extrapolated the output to cover the whole sensor, that's what you would get". To understand what it will look like on a different sensor, you multiply 26 and 1.5 by the crop factor.
In Apple's case, it's actually impossible to tell what the image will look like from the specs without looking up the crop factor, which of course they don't advertise front and center like they do this misleading tuple of specs.
In general, lenses with smaller f/N aperture values tend to be better and more expensive (because they have a larger aperture, let in more light, narrower depth of field), so if you don't mind misleading customers, you just pick the smallest denominator you can get away with.
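To make the conversion concrete, here's a minimal sketch of the arithmetic, using a placeholder 3.5x crop factor (an assumption for illustration; Apple doesn't publish the real number anywhere prominent):

    # Hedged sketch: recover the physical specs and the equivalent aperture from
    # the 35mm-equivalent focal length Apple quotes, given an assumed crop factor.
    def from_apple_specs(eq_focal_mm, f_number, crop_factor):
        physical_focal = eq_focal_mm / crop_factor   # the lens's actual focal length
        eq_aperture = f_number * crop_factor         # DoF / total-light equivalent
        return physical_focal, eq_aperture

    phys, eq_f = from_apple_specs(26, 1.5, 3.5)      # crop_factor=3.5 is illustrative
    print(f"physical: {phys:.1f}mm f/1.5 -> equivalent: 26mm f/{eq_f:.2f}")
    # physical: 7.4mm f/1.5 -> equivalent: 26mm f/5.25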
Every time people tell me iPhone cameras are getting as good as dedicated DSLR/mirrorless, I always point this out to them. F/16 is absolutely unusable for certain types of photography (although quite perfect for landscape), and even worse with such a small sensor with that big of a crop factor (3.5x) [0]. I personally use an f/1.8 35mm on an APS-C sensor for hobby photography and I still feel the sensor is too small for portrait photography or low light, but then I'm also kind of a weirdo since I still use film cameras once in a while (can't afford a full-frame lol, so I use it for portraits).
I don’t personally care for shallow depth of field (it’s become a cliche at this point), but the best phone cameras do a pretty good job of faking it. It’s not yet perfect, but it’s only going to get better. I also like that I can precisely adjust the amount of background blur in post.
I hate hate hate “equivalent aperture” specs. I don’t mind the FOV being equated to 35mm sensor size, but give me the actual Fstop please. Back in the day I shot formats from 8x10 down to 16mm. Nobody ever tried to normalize DOF between formats by talking about fstops until digital photography. Trying to normalize DOF across formats only obscures and confuses what is actually going on optically. No particular fstop (the ratio between focal length and physical aperture) has a unique look, and yet that is what people who demand equivalent fstops assume and demand.
For exposure purposes, f1.5 is the accurate and most important spec for that lens. All mirrorless camera lenses are specced accurately with focal length and fstop. Perhaps some of the marketing is done with 35mm “equivalent” specs, but I’ve never seen a lens that had anything but the actual measurements on it. I’m not sure why people are still referring to 35mm focal lengths when talking about FOV in any case. Nobody is comparing the lens on an iPhone with one on a camera with a 35mm-sized sensor. Certainly nobody will use the iPhone lens on another camera.
The main sensor on an iPhone is larger than most phone cameras but still close enough that touting the f1.5 fstop does indeed mean that it lets in more light, has a larger aperture, and has a narrower DOF than many other phone cameras. The vast majority of iPhone users have never shot with 35mm cameras and so wouldn’t have any idea what you’re talking about when it comes to 35mm DOF. I wish companies would ditch the 35mm normalized specs altogether.
> For exposure purposes, f1.5 is the accurate and most important spec for that lens.
This is a very common misunderstanding. To whatever extent the camera "looks like" it's using a 35mm-format f/1.5 lens from an exposure standpoint, it's only because the sensor gain is turned way up to compensate. As a result, SNR is worse, so it's like you took a darker exposure on a better camera and then cranked it up in post.
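A back-of-the-envelope sketch of that point, assuming shot-noise-limited sensors, identical scene, shutter speed, and f-number, and an illustrative 3.5x crop factor:

    import math

    # Shot noise only: total photons collected scale with sensor area, so a
    # smaller sensor at the same f-number behaves like a big sensor that was
    # underexposed by the area ratio and then pushed in post.
    def vs_full_frame(crop_factor):
        light = 1.0 / crop_factor ** 2   # relative total light (sensor area ratio)
        snr = math.sqrt(light)           # shot-noise-limited SNR ~ sqrt(photons)
        return light, snr

    print(vs_full_frame(3.5))  # (~0.08, ~0.29): ~12x less light, ~3.5x worse SNR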
Again, trying to normalize on what things look like when shot with a 35mm sensor is irrelevant. If you have two camera phones with similarly sized sensors and focal lengths, an f1.5 lens is almost a full stop faster than an f2 lens. Yes, the resultant SNR is worse on a small sensor, but fstops are a ratio of focal length to aperture size, not a measure of DOF or SNR. The f1.5 lens will have narrower DOF and let in more light. F1.5 is an accurate way of marketing the lens.
Like I said before, trying to normalize different aspects of images and the resultant quality by trying to make f-stops something they aren't only confuses and obscures what is actually happening and prevents sensible discussions of camera systems.
The 35mm equivalent focal length is used just to give an idea about FOV. I wish they would just give us the angle of view in degrees instead. Then maybe people would stop trying to mangle what fstops mean in regards to DOF, the amount of light the sensor sees, SNR, etc. Compare similar sensor systems; stop trying to cram one into the other.
> fstops are a ratio of focal length to aperture size
This number in isolation is completely useless for telling anything about how the resultant image is going to look. As opposed to, say, equivalent focal length, which tells you valuable information (about FoV) independently of any other spec.
> I wish they would just give us the angle of view in degrees instead
I agree, it would be nice if they just used invariant specs for everything. FoV angle and linear aperture diameter would be sufficient for most use cases.
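As a rough sketch, both of those invariant numbers fall straight out of a lens's physical specs; the 7.4mm focal length and ~10.3mm sensor width below are illustrative assumptions, not published figures:

    import math

    def horizontal_fov_deg(sensor_width_mm, focal_mm):
        # angle of view from the pinhole model
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

    def entrance_pupil_mm(focal_mm, f_number):
        # linear aperture diameter is just focal length over f-number
        return focal_mm / f_number

    print(horizontal_fov_deg(10.3, 7.4))  # ~70 degrees, same FoV as a 26mm-equivalent lens
    print(entrance_pupil_mm(7.4, 1.5))    # ~4.9mm aperture diameter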
At a given shutter speed, the sensor gain is exactly as high as you would expect it to be at f/1.5. It’s just that with the much smaller surface area of the sensor, the signal-to-noise ratio is way worse.
The only invariant way of specifying gain is ADC linear brightness units per incident photon per subtended angle, in which case the iPhone's gain is vastly higher.
Sensor speed is a completely irrelevant spec when it comes to maximum f-stop of a lens. The f-stop is a physical ratio. People that try to make "equivalent" f-stops do so by throwing out what it actually means in order to gauge some arbitrary level of performance of a format that fewer and fewer people know anything about.
All of this nonsense started with digital photography. There was close to a century of shooting with different formats and lenses without ever trying to twist what an f-stop meant. Open up an ASC handbook and you'll find DOF charts for different formats and lenses but no talk of equivalent anything.
I want to know the actual specs of a lens, that is how you can compare with similar systems. I wish Apple would drop the now irrelevant references to 35mm. I also wish people would stop confusing a physical measurement with a quality target. To reiterate, I find the concept of equivalent f-stops to be nonsensical and think they only confuse the issue.
The f is not an equivalency number. It's the actual f stop number. F number is the ratio of the lens focal length to the diameter of the entrance pupil and is independent of the size of said lens.
The depth of field, on the other hand, does depend on the size of the lens (or rather, the size of the sensor the lens focuses its incoming light onto).
Let's take an 18mm F/2 lens on a Micro Four Thirds camera. That format has a 2x crop factor relative to a full-frame sensor. So this 18mm lens would be a (2x18 = 36mm "equivalent") full-frame lens, meaning the perspective of said 18mm would be the same perspective a standard 35-36mm full-frame lens would have. This 18mm F/2 lens would be F/2.0 based on the amount of light it lets in; that's the ratio of said lens's focal length to its entrance pupil. Only in terms of "how the background will blur" do we multiply this by 2 to show what the visual effect would be.
So really the best way of saying it would be "hey, this is an 18mm/F2 Micro Four Thirds lens (36mm, F2/F4 equivalent)".
Personal opinion: Even better if we used a different acronym for depth of field, say d or doF or something. Then we could say "36mm f2.0 DoF4.0"-equivalent lens.
So in the iPhone 14PM main camera case we'd call it a "28mm f1.78 DoF8.1"-equivalent as it has an about 4.6x sensor size crop factor from full frame.
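For fun, a toy formatter for that proposed notation; the 6.1mm physical focal length below is just 28mm / 4.6, an assumption for illustration:

    # crop_factor is whatever you look up for the sensor (4.6 per the example above)
    def label(physical_focal_mm, f_number, crop_factor):
        return (f"{physical_focal_mm * crop_factor:.0f}mm "
                f"f{f_number:.2f} "
                f"DoF{f_number * crop_factor:.1f}")

    print(label(6.1, 1.78, 4.6) + "-equivalent")  # 28mm f1.78 DoF8.2-equivalent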
With a crop factor of 4.6, the depth-of-field equivalent would actually be around f8 for the larger 14 Pro / 14 Pro Max sensor.
(and f1.78 for actual f-stop light performance)
When you get close enough you can actually take proper bokeh-producing shots. It's actually becoming a limitation to have a fixed aperture, as some closer shots would already require a higher f-stop number to get everything properly in focus.
The f/1.5 is an actual rating. It's the ratio between the focal length and the diameter of the pupil. It's important for shutter speeds, and relative performance in low light (before all the computational duckery). I can compare roughly a phone camera that's at f/1.5 vs a phone camera that's at f/2. Making a low f/number lens is more difficult than a high f/number lens, regardless of actual focal length.
And I've got a non-35mm mirrorless camera. All the lenses I've got are marked with the true focal length and true f/number, not some "equivalent". My iPhone camera photos' metadata show the focal length, both true and "35mm equivalent".
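If you want to check on your own photos, here's a quick sketch with Pillow ("photo.jpg" is a placeholder path):

    from PIL import Image

    exif = Image.open("photo.jpg").getexif().get_ifd(0x8769)  # Exif sub-IFD
    print("true focal length:", exif.get(0x920A))   # FocalLength tag
    print("35mm equivalent:  ", exif.get(0xA405))   # FocalLengthIn35mmFilm tag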
You're not disagreeing with my actual argument, which is just "they should use the equivalent ratings for both focal length and f-stop". If you prefer physically-based specs, you should be demanding that they also say the focal length is 7mm or whatever.
> It's important for shutter speeds, and relative performance in low light
It's completely useless for determining low-light perf without knowing the crop factor.
> I can compare roughly a phone camera that's at f/1.5 vs a phone camera that's at f/2
Not without knowing the crop factor. You're falling for the marketing.
I guess this is "astrophotography" for the sake of making pretty images, because I couldn't imagine the "computational AI" features he mentions towards the end of the article doing anything but horrify even an amateur astronomer.
As somebody who has a Pixel 7 I completely agree. The AI makes my photos unusable. The worst part is I can see a usable image before it finishes processing. But then it turns it into pixelated garbage.
I am often taking pictures of small text that is borderline unreadable to get part numbers off stuff. The AI processing is absolutely a bad idea.
You can install an alternate camera app if you just want raw pixels with no computational stuff.
Having said that, I find the computational stuff turns a fairly poor sensor and lens into kinda acceptable photos, as long as I don't want to zoom in (I can see far further with my eyes than the camera can when it comes to taking photos of tiny far away things)
ProRAW in the system app is still heavily processed, it just saves more of the intermediate data to let you tweak more of the processing stages.
You still need a third party camera app to capture true RAW photos.
E.g. with ProRAW you can't remove most of the noise reduction, and you can't rescue images with large white balance issues (you get fringing in one of the processing steps since the white balance is baked into several of the intermediate images)
Not sure I follow your logic, how did you end up with 9GB for a single shot, and why exactly 128 frames in 2 seconds?
The iPhone's hardware is more than adequate to capture RAW video at 60fps with binning, not just photos (too bad Apple doesn't allow it). People have been using Android smartphones to shoot RAW video for several years already. The OnePlus 8 Pro and later can easily capture a RAW stream in 4K 60fps.
When you take a photo on iPhone, the image is composed of up to 128 frames taken before and after you hit the shutter button. Those frames are aligned and stacked using dedicated image processing silicon which can do that stuff reasonably power efficiently while working at those data rates.
The stacking also allows things in the frame to move - it uses optical flow to track movement.
That's how the iPhone manages to take a tiny, very noisy sensor and give non-noisy output.
If you want to see how the camera performs without this stacking, try taking a photo from a moving car of something behind railings. The moving railings screw up the optical flow, so most of the 127 other frames will be discarded (if regions of an image aren't a close enough match, that region will be discarded), and you'll get to see all the sensor noise.
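A heavily simplified sketch of that merge-or-discard idea (real pipelines do tile alignment with optical flow in dedicated hardware; this just averages already-aligned frames and skips tiles that moved):

    import numpy as np

    # Average aligned frames, but skip any tile that differs too much from the
    # reference frame (the "moving railings" case); that tile stays noisier.
    def merge(reference, frames, tile=16, threshold=12.0):
        acc = reference.astype(np.float32)
        cnt = np.ones_like(acc)
        for frame in frames:
            f32 = frame.astype(np.float32)
            for y in range(0, acc.shape[0], tile):
                for x in range(0, acc.shape[1], tile):
                    ref_t = reference[y:y+tile, x:x+tile].astype(np.float32)
                    frm_t = f32[y:y+tile, x:x+tile]
                    if np.abs(frm_t - ref_t).mean() < threshold:
                        acc[y:y+tile, x:x+tile] += frm_t
                        cnt[y:y+tile, x:x+tile] += 1
        return (acc / cnt).astype(np.uint8)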
Sure I know, GCam pioneered this years ago. But all those frames are merged into one which is written as a correct scene-referred DNG with the usual dimensions and the CFA pattern, as if it was taken by the sensor directly. Which is what GCam and every other app do. They don't write gigabytes of RAWs for a single shot. (although some apps do have that option, within the hardware capabilities)
And all this has nothing to do with manual astrophotography, where you ditch computational apps, use a mount and shoot actual unstacked raw sensor frames with several seconds exposure each, interleaving them with dark frames, and do the computation yourself with the dedicated astrophoto software which is far better than any app if you calibrate the sequence and do the math correctly. Every phone is capable of this.
Yeah the RAW photos from third party apps are just a single frame, that's the point. It's the same thing as a RAW photo you'd get out of a standard digital camera.
On that subject, one of the recent Samsung models had a rather interesting feature: you could print a blurry picture of the moon, put it in a dark room, and the camera would "enhance" the image, adding detail it never had. I think smartphone cameras are reaching the limits of their potential; the only way forward now is ever-protruding camera bumps.
If you read further, at first they mostly talk about stacking, even though it has nothing to do with "AI", but then they suddenly mix it with synthetic ML techniques, which defeats the point. Confusing.
Nevertheless, you can do some proper astrophoto with smartphones without any AI, just like you would with any other camera. It won't be that good for obvious reasons, but it's possible and it will be better than using automated astrophoto modes in any app. Calibrate your lens and sensor, make a RAW stack with fixed exposure which is not too long, and some dark frames; use something like Sequator (or more sophisticated software) to process the stack.
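The core of what those tools do is simple enough to sketch (assuming the raws are already decoded to arrays and aligned; real software also handles alignment, flat frames, and hot pixels):

    import numpy as np

    # Build a master dark from frames shot with the lens covered (same
    # exposure/ISO), subtract it from each light frame, then average.
    def calibrate_and_stack(light_frames, dark_frames):
        master_dark = np.mean([d.astype(np.float32) for d in dark_frames], axis=0)
        calibrated = [f.astype(np.float32) - master_dark for f in light_frames]
        return np.clip(np.mean(calibrated, axis=0), 0, None)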
I've found it very useful for showing curious people (especially kids) that there's a whole universe up there mostly unseen; you can whip it out and a minute or two later they can see the Milky Way in at least a little of its splendor. For those who show interest, I follow up with our 6" backyard telescope to see the Moon's mountains, Jupiter's moons, Saturn's rings, and Mars's ice caps.
There's something different about experiencing it in person, even if the photos from a professional or a space telescope are empirically better.
On that front, there is also one called the Vaonis Hyperia that seems totally automatic, but perhaps we miss out on the challenge of amateur astronomy.
Yes, but the computational pipeline is designed and implemented by the end user to achieve a scientific result. Here, it’s a black box where the resulting image cannot be treated as raw data.
Yes exactly. The training set used for the famous black hole image was a database of images created using black hole simulations based on our best available models. This is pretty far away from these consumer ML models trained on publicly available images, which may as well be considered garbage from a scientific perspective.
You haven't been following the story, then, because Samsung phone cameras have been accused of doing exactly that: using a training set of publicly available images to "enhance" people's images by filling in details with a ML model.
Those people don't understand how anything works though so I don't care what they said. And how would they know if it used "publicly available images"? It's the moon, of course it's publicly available. It's in the sky.
You're focusing on the "publicly available" part but it's irrelevant to my point. Samsung's camera software is literally doctoring people's photos with fake data from its ML models. From the perspective of photographic integrity, it does not matter whether they scraped their training set off public photography sites or if they hired a team of photographers to build a dataset from scratch. To the user the effect is the same: it is not really an authentic photograph that is being created, it's a hybrid derived from (to the user) unknown sources.
There's a difference between ICC colour profiling (which is embedded in the RAW file and can be changed by the photographer without any degradation in image quality) and what Samsung is doing. The former can only affect the image colours globally (for example, allowing adjustment of white balance) whereas Samsung's changes are local, affecting image data in a profound way that can add hallucinated details which were not present in the original scene. One rather disturbing example is that the tool added teeth to baby photos!
White balance adjustments to camera raws don't generally use ICC profiles, they do whatever the combination of camera format and raw processor want.
Which can involve local adjustments, since you may want to process people differently from skies. So object recognition/segmentation models can definitely be involved, or what's essentially an upscaling model for better demosaicing. (That wouldn't be trained on public images though, unless they were the right format of camera raw.)
That is completely unrelated to astrophotography as a hobby, taking pictures of space objects from the surface of the earth. Like, totally and completely not-comparable in any way.
I like an old camera I have around here with a 50mm lens because the photos I take with it resemble what I see better than when I use my old smartphone.
E.g., the size of a windmill behind somebody can vary dramatically depending on this.
If I upgraded my phone, do the new smartphones enable you to take photos like my old 50mm camera, or will I still need my old camera to capture those shots?
50mm on a full-frame camera is called a "normal lens" precisely because for most people it mimics their internal perspective / worldview, so you're not alone :-)
Just look at any given phone's specs and see if any of the cameras it offers are near "50mm equivalent" (you may have to dig, because primary specs are usually useless: either the completely arbitrary "2x zoom" or the focal length on the specific sensor size, such as "4.38mm", which is true but not useful).
Most phones come with wide and ultra-wide lenses. I'm with you and don't like those. Other phones come with zoom lenses of various focal lengths. Most of those, on the other hand, will overshoot into 75mm-and-higher territory.
Worst case, grab a phone with a great lens and sensor and zoom it in the store until you get a 50mm equivalent... and see if that works.
Note, though, that there may be many other subtle and subconscious things you like about that camera besides the focal length of the lens: aperture, bokeh, vignetting, film type, specific aberrations, or even the feel in your hand and the mechanics of the shutter and the optical viewfinder. It's certainly more FUN, for me personally, to take a photo with a DSLR than an iPhone :-)
You basically have to zoom in on the camera app to get a 50mm-equivalent photo. This isn't as bad as it seems, though. The 2x quality is quite good on the baseline models these days.
However, on the "pro" iPhone models, they do have a "telephoto" lens that is indeed a 50mm equivalent that will get you the perspective that you're used to.
Here is the one on the iPhone 14 Pro -> 12MP 2x Telephoto (enabled by quad-pixel sensor): 48 mm
Personally, I use an iPhone 8 Plus and pretty much always take images at 2x zoom. Because yeah, everyone looks better on a longer lens.
I didn’t know they blended lenses to create a 50mm, there must be some post processing tricks to have it look right - are there examples side by side with a true 50mm?
It does. Focal length is just an alternate notation for FOV. A 2x crop on a “28mm” photo should geometrically match that of any 55mm f/1.2, except for blurriness (bokeh) and image quality.
Perspective is determined entirely by the position of the photographer relative to the subject. There will be no difference in perspective if you zoom in with the phone to get the same angle of view you get with a 50mm lens on a full frame camera.
> Changing the lens focal length without changing the camera position has no impact upon the perspective in the scene, even though it radically alters the size of objects within the scene
Radically altering the size of objects within the scene is what most average people mean by perspective. It’s exactly the effect the author of the article is claiming doesn’t happen yet he proved it does immediately.
Is that technically the correct usage of the term perspective? Yes! It’s just not the specific technical usage definition common in optics. But it is correct, as it’s referencing the relative size & position of objects.
The point is that it doesn't do anything that you couldn't achieve by cropping a photo taken with a wider lens from the same position. The relative size and position of the objects doesn't change.
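A tiny pinhole-model sketch of why that's true (the object sizes and distances are made up for illustration):

    # image size ~ focal_length * object_size / distance, so the *ratio* between
    # two objects' image sizes depends only on where you stand, never on focal length
    def image_size(object_size_m, distance_m, focal_mm):
        return focal_mm * object_size_m / distance_m

    person = (1.8, 3.0)      # 1.8m tall, 3m away
    windmill = (20.0, 50.0)  # 20m tall, 50m away
    for focal in (28, 50, 200):
        ratio = image_size(*windmill, focal) / image_size(*person, focal)
        print(focal, round(ratio, 3))  # 0.667 at every focal length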
I think there's a lot of misunderstanding here and a lot of optically correct comments are being down voted.
50mm is a focal length. If what you like about 50mm is indeed the perspective (as opposed to other subtle things that may creep like aperture/bokeh or aberration or vignetting of the specific lens) then zooming something to 50mm effective field of view is exactly what will accomplish the goal.
The 2x digital zoom is pretty good on the 14 Pro as it’s still got 12MP to work with. (I’m guessing that combining a stack of slightly misaligned exposures helps it to overcome the loss of color information you’d expect from the quad Bayer layout.)
The perspective is determined by where you’re standing in relation to the subject.
If you’re talking about wide angle ‘distortion’, then on an iPhone (with the default settings) that’s just what a rectilinear projection looks like with a wide field of view. Zooming in digitally ‘corrects’ that just as much as using a longer focal length would.
Your link doesn’t load for me. I’m talking about the contraction of depth when using focal lengths greater than 47mm and lengthening of it when using shorter focal lengths. I guess you can contract depth as well by walking backwards, but at this point your subject is so small you lost a lot of detail. Also, your subject may be gone by then.
Then you mean "depth of field" not "distorted perspective"
The perspective will be the same with a 24mm or a 200mm (or any focal length) as long as you stay in the same position relative to the subject. If you crop in the 24mm to get the 200mm field of view, you'll get the same image with a much lower resolution and a much wider depth of field (i.e. a less blurred background).
The perspective distortion happens if you physically move closer to your subject to get the same framing on the 24 as you had on the 200
The thing is though that the depth is contracted disproportionally much more than width and height, which results in a non-linear transformation of perceived dimensions. This causes lines to bend. At 47mm straight lines aren’t bent no matter the distance from whatever there is in the picture. At shorter lengths there are always regions in your photo which are distorted in this way. You can’t fix that by changing your position. The only thing you’ll do is change what object is bent and in so doing maybe make it less obvious.
I have a 28mm lens with a symmetrical design that shows no distortion.
I agree that, in general, wide-angle lenses are harder to design in terms of distortion control, but it's not a god-given rule that all lenses wider than 50mm have to show distortion.
That's lens distortion, and has nothing to do with depth. Phones will correct it in software automatically, and Lightroom etc can also do it for your DSLR lenses.
It's not usually a concern until you get to <20mm, and especially not if you're going to crop the center of the picture to mimic a 50mm perspective.
1:1 crops still look like a solarized mess. It’s nothing more than a feel-good gimmick (“oh wow, this phone in my pocket can do some sort of multi-syllable photography”).
Yeah, I kinda lost me at the end when he’s talking about AI generated “synthetic skies”.
I get it, a ton of photoshops will just drop in a new sky. But that’s a choice that a user made not a choice that the phone manufacturer made for them.
> Yeah, I kinda lost me at the end when he’s talking about AI generated “synthetic skies”.
For anyone who didn't read TFA, this is part of an in-story quote by Russell Brown (not the author) talking about the future of mobile photography. As a long-time Adobe employee he's parroting their messaging here, since Adobe's value-add is in "what comes after" photography.
Photoshop's new AI-powered Generative Fill is a good example of what Russell's thinking of when he talks about "computational AI" (a nonsensical name intended to one-up "computational photography"): https://www.youtube.com/watch?v=uR7j8r-LzQ8
I think the time isn't too far away when you don't even have to go out for taking pictures but you just tell AI to create a perfect image at a certain location with objects or people of your choice. That's maybe a good thing because a lot of people who just want Instagram shots won't bother going to the places anymore so it will be less crowded.
Reading this it seems their method of doing this is different than I expected, but maybe more disturbing:
"In the side-by-side above, I hope you can appreciate that Samsung is leveraging an AI model to put craters and other details on places which were just a blurry mess."
If you can get your hands on a Pixel 3 for cheap, that took simply incredible photos. Even better than my current iPhone 12 for most purposes (not all).