I present my top 10 tips for capturing time-lapses of the moving sky.
If you can take one well-exposed image of a nightscape, you can take 300. There’s little extra work required, just your time. But if you have the patience, the result can be an impressive time-lapse movie of the night sky sweeping over a scenic landscape. It’s that simple.
Or is it?
Here are my tips for taking time-lapses, in a series of “Do’s” and “Don’ts” that I’ve found effective for ensuring great results.
But before you attempt a time-lapse, be sure you can first capture well-exposed and sharply focused still shots. Shooting hundreds of frames for a time-lapse will be a disappointing waste of your time if all the images are dark and blurry.
For that reason many of my tips apply equally well to shooting still images. But taking time-lapses does require some specialized gear, techniques, planning, and software. First, the equipment.
NOTE: This article appeared originally in Issue #9 of Dark Sky Travels e-magazine.
TIP 1 — DO: Use a solid tripod
A lightweight travel tripod that might suffice for still images on the road will likely be insufficient for time-lapses. Not only does the camera have to remain rock steady for the length of the exposure, it has to do so for the length of the entire shoot, which could be several hours. Neither wind nor any camera handling you might need to do mid-shoot, such as swapping out a battery, can be allowed to move it.
The tripod needn’t be massive. For hiking into scenic sites you’ll want a lightweight but sturdy tripod. While a carbon fibre unit is costly, you’ll appreciate its low weight and good strength every night in the field. Similarly, don’t scrimp on the tripod head.
TIP 2 — DO: Use a fast lens
As with nightscape stills, the single best purchase you can make to improve your images of dark sky scenes is not buying a new camera (at least not at first), but buying a fast, wide-angle lens.
Ditch the slow kit zoom and go for at least an f/2.8, if not f/2, lens with 10mm to 24mm focal length. This becomes especially critical for time-lapses, as the fast aperture allows using short shutter speeds, which in turn allows capturing more frames in a given period of time. That makes for a smoother, slower time-lapse, and a shoot you can finish sooner if desired.
TIP 3 — DO: Use an intervalometer
Time-lapses demand an intervalometer to automatically fire the shutter for the 200 to 300 images a typical sequence requires. Many cameras have an intervalometer function built into their firmware. The shutter speed is set by using the camera in Manual mode.
Just be aware that a camera’s 15-second exposure really lasts 16 seconds, while a 30-second shot set in Manual is really a 32-second exposure.
So in setting the interval to provide one second between shots, as I advise below, you have to set the camera’s internal intervalometer for an interval of 17 seconds (for a shutter speed of 15 seconds) or 33 seconds (for a shutter speed of 30 seconds). It’s an odd quirk I’ve found true of every brand of camera I use or have tested.
Alternatively, you can set the camera to Bulb and then use an outboard hardware intervalometer (they sell for $60 on up) to control the exposure and fire the shutter. Test your unit. Its interval might need to be set to only one second, or to the exposure time + one second.
How intervalometers define “Interval” varies annoyingly from brand to brand. Setting the interval incorrectly can result in every other frame being missed and a ruined sequence.
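The interval arithmetic above can be sketched as follows. This is a hypothetical helper, not anything built into a camera; the actual-exposure figures are the quirk values noted above (a nominal 15-second exposure really runs 16 seconds, a 30-second exposure 32 seconds):

```python
# Nominal shutter speed (seconds) -> actual exposure length (seconds),
# per the quirk described above. Other speeds are assumed to run as set.
ACTUAL_EXPOSURE = {15: 16, 30: 32}

def internal_interval(nominal_shutter, gap=1):
    """Interval to program into the camera's built-in intervalometer
    so the shutter stays closed for `gap` seconds between frames."""
    real = ACTUAL_EXPOSURE.get(nominal_shutter, nominal_shutter)
    return real + gap

print(internal_interval(15))  # 17, for a 15-second shutter speed
print(internal_interval(30))  # 33, for a 30-second shutter speed
```

An outboard intervalometer set to Bulb may instead want just the gap, or the exposure plus the gap, depending on the brand; test yours.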
SETTING YOUR CAMERA
TIP 4 — DON’T: Underexpose
As with still images, the best way to beat noise is to give the camera signal. Use a wider aperture, a longer shutter speed, or a higher ISO (or all of the above) to ensure the image is well exposed with a histogram pushed to the right.
If you try to boost the image brightness later in processing you’ll introduce not only the very noise you were trying to avoid, but also odd artifacts in the shadows such as banding and purple discolouration.
With still images we have the option of taking shorter, untrailed images for the sky, and longer exposures for the dark ground to reveal details in the landscape, to composite later. With time-lapses we don’t have that luxury. Each and every frame has to capture the entire scene well.
At dark sky sites, expose for the dark ground as much as you can, even if that makes the sky overly bright. Unless you outright clip the highlights in the Milky Way or in light polluted horizon glows, you’ll be able to recover highlight details later in processing.
After poor focus, the single biggest mistake I see beginners make is underexposure, which results in overly noisy images.
TIP 5 — DON’T: Worry about 500 or “NPF” Exposure Rules
While still images might have to adhere to the “500 Rule” or the stricter “NPF Rule” to avoid star trailing, time-lapses are not so critical. Slight trailing of stars in each frame won’t be noticeable in the final movie when the stars are moving anyway.
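For reference, the classic "500 Rule" mentioned above is simple arithmetic: divide 500 by the effective (35mm-equivalent) focal length to get the longest exposure, in seconds, before stars trail noticeably. A minimal sketch, with `max_untrailed_exposure` as my own hypothetical name for it:

```python
def max_untrailed_exposure(focal_length_mm, crop_factor=1.0):
    """Rough longest exposure (seconds) before star trailing,
    per the 500 Rule: 500 / effective focal length."""
    return 500 / (focal_length_mm * crop_factor)

print(round(max_untrailed_exposure(24)))       # ~21 s with 24mm on full frame
print(round(max_untrailed_exposure(16, 1.5)))  # ~21 s with 16mm on a 1.5x crop
```

As the tip says, time-lapses can break this rule freely; it matters far more for stills.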
So go for rule-breaking, longer exposures if needed, for example if the aperture needs to be stopped down for increased depth of field and foreground focus. Again, with time-lapses we can’t shoot separate exposures for focus stacking later.
Just be aware that the longer each exposure is, the longer it will take to shoot 300 of them.
Why 300? I find 300 frames is a good number to aim for. When assembled into a movie at 30 frames per second (a typical frame rate) your 300-frame clip will last 10 seconds, a decent length of time in a final movie.
You can use a slower frame rate (24 fps works fine), but below 24 the movie will look jerky unless you employ advanced frame blending techniques. I do that for auroras.
How long it will take to acquire the needed 300 frames will depend on how long each exposure is and the interval between them. An app such as PhotoPills (via its Time lapse function) is handy in the field for calculating exposure time vs. frame count vs. shoot length, and providing a timer to let you know when the shoot is done.
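The calculation such an app performs can be sketched roughly as follows (`shoot_plan` is my own hypothetical helper, not PhotoPills' API):

```python
def shoot_plan(frames, exposure_s, gap_s=1, fps=30):
    """Return (shoot length in minutes, clip length in seconds)
    for a given frame count, exposure, interval gap, and frame rate."""
    shoot_minutes = frames * (exposure_s + gap_s) / 60
    clip_seconds = frames / fps
    return shoot_minutes, clip_seconds

minutes, clip = shoot_plan(frames=300, exposure_s=30, gap_s=1, fps=30)
print(f"{minutes:.0f}-minute shoot for a {clip:.0f}-second clip")
# 300 frames at 30 s + 1 s gap: 155-minute shoot for a 10-second clip
```

This makes the trade-off in Tip 2 concrete: halving the exposure with a faster lens roughly halves the shoot length for the same 300 frames.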
TIP 6 — DO: Use short intervals
At night, the interval between exposures should be no more than one or two seconds. By “interval,” I mean the time between when the shutter closes and when it opens again for the next frame.
Not all intervalometers define “Interval” that way, but it’s what you’d expect it to mean. If you use too long an interval, the stars will appear to jump across the sky, ruining the smooth motion you are after.
In practice, intervals of four to five seconds are sometimes needed to accommodate the movement of motorized “motion control” devices that turn or slide the camera between each shot. But I’m not covering the use of those advanced units here. I cover those options and much, much more in 400 pages of tips, techniques and tutorials in my Nightscapes ebook, linked to above.
However, during the day or in twilight, intervals can be, and indeed need to be, much longer than the exposures. It’s at night with stars in the sky that you want the shutter to be closed as little as possible.
TIP 7 — DO: Shoot Raw
This advice also applies to still images where shooting raw files is essential for professional results. But you likely knew that.
However, with time-lapses some cameras offer a mode that will shoot time-lapse frames and assemble them into a movie right in the camera. Don’t use it. It gives you a finished, pre-baked movie with no ability to process each frame later, an essential step for good night time-lapses. And raw files provide the most data to work with.
So even with time-lapses, shoot raw not JPGs.
If you are confident the frames will be used only for a time-lapse, you might choose to shoot in a smaller S-Raw or compressed C-Raw mode, for smaller files, in order to fit more frames onto a card.
But I prefer not to shrink or compress the original raw files in the camera, as some of them might make for an excellent stacked and layered still image where I want the best quality originals (such as for the ISS over Waterton Lakes example above).
To get through a long field shoot away from your computer, buy more and larger memory cards. You don’t need costly, superfast cards for most time-lapse work.
PLANNING AND COMPOSITION
TIP 8 — DO: Use planning apps to frame
All nightscape photography benefits from using one of the excellent apps we now have to assist us in planning a shoot. They are particularly useful for time-lapses.
Apps such as PhotoPills and The Photographer’s Ephemeris are great. I like the latter as it links to its companion TPE 3D app to preview what the sky and lighting will look like over the actual topographic horizon from your site. You can scrub through time to see the motion of the Milky Way over the scenery. The Augmented Reality “AR” modes of these apps are also useful, but only once you are on site during the day.
For planning a time-lapse at home I always turn to a “planetarium” program to simulate the motion of the sky (albeit over a generic landscape), with the ability to add in “field of view” indicators to show the view your lens will capture.
You can step ahead in time to see how the sky will move across your camera frame during the length of the shoot. Indeed, such simulations help you plan how long the shoot needs to last until, for example, the galactic core or Orion sets.
Planetarium software helps ensure you frame the scene properly, not only for the beginning of the shoot (that’s easy — you can see that!), but also for the end of the shoot, which you can only predict.
If your shoot will last as long as three hours, do plan to check the battery level and swap batteries before three hours is up. Most cameras, even new mirrorless models, will now last for three hours on a full battery, but likely not any longer. If it’s a cold winter night, expect only one or two hours of life from a single battery.
TIP 9 — DO: Develop one raw frame and apply settings to all
Processing the raw files takes the same steps and settings as you would use to process still images.
With time-lapses, however, you have to do all the processing required within your favourite raw developer software. You can’t count on bringing multiple exposures into a layer-based processor such as Photoshop to stack and blend images. That works for a single image, but not for 300.
I use Adobe Camera Raw out of Adobe Bridge to do all my time-lapse processing. But many photographers use Lightroom, which offers all the same settings and non-destructive functions as Adobe Camera Raw.
For those who wish to “avoid Adobe” there are other choices, but for time-lapse work an essential feature is the ability to develop one frame, then copy and paste its settings (or “sync” settings) to all the other frames in the set.
Not all programs allow that. Affinity Photo does not. Luminar doesn’t do it very well. DxO PhotoLab, ON1 Photo RAW, and the free Raw Therapee, among others, all work fine.
HOW TO ASSEMBLE A TIME-LAPSE
Once you have a set of raws all developed, the usual workflow is to export all those frames as high-quality JPGs, which is what movie assembly programs need. Your raw developing software has to allow batch exporting to JPGs; most do.
However, none of the programs above (except Photoshop and Adobe’s After Effects) will create the final movie, whether it be from those JPGs or from the raws.
So for assembling the intermediate JPGs into a movie, I often use a low-cost program called TLDF (TimeLapse DeFlicker) available for MacOS and Windows (timelapsedeflicker.com). It offers advanced functions such as deflickering (i.e. smoothing slight frame-to-frame brightness fluctuations) and frame blending (useful to smooth aurora motions or to purposely add star trails).
While there are many choices for time-lapse assembly, I suggest using a program dedicated to the task and not, as many do, a movie editing program. For most sequences, the latter makes assembly unnecessarily difficult and makes key parameters such as frame rate harder to set.
TIP 10 — DO: Try LRTimelapse for more advanced processing
Get serious about time-lapse shooting and you will want — indeed, you will need — the program LRTimelapse (LRTimelapse.com). A free but limited trial version is available.
This powerful program is for sequences where one setting will not work for all the frames. One size does not fit all.
Instead, LRTimelapse allows you to process a few keyframes throughout a sequence, say at the start, middle, and end. It then interpolates all the settings between those keyframes to automatically process the entire set of images to smooth (or “ramp”) and deflicker the transitions from frame to frame.
This is essential for sequences where the lighting changes during the shoot (say, the Moon rises or sets), and for so-called “holy grails.” Those are advanced sequences that track from daylight or twilight to darkness, or vice versa, over a wide range of camera settings.
However, LRTimelapse works only with Adobe Lightroom or the Adobe Camera Raw/Bridge combination. So for advanced time-lapse work Adobe software is essential.
A Final Bonus Tip
Keep it simple. You might aspire to emulate the advanced sequences you see on the web, where the camera pans and dollies during the movie. I suggest avoiding complex motion control gear at first to concentrate on getting well-exposed time-lapses with just a static camera. That alone is a rewarding achievement.
But before that, first learn to shoot still images successfully. All the settings and skills you need for a great looking still image are needed for a time-lapse. Then move on to capturing the moving sky.
I end with a link to an example music video, shot using the techniques I’ve outlined. Thanks for reading and watching. Clear skies!
The Beauty of the Milky Way from Alan Dyer on Vimeo.
I had the chance to test out an early sample of Canon’s new EOS Ra camera designed for deep-sky photography.
Once every seven years astrophotographers have reason to celebrate, when Canon introduces one of their “a” cameras, astronomical variants optimized for deep-sky objects, notably red nebulas.
In 2005 Canon introduced the ground-breaking 8-megapixel 20Da, the first DSLR to feature Live View for focusing. Seven years later, in 2012, Canon released the 18-megapixel 60Da, a camera I still use and love.
Both cameras were cropped-frame DSLRs.
Now in 2019, seven years after the 60Da, we have the newly-released EOS Ra, the astrophoto version of the 30-megapixel EOS R released in late 2018. The EOS R is a full-frame mirrorless camera with a sensor similar to what’s in Canon’s 5D MkIV DSLR.
Here, I present a selection of sample images taken with the new EOS Ra.
Both versions of the EOS R have identical functions and menus.
The big difference is that the EOS Ra, as did Canon’s earlier “a” models, has a factory-installed filter in front of the sensor that transmits more of the deep red “hydrogen-alpha” wavelength emitted by glowing nebulas.
Normal cameras suppress much of this deep-red light as a by-product of their filters cutting out the infra-red light that digital sensors are very sensitive to, but that would not focus well.
I was sent an early sample of the EOS Ra, and earlier this autumn also had a sample of the stock EOS R.
Both were sent for testing so I could prepare a report for Sky and Telescope magazine. The full test report, appearing in an upcoming issue, will cover:
• How the Ra compares to previous “a” models and third-party filter-modified cameras
• How the Ra works for normal daylight photography
• Noise levels compared to other cameras
• Features unique to the EOS Ra, such as 30x Live View focusing
UPDATE — November 25, 2019
As part of further testing I shot the Heart and Soul Nebulas in Cassiopeia through my little Borg 77mm f/4 astrograph with both the EOS Ra and my filter-modified 5D MkII (modified years ago by AstroHutech) to compare which pulled in more nebulosity. It looked like a draw.
Both images are single 8-minute exposures, taken minutes apart and developed identically in Adobe Camera Raw, but adjusted for colour balance to equally neutralize the sky background. The histograms look similar. Even so, the Ra looks a little redder overall. But keep in mind a sky or nebula can be made to appear any shade of red you like in processing.
The question is which camera shows more faint nebulosity?
The modified 5D MkII has always been my favourite camera for this type of astrophotography, picking up more nebulosity than other “a” models I’ve tested, including the Nikon D810a.
But in this case, I’d say the EOS Ra is performing as well as, if not better than, the 5D MkII. How well any third-party modified camera you buy now performs will depend on which, if any, filter the modifier installs in front of the sensor. So your mileage will vary.
For most of my other testing I shot through my much-prized Astro-Physics Traveler, a 105mm aperture f/6 apochromatic refractor on the Astro-Physics Mach1 mount.
To connect the EOS Ra (with its new RF lens mount) to my existing telescope-to-camera adapter and field flattener lens I used one of Canon’s EF-EOS R lens adapters.
The bottom line is that the EOS Ra works great!
It performs very well on H-alpha-rich nebulas and has very low noise. It will be well-suited to not only deep-sky photography but also to wide-field nightscape and time-lapse photography, perhaps as Canon’s best camera yet for those applications.
WHAT ABOUT THE PRICE?
The EOS Ra will sell for $2,500 US, a $700 premium over the cost of the stock EOS R. Some complain. Of course, if you don’t like it, you don’t have to buy it. This is not an upgrade being forced upon you.
As I look at it, it is all relative. When Nikon’s astronomy DSLR, the 36 Mp D810a, came out in 2015 it sold for $3,800 US, $1,300 more than the EOS Ra. It was, and remains, a fine camera, if you can find one. It is discontinued.
A 36 Mp cooled and dedicated CMOS astro camera, the QHY367, with the same chip as the D810a, goes for $4,400, $1,900 more than the Ra. Yes, I’m sure it will produce better images than the EOS Ra, but deep-sky imaging is all it can do, at a cost in both dollars and ease of use.
And yes, buying a stock EOS R and having it modified by a third party costs less, and you’ll certainly get a good camera, for $300 to $400 less than an Ra. But …
• The EOS Ra has a factory-adjusted white balance for ease of “normal” use, so there’s no need to buy correction filters. That’s a saving, and in any case clip-in correction filters aren’t even available for the EOS R.
• And the Ra retains the sensor dust cleaning function. Camera modifier companies remove it or charge more to reinstall it.
• And the 30x live view magnification is very nice.
• The EOS Ra also carries a full factory warranty.
Do I wish the EOS Ra had some other key features? Sure. A mode to turn all menus red would be nice. As would a built-in intervalometer that works with the Bulb Timer to allow sequences of programmed multi-minute exposures. Both could be added with a firmware update.
And providing a basic EF-EOS R lens adapter in the price would be a welcome plus, as one is essential to use the EOS Ra on a telescope.
That’s my take on it. I’ll be buying one. But then again, I bought the 20Da (twice!) and the 60Da, and I hate to think what I paid for those much less capable cameras.
BONUS TEST — The RF 15-35mm L Lens
Canon is also releasing an impressive series of top-class RF lenses for their R mirrorless cameras. The image below is an example astrophoto with the new RF 15-35mm f/2.8 L zoom lens, an ideal combination of focal lengths and speed for nightscape shooting.
Below is a further set of stacked and processed images with the RF 15-35mm L lens, taken in quick succession, at 15mm, 24mm, and 35mm focal lengths, all shot wide open at f/2.8. The EOS Ra was on the Star Adventurer tracker (as below) to follow the stars.
Click or tap on the images below to view a full-resolution version for closer inspection.
The RF 15-35mm lens performs extremely well at 15mm, exhibiting very little off-axis aberration at the corners.
Off-axis aberrations do increase at the longer focal lengths but are still very well controlled, and are much less than I’ve seen on my older zoom and prime lenses in this focal length range.
The RF 15-35mm is a great complement to the EOS Ra for wide-field Milky Way images.
I was impressed with the new EOS Ra. It performs superbly for astrophotography.
Panoramas featuring the arch of the Milky Way have become the icons of dark sky locations. “Panos” can be easy to shoot, but stitching them together can present challenges. Here are my tips and techniques.
My tutorial complements the much more extensive information I provide in my eBook, at right. Here, I’ll step through techniques for simple to more complex panoramas, dealing first with essential shooting methods, then reviewing the workflows I use for processing and stitching panoramas.
What software works best depends on the number of segments in your panorama, or even on the focal length of the lens you used.
PART 1 — SHOOTING
What Equipment Do You Need?
Nightscape panoramas don’t require any more equipment than what you likely already own for shooting the night sky. For Milky Way scenes you need a fast lens and a solid tripod, but any good DSLR or mirrorless camera will suffice.
The tripod head can be either a ball head or a three-axis head, but it should have a horizontal axis marked with a degree scale. This allows you to move the camera at a correct and consistent angle from segment to segment. I think that’s essential.
What you don’t need is a special, and often costly, panorama head. These rotate the camera around the so-called “nodal point” inside the lens, avoiding parallax shifts that can make it difficult to align and stitch adjacent frames. Parallax shift is certainly a concern when shooting interiors or any scenes with prominent content close to the camera. However, in most nightscapes our scene content is far enough away that parallax simply isn’t an issue.
Though not a necessity, I find a levelling base a huge convenience. As I show above, this specialized ball head goes under the usual tripod head and makes it easy to level the main head. It eliminates all the fussing with trial-and-error adjustments of the length of each tripod leg.
Then to level the camera itself, I use the electronic level now in most cameras. Or, if your camera lacks that feature, an accessory bubble level clipped into the camera’s hot shoe will work.
Having the camera level is critical. It can be tipped up, of course, but not tilted left-right. If it isn’t level the whole panorama will be off kilter, requiring excessive straightening and cropping in processing, or the horizon will wave up and down in the final stitch, perhaps causing parts of the scene to go missing.
NOTE: Click or tap on the panorama images to open a high-res version for closer inspection.
Shooting Horizon Panoramas
While panoramas spanning the entire sky might be what you are after, I suggest starting simpler, with panos that take in just a portion of the 360° horizon and only a part of the 180° of the sky. These “partial panos” are great for auroras (above) or noctilucent clouds, (below), or for capturing just the core of the Milky Way over a landscape.
The key to all panorama success is overlap. Segments should overlap by 30 to 50 percent, enabling the stitching software to align the segments using the content common to adjacent frames. Contrary to what some users report, I’ve never found an issue with having too much overlap, where the same content is present on several frames.
For a practical example, let’s say you shoot with a 24mm lens on a full-frame camera, or a 16mm lens on a cropped-frame camera. Both combinations yield a field of view across the long dimension of the frame of roughly 80°, and across the short dimension of the frame of about 55°.
That means if you shoot with the camera in “landscape” orientation, panning the camera by 40° between segments would provide a generous 50 percent overlap. The left half of each segment will contain the same content as the right half of the previous segment, if you take your panos by turning from left to right.
TIP: My habit is to always shoot from left to right, as that puts the segments in the correct order adjacent to each other when I view them in browser programs such as Lightroom or Adobe Bridge, with images sorted in chronological order (from first to last images in a set) as I typically prefer. But the stitching will work no matter which direction you rotate the camera.
In the example of a 24mm lens and a camera in landscape orientation you could turn at a 45° or 50° spacing and yield enough overlap. However, turning the camera at multiples of 15° is usually the most convenient, as tripod heads are often graduated with markings at 5° increments, and labeled every 15° or 30°.
Some will have coarser and perhaps unlabeled markings. If so, determine what each increment represents, then take care to move the camera consistently by the amount that will provide adequate overlap.
To maximize the coverage of the sky while still framing a good amount of foreground, a common practice is to shoot panoramas with the camera in portrait orientation. That provides more vertical but less horizontal coverage for each frame. In that case, for adequate overlap with a 24mm lens and full-frame camera shoot at 30° spacings.
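The overlap arithmetic in the examples above can be sketched as a small helper (`pano_plan` is my own hypothetical name; the fields of view are the approximate values quoted for a 24mm lens on full frame):

```python
import math

def pano_plan(fov_deg, spacing_deg):
    """Return (percent overlap between adjacent segments,
    segment count for a full 360-degree panorama)."""
    overlap = (fov_deg - spacing_deg) / fov_deg * 100
    segments_360 = math.ceil(360 / spacing_deg)
    return overlap, segments_360

# Landscape orientation: ~80 deg across the frame, panning 40 deg
print(pano_plan(80, 40))  # 50% overlap, 9 segments for a full circle
# Portrait orientation: ~55 deg across the frame, panning 30 deg
print(pano_plan(55, 30))  # ~45% overlap, 12 segments for a full circle
```

The same arithmetic explains the 360° examples later on: wider spacings mean fewer segments but thinner overlap for the stitcher to work with.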
TIP: When shooting a partial panorama, for example just to the south for the Milky Way, or to the north for the aurora borealis, my practice is to always shoot a segment farther to the left and another to the right of the main scene. Shoot more than you need. Those end segments can get distorted when stitching, but if they don’t contain essential content, they can be cropped out with no loss, leaving your main scene clean and undistorted.
Shooting with a longer lens, such as a 50mm (or 35mm on a cropped frame camera), will yield higher resolution in the final panorama, but you will have much less sky coverage, unless you shoot multiple tiers, as I describe below. You would also have to shoot more segments, at 15° to 20° spacings, taking longer to complete the shoot.
As the number of segments goes up shooting fast becomes more important, to minimize how much the sky moves from segment to segment, and during each exposure itself, to aid in stitching. Remember, the sky appears to be turning from east to west, but the ground isn’t. So a prolonged shoot can cause problems later as the stitching software tries to align on either the fixed ground or the moving stars.
Panoramas on moonlit nights, as I show above, are relatively easy because exposures are short.
Milky Way panoramas taken on dark, moonless nights are tougher. They require fast apertures (f/2 to f/2.8) and high ISOs (ISO 3200 to 6400), to keep individual exposures no more than 30 to 40 seconds long.
Noise lives in the dark foregrounds, so I find it best to err on the side of overexposure, to ensure adequate exposure for the ground, even if it means the sky is bright and the stars slightly trailed. It’s the “Expose to the Right” philosophy I espouse at length in my eBook.
Advanced users can try shooting in two passes: one at a low ISO and with a long exposure for the fixed ground, and another pass at a higher ISO and a shorter exposure for the moving sky. But assembling such a set will take some deft work in Photoshop to align and mask the two stitched panos. None of the examples here are “double exposures.”
Shooting 360° Panoramas
More demanding than partial panoramas are full 360° panoramas, as above. Here I find it is best to start the sequence with the camera aimed toward the celestial pole (to the north in the northern hemisphere, or to the south in the southern hemisphere). That places the area of sky that moves the least over time at the two ends of the panorama, again making it easier for software to align segments, with the two ends taken farthest apart in time meeting up in space.
In our 24mm lens example, to cover the entire 360° scene shooting with a 45° spacing would require at least eight images (8 x 45 = 360). I used 10 above. Using that same lens with the camera in portrait orientation will require at least 12 segments to cover the entire 360° landscape.
Shooting 360° by 180° Panoramas
More demanding still are 360° panoramas that encompass the entire sky, from the ground below the horizon to the zenith overhead. Above is an example.
To do that with a single row of images requires shooting in portrait orientation with a very wide 14mm rectilinear lens on a full-frame camera. That combination has a field of view of about 100° across the long dimension of the sensor.
That sounds generous, but reaching up to the zenith at an altitude of 90° means only a small portion of the landscape will be included along the bottom of the frame.
To provide an even wider field of view to take in more ground, I use full-frame fish-eye lenses on my full-frame cameras, such as Canon’s old 15mm lens (as shown at top) or Rokinon’s 12mm. Even a circular-format fish-eye will work, such as an 8mm on a full-frame camera or 4.5mm on a cropped-frame camera.
All such fish-eye lenses produce curved horizons, but they take in a wide swath of sky, making it possible to include lots of foreground while reaching well past the zenith. Conventional panorama assembly programs won’t work with such wide and distorted segments, but the specialized programs described below will.
Shooting Multi-Tier Panoramas
The alternative technique for “all-sky” panos is to shoot multiple tiers of images: first, a lower row covering the ground and partway up the sky, followed by an upper row completing the coverage of just the sky at top.
The trick is to ensure adequate overlap both horizontally and vertically. With the camera in landscape orientation that will require a 20mm lens for full-frame cameras, or a 14mm lens for cropped-frame cameras. Either combination can cover the entire sky plus lots of foreground in two tiers, though I usually shoot three, just to be sure!
Shooting with longer lenses provides incredible resolution for billboard-sized “gigapan” blow-ups, but will require shooting three, if not more, tiers, each with many segments. That starts to become a chore to do manually. Some motorized assistance really helps when shooting multi-tier panoramas.
Automating the Pan Shooting
The dedicated pano shooter might want to look at a device such as the GigaPan Epic models or the iOptron iPano (shown below), all about $800 to $1,000.
I’ve tested the latter and it works great. You program in the lens, overlap, and angular sweep desired. The iPano works out how many segments and tiers will be required, and automates the shooting, firing the shutter for the duration you program, then moving to the new position, firing again, and so on. I’ve shot four-tier panos effortlessly and with great success.
However, these devices are generally bigger and heavier than I care to heft around in the field.
Instead, I use the original Genie Mini from SYRP (below), a $250 device primarily for shooting motion control time-lapses. But the wireless app that programs the Genie also has a panorama function that automatically slews the camera horizontally between exposures, again based on the lens, overlap, and angular sweep you enter. The just-introduced Genie Mini II is similar, but with even more capabilities for camera control.
While combining two Genie Minis allows programming a vertical motion as well, I’ve been using just a regular tripod head atop the Mini to manually move the camera vertically between each of the horizontal tiers. I don’t find the one or two moves needed to go from tier to tier too arduous to do manually, and I like to keep my field gear compact and easy to use.
The Genie Mini (now replaced by the Mini II) works great and I highly recommend it, even if panoramas are your only interest. But it is also one of the best, yet most affordable, single-axis motion control devices on the market for time-lapse work.
When to Shoot the Milky Way
While the right gear and techniques are important, go out on the wrong night and you won’t be able to capture the Milky Way as the great sweeping arch you might have hoped for.
In the northern hemisphere the Milky Way arches directly overhead from late July to October for most of the night. That’s fine for spherical fish-eye panoramas, but in rectangular images when the Milky Way is overhead it gets stretched and distorted across the top of the final panorama. For example, in the Bow Lake by Night panorama above, I cropped out most of this distorted content.
The prime season for Milky Way arches is therefore before the Milky Way climbs overhead, while it is still across the eastern sky, as above. That’s on moonless nights from March to early July, with May and June best for catching it in the evening, and not having to wait up until dawn, as is the case in early spring.
TIP: The best way to figure out when and where the Milky Way will appear is to use a desktop planetarium program such as Starry Night or Sky Safari or the free Stellarium. All can realistically depict the Milky Way for your location and date. You can then step through time to see how the Milky Way will move through the night, and how it will frame with your camera and lens combination using the “field of view” indicators the programs provide.
When shooting in the southern hemisphere I like the April to June period for catching the sweep of the southern Milky Way and the galactic core rising in late evening. By contrast, during mid austral winter in July and August the galactic centre shines directly overhead in the evening, a spectacular sight to be sure, but tough to capture in a panorama except in a spherical or fish-eye scene.
That said, I always like to put in a good word for the often sadly neglected winter Milky Way (the summer Milky Way for those “down under”). While lacking the spectacle of the galactic core in Sagittarius, the “other” Milky Way has its attractions such as Orion and Taurus. The best months for a panorama with that Milky Way in an arch across a rectangular frame are January to March. The Zodiacal Light can be a bonus at that season, as it was above.
TIP: Always shoot raw files for the widest dynamic range and flexibility in recovering details in the highlights and shadows. Even so, each segment has to be well exposed and focused out in the field.
And unless you are doing a “two-pass” double exposure, always shoot each segment with identical exposure settings. This is especially critical for bright sky scenes such as twilights or moonlit scenes. Vary the exposure and you might get unsightly banding at the seams.
There’s nothing worse than getting home only to find one or more segments was missed, or was out of focus or badly exposed, spoiling the set.
PART 2 — STITCHING
Developing Panorama Segments
Once you have your panorama segments, the next step is to develop and assemble them. My workflow begins with developing each of those segments identically.
NOTE: Click or tap on the software screen shots to open a high-res version for closer inspection.
I like to develop each segment’s raw file as fully as possible at this first stage in the workflow, applying noise reduction, colour correction, contrast adjustments, shadow and highlight recovery, and any special settings such as dehaze and clarity that can make the Milky Way pop.
I also apply lens corrections to each raw image. While some feel doing so produces problems with stitching later on, I’ve never found that. I prefer to have each frame with minimal vignetting and distortion when going into stitching. I use Adobe Camera Raw out of Adobe Bridge, but Lightroom Classic has identical functions.
There are several other raw developers that can work well at this stage. In other tests I’ve conducted, Capture One and DxO PhotoLab stand out as producing good results on nightscapes. See my blog from 2017 for more on software choices.
The key is developing each raw file identically, usually by working on one segment, then copying and pasting its settings to all the others in a set. Not all raw developers have this “Copy Settings” function. For example, Affinity Photo does not. It works very well as a layer-based editor to replace Photoshop, but is crude in its raw developing “Persona” functions.
While panorama stitching software will apply corrections to smooth out image-to-image variations, I find it is best to ensure all the segments look as similar as possible at the raw stage for brightness, contrast, and colour correction.
Do be aware that among social media groups and chat rooms devoted to nightscape imaging a lot of myth and misinformation abounds about how to process and stitch panoramas, and why some don’t work. Someone having a problem with a particular pano will ask why, and get ten different answers from well-meaning helpers, most of them wrong!
Stitching Simple Panoramas
For example, if your segments don’t join well it likely isn’t because you needed to use a panorama head (one oft-heard bit of advice). I never do. The issue is usually a lack of sufficient overlap. Or perhaps the image content moved too much from frame to frame as the photographer took too long to shoot the set.
Or, even when quickly-shot segments do have lots of overlap, stitching software can still get confused if adjoining segments contain featureless content or content that changes, such as segments over rippling water with no identifiable “landmarks” for the software to latch onto.
The primary problems, however, arise from using software that just isn’t up to the task. Programs that work great on simple panoramas (as the next three examples show) will fail when trying to stitch a more demanding set of segments.
For example, for partial horizon panos shot with 20mm to 50mm lenses, I’ll use the panorama function now built into Adobe Camera Raw (ACR) and Adobe Lightroom Classic, and also in the mobile-friendly Lightroom app. As I show above, ACR can do a wonderful job, yielding a raw DNG file that can continue to be edited non-destructively. It’s by far the easiest and fastest option, and is my first choice.
Another choice, not shown here, is the Photomerge function from within Photoshop, which yields a layered and masked master file, and provides the option for “content-aware” filling of missing areas. It can sometimes work on panos that ACR balks at.
Two programs popular as Adobe alternatives, ON1 PhotoRAW (above) and the aforementioned Affinity Photo (below), also have very capable panorama stitching functions.
However, in testing both programs with the demanding Bow Lake multi-tier panorama I used below with other programs, ON1 2019.5 did an acceptable job, while Affinity 1.7 failed. It works best on simpler panoramas, like this partial scene with a 24mm lens.
Even if they succeed when stitching 360° panoramas, such general-purpose editing programs, Adobe’s included, provide no option for choosing how the final scene gets framed. You have no control over where the program puts the ends of the scene.
Or the program just fails, producing a result like this.
Far worse is that multi-tier panoramas or, as I show above, even single-tier panos shot with very wide lenses, will often completely befuddle your favourite editing software, with it either refusing to perform the stitch or producing bizarre results.
Some photographers attempt to correct such wild distortions with lots of ad hoc adjustments with image-warping filters. But that’s completely unnecessary if you use the right software to begin with.
Stitching Complex Panoramas
When conventional software fails, I turn to the dedicated stitching program PTGui, $150 for MacOS or Windows. The name comes from “Panorama Tools – Graphical User Interface.”
While PTGui can read raw files from most cameras, it will not read any of the development adjustments you made to those files using Lightroom, Camera Raw, or any other raw developers.
So, my workflow is to develop all the raw segments, export them out as 16-bit TIFFs, then import those into PTGui. It can detect what lens was used to take the images, information PTGui needs to stitch accurately. If you used a manual lens you can enter the lens focal length and type (rectilinear or fish-eye) yourself.
I include a full tutorial on using PTGui in my eBook linked to above, but suffice to say that the program usually does a superb job first time and very quickly. You can drag the panorama around to frame the scene as you like, and change the projection at will to create rectangular or spherical format images, as above, and even so-called “little planet” projections that appear as if you were looking down at the scene from space.
Occasionally PTGui complains about some frames, requiring you to manually intervene to pick the same stars or horizon features in adjacent frames to provide enough matching alignment points until it is happy. Its interface also leaves something to be desired, with essential floating windows disappearing behind other mostly blank panels.
When exporting the finished panorama I usually choose to export it as a layered 16-bit Photoshop .PSD or, with big panos, as a Photoshop .PSB “big” document.
The reason is that in aligning the moving stars PTGui (indeed, all programs) can produce a few “fault lines” along the horizon, requiring a manual touch up to the masks to clean up mismatched horizon content, as I show above. Having a layered and masked master makes this easy to do non-destructively, though that’s best done in Photoshop.
However, Affinity Photo (above) can also read layered .PSD and .PSB Photoshop files, preserving the layers. By comparison, ON1 PhotoRAW flattens layered Photoshop files when it imports them, one deficiency that prevents this program from being a true Photoshop alternative.
Once a 360° panorama is in a program like Photoshop, some photographers like to “squish” the panorama horizontally to make it more square, for ease of printing and publication. I prefer not to do that, as it makes the Milky Way look overly tall, distorted, and in my opinion, ugly. But each to their own style.
You can test out a limited trial version of PTGui for free, but I think it is worth the cost as an essential tool for panorama devotees.
Other Stitching Options
Windows users can also try Image Composite Editor (ICE), free from Microsoft Research. As shown above in my test 3-tier pano, ICE works very well on complex panoramas, has a clean, user-friendly interface, offers a choice of geometric projections, and can export a master file with each segment on its own layer, if desired, for later editing.
The free, open-source program Hugin is based on the same Panorama Tools root software that PTGui uses. However, I find Hugin’s operation clunky and overly technical. Its export process is arcane, yet renders out only a flattened image.
In testing it with the same three-tier, 21-segment pano that PTGui and ICE handled perfectly, Hugin failed to properly include one segment. However, it is free for MacOS and Windows, so the price is right and it is well worth a try.
With the superb tools now at our disposal, it is possible to create detailed panoramas of the night sky that convey the majesty of the Milky Way – and the night sky – as no single image can. Have fun!
I put the new Nikon Z6 mirrorless camera through its paces for astrophotography.
Following Sony’s lead, in late 2018 both Nikon and Canon released their entries to the full-frame mirrorless camera market.
Here I review one of Nikon’s new mirrorless models, the Z6, tested solely with astrophotography in mind. I did not test any of the auto-exposure, auto-focus, image stabilization, nor rapid-fire continuous mode features.
• Current owners of Nikon cropped-frame cameras wanting to upgrade to full-frame would do well to consider a Z6 over any current Nikon DSLR.
• Anyone wanting a full-frame camera for astrophotography and happy to “go Nikon” will find the Z6 nearly perfect for their needs.
Nikon Z6 vs. Z7
I opted to test the Z6 over the more expensive Z7, as the 24-megapixel Z6 has 6-micron pixels resulting in lower noise (according to independent tests) than the 46 megapixel Z7 with its 4.4 micron pixels.
In astrophotography, I feel low noise is critical, with 24-megapixel cameras hitting a sweet spot of noise vs. resolution.
However, if the higher resolution of the Z7 is important for your daytime photography needs, then I’m sure it will work well at night. The Nikon D850 DSLR, with a sensor similar to the Z7, has been proven by others to be a good astrophotography camera, albeit with higher noise than the lesser megapixel Nikons such as the D750 and Z6.
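The pixel-pitch figures quoted above follow directly from sensor width and pixel count. A quick sketch, assuming a full-frame sensor width of roughly 36 mm and the cameras' published horizontal pixel counts:

```python
def pixel_pitch_um(sensor_width_mm, pixels_across):
    """Approximate pixel pitch: sensor width divided by the
    horizontal pixel count, converted to microns."""
    return sensor_width_mm * 1000 / pixels_across

# assuming a ~36 mm wide full-frame sensor
print(round(pixel_pitch_um(36, 6048), 1))  # Z6: ~6.0 microns
print(round(pixel_pitch_um(36, 8256), 1))  # Z7: ~4.4 microns
```

Bigger pixels collect more photons each, which is why, all else being equal, the lower-resolution sensor shows less noise.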
NOTE: Tap or click on images to download and display them full screen for closer inspection.
High ISO Noise
To test noise in a real-world situation, I shot a dark nightscape scene with the three cameras, using a 24mm Sigma Art lens on the two Nikons, and a 24mm Canon lens on the Sony via a MetaBones adapter. I shot at ISOs from 800 to 12,800, typical of what we use in nightscapes and deep-sky images.
The comparison set above shows performance at the higher ISOs of 3200 to 12,800. I saw very little difference among the trio, with the Nikon Z6 very similar to the Sony a7III, and with the four-year-old Nikon D750 holding up very well against the two new cameras.
The comparison below shows the three cameras on another night and at ISO 3200.
Both the Nikon Z6 and Sony a7III use a backside-illuminated or “BSI” sensor, which in theory promises to provide lower noise than the conventional CMOS sensor used in an older camera such as the D750.
In practice I didn’t see a marked difference, certainly not as much as the one- or even 1/2-stop improvement in noise I might have expected or hoped for.
Nevertheless, the Nikon Z6 provides as low a noise level as you’ll find in a camera offering 24 megapixels, and will perform very well for all forms of astrophotography.
Nikon and Sony both employ an “ISO-invariant” signal flow in their sensor design. You can purposely underexpose by shooting at a lower ISO, then boost the exposure later “in post” and end up with a result similar to an image shot at a high ISO to begin with in the camera.
I find this feature proves its worth when shooting Milky Way nightscapes that often have well-exposed skies but dark foregrounds lit only by starlight. Boosting the brightness of the landscape when developing the raw files reveals details in the scene without unduly introducing noise, banding, or other artifacts such as magenta tints.
That’s not true of “ISO variant” sensors, such as in most Canon cameras. Such sensors are far less tolerant of underexposure and are prone to noise, banding, and discolouration in the brightened shadows.
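The ISO-invariance idea can be sketched numerically. This is a simplified linear-signal model, not anything the camera actually computes:

```python
def pushed_exposure(iso, stops):
    """With an ISO-invariant sensor, shooting at a low ISO and pushing
    the raw file N stops in post approximates shooting at iso * 2**N
    in camera: the linear signal is simply scaled by 2**N."""
    return iso * 2 ** stops

# an ISO 100 frame pushed +5 EV in post reaches roughly the
# brightness of an ISO 3200 exposure made in camera
print(pushed_exposure(100, 5))  # 3200
```

On an ISO-invariant sensor the two routes yield similar noise; on an ISO-variant one, the pushed file pays a penalty in banding and discolouration.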
To test the Z6’s ISO invariance (as shown above) I shot a dark nightscape at ISO 3200 for a properly exposed scene, and also at ISO 100 for an image underexposed by a massive 5 stops. I then boosted that image by 5 stops in exposure in Adobe Camera Raw. That’s an extreme case to be sure.
I found the Z6 provided very good ISO invariant performance, though with more chrominance speckling than the Sony a7III and Nikon D750 at -5 EV.
Below is a less severe test, showing the Z6 properly exposed on a moonlit night and at 1 to 4 EV steps underexposed, then brightened in processing. Even the -4 EV image looks very good.
In my testing, even with frames underexposed by -5 EV, I did not see any of the banding effects (due to the phase-detect auto-focus pixels) reported by others.
As such, I judge the Z6 to be an excellent camera for nightscape shooting when we often want to extract detail in the shadows or dark foregrounds.
Compressed vs. Uncompressed / Raw Large vs. Small
The Z6, as do many Nikons, offers a choice of shooting 12-bit or 14-bit raws, and either compressed or uncompressed.
I shot all my test images as 14-bit uncompressed raws, yielding 46 megabyte files with a resolution of 6048 x 4024 pixels. So I cannot comment on how good 12-bit compressed files are compared to what I shot. Astrophotography demands the best original data.
However, as the menu above shows, Nikon now also offers the option of shooting smaller raw sizes. The Medium Raw setting produces an image 4528 x 3016 pixels and an 18-megabyte file (in the files I shot), but with all the benefits of raw files in processing.
The Medium Raw option might be attractive when shooting time-lapses, where you might need to fit as many frames onto the single XQD card as possible, yet still have images large enough for final 4K movies.
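As a rough sketch of that card budget (the 64 GB card size is an assumed example, and 1 GB is treated as 1000 MB for simplicity):

```python
def frames_on_card(card_gb, file_mb):
    """Rough count of frames that fit on a memory card."""
    return card_gb * 1000 // file_mb

# hypothetical 64 GB XQD card
print(frames_on_card(64, 46))  # Large 14-bit uncompressed raws
print(frames_on_card(64, 18))  # Medium raws: well over twice as many
```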
However, comparing a Large Raw to a Medium Raw did show a loss of resolution, as expected, with little gain in noise reduction.
This is not like “binning pixels” in CCD cameras to increase signal-to-noise ratio. I prefer to never throw away information in the camera, to allow the option of creating the best quality still images from time-lapse frames later.
Nevertheless, it’s nice to see Nikon now offer this option on new models, a feature which has long been on Canon cameras.
Star Image Quality
Above is the Orion Nebula with the D750 and with the Z6, both shot in moonlight with the same 105mm refractor telescope.
I did not find any evidence for “star-eating” that Sony mirrorless cameras have been accused of. (However, I did not find the Sony a7III guilty of eating stars either.) Star images looked as good in the Z6 as in the D750.
Raw developers (Adobe, DxO, ON1, and others) decoded the Z6’s Bayer-array NEF files fine, with no artifacts such as oddly-coloured or misshapen stars, which can arise in cameras lacking an anti-alias filter.
LENR Dark frames
Above, 8-minute exposures of nothing, taken with the lens cap on at room temperature: without LENR, and with LENR, both boosted a lot in brightness and contrast to exaggerate the visibility of any thermal noise. These show the reduction in noise speckling with LENR activated, and the clean result with the Z6. At small size you’ll likely see nothing but black!
For deep-sky imaging a common practice is to shoot “dark frames,” images recording just the thermal noise that can then be subtracted from the image.
The Long Exposure Noise Reduction feature offered by all cameras performs this dark frame subtraction internally and automatically by the camera for any exposures over one second long.
I tested the Z6’s LENR and found it worked well, doing the job to effectively reduce thermal noise (hot pixels) without adding any other artifacts.
Some astrophotographers dismiss LENR and never use it. By contrast, I prefer to use LENR to do dark frame subtraction. Why? Through many comparison tests over the years I have found that separate dark frames taken later at night rarely do as good a job as LENR darks, because those separate darks are taken when the sensor temperature, and therefore the noise levels, are different than they were for the “light” frames.
I’ve found that dark frames taken later, then subtracted “in post,” inevitably have less effect than LENR darks. Or worse, they add a myriad of pock-mark black specks to the image, adding noise and making the image look worse.
The benefit of LENR is lower noise. The penalty of LENR is that each image takes twice as long to shoot — the length of the exposure + the length of the dark frame. Because …
As Expected on the Z6 … There’s no LENR Dark Frame Buffer
Only Canon full-frame cameras offer this little-known but wonderful feature for astrophotography. With LENR turned on, it is possible to shoot three (with the Canon 6D MkII) or four (with the Canon 6D) raw images in quick succession. The Canon 5D series also has this feature.
The single dark frame kicks in and locks up the camera only after the series of “light frames” are taken. This is excellent for taking a set of noise-reduced deep-sky images for later stacking without need for further “image calibration.”
No Nikon has this dark frame buffer, not even the “astronomical” D810a. And not the Z6.
I have to mention this every time I describe Canon’s dark frame buffer: It works only on full-frame Canons, and there’s no menu function to activate it. Just turn on LENR, fire the shutter, and when the first exposure is complete fire the shutter again. Then again for a third, and perhaps a fourth exposure. Only then does the LENR dark frame lock up the camera as “Busy” and prevent more exposures. That single dark frame gets applied to each of the previous “light” frames, greatly reducing the time it takes to shoot a set of dark-frame subtracted images.
But do note that Canon’s dark frame buffer will not work if:
a) You leave Live View on. Don’t do that for any long exposure shooting.
b) You control the camera through the USB port via external software. It works only when controlling the camera via its internal intervalometer or via the shutter port using a hardware intervalometer.
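The time saved by the buffered approach is easy to quantify. A toy sketch, using an illustrative set of four 4-minute exposures:

```python
def session_time_min(lights, exposure_s, darks):
    """Total time to capture a set of equal-length light frames plus
    the LENR dark frames that accompany them."""
    return (lights + darks) * exposure_s / 60

# four 4-minute lights, a dark after each (conventional LENR)
print(session_time_min(4, 240, darks=4))  # 32 minutes
# four lights sharing one buffered dark (Canon's dark frame buffer)
print(session_time_min(4, 240, darks=1))  # 20 minutes
```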
With DSLRs, deep-sky images shot through telescopes, then boosted for contrast in processing, usually exhibit a darkening along the bottom of the frame. This is caused by the upraised mirror shadowing the sensor slightly, an effect never noticed in normal photography.
Mirrorless cameras should be free of this mirror box shadowing. The Sony a7III, however, still exhibits some edge shadows due to an odd metal mask in front of the sensor. It shouldn’t be there and its edge darkening is a pain to eliminate in the final processing.
As I show in my review of the a7III, the Sony also exhibits a purple edge glow in long-exposure deep-sky images, from an internal light source. That’s a serious detriment to its use in deep-sky imaging.
Happily, the Z6 proved to be free of any such artifacts. Images are clean and evenly illuminated to the edges, as they should be. I saw no amp glows or other oddities that can show up under astrophotography use. The Z6 can produce superb deep-sky images.
During my short test period, I was not able to shoot red nebulas under moonless conditions. So I can’t say how well the Z6 performs for recording H-alpha regions compared to other “stock” cameras.
With the D810a gone, if it is deep red nebulosity you are after with a Nikon, then consider buying a filter-modified Z6 or having yours modified.
Both LifePixel and Spencer’s Camera offer to modify the Z6 and Z7 models. However, I have not used either of their services, so cannot vouch for them first hand.
Live View Focusing and Framing
For all astrophotography, manually focusing with Live View is essential. And with mirrorless cameras there is no optical viewfinder to look through to frame scenes. You are dependent on the live electronic image (on the rear LCD screen or in the eye-level electronic viewfinder, or EVF) to see anything.
Thankfully, the Z6 presents a bright Live View image making it easy to frame, find, and focus on stars. Maximum zoom for precise focusing is 15x, good but not as good as the D750’s 20x zoom level, but better than Canon’s 10x maximum zoom in Live View.
The Z6 lacks the a7III’s wonderful Bright Monitoring function that temporarily ups the ISO to an extreme level, making it much easier to frame a dark night scene. However, something similar can be achieved with the Z6 by switching it temporarily to Movie mode, and having the ISO set to an extreme level.
As with most Nikons (and unlike Sonys), the Z6 remembers separate settings for the still and movie modes, making it easy to switch back and forth, in this case for a temporarily brightened Live View image to aid framing.
That’s very handy, and the Z6 works better than the D750 in this regard, providing a brighter Live View image, even with the D750’s well-hidden Exposure Preview option turned on.
Where the Z6 pulls far ahead of the otherwise similar D750 is in its movie features.
The Z6 can shoot 4K video (3840 x 2160 pixels) at 30, 25, or 24 frames per second. Using 24 frames per second and increasing the ISO to between 12,800 and 51,200 (the Z6 can go as high as ISO 204,800!) it is possible to shoot real-time video at night, such as of auroras.
But the auroras will have to be bright as, at 24 fps, the slowest possible shutter speed is 1/25 second, as you might expect.
The a7III, by comparison, can shoot 4K movies at “dragged” shutter speeds as slow as 1/4 second, even at 24 fps, making it possible to shoot auroras at lower and less noisy ISO speeds, albeit with some image jerkiness due to the longer exposures per frame.
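The advantage of a dragged shutter can be put in numbers. A small sketch (the ISO values are illustrative):

```python
def iso_with_dragged_shutter(iso_realtime, shutter_realtime_s, shutter_dragged_s):
    """A slower ('dragged') per-frame shutter gathers proportionally
    more light, so the ISO can drop by the same factor while keeping
    the same frame brightness."""
    gain = shutter_dragged_s / shutter_realtime_s
    return iso_realtime / gain

# 1/25 s per frame at 24 fps real-time, vs a 1/4 s dragged shutter:
# the 6.25x longer exposure allows about 2.6 stops lower ISO
print(round(iso_with_dragged_shutter(51200, 1/25, 1/4)))  # ~8192
```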
The D750 shoots only 1080 HD and, as shown above, produces very noisy movies at ISO 25,600 to 51,200. It’s barely usable for aurora videos.
The Z6 is much cleaner than the D750 at those high ISOs, no doubt due to far better internal processing of the movie frames. However, if night-sky 4K videos are an important goal, a camera from the Sony a7 series will be a better choice, if only because of the option for slower dragged shutter speeds.
For examples of real-time auroras shot with the Sony a7III see my music videos shot in Yellowknife and in Norway.
The Z6 uses the EN-EL15b battery, compatible with the battery and charger used for the D750. But the “b” variant allows for in-camera charging via the USB port.
In room temperature tests the Z6 lasted for 1500 exposures, as many as the D750 was able to take in a side-by-side test. That was with the screens off.
At night, in winter temperatures of -10° C (14° F), the Z6 lasted for three hours’ worth of continuous shooting, both for long deep-sky exposure sets and for a test time-lapse I shot, shown below.
A time-lapse movie, downsized here to HD from the full-size originals, shot with the Z6 and its internal intervalometer, from twilight through to moonrise on a winter night. Processed with Camera Raw and LRTimelapse.
However, with any mirrorless camera, you can extend battery life by minimizing use of the LCD screen and eye-level EVF. The Z6 has a handy and dedicated button for shutting off those screens when they aren’t needed during a shoot.
The days of mirrorless cameras needing a handful of batteries just to get through a few hours of shooting are gone.
Lens and Telescope Compatibility
As with all mirrorless cameras, the Nikon Z cameras use a new lens mount, one that is incompatible with the decades-old Nikon F mount.
The Z mount is wider and can accommodate wider-angle and faster lenses than the old F mount ever could, and in a smaller package. That, in theory, is the good news, though we have yet to see those lenses appear.
The bad news is that you’ll need Nikon’s FTZ lens adapter to use any of your existing Nikon F-mount lenses on either the Z6 or Z7. As of this writing, Nikon is supplying an FTZ free with every Z body purchase.
I got an FTZ with my loaner Z6 and it worked very well, allowing even third-party lenses like my Sigma Art lenses to focus at the same point as they normally do (not true of some third-party adapters), preserving the lens’s optical performance. Autofocus functions all worked fine and fast.
You’ll also need the FTZ adapter for use on a telescope, as shown above, to go from your telescope’s camera adapter, with its existing Nikon T-ring, to the Z6 body.
The reason is that the field flattener or coma corrector lenses often required with telescopes are designed to work best with the longer lens-to-sensor distance of a DSLR body. The FTZ adapter provides the necessary spacing, as do third-party adapters.
The only drawback to the FTZ is that any tripod plate attached to the camera body itself likely has to come off, and the tripod foot incorporated into the FTZ used instead. I found myself often having to swap locations for the tripod plate, an inconvenience.
Camera Controller Compatibility
Since it uses the same Nikon DC2-type shutter port as the D750, the Z6 should be compatible with most remote hardware releases and time-lapse motion controllers that operate a Nikon through the shutter port. Examples are the controllers from SYRP.
On the other hand, time-lapse devices and external intervalometers that run Nikons through the USB port might need to have their firmware or apps updated to work with the Z6.
For example, as of early May 2019, CamRanger lists the Z6 as a supported camera; the Arsenal “smart controller” does not. Nor does Alpine Labs for their Radian and Pulse controllers, nor TimeLapse+ for its excellent View bramping intervalometer. Check with your supplier.
For those who like to use laptops to run their camera at the telescope, I found the Windows program Astro Photography Tool (v3.63) worked fine with the Z6, in this case connecting to the camera’s USB-C port using the USB-C to USB-A cable that comes with the camera. This allows APT to shift not only shutter speed, but also ISO and aperture under scripted sequences.
Inevitably, raw files from brand new cameras cannot be read by any raw developer programs other than the one supplied by the manufacturer, Nikon Capture NX in this case. However, even by the time I did my testing in winter 2019 all the major software suppliers had updated their programs to open Z6 files.
Adobe Lightroom and Photoshop, Affinity Photo, DxO PhotoLab, Luminar 3, ON1 PhotoRAW, and the open-source Raw Therapee all open the Z6’s NEF raw files just fine.
Specialized programs for processing astronomy images might be another story. For example, as of v1.08.06, PixInsight, a favourite program among astrophotographers, does not open Z6 raw files. Nor does Nebulosity v4. But check with the developers for updates.
Other Features for Astrophotography
Here are other Nikon Z6 features I found of value for astrophotography, and for operating the camera at night.
Tilting LCD Screen
Like the Nikon D750 and Sony a7III, the Z6 offers a tilting LCD screen, great for use on a telescope or tripod when aimed up at the sky. However, the screen does not flip out and reverse, a feature useful for vloggers but seldom needed for astrophotography.
OLED Top Screen (Above)
The Sony doesn’t have one, and Canon’s low-cost mirrorless Rp also lacks one. But the top-mounted OLED screen of the Z6 is a great convenience for astrophotography. It makes it possible to monitor camera status and battery life during a shoot, even with the rear LCD screen turned off to prolong battery life.
Touch Screen Interface

Sony’s implementation of touch-screen functions is limited to just choosing autofocus points. By contrast, the Nikon Z6 offers a full range of touchscreen functions, making it easy to navigate menus and choose settings.
I do wish there was an option, as there is with Pentax, to tint the menus red for preserving night vision.
Internal Intervalometer

As with other Nikons, the Z6 offers an internal intervalometer capable of shooting time-lapses, as long as individual exposures don’t need to be longer than 30 seconds.
In addition, there’s the Exposure Smoothing option which, as I have found with the D750, is great for smoothing flickering in time-lapses shot using auto exposure.
Sony has only just added an intervalometer to the a7III with their v3 firmware update, but with no exposure smoothing.
Custom i Menu / Custom Function Buttons
The Sony a7III has four custom function buttons users can assign to commonly used commands, for quick access. For example, I assign one Custom button to the Bright Monitoring function which is otherwise utterly hidden in the menus, but superb for framing nightscapes, if only you know it’s there!
The Nikon Z6 has two custom buttons beside the lens mount. However, I found it easier to use the “i” menu (shown above) by populating it with those functions I use at night for astrophotography. It’s then easy to call them up and adjust them on the touch screen.
Thankfully, the Z6’s dedicated ISO button is now on top of the camera, making it much easier to find at night than the awkwardly placed ISO button on the back of the D750, which I am always mistaking for the Image Quality button, which you do not want to adjust by mistake.
As most cameras do, the Z6 also has a “My Menu” page which you can also populate with favourite menu commands.
Lighter Weight / Smaller Size
The Z6 provides imaging performance similar to the D750 (and better for movies), in a smaller and lighter camera, weighing 200 grams (0.44 pounds) less. Being able to downsize my equipment is a welcome plus of going mirrorless.
Electronic Front Curtain Shutter / Silent Shooting
By design, mirrorless cameras lack any vibration from a bouncing mirror. But even the mechanical shutter can impart vibration and blurring to high-magnification images taken through telescopes.
The electronic front curtain shutter (lacking in the D750) helps eliminate this, while the Silent Shooting mode does just what its name implies: it makes the Z6 utterly quiet and vibration-free when shooting, as all the shutter functions are then electronic. This is great for lunar and planetary imaging.
What’s Missing for Astrophotography (not much!)
Bulb Timer for Long Exposures
While the Z6 has a Bulb setting, there is no Bulb Timer as there is with Canon’s recent cameras. A Bulb Timer would allow setting long Bulb exposures of any length in the camera, though Canon’s cannot be combined with the intervalometer.
Instead, the Nikon must be used with an external intervalometer for any exposures over 30 seconds long. Any number of units are compatible with the Z6 through its shutter port, the same DC2-type jack used on the D750.
In-Camera Image Stacking to Raws
The Z6 does offer the ability to stack up to 10 images in the camera, a feature also offered by Canon and Pentax. Images can be blended with a Lighten (for star trails) or Average (for noise smoothing) mode.
However, unlike with Canon and Pentax, the result is a compressed JPG, not a raw file, making this feature of little value for serious imaging. Plus, with a maximum of only 10 exposures of up to 30 seconds each, the ability to stack star trails “in camera” is limited.
Unlike the top-end D850, the Z6’s buttons are not illuminated, but then again neither are the Z7’s.
As a bonus — the Nikon 35mm S-Series Lens
With the Z6 I also received a Nikkor 35mm f/1.8 S lens made for the Z-mount, perhaps the native Z-mount lens from Nikon best suited to nightscape imaging. See Nikon’s website for the listing.
If there’s a downside to the Z-series Nikons, it’s the limited number of native lenses available now from Nikon, and likely in the future from anyone, as Nikon has not made it easy for other lens companies to design for the new Z mount.
In testing the 35mm Nikkor on tracked shots, stars showed excellent on- and off-axis image quality, even wide open at f/1.8. Coma, astigmatism, spherical aberration, and lateral chromatic aberration were all well controlled.
However, as with most lenses now offered for mirrorless cameras, the focus is “by-wire” using a ring that doesn’t mechanically adjust the focus. As a result, the focus ring turns continuously and lacks a focus scale.
So it is not possible to manually preset the lens to an infinity mark, as nightscape photographers often like to do. Focusing must be done each night.
Until there is a greater selection of native lenses for the Z cameras, astrophotographers will need to use the FTZ adapter and their existing Nikon F-mount or third-party Nikon-mount lenses with the Zs.
I was impressed with the Z6.
For any owner of a Nikon cropped-frame DSLR (from the D3000, D5000, or D7000 series, for example) wanting to upgrade to full-frame for astrophotography, I would suggest moving to the Z6 over choosing a current DSLR.
Mirrorless is the way of the future. And the Z6 will yield lower noise than most, if not all, of Nikon’s cropped-frame cameras.
For owners of current Nikon DSLRs, especially a 24-megapixel camera such as the D750, moving to a Z6 will not provide a significant improvement in image quality for still images.
But … it will provide 4K video and much better low-light video performance than older DSLRs. So if it is aurora videos you are after, the Z6 will work well, though not quite as well as a Sony alpha.
In all, there’s little downside to the Z6 for astrophotography, and some significant advantages: low noise, bright live view, clean artifact-free sensor images, touchscreen convenience, silent shooting, low-light 4K video, all in a lighter weight body than most full-frame DSLRs.
But what about lenses for the Sony? Here’s one ideal for astrophotography.
Made for Sony e-mount cameras, the Venus Optics 15mm f/2 Laowa provides excellent on- and off-axis performance in a fast and compact lens ideal for nightscape, time-lapse, and wide-field tracked astrophotography with Sony mirrorless cameras. (UPDATE: Venus Optics has announced versions of this lens for Canon R and Nikon Z mount mirrorless cameras.)
I use it a lot and highly recommend it.
Size and Weight
While I often use the a7III with my Canon lenses by way of a Metabones adapter, the Sony really comes into its own when matched to a “native” lens made for the Sony e-mount. The selection of fast, wide lenses from Sony itself is limited, with the new Sony 24mm G-Master a popular favourite (I have yet to try it).
However, for much of my nightscape shooting, and certainly for auroras, I prefer lenses even wider than 24mm, and the faster the better.
Aurora over Båtsfjord, Norway. This is a single 0.8-second exposure at f/2 with the 15mm Venus Optics lens and Sony a7III at ISO 1600.
The Laowa 15mm f/2 from Venus Optics fills the bill very nicely, providing excellent speed in a compact lens. While wide, the Laowa is a rectilinear lens providing straight horizons even when aimed up, as shown above. This is not a fish-eye lens.
The Venus Optics 15mm realizes the potential of mirrorless cameras and their short flange distance that allows the design of fast, wide lenses without massive bulk.
While compact, at 600 grams the Laowa 15mm is quite hefty for its size due to its solid metal construction. Nevertheless, it is half the weight of the massive 1250-gram Sigma 14mm f/1.8 Art. The Laowa is not a plastic entry-level lens, nor is it cheap, at $850 from U.S. sources.
For me, the Sony-Laowa combination is my first choice for a lightweight travel camera for overseas aurora trips.
However, this is a no-frills manual-focus lens. It does not even transfer aperture data to the camera, which is a pity; there are no electrical connections between the lens and camera.
However, for nightscape work where all settings are adjusted manually, the Venus Optics 15mm works just fine. The key factor is how good the optics are. I’m happy to report that they are very good indeed.
Testing Under the Stars
To test the Venus Optics lens I shot “same night” images, all tracked, with the Sigma 14mm f/1.8 Art lens, at left, and the Rokinon 14mm SP (labeled as being f/2.4, at right). Both are much larger lenses, made for DSLRs, with bulbous front elements not able to accept filters. But they are both superb lenses. See my test report on these lenses published in 2018.
The next images show blow-ups of the same scene (the nightscape shown in full below, taken at Dinosaur Provincial Park, Alberta), and all taken on a tracker.
I used the Rokinon on the Sony a7III using the Metabones adapter which, unlike some brands of lens adapters, does not compromise the optical quality of the lens by shifting its focal position. But lacking a lens adapter for Nikon-to-Sony at the time of testing, I used the Nikon-mount Sigma lens on a Nikon D750, a DSLR camera with nearly identical sensor specs to the Sony.
Above is a tracked image (so the stars are not trailed, which would make it hard to tell aberrations from trails), taken wide open at f/2. No lens correction has been applied so the vignetting (the darkening of the frame corners) is as the lens provides.
As shown above, when used wide open at f/2 vignetting is significant, but not much more so than with competitive lenses with much larger front elements, as I compare below.
And the vignetting is correctable in processing. Adobe Camera Raw and Lightroom have this lens in their lens profile database. That’s not the case with current versions (as of April 2019) of other raw developers such as DxO PhotoLab, ON1 Photo RAW, and Raw Therapee where vignetting corrections have to be dialled in manually by eye.
When stopped down to f/2.8 the Laowa “flattens out” a lot, with much-reduced vignetting and more uniform frame illumination. Corner aberrations also improve but are still present. I show those in close-up detail below.
Above, I compare the vignetting of the three lenses, both wide open and when stopped down. Wide open, all the lenses, even the Sigma and Rokinon despite their large front elements, show quite a bit of drop off in illumination at the corners.
The Rokinon SP actually seems to be the worst of the trio, showing some residual vignetting even at f/2.8, while it is reduced significantly in the Laowa and Sigma lenses. Oddly, the Rokinon SP, even though it is labeled as f/2.4, seemed to open to f/2.2, at least as indicated by the aperture metadata.
Above I show lens sharpness on-axis, both wide open and stopped down, to check for spherical and chromatic aberrations with the bright blue star Vega centered. The red box in the Navigator window at top right indicates what portion of the frame I am showing, at 200% magnification in Photoshop.
On-axis, the Venus Optics 15mm shows stars just as sharply as the premium Sigma and Rokinon lenses, with no sign of blurring spherical aberration nor coloured haloes from chromatic aberration.
Focusing is precise and easy to achieve with the Sony on Live View. My unit reaches sharpest focus on stars with the lens set just shy of the middle of the infinity symbol. This is consistent and allows me to preset focus just by dialing the focus ring, handy for shooting auroras at -35° C, when I prefer to minimize fussing with camera settings, thank you very much!
The Laowa and Sigma lenses show similar levels of off-axis coma and astigmatism, with the Laowa exhibiting slightly more lateral chromatic aberration than the Sigma. Both improve a lot when stopped down one stop, but aberrations are still present though to a lesser degree.
However, I find that the Laowa 15mm performs as well as the Sigma 14mm Art for star quality on- and off-axis. And that’s a high standard to match.
The Rokinon SP is the worst of the trio, showing significant elongation of off-axis star images (they look like lines aimed at the frame centre), likely due to astigmatism. With the 14mm SP, this aberration was still present at f/2.8, and was worse at the upper right corner than at the upper left corner, an indication to me that even the premium Rokinon SP lens exhibits slight lens de-centering, an issue users have often found with other Rokinon lenses.
Real-World Examples – The Milky Way
The fast speed of the Laowa 15mm is ideal for shooting tracked wide-field images of the Milky Way, and untracked camera-on-tripod nightscapes and time-lapses of the Milky Way.
Image aberrations are very acceptable at f/2, a speed that allows shutter speed and ISO to be kept lower for minimal star trailing and noise while ensuring a well-exposed frame.
Real World Examples – Auroras
Where the Laowa 15mm really shines is for auroras. On my trips to chase the Northern Lights I often take nothing but the Sony-Laowa pair, to keep weight and size down.
Above is an example, taken from a moving ship off the coast of Norway. The fast f/2 speed (I wish it were even faster!) makes it possible to capture the Lights in only 1- or 2-second exposures, albeit at ISO 6400. But the fast shutter speed is needed for minimizing ship movement.
The Sony also excels at real-time 4K video, able to shoot at ISO 12,800 to 51,200 without excessive noise.
Aurora Reflections from Alan Dyer on Vimeo.
The Sky is Dancing from Alan Dyer on Vimeo.
The Northern Lights At Sea from Alan Dyer on Vimeo.
Click through to see the posts and the videos shot with the Venus Optics 15mm.
As an aid to video use, the aperture ring of the Venus Optics 15mm can be “de-clicked” at the flick of a switch, allowing users to smoothly adjust the iris during shooting, avoiding audible clicks and jumps in brightness. That’s a very nice feature indeed.
In all, I can recommend the Venus Optics Laowa 15mm lens as a great match to Sony mirrorless cameras, for nightscape still and video shooting. UPDATE: Versions for Canon R and Nikon Z mount mirrorless cameras will now be available.
Can the new version of ON1 Photo RAW match Photoshop for astrophotography?
The short TL;DR answer: No.
But … as always, it depends. So do read on.
Released in mid-November 2018, the latest version of ON1 Photo RAW greatly improves a non-destructive workflow. Combining Browsing, Cataloging, Raw Developing, with newly improved Layers capabilities, ON1 is out to compete with Adobe’s Creative Cloud photo suite – Lightroom, Camera Raw, Bridge, and Photoshop – for those looking for a non-subscription alternative.
Many reviewers love the new ON1 – for “normal” photography.
But can it replace Adobe for night sky photos? I put ON1 Photo RAW 2019 through its paces for the demanding tasks of processing nightscapes, time-lapses, and deep-sky astrophotos.
In my eBook “How to Photograph and Process Nightscapes and Time-Lapses” (linked to at right) I present dozens of processing tutorials, including several on how to use ON1 Photo RAW, but the 2018 edition. I was critical of many aspects of the old version, primarily of its destructive workflow when going from its Develop and Effects modules to the limited Layers module of the 2018 edition.
I’m glad to see many of the shortfalls have been addressed, with the 2019 edition offering a much better workflow allowing layering of raw images while maintaining access to all the original raw settings and adjustments. You no longer have to flatten and commit to image settings to layer them for composites. When working with Layers you are no longer locked out of key functions such as cropping.
I won’t detail all the changes to ON1 2019 but they are significant and welcome.
The question I had was: Are they enough for high-quality astrophotos in a non-destructive workflow, Adobe Photoshop’s forté?
While ON1 Photo RAW 2019 is much better, I concluded it still isn’t a full replacement for Adobe’s Creative Cloud suite, at least not for astrophotography.
NOTE: All images can be downloaded as high-res versions for closer inspection.
ON1 2019 is Better, But for Astrophotography …
Functions in Layers are still limited. For example, there is no stacking and averaging for noise smoothing. Affinity Photo has those.
Filters, though abundant for artistic special effect “looks,” are limited in basic but essential functions. There is no Median filter, for one.
Despite a proliferation of contrast controls, for deep-sky images (nebulas and galaxies) I was still not able to achieve the quality of images I’ve been used to with Photoshop.
The lack of support for third-party plug-ins means ON1 cannot work with essential time-lapse programs such as Timelapse Workflow or LRTimelapse.
Nightscapes: ON1 Photo RAW 2019 works acceptably well for nightscape still images:
Its improved layering and excellent masking functions are great for blending separate ground and sky images, or for applying masked adjustments to selected areas.
Time-Lapses: ON1 is just adequate for basic time-lapse processing:
Yes, you can develop one image and apply its settings to hundreds of images in a set, then export them for assembly into a movie. But there is no way to vary those settings over time, as you can by mating Lightroom to LRTimelapse.
As with the 2018 edition, you still cannot copy and paste masked local adjustments from image to image, limiting their use.
Exporting those images is slow.
Deep-Sky: ON1 is not a program I can recommend for deep-sky image processing:
Stars inevitably end up with unsightly sharpening haloes.
De-Bayering artifacts add blocky textures to the sky background.
And all the contrast controls still don’t provide the “snap” and quality I’m used to with Photoshop when working with low-contrast subjects.
Library / Browse Functions
ON1 is sold first and foremost as a replacement for Adobe Lightroom, and to that extent it can work well. Unlike Lightroom, ON1 allows browsing and working on images without having to import them formally into a catalog.
However, you can create a catalog if you wish, one that can be viewed even if the original images are not “on-line.” The mystery seems to be where ON1 puts its catalog file on your hard drive. I was not able to find it, to manually back it up. Other programs, such as Lightroom and Capture One, locate their catalogs out in the open in the Pictures folder.
For those really wanting a divorce from Adobe, ON1 now offers an intelligent AI-based function for importing Lightroom catalogs and transferring all your Lightroom settings you’ve applied to raw files to ON1’s equivalent controls.
However, while ON1 can read Photoshop PSD files, it will flatten them, so you would lose access to all the original image layers.
ON1’s Browse module is good, with many of the same functions as Lightroom, such as “smart collections.” Affinity Photo – perhaps ON1’s closest competitor as a Photoshop replacement – still lacks anything like it.
But I found ON1’s Browse module buggy, often taking a long while to allow access into a folder, presumably while it is rendering image previews.
There are no plug-ins or extensions for exporting directly to or synching to social media and photo sharing sites.
ON1 did a fairly good job. Some of its special effect filters, such as Dynamic Contrast, Glow, and Sunshine, can help bring out the Milky Way, though they do add an artistic “look” to an image which you might or might not like.
Below, I compare Adobe Camera Raw (ACR) to ON1. It was tough to get ON1’s image looking the same as ACR’s result, but then again, perhaps that’s not the point. Does it just look good? Yes, it does.
Compared to Adobe Camera Raw, which has a good array of basic settings, ON1 has most of those and more, in the form of many special Effects, with many combined as one-click Presets, as shown below.
A few presets and individual filters – the aforementioned Dynamic Contrast and Glow – are valuable. However, most of ON1’s filters and presets will not be useful for astrophotography, unless you are after highly artistic and unnatural effects.
Noise Reduction and Lens Correction
Critical to all astrophotography is excellent noise reduction. ON1 does a fine job here, with good smoothing of noise without harming details.
Lens Correction works OK. It detected the Sigma 20mm Art lens and automatically applied distortion correction, but not any vignetting (light “fall-off”) correction, perhaps the most important correction in nightscape work. You have to dial this in manually by eye, a major deficiency.
By comparison, ACR applies both distortion and vignetting correction automatically. It also includes settings for many manual lenses that you can select and apply in a click. For example, ACR (and Lightroom) includes settings for popular Rokinon and Venus Optics manual lenses; ON1 does not.
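Dialling in a vignetting correction by eye amounts to choosing the strength of a radial gain applied across the frame. Here is a toy numerical sketch of that underlying arithmetic, using a hypothetical quadratic falloff model, not the actual model any of these programs uses internally:

```python
import numpy as np

# Toy 64x64 frame: a uniform sky of brightness 100, dimmed by a
# radial falloff that leaves the extreme corners at 50%.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(y - h / 2, x - w / 2) / np.hypot(h / 2, w / 2)  # 0 at centre, 1 at corner
frame = 100.0 * (1.0 - 0.5 * r**2)

# Manual correction: divide by a gain model whose strength is the
# number you "dial in by eye" until the sky looks flat.
strength = 0.5
gain = 1.0 - strength * r**2
corrected = frame / gain
```

When the chosen strength matches the lens's true falloff, the corrected sky is uniform; a lens profile simply supplies that model for you.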
Hot Pixel Removal
I shot the example image on a warm summer night and without using in-camera Long Exposure Noise Reduction (to keep the gap between exposures short when shooting sets of tracked and untracked exposures for later compositing).
However, the penalty for not using LENR to expedite the image taking is a ground filled with hot pixels. While Adobe Camera Raw does have some level of hot pixel removal working “under the hood,” many specks remained.
ON1 showed more hot pixels, until I clicked Remove Hot Pixels, found under Details. As shown at centre above, it did a decent job getting rid of the worst offenders.
But as I’ll show later, the penalty is that stars now look distorted and sometimes double, or you get the outright removal of stars. ON1 doesn’t do a good job distinguishing between true sharp-edged hot pixels and the softer images of stars. Indeed, it tends to over sharpen stars.
A competitor, Capture One 11, does a better job, with an adjustable Single Pixel removal slider, so you can at least select the level of star loss you are willing to tolerate to get rid of hot pixels.
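Single-pixel removal tools generally work by comparing each pixel to the median of its neighbourhood and replacing outliers; an adjustable threshold is what a slider like Capture One's exposes. A minimal sketch of that general approach (not the actual algorithm of any of these programs) shows why an extended star image can survive while an isolated spike does not:

```python
import numpy as np

def median3x3(img):
    """Median of each pixel's 3x3 neighbourhood (edges padded)."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(stack, axis=0)

# Synthetic frame: flat sky with a soft "star" (an extended blob)
# and a hot pixel (a single isolated spike).
img = np.full((9, 9), 50.0)
img[2:5, 2:5] += np.array([[5., 10., 5.],
                           [10., 30., 10.],
                           [5., 10., 5.]])   # star: light spread over pixels
img[7, 7] = 255.0                            # hot pixel: one hard point

# Replace pixels that tower above their local median; the threshold
# plays the role of an adjustable "strength" slider.
local_median = median3x3(img)
threshold = 100.0
cleaned = np.where(img - local_median > threshold, local_median, img)
```

Lower the threshold too far and the star core starts to look like an outlier too, which is exactly the star-eating behaviour described above.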
Star Image Quality
Yes, we are pixel peeping here, but that’s what we do in astrophotography. A lot!
Stars in ON1 don’t look as good as in Camera Raw. Inevitably, as you add contrast enhancements, stars in ON1 start to exhibit dark and unsightly “sharpening haloes” not present in ACR, despite me applying similar levels of sharpening and contrast boosts to each version of the image.
Camera Raw has been accused of producing images that are not as sharp as with other programs such as Capture One and ON1.
There’s a reason. Other programs over-sharpen, and it shows here.
We can get away with it here in wide-field images, but not later with deep-sky close-ups. I don’t like it. And it is unavoidable. The haloes are there, albeit at a low level, even with no sharpening or contrast enhancements applied, and no matter what image profile is selected (I used ON1 Standard throughout).
You might have to download and closely inspect these images to see the effect, but ON1’s de-Bayering routine exhibits a cross-hatched blocky pattern at the pixel-peeping level. ACR does not.
I see this same effect with some other raw developers. For example, the free Raw Therapee shows it with many, though not all, of its choices of de-Bayering algorithms. Of the more than a dozen raw developers I tested a year ago, ACR and DxO PhotoLab had (and still have) the most artifact-free de-Bayering and smoothest noise reduction.
Again, we can get away with some pixel-level artifacts here, but not later, in deep-sky processing.
Nightscape Processing — Layering and Compositing
The 2018 version of ON1 forced you to destructively flatten images when bringing them into the Layers module.
The 2019 version of ON1 improves that. It is now possible to composite several raw files into one image and still retain all the original Develop and Effects settings for non-destructive work.
You can then use a range of masking tools to mask in or out the sky.
For the example above, I have stacked tracked and untracked exposures, and am starting to mask out the trailed stars from the untracked exposure layer.
To do this with Adobe, you would have to open the developed raw files in Photoshop (ideally using “smart objects” to retain the link back to the raw files). But with ON1 we stay within the same program, to retain access to non-destructive settings. Very nice!
For creating masks, ON1 2019 does not have the equivalent of Photoshop’s excellent Quick Selection Tool for selecting the sky or ground. It does have a “Perfect Brush” option which uses the tonal values of the pixels below it, rather than detecting edges, to avoid “painting over the lines.”
While the Perfect Brush does a decent job, it still requires a lot of hand painting to create an accurate mask without holes and defects. There is no non-destructive “Select and Mask” refinement option as in Photoshop.
Yes, ON1’s Refine Brush and Chisel Mask tools can help clean up a mask edge but are destructive to the mask. That’s not acceptable to my non-destructive mindset!
The masking tools are also applicable to adding “Local Adjustments” to any image layer, to brighten or darken regions of an image for example.
These work well and I find them more intuitive than the “pins” ACR uses on raw files, or DxO PhotoLab’s quirky “U-Point” interface.
ON1’s Local Adjustments work more like Photoshop’s Adjustment Layers and are similarly non-destructive. Excellent.
A very powerful feature of ON1 is its built-in Luminosity masking.
Yes, Camera Raw now has Range Masks, and Photoshop can be used to create luminosity masks, but making Photoshop’s luminosity masks easily adjustable requires purchasing third-party extension panels.
ON1 can create an adjustable and non-destructive luminosity mask on any image or adjustment layer with a click.
While such masks, based on the brightness of areas, aren’t so useful for low-contrast images like the Milky Way scene above, they can be very powerful for merging high-contrast images (though ON1 also has an HDR function not tested here).
ON1 has the advantage here. Its Luminosity masks are a great feature for compositing exposures or for working on regions of bright and dark in an image.
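The idea behind a luminosity mask is simple enough to sketch numerically. This shows the general technique, not ON1's exact implementation: the mask is just the image's own normalized brightness (or its inverse), so any adjustment pushed through it affects each pixel in proportion to how bright or dark it is:

```python
import numpy as np

# A toy image with shadows and highlights (values normalized 0..1).
img = np.array([[0.10, 0.90],
                [0.50, 1.00]])

# A basic luminosity mask is the image's own brightness:
# high values select the highlights ...
lights_mask = img.copy()

# ... and its inverse selects the shadows.
darks_mask = 1.0 - img

# Push an adjustment through the mask: lift only the shadows
# by up to +0.2, scaled by how dark each pixel is.
adjusted = img + 0.2 * darks_mask
```

The darkest pixel gets nearly the full lift while pure white is untouched, which is why such masks shine for blending high-contrast exposures.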
Here again is the final result, above.
It is not just one image each for the sky and ground, but is instead a stack of four images for each half of the composite, to smooth noise. This form of stacking is somewhat unique to astrophotography, and is commonly used to reduce noise in nightscapes and in deep-sky images, as shown later.
Here I show how you have to stack images in ON1.
Unlike Photoshop and Affinity Photo, ON1 does not have the ability to merge images automatically into a stack and apply a mathematical averaging to the stack, usually a Mean or Median stack mode. The averaging of the image content is what reduces the random noise.
Instead, with ON1 you have to perform an “old school” method of average stacking – by changing the opacity of the layers, so that Layer 2 = 50%, Layer 3 = 33%, Layer 4 = 25%, and so on. The result is identical to performing a Mean stack mode in Photoshop or Affinity.
Fine, except there is no way to perform a Median stack, which can be helpful for eliminating odd elements present in only one frame, perhaps an aircraft trail.
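A quick numerical sketch (NumPy, with synthetic frames) confirms the opacity cascade really is just a mean, and shows why a Median stack is the one that rejects an artifact present in only one frame, such as an aircraft trail:

```python
import numpy as np

# Four simulated exposures of the same scene, with random noise,
# plus an "aircraft trail" crossing one frame only.
rng = np.random.default_rng(42)
scene = np.full((4, 4), 100.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(4)]
frames[2][1, :] += 500.0          # bright streak in frame 3 only

# "Old school" opacity cascade: Layer 2 at 50%, Layer 3 at 33%,
# Layer 4 at 25% ... i.e. each new layer blended at 1/n opacity.
blend = frames[0].copy()
for n, frame in enumerate(frames[1:], start=2):
    opacity = 1.0 / n
    blend = blend * (1.0 - opacity) + frame * opacity

# The cascade is mathematically identical to a Mean stack ...
mean_stack = np.mean(frames, axis=0)

# ... while a Median stack rejects the one-frame trail.
median_stack = np.median(frames, axis=0)
```

Along the streak, the mean (and the opacity cascade) is pulled far above the sky level, but the median stays near it, which is exactly the behaviour ON1 cannot offer.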
Copy and Paste Settings
Before we even get to the stacking stage, we have to develop and process all the images in a set. Unlike Lightroom or Camera Raw, ON1 can’t develop and synchronize settings to a set of images at once. You can work on only one image at a time.
So, you work on one image (one of the sky images here), then Copy and Paste its settings to the other images in the set. I show the Paste dialog box here.
This works OK, though I did find some bugs – the masks for some global Effects layers did not copy properly; they copied inverted, as black instead of white masks.
However, Luminosity masks did copy from image to image, which is surprising considering the next point.
The greater limitation is that no Local Adjustments (ones with masks to paint in a correction to a selected area) copy from one image to another … except ones with gradient masks. Why the restriction?
So as wonderful as ON1’s masking tools might be, they aren’t of any use if you want to copy their masked adjustments across several images, or, as shown next, to a large time-lapse set.
While Camera Raw’s and Lightroom’s Local Adjustment pins are more awkward to work with, they do copy across as many images as you like.
A few Adobe competitors, such as Affinity Photo (as of this writing) simply can’t do this.
By comparison, with the exception of Local Adjustments, ON1 does have good functions for Copying and Pasting Settings. These are essential for processing a set of hundreds of time-lapse frames.
Once all the images are processed – whether with ON1 or any other program – the frames have to be exported to an intermediate set of JPGs for assembly into a movie by third-party software. ON1 itself can’t assemble movies, but then again neither can Lightroom (at least not very well), though Photoshop can, through its video editing functions.
For my test set of 220 frames, each with several masked Effects layers, ON1 took 2 hours and 40 minutes to perform the export to 4K JPGs. Photoshop, through its Image Processor utility, took 1 hour and 30 minutes to export the same set, developed similarly and with several local adjustment pins.
ON1 did the job but was slow.
A greater limitation is that, unlike Lightroom, ON1 does not accept any third party plug-ins (it serves as a plug-in for other programs). That means ON1 is not compatible with what I feel are essential programs for advanced time-lapse processing: either Timelapse Workflow (from https://www.timelapseworkflow.com) or the industry-standard LRTimelapse (from https://lrtimelapse.com).
Both programs work with Lightroom to perform incremental adjustments to settings over a set of images, based on the settings of several keyframes.
Lacking the ability to work with these programs means ON1 is not a program for serious and professional time-lapse processing.
Wide-Angle Milky Way
Now we come to the most demanding task: processing long exposures of the deep-sky, such as wide-angle Milky Way shots and close-ups of nebulas and galaxies taken through telescopes. All require applying generous levels of contrast enhancement.
As the above example shows, try as I might, I could not get my test image of the Milky Way to look as good with ON1 as it did with Adobe Camera Raw. Despite the many ways to increase contrast in ON1 (Contrast, Midtones, Curves, Structure, Haze, Dynamic Contrast and more!), the result still looked flat and with more prominent sky gradients than with ACR.
And remember, with ACR that’s just the start of a processing workflow. You can then take the developed raw file into Photoshop for even more precise work.
With ON1, its effects and filters are all you have to work with. Yes, that simplifies the workflow, but the choices are more limited than with Photoshop, despite ON1’s huge number of Presets.
Similarly, taking a popular deep-sky subject, the Andromeda Galaxy, aka M31, and processing the same original images with ON1 and ACR/Photoshop resulted in what I think is a better-looking result with Photoshop.
Of course, it’s possible to change the look of such highly processed images with the application of various Curves and masked adjustment layers. And I’m more expert with Photoshop than with ON1.
But … as with the Cygnus Milky Way image, I just couldn’t get Andromeda looking as good in ON1. It always looked a little flat.
Dynamic Contrast did help snap up the galaxy’s dark lanes, but at the cost of “crunchy” stars, as I show next. A luminosity “star mask” might help protect the stars, but I think the background sky will inevitably suffer from the de-Bayering artifacts.
Star and Background Sky Image Quality
As I showed with the nightscape image, stars in ON1 end up looking too “crunchy,” with dark halos from over sharpening, and also with the blocky de-Bayering artifacts now showing up in the sky.
I feel it is not possible to avoid dark star haloes, as any application of contrast enhancements, so essential for these types of objects, brings them out, even if you back off sharpening at the raw development stage, or apply star masks.
ON1 is applying too much sharpening “under the hood.” That might “wow” casual daytime photographers into thinking ON1 is making their photos look better, but it is detrimental to deep-sky images. Star haloes are a sign of poor processing.
Noise and Hot Pixels
ON1’s noise reduction is quite good, and by itself does little harm to image details.
But turn on the Remove Hot Pixel button and stars start to be eaten. Faint stars fade out and brighter stars get distorted into double shapes or have holes in them.
Hot pixel removal is a nice option to have, but for these types of images it does too much harm to be useful. Use LENR or take dark frames, best practices in any case.
Image Alignment and Registration
Before any processing of deep-sky images is possible, it is first necessary to stack and align them, to make up for slight shifts from image to image, usually due to the mount not being perfectly polar aligned. Such shifts can be both translational (left-right, up-down) and rotational (turning about the guide star).
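For the translational part of those shifts, a standard estimation technique is FFT phase correlation; a sketch of the idea follows. To be clear, this is a general method, not how Photoshop's or ON1's alignment routines actually work internally, and rotation needs more machinery than this:

```python
import numpy as np

def phase_correlate(ref, img):
    """Estimate the (row, col) translation of img relative to ref
    via FFT phase correlation."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # Peaks past the halfway point are really negative shifts (wrap-around).
    size = np.array(ref.shape)
    shift[shift > size // 2] -= size[shift > size // 2]
    return tuple(int(s) for s in shift)

# A frame with one bright "star", and a copy shifted down 2, right 3,
# as if the polar alignment had drifted between exposures.
ref = np.zeros((16, 16))
ref[5, 5] = 1.0
shifted = np.roll(ref, (2, 3), axis=(0, 1))

found = phase_correlate(ref, shifted)
```

Once the shift is known, the frame can be rolled back by that amount before the layers are averaged for noise smoothing.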
New to ON1 2019 is an Auto-Align Layers function. It worked OK but not nearly as well as Photoshop’s routine. In my test images of M31, ON1 didn’t perform enough rotation.
Once stacked and aligned, and as I showed above, you then have to manually change the opacities of each layer to blend them for noise smoothing.
By comparison, Photoshop has a wonderful Statistics script (under File>Scripts) that will automatically stack, align, then mean or median average the images, and turn the result into a non-destructive smart object, all in one fell swoop. I use it all the time for deep-sky images. There’s no need for separate programs such as Deep-Sky Stacker.
In ON1, however, all that has to be done manually, step-by-step. ON1 does do the job, just not as well.
ON1 Photo RAW 2019 is a major improvement, primarily in providing a more seamless and less destructive workflow.
Think of it as Lightroom with Layers!
But it isn’t Photoshop.
True to ON1’s heritage as a special effect plug-in, it has some fine Effect filters, such as Dynamic Contrast above, ones I sometimes use from within Photoshop as plug-in smart filters.
Under Sharpen, ON1 does offer a High Pass option, a popular method for sharpening deep-sky objects.
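For those unfamiliar with the technique, high-pass sharpening boils down to isolating fine detail by subtracting a blurred copy, then adding a fraction of that detail back. A rough numpy sketch follows; a simple box blur stands in for whatever blur the editor actually uses:

```python
import numpy as np

def box_blur(img, size=5):
    """Separable box blur, a stand-in for the editor's real blur filter."""
    k = np.ones(size) / size
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def high_pass_sharpen(img, size=5, amount=0.8):
    """High-pass sharpening: subtract a blurred copy to isolate fine
    detail, then add a fraction of that detail back. Layering a High
    Pass result in Overlay mode has roughly the same effect."""
    img = img.astype(float)
    high = img - box_blur(img, size)
    return img + amount * high

# a faint "star" on a flat background: its peak gets boosted
field = np.zeros((21, 21))
field[10, 10] = 1.0
sharpened = high_pass_sharpen(field)
```

Push `amount` too far and you get exactly the dark haloes criticized earlier, since the high-pass detail goes negative around bright stars.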
Missing Filters and Adjustments
But for astrophoto use, ON1 is missing a lot of basic but essential filters for pixel-level touch-ups. Here’s a short list:
• Missing are Median, Dust & Scratches, Radial Blur, Shake Reduction, and Smart Sharpen, to mention just a handful of the filters I find useful for astrophotography, among the dozens of others Photoshop has but ON1 does not. Then again, neither does Lightroom, another example of how ON1 is more like Lightroom with layers than Photoshop.
• While ON1 has many basic adjustments for color and contrast, its version of Photoshop’s Selective Color lacks Neutral or Black sliders, great for making fine changes to color balance in astrophotos.
• While there is a Curves panel, it has no equivalent to Photoshop’s “Targeted Adjustment Tool” for clicking on a region of an image to automatically add an inflection point at the right spot on the curve. This is immensely useful for deep-sky images.
• Also lacking is a basic Levels adjustment. I can live without it, but most astrophotographers would find this a deal-breaker.
• On the other hand, hard-core deep-sky photographers who do most of their processing in specialized programs such as PixInsight, using Photoshop or Lightroom only to perform final touch-ups, might find ON1 perfectly fine. Try it!
Saving and Exporting
ON1 saves its layered images as proprietary .onphoto files and does so automatically. There is no Save command, only a final Export command. As such, it is possible to make changes you then decide you don’t like … but it’s too late! The image has already been saved, overwriting your earlier good version. Nor can you use Save As to pick a file name of your choice. Annoying!
Opening a layered .onphoto file (even with ON1 itself already open) can take a minute or more to render and become editable.
Once you are happy with an image, you can Export the final .onphoto version as a layered .PSD file, but the opacities of the masks ON1 exports to the Photoshop layers may not match the ones you had back in ON1. So the exported .PSD file doesn’t look like what you were working on. That’s a bug.
Only exporting a flattened TIFF file gets you a result that matches your ON1 file, but it is now flattened.
Bugs and Cost
I encountered a number of other bugs, ones bad enough to lock up ON1 now and then. I’ve even seen ON1’s own gurus encounter bugs with masking during their live tutorials. These will no doubt get fixed in 2019.x upgrades over the next few months.
But by late 2019 we will no doubt be offered ON1 Photo RAW 2020 for another $80 upgrade fee, on top of the original $100 to $120 purchase price. True, there’s no subscription, but ON1 still carries a modest annual cost, presuming you want the latest features.
Now, I have absolutely no problem with that, and ON1 2019 is a significant improvement.
However, I found that for astrophotography it still isn’t there yet as a complete replacement for Adobe.