I present my top 10 tips for capturing time-lapses of the moving sky.
If you can take one well-exposed image of a nightscape, you can take 300. There’s little extra work required, just your time. But if you have the patience, the result can be an impressive time-lapse movie of the night sky sweeping over a scenic landscape. It’s that simple.
Or is it?
Here are my tips for taking time-lapses, in a series of “Do’s” and “Don’ts” that I’ve found effective for ensuring great results.
But before you attempt a time-lapse, be sure you can first capture well-exposed and sharply focused still shots. Shooting hundreds of frames for a time-lapse will be a disappointing waste of your time if all the images are dark and blurry.
For that reason many of my tips apply equally well to shooting still images. But taking time-lapses does require some specialized gear, techniques, planning, and software. First, the equipment.
NOTE: This article appeared originally in Issue #9 of Dark Sky Travels e-magazine.
TIP 1 — DO: Use a solid tripod
A lightweight travel tripod that might suffice for still images on the road will likely be insufficient for time-lapses. Not only does the camera have to remain rock steady for the length of the exposure, it has to do so for the length of the entire shoot, which could be several hours. Neither wind nor any mid-shoot camera handling, such as swapping out a battery, can be allowed to move it.
The tripod needn’t be massive. For hiking into scenic sites you’ll want a lightweight but sturdy tripod. While a carbon fibre unit is costly, you’ll appreciate its low weight and good strength every night in the field. Similarly, don’t scrimp on the tripod head.
TIP 2 — DO: Use a fast lens
As with nightscape stills, the single best purchase you can make to improve your images of dark sky scenes is not buying a new camera (at least not at first), but buying a fast, wide-angle lens.
Ditch the slow kit zoom and go for at least an f/2.8, if not f/2, lens with 10mm to 24mm focal length. This becomes especially critical for time-lapses, as the fast aperture allows using short shutter speeds, which in turn allows capturing more frames in a given period of time. That makes for a smoother, slower time-lapse, and a shoot you can finish sooner if desired.
TIP 3 — DO: Use an intervalometer
Time-lapses demand an intervalometer to automatically fire the shutter for the 200 to 300 images a typical sequence requires. Many cameras have an intervalometer function built into their firmware. The shutter speed is set by using the camera in Manual mode.
Just be aware that a camera’s 15-second exposure really lasts 16 seconds, while a 30-second shot set in Manual is really a 32-second exposure.
So in setting the interval to provide one second between shots, as I advise below, you have to set the camera’s internal intervalometer for an interval of 17 seconds (for a shutter speed of 15 seconds) or 33 seconds (for a shutter speed of 30 seconds). It’s an odd quirk I’ve found true of every brand of camera I use or have tested.
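The interval arithmetic above can be sketched in a few lines of Python. This is purely my own illustration of the quirk as described; the `internal_interval` helper and the table of actual durations are hypothetical, not from any camera maker's documentation:

```python
# Sketch of the interval math above, assuming the quirk described:
# a nominal 15-second exposure really lasts 16 s, and a nominal
# 30-second exposure really lasts 32 s.

ACTUAL_DURATION = {15: 16, 30: 32}  # nominal -> actual seconds (assumed)

def internal_interval(shutter_speed, gap=1):
    """Interval to set on the built-in intervalometer so the shutter
    stays closed for only `gap` seconds between frames."""
    actual = ACTUAL_DURATION.get(shutter_speed, shutter_speed)
    return actual + gap

print(internal_interval(15))  # 17, matching the 15-second case above
print(internal_interval(30))  # 33, matching the 30-second case above
```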
Alternatively, you can set the camera to Bulb and then use an outboard hardware intervalometer (they sell for $60 on up) to control the exposure and fire the shutter. Test your unit. Its interval might need to be set to only one second, or to the exposure time + one second.
How intervalometers define “Interval” varies annoyingly from brand to brand. Setting the interval incorrectly can result in every other frame being missed and a ruined sequence.
SETTING YOUR CAMERA
TIP 4 — DON’T: Underexpose
As with still images, the best way to beat noise is to give the camera signal. Use a wider aperture, a longer shutter speed, or a higher ISO (or all of the above) to ensure the image is well exposed with a histogram pushed to the right.
If you try to boost the image brightness later in processing you’ll introduce not only the very noise you were trying to avoid, but also odd artifacts in the shadows such as banding and purple discolouration.
With still images we have the option of taking shorter, untrailed images for the sky, and longer exposures for the dark ground to reveal details in the landscape, to composite later. With time-lapses we don’t have that luxury. Each and every frame has to capture the entire scene well.
At dark sky sites, expose for the dark ground as much as you can, even if that makes the sky overly bright. Unless you outright clip the highlights in the Milky Way or in light polluted horizon glows, you’ll be able to recover highlight details later in processing.
After poor focus, the single biggest mistake I see beginners make is underexposure, which results in overly noisy images.
TIP 5 — DON’T: Worry about 500 or “NPF” Exposure Rules
While still images might have to adhere to the “500 Rule” or the stricter “NPF Rule” to avoid star trailing, time-lapses are not so critical. Slight trailing of stars in each frame won’t be noticeable in the final movie when the stars are moving anyway.
So go for rule-breaking, longer exposures if needed, for example if the aperture needs to be stopped down for increased depth of field and foreground focus. Again, with time-lapses we can’t shoot separate exposures for focus stacking later.
Just be aware that the longer each exposure is, the longer it will take to shoot 300 of them.
Why 300? I find 300 frames is a good number to aim for. When assembled into a movie at 30 frames per second (a typical frame rate) your 300-frame clip will last 10 seconds, a decent length of time in a final movie.
You can use a slower frame rate (24 fps works fine), but below 24 the movie will look jerky unless you employ advanced frame blending techniques. I do that for auroras.
How long it will take to acquire the needed 300 frames will depend on how long each exposure is and the interval between them. An app such as PhotoPills (via its Time lapse function) is handy in the field for calculating exposure time vs. frame count vs. shoot length, and providing a timer to let you know when the shoot is done.
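As a rough sketch of that calculation (the function and its name are my own illustration, not anything from PhotoPills):

```python
def shoot_stats(frames=300, exposure=30, interval=1, fps=30):
    """Total shooting time and final clip length for a time-lapse.
    `interval` is the gap between shutter close and the next shutter
    open, as defined in Tip 6 below."""
    shoot_seconds = frames * (exposure + interval)
    clip_seconds = frames / fps
    return shoot_seconds, clip_seconds

shoot, clip = shoot_stats()
print(f"Shoot lasts {shoot / 3600:.1f} hours; clip runs {clip:.0f} seconds")
# 300 frames at 30 s + 1 s gap take about 2.6 hours for a 10-second clip
```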
TIP 6 — DO: Use short intervals
At night, the interval between exposures should be no more than one or two seconds. By “interval,” I mean the time between when the shutter closes and when it opens again for the next frame.
Not all intervalometers define “Interval” that way, but it’s what most people expect it to mean. If you use too long an interval, the stars will appear to jump across the sky, ruining the smooth motion you are after.
In practice, intervals of four to five seconds are sometimes needed to accommodate the movement of motorized “motion control” devices that turn or slide the camera between each shot. But I’m not covering the use of those advanced units here. I cover those options and much, much more in 400 pages of tips, techniques and tutorials in my Nightscapes ebook, linked to above.
However, during the day or in twilight, intervals can be, and indeed need to be, much longer than the exposures. It’s at night with stars in the sky that you want the shutter to be closed as little as possible.
TIP 7 — DO: Shoot Raw
This advice also applies to still images where shooting raw files is essential for professional results. But you likely knew that.
However, with time-lapses some cameras offer a mode that will shoot time-lapse frames and assemble them into a movie right in the camera. Don’t use it. It gives you a finished, pre-baked movie with no ability to process each frame later, an essential step for good night time-lapses. And raw files provide the most data to work with.
So even with time-lapses, shoot raw not JPGs.
If you are confident the frames will be used only for a time-lapse, you might choose to shoot in a smaller S-Raw or compressed C-Raw mode, for smaller files, in order to fit more frames onto a card.
But I prefer not to shrink or compress the original raw files in the camera, as some of them might make for an excellent stacked and layered still image where I want the best quality originals (such as for the ISS over Waterton Lakes example above).
To get you through a long field shoot away from your computer, buy more and larger memory cards. You don’t need costly, superfast cards for most time-lapse work.
PLANNING AND COMPOSITION
TIP 8 — DO: Use planning apps to frame
All nightscape photography benefits from using one of the excellent apps we now have to assist us in planning a shoot. They are particularly useful for time-lapses.
Apps such as PhotoPills and The Photographer’s Ephemeris are great. I like the latter as it links to its companion TPE 3D app to preview what the sky and lighting will look like over the actual topographic horizon from your site. You can scrub through time to see the motion of the Milky Way over the scenery. The Augmented Reality “AR” modes of these apps are also useful, but only once you are on site during the day.
For planning a time-lapse at home I always turn to a “planetarium” program to simulate the motion of the sky (albeit over a generic landscape), with the ability to add in “field of view” indicators to show the view your lens will capture.
You can step ahead in time to see how the sky will move across your camera frame during the length of the shoot. Indeed, such simulations help you plan how long the shoot needs to last until, for example, the galactic core or Orion sets.
Planetarium software helps ensure you frame the scene properly, not only for the beginning of the shoot (that’s easy — you can see that!), but also for the end of the shoot, which you can only predict.
If your shoot will last as long as three hours, do plan to check the battery level and swap batteries before three hours is up. Most cameras, even new mirrorless models, will now last for three hours on a full battery, but likely not any longer. If it’s a cold winter night, expect only one or two hours of life from a single battery.
TIP 9 — DO: Develop one raw frame and apply settings to all
Processing the raw files takes the same steps and settings as you would use to process still images.
With time-lapses, however, you have to do all the processing required within your favourite raw developer software. You can’t count on bringing multiple exposures into a layer-based processor such as Photoshop to stack and blend images. That works for a single image, but not for 300.
I use Adobe Camera Raw out of Adobe Bridge to do all my time-lapse processing. But many photographers use Lightroom, which offers all the same settings and non-destructive functions as Adobe Camera Raw.
For those who wish to “avoid Adobe” there are other choices, but for time-lapse work an essential feature is the ability to develop one frame, then copy and paste its settings (or “sync” settings) to all the other frames in the set.
Not all programs allow that. Affinity Photo does not. Luminar doesn’t do it very well. DxO PhotoLab, ON1 Photo RAW, and the free Raw Therapee, among others, all work fine.
HOW TO ASSEMBLE A TIME-LAPSE
Once you have a set of raws all developed, the usual workflow is to export all those frames as high-quality JPGs, which is what movie assembly programs need. Your raw developing software has to allow batch exporting to JPGs — most do.
However, none of the programs above (except Photoshop and Adobe’s After Effects) will create the final movie, whether it be from those JPGs or from the raws.
So for assembling the intermediate JPGs into a movie, I often use a low-cost program called TLDF (TimeLapse DeFlicker) available for MacOS and Windows (timelapsedeflicker.com). It offers advanced functions such as deflickering (i.e. smoothing slight frame-to-frame brightness fluctuations) and frame blending (useful to smooth aurora motions or to purposely add star trails).
While there are many choices for time-lapse assembly, I suggest using a program dedicated to the task and not, as many do, a movie editing program. For most sequences, the latter makes assembly unnecessarily difficult and makes it harder to set key parameters such as frame rate.
TIP 10 — DO: Try LRTimelapse for more advanced processing
Get serious about time-lapse shooting and you will want — indeed, you will need — the program LRTimelapse (LRTimelapse.com). A free but limited trial version is available.
This powerful program is for sequences where one setting will not work for all the frames. One size does not fit all.
Instead, LRTimelapse allows you to process a few keyframes throughout a sequence, say at the start, middle, and end. It then interpolates all the settings between those keyframes to automatically process the entire set of images to smooth (or “ramp”) and deflicker the transitions from frame to frame.
This is essential for sequences where the lighting changes during the shoot (say, the Moon rises or sets), and for so-called “holy grails.” Those are advanced sequences that track from daylight or twilight to darkness, or vice versa, over a wide range of camera settings.
However, LRTimelapse works only with Adobe Lightroom or the Adobe Camera Raw/Bridge combination. So for advanced time-lapse work Adobe software is essential.
A Final Bonus Tip
Keep it simple. You might aspire to emulate the advanced sequences you see on the web, where the camera pans and dollies during the movie. I suggest avoiding complex motion control gear at first to concentrate on getting well-exposed time-lapses with just a static camera. That alone is a rewarding achievement.
But before that, first learn to shoot still images successfully. All the settings and skills you need for a great looking still image are needed for a time-lapse. Then move on to capturing the moving sky.
I end with a link to an example music video, shot using the techniques I’ve outlined. Thanks for reading and watching. Clear skies!
The Beauty of the Milky Way from Alan Dyer on Vimeo.
A new low-cost sky tracker promises to simplify not only tracking the sky but also taking time-lapses panning along the horizon. It works but …
If you are an active nightscape photographer chances are your social media feeds have been punctuated with ads for this new low-cost tracker from MoveShootMove.com.
For $200, much less than popular trackers from Sky-Watcher and iOptron, the SiFo unit (as it is labelled) offers the ability to track the sky, avoiding any star trails. That alone would make it a bargain, and useful for nightscape and deep-sky photographers.
But it also has a function for panning horizontally, moving incrementally between exposures, thus the Move-Shoot-Move designation. The result is a time-lapse movie that pans along the horizon, but with each frame with the ground sharp, as the camera moves only between exposures, not during them.
Again, for $200 this is an excellent feature lacking in trackers like the Sky-Watcher Star Adventurer or iOptron SkyTracker. The Sky-Watcher Star Adventurer Mini does, however, offer both tracking and “move-shoot-move” time-lapse functions, but at a cost of $300 to $400 U.S., depending on accessories.
All these functions are provided in a unit that is light (weighing 700 grams with a tripod plate and the laser) and compact (taking up less space in your camera bag than most lenses). By comparison, the Star Adventurer Mini weighs 900 grams with the polar scope, while the original larger Star Adventurer is 1.4 kg, double the MSM’s weight.
Note that the MSM’s advertised weight of 445 grams does not include the laser or a tripod plate, two items you need to use it. So 700 grams is a more realistic figure: still light, but not lighter than the competition by as much as you might be led to believe.
Nevertheless, the MSM’s small size and weight make it attractive for travel, especially for flights to remote sites. Construction is solid and all-metal. This is not a cheap plastic toy.
But does it work? Yes, but with several important caveats that might be a concern for some buyers.
What I Tested
I purchased the Basic Kit B package for $220 U.S., which includes a small case, a laser pointer and bracket for polar alignment (and with a small charger for the laser’s single 3.7-volt battery), and with the camera sync cable needed for time-lapse shooting.
I also purchased the new “button” model, not the older version that used a knob to set various tracking rates.
The ball head needed to go on top of the tracker is something you supply. The kit does come with two 3/8-inch stud bolts and a 3/8-to-1/4-inch bushing adapter, for placing the tracker on tripods in the various mounting configurations I show below.
The first units were labelled as “SiFo,” but current units now carry the Gauda brand name. I’ll just call it the MSM.
I purchased the gear from the MSM website, and had my order fulfilled and shipped to me in Canada from China with no problems.
Tracking the Sky in Nightscapes
The attraction is its tracking function, allowing a camera to follow the sky and take exposures longer than any dictated by “500” or “NPF” Rules to avoid any star trailing.
Exposures can be a minute or more to record much more depth and detail in the Milky Way, though the ground will blur. But blending tracked sky exposures with untracked ground exposures gets around that, and with the MSM it’s easy to turn on and off the tracking motor, something not possible with the low-cost wind-up Mini Track from Omegon.
The illustrations and instructions (in a PDF well-hidden off the MSM Buy page) show the MSM mounted using the 1/4-20 bolt hole on the side of the unit opposite the LED-illuminated control panel. While this seems to be the preferred method, in the first unit I tested I found it produced serious mis-tracking problems.
With a Canon 6D MkII and 50mm f/1.4 lens (not a particularly heavy combination), the MSM’s gears would not engage and start tracking until after about 5 minutes. The first exposures were useless. This was also the case whenever I moved the camera to a new position to re-frame the scene or sky. Again, the first few minutes produced no or poor tracking until the gears finally engaged.
This would be a problem when taking tracked/untracked sets for nightscapes, as images need to be taken in quick succession. It’s also just plain annoying.
However, see the UPDATE at the end for the performance of a new Gauda-branded unit that was sent to me.
The solution was to mount the MSM using the 3/8-inch bolt hole on the back plate of the tracker, using the 1/4-20 adapter ring to allow it to attach to my tripod head. This still allowed me to tip the unit up to polar align it.
Tracking was now much more consistent, with usually only the first exposure badly trailed. Subsequent exposures all tracked, though with varying degrees of accuracy, as I show below.
When used as a tracker, you need to control the camera’s exposure time with an external intervalometer you supply, to allow setting exposures over 30 seconds long.
The MSM offers an N and S setting, the latter for use in the Southern Hemisphere. A 1/2-speed setting turns the tracker at half the normal sidereal rate, useful for nightscapes as a compromise speed to provide some tracking while minimizing ground blurring.
For any tracker to track, its rotation axis has to be aimed at the Celestial Pole, near Polaris in the Northern Hemisphere, and near Sigma Octantis in the Southern Hemisphere.
I chose the laser pointer option for this, rather than the polar alignment scope. The laser attaches to the side of the MSM using a small screw-on metal bracket so that it points up along the axis of rotation, the polar axis.
The laser is labeled as a 1 mW unit, but it is far brighter than any 1 mW laser I’ve used. This does make it bright, allowing the beam to show up even when the sky is not dark. The battery is rechargeable and a small charger comes with the laser. Considering the laser is just a $15 option, it’s a bargain. But ….
UPDATE ADDED SEPTEMBER 1
Since I published the review, I have had the laser professionally tested, and it measured an output of 45 milliwatts. Yet it is labeled as being under 1 milliwatt. This is a serious misrepresentation of the specs, done I can only assume to circumvent import restrictions. In Canada it is now illegal to import, own, or use any green laser over 5 milliwatts, a power level that would be sufficient for the intended use of polar aligning. A 45 mW laser is outright illegal.
So be warned, use of this laser will be illegal in some areas. And use of any green laser will be illegal close to airports, and outlawed entirely in some jurisdictions such as Australia, a fact the MSM website mentions.
The legal alternative is the optical polar alignment scope. I already have several of those, but my expectation that I could use one with the bracket supplied with the laser was dashed: the bracket’s hole is too narrow to accept any of the other polar alignment scopes I have, which are all standard items. If you want a polar scope, buy theirs for $70.
However, if you can use it where you live, the laser works well enough, allowing you to aim the tracker at the Pole just by eye. For the wide lenses the tracker is intended to be used with, eyeball alignment proved good enough.
Just be very, very careful not to accidentally look down the beam. Seriously. It is far too easy to do by mistake, but doing so could damage your eye in moments.
Tracking the Sky in Deep-Sky Images
How well does the MSM actually track? In tests of the original SiFo unit I bought, and in sets of exposures with 35mm, 50mm, and 135mm lenses, and with the tracker mounted on the back, I found that 25% to 50% of the images showed mis-tracking. Gear errors still produced slightly trailed stars. This gear error shows itself more as you shoot with longer focal lengths.
The MSM is best for what it is advertised as: a tracker for nightscapes with forgiving wide-angle lenses in the 14mm to 24mm range. With longer lenses, expect to throw away a good number of exposures as unusable. Take twice as many as you think you might need.
With a 135mm lens taking Milky Way closeups, more than half the shots were badly trailed. Really badly trailed. This is not from poor polar alignment, which produces a gradual drift of the frame, but from errors in the drive gears, and random errors at that, not periodic errors.
To be fair, this is often the case with other trackers as well. People always want to weight them down with heavy and demanding telephotos for deep-sky portraits, but that’s rarely a good idea with any tracker. They are best with wide lenses.
That said, I found the MSM’s error rate and amount to be much worse than with other trackers. With the Star Adventurer models and a 135mm lens for example, I can expect only 20% to 25% of the images to be trailed, and even then rarely as badly as what the MSM exhibited.
See the UPDATE at the end for the performance of the replacement Gauda-branded unit sent to me with the promise of much improved tracking accuracy.
Yes, enough shots worked to be usable, but it took using a fast f/2 lens to keep exposure times down to a minute to provide that yield. Users of slow f/5.6 kit-zoom lenses will struggle trying to take deep-sky images with the MSM.
In short, this is a low-cost tracker and it shows. It does work, but not as well as the higher-cost competitors. But restrict it to wide-angle lenses and you’ll be fine.
Panning the Ground
The other mode the MSM can be used in is as a time-lapse motion controller. Here you mount the MSM horizontally so the camera turns parallel to the horizon (or it can be mounted vertically for vertical panning, a mode I rarely use and did not test).
This is where the Move-Shoot-Move function comes in.
The supplied Sync cable goes from the camera’s flash hot shoe to the MSM’s camera jack. What happens is that when the camera finishes an exposure it sends a pulse to the MSM, which then quickly moves while the shutter is closed by the increment you set.
There is a choice of 4 speeds, marked in degrees-per-move: 0.05°, 0.2°, 0.5°, and 1.0°. For example, as the movie below shows, taking 360 frames at the 1° speed results in a complete 360° turn.
The MSM does the moving, but all the shutter speed control and intervals must be set using a separate intervalometer, either one built into the camera, or an outboard hardware unit. The MSM does not control the camera shutter. In fact, the camera controls the MSM.
Intervals should be set about 2 seconds longer than the shutter speed, to allow the MSM to perform its move and settle.
This connection between the MSM and camera worked very well. It is unconventional, but simple and effective.
Too Slow or Too Fast
The issue is the limited choice of move speeds. I found the 0.5° and 1° speeds much too fast for night use, except perhaps for special effects in urban cityscapes. Even in daytime use, when exposure times are very short, the results are dizzying, as I show below.
Even the 0.2°-per-move speed I feel is too fast for most nightscape work. Over the 300 exposures one typically takes for a time-lapse movie, that speed will turn the MSM (300 x 0.2°) = 60 degrees. That’s a lot of motion for 300 shots, which will usually be rendered out at 24 or 30 frames per second for a clip that lasts 10 to 12 seconds. The scene will turn a lot in that time.
On the other hand, the 0.05°-per-move setting is rather slow, producing a turn of (300 x 0.05°) = 15° during the 300 shots.
That works, but with all the motion controllers I’ve used — units that can run at whatever speed they need to get from the start point to the end point you set — I find a rate of about 0.1° per move is what works best for a movie that provides the right amount of motion. Not too slow. Not too fast. Just right.
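The trade-off between move speed, total turn, and on-screen panning rate can be sketched as a back-of-envelope calculation (the `pan_summary` helper is my own illustration of the arithmetic above):

```python
# Back-of-envelope pan math: total turn over the shoot, and how
# fast the final clip appears to pan once assembled at a given fps.
def pan_summary(deg_per_move, frames=300, fps=30):
    total_turn = deg_per_move * frames    # degrees turned over the shoot
    clip_seconds = frames / fps           # length of the final movie
    pan_rate = total_turn / clip_seconds  # on-screen degrees per second
    return total_turn, clip_seconds, pan_rate

for speed in (0.05, 0.1, 0.2):
    turn, clip, rate = pan_summary(speed)
    print(f"{speed}°/move: {turn:.0f}° total turn, {rate:.1f}°/s on screen")
```

At 300 frames this reproduces the figures above: 15° for the 0.05° setting and 60° for the 0.2° setting.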
UPDATE ADDED DECEMBER 21, 2019
From product photos on the MoveShootMove.com website, it appears that the tracker is now labeled MSM, as it should have been all along.
Most critically, perhaps in response to this review and my comments here, the time-lapse speeds have been changed to 0.05, 0.075, 0.1 and 0.125 degrees per move, adding the 0.1°/move speed I requested below and deleting the overly fast 0.5° and 1.0° speeds.
Plus it appears the new units have the panel labels printed the other way around so they are not upside down for most mounting situations.
I have not tested this new version, but these speeds sound much more usable for panning time-lapses. Bravo to MSM for listening!
Following the Sky in a Time-Lapse
The additional complication is trying to get the MSM to also turn at the right rate to follow the sky — for example, to keep the galaxy core in frame during the time-lapse clip. I think doing so produces one of the most effective time-lapse sequences.
But to do that with any device requires turning at a rate of 15° per hour, the rate the sky moves from east to west.
Because the MSM provides only set fixed speeds, the only way you have of controlling how much it moves over a given amount of time, such as an hour, is to vary the shutter speed.
I found that to get the MSM to follow the Milky Way in a time-lapse using the 0.05° rate and shooting 300 frames required shooting at a shutter speed of 12 seconds. No more, no less.
Do the Math
Where does that number come from?
At its rate of 0.05°/move, the MSM will turn 15° over 300 shots. The sky moves 15° in one hour, or 3600 seconds. So to fit 300 shots into 3600 seconds, each shot can be no longer than (3600/300) = 12 seconds.
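The same arithmetic generalizes to any move rate. This small sketch (the constant and function names are mine) computes the longest shutter speed at which a given setting keeps pace with the sky:

```python
SIDEREAL_RATE = 15.0  # degrees per hour the sky moves east to west

def sky_follow_exposure(deg_per_move):
    """Longest shutter speed (seconds) at which a given degrees-per-move
    setting keeps pace with the sky's 15-degrees-per-hour motion."""
    return deg_per_move / SIDEREAL_RATE * 3600

print(f"{sky_follow_exposure(0.05):.0f} s")  # 12 s, as derived above
print(f"{sky_follow_exposure(0.2):.0f} s")   # 48 s
print(f"{sky_follow_exposure(0.1):.0f} s")   # 24 s
```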
The result works, as I show in the sampler movie.
But 12 seconds is a rather short shutter speed on a dark, moonless night with the Milky Way.
For properly exposed images you would need to shoot at very fast apertures (f/1.4 to f/2) and/or high and noisy ISO speeds. Neither are optimal. But they are forced upon you by the MSM’s restricted rates.
Using the faster 0.2° rate (of the original model) yields a turn of 60° over 300 shots. That’s four hours of sky motion. So each exposure now has to be 48 seconds long for the camera to follow the sky, four times longer because the drive rate is now four times faster.
A shutter speed of 48 seconds is a little too long in my opinion. Stars in each frame will trail. Plus a turn of 60° over 300 shots is quite a lot, producing a movie that turns too quickly.
By far the best speed for motion control time-lapses would be 0.1° per move. That would allow 24-second exposures to follow the sky, allowing a stop less in aperture or ISO speed. (DECEMBER 21 UPDATE: That speed seems to now be offered.)
Yes, having only a limited number of pre-wired speeds does make the MSM much easier to program than devices like the Star Adventurer Mini or SYRP Genie Mini that use wireless apps to set their functions. No question, the MSM is better suited to beginners who don’t want to fuss with lots of parameters.
As it is, getting a decent result requires some math and juggling of camera settings to make up for the MSM’s limited choices of speeds.
Time-Lapse Movie Examples
This compilation shows examples of daytime time-lapses taken at the fastest and dizzying 0.5° and 1.0° speeds, and night time-lapses taken at the slower speeds. The final clip is taken at 0.05°/move and with 12-second exposures, a combination that allowed the camera to nicely follow the Milky Way, albeit at a slow pace. Taking more than the 300 frames used here would have produced a clip that turned at the same rate, but lasted longer.
The MSM is powered off an internal rechargeable battery, which can be charged from any 5-volt charger you have from a mobile phone.
The MSM uses a USB-C jack for the power cable, but a USB-A to USB-C cord is supplied, handy as you might not have one if you don’t have other USB-C devices.
The battery lasted for half a dozen or more 300-shot time-lapses, enough to get you through at least 2 or 3 nights of shooting. However, my testing was done on warm summer nights. In winter battery life will be less.
While the built-in battery is handy, should you find the battery level low in the field (the N and S switches blink as a warning), you can’t just swap in fresh batteries. Just remember to charge up before heading out. Alternatively, the unit can be charged from an external 5-volt battery pack such as those used to prolong cell phone life.
The MSM does not offer, nor does it promise, any form of automated panorama shooting. This is where the device turns by, say, 15° to 45° between shots, to shoot the segments for a still-image panorama. More sophisticated motion controllers from SYRP and Edelkrone offer that function, including the ability to mate two devices for automated multi-tier panoramas.
Nor does the MSM offer the more advanced option of ramping speeds up and down at the start and end of a time-lapse. It moves at a constant rate throughout.
While some of the shortcomings could perhaps be fixed with a firmware update, there is no indication anywhere that its internal firmware can be updated through the USB-C port.
UPDATE ADDED OCTOBER 7, 2019
Since I published the review, MSM saw the initial test results and admitted that the earlier units like mine (ordered in June) exhibited large amounts of tracking error. They sent me a replacement unit, now branded with the Gauda label. According to MSM it contains a more powerful motor promised to improve tracking accuracy and make it possible to take images with lenses as long as 135mm.
I’m sorry to report it didn’t.
In tests with the 135mm lens the new, improved MSM still showed lots of tracking error, to the point that images taken with a lens as long as this were mostly unusable.
Tap or click on the images to download full-res versions.
The short movie above takes the full-frame images from the zenith set of 24 frames taken over 48 minutes and turns them into a little time-lapse. It shows how the mechanism of the MSM seems to be wobbling the camera around in a circle, creating the mis-tracking.
Comparison with the Star Adventurer
As a comparison, the next night I used a Sky-Watcher Star Adventurer (the full-size model, not the Mini) to shoot the same fields in the northeast and overhead with the same 135mm lens and with the same ball-head, to ensure the ball-head was not at fault. Here are the results:
The Star Adventurer performed much better. Most images were well-tracked. Even on those frames that showed trailing, it was slight. The Star Adventurer is a unit you can use to take close-ups of deep-sky fields with telephoto lenses, if that’s your desire.
By contrast, the MSM is best used — indeed, I feel can only be used practically — with wide-angle lenses and with exposures under 2 minutes. Here’s a set taken with a 35mm lens, each for 2 minutes.
With the more forgiving 35mm lens, while more images worked, the success rate was still only 50%.
What I did not see with the new Gauda unit was the 5-minute delay before the gears meshed and tracking began. The new, more powerful motor has resolved that issue; the Gauda model starts tracking right away.
But it is still prone to significant enough drive errors that stars are often trailed even with a 35mm lens (this was on a full-frame Canon 6D MkII).
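To see why wide lenses are so much more forgiving of drive error, it helps to convert an angular tracking error into pixels of trailing. Here is a minimal sketch, assuming a full-frame camera with roughly 6.5-micron pixels (about the pitch of the Canon 6D MkII) and an illustrative, not measured, 30-arcsecond drive error:

```python
# Sketch: why the same drive error is forgiving on wide lenses but
# ruinous on telephotos. The 30-arcsecond error figure is illustrative,
# not a measured spec of the MSM; the 6.5-micron pixel pitch assumes a
# full-frame camera like the Canon 6D MkII.

def plate_scale(focal_mm, pixel_um):
    """Arcseconds of sky per pixel: 206.265 * pitch (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

def trail_pixels(error_arcsec, focal_mm, pixel_um=6.5):
    """How many pixels a given angular tracking error smears a star across."""
    return error_arcsec / plate_scale(focal_mm, pixel_um)

for focal in (24, 35, 135):
    px = trail_pixels(30, focal)
    print(f'{focal:>3}mm lens: 30" of drive error -> {px:.1f} px of trailing')
```

At 135mm that hypothetical error smears a star across about three pixels, plainly visible; at 24mm it stays around half a pixel and disappears.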
UPDATED CONCLUSIONS (December 21, 2019)
The MSM tracker is low-cost, well-built, and compact for easy packing and travel. It performs its advertised functions well enough to allow users to get results, either tracked images of the Milky Way and constellations, or simple motion-control time-lapses.
But it is best used — indeed I would suggest can only be used — with wide-angle lenses for tracked Milky Way nightscapes. Even then, take more shots than you think you need to be sure enough are well-tracked and usable.
It can also be used for simple motion-control time-lapses, provided you do the math to get it to turn by the amount you want, working around the too-slow or too-fast speeds. The new 0.1° per move speed (added in models as of December 2019) seems a reasonable rate for most time-lapses.
However, I think aspiring time-lapse photographers will soon outgrow the MSM’s limitations for motion-control sequences. But it can get you started.
If you really value its compactness and your budget is tight, the MSM will serve you well enough for tracked nightscape shooting with wide-angle lenses.
But if you wish to take close-ups of starfields and deep-sky objects with longer lenses, consider a unit like the Sky-Watcher Star Adventurer for its lower tracking errors. Or the Star Adventurer Mini for its better motion-control time-lapse functions.
Panoramas featuring the arch of the Milky Way have become the icons of dark sky locations. “Panos” can be easy to shoot, but stitching them together can present challenges. Here are my tips and techniques.
My tutorial complements the much more extensive information I provide in my eBook, at right. Here, I’ll step through techniques for simple to more complex panoramas, dealing first with essential shooting methods, then reviewing the workflows I use for processing and stitching panoramas.
What software works best depends on the number of segments in your panorama, or even on the focal length of the lens you used.
PART 1 — SHOOTING
What Equipment Do You Need?
Nightscape panoramas don’t require any more equipment than what you likely already own for shooting the night sky. For Milky Way scenes you need a fast lens and a solid tripod, but any good DSLR or mirrorless camera will suffice.
The tripod head can be either a ball head or a three-axis head, but it should have a horizontal axis marked with a degree scale. This allows you to move the camera at a correct and consistent angle from segment to segment. I think that’s essential.
What you don’t need is a special, and often costly, panorama head. These rotate the camera around the so-called “nodal point” inside the lens, avoiding parallax shifts that can make it difficult to align and stitch adjacent frames. Parallax shift is certainly a concern when shooting interiors or any scenes with prominent content close to the camera. However, in most nightscapes our scene content is far enough away that parallax simply isn’t an issue.
Though not a necessity, I find a levelling base a huge convenience. As I show above, this specialized ball head goes under the usual tripod head and makes it easy to level the main head. It eliminates all the fussing with trial-and-error adjustments of the length of each tripod leg.
Then to level the camera itself, I use the electronic level now in most cameras. Or, if your camera lacks that feature, an accessory bubble level clipped into the camera’s hot shoe will work.
Having the camera level is critical. It can be tipped up, of course, but not tilted left-right. If it isn’t level the whole panorama will be off kilter, requiring excessive straightening and cropping in processing, or the horizon will wave up and down in the final stitch, perhaps causing parts of the scene to go missing.
NOTE: Click or tap on the panorama images to open a high-res version for closer inspection.
Shooting Horizon Panoramas
While panoramas spanning the entire sky might be what you are after, I suggest starting simpler, with panos that take in just a portion of the 360° horizon and only a part of the 180° of sky. These “partial panos” are great for auroras (above) or noctilucent clouds (below), or for capturing just the core of the Milky Way over a landscape.
The key to all panorama success is overlap. Segments should overlap by 30 to 50 percent, enabling the stitching software to align the segments using the content common to adjacent frames. Contrary to what some users report, I’ve never found an issue with having too much overlap, where the same content is present on several frames.
For a practical example, let’s say you shoot with a 24mm lens on a full-frame camera, or a 16mm lens on a cropped-frame camera. Both combinations yield a field of view across the long dimension of the frame of roughly 80°, and across the short dimension of the frame of about 55°.
That means if you shoot with the camera in “landscape” orientation, panning the camera by 40° between segments would provide a generous 50 percent overlap. The left half of each segment will contain the same content as the right half of the previous segment, if you take your panos by turning from left to right.
TIP: My habit is to always shoot from left to right, as that puts the segments in the correct order adjacent to each other when I view them in browser programs such as Lightroom or Adobe Bridge, with images sorted in chronological order (from first to last images in a set) as I typically prefer. But the stitching will work no matter which direction you rotate the camera.
In the example of a 24mm lens and a camera in landscape orientation you could turn at a 45° or 50° spacing and yield enough overlap. However, turning the camera at multiples of 15° is usually the most convenient, as tripod heads are often graduated with markings at 5° increments, and labeled every 15° or 30°.
Some will have coarser and perhaps unlabeled markings. If so, determine what each increment represents, then take care to move the camera consistently by the amount that will provide adequate overlap.
To maximize the coverage of the sky while still framing a good amount of foreground, a common practice is to shoot panoramas with the camera in portrait orientation. That provides more vertical but less horizontal coverage for each frame. In that case, for adequate overlap with a 24mm lens and full-frame camera shoot at 30° spacings.
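The overlap arithmetic above can be sketched in a few lines. This assumes an ideal rectilinear lens, whose trigonometric field of view works out slightly narrower than the rounded 80° and 55° figures quoted:

```python
import math

# Sketch of the overlap arithmetic for rectilinear lenses.
# FOV across a sensor dimension d (mm) at focal length f (mm) is
# 2 * atan(d / (2*f)). For 24mm on full frame (36 x 24 mm sensor)
# this gives ~74 x ~53 degrees, close to the rough 80 x 55 quoted above.

def fov_deg(sensor_mm, focal_mm):
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def overlap_fraction(fov, spacing_deg):
    """Fraction of each frame shared with its neighbour for a given pan angle."""
    return 1 - spacing_deg / fov

fov_long = fov_deg(36, 24)   # landscape orientation, horizontal FOV
fov_short = fov_deg(24, 24)  # portrait orientation, horizontal FOV

print(f"Landscape, 40 deg spacing: {overlap_fraction(fov_long, 40):.0%} overlap")
print(f"Portrait,  30 deg spacing: {overlap_fraction(fov_short, 30):.0%} overlap")
print(f"Segments for 360 deg at 45 deg spacing: {math.ceil(360 / 45)}")
```

Both spacings land comfortably in the 30-to-50-percent overlap zone, which is why those are the numbers I use in the field.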
TIP: When shooting a partial panorama, for example just to the south for the Milky Way, or to the north for the aurora borealis, my practice is to always shoot a segment farther to the left and another to the right of the main scene. Shoot more than you need. Those end segments can get distorted when stitching, but if they don’t contain essential content, they can be cropped out with no loss, leaving your main scene clean and undistorted.
Shooting with a longer lens, such as a 50mm (or 35mm on a cropped frame camera), will yield higher resolution in the final panorama, but you will have much less sky coverage, unless you shoot multiple tiers, as I describe below. You would also have to shoot more segments, at 15° to 20° spacings, taking longer to complete the shoot.
As the number of segments goes up, shooting fast becomes more important, both to minimize how much the sky moves from segment to segment and during each exposure itself, which aids stitching. Remember, the sky appears to be turning from east to west, but the ground isn’t. So a prolonged shoot can cause problems later as the stitching software tries to align on either the fixed ground or the moving stars.
Panoramas on moonlit nights, as I show above, are relatively easy because exposures are short.
Milky Way panoramas taken on dark, moonless nights are tougher. They require fast apertures (f/2 to f/2.8) and high ISOs (ISO 3200 to 6400), to keep individual exposures no more than 30 to 40 seconds long.
Noise lives in the dark foregrounds, so I find it best to err on the side of overexposure, to ensure adequate exposure for the ground, even if it means the sky is bright and the stars slightly trailed. It’s the “Expose to the Right” philosophy I espouse at length in my eBook.
Advanced users can try shooting in two passes: one at a low ISO and with a long exposure for the fixed ground, and another pass at a higher ISO and a shorter exposure for the moving sky. But assembling such a set will take some deft work in Photoshop to align and mask the two stitched panos. None of the examples here are “double exposures.”
Shooting 360° Panoramas
More demanding than partial panoramas are full 360° panoramas, as above. Here I find it is best to start the sequence with the camera aimed toward the celestial pole (to the north in the northern hemisphere, or to the south in the southern hemisphere). That places the area of sky that moves the least over time at the two ends of the panorama, again making it easier for software to align segments, with the two ends taken farthest apart in time meeting up in space.
In our 24mm lens example, to cover the entire 360° scene shooting with a 45° spacing would require at least eight images (8 x 45 = 360). I used 10 above. Using that same lens with the camera in portrait orientation will require at least 12 segments to cover the entire 360° landscape.
Shooting 360° by 180° Panoramas
More demanding still are 360° panoramas that encompass the entire sky, from the ground below the horizon to the zenith overhead. Above is an example.
To do that with a single row of images requires shooting in portrait orientation with a very wide 14mm rectilinear lens on a full-frame camera. That combination has a field of view of about 100° across the long dimension of the sensor.
That sounds generous, but reaching up to the zenith at an altitude of 90° means only a small portion of the landscape will be included along the bottom of the frame.
To provide an even wider field of view to take in more ground, I use full-frame fish-eye lenses on my full-frame cameras, such as Canon’s old 15mm lens (as shown at top) or Rokinon’s 12mm. Even a circular-format fish-eye will work, such as an 8mm on a full-frame camera or 4.5mm on a cropped-frame camera.
All such fish-eye lenses produce curved horizons, but they take in a wide swath of sky, making it possible to include lots of foreground while reaching well past the zenith. Conventional panorama assembly programs won’t work with such wide and distorted segments, but the specialized programs described below will.
Shooting Multi-Tier Panoramas
The alternative technique for “all-sky” panos is to shoot multiple tiers of images: first, a lower row covering the ground and partway up the sky, followed by an upper row completing the coverage of just the sky at top.
The trick is to ensure adequate overlap both horizontally and vertically. With the camera in landscape orientation that will require a 20mm lens for full-frame cameras, or a 14mm lens for cropped-frame cameras. Either combination can cover the entire sky plus lots of foreground in two tiers, though I usually shoot three, just to be sure!
Shooting with longer lenses provides incredible resolution for billboard-sized “gigapan” blow-ups, but will require shooting three, if not more, tiers, each with many segments. That starts to become a chore to do manually. Some motorized assistance really helps when shooting multi-tier panoramas.
Automating the Pan Shooting
The dedicated pano shooter might want to look at a device such as the GigaPan Epic models or the iOptron iPano (shown below), all about $800 to $1,000.
I’ve tested the latter and it works great. You program in the lens, overlap, and angular sweep desired. The iPano works out how many segments and tiers will be required, and automates the shooting, firing the shutter for the duration you program, then moving to the new position, firing again, and so on. I’ve shot four-tier panos effortlessly and with great success.
However, these devices are generally bigger and heavier than I care to heft around in the field.
Instead, I use the original Genie Mini from SYRP (below), a $250 device primarily for shooting motion-control time-lapses. But the wireless app that programs the Genie also has a panorama function that automatically slews the camera horizontally between exposures, again based on the lens, overlap, and angular sweep you enter. The just-introduced Genie Mini II is similar, but with even more capabilities for camera control.
While combining two Genie Minis allows programming in a vertical motion as well, I’ve been using just a regular tripod head atop the Mini to manually move the camera vertically between each of the horizontal tiers. I don’t find the one or two moves needed to go from tier to tier too arduous to do manually, and I like to keep my field gear compact and easy to use.
The Genie Mini (now replaced by the Mini II) works great and I highly recommend it, even if panoramas are your only interest. But it is also one of the best, yet most affordable, single-axis motion control devices on the market for time-lapse work.
When to Shoot the Milky Way
While the right gear and techniques are important, go out on the wrong night and you won’t be able to capture the Milky Way as the great sweeping arch you might have hoped for.
In the northern hemisphere the Milky Way arches directly overhead from late July to October for most of the night. That’s fine for spherical fish-eye panoramas, but in rectangular images when the Milky Way is overhead it gets stretched and distorted across the top of the final panorama. For example, in the Bow Lake by Night panorama above, I cropped out most of this distorted content.
The prime season for Milky Way arches is therefore before the Milky Way climbs overhead, while it is still across the eastern sky, as above. That’s on moonless nights from March to early July, with May and June best for catching it in the evening, and not having to wait up until dawn, as is the case in early spring.
TIP: The best way to figure out when and where the Milky Way will appear is to use a desktop planetarium program such as Starry Night or Sky Safari or the free Stellarium. All can realistically depict the Milky Way for your location and date. You can then step through time to see how the Milky Way will move through the night, and how it will frame with your camera and lens combination using the “field of view” indicators the programs provide.
When shooting in the southern hemisphere I like the April to June period for catching the sweep of the southern Milky Way and the galactic core rising in late evening. By contrast, during mid austral winter in July and August the galactic centre shines directly overhead in the evening, a spectacular sight to be sure, but tough to capture in a panorama except in a spherical or fish-eye scene.
That said, I always like to put in a good word for the often sadly neglected winter Milky Way (the summer Milky Way for those “down under”). While lacking the spectacle of the galactic core in Sagittarius, the “other” Milky Way has its attractions such as Orion and Taurus. The best months for a panorama with that Milky Way in an arch across a rectangular frame are January to March. The Zodiacal Light can be a bonus at that season, as it was above.
TIP: Always shoot raw files for the widest dynamic range and flexibility in recovering details in the highlights and shadows. Even so, each segment has to be well exposed and focused out in the field.
And unless you are doing a “two-pass” double exposure, always shoot each segment with identical exposure settings. This is especially critical for bright sky scenes such as twilights or moonlit scenes. Vary the exposure and you might get unsightly banding at the seams.
There’s nothing worse than getting home only to find one or more segments was missed, or was out of focus or badly exposed, spoiling the set.
PART 2 — STITCHING
Developing Panorama Segments
Once you have your panorama segments, the next step is to develop and assemble them. My workflow begins with developing each of those segments identically.
NOTE: Click or tap on the software screen shots to open a high-res version for closer inspection.
I like to develop each segment’s raw file as fully as possible at this first stage in the workflow, applying noise reduction, colour correction, contrast adjustments, shadow and highlight recovery, and any special settings such as dehaze and clarity that can make the Milky Way pop.
I also apply lens corrections to each raw image. While some feel doing so produces problems with stitching later on, I’ve never found that. I prefer to have each frame with minimal vignetting and distortion when going into stitching. I use Adobe Camera Raw out of Adobe Bridge, but Lightroom Classic has identical functions.
There are several other raw developers that can work well at this stage. In other tests I’ve conducted, Capture One and DxO PhotoLab stand out as producing good results on nightscapes. See my blog from 2017 for more on software choices.
The key is developing each raw file identically, usually by working on one segment, then copying and pasting its settings to all the others in a set. Not all raw developers have this “Copy Settings” function. For example, Affinity Photo does not. It works very well as a layer-based editor to replace Photoshop, but is crude in its raw developing “Persona” functions.
While panorama stitching software will apply corrections to smooth out image-to-image variations, I find it is best to ensure all the segments look as similar as possible at the raw stage for brightness, contrast, and colour correction.
Do be aware that among social media groups and chat rooms devoted to nightscape imaging a lot of myth and misinformation abounds about how to process and stitch panoramas, and why some don’t work. Someone having a problem with a particular pano will ask why, and get ten different answers from well-meaning helpers, most of them wrong!
Stitching Simple Panoramas
For example, if your segments don’t join well it likely isn’t because you needed to use a panorama head (one oft-heard bit of advice). I never do. The issue is usually a lack of sufficient overlap. Or perhaps the image content moved too much from frame to frame as the photographer took too long to shoot the set.
Or, even when quickly-shot segments do have lots of overlap, stitching software can still get confused if adjoining segments contain featureless content or content that changes, such as segments over rippling water with no identifiable “landmarks” for the software to latch onto.
The primary problems, however, arise from using software that just isn’t up to the task. Programs that work great on simple panoramas (as the next three examples show) will fail when trying to stitch a more demanding set of segments.
For example, for partial horizon panos shot with 20mm to 50mm lenses, I’ll use the panorama function now built into Adobe Camera Raw (ACR) and Adobe Lightroom Classic, and also in the mobile-friendly Lightroom app. As I show above, ACR can do a wonderful job, yielding a raw DNG file that can continue to be edited non-destructively. It’s by far the easiest and fastest option, and is my first choice.
Another choice, not shown here, is the Photomerge function from within Photoshop, which yields a layered and masked master file, and provides the option for “content-aware” filling of missing areas. It can sometimes work on panos that ACR balks at.
Two programs popular as Adobe alternatives, ON1 PhotoRAW (above) and the aforementioned Affinity Photo (below), also have very capable panorama stitching functions.
However, in testing both programs with the demanding Bow Lake multi-tier panorama I used below with other programs, ON1 2019.5 did an acceptable job, while Affinity 1.7 failed. It works best on simpler panoramas, like this partial scene with a 24mm lens.
Even if they succeed when stitching 360° panoramas, such general-purpose editing programs, Adobe’s included, provide no option for choosing how the final scene gets framed. You have no control over where the program puts the ends of the scene.
Or the program just fails, producing a result like this.
Far worse is that multi-tier panoramas or, as I show above, even single-tier panos shot with very wide lenses, will often completely befuddle your favourite editing software, with it either refusing to perform the stitch or producing bizarre results.
Some photographers attempt to correct such wild distortions with lots of ad hoc adjustments with image-warping filters. But that’s completely unnecessary if you use the right software to begin with.
Stitching Complex Panoramas
When conventional software fails, I turn to the dedicated stitching program PTGui, $150 for MacOS or Windows. The name comes from “Panorama Tools – Graphical User Interface.”
While PTGui can read raw files from most cameras, it will not read any of the development adjustments you made to those files using Lightroom, Camera Raw, or any other raw developers.
So, my workflow is to develop all the raw segments, export them out as 16-bit TIFFs, then import those into PTGui. It can detect what lens was used to take the images, information PTGui needs to stitch accurately. If you used a manual lens you can enter the lens focal length and type (rectilinear or fish-eye) yourself.
I include a full tutorial on using PTGui in my eBook linked to above, but suffice to say that the program usually does a superb job first time and very quickly. You can drag the panorama around to frame the scene as you like, and change the projection at will to create rectangular or spherical format images, as above, and even so-called “little planet” projections that appear as if you were looking down at the scene from space.
Occasionally PTGui complains about some frames, requiring you to manually intervene to pick the same stars or horizon features in adjacent frames to provide enough matching alignment points until it is happy. Its interface also leaves something to be desired, with essential floating windows disappearing behind other mostly blank panels.
When exporting the finished panorama I usually choose to export it as a layered 16-bit Photoshop .PSD or, with big panos, as a Photoshop .PSB “big” document.
The reason is that in aligning the moving stars PTGui (indeed, all programs) can produce a few “fault lines” along the horizon, requiring a manual touch up to the masks to clean up mismatched horizon content, as I show above. Having a layered and masked master makes this easy to do non-destructively, though that’s best done in Photoshop.
However, Affinity Photo (above) can also read layered .PSD and .PSB Photoshop files, preserving the layers. By comparison, ON1 PhotoRAW flattens layered Photoshop files when it imports them, one deficiency that prevents this program from being a true Photoshop alternative.
Once a 360° panorama is in a program like Photoshop, some photographers like to “squish” the panorama horizontally to make it more square, for ease of printing and publication. I prefer not to do that, as it makes the Milky Way look overly tall, distorted, and in my opinion, ugly. But each to their own style.
You can test out a limited trial version of PTGui for free, but I think it is worth the cost as an essential tool for panorama devotees.
Other Stitching Options
Windows users can also try Image Composite Editor (ICE), free from Microsoft Research. As shown above in my test 3-tier pano, ICE works very well on complex panoramas, has a clean, user-friendly interface, offers a choice of geometric projections, and can export a master file with each segment on its own layer, if desired, for later editing.
The free, open-source program Hugin is based on the same Panorama Tools root software that PTGui uses. However, I find Hugin’s operation clunky and overly technical. Its export process is arcane, yet renders out only a flattened image.
In testing it with the same three-tier, 21-segment pano that PTGui and ICE handled perfectly, Hugin failed to properly include one segment. However, it is free for MacOS and Windows, so the price is right and it is well worth a try.
With the superb tools now at our disposal, it is possible to create detailed panoramas of the night sky that convey the majesty of the Milky Way – and the night sky – as no single image can. Have fun!
I put the new Nikon Z6 mirrorless camera through its paces for astrophotography.
Following Sony’s lead, in late 2018 both Nikon and Canon released their entries to the full-frame mirrorless camera market.
Here I review one of Nikon’s new mirrorless models, the Z6, tested solely with astrophotography in mind. I did not test any of the auto-exposure, auto-focus, image stabilization, or rapid-fire continuous mode features.
• Current owners of Nikon cropped-frame cameras wanting to upgrade to full-frame would do well to consider a Z6 over any current Nikon DSLR.
• Anyone wanting a full-frame camera for astrophotography and happy to “go Nikon” will find the Z6 nearly perfect for their needs.
Nikon Z6 vs. Z7
I opted to test the Z6 over the more expensive Z7, as the 24-megapixel Z6 has 6-micron pixels, resulting in lower noise (according to independent tests) than the 46-megapixel Z7 with its 4.4-micron pixels.
In astrophotography, I feel low noise is critical, with 24-megapixel cameras hitting a sweet spot of noise vs. resolution.
However, if the higher resolution of the Z7 is important for your daytime photography needs, then I’m sure it will work well at night. The Nikon D850 DSLR, with a sensor similar to the Z7’s, has been proven by others to be a good astrophotography camera, albeit with higher noise than lower-megapixel Nikons such as the D750 and Z6.
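Those pixel pitches follow directly from sensor width and pixel count. A quick sketch, assuming a roughly 35.9 mm sensor width and published pixel dimensions (the Z7’s 8256-pixel width is taken from its spec sheet, not measured here):

```python
# Sketch: where the quoted pixel pitches come from. The ~35.9 mm sensor
# width and the Z7's 8256-pixel width are assumptions based on published
# specs, not measurements of my own.

def pixel_pitch_um(sensor_width_mm, pixels_wide):
    return sensor_width_mm / pixels_wide * 1000  # mm -> microns

print(f"Z6: {pixel_pitch_um(35.9, 6048):.1f} um")  # ~5.9, quoted as 6
print(f"Z7: {pixel_pitch_um(35.9, 8256):.1f} um")  # ~4.3, quoted as 4.4
```

Bigger pixels collect more light each, which is the physical basis for the Z6’s noise advantage.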
NOTE: Tap or click on images to download and display them full screen for closer inspection.
High ISO Noise
To test noise in a real-world situation, I shot a dark nightscape scene with the three cameras, using a 24mm Sigma Art lens on the two Nikons, and a 24mm Canon lens on the Sony via a MetaBones adapter. I shot at ISOs from 800 to 12,800, typical of what we use in nightscapes and deep-sky images.
The comparison set above shows performance at the higher ISOs of 3200 to 12,800. I saw very little difference among the trio, with the Nikon Z6 very similar to the Sony a7III, and with the four-year-old Nikon D750 holding up very well against the two new cameras.
The comparison below shows the three cameras on another night and at ISO 3200.
Both the Nikon Z6 and Sony a7III use a backside-illuminated or “BSI” sensor, which in theory promises lower noise than the conventional CMOS sensor used in an older camera such as the D750.
In practice I didn’t see a marked difference, certainly not as much as the one- or even 1/2-stop improvement in noise I might have expected or hoped for.
Nevertheless, the Nikon Z6 provides as low a noise level as you’ll find in a camera offering 24 megapixels, and will perform very well for all forms of astrophotography.
Nikon and Sony both employ an “ISO-invariant” signal flow in their sensor designs. You can purposely underexpose by shooting at a lower ISO, then boost the exposure later “in post,” and end up with a result similar to an image shot at a higher ISO in the camera.
I find this feature proves its worth when shooting Milky Way nightscapes that often have well-exposed skies but dark foregrounds lit only by starlight. Boosting the brightness of the landscape when developing the raw files reveals details in the scene without unduly introducing noise, banding, or other artifacts such as magenta tints.
That’s not true of “ISO variant” sensors, such as in most Canon cameras. Such sensors are far less tolerant of underexposure and are prone to noise, banding, and discolouration in the brightened shadows.
To test the Z6’s ISO invariance (as shown above) I shot a dark nightscape at ISO 3200 for a properly exposed scene, and also at ISO 100 for an image underexposed by a massive 5 stops. I then boosted that image by 5 stops in exposure in Adobe Camera Raw. That’s an extreme case to be sure.
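The 5-stop figure is simply the base-2 logarithm of the ISO ratio, since each stop doubles the sensitivity:

```python
import math

# Sketch: the "stops" of underexposure in the ISO-invariance test are
# just the base-2 logarithm of the ISO ratio.

def stops_between(iso_low, iso_high):
    return math.log2(iso_high / iso_low)

print(stops_between(100, 3200))  # 5.0: ISO 100 is 5 stops under ISO 3200
print(stops_between(200, 3200))  # 4.0
```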
I found the Z6 provided very good ISO invariant performance, though with more chrominance specking than the Sony a7III and Nikon D750 at -5 EV.
Below is a less severe test, showing the Z6 properly exposed on a moonlit night and at 1 to 4 EV steps underexposed, then brightened in processing. Even the -4 EV image looks very good.
In my testing, even with frames underexposed by -5 EV, I did not see any of the banding effects (due to the phase-detect auto-focus pixels) reported by others.
As such, I judge the Z6 to be an excellent camera for nightscape shooting when we often want to extract detail in the shadows or dark foregrounds.
Compressed vs. Uncompressed / Raw Large vs. Small
The Z6, as do many Nikons, offers a choice of shooting 12-bit or 14-bit raws, and either compressed or uncompressed.
I shot all my test images as 14-bit uncompressed raws, yielding 46 megabyte files with a resolution of 6048 x 4024 pixels. So I cannot comment on how good 12-bit compressed files are compared to what I shot. Astrophotography demands the best original data.
However, as the menu above shows, Nikon now also offers the option of shooting smaller raw sizes. The Medium Raw setting produces an image of 4528 x 3016 pixels and an 18-megabyte file (in the files I shot), but with all the benefits of raw files in processing.
The Medium Raw option might be attractive when shooting time-lapses, where you might need to fit as many frames onto the single XQD card as possible, yet still have images large enough for final 4K movies.
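A quick sketch of that trade-off, using the file sizes quoted above and a hypothetical 64 GB XQD card, shows Medium Raw still covers a 4K frame while roughly 2.5 times as many frames fit on the card:

```python
# Sketch: does Medium Raw still cover a 4K (3840 x 2160) frame, and how
# many more frames fit on a card? File sizes are the figures from my
# test shots; the 64 GB card capacity is a hypothetical example.

card_gb = 64
large_mb, medium_mb = 46, 18

print(4528 >= 3840 and 3016 >= 2160)  # True: Medium Raw covers 4K
print(card_gb * 1000 // large_mb)     # ~1391 Large Raw frames per card
print(card_gb * 1000 // medium_mb)    # ~3555 Medium Raw frames per card
```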
However, comparing a Large Raw to a Medium Raw did show a loss of resolution, as expected, with little gain in noise reduction.
This is not like “binning pixels” in CCD cameras to increase signal-to-noise ratio. I prefer to never throw away information in the camera, to allow the option of creating the best quality still images from time-lapse frames later.
Nevertheless, it’s nice to see Nikon now offer this option on new models, a feature which has long been on Canon cameras.
Star Image Quality
Above is the Orion Nebula with the D750 and with the Z6, both shot in moonlight with the same 105mm refractor telescope.
I did not find any evidence for “star-eating” that Sony mirrorless cameras have been accused of. (However, I did not find the Sony a7III guilty of eating stars either.) Star images looked as good in the Z6 as in the D750.
Raw developers (Adobe, DxO, ON1, and others) decoded the Z6’s Bayer-array NEF files fine, with no artifacts such as oddly-coloured or misshapen stars, which can arise in cameras lacking an anti-alias filter.
LENR Dark frames
Above, 8-minute exposures of nothing, taken with the lens cap on at room temperature: without LENR, and with LENR, both boosted a lot in brightness and contrast to exaggerate the visibility of any thermal noise. These show the reduction in noise speckling with LENR activated, and the clean result with the Z6. At small size you’ll likely see nothing but black!
For deep-sky imaging a common practice is to shoot “dark frames,” images recording just the thermal noise that can then be subtracted from the image.
The Long Exposure Noise Reduction feature offered by all cameras performs this dark frame subtraction internally and automatically for any exposure over one second long.
I tested the Z6’s LENR and found it worked well, doing the job to effectively reduce thermal noise (hot pixels) without adding any other artifacts.
Some astrophotographers dismiss LENR and never use it. By contrast, I prefer to use LENR to do dark frame subtraction. Why? Through many comparison tests over the years I have found that separate dark frames taken later at night rarely do as good a job as LENR darks, because those separate darks are taken when the sensor temperature, and therefore the noise levels, are different than they were for the “light” frames.
I’ve found that dark frames taken later, then subtracted “in post,” inevitably show little or no effect compared to images taken with LENR darks. Or worse, they add a myriad of pock-mark black specks to the image, adding noise and making the image look worse.
The benefit of LENR is lower noise. The penalty of LENR is that each image takes twice as long to shoot — the length of the exposure + the length of the dark frame. Because …
As Expected on the Z6 … There’s no LENR Dark Frame Buffer
Only Canon full-frame cameras offer this little-known but wonderful feature for astrophotography. Turn on LENR and it is still possible to shoot three (with the Canon 6D MkII) or four (with the Canon 6D) raw images in quick succession. The Canon 5D series also has this feature.
The single dark frame kicks in and locks up the camera only after the series of “light frames” are taken. This is excellent for taking a set of noise-reduced deep-sky images for later stacking without need for further “image calibration.”
No Nikon has this dark frame buffer, not even the “astronomical” D810a. And not the Z6.
I have to mention this every time I describe Canon’s dark frame buffer: It works only on full-frame Canons, and there’s no menu function to activate it. Just turn on LENR, fire the shutter, and when the first exposure is complete fire the shutter again. Then again for a third, and perhaps a fourth exposure. Only then does the LENR dark frame lock up the camera as “Busy” and prevent more exposures. That single dark frame gets applied to each of the previous “light” frames, greatly reducing the time it takes to shoot a set of dark-frame subtracted images.
But do note that Canon’s dark frame buffer will not work if:
a) You leave Live View on. Don’t do that for any long exposure shooting.
b) You control the camera through the USB port via external software. It works only when controlling the camera via its internal intervalometer or via the shutter port using a hardware intervalometer.
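To see why the buffer matters, compare the wall-clock cost: with ordinary LENR every light frame pays for its own dark, while Canon’s buffer shares one dark among a burst of frames. Here is a minimal sketch with illustrative numbers, assuming each dark frame takes exactly as long as its light frame:

```python
def sequence_minutes(frames: int, exposure_s: float, lights_per_dark: int = 1) -> float:
    """Total minutes for a LENR sequence; lights_per_dark=1 is ordinary LENR,
    while lights_per_dark=4 models a Canon-style dark frame buffer."""
    darks = -(-frames // lights_per_dark)   # ceiling division: darks needed
    return (frames + darks) * exposure_s / 60

# Ten 8-minute deep-sky exposures:
print(sequence_minutes(10, 480))                     # ordinary LENR: 160.0 minutes
print(sequence_minutes(10, 480, lights_per_dark=4))  # 4-shot buffer: 104.0 minutes
```

The buffer saves over a third of the time here, since only three darks are taken instead of ten.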
With DSLRs, deep-sky images shot through telescopes, then boosted in contrast in processing, usually exhibit a darkening along the bottom of the frame. This is caused by the upraised mirror shadowing the sensor slightly, an effect never noticed in normal photography.
Mirrorless cameras should be free of this mirror box shadowing. The Sony a7III, however, still exhibits some edge shadows due to an odd metal mask in front of the sensor. It shouldn’t be there and its edge darkening is a pain to eliminate in the final processing.
As I show in my review of the a7III, the Sony also exhibits a purple edge glow in long-exposure deep-sky images, from an internal light source. That’s a serious detriment to its use in deep-sky imaging.
Happily, the Z6 proved to be free of any such artifacts. Images are clean and evenly illuminated to the edges, as they should be. I saw no amp glows or other oddities that can show up under astrophotography use. The Z6 can produce superb deep-sky images.
During my short test period, I was not able to shoot red nebulas under moonless conditions. So I can’t say how well the Z6 performs for recording H-alpha regions compared to other “stock” cameras.
With the D810a gone, if it is deep red nebulosity you are after with a Nikon, then consider buying a filter-modified Z6 or having yours modified.
Both LifePixel and Spencer’s Camera offer to modify the Z6 and Z7 models. However, I have not used either of their services, so cannot vouch for them first hand.
Live View Focusing and Framing
For all astrophotography manually focusing with Live View is essential. And with mirrorless cameras there is no optical viewfinder to look through to frame scenes. You are dependent on the live electronic image (on the rear LCD screen or in the eye-level electronic viewfinder, or EVF) for seeing anything.
Thankfully, the Z6 presents a bright Live View image, making it easy to frame, find, and focus on stars. Maximum zoom for precise focusing is 15x: not as good as the D750’s 20x, but better than Canon’s 10x maximum zoom in Live View.
The Z6 lacks the a7III’s wonderful Bright Monitoring function, which temporarily ups the ISO to an extreme level, making it much easier to frame a dark night scene. However, something similar can be achieved with the Z6 by temporarily switching it to Movie mode with the ISO set to an extreme level.
As with most Nikons (and unlike Sonys), the Z6 remembers separate settings for the still and movie modes, making it easy to switch back and forth, in this case for a temporarily brightened Live View image to aid framing.
That’s very handy, and the Z6 works better than the D750 in this regard, providing a brighter Live View image, even with the D750’s well-hidden Exposure Preview option turned on.
Where the Z6 pulls far ahead of the otherwise similar D750 is in its movie features.
The Z6 can shoot 4K video (3840 x 2160 pixels) at 30, 25, or 24 frames per second. Using 24 frames per second and increasing the ISO to between 12,800 and 51,200 (the Z6 can go as high as ISO 204,800!) it is possible to shoot real-time video at night, such as of auroras.
But the auroras will have to be bright, as at 24 fps the longest possible shutter speed is 1/25 second, as you might expect.
The a7III, by comparison, can shoot 4K movies at “dragged” shutter speeds as slow as 1/4 second, even at 24 fps, making it possible to shoot auroras at lower and less noisy ISO speeds, albeit with some image jerkiness due to the longer exposures per frame.
The D750 shoots only 1080 HD and, as shown above, produces very noisy movies at ISO 25,600 to 51,200. It’s barely usable for aurora videos.
The Z6 is much cleaner than the D750 at those high ISOs, no doubt due to far better internal processing of the movie frames. However, if night-sky 4K videos are an important goal, a camera from the Sony a7 series will be a better choice, if only because of the option for slower dragged shutter speeds.
For examples of real-time auroras shot with the Sony a7III see my music videos shot in Yellowknife and in Norway.
The Z6 uses the EN-EL15b battery, compatible with the batteries and charger used by the D750. But the “b” variant also allows in-camera charging via the USB port.
In room temperature tests the Z6 lasted for 1500 exposures, as many as the D750 was able to take in a side-by-side test. That was with the screens off.
At night, in winter temperatures of -10 degrees C (14° F), the Z6 lasted for three hours worth of continuous shooting, both for long deep-sky exposure sets and for a test time-lapse I shot, shown below.
A time-lapse movie, downsized here to HD from the full-size originals, shot with the Z6 and its internal intervalometer, from twilight through to moonrise on a winter night. Processed with Camera Raw and LRTimelapse.
However, with any mirrorless camera, you can extend battery life by minimizing use of the LCD screen and eye-level EVF. The Z6 has a handy and dedicated button for shutting off those screens when they aren’t needed during a shoot.
The days of mirrorless cameras needing a handful of batteries just to get through a few hours of shooting are gone.
Lens and Telescope Compatibility
As with all mirrorless cameras, the Nikon Z cameras use a new lens mount, one that is incompatible with the decades-old Nikon F mount.
The Z mount is wider and can accommodate wider-angle and faster lenses than the old F mount ever could, and in a smaller package. That’s the good news, at least in theory, as we have yet to see those lenses appear.
The bad news is that you’ll need Nikon’s FTZ lens adapter to use any of your existing Nikon F-mount lenses on either the Z6 or Z7. As of this writing, Nikon is supplying an FTZ free with every Z body purchase.
I got an FTZ with my loaner Z6 and it worked very well, allowing even third-party lenses like my Sigma Art lenses to focus at the same point as they normally do (not true of some third-party adapters), preserving the lens’s optical performance. Autofocus functions all worked fine and fast.
You’ll also need the FTZ adapter for use on a telescope, as shown above, to go from your telescope’s camera adapter, with its existing Nikon T-ring, to the Z6 body.
The reason is that the field flattener or coma corrector lenses often required with telescopes are designed to work best with the longer lens-to-sensor distance of a DSLR body. The FTZ adapter provides the necessary spacing, as do third-party adapters.
The only drawback to the FTZ is that any tripod plate attached to the camera body itself likely has to come off, and the tripod foot incorporated into the FTZ used instead. I found myself often having to swap locations for the tripod plate, an inconvenience.
Camera Controller Compatibility
Since it uses the same Nikon-type DC2 shutter port as the D750, the Z6 should be compatible with most remote hardware releases and time-lapse motion controllers that operate a Nikon through the shutter port. Examples are the controllers from SYRP.
On the other hand, time-lapse devices and external intervalometers that run Nikons through the USB port might need to have their firmware or apps updated to work with the Z6.
For example, as of early May 2019, CamRanger lists the Z6 as a supported camera; the Arsenal “smart controller” does not. Nor does Alpine Labs for their Radian and Pulse controllers, nor TimeLapse+ for its excellent View bramping intervalometer. Check with your supplier.
For those who like to use laptops to run their camera at the telescope, I found the Windows program Astro Photography Tool (v3.63) worked fine with the Z6, in this case connecting to the camera’s USB-C port using the USB-C to USB-A cable that comes with the camera. This allows APT to shift not only shutter speed, but also ISO and aperture under scripted sequences.
Inevitably, raw files from brand-new cameras cannot at first be read by any raw developer other than the one supplied by the manufacturer, Nikon Capture NX in this case. However, by the time I did my testing in winter 2019, all the major software suppliers had updated their programs to open Z6 files.
Adobe Lightroom and Photoshop, Affinity Photo, DxO PhotoLab, Luminar 3, ON1 PhotoRAW, and the open-source Raw Therapee all open the Z6’s NEF raw files just fine.
Specialized programs for processing astronomy images might be another story. For example, as of v1.08.06, PixInsight, a favourite program among astrophotographers, does not open Z6 raw files. Nor does Nebulosity v4. But check with the developers for updates.
Other Features for Astrophotography
Here are other Nikon Z6 features I found of value for astrophotography, and for operating the camera at night.
Tilting LCD Screen
Like the Nikon D750 and Sony a7III, the Z6 offers a tilting LCD screen, great for use on a telescope or tripod when aimed up at the sky. However, the screen does not flip out and reverse, a feature useful for vloggers but seldom needed for astrophotography.
OLED Top Screen (Above)
The Sony doesn’t have one, and Canon’s low-cost mirrorless RP also lacks one. But the top-mounted OLED screen of the Z6 is a great convenience for astrophotography. It makes it possible to monitor camera status and battery life during a shoot, even with the rear LCD screen turned off to prolong battery life.
Sony’s implementation of touch-screen functions is limited to just choosing autofocus points. By contrast, the Nikon Z6 offers a full range of touchscreen functions, making it easy to navigate menus and choose settings.
I do wish there was an option, as there is with Pentax, to tint the menus red for preserving night vision.
As with other Nikons, the Z6 offers an internal intervalometer capable of shooting time-lapses, just as long as individual exposures don’t need to be longer than 30 seconds.
In addition, there’s the Exposure Smoothing option which, as I have found with the D750, is great for smoothing flickering in time-lapses shot using auto exposure.
Sony has only just added an intervalometer to the a7III with their v3 firmware update, but with no exposure smoothing.
Custom i Menu / Custom Function Buttons
The Sony a7III has four custom function buttons users can assign to commonly used commands, for quick access. For example, I assign one Custom button to the Bright Monitoring function which is otherwise utterly hidden in the menus, but superb for framing nightscapes, if only you know it’s there!
The Nikon Z6 has two custom buttons beside the lens mount. However, I found it easier to use the “i” menu (shown above) by populating it with those functions I use at night for astrophotography. It’s then easy to call them up and adjust them on the touch screen.
Thankfully, the Z6’s dedicated ISO button is now on top of the camera, making it much easier to find at night than the awkwardly placed ISO button on the back of the D750, which I am always mistaking for the Image Quality button, which you do not want to adjust by mistake.
As most cameras do, the Z6 also has a “My Menu” page which you can populate with favourite menu commands.
Lighter Weight / Smaller Size
The Z6 provides imaging performance similar to the D750’s, if not better for movies, in a smaller and lighter camera, weighing 200 grams (0.44 pounds) less than the D750. Being able to downsize my equipment is a welcome plus of going mirrorless.
Electronic Front Curtain Shutter / Silent Shooting
By design, mirrorless cameras lack any vibration from a bouncing mirror. But even the mechanical shutter can impart vibration and blurring to high-magnification images taken through telescopes.
The electronic front curtain shutter (lacking in the D750) helps eliminate this, while the Silent Shooting mode does just that — it makes the Z6 utterly quiet and vibration free when shooting, as all the shutter functions are now electronic. This is great for lunar and planetary imaging.
What’s Missing for Astrophotography (not much!)
Bulb Timer for Long Exposures
While the Z6 has a Bulb setting, there is no Bulb Timer as there is with Canon’s recent cameras. A Bulb Timer would allow setting long Bulb exposures of any length in the camera, though Canon’s cannot be combined with the intervalometer.
Instead, the Nikon must be used with an external intervalometer for any exposures over 30 seconds long. Any number of units are compatible with the Z6 through its shutter port, which is the same DC2-type jack used on the D750.
In-Camera Image Stacking to Raws
The Z6 does offer the ability to stack up to 10 images in the camera, a feature also offered by Canon and Pentax. Images can be blended with a Lighten (for star trails) or Average (for noise smoothing) mode.
However, unlike with Canon and Pentax, the result is a compressed JPG not a raw file, making this feature of little value for serious imaging. Plus with a maximum of only 10 exposures of up to 30-seconds each, the ability to stack star trails “in camera” is limited.
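The two blend modes are simple per-pixel rules, and the same logic can be applied in software to any number of frames, with no 10-frame limit. A minimal numpy sketch of the principle (not the camera’s firmware):

```python
import numpy as np

def stack(frames, mode):
    """Blend a list of equal-sized frames with the Lighten or Average rule."""
    data = np.stack(frames).astype(np.float64)
    if mode == "lighten":      # per-pixel maximum: stars leave trails
        return data.max(axis=0)
    if mode == "average":      # per-pixel mean: random noise smooths out
        return data.mean(axis=0)
    raise ValueError(f"unknown mode: {mode}")

a = np.array([[1.0, 2.0]])
b = np.array([[3.0, 0.0]])
print(stack([a, b], "lighten"))  # [[3. 2.]]
print(stack([a, b], "average"))  # [[2. 1.]]
```

Stacking the raw files this way in post keeps the full bit depth, which is exactly what the in-camera JPG result throws away.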
Unlike the top-end D850, the Z6’s buttons are not illuminated, but then again neither are the Z7’s.
As a bonus — the Nikon 35mm S-Series Lens
With the Z6 I also received a Nikkor 35mm f/1.8 S lens made for the Z-mount, as the lens perhaps best suited for nightscape imaging out of the native Z-mount lenses from Nikon. See Nikon’s website for the listing.
If there’s a downside to the Z-series Nikons it’s the limited number of native lenses that are available now from Nikon, and likely in the future from anyone, due to Nikon not making it easy for other lens companies to design for the new Z mount.
In testing the 35mm Nikkor on tracked shots, stars showed excellent on- and off-axis image quality, even wide open at f/1.8. Coma, astigmatism, spherical aberration, and lateral chromatic aberration were all well controlled.
However, as with most lenses now offered for mirrorless cameras, the focus is “by-wire” using a ring that doesn’t mechanically adjust the focus. As a result, the focus ring turns continuously and lacks a focus scale.
So it is not possible to manually preset the lens to an infinity mark, as nightscape photographers often like to do. Focusing must be done each night.
Until there is a greater selection of native lenses for the Z cameras, astrophotographers will need to use the FTZ adapter and their existing Nikon F-mount or third-party Nikon-mount lenses with the Zs.
I was impressed with the Z6.
For any owner of a Nikon cropped-frame DSLR (from the D3000, D5000, or D7000 series, for example) wanting to upgrade to full-frame for astrophotography, I would suggest moving to the Z6 over choosing a current DSLR.
Mirrorless is the way of the future. And the Z6 will yield lower noise than most, if not all, of Nikon’s cropped-frame cameras.
For owners of current Nikon DSLRs, especially a 24-megapixel camera such as the D750, moving to a Z6 will not provide a significant improvement in image quality for still images.
But … it will provide 4K video and much better low-light video performance than older DSLRs. So if it is aurora videos you are after, the Z6 will work well, though not quite as well as a Sony alpha.
In all, there’s little downside to the Z6 for astrophotography, and some significant advantages: low noise, bright live view, clean artifact-free sensor images, touchscreen convenience, silent shooting, low-light 4K video, all in a lighter weight body than most full-frame DSLRs.
It was a magical night as the rising Moon lit the Badlands with a golden glow.
When doing nightscape photography it’s often best not to fight the Moon, but to embrace it and use it as your light source.
I did this on a fine night, Easter Sunday, at one of my favourite nightscape spots, Dinosaur Provincial Park.
I set up two cameras to frame different views of the hoodoos as they lit up with the light of the rising waning Moon.
The night started out as a dark moonless evening as twilight ended. Then about 90 minutes after the arrival of darkness, the sky began to brighten again as the Moon rose to illuminate the eroded formations of the Park.
This was a fine example of “bronze hour” illumination, as some have aptly called it.
Photographers know about the “golden hour,” the time just before sunset or just after sunrise when the low Sun lights the landscape with a golden glow.
The Moon does the same thing, with a similar tone, though greatly reduced in intensity.
The low Moon, especially just after Full, casts a yellow or golden tint over the scene. This is caused by our atmosphere absorbing the “cold” blue wavelengths of moonlight, and letting through the “warm” red and yellow tones.
Making use of the rising (or setting) Moon to light a scene is one way to capture a nightscape lit naturally, and not with artificial lights, which are increasingly being frowned upon, if not banned at popular nightscape destinations.
“Bronze hour” lighting is great in still-image nightscapes. But in time-lapses the effect is more striking — indeed, in time-lapse lingo it is called a “moonstrike” scene.
The dark landscape suddenly lights up as if it were dawn, yet stars remain in the sky.
The best nights for such a moonstrike are ones with a waning gibbous or last quarter Moon. At these phases the Moon rises after sunset, to re-light a scene after evening twilight has faded.
On April 21 I made use of such a circumstance to shoot moonstrike stills and movies, not only for their own sake, but for use as illustrations in the next edition of my Nightscapes and Time-lapse eBook (at top here).
One camera, the Nikon D750, I coupled with a device called a bramping intervalometer, in this case the TimeLapse+ View, shown above. It works great to automatically shift the shutter speed and ISO as the sky darkens, then brightens again.
Yes, in bright situations the camera’s own Auto Exposure and Auto ISO modes might accomplish this.
But … once the sky gets dark the Auto circuits fail and you’re left with hugely underexposed images.
The TimeLapse+ View, with its more sensitive built-in light meter, can track right through into full darkness, making it possible to shoot so-called “holy grail” time-lapses that go from daylight to darkness, from sunset to the Milky Way, all shot unattended.
For the other camera, the Sony a7III (with the Laowa 15mm lens I just reviewed) I set the camera manually, then shifted the ISO and shutter speed a couple of times to accommodate the darkening, then brightening of the scene.
Processing the resulting RAW files in the highly-recommended program LRTimelapse smoothed out all the jumps in brightness to make a seamless transition.
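The principle behind that smoothing can be sketched simply: measure each frame’s mean brightness, compute a moving-average target curve, and apply a per-frame gain that pulls every frame toward the curve. This is only an illustration of the idea, not LRTimelapse’s actual algorithm:

```python
import numpy as np

def smooth_gains(frame_means, window=15):
    """Per-frame gains that ramp brightness toward a moving average."""
    means = np.asarray(frame_means, dtype=np.float64)
    kernel = np.ones(window)
    # Moving average, normalized so the ends of the sequence are not darkened:
    target = (np.convolve(means, kernel, mode="same")
              / np.convolve(np.ones_like(means), kernel, mode="same"))
    return target / means

# Frames with a sudden one-stop (2x) jump in exposure halfway through:
means = [0.2] * 20 + [0.4] * 20
gains = smooth_gains(means)
# Multiplying each frame by its gain spreads the jump over ~15 frames.
```

In practice the correction is applied to the raw development settings rather than the pixels, which is why working from raw files matters.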
I also used the new intervalometer function that Sony has just added to the a7III with its latest firmware update. Hurray! I complained about the lack of an intervalometer in my original review of the Sony a7III. But that’s been fixed.
I shot 425 frames with the Sony, which I not only turned into a movie but, as one can with time-lapse frames, also stacked into a star trail still image using the Advanced Stacker Plus actions for Photoshop, in this case looking north to the circumpolar stars.
I prefer this action set over dedicated programs such as StarStaX because it works directly with the developed raw files. There’s no need to create a set of JPGs to stack, compromising image quality and departing from the non-destructive workflow I prefer to maintain.
While the still images are very nice, the intended final result was this movie above, a short time-lapse vignette using clips from both cameras. Do watch in HD.
I rendered out the frames from the Sony both as a “normal” time-lapse, and as one with accumulating star trails, again using the Advanced Stacker Plus actions to create the intermediate frames for assembling into the movie.
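An “accumulating star trails” render is just a running Lighten stack: each output frame is the per-pixel maximum of all input frames so far. A simplified sketch of what the stacking actions automate (my own illustration, not the actual actions):

```python
import numpy as np

def accumulate_trails(frames):
    """Return one frame per input, each the running per-pixel maximum."""
    running = None
    out = []
    for f in frames:
        running = f.copy() if running is None else np.maximum(running, f)
        out.append(running.copy())
    return out

f1 = np.array([[1.0, 0.0]])   # star at left in frame 1
f2 = np.array([[0.0, 2.0]])   # star has moved right in frame 2
trails = accumulate_trails([f1, f2])
# trails[0] shows only frame 1; trails[1] shows both positions: the trail grows.
```

Assembling the returned frames in sequence produces the movie with trails lengthening over time.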
All these techniques, gear, and apps are explained in tutorials in my eBook, above. However, it’s always great to get a night perfect for putting the methods to work on a real scene.
But what about lenses for the Sony? Here’s one ideal for astrophotography.
Made for Sony e-mount cameras, the Venus Optics 15mm f/2 Laowa provides excellent on- and off-axis performance in a fast and compact lens ideal for nightscape, time-lapse, and wide-field tracked astrophotography with Sony mirrorless cameras. (UPDATE: Venus Optics has announced versions of this lens for Canon R and Nikon Z mount mirrorless cameras.)
I use it a lot and highly recommend it.
Size and Weight
While I often use the a7III with my Canon lenses by way of a Metabones adapter, the Sony really comes into its own when matched to a “native” lens made for the Sony e-mount. The selection of fast, wide lenses from Sony itself is limited, with the new Sony 24mm G-Master a popular favourite (I have yet to try it).
However, for much of my nightscape shooting, and certainly for auroras, I prefer lenses even wider than 24mm, and the faster the better.
Aurora over Båtsfjord, Norway. This is a single 0.8-second exposure at f/2 with the 15mm Venus Optics lens and Sony a7III at ISO 1600.
The Laowa 15mm f/2 from Venus Optics fills the bill very nicely, providing excellent speed in a compact lens. While wide, the Laowa is a rectilinear lens providing straight horizons even when aimed up, as shown above. This is not a fish-eye lens.
The Venus Optics 15mm realizes the potential of mirrorless cameras and their short flange distance that allows the design of fast, wide lenses without massive bulk.
While compact, at 600 grams the Laowa 15mm is quite hefty for its size due to its solid metal construction. Nevertheless, it is half the weight of the massive 1250-gram Sigma 14mm f/1.8 Art. The Laowa is not a plastic entry-level lens, nor is it cheap, at $850 from U.S. sources.
For me, the Sony-Laowa combination is my first choice for a lightweight travel camera for overseas aurora trips.
However, this is a no-frills, manual-focus lens. It does not even transfer aperture data to the camera, which is a pity; there are no electrical connections between the lens and camera.
However, for nightscape work, where all settings are adjusted manually, the Venus Optics 15mm works just fine. The key factor is how good the optics are. I’m happy to report that they are very good indeed.
Testing Under the Stars
To test the Venus Optics lens I shot “same night” images, all tracked, with the Sigma 14mm f/1.8 Art lens, at left, and the Rokinon 14mm SP (labeled as being f/2.4, at right). Both are much larger lenses, made for DSLRs, with bulbous front elements not able to accept filters. But they are both superb lenses. See my test report on these lenses published in 2018.
The next images show blow-ups of the same scene (the nightscape shown in full below, taken at Dinosaur Provincial Park, Alberta), and all taken on a tracker.
I used the Rokinon on the Sony a7III using the Metabones adapter which, unlike some brands of lens adapters, does not compromise the optical quality of the lens by shifting its focal position. But lacking a lens adapter for Nikon-to-Sony at the time of testing, I used the Nikon-mount Sigma lens on a Nikon D750, a DSLR camera with nearly identical sensor specs to the Sony.
Above is a tracked image (so the stars are not trailed, which would make it hard to tell aberrations from trails), taken wide open at f/2. No lens correction has been applied so the vignetting (the darkening of the frame corners) is as the lens provides.
As shown above, when used wide open at f/2 vignetting is significant, but not much more so than in competing lenses with much larger front elements, as I compare below.
And the vignetting is correctable in processing. Adobe Camera Raw and Lightroom have this lens in their lens profile database. That’s not the case with current versions (as of April 2019) of other raw developers such as DxO PhotoLab, ON1 Photo RAW, and Raw Therapee where vignetting corrections have to be dialled in manually by eye.
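When no lens profile exists, a vignetting correction “dialled in by eye” amounts to boosting pixels by a gain that increases with distance from the frame centre. A minimal grayscale sketch of that idea, with the strength a purely illustrative knob:

```python
import numpy as np

def correct_vignetting(img, strength=0.5):
    """Apply a radial gain to a 2-D image: 1.0 at centre, 1 + strength at corners."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # Squared distance from centre, normalized per axis:
    r2 = ((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2
    gain = 1.0 + strength * r2 / r2.max()
    return img * gain

flat = np.ones((11, 11))            # a uniformly lit test frame
out = correct_vignetting(flat)
# Corners are boosted 1.5x relative to the (nearly unchanged) centre.
```

A profile-based correction does the same thing, just with a falloff curve measured for the specific lens and aperture rather than guessed.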
When stopped down to f/2.8 the Laowa “flattens” out a lot for vignetting and uniformity of frame illumination. Corner aberrations also improve but are still present. I show those in close-up detail below.
Above, I compare the vignetting of the three lenses, both wide open and when stopped down. Wide open, all the lenses, even the Sigma and Rokinon despite their large front elements, show quite a bit of drop off in illumination at the corners.
The Rokinon SP actually seems to be the worst of the trio, showing some residual vignetting even at f/2.8, while it is reduced significantly in the Laowa and Sigma lenses. Oddly, the Rokinon SP, even though it is labeled as f/2.4, seemed to open to f/2.2, at least as indicated by the aperture metadata.
Above I show lens sharpness on-axis, both wide open and stopped down, to check for spherical and chromatic aberrations with the bright blue star Vega centered. The red box in the Navigator window at top right indicates what portion of the frame I am showing, at 200% magnification in Photoshop.
On-axis, the Venus Optics 15mm shows stars just as sharply as the premium Sigma and Rokinon lenses, with no sign of blurring spherical aberration nor coloured haloes from chromatic aberration.
Focusing is precise and easy to achieve with the Sony on Live View. My unit reaches sharpest focus on stars with the lens set just shy of the middle of the infinity symbol. This is consistent and allows me to preset focus just by dialing the focus ring, handy for shooting auroras at -35° C, when I prefer to minimize fussing with camera settings, thank you very much!
The Laowa and Sigma lenses show similar levels of off-axis coma and astigmatism, with the Laowa exhibiting slightly more lateral chromatic aberration than the Sigma. Both improve a lot when stopped down one stop, but aberrations are still present though to a lesser degree.
However, I find that the Laowa 15mm performs as well as the Sigma 14mm Art for star quality on- and off-axis. And that’s a high standard to match.
The Rokinon SP is the worst of the trio, showing significant elongation of off-axis star images (they look like lines aimed at the frame centre), likely due to astigmatism. With the 14mm SP, this aberration was still present at f/2.8, and was worse at the upper right corner than at the upper left corner, an indication to me that even the premium Rokinon SP lens exhibits slight lens de-centering, an issue users have often found with other Rokinon lenses.
Real-World Examples – The Milky Way
The fast speed of the Laowa 15mm is ideal for shooting tracked wide-field images of the Milky Way, and untracked camera-on-tripod nightscapes and time-lapses of the Milky Way.
Image aberrations are very acceptable at f/2, a speed that allows shutter speed and ISO to be kept lower for minimal star trailing and noise while ensuring a well-exposed frame.
Real-World Examples – Auroras
Where the Laowa 15mm really shines is for auroras. On my trips to chase the Northern Lights I often take nothing but the Sony-Laowa pair, to keep weight and size down.
Above is an example, taken from a moving ship off the coast of Norway. The fast f/2 speed (I wish it were even faster!) makes it possible to capture the Lights in only 1- or 2-second exposures, albeit at ISO 6400. But the fast shutter speed is needed for minimizing ship movement.
The Sony also excels at real-time 4K video, able to shoot at ISO 12,800 to 51,200 without excessive noise.
Aurora Reflections from Alan Dyer on Vimeo.
The Sky is Dancing from Alan Dyer on Vimeo.
The Northern Lights At Sea from Alan Dyer on Vimeo.
Click through to see the posts and the videos shot with the Venus Optics 15mm.
As an aid to video use, the aperture ring of the Venus Optics 15mm can be “de-clicked” at the flick of a switch, allowing users to smoothly adjust the iris during shooting, avoiding audible clicks and jumps in brightness. That’s a very nice feature indeed.
In all, I can recommend the Venus Optics Laowa 15mm lens as a great match to Sony mirrorless cameras, for nightscape still and video shooting. UPDATE: Versions for Canon R and Nikon Z mount mirrorless cameras will now be available.
Spring is the season for Earthshine on the waxing Moon.
April 8 was the perfect night for capturing the waxing crescent Moon illuminated both by the Sun and by the Earth.
The phase was a 4-day-old Moon, old enough to be high in the sky, but young enough – i.e. a thin enough crescent – that its bright side didn’t wash out the dark side!
In the lead photo at top, and even in the single-exposure image below taken earlier in a brighter sky, you can see the night side of the Moon faintly glowing a deep blue, and brighter than the background twilight sky.
This, too, is from sunlight, but light that has bounced off the Earth first to then light up the night side of the Moon.
If you were standing on the lunar surface on the night side, the Sun would be below the horizon but your sky would contain a brilliant blue and almost Full Earth lighting your night, much as the Moon lights our Earthly nights. However, Earth is some 80 times brighter in the Moon’s sky than even the Full Moon is in our sky.
Unlike the single image, the lead image, repeated just above, is a multi-exposure blend (using luminosity masks), to bring out the faint Earthshine and deep blue sky, while retaining details in the bright crescent.
Once the sky gets dark enough to see Earthshine well, no single exposure can record the full range in brightness on both the day and night sides of the Moon.
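The idea behind a luminosity-mask blend can be sketched in a few lines: build the mask from the brightness of one exposure, so highlights are taken from the short exposure and shadows from the long one. A simplified grayscale illustration, not my actual Photoshop workflow:

```python
import numpy as np

def luminosity_blend(short_exp, long_exp):
    """Blend two registered grayscale frames scaled 0..1 by luminosity."""
    mask = np.clip(long_exp, 0.0, 1.0)   # bright where the crescent clips
    return mask * short_exp + (1.0 - mask) * long_exp

short = np.array([[0.7, 0.01]])   # crescent well exposed, earthshine lost
long_ = np.array([[1.0, 0.30]])   # crescent blown out, earthshine recorded
blend = luminosity_blend(short, long_)
# The crescent pixel comes from the short exposure,
# the dark side mostly from the long one.
```

Because the mask is built from the image itself, the transition between the two exposures follows the Moon’s own brightness, with no hard seam to paint by hand.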
April 8 was a great night for lunar fans as the crescent Moon also appeared between the two bright star clusters in Taurus, the Hyades and Pleiades, and below reddish Mars.
It was a fine gathering of celestial sights, captured above with a telephoto lens.
This shows the chart I used to plan the framing, created with StarryNight™ software and showing the field of the 135mm lens I used.
The chart also shows why spring is best for the waxing Moon. It is at this time of year that the ecliptic – the green line – swings highest into the evening sky, taking the Moon with it, placing it high in the west above obscuring haze.
That makes it easier to see and shoot the subtle Earthshine. And to see sharp details on the Moon.
The 4-day-old waxing crescent Moon on April 8, 2019 exposed for just the bright sunlit crescent, revealing details along the terminator. This is with the 105mm Traveler refractor and 2X AP Barlow lens for an effective focal length of 1200mm at f/12, and with the cropped-frame Canon 60Da at ISO 400, for a single exposure of 1/60 second. This is not a stack or mosaic.
After the sky got darker I shot the crescent Moon in a short exposure to capture just the bright crescent, included above in two versions – plain and with labels attached marking the major features visible on a 4-day Moon.
If you missed “Earthshine night” this month, mark May 7 and 8 on your calendar for next month’s opportunities.