The annual Geminid meteor shower peaks under ideal conditions this year, providing a great photo opportunity.
The Geminids are the best meteor shower of the year, under ideal conditions capable of producing rates of 80 to 120 meteors an hour, higher than those of the more widely observed Perseids in August. And this year conditions are ideal!
The Perseids get better PR because they occur in summer. For most northern observers the Geminids demand greater dedication and warm clothing to withstand the cool, if not bitterly cold, night.
A Good Year for Geminids
While the Geminids occur every year, many years are beset by a bright Moon or poor timing. This year conditions couldn’t be better:
• The shower peaks on the night of December 13-14 right at New Moon, so there’s no interference from moonlight at any time on peak night.
• The shower peaks in the early evening of December 13 for North America, about 8 p.m. EST (5 p.m. PST). This produces a richer shower than if it peaked in the daytime hours, as it can in some years.
These two factors make this the best year for the Geminids since 2017, when I shot all the images here.
What Settings to Use?
To capture the Geminids, as is true of any meteor shower, you need:
A good DSLR or mirrorless camera set to ISO 1600 to 6400.
A fast, wide-angle lens (14mm to 24mm) set to f/2.8 or wider, perhaps f/2. Slow f/4 to f/5.6 kit zooms are not very suitable.
Exposures of 30 to 60 seconds each.
An intervalometer to fire the shutter automatically with no more than 1 second between exposures. As soon as one exposure ends and the shutter closes, the next exposure begins.
Take hundreds of images over as long a time period as you can on peak night.
Out of hundreds of images, a dozen or more should contain a meteor! You increase your chances by using:
A high ISO, so the meteor records in the brief second or two it appears.
A wide aperture, to again increase the light-gathering ability of the lens for those fainter meteors.
A wide-angle lens so you capture as much area of sky as possible.
Running two or more cameras aimed at different spots, perhaps to the east and south to maximize sky coverage.
A minimum interval between exposures. Stretch the interval to more than a second and you can bet it’s during that “dark time,” when the shutter is closed, that the brightest meteor of the night will occur. Keep the shutter open as much as possible.
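The arithmetic behind those choices is easy to sketch. Here is a short Python calculation (the function and numbers are mine, for illustration) showing how frame count and “dark time” depend on the exposure length and the gap between shots:

```python
# How many frames you get from a meteor shoot, and what fraction of
# the night the shutter is closed. Numbers here are illustrative.

def shoot_stats(exposure_s, gap_s, hours):
    """Return (frame_count, dead_time_fraction) for a shoot."""
    cycle = exposure_s + gap_s           # one exposure plus the gap
    frames = int(hours * 3600 // cycle)
    dead_fraction = gap_s / cycle        # time the shutter is closed
    return frames, dead_fraction

# 30 s exposures with a 1 s gap, for 4 hours:
frames, dead = shoot_stats(30, 1, 4)
print(frames, dead)   # 464 frames, only ~3% of the night unrecorded

# Stretch the gap to 5 s and ~14% of the night goes unrecorded:
print(shoot_stats(30, 5, 4))
```

The point: a one-second gap costs you almost nothing, while longer gaps quickly become the moments when the best fireball will appear.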
When to Shoot?
The radiant point of the shower meteors in Gemini rises in the early evening, so you might see some long, slow Earth-grazing meteors early in the night, streaking out of the east.
For Europe the peak of the shower occurs in the middle of the night of December 13/14.
For North America, despite the peak occurring in the early evening hours, meteors will be visible all night and will likely be best after your local midnight.
So wherever you are, start shooting as the night begins and keep shooting for as long as you and your camera can withstand the cold!
Where to Go?
To take advantage of the moonless night, get away from urban light pollution to as dark a sky as you can. Preferably, put the major urban skyglow to the west or north.
While the very brightest meteors will show up even from brightly lit locations, they are the rarest, so you’d be fortunate to capture even one in a night of shooting from a city or town.
From a dark site, you can use longer exposures, wider apertures and higher ISOs to boost your chances of capturing more and fainter meteors. Plus the Milky Way will show up.
Where to Aim?
You can aim a camera any direction, even to the west.
But aiming east to frame the constellation of Gemini (marked by the twin stars Castor and Pollux) will include the radiant point, perhaps capturing the effect of meteors streaking away from that point, especially if you stack multiple images into one composite, as most of my images here are.
Using a Tracker
Using a star tracker, such as the Sky-Watcher Star Adventurer shown here, makes it possible to obtain images with stars that remain untrailed even in 1- or 2-minute exposures. The sky remains framed the same through hours of shooting, making it much easier to align and stack the images for a multi-meteor composite.
However, a tracker requires accurate polar alignment of its rotation axis (check its instruction manual to learn how to do this) or else the images will gradually shift out of alignment through a long shoot. Using Photoshop’s Auto-Align feature or specialized stacking programs can bring frames back into registration. But good polar alignment is still necessary.
If you aim east you can frame a tracked set so the first images include the ground. The camera frame will move away from the ground as it tracks the rising sky.
Using a Tripod and Untracked Camera
The simpler method for shooting is to just use a camera (or two!) on a fixed tripod, and keep exposures under about 30 seconds to minimize star trailing. That might mean using a higher ISO than with tracked images, especially with slower lenses.
The work comes in post-processing, as stacking untracked images will produce a result with meteors streaking in many different orientations and locations, ruining the effect of meteors bursting from a single radiant.
To make it easier to stack untracked images, try to include Polaris in the field of the wide-angle lens, perhaps in the upper left corner. The sky rotates around Polaris, so it will form the easy-to-identify point around which you can manually rotate images in editing to bring them back into at least rough alignment.
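As a rough guide for how far to rotate each frame, the sky turns 360° in one sidereal day, or about 15° per hour, around the pole near Polaris. A small Python sketch of that arithmetic (the function name is mine, for illustration):

```python
# Aligning untracked frames: the sky rotates 360 degrees per sidereal
# day (about 23.934 hours), roughly 15 degrees per hour, around the
# celestial pole near Polaris.

SIDEREAL_DAY_H = 23.9345
DEG_PER_HOUR = 360.0 / SIDEREAL_DAY_H    # ~15.04 degrees per hour

def rotation_deg(elapsed_s):
    """Degrees the sky rotates around the pole in elapsed_s seconds."""
    return DEG_PER_HOUR * elapsed_s / 3600.0

# A frame shot 40 minutes after the first needs about 10 degrees of
# rotation around Polaris to line back up:
print(round(rotation_deg(40 * 60), 1))   # -> 10.0
```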
Covering the steps to composite tracked and untracked meteor shower images is beyond the purview of this blog.
The images shown here were layered, masked and blended with those steps and are used as examples in the book’s tutorials.
Keeping yourself warm is important. But your camera is going to get cold. It should work fine but its battery will die sooner than it would on a warm night. Check it every hour, and have spare, warm batteries ready to swap in when needed.
Lenses can frost up. The only way to prevent this is with low-voltage heater coils, such as the DewDestroyer from David Lane. It works very well. Other types are available on Amazon.
A bright comet is a once-a-decade opportunity to capture some unique nightscapes. Here are my suggested tips and FAQs for getting your souvenir shot.
My guide to capturing Comet NEOWISE assumes you’ve done little, if any, nightscape photography up to now. Even for those who have some experience shooting landscape scenes by night, the comet does pose new challenges — for one, it moves from night to night and requires good planning to get it over a scenic landmark.
So here are my tips and techniques, in answers to the most frequently asked questions I get and that I see on social media posts.
How Long Will the Comet be Visible?
The comet is not going to suddenly whoosh away or disappear. It is in our northern hemisphere sky and fairly well placed for shooting and watching all summer.
But … it is now getting fainter each night so the best time to shoot it is now! Or as soon as clouds allow on your next clear night.
As of this writing on July 18 it is still bright enough to be easily visible to the unaided eye from a dark site. How long this will be the case is unknown.
But after July 23 and its closest approach to Earth, the comet will be receding from us, and that alone will cause it to dim. Later this summer it will require binoculars to see, though it might still be a photogenic target, smaller and dimmer than it was in mid-July.
When is the Best Time to Shoot?
The comet has moved far enough west that it is now primarily an evening object. So look as soon as it gets dark each night.
Until later in July it is still far enough north to be “circumpolar” for northern latitudes (above 50° N) and so visible all night and into the dawn.
But eventually the comet will be setting into the northwest even as seen from northern latitudes and only visible in the evening sky. Indeed, by the end of July the comet will have moved far enough south that observers in the southern hemisphere anxious to see the comet will get their first looks.
Where Do I Look?
In July look northwest below the Big Dipper. By August the comet is low in the west below the bright star Arcturus. By then it will be moving much less from night to night. The chart above shows the comet at nightly intervals; you can see how its nightly motion slows as it recedes from us and from the Sun.
What Exposures Do I Use?
There is no single best setting. It depends on …
— How bright the sky is from your location (urban vs a rural site).
— Whether the Moon is up — it will be after July 23 or so when the Moon returns to the western sky as a waxing crescent.
— The phase of the Moon — in late July it will be waxing to Full on August 3, when the sky will be very bright and the comet faint enough that it might be lost in the bright sky.
However, here are guidelines:
— ISO 400 to 1600
— Aperture f/2 to f/4
— Shutter speed of 4 to 30 seconds
Unless you are shooting in a very bright sky, your automatic exposure settings are likely not going to work.
As with almost all nightscape photography you will need to set your camera on Manual (M) and dial in those settings for ISO, Aperture and Shutter Speed manually. Just how is something you need to consult your camera’s instruction manual for, as some point-and-shoot snapshot cameras are simply not designed to be used manually.
As a rule you want to …
— Keep the ISO as low as possible for the lowest noise. The higher the ISO the worse the noise. But … do raise the ISO high enough to get a well-exposed image. Better to shoot at ISO 3200 and expose well, than at ISO 800 and end up with a dark, underexposed image.
— Shoot at a wide aperture, such as f/2 or f/2.8. The wider the aperture (smaller the f-number) the shorter the exposure can be and/or lower the ISO can be. But … lens aberrations might spoil the sharpness of the image.
— Keep exposures short enough that the stars won’t trail too much during the exposure due to Earth’s rotation. The “500 Rule” of thumb says exposures should be no longer than 500 / Focal length of your lens.
So for a 50mm lens, exposures should be no longer than 500/50 = 10 seconds. You’ll still see some trailing but not enough to spoil the image. And going a bit longer in exposure time can make it possible to use a slower, less noisy ISO speed, or simply give a better-exposed shot.
— Avoid underexposing. If you can, call up the “histogram”— the graph of exposure values — on the resulting image in playback on your camera. The histogram should look fairly well distributed from left to right and not all bunched up at the left.
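The 500 Rule mentioned above reduces to a one-line function. This sketch (mine, not from any app) also folds in a crop factor for cropped-sensor cameras, which shortens the limit further:

```python
# The "500 Rule" as code: the longest exposure, in seconds, before
# stars trail noticeably. A rough guide, not a guarantee; for
# crop-sensor cameras, divide by the crop factor as well.

def max_exposure_500(focal_mm, crop_factor=1.0):
    """Longest untrailed exposure (seconds) for a given lens."""
    return 500.0 / (focal_mm * crop_factor)

print(max_exposure_500(50))        # full frame, 50mm -> 10.0 seconds
print(max_exposure_500(24, 1.5))   # APS-C, 24mm -> ~13.9 seconds
```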
When and where you are will also affect your exposure combination.
If you are at a site with lots of lights such as overlooking a city skyline, exposures will need to be shorter than at a dark site.
And nights with a bright Moon will require shorter exposures than moonless nights.
Take test shots and see what looks good! Inspect the histogram. This isn’t like shooting with film when we had no idea if we got the shot until it was too late!
What Lens Do I Use?
Any lens can produce a fine shot. Choose the lens to frame the scene well.
Using a longer lens (105mm to 200mm) does make the comet larger, but … might make it more difficult to also frame it above a landscape. A good choice is likely a 24mm to 85mm lens.
A fast lens is best, to keep exposure times below the 500 Rule threshold and ISO speeds lower. Slow f/5.6 kit zooms can be used but do pose challenges for getting well exposed and untrailed shots.
Shooting with shorter focal lengths can help keep the aperture wider and faster. Long focal lengths aren’t needed, especially for images of the comet over a landscape. Avoid the temptation to use that monster 400mm or 600mm telephoto wildlife lens. Unless it is on a tracker (see below) it will produce a trailed mess. It is best to shoot with no more than a 135mm telephoto, the faster the better, IF you want a close-up.
Planetarium programs that I recommend below offer “field of view” indicators so you can preview how much of the horizon and sky your camera and lens combination will show.
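If you want to check framing by hand rather than in an app, the field of view follows from the standard formula FOV = 2·arctan(sensor / 2f). A Python sketch, assuming a full-frame 36 × 24 mm sensor (swap in your own sensor dimensions):

```python
import math

# Horizontal field of view of a lens on a full-frame (36 x 24 mm)
# sensor, from the standard formula. Planetarium apps compute this
# for you with their "field of view" indicators.

def fov_deg(focal_mm, sensor_mm=36.0):
    """Angular field of view in degrees for one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

print(round(fov_deg(24), 1))    # 24mm wide-angle -> ~73.7 degrees
print(round(fov_deg(135), 1))   # 135mm telephoto -> ~15.2 degrees
```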
Can I Use My [insert camera here] Camera?
Yes. Whatever you have, try it.
However, the best cameras for any nightscape photography are DSLRs and Mirrorless cameras, either full-frame or cropped frame. They have the lowest noise and are easiest to set manually.
In my experience in teaching workshops I find that the insidious menus of automatic “point-and-shoot” pocket cameras make it very difficult to find the manual settings. And some have such noisy sensors they do not allow longer exposures and/or higher ISO speeds. But try their Night or Fireworks scene modes.
It doesn’t hurt to try, but if you don’t get the shot, don’t fuss. Just enjoy the view with your eyes and binoculars.
But … if you have an iPhone 11 or a recent Android phone (I have neither!), their “Night scene” modes are superb, using clever in-camera image stacking and processing routines to yield surprisingly good images. Give them a try — keep the camera steady and shoot.
What No One Asks: How Do I Focus?
Everyone fusses about “the best” exposure.
What no one thinks of is how they will focus at night. What ruins images is often not bad exposure (a lot of exposure sins can be fixed in processing) but poor focus (which cannot be fixed later).
On bright scenes it is possible your camera’s Autofocus system will “see” enough in the scene to work and focus the lens. Great.
On dark scenes it will not. You must manually focus. Do that using your camera’s “Live View” function (all DSLRs and Mirrorless cameras have it — but check your user manual as on DSLRs it might need to be activated in the menus if you have never used it).
Aim at a bright star or distant light and magnify the image 5x or 10x (with the + button) to inspect the star or light. Put the lens on MF (not AF) and focus the lens manually to make the star as pinpoint as possible. Do not touch the lens afterwards.
Practice on a cloudy night on distant lights.
All shooting must be done with a camera on a good tripod. As such, turn OFF any image stabilization (IS), whether it be on the lens or in the camera. IS can ruin shots taken on a tripod.
What Few Ask: How Do I Plan a Shoot?
Good photos rarely happen by accident. They require planning. That’s part of the challenge and satisfaction of getting the once-in-a-lifetime shot.
To get the shot of the comet over some striking scene below, you have to figure out:
— First, where the comet will be in the sky,
— Then, where you need to be to look toward that location.
— And of course, you need to be where the sky will be clear!
Planning Where the Comet Will Be
Popular planning software such as PhotoPills and The Photographer’s Ephemeris can help immensely, but won’t have the comet itself included in their displays, just the position of the Sun, Moon and Milky Way.
For previewing the comet’s position in the sky, I use the planetarium programs Starry Night (desktop) or SkySafari (mobile app). Both include comet positions.
The program Stellarium (stellarium.org) is free for desktop, while the mobile Stellarium Plus apps (iOS and Android) have a small fee. There is also a free web-based version at https://stellarium-web.org. Be sure to allow it to access your location.
Set the programs to the night in question to see where the comet will be in relation to the stars and patterns such as the Big Dipper. Note the comet’s altitude in degrees and azimuth (how far along the horizon it will be). For example, an azimuth of 320° puts it in the northwest (270° is due west; 0° or 360° is due north, 315° is directly northwest).
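Translating an azimuth readout into a compass direction is easy to automate. A small helper sketch (the names are mine, for illustration):

```python
# Convert an azimuth from a planetarium program into the nearest
# compass point (0/360 = N, 90 = E, 180 = S, 270 = W).

POINTS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def compass(azimuth_deg):
    """Nearest of the 8 compass points for an azimuth in degrees."""
    index = round(azimuth_deg % 360 / 45) % 8
    return POINTS[index]

print(compass(320))   # -> NW (the comet's rough direction in July)
print(compass(270))   # -> W
```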
With either PhotoPills or The Photographer’s Ephemeris you can dial in the time and date and see lines pointing toward where the Sun is, even when it is below the horizon. Scrub through time until that line points to the same azimuth the comet will occupy, and use the line as a stand-in for the comet’s direction.
Move your location to place the line toward the comet over what you want to include in the scene.
I like The Photographer’s Ephemeris as it links to the companion app TPE 3D, which can show the stars over the actual topographic landscape. It won’t show the comet, but if you know where it is in the sky you can see if it will clear mountains, for example.
Planning for the Weather
All is for nought if the sky is cloudy.
For planning astro shoots I like the app Astrospheric (https://www.astrospheric.com). It is free for mobile and there is a web-based version. It uses Environment Canada predictions of cloud cover for North America. Use it to plan where to be for clear skies first, then figure out the best scenic site that will be under those clear skies.
Be happy to get a well-composed and exposed single shot.
But … if you wish to try some more advanced techniques for later processing, here are suggestions.
1. Panoramas
On several nights I’ve found a panorama captures the scene better, including the comet in context with the wide horizon, the sweep of the twilight arch or, as we’ve had in western Canada, some Northern Lights.
Take several identical exposures, moving the camera 10 to 15 degrees between images. Editing programs such as Lightroom, Adobe Camera Raw, ON1 Photo RAW and Affinity Photo have panorama stitching routines built in.
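For planning, a quick sum tells you how many frames a sweep will need. A Python sketch (illustrative only; smaller steps give more overlap, which makes stitching more reliable):

```python
import math

# How many frames a panorama sweep needs if you turn the camera a
# fixed number of degrees between shots. A rough planning sketch.

def pano_frames(sweep_deg, step_deg):
    """Frames needed to cover sweep_deg, stepping step_deg each time."""
    return math.ceil(sweep_deg / step_deg) + 1   # +1 for the first frame

print(pano_frames(180, 15))   # half-horizon sweep, 15-degree steps -> 13
```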
My Nightscapes and Time-Lapses ebook shown above provides tutorials for shooting and processing nightscape panoramas.
2. Exposure Blending
If you have a situation where the sky is bright but the ground is dark, or vice versa, and one exposure cannot record both well, then shoot two exposures, each best suited to recording the sky and ground individually.
For example, on moonless nights I’ve been shooting 2- to 5-minute long exposures for the ground and with the lens stopped down to f/5.6 or f/8 for better depth of field to be sure the foreground was in focus.
3. Stacking to Reduce Noise
It is also possible to shoot multiple exposures to stack later in processing to smooth out noise. This is most useful in scenes with dark foregrounds, where noise is most obvious; there I will stack 4 to 8 images.
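To see why stacking works: averaging N frames shrinks random noise by roughly the square root of N. A quick Python simulation with fake pixel values (all numbers illustrative):

```python
import random
import statistics

# Averaging N frames reduces random noise by about sqrt(N).
# Simulated "pixels": true level 100, noise sigma 10.

random.seed(42)

def noisy_frame(n_pixels=10000, level=100.0, sigma=10.0):
    """One simulated frame: a list of noisy pixel values."""
    return [random.gauss(level, sigma) for _ in range(n_pixels)]

def stack_mean(frames):
    """Average a list of frames pixel by pixel."""
    return [sum(px) / len(frames) for px in zip(*frames)]

single = noisy_frame()
stacked = stack_mean([noisy_frame() for _ in range(8)])

# Noise (standard deviation) drops from about 10 in one frame to
# about 10 / sqrt(8), roughly 3.5, in the 8-frame stack:
print(statistics.stdev(single), statistics.stdev(stacked))
```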
Just how to do this is beyond the scope of this blog. I also give step-by-step tutorials for the process in my Nightscapes and Time-Lapses ebook shown above. It can be done in Photoshop, or in specialized programs such as StarryLandscapeStacker (for MacOS) or Sequator (Windows).
But shoot the images now, and learn later how to use them.
4. Tracking the Sky
If it is close-ups of the comet you want, then you will need to use a 135mm to 300mm telephoto lens (especially later in the summer when the comet is farther away and smaller).
But with such lenses any exposure over a few seconds will result in lots of trailing.
The solution is a tracking device such as the Sky-Watcher Star Adventurer or iOptron SkyGuider. These need to be set up so their rotation axis aims at the North Celestial Pole near Polaris. The camera can then follow the stars for the required exposures of up to a minute or more needed to record the comet and its tails well.
Just how to use a tracker is again beyond the scope of this blog. But if you have one, it will work very well for comet shots with telephoto lenses. However, trackers are not essential for wide-angle shots, especially once the Moon begins to light the sky.
But later in the summer when the comet is fainter and smaller, a tracked and stacked set of telephoto lens images will likely be the best way to capture the comet.
I present my top 10 tips for capturing time-lapses of the moving sky.
If you can take one well-exposed image of a nightscape, you can take 300. There’s little extra work required, just your time. But if you have the patience, the result can be an impressive time-lapse movie of the night sky sweeping over a scenic landscape. It’s that simple.
Or is it?
Here are my tips for taking time-lapses, in a series of “Do’s” and “Don’ts” that I’ve found effective for ensuring great results.
But before you attempt a time-lapse, be sure you can first capture well-exposed and sharply focused still shots. Shooting hundreds of frames for a time-lapse will be a disappointing waste of your time if all the images are dark and blurry.
For that reason many of my tips apply equally well to shooting still images. But taking time-lapses does require some specialized gear, techniques, planning, and software. First, the equipment.
NOTE: This article appeared originally in Issue #9 of Dark Sky Travels e-magazine.
TIP 1 — DO: Use a solid tripod
A lightweight travel tripod that might suffice for still images on the road will likely be insufficient for time-lapses. Not only does the camera have to remain rock steady for the length of the exposure, it has to do so for the length of the entire shoot, which could be several hours. Wind can’t move it, nor any camera handling you might need to do mid-shoot, such as swapping out a battery.
The tripod needn’t be massive. For hiking into scenic sites you’ll want a lightweight but sturdy tripod. While a carbon fibre unit is costly, you’ll appreciate its low weight and good strength every night in the field. Similarly, don’t scrimp on the tripod head.
TIP 2 — DO: Use a fast lens
As with nightscape stills, the single best purchase you can make to improve your images of dark sky scenes is not buying a new camera (at least not at first), but buying a fast, wide-angle lens.
Ditch the slow kit zoom and go for at least an f/2.8, if not f/2, lens with 10mm to 24mm focal length. This becomes especially critical for time-lapses, as the fast aperture allows using short shutter speeds, which in turn allows capturing more frames in a given period of time. That makes for a smoother, slower time-lapse, and a shoot you can finish sooner if desired.
TIP 3 — DO: Use an intervalometer
Time-lapses demand the use of an intervalometer to automatically fire the shutter for at least 200 to 300 images for a typical time-lapse. Many cameras have an intervalometer function built into their firmware. The shutter speed is set by using the camera in Manual mode.
Just be aware that a camera’s 15-second exposure really lasts 16 seconds, while a 30-second shot set in Manual is really a 32-second exposure.
So in setting the interval to provide one second between shots, as I advise below, you have to set the camera’s internal intervalometer for an interval of 17 seconds (for a shutter speed of 15 seconds) or 33 seconds (for a shutter speed of 30 seconds). It’s an odd quirk I’ve found true of every brand of camera I use or have tested.
Alternatively, you can set the camera to Bulb and then use an outboard hardware intervalometer (they sell for $60 on up) to control the exposure and fire the shutter. Test your unit. Its interval might need to be set to only one second, or to the exposure time + one second.
How intervalometers define “Interval” varies annoyingly from brand to brand. Setting the interval incorrectly can result in every other frame being missed and a ruined sequence.
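The interval bookkeeping described above can be put in code. This sketch encodes the 15-second-really-16 and 30-second-really-32 behaviour I described; test your own camera, as the exact figures are its quirk, not a standard:

```python
# What to dial into an in-camera intervalometer so there is really
# one second between shots, given that "15 s" exposures actually run
# 16 s and "30 s" runs 32 s (check your own camera's behaviour).

TRUE_LENGTH = {15: 16, 30: 32}   # nominal -> actual seconds

def camera_interval(nominal_shutter_s, gap_s=1):
    """Interval to set in-camera for gap_s seconds between shots."""
    actual = TRUE_LENGTH.get(nominal_shutter_s, nominal_shutter_s)
    return actual + gap_s

print(camera_interval(15))   # -> 17, as recommended above
print(camera_interval(30))   # -> 33
```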
SETTING YOUR CAMERA
TIP 4 — DON’T: Underexpose
As with still images, the best way to beat noise is to give the camera signal. Use a wider aperture, a longer shutter speed, or a higher ISO (or all of the above) to ensure the image is well exposed with a histogram pushed to the right.
If you try to boost the image brightness later in processing you’ll introduce not only the very noise you were trying to avoid, but also odd artifacts in the shadows such as banding and purple discolouration.
With still images we have the option of taking shorter, untrailed images for the sky, and longer exposures for the dark ground to reveal details in the landscape, to composite later. With time-lapses we don’t have that luxury. Each and every frame has to capture the entire scene well.
At dark sky sites, expose for the dark ground as much as you can, even if that makes the sky overly bright. Unless you outright clip the highlights in the Milky Way or in light polluted horizon glows, you’ll be able to recover highlight details later in processing.
After poor focus, underexposure (resulting in overly noisy images) is the single biggest mistake I see beginners make.
TIP 5 — DON’T: Worry about 500 or “NPF” Exposure Rules
While still images might have to adhere to the “500 Rule” or the stricter “NPF Rule” to avoid star trailing, time-lapses are not so critical. Slight trailing of stars in each frame won’t be noticeable in the final movie when the stars are moving anyway.
So go for rule-breaking, longer exposures if needed, for example if the aperture needs to be stopped down for increased depth of field and foreground focus. Again, with time-lapses we can’t shoot separate exposures for focus stacking later.
Just be aware that the longer each exposure is, the longer it will take to shoot 300 of them.
Why 300? I find 300 frames is a good number to aim for. When assembled into a movie at 30 frames per second (a typical frame rate) your 300-frame clip will last 10 seconds, a decent length of time in a final movie.
You can use a slower frame rate (24 fps works fine), but below 24 the movie will look jerky unless you employ advanced frame blending techniques. I do that for auroras.
How long it will take to acquire the needed 300 frames will depend on how long each exposure is and the interval between them. An app such as PhotoPills (via its Time lapse function) is handy in the field for calculating exposure time vs. frame count vs. shoot length, and providing a timer to let you know when the shoot is done.
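The planning sums PhotoPills performs are easy to reproduce by hand. A Python sketch of shoot length versus clip length (function names are mine, for illustration):

```python
# Time-lapse planning arithmetic: how long the shoot takes to
# capture, and how long the assembled clip will run.

def shoot_length_min(frames, exposure_s, gap_s=1):
    """Minutes of shooting needed to capture the frames."""
    return frames * (exposure_s + gap_s) / 60.0

def clip_length_s(frames, fps=30):
    """Seconds the assembled movie will run at the given frame rate."""
    return frames / fps

# 300 frames of 30 s exposures with 1 s gaps:
print(round(shoot_length_min(300, 30)))   # -> 155 minutes of shooting
print(clip_length_s(300))                 # -> 10.0 seconds of movie
```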
TIP 6 — DO: Use short intervals
At night, the interval between exposures should be no more than one or two seconds. By “interval,” I mean the time between when the shutter closes and when it opens again for the next frame.
Not all intervalometers define “Interval” that way, but it’s what most photographers expect it to mean. If you use too long an interval, the stars will appear to jump across the sky, ruining the smooth motion you are after.
In practice, intervals of four to five seconds are sometimes needed to accommodate the movement of motorized “motion control” devices that turn or slide the camera between each shot. But I’m not covering the use of those advanced units here. I cover those options and much, much more in 400 pages of tips, techniques and tutorials in my Nightscapes ebook, linked to above.
However, during the day or in twilight, intervals can be, and indeed need to be, much longer than the exposures. It’s at night with stars in the sky that you want the shutter to be closed as little as possible.
TIP 7 — DO: Shoot Raw
This advice also applies to still images where shooting raw files is essential for professional results. But you likely knew that.
However, with time-lapses some cameras offer a mode that will shoot time-lapse frames and assemble them into a movie right in the camera. Don’t use it. It gives you a finished, pre-baked movie with no ability to process each frame later, an essential step for good night time-lapses. And raw files provide the most data to work with.
So even with time-lapses, shoot raw not JPGs.
If you are confident the frames will be used only for a time-lapse, you might choose to shoot in a smaller S-Raw or compressed C-Raw mode, for smaller files, in order to fit more frames onto a card.
But I prefer not to shrink or compress the original raw files in the camera, as some of them might make for an excellent stacked and layered still image where I want the best quality originals (such as for the ISS over Waterton Lakes example above).
To get you through a long field shoot away from your computer buy more and larger memory cards. You don’t need costly, superfast cards for most time-lapse work.
PLANNING AND COMPOSITION
TIP 8 — DO: Use planning apps to frame
All nightscape photography benefits from using one of the excellent apps we now have to assist us in planning a shoot. They are particularly useful for time-lapses.
Apps such as PhotoPills and The Photographer’s Ephemeris are great. I like the latter as it links to its companion TPE 3D app to preview what the sky and lighting will look like over the actual topographic horizon from your site. You can scrub through time to see the motion of the Milky Way over the scenery. The Augmented Reality “AR” modes of these apps are also useful, but only once you are on site during the day.
For planning a time-lapse at home I always turn to a “planetarium” program to simulate the motion of the sky (albeit over a generic landscape), with the ability to add in “field of view” indicators to show the view your lens will capture.
You can step ahead in time to see how the sky will move across your camera frame during the length of the shoot. Indeed, such simulations help you plan how long the shoot needs to last until, for example, the galactic core or Orion sets.
Planetarium software helps ensure you frame the scene properly, not only for the beginning of the shoot (that’s easy — you can see that!), but also for the end of the shoot, which you can only predict.
If your shoot will last as long as three hours, do plan to check the battery level and swap batteries before three hours is up. Most cameras, even new mirrorless models, will now last for three hours on a full battery, but likely not any longer. If it’s a cold winter night, expect only one or two hours of life from a single battery.
TIP 9 — DO: Develop one raw frame and apply settings to all
Processing the raw files takes the same steps and settings as you would use to process still images.
With time-lapses, however, you have to do all the processing required within your favourite raw developer software. You can’t count on bringing multiple exposures into a layer-based processor such as Photoshop to stack and blend images. That works for a single image, but not for 300.
I use Adobe Camera Raw out of Adobe Bridge to do all my time-lapse processing. But many photographers use Lightroom, which offers all the same settings and non-destructive functions as Adobe Camera Raw.
For those who wish to “avoid Adobe” there are other choices, but for time-lapse work an essential feature is the ability to develop one frame, then copy and paste its settings (or “sync” settings) to all the other frames in the set.
Not all programs allow that. Affinity Photo does not. Luminar doesn’t do it very well. DxO PhotoLab, ON1 Photo RAW, and the free RawTherapee, among others, all work fine.
HOW TO ASSEMBLE A TIME-LAPSE
Once you have a set of raws all developed, the usual workflow is to export all those frames as high-quality JPGs, which is what movie assembly programs need. Your raw developing software has to allow batch exporting to JPGs — most do.
However, none of the programs above (except Photoshop and Adobe’s After Effects) will create the final movie, whether it be from those JPGs or from the raws.
So for assembling the intermediate JPGs into a movie, I often use a low-cost program called TLDF (TimeLapse DeFlicker) available for MacOS and Windows (timelapsedeflicker.com). It offers advanced functions such as deflickering (i.e. smoothing slight frame-to-frame brightness fluctuations) and frame blending (useful to smooth aurora motions or to purposely add star trails).
While there are many choices for time-lapse assembly, I suggest using a program dedicated to the task and not, as many do, a movie editing program. For most sequences, the latter makes assembly unnecessarily difficult and harder to set key parameters such as frame rates.
TIP 10 — DO: Try LRTimelapse for more advanced processing
Get serious about time-lapse shooting and you will want — indeed, you will need — the program LRTimelapse (LRTimelapse.com). A free but limited trial version is available.
This powerful program is for sequences where one setting will not work for all the frames. One size does not fit all.
Instead, LRTimelapse allows you to process a few keyframes throughout a sequence, say at the start, middle, and end. It then interpolates all the settings between those keyframes to automatically process the entire set of images to smooth (or “ramp”) and deflicker the transitions from frame to frame.
This is essential for sequences where the lighting changes during the shoot (say, the Moon rises or sets), and for so-called “holy grails.” Those are advanced sequences that track from daylight or twilight to darkness, or vice versa, over a wide range of camera settings.
However, LRTimelapse works only with Adobe Lightroom or the Adobe Camera Raw/Bridge combination. So for advanced time-lapse work Adobe software is essential.
A Final Bonus Tip
Keep it simple. You might aspire to emulate the advanced sequences you see on the web, where the camera pans and dollies during the movie. I suggest avoiding complex motion control gear at first to concentrate on getting well-exposed time-lapses with just a static camera. That alone is a rewarding achievement.
But before that, first learn to shoot still images successfully. All the settings and skills you need for a great-looking still image are needed for a time-lapse. Then move on to capturing the moving sky.
I end with a link to an example music video, shot using the techniques I’ve outlined. Thanks for reading and watching. Clear skies!
The Beauty of the Milky Way from Alan Dyer on Vimeo.
A new low-cost sky tracker promises to simplify not only tracking the sky but also taking time-lapses panning along the horizon. It works but …
If you are an active nightscape photographer chances are your social media feeds have been punctuated with ads for this new low-cost tracker from MoveShootMove.com.
For $200, much less than popular trackers from Sky-Watcher and iOptron, the SiFo unit (as it is labelled) offers the ability to track the sky, avoiding any star trails. That alone would make it a bargain, and useful for nightscape and deep-sky photographers.
But it also has a function for panning horizontally, moving incrementally between exposures, thus the Move-Shoot-Move designation. The result is a time-lapse movie that pans along the horizon, but with the ground sharp in each frame, as the camera moves only between exposures, not during them.
Again, for $200 this is an excellent feature lacking in trackers like the Sky-Watcher Star Adventurer or iOptron SkyTracker. The Sky-Watcher Star Adventurer Mini does, however, offer both tracking and “move-shoot-move” time-lapse functions, but at a cost of $300 to $400 U.S., depending on accessories.
All these functions are provided in a unit that is light (weighing 700 grams with a tripod plate and the laser) and compact (taking up less space in your camera bag than most lenses). By comparison, the Star Adventurer Mini weighs 900 grams with the polar scope, while the original larger Star Adventurer is 1.4 kg, double the MSM’s weight.
Note that the MSM’s advertised weight of 445 grams does not include the laser or a tripod plate, two items you need to use it. So 700 grams is a more realistic figure, still light, but not lighter than the competition by as much as you might be led to believe.
Nevertheless, the MSM’s small size and weight make it attractive for travel, especially for flights to remote sites. Construction is solid and all-metal. This is not a cheap plastic toy.
But does it work? Yes, but with several important caveats that might be a concern for some buyers.
What I Tested
I purchased the Basic Kit B package for $220 U.S., which includes a small case, a laser pointer and bracket for polar alignment (with a charger for the laser’s single 3.7-volt battery), and the camera sync cable needed for time-lapse shooting.
I also purchased the new “button” model, not the older version that used a knob to set various tracking rates.
The ball head needed to go on top of the tracker is something you supply. The kit does come with two 3/8-inch stud bolts and a 3/8-to-1/4-inch bushing adapter, for placing the tracker on tripods in the various mounting configurations I show below.
The first units were labelled “SiFo,” but current units now carry the Gauda brand name. I’ll just call it the MSM.
I purchased the gear from the MSM website, and had my order fulfilled and shipped to me in Canada from China with no problems.
Tracking the Sky in Nightscapes
The attraction is its tracking function, allowing a camera to follow the sky and take exposures longer than the limits dictated by the “500” or “NPF” rules for avoiding star trailing.
Exposures can be a minute or more to record much more depth and detail in the Milky Way, though the ground will blur. But blending tracked sky exposures with untracked ground exposures gets around that, and with the MSM it’s easy to turn on and off the tracking motor, something not possible with the low-cost wind-up Mini Track from Omegon.
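For reference, the untracked limit the tracker lets you exceed comes from rules of thumb like the “500 Rule” mentioned above. A minimal sketch of that rule (the crop-factor handling is the standard convention; the NPF rule is more precise but also needs pixel pitch and aperture, so it is omitted here):

```python
def rule_500_limit(focal_mm, crop=1.0):
    """Rough longest untracked exposure (seconds) before stars
    visibly trail, per the popular "500 Rule": 500 divided by
    the 35mm-equivalent focal length."""
    return 500.0 / (focal_mm * crop)

print(rule_500_limit(24))        # ~20.8 s on a full-frame camera
print(rule_500_limit(24, 1.5))   # ~13.9 s on a 1.5x crop body
```

With tracking engaged, those limits no longer apply and exposures of a minute or more become practical.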
The illustrations and instructions (in a PDF well-hidden off the MSM Buy page) show the MSM mounted using the 1/4-20 bolt hole on the side of the unit opposite the LED-illuminated control panel. While this seems to be the preferred method, in the first unit I tested I found it produced serious mis-tracking problems.
With a Canon 6D MkII and 50mm f/1.4 lens (not a particularly heavy combination), the MSM’s gears would not engage and start tracking until after about 5 minutes. The first exposures were useless. This was also the case whenever I moved the camera to a new position to re-frame the scene or sky. Again, the first few minutes produced no or poor tracking until the gears finally engaged.
This would be a problem when taking tracked/untracked sets for nightscapes, as images need to be taken in quick succession. It’s also just plain annoying.
However, see the UPDATE at the end for the performance of a new Gauda-branded unit that was sent to me.
The solution was to mount the MSM using the 3/8-inch bolt hole on the back plate of the tracker, using the 1/4-20 adapter ring to allow it to attach to my tripod head. This still allowed me to tip the unit up to polar align it.
Tracking was now much more consistent, with only the first exposure usually badly trailed. Subsequent exposures all tracked, though with varying degrees of accuracy, as I show below.
When used as a tracker, you need to control the camera’s exposure time with an external intervalometer you supply, to allow setting exposures over 30 seconds long.
The MSM offers N and S settings, the latter for use in the Southern Hemisphere. A 1/2-speed setting turns the tracker at half the normal sidereal rate, useful for nightscapes as a compromise that provides some tracking while minimizing ground blurring.
For any tracker to track, its rotation axis has to be aimed at the Celestial Pole, near Polaris in the Northern Hemisphere, and near Sigma Octantis in the Southern Hemisphere.
I chose the laser pointer option for this, rather than the polar alignment scope. The laser attaches to the side of the MSM using a small screw-on metal bracket so that it points up along the axis of rotation, the polar axis.
The laser is labeled as a 1mW unit, but it is far brighter than any 1mW laser I’ve used, bright enough that the beam shows up even when the sky is not fully dark. The battery is rechargeable and a small charger comes with the laser. Considering the laser is just a $15 option, it’s a bargain. But ….
UPDATE ADDED SEPTEMBER 1
Since I published the review, I have had the laser professionally tested, and it measured as having an output of 45 milliwatts. Yet it is labeled as being under 1 milliwatt. This is a serious misrepresentation of the specs, done I can only assume to circumvent import restrictions. In Canada it is now illegal to import, own, or use any green laser over 5 milliwatts, a power level that would be sufficient for the intended use of polar aligning. At 45 mW, this laser is outright illegal.
So be warned, use of this laser will be illegal in some areas. And use of any green laser will be illegal close to airports, and outlawed entirely in some jurisdictions such as Australia, a fact the MSM website mentions.
The legal alternative is the optical polar alignment scope. I already have several of those, but my expectation that I could use one with the same bracket supplied with the laser was dashed: the bracket’s hole is too narrow to accept any of the polar alignment scopes I have, all of which are standard items. If you want a polar scope, buy theirs for $70.
However, if you can use it where you live, the laser works well enough, allowing you to aim the tracker at the Pole just by eye. For the wide lenses the tracker is intended to be used with, eyeball alignment proved good enough.
Just be very, very careful not to accidentally look down the beam. Seriously. It is far too easy to do by mistake, but doing so could damage your eye in moments.
Tracking the Sky in Deep-Sky Images
How well does the MSM actually track? In tests of the original SiFo unit I bought, shooting sets of exposures with 35mm, 50mm, and 135mm lenses with the tracker mounted from the back, I found that 25% to 50% of the images showed mis-tracking. Gear errors still produced slightly trailed stars. This gear error shows itself more as you shoot with longer focal lengths.
The MSM is best for what it is advertised as — as a tracker for nightscapes with forgiving wide-angle lenses in the 14mm to 24mm range. With longer lenses, expect to throw away a good number of exposures as unusable. Take twice as many as you think you might need.
With a 135mm lens taking Milky Way closeups, more than half the shots were badly trailed. Really badly trailed. This is not from poor polar alignment, which produces a gradual drift of the frame, but from errors in the drive gears, and random errors at that, not periodic errors.
To be fair, this is often the case with other trackers as well. People always want to weight them down with heavy and demanding telephotos for deep-sky portraits, but that’s rarely a good idea with any tracker. They are best with wide lenses.
That said, I found the MSM’s error rate and amount to be much worse than with other trackers. With the Star Adventurer models and a 135mm lens for example, I can expect only 20% to 25% of the images to be trailed, and even then rarely as badly as what the MSM exhibited.
See the UPDATE at the end for the performance of the replacement Gauda-branded unit sent to me with the promise of much improved tracking accuracy.
Yes, enough shots worked to be usable, but it took using a fast f/2 lens to keep exposure times down to a minute to provide that yield. Users of slow f/5.6 kit-zoom lenses will struggle trying to take deep-sky images with the MSM.
In short, this is a low-cost tracker and it shows. It does work, but not as well as the higher-cost competitors. But restrict it to wide-angle lenses and you’ll be fine.
Panning the Ground
The other mode the MSM can be used in is as a time-lapse motion controller. Here you mount the MSM horizontally so the camera turns parallel to the horizon (or it can be mounted vertically for vertical panning, a mode I rarely use and did not test).
This is where the Move-Shoot-Move function comes in.
The supplied Sync cable goes from the camera’s flash hot shoe to the MSM’s camera jack. What happens is that when the camera finishes an exposure it sends a pulse to the MSM, which then quickly moves while the shutter is closed by the increment you set.
There is a choice of 4 speeds, marked in degrees-per-move: 0.05°, 0.2°, 0.5°, and 1.0°. For example, as the movie below shows, taking 360 frames at the 1° speed results in a complete 360° turn.
The MSM does the moving, but all the shutter speed control and intervals must be set using a separate intervalometer, either one built into the camera, or an outboard hardware unit. The MSM does not control the camera shutter. In fact, the camera controls the MSM.
Intervals should be set to be about 2 seconds longer than the shutter speed, to allow the MSM to perform its move and settle.
This connection between the MSM and camera worked very well. It is unconventional, but simple and effective.
Too Slow or Too Fast
The issue is the limited choice of move speeds. I found the 0.5° and 1° speeds much too fast for night use, except perhaps for special effects in urban cityscapes. Even in daytime use, when exposure times are very short, the results are dizzying, as I show below.
Even the 0.2°-per-move speed I feel is too fast for most nightscape work. Over the 300 exposures one typically takes for a time-lapse movie, that speed will turn the MSM (300 x 0.2°) = 60 degrees. That’s a lot of motion for 300 shots, which will usually be rendered out at 24 or 30 frames per second for a clip lasting only 10 to 12 seconds. The scene will sweep through a large angle in that short time.
On the other hand, the 0.05°-per-move setting is rather slow, producing a turn of (300 x 0.05°) = 15° during the 300 shots.
That works, but with all the motion controllers I’ve used — units that can run at whatever speed they need to get from the start point to the end point you set — I find a rate of about 0.1° per move is what works best for a movie that provides the right amount of motion. Not too slow. Not too fast. Just right.
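The pan-speed arithmetic above can be captured in a few lines of Python (the 300-frame count and 30 fps playback rate are just the typical values used in this review):

```python
def pan_summary(frames, deg_per_move, fps=30):
    """Total pan angle (degrees) and playback length (seconds)
    for a shoot-move-shoot sequence rendered at fps frames/second."""
    return frames * deg_per_move, frames / fps

# 300 frames at 0.2°/move: a 60° pan squeezed into a 10-second clip at 30 fps.
print(pan_summary(300, 0.2))   # (60.0, 10.0)
# The slow 0.05°/move setting pans only 15° over the same clip.
print(pan_summary(300, 0.05))  # (15.0, 10.0)
```

Plugging in 0.1°/move gives a 30° pan, which is why that middle rate feels “just right” for most sequences.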
UPDATE ADDED DECEMBER 21, 2019
From product photos on the MoveShootMove.com website, it appears that the tracker is now labeled MSM, as it should have been all along.
Most critically, perhaps in response to this review and my comments here, the time-lapse speeds have been changed to 0.05, 0.075, 0.1 and 0.125 degrees per move, adding the 0.1°/move speed I requested below and deleting the overly fast 0.5° and 1.0° speeds.
Plus it appears the new units have the panel labels printed the other way around so they are not upside down for most mounting situations.
I have not tested this new version, but these speeds sound much more usable for panning time-lapses. Bravo to MSM for listening!
Following the Sky in a Time-Lapse
The additional complication is trying to get the MSM to also turn at the right rate to follow the sky — for example, to keep the galaxy core in frame during the time-lapse clip. I think doing so produces one of the most effective time-lapse sequences.
But to do that with any device requires turning at a rate of 15° per hour, the rate the sky moves from east to west.
Because the MSM provides only set fixed speeds, the only way you have of controlling how much it moves over a given amount of time, such as an hour, is to vary the shutter speed.
I found that to get the MSM to follow the Milky Way in a time-lapse using the 0.05° rate and shooting 300 frames required shooting at a shutter speed of 12 seconds. No more, no less.
Do the Math
Where does that number come from?
At its rate of 0.05°/move, the MSM will turn 15° over 300 shots. The sky moves 15° in one hour, or 3600 seconds. So to fit 300 shots into 3600 seconds means each shot has to be no longer than (3600/300) = 12 seconds long.
The result works, as I show in the sampler movie.
But 12 seconds is a rather short shutter speed on a dark, moonless night with the Milky Way.
For properly exposed images you would need to shoot at very fast apertures (f/1.4 to f/2) and/or high and noisy ISO speeds. Neither are optimal. But they are forced upon you by the MSM’s restricted rates.
Using the faster 0.2° rate (of the original model) yields a turn of 60° over 300 shots. That’s four hours of sky motion. So each exposure now has to be 48 seconds long for the camera to follow the sky, four times longer because the drive rate is now four times faster.
A shutter speed of 48 seconds is a little too long in my opinion. Stars in each frame will trail. Plus a turn of 60° over 300 shots is quite a lot, producing a movie that turns too quickly.
By far the best speed for motion control time-lapses would be 0.1° per move. That would allow 24-second exposures to follow the sky, allowing a stop less in aperture or ISO speed. (DECEMBER 21 UPDATE: That speed seems to now be offered.)
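The shutter-speed arithmetic above generalizes to any move increment. A minimal Python sketch, using the 15°-per-hour sky drift rate from the text:

```python
SKY_DEG_PER_SECOND = 15.0 / 3600.0   # the sky drifts 15° per hour, east to west

def shutter_to_follow_sky(deg_per_move):
    """Exposure length (seconds) at which each move exactly matches
    the sky's drift, so the pan keeps pace with the stars."""
    return deg_per_move / SKY_DEG_PER_SECOND

for rate in (0.05, 0.1, 0.2):
    print(f"{rate}°/move -> {shutter_to_follow_sky(rate):.0f}-second exposures")
```

This reproduces the figures worked out above: 12 seconds at 0.05°/move, 24 seconds at 0.1°/move, and 48 seconds at 0.2°/move.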
Yes, having only a limited number of pre-wired speeds does make the MSM much easier to program than devices like the Star Adventurer Mini or SYRP Genie Mini that use wireless apps to set their functions. No question, the MSM is better suited to beginners who don’t want to fuss with lots of parameters.
As it is, getting a decent result requires some math and juggling of camera settings to make up for the MSM’s limited choices of speeds.
Time-Lapse Movie Examples
This compilation shows examples of daytime time-lapses taken at the fastest and dizzying 0.5° and 1.0° speeds, and night time-lapses taken at the slower speeds. The final clip is taken at 0.05°/move and with 12-second exposures, a combination that allowed the camera to nicely follow the Milky Way, albeit at a slow pace. Taking more than the 300 frames used here would have produced a clip that turned at the same rate, but lasted longer.
The MSM is powered off an internal rechargeable battery, which can be charged from any 5-volt charger, such as one for a mobile phone.
The MSM uses a USB-C jack for the power cable, but a USB-A to USB-C cord is supplied, handy as you might not have one if you don’t have other USB-C devices.
The battery lasted for half a dozen or more 300-shot time-lapses, enough to get you through at least 2 or 3 nights of shooting. However, my testing was done on warm summer nights. In winter battery life will be less.
While the built-in battery is handy, should you find the battery level low in the field (the N and S switches blink as a warning), you can’t just swap in fresh batteries. Just remember to charge up before heading out. Alternatively, it can be charged from an external 5-volt battery pack, such as those used to prolong cell phone life.
The MSM does not offer, nor does it promise, any form of automated panorama shooting. This is where the device turns by, say, 15° to 45° between shots, to shoot the segments for a still-image panorama. More sophisticated motion controllers from SYRP and Edelkrone offer that function, including the ability to mate two devices for automated multi-tier panoramas.
Nor does the MSM offer the more advanced option of ramping speeds up and down at the start and end of a time-lapse. It moves at a constant rate throughout.
While some of the shortcomings could perhaps be fixed with a firmware update, there is no indication anywhere that its internal firmware can be updated through the USB-C port.
UPDATE ADDED OCTOBER 7, 2019
Since I published the review, MSM saw the initial test results and admitted that the earlier units like mine (ordered in June) exhibited large amounts of tracking error. They sent me a replacement unit, now branded with the Gauda label. According to MSM it contains a more powerful motor promised to improve tracking accuracy and making it possible to take images with lenses as long as 135mm.
I’m sorry to report it didn’t.
In tests with the 135mm lens the new, improved MSM still showed lots of tracking error, to the point that images taken with a lens as long as this were mostly unusable.
Tap or click on the images to download full-res versions.
The short movie above takes the full-frame images from the zenith set of 24 frames taken over 48 minutes and turns them into a little time-lapse. It shows how the mechanism of the MSM seems to be wobbling the camera around in a circle, creating the mis-tracking.
Comparison with the Star Adventurer
As a comparison, the next night I used a Sky-Watcher Star Adventurer (the full-size model not the Mini) to shoot the same fields in the northeast and overhead with the same 135mm lens and with the same ball-head, to ensure the ball-head was not at fault. Here are the results:
The Star Adventurer performed much better. Most images were well-tracked. Even on those frames that showed trailing, it was slight. The Star Adventurer is a unit you can use to take close-ups of deep-sky fields with telephoto lenses, if that’s your desire.
By contrast, the MSM is best used — indeed, I feel can only be used practically — with wide-angle lenses and with exposures under 2 minutes. Here’s a set taken with a 35mm lens, each for 2 minutes.
With the more forgiving 35mm lens, while more images worked, the success rate was still only 50%.
What I did not see with the new Gauda unit was the 5-minute delay before the gears meshed and tracking began. That issue has been resolved by the new, more powerful motor. The new Gauda model does start tracking right away.
But it is still prone to significant enough drive errors that stars are often trailed even with a 35mm lens (this was on a full-frame Canon 6D MkII).
UPDATED CONCLUSIONS (December 21, 2019)
The MSM tracker is low-cost, well-built, and compact for easy packing and travel. It performs its advertised functions well enough to allow users to get results, either tracked images of the Milky Way and constellations, or simple motion-control time-lapses.
But it is best used — indeed I would suggest can only be used — with wide-angle lenses for tracked Milky Way nightscapes. Even then, take more shots than you think you need to be sure enough are well-tracked and usable.
It can also be used for simple motion-control time-lapses, provided you do the math to get it to turn by the amount you want, working around the too-slow or too-fast speeds. The new 0.1°-per-move speed (added in models as of December 2019) seems a reasonable rate for most time-lapses.
However, I think aspiring time-lapse photographers will soon outgrow the MSM’s limitations for motion-control sequences. But it can get you started.
If you really value its compactness and your budget is tight, the MSM will serve you well enough for tracked nightscape shooting with wide-angle lenses.
But if you wish to take close-ups of starfields and deep-sky objects with longer lenses, consider a unit like the Sky-Watcher Star Adventurer for its lower tracking errors. Or the Star Adventurer Mini for its better motion-control time-lapse functions.
Panoramas featuring the arch of the Milky Way have become the icons of dark sky locations. “Panos” can be easy to shoot, but stitching them together can present challenges. Here are my tips and techniques.
My tutorial complements the much more extensive information I provide in my eBook, at right. Here, I’ll step through techniques for simple to more complex panoramas, dealing first with essential shooting methods, then reviewing the workflows I use for processing and stitching panoramas.
What software works best depends on the number of segments in your panorama, or even on the focal length of the lens you used.
PART 1 — SHOOTING
What Equipment Do You Need?
Nightscape panoramas don’t require any more equipment than what you likely already own for shooting the night sky. For Milky Way scenes you need a fast lens and a solid tripod, but any good DSLR or mirrorless camera will suffice.
The tripod head can be either a ball head or a three-axis head, but it should have a horizontal axis marked with a degree scale. This allows you to move the camera at a correct and consistent angle from segment to segment. I think that’s essential.
What you don’t need is a special, and often costly, panorama head. These rotate the camera around the so-called “nodal point” inside the lens, avoiding parallax shifts that can make it difficult to align and stitch adjacent frames. Parallax shift is certainly a concern when shooting interiors or any scenes with prominent content close to the camera. However, in most nightscapes our scene content is far enough away that parallax simply isn’t an issue.
Though not a necessity, I find a levelling base a huge convenience. As I show above, this specialized ball head goes under the usual tripod head and makes it easy to level the main head. It eliminates all the fussing with trial-and-error adjustments of the length of each tripod leg.
Then to level the camera itself, I use the electronic level now in most cameras. Or, if your camera lacks that feature, an accessory bubble level clipped into the camera’s hot shoe will work.
Having the camera level is critical. It can be tipped up, of course, but not tilted left-right. If it isn’t level the whole panorama will be off kilter, requiring excessive straightening and cropping in processing, or the horizon will wave up and down in the final stitch, perhaps causing parts of the scene to go missing.
NOTE: Click or tap on the panorama images to open a high-res version for closer inspection.
Shooting Horizon Panoramas
While panoramas spanning the entire sky might be what you are after, I suggest starting simpler, with panos that take in just a portion of the 360° horizon and only a part of the 180° of the sky. These “partial panos” are great for auroras (above) or noctilucent clouds (below), or for capturing just the core of the Milky Way over a landscape.
The key to all panorama success is overlap. Segments should overlap by 30 to 50 percent, enabling the stitching software to align the segments using the content common to adjacent frames. Contrary to what some users report, I’ve never found an issue with having too much overlap, where the same content is present on several frames.
For a practical example, let’s say you shoot with a 24mm lens on a full-frame camera, or a 16mm lens on a cropped-frame camera. Both combinations yield a field of view across the long dimension of the frame of roughly 80°, and across the short dimension of the frame of about 55°.
That means if you shoot with the camera in “landscape” orientation, panning the camera by 40° between segments would provide a generous 50 percent overlap. The left half of each segment will contain the same content as the right half of the previous segment, if you take your panos by turning from left to right.
TIP: My habit is to always shoot from left to right, as that puts the segments in the correct order adjacent to each other when I view them in browser programs such as Lightroom or Adobe Bridge, with images sorted in chronological order (from first to last images in a set) as I typically prefer. But the stitching will work no matter which direction you rotate the camera.
In the example of a 24mm lens and a camera in landscape orientation you could turn at a 45° or 50° spacing and yield enough overlap. However, turning the camera at multiples of 15° is usually the most convenient, as tripod heads are often graduated with markings at 5° increments, and labeled every 15° or 30°.
Some will have coarser and perhaps unlabeled markings. If so, determine what each increment represents, then take care to move the camera consistently by the amount that will provide adequate overlap.
To maximize the coverage of the sky while still framing a good amount of foreground, a common practice is to shoot panoramas with the camera in portrait orientation. That provides more vertical but less horizontal coverage for each frame. In that case, for adequate overlap with a 24mm lens and full-frame camera shoot at 30° spacings.
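The overlap arithmetic above can be sketched in a few lines of Python (the 80° and 55° fields of view are the figures given earlier for a 24mm lens on a full-frame camera):

```python
import math

def overlap_percent(fov_deg, spacing_deg):
    """Percentage of each frame shared with its neighbour when the
    camera is panned by spacing_deg between segments."""
    return 100.0 * (fov_deg - spacing_deg) / fov_deg

def min_segments_360(spacing_deg):
    """Minimum number of segments to cover a full 360° horizon."""
    return math.ceil(360.0 / spacing_deg)

print(overlap_percent(80, 40))   # landscape, 40° spacing -> 50.0 (% overlap)
print(overlap_percent(55, 30))   # portrait, 30° spacing -> ~45% overlap
print(min_segments_360(45))      # 8 segments for a full 360° pano
```

The same functions confirm that a portrait-orientation 360° pano at 30° spacings needs at least 12 segments.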
TIP: When shooting a partial panorama, for example just to the south for the Milky Way, or to the north for the aurora borealis, my practice is to always shoot a segment farther to the left and another to the right of the main scene. Shoot more than you need. Those end segments can get distorted when stitching, but if they don’t contain essential content, they can be cropped out with no loss, leaving your main scene clean and undistorted.
Shooting with a longer lens, such as a 50mm (or 35mm on a cropped frame camera), will yield higher resolution in the final panorama, but you will have much less sky coverage, unless you shoot multiple tiers, as I describe below. You would also have to shoot more segments, at 15° to 20° spacings, taking longer to complete the shoot.
As the number of segments goes up, shooting fast becomes more important, to minimize how much the sky moves from segment to segment and during each exposure itself, to aid in stitching. Remember, the sky appears to be turning from east to west, but the ground isn’t. So a prolonged shoot can cause problems later as the stitching software tries to align on either the fixed ground or the moving stars.
Panoramas on moonlit nights, as I show above, are relatively easy because exposures are short.
Milky Way panoramas taken on dark, moonless nights are tougher. They require fast apertures (f/2 to f/2.8) and high ISOs (ISO 3200 to 6400), to keep individual exposures no more than 30 to 40 seconds long.
Noise lives in the dark foregrounds, so I find it best to err on the side of overexposure, to ensure adequate exposure for the ground, even if it means the sky is bright and the stars slightly trailed. It’s the “Expose to the Right” philosophy I espouse at length in my eBook.
Advanced users can try shooting in two passes: one at a low ISO and with a long exposure for the fixed ground, and another pass at a higher ISO and a shorter exposure for the moving sky. But assembling such a set will take some deft work in Photoshop to align and mask the two stitched panos. None of the examples here are “double exposures.”
Shooting 360° Panoramas
More demanding than partial panoramas are full 360° panoramas, as above. Here I find it is best to start the sequence with the camera aimed toward the celestial pole (to the north in the northern hemisphere, or to the south in the southern hemisphere). That places the area of sky that moves the least over time at the two ends of the panorama, again making it easier for software to align segments, with the two ends taken farthest apart in time meeting up in space.
In our 24mm lens example, to cover the entire 360° scene shooting with a 45° spacing would require at least eight images (8 x 45 = 360). I used 10 above. Using that same lens with the camera in portrait orientation will require at least 12 segments to cover the entire 360° landscape.
Shooting 360° by 180° Panoramas
More demanding still are 360° panoramas that encompass the entire sky, from the ground below the horizon to the zenith overhead. Above is an example.
To do that with a single row of images requires shooting in portrait orientation with a very wide 14mm rectilinear lens on a full-frame camera. That combination has a field of view of about 100° across the long dimension of the sensor.
That sounds generous, but reaching up to the zenith at an altitude of 90° means only a small portion of the landscape will be included along the bottom of the frame.
To provide an even wider field of view to take in more ground, I use full-frame fish-eye lenses on my full-frame cameras, such as Canon’s old 15mm lens (as shown at top) or Rokinon’s 12mm. Even a circular-format fish-eye will work, such as an 8mm on a full-frame camera or 4.5mm on a cropped-frame camera.
All such fish-eye lenses produce curved horizons, but they take in a wide swath of sky, making it possible to include lots of foreground while reaching well past the zenith. Conventional panorama assembly programs won’t work with such wide and distorted segments, but the specialized programs described below will.
Shooting Multi-Tier Panoramas
The alternative technique for “all-sky” panos is to shoot multiple tiers of images: first, a lower row covering the ground and partway up the sky, followed by an upper row completing the coverage of just the sky at top.
The trick is to ensure adequate overlap both horizontally and vertically. With the camera in landscape orientation that will require a 20mm lens for full-frame cameras, or a 14mm lens for cropped-frame cameras. Either combination can cover the entire sky plus lots of foreground in two tiers, though I usually shoot three, just to be sure!
Shooting with longer lenses provides incredible resolution for billboard-sized “gigapan” blow-ups, but will require shooting three, if not more, tiers, each with many segments. That starts to become a chore to do manually. Some motorized assistance really helps when shooting multi-tier panoramas.
Automating the Pan Shooting
The dedicated pano shooter might want to look at a device such as the GigaPan Epic models or the iOptron iPano (shown below), all about $800 to $1000.
I’ve tested the latter and it works great. You program in the lens, overlap, and angular sweep desired. The iPano works out how many segments and tiers will be required, and automates the shooting, firing the shutter for the duration you program, then moving to the new position, firing again, and so on. I’ve shot four-tier panos effortlessly and with great success.
However, these devices are generally bigger and heavier than I care to heft around in the field.
Instead, I use the original Genie Mini from SYRP (below), a $250 device primarily for shooting motion-control time-lapses. But the wireless app that programs the Genie also has a panorama function that automatically slews the camera horizontally between exposures, again based on the lens, overlap, and angular sweep you enter. The just-introduced Genie Mini II is similar, but with even more capabilities for camera control.
While combining two Genie Minis allows programming in a vertical motion as well, I’ve been using just a regular tripod head atop the Mini to manually move the camera vertically between each of the horizontal tiers. I don’t find the one or two moves needed to go from tier to tier too arduous to do manually, and I like to keep my field gear compact and easy to use.
The Genie Mini (now replaced by the Mini II) works great and I highly recommend it, even if panoramas are your only interest. But it is also one of the best, yet most affordable, single-axis motion control devices on the market for time-lapse work.
When to Shoot the Milky Way
While the right gear and techniques are important, go out on the wrong night and you won’t be able to capture the Milky Way as the great sweeping arch you might have hoped for.
In the northern hemisphere the Milky Way arches directly overhead from late July to October for most of the night. That’s fine for spherical fish-eye panoramas, but in rectangular images when the Milky Way is overhead it gets stretched and distorted across the top of the final panorama. For example, in the Bow Lake by Night panorama above, I cropped out most of this distorted content.
The prime season for Milky Way arches is therefore before the Milky Way climbs overhead, while it is still across the eastern sky, as above. That’s on moonless nights from March to early July, with May and June best for catching it in the evening, and not having to wait up until dawn, as is the case in early spring.
TIP: The best way to figure out when and where the Milky Way will appear is to use a desktop planetarium program such as Starry Night or Sky Safari or the free Stellarium. All can realistically depict the Milky Way for your location and date. You can then step through time to see how the Milky Way will move through the night, and how it will frame with your camera and lens combination using the “field of view” indicators the programs provide.
When shooting in the southern hemisphere I like the April to June period for catching the sweep of the southern Milky Way and the galactic core rising in late evening. By contrast, in mid austral winter in July and August the galactic centre shines directly overhead in the evening, a spectacular sight to be sure, but tough to capture in a panorama except as a spherical or fish-eye scene.
That said, I always like to put in a good word for the often sadly neglected winter Milky Way (the summer Milky Way for those “down under”). While lacking the spectacle of the galactic core in Sagittarius, the “other” Milky Way has its attractions such as Orion and Taurus. The best months for a panorama with that Milky Way in an arch across a rectangular frame are January to March. The Zodiacal Light can be a bonus at that season, as it was above.
TIP: Always shoot raw files for the widest dynamic range and flexibility in recovering details in the highlights and shadows. Even so, each segment has to be well exposed and focused out in the field.
And unless you are doing a “two-pass” double exposure, always shoot each segment with identical exposure settings. This is especially critical for bright sky scenes such as twilights or moonlit landscapes. Vary the exposure and you might get unsightly banding at the seams.
There’s nothing worse than getting home only to find one or more segments was missed, or was out of focus or badly exposed, spoiling the set.
PART 2 — STITCHING
Developing Panorama Segments
Once you have your panorama segments, the next step is to develop and assemble them. In my workflow, assembling a panorama begins with developing each of its constituent segments identically.
NOTE: Click or tap on the software screen shots to open a high-res version for closer inspection.
I like to develop each segment’s raw file as fully as possible at this first stage in the workflow, applying noise reduction, colour correction, contrast adjustments, shadow and highlight recovery, and any special settings such as dehaze and clarity that can make the Milky Way pop.
I also apply lens corrections to each raw image. While some feel doing so produces problems with stitching later on, I’ve never found that. I prefer to have each frame with minimal vignetting and distortion when going into stitching. I use Adobe Camera Raw out of Adobe Bridge, but Lightroom Classic has identical functions.
There are several other raw developers that can work well at this stage. In other tests I’ve conducted, Capture One and DxO PhotoLab stand out as producing good results on nightscapes. See my blog from 2017 for more on software choices.
The key is developing each raw file identically, usually by working on one segment, then copying and pasting its settings to all the others in a set. Not all raw developers have this “Copy Settings” function. For example, Affinity Photo does not. It works very well as a layer-based editor to replace Photoshop, but is crude in its raw developing “Persona” functions.
While panorama stitching software will apply corrections to smooth out image-to-image variations, I find it is best to ensure all the segments look as similar as possible at the raw stage for brightness, contrast, and colour correction.
Do be aware that among social media groups and chat rooms devoted to nightscape imaging a lot of myth and misinformation abounds about how to process and stitch panoramas, and why some don’t work. Someone having a problem with a particular pano will ask why, and get ten different answers from well-meaning helpers, most of them wrong!
Stitching Simple Panoramas
For example, if your segments don’t join well it likely isn’t because you needed to use a panorama head (one oft-heard bit of advice). I never do. The issue is usually a lack of sufficient overlap. Or perhaps the image content moved too much from frame to frame as the photographer took too long to shoot the set.
Or, even when quickly-shot segments do have lots of overlap, stitching software can still get confused if adjoining segments contain featureless content or content that changes, such as segments over rippling water with no identifiable “landmarks” for the software to latch onto.
The primary problems, however, arise from using software that just isn’t up to the task. Programs that work great on simple panoramas (as the next three examples show) will fail when trying to stitch a more demanding set of segments.
For example, for partial horizon panos shot with 20mm to 50mm lenses, I’ll use the panorama function now built into Adobe Camera Raw (ACR) and Adobe Lightroom Classic, and also in the mobile-friendly Lightroom app. As I show above, ACR can do a wonderful job, yielding a raw DNG file that can continue to be edited non-destructively. It’s by far the easiest and fastest option, and is my first choice.
Another choice, not shown here, is the Photomerge function from within Photoshop, which yields a layered and masked master file, and provides the option for “content-aware” filling of missing areas. It can sometimes work on panos that ACR balks at.
Two programs popular as Adobe alternatives, ON1 PhotoRAW (above) and the aforementioned Affinity Photo (below), also have very capable panorama stitching functions.
However, in testing both programs with the demanding Bow Lake multi-tier panorama that I used below with other programs, ON1 2019.5 did an acceptable job, while Affinity 1.7 failed. Affinity works best on simpler panoramas, like this partial scene shot with a 24mm lens.
Even if they succeed when stitching 360° panoramas, such general-purpose editing programs, Adobe’s included, provide no option for choosing how the final scene gets framed. You have no control over where the program puts the ends of the scene.
Or the program just fails, producing a result like this.
Far worse is that multi-tier panoramas or, as I show above, even single-tier panos shot with very wide lenses, will often completely befuddle your favourite editing software, with it either refusing to perform the stitch or producing bizarre results.
Some photographers attempt to correct such wild distortions with lots of ad hoc adjustments with image-warping filters. But that’s completely unnecessary if you use the right software to begin with.
Stitching Complex Panoramas
When conventional software fails, I turn to the dedicated stitching program PTGui, $150 for macOS or Windows. The name comes from “Panorama Tools – Graphical User Interface.”
While PTGui can read raw files from most cameras, it will not read any of the development adjustments you made to those files using Lightroom, Camera Raw, or any other raw developers.
So, my workflow is to develop all the raw segments, export them out as 16-bit TIFFs, then import those into PTGui. It can detect what lens was used to take the images, information PTGui needs to stitch accurately. If you used a manual lens you can enter the lens focal length and type (rectilinear or fish-eye) yourself.
I include a full tutorial on using PTGui in my eBook linked to above, but suffice to say that the program usually does a superb job first time and very quickly. You can drag the panorama around to frame the scene as you like, and change the projection at will to create rectangular or spherical format images, as above, and even so-called “little planet” projections that appear as if you were looking down at the scene from space.
Occasionally PTGui complains about some frames, requiring you to manually intervene to pick the same stars or horizon features in adjacent frames to provide enough matching alignment points until it is happy. Its interface also leaves something to be desired, with essential floating windows disappearing behind other mostly blank panels.
When exporting the finished panorama I usually choose to export it as a layered 16-bit Photoshop .PSD or, with big panos, as a Photoshop .PSB “big” document.
The reason is that in aligning the moving stars PTGui (indeed, all programs) can produce a few “fault lines” along the horizon, requiring a manual touch up to the masks to clean up mismatched horizon content, as I show above. Having a layered and masked master makes this easy to do non-destructively, though that’s best done in Photoshop.
However, Affinity Photo (above) can also read layered .PSD and .PSB Photoshop files, preserving the layers. By comparison, ON1 PhotoRAW flattens layered Photoshop files when it imports them, one deficiency that prevents this program from being a true Photoshop alternative.
Once a 360° panorama is in a program like Photoshop, some photographers like to “squish” the panorama horizontally to make it more square, for ease of printing and publication. I prefer not to do that, as it makes the Milky Way look overly tall, distorted, and in my opinion, ugly. But each to their own style.
You can test out a limited trial version of PTGui for free, but I think it is worth the cost as an essential tool for panorama devotees.
Other Stitching Options
Windows users can also try Image Composite Editor (ICE), free from Microsoft Research. As shown above in my test 3-tier pano, ICE works very well on complex panoramas, has a clean, user-friendly interface, offers a choice of geometric projections, and can export a master file with each segment on its own layer, if desired, for later editing.
The free, open-source program Hugin is based on the same Panorama Tools root software that PTGui uses. However, I find Hugin’s operation clunky and overly technical. Its export process is arcane, and it renders out only a flattened image.
In testing it with the same three-tier, 21-segment pano that PTGui and ICE handled perfectly, Hugin failed to properly include one segment. However, it is free for macOS and Windows, so the price is right and it is well worth a try.
With the superb tools now at our disposal, it is possible to create detailed panoramas of the night sky that convey the majesty of the Milky Way – and the night sky – as no single image can. Have fun!
I put the new Nikon Z6 mirrorless camera through its paces for astrophotography.
Following Sony’s lead, in late 2018 both Nikon and Canon released their entries to the full-frame mirrorless camera market.
Here I review one of Nikon’s new mirrorless models, the Z6, tested solely with astrophotography in mind. I did not test any of the auto-exposure, autofocus, image stabilization, or rapid-fire continuous mode features.
• Current owners of Nikon cropped-frame cameras wanting to upgrade to full-frame would do well to consider a Z6 over any current Nikon DSLR.
• Anyone wanting a full-frame camera for astrophotography and happy to “go Nikon” will find the Z6 nearly perfect for their needs.
Nikon Z6 vs. Z7
I opted to test the Z6 over the more expensive Z7, as the 24-megapixel Z6 has 6-micron pixels, resulting in lower noise (according to independent tests) than the 46-megapixel Z7 with its 4.4-micron pixels.
In astrophotography, I feel low noise is critical, with 24-megapixel cameras hitting a sweet spot of noise vs. resolution.
However, if the higher resolution of the Z7 is important for your daytime photography needs, then I’m sure it will work well at night. The Nikon D850 DSLR, with a sensor similar to the Z7’s, has been proven by others to be a good astrophotography camera, albeit with higher noise than lower-megapixel Nikons such as the D750 and Z6.
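The pixel-size argument comes down to light-gathering area per pixel. A quick back-of-envelope comparison, using my own arithmetic on the pixel pitches quoted above:

```python
# Light-gathering area scales with the square of the pixel pitch.
z6_pitch = 6.0    # microns (24-megapixel Z6)
z7_pitch = 4.4    # microns (46-megapixel Z7)

area_ratio = (z6_pitch / z7_pitch) ** 2
print(f"Each Z6 pixel collects about {area_ratio:.1f}x the light of a Z7 pixel")
```

All else being equal, more photons per pixel means a higher signal-to-noise ratio, which is the basis of the "24 megapixels is a sweet spot" argument.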
NOTE: Tap or click on images to download and display them full screen for closer inspection.
High ISO Noise
To test noise in a real-world situation, I shot a dark nightscape scene with three cameras (the Nikon Z6, Nikon D750, and Sony a7III), using a 24mm Sigma Art lens on the two Nikons, and a 24mm Canon lens on the Sony via a MetaBones adapter. I shot at ISOs from 800 to 12,800, typical of what we use in nightscape and deep-sky images.
The comparison set above shows performance at the higher ISOs of 3200 to 12,800. I saw very little difference among the trio, with the Nikon Z6 very similar to the Sony a7III, and with the four-year-old Nikon D750 holding up very well against the two new cameras.
The comparison below shows the three cameras on another night and at ISO 3200.
Both the Nikon Z6 and Sony a7III use a backside-illuminated or “BSI” sensor, which in theory promises to provide lower noise than the conventional CMOS sensor used in an older camera such as the D750.
In practice I didn’t see a marked difference, certainly not the half-stop, or even full-stop, improvement in noise I might have expected or hoped for.
Nevertheless, the Nikon Z6 provides as low a noise level as you’ll find in a camera offering 24 megapixels, and will perform very well for all forms of astrophotography.
Nikon and Sony both employ an “ISO-invariant” signal flow in their sensor design. You can purposely underexpose by shooting at a lower ISO, then boost the exposure later “in post” and end up with a result similar to an image shot at a high ISO to begin with in the camera.
I find this feature proves its worth when shooting Milky Way nightscapes that often have well-exposed skies but dark foregrounds lit only by starlight. Boosting the brightness of the landscape when developing the raw files reveals details in the scene without unduly introducing noise, banding, or other artifacts such as magenta tints.
That’s not true of “ISO variant” sensors, such as in most Canon cameras. Such sensors are far less tolerant of underexposure and are prone to noise, banding, and discolouration in the brightened shadows.
To test the Z6’s ISO invariance (as shown above) I shot a dark nightscape at ISO 3200 for a properly exposed scene, and also at ISO 100 for an image underexposed by a massive 5 stops. I then boosted that image by 5 stops in exposure in Adobe Camera Raw. That’s an extreme case to be sure.
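On linear raw data, the +5 EV boost applied in Camera Raw amounts to multiplying every pixel value by 2⁵. A minimal sketch of the idea, with invented pixel values for illustration:

```python
# Hypothetical linear raw values from a frame underexposed by 5 stops.
# (These numbers are invented for illustration only.)
underexposed = [10, 40, 120, 500]

ev_boost = 5
gain = 2 ** ev_boost            # +5 EV = multiply linear values by 32
brightened = [v * gain for v in underexposed]

print(brightened)               # [320, 1280, 3840, 16000]
```

On an ISO-invariant sensor, the noise added before this multiplication is small enough that the boosted result closely resembles a frame shot at the higher ISO in camera.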
I found the Z6 provided very good ISO-invariant performance, though with more chrominance speckling than the Sony a7III and Nikon D750 showed at -5 EV.
Below is a less severe test, showing the Z6 properly exposed on a moonlit night and at 1 to 4 EV steps underexposed, then brightened in processing. Even the -4 EV image looks very good.
In my testing, even with frames underexposed by -5 EV, I did not see any of the banding effects (due to the phase-detect auto-focus pixels) reported by others.
As such, I judge the Z6 to be an excellent camera for nightscape shooting when we often want to extract detail in the shadows or dark foregrounds.
Compressed vs. Uncompressed / Raw Large vs. Small
The Z6, as do many Nikons, offers a choice of shooting 12-bit or 14-bit raws, and either compressed or uncompressed.
I shot all my test images as 14-bit uncompressed raws, yielding 46 megabyte files with a resolution of 6048 x 4024 pixels. So I cannot comment on how good 12-bit compressed files are compared to what I shot. Astrophotography demands the best original data.
However, as the menu above shows, Nikon now also offers the option of shooting smaller raw sizes. The Medium Raw setting produces an image 4528 x 3016 pixels and an 18-megabyte file (in the files I shot), but with all the benefits of raw files in processing.
The Medium Raw option might be attractive when shooting time-lapses, where you might need to fit as many frames onto the single XQD card as possible, yet still have images large enough for final 4K movies.
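The card-capacity gain from Medium Raw is easy to estimate. The file sizes below are the ones I measured above; the 64 GB card is my example, not a recommendation:

```python
card_gb = 64            # example card size (my assumption)
large_raw_mb = 46       # 14-bit uncompressed Large Raw, as measured above
medium_raw_mb = 18      # Medium Raw, as measured above

frames_large = card_gb * 1000 // large_raw_mb
frames_medium = card_gb * 1000 // medium_raw_mb

print(frames_large, frames_medium)   # roughly 1391 vs 3555 frames
```

That is more than double the frames per card, which matters for all-night time-lapse shoots on a camera with a single XQD slot.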
However, comparing a Large Raw to a Medium Raw did show a loss of resolution, as expected, with little gain in noise reduction.
This is not like “binning pixels” in CCD cameras to increase signal-to-noise ratio. I prefer to never throw away information in the camera, to allow the option of creating the best quality still images from time-lapse frames later.
Nevertheless, it’s nice to see Nikon now offer this option on new models, a feature which has long been on Canon cameras.
Star Image Quality
Above is the Orion Nebula with the D750 and with the Z6, both shot in moonlight with the same 105mm refractor telescope.
I did not find any evidence for “star-eating” that Sony mirrorless cameras have been accused of. (However, I did not find the Sony a7III guilty of eating stars either.) Star images looked as good in the Z6 as in the D750.
Raw developers (Adobe, DxO, ON1, and others) decoded the Z6’s Bayer-array NEF files fine, with no artifacts such as oddly-coloured or misshapen stars, which can arise in cameras lacking an anti-alias filter.
LENR Dark Frames
Above, 8-minute exposures of nothing, taken with the lens cap on at room temperature: without LENR, and with LENR, both boosted a lot in brightness and contrast to exaggerate the visibility of any thermal noise. These show the reduction in noise speckling with LENR activated, and the clean result with the Z6. At small size you’ll likely see nothing but black!
For deep-sky imaging a common practice is to shoot “dark frames,” images recording just the thermal noise that can then be subtracted from the image.
The Long Exposure Noise Reduction (LENR) feature offered by all cameras performs this dark frame subtraction internally and automatically for any exposure over one second long.
I tested the Z6’s LENR and found it worked well, doing the job to effectively reduce thermal noise (hot pixels) without adding any other artifacts.
Some astrophotographers dismiss LENR and never use it. By contrast, I prefer to use LENR to do dark frame subtraction. Why? Through many comparison tests over the years I have found that separate dark frames taken later at night rarely do as good a job as LENR darks, because those separate darks are taken when the sensor temperature, and therefore the noise levels, are different than they were for the “light” frames.
I’ve found that dark frames taken later, then subtracted “in post,” inevitably show little effect compared to images taken with LENR darks. Or worse, they add a myriad of pock-mark black specks to the image, adding noise and making the image look worse.
The benefit of LENR is lower noise. The penalty of LENR is that each image takes twice as long to shoot — the length of the exposure + the length of the dark frame. Because …
As Expected on the Z6 … There’s no LENR Dark Frame Buffer
Only Canon full-frame cameras offer this little-known but wonderful feature for astrophotography. With LENR turned on, it is possible to shoot three (with the Canon 6D MkII) or four (with the Canon 6D) raw images in quick succession. The Canon 5D series also has this feature.
The single dark frame kicks in and locks up the camera only after the series of “light frames” are taken. This is excellent for taking a set of noise-reduced deep-sky images for later stacking without need for further “image calibration.”
No Nikon has this dark frame buffer, not even the “astronomical” D810a. And not the Z6.
I have to mention this every time I describe Canon’s dark frame buffer: It works only on full-frame Canons, and there’s no menu function to activate it. Just turn on LENR, fire the shutter, and when the first exposure is complete fire the shutter again. Then again for a third, and perhaps a fourth exposure. Only then does the LENR dark frame lock up the camera as “Busy” and prevent more exposures. That single dark frame gets applied to each of the previous “light” frames, greatly reducing the time it takes to shoot a set of dark-frame subtracted images.
But do note that Canon’s dark frame buffer will not work if:
a) You leave Live View on. Don’t do that for any long exposure shooting.
b) You control the camera through the USB port via external software. It works only when controlling the camera via its internal intervalometer or via the shutter port using a hardware intervalometer.
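The time saving from the dark frame buffer is easy to quantify. A sketch, using 4-minute exposures and a set of 12 frames as my example numbers:

```python
import math

def lenr_time_minutes(frames, exposure_min, buffer_size=1):
    """Total shooting time with LENR on: light frames plus dark frames.

    buffer_size=1 models a camera with no dark frame buffer (one dark
    per light, as on the Z6); buffer_size=4 models the original Canon
    6D, which takes one dark per batch of four lights.
    """
    darks = math.ceil(frames / buffer_size)
    return frames * exposure_min + darks * exposure_min

print(lenr_time_minutes(12, 4))                 # no buffer: 96 minutes
print(lenr_time_minutes(12, 4, buffer_size=4))  # 6D-style buffer: 60 minutes
```

With the buffer, a 48-minute set of lights costs only 12 extra minutes of darks instead of another 48, a big difference on a short night.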
With DSLRs, deep-sky images shot through telescopes, then boosted for contrast in processing, usually exhibit a darkening along the bottom of the frame. This is caused by the upraised mirror slightly shadowing the sensor, an effect never noticed in normal photography.
Mirrorless cameras should be free of this mirror box shadowing. The Sony a7III, however, still exhibits some edge shadows due to an odd metal mask in front of the sensor. It shouldn’t be there and its edge darkening is a pain to eliminate in the final processing.
As I show in my review of the a7III, the Sony also exhibits a purple edge glow in long-exposure deep-sky images, from an internal light source. That’s a serious detriment to its use in deep-sky imaging.
Happily, the Z6 proved to be free of any such artifacts. Images are clean and evenly illuminated to the edges, as they should be. I saw no amp glows or other oddities that can show up under astrophotography use. The Z6 can produce superb deep-sky images.
During my short test period, I was not able to shoot red nebulas under moonless conditions. So I can’t say how well the Z6 performs for recording H-alpha regions compared to other “stock” cameras.
With the D810a gone, if it is deep red nebulosity you are after with a Nikon, then consider buying a filter-modified Z6 or having yours modified.
Both LifePixel and Spencer’s Camera offer to modify the Z6 and Z7 models. However, I have not used either of their services, so cannot vouch for them first hand.
Live View Focusing and Framing
For all astrophotography, manual focusing with Live View is essential. And with mirrorless cameras there is no optical viewfinder to look through to frame scenes. You are dependent on the live electronic image (on the rear LCD screen or in the eye-level electronic viewfinder, or EVF) for seeing anything.
Thankfully, the Z6 presents a bright Live View image making it easy to frame, find, and focus on stars. Maximum zoom for precise focusing is 15x, good but not as good as the D750’s 20x zoom level, but better than Canon’s 10x maximum zoom in Live View.
The Z6 lacks the a7III’s wonderful Bright Monitoring function that temporarily ups the ISO to an extreme level, making it much easier to frame a dark night scene. However, something similar can be achieved with the Z6 by switching it temporarily to Movie mode, and having the ISO set to an extreme level.
As with most Nikons (and unlike Sonys), the Z6 remembers separate settings for the still and movie modes, making it easy to switch back and forth, in this case for a temporarily brightened Live View image to aid framing.
That’s very handy, and the Z6 works better than the D750 in this regard, providing a brighter Live View image, even with the D750’s well-hidden Exposure Preview option turned on.
Where the Z6 pulls far ahead of the otherwise similar D750 is in its movie features.
The Z6 can shoot 4K video (3840 x 2160 pixels) at 30, 25, or 24 frames per second. Using 24 frames per second and increasing the ISO to between 12,800 and 51,200 (the Z6 can go as high as ISO 204,800!), it is possible to shoot real-time video at night, such as of auroras.
But the auroras will have to be bright because, at 24 fps, the longest shutter speed possible is 1/25 second, as you might expect.
The a7III, by comparison, can shoot 4K movies at “dragged” shutter speeds as slow as 1/4 second, even at 24 fps, making it possible to shoot auroras at lower and less noisy ISO speeds, albeit with some image jerkiness due to the longer exposures per frame.
The D750 shoots only 1080 HD and, as shown above, produces very noisy movies at ISO 25,600 to 51,200. It’s barely usable for aurora videos.
The Z6 is much cleaner than the D750 at those high ISOs, no doubt due to far better internal processing of the movie frames. However, if night-sky 4K videos are an important goal, a camera from the Sony a7 series will be a better choice, if only because of the option for slower dragged shutter speeds.
For examples of real-time auroras shot with the Sony a7III see my music videos shot in Yellowknife and in Norway.
The Z6 uses the EN-EL15b battery, compatible with the battery and charger used by the D750. But the “b” variant allows for in-camera charging via the USB port.
In room temperature tests the Z6 lasted for 1500 exposures, as many as the D750 was able to take in a side-by-side test. That was with the screens off.
At night, in winter temperatures of -10° C (14° F), the Z6 lasted for three hours’ worth of continuous shooting, both for long deep-sky exposure sets and for a test time-lapse I shot, shown below.
A time-lapse movie, downsized here to HD from the full-size originals, shot with the Z6 and its internal intervalometer, from twilight through to moonrise on a winter night. Processed with Camera Raw and LRTimelapse.
However, with any mirrorless camera, you can extend battery life by minimizing use of the LCD screen and eye-level EVF. The Z6 has a handy and dedicated button for shutting off those screens when they aren’t needed during a shoot.
The days of mirrorless cameras needing a handful of batteries just to get through a few hours of shooting are gone.
Lens and Telescope Compatibility
As with all mirrorless cameras, the Nikon Z cameras use a new lens mount, one that is incompatible with the decades-old Nikon F mount.
The Z mount is wider and can accommodate wider-angle and faster lenses than the old F mount ever could, and in a smaller package. While we have yet to see those lenses appear, in theory that’s the good news.
The bad news is that you’ll need Nikon’s FTZ lens adapter to use any of your existing Nikon F-mount lenses on either the Z6 or Z7. As of this writing, Nikon is supplying an FTZ free with every Z body purchase.
I got an FTZ with my loaner Z6 and it worked very well, allowing even third-party lenses like my Sigma Art lenses to focus at the same point as they normally do (not true of some third-party adapters), preserving the lens’s optical performance. Autofocus functions all worked fine and fast.
You’ll also need the FTZ adapter for use on a telescope, as shown above, to go from your telescope’s camera adapter, with its existing Nikon T-ring, to the Z6 body.
The reason is that the field flattener or coma corrector lenses often required with telescopes are designed to work best with the longer lens-to-sensor distance of a DSLR body. The FTZ adapter provides the necessary spacing, as do third-party adapters.
The only drawback to the FTZ is that any tripod plate attached to the camera body itself likely has to come off, and the tripod foot incorporated into the FTZ used instead. I found myself often having to swap locations for the tripod plate, an inconvenience.
Camera Controller Compatibility
Since it uses the same Nikon-type DC2 shutter port as the D750, the Z6 should be compatible with most remote hardware releases and time-lapse motion controllers that operate a Nikon through the shutter port. Examples are the controllers from SYRP.
On the other hand, time-lapse devices and external intervalometers that run Nikons through the USB port might need to have their firmware or apps updated to work with the Z6.
For example, as of early May 2019, CamRanger lists the Z6 as a supported camera; the Arsenal “smart controller” does not. Nor does Alpine Labs for their Radian and Pulse controllers, nor TimeLapse+ for its excellent View bramping intervalometer. Check with your supplier.
For those who like to use laptops to run their camera at the telescope, I found the Windows program Astro Photography Tool (v3.63) worked fine with the Z6, in this case connecting to the camera’s USB-C port using the USB-C to USB-A cable that comes with the camera. This allows APT to shift not only shutter speed, but also ISO and aperture under scripted sequences.
Inevitably, raw files from brand-new cameras cannot at first be read by any raw developer other than the one supplied by the manufacturer, Nikon Capture NX in this case. However, by the time I did my testing in winter 2019, all the major software suppliers had updated their programs to open Z6 files.
Adobe Lightroom and Photoshop, Affinity Photo, DxO PhotoLab, Luminar 3, ON1 PhotoRAW, and the open-source Raw Therapee all open the Z6’s NEF raw files just fine.
Specialized programs for processing astronomy images might be another story. For example, as of v1.08.06, PixInsight, a favourite program among astrophotographers, does not open Z6 raw files. Nor does Nebulosity v4. But check with the developers for updates.
Other Features for Astrophotography
Here are other Nikon Z6 features I found of value for astrophotography, and for operating the camera at night.
Tilting LCD Screen
Like the Nikon D750 and Sony a7III, the Z6 offers a tilting LCD screen, great for use on a telescope or tripod when aimed up at the sky. However, the screen does not flip out and reverse, a feature useful for vloggers but seldom needed for astrophotography.
OLED Top Screen (Above)
The Sony doesn’t have one, and Canon’s low-cost mirrorless EOS RP also lacks one. But the top-mounted OLED screen of the Z6 is a great convenience for astrophotography. It makes it possible to monitor camera status and battery life during a shoot, even with the rear LCD screen turned off to prolong battery life.
Sony’s implementation of touch-screen functions is limited to just choosing autofocus points. By contrast, the Nikon Z6 offers a full range of touchscreen functions, making it easy to navigate menus and choose settings.
I do wish there was an option, as there is with Pentax, to tint the menus red for preserving night vision.
As with other Nikons, the Z6 offers an internal intervalometer capable of shooting time-lapses, as long as individual exposures don’t need to be longer than 30 seconds.
In addition, there’s the Exposure Smoothing option which, as I have found with the D750, is great for smoothing flickering in time-lapses shot using auto exposure.
Sony has only just added an intervalometer to the a7III with their v3 firmware update, but with no exposure smoothing.
Custom i Menu / Custom Function Buttons
The Sony a7III has four custom function buttons users can assign to commonly used commands, for quick access. For example, I assign one Custom button to the Bright Monitoring function which is otherwise utterly hidden in the menus, but superb for framing nightscapes, if only you know it’s there!
The Nikon Z6 has two custom buttons beside the lens mount. However, I found it easier to use the “i” menu (shown above) by populating it with those functions I use at night for astrophotography. It’s then easy to call them up and adjust them on the touch screen.
Thankfully, the Z6’s dedicated ISO button is now on top of the camera, making it much easier to find at night than the awkwardly placed ISO button on the back of the D750, which I always mistake for the Image Quality button, a setting you do not want to change by accident.
Like most cameras, the Z6 also has a “My Menu” page that you can populate with favourite menu commands.
Lighter Weight / Smaller Size
The Z6 provides imaging performance similar to the D750’s, if not better (for movies), in a smaller and lighter camera, weighing 200 grams (0.44 pounds) less than the D750. Being able to downsize my equipment is a welcome plus of going mirrorless.
Electronic Front Curtain Shutter / Silent Shooting
By design, mirrorless cameras lack any vibration from a bouncing mirror. But even the mechanical shutter can impart vibration and blurring to high-magnification images taken through telescopes.
The electronic front curtain shutter (lacking in the D750) helps eliminate this, while the Silent Shooting mode does just that — it makes the Z6 utterly quiet and vibration free when shooting, as all the shutter functions are now electronic. This is great for lunar and planetary imaging.
What’s Missing for Astrophotography (not much!)
Bulb Timer for Long Exposures
While the Z6 has a Bulb setting, there is no Bulb Timer as there is with Canon’s recent cameras. A Bulb Timer would allow setting long Bulb exposures of any length in the camera, though Canon’s cannot be combined with the intervalometer.
Instead, the Nikon must be used with an external intervalometer for any exposures over 30 seconds long. Any number of units are compatible with the Z6, through its shutter port, the same type of DC2 jack used on the D750.
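The timing an external intervalometer handles is simple to sketch. The function below is a hypothetical model of the schedule such a unit runs, not the firmware of any real device:

```python
# Sketch of the timing an external intervalometer performs for Bulb
# exposures longer than the camera's 30-second limit. A real unit
# triggers the shutter through the Z6's DC2 jack; this only models
# the schedule.

def bulb_schedule(exposure_s, gap_s, count):
    """Return (start, end) times in seconds for each Bulb exposure."""
    frames = []
    t = 0.0
    for _ in range(count):
        frames.append((t, t + exposure_s))
        t += exposure_s + gap_s  # shutter reopens after a short gap
    return frames

# Three 120-second exposures with a 1-second gap between frames:
frames = bulb_schedule(120, 1, 3)
print(frames)  # [(0.0, 120.0), (121.0, 241.0), (242.0, 362.0)]
```

Keeping the gap to a second or less, as with time-lapse shooting, minimizes the dead time between frames.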
In-Camera Image Stacking to Raws
The Z6 does offer the ability to stack up to 10 images in the camera, a feature also offered by Canon and Pentax. Images can be blended with a Lighten (for star trails) or Average (for noise smoothing) mode.
However, unlike with Canon and Pentax, the result is a compressed JPG, not a raw file, making this feature of little value for serious imaging. Plus, with a maximum of only 10 exposures of up to 30 seconds each, the ability to stack star trails “in camera” is limited.
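The two blend modes work like this, shown as a minimal NumPy sketch with random arrays standing in for actual exposures:

```python
import numpy as np

# Minimal sketch of the two in-camera blend modes, applied to a stack
# of frames loaded as float arrays (shape: frames x height x width).
# The random data here is a stand-in for real exposures.

rng = np.random.default_rng(1)
stack = rng.random((10, 4, 6))   # ten tiny stand-in "exposures"

lighten = stack.max(axis=0)      # Lighten: per-pixel maximum, builds star trails
average = stack.mean(axis=0)     # Average: per-pixel mean, smooths random noise

# Averaging N frames reduces random noise by roughly sqrt(N).
print(lighten.shape, average.shape)  # (4, 6) (4, 6)
```

Stacking software such as StarStaX or Photoshop actions performs the same per-pixel operations, just without the 10-frame limit.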
Unlike the top-end D850’s, the Z6’s buttons are not illuminated, but then again neither are the Z7’s.
As a bonus — the Nikon 35mm S-Series Lens
With the Z6 I also received a Nikkor 35mm f/1.8 S lens made for the Z-mount, as the lens perhaps best suited for nightscape imaging out of the native Z-mount lenses from Nikon. See Nikon’s website for the listing.
If there’s a downside to the Z-series Nikons, it’s the limited number of native lenses available now from Nikon, and likely in the future from anyone, because Nikon has not made it easy for other lens companies to design for the new Z mount.
In testing the 35mm Nikkor on tracked shots, stars showed excellent on- and off-axis image quality, even wide open at f/1.8. Coma, astigmatism, spherical aberration, and lateral chromatic aberration were all well controlled.
However, as with most lenses now offered for mirrorless cameras, the focus is “by-wire” using a ring that doesn’t mechanically adjust the focus. As a result, the focus ring turns continuously and lacks a focus scale.
So it is not possible to manually preset the lens to an infinity mark, as nightscape photographers often like to do. Focusing must be done each night.
Until there is a greater selection of native lenses for the Z cameras, astrophotographers will need to use the FTZ adapter and their existing Nikon F-mount or third-party Nikon-mount lenses with the Zs.
I was impressed with the Z6.
For any owner of a Nikon cropped-frame DSLR (from the 3000, 5000, or 7000 series for example) wanting to upgrade to full-frame for astrophotography I would suggest moving to the Z6 over choosing a current DSLR.
Mirrorless is the way of the future. And the Z6 will yield lower noise than most, if not all, of Nikon’s cropped-frame cameras.
For owners of current Nikon DSLRs, especially a 24-megapixel camera such as the D750, moving to a Z6 will not provide a significant improvement in image quality for still images.
But … it will provide 4K video and much better low-light video performance than older DSLRs. So if it is aurora videos you are after, the Z6 will work well, though not quite as well as a Sony alpha.
In all, there’s little downside to the Z6 for astrophotography, and some significant advantages: low noise, bright live view, clean artifact-free sensor images, touchscreen convenience, silent shooting, low-light 4K video, all in a lighter weight body than most full-frame DSLRs.
It was a magical night as the rising Moon lit the Badlands with a golden glow.
When doing nightscape photography it’s often best not to fight the Moon, but to embrace it and use it as your light source.
I did this on a fine night, Easter Sunday, at one of my favourite nightscape spots, Dinosaur Provincial Park.
I set up two cameras to frame different views of the hoodoos as they lit up with the light of the rising waning Moon.
The night started out as a dark moonless evening as twilight ended. Then about 90 minutes after the arrival of darkness, the sky began to brighten again as the Moon rose to illuminate the eroded formations of the Park.
This was a fine example of “bronze hour” illumination, as some have aptly called it.
Photographers know about the “golden hour,” the time just before sunset or just after sunrise when the low Sun lights the landscape with a golden glow.
The Moon does the same thing, with a similar tone, though greatly reduced in intensity.
The low Moon, especially just after Full, casts a yellow or golden tint over the scene. This is caused by our atmosphere absorbing the “cold” blue wavelengths of moonlight, and letting through the “warm” red and yellow tones.
Making use of the rising (or setting) Moon to light a scene is one way to capture a nightscape lit naturally, and not with artificial lights, which are increasingly being frowned upon, if not banned at popular nightscape destinations.
“Bronze hour” lighting is great in still-image nightscapes. But in time-lapses the effect is more striking — indeed, in time-lapse lingo it is called a “moonstrike” scene.
The dark landscape suddenly lights up as if it were dawn, yet stars remain in the sky.
The best nights for such a moonstrike are ones with a waning gibbous or last quarter Moon. At these phases the Moon rises after sunset, to re-light a scene after evening twilight has faded.
On April 21 I made use of such a circumstance to shoot moonstrike stills and movies, not only for their own sake, but for use as illustrations in the next edition of my Nightscapes and Time-lapse eBook (at top here).
One camera, the Nikon D750, I coupled with a device called a bramping intervalometer, in this case the TimeLapse+ View, shown above. It works great, automatically shifting the shutter speed and ISO as the sky darkens, then brightens again.
Yes, in bright situations the camera’s own Auto Exposure and Auto ISO modes might accomplish this.
But … once the sky gets dark the Auto circuits fail and you’re left with hugely underexposed images.
The TimeLapse+ View, with its more sensitive built-in light meter, can track right through into full darkness, making it possible to shoot so-called “holy grail” time-lapses that go from daylight to darkness, from sunset to the Milky Way, all shot unattended.
For the other camera, the Sony a7III (with the Laowa 15mm lens I just reviewed), I set the camera manually, then shifted the ISO and shutter speed a couple of times to accommodate the darkening, then brightening, of the scene.
Processing the resulting RAW files in the highly-recommended program LRTimelapse smoothed out all the jumps in brightness to make a seamless transition.
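The idea behind that smoothing can be sketched simply: measure each frame's mean brightness, smooth that curve, and scale each frame toward the smoothed value. This is only the concept; LRTimelapse actually works by adjusting the raw develop settings non-destructively, not the pixels:

```python
import numpy as np

# A much-simplified version of luminance smoothing ("deflickering"):
# smooth the per-frame brightness curve, then compute a gain factor
# for each frame so it matches the smoothed curve.

def deflicker(means, window=5):
    """Return per-frame gain factors that smooth a brightness curve."""
    means = np.asarray(means, dtype=float)
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(means, pad, mode="edge")   # extend the ends
    smoothed = np.convolve(padded, kernel, mode="valid")
    return smoothed / means   # multiply each frame by its gain

# A sequence with a sudden jump where the ISO was changed mid-shoot:
gains = deflicker([100, 101, 99, 140, 141, 139], window=3)
print(np.round(gains, 3))
```

Frames just after the exposure jump get gains below 1 (darkened) and frames just before it get gains above 1 (brightened), turning the step into a gradual ramp.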
I also used the new intervalometer function that Sony has just added to the a7III with its latest firmware update. Hurray! I complained about the lack of an intervalometer in my original review of the Sony a7III. But that’s been fixed.
I shot 425 frames with the Sony, which I not only turned into a movie but, as one can with time-lapse frames, I also stacked into a star trail still image, in this case looking north to the circumpolar stars.
I stacked the frames using the Advanced Stacker Plus actions in Photoshop. I prefer this action set over dedicated programs such as StarStaX because it works directly with the developed raw files. There’s no need to create a set of JPGs to stack, compromising image quality and departing from the non-destructive workflow I prefer to maintain.
While the still images are very nice, the intended final result was this movie above, a short time-lapse vignette using clips from both cameras. Do watch in HD.
I rendered out the frames from the Sony both as a “normal” time-lapse, and as one with accumulating star trails, again using the Advanced Stacker Plus actions to create the intermediate frames for assembling into the movie.
All these techniques, gear, and apps are explained in tutorials in my eBook, above. However, it’s always great to get a night perfect for putting the methods to work on a real scene.
But what about lenses for the Sony? Here’s one ideal for astrophotography.
Made for Sony e-mount cameras, the Venus Optics 15mm f/2 Laowa provides excellent on- and off-axis performance in a fast and compact lens ideal for nightscape, time-lapse, and wide-field tracked astrophotography with Sony mirrorless cameras. (UPDATE: Venus Optics has announced versions of this lens for Canon R and Nikon Z mount mirrorless cameras.)
I use it a lot and highly recommend it.
Size and Weight
While I often use the a7III with my Canon lenses by way of a Metabones adapter, the Sony really comes into its own when matched to a “native” lens made for the Sony e-mount. The selection of fast, wide lenses from Sony itself is limited, with the new Sony 24mm G-Master a popular favourite (I have yet to try it).
However, for much of my nightscape shooting, and certainly for auroras, I prefer lenses even wider than 24mm, and the faster the better.
Aurora over Båtsfjord, Norway. This is a single 0.8-second exposure at f/2 with the 15mm Venus Optics lens and Sony a7III at ISO 1600.
The Laowa 15mm f/2 from Venus Optics fills the bill very nicely, providing excellent speed in a compact lens. While wide, the Laowa is a rectilinear lens providing straight horizons even when aimed up, as shown above. This is not a fish-eye lens.
The Venus Optics 15mm realizes the potential of mirrorless cameras and their short flange distance that allows the design of fast, wide lenses without massive bulk.
While compact, at 600 grams the Laowa 15mm is quite hefty for its size due to its solid metal construction. Nevertheless, it is half the weight of the massive 1250-gram Sigma 14mm f/1.8 Art. The Laowa is not a plastic entry-level lens, nor is it cheap, at $850 from U.S. sources.
For me, the Sony-Laowa combination is my first choice for a lightweight travel camera for overseas aurora trips.
However, this is a no-frills manual-focus lens. It does not even transfer aperture data to the camera, which is a pity; there are no electrical connections between the lens and camera.
However, for nightscape work where all settings are adjusted manually, the Venus Optics 15mm works just fine. The key factor is how good the optics are. I’m happy to report that they are very good indeed.
Testing Under the Stars
To test the Venus Optics lens I shot “same night” images, all tracked, with the Sigma 14mm f/1.8 Art lens, at left, and the Rokinon 14mm SP (labeled as being f/2.4, at right). Both are much larger lenses, made for DSLRs, with bulbous front elements not able to accept filters. But they are both superb lenses. See my test report on these lenses published in 2018.
The next images show blow-ups of the same scene (the nightscape shown in full below, taken at Dinosaur Provincial Park, Alberta), and all taken on a tracker.
I used the Rokinon on the Sony a7III using the Metabones adapter which, unlike some brands of lens adapters, does not compromise the optical quality of the lens by shifting its focal position. But lacking a lens adapter for Nikon-to-Sony at the time of testing, I used the Nikon-mount Sigma lens on a Nikon D750, a DSLR camera with nearly identical sensor specs to the Sony.
Above is a tracked image (so the stars are not trailed, which would make it hard to tell aberrations from trails), taken wide open at f/2. No lens correction has been applied so the vignetting (the darkening of the frame corners) is as the lens provides.
As shown above, when used wide open at f/2 vignetting is significant, but not much more so than with competitive lenses with much larger front elements, as I compare below.
And the vignetting is correctable in processing. Adobe Camera Raw and Lightroom have this lens in their lens profile database. That’s not the case with current versions (as of April 2019) of other raw developers such as DxO PhotoLab, ON1 Photo RAW, and Raw Therapee, where vignetting corrections have to be dialled in manually by eye.
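Dialling in a correction by eye amounts to brightening pixels in proportion to their distance from the frame centre. Here is a hypothetical sketch of that idea, with a made-up "strength" slider and a simple quadratic falloff model rather than any published lens profile:

```python
import numpy as np

# Sketch of a manual vignetting correction: lift each pixel by a gain
# that grows with distance from the frame centre. The quadratic model
# and the strength value are illustrative assumptions, not a profile.

def devignette(img, strength=0.5):
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(y - cy, x - cx)
    r /= r.max()                    # 0 at centre, 1 at the far corners
    gain = 1 + strength * r**2      # quadratic falloff correction
    return img * gain

frame = np.full((100, 150), 0.5)    # a uniformly grey stand-in frame
corrected = devignette(frame, strength=0.6)
print(corrected[0, 0] > corrected[50, 75])  # True: corners lifted most
```

Raw developers expose much the same control as "amount" and "midpoint" sliders in their manual lens-correction panels.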
When stopped down to f/2.8, the Laowa improves a lot in vignetting and uniformity of frame illumination. Corner aberrations also improve but are still present. I show those in close-up detail below.
Above, I compare the vignetting of the three lenses, both wide open and when stopped down. Wide open, all the lenses, even the Sigma and Rokinon despite their large front elements, show quite a bit of drop off in illumination at the corners.
The Rokinon SP actually seems to be the worst of the trio, showing some residual vignetting even at f/2.8, while it is reduced significantly in the Laowa and Sigma lenses. Oddly, the Rokinon SP, even though it is labeled as f/2.4, seemed to open to f/2.2, at least as indicated by the aperture metadata.
Above I show lens sharpness on-axis, both wide open and stopped down, to check for spherical and chromatic aberrations with the bright blue star Vega centered. The red box in the Navigator window at top right indicates what portion of the frame I am showing, at 200% magnification in Photoshop.
On-axis, the Venus Optics 15mm shows stars just as sharply as the premium Sigma and Rokinon lenses, with no sign of blurring spherical aberration nor coloured haloes from chromatic aberration.
Focusing is precise and easy to achieve with the Sony on Live View. My unit reaches sharpest focus on stars with the lens set just shy of the middle of the infinity symbol. This is consistent and allows me to preset focus just by dialing the focus ring, handy for shooting auroras at -35° C, when I prefer to minimize fussing with camera settings, thank you very much!
The Laowa and Sigma lenses show similar levels of off-axis coma and astigmatism, with the Laowa exhibiting slightly more lateral chromatic aberration than the Sigma. Both improve a lot when stopped down one stop, but aberrations are still present though to a lesser degree.
However, I find that the Laowa 15mm performs as well as the Sigma 14mm Art for star quality on- and off-axis. And that’s a high standard to match.
The Rokinon SP is the worst of the trio, showing significant elongation of off-axis star images (they look like lines aimed at the frame centre), likely due to astigmatism. With the 14mm SP, this aberration was still present at f/2.8, and was worse at the upper right corner than at the upper left corner, an indication to me that even the premium Rokinon SP lens exhibits slight lens de-centering, an issue users have often found with other Rokinon lenses.
Real-World Examples – The Milky Way
The fast speed of the Laowa 15mm is ideal for shooting tracked wide-field images of the Milky Way, and untracked camera-on-tripod nightscapes and time-lapses of the Milky Way.
Image aberrations are very acceptable at f/2, a speed that allows shutter speed and ISO to be kept lower for minimal star trailing and noise while ensuring a well-exposed frame.
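That trade-off can be estimated with the well-known "500 rule" of thumb (not a method specific to this lens): the longest untrailed exposure in seconds is roughly 500 divided by the effective focal length.

```python
# The "500 rule" of thumb for untracked nightscapes: maximum exposure
# before stars visibly trail is about 500 / focal length (full frame).
# It is a rough guide only; high-resolution sensors show trailing sooner.

def max_untrailed(focal_length_mm, crop_factor=1.0):
    return 500 / (focal_length_mm * crop_factor)

print(round(max_untrailed(15), 1))  # 33.3 seconds with a 15mm lens
```

With roughly half a minute available at 15mm, the lens can be used at f/2 and a moderate ISO and still record a well-exposed, untrailed Milky Way.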
Real World Examples – Auroras
Where the Laowa 15mm really shines is for auroras. On my trips to chase the Northern Lights I often take nothing but the Sony-Laowa pair, to keep weight and size down.
Above is an example, taken from a moving ship off the coast of Norway. The fast f/2 speed (I wish it were even faster!) makes it possible to capture the Lights in only 1- or 2-second exposures, albeit at ISO 6400. But the fast shutter speed is needed for minimizing ship movement.
The Sony also excels at real-time 4K video, able to shoot at ISO 12,800 to 51,200 without excessive noise.
Aurora Reflections from Alan Dyer on Vimeo.
The Sky is Dancing from Alan Dyer on Vimeo.
The Northern Lights At Sea from Alan Dyer on Vimeo.
Click through to see the posts and the videos shot with the Venus Optics 15mm.
As an aid to video use, the aperture ring of the Venus Optics 15mm can be “de-clicked” at the flick of a switch, allowing users to smoothly adjust the iris during shooting, avoiding audible clicks and jumps in brightness. That’s a very nice feature indeed.
In all, I can recommend the Venus Optics Laowa 15mm lens as a great match to Sony mirrorless cameras, for nightscape still and video shooting. UPDATE: Versions for Canon R and Nikon Z mount mirrorless cameras will now be available.
Spring is the season for Earthshine on the waxing Moon.
April 8 was the perfect night for capturing the waxing crescent Moon illuminated both by the Sun and by the Earth.
The phase was a 4-day-old Moon, old enough to be high in the sky, but young enough – i.e. a thin enough crescent – that its bright side didn’t wash out the dark side!
In the lead photo at top, and even in the single-exposure image below taken earlier in a brighter sky, you can see the night side of the Moon faintly glowing a deep blue, and brighter than the background twilight sky.
This, too, is from sunlight, but light that has bounced off the Earth first to then light up the night side of the Moon.
If you were standing on the lunar surface on the night side, the Sun would be below the horizon but your sky would contain a brilliant blue and almost Full Earth lighting your night, much as the Moon lights our Earthly nights. However, Earth is some 80 times brighter in the Moon’s sky than even the Full Moon is in our sky.
Unlike the single image, the lead image, repeated just above, is a multi-exposure blend (using luminosity masks), to bring out the faint Earthshine and deep blue sky, while retaining details in the bright crescent.
Once the sky gets dark enough to see Earthshine well, no single exposure can record the full range in brightness on both the day and night sides of the Moon.
April 8 was a great night for lunar fans as the crescent Moon also appeared between the two bright star clusters in Taurus, the Hyades and Pleiades, and below reddish Mars.
It was a fine gathering of celestial sights, captured above with a telephoto lens.
This shows the chart I used to plan the framing, created with StarryNight™ software and showing the field of the 135mm lens I used.
The chart also shows why spring is best for the waxing Moon. It is at this time of year that the ecliptic – the green line – swings highest into the evening sky, taking the Moon with it, placing it high in the west above obscuring haze.
That makes it easier to see and shoot the subtle Earthshine. And to see sharp details on the Moon.
The 4-day-old waxing crescent Moon on April 8, 2019 exposed for just the bright sunlit crescent, revealing details along the terminator. This is with the 105mm Traveler refractor and 2X AP Barlow lens for an effective focal length of 1200mm at f/12, and with the cropped-frame Canon 60Da at ISO 400, for a single exposure of 1/60 second. This is not a stack or mosaic.
After the sky got darker I shot the crescent Moon in a short exposure to capture just the bright crescent, included above in two versions – plain and with labels attached marking the major features visible on a 4-day Moon.
If you missed “Earthshine night” this month, mark May 7 and 8 on your calendar for next month’s opportunities.
There’s a slogan used in the U.S. National Parks that “half the Park is after dark.” It is certainly true at Dinosaur Provincial Park in Alberta.
Last Friday night, March 29, I spent the evening at one of my favourite nightscape sites, Dinosaur Provincial Park, about an hour’s drive east of my home. It was one of those magical nights – clear, mild, dry, and no mosquitoes! Yet!
I wanted to shoot Orion and the photogenic winter sky setting into the evening twilight over the Badlands landscape. This was the last moonless weekend to do so.
I shot some individual images (such as above) and also multi-panel panoramas, created by shooting a series of overlapping images at equal spacings, then stitching them later at the computer.
There’s a narrow window of time between twilight and full darkness when the Milky Way shows up well but the western sky still has a lingering blue glow. This window occurs after the normal “blue hour” favoured by photographers.
The panorama above shows the arch of the winter Milky Way but also the towering band of the Zodiacal Light rising out of the twilight and distant yellow glow of Calgary. Zodiacal Light is sunlight scattering off meteoric and cometary dust orbiting in the inner solar system, so this is a phenomenon in space not in our atmosphere. However, the narrow streak is an aircraft contrail.
Later that night, when the sky was fully dark I shot this complete panorama showing not only the Milky Way and Zodiacal Light to the west, but also the faint arc of the Zodiacal Band continuing on from the pyramid-shaped Zodiacal Light over into the east, where it brightens into the subtle glow of Gegenschein. This is caused by sunlight reflecting off interplanetary dust particles in the direction opposite the Sun.
Both the Band and Gegenschein were visible to the naked eye, but only if you knew what to look for, and have a very dark sky.
A closeup shows the Zodiacal Light in the west as the subtle blue glow tapering toward the top as it meets the Milky Way.
It takes a dark site to see these subtle glows. Dinosaur Park is not an official Dark Sky Preserve but certainly deserves to be. Now if we could only get Calgary, Brooks and Bassano to turn down and shield their lights!
A closeup facing the other way, to the east, shows the area of sky opposite the Milky Way, in the spring sky. The familiar Big Dipper, now high in our spring sky, is at top with its handle pointing down to Arcturus and Spica (just rising above the horizon) – remember to “arc to Arcturus, and speed on to Spica.”
Leo is at right of centre, flanked by the Beehive and Coma Berenices star clusters.
Polaris is at left — however, the distortion introduced by the panorama stitching at high altitudes stretches out the sky at the top of the frame, so the Dipper’s Pointer stars do not point in a straight line to Polaris.
The faint Zodiacal Band is visible at right, brightening toward the horizon in the Gegenschein.
I shoot images like these for use as illustrations in future eBook projects about stargazing and the wonders of the night sky. Several are in the works!
For two magical nights I was able to capture the Rockies by moonlight, with the brilliant stars of winter setting behind the mountains.
I’ve been waiting for nights like these for many years! I consider this my “25-Year Challenge!”
Back during my early years of shooting nightscapes I was able to capture the scene of Orion setting over Lake Louise and the peaks of the Continental Divide, with the landscape lit by the Moon.
Such a scene is possible only in late winter, before Orion sets out of sight and, in March, with a waxing gibbous Moon to the east to light the scene but not appear in the scene. There are only a few nights each year the photograph is possible. Most are clouded out!
Above is the scene in March 1995, in one of my favourite captures on film. What a night that was!
But it has taken 24 years for my schedule, the weather, and the Moon phase to all align to allow me to repeat the shoot in the digital age. Thus the Challenge.
Here’s the result.
Unlike with film, digital images make it so much easier to stitch multiple photos into a panorama.
In the film days I often shot long single exposures to produce star trails, though the correct exposure was an educated guess factoring in variables like film reciprocity failure and strength of the moonlight.
Below is an example from that same shoot in March 1995. Again, one of my favourite film images.
This year, time didn’t allow me to shoot enough images for a star trail. In the digital age, we generally shoot lots of short exposures to stack them for a trail.
Instead, I shot this single image of Orion setting over Mt. Temple.
Plus I shot the panorama below, both taken at Morant’s Curve, a viewpoint named for the famed CPR photographer Nicholas Morant who often shot from here with large format film cameras. Kevin Keefe of Trains magazine wrote a nice blog about Morant.
I was shooting multi-segment panoramas when a whistle in the distance to the west alerted me to the oncoming train. I started the panorama segment shooting at the left, and just by good luck the train was in front of me at centre when I hit the central segment. I continued to the right to catch the blurred rest of the train snaking around Morant’s Curve. I was very pleased with the result.
The night before I was at another favourite spot, Two Jack Lake near Banff, to again shoot panoramas of the moonlit scene below the bright stars of the winter sky.
A run up to the end of the Vermilion Lakes road at the end of that night allowed me to capture Orion and Sirius reflected in the open water of the upper lake.
Unlike in the film days, today we also have some wonderful digital planning tools to help us pick the right sites and times to capture the scene as we envision it.
This is a screen shot of the PhotoPills app in its “augmented reality” mode, taken by day during a scouting session at Two Jack, but showing where the Milky Way will be later that night in relation to the real “live” scene shot with the phone’s camera.
The app I like for planning before the trip is The Photographer’s Ephemeris. This is a shot of the plan for the Lake Louise shoot. The yellow lines are the sunrise and sunset points. The thin blue line at lower right is the angle toward the gibbous Moon at about 10 p.m. on March 19.
Even better than TPE is its companion program TPE 3D, which allows you to preview the scene with the mountain peaks, sky, and illumination all accurately simulated for your chosen location. I am impressed!
Compare the simulation above to the real thing below, in a wide 180° panorama.
These sort of moonlit nightscapes are what I started with 25 years ago, as they were what film could do well.
These days, everyone chases after dark sky scenes with the Milky Way, and they do look wonderful, beyond anything film could do. I shoot many myself. And I include an entire chapter in my ebook above about shooting the Milky Way.
But … there’s still a beauty in a contrasty moonlit scene with a deep blue sky from moonlight, especially with the winter sky and its population of bright stars and constellations.
I’m glad the weather and Moon finally cooperated at the right time to allow me to capture these magical moonlit panoramas.
We’ve embarked upon a new project to produce a comprehensive tutorial on deep-sky imaging with DSLR cameras.
This past week we launched a new KickStarter campaign to fund the production of a new multi-hour video course on how to capture deep-sky objects using entry-level telescope gear and DSLR cameras.
The emphasis in the course will be on techniques for taking and processing publication-quality images as simply and easily as possible.
The final video course will consist of several programs, including a video of one of our annual “Deep-Sky with Your DSLR” workshops presented locally here in Alberta. We’ve often had requests for a video version of those workshops, for those who cannot attend in person.
This is it! Here’s a short preview of some of the content.
We include the Workshop video, but we supplement it with much more: video segments shot in the field by day and by night, showing how to set up and use gear, and segments shot in the studio showing how to process images.
While much of the content has been shot and edited, there’s more to do yet. Thus our KickStarter campaign to complete the funding and production. Backers of the project through KickStarter will get the final videos at a substantial discount off the final retail price.
All the details are on the project’s KickStarter page. Click through for the listing of course content, and options for funding levels. An FAQ page answers many of the common questions.
A week into the campaign and we’re just over 50% funded, but we have a way to go yet!
We hope you’ll consider backing our project, which we think will be unique on the market.
On the evening of January 20 for North America, the Full Moon passes through the umbral shadow of the Earth, creating a total eclipse of the Moon.
No, this isn’t a “blood,” “super,” nor “wolf” Moon. All those terms are internet fabrications designed to bait clicks.
It is a total lunar eclipse — an event that doesn’t need sensational adjectives to hype it, because such eclipses are always wonderful sights! And yes, the Full Moon does turn red.
As such, on January 20 the evening and midnight event provides many opportunities for great photos of a reddened Moon in the winter sky.
Here’s my survey of tips and techniques for capturing the eclipsed Moon.
First … What is a Lunar Eclipse?
As the animation below shows (courtesy NASA/Goddard Space Flight Center), an eclipse of the Moon occurs when the Full Moon (and they can happen only when the Moon is exactly full) travels through the shadow of the Earth.
The Moon does so at least two times each year, though often not as a total eclipse, one where the entire disk of the Moon enters the central umbral shadow. Many lunar eclipses are of the imperceptible penumbral variety, or are only partial eclipses.
Total eclipses of the Moon can often be years apart. The last two were just last year, on January 31 and July 27, 2018. However, the next is not until May 26, 2021.
At any lunar eclipse we see an obvious darkening of the lunar disk only when the Moon begins to enter the umbra. That’s when the partial eclipse begins, and we see a dark bite appear on the left edge of the Moon.
While it looks as if Earth’s shadow sweeps across the Moon, it is really the Moon moving into, then out of, our planet’s umbra that causes the eclipse. We are seeing the Moon’s revolution in its orbit around Earth.
At this eclipse the partial phases last 67 minutes before and after totality.
Once the Moon is completely immersed in the umbra, totality begins and lasts 62 minutes at this eclipse, a generous length.
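As a quick check, the phase durations quoted here account for the eclipse's full umbral timeline:

```python
# Adding up the umbral phases of the January 20 eclipse: 67 minutes of
# partial eclipse on each side of a 62-minute totality.

partial = 67    # minutes of partial phase, before and after totality
totality = 62   # minutes of totality

total = partial + totality + partial
print(divmod(total, 60))  # (3, 16) -> 3 hours 16 minutes start to finish
```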
The Moon will appear darkest and reddest at mid-eclipse. During totality the lunar disk is illuminated only by red sunlight filtering through Earth’s atmosphere. It is the light of all the sunsets and sunrises going on around our planet.
And yes, it is perfectly safe to look at the eclipsed Moon with whatever optics you wish. Binoculars often provide the best view. Do have a pair handy!
At this eclipse because the Moon passes across the north half of the umbra, the top edge of the Moon will always remain bright, as it did above in 2010, looking like a polar cap on the reddened Moon.
Near the bright edge of the umbra look for subtle green and blue tints the eye can see and that the camera can capture.
Where is the Eclipse?
As the chart below shows, all of the Americas can see the entire eclipse, with the Moon high in the evening or late-night sky. For the record, the Moon will be overhead at mid-eclipse at local midnight from Cuba!
I live in Alberta, Canada, at a latitude of 50 degrees North. And so, the sky charts I provide here are for my area, where the Moon enters the umbral shadow at 8:35 p.m. MST with the Moon high in the east. By the end of totality at 10:44 p.m. MST the Moon shines high in the southeast. This sample chart is for mid-eclipse at my site.
I offer them as examples of the kind of planning you can do to ensure great photos. I can’t provide charts good for the whole continent, because exactly where the Moon will be during totality, and the path it will take across your sky, will vary with your location.
In general, the farther east and south you live in North America the higher the Moon will appear. But from all sites in North America the Moon will always appear high and generally to the south.
The latter two apps present the sightlines toward the Moon overlaid on a map of your location, to help you plan where to be to shoot the eclipsed Moon above a suitable foreground, if that’s your photographic goal.
When is the Eclipse?
While where the Moon is in your sky depends on your site, the various eclipse events happen at the same time for everyone, with differences in hour due only to the time zone you are in.
While all of North America can see the entirety of the partial and total phases of this eclipse (lasting 3 hours and 16 minutes from start to finish), the farther east you live the later the eclipse occurs, making for a long, late night for viewers on the east coast.
Those in western North America can enjoy all of totality and be in bed at or before midnight.
Here are the times for the start and end of the partial and total phases. Because the penumbral phases produce an almost imperceptible darkening, I don’t list the times below for the start and end of the penumbral eclipse.
PM times are on the evening of January 20.
AM times are after midnight on January 21.
Note that while some sources list this eclipse as occurring on January 21, that is true for Universal Time (Greenwich Time) and for sites in Europe where the eclipse occurs at dawn near moonset.
For North America, if you go out on the evening of January 21 expecting to see the eclipse you’ll be a day late and disappointed!
Picking a Photo Technique
Lunar eclipses lend themselves to a wide range of techniques, from a simple camera on a tripod, to a telescope on a tracking mount following the sky.
If this is your first lunar eclipse I suggest keeping it simple! Select just one technique, to focus your attention on only one camera on a cold and late winter night.
Then during the hour of totality take the time to enjoy the view through binoculars and with the unaided eye. No photo quite captures the glowing quality of an eclipsed Moon. But here’s how to try it.
Option 1: Simple — Camera-on-Tripod
The easiest method is to take single shots using a very wide-angle lens (assuming you also want to include the landscape below) with the camera on a fixed tripod. No fancy sky trackers are needed here.
During totality, with the Moon now dimmed and in a dark sky, use a good DSLR or mirrorless camera in Manual (M) mode (not an automatic exposure mode) for settings of 2 to 20 seconds at f/2.8 to f/4 at ISO 400 to 1600.
That’s a wide range, to be sure, but it will vary a lot depending on how bright the sky is at your site. Shoot at lots of different settings, as blending multiple exposures later in processing is often the best way to reproduce the scene as your eyes saw it.
Shoot at a high ISO if you must to prevent blurring from sky motion. However, lower ISOs, if you can use them by choosing a slower shutter speed or wider lens aperture, will yield less digital noise.
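The trade-off between ISO and shutter speed is simple arithmetic in stops: halve the ISO and the shutter time must double to keep the same exposure. A quick sketch of that trade-off (a hypothetical helper, not from the article):

```python
def equivalent_shutter(shutter_s, iso_from, iso_to):
    """Shutter speed giving the same total exposure after an ISO change
    (aperture held fixed): halving the ISO doubles the required shutter time."""
    return shutter_s * iso_from / iso_to

# Dropping from ISO 1600 to ISO 400 (two stops less sensitivity)
# needs a 4x longer shutter to match the exposure:
print(equivalent_shutter(5, 1600, 400))  # → 20.0 seconds
```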
Focus carefully on a bright star, as per the advice below for telephoto lenses. Don’t just set the lens focus to infinity, as that might not produce the sharpest stars.
One scene to go for at this eclipse is similar to the above photo, with the reddened Moon above a winter landscape, shining east of Orion and the winter Milky Way. That will require shooting from a dark site away from urban lights, but when the Moon is totally eclipsed, the sky will be dark enough for the Milky Way to appear.
The high altitude of the Moon at mid-eclipse from North America (40 to 70 degrees above the horizon) will also demand a lens as wide as 10mm to 24mm, depending on whether you use portrait or landscape orientation, and on whether your camera has a cropped-frame or full-frame sensor. The latter has the advantage in this category of wide-angle nightscape.
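To check whether a given lens can span both the horizon and a Moon 40 to 70 degrees up, you can compute its angle of view from the sensor dimension and focal length. This uses the standard angle-of-view formula; the focal lengths are just examples:

```python
import math

def angle_of_view_deg(sensor_mm, focal_mm):
    """Full angle of view along one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Long side of a full-frame sensor (36 mm), i.e. shooting in portrait orientation:
for f in (14, 20, 24):
    print(f"{f}mm lens covers {angle_of_view_deg(36, f):.0f} degrees vertically")
```

Only the widest lenses cover 100 degrees or more, which is why a 70-degree-high Moon plus foreground is such a demanding framing.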
Alternatively, using a longer 14mm to 35mm lens allows you to frame the Moon beside Orion and the winter Milky Way, as above, but without the landscape. Again, this will require a dark rural site.
If you take this type of image with a camera on a fixed tripod, use high ISOs to keep exposures below 10 to 20 seconds to avoid star trailing. You have an hour of totality to shoot lots of exposures, to ensure at least some turn out well.
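The trailing limit for untracked shots is often estimated with the common “rule of 500” heuristic (my framing, not the article’s): divide 500 by the effective focal length to get a rough maximum shutter time in seconds.

```python
def max_untracked_shutter_s(focal_mm, crop_factor=1.0, rule=500):
    """Rough longest exposure (seconds) before stars visibly trail on a
    fixed tripod, per the common 'rule of 500' heuristic."""
    return rule / (focal_mm * crop_factor)

print(round(max_untracked_shutter_s(24), 1))       # → 20.8 s on full frame
print(round(max_untracked_shutter_s(24, 1.5), 1))  # → 13.9 s on an APS-C crop body
```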
If you have a sky tracker to follow the stars, as I did above, exposures can be much longer — perhaps a minute to pick up the Milky Way really well — and ISOs can be lower to avoid noise.
Option 1 Variation — Urban Eclipses
Unfortunately, point-and-shoot cameras and so-called “bridge” cameras, ones with non-interchangeable lenses, likely won’t have lenses wide enough to capture the whole scene, landscape and all. Plus their sensors will be noisy when used at high ISOs. Those cameras might be best used to capture moderate telephoto closeups at bright urban sites.
With any camera, at urban sites look for scenic opportunities to capture the eclipsed Moon above a skyline or behind a notable landmark. By looking up from below you might be able to frame the Moon beside a church spire, iconic building, or a famous statue using a normal or short telephoto lens, making this a good project for those without ultra-wide lenses.
Whatever your lens or subject, at urban sites expose as best you can for the foreground, trying to avoid any bright and bare lights in the frame that will flood the image with lens flares in long exposures.
Capturing such a scene during the deep partial phases might produce a brighter Moon that stands out better in an urban sky than will a photo taken at mid-totality when the Moon is darkest.
TIP: Practice, Practice, Practice!
With any camera, especially beginner point-and-shoots, ensure success on eclipse night by practicing shooting the Moon before the eclipse, during the two weeks of the waxing Moon leading up to Full Moon night and the eclipse.
The crescent Moon with Earthshine on the dark side of the Moon is a good stand-in for the eclipsed Moon. Set aside the nights of January 8 to 11 to shoot the crescent Moon. Check for exposure and focus. Can you record the faint Earthshine? It’s similar in brightness to the shadowed side of the eclipsed Full Moon.
The next week, on the nights of January 18 and 19, the waxing gibbous Moon will be closer to its position for eclipse night and almost as bright as the uneclipsed Full Moon, allowing some rehearsals for shooting it near a landmark.
Option 2: Advanced — Multiple Exposures
An advanced method is to compose the scene so the lens frames the entire path of the Moon for the 3 hours and 16 minutes from the start to the end of the partial eclipse.
As shown above, including the landscape will require at least a 20mm lens on a full frame camera, or 12mm lens on a cropped frame camera. However, these charts are for my site in western Canada. From sites to the east and south where the Moon is higher an even wider lens might be needed, making this a tough sequence to take.
With wide lenses, the Moon will appear quite small. The high altitude of the Moon and the midnight timing won’t lend themselves to this type of multiple-image composite as well as they do for eclipses that happen near moonrise or moonset, as per the example below.
A still-image composite with the lunar disks well separated will need shots only every 5 minutes, as I did above for the September 27, 2015 eclipse.
Exposures for any lunar eclipse are tricky, whether you are shooting close-ups or wide-angles, because the Moon and sky change so much in brightness.
As I did for the image below, for a still-image composite, you can expose just for the bright lunar disk and let the sky go dark.
Exposures for just the Moon will range from very short (about 1/500th second at f/8 and ISO 100) for the partials, to 1/2 to 2 seconds at f/2.8 to f/4 and ISO 400 for the totals, then shorter again (back to 1/500 at ISO 100) for the end shots when the Full Moon has returned to its normal brilliance.
That’ll take constant monitoring and adjusting throughout the shoot, stepping the shutter speed gradually longer through the initial partial phase, then shorter again during the post-totality partial phase.
You’d then composite and layer (using a Lighten blend mode) the well-exposed disks (surrounded by mostly black sky) into another background image exposed longer for 10 to 30 seconds at ISO 800 to 1600 for the sky and stars, shot at mid-totality.
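A Lighten blend simply keeps the brighter pixel from either layer, which is why a well-exposed lunar disk on near-black sky drops cleanly onto a background frame. A minimal NumPy sketch of the idea (toy data; not ON1 or Photoshop itself):

```python
import numpy as np

def lighten_blend(base, layer):
    """Lighten blend mode: per-pixel maximum of the two images."""
    return np.maximum(base, layer)

# Toy data: a dim sky background and a frame with a bright lunar disk on black
sky = np.full((4, 4), 30, dtype=np.uint8)   # background exposure
disk = np.zeros((4, 4), dtype=np.uint8)
disk[1:3, 1:3] = 200                        # the well-exposed Moon
composite = lighten_blend(sky, disk)
print(composite[1, 1], composite[0, 0])     # → 200 30
```

The disk wins wherever it is brighter than the sky; elsewhere the background shows through untouched.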
To maintain the correct relative locations of the lunar disks and foreground, the camera cannot move.
That technique works best if it’s just a still image you are after, such as above. This image is such a composite, of the April 4, 2015 total lunar eclipse from Monument Valley, Utah.
This type of composite takes good planning and proper exposures to pull off, but will be true to the scene, with the lunar disk and its motion shown to the correct scale and position as it was in the sky. It might be a composite, but it will be accurate.
That’s in stark contrast to the flurry of ugly “faked” composites that will appear on the web by the end of the day on January 21, ones with huge telephoto Moons pasted willy-nilly onto a wide-angle sky.
Rather than look artistic, most such attempts look comically cut-and-pasted. They are amateurish. Don’t do it!
Option 3: Advanced — Wide-Angle Time-Lapses
If it’s a time-lapse movie you want (see the video below), take exposures every 10 to 30 seconds, to ensure a final movie with smooth motion.
Unlike shooting for a still-image composite, for a time lapse each frame will have to be exposed well enough to show the Moon, sky, and landscape.
That will require exposures long enough to show the sky and foreground during the partial phases — likely about 1 to 4 seconds at f/2.8 and ISO 400. In this case, the disk of the partially-eclipsed Moon will greatly overexpose, as it does toward the end of the above time-lapse from September 27, 2015.
But the Moon will darken and become better exposed during the late stages of the partial eclipse and during totality when a long exposure — perhaps now 10 to 20 seconds at f/2.8 and ISO 800 to 1600 — will record the bright red Moon amid the stars and winter Milky Way.
Maintaining a steady cadence through the entire sequence requires an interval long enough to accommodate the longest expected exposure at mid-totality, with camera settings similar to what you’ve used for other Milky Way nightscapes. If you’ve never shot those before, don’t attempt this complex sequence.
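The cadence arithmetic is worth doing in advance. Assuming a longest exposure of about 20 seconds at mid-totality, a fixed 25-second interval works throughout; the frame count and playback length then follow (a hypothetical helper with illustrative numbers):

```python
def timelapse_plan(event_minutes, interval_s, fps=30):
    """Frames captured over an event, and playback length at a given frame rate."""
    frames = int(event_minutes * 60 / interval_s)
    return frames, frames / fps

# 3h16m of partial + total phases, one frame every 25 seconds:
frames, playback_s = timelapse_plan(196, 25)
print(frames, round(playback_s, 1))  # → 470 15.7 (about 16 seconds of video)
```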
After totality, as the Moon and sky re-brighten, exposures will have to shorten again, symmetrically in reverse for the final partial phases.
Such a time-lapse requires consistently and incrementally adjusting the camera over the three or more hours of the eclipse on a cold winter night. The high altitude of the Moon and its small size on the required wide angle lenses will make any final time lapse less impressive than at eclipses that occur when the Moon is rising or setting.
But … the darkening of the sky and “turning on” of the Milky Way during totality will make for an interesting time-lapse effect. The sky and scene will be going from a bright fully moonlit night to effectively a dark moonless night, then back to moonlit. It’s a form of “holy grail” time lapse, requiring advanced processing with LRTimelapse software.
Again, do not move the camera. Choose your lens and frame your camera to include the entire path of the Moon for as long as you plan to shoot.
Even if the final movie looks flawed, individual frames should still produce good still images, or a composite built from a subset of the frames.
Option 4: Simple — Telephoto Close-Ups
The first thought of many photographers is to shoot the eclipse with as long a telephoto lens as possible. That can work, but …
The harsh reality is that the Moon is surprisingly small (only 1/2-degree across) and needs a lot of focal length to do it justice, if you want a lunar close-up.
You’ll need a 300mm to 800mm lens. Unfortunately, the Moon and sky are moving and any exposures over 1/4 to 2 seconds (required during totality) will blur the Moon badly if its disk is large on the frame and all you are using is a fixed tripod.
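Two numbers explain the difficulty: the size of the lunar disk on the sensor, and how far the untracked sky drifts during an exposure (about 15 arc seconds per second near the celestial equator). A rough sketch of both, with illustrative values:

```python
import math

MOON_DIAMETER_DEG = 0.5   # apparent diameter of the Moon
DRIFT_ARCSEC_PER_S = 15   # sidereal drift rate, worst case near the celestial equator

def moon_diameter_mm(focal_mm):
    """Diameter of the lunar disk projected onto the sensor."""
    return focal_mm * math.radians(MOON_DIAMETER_DEG)

def drift_on_sensor_um(focal_mm, exposure_s):
    """Sky drift on the sensor during an untracked exposure, in microns."""
    drift_deg = DRIFT_ARCSEC_PER_S * exposure_s / 3600
    return focal_mm * math.radians(drift_deg) * 1000

print(round(moon_diameter_mm(400), 1))    # → 3.5 mm disk at 400 mm
print(round(drift_on_sensor_um(400, 2)))  # → 58 microns of trailing in a 2 s exposure
```

Roughly 58 microns is more than a dozen pixels on a typical sensor, hence the visible blur at long focal lengths on a fixed tripod.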
If you don’t have a tracking mount, one solution is to keep the Moon’s disk small (using no more than a fast f/2 or f/2.8 135mm to 200mm lens) and exposures short by using a high ISO speed of 1600 to 3200. Frame the Moon beside the Beehive star cluster as I show below.
Take a range of exposures. But … be sure to focus!
TIP: Focus! And Focus Again!
Take care to focus precisely on a bright star using Live View. That’s true of any lens but especially telephotos and telescopes.
Focus not just at the start of the night, but also more than once again later at night. Falling temperatures on a winter night will cause long lenses and telescopes to shift focus. What was sharp at the start of the eclipse won’t be by mid totality.
The catch is that if you are shooting for a time-lapse or composite you likely won’t be able to re-point the optics to re-focus on a star in mid-eclipse. In that case, be sure to set up the gear well before you want to start shooting, to let it cool to ambient air temperature. Focus on a star first, then frame the scene, and hope the lens doesn’t shift out of focus. You might be able to focus on the bright limb of the Moon, but it’s risky.
Fuzzy images, not bad exposures, are the ruin of most attempts to capture a lunar eclipse, especially with a telephoto lens. And the Moon itself, especially during totality, is not a good target to focus on. Use a bright star. The winter sky has lots!
Option 5: Advanced — Tracked Telescopic Close-Ups
If you have a mount that can be polar aligned to track the sky, then many more options are open to you.
You can use a telescope mount or one of the compact and portable trackers, such as the Sky-Watcher Star Adventurer (I show the Mini model above) or iOptron Sky Tracker units. While these latter units work great, you are best to keep the payload weight down and your lens size well under 300mm.
That’s just fine for this eclipse, as you really don’t need a frame-filling Moon. The reason is that the Moon will appear about 6 degrees west of the bright star cluster called the Beehive, or Messier 44, in Cancer.
As shown above, a 135mm to 200mm lens will frame this unique pairing well. For me, that will be the signature photo of this eclipse. The pairing can happen only at lunar eclipses that occur in late January, and there won’t be any more of those until 2037!
That’s the characteristic that makes this eclipse rare and unique, not that it’s a “super-duper, bloody, wolf Moon!” But it doesn’t make for a catchy headline.
Exposures to show the star cluster properly might have to be long enough (30 to 120 seconds) that the Moon overexposes, even at mid-totality. If so, take different exposures for the Moon and stars, then composite them later, as I did above for the December 20, 2010 eclipse near the Messier 35 star cluster in Gemini.
If you really want to shoot with even more focal length for framing just the Moon, a monster telephoto lens will work, but a small telescope such as an 80mm aperture f/6 to f/7 refractor will provide enough focal length and image size at much lower cost and weight, and will be easier to attach to a telescope mount.
But even with a 500mm to 800mm focal length telescope the Moon fills only a small portion of the frame, though cropped frame cameras have the advantage here. Use one if it’s a big Moon you’re after!
No matter the camera, the lens or telescope should be mounted on a solid equatorial telescope mount that you must polar align earlier in the night to track the sky.
Alternatively, a motorized Go To telescope on an alt-azimuth mount will work, but only for single shots. The rotation of the field with alt-az mounts will make a mess of any attempts to shoot multiple-exposure composites or time-lapses, described below.
Whatever the mount, for the sharpest lunar disks during totality, use the Lunar tracking rate for the motor.
Assuming an f-ratio of f/6 to f/8, exposures will vary from as short as 1/250th second at ISO 100 to 200 for the barely eclipsed Moon, to 4 to 20 seconds at ISO 400 to 1600 for the Moon at mid-totality.
It’s difficult to provide a precise exposure recommendation for totality because the brightness of the Moon within the umbra can vary by several stops from eclipse to eclipse, depending on how much red sunlight manages to make it through Earth’s atmospheric filter to light the Moon.
TIP: Shoot for HDR
As I did above, during the deep partial phases an option is to shoot both long, multi-second exposures for the red umbra and short, split-second exposures for the bright part of the Moon not yet in the umbra.
Take 5 to 7 shots in rapid succession, covering the range needed, perhaps at 1-stop increments. Merge those later with High Dynamic Range (HDR) techniques and software, or with luminosity masks.
Even if you’re not sure how to do HDR processing now, shoot all the required exposures anyway so you’ll have them when your processing skills improve.
Option 6: Advanced — Close-Up Composites and Time-Lapses
With a tracking telescope on an equatorial mount you could fire shots every 10 to 30 seconds, and then assemble them into a time-lapse movie, as below.
But as with wide-angle time-lapses, that will demand constant attention to gradually and smoothly shift exposures, ideally by 1/3rd-stop increments every few shots during the partial and total phases. Make lots of small adjustments, rather than fewer large ones.
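A ramp of 1/3-stop steps between two shutter speeds can be pre-computed, so you know roughly how many adjustments the partial phase will demand (a hypothetical helper; the endpoint exposures are illustrative):

```python
import math

def exposure_ramp(start_s, end_s, step_stops=1/3):
    """Shutter speeds from start to end in fixed fractional-stop increments."""
    n = round(math.log2(end_s / start_s) / step_stops)
    return [start_s * 2 ** (step_stops * i) for i in range(n + 1)]

# From 1/500 s (early partial phase) up to 4 s (totality), in 1/3-stop steps:
ramp = exposure_ramp(1/500, 4)
print(len(ramp))  # → 34 settings to step through over the ~67-minute partial phase
```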
If you track at the lunar rate, as I did above, the Moon should stay more or less centred while it drifts though the stars, assuming your mount is accurately polar aligned, an absolutely essential prerequisite here.
Conversely, track at the sidereal rate and the stars will stay more or less fixed while the Moon drifts through the frame from right to left (west to east) as I show above in a composite of the October 27, 2004 eclipse.
But such a sequence takes even more careful planning to position the Moon correctly at the start of the sequence so it remains “in frame” for the duration of the eclipse, and ends up where you want at the end.
In the chart below, north toward Polaris is at the top of the frame. Position the Moon at the start of the eclipse so it ends up just above the centre of the frame at mid-eclipse. Tricky!
As I show above, for this type of “Moon-thru-shadow” sequence a focal length of about 400mm is ideal on a full frame camera, or 300mm on a cropped frame camera.
From such a time-lapse set you could also use several frames selected from key stages of the eclipse, as I did in 2004, to make up a multiple-image composite showing the Moon moving through the Earth’s shadow.
Again, planetarium software, such as the Starry Night program I used above, which can be set to display the field of view of the camera and lens of your choice, is essential to plan the shoot. Don’t attempt it without the right software to plan the framing.
I would consider the telescopic time-lapse method the most challenging of techniques. Considering the hour of the night and the likely cold temperatures, your best plan might be to keep it simple.
It’s what I plan to do.
I’ll be happy to get a tracked telephoto close-up of the Moon and Beehive cluster as my prime goal, with a wide-angle scene of the eclipsed Moon beside Orion and the Milky Way as a bonus. A few telescope close-ups will be even more of a bonus.
However, just finding clear skies might be the biggest challenge!
Try the Astrospheric app for astronomy-oriented weather predictions. The Environment Canada data it uses has led me to clear skies for several recent eclipses that other observers in my area missed.
It’ll be worth the effort to chase!
The next total eclipse of the Moon anywhere on Earth doesn’t occur until May 26, 2021 in an event visible at dawn from Western North America. The next total lunar eclipse visible from all of North America comes a lunar year later, on May 15, 2022.
I leave you with a music video of the lunar eclipse of September 27, 2015 that incorporates still and time-lapse sequences shot using all of the above methods.
Total Lunar Eclipse from Alan Dyer on Vimeo.
Can the new version of ON1 Photo RAW match Photoshop for astrophotography?
The short TL;DR answer: No.
But … as always, it depends. So do read on.
Released in mid-November 2018, the latest version of ON1 Photo RAW greatly improves its non-destructive workflow. Combining Browsing, Cataloging, and Raw Developing with newly improved Layers capabilities, ON1 is out to compete with Adobe’s Creative Cloud photo suite – Lightroom, Camera Raw, Bridge, and Photoshop – for those looking for a non-subscription alternative.
Many reviewers love the new ON1 – for “normal” photography.
But can it replace Adobe for night sky photos? I put ON1 Photo RAW 2019 through its paces for the demanding tasks of processing nightscapes, time-lapses, and deep-sky astrophotos.
In my eBook “How to Photograph and Process Nightscapes and Time-Lapses” (linked to at right) I present dozens of processing tutorials, including several on how to use ON1 Photo RAW, but the 2018 edition. I was critical of many aspects of the old version, primarily of its destructive workflow when going from its Develop and Effects modules to the limited Layers module of the 2018 edition.
I’m glad to see many of the shortfalls have been addressed, with the 2019 edition offering a much better workflow allowing layering of raw images while maintaining access to all the original raw settings and adjustments. You no longer have to flatten and commit to image settings to layer them for composites. When working with Layers you are no longer locked out of key functions such as cropping.
I won’t detail all the changes to ON1 2019 but they are significant and welcome.
The question I had was: Are they enough for high-quality astrophotos in a non-destructive workflow, Adobe Photoshop’s forte?
While ON1 Photo RAW 2019 is much better, I concluded it still isn’t a full replacement for Adobe’s Creative Cloud suite, at least not for astrophotography.
NOTE: All images can be downloaded as high-res versions for closer inspection.
ON1 2019 is Better, But for Astrophotography …
Functions in Layers are still limited. For example, there is no stacking and averaging for noise smoothing. Affinity Photo has those.
Filters, though abundant for artistic special effect “looks,” are limited in basic but essential functions. There is no Median filter, for one.
Despite a proliferation of contrast controls, for deep-sky images (nebulas and galaxies) I was still not able to achieve the quality of images I’ve been used to with Photoshop.
The lack of support for third-party plug-ins means ON1 cannot work with essential time-lapse programs such as Timelapse Workflow or LRTimelapse.
Nightscapes: ON1 Photo RAW 2019 works acceptably well for nightscape still images:
Its improved layering and excellent masking functions are great for blending separate ground and sky images, or for applying masked adjustments to selected areas.
Time-Lapses: ON1 is just adequate for basic time-lapse processing:
Yes, you can develop one image and apply its settings to hundreds of images in a set, then export them for assembly into a movie. But there is no way to vary those settings over time, as you can by mating Lightroom to LRTimelapse.
As with the 2018 edition, you still cannot copy and paste masked local adjustments from image to image, limiting their use.
Exporting those images is slow.
Deep-Sky: ON1 is not a program I can recommend for deep-sky image processing:
Stars inevitably end up with unsightly sharpening haloes.
De-Bayering artifacts add blocky textures to the sky background.
And all the contrast controls still don’t provide the “snap” and quality I’m used to with Photoshop when working with low-contrast subjects.
Library / Browse Functions
ON1 is sold first and foremost as a replacement for Adobe Lightroom, and to that extent it can work well. Unlike Lightroom, ON1 allows browsing and working on images without having to import them formally into a catalog.
However, you can create a catalog if you wish, one that can be viewed even if the original images are not “on-line.” The mystery seems to be where ON1 puts its catalog file on your hard drive. I was not able to find it, to manually back it up. Other programs, such as Lightroom and Capture One, locate their catalogs out in the open in the Pictures folder.
For those really wanting a divorce from Adobe, ON1 now offers an intelligent AI-based function for importing Lightroom catalogs and transferring all your Lightroom settings you’ve applied to raw files to ON1’s equivalent controls.
However, while ON1 can read Photoshop PSD files, it will flatten them, so you would lose access to all the original image layers.
ON1’s Browse module is good, with many of the same functions as Lightroom, such as “smart collections.” Affinity Photo – perhaps ON1’s closest competitor as a Photoshop replacement – still lacks anything like it.
But I found ON1’s Browse module buggy, often taking a long while to allow access into a folder, presumably while it is rendering image previews.
There are no plug-ins or extensions for exporting directly to or synching to social media and photo sharing sites.
ON1 did a fairly good job. Some of its special effect filters, such as Dynamic Contrast, Glow, and Sunshine, can help bring out the Milky Way, though they do add an artistic “look” to an image which you might or might not like.
Below, I compare Adobe Camera Raw (ACR) to ON1. It was tough to get ON1’s image looking the same as ACR’s result, but then again, perhaps that’s not the point. Does it just look good? Yes, it does.
Compared to Adobe Camera Raw, which has a good array of basic settings, ON1 has most of those and more, in the form of many special Effects, with many combined as one-click Presets, as shown below.
A few presets and individual filters – the aforementioned Dynamic Contrast and Glow – are valuable. However, most of ON1’s filters and presets will not be useful for astrophotography, unless you are after highly artistic and unnatural effects.
Noise Reduction and Lens Correction
Critical to all astrophotography is excellent noise reduction. ON1 does a fine job here, with good smoothing of noise without harming details.
Lens Correction works OK. It detected the 20mm Sigma Art lens and automatically applied distortion correction, but not any vignetting (light “fall-off”) correction, perhaps the most important correction in nightscape work. You have to dial this in manually by eye, a major deficiency.
By comparison, ACR applies both distortion and vignetting correction automatically. It also includes settings for many manual lenses that you can select and apply in a click. For example, ACR (and Lightroom) includes settings for popular Rokinon and Venus Optics manual lenses; ON1 does not.
Hot Pixel Removal
I shot the example image on a warm summer night and without using in-camera Long Exposure Noise Reduction (to keep the gap between exposures short when shooting sets of tracked and untracked exposures for later compositing).
However, the penalty for not using LENR to expedite the image taking is a ground filled with hot pixels. While Adobe Camera Raw does have some level of hot pixel removal working “under the hood,” many specks remained.
ON1 showed more hot pixels, until you clicked Remove Hot Pixels, found under Details. As shown at centre above, it did a decent job getting rid of the worst offenders.
But as I’ll show later, the penalty is that stars now look distorted and sometimes doubled, or are removed outright. ON1 doesn’t do a good job of distinguishing true sharp-edged hot pixels from the softer images of stars. Indeed, it tends to over-sharpen stars.
A competitor, Capture One 11, does a better job, with an adjustable Single Pixel removal slider, so you can at least select the level of star loss you are willing to tolerate to get rid of hot pixels.
Star Image Quality
Yes, we are pixel peeping here, but that’s what we do in astrophotography. A lot!
Stars in ON1 don’t look as good as in Camera Raw. Inevitably, as you add contrast enhancements, stars in ON1 start to exhibit dark and unsightly “sharpening haloes” not present in ACR, despite me applying similar levels of sharpening and contrast boosts to each version of the image.
Camera Raw has been accused of producing images that are not as sharp as with other programs such as Capture One and ON1.
There’s a reason. Other programs over-sharpen, and it shows here.
We can get away with it here in wide-field images, but not later with deep-sky close-ups. I don’t like it. And it is unavoidable. The haloes are there, albeit at a low level, even with no sharpening or contrast enhancements applied, and no matter what image profile is selected (I used ON1 Standard throughout).
You might have to download and closely inspect these images to see the effect, but ON1’s de-Bayering routine exhibits a cross-hatched blocky pattern at the pixel-peeping level. ACR does not.
I see this same effect with some other raw developers. For example, the free RawTherapee shows it with many of its choices of de-Bayering algorithms, but not all. Of the more than a dozen raw developers I tested a year ago, ACR and DxO PhotoLab had (and still have) the most artifact-free de-Bayering and smoothest noise reduction.
Again, we can get away with some pixel-level artifacts here, but not later, in deep-sky processing.
Nightscape Processing — Layering and Compositing
The 2018 version of ON1 forced you to destructively flatten images when bringing them into the Layers module.
The 2019 version of ON1 improves that. It is now possible to composite several raw files into one image and still retain all the original Develop and Effects settings for non-destructive work.
You can then use a range of masking tools to mask in or out the sky.
For the example above, I have stacked tracked and untracked exposures, and am starting to mask out the trailed stars from the untracked exposure layer.
To do this with Adobe, you would have to open the developed raw files in Photoshop (ideally using “smart objects” to retain the link back to the raw files). But with ON1 we stay within the same program, to retain access to non-destructive settings. Very nice!
For adding masks, ON1 2019 does not have the equivalent of Photoshop’s excellent Quick Selection Tool for selecting the sky or ground. It does have a “Perfect Brush” option, which uses the tonal values of the pixels below it, rather than detecting edges, to avoid “painting over the lines.”
While the Perfect Brush does a decent job, it still requires a lot of hand painting to create an accurate mask without holes and defects. There is no non-destructive “Select and Mask” refinement option as in Photoshop.
Yes, ON1’s Refine Brush and Chisel Mask tools can help clean up a mask edge but are destructive to the mask. That’s not acceptable to my non-destructive mindset!
The masking tools are also applicable to adding “Local Adjustments” to any image layer, to brighten or darken regions of an image for example.
These work well and I find them more intuitive than the “pins” ACR uses on raw files, or DxO PhotoLab’s quirky “U-Point” interface.
ON1’s Local Adjustments work more like Photoshop’s Adjustment Layers and are similarly non-destructive. Excellent.
A very powerful feature of ON1 is its built-in Luminosity masking.
Yes, Camera Raw now has Range Masks, and Photoshop can be used to create luminosity masks, but making Photoshop’s luminosity masks easily adjustable requires purchasing third-party extension panels.
ON1 can create an adjustable and non-destructive luminosity mask on any image or adjustment layer with a click.
While such masks, based on the brightness of areas, aren’t so useful for low-contrast images like the Milky Way scene above, they can be very powerful for merging high-contrast images (though ON1 also has an HDR function not tested here).
ON1 has the advantage here. Its Luminosity masks are a great feature for compositing exposures or for working on regions of bright and dark in an image.
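For anyone curious what a luminosity mask actually is under the hood: it is just a grayscale mask built from each pixel's brightness, so an adjustment applied through it fades in proportionally in the highlights (or, inverted, in the shadows). A rough NumPy sketch of the concept, using standard Rec. 709 luma weights; this is my own illustration, not ON1's actual algorithm:

```python
import numpy as np

def luminosity_mask(rgb, invert=False):
    """Build a mask from pixel brightness (Rec. 709 luma weights).

    Bright pixels get mask values near 1, dark pixels near 0, so an
    adjustment applied through the mask mostly affects the highlights.
    Pass invert=True to target the shadows instead.
    """
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    return 1.0 - luma if invert else luma

# Hypothetical 2x2 image: a bright "sky" pixel and a dark "ground" pixel.
img = np.array([[[0.9, 0.9, 0.9], [0.1, 0.1, 0.1]],
                [[0.5, 0.5, 0.5], [0.8, 0.8, 0.8]]])

mask = luminosity_mask(img)
# Darken only the bright areas, in proportion to their brightness:
darkened = img - 0.2 * mask[..., None] * img
```

The key point is that the mask is derived from the image itself, which is why a program can regenerate it non-destructively with one click whenever the underlying layer changes.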
Here again is the final result, above.
It is not just one image each for the sky and ground, but a stack of four images for each half of the composite, to smooth noise. This form of stacking is largely unique to astrophotography, and is commonly used to reduce noise in nightscapes and in deep-sky images, as shown later.
Here I show how you have to stack images in ON1.
Unlike Photoshop and Affinity Photo, ON1 does not have the ability to merge images automatically into a stack and apply a mathematical averaging to the stack, usually a Mean or Median stack mode. The averaging of the image content is what reduces the random noise.
Instead, with ON1 you have to perform an “old school” method of average stacking – by changing the opacity of the layers, so that Layer 2 = 50%, Layer 3 = 33%, Layer 4 = 25%, and so on. The result is identical to performing a Mean stack mode in Photoshop or Affinity.
Fine, except there is no way to perform a Median stack, which can be helpful for eliminating odd elements present in only one frame, perhaps an aircraft trail.
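For the record, the opacity trick really is a Mean stack in disguise, and a Median is what rejects one-frame artifacts like that aircraft trail. A quick NumPy sketch with hypothetical pixel values (not ON1 code) demonstrates both points:

```python
import numpy as np

# Four simulated exposures of the same two pixels (hypothetical values).
# Frame 2 contains a bright "aircraft trail" in the second pixel only.
frames = np.array([
    [100.0, 102.0],
    [ 98.0, 250.0],   # 250 = trail, present in this frame alone
    [101.0, 101.0],
    [ 99.0,  99.0],
])

# The "old school" opacity stacking: blend layer N at opacity 1/N.
result = frames[0]
for n, frame in enumerate(frames[1:], start=2):
    opacity = 1.0 / n          # 50%, 33%, 25%, ...
    result = result * (1 - opacity) + frame * opacity

# It reproduces a Mean stack exactly:
assert np.allclose(result, frames.mean(axis=0))

# But the mean is pulled up by the trail, while a median rejects it:
# frames.mean(axis=0)      -> [ 99.5, 138.0 ]
# np.median(frames, axis=0) -> [ 99.5, 101.5 ]
```

That is why the missing Median stack mode matters: no amount of opacity fiddling will reproduce it.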
Copy and Paste Settings
Before we even get to the stacking stage, we have to develop and process all the images in a set. Unlike Lightroom or Camera Raw, ON1 can’t develop and synchronize settings to a set of images at once. You can work on only one image at a time.
So, you work on one image (one of the sky images here), then Copy and Paste its settings to the other images in the set. I show the Paste dialog box here.
This works OK, though I did find some bugs – the masks for some global Effects layers did not copy properly; they copied inverted, as black instead of white masks.
However, Luminosity masks did copy from image to image, which is surprising considering the next point.
The greater limitation is that no Local Adjustments (ones with masks to paint in a correction to a selected area) copy from one image to another … except ones with gradient masks. Why the restriction?
So as wonderful as ON1’s masking tools might be, they aren’t of any use if you want to copy their masked adjustments across several images, or, as shown next, to a large time-lapse set.
While Camera Raw’s and Lightroom’s Local Adjustment pins are more awkward to work with, they do copy across as many images as you like.
A few Adobe competitors, such as Affinity Photo (as of this writing), simply can’t do this.
By comparison, with the exception of Local Adjustments, ON1 does have good functions for Copying and Pasting Settings. These are essential for processing a set of hundreds of time-lapse frames.
Once all the images are processed – whether with ON1 or any other program – the frames have to be exported to an intermediate set of JPGs for assembly into a movie by third-party software. ON1 itself can’t assemble movies, but then again neither can Lightroom (at least not very well), though Photoshop can, through its video editing functions.
For my test set of 220 frames, each with several masked Effects layers, ON1 took 2 hours and 40 minutes to perform the export to 4K JPGs. Photoshop, through its Image Processor utility, took 1 hour and 30 minutes to export the same set, developed similarly and with several local adjustment pins.
ON1 did the job but was slow.
A greater limitation is that, unlike Lightroom, ON1 does not accept any third party plug-ins (it serves as a plug-in for other programs). That means ON1 is not compatible with what I feel are essential programs for advanced time-lapse processing: either Timelapse Workflow (from https://www.timelapseworkflow.com) or the industry-standard LRTimelapse (from https://lrtimelapse.com).
Both programs work with Lightroom to perform incremental adjustments to settings over a set of images, based on the settings of several keyframes.
Lacking the ability to work with these programs means ON1 is not a program for serious and professional time-lapse processing.
Wide-Angle Milky Way
Now we come to the most demanding task: processing long exposures of the deep-sky, such as wide-angle Milky Way shots and close-ups of nebulas and galaxies taken through telescopes. All require applying generous levels of contrast enhancement.
As the above example shows, try as I might, I could not get my test image of the Milky Way to look as good with ON1 as it did with Adobe Camera Raw. Despite the many ways to increase contrast in ON1 (Contrast, Midtones, Curves, Structure, Haze, Dynamic Contrast and more!), the result still looked flat, with more prominent sky gradients than with ACR.
And remember, with ACR that’s just the start of a processing workflow. You can then take the developed raw file into Photoshop for even more precise work.
With ON1, its effects and filters are all you have to work with. Yes, that simplifies the workflow, but its choices are more limited than with Photoshop, despite ON1’s huge number of Presets.
Similarly, taking a popular deep-sky subject, the Andromeda Galaxy, aka M31, and processing the same original images with ON1 and ACR/Photoshop resulted in what I think is a better-looking result with Photoshop.
Of course, it’s possible to change the look of such highly processed images with the application of various Curves and masked adjustment layers. And I’m more expert with Photoshop than with ON1.
But … as with the Cygnus Milky Way image, I just couldn’t get Andromeda looking as good in ON1. It always looked a little flat.
Dynamic Contrast did help snap up the galaxy’s dark lanes, but at the cost of “crunchy” stars, as I show next. A luminosity “star mask” might help protect the stars, but I think the background sky will inevitably suffer from the de-Bayering artifacts.
Star and Background Sky Image Quality
As I showed with the nightscape image, stars in ON1 end up looking too “crunchy,” with dark halos from over sharpening, and also with the blocky de-Bayering artifacts now showing up in the sky.
I feel it is not possible to avoid dark star haloes, as any application of contrast enhancements, so essential for these types of objects, brings them out, even if you back off sharpening at the raw development stage, or apply star masks.
ON1 is applying too much sharpening “under the hood.” That might “wow” casual daytime photographers into thinking ON1 is making their photos look better, but it is detrimental to deep-sky images. Star haloes are a sign of poor processing.
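Those dark haloes are the classic overshoot of unsharp masking: boosting the difference between the image and a blurred copy pushes the pixels just outside a bright star below the sky level. A 1-D toy illustration in NumPy (hypothetical pixel values, and a crude box blur, not ON1's actual sharpening kernel):

```python
import numpy as np

# A 1-D "star" profile sitting on a uniform sky background.
sky = 10.0
profile = np.array([sky, sky, sky, 40.0, 100.0, 40.0, sky, sky, sky])

# Unsharp masking: add back an amplified difference from a blurred copy.
blurred = np.convolve(profile, np.ones(3) / 3, mode="same")
amount = 1.5
sharpened = profile + amount * (profile - blurred)

# The star's peak brightens, but the pixels flanking it (indices 2 and 6)
# drop BELOW the sky level: those are the dark haloes.
```

Any later contrast stretch then amplifies that below-sky dip, which is why the haloes emerge even when sharpening looked harmless at the raw stage.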
Noise and Hot Pixels
ON1’s noise reduction is quite good, and by itself does little harm to image details.
But turn on the Remove Hot Pixel button and stars start to be eaten. Faint stars fade out and brighter stars get distorted into double shapes or have holes in them.
Hot pixel removal is a nice option to have, but for these types of images it does too much harm to be useful. Use LENR (Long Exposure Noise Reduction, the in-camera dark frame) or take separate dark frames, best practices in any case.
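The arithmetic behind dark frames is simple enough to sketch: average several lens-cap frames into a master dark, then subtract it from each light frame, so hot pixels cancel rather than getting “repaired” by a filter that can’t tell them from stars. A toy NumPy example with hypothetical values:

```python
import numpy as np

# Simulated light frame: real signal plus a hot pixel at [0, 1].
light = np.array([[120.0, 900.0],
                  [118.0, 121.0]])

# Several dark frames (lens cap on, same ISO/exposure/temperature):
# only thermal signal, including the same hot pixel.
darks = np.array([
    [[2.0, 781.0], [1.0, 3.0]],
    [[3.0, 779.0], [2.0, 2.0]],
    [[1.0, 780.0], [3.0, 1.0]],
])

# Average the darks into a master dark, then subtract it.
master_dark = darks.mean(axis=0)          # hot pixel averages to 780
calibrated = np.clip(light - master_dark, 0, None)
# The hot pixel falls back in line with its neighbors; real stars
# elsewhere in the frame are untouched.
```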
Image Alignment and Registration
Before any processing of deep-sky images is possible, it is first necessary to stack and align them, to make up for slight shifts from image to image, usually due to the mount not being perfectly polar aligned. Such shifts can be both translational (left-right, up-down) and rotational (turning about the guide star).
New to ON1 2019 is an Auto-Align Layers function. It worked OK but not nearly as well as Photoshop’s routine. In my test images of M31, ON1 didn’t perform enough rotation.
Once stacked and aligned, and as I showed above, you then have to manually change the opacities of each layer to blend them for noise smoothing.
By comparison, Photoshop has a wonderful Statistics script (under File>Scripts) that will automatically stack, align, then mean or median average the images, and turn the result into a non-destructive smart object, all in one fell swoop. I use it all the time for deep-sky images. There’s no need for separate programs such as Deep-Sky Stacker.
In ON1, however, all that has to be done manually, step-by-step. ON1 does do the job, just not as well.
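The translational half of that alignment can be sketched with phase correlation, where the peak of the normalized cross-power spectrum of two frames reveals the pixel shift between them. A minimal NumPy illustration of the idea – my own sketch, handling translation only, not the rotation that polar-alignment error also introduces:

```python
import numpy as np

def find_shift(ref, img):
    """Estimate the integer (row, col) shift of img relative to ref
    via phase correlation: the normalized cross-power spectrum of the
    two frames transforms back to a sharp peak at the shift."""
    F, G = np.fft.fft2(ref), np.fft.fft2(img)
    cross = np.conj(F) * G
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    return tuple(int(p) - sz if p > sz // 2 else int(p)
                 for p, sz in zip(peak, corr.shape))

# A fake star field, then the same field shifted by (3, 5) pixels.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, (3, 5), axis=(0, 1))

shift = find_shift(ref, img)                        # recovers (3, 5)
aligned = np.roll(img, (-shift[0], -shift[1]), axis=(0, 1))
# aligned now matches ref exactly for this pure-translation case
```

Real alignment tools layer sub-pixel refinement and rotation handling on top of this, which is exactly where ON1's Auto-Align fell short in my M31 test.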
ON1 Photo RAW 2019 is a major improvement, primarily in providing a more seamless and less destructive workflow.
Think of it as Lightroom with Layers!
But it isn’t Photoshop.
True to ON1’s heritage as a special effect plug-in, it has some fine Effect filters, such as Dynamic Contrast above, ones I sometimes use from within Photoshop as plug-in smart filters.
Under Sharpen, ON1 does offer a High Pass option, a popular method for sharpening deep-sky objects.
Missing Filters and Adjustments
But for astrophoto use, ON1 is missing a lot of basic but essential filters for pixel-level touch-ups. Here’s a short list:
• Missing are Median, Dust & Scratches, Radial Blur, Shake Reduction, and Smart Sharpen, to mention just a handful of the filters I find useful for astrophotography, among the dozens of others Photoshop has but ON1 does not. But then again, neither does Lightroom, another example of how ON1 is more like Lightroom with layers than a Photoshop replacement.
• While ON1 has many basic adjustments for color and contrast, its version of Photoshop’s Selective Color lacks Neutral or Black sliders, great for making fine changes to color balance in astrophotos.
• While there is a Curves panel, it has no equivalent to Photoshop’s “Targeted Adjustment Tool” for clicking on a region of an image to automatically add an inflection point at the right spot on the curve. This is immensely useful for deep-sky images.
• Also lacking is a basic Levels adjustment. I can live without it, but most astrophotographers would find this a deal-breaker.
• On the other hand, hard-core deep-sky photographers who do most of their processing in specialized programs such as PixInsight, using Photoshop or Lightroom only to perform final touch-ups, might find ON1 perfectly fine. Try it!
Saving and Exporting
ON1 saves its layered images as proprietary .onphoto files and does so automatically. There is no Save command, only a final Export command. As such it is possible to make changes you then decide you don’t like … but too late! The image has already been saved, writing over your earlier good version. Nor can you Save As … a file name of your choice. Annoying!
Opening a layered .onphoto file (even with ON1 itself already open) can take a minute or more for it to render and become editable.
Once you are happy with an image, you can Export the final .onphoto version as a layered .PSD file, but the masks ON1 exports to the Photoshop layers may not match the opacity of the ones you had back in ON1. So the exported .PSD file doesn’t look like what you were working on. That’s a bug.
Only exporting a flattened TIFF file gets you a result that matches your ON1 file, but it is now flattened.
Bugs and Cost
I encountered a number of other bugs, ones bad enough to lock up ON1 now and then. I’ve even seen ON1’s own gurus encounter bugs with masking during their live tutorials. These will no doubt get fixed in 2019.x upgrades over the next few months.
But by late 2019 we will no doubt be offered ON1 Photo RAW 2020 for another $80 upgrade fee, over the original $100 to $120 purchase price. True, there’s no subscription, but ON1 still costs a modest annual fee, presuming you want the latest features.
Now, I have absolutely no problem with that, and ON1 2019 is a significant improvement.
However, I found that for astrophotography it still isn’t there yet as a complete replacement for Adobe.
I’m pleased to announce that my “Nightscapes and Time-Lapses” eBook is now available for all devices as a “universal” PDF!
First published in 2014, and revised several times since then, my How to Photograph and Process Nightscapes and Time-Lapses eBook had been available only for Apple devices through the Apple iBooks Store. Not any more!
Over the years, many people have inquired about an edition for other devices, notably Android and Windows tablets. The only format I can be sure that wide array of devices will read and display as I intend is PDF.
To convert the interactive Apple iBook into a PDF required splitting the content into two volumes:
Volume 1 deals just with Photography in 425 pages.
Volume 2 deals just with Processing, also in 425 pages.
Volume 2 includes all the same step-by-step tutorials as the Apple edition, but spread over many more pages. That’s because the Apple Edition allows “stacking” many processing steps into a one-page interactive gallery.
In the PDF version, however, those same steps are shown over several pages. And there are about 50 processing tutorials, including for selected non-Adobe programs such as Affinity Photo, ON1 Photo RAW, and DxO PhotoLab.
The other main difference is that, unlike the Apple version, I cannot embed videos. So all the videos are provided by links to Vimeo feeds, many “private” so only my ebook owners have access to those videos.
Otherwise, the combined content of the two PDFs is the same as the Apple iBooks edition.
I’ve also updated the Apple iBooks version (to v3.1) to revise the content, and add a few new pages: on Luminosity Mask panel extensions, southern hemisphere Milky Way and Moon charts, and even the new Nikon Z6 camera. It is now 580 pages.
Owners of the previous Apple iBooks edition can get the updated version for free. In iBooks, check under Purchased>Updates.
Both Apple and PDF editions are now in sync and identical in content. I think you’ll find them the most comprehensive works on the subject in print and in digital.