This article describes an experiment combining operating systems and software for electronically assisted astronomy:
The Fedora Linux laptop that we normally use for imaging has a 17″ screen and a 9th-generation Core i7 processor, and it also runs a Windows 10 virtual machine (VM). The VM runs in one of the four Linux workspaces and has access to all of the Linux directories (folders).
We ran DeepSkyStacker Live (DSS Live) in the VM and AstroDMx Capture in one of the other Linux workspaces; it is easy to switch between them.
Screenshot showing the four Linux workspaces: the Win 10 VM running DSS Live in the bottom-left workspace, and AstroDMx Capture capturing data on the Orion nebula in the top-left workspace
AstroDMx Capture was set up to place captured images in the directory (folder) that we called ‘MONITORED’ on the Linux desktop.
Screenshot of the Stacked image tab of DSS Live where the stacked image (in this case, the Orion nebula) is accumulated
Screenshot showing the Win 10 VM running DSS Live with the Settings tab open
The Settings tab is fairly intuitive. It also allows rejection parameters to be set that prevent a given image from being added to the stack based on its properties, such as the brightness of the background, the number of stars detected, or other criteria.
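As an illustration, a rejection test of this kind might look like the following sketch; the property names and thresholds here are invented for the example and are not DSS Live's actual parameters:

```python
# Illustrative frame-rejection check in the spirit of DSS Live's settings.
# The dict keys and default thresholds are hypothetical, for this sketch only.
def accept_frame(props, max_background=0.25, min_stars=15):
    """Return True if a frame should be added to the stack."""
    if props["background"] > max_background:  # too bright: moonlight, cloud glow
        return False
    if props["star_count"] < min_stars:       # too few stars: cloud, lost focus
        return False
    return True
```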
DSS Live works here by monitoring a directory (folder) that we have called ‘MONITORED’ on the Linux desktop. As each new image is deposited into the ‘MONITORED’ folder by AstroDMx Capture, it is registered and stacked by DSS Live, and the accumulating stacked image is placed by DSS Live in a directory that we have called ‘STACKED’ on the Linux desktop.
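The monitor-and-stack loop can be sketched as follows. This is a minimal illustration, not DSS Live's implementation: frames are NumPy arrays saved as `.npy` files (a real stacker would read FITS), the registration step is omitted, and the folder names simply follow this article:

```python
import os
import numpy as np

MONITORED = "MONITORED"  # folder the capture program deposits frames into
STACKED = "STACKED"      # folder the running stacked image is written to

def scan_and_stack(state):
    """One polling pass: pick up any new frames and update the running mean.

    `state` holds the running pixel sum and the set of filenames already
    stacked, so repeated calls only process newly arrived frames.
    """
    os.makedirs(MONITORED, exist_ok=True)
    os.makedirs(STACKED, exist_ok=True)
    for name in sorted(os.listdir(MONITORED)):
        if name in state["seen"] or not name.endswith(".npy"):
            continue
        frame = np.load(os.path.join(MONITORED, name)).astype(np.float64)
        # A real stacker would register (align) the frame against a
        # reference before adding it; that step is omitted here.
        state["sum"] = frame if state["sum"] is None else state["sum"] + frame
        state["seen"].add(name)
    if state["seen"]:
        mean = state["sum"] / len(state["seen"])
        np.save(os.path.join(STACKED, "stacked.npy"), mean)
    return state
```

In practice this function would run repeatedly on a timer, so that each frame deposited into ‘MONITORED’ is folded into the running mean and the updated stack appears in ‘STACKED’.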
Another tab in DSS Live shows the current stacked image, with levels controls for brightening it and making it clearly visible. Yet another tab shows the most recent incoming image and has similar levels controls. By comparing these two tabs, the difference in noise between an individual image and the stacked image soon becomes evident.
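The noise difference between those two tabs is just averaging statistics: stacking N frames reduces random noise by roughly √N while the signal is unchanged. A quick synthetic illustration (simulated frames, not real data):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 100.0   # constant scene level
sigma = 10.0     # per-frame random noise

# 64 noisy frames of the same (flat) scene
frames = signal + rng.normal(0.0, sigma, size=(64, 256, 256))

single_noise = frames[0].std()              # noise in one frame: about 10
stacked_noise = frames.mean(axis=0).std()   # noise in the 64-frame mean
improvement = single_noise / stacked_noise  # close to sqrt(64) = 8
```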
Screenshot with AstroDMx Capture in the top left Linux workspace and the DSS Live stacked image tab in the bottom left Linux workspace (desktop)
Screenshot of the Stacked image tab accumulating the image of M33
Screenshot of AstroDMx Capture capturing FITS data on the Triangulum galaxy M33
Although this was all done on a Linux computer with a Win 10 VM running DSS Live, it is just as easy to do it all on a Windows computer with AstroDMx Capture and DSS Live running on different workspaces (desktops). This was done as a demonstration; it should be noted, however, that while DeepSkyStacker will run under Linux in Wine, DSS Live will NOT run in Wine.
Live Stacking and its history
Live stacking can be used to good effect for outreach and is an important component of EAA (electronically assisted astronomy), also known as EEVA (electronically enhanced visual astronomy) or OA (observational astrophotography). All of these terms mean exactly the same thing: a camera is used instead of an eyepiece, and the results are viewed on a computer screen. Some people distinguish this from video astronomy (I don’t accept this distinction). Video astronomy uses a video camera (usually an analogue TV camera, and frequently a frame-accumulating video camera) that accumulates frames on board the camera and releases them as a steadily updating video stream, effectively live-stacking on camera. These cameras, such as the Mintrons, Samsung SDC-435 (SCB-2000) and LN300 video cameras, can be used without a computer, as they produce composite video or S-Video that can be played directly on many monitors. If the signal is passed through a capture card, the video is digitised and is then available for playing via capture software such as AstroDMx Capture. The video frames can then be captured and stacked like any other frames, and can be used to produce quite pleasing astronomical images, both deep-sky and planetary/Lunar/Solar. The drawback of these TV cameras is that they have the relatively low resolution of PAL or NTSC and can usually be captured at VGA resolution (640 x 480) via capture cards. Most of the TV video cameras were CCD devices and have rapidly fallen into disuse as sensitive CMOS-based cameras have become available and the manufacture of CCD sensors has essentially stopped.
An LN300 frame-accumulating CCD video camera, fitted with a UV/IR-cut filter and a light-pollution filter, was placed at the Newtonian focus of an f/5 150P Newtonian on a Star Discovery AZ GOTO mount and used to produce this image of M27
The difference between the old video frame-accumulating astronomy and EAA is that now the frame accumulation (stacking) is done on the computer via software, rather than on the camera itself. It is my belief that a CMOS camera could be built that accumulates frames on camera into an accumulating buffer (giving a brighter image with less noise) and then releases a video stream that could be viewed on the computer, or used as the basis of image capture. (In fact, there are HD camera modules that can possibly do this, but like the old video surveillance cameras they produce a CVBS signal. It is my intention to build a camera from one of these modules and test its suitability for EAA and imaging. These cameras have OSD functionality, but whether they have functional SENS-UP frame integration remains to be seen.) There was a huge paradigm shift away from on-camera image summing to off-camera image summing, coinciding with the demise of CCD cameras after Sony’s decision in 2015 to phase out CCD production.
Interestingly, at the turn of the century we in the international imaging group QCUIAG, and others, were producing deep-sky images from very low-lux (as low as 0.0001 lux) surveillance cameras by literally summing thousands of video frames (at 50 frames per second) via capture cards into 32-bit FITS files, using software such as AstroVideo, written by Bev Ewen-Smith of the COAA observatory for the Windows operating system. It is important to realise that the sensor has to be sufficiently sensitive to detect the photons from very faint objects in a short period of time. This was before the introduction of frame-accumulating video cameras such as the Mintrons. The process of off-camera image summing was developed and programmed in Java by Jürgen Leismann, who produced versions for Linux, Solaris (a version of UNIX initially developed by Sun Microsystems), and Windows. With the introduction of frame-accumulating video cameras, off-camera frame summing was used to even greater effect. This is why I regard the distinction between video astronomy and EAA as being semantic rather than substantive.
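The reason summing thousands of frames works is that the signal grows linearly with the number of frames while the random noise grows only as √N, so a source far below the single-frame noise floor emerges in the sum. A toy illustration (synthetic Poisson data; a plain NumPy array stands in for the 32-bit FITS accumulator):

```python
import numpy as np

rng = np.random.default_rng(0)
N_FRAMES = 1000   # pipelines of the AstroVideo era summed thousands of frames

# 32-bit accumulator, playing the role of the 32-bit FITS file
accum = np.zeros((32, 32), dtype=np.float32)

for _ in range(N_FRAMES):
    # A noisy 'video frame': background counts plus a very faint source
    frame = rng.poisson(2.0, size=(32, 32)).astype(np.float32)
    frame[12:20, 12:20] += rng.poisson(0.05, size=(8, 8))  # ~0.05 counts/frame
    accum += frame

background = accum[:8, :8].mean()     # summed background level
source = accum[12:20, 12:20].mean()   # summed background + faint source
# The source, invisible in any single frame, now stands well above the background.
```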
One of Jürgen’s early images from 2000 was of M13 and was produced by summing 23,500 video frames.
Live stacking is not currently one of the features of AstroDMx Capture. However, as we have demonstrated here and in a previous article, AstroDMx Capture works seamlessly with DSS Live in a Windows environment (even in a Windows VM within Linux). The live stacking used here was purely for viewing, or EAA, and played no part in the production of the final images (although it could have done, had real-time dark-frame correction been used during capture). The reason that we don’t usually use real-time calibration is that there is always the danger of losing precious imaging time while capturing dark-frames, having previously determined the correct exposures etc. for the current session. We believe that (particularly in the UK climate) it is preferable to capture image data first, and capture dark-frames once the image data have been captured.
The FITS files captured on M42/M43 and M33 during the EAA session were calibrated, registered, stacked and post-processed to produce the following final images, independent of the live stacking procedures.
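The first of those steps, dark-frame calibration, can be sketched as below. This is a generic illustration of the arithmetic, with function names of our own invention, not the specific tool used for the post-processing:

```python
import numpy as np

def master_dark(dark_frames):
    """Average a set of dark frames (captured after the imaging session,
    per the workflow described above) into a master dark."""
    return np.mean(np.stack(dark_frames), axis=0)

def calibrate(light, dark):
    """Subtract the master dark from a light frame, clipping at zero so
    noisy pixels cannot go negative."""
    return np.clip(light.astype(np.float64) - dark, 0.0, None)
```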
M42/M43 The Orion nebula with AstroDMx Capture and an SV405CC
M33 The Triangulum galaxy with AstroDMx Capture and an SV405CC
- Video Integration with registration
- Hot pixel removal
- Drift Integration
- Finding the sharpest frame in an AVI
- Increasing the signal/noise ratio from many frames
- Wavelet image processing
- Colour correction in captured images
- Network control of telescopes for image acquisition
- AstroVideo (Bev Ewen-Smith)
- Vega (Colin Bownes)
- K3CCD tools (Peter Katreniak)
- Icatch; Imerge (Jon Grove)
- Registax (Cor Berrevoets)
- Network telescope controller (Andrew Sprott)