StarTools is a new type of image processing engine. It tracks your signal and its noise component as you process.
The result? Superior signal fidelity, detail, quality, user-friendliness and capabilities compared to any other post-processing software.
StarTools is a new type of image processing application for astrophotography that tracks signal and noise propagation as you process.
By tracking signal and noise evolution during processing, it lets you effortlessly accomplish hitherto "impossible" feats like deconvolution of a heavily processed image, or pin-point accurate noise reduction without local supports or masks.
StarTools' extensive knowledge of the past, present and - sometimes - future of your signal allows you to do things users of other software can only dream of. These things include mathematically correct deconvolution of heavily processed data, mathematically correct color calibration of stretched data, and objectively the best noise reduction routine on the market, which seems to "just know" exactly where noise grain in your final image is located.
As opposed to other software, StarTools uses new brute-force and data-mining techniques, so your precious signal is preserved as much as possible until the very end. StarTools makes use of advances in CPU power, RAM and storage space, replacing old algorithms with new, more powerful ones.
StarTools is not just popular with beginners. StarTools is the best-kept secret amongst signal processing purists; those who fundamentally understand how StarTools achieves such superior signal fidelity. Yet, you don't need a mathematics or physics degree to understand the underlying theory; see the Tracking section to learn more.
We're incredibly pleased that StarTools' superior processing capabilities haven't gone unnoticed; it is now the tool of choice for a rapidly growing group of beginners, enthusiasts and institutions numbering in the many thousands.
The software is "user friendly by mathematical nature". To function, the engine needs to make mathematical sense of your signal flow from start to finish. That's why it's simply unable to perform "nonsensical" operations. This is great if you're a beginner, as it saves you from bad habits and sub-optimal decisions. It's not so much that we put "guard rails" in; it's that the application would break otherwise.
StarTools comprises several modules with deep, state-of-the-art functionality that rival (and often improve on) other software packages.
Don't be fooled by StarTools' simple interface - you are forgiven if, at first glance, you get the impression StarTools offers only the basics. Nothing could be further from the truth!
StarTools goes deep. Very deep. It's just not 'in your face' about it and you can still get great results without delving into the depths of its capabilities. It's up to you.
If you're a seasoned astrophotographer looking to get more out of your data, StarTools will allow you to visibly gain the edge with novel brute-force techniques and data mining routines that have only just become viable thanks to modern 64-bit multi-core CPUs and increases in RAM and storage space.
If you're a beginner, StarTools will assist you by making it easy to achieve great results out-of-the box, while you get to know the exciting field of astrophotography better.
Whatever your situation, skills, equipment and prior experience, you'll find that working with StarTools is quite a bit different than most software you've worked with. And in astrophotography, that tends to be a good thing!
Navigation within StarTools generally takes place between the main screen and the different modules. StarTools' navigation was written to provide a fast, predictable and consistent work flow.
There are no windows that overlap, obscure or clutter the screen. Where possible, feedback and responsiveness will be immediate. Many modules in StarTools offer on-the-spot background processing, yielding quick final results for evaluation and further tweaking.
In some modules a preview area can be specified in order to get a better idea of how settings would modify the image in a particular area, saving the user from waiting for the whole image to be re-calculated.
In both the main screen and the different modules, a toolbar is found at the very top, with buttons that perform functionality that is specific to the active module. In case of the main screen, this toolbar contains buttons for opening an image, saving an image, undoing/redoing the last operation, invoking the mask editor, switching Tracking mode on/off, restoring the image to a particular state, and opening an 'about' dialog.
Exclusive to the main screen, the buttons that activate the different modules reside on the left-hand side. Note that the modules will only successfully activate once an image has been loaded, with the exception of the 'Compose' module. Note also that some modules may remain unavailable, depending on whether Tracking mode is engaged.
Helpfully, the buttons are roughly arranged in a recommended workflow. Obviously not all modules need to be visited and workflow deviations may be needed, recommended or suit your personal taste better.
Consistent throughout StarTools, a set of zoom control buttons are found in the top right corner, along with a zoom percentage indicator.
Panning controls ('scrollbar style') are found below and to the right of the image, as appropriate, depending on whether the image at its current zoom level fits in the application window.
Common to most modules is a 'Before/After' button, situated next to the zoom controls, which toggles between the original and processed version of an image for easy comparison. A "PreTweak/PostTweak" button may also be available, which toggles between the current and previous result, allowing you to quickly spot the difference between two different settings.
All modules come with a 'Help' button in the toolbar, which explains, in brief, the purpose of the module. Furthermore, all settings and parameters come with their own individual 'Help' buttons, situated to the right of the parameter control. These help buttons explain, again in brief, the nature of the parameter or setting.
Even the way StarTools displays and scales images has been created specifically for astrophotography.
StarTools implements a custom scaling algorithm in its user interface, which makes sure that perceived noise levels stay constant, no matter the zoom level. This way, nasty noise surprises when viewing the image at 100% are avoided.
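The problem this solves can be sketched in a few lines (an illustrative sketch only; StarTools' actual scaling algorithm is not described here). Naive block-averaging, as used by typical image viewers when zooming out, averages noise away, so an image that looks clean at 25% zoom can hide noise four times stronger at 100%:

```python
import numpy as np

rng = np.random.default_rng(0)

# A pure-noise "image" with a noise level of 1.0 at 100% zoom.
noise = rng.normal(0.0, 1.0, size=(512, 512))

# Naive 4x4 block-averaging downscale, i.e. viewing at 25% zoom.
small = noise.reshape(128, 4, 128, 4).mean(axis=(1, 3))

# Averaging 16 pixels reduces the noise standard deviation 4-fold,
# which is why zoomed-out views can hide "nasty noise surprises".
print(round(float(noise.std()), 2))  # ~1.0
print(round(float(small.std()), 2))  # ~0.25
```

A scaling algorithm that keeps perceived noise constant must counteract exactly this averaging effect.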
Cleverer still, StarTools' scaling algorithm can highlight latent and faint patterns (often indicating stacking problems or acquisition errors) by intentionally causing an aliasing pattern at different zoom levels in the presence of such patterns.
The parameters in the different modules are typically controlled by one of two types of controls:
1. A level setter, which allows the user to quickly set the value of a parameter within a certain range.
2. An item selector, which allows the user to switch between different modes.
Setting the value represented in a level setter control is accomplished by clicking on the '+' and '-' buttons to increment or decrement the value respectively. Alternatively, you can click anywhere in the area between the '-' and '+' buttons to set a value quickly.
Switching items in the item selector is accomplished by clicking the arrows at either end of the item description. Note that the arrows may disappear as the first or last item in a set of items is reached. Alternatively the user may click on the label area of the item selector to see the full range of items which may then be selected from a pop-over menu.
As of version 1.5, StarTools implements some hotkeys for common functions:
•'+' or '=' key
•'D' or ENTER key
•ESC key or ENTER key
Signal evolution Tracking and its data mining play a very important role in StarTools, and understanding them is key to achieving superior results.
As soon as you load any data, StarTools will start Tracking the evolution of every pixel in your image, constantly keeping track of things like noise estimates, parameters you use and other statistics.
Tracking makes workflows much less linear and allows StarTools' engine to "time travel" between different versions of the data as needed, so that it can insert modifications or consult the data at different points in time ('change the past for a new present and future'). It's the primary reason why there is no difference between linear and non-linear data in StarTools, and the reason why you can do things in StarTools that would otherwise have been nonsensical (like deconvolution after stretching your data). If you're not familiar with Tracking and what it means for your images, signal fidelity and simplification of the workflow & UI, please do read up on it!
Tracking how you process your data also allows the noise reduction routines in StarTools to achieve superior results. By the time you get to your end result, the Tracking feature will have data-mined and pin-pointed exactly where (and how much) visible noise grain exists in your image. It therefore 'knows' exactly how much noise reduction to apply in each area of your image.
Noise reduction is applied at the very end, as you switch Tracking off, because doing it at the very last possible moment will have given StarTools the longest possible amount of time to build and refine its knowledge of where the noise is in your image. This is different from other software, which allow you to reduce noise at any stage, since such software does not track signal evolution and its noise component.
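The core idea of carrying a noise estimate alongside the signal can be illustrated with standard first-order error propagation (a simplified sketch, not StarTools' actual engine): when a per-pixel transform is applied to the signal, the per-pixel noise estimate scales with the magnitude of the transform's derivative, so a strong stretch of the shadows also amplifies the noise estimate there:

```python
import numpy as np

def propagate_noise(signal, sigma, f, eps=1e-6):
    """First-order error propagation: after applying a per-pixel
    transform f, the noise estimate becomes |f'(signal)| * sigma."""
    derivative = (f(signal + eps) - f(signal)) / eps
    return np.abs(derivative) * sigma

# Toy linear data with uniform per-pixel noise of 0.01.
signal = np.linspace(0.01, 1.0, 5)
sigma = np.full_like(signal, 0.01)

# A non-linear stretch (here a gamma curve) boosts the shadows most...
stretched_sigma = propagate_noise(signal, sigma, lambda x: x ** 0.25)

# ...so the tracked noise estimate grows most in the shadows too.
print(stretched_sigma)
```

Software that tracks such estimates through every operation knows, at the end, exactly where the visible grain ended up.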
Tracking how you processed your data also allows the Color module to calculate and reverse how the stretching of the luminance information has distorted the color information (such as hue and saturation) in your image, without having to resort to 'hacks'. Thanks to this capability, color calibration is best done at the end as well, just before switching Tracking off. This too is different from other software, which requires you to do your colour calibration before doing any stretching, since, unlike StarTools, it cannot deal with colour correction after the signal has been non-linearly transformed.
The knowledge that Tracking gathers is used in many other ways in StarTools. The nice thing about Tracking, however, is that it is very unobtrusive. In fact, it actually helps you get better results from your data in less time by homing in on parameters in the various modules that it thinks are good defaults, given what it has learnt about your data.
StarTools keeps a detailed log of what modules and parameters you used. This log file is located in the same folder as the StarTools executable and is named StarTools.log.
As of the 1.4 beta versions, this log also includes the mask you used, encoded in base64 format. See the documentation on masks on how to easily decode the base64 if needed.
Getting to grips with new software can be daunting, but StarTools was designed to make this as painless as possible. This quick, generic work flow will get you started.
While processing your first images with StarTools, it may help knowing that the icons in the top two panels roughly follow a recommended workflow when read top to bottom, left to right.
Open an image stack ("dataset"), fresh from a stacker. Processing in StarTools is easiest and will yield vastly better results if the data is as "virgin" as possible, meaning unstretched, not colour balanced, not noise reduced and not deconvolved. Best results are achieved with data that is as close to what the camera recorded as possible.
Do not use any software that may meddle with your data prior to passing it to your stacking program. Avoid any pre-conversion tools or software that came with your camera. Make sure that any stacking software you use is configured to perform as little processing on the data as possible. For example, if you use Deep Sky Stacker, make sure that 'Per Channel Color Calibration' and 'RGB Channels Calibration' are set to 'no'. Also make sure that, in Deep Sky Stacker, the final file is saved with settings 'embedded', rather than applied. 32-bit integer FITS files are preferable.
Counter-intuitively, a good stacker output will have a distinct, heavy color bias with little or no apparent detail. Worry not; subsequent processing in StarTools will remove the color bias, while restoring and bringing out detail. If, looking at the initial image, you are wondering how on earth this will be turned into a nice picture, you are often on the right track.
Upon opening an image, the Tracking dialog will open, asking you about the characteristics of the data. Choose the option that best matches the data being imported. If your dataset comes straight from a stacker, the first option is safe. Tracking is now engaged (the Track button is lit up green).
Launch AutoDev to help inspect the data. Chances are that the image looks terrible, which is - believe it or not - the point. In the presence of problems in the data, AutoDev will show these problems until they are dealt with. Because StarTools constantly tries to make sense of your data, StarTools is very sensitive to artefacts, meaning anything that is not real celestial detail (such as stacking artefacts, dust donuts, gradients, terrestrial scenery, etc.). Just 'Keep' the result. StarTools, thanks to Tracking, will allow us to redo the stretch later on.
At this point, things to look out for are:
•Stacking artefacts close to the borders of the image. These are dealt with in the Crop or Lens modules.
•Bias or gradients (such as light pollution or skyglow). These are dealt with in the Wipe module.
•Oversampling (meaning the finest detail, such as small stars, being "smeared out" over multiple pixels). This is dealt with in the Bin module.
•Coma or elongated stars towards one or more corners of the image. These can be ameliorated using the Lens module.
Fix the issues that AutoDev has brought to your attention:
1. Ameliorate coma using the Lens module.
2. Crop any remaining stacking artefacts.
3. Bin the image up until each pixel describes one unit of real detail.
4. Wipe gradients and bias away. Be very mindful of any dark anomalies - bump up the Dark Anomaly filter if dealing with small ones (such as dark pixels) or mask big ones out using the Mask editor. You may also wish to use a mask to mask out nebulosity if using high values for the two Aggressiveness parameters.
Once all issues are fixed, launch AutoDev again and tell it to 'redo' the stretch. If all is well, AutoDev will now create a histogram stretch that is optimised for the "real" object(s) in your clean data. If your data is very noisy, it is possible AutoDev will optimise for the noise, mistaking it for real detail. In this case you can tell it to Ignore Fine detail.
If your object(s) reside on an otherwise uninteresting or "empty" background, you can tell AutoDev where the interesting bits of your image are by clicking & dragging a Region Of Interest.
Don't worry about the colouring just yet - focus on getting the detail out of your data first. If your image shows very bright highlights, know that you can "rescue" them later on using, for example, the HDR module.
Season your image to taste. Apply deconvolution with the Decon module, dig out detail with the Wavelet Sharpen ('Sharp') module, enhance Contrast with the Contrast module and fix any dynamic range issues with the HDR module.
There are many ways to enhance detail to taste and much depends on what you feel is most important to bring out in your image. As opposed to other software, however, you don't need to be as concerned with noise grain propagation; StarTools will take care of noise grain when you finally switch Tracking off.
Launch the Color module.
See if StarTools comes up with a good colour balance all by itself. A good colour balance shows a good range of all star temperatures, from red, orange and yellow through to white and blue. HII areas will tend to look purplish/pink, while galaxy cores tend to look yellow and their outer rims tend to look bluer.
Green is an uncommon colour in outer space (though there are notable exceptions, such as areas that are strong in OIII such as the core of M42). If you see green dominance, you may want to reduce the green bias. If you think you have a good colour balance, but still see some dominant green in your image, you can remove the last bit of green using the 'Cap Green' function.
Switch Tracking off and apply noise reduction. You will now see what all the "signal evolution Tracking" fuss is about, as StarTools seems to know exactly where the noise exists in your image, snuffing it out. The most important parameters to tweak are Smoothness, in combination with Grain Dispersion.
A video is also available that shows a simple, short processing workflow of a real-world, imperfect dataset.
Please refer to the video description below the video for the source data and other helpful links.
The Mask feature is an integral part of StarTools. Many modules use a mask to operate on specific pixels and parts of the image, leaving other parts intact.
Importantly, besides operating only on certain parts of the image, it allows the many modules in StarTools to perform much more sophisticated operations.
You may have noticed that when you launch a module that is able to apply a mask, the pixels that are set in the mask will flash three times in green. This is to remind you which parts of the image will be affected by the module and which are not. If you just loaded an image, all pixels in the whole image will be set in the mask, so every pixel will be processed by default. In this case, when you launch a module that is able to apply a mask, the whole image will flash in green three times.
Green coloured pixels in the mask are considered 'on'. That is to say, they will be altered/used by whatever processing is carried out by the module you chose. 'Off' pixels (shown in their original colour) will not be altered or used by the active module. Again, please note that, by default all pixels in the whole image are marked 'on' (they will all appear green).
For example, an 'on' pixel (green coloured) in the Sharp module will be sharpened, in the Wipe module it will be sampled for gradient modelling, in Synth it will be scanned for being part of a star, in Heal it will be removed and healed, in Layer it will be layered on top of the background image, etc.
•If a pixel in the mask is 'on' (coloured green), then this pixel is fed to the module for processing.
•If a pixel in the mask is 'off' (shown in its original colour), then the module is told to keep the pixel as-is - hands off, do not touch or consider it.
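In code terms, this on/off rule amounts to something like the following sketch (hypothetical helper names; StarTools' internals are of course far more sophisticated):

```python
import numpy as np

def apply_with_mask(image, mask, operation):
    """Apply 'operation' only where the mask is True ('on'/green);
    'off' pixels are passed through completely untouched."""
    result = image.copy()
    result[mask] = operation(image[mask])
    return result

image = np.array([0.1, 0.5, 0.9])
mask = np.array([True, False, True])   # middle pixel is 'off'
out = apply_with_mask(image, mask, lambda p: p * 2.0)
print(out)  # the 'off' pixel keeps its original value of 0.5
```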
The Mask Editor is accessible from the main screen, as well as from the different modules that are able to apply a mask. The button to launch the Mask Editor is labelled 'Mask'. When launching the Mask Editor from a module, pressing the 'Keep' or 'Cancel' buttons will return StarTools to the module you pressed the 'Mask' button in.
As with the different modules in StarTools, the 'Keep' and 'Cancel' buttons work as expected; 'Keep' will keep the edited Mask and return, while 'Cancel' will revert to the Mask as it was before it was edited and return.
As indicated by the 'Click on the image to edit mask' message below the image, clicking on the image will allow you to create or modify a Mask. What actually happens when you click the image depends on the selected 'Brush mode'. While some of the 'Brush modes' seem complex in their workings, they are quite intuitive to use.
Apart from the different brush modes to set/unset pixels in the mask, various other functions exist to make editing and creating a Mask even easier:
•The 'Save' button allows you to save the current mask to a standard TIFF file that shows 'on' pixels in pure white and 'off' pixels in pure black.
•The 'Open' button allows you to import a Mask that was previously saved using the 'Save' button. Note that the image being opened to become the new Mask needs to have the same dimensions as the image the Mask is intended for. Loading an image that has values between black and white will designate any shades of gray closest to white as 'on', and any shades of gray closest to black as 'off'.
•The 'Auto' button is a very powerful feature that allows you to automatically isolate features.
•The 'Clear' button turns off all green pixels (i.e. it deselects all pixels in the image).
•The 'Invert' button turns on all pixels that are off, and turns off all pixels that were on.
•The 'Shrink' button turns off all green pixels that have a non-green neighbour, effectively 'shrinking' any selected regions.
•The 'Grow' button turns on any non-green pixel that has a green neighbour, effectively 'growing' any selected regions.
•The 'Undo' button allows you to undo the last operation that was performed.
NOTE: To quickly turn on all pixels, click the 'clear' button, then the 'invert' button.
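'Grow' and 'Shrink' behave like the classic morphological dilation and erosion operations. A minimal sketch of both, using a 4-neighbourhood (an assumption; the exact neighbourhood StarTools uses is not documented here):

```python
import numpy as np

def grow(mask):
    """Turn on any pixel that has an 'on' 4-neighbour ('Grow')."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def shrink(mask):
    """Turn off any 'on' pixel with an 'off' 4-neighbour ('Shrink'):
    equivalent to growing the 'off' region."""
    return ~grow(~mask)

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                      # a single 'on' pixel
print(int(grow(mask).sum()))           # 5 - a plus-shaped blob
print(int(shrink(grow(mask)).sum()))   # 1 - back to the single pixel
```

Note that growing and then shrinking (or vice versa) is not always a no-op for more complex shapes, which is exactly what makes the pair useful for cleaning up ragged selections.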
Different 'Brush modes' help in quickly selecting (and de-selecting) features in the image.
For example, while in 'Flood fill lighter pixels' mode, try clicking next to a bright star or feature to select it. Click anywhere on a clump of 'on' (green) pixels, to toggle the whole clump off again.
The mask editor has 10 'Brush modes':
•Flood fill lighter pixels; use it to quickly select an adjacent area that is lighter than the clicked pixel (for example a star or a galaxy). Specifically, clicking a non-green pixel will, starting from the clicked pixel, recursively fill the image with green pixels until either all neighbouring pixels of a particular pixel are already filled (on/green), or the pixel under evaluation is darker than the pixel originally clicked. Clicking on a green pixel will, starting from the clicked pixel, recursively turn off any green pixels until it can no longer find any green neighbouring pixels.
•Flood fill darker pixels; use it to quickly select an adjacent area that is darker than the clicked pixel (for example a dust lane). Specifically, clicking a non-green pixel will, starting from the clicked pixel, recursively fill the image with green pixels until either all neighbouring pixels of a particular pixel are already filled (on/green), or the pixel under evaluation is lighter than the pixel originally clicked. Clicking on a green pixel will, starting from the clicked pixel, recursively turn off any green pixels until it can no longer find any on/green neighbouring pixels.
•Single pixel toggle; clicking a non-green pixel turns it green. Clicking a green pixel turns it non-green. It is a simple toggle operation for single pixels.
•Single pixel off (freehand); clicking, or dragging while holding the mouse button down, will turn off pixels. This mode acts like a single-pixel "eraser".
•Similar color; use it to quickly select an adjacent area that is similar in color.
•Similar brightness; use it to quickly select an adjacent area that is similar in brightness.
•Line toggle (click & drag); use it to draw a line from the start point (where the mouse button was first pressed) to the end point (where the mouse button was released). This mode is particularly useful for tracing and selecting satellite trails, for example for healing out using the Heal module.
•Lasso; toggles all the pixels confined by a convex shape that you can draw in this mode (click and drag). Use it to quickly select or deselect circular areas by drawing their outline.
•Grow blob; grows any contiguous area of adjacent pixels by expanding its borders into the nearest neighbouring pixels. Use it to quickly grow an area (for example a star core) without disturbing the rest of the mask.
•Shrink blob; shrinks any contiguous area of adjacent pixels by withdrawing its borders into the nearest neighbouring pixels that are not part of a border. Use it to quickly shrink an area without disturbing the rest of the mask.
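'Flood fill lighter pixels' as described above can be sketched as a standard breadth-first flood fill with a brightness test (an illustrative approximation, not StarTools' exact implementation):

```python
from collections import deque

import numpy as np

def flood_fill_lighter(image, mask, start):
    """Select the connected region of pixels at least as bright as
    the clicked pixel, spreading out from the click position."""
    threshold = image[start]
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        if mask[y, x] or image[y, x] < threshold:
            continue  # already selected, or darker than the clicked pixel
        mask[y, x] = True
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                queue.append((ny, nx))
    return mask

# Clicking the dim pixel at (0, 1) next to a bright star selects the star.
image = np.array([[0.1, 0.2, 0.1],
                  [0.2, 0.9, 0.8],
                  [0.1, 0.7, 0.1]])
mask = np.zeros(image.shape, dtype=bool)
flood_fill_lighter(image, mask, (0, 1))
print(int(mask.sum()))  # 5 - the clicked pixel plus the lighter region
```

The fill spreads into anything at least as bright as the clicked pixel and stops at darker pixels, which is why clicking just next to a star selects the whole star.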
The powerful 'Auto' function quickly and autonomously isolates features of interest such as stars, noise, hot or dead pixels, etc.
For example, isolating just the stars in an image is a necessity for obtaining any useful results from the 'Decon' and 'Magic' modules.
The type of features to be isolated is controlled by the 'Selection Mode' parameter:
•Light features + highlight > threshold; a combination of two selection algorithms. One is the simpler 'Highlight > threshold' mode, which selects any pixel whose brightness is brighter than a certain percentage of the maximum value (see the 'Threshold' parameter below). The other is 'Light features', which selects high frequency components in an image (such as stars, gas knots and nebula edges), up to a certain size (see 'Max feature size' below) and depending on a certain sensitivity (see 'Filter sensitivity' below). This mode is particularly effective for selecting stars. Note that if the 'Threshold' parameter is kept at 100%, this mode produces results that are identical to the 'Light features' mode.
•Light features; selects high frequency components in an image (such as stars, gas knots and nebula edges), up to a certain size (see 'Max feature size') and depending on a certain sensitivity (see 'Filter sensitivity').
•Highlight > threshold; selects any pixel whose brightness is brighter than a certain percentage of the maximum (i.e. pure white) value. If you find this mode does not select bright stars with white cores that well, open the 'Levels' module and set the 'Normalization' a few pixels higher. This should make light features marginally brighter and dark features marginally darker.
•Dead pixels color/mono < threshold; selects dark high frequency components in an image (such as star edges, halos introduced by over-sharpening, nebula edges and dead pixels), up to a certain size (see 'Max feature size' below), depending on a certain sensitivity (see 'Filter sensitivity' below) and whose brightness is darker than a certain percentage of the maximum value (see the 'Threshold' parameter below). It then further narrows down the selection by looking at which pixels are likely the result of CCD defects (dead pixels). Two versions are available, one for color images, the other for mono images.
•Hot pixels color/mono > threshold; selects high frequency components in an image up to a certain size (see 'Max feature size' below) and depending on a certain sensitivity (see 'Filter sensitivity' below). It then further narrows down the selection by looking at which pixels are likely the result of CCD defects or cosmic rays (also known as 'hot' pixels). The 'Threshold' parameter controls how bright hot pixels need to be before they are potentially tagged as 'hot'. Note that a 'Threshold' of less than 100% needs to be specified for this mode to have any effect. Two versions are available, one for color images, the other for mono images.
•Noise Fine; selects all pixels that are likely affected by significant amounts of noise. Please note that other parameters such as 'Threshold', 'Max feature size', 'Filter sensitivity' and 'Exclude color' have no effect in this mode. Two versions are available, one for color images, the other for mono images.
•Noise; selects all pixels that are likely affected by significant amounts of noise. This algorithm is more aggressive in its noise detection and tagging than 'Noise Fine'. Please note that other parameters such as 'Threshold', 'Max feature size', 'Filter sensitivity' and 'Exclude color' have no effect in this mode.
•Dust & scratches; selects small specks of dust and scratches as found on old photographs. Only the 'Threshold' parameter is used, and a very low value is needed.
•Edges > threshold; selects all pixels that are likely to belong to the edge of a feature. Use the 'Threshold' parameter to set sensitivity, where lower values make the edge detector more sensitive.
•Horizontal artifacts; selects horizontal anomalies in the image. Use 'Max feature size' and 'Filter sensitivity' to throttle the aggressiveness with which the detector detects the anomalies.
•Vertical artifacts; selects vertical anomalies in the image. Use 'Max feature size' and 'Filter sensitivity' to throttle the aggressiveness with which the detector detects the anomalies.
•Radius; selects a circle, starting from the centre of the image going outwards. The 'Threshold' parameter defines the radius of the circle, where 100.00 covers the whole image.
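The simplest of these, 'Highlight > threshold', can be expressed in essentially one line (a sketch, under the assumption that the "maximum value" is the brightest value in the image's range):

```python
import numpy as np

def highlight_above_threshold(image, threshold_pct):
    """Select pixels brighter than threshold_pct percent of the
    maximum (pure white) value, as in 'Highlight > threshold'."""
    return image > (threshold_pct / 100.0) * image.max()

image = np.array([0.05, 0.30, 0.80, 0.95, 1.00])
mask = highlight_above_threshold(image, 75.0)
print(mask)  # [False False  True  True  True]
```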
Some of the selection algorithms are controlled by additional parameters:
•Exclude color; tells the selection algorithms not to evaluate specific colour channels when looking for features. This is particularly useful if you have a predominantly red, purple and blue nebula with white stars in the foreground and, say, you want to select only the stars. By setting 'Exclude color' to 'Purple (red + blue)', you are able to tell the selection algorithms to leave features in the nebula alone (since these features are most prominent in the red and blue channels). This greatly reduces the amount of false positives.
•Max feature size; specifies the largest size of any feature the algorithm should expect. If you find that stars are not correctly detected and only their outlines show up, you may want to increase this value. Conversely, if you find that large features are being inappropriately tagged and your stars are small (for example in wide field images), you may reduce this value to reduce false positives.
•Filter sensitivity; specifies how sensitive the selection algorithms should be to local brightness variations. A lower value signifies a more aggressive setting, leading to more features and pixels being tagged.
•Threshold; specifies a percentage of full brightness (i.e. pure white) below, or above, which a selection algorithm should detect features.
Finally, the 'Source' parameter selects the source data the Auto mask generator should use. Thanks to StarTools' Tracking functionality which gives every module the capability to go "back in time", the Auto mask generator can use either the original 'Linear' data (perfect for getting at the brightest star cores) or the data as you see it right now.
As of the 1.4 beta versions, StarTools stores the masks you used in the StarTools.log file.
This StarTools.log file is located in the same folder as the executables. The masks are encoded as BASE64 PNG images. To convert the BASE64 text into loadable PNG images, you can use any online (or offline) BASE64 converter tool.
One online tool for BASE64 is Motobit Software's BASE64 encoder/decoder.
Simply paste the BASE64 code into the text box, select the 'decode the data from a Base64 string (base64 decoding)' radio button, as well as the 'export to a binary file, filename:' radio button. Name the file, for example, "mask.png" and click the 'convert the source data' button.
This should result in a download of the mask as a PNG file.
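If you prefer doing the conversion offline, the same decode is a few lines of Python. The string below is just the 8-byte PNG file signature standing in for a real (much longer) mask entry from StarTools.log:

```python
import base64

# Stand-in payload: a real mask entry copied from StarTools.log
# is a much longer BASE64 string.
mask_b64 = "iVBORw0KGgo="

png_bytes = base64.b64decode(mask_b64)
with open("mask.png", "wb") as f:
    f.write(png_bytes)

# Every valid PNG starts with these signature bytes.
print(png_bytes[:4])  # b'\x89PNG'
```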
In StarTools, Histogram Transformation Curves are considered obsolete. AutoDev uses image analysis to achieve better results in a more intuitive way.
When data is acquired, it is recorded in a linear form, corresponding to raw photon counts. To make this data suitable for human consumption, stretching it non-linearly is required.
Historically, simple algorithms were used to emulate the non-linear response of photographic paper by modelling its non-linear transformation curve. Later, in the 1990s, because dynamic range in outer space varies greatly, "levels and curves" tools allowed imagers to create custom histogram transformation curves that better matched the object imaged, so that the greatest amount of detail became visible in the stretched image.
Creating these custom curves was a highly laborious and subjective process. And, unfortunately, in many software packages this is still the situation today. The result is almost always sub-optimal dynamic range allocation, leading to detail loss in the shadows (leaving recoverable detail unstretched), shrouding interesting detail in the midtones (by not allocating it enough dynamic range) or blowing out stars (by failing to leave enough dynamic range for the stellar profiles).
StarTools' AutoDev module however uses image analysis to find the optimum custom curve for the characteristics of the data. By actively looking for detail in the image, AutoDev autonomously creates a custom histogram curve that best allocates the available dynamic range to the scene, taking into account all aspects and detail. As a consequence, the need for local HDR manipulation is minimised.
AutoDev is, in fact, so good at its job that it is also one of the most important tools in StarTools for initial data inspection; using AutoDev as one of the first modules on your data will see it bring out problems in the data, such as stacking artefacts, gradients, bias, dust donuts, etc. Upon removal and/or mitigation of these problems, AutoDev may then be used to stretch the cleaned up data.
AutoDev has a lot of smarts behind it. It analyses a Region of Interest ("RoI") - by default the whole image - so that it can find the optimum histogram transformation curve based on what it "sees". The 'Develop' module, by comparison, is simpler: it mimics photographic film development, which does not take into account what is in the image.
Understanding AutoDev is pretty simple really; its job is to look at what's in your image and to make sure as much as possible is visible. The problem with a histogram transformation curve (aka 'global stretch') is that it affects all pixels in the image. So, what works in one area (bringing out detail in the background), may not necessarily work in another (for example, it may make a medium-brightness DSO core harder to see). Therefore stretching the image is always a compromise. AutoDev finds the best compromise global curve, given what detail is visible in your image and your preferences. Fortunately, we have other tools like the Contrast and HDR modules to 'rescue' all detail by optimising for local dynamic range on top of global dynamic range.
AutoDev's detail detection is also very adept at finding artefacts, or anything in your image that is not real detail but requires attention. That is why AutoDev is also extremely useful to launch as the first thing after loading an image, to see what issues - if any - need addressing before proceeding. If there are any, AutoDev is guaranteed to show them to you.
After fixing such issues, we can start using AutoDev's skills for showing the remaining (this time real celestial) detail in the image.
If most of the image consists of a background and just a small object of interest, by default AutoDev will weigh the importance of the background higher (since it covers a much larger part of the image vs the object); given what it has to work with it's the best compromise. If the background is noisy, it will start digging out the noise, mistaking it for fine detail. If this behaviour is undesirable, there are a couple of things you can do in AutoDev.
1. Change the 'Ignore Fine Detail <' parameter, so that AutoDev no longer detects fine detail (such as noise grain).
2. Tell it what it should focus on instead by specifying an ROI; the 'Outside ROI influence' parameter controls how much the area outside the ROI is still taken into account.
You'll find that, as you include more background around the object, AutoDev, as expected, starts to optimise more and more for the background and less for the object; it's doing its job very well!
So, to use the ROI effectively, give it a 'sample' of the important bit of the image. This can be a whole object, or it can be just a slice of the object that is a good representation of what's going on in the object in terms of detail, for example a slice of a galaxy from the core, through the dust lanes, to the faint outer arms.
There is no shame in trying a few different ROIs in order to find one you're happy with. Whatever the case, it certainly beats pulling histogram curves, both in results and objectivity (you've got a dedicated algorithm/assistant watching over your shoulder!).
There are two ways of further influencing the way the detail detector "sees" your image;
•The 'Detector Gamma' parameter applies - for values other than 1.0 - a non-linear stretch to the image prior to passing it to the detector. This makes the detector proportionally more (< 1.0) or less (> 1.0) sensitive to detail in the highlights. Conversely, it makes the detector less (< 1.0) or more (> 1.0) sensitive to detail in the shadows. The effect can be thought of as a "smart" gamma correction.
•The 'Shadow Linearity' parameter specifies the amount of linearity that is applied in the shadows, before non-linear stretching takes over. Higher amounts have the effect of allocating more dynamic range to the shadows and background.
The Band module reduces horizontal and vertical banding/striping, often caused by read noise.
Using the Band module is quite straightforward; simply specify the orientation of the banding ("Horizontal" or "Vertical") and click 'Do'. An 'algorithm' parameter switches between two subtly different algorithms that attempt to reduce banding. If the default algorithm ('Algorithm 1') does not produce satisfactory results, 'Algorithm 2' may yield better results.
The Bin module puts you in control over the trade-off between resolution, resolved detail and noise.
With today's multi-megapixel imaging equipment and high density CCDs, oversampling is a common occurrence; there is only so much detail that seeing conditions allow for with a given setup. Beyond that it is impossible to pick up fine detail. Once detail no longer fits in a single pixel, but instead gets "smeared out" over multiple pixels due to atmospheric conditions (resulting in a blur), binning may turn this otherwise useless blur into noise reduction. Binning your data may make an otherwise noisy and unusable data set usable again, at the expense of 'useless' resolution.
The Bin module was created to provide a freely scalable alternative to the fixed 2×2 (4x reduction in resolution) or 4×4 (16x reduction in resolution) software binning modes commonly found in other software packages or modern consumer digital cameras and DSLRs (also known as 'Low Light Mode'). As opposed to these other binning solutions, the StarTools' Bin module allows you to bin your data (and gain noise reduction) by the amount you want – if your data is seeing-limited (blurred due to adverse seeing conditions) you are now free to bin your data until exactly that limit and you are not forced by a fixed 2×2 or 4×4 mode to go beyond that.
Similarly, deconvolution (and the subsequent recovery of detail that was lost due to atmospheric conditions) may not be a viable proposition due to the noisiness of an initial image. Binning may make deconvolution an option again. The StarTools Bin module allows you to determine the ratio with which you use your oversampled data for binning versus deconvolution, to achieve a result that is finely tuned to your data and the imaging circumstances of the night(s).
Core to StarTools' fractional binning algorithm is a custom built anti-aliasing filter that has been carefully designed to not introduce any ringing (overshoot) and, hence, to not introduce any artefacts when subsequent deconvolution is used on the binned data.
The Bin module is operated with just a single parameter. This parameter controls the amount of binning that is performed on the data. The new resolution is displayed ('New Image Size X x Y'), as well as the single axis scale reduction, the Signal-to-Noise-Ratio improvement and the increased bit-depth of the new image.
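The reported figures follow from the mathematics of pooling samples; the bookkeeping can be sketched as follows (the helper name is hypothetical, not a StarTools API):

```python
import math

def bin_stats(width, height, factor):
    """Hypothetical helper: new size, SNR gain and extra bit depth when
    binning by `factor` along each axis (factor=2 means 2x2 binning)."""
    pooled = factor * factor              # samples pooled per super pixel
    snr_gain = math.sqrt(pooled)          # SNR improves with sqrt(N)
    extra_bits = math.log2(pooled)        # pooling 4 samples adds 2 bits
    return width // factor, height // factor, snr_gain, extra_bits

# A 4000x3000 frame binned 2x2: half the resolution per axis,
# twice the SNR, two extra bits of effective depth.
print(bin_stats(4000, 3000, 2))
```

This assumes uncorrelated noise between the pooled pixels, which is the usual idealisation.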
Data binning is a data pre-processing technique used to reduce the effects of minor observation errors. Many astrophotographers are familiar with the virtues of hardware binning, which pools the value of 4 (or more) CCD pixels before the final value is read. Because reading introduces noise by itself, pooling the value of 4 or more pixels reduces this 'read noise' by a factor of 4 as well (one read is now sufficient, instead of having to do 4). Of course, by pooling 4 pixels, the final resolution is also reduced by a factor of 4. There are many, many factors that influence hardware binning, and Steve Cannistra has done a wonderful write-up on the subject on his starrywonders.com website. It also appears that the merits of hardware binning are heavily dependent on the instrument and the chip used.
Most OSCs (One-Shot-Color cameras) and DSLRs do not offer any sort of hardware binning in color, due to the presence of a Bayer matrix; binning adjacent pixels makes no sense, as they alternate in the color that they pick up. The best we can do in that case is create a grayscale blend out of them. So hardware binning is out of the question for these instruments.
So why does StarTools offer software binning? Firstly, because it allows us to trade resolution for noise reduction. By grouping multiple pixels into 1, a more accurate 'super pixel' is created that pools multiple measurements into one. Note that we are actually free to use any statistical reduction method that we want. Take for example this 2 by 2 patch of pixels;
7 7
3 7
A 'super pixel' that uses simple averaging yields (7 + 7 + 3 + 7) / 4 = 6. If we suppose the '3' is an anomalous value due to noise and '7' is correct, then we can see here how the other 3 readings 'pull up' the average value to 6; pretty darn close to 7.
We could use a different statistical reduction method (for example taking the median of the 4 values) which would yield 7, etc. The important thing is that grouping values like this tends to filter out outliers and make your super pixel value more precise.
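The two statistical reductions discussed above are easy to verify on the same 2x2 patch:

```python
import statistics

# The 2x2 patch from the text: three good readings of 7, one noisy 3.
patch = [7, 7, 3, 7]

mean_pixel = sum(patch) / len(patch)      # outlier pulls the mean down to 6.0
median_pixel = statistics.median(patch)   # the median rejects the outlier: 7.0
```

Which reduction is "best" depends on the noise model; the mean uses every sample, while the median is more robust against single outliers.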
But what about the downside of losing resolution? That super high resolution may actually have been going to waste! If, for example, your CCD can resolve detail at 0.5 arcsecs per pixel, but your seeing is at best 2.0 arcsecs, then you effectively have 4 times more pixels than you need to record one unit of real resolvable celestial detail. Your image will be "oversampled", meaning that you have allocated more resolution than the signal really will ever require. When that happens, you can zoom into your data and you will notice that all fine detail looks blurry and smeared out over multiple pixels. And with the latest DSLRs having sensors that count 20 million pixels and up, you can bet that most of this resolution will be going to waste at even the most moderate magnification. Sensor resolution may be going up, but the atmosphere's resolution will forever remain the same - buying a higher resolution instrument will do nothing for the detail in your data in that case! This is also the reason why professional CCDs are typically much lower in resolution; manufacturers would rather use the surface area of the chip for coarser but deeper, more precise CCD wells ('pixels') than squeeze in a lot of very imprecise (noisy) CCD wells (it has to be said the latter is a slight oversimplification of the various factors that determine photon collection, but it tends to hold).
There is one other reason to bin OSC and DSLR data to at least 25% of its original resolution; the presence of a Bayer matrix means that (assuming an RGGB matrix) after applying a debayering (aka 'demosaicing') algorithm, 75% of all red pixels, 50% of all green pixels, and another 75% of all blue pixels are completely made up!
Granted, your 16MP camera may have a native resolution of 16 million pixels, but it has to divide these 16 million pixels up between the red, green and blue channels! Here is another very good reason why you might not want to keep your image at native resolution: binning to 25% of native resolution will ensure that each pixel corresponds to one real recorded pixel in the red channel, one real recorded pixel in the blue channel and two real recorded pixels in the green channel (the latter yielding a 50% noise reduction in the green channel).
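The arithmetic behind that claim can be sketched as follows, assuming an RGGB quad (the function is illustrative, not how StarTools' Bin module is actually implemented):

```python
# Collapse one RGGB Bayer quad into a single RGB 'super pixel' - in effect
# what binning to 25% of native resolution achieves: one real red sample,
# one real blue sample, and the average of the two real green samples
# (hence the extra noise reduction in green).
def rggb_superpixel(r, g1, g2, b):
    return (r, (g1 + g2) / 2, b)

print(rggb_superpixel(120, 200, 210, 90))  # → (120, 205.0, 90)
```

Every channel of the resulting super pixel is now backed by at least one real measurement, with no interpolated ("made up") values.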
There are, however, instances where the interpolation can be undone if enough frames are available (through sub-pixel dithering) to have exposed all sub-pixels of the bayer matrix to real data in the scene (drizzling).
StarTools' binning algorithm is a bit special in that it allows you to apply 'fractional' binning; you're not stuck with pre-determined factors (e.g. 2×2, 3×3 or 4×4). You can bin exactly the amount that achieves a single unit of celestial detail in a single pixel. To see what that limit is, simply keep reducing the resolution until no blurriness can be detected when zooming into the image. Fine detail (not noise!) should look crisp. However, you may decide to leave a little bit of blurriness to see if you can bring out more detail using deconvolution.
Thanks to StarTools' Tracking feature the Color module provides you with unparalleled flexibility when it comes to colour presentation in your image.
Whereas other software - lacking Tracking data mining - destroys colour and colour saturation in bright parts of the image as the data gets stretched, StarTools allows you to retain colour and saturation throughout the image with its 'Color Constancy' feature. This ability allows you to display all colours in the scene as if it were evenly illuminated, meaning that even very bright cores of galaxies and nebulas retain the same colour throughout, irrespective of their local brightness, or indeed acquisition methods and parameters.
This ability is important in scientific representation of your data, as it allows the viewer to compare similar objects or areas like-for-like, since colour in outer space very often correlates with chemical signatures or temperature.
The same is true for star temperatures across the image, even in bright, dense star clusters. This mode allows the viewer of your image to objectively compare different parts and objects in the image without suffering from reduced saturation in bright areas. It allows the viewer to explore the universe that you present in full colour, adding another dimension of detail, irrespective of the exposure time and subsequent stretching of the data.
For example, StarTools enables you to keep M42's colour constant throughout, even in its bright core. No fiddling with different exposure times, masked stretching or saturation curves needed. You are able to show M31's true colours instead of a milky white, or resolve star temperatures to well within a globular cluster's bright core. All that said, if you're a fan of the traditional 'handicapped' way of colour processing in other software, then StarTools can emulate this type of processing as well.
The Color module's abilities don't stop there, however. It is also capable of emulating a range of complex LRGB color compositing methods that have been invented over the years. And it does it at the click of a button! Even if you acquired data with an OSC or DSLR, you will still be able to use these compositing methods; the Color module will generate synthetic luminance from your RGB on the fly and re-composite the image in your desired compositing style.
The Color module allows for various ways to calibrate the image, including star field sampling, G2V star sampling, galaxy sampling and - unique to StarTools - the MaxRGB calibration view. The latter allows for objective colour calibration, even on poorly calibrated screens.
Aside from Color calibration (thanks to Tracking data mining carried out on a linear version of your data, no matter whether you have stretched it or not), the Color module comes with a number of ways to control colour saturation in your image. A green removal algorithm rounds out the feature set.
The Color module is very powerful - offering capabilities surpassing most other software - yet it is simple to use.
The primary goal that the Color module was designed to accomplish, is achieving a good colour balance that accurately describes the colour ratios that were recorded. In accomplishing that goal, the Color module goes further than other software by offering a way to negate the adverse effects of non-linear dynamic range manipulations on the data (thanks to Tracking data mining). In simple terms, this means that colouring can be reproduced (and compared!) in a consistent manner regardless of how bright or dim a part of the scene is shown.
Upon launch, the Color module blinks the mask three times in the familiar way. If a full mask is not set, the Color module allows you to set it now, as colour balancing is typically applied to the full image (requiring a full mask).
In addition to blinking the mask, the Color module also analyses the image and sets the Red Bias Reduce, Green Bias Reduce and Blue Bias Reduce factors to a value which it deems the most appropriate for your image. This behaviour is identical to manually clicking the 'Sample' button.
The Red Bias Reduce, Green Bias Reduce and Blue Bias Reduce factors are the most important settings in the Color module. They directly determine the colour balance in your image. Their operation is intuitive; too much red in your image? Pump up the 'Red Bias Reduce' value. Too little red in your image? Reduce the 'Red Bias Reduce' value.
If you'd rather operate on these values in terms of Bias Increase, then simply switch the 'Bias Slider Mode' setting to 'Sliders Increase Color Bias'.
Switching between these two modes, you can see that, for example, a Red Bias Reduce of 8.00 is the same as a Green and Blue Bias Increase of 8.00. It makes intuitive sense when you think about it - a relative decrease of red makes blue and green more prevalent, and vice versa.
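That equivalence is easy to demonstrate: only the channel ratios matter once the result is renormalised. A small sketch (the pixel values and the factor of 8 are arbitrary examples):

```python
def normalise(rgb):
    m = max(rgb)
    return tuple(c / m for c in rgb)

pixel = (0.8, 0.4, 0.2)

# 'Red Bias Reduce' of 8.00...
reduce_red = (pixel[0] / 8, pixel[1], pixel[2])
# ...versus 'Green and Blue Bias Increase' of 8.00.
increase_gb = (pixel[0], pixel[1] * 8, pixel[2] * 8)

# After renormalisation both operations yield identical colour ratios.
assert normalise(reduce_red) == normalise(increase_gb)
```

Absolute brightness is handled separately by the stretch, which is why the two slider modes are interchangeable.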
Now that we know how to change the colour balance, how do we know what to actually set it to?
There are a great number of tools and techniques that can be applied in StarTools that let you home in on a good colour balance. Before delving into them, it is highly recommended to switch 'Style' to 'Scientific (Color Constancy)' during colour balancing, even if that is not the preferred style of rendering for the end result. This is because the Color Constancy feature makes it much easier to colour balance by eye, thanks to its ability to show continuous, constant colour throughout the image. Once a satisfactory colour balance is achieved, feel free to switch to any alternative style of colour rendering.
If you know that a particular pixel or area in your image is supposed to be a shade of neutral white or gray, simply clicking on it is sufficient to let StarTools compute the right Red, Green and Blue bias settings to make that pixel appear neutral. This technique is particularly useful if you have a star of spectral type G2V (sun-like) in your image. The reasoning is that the sun is the perfect daylight white reference, and so any similar star elsewhere in the galaxy should be too.
Upon launch, or upon clicking the Sample button, the Color module samples whatever mask is set (note also that the set mask also ensures the Color module only applies any changes to the masked-in pixels!) and sets the Red, Green and Blue bias settings accordingly.
We can use this same behaviour to sample larger parts of the image that we know should be white. This method mostly exploits the fact that stars come in all sorts of sizes and temperatures (and thus colours!) and that this distribution is completely random. Therefore if we sample a large enough population, we should find the average star to be somewhere in the middle. Our sun is a very average star and is the white balance that we're after. Therefore, if we sample a large enough number of pixels containing a large enough number of stars, we should find a good colour balance.
We can accomplish that in two ways; we either sample all stars (but only stars!) in a wide enough field, or we sample a whole galaxy that happens to be in the image (note that the galaxy must be of a certain type to be a good candidate and be reasonably close - preferably a barred spiral galaxy much like our own Milky Way).
Whichever you choose, we need to create a mask, so we launch the Mask editor. Here we can use the Auto feature to select a suitable selection of stars, or we can use the Flood Fill Brighter or Lassoo tool to select a galaxy. Once selected, return to the Color module and click Sample. StarTools will now determine the correct Red, Green and Blue bias to match the white reference pixels in the mask so that they come out neutral.
To apply the new colour balance to the whole image, launch the Mask editor once more and click Clear, then click Invert to select the whole image. Upon return to the Color module, the whole image will now be balanced by the Red, Green and Blue bias values we determined earlier with just the white reference pixels selected.
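The principle behind this sampling step can be sketched as follows (illustrative only; StarTools' actual sampling is internal): the average of the masked-in white-reference pixels is forced to neutral.

```python
# Derive per-channel scale factors from a sample of star pixels whose
# *average* colour should be solar white (equal R, G, B).
def bias_from_sample(samples):
    n = len(samples)
    avg = [sum(s[c] for s in samples) / n for c in range(3)]
    ref = max(avg)
    # Scale each channel so the sample average comes out neutral.
    return tuple(ref / a for a in avg)

# Three hypothetical star pixels, reddish on average.
stars = [(0.9, 0.6, 0.5), (0.7, 0.5, 0.3), (0.8, 0.7, 0.4)]
r_scale, g_scale, b_scale = bias_from_sample(stars)
# Red is left alone; green and blue are boosted until the average is white.
```

The larger and more random the sampled star population, the closer its true average colour should be to that of an average (sun-like) star.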
StarTools comes with a unique colour balancing aid called MaxRGB. This mode is exceptionally useful when colour balancing by eye, particularly if the user suffers from colour blindness or uses a screen that is not well colour calibrated.
The MaxRGB aid allows you to view which channel is dominant per-pixel. If a pixel is mostly red, that pixel is shown red, if a pixel is mostly green, that pixel is shown green, and if a pixel is mostly blue, that pixel is shown blue.
By cross referencing the normal image with the MaxRGB image, it is possible to find deficiencies in the colour balance. For example, the colour green is very rarely dominant in space (with the exception of highly dominant OIII emission areas in, for example the Trapezium in M42).
Therefore, if we see large areas of green, we know that we have too much green in our image and should adjust the bias accordingly. Similarly, if we have too much red (or blue) in our image, the MaxRGB mode will show many more red (or blue) pixels in areas that should show an even mix of both, such as the background. Again, we then know we should adjust red or blue accordingly.
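A per-pixel sketch of the MaxRGB idea (illustrative only, not StarTools' renderer):

```python
# Paint each pixel in its dominant channel only, as in the MaxRGB view.
def max_rgb(pixel):
    r, g, b = pixel
    m = max(r, g, b)
    return (m if r == m else 0, m if g == m else 0, m if b == m else 0)

# A red-dominant pixel renders pure red; a blue-dominant one pure blue.
# Scanning a supposedly neutral background this way exposes any channel
# that is systematically dominant, regardless of screen calibration.
```

Because the result only encodes *which* channel wins, it sidesteps subtle hue judgements entirely.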
StarTools' Color Constancy feature makes it much easier to see colours and spot processes, interactions, emissions and chemical composition in objects. In fact, the Color Constancy feature makes colouring comparable between different exposure lengths and different gear. This allows the user to spot colours recurring in comparable features of similar objects. Such features are, for example, the yellow cores of galaxies (due to the relative over-representation of older stars as a result of gas depletion), the bluer outer rims of galaxies (due to the relative over-representation of bright blue young stars as a result of the abundance of gas) and the pink/purplish HII area 'blobs' in their discs. Red/brown dust lanes (white light filtered by dust) complement a typical galaxy's rendering.
Similarly, HII areas in our own galaxy (e.g. most nebulae), while in StarTools' Color Constancy Style mode, display the exact same colour signature found in the galaxies; a pink/purple as a result of predominantly deep red Hydrogen-alpha emissions, mixed with much weaker blue/green emissions of Hydrogen-beta and Oxygen-III, and (more dominantly) reflected blue star light from the bright young blue giants that are often born in these areas and shape the gas around them.
Dusty areas where the bright blue giants have 'boiled away' the Hydrogen through radiation pressure (for example the Pleiades) reflect the blue star light of any surviving stars, becoming distinctly blue reflection nebulae. Sometimes gradients can be spotted where (gas-rich) purple gives way to (gas-poor) blue (for example the Rosette core) as this process is caught in the act.
Diffraction spikes, while artefacts, also can be of great help when calibrating colours; the "rainbow" patterns (though skewed by the dominant colour of the star whose light is being diffracted) should show a nice continuum of colouring.
Finally, star temperatures, in a wide enough field, should be evenly distributed; the amount of red, orange, yellow, white and blue stars should be roughly equal. If any of these colors are missing or are over-represented we know the colour balance is off.
Colour balancing of data that was filtered by a light pollution filter is fundamentally impossible; narrow (or wider) bands of the spectrum are missing and no amount of colour balancing is going to bring them back and achieve proper colouring. A typical filtered data set will show a distinct lack in yellow and some green when properly colour balanced. It's by no means the end of the world - it's just something to be mindful of.
Correct colouring may, however, be achieved by shooting deep luminance data with a light pollution filter in place, while shooting colour data without the filter, after which both are processed separately and finally combined. Colour data is much more forgiving in terms of signal quality and noise; the human eye is much more sensitive to noise in the luminance data than it is in the colour data. By making clever use of that fact and performing some trivial light pollution removal in Wipe, the best of both worlds can be achieved.
Once you have achieved a color balance you are happy with, the StarTools Color module offers a great number of ways to change the presentation of your colours.
The parameter with the biggest impact is the 'Style' parameter. StarTools is renowned for its Color Constancy feature, rendering colours in objects regardless of how the luminance data was stretched, the reasoning being that colours in outer space don't magically change depending on how we stretch our image. Other software sadly lets the user stretch the colour information along with the luminance information, warping, distorting and destroying hue and saturation in the process. The 'Scientific (Color Constancy)' setting for Style undoes these distortions using Tracking information, arriving at the colours as recorded.
To emulate the way other software renders colours, two other settings are available for the Style parameter. These settings are "Artistic, Detail Aware" and "Artistic, Not Detail Aware". The former still uses some Tracking information to better recover colours in areas whose dynamic range was optimised locally, while the latter does not compensate for any distortions whatsoever.
The LRGB Method Emulation allows you to emulate a number of colour compositing methods that have been invented over the years. Even if you acquired data with an OSC or DSLR, you will still be able to use these compositing methods; the Color module will generate synthetic luminance from your RGB on the fly and re-composite the image in your desired compositing style.
The difference in colouring can be subtle or more pronounced. Much depends on the data and the method chosen.
'Straight CIELab Luminance Retention' manipulates all colours in a psychovisually optimal way in CIELab space, introducing colour without affecting apparent brightness.
'RGB Ratio, CIELab Luminance Retention' uses a method first proposed by Till Credner of the Max-Planck-Institut and subsequently rediscovered by Paul Kanevsky, using RGB ratios multiplied by luminance in order to better preserve star colour. Luminance retention in CIELab color space is applied afterwards.
'50/50 Layering, CIELab Luminance Retention' uses a method proposed by Robert Gendler, where luminance is layered on top of the colour information with a 50% opacity. Luminance retention in CIELab color space is applied afterwards. The inherent loss of 50% in saturation is compensated for, for your convenience, in order to allow for easier comparison with other methods.
'RGB Ratio' uses a method first proposed by Till Credner of the Max-Planck-Institut and subsequently rediscovered by Paul Kanevsky, using RGB ratios multiplied by luminance in order to better preserve star colour. No further luminance retention is attempted.
'50/50 Layering' uses a method proposed by Robert Gendler, where luminance is layered on top of the colour information with a 50% opacity. No further luminance retention is attempted. The inherent 50% loss in saturation is compensated for, for your convenience, in order to allow for easier comparison with other methods.
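As a rough illustration of the 'RGB Ratio' idea (a sketch only; the actual implementations, including their luminance definitions, are not public), the separately processed luminance multiplies each channel's ratio to the colour image's own luminance:

```python
# 'RGB Ratio' style compositing sketch: hue is set by the channel ratios,
# brightness by the separately processed luminance L.
def rgb_ratio_composite(L, rgb):
    r, g, b = rgb
    lum = max(r, g, b)  # simple luminance proxy, assumed for this sketch
    if lum == 0:
        return (0.0, 0.0, 0.0)
    return (L * r / lum, L * g / lum, L * b / lum)

# A saturated star core keeps its colour ratios even at high luminance,
# instead of washing out towards white.
print(rgb_ratio_composite(0.9, (0.5, 0.25, 0.25)))
```

Because the ratios survive multiplication, star colour is preserved even where the luminance approaches full scale, which is the stated motivation for the method.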
Note that the LRGB Emulation Method feature is only available when Tracking is engaged.
The 'Saturation' parameter allows colours to be rendered more, or less vividly, whereby Bright Saturation and Dark Saturation control how much colour and saturation is introduced in the highlights and shadows respectively. It is important to note that introducing colour in the shadows may exacerbate colour noise, though Tracking will make sure any such noise exacerbations are recorded and dealt with during the final denoising stage.
The 'Cap Green' parameter, finally, removes spurious green pixels if needed, reasoning that green-dominant colours in outer space are rare and must therefore be caused by noise. Use of this feature should be considered a last resort if colour balancing does not yield adequate results and the green noise is severe. The final denoising stage should, thanks to Tracking data mining, have already pinpointed the green channel noise and should be able to mitigate it adequately.
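The underlying idea is similar in spirit to an "average neutral" SCNR-style green removal; a hedged sketch follows (not StarTools' actual algorithm, and the `amount` parameter is an assumption for illustration):

```python
# Cap green at the average of red and blue; `amount` scales the effect
# (1.0 caps fully, 0.0 leaves the pixel untouched).
def cap_green(pixel, amount=1.0):
    r, g, b = pixel
    cap = (r + b) / 2
    if g > cap:
        g = g - amount * (g - cap)
    return (r, g, b)

# A green-dominant noise pixel is pulled back towards neutral;
# pixels without excess green pass through unchanged.
```

Note that such a cap also suppresses any genuine green signal (e.g. strong OIII-dominant regions), which is why it should remain a last resort.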
The Contrast module optimizes local dynamic range allocation, resulting in better contrast, reducing glare and bringing out faint detail.
It operates on medium to large areas and is especially effective for enhancing contrast in nebulae, globular clusters and galaxy cores.
The Contrast module has some parameters in common with the Wipe module. In some ways it is similar, though not the same.
Just like the Wipe module, the Contrast module is sensitive to "dark anomalies"; pixels not of celestial origin that are darker than the real celestial background.
So, just like the Wipe module, if dark anomalies are present, we need to make sure that any such anomalies are mitigated before Contrast sees them, either by removing them (cropping them out) or instructing the Contrast module to ignore them (increasing the 'Dark anomaly filter' parameter).
Once any dark anomalies are taken care of, a suitable 'Aggressiveness' parameter needs to be chosen. The 'Aggressiveness' parameter controls how 'local' the dynamic range optimisation is allowed to be. You will find that a higher 'Aggressiveness' value with all else equal, will yield an image with areas of starker contrast. More generally, you will find that changing the 'Aggressiveness' value will see the Contrast module take pretty different decisions on what and where to optimise. The rule of thumb is that a higher 'Aggressiveness' value will see smaller and 'busier' areas given priority over larger more 'tranquil' areas.
Similar to the Wipe module, the 'Precision' parameter can be used to increase the precision when dealing with highly detailed wide-fields with a lot of undulating detail, combined with high 'Aggressiveness' values.
The 'Dark anomaly headroom' parameter controls how heavily the Contrast module "squashes" the dynamic range of larger scale features it deems "unnecessary". By de-allocating dynamic range that is used to describe larger features and re-allocating it to interesting local features, the de-allocation necessarily involves reducing the larger features' dynamic range, hence "squashing" that range. Very low settings may appear to clip the image (though this is not the case). For those familiar with music production, the Contrast module is very much akin to a Compressor, but for your image instead.
The 'Compensate gamma' feature attempts to apply a non-linear curve that makes the image just as bright as the source (input) image. This option may be desirable if the image has become too dark.
Finally, the 'Expose dark areas' option can help expose detail in the shadows by normalizing the dynamic range locally, making sure that the full dynamic range is used at all times. This option may generate artefacts at high 'Aggressiveness' settings, which may be mitigated in some instances by increasing the 'Precision' parameter.
The Compose module is an easy-to-use, yet extremely flexible compositing and channel extraction tool. As opposed to other software, the Compose module allows you to effortlessly process LRGB, LLRGB, or narrowband composites such as SHO and LSHO as if they were simple RGB datasets.
In traditional image processing software, composites with separate luminance information (for example acquired through a luminance filter, created by a synthetic luminance frame, or a combination of both) require lengthy processing workflows; luminance (detail) and color information need to (or at least should!) be processed separately and only combined at the end to produce the final image.
Through the Compose module, StarTools is able to process luminance and color information separately, yet simultaneously.
This has important ramifications for your workflow and signal fidelity;
•Your workflow for a complex composite is now virtually the same as it is for a simple DSLR/OSC dataset; modules like Wipe and Color automatically consult and manipulate the correct dataset(s) and enable additional functionality where needed.
•Because everything is now done in one Tracking session, you get all the benefits of signal evolution tracking until the very end, without having to end your workflow for luminance and start a new one for chroma/color; all modules cross-reference luminance and color information as needed until the very end, yielding vastly cleaner results.
•The Entropy module can consult the chroma/color information to effortlessly manipulate luminance as you see fit, while Tracking monitors noise propagation.
Synthetic luminance datasets are created by simply specifying the total exposure times for each imported dataset. With the click of a button, a synthetic luminance dataset can be added to an existing luminance dataset, or used as a (synthetic) luminance dataset in its own right.
Finally, the Compose module can be used to create bi-color composites, or to extract individual channels from color images.
Creating a composite is as easy as loading the desired datasets into the desired slots, and optionally setting the desired composite scheme and exposure lengths.
The "Luminance" button loads a dataset into the "Luminance File" slot. The "Lum Total Exposure" slider determines the total exposure length in hours, minutes and seconds. This value is used to create the correctly weighted synthetic luminance dataset, in case the "Luminance, Color" composite mode is set to create a synthetic luminance from the loaded channels. Loading a Luminance file only has an effect when the "Luminance, Color" parameter is set to a compositing scheme that incorporates a luminance dataset (e.g. "L, RGB", "L + Synthetic L From RGB, RGB" or "L + Synthetic L From RGB, Mono").
The Red, Green and Blue buttons load a dataset in the "Red File", "Green File" and "Blue File" slots respectively. The "Red Total Exposure", "Green Total Exposure", "Blue Total Exposure" sliders determine the total exposure length in hours, minutes and seconds for each of the three slots. These values are used to create the correct weighted synthetic luminance dataset (at 1/3rd weighting of the "Lum Total Exposure"), in case the "Luminance, Color" composite mode is set to create a synthetic luminance from the loaded channels.
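As an illustration of the weighting described above, the following is a minimal Python sketch of an exposure-weighted synthetic luminance, with the color channels weighted at 1/3 of the luminance exposure; this is an interpretation of the description above, not StarTools' actual internal code.

```python
def synthetic_luminance(lum, r, g, b, t_lum, t_r, t_g, t_b):
    """Exposure-weighted synthetic luminance (illustrative sketch).

    Each colour channel is weighted at 1/3 of its exposure time
    relative to the luminance exposure, per the description above.
    Channels are flat lists of pixel values in 0..1.
    """
    w_lum = t_lum
    w_r, w_g, w_b = t_r / 3.0, t_g / 3.0, t_b / 3.0
    total = w_lum + w_r + w_g + w_b
    return [(w_lum * l + w_r * rr + w_g * gg + w_b * bb) / total
            for l, rr, gg, bb in zip(lum, r, g, b)]
```

For example, with a 3-hour luminance exposure and 3 hours per color channel, the luminance dataset contributes half the weight of the combined color channels.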
Loading a dataset into the "Red File", "Green File" or "Blue File" slots will see any missing slots synthesised automatically if the "Color Ch. Interpolation" parameter is set to "On". Loading a color dataset into the "Red File", "Green File" or "Blue File" slots will automatically extract the red, green and blue channels of the color dataset respectively.
There are a number of compositing schemes available, some of which will put StarTools into "composite" mode (as signified by a lit up "Compose" label on the Compose button on the home screen). Compositing schemes that require separate processing of luminance and color will put StarTools in this special mode. Some modules may exhibit subtly different behaviour, or expose different functionality, while in this mode.
The following compositing schemes are selectable;
"RGB, RGB" simply uses red + green + blue for luminance and uses red, green and blue for the color information. No special processing or compositing is done. Any loaded Luminance dataset is ignored, as are Total exposure settings.
"RGB, Mono" simply uses red + green + blue for luminance and uses the average of the red, green and blue channels for the color information, resulting in a mono image. Any loaded Luminance dataset is ignored, as are Total exposure settings.
"L, RGB" simply uses the loaded luminance dataset for luminance and uses red, green and blue for the color information. Total exposure settings are ignored. StarTools will be put into "composite" mode, processing luminance and color separately yet simultaneously. If no Luminance dataset is loaded, this scheme functions the same as "RGB, RGB", with the exception that StarTools will still be put into "composite" mode, processing luminance and color separately yet simultaneously.
"L + Synthetic L from RGB, RGB" creates a synthetic luminance dataset from Luminance, Red, Green and Blue, weighted according to the exposure times provided by the "Total Exposure" sliders. The color information consists simply of the red, green and blue datasets as imported. StarTools will be put into "composite" mode, processing luminance and color separately yet simultaneously.
"L + Synthetic L from RGB, Mono" creates a synthetic luminance dataset from Luminance, Red, Green and Blue, weighted according to the exposure times provided by the "Total Exposure" sliders. The color information consists of the average of the red, green and blue channels for all channels, yielding a mono image. StarTools is not put into "composite" mode, as no color information is available.
The Hubble Space Telescope palette (also known as 'HST' or 'SHO' palette) is a popular palette for color renditions of the S-II, Hydrogen-alpha and O-III emission bands. This palette is achieved by loading S-II, Hydrogen-alpha and O-III ("SHO") as red, green and blue respectively. A special "Hubble" preset in the Color module provides a shortcut to color rendition settings that mimic the results from the more limited image processing tools from the 1990s.
A popular bi-color rendition of H-alpha and O-III is to import H-alpha as red and O-III as both green and blue. A synthetic luminance frame is then created that gives a weighting only to red and blue (or to green instead of blue, but not both, so that O-III is not counted twice) according to the two datasets' exposure lengths. The resulting color rendition tends to be close to these bands' manifestation in the visual spectrum, with H-alpha a deep red and O-III appearing as a teal green.
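The bi-color mapping and the single-weighted synthetic luminance described above can be sketched as follows (illustrative Python, not StarTools' internals):

```python
def bicolor_hoo(ha, oiii):
    """Map H-alpha to red and O-III to both green and blue.

    Returns (r, g, b) channel lists for the bi-colour rendition
    described above: Ha appears deep red, O-III appears teal.
    """
    return list(ha), list(oiii), list(oiii)

def bicolor_synthetic_lum(ha, oiii, t_ha, t_oiii):
    """Exposure-weighted synthetic luminance for the bi-colour case.

    O-III is weighted once only, even though it fills two channels,
    so that it is not counted twice (per the caveat above).
    """
    total = t_ha + t_oiii
    return [(t_ha * h + t_oiii * o) / total for h, o in zip(ha, oiii)]
```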
StarTools' Deconvolution module allows for recovering detail in seeing-limited and diffraction-limited datasets.
The Deconvolution algorithm in StarTools is so fast, that previewing and experimentation to find the right parameters can be done in near-real-time.
The Deconvolution module incorporates a regularization algorithm that automatically finds the optimum balance between noise and detail and puts you in control of this trade-off in an intuitive way.
StarTools' signal evolution Tracking functionality allows the Decon module to achieve results that have no equal in other software, as it allows Decon to use further information on how you stretched your image.
The De-Noise module offers detail-aware, astro-specific noise reduction, which, paired with StarTools' Tracking feature, yields results that have no equal.
Whereas generic noise reduction routines and plug-ins for terrestrial photography are often optimised to detect and enhance geometric patterns and structures in the face of random noise, the De-Noise module is optimised to do the opposite; it preserves and enhances patterns and structures that are non-geometric in nature in the face of random noise (as well as read noise).
When used in conjunction with StarTools' 'Tracking' feature which data mines every decision and noise evolution per-pixel during the user's processing, the results that De-Noise is able to deliver autonomously are absolutely unparalleled. The extremely targeted noise reduction that is provided in this case, can only be approximated in other software by spending many hours creating a noise mask by hand.
Denoising starts when switching Tracking off. It is therefore generally the last step, and for good reason: being the last step, Tracking has had the longest possible time to track and analyse noise propagation.
Bearing the aforementioned in mind, note that clicking the Denoise icon in the left hand menu launches the Denoise module in preview mode; the final result cannot be kept and is only meant for evaluation purposes to examine noise propagation and mitigation in an unfinished workflow. Only switching Tracking off will allow you to keep the final noise-reduced result.
The first stage of noise reduction involves choosing between three subtly different noise reduction algorithms, and helping StarTools establish a baseline for visual noise grain. To establish this baseline, increase the 'Grain size' parameter until noise grain of any size can no longer be seen. StarTools will use this baseline to more intelligently redistribute the energy that is taken out of the various bands during the wavelet denoising in the second stage. Note that this parameter is still available for modification in the second stage, though it lacks the visual aid presented here.
After clicking 'Next', the wavelet scale extraction starts, upon which, after a short while, the second interactive noise reduction stage interface is presented.
The base algorithm that performs noise removal is an enhanced wavelet denoiser, meaning that it is able to attenuate features based on their size. Noise grain caused by shot noise (aka Poisson noise) - the bulk of the noise astrophotographers deal with - exists on all size levels, becoming less noticeable as the size increases. Therefore, much like the Sharp module, a number of scale sizes are available to tweak, allowing the denoiser to be more or less aggressive when removing features deemed noise grain at different sizes. Tweaks to these scale parameters are generally not necessary, but may be desirable if - for whatever reason - noise is not uniform and is more prevalent in a particular scale.
First, unlike basic wavelet denoising implementations, the algorithm is driven by the per-pixel signal (and noise component) evolution statistics collected during the preceding image processing. That is, rather than using a single global setting for all pixels in the image, StarTools' implementation uses a different setting (centered around a user-specified global setting) for every pixel in the image.
Second, the wavelet denoising algorithm is further enhanced by a feature scale correlation enhancement, which exploits common psychovisual techniques, whereby noise grain is generally tolerated better in areas of increased detail.
Third, because shot (Poissonian) noise, which scales with the signal, behaves differently to Gaussian noise, which is added to it, in areas of low signal around the noise floor, a separate algorithm can be deployed for just these areas if they are prevalent in your image. Datasets and images that show symptoms of linear noise response breaking down may exhibit conspicuous single dark pixels inside otherwise smooth areas.
Finally, any removed energy is collected per pixel and re-distributed across the image, giving the user intuitive control over reintroduction of noise grain and fine detail, countering any over-smoothening.
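The overall flow described in the points above, multi-scale detail extraction, per-scale attenuation, and redistribution of the removed energy, can be sketched on a 1-D signal like this. This is a heavily simplified illustration; StarTools' per-pixel, Tracking-driven implementation is far more sophisticated.

```python
def boxblur(x, radius):
    """Simple edge-clamped box blur over a 1-D signal."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def multiscale_denoise(x, attenuation, grain_dispersion=0.0):
    """Illustrative multi-scale denoiser in the spirit described above.

    Splits the signal into detail layers of increasing scale, attenuates
    each layer by the given factor, collects the removed "energy" and
    redistributes a user-controlled fraction of it (grain_dispersion),
    analogous to the re-introduction of fine grain described above.
    """
    current = list(x)
    removed = [0.0] * len(x)
    result = [0.0] * len(x)
    for s, att in enumerate(attenuation):
        blurred = boxblur(current, 2 ** s)
        for i in range(len(x)):
            detail = current[i] - blurred[i]
            result[i] += detail * att            # keep attenuated detail
            removed[i] += detail * (1.0 - att)   # collect removed energy
        current = blurred
    for i in range(len(x)):
        # residual plus re-dispersed grain energy
        result[i] += current[i] + removed[i] * grain_dispersion
    return result
```

Note that with all attenuation factors at 1.0 (or with full grain dispersion), the decomposition reconstructs the input exactly; denoising is the controlled loss of detail-layer energy in between.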
The parameters that govern global noise reduction response (rather than per-feature-size) are 'Brightness/Color detail loss' and 'Smoothness'.
'Brightness/Color detail loss' specifies a measure of allowed acceptable detail loss in order to reduce noise. In color images, the 'Color detail loss' parameter works solely on any color noise, while the 'Brightness detail loss' parameter works on the detail itself, but not its colors.
The 'Smoothness' parameter determines how much (or little) the denoiser should take notice of any inter-scale detail correlation. Detail correlation is higher in areas that look 'busy' such as galaxy or nebula cores or shock waves, whereas detail correlation is low in areas that are 'tranquil' such as opaque homogenous gas clouds. Increasing 'Smoothness' progressively ignores such correlation, allowing for more aggressive noise reduction in areas of higher correlation.
'Scale correlation' specifies how deep the denoiser should look for detail that may be correlated across scales. Most data can withstand deep correlation, however some types of data may exhibit an artificially introduced correlation. This can be the case with data that;
•has been drizzled with insufficient frames
•originates from a sensor with a color filter array (for example an OSC or DSLR) and where insufficient frames were stacked
•was not sufficiently dithered between sub-frame acquisition
•has any other type of recurring embedded pattern, visible or latent
Noise in such cases will not exhibit a Poisson distribution (i.e. it no longer resembles shot noise) and will exhibit correlation in the form of clumps or streaks. Such data may require a shallower 'Scale correlation' value. More generally, such types of noise/artefacts are beyond the scope of the denoise module's capabilities and should be corrected during acquisition and pre-processing, rather than at the post-processing stage.
Set 'Smoothness' until fine noise grain is sufficiently smoothed out. Increase 'Scale 5' if noise grain is visible at the largest scales. Increase or decrease 'Grain Dispersion' to taste to reintroduce fine detail and grain. Vary 'Brightness Detail Loss' and 'Color Detail Loss' if needed.
The Develop module was created from the ground up as an alternative to the classic Digital Development algorithm, which attempts to emulate classic film response when first developing a raw stacked image.
It effectively functions as a digital 'dark room' where your prized raw signal is developed and readied for further processing.
Automated black and white point detection ensures your signal never clips, while making histogram checking a thing of the past. A semi-automated 'homing in feature' attempts to find the optimal settings that bring out as much detail as possible, while still adhering to the Digital Development curve.
The Develop module, along with the AutoDev, HDR and Contrast modules, is part of StarTools' automated stretching solution, making endless curve tweaking and histogram checking a thing of the past; leaving the guesswork to the computer means attaining superior results.
The Entropy module is a novel module that enhances detail in your image, using latent detail cues in the color information of your dataset.
The Entropy module exploits the same basic premise as the Filter module; that is, the observation that many interesting features and objects in outer space have distinct colors, owing to their chemical make-up and associated emission lines. This correlation becomes absolute when considering a narrowband composite, where each channel truly is made up of data from distinct parts of the spectrum.
The Entropy module works by evaluating entropy (a measure of "busyness" or "randomness") as a proxy for detail. It does so on a local level in each colour channel for each pixel. Once this measure has been established for each pixel, the individual channel's contribution to luminance for each pixel is re-weighted in CIELab space to better reflect the contribution of visible detail in that channel.
The result is that the luminance contribution of a channel with less detail in a particular area is attenuated. Conversely, the luminance contribution of a channel with more detail in a particular area is boosted. Overall, this has the effect of accentuating latent structures and detail in a very natural manner. Operating entirely in CIELab space means that, psychovisually, there is no change in colour, only brightness.
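A simplified sketch of the entropy-as-detail-proxy idea follows, in plain Python on a single local window per channel. A straightforward weighted average stands in for the true CIELab-space re-weighting the module performs; the function names and window handling are illustrative only.

```python
import math

def local_entropy(window, bins=8):
    """Shannon entropy of the intensities in a local window (values 0..1)."""
    counts = [0] * bins
    for v in window:
        counts[min(int(v * bins), bins - 1)] += 1
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def reweight_luminance(channel_windows):
    """Re-weight each channel's contribution to the centre pixel's
    luminance by its local entropy (a proxy for visible detail).
    Illustrative stand-in for the CIELab-space operation described above.
    """
    ents = [local_entropy(w) for w in channel_windows]
    total = sum(ents)
    if total == 0:
        ents, total = [1.0] * len(channel_windows), float(len(channel_windows))
    mid = len(channel_windows[0]) // 2  # centre pixel of each window
    return sum(e / total * w[mid] for e, w in zip(ents, channel_windows))
```

A channel whose local window is flat (zero entropy, no visible detail) contributes nothing; a "busy" channel dominates the result.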
The above attributes make the Entropy module an extremely powerful tool for narrowband composites in particular.
The Entropy module is effective both on already processed images, as well as Tracked datasets. The module is available as of StarTools 1.5.
The Entropy module is very flexible in its image presentation. To start using the Entropy module, an entropy map needs to be generated by clicking the 'Do' button. This map's resolution/accuracy can be chosen by using the 'Resolution' parameter. The 'Medium' resolution is sufficient in most cases.
For the entropy module to be able to identify detail, the dataset should ideally be of an image-filling object or scene.
After obtaining a suitable entropy map, the other parameters can be tweaked in real-time;
The 'Strength' parameter governs the overall strength of the boost or attenuation of luminance. Overdriving the 'Strength' parameter too much may make channel transitions too visible. In this case you may wish to pull back, or increase the 'Midtone Pull Filter' size to achieve a smoother blend.
The 'Dark/Light Enhance' parameter enables you to choose the balance between darkening and brightening of areas in the image. To only brighten the image (for example if you wish to bring out faint H-alpha, but nothing else), set this parameter to 0%/100%. To only darken the image (for example to better show a bright DSO core), bring the balance closer to 100%/0%.
The 'Channel Selection' parameter allows you to target only certain channels. For example, if you wish to make S-II more visible in a Hubble-palette image, set this parameter to red (to which S-II is mapped). S-II will now be boosted, and H-alpha and O-III will be pushed back where needed to aid S-II's contrast. If you wish to avoid the other channels being pushed back, simply set 'Dark/Light Enhance' to 0%/100%.
The 'Midtone Pull Filter' and 'Midtone Pull Strength' parameters assist in keeping any changes in the brightness of your image confined to the area where they are most effective and visible: the midtones. This feature can be turned off by setting 'Midtone Pull Strength' to 0%. When on, the filter selectively accepts or rejects changes to pixels, based on whether they are close to half unity (e.g. neutral gray) or not. This feature works analogously to creating an HDR composite from different exposure times. The transition boundaries between accepted and rejected pixels are smoothed out by increasing the 'Midtone Pull Filter' parameter.
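The half-unity-based acceptance described above might be sketched like this; the weighting function below is an illustration of the concept, not StarTools' actual filter.

```python
def midtone_pull_weight(pixel, strength):
    """Illustrative acceptance weight for a change to `pixel` (0..1),
    favouring values near half unity (0.5), as described above.
    A `strength` of 0.0 disables the filter (changes fully accepted)."""
    closeness = 1.0 - 2.0 * abs(pixel - 0.5)  # 1 at mid-gray, 0 at black/white
    return (1.0 - strength) + strength * closeness

def apply_change(old, new, strength):
    """Blend a proposed new pixel value with the old one by midtone weight."""
    w = midtone_pull_weight(old, strength)
    return old + (new - old) * w
```

At full strength, a mid-gray pixel takes the change entirely, while pure black or white pixels are left untouched.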
The Filter module allows for the modification of features in the image by their colour by simply clicking on them. It's as close to a post-capture colour filter wheel as you can get.
Filter can be used to bring out detail of a specific colour (such as faint Ha, Hb, OIII or S2 details), remove artefacts (such as halos, chromatic aberration) or isolate specific features. It functions as an interactive colour filter.
The Filter module is the result of the observation that many interesting features and objects in outer space have distinct colors, owing to their chemical make up and associated emission lines. Thanks to the Color Constancy feature in the Color module, colours still tend to correlate well to the original emission lines and features, despite any wideband RGB filtering and compositing. The Filter module was written to capitalise on this observation and allow for intuitive detail enhancement by simply clicking different parts of the image with a specific colour.
The Fractal Flux module allows for fully automated analysis and subsequent processing of astronomical images of DSOs.
The one-of-a-kind algorithm pin-points features in the image by looking for natural recurring fractal patterns that make up a DSO, such as gas flows and filaments. Once the algorithm has determined where these features are, it then is able to modify or augment them.
Knowing which features probably represent real DSO detail, the Fractal Flux is an effective de-noiser, sharpener (even for noisy images) and detail augmenter.
Detail augmentation through flux prediction can plausibly predict missing detail in seeing-limited data, introducing detail into an image that was not actually recorded but whose presence in the DSO can be inferred from its surroundings and gas flow characteristics. The detail introduced can be regarded as an educated guess.
It doesn't stop there however – the Fractal Flux module can use any output from any other module as input for the flux to modulate. You can use, for example, the Fractal Flux module to automatically modulate between a non-deconvolved and deconvolved copy of your image – the Fractal Flux module will know where to apply the deconvolved data and where to refrain from using it.
The HDR (High Dynamic Range) module optimises local dynamic range, in order to bring out the maximum amount of detail that is hidden in your data.
A HDR optimisation tool is a virtual necessity in astrophotography, owing to the huge brightness differences (aka 'dynamic range') innate to various objects that exist in deep space.
As opposed to other approaches (for example wavelet-based ones), the HDR module enhances dynamic range allocation locally (not just globally). It further takes into account psycho-visual theory (i.e. the way human vision perceives and processes detail) in the way the controls operate on the image.
Finally, the HDR module does not exacerbate noise grain like simpler dynamic range algorithms, factoring in noise propagation into the size of the final detail enhancement.
The result is an artefact-free, natural-looking image with real detail that does not suffer from the problems other approaches suffer from, such as looking 'flat', looking too busy, or blowing out highlights such as stars.
The HDR module optimises local dynamic range allocation for smaller details (e.g. on a more local level) than the Contrast module; the HDR module works primarily on medium-to-small features in the image.
The HDR module complements the Sharp module and is generally a more flexible and powerful alternative that achieves artefact-free results. Examples of use cases are bright galaxy cores where small detail is still recoverable in the highlights.
Because the HDR module factors noise propagation into the size of the final detail enhancement, it is meant to be used after your dataset has been stretched (made non-linear), for example using the Develop or AutoDev modules.
As with most modules in StarTools, the HDR module comes with a number of presets;
•Optimise - accentuates detail
•Equalise - pulls detail into the midtones and out of the shadows and highlights
•Tame - pulls detail into the midtones and out of just the highlights
•Reveal - reveals latent structural detail in the highlights (set 'Algorithm' to 'Reveal All' to also reveal structural detail in the shadows)
Going beyond the presets, more detailed adjustments can be made, starting with the 'Detail Size Range' parameter. This parameter is highly influential on the end result. It governs the range of detail sizes HDR should concentrate on, in order to bring out the most detail. Keeping this value small will see small detail accentuated. However, using larger values will see both small and large structural detail modified. Using larger values will progressively dig out larger scale structures and can be quite effective in highlighting these.
A selection of different algorithms to bring out detail exists. These are chosen through the 'Algorithm' parameter;
•'Equalize' - much like the preset, pulls detail into the midtones and out of the shadows and highlights.
•'Tame highlights' - uses the 'Equalize' algorithm to enhance just the highlights. It is a great tool for reducing glare, very effectively negating brightness build-up in DSO cores and galaxies. It can yield similar results to the Contrast module, but on smaller scales.
•'Brighten Dark' - uses the 'Equalize' algorithm to enhance just the shadows. It can be an extremely useful tool for bringing out latent detail in the shadows, such as faint, larger scale nebulosity. Because the HDR module as a whole factors noise propagation into the size of the final detail enhancement, it does not tend to introduce much noise grain and will only bring out larger scale structures if detected.
•'Optimize soft' - uses a fairly conservative detail enhancement strategy and is useful to give, for example, an image of a DSO a bit more 'punch' if it is mostly very wispy or shrouded in nebulosity.
•'Optimize hard' - is a less conservative version of 'Optimize soft' and is a good general-purpose structural detail enhancer.
•'Reveal DSO core' - uses the 'Reveal' algorithm and applies it to just the highlights. It is a very aggressive, but also effective, structural detail hunter. Its aggressiveness can be controlled by the 'Strength' parameter. The 'Reveal' algorithm is a (very, very) distant cousin of the simple Contrast Limited Adaptive Histogram Equalisation (CLAHE) algorithm, but performs local histogram stretching rather than equalisation, thereby avoiding artefacts and noise grain exacerbation in areas with low signal-to-noise ratios. 'Reveal DSO core' only works on the highlights.
•'Reveal All' - is similar in all aspects to the 'Reveal DSO core' algorithm, with the exception that it is also applied to the shadows, enhancing the totality of the local dynamic range.
In order to throttle how much the shadows and highlights respond to the enhancements, a brightness mask is used, the power of which is controlled by the 'Dark/Bright Response' parameter.
The Heal module was created to provide a means of substituting unwanted pixels in a neutral way.
Cases in which healing pixels may be desirable may include the removal of stars, hot pixels, dead pixels, satellite trails and even dust donuts.
The Heal module incorporates a content-aware algorithm that is able to synthesise extremely plausible substitution pixels, even for large areas. The algorithm is very similar to those found in expensive photo editing packages, but has been specifically optimised for astrophotography purposes.
Getting started with the Heal module in StarTools is a fairly straightforward affair; simply put any unwanted pixels in a mask and let the module do its thing. The more pixels are in the mask, the more the Heal module will have to 'invent' and the longer the Heal module will take to produce a result.
By using the advanced parameters, the Heal module can be made useful in a number of advanced scenarios.
The 'New Must Be Darker Than' parameter lets you specify a brightness value that indicates the maximum brightness a 'new' (healed) pixel may have. This is useful if you are healing out areas that you later wish to replace with brighter objects, for example stars. By ensuring that the 'new' (healed) background is always darker than what you will be placing on top, you can simply use, for example, the Lighten mode in the Layer module.
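The Lighten combination mentioned above is a simple per-pixel maximum; a minimal sketch (illustrative only):

```python
def lighten_blend(healed_bg, stars_fg):
    """Per-pixel Lighten blend: keeps the brighter of the two layers.

    If the healed background was forced darker than the objects you plan
    to place back (via 'New Must Be Darker Than'), the foreground objects
    win wherever they exist, and the healed background wins elsewhere.
    """
    return [max(a, b) for a, b in zip(healed_bg, stars_fg)]
```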
The 'Grow Mask' parameter is a quick way of temporarily growing the mask (see Grow button in the Mask editor). This is useful if your current mask did not quite get all pixels that needed removing.
The 'Quality' parameter influences how long the Heal module may look for substitutes for each pixel. Higher quality settings give marginally better results but are slower.
The 'Neighbourhood Area' parameter sets the size of the local area where the algorithm can look for good candidate seed pixels.
The 'Neighbourhood Samples' parameter is useful if you are looking to generate more 'interesting' areas based on other parts of the image; it can help a large healed area avoid small repeating patterns. This feature is useful for terrestrial photography; however, it is often not needed or desirable for astrophotographical images. If you do not wish to use this feature, keep this value at 0.
The 'New Darker Than Old' parameter sets whether newly created pixels should always be darker than the old pixels. This may be useful for manipulation of the image in the Layer module (for example subtracting the healed image from the original image).
The Layer module is an extremely flexible pixel workbench for advanced image manipulation and pixel math, complementing StarTools' other modules.
It was created to provide you with a nearly unlimited arsenal of implicit functionality by combining, chaining and modulating different versions of the same image in new ways.
Features like selective layering, automated luminance masking, a vast array of filters (including Gaussian, Median, Mean of Median, Offset, Fractional Differentiation and many, many more) allow you to emulate complex algorithms such as SMI (Screen Mask Invert), PIP (Power of Inverse Pixels), star rounding, halo reduction, chromatic aberration removal, HDR integration, local histogram optimization or equalization, many types of noise reduction algorithms and much, much more.
The Lens module was created to digitally correct for lens distortions and some types of chromatic aberration in the more affordable lens systems, mirror systems and eyepieces.
One of the many uses of this module is to digitally emulate some aspects of a field flattener for those who are imaging without a physical field flattener.
While imaging with a hardware solution to this type of aberration is always preferable, the Lens module can achieve some very good results in cases where the distortion can be well modeled.
The Life module brings back 'life' into an image by remodelling uniform light diffraction, helping larger scale structures such as nebulae and galaxies stand out and (re)take center stage.
Throughout the various processing stages, light diffraction (a subtle 'glow' of very bright objects due to lens or mirror diffraction) tends to be distorted and suppressed through the various ways dynamic range is manipulated. This can sometimes leave an image 'flat' and 'lifeless'. The Life module attempts to restore the effects of uniform light diffraction by an optical system, throughout a processed image. It does so by means of modelling an Airy disk pattern and re-calculating what the image would look like if it were diffracted by this pattern. The resulting model is then used to modulate or enhance the source image in various ways. The resulting output image tends to have a re-established natural sense of depth and ambiance, with better visible super structures.
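A much-simplified sketch of this re-diffraction idea on a 1-D signal follows, using a Gaussian glow kernel as a stand-in for a true Airy-disk model and a screen blend to merge the model back; all names and parameters here are illustrative, not StarTools' internals.

```python
import math

def diffraction_glow(image, threshold, radius, strength):
    """Illustrative re-diffraction sketch: pixels above `threshold` are
    spread by a glow kernel and screen-blended back over the image.
    A Gaussian kernel stands in for a true Airy-disk pattern here.
    """
    n = len(image)
    glow = [0.0] * n
    for i, v in enumerate(image):
        if v < threshold:
            continue  # only bright pixels contribute to the glow model
        for j in range(max(0, i - 3 * radius), min(n, i + 3 * radius + 1)):
            glow[j] += v * math.exp(-((j - i) ** 2) / (2.0 * radius ** 2))
    peak = max(glow) or 1.0
    glow = [strength * g / peak for g in glow]  # normalise and scale
    # screen-blend the glow model with the source image
    return [1.0 - (1.0 - a) * (1.0 - b) for a, b in zip(image, glow)]
```

A bright point source thus gains a soft halo, while dim surroundings are lifted only where the model predicts diffracted light.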
For example, the Life module's Isolate preset, when applied to the whole image, is particularly adept at pushing back busy star fields and noisy backgrounds, refocusing the viewer's attention to the larger scale structures. As such it is a very powerful, yet easy to use tool to radically change the feel of an image.
The Life module may additionally be used locally by means of a mask. In this case the Life module can be used to isolate objects in an image and lift them from an otherwise noisy background. By having the Life module augment an object's super-structure, faint objects that were otherwise unsalvageable can be made to stand out from the background. Please note that, depending on the nature of the used selective mask, the super structures introduced by using the Life module in this particular way with a selective mask, should be regarded as an educated guess rather than documentary detail.
•Moderate - a moderate application of the 'life' algorithm.
•Heavy - a more aggressive application of the 'life' algorithm.
•Less=More - pushes back anything that is not a super structure, imparting depth to the image by manipulating brightness.
•Shroud - helps brighten an image without emphasising background noise or star fields.
•Isolate - pushes back anything that is not a super structure (similar to Less=More) while enhancing energy allocated to super structures.
Going beyond the presets, very detailed adjustments can be made, starting with the 'Glow Threshold' parameter. This parameter determines how bright a pixel needs to be before it is considered for diffraction by the Airy disk diffraction model.
To view just the model that Life is using to enhance the image, the 'Output Glow Only' parameter can be set to 'Yes'. Optionally this output can be used to manipulate the image later using the Layer module, or in a separate application.
The 'Strength' parameter governs the overall strength of the effect.
The 'Inherit Brightness, Color' parameter determines whether brightness or color information is inherited (and thus unchanged) from the source image.
The 'Saturation' parameter controls the colour saturation of the output model (viewable by setting 'Output Glow Only' to 'Yes'), before it is applied to the source image to generate the final output. This parameter can be quite effective for enhancing the color of nebulosity.
The 'Detail Preservation' parameter selects the detail preservation algorithm the Life module should use to merge the model with the source image to produce the output image;
•Off - does not attempt to preserve any detail.
•Min Distance to 1/2 Unity - uses the pixel that is closest to half unity (e.g. perfect gray).
•Max Contrast - uses whatever pixel maximises contrast with its neighbouring pixels.
•Linear Brightness Mask - uses a brightness mask that progressively masks out brighter values until it uses the original values instead.
•Linear Brightness Mask Darken - uses a brightness mask that progressively masks out brighter values. Only pixels that are darker than the original image are kept.
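As an illustration, two of these blend rules might be sketched as follows. This is a hypothetical NumPy reading of the descriptions above; the names and formulas are illustrative, not StarTools' internals:

```python
import numpy as np

def preserve_detail(original, processed, mode):
    """Hypothetical interpretation of two detail preservation rules."""
    if mode == 'min_distance_half_unity':
        # Keep whichever pixel is closest to perfect gray (0.5).
        keep_original = np.abs(original - 0.5) < np.abs(processed - 0.5)
        return np.where(keep_original, original, processed)
    if mode == 'linear_brightness_mask_darken':
        # Progressively mask out brighter values, then keep only
        # pixels that are darker than the original image.
        blended = processed * (1.0 - original) + original * original
        return np.minimum(blended, original)
    raise ValueError(f'unknown mode: {mode}')
```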
The 'Detail Preservation Radius' sets a filter radius that is used for smoothly blending processed and non-processed pixels, according to the algorithm specified by the 'Detail Preservation' parameter.
The 'Compositing Algorithm' parameter defines how the calculated diffraction model is to be generally combined with the original image:
•Screen - works like projecting two images on the same screen.
•Power of Inverse - Power of Inversed Pixels (PIP) function.
•Multiply, Gamma Correct - multiplies foreground and background and then takes the square root.
•Multiply, 2x Gamma Correct - similar to 'Multiply, Gamma Correct' but doubles the Gamma Correction.
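Assuming the conventional definitions of these blend formulas (StarTools' exact math may differ), the modes can be sketched as below. The 'Power of Inverse' (PIP) mode is omitted, since its formula is not given here:

```python
import numpy as np

def composite(background, foreground, mode):
    """Assumed conventional formulas for the compositing modes above."""
    if mode == 'screen':
        # Like projecting two images onto the same screen.
        return 1.0 - (1.0 - background) * (1.0 - foreground)
    if mode == 'multiply_gamma_correct':
        # Multiply, then take the square root.
        return np.sqrt(background * foreground)
    if mode == 'multiply_2x_gamma_correct':
        # Same, but with the gamma correction applied twice.
        return np.sqrt(np.sqrt(background * foreground))
    raise ValueError(f'unknown mode: {mode}')
```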
The 'Airy Disk Sampling' parameter controls the accuracy of the point spread function (PSF) that describes the diffraction model (an Airy disk).
•Default is 128 x 128 pixels. Range is 128 x 128, 256 x 256, 512 x 512 pixels.
•Increasing this value will give a more accurate simulation but will take longer.
The 'Airy Disk Radius' parameter sets the radius of the Airy disk point spread function (PSF) that is used to diffract the light. Just like in nature, you may spot some (very) subtle rings around the stars after processing. The way this looks can be adjusted using this setting.
Finally, as with most modules in StarTools that employ masks, a 'Mask Fuzz' parameter is available to smoothly blend the transition between masked and non-masked pixels.
The Repair module attempts to detect and automatically repair stars that have been affected by optical or guiding aberrations.
Repair is useful to correct the appearance of stars which have been adversely affected by guiding errors, incorrect polar alignment, coma, collimation issues or mirror defects such as astigmatism.
The Repair module allows for the correction of more complex aberrations than the much less sophisticated 'offset filter & darken layer' method, whilst retaining the star's exact appearance and color.
The repair module comes with two different algorithms. The 'Warp' algorithm uses all pixels that make up a star and warps them into a circular shape. This algorithm is very effective on stars that are oval or otherwise have a convex shape. The 'Redistribution' algorithm uses all pixels that make up a star and redistributes them in such a way that the original star is reconstructed. This algorithm is very effective on stars that are concave and cannot be repaired using the 'Warp' algorithm.
StarTools' Detail-aware Wavelet Sharpening allows you to bring out faint structural detail in your images.
Other wavelet sharpening implementations can often drown out fine detail because different frequency ranges compete to modify the same pixel - in those implementations, the different scales (bands) interfere with each other and are not aware of the sort of detail you are trying to bring out.
Uniquely, StarTools' Wavelet Sharpening gives you control over how detail enhancements across different scales interact. Apart from traditional parameters like controlling the strength of the detail enhancement per band, StarTools allows you to be the arbiter when two scales (bands) are competing to enhance detail in their band for the same pixel.
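For readers unfamiliar with the general technique, a conventional (not detail-aware) multi-scale sharpener can be sketched as follows. It splits the image into difference-of-Gaussian bands and re-weights each band, deliberately lacking the per-pixel arbitration between competing bands described above; names and parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_sharpen(image, boosts=(1.5, 1.2, 1.0), scales=(1, 2, 4)):
    """Conventional multi-scale sharpening sketch: decompose into
    difference-of-Gaussian bands, boost each band, recombine."""
    bands, residual = [], np.asarray(image, dtype=float)
    for scale in scales:
        blurred = gaussian_filter(residual, scale)
        bands.append(residual - blurred)  # detail at this scale
        residual = blurred
    out = residual
    for band, boost in zip(bands, boosts):
        out = out + boost * band  # a boost of 1.0 reconstructs as-is
    return np.clip(out, 0.0, 1.0)
```

With all boosts at 1.0 the decomposition reconstructs the input exactly; boosts above 1.0 emphasise detail at the corresponding scale.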
As with all modules in StarTools, the Wavelet Sharpening module never allows you to clip your data, always yielding useful results no matter how outrageous the values you choose, while availing of the Tracking feature's data mining. The latter makes sure that, contrary to other implementations, only detail that has sufficient signal is emphasised, while noise grain propagation is kept to a minimum.
Using StarTools' Auto Mask Generator, stars are automatically left alone. And, best of all, the complete algorithm is so fast that results are calculated in virtually real-time, while the interface couldn't be more user friendly.
The Shrink module allows you to modify the appearance of stars in your image; it can shrink stars, tighten stars and improve their coloring.
The Synth module generates physically correct diffraction and diffusion of point lights (such as stars) in your image, based on a virtual telescope model.
Besides correcting and enhancing the appearance of point lights (such as stars), the Synth module may even be 'abused' for aesthetic purposes to endow stars with diffraction spikes where they originally had none.
Other tools on the market today simply approximate the visual likeness of such star spikes and 'paint' them on. The Synth module, however, can physically model and emulate most real optical systems and configurations to obtain the desired result.
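The physics behind this kind of modelling is that, in the far field (Fraunhofer approximation), the point spread function of an optical system is the squared magnitude of the Fourier transform of its pupil: a circular aperture crossed by a spider produces the familiar 4-point star spikes. A minimal NumPy sketch with illustrative geometry and parameters (not StarTools' virtual telescope model):

```python
import numpy as np

def synth_psf(size=256, aperture_radius=0.4, vane_half_width=2):
    """PSF of a circular pupil with a cross-shaped spider, obtained as
    |FFT(pupil)|^2 (Fraunhofer diffraction). Parameters are illustrative."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    pupil = (np.hypot(x, y) < aperture_radius * size).astype(float)
    pupil[np.abs(x) < vane_half_width] = 0.0  # vertical spider vane
    pupil[np.abs(y) < vane_half_width] = 0.0  # horizontal spider vane
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.max()
```

Convolving stars with such a PSF (rather than painting spikes on) is what makes physically modelled spikes behave like real ones.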
The Wipe module detects, models and removes any source of unwanted light bias.
The Wipe module's main purpose is to eliminate unwanted light in an image and establish a neutral background.
Unwanted light may come in the form of gradients, colour cast or light pollution.
•Gradients are usually prevalent as gradual increases (or decreases) of background light levels from one corner of the image to another. Sources may include the Moon or a nearby street light.
•Colour casts are a tint of a particular colour which, contrary to a gradient, affects the whole image evenly.
•Light pollution is the presence of a persistent haze of (often) coloured light, caused by urban street lighting.
Other issues that the Wipe module may ameliorate are vignetting and amp glow;
•Vignetting manifests itself as the gradual darkening of the image towards the corners and may be caused by a number of things.
•Amp glow is caused by circuitry heating up in close proximity to the CCD, causing localised heightened thermal noise (typically at the edges). On some older DSLRs and Compact Digital Cameras, amp glow often manifests itself as a patch of purple fog near the edge of the image.
Strictly speaking, vignetting is not an additive light source and the correct course of action is to apply flat frames during sub frame calibration. That said, reasonable results can be achieved using Wipe's "vignetting" preset.
Note that while part of Wipe's job description is 'establishing a neutral background', this doesn't necessarily mean the background is colourless. It simply means that the colour channels are now bias-free; colour calibration of the channels by the Color module is still required.
It is of the utmost importance that Wipe is given the best artefact-free, linear data you can muster.
Because Wipe tries to find the true (darkest) background level, any pixel reading that is mistakenly darker than the true background in your image (for example due to dead pixels on the CCD, or a dust speck on the sensor) will cause Wipe to acquire wrong readings for the background. When this happens, Wipe can be seen to "back off" around the area where the anomalous data was detected, resulting in localised patches where gradient (or light pollution) remnants remain. These can often look like halos. Often dark anomalous data can be found at the very centre of such a halo or remnant.
The reason Wipe backs off is that Wipe (as is the case with most modules in StarTools) refuses to clip your data. Instead Wipe allocates the dynamic range that the dark anomaly needs to display its 'features'. Of course, we don't care about the 'features' of an anomaly and would be happy for Wipe to clip the anomaly if it means the rest of the image will look correct.
Fortunately, there are various ways to help Wipe avoid anomalous data;
•A 'Dark anomaly filter' parameter can be set to filter out smaller dark anomalies, such as dead pixels or small clusters of dead pixels, before passing on the image to Wipe for analysis.
•Larger dark anomalies (such as dust specks on the sensor) can be excluded from analysis simply by creating a mask that excludes that particular area (for example by "drawing" a "gap" in the mask using the Lasso tool in the Mask editor).
•Stacking artefacts can be cropped using the Crop module.
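A dark anomaly filter of the kind described above can be sketched with a morphological closing. This is an assumed implementation for illustration, not StarTools' own; the 'size' parameter is hypothetical:

```python
import numpy as np
from scipy.ndimage import grey_closing

def filter_dark_anomalies(image, size=3):
    """Fill in isolated dark pixels (e.g. dead pixels) smaller than
    'size' with a morphological closing, so a background model is
    not fooled by falsely dark readings."""
    return grey_closing(image, size=(size, size))
```

A flat background containing a single dead (zero) pixel comes back perfectly flat, while structures larger than the filter size are left essentially untouched.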
Bright anomalies (such as satellite trails or hot pixels) do not affect Wipe.
Once any dark anomalies in the data have successfully been dealt with, operating the Wipe module is fairly straightforward.
By default, a setting is selected that performs well in the presence of moderate gradients, colour casts or bias levels.
If the gradient undulates more strongly, a higher 'Aggressiveness' setting may be appropriate. When using a higher 'Aggressiveness', be mindful that Wipe does not 'wipe' away any medium to larger scale nebulosity. To Wipe, larger scale nebulosity and a strong undulating gradient can look like the same thing!
If you're worried about Wipe removing any larger scale nebulosity, you can protect this nebulosity by masking it out, so that Wipe doesn't sample it.
Because Wipe's impact on the dynamic range in the image is typically very high, a new stretch of the data is almost always appropriate, so that the freed-up dynamic range that used to be occupied by the gradients and/or light pollution can be put to good use to show detail. A global re-stretch using the AutoDev or Develop module is therefore almost always required.
Having to 'Keep' the result and switch to 'AutoDev' or 'Develop' just to see the result would be tedious. Therefore, a courtesy 'Temporary AutoDev' operation can be switched on to preview the result.
A number of controls for advanced use and special cases are available.
The 'Corner aggressiveness' parameter lets the user specify a different aggressiveness value for the corners of the image. This can be useful if gradients become stronger in just the corners, and can help ameliorate vignetting. The 'Drop off point' determines how far from the centre of the image the 'Corner aggressiveness' starts taking over from the main 'Aggressiveness' parameter. At 100% for the 'Drop off point', no effect is visible (e.g. only the main 'Aggressiveness' parameter is used), since the 'Corner aggressiveness' only comes into effect 100% of the way between the centre of the image and the corners.
The 'Precision' parameter can help when dealing with rapidly changing (e.g. undulating) gradients combined with high 'Aggressiveness' values.
The 'Mode' parameter allows for the selection of what aspect of the image should be corrected by Wipe;
•Correct color and brightness; removes both colour and brightness bias across the image.
•Correct color only; removes colour casts but does not impact brightness bias.
•Correct brightness only; retains colour but corrects brightness bias. This mode is useful when processing narrowband data, or data that was not acquired on Earth (for example Hubble Space Telescope data).
It's a feature called "Tracking", and it processes your signal in 3D (X, Y, t) space, rather than standard 2D (X, Y) space.
The result is less noise grain, finer detail, more flexibility, and unique functionality. You will not find this in any other software.
StarTools monitors your signal and its noise component, per-pixel, throughout your processing (time). It sports image quality and unique functionality that far surpasses other software. Big claim? Let us back it up.
If you have ever processed an astrophotographical image, you will have had to non-linearly stretch the image at some point, to make the darker parts with faint signal visible. Whether you used levels & curves, digital development, or some other tool, you will have noticed noise grain becoming visible quickly.
You may have also noticed that the noise grain always seems to be worse in the darker areas than in the brighter areas. The reason is simple; when you stretch the image to bring out the darker signal, you are also stretching the noise component of the signal along with it.
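The effect is easy to demonstrate numerically: applying the same non-linear stretch to a faint and a bright signal with identical noise leaves far more grain in the faint one, because the curve is much steeper at the dark end. A minimal sketch, with a gamma curve standing in for any levels/curves-style stretch:

```python
import numpy as np

rng = np.random.default_rng(0)

def stretch(x, gamma=0.25):
    """A simple gamma curve standing in for any non-linear stretch
    (levels & curves, digital development, etc.)."""
    return np.clip(x, 0.0, 1.0) ** gamma

# Identical noise added to a faint and a bright signal level...
noise = rng.normal(0.0, 0.005, 100_000)
dark = stretch(0.02 + noise)
bright = stretch(0.60 + noise)

# ...yet after stretching, the grain in the faint region is roughly an
# order of magnitude larger than in the bright region.
ratio = dark.std() / bright.std()
```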
And that was just a simple global stretch. Now consider that every pixel's noise component goes through many other transformations and changes as you process your image. Once you get into more esoteric and advanced operations such as local contrast enhancements or wavelet sharpening, noise levels get distorted in all sorts of different ways in all sorts of different places.
The result? In your final image, noise is worse in some areas and less in others. A "one-noise-reduction-pass-fits-all" approach no longer works. Yet that is all other software packages - even the big names - offer.
Chances are you have used a noise reduction routine at some stage. In astrophotography, the problem with most noise reduction routines, is that they have no idea how much worse the noise grain has become in the darker parts. They have no idea how you stretched and processed your image earlier. And they certainly have no idea how you squashed and stretched the noise component locally with wavelet sharpening or local contrast optimisation.
In short, the big problem is that separate image processing routines and filters have no idea what came before, nor what will come after, when you invoke them. All pixels are treated the same, regardless of their history. Current image processing routines and filters are still as 'dumb' as they were in the early 90s. It's still "input, output, next".
Without knowing how signal and its noise component evolved to become your final image, trying to, for example, squash noise accurately is impossible. What's too much in one area, is too little in another, all because of the way prior filters have modified the noise component beforehand.
The separation of image processing into dumb filters and objects is one of the biggest problems for signal fidelity in astrophotographical image processing software today. It leads to poorer final images, with steeper learning curves than necessary. Without addressing this fundamental problem, "having more control with more filters and tools" is an illusion. The IKEA effect aside, long workflows with endless tweaking do not make for better images.
But what if every tool, every filter, every algorithm could work backwards from the finished image, and trace signal evolution, per-pixel, all the way back to the source signal? That's Tracking.
Tracking in StarTools makes sure that every module and algorithm can trace back how a pixel was modified at any point in time. It's the Tracking engine's job to allow modules and algorithms to "travel in time" to consult data, and even to change data (changing the past) and then forward-propagate the changes to the present.
The latter sees the Tracking module re-apply every operation made since that point in time, however with the changed data as a starting point; changing the past for a better future. This is effectively signal processing in three dimensions; X, Y and time (X, Y, t).
This remarkable feature is responsible for never-seen-before functionality that allows you to, for example, apply deconvolution to heavily processed data. The deconvolution module "simply" travels back in time to a point where the data was still linear (normally deconvolution can only correctly be applied to linear data!). Once travelled back in time, deconvolution is applied and Tracking then forward-propagates the changes. The result is exactly what your processed data would have looked like if you had applied deconvolution earlier and then processed it further.
Sequence doesn't matter any more, allowing you to process and evaluate your image as you see fit. But wait, there's more!
Time travelling like this is very useful and amazing in its own right, but there is another major difference in StarTools' deconvolution module.
The major difference is that, because you initiated deconvolution at a later stage, the deconvolution module can take into account how you processed the image after the moment deconvolution would normally have been invoked (e.g. when the data was still linear). The deconvolution module now has knowledge about a future it is not normally privy to in any other software. Specifically, that knowledge of the future tells it exactly how you stretched and modified every pixel - including its noise component - after the time its job should have been done.
You know what really loves per-pixel noise component statistics like these? Deconvolution regularization algorithms! A regularization algorithm suppresses the creation of artefacts caused by the deconvolution of - you guessed it - noise grain. Now that the deconvolution algorithm knows how noise grain will propagate in the "future", it can take that into account when applying deconvolution at the time when your data is still linear, thereby avoiding a grainy "future" while allowing you to gain more detail. It is like going back in time and telling yourself the winning numbers of today's lottery draw.
What does this look like in practice? It looks like a deconvolution routine that just "magically" brings into focus what it can. No local supports, luminance masks, or selective blending needed. No exaggerated noise grain, just enhanced detail.
And all this is just what Tracking does for the deconvolution module. There are many more modules that rely on Tracking in a similar manner, achieving objectively better results than any other software, simply by being smarter with your hard-won signal.
In StarTools, your signal is processed (read and written) in a time-fluid way. Being able to change the past for a better future not only gives you amazing new functionality, changing the past with knowledge of the future also means a cleaner signal. Tracking always knows how to accurately estimate the noise component in your signal, no matter how heavily modified.
For its unique engine to function, StarTools needs to be able to make mathematical sense of your signal flow. That's why it's simply unable to perform "nonsensical" operations. This is great if you're a beginner and saves you from bad habits or sub-optimal decisions.
Just like in real life, in astrophotographical image processing some things need to be done in a particular order to get the correct result. Folding, drying, then washing your shirt will achieve a markedly different result to washing, drying and folding it. Similarly, deconvolution will not achieve correct results if it is done after stretching; ditto for light pollution removal and color calibration. In mathematics, operations whose outcome depends on their order are called non-commutative.
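A tiny numerical sketch makes this non-commutativity concrete, using a flat light pollution bias and a gamma stretch as stand-ins for the real operations:

```python
import numpy as np

def stretch(x, gamma=0.25):
    """A gamma curve standing in for any non-linear stretch."""
    return np.clip(x, 0.0, 1.0) ** gamma

signal = np.array([0.02, 0.10, 0.40])  # faint, medium and bright pixels
bias = 0.05                            # a flat light pollution level

# Removing the bias before stretching recovers the true signal...
before = stretch(signal + bias - bias)
# ...but removing it after stretching badly distorts the faint end.
after = stretch(signal + bias) - stretch(bias)
```

The two results differ most for the faintest pixel, which is exactly where astrophotography lives.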
The "Tracking" feature constantly backward-propagates and forward-propagates your signal through processing "time" as needed. This means that "nonsensical" signal paths (e.g. signal paths that get sequences wrong) would break Tracking's ability to do so. Therefore, such signal paths are closed off. For this reason, it is nigh-impossible in StarTools to perform catastrophically destructive operations on your data; it simply wouldn't be sound mathematics and the code would break.
For example, the notion of processing in the linear domain versus the non-linear (stretched) domain is completely abstracted away, because the engine needs to manage it internally. If you don't yet know the difference between the two, you can get away with learning about this later. Even without knowing the ins and outs of astronomical signal processing, you can still produce great images from the get-go; StarTools takes care of the correct sequence.
So, whereas other software will happily (and incorrectly!) allow you to perform light pollution removal, color calibration or deconvolution after stretching, StarTools will...
...actually also let you do that, but with a twist!
Tracking will rewind and/or fast-forward to the right point in time, so that the signal flow makes sense and is mathematically consistent. It inserts the operation in the correct order and recalculates what the result would have looked like if your decision had always been the case. It's time travelling for image processing, where you can change the past to affect the present and future.
For an in-depth explanation of Tracking, see the Tracking section.
StarTools is a 64-bit optimized application for multi-core processors, requiring at least 6GB of available memory. For larger datasets, 16GB to 32GB may be required. Fast SSD access will greatly benefit the application. Always check for oversampling and bin down your dataset to a lower resolution where possible. Legacy 32-bit machines and operating systems are also still supported.
The single ZIP archive contains the executables for Windows, macOS and Linux. StarTools is a pure native application and does not rely on other frameworks.
Never download StarTools from anywhere else but startools.org. We do not allow distribution of StarTools by any other party, on-line or off-line. If you find a copy of StarTools not hosted on startools.org, please let us know.
Some macOS (e.g. Sierra and above) users may need to run;
xattr -dr com.apple.quarantine StarTools.app
to un-quarantine StarTools.
This command needs to be run from the folder where the StarTools application is located (you can use the 'cd' command to navigate to the right folder, while using the TAB key to auto-complete the path).
Alternatively StarTools can be launched via control + click on the application, Show Package Contents, navigating to Contents/MacOS and clicking on the application.
Apple has been making it increasingly difficult for independent developers to distribute applications. As of Sierra, you will need to follow the steps above to run StarTools.
StarTools 1.5.368 for Windows 32-bit, Windows 64-bit, MacOSX 64-bit, Linux 32-bit and Linux 64-bit (3.8MB)
Latest version released 2019-09-19 (YYYY-MM-DD)
StarTools 1.3.204 for Android 1.6+ Technology Demo (1.5 MB)
NOTE: Put any file you want to load in /sdcard root and name it 'file.tiff'
StarTools uses AIFE.AI for content management and digital footprint. This means that the website content doubles as a printable manual and vice-versa. This content is also available as a smartphone/tablet app, virtual flipbook, virtual reality (VR) experience and more.
These are some questions that get asked frequently.
StarTools is display-device agnostic, but can be configured to display its GUI at a 4x higher resolution to accommodate high-DPI devices and 4K displays.
To enable this mode, create an empty file called 'highdpi' (NOTE: without extension or file type) in the StarTools folder where the executable is launched from.
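If creating an extensionless file is awkward on your system (for example via Windows Explorer), a one-line script can do it. The folder path below is an assumption; replace it with the folder containing the StarTools executable:

```python
from pathlib import Path

# Create the empty, extensionless 'highdpi' marker file.
# '.' is a placeholder for your actual StarTools folder.
startools_dir = Path('.')
(startools_dir / 'highdpi').touch()
```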
Some less reputable virus scanners, such as BitDefender, Norton and SpyBot, may falsely report StarTools as a Trojan or Potentially Unwanted Program (due to malware that carries a similar name). Despite multiple users going to the lengths of getting StarTools whitelisted, the same problem pops up every 6 months or so.
Please see this post in the forums for more information.
If despite the above information you feel your StarTools download does indeed contain malware, please contact us as soon as possible.
The minimum specifications for a computer to run StarTools successfully depend mostly on the resolution of the data you intend to process.
Low resolution data sets (for example from a 1MP CCD or webcam) may be processed successfully on a Pentium IV with 512MB RAM.
High-resolution data sets, such as those from a DSLR, typically require at least 4GB of RAM.
For best results, 16GB and a modern 4-core CPU are recommended, in addition to running from a RAM disk (or alternatively a Solid State Drive).
Regardless of your machine's specification, consider binning your data if your data is oversampled.
As of 1.3.180 Beta, StarTools uses all cores that it can find to speed up your processing in situations where it makes sense.
Previous versions were capped to 4 cores max.
Please note that using multiple cores for tasks that are memory bus constrained, can actually have an adverse effect on performance, so you may find that not all algorithms and modules use all cores all of the time.
The 32-bit version is meant for older computers with less memory and/or a 32-bit Operating System.
The signal path is 32-bit for the 32-bit version, while the signal path is 64-bit for the 64-bit version, the latter being more precise but requiring twice the memory. Additionally, the 64-bit version makes use of the latest instruction sets (such as SSE) on the more modern CPUs to speed up processing tasks.
StarTools is a completely native, self-contained application that does not require any further installation of helper libraries or run-time frameworks.
Everything in StarTools was written from the ground up and has been hand-optimised, from the image processing algorithms to the UI library, from the file importing to the font renderers, from the multi-platform framework to the decompression routines. Why? Because we feel it is important to be master of our own destiny (and make you master of your own destiny by extension) and fundamentally understand each and every ingredient that goes into the mix.
Fundamentally understanding the different algorithms, optimisation techniques and data structures gives us the ability to push the boundaries and create truly novel techniques and algorithm implementations.
Please note that Linux users will still need X11, GLIB 2.15, zenity and wmctrl installed on their system.
As the 'buy' page explains, you can spare yourself the effort of writing a keygen or crack - if you can't afford the license fee and you are a genuine enthusiast, we're happy to work something out!
We're not some big evil company and we're not in it for the money. Heck, we make a loss on this all for the love of the hobby and are not even covering our costs as it is.
Besides, ST's release cycle is one of continuous updates - you'd be continuously waiting for the next crack or keygen in order to avail of the latest features and bug fixes (of which there can be several a month).
A StarTools license is currently priced at 65 AUD (approximately 50 USD, 45 EUR, or 40 GBP).
A 20% discount applies for group buys of 5 licenses or more.
Your license will be yours to keep. It will never expire and is guaranteed to work with any new version that is released within 2 years of the purchase date. You will not need an Internet connection and you are free to install StarTools on as many systems as you like, provided you own those systems and are an individual. If you are any other entity (business, organization, club, etc.), please contact us. Please see the EULA included in the download for further details. We're not a fan of heavy handed DRM systems and complicated activation procedures. We trust our users to do the right thing – your license key uniquely identifies you and that's good enough for us.
If you haven't done so, please evaluate the trial version for as long as you like before buying. It offers full functionality, with the exception of being able to save your work. This way you can be sure StarTools performs adequately on your system and suits your needs.
Lastly, the StarTools project is more about enabling astrophotography for as many people as possible, no matter how limited or advanced their means and equipment, than it is about making a profit - we just try to cover our costs. That said, if the price of a license really is an issue for you (self supported student, minor, pensioner, veteran, hard times, etc.), contact us and we'll try to work something out; we understand - we've been there. No need for cracks, keygens, etc.
Please allow 48 hours for us to process your order, as we manually generate the keys from your billing details and e-mail them to you as an attachment via your nominated PayPal e-mail address. Please make sure the e-mail address you have nominated for PayPal transactions is correct.
Please make sure your e-mail inbox is not full. If, despite repeated efforts, our e-mail with the license key attachment cannot be delivered, the full amount will be refunded. If we have not responded within 48 hours after payment, please check your Junk mail folder and contact us via e-mail or the contact form on the website.
Thank you for considering renewal of your StarTools update entitlement license!
Your continued support helps us improve StarTools with new tools and new algorithms, opening up your (and our!) wonderful hobby to more people around the world, regardless of their means.
A StarTools license renewal is currently priced at 29 AUD (approximately 20 USD, 18 EUR, or 17 GBP).
Renewals are checked against previous purchases. If your previous purchase cannot be found, renewal will fail and your renewal purchase will be refunded.
Please do contact us if you have special requirements, or if the pricing is an issue for you.
If you received a voucher for a StarTools license from a third party vendor, you can apply for your StarTools license by filling out this form.
For terms, conditions and processing times, please refer to the information under "buy".
Visit our friendly forum, full of hints, tips and tutorials at http://forum.startools.org
These are some helpful links and tutorials related to StarTools and other image processing resources.
You may also find it helpful to know that the icons in the top two panels roughly follow a recommended workflow.
Much of StarTools revolves around signal evolution Tracking from start to finish. As such, familiarising yourself with how it works is recommended to get the most out of your experience and your dataset.
This quick, 7-step guide gets you processing your first image with StarTools in no time at all.
This is a basic workflow showing how real-world, imperfect data from a DSLR can be processed in StarTools. The workflow details data prep, bias / gradient / light pollution removal, stretching, deconvolution, color calibration and noise reduction. Please see video description on YouTube for the actual datasets and other resources.
This video shows how processing a complex Hubble Space Telescope SHO dataset is virtually just as easy as processing a simple DSLR dataset in StarTools 1.5. Aside from activating the Compose module, your workflow and processing considerations are virtually the same. Please see video description on YouTube for datasets and other resources.
This is a very basic workflow using defaults, showing how the new Compose module (replacing the LRGB module in StarTools 1.5) makes complex LLRGB compositing and processing incredibly easy. The workflow details the usual data prep, bias/gradient removal, stretching, deconvolution, color calibration and noise reduction. You will notice this workflow is substantially similar to any other StarTools workflow, even though we are dealing with a complex composite of luminance, synthetic luminance, and color data all at once. Please see video description on YouTube for datasets and other resources.
This is a small selection of StarTools tutorials and resources, created by StarTools users.
A very popular, comprehensive tutorial titled "Processing a (noisy) DSLR image stack with StarTools" by Astro Blog Delta.
A great number of YouTube videos on StarTools are available from various users.
In-depth user notes, detailing modules, their parameters, use cases, hints and tips.
A utility to replay StarTools logs.
If you are looking for datasets from amateur astrophotographers to practice with, there are a number of useful resources.
Processing is meant to be fun! If you really need help with a particular dataset, jump on the forums or contact us directly for some pointers - even if you're just using the trial.
A great website with useful information and many datasets that are of a quality achievable by most people on a modest budget. Please note that most datasets will need to be converted to an uncompressed TIFF format.
If you have ImageMagick on your machine, you can use:
convert input.tiff -depth 16 +compress output.tiff
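If you prefer to script the conversion for many files, the same ImageMagick invocation can be wrapped in a small Python helper. This is just a sketch, not part of StarTools; it assumes ImageMagick's `convert` tool is installed and on your PATH, and the function names are our own:

```python
import shutil
import subprocess

def imagemagick_args(src, dst):
    """Build the ImageMagick command line that rewrites `src`
    as an uncompressed 16-bit TIFF named `dst`."""
    return ["convert", src, "-depth", "16", "+compress", dst]

def to_uncompressed_tiff(src, dst):
    """Run the conversion; raises if ImageMagick is not available."""
    if shutil.which("convert") is None:
        raise RuntimeError("ImageMagick 'convert' not found on PATH")
    subprocess.run(imagemagick_args(src, dst), check=True)
```

You could then loop `to_uncompressed_tiff` over a directory of downloaded datasets before opening them in StarTools.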
A fantastic collection of various deep space objects, imaged in HaLRGB by Jim Misti. Working with just the L (luminance) frames, before delving into HaLRGB combining, is a great way to learn the ropes.
Results are free to publish, as long as they are credited "Image acquisition by Jim Misti".
This Yahoo group is for help and tips in processing images captured with DSLR and One Shot Color CCD cameras of all brands.
StarTools was created to complement the many freely available stacking and pre-processing solutions with unique, state-of-the art post-processing functionality.
Some of these solutions provide basic post-processing functions as well. Please note that only pre-processing and stacking should be performed in these applications in order for signal evolution Tracking to work and achieve optimal results; Tracking cannot track signal and noise propagation that happened in other applications. Do not stretch, color calibrate, perform gradient removal, or perform any other operations beyond initial calibration in these applications.
"Simple but powerful", is the core philosophy of this Windows-only application.
DeepSkyStacker is Windows-only freeware software for astrophotographers, which aims to simplify all the pre-processing steps of deep sky images.
ASTAP, the Astrometric STAcking Program, is an astrometric solver and image stacker that also provides photometry and FITS viewing functionality. It is available for all platforms.
Regim makes some processing steps that are unique to astronomical images a bit easier. Regim is available for all platforms.
Siril is a feature-rich, free astronomical image processing suite with excellent pre-processing capabilities. It is available for all platforms.
Fitswork is a Windows image processing program, mainly designed for astronomical purposes.
You can convert everything you see to a format you find convenient. Give it a try!