StarTools is a powerful new type of image processing engine. It tracks your signal and its noise component as you process.
The result is less noise, more detail, greater ease of use, and advanced post-processing power not found in any other software.
StarTools is a new type of image processing application for astrophotography that tracks signal and noise propagation as you process.
By tracking signal and noise evolution during processing, it lets you effortlessly accomplish hitherto "impossible" feats like deconvolution of a heavily processed image, or pin-point accurate noise reduction without local supports or masks.
StarTools' extensive knowledge of the past, present and - sometimes - future of your signal allows you to do things users of other software can only dream of. These include mathematically correct deconvolution of heavily processed data, mathematically correct color calibration of stretched data, and objectively the best noise reduction routine on the market - one that seems to "just know" exactly where noise grain in your final image is located.
As opposed to other software, StarTools uses new brute force and data mining techniques, so your precious signal is preserved as much as possible till the very end. StarTools makes use of the advances in CPU power, RAM and storage space, replacing old algorithms with new, more powerful ones.
StarTools is not just popular with beginners. StarTools is the best-kept secret amongst signal processing purists; those who fundamentally understand how StarTools achieves such superior signal fidelity. Yet, you don't need a mathematics or physics degree to understand the underlying theory; see the Tracking section to learn more.
We are incredibly pleased that StarTools' superior processing capabilities haven't gone unnoticed; it is now the tool of choice for a rapidly growing group of beginners, enthusiasts, schools and institutions numbering in the many thousands.
The software is "user friendly by mathematical nature". To be able to function, the engine needs to be able to make mathematical sense of your signal flow from start to finish. That's why it is simply unable to perform "nonsensical" or destructive operations. This is great if you are a beginner, and it saves you from bad habits or sub-optimal decisions. It's not so much because we put "guard rails" in; it is just that the mathematics would break down otherwise.
StarTools aims to be as affordable as it is powerful. The StarTools project is about enabling astrophotography for as many people as possible, no matter how limited or advanced their means and equipment. As such, we aim to provide the most advanced image processing algorithms of any software at just a fraction of the price of traditional software.
StarTools comprises several modules with deep, state-of-the-art functionality that rival (and often improve on) other software packages.
Don't be fooled by StarTools' simple interface - you are forgiven if, at first glance, you get the impression StarTools offers only the basics. Nothing could be further from the truth!
StarTools goes deep. Very deep. It's just not 'in your face' about it and you can still get great results without delving into the depths of its capabilities. It's up to you.
If you're a seasoned photographer looking to get more out of your data, StarTools will allow you to visibly gain the edge with novel, brute-force techniques and data mining routines that have only just become viable on modern 64-bit multi-core CPUs and increases in RAM and storage space.
If you're a beginner, StarTools will assist you by making it easy to achieve great results out-of-the box, while you get to know the exciting field of astrophotography better.
Whatever your situation, skills, equipment and prior experience, you'll find that working with StarTools is quite a bit different than most software you've worked with. And in astrophotography, that tends to be a good thing!
Getting to grips with new software can be daunting, but StarTools was designed to make this as painless as possible. This quick, generic work flow will get you started.
While processing your first images with StarTools, it may help knowing that the icons in the top two panels roughly follow a recommended workflow when read top to bottom, left to right.
With a suitable dataset, workflows in StarTools are simple, replicable and short. Most modules are visited only once, with a clear purpose.
If you are familiar with other processing applications, you may be surprised with the seemingly erroneous mixing of modules that operate on linear vs non-linear data.
In StarTools, this important distinction is abstracted away, thanks to the signal evolution Tracking engine. In fact, it lets you do things, with ease, that are hard or impossible in other applications.
Open an image stack ("dataset"), fresh from a stacker. Make sure the dataset was stacked correctly, as StarTools, more than any other software, will not work (or work poorly) if the dataset is not stacked correctly or has been modified beforehand. Your dataset should be as "virgin" as possible, meaning unstretched, not colour balanced, not noise reduced and not deconvolved. Please consult the "starting with a good dataset" section in the "links & tutorials" section.
Upon opening an image, the Tracking dialog will open, asking you about the characteristics of the data. Choose the option that best matches the data being imported. If your dataset comes straight from a stacker, the first option is always safe. The second option may yield even better results if certain conditions are met. Depending on what you choose here, StarTools may work exclusively on the luminance (mono) part of your image, bringing in color later; StarTools is able to seamlessly process color and detail separately (yet simultaneously).
Tracking is now engaged (the Track button is lit up green). This means that StarTools is now monitoring how your signal (and its noise component) is transformed as you process it.
Counter-intuitively, a good stacker output will, once imported, show a distinct, heavy color bias with little or no apparent detail. Worry not; subsequent processing in StarTools will remove the color bias, while restoring and bringing out detail. If, looking at the initial image, you are wondering how on earth this will be turned into a nice picture, you are often on the right track.
Launch AutoDev to help inspect the data. Chances are that the image looks terrible, which is - believe it or not - the point. In the presence of problems, AutoDev will show them until they are dealt with. Because StarTools constantly tries to make sense of your data, StarTools is very sensitive to artefacts, meaning anything that is not real celestial detail (a single color bias, stacking artefacts, dust donuts, gradients, terrestrial scenery, etc.). Just 'Keep' the result. StarTools, thanks to Tracking, will allow us to redo the stretch later on.
At this point, things to look out for are;
•Stacking artefacts close to the borders of the image. These are dealt with in the Crop or Lens modules.
•Bias or gradients (such as light pollution or skyglow). These are dealt with in the Wipe module.
•Oversampling (meaning the finest detail, such as small stars, being "smeared out" over multiple pixels). This is dealt with in the Bin module.
•Coma or elongated stars towards one or more corners of the image. These can be ameliorated using the Lens module.
Make mental notes of any issues you see.
Fix the issues that AutoDev has brought to your attention;
1. Ameliorate coma using the Lens module.
2. Crop any remaining stacking artefacts.
3. Bin the image up until each pixel describes one unit of real detail.
4. Wipe gradients and bias away. Be very mindful of any dark anomalies - bump up the Dark Anomaly filter if dealing with small ones (such as dark pixels) or mask big ones (such as large dust donuts) out using the Mask editor.
The importance of binning your dataset cannot be overstated. It will trade "useless" resolution for improved signal, making your dataset much quicker and easier to process, while allowing you to pull out more detail.
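As a generic illustration of why binning helps (this is not StarTools' actual Bin implementation), averaging each 2x2 block of pixels halves the resolution but also roughly halves uncorrelated noise:

```python
import numpy as np

# Hypothetical sketch of 2x2 software binning: the generic principle of
# trading resolution for signal-to-noise, not StarTools' own algorithm.
def bin2x2(img):
    """Average each 2x2 block of pixels into one output pixel."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

# Demonstration: averaging 4 pixels cuts uncorrelated noise by sqrt(4) = 2x.
rng = np.random.default_rng(0)
flat = np.full((100, 100), 100.0)                 # noiseless "signal"
noisy = flat + rng.normal(0.0, 8.0, flat.shape)   # add sigma = 8 noise
binned = bin2x2(noisy)
print(noisy.std(), binned.std())                  # noise drops by about 2x
```

The same principle extends to fractional binning; the point is simply that "useless" resolution is converted into cleaner signal.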
Once all issues are fixed, launch AutoDev again and tell it to 'redo' the stretch. If all is well, AutoDev will now create a histogram stretch that is optimised for the "real" object(s) in your cleaned-up dataset.
If your dataset is very noisy, it is possible AutoDev will optimise for the fine noise grain, mistaking it for real detail. In this case you can tell it to Ignore Fine detail.
If your object(s) reside on an otherwise uninteresting or "empty" background, you can tell AutoDev where the interesting bits of your image are by clicking & dragging a Region Of Interest ("RoI"). There is no shame in trying multiple RoIs. AutoDev will keep solving for a global stretch that best shows the detail in your RoI.
Don't worry about the coloring just yet (if it is even visible) - focus on getting the detail out of your data first. If your image shows very bright highlights, know that you can "rescue" them later on using, for example, the HDR module.
Season your image to taste. Dig out detail with the Wavelet Sharpen ('Sharp') module, enhance Contrast with the Contrast module and fix any dynamic range issues with the HDR module.
Next, you can often restore blurred-out detail (for example due to an unstable atmosphere) using the easy-to-use Decon (deconvolution) module.
There are many ways to enhance detail to taste and much depends on what you feel is most important to bring out in your image. As opposed to other software, however, you don't need to be as concerned with noise grain propagation; StarTools will take care of noise grain when you finally switch Tracking off.
Launch the Color module.
See if StarTools comes up with a good colour balance all by itself. A good colour balance shows a good range of all star temperatures, from red, orange and yellow through to white and blue. HII areas will tend to look purplish/pink, while galaxy cores tend to look yellow and their outer rims tend to look bluer.
Green is an uncommon colour in outer space (though there are notable exceptions, such as areas that are strong in OIII such as the core of M42). If you see green dominance, you may want to reduce the green bias. If you think you have a good colour balance, but still see some dominant green in your image, you can remove the last bit of green using the 'Cap Green' function.
StarTools is famous for its Color Constancy color rendering. This scientifically useful mode shows colors (for example nebula emissions) in the same color, regardless of brightness. However, if you prefer the more washed out and desaturated color renderings of older software you can use the Legacy preset.
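The idea behind a brightness-invariant ("Color Constancy") rendering can be sketched as follows: if every channel of a pixel is scaled by the same factor that the luminance stretch applied, the R:G:B ratios - and hence the perceived hue - survive the stretch regardless of brightness. This is purely an illustration of the concept, not StarTools' implementation; the luminance estimate and stretch below are arbitrary stand-ins:

```python
import numpy as np

def stretch(l):
    # any non-linear luminance stretch; square root is just an example
    return np.sqrt(l)

def color_constant_stretch(rgb):
    lum = rgb.mean()                   # simplistic luminance estimate
    return rgb * (stretch(lum) / lum)  # same gain for all three channels

# Two pixels of the same emission (same R:G:B ratio) at different brightness:
faint  = np.array([0.02, 0.005, 0.01])
bright = np.array([0.32, 0.08, 0.16])   # 16x brighter, identical ratios

a = color_constant_stretch(faint)
b = color_constant_stretch(bright)
print(a / a.sum(), b / b.sum())  # identical channel ratios: the hue survives
```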
Switch Tracking off and apply noise reduction. You will now see what all the "signal evolution Tracking" fuss is about, as StarTools seems to know exactly where the noise exists in your image, snuffing it out.
As of StarTools 1.6, you get the choice of two noise reduction styles. One removes noise grain entirely, the other shapes it (some astrophotographers prefer a measure of equalized noise grain for a filmic effect). Which one you choose is down to your personal taste, as well as the noisiness of your dataset.
Enjoy your final image!
If you find that, despite your best efforts, you cannot get a significantly better result in StarTools than in any (yes any!) other software, please contact us.
A video is also available that shows a simple, short processing workflow of a real-world, imperfect dataset.
Please refer to the video description below the video for the source data and other helpful links.
Navigation within StarTools generally takes place between the main screen and the different modules. StarTools' navigation was written to provide a fast, predictable and consistent work flow.
There are no windows that overlap, obscure or clutter the screen. Where possible, feedback and responsiveness will be immediate. Many modules in StarTools offer on-the-spot background processing, yielding quick final results for evaluation and further tweaking.
In some modules a preview area can be specified in order to get a better idea of how settings would modify the image in a particular area, saving the user from waiting for the whole image to be re-calculated.
In both the main screen and the different modules, a toolbar is found at the very top, with buttons that perform functionality that is specific to the active module. In case of the main screen, this toolbar contains buttons for opening an image, saving an image, undoing/redoing the last operation, invoking the mask editor, switching Tracking mode on/off, restoring the image to a particular state, and opening an 'about' dialog.
Exclusive to the main screen, the buttons that activate the different modules reside on the left-hand side. Note that the modules will only successfully activate once an image has been loaded, with the exception of the 'Compose' module. Note also that some modules may remain unavailable, depending on whether Tracking mode is engaged.
Helpfully, the buttons are roughly arranged in a recommended workflow. Obviously not all modules need to be visited and workflow deviations may be needed, recommended or suit your personal taste better.
Consistent throughout StarTools, a set of zoom control buttons are found in the top right corner, along with a zoom percentage indicator.
Panning controls ('scrollbar style') are found below and to the right of the image, as appropriate, depending on whether the image at its current zoom level fits in the application window.
Common to most modules is a 'Before/After' button, situated next to the zoom controls, which toggles between the original and processed version of an image for easy comparison. A "PreTweak/PostTweak" button may also be available, which toggles between the current and previous result, allowing you to quickly spot the difference between two different settings.
All modules come with a 'Help' button in the toolbar, which explains, in brief, the purpose of the module. Furthermore, all settings and parameters come with their own individual 'Help' buttons, situated to the right of the parameter control. These help buttons explain, again in brief, the nature of the parameter or setting.
Even the way StarTools displays and scales images has been created specifically for astrophotography.
StarTools implements a custom scaling algorithm in its user interface, which makes sure that perceived noise levels stay constant, no matter the zoom level. This way, nasty noise surprises when viewing the image at 100% are avoided.
Even more cleverly, StarTools' scaling algorithm can highlight latent and faint patterns (often indicating stacking problems or acquisition errors) by intentionally causing an aliasing pattern at different zoom levels in the presence of such patterns.
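To see why ordinary image viewers can mislead, consider this toy comparison (not StarTools' proprietary scaler): standard box-filter downscaling averages pixels and therefore understates noise at reduced zoom, whereas a nearest-neighbour resampling preserves the grain statistics:

```python
import numpy as np

# A pure-noise field; sigma = 0.05 is what you would see at 100% zoom.
rng = np.random.default_rng(1)
img = rng.normal(0.5, 0.05, (200, 200))

# Naive 50% zoom: average 2x2 blocks (typical box resampling).
half = img.reshape(100, 2, 100, 2).mean(axis=(1, 3))

# A noise-preserving 50% zoom: keep one representative pixel per block
# (nearest-neighbour), so the grain statistics stay intact.
nn = img[::2, ::2]

print(img.std(), half.std(), nn.std())
# the box-filtered view understates the noise; nearest-neighbour does not
```

This is why a "clean-looking" fit-to-screen view can hide a nasty surprise at 100% in conventional viewers.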
The parameters in the different modules are typically controlled by one of two types of controls;
1. A level setter, which allows the user to quickly set the value of a parameter within a certain range.
2. An item selector, which allows the user to switch between different modes.
Setting the value represented in a level setter control is accomplished by clicking on the '+' and '-' buttons to increment or decrement the value respectively. Alternatively, you can click anywhere in the area between the '-' and '+' buttons to set a value quickly.
Switching items in the item selector is accomplished by clicking the arrows at either end of the item description. Note that the arrows may disappear as the first or last item in a set of items is reached. Alternatively the user may click on the label area of the item selector to see the full range of items which may then be selected from a pop-over menu.
As of version 1.5, StarTools implements some hotkeys for common functions;
+ or = key
D or ENTER key
ESC key or ENTER key
Signal evolution Tracking data mining plays a very important role in StarTools and understanding it is key to achieving superior results with StarTools.
As soon as you load any data, StarTools will start Tracking the evolution of every pixel in your image, constantly keeping track of things like noise estimates, parameters you use and other statistics.
Tracking makes workflows much less linear and allows for StarTools' engine to "time travel" between different versions of the data as needed, so that it can insert modifications or consult the data in different points in time as needed ('change the past for a new present and future'). It's the primary reason why there is no difference between linear and non-linear data in StarTools, and the reason why you can do things in StarTools that would have otherwise been nonsensical (like deconvolution after stretching your data). If you're not familiar with Tracking and what it means for your images, signal fidelity and simplification of the workflow & UI, please do read up on it!
Tracking how you process your data also allows the noise reduction routines in StarTools to achieve superior results. By the time you get to your end result, the Tracking feature will have data-mined/pin-pointed exactly where (and how much) visible noise grain exists in your image. It therefore 'knows' exactly how much noise reduction to apply in each area of your image.
Noise reduction is applied at the very end, as you switch Tracking off, because doing it at the very last possible moment will have given StarTools the longest possible amount of time to build and refine its knowledge of where the noise is in your image. This is different from other software, which allow you to reduce noise at any stage, since such software does not track signal evolution and its noise component.
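The principle behind tracking a signal's noise component can be sketched with standard first-order error propagation; this is only an illustration of the general idea, not StarTools' actual engine. If a pixel value x with noise sigma passes through a stretch f, its noise becomes roughly |f'(x)| * sigma, so faint, strongly stretched regions end up far noisier than bright ones - exactly the knowledge a tracker can hand to a final noise reduction pass:

```python
import numpy as np

def stretch(x, k=50.0):
    """A simple non-linear stretch (arcsinh, normalised to map 1.0 -> 1.0)."""
    return np.arcsinh(k * x) / np.arcsinh(k)

def stretch_deriv(x, k=50.0):
    """Analytic derivative of the stretch above."""
    return (k / np.sqrt(1.0 + (k * x) ** 2)) / np.arcsinh(k)

sigma_in = 0.01                       # per-pixel noise before stretching
for x in (0.01, 0.1, 0.5):            # faint, mid and bright pixel values
    sigma_out = abs(stretch_deriv(x)) * sigma_in
    print(x, sigma_out)               # faint pixels end up much noisier
```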
Tracking how you processed your data also allows the Color module to calculate and reverse how the stretching of the luminance information has distorted the color information (such as hue and saturation) in your image, without having to resort to 'hacks'. Due to this capability, color calibration is best done at the end as well, before switching Tracking off. This too is different from other software, which wants you to do your colour calibration before doing any stretching, since it cannot deal with colour correction after the signal has been non-linearly transformed the way StarTools can.
The knowledge that Tracking gathers is used in many other ways in StarTools. The nice thing about Tracking, however, is that it is very unobtrusive. In fact, it actually helps you get better results from your data in less time by homing in on parameters in the various modules that it thinks are good defaults, given what Tracking has learnt about your data.
StarTools keeps a detailed log of what modules and parameters you used. This log file is located in the same folder as the StarTools executable and is named StarTools.log.
As of the 1.4 beta versions, this log also includes the mask you used, encoded in base64 format. See the documentation on masks on how to easily decode the base64 if needed.
The Mask feature is an integral part of StarTools. Many modules use a mask to operate on specific pixels and parts of the image, leaving other parts intact.
Importantly, besides operating only on certain parts of the image, it allows the many modules in StarTools to perform much more sophisticated operations.
You may have noticed that when you launch a module that is able to apply a mask, the pixels that are set in the mask will flash three times in green. This is to remind you which parts of the image will be affected by the module and which are not. If you just loaded an image, all pixels in the whole image will be set in the mask, so every pixel will be processed by default. In this case, when you launch a module that is able to apply a mask, the whole image will flash in green three times.
Green coloured pixels in the mask are considered 'on'. That is to say, they will be altered/used by whatever processing is carried out by the module you chose. 'Off' pixels (shown in their original colour) will not be altered or used by the active module. Again, please note that, by default all pixels in the whole image are marked 'on' (they will all appear green).
For example, an 'on' pixel (green coloured) in the Sharp module will be sharpened, in the Wipe module it will be sampled for gradient modelling, in Synth it will be scanned for being part of a star, in Heal it will be removed and healed, in Layer it will be layered on top of the background image, etc.
•If a pixel in the mask is 'on' (coloured green), then that pixel is fed to the module for processing.
•If a pixel in the mask is 'off' (shown in its original colour), then the module is told to keep the pixel as-is: hands off, do not touch or consider.
The Mask Editor is accessible from the main screen, as well as from the different modules that are able to apply a mask. The button to launch the Mask Editor is labelled 'Mask'. When launching the Mask Editor from a module, pressing the 'Keep' or 'Cancel' buttons will return StarTools to the module you pressed the 'Mask' button in.
As with the different modules in StarTools, the 'Keep' and 'Cancel' buttons work as expected; 'Keep' will keep the edited Mask and return, while 'Cancel' will revert to the Mask as it was before it was edited and return.
As indicated by the 'Click on the image to edit mask' message below the image, clicking on the image will allow you to create or modify a Mask. What actually happens when you click the image depends on the selected 'Brush mode'. While some of the 'Brush modes' seem complex in their workings, they are quite intuitive to use.
Apart from different brush modes to set/unset pixels in the mask, various other functions exist to make editing and creating a Mask even easier;
•The 'Save' button allows you to save the current mask to a standard TIFF file that shows 'on' pixels in pure white and 'off' pixels in pure black.
•The 'Open' button allows you to import a Mask that was previously saved by using the 'Save' button. Note that the image that is being opened to become the new Mask needs to have the same dimensions as the image the Mask is intended for. Loading an image that has values between black and white will designate any shades of gray closest to white as 'on', and any shades of gray closest to black as 'off'.
•The 'Auto' button is a very powerful feature that allows you to automatically isolate features.
•The 'Clear' button turns off all green pixels (i.e. it deselects all pixels in the image).
•The 'Invert' button turns on all pixels that are off, and turns off all pixels that were on.
•The 'Shrink' button turns off all the green pixels that have a non-green neighbour, effectively 'shrinking' any selected regions.
•The 'Grow' button turns on any non-green pixel that has a green neighbour, effectively 'growing' any selected regions.
•The 'Undo' button allows you to undo the last operation that was performed.
NOTE: To quickly turn on all pixels, click the 'clear' button, then the 'invert' button.
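The save/load convention described above can be modelled in a few lines (this is an interpretation of the description, not StarTools' actual file-handling code): 'on' pixels are stored as pure white, 'off' pixels as pure black, and on loading, intermediate grays snap to whichever extreme they are closest to:

```python
import numpy as np

def mask_to_image(mask):
    """Boolean mask -> 8-bit grayscale image (on = 255 white, off = 0 black)."""
    return np.where(mask, 255, 0).astype(np.uint8)

def image_to_mask(img):
    """8-bit grayscale image -> boolean mask; grays snap to nearest extreme."""
    return img >= 128   # closer to white than to black means 'on'

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                  # a small 'on' blob
img = mask_to_image(mask)
print(np.array_equal(image_to_mask(img), mask))   # the mask round-trips

gray = np.array([[10, 200], [127, 128]], dtype=np.uint8)
print(image_to_mask(gray))             # grays resolve to off/on by proximity
```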
Different 'Brush modes' help in quickly selecting (and de-selecting) features in the image.
For example, while in 'Flood fill lighter pixels' mode, try clicking next to a bright star or feature to select it. Click anywhere on a clump of 'on' (green) pixels, to toggle the whole clump off again.
The mask editor has 10 'Brush modes';
•Flood fill lighter pixels; use it to quickly select an adjacent area that is lighter than the clicked pixel (for example a star or a galaxy). Specifically, clicking a non-green pixel will, starting from the clicked pixel, recursively fill the image with green pixels until either all neighbouring pixels of a particular pixel are already filled (on/green), or the pixel under evaluation is darker than the original pixel clicked. Clicking on a green pixel will, starting from the clicked pixel, recursively turn off any green pixels until it can no longer find any green neighbouring pixels.
•Flood fill darker pixels; use it to quickly select an adjacent area that is darker than the clicked pixel (for example a dust lane). Specifically, clicking a non-green pixel will, starting from the clicked pixel, recursively fill the image with green pixels until either all neighbouring pixels of a particular pixel are already filled (on/green), or the pixel under evaluation is lighter than the original pixel clicked. Clicking on a green pixel will, starting from the clicked pixel, recursively turn off any green pixels until it can no longer find any on/green neighbouring pixels.
•Single pixel toggle; clicking a non-green pixel will turn it green, while clicking a green pixel will turn it non-green. It is a simple toggle operation for single pixels.
•Single pixel off (freehand); clicking or dragging while holding the mouse button down will turn off pixels. This mode acts like a single pixel "eraser".
•Similar color; use it to quickly select an adjacent area that is similar in color.
•Similar brightness; use it to quickly select an adjacent area that is similar in brightness.
•Line toggle (click & drag); use it to draw a line from the start point (when the mouse button was first pressed) to the end point (when the mouse button was released). This mode is particularly useful to trace and select satellite trails, for example for healing out using the Heal module.
•Lasso; toggles all the pixels confined by a convex shape that you can draw in this mode (click and drag). Use it to quickly select or deselect circular areas by drawing their outline.
•Grow blob; grows any contiguous area of adjacent pixels by expanding their borders into the nearest neighbouring pixels. Use it to quickly grow an area (for example a star core) without disturbing the rest of the mask.
•Shrink blob; shrinks any contiguous area of adjacent pixels by withdrawing their borders into the nearest neighbouring pixel that is not part of a border. Use it to quickly shrink an area without disturbing the rest of the mask.
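The 'Flood fill lighter pixels' behaviour described above can be sketched as a breadth-first flood fill that stops at pixels darker than the one clicked. This is a simplified reading of the description (the real brush may differ in details such as connectivity), intended only to make the stopping rule concrete:

```python
from collections import deque

def flood_fill_lighter(img, y, x):
    """Select, from (y, x), all connected pixels at least as bright as it."""
    h, w = len(img), len(img[0])
    clicked = img[y][x]
    mask = [[False] * w for _ in range(h)]
    mask[y][x] = True
    queue = deque([(y, x)])
    while queue:
        cy, cx = queue.popleft()
        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx] \
                    and img[ny][nx] >= clicked:   # darker pixels stop the fill
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask

# A faint background (value 1) with a bright star (values 5..9) in the middle:
img = [[1, 1, 1, 1, 1],
       [1, 5, 7, 5, 1],
       [1, 7, 9, 7, 1],
       [1, 5, 7, 5, 1],
       [1, 1, 1, 1, 1]]
sel = flood_fill_lighter(img, 1, 1)        # click the star's edge (value 5)
print(sum(v for row in sel for v in row))  # prints 9: the whole 3x3 star
```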
The powerful 'Auto' function quickly and autonomously isolates features of interest such as stars, noise, hot or dead pixels, etc.
For example, isolating just the stars in an image is a necessity for obtaining any useful results from the 'Decon' and 'Magic' modules.
The type of features to be isolated is controlled by the 'Selection Mode' parameter;
•Light features + highlight > threshold; a combination of two selection algorithms. One is the simpler 'Highlight > threshold' mode, which selects any pixel whose brightness is brighter than a certain percentage of the maximum value (see the 'Threshold' parameter below). The other is 'Light features', which selects high frequency components in an image (such as stars, gas knots and nebula edges), up to a certain size (see 'Max feature size' below) and depending on a certain sensitivity (see 'Filter sensitivity' below). This mode is particularly effective for selecting stars. Note that if the 'Threshold' parameter is kept at 100%, this mode produces results that are identical to the 'Light features' mode.
•Light features; selects high frequency components in an image (such as stars, gas knots and nebula edges), up to a certain size (see 'Max feature size') and depending on a certain sensitivity (see 'Filter sensitivity').
•Highlight > threshold; selects any pixel whose brightness is brighter than a certain percentage of the maximum (e.g. pure white) value. If you find this mode does not select bright stars with white cores that well, open the 'Levels' module and set the 'Normalization' a few pixels higher. This should make light features marginally brighter and dark features marginally darker.
•Dead pixels color/mono < threshold; selects dark high frequency components in an image (such as star edges, halos introduced by over-sharpening, nebula edges and dead pixels), up to a certain size (see 'Max feature size' below), depending on a certain sensitivity (see 'Filter sensitivity' below) and whose brightness is darker than a certain percentage of the maximum value (see the 'Threshold' parameter below). It then further narrows down the selection by looking at which pixels are likely the result of CCD defects (dead pixels). Two versions are available, one for color images, the other for mono images.
•Hot pixels color/mono > threshold; selects high frequency components in an image up to a certain size (see 'Max feature size' below) and depending on a certain sensitivity (see 'Filter sensitivity' below). It then further narrows down the selection by looking at which pixels are likely the result of CCD defects or cosmic rays (also known as 'hot' pixels). The 'Threshold' parameter controls how bright hot pixels need to be before they are potentially tagged as 'hot'. Note that a 'Threshold' of less than 100% needs to be specified for this mode to have any effect.
•Noise fine; selects all pixels that are likely affected by significant amounts of noise. Please note that other parameters such as the 'Threshold', 'Max feature size', 'Filter sensitivity' and 'Exclude color' have no effect in this mode. Two versions are available, one for color images, the other for mono images.
•Noise; selects all pixels that are likely affected by significant amounts of noise. This algorithm is more aggressive in its noise detection and tagging than 'Noise fine'. Please note that other parameters such as the 'Threshold', 'Max feature size', 'Filter sensitivity' and 'Exclude color' have no effect in this mode.
•Dust & scratches; selects small specks of dust and scratches as found on old photographs. Only the 'Threshold' parameter is used, and a very low value for the 'Threshold' parameter is needed.
•Edges > threshold; selects all pixels that are likely to belong to the edge of a feature. Use the 'Threshold' parameter to set sensitivity, where lower values make the edge detector more sensitive.
•Horizontal artifacts; selects horizontal anomalies in the image. Use the 'Max feature size' and 'Filter sensitivity' parameters to throttle the aggressiveness with which the detector detects the anomalies.
•Vertical artifacts; selects vertical anomalies in the image. Use the 'Max feature size' and 'Filter sensitivity' parameters to throttle the aggressiveness with which the detector detects the anomalies.
•Radius; selects a circle, starting from the centre of the image going outwards. The 'Threshold' parameter defines the radius of the circle, where 100.00 covers the whole image.
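As a rough illustration of how a hot-pixel detector of this kind can work (StarTools' actual algorithm is not published, so this is a generic stand-in), a pixel can be flagged when it is both above the brightness threshold and far brighter than every one of its neighbours; this singles out isolated defects while sparing extended features such as star cores:

```python
import numpy as np

def hot_pixel_mask(img, threshold=0.8, excess=0.3):
    """Flag pixels above 'threshold' that exceed all 8 neighbours by 'excess'."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    # maximum over the 8 surrounding pixels (the centre offset is skipped)
    offsets = [(dy, dx) for dy in range(3) for dx in range(3) if (dy, dx) != (1, 1)]
    nmax = np.stack([padded[dy:dy + h, dx:dx + w] for dy, dx in offsets]).max(axis=0)
    return (img > threshold) & (img - nmax > excess)

img = np.full((7, 7), 0.1)
img[0:3, 0:3] = 0.9     # a bright but extended feature (e.g. a star core)
img[5, 5] = 1.0         # an isolated hot pixel
mask = hot_pixel_mask(img)
print(mask[5, 5], mask[1, 1])   # only the isolated pixel is flagged
```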
Some of the selection algorithms are controlled by additional parameters;
•Exclude color; tells the selection algorithms not to evaluate specific colour channels when looking for features. This is particularly useful if you have a predominantly red, purple and blue nebula with white stars in the foreground and, say, you want to select only the stars. By setting 'Exclude color' to 'Purple (red + blue)', you are able to tell the selection algorithms to leave features in the nebula alone (since these features are most prominent in the red and blue channels). This greatly reduces the amount of false positives.
•Max feature size; specifies the largest size of any feature the algorithm should expect. If you find that stars are not correctly detected and only their outlines show up, you may want to increase this value. Conversely, if you find that large features are being inappropriately tagged and your stars are small (for example in wide field images), you may reduce this value to reduce false positives.
•Filter sensitivity; specifies how sensitive the selection algorithms should be to local brightness variations. A lower value signifies a more aggressive setting, leading to more features and pixels being tagged.
•Threshold; specifies a percentage of full brightness (i.e. pure white) below, or above, which a selection algorithm should detect features.
Finally, the 'Source' parameter selects the source data the Auto mask generator should use. Thanks to StarTools' Tracking functionality which gives every module the capability to go "back in time", the Auto mask generator can use either the original 'Linear' data (perfect for getting at the brightest star cores) or the data as you see it right now.
As of the 1.4 beta versions, StarTools stores the masks you used in the StarTools.log file.
This StarTools.log file is located in the same folder as the executables. The masks are encoded as BASE64 PNG images. To convert the BASE64 text into loadable PNG images, you can use any online (or offline) BASE64 converter tool.
One online tool for BASE64 is Motobit Software's BASE64 encoder/decoder.
Simply paste the BASE64 code into the text box, select the 'decode the data from a Base64 string (base64 decoding)' radio button, as well as the 'export to a binary file, filename:' radio button. Name the file, for example, "mask.png", and click the 'convert the source data' button.
This should result in a download of the mask as a PNG file.
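If you prefer to script this step, the same decode can be done with Python's standard library. The `mask_b64` string below is a shortened stand-in; paste the full BASE64 text from your own StarTools.log in its place.

```python
import base64

# BASE64 text copied from StarTools.log; shortened stand-in shown here.
# (The real string will be thousands of characters long.)
mask_b64 = "iVBORw0KGgo="

# Decode the text back into binary PNG data and write it to disk
png_bytes = base64.b64decode(mask_b64)
with open("mask.png", "wb") as f:
    f.write(png_bytes)
```

The resulting mask.png can then be loaded back into the Mask editor.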
AutoDev is an advanced image stretching solution that relies on detail analysis, rather than the simple non-linear transformation functions from yesteryear.
To be exact, in StarTools, Histogram Transformation Curves (DDP, Levels and Curves, ArcSinH stretch, MaskedStretch etc.) are considered obsolete and non-optimal; AutoDev instead uses robust, controllable image analysis to achieve better, more objective results in a more intuitive way.
When data is acquired, it is recorded in a linear form, corresponding to raw photon counts. To make this data suitable for human consumption, stretching it non-linearly is required. Historically, simple algorithms were used to emulate the non-linear response of photographic paper by modelling its non-linear transformation curve. Later, in the 1990s, because dynamic range in outer space varies greatly, 'levels and curves' tools allowed imagers to create custom histogram transformation curves that better matched the object being imaged, so that the largest amount of detail became visible in the stretched image.
Creating these custom curves was a highly laborious and subjective process. And, unfortunately, in many software packages this is still the situation today. The result is almost always sub-optimal dynamic range allocation, leading to detail loss in the shadows (leaving recoverable detail unstretched), shrouding interesting detail in the midtones (by not allocating it enough dynamic range), or blowing out stars (by failing to leave enough dynamic range for the stellar profiles). Working on badly calibrated screens can exacerbate the problem of subjectively allocating dynamic range with more primitive tools.
StarTools' AutoDev module uses image analysis to find the optimum custom curve for the characteristics of the data. By actively looking for detail in the image, AutoDev autonomously creates a custom histogram curve that best allocates the available dynamic range to the scene, taking into account all aspects and detail. As a consequence, the need for local HDR manipulation is minimised.
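For contrast, here is what a fixed 'one-curve-fits-all' stretch from the pre-analysis era looks like; a minimal Python/NumPy sketch of an arcsinh stretch (illustrative only - it does not show AutoDev's analysis-driven curve generation, which is proprietary):

```python
import numpy as np

def asinh_stretch(linear, softening=0.01):
    """A classic fixed non-linear stretch curve (NOT AutoDev's analysis-driven
    approach): every pixel passes through the same arcsinh curve, regardless
    of where detail actually lives in the image."""
    x = np.clip(linear, 0.0, 1.0)
    return np.arcsinh(x / softening) / np.arcsinh(1.0 / softening)

# Faint background signal (0.01) vs a bright star core (0.9), in linear units
stretched = asinh_stretch(np.array([0.01, 0.9]))
print(stretched)  # the faint signal is boosted far more than the bright core
```

Such a curve applies the same compromise everywhere; AutoDev's contribution is choosing the curve shape from what it detects in the image, rather than from a fixed formula.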
AutoDev is in fact so good at its job that it is also one of the most important tools in StarTools for initial data inspection. Using AutoDev as one of the first modules on your data will see it bring out problems in the data, such as stacking artefacts, gradients, bias, dust donuts, and more. Precisely per its design goal, its objective dynamic range allocation will bring out such defects so they can be corrected.
Upon removal and/or mitigation of these problems, AutoDev may then be used to stretch the cleaned up data, bringing out detail across the entire dynamic range equally.
To be able to detect detail, AutoDev has a lot of smarts behind it. Its main detail detection algorithm analyses a Region of Interest ("RoI") - by default the whole image - so that it can find the optimum histogram transformation curve based on what it "sees".
Understanding AutoDev on a basic level is simple: its goal is to look at what is in your image and to make sure as much of it as possible is visible, just as a human using traditional tools would look at what is in the image and try to approximate the optimal histogram transformation curve by eye.
The problem with a histogram transformation curve (aka 'global stretch') is that it affects all pixels in the image. So, what works in one area (bringing out detail in the background) may not necessarily work in another (for example, it may make a medium-brightness DSO core harder to see). Therefore it is important to understand that - fundamentally - globally stretching the image is always a compromise. AutoDev's job, then, is to find the best-compromise global curve, given what detail is visible in your image and your preferences. Fortunately, we have other tools like the Contrast, Sharp and HDR modules to 'rescue' all detail by optimising for local dynamic range on top of global dynamic range.
Being able to show all things in your image equally well is a really useful feature, as it makes AutoDev very adept at finding artefacts or anything else in your image that is not real celestial detail but requires attention. That is why AutoDev is also extremely useful to launch as the first thing after loading an image, to see what - if any - issues need addressing before proceeding. If there are any, AutoDev is virtually guaranteed to show them to you. After fixing such issues (for example using the Crop, Wipe, Band or other modules), we can go on to use AutoDev's skills for showing the remaining (this time real celestial) detail in the image.
If most of the image consists of a background and just a small object of interest, by default AutoDev will weigh the importance of the background higher (since it covers a much larger part of the image vs the object). This is understandable and neatly demonstrates its behavior. It will always look for the best compromise stretch to show the entire Region of Interest ("RoI" - by default the entire image). This also means that if the background is noisy, it will start digging out the noise, taking it as "fine detail" that needs to be "brought out". If this behaviour is undesirable, there are a couple of things you can do in AutoDev.
1. Change the 'Ignore Fine Detail <' parameter, so that AutoDev no longer detects fine detail (such as noise grain).
2. Tell it what to focus on instead by specifying a Region of Interest ('RoI'), giving the area outside the RoI only a small say (via the 'Outside RoI Influence' parameter).
You will find that, as you include more background around the object, AutoDev, as expected, starts to optimise more and more for the background and less for the object. To use the RoI effectively, give it a "sample" of the important bit of the image. This can be a whole object, or it can be just a slice of the object that is a good representation of what is going on in the object in terms of detail. You can, for example, use a slice of a galaxy from the core, through the dust lanes, to the faint outer arms. There is no shame in trying a few different RoIs in order to find one you're happy with. Whatever the case, the result will be more optimal and objective than pulling at histogram curves.
There are two ways of further influencing the way the detail detector "sees" your image;
• The 'Detector Gamma' parameter applies, for values other than 1.0, a non-linear stretch to the image prior to passing it to the detector. That is, the detector will "see" a darker or brighter image and create a curve that suits that image, rather than the real image. This makes the detector proportionally more (< 1.0) or less (> 1.0) sensitive to detail in the highlights. Conversely, it makes the detector less (< 1.0) or more (> 1.0) sensitive to detail in the shadows. The effect can be thought of as a "smart" gamma correction. Note that tweaking this parameter will, by virtue of its skewing effect, cause the resulting stretch to no longer be optimal.
• The 'Shadow Linearity' parameter specifies the amount of linearity that is applied in the shadows, before non-linear stretching takes over. Higher amounts have the effect of allocating more dynamic range to the shadows and background.
In AutoDev, you're controlling an impartial and objective detail detector, rather than a subjective and hard to control (especially in the highlights) bezier/spline curve.
Having something impartial and objective taking care of your initial stretch is very valuable, as it allows you to much better set up a "neutral" image that you can build on with the other local detail-enhancing tools in your arsenal (e.g. Sharp, HDR, Contrast, Decon, etc.). For example, when using AutoDev, it will quickly become clear that point lights and over-exposed highlights, such as the cores of bright stars, remain much more defined. The dreaded "star bloat" effect is much less pronounced or even entirely absent, depending on the dataset.
However, knowing how to effectively use a Region of Interest ("RoI") is crucial to making the most of AutoDev. Particularly if the object of interest is not image-filling, a Region of Interest will often be necessary. Fortunately, the fundamental workings of the RoI are easy to understand.
Let's say our image is of a galaxy, neatly situated in the centre. Confining the RoI progressively to the core of the galaxy, the stretch becomes more and more optimised for the core, and less and less for the outer rim. Conversely, if we want to show more of the outer regions as well, we would include those regions in the RoI.
Shrinking or enlarging the RoI, you will notice how the stretch is optimised specifically to show as much as possible of the image inside the RoI. That is not to say any detail outside the RoI will be invisible. It just means that any detail there will have no (or much less of a) say in how the stretch is made. For example, if we had an image of a galaxy, cloned it, put the two images side by side to create a new image, and then specified the RoI perfectly over just one of the cloned galaxies, the other galaxy, outside the RoI, would be stretched precisely the same way (as it happens to have exactly the same detail). Whatever detail lies outside the RoI is simply forced to conform to the stretch that was designed for the RoI.
It is important to note that AutoDev will never clip your blackpoints outside the RoI, unless the 'Outside RoI Influence' parameter is explicitly set to 0% (though it is still not guaranteed to clip even at that setting). Detail outside the RoI may appear very dark (and approach 0/black), but will never be clipped.
Bringing up the 'Outside RoI Influence' parameter will let AutoDev allocate the specified amount of dynamic range to the area outside the RoI as well, at the expense of some dynamic range inside the RoI. If 'Outside RoI Influence' is set to 100%, then precisely 50% of the dynamic range will be used to show detail inside the RoI and 50% will be used to show detail outside the RoI. Note that, visually, this behaviour is area-size dependent; if the RoI is only a tiny area, the area outside the RoI will have to make do with just 50% of the dynamic range to describe detail for a much larger area (i.e. it has to divide the dynamic range over many more pixels), while the smaller RoI area has far fewer pixels and can therefore allocate each pixel more dynamic range if needed, in turn showing much more detail.
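The area-size dependence described above can be made concrete with some back-of-envelope arithmetic (the image and RoI sizes below are made up for illustration; StarTools' internal allocation is more involved):

```python
# Back-of-envelope arithmetic only; StarTools' internal allocation is more involved.
total_pixels = 4000 * 3000          # a 12MP image
roi_pixels = 500 * 400              # a small Region of Interest
outside_pixels = total_pixels - roi_pixels

dr_share = 0.5                      # 'Outside RoI Influence' at 100%: a 50/50 split

inside_per_pixel = dr_share / roi_pixels
outside_per_pixel = dr_share / outside_pixels

# The small RoI can devote ~59x more dynamic range to each of its pixels
print(inside_per_pixel / outside_per_pixel)
```

The smaller the RoI, the more dynamic range each of its pixels can claim relative to the pixels outside it.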
Non-linearly stretching an image's RGB components causes its hue and saturation to be similarly stretched and squashed. This is often observable as "washing out" of coloring in the highlights.
Traditionally, image processing software for astrophotography has struggled with this, resorting to kludges like "special" stretching functions (e.g. ArcSinH) that somewhat minimize the problem, or even procedures that make desaturated highlights adopt the colors of neighboring, non-desaturated pixels.
While other software continues to struggle with colour retention, StarTools' Tracking feature allows the Color module to go back in time and completely reconstruct the RGB ratios as recorded, regardless of how the image was stretched.
This is one of the major reasons why the Color module is preferably run as one of the last steps in your processing flow; it is able to completely negate the effect that any stretching - whether global or local - may have had on the hue and saturation of the image.
Because of this, AutoDev's performance is not stymied like some other stretching solutions (e.g. ArcSinH) by a need to preserve coloring. The two aspects - color and luminance - of your image are neatly separated thanks to StarTools' signal evolution Tracking engine.
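The principle of separating colour from luminance, so that a stretch cannot wash out the recorded RGB ratios, can be sketched as follows. This is only an illustration of the idea, not StarTools' actual Tracking implementation:

```python
import numpy as np

def stretch_preserving_ratios(linear_rgb, stretch):
    """Stretch a luminance proxy only, then reapply the original linear
    R:G:B ratios, so hue and saturation survive the non-linear stretch.
    Illustration of the principle only, not StarTools' Tracking math."""
    lum = linear_rgb.mean(axis=-1, keepdims=True)
    ratios = linear_rgb / np.maximum(lum, 1e-12)   # colour as recorded
    return np.clip(stretch(lum) * ratios, 0.0, 1.0)

stretch = np.sqrt                                  # a simple global stretch

pixel = np.array([[[0.20, 0.10, 0.05]]])           # orange-ish, 4:2:1 linear ratios
naive = np.clip(stretch(pixel), 0.0, 1.0)          # per-channel stretch: R/B ratio drops to 2
preserved = stretch_preserving_ratios(pixel, stretch)  # R/B ratio stays 4
print(naive.squeeze(), preserved.squeeze())
```

Stretching each channel independently compresses the 4:2:1 ratios toward white; stretching luminance alone and reapplying the recorded ratios keeps the colour intact.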
The Band module reduces horizontal and vertical banding/striping, often caused by read noise.
Using the Band module is quite straightforward; simply specify the orientation of the banding ('Horizontal' or 'Vertical') and click 'Do'. An 'Algorithm' parameter switches between two subtly different algorithms that attempt to reduce banding. If the default ('Algorithm 1') does not produce satisfactory results, 'Algorithm 2' may yield better results.
The Bin module puts you in control over the trade-off between resolution, resolved detail and noise.
With today's multi-megapixel imaging equipment and high density CCDs, oversampling is a common occurrence; there is only so much detail that seeing conditions allow for with a given setup. Beyond that it is impossible to pick up fine detail. Once detail no longer fits in a single pixel, but instead gets "smeared out" over multiple pixels due to atmospheric conditions (resulting in a blur), binning may turn this otherwise useless blur into noise reduction. Binning your data may make an otherwise noisy and unusable data set usable again, at the expense of 'useless' resolution.
The Bin module was created to provide a freely scalable alternative to the fixed 2×2 (4x reduction in pixel count) or 4×4 (16x reduction in pixel count) software binning modes commonly found in other software packages or modern consumer digital cameras and DSLRs (also known as 'Low Light Mode'). As opposed to these other binning solutions, StarTools' Bin module allows you to bin your data (and gain noise reduction) by exactly the amount you want; if your data is seeing-limited (blurred due to adverse seeing conditions), you are free to bin your data to exactly that limit, rather than being forced beyond it by a fixed 2×2 or 4×4 mode.
Similarly, deconvolution (and subsequent recovery of detail that was lost due to atmospheric conditions) may not be a viable proposition due to the noisiness of an initial image. Binning may make deconvolution an option again. The StarTools Bin module allows you to determine the ratio with which you use your oversampled data for binning and deconvolution, to achieve a result that is finely tuned to your data and the imaging circumstances of the night(s).
Core to StarTools' fractional binning algorithm is a custom built anti-aliasing filter that has been carefully designed to not introduce any ringing (overshoot) and, hence, to not introduce any artefacts when subsequent deconvolution is used on the binned data.
The Bin module is operated with just a single parameter, which controls the amount of binning performed on the data. The new resolution is displayed ('New Image Size X x Y'), as well as the single-axis scale reduction, the Signal-to-Noise-Ratio improvement and the increased bit-depth of the new image.
Data binning is a data pre-processing technique used to reduce the effects of minor observation errors. Many astrophotographers are familiar with the virtues of hardware binning, which pools the value of 4 (or more) CCD pixels before the final value is read. Because reading introduces noise by itself, pooling the value of 4 or more pixels reduces this 'read noise' as well (one read is now sufficient, instead of having to do 4). Of course, by pooling 4 pixels, the final pixel count is also reduced by a factor of 4. There are many, many factors that influence hardware binning, and Steve Cannistra has done a wonderful write-up on the subject on his starrywonders.com website. It also appears that the merits of hardware binning are heavily dependent on the instrument and the chip used.
Most OSCs (One-Shot-Color cameras) and DSLRs do not offer any sort of hardware binning in colour, due to the presence of a Bayer matrix; binning adjacent pixels makes no sense, as they alternate in the colour they pick up. The best we can do in that case is create a grayscale blend out of them. So hardware binning is out of the question for these instruments.
So why does StarTools offer software binning? Firstly, because it allows us to trade resolution for noise reduction. By grouping multiple pixels into 1, a more accurate 'super pixel' is created that pools multiple measurements into one. Note that we are actually free to use any statistical reduction method that we want. Take for example this 2 by 2 patch of pixels;
7 7
3 7
A 'super pixel' that uses simple averaging yields (7 + 7 + 3 + 7) / 4 = 6. If we suppose the '3' is an anomalous value due to noise and '7' is the correct value, we can see how the other 3 readings 'pull up' the average to 6; pretty darn close to 7.
We could use a different statistical reduction method (for example taking the median of the 4 values) which would yield 7, etc. The important thing is that grouping values like this tends to filter out outliers and make your super pixel value more precise.
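The 2×2 patch example above can be verified in a few lines of Python:

```python
import statistics

# The 2x2 patch from the text: three good readings of 7 and one noisy 3
patch = [7, 7, 3, 7]

mean_super_pixel = sum(patch) / len(patch)      # 6.0: the outlier drags it down a little
median_super_pixel = statistics.median(patch)   # 7.0: the outlier is rejected entirely

print(mean_super_pixel, median_super_pixel)
```

Either reduction makes the super pixel more robust than any single reading; the median happens to reject this particular outlier completely.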
But what about the downside of losing resolution? That super high resolution may actually have been going to waste! If, for example, your CCD can resolve detail at 0.5 arcsecs per pixel, but your seeing is at best 2.0 arcsecs, then each unit of real resolvable celestial detail is smeared out over 4 pixels per axis (16 pixels in total). Your image is "oversampled", meaning that you have allocated more resolution than the signal will ever require. When that happens, you can zoom into your data and you will notice that all fine detail looks blurry and smeared out over multiple pixels. And with the latest DSLRs having sensors that count 20 million pixels and up, you can bet that most of this resolution will be going to waste at even the most moderate magnification. Sensor resolution may be going up, but the atmosphere's resolution will forever remain the same - buying a higher resolution instrument will do nothing for the detail in your data in that case! This is also the reason why professional CCDs are typically much lower in resolution; the manufacturers would rather use the surface area of the chip for coarser but deeper, more precise CCD wells ('pixels') than squeeze in a lot of very imprecise (noisy) CCD wells (it has to be said the latter is a slight oversimplification of the various factors that determine photon collection, but it tends to hold).
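The seeing-limited arithmetic from this example, using the 0.5"/pixel sensor and 2.0" seeing figures mentioned above (and assuming uncorrelated noise for the SNR estimate), works out as:

```python
# Figures from the example above: sensor resolves 0.5"/pixel, seeing is 2.0"
pixel_scale = 0.5    # arcseconds per pixel
seeing = 2.0         # arcseconds of real resolvable detail

pixels_per_axis = seeing / pixel_scale      # 4.0 pixels span one detail unit per axis
pixels_per_detail = pixels_per_axis ** 2    # 16 pixels cover what 1 pixel could record

# Binning down by that per-axis factor averages 16 samples per output pixel,
# improving SNR by the per-axis factor (assuming uncorrelated noise)
snr_gain = pixels_per_axis

print(pixels_per_axis, pixels_per_detail, snr_gain)
```

Fractional binning lets you stop at exactly this seeing-dictated factor, rather than overshooting with a fixed 2×2 or 4×4 mode.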
There is one other reason to bin OSC and DSLR data to at least 25% of its original resolution; the presence of a Bayer matrix means that (assuming an RGGB matrix), after applying a debayering (aka 'demosaicing') algorithm, 75% of all red pixels, 50% of all green pixels, and 75% of all blue pixels are completely made up!
Granted, your 16MP camera may have a native resolution of 16 million pixels; however, it has to divide these 16 million pixels up between the red, green and blue channels! Here is another very good reason why you might not want to keep your image at native resolution. Binning to 25% of native resolution will ensure that each pixel corresponds to one real recorded pixel in the red channel, one real recorded pixel in the blue channel and two pixels in the green channel (the latter yielding a further noise reduction in the green channel from averaging two real samples).
There are, however, instances where the interpolation can be undone if enough frames are available (through sub-pixel dithering) to have exposed all sub-pixels of the bayer matrix to real data in the scene (drizzling).
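The RGGB bookkeeping above can be checked with some simple arithmetic (the 16MP figure is just an example):

```python
# A 16MP RGGB sensor (figure from the text); per 2x2 Bayer cell the sensor
# records 1 red, 2 green and 1 blue photosite.
sensor_pixels = 16_000_000

real_red = sensor_pixels // 4
real_green = sensor_pixels // 2
real_blue = sensor_pixels // 4

# Fraction of each debayered channel that is interpolated at native resolution
interp_red = 1 - real_red / sensor_pixels       # 0.75
interp_green = 1 - real_green / sensor_pixels   # 0.50
interp_blue = 1 - real_blue / sensor_pixels     # 0.75

# Binning to 25% of native resolution (one output pixel per 2x2 cell) gives:
binned_pixels = sensor_pixels // 4
red_per_pixel = real_red / binned_pixels        # 1.0 real red sample per pixel
green_per_pixel = real_green / binned_pixels    # 2.0 real green samples per pixel

print(interp_red, interp_green, red_per_pixel, green_per_pixel)
```

At 25% of native resolution, every output pixel is backed entirely by real photosite data rather than interpolation.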
StarTools' binning algorithm is a bit special in that it allows you to apply 'fractional' binning; you're not stuck with pre-determined factors (e.g. 2×2, 3×3 or 4×4). You can bin exactly the amount that achieves a single unit of celestial detail in a single pixel. To see where that limit is, simply keep reducing resolution until no blurriness can be detected when zooming into the image. Fine detail (not noise!) should look crisp. However, you may decide to leave a little bit of blurriness to see if you can bring out more detail using deconvolution.
Thanks to StarTools' Tracking feature the Color module provides you with unparalleled flexibility when it comes to colour presentation in your image.
Whereas other software, lacking Tracking data mining, destroys colour and colour saturation in bright parts of the image as the data gets stretched, StarTools allows you to retain colour and saturation throughout the image with its 'Color Constancy' feature. This ability allows you to display all colours in the scene as if it were evenly illuminated, meaning that even very bright cores of galaxies and nebulas retain the same colour throughout, irrespective of their local brightness, or indeed acquisition methods and parameters.
This ability is important in scientific representation of your data, as it allows the viewer to compare similar objects or areas like-for-like, since colour in outer space very often correlates with chemical signatures or temperature.
The same is true for star temperatures across the image, even in bright, dense star clusters. This mode allows the viewer of your image to objectively compare different parts and objects in the image without suffering from reduced saturation in bright areas. It allows the viewer to explore the universe that you present in full colour, adding another dimension of detail, irrespective of the exposure time and subsequent stretching of the data.
For example, StarTools enables you to keep M42's colour constant throughout, even in its bright core. No fiddling with different exposure times, masked stretching or saturation curves needed. You are able to show M31's true colours instead of a milky white, or resolve star temperatures to well within a globular cluster's bright core. All that said, if you're a fan of the traditional 'handicapped' way of colour processing in other software, then StarTools can emulate this type of processing as well.
The Color module's abilities don't stop there, however. It is also capable of emulating a range of complex LRGB color compositing methods that have been invented over the years. And it does it at the click of a button! Even if you acquired data with an OSC or DSLR, you will still be able to use these compositing methods; the Color module will generate synthetic luminance from your RGB on the fly and re-composite the image in your desired compositing style.
The Color module allows for various ways to calibrate the image, including by star field, sampling G2V star, galaxy sampling and - unique to StarTools - the MaxRGB calibration view. The latter allows for objective colour calibration, even on poorly calibrated screens.
Aside from Color calibration (thanks to Tracking data mining carried out on a linear version of your data, no matter whether you have stretched it or not), the Color module comes with a number of ways to control colour saturation in your image. A green removal algorithm rounds out the feature set.
The Color module is very powerful - offering capabilities surpassing most other software - yet it is simple to use.
The primary goal that the Color module was designed to accomplish, is achieving a good colour balance that accurately describes the colour ratios that were recorded. In accomplishing that goal, the Color module goes further than other software by offering a way to negate the adverse effects of non-linear dynamic range manipulations on the data (thanks to Tracking data mining). In simple terms, this means that colouring can be reproduced (and compared!) in a consistent manner regardless of how bright or dim a part of the scene is shown.
Upon launch, the Color module blinks the mask three times in the familiar way. If a full mask is not set, the Color module allows you to set it now, as colour balancing is typically applied to the full image (requiring a full mask).
In addition to blinking the mask, the Color module also analyses the image and sets the Red Bias Reduce, Green Bias Reduce and Blue Bias Reduce factors to a value which it deems the most appropriate for your image. This behaviour is identical to manually clicking the 'Sample' button.
The Red Bias Reduce, Green Bias Reduce and Blue Bias Reduce factors are the most important settings in the Color module. They directly determine the colour balance in your image. Their operation is intuitive; too much red in your image? Pump up the 'Red Bias Reduce' value. Too little red in your image? Reduce the 'Red Bias Reduce' value.
If you'd rather operate on these values in terms of Bias Increase, then simply switch the 'Bias Slider Mode' setting to 'Sliders Increase Color Bias'.
Switching between these two modes, you can see that, for example, a Red Bias Reduce of 8.00 is the same as a Green and Blue Bias Increase of 8.00. It makes intuitive sense when you think about it - a relative decrease of red makes blue and green more prevalent, and vice versa.
Now that we know how to change the colour balance, how do we know what to actually set it to?
There are a great number of tools and techniques in StarTools that let you home in on a good colour balance. Before delving into them, it is highly recommended to switch 'Style' to 'Scientific (Color Constancy)' during colour balancing, even if that is not the preferred style of rendering for the end result; the Color Constancy feature makes it much easier to colour balance by eye, thanks to its ability to show continuous, constant colour throughout the image. Once a satisfactory colour balance is achieved, feel free to switch to any alternative style of colour rendering.
If you know that a particular pixel or area in your image is supposed to be a shade of neutral white or gray, simply clicking on it is sufficient to let StarTools compute the right Red, Green and Blue bias settings to make that pixel appear neutral. This technique is particularly useful if you have a star of spectral type G2V (sun-like) in your image. The reasoning is that the sun is the perfect daylight white reference, and so any similar star elsewhere in the galaxy should be too.
Upon launch, or upon clicking the Sample button, the Color module samples whatever mask is set (note that the mask also ensures the Color module only applies changes to the masked-in pixels!) and sets the Red, Green and Blue bias settings accordingly.
We can use this same behaviour to sample larger parts of the image that we know should be white. This method mostly exploits the fact that stars come in all sorts of sizes and temperatures (and thus colours!) and that this distribution is completely random. Therefore if we sample a large enough population, we should find the average star to be somewhere in the middle. Our sun is a very average star and is the white balance that we're after. Therefore, if we sample a large enough number of pixels containing a large enough number of stars, we should find a good colour balance.
We can accomplish that in two ways; we either sample all stars (but only stars!) in a wide enough field, or we sample a whole galaxy that happens to be in the image (note that the galaxy must be of a certain type to be a good candidate and be reasonably close - preferably a barred spiral galaxy much like our own Milky Way).
Whichever you choose, we need to create a mask, so we launch the Mask editor. Here we can use the Auto feature to select a suitable selection of stars, or we can use the Flood Fill Brighter or Lassoo tool to select a galaxy. Once selected, return to the Color module and click Sample. StarTools will now determine the correct Red, Green and Blue bias to match the white reference pixels in the mask so that they come out neutral.
To apply the new colour balance to the whole image, launch the Mask editor once more and click Clear, then click Invert to select the whole image. Upon return to the Color module, the whole image will now be balanced by the Red, Green and Blue bias values we determined earlier with just the white reference pixels selected.
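The underlying idea of sampling a white reference can be sketched as follows; note this is a simplified illustration of the principle, not StarTools' actual calibration math:

```python
import numpy as np

def bias_reduce_from_sample(sample_rgb):
    """Given the mean R, G, B of pixels known to be white (e.g. a large star
    sample or a G2V star), return per-channel divisors that neutralise it.
    Simplified illustration; not StarTools' actual calibration math."""
    sample = np.asarray(sample_rgb, dtype=float)
    return sample / sample.min()   # weakest channel keeps a factor of 1.0

# A sampled star field that reads slightly red: R=1.30, G=1.10, B=1.00
bias = bias_reduce_from_sample([1.30, 1.10, 1.00])
calibrated = np.array([1.30, 1.10, 1.00]) / bias
print(bias, calibrated)   # the calibrated sample comes out neutral
```

Dividing every pixel in the (full-mask) image by the same per-channel factors then balances the whole image against the white reference.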
StarTools comes with a unique colour balancing aid called MaxRGB. This mode of colour balancing is exceptionally useful if you are trying to colour balance by eye but suffer from colour blindness, or use a screen that is not colour calibrated very well.
The MaxRGB aid allows you to view which channel is dominant per-pixel. If a pixel is mostly red, that pixel is shown red, if a pixel is mostly green, that pixel is shown green, and if a pixel is mostly blue, that pixel is shown blue.
By cross-referencing the normal image with the MaxRGB image, it is possible to find deficiencies in the colour balance. For example, the colour green is very rarely dominant in space (with the exception of highly dominant OIII emission areas, for example in the Trapezium in M42).
Therefore, if we see large areas of green, we know that we have too much green in our image and we should adjust the bias accordingly. Similarly, if we have too much red or blue in our image, the MaxRGB mode will show many more red than blue pixels (or vice versa) in areas that should show an even amount (for example the background). Again, we then know we should adjust red or blue accordingly.
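A simple take on the MaxRGB idea can be expressed in a few lines of NumPy (StarTools' own rendering may differ in detail):

```python
import numpy as np

def max_rgb_view(image):
    """Keep only the dominant channel per pixel, zeroing the other two;
    a simple take on the MaxRGB idea (StarTools' rendering may differ)."""
    dominant = image.argmax(axis=-1)             # 0=R, 1=G, 2=B per pixel
    view = np.zeros_like(image)
    rows, cols = np.indices(dominant.shape)
    view[rows, cols, dominant] = image[rows, cols, dominant]
    return view

# Two pixels: one red-dominant, one green-dominant
img = np.array([[[0.8, 0.3, 0.2], [0.1, 0.5, 0.4]]])
view = max_rgb_view(img)
print(view)
```

The resulting view makes per-pixel channel dominance obvious at a glance, even on a poorly calibrated screen.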
StarTools' Color Constancy feature makes it much easier to see colours and spot processes, interactions, emissions and chemical composition in objects. In fact, the Color Constancy feature makes colouring comparable between different exposure lengths and different gear. This allows the user to start spotting colours repeating in different features of comparable objects. Such features are, for example, the yellow cores of galaxies (due to the relative over-representation of older stars as a result of gas depletion), the bluer outer rims of galaxies (due to the relative over-representation of bright blue young stars as a result of the abundance of gas), and the pink/purplish HII area 'blobs' in their discs. Red/brown (white light filtered by dust) dust lanes complement a typical galaxy's rendering.
Similarly, HII areas in our own galaxy (e.g. most nebulae), while in StarTools' Color Constancy Style mode, display the exact same colour signature found in the galaxies; a pink/purple as a result of predominantly deep red Hydrogen-alpha emissions, mixed with much weaker blue/green Hydrogen-beta and Oxygen-III emissions and (more dominantly) reflected blue star light from the bright young blue giants that are often born in these areas and shape the gas around them.
Dusty areas where the bright blue giants have 'boiled away' the Hydrogen through radiation pressure (for example the Pleiades) reflect the blue star light of any surviving stars, becoming distinctly blue reflection nebulae. Sometimes gradients can be spotted where (gas-rich) purple gives way to (gas-poor) blue (for example the Rosette core) as this process is caught in the act.
Diffraction spikes, while artefacts, also can be of great help when calibrating colours; the "rainbow" patterns (though skewed by the dominant colour of the star whose light is being diffracted) should show a nice continuum of colouring.
Finally, star temperatures, in a wide enough field, should be evenly distributed; the numbers of red, orange, yellow, white and blue stars should be roughly equal. If any of these colours are missing or over-represented, we know the colour balance is off.
Colour balancing of data that was filtered by a light pollution filter is fundamentally impossible; narrow (or wider) bands of the spectrum are missing and no amount of colour balancing is going to bring them back and achieve proper colouring. A typical filtered data set will show a distinct lack in yellow and some green when properly colour balanced. It's by no means the end of the world - it's just something to be mindful of.
Correct colouring may be achieved, however, by shooting deep luminance data with a light pollution filter in place, while shooting colour data without the filter, after which both are processed separately and finally combined. Colour data is much more forgiving in terms of quality of signal and noise; the human eye is much more sensitive to noise in the luminance data than it is in the colour data. By making clever use of that fact and performing some trivial light pollution removal in Wipe, the best of both worlds can be achieved.
Once you have achieved a color balance you are happy with, the StarTools Color module offers a great number of ways to change the presentation of your colours.
The parameter with the biggest impact is the 'Style' parameter. StarTools is renowned for its Color Constancy feature, rendering colours in objects regardless of how the luminance data was stretched, the reasoning being that colours in outer space don't magically change depending on how we stretch our image. Other software sadly lets the user stretch the colour information along with the luminance information, warping, distorting and destroying hue and saturation in the process. The 'Scientific (Color Constancy)' setting for Style undoes these distortions using Tracking information, arriving at the colours as recorded.
To emulate the way other software renders colours, two other settings are available for the Style parameter. These settings are "Artistic, Detail Aware" and "Artistic, Not Detail Aware". The former still uses some Tracking information to better recover colours in areas whose dynamic range was optimised locally, while the latter does not compensate for any distortions whatsoever.
The LRGB Method Emulation allows you to emulate a number of colour compositing methods that have been invented over the years. Even if you acquired data with an OSC or DSLR, you will still be able to use these compositing methods; the Color module will generate synthetic luminance from your RGB on the fly and re-composite the image in your desired compositing style.
The difference in colouring can be subtle or more pronounced. Much depends on the data and the method chosen.
'Straight CIELab Luminance Retention' manipulates all colours in a psychovisually optimal way in CIELab space, introducing colour without affecting apparent brightness.
'RGB Ratio, CIELab Luminance Retention' uses a method first proposed by Till Credner of the Max-Planck-Institut and subsequently rediscovered by Paul Kanevsky, using RGB ratios multiplied by luminance in order to better preserve star colour. Luminance retention in CIELab color space is applied afterwards.
'50/50 Layering, CIELab Luminance Retention' uses a method proposed by Robert Gendler, where luminance is layered on top of the colour information with a 50% opacity. Luminance retention in CIELab color space is applied afterwards. The inherent loss of 50% in saturation is compensated for, for your convenience, in order to allow for easier comparison with other methods.
'RGB Ratio' uses a method first proposed by Till Credner of the Max-Planck-Institut and subsequently rediscovered by Paul Kanevsky, using RGB ratios multiplied by luminance in order to better preserve star colour. No further luminance retention is attempted.
'50/50 Layering' uses a method proposed by Robert Gendler, where luminance is layered on top of the colour information with a 50% opacity. No further luminance retention is attempted. The inherent loss of 50% in saturation is compensated for, for your convenience, in order to allow for easier comparison with other methods.
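For intuition, the two families of compositing methods above can be sketched per pixel. This is only an illustration of the basic arithmetic under assumed conventions (the RGB-ratio variant here normalises by the channel maximum); the function names are hypothetical and the actual StarTools implementations additionally apply CIELab luminance retention where noted.

```python
def rgb_ratio_pixel(lum, r, g, b):
    """RGB-ratio style compositing (Credner/Kanevsky family) for one
    pixel: colour is carried as per-channel ratios, multiplied back
    into the luminance. Illustrative sketch only."""
    m = max(r, g, b)
    if m == 0:
        return (lum, lum, lum)  # no colour information: neutral grey
    return (lum * r / m, lum * g / m, lum * b / m)

def layering_5050_pixel(lum, r, g, b):
    """50/50 layering (Gendler family): luminance blended over the
    colour layer at 50% opacity, per channel. Illustrative sketch
    only; the saturation loss this causes is what the module
    compensates for."""
    return ((lum + r) / 2, (lum + g) / 2, (lum + b) / 2)
```

Note how the ratio method preserves hue exactly (the dominant channel stays dominant by the same proportion), while plain 50/50 layering pulls all channels toward the luminance value, desaturating the result.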
Note that the LRGB Emulation Method feature is only available when Tracking is engaged.
The 'Saturation' parameter allows colours to be rendered more, or less vividly, whereby Bright Saturation and Dark Saturation control how much colour and saturation is introduced in the highlights and shadows respectively. It is important to note that introducing colour in the shadows may exacerbate colour noise, though Tracking will make sure any such noise exacerbations are recorded and dealt with during the final denoising stage.
The 'Cap Green' parameter, finally, removes spurious green pixels if needed, reasoning that green-dominant colours in outer space are rare and must therefore be caused by noise. Use of this feature should be considered a last resort if colour balancing does not yield adequate results and the green noise is severe. The final denoising stage should, thanks to Tracking data mining, have pinpointed the green channel noise already and should be able to adequately mitigate it.
As of StarTools 1.6, the Color module comes with a vast number of camera color correction matrices for various manufacturers (Canon, Nikon, Sony, Olympus, Pentax and more), as well as a vast number of channel blend remappings for narrowband datasets (e.g. HST/SHO or bi-color duoband/quadband filter data).
Uniquely, thanks to the signal evolution Tracking engine, this color calibration is preferably performed towards the end of your processing workflow. This allows you to switch color rendering at the very last moment at the click of a button without having to re-composite and re-process, while also allowing you to use cleaner, non-whitebalanced, non-matrix corrected data for your luminance component, aiding signal fidelity.
The Contrast module optimizes local dynamic range allocation, resulting in better contrast, reducing glare and bringing out faint detail.
It operates on medium to large areas and is especially effective for enhancing contrast in nebulae, globular clusters and galaxy cores.
The Contrast module has some parameters in common with the Wipe module. In some ways it is similar, though not the same.
Just like the Wipe module, the Contrast module is sensitive to "dark anomalies"; pixels not of celestial origin that are darker than the real celestial background.
So, just like the Wipe module, if dark anomalies are present, we need to make sure that any such anomalies are mitigated before Contrast sees them, either by removing them (cropping them out) or instructing the Contrast module to ignore them (increasing the 'Dark anomaly filter' parameter).
Once any dark anomalies are taken care of, a suitable 'Aggressiveness' parameter needs to be chosen. The 'Aggressiveness' parameter controls how 'local' the dynamic range optimisation is allowed to be. You will find that a higher 'Aggressiveness' value, all else being equal, will yield an image with areas of starker contrast. More generally, you will find that changing the 'Aggressiveness' value will see the Contrast module make quite different decisions on what and where to optimise. The rule of thumb is that a higher 'Aggressiveness' value will see smaller and 'busier' areas given priority over larger, more 'tranquil' areas.
Similar to the Wipe module, the 'Precision' parameter can be used to increase the precision when dealing with highly detailed wide-fields with a lot of undulating detail, combined with high 'Aggressiveness' values.
The 'Dark anomaly headroom' parameter controls how heavily the Contrast module "squashes" the dynamic range of larger scale features it deems "unnecessary". Dynamic range used to describe larger features is de-allocated and re-allocated to interesting local features; this de-allocation necessarily reduces (hence "squashes") the larger features' dynamic range. Very low settings may appear to clip the image (though this is not the case). For those familiar with music production, the Contrast module is very much akin to a compressor, but for your image instead of audio.
The 'Compensate gamma' feature attempts to apply a non-linear curve that makes the image just as bright as the source (input) image. This option may be desirable if the image has become too dark.
Finally, the 'Expose dark areas' option can help expose detail in the shadows by normalizing the dynamic range locally; making sure that the full dynamic range is used at all times. This option may generate artefacts at high 'Aggressiveness' settings, which may be mitigated in some instances by increasing the 'Precision' parameter.
The Compose module is an easy-to-use, yet extremely flexible, compositing and channel extraction tool. As opposed to other software, the Compose module allows you to effortlessly process LRGB, LLRGB, or narrowband composites like SHO, LSHO and more, as if they were simple RGB datasets.
In traditional image processing software, composites with separate luminance information (for example acquired through a luminance filter, created by a synthetic luminance frame, or a combination of both) require lengthy processing workflows; luminance (detail) and color information need to (or should!) be processed separately and only combined at the end to produce the final image.
Through the Compose module, StarTools is able to process luminance and color information separately, yet simultaneously.
This has important ramifications for your workflow and signal fidelity;
•Your workflow for a complex composite is now virtually the same as it is for a simple DSLR/OSC dataset; modules like Wipe and Color automatically consult and manipulate the correct dataset(s) and enable additional functionality where needed.
•Because everything is now done in one Tracking session, you get all the benefits from signal evolution tracking until the very end, without having to end your workflow for luminance and start a new one for chroma/color; all modules cross-reference luminance and color information as needed until the very end, yielding vastly cleaner results.
•The "Entropy" module can consult the chroma/color information to effortlessly manipulate luminance as you see fit, while Tracking monitors noise propagation.
Synthetic luminance datasets are created by simply specifying the total exposure times for each imported dataset. With the click of a button, a synthetic luminance dataset can be added to an existing luminance dataset, or can be used as the (synthetic) luminance dataset in its own right.
Finally, the Compose module can be used to create bi-color composites, or to extract individual channels from color images.
Creating a composite is as easy as loading the desired datasets into the desired slots, and optionally setting the desired composite scheme and exposure lengths.
The "Luminance" button loads a dataset into the "Luminance File" slot. The "Lum Total Exposure" slider determines the total exposure length in hours, minutes and seconds. This value is used to create the correctly weighted synthetic luminance dataset, in case the "Luminance, Color" composite mode is set to create a synthetic luminance from the loaded channels. Loading a Luminance file will only have an effect when the "Luminance, Color" parameter is set to a compositing scheme that incorporates a luminance dataset (e.g. "L, RGB", "L + Synthetic L From RGB, RGB" or "L + Synthetic L From RGB, Mono").
The Red, Green and Blue buttons load a dataset in the "Red File", "Green File" and "Blue File" slots respectively. The "Red Total Exposure", "Green Total Exposure", "Blue Total Exposure" sliders determine the total exposure length in hours, minutes and seconds for each of the three slots. These values are used to create the correct weighted synthetic luminance dataset (at 1/3rd weighting of the "Lum Total Exposure"), in case the "Luminance, Color" composite mode is set to create a synthetic luminance from the loaded channels.
Loading a dataset into the "Red File", "Green File" or "Blue File" slots will see any missing slots synthesised automatically if the "Color Ch. Interpolation" parameter is set to "On". Loading a color dataset into the "Red File", "Green File" or "Blue File" slots will automatically extract the red, green and blue channels of the color dataset respectively.
There are a number of compositing schemes available, some of which will put StarTools into "composite" mode (as signified by a lit up "Compose" label on the Compose button on the home screen). Compositing schemes that require separate processing of luminance and color will put StarTools in this special mode. Some modules may exhibit subtly different behaviour, or expose different functionality, while in this mode.
The following compositing schemes are selectable;
"RGB, RGB" simply uses red + green + blue for luminance and uses red, green and blue for the color information. No special processing or compositing is done. Any loaded Luminance dataset is ignored, as are Total exposure settings.
"RGB, Mono" simply uses red + green + blue for luminance and uses the average of the red, green and blue channels for all channels for the color information, resulting in a mono image. Any loaded Luminance dataset is ignored, as are Total exposure settings.
"L, RGB" simply uses the loaded luminance dataset for luminance and uses red, green and blue for the color information. Total exposure settings are ignored. StarTools will be put into "composite" mode, processing luminance and color separately yet simultaneously. If no Luminance dataset is loaded, this scheme functions the same as "RGB, RGB", with the exception that StarTools will be put into "composite" mode, processing luminance and color separately yet simultaneously.
"L + Synthetic L from RGB, RGB" creates a synthetic luminance dataset from Luminance, Red, Green and Blue, weighted according to the exposure times provided by the "Total Exposure" sliders. The color information will consist of simply the red, green and blue datasets as imported. StarTools will be put into "composite" mode, processing luminance and color separately yet simultaneously.
"L + Synthetic L from RGB, Mono" creates a synthetic luminance dataset from Luminance, Red, Green and Blue, weighted according to the exposure times provided by the "Total Exposure" sliders. The color information will consist of the average of the red, green and blue channels for all channels, yielding a mono image. StarTools is not put into "composite" mode, as no color information is available.
For practical purposes, synthetic luminance generation assumes that, besides possibly varying total exposure lengths, all other factors remain equal. E.g. it is assumed that each filter's bandwidth response is exactly equal to that of the other filters in terms of width and transmission, and that only shot noise from the object varies (either due to differences in signal in the different filter bands from the imaged object, or due to differing exposure times).
When added to a real (non-synthetic) luminance filter source, the synthetic luminance's three red, green and blue channels are assumed to contribute exactly one third each to the added synthetic luminance. E.g. it is assumed that the aggregate filter response of the three individual red, green and blue channels exactly matches that of the single luminance channel.
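Under those assumptions, the exposure-time weighting can be sketched for a single pixel. This is a minimal illustration of the weighting scheme described above (luminance at full weight, each colour channel at one third), not StarTools' actual code; the function and parameter names are hypothetical.

```python
def synthetic_luminance_pixel(lum, r, g, b, t_lum, t_r, t_g, t_b):
    """Exposure-weighted synthetic luminance for one pixel:
    the real luminance channel is weighted by its total exposure time,
    and each colour channel by one third of its exposure time (each
    colour filter is assumed to cover ~1/3 of the luminance passband).
    Illustrative sketch under the equal-filter-response assumption."""
    w_lum = t_lum
    w_r, w_g, w_b = t_r / 3.0, t_g / 3.0, t_b / 3.0
    total_weight = w_lum + w_r + w_g + w_b
    if total_weight == 0:
        return 0.0
    return (lum * w_lum + r * w_r + g * w_g + b * w_b) / total_weight
```

With no luminance exposure at all (`t_lum = 0`), this degrades gracefully into a pure synthetic luminance built from the colour channels alone.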
As of StarTools 1.6, channel assignment does not dictate final coloring. In other words, loading, for example, your SHO dataset as RGB, no longer locks you into that choice.
Uniquely, thanks to the signal evolution Tracking engine, the Color module allows you to remap the channels at will, even far into your processing.
So, if for example you wish to switch your SHO imported dataset to a OHS rendering instead (or even complex channel blends), you can do so in a couple of clicks. The same goes for a HOO bi-color. Also see the Color module documentation for more information.
The Hubble Space Telescope palette (also known as 'HST' or 'SHO' palette) is a popular palette for color renditions of the S-II, Hydrogen-alpha and O-III emission bands. This palette is achieved by loading S-II, Hydrogen-alpha and O-III ("SHO") as red, green and blue respectively. A special "Hubble" preset in the Color module provides a shortcut to color rendition settings that mimic the results from the more limited image processing tools from the 1990s.
A popular bi-color rendition of H-alpha and O-III is to import H-alpha as red and O-III as green as well as blue. A synthetic luminance frame is then created that only gives red and blue (or green instead of blue, but not both!) a weighting according to the two datasets' exposure lengths. The resulting color rendition tends to be close to these bands' manifestation in the visual spectrum with H-alpha a deep red and O-III appearing as a teal green.
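The bi-colour recipe above can be condensed into a small per-pixel sketch. This is an illustration of the described channel mapping and weighting only (the `hoo_pixel` name and parameters are hypothetical, not a StarTools API); note how green is deliberately excluded from the synthetic luminance because it duplicates the O-III signal already counted in blue.

```python
def hoo_pixel(ha, oiii, t_ha, t_oiii):
    """HOO bi-colour mapping for one pixel: H-alpha -> red,
    O-III -> green and blue. The synthetic luminance weights red and
    blue only (weighting green as well would count O-III twice).
    Illustrative sketch only."""
    r, g, b = ha, oiii, oiii
    lum = (r * t_ha + b * t_oiii) / (t_ha + t_oiii)
    return lum, (r, g, b)
```

For example, 2 hours of H-alpha and 1 hour of O-III weight the red channel twice as heavily as blue in the synthetic luminance.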
The crop module is an easy-to-use image cropping tool with quick aspect ratio presets and switchable luminance and chroma preview modes.
The module was designed to quickly find and eliminate stacking artefacts across luminance and chrominance data, as well as help with framing your object(s) of interest.
Using the crop module is fairly straightforward. The desired crop is created by clicking and dragging the mouse over the area to retain. Fine-tuning can be accomplished by changing the X1, Y1 and X2, Y2 coordinate pair parameters.
8 preset crops are available. Their names ('3:2', '2:3', '16:9', '9:16') denote the aspect ratio, while the smaller-than or greater-than sign prefix denotes their behaviour;
•Presets with the greater-than ('>') sign will grow the current selection to achieve the selected aspect ratio
•Presets with the smaller-than ('<') sign will shrink the current selection to achieve the selected aspect ratio
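The grow/shrink behaviour of these presets can be sketched as follows. This is a hypothetical helper (not the StarTools implementation): it adjusts a selection to a target aspect ratio about its centre, growing one dimension for '>' presets or shrinking one dimension for '<' presets.

```python
def fit_aspect(x1, y1, x2, y2, ratio_w, ratio_h, grow=True):
    """Grow ('>') or shrink ('<') a crop selection to a target aspect
    ratio, keeping the selection centred. Hypothetical illustration -
    not the StarTools implementation."""
    w, h = x2 - x1, y2 - y1
    target = ratio_w / ratio_h
    current = w / h
    # Growing a too-narrow selection widens it; shrinking a too-wide
    # selection narrows it. Otherwise the height is adjusted instead.
    if (current < target) == grow:
        w = h * target
    else:
        h = w / target
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)
```

For instance, growing a 100x50 selection to 16:9 keeps the width at 100 and expands the height to 56.25, while shrinking it keeps the height at 50 and narrows the width.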
A 'Color' button is available, which functions much like the Color button in the Wipe module. It is only available when Compose mode is engaged (e.g. luminance and chroma are being processed separately, yet simultaneously) and allows you to switch the view between the luminance and chroma datasets that are being processed in parallel. The latter is useful if, for example, you need to crop stacking artefacts that only exist in the chroma dataset, but not in the luminance dataset. Because chrominance data always remains linear and is never stretched like the luminance dataset, a courtesy (non-permanent) AutoDev is applied, so you can better see what is in the chrominance dataset.
StarTools' Deconvolution module allows for recovering detail in seeing-limited and diffraction-limited datasets. StarTools' deconvolution is unique amongst astrophotography software in several ways;
•Firstly, StarTools' Tracking feature allows you to apply "mathematically correct" deconvolution to data you have already stretched and processed. You can use deconvolution at any stage during your processing. In fact, the module will achieve better results if used late in your processing workflow, because it will then have a better understanding of how the signal (and its noise component) was stretched and modified per-pixel.
•Second, the regularization algorithm (the algorithm that keeps noise from destabilising the convergent solution) is designed so that no guessing is needed as to how to set its parameters correctly. The regularization parameter is always set to provide the optimal balance between noise/artefact and detail, based on the detected noise statistics of your dataset.
•Third, the regularization algorithm employs the same psychovisual tricks as found in the noise grain equalization of the denoise module. This means it is able to shape any introduced noise grain (if you so choose) into "useful" detail-enhancing grain.
•Fourth, the deconvolution algorithm in StarTools does not require pristine data and will often still successfully enhance detail in noisier datasets.
•Finally, the deconvolution algorithm in StarTools is so fast that previewing and experimentation to find the right parameters can be done in near-real-time. This includes evaluating the effects of different, custom Point Spread Functions ("PSFs").
It is important to understand two things about deconvolution;
•Deconvolution is "an ill-posed problem", due to the presence of noise in every dataset. This means that there is no one perfect solution, but rather a range of approximations to the "perfect" solution.
•Deconvolution should not be confused with sharpening; deconvolution should be seen as a means to restore a dataset compromised (distorted) by atmospheric turbulence and diffraction by the optics. It is not meant as an acuity-enhancing process.
Understanding these two points will make clear why the various parameters in this module exist.
First order of business for using the Decon module is to generate an inverted defect and singularity mask. This mask should contain all pixels we wish to deconvolve (green in the mask editor), and exclude all pixels that are not suitable (not green in the mask editor). Pixels that are not suitable are areas that contain aberrant data, no data, or data that is non-linear. Examples are hot pixels, dead pixels, defective sensor columns, over-exposing star cores, or (more rarely) highlights that have been non-linearly compressed by the sensor to fit into the dynamic range and prevent over-exposure. For your convenience, an AutoMask feature is available by means of the 'AutoMask' button (also launched upon opening the Decon module).
The AutoMask feature is able to generate a suitable mask in most cases by selecting 'Auto-generate mask'. As of StarTools 1.6, a more conservative 'Auto-generate conservative mask' option is also available which refrains from masking out detail in the highlights as much. The latter may be useful if your dataset is quite clean and your acquisition instrument has a good linear response throughout the dynamic range including into the highlights. Alternatively, you may also launch the Mask editor to create (or touch up) a mask yourself.
Deconvolution is extremely sensitive to aberrant data, as it relies on all data to be "real" and (originally) linear, in order to undo the specified blur in that area of the image. Letting Decon deconvolve any aberrant data greatly impacts the immediate vicinity being deconvolved and virtually always leads to significant artefacts being generated.
The Deconvolution algorithm's task is to reverse the blur caused by the atmosphere and optics. Stars, for example, are so far away that they should really render as single-pixel point lights. However, in most images, stellar profiles of non-overexposing stars show the point light "smeared" out, yielding a core surrounded by light tapering off. Further diffraction may be caused by spider vanes and/or other obstructions in the Optical Tube Assembly, for example yielding diffraction spikes.
The point light's energy is scattered around its actual location, yielding the blur. The way a point light is blurred like this, is also called a Point Spread Function (PSF). Deconvolution is all about modelling this PSF, then finding and applying its reverse to the best of our abilities.
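The principle of "finding and applying the reverse" of a PSF is classically illustrated by the Richardson-Lucy iteration. The 1-D sketch below shows that textbook scheme only; StarTools' actual regularized, Tracking-driven algorithm is considerably more sophisticated, and the helper names here are illustrative.

```python
def convolve(signal, psf):
    """Naive 'same'-size 1-D convolution with a symmetric PSF
    (edges truncated)."""
    half = len(psf) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += signal[k] * w
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=30):
    """Classic Richardson-Lucy deconvolution: iteratively refine an
    estimate so that, when re-blurred by the PSF, it matches the
    observed data. No regularization - noisy input will amplify
    artefacts, which is exactly what regularization exists to tame."""
    estimate = [1.0] * len(observed)
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        # For a symmetric PSF, the PSF is its own mirror.
        correction = convolve(ratio, psf)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

Blurring a single point light and deconvolving it shows the scattered energy being gathered back toward the point's true location.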
Atmospheric or lens-related blur is more easily modelled, as its behaviour and effects on long exposure photography have been well studied over the decades. 5 subtly different models are available for selection via the 'Primary Point Spread Function' parameter;
•'Gaussian' uses a Gaussian distribution to model atmospheric blurring. This model is fast to calculate and was the default model in StarTools prior to version 1.6.
•'Circle of Confusion' models the way light rays from a lens are unable to come to a perfect focus when imaging a point source (aka the 'Circle of Confusion'). This distribution is suitable for images taken outside of Earth's atmosphere, or images where Earth's atmosphere did not otherwise distort the image.
•'Moffat Beta=4.765 (Trujillo)' uses a Moffat distribution with a Beta factor of 4.765. Trujillo et al (2001) propose in their paper that this value is the best fit for prevailing atmospheric turbulence theory.
•'Moffat Beta=3.0 (Saglia, FALT)' uses a Moffat distribution with a Beta factor of 3.0, which is a rough average of the values tested by Saglia et al (1993). The value of ~3.0 also corresponds with the findings of Bendinelli et al (1988) and was implemented as the default in the FALT software at ESO, as a result of studying the Mayall II cluster.
•'Moffat Beta=2.5 (IRAF)' uses a Moffat distribution with a Beta factor of 2.5, as implemented in the IRAF software suite by the United States National Optical Astronomy Observatory.
The size (aka 'kernel size') of the chosen 'Primary Point Spread Function' is controlled by the 'Primary Radius' parameter. A good rule of thumb is to increase this value until ringing artefacts become noticeable, and then back off a little. As of StarTools 1.6, an 'Enhanced Deringing' parameter is available that can further ameliorate ringing artefacts.
Converging on an optimal solution is an iterative process in the Deconvolution module. In general, more iterations, controlled by the 'Iterations' parameter, will yield a better result but will take longer to compute. More iterations tend to yield diminishing returns. Different datasets may benefit from more or fewer iterations. You may wish to experiment on a smaller preview section to evaluate improvements before computing deconvolution of the entire image.
A 'Secondary Point Spread Function' may be specified by clicking on a star. The Deconvolution module will then use the star as a guide to construct a suitable total PSF. Good star samples are stars that do not overexpose, but are not too dim, are closer to the center of the image and have a flat background. When a 'Secondary Point Spread Function' is provided, the total/final PSF used is a combination of that PSF modulated by the 'Primary Point Spread Function'. This allows you to create a final PSF that is tightly controlled by the ideal atmospheric profile (and its radius) as specified by the 'Primary Point Spread Function', while exhibiting a custom measure of deformity as seen in the selected star's PSF.
For example, to make Decon use the 'Secondary Point Spread Function' only, set the 'Primary Point Spread Function' to 'Circle of Confusion (No Atmosphere)' and specify a very large 'Primary PSF Radius'. As expected, smaller radii will start cutting off the 'Secondary Point Spread Function' in a circular fashion. For a gentler tapering off of the 'Secondary Point Spread Function', you can use, for example, a 'Gaussian (Fast)' profile for the 'Primary Point Spread Function'.
Optionally, any star chosen as a 'Secondary Point Spread Function' can be made to iteratively deconvolve along with the image (by choosing one of the 'Dynamic' Star Sample settings). This effectively means that the deconvolution process deconvolves with an ever-changing total PSF. This mode can yield very good, even superior results, depending on the fidelity of the initial star sample, though at the cost of longer processing times. If this mode is selected and you are using a preview, make sure that the chosen star is included in the preview and falls well into the preview area.
The 'Regularization' parameter controls the balance between newly recovered detail and noise grain propagation. Deconvolution is exceptionally sensitive to noise; without something discerning between newly recovered detail and artefact, the compounding effect of multiple iterations of deconvolving noise will quickly end up in a noisy, artefacting mess.
The 'Regularization' in StarTools is automatically set to a baseline that should yield a good balance between detail recovery and artefact/noise suppression. However, there are instances where you may wish to deviate somewhat from the baseline to show more detail. The way this detail is introduced at the expense of noise, is very similar to how the Noise Grain Equalization Denoise module works; noise grain is allocated/'allowed' in such a way that it is still hard to detect by humans if the image is viewed at 100% zoom or below. Zoom levels above 100% break the illusion, however and noise grain allocation becomes visible.
In general, as opposed to any other software, regularization (and deconvolution as a whole) in StarTools is extremely adept at detecting and mitigating noise and artefact propagation, thanks to signal evolution Tracking. Regularization in StarTools is wholly driven by per-pixel SNR statistics gathered as you processed the image, thereby avoiding artefact development in low SNR areas, while guaranteeing maximum detail in higher SNR areas. In fact, this ability makes applying deconvolution later in your processing a good idea, as Decon will have more "up to date" SNR statistics to work with. The closer your image is to completion, the more settled per-pixel SNR measurements will be. These settled SNR measurements can then be taken into account by the regularization algorithm to yield the most appropriate results for your image.
Throughout all this, Deconvolution still operates on the linear data, even though the end result is calculated for your stretched and (possibly) heavily processed image. The mechanism responsible for this mathematical tour de force is 'Tracking Propagation'; decisions based on your stretched image are back propagated to the dataset when it was linear, re-calculated, then forward propagated to the heavily processed state your dataset is now in.
You can think of this procedure as undoing all changes you made since you started with linear data until the dataset is linear again, then making a modification to the dataset in its linear state, then redoing all those changes you made again - this time starting from modified linear data. It's a little bit like time travel and changing the past using knowledge about the future.
There are two modes for 'Tracking Propagation'. The first and default mode, 'Post-decon (fast)', only back- and forward-propagates the final result of the deconvolution operation and uses an approximation for the intermediary iterations. This was the default mode before StarTools 1.6. The second mode, 'During Regularization (Quality)', back- and forward-propagates the results constantly, for every iteration. The second option is slower but more precise than the first, and may allow you to push the dataset a little more, especially in conjunction with 'Regularization' values lower than the default balanced 1.0.
Deconvolution of planetary, solar and lunar images can be achieved as well, by switching 'Image Type' to 'Lunar/Planetary'. The difference between 'Deep Space' and 'Lunar/Planetary' mode is the way reconstructed highlights are treated. In the case of the 'Deep Space' setting, reconstructed highlights are allowed to overexpose (like any over-exposing stars in your image). In other words, the dynamic range of the entire image is not adjusted to accommodate the reconstructed detail. However, in the case of a 'Lunar/Planetary' image, reconstructed highlights are allocated additional dynamic range, so as not to make them overexpose. Note that this assumes there are no prior over-exposing areas in the source image.
Planetary, solar and lunar images will require a much less aggressive de-ringing strategy, so the 'Enhanced Deringing' parameter can usually be safely set to 0%.
The De-Noise module offers detail-aware, astro-specific noise reduction, which, paired with StarTools' Tracking feature, yields results that have no equal.
Whereas generic noise reduction routines and plug-ins for terrestrial photography are often optimised to detect and enhance geometric patterns and structures in the face of random noise, the De-Noise module is optimised to do the opposite: it preserves and enhances patterns and structures that are non-geometric in nature in the face of random noise (as well as read noise).
When used in conjunction with StarTools' 'Tracking' feature, which data-mines every decision and the noise evolution per-pixel during the user's processing, the results that De-Noise is able to deliver autonomously are absolutely unparalleled. The extremely targeted noise reduction provided in this case can only be approximated in other software by spending many hours creating a noise mask by hand.
Denoising starts when switching Tracking off. It is therefore generally the last step, and for good reason. Being the last step, Tracking has had the longest possible time to track and analyse noise propagation.
Bearing the aforementioned in mind, note that clicking the Denoise icon in the left hand menu launches the Denoise module in preview mode; the final result cannot be kept and is only meant for evaluation purposes to examine noise propagation and mitigation in an unfinished workflow. Only switching Tracking off will allow you to keep the final noise-reduced result.
The first stage of noise reduction involves selecting one of three subtly different noise reduction algorithms and helping StarTools establish a baseline for visual noise grain. To establish this baseline, increase the 'Grain size' parameter until no noise grain of any size can be seen any longer. StarTools will use this baseline to more intelligently redistribute the energy in the various bands that is taken out during the wavelet denoising in the second stage. Note that this parameter is also still available for modification in the second stage, though it lacks the visual aid presented here.
After clicking 'Next', the wavelet scale extraction starts, upon which, after a short while, the second interactive noise reduction stage interface is presented.
The base algorithm that performs noise removal is an enhanced wavelet denoiser, meaning that it is able to attenuate features based on their size. Noise grain caused by shot noise (aka Poisson noise) - the bulk of the noise astrophotographers deal with - exists on all size levels, becoming less noticeable as the size increases. Therefore, much like the Sharp module, a number of scale sizes are available to tweak, allowing the denoiser to be more or less aggressive when removing features deemed noise grain at different sizes. Tweaks to these scale parameters are generally not necessary, but may be desirable if - for whatever reason - noise is not uniform and is more prevalent in a particular scale.
First, unlike basic wavelet denoising implementations, the algorithm is driven by the per-pixel signal (and noise component) evolution statistics collected during the preceding image processing. That is, rather than using a single global setting for all pixels in the image, StarTools' implementation uses a different setting (albeit centered around a user-specified global setting) for every pixel in the image.
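To make the per-pixel idea above concrete, here is a minimal sketch in plain Python. It is an illustration, not StarTools' actual implementation: a single detail layer (a box blur stands in for one wavelet smoothing pass) is soft-thresholded with a threshold that varies per pixel according to a noise estimate, instead of one global value. All function and parameter names are made up for this example.

```python
def box_blur(signal, radius=2):
    """Simple 1-D box blur standing in for one wavelet smoothing pass."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def denoise(signal, noise_map, global_strength=1.0):
    """Attenuate the detail layer; noisier pixels get a higher threshold."""
    smooth = box_blur(signal)
    detail = [s - m for s, m in zip(signal, smooth)]
    result = []
    for d, m, n in zip(detail, smooth, noise_map):
        t = global_strength * n          # per-pixel threshold, not global
        if d > t:
            d -= t                       # soft thresholding
        elif d < -t:
            d += t
        else:
            d = 0.0
        result.append(m + d)
    return result
```

A pixel whose tracked noise estimate is zero passes through untouched, while a pixel sitting in a noisy region has its detail component attenuated; a full implementation would repeat this per wavelet scale.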
Second, the wavelet denoising algorithm is further enhanced by a feature scale correlation enhancement, which exploits common psychovisual techniques, whereby noise grain is generally tolerated better in areas of increased detail.
Third, because shot (Poisson) noise is signal-dependent and therefore behaves differently to additive Gaussian noise in areas of low signal around the noise floor, a separate algorithm can be deployed for just these areas if they are prevalent in your image. Datasets and images that show symptoms of the linear noise response breaking down may exhibit conspicuous single dark pixels inside otherwise smooth areas.
Finally, any removed energy is collected per pixel and re-distributed across the image, giving the user intuitive control over reintroduction of noise grain and fine detail, countering any over-smoothening.
The parameters that govern global noise reduction response (rather than per-feature-size) are 'Brightness/Color detail loss' and 'Smoothness'.
'Brightness/Color detail loss' specifies a measure of allowed acceptable detail loss in order to reduce noise. In color images, the 'Color detail loss' parameter works solely on any color noise, while the 'Brightness detail loss' parameter works on the detail itself, but not its colors.
The 'Smoothness' parameter determines how much (or little) the denoiser should take notice of any inter-scale detail correlation. Detail correlation is higher in areas that look 'busy' such as galaxy or nebula cores or shock waves, whereas detail correlation is low in areas that are 'tranquil' such as opaque homogenous gas clouds. Increasing 'Smoothness' progressively ignores such correlation, allowing for more aggressive noise reduction in areas of higher correlation.
'Scale correlation' specifies how deep the denoiser should look for detail that may be correlated across scales. Most data can withstand deep correlation, however some types of data may exhibit an artificially introduced correlation. This can be the case with data that;
•has been drizzled with insufficient frames
•originates from a sensor with a color filter array (for example an OSC or DSLR) where insufficient frames were stacked
•was not sufficiently dithered between sub-frame acquisition
•has any other type of recurring embedded pattern, visible or latent
Noise in such cases will not exhibit a Poisson distribution (i.e. it no longer resembles shot noise) and will instead exhibit correlation in the form of clumps or streaks. Such data may require a shallower 'Scale correlation' value. More generally, such types of noise/artefacts are beyond the scope of the De-Noise module's capabilities and should be corrected during acquisition and pre-processing, rather than at the post-processing stage.
Set 'Smoothness' until fine noise grain is sufficiently smoothened out. Increase 'Scale 5' if noise grain is visible at the largest scales. Increase or decrease 'Grain Dispersion' to taste to reintroduce fine detail and grain. Vary 'Brightness Detail Loss' and 'Color Detail Loss' if needed.
New in StarTools 1.6, the Denoise 2 module offers an alternative aesthetic to the venerable classic Denoise module, yet with all of its local signal quality Tracking power intact.
Denoise 2 is an acknowledgement of the "two schools" of noise reduction prevalent in astrophotography; there are those who like smooth images with little to no noise grain visible, and there are those who find some noise grain acceptable (or even desirable!) for the purpose of creating visual interest and general aesthetics (much like noise grain is added for a "filmic" look in CGI). The classic Denoise module caters to the former, while the Denoise 2 module caters to the latter.
Denoise 2 is centered around a single, intuitive concept; noise grain equalization. That is, with a single intuitive parameter, an "acceptable" noise grain visibility threshold is specified above which noise grain is visually acceptable, and below which noise grain is visually unacceptable. The result is a final image that appears to have a constant level of noise grain, no matter how it was processed or stretched from the time it was linear.
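The grain-equalization concept above can be sketched in a few lines of plain Python. This is a hedged illustration of the principle only, with made-up names, not Denoise 2's internals: each pixel's residual grain is rescaled so that areas noisier than the target threshold are toned down toward one constant apparent grain level.

```python
def equalize_grain(signal, smooth, local_noise, target_grain=0.02):
    """Rescale residual grain toward a single target amplitude."""
    out = []
    for s, m, n in zip(signal, smooth, local_noise):
        residual = s - m                   # the visible grain component
        if n > target_grain:
            residual *= target_grain / n   # noisy areas pulled to target
        out.append(m + residual)
    return out
```

Areas whose grain is already at or below the acceptable threshold are left untouched, which is what makes the final image appear to carry a constant level of grain regardless of how unevenly it was stretched.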
The equalization of noise grain across the entire image is an important aspect of Denoise 2, as doing so will no longer draw the attention of a viewer to areas of low SNR (Signal-to-Noise Ratio); it makes the image appear as if it has a constant SNR across the entire image, precisely as most viewers are used to in other types of media (e.g. terrestrial daylight photography and cinematic sequences).
The level of "still acceptable" noise grain is set by the 'Grain Removal' parameter. Higher values remove more grain, to the point of blurring the image.
'Grain Limit Detail' and 'Grain Limit Color' set the largest visible noise grain size, for detail (luminance) and color respectively, that Denoise 2 should target. Beyond these limits, Denoise 2 will leave detail wholly intact. These parameters are derived from the 'Grain Size' parameter in the setup screen. The 'Grain Size' parameter in the setup screen merely helps establish a visual baseline (starting point); 'Grain Limit Detail' and 'Grain Limit Color' override this.
Denoise 2 further - optionally - exploits the human visual system's poor ability to distinguish noise grain in the context of other detail, while also catering to its preference for acutance (a subjective perception of sharpness that is related to the edge contrast in an image). To this end, a psychovisual support image can be synthesised and evaluated. The Psychovisual support image should not be mistaken for a simple luminance mask blending the original and denoised image, rather it serves as an input to acutance modelling and signal-to-noise ratio estimation during denoising.
The 'Mode' parameter toggles between statistically correct grain removal and psychovisual grain removal. When 'Mode' is set to 'Statistical', the 'PV Detail', 'PV Support Gamma' and 'PV Support Area' parameters are unavailable, as these are for the psychovisual ("PV") grain removal mode only.
When the 'Mode' parameter is set to 'Psychovisual', the 'PV Detail', 'PV Support Gamma' and 'PV Support Area' parameters become available. 'PV Detail' governs what detail sizes Denoise 2 should look at when deciding whether something is visually "busy". 'PV Support Gamma' governs a non-linear transformation of the support image, non-linearly stretching the 'busyness' map; increasing this parameter increases the strength of the grain preservation in busy areas. Finally, 'PV Support Area' specifies how far 'busyness' psychovisually affects neighboring pixels in an area. Increasing this value will make grain preservation 'bleed' into neighboring pixels, preserving more grain (and detail).
Psychovisual noise reduction and detail preservation is - by its very nature - an imprecise process. While evaluating the resulting denoised image, it is important to keep a "bird's eye view" of the noise grain in your image; imagine someone looking at your denoised image for the first time with "fresh eyes" and no prior knowledge of the original noisiness of your image. "Pixel-peeping" (i.e. looking at the image at zoom levels greater than the native 100% zoom) will reveal noise much more readily than normal viewing, rather than letting the psychovisual tricks do their work at native resolution (in that sense the effect is similar to that of quantization error dithering).
As is the case with the classic Denoise module, the Denoise 2 module was primarily designed for targeting Poissonian ("shot") noise.
The Denoise 2 module relies entirely on the per-pixel statistics Tracking provides, and as such, it is not available for non-Tracked processing; the Denoise 2 module cannot yield correct results if the dataset was not linear at the time of engaging Tracking.
The Develop module was created from the ground up as a robust equivalent to the classic Digital Development algorithm that attempts to emulate classic film response when first developing a raw stacked image.
The Develop module effectively functions as a classic digital dark room where your prized raw signal is developed and readied for further processing.
The module can also be used as a Swiss army knife for gamma correction, normalisation and channel luminance contribution remixing.
First off, please note that this module emulates many aspects of photographic film, including its shortcomings. These shortcomings include photographic film's tendency to "bloat" stellar profiles. If your goal is to achieve a non-linear stretch that shows as much detail as possible, the far more advanced AutoDev will always do an objectively better job for that purpose.
Enhancements over the classic Digital Development algorithm (Okano, 1997), are the introduction of an additional gamma correction component, the removal of the edge enhancement component, and the introduction of automated black and white point detection. The latter ensures your signal never clips, while making histogram checking a thing of the past.
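The classic Digital Development response is often written as y = x / (x + a), which strongly boosts faint signal while rolling off the highlights. The following plain-Python sketch illustrates that curve together with a gamma term and a simple min/max normalisation standing in for the automatic black/white point detection; the break-point parameter `a` and the normalisation details are assumptions for illustration, not StarTools' implementation.

```python
def develop(pixels, a=0.05, gamma=1.0):
    """DDP-style stretch: normalise, apply x/(x+a), then gamma correct."""
    # Auto black/white point: normalise to [0, 1] so nothing clips.
    lo, hi = min(pixels), max(pixels)
    span = (hi - lo) or 1.0
    norm = [(p - lo) / span for p in pixels]
    # Classic DDP response: strong boost of faint signal, soft shoulder.
    ddp = [x / (x + a) for x in norm]
    peak = max(ddp) or 1.0
    return [(y / peak) ** (1.0 / gamma) for y in ddp]
```

With `a = 0.05`, a pixel at 10% of full scale lands at roughly 70% after the stretch, which is why faint nebulosity becomes visible while bright stars stay just below clipping.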
A semi-automated 'homing in' feature attempts to find the optimal settings that bring out as much detail as possible, while still adhering to the Digital Development curve.
Finally, a luminance mixer allows for re-mixing of the contribution of each color channel to brightness.
Traditionally, image processing software for astrophotography has struggled with this, resorting to kludges like "special" stretching functions (e.g. ArcSinH) or color enhancement extensions to the DDP algorithm (Okano, 1997) that only attempt to minimize the problem, while still introducing color shifts.
Because of this, the digital development color treatment extensions as proposed by Okano (1997) have not been incorporated in the Develop module. The two aspects of your image - color and luminance - are neatly separated thanks to StarTools' signal evolution Tracking engine.
The Entropy module is a novel module that enhances detail in your image, using latent detail cues in the color information of your dataset.
The Entropy module exploits the same basic premise as the Filter module; that is, the observation that many interesting features and objects in outer space have distinct colors, owing to their chemical make-up and associated emission lines. This correlation becomes exact when considering a narrowband composite, where each channel truly is made up of data from distinct parts of the spectrum.
The Entropy module works by evaluating entropy (a measure of "busyness" or "randomness") as a proxy for detail. It does so on a local level in each colour channel for each pixel. Once this measure has been established for each pixel, the individual channel's contribution to luminance for each pixel is re-weighted in CIELab space to better reflect the contribution of visible detail in that channel.
The result is that the luminance contribution of a channel with less detail in a particular area is attenuated. Conversely, the luminance contribution of a channel with more detail in a particular area is boosted. Overall, this has the effect of accentuating latent structures and detail in a very natural manner. Operating entirely in CIELab space means that, psychovisually, there is no change in colour, only brightness.
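The reweighting principle can be sketched as follows. This is a deliberately simplified, global RGB illustration with made-up names; the real module works locally per pixel and in CIELab space, neither of which is modelled here. Each channel's Shannon entropy over a coarse histogram acts as the detail proxy and weights that channel's contribution to luminance.

```python
import math

def entropy(channel, bins=16):
    """Shannon entropy of a channel's histogram; values assumed in [0, 1]."""
    hist = [0] * bins
    for v in channel:
        hist[min(bins - 1, int(v * bins))] += 1
    n = len(channel)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def entropy_luminance(r, g, b):
    """Weight each channel's luminance contribution by its entropy."""
    ents = [entropy(c) for c in (r, g, b)]
    total = sum(ents) or 1.0
    w = [e / total for e in ents]             # busier channels weigh more
    return [w[0]*ri + w[1]*gi + w[2]*bi for ri, gi, bi in zip(r, g, b)]
```

A perfectly flat channel has zero entropy and therefore contributes nothing to the resulting luminance, while a channel rich in structure dominates it.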
The above attributes make the Entropy module an extremely powerful tool for narrowband composites in particular.
The Entropy module is effective both on already processed images, as well as Tracked datasets. The module is available as of StarTools 1.5.
The Entropy module is very flexible in its image presentation. To start using the Entropy module, an entropy map needs to be generated by clicking the 'Do' button. This map's resolution/accuracy can be chosen by using the 'Resolution' parameter. The 'Medium' resolution is sufficient in most cases.
For the entropy module to be able to identify detail, the dataset should ideally be of an image-filling object or scene.
After obtaining a suitable entropy map, the other parameters can be tweaked in real-time;
The 'Strength' parameter governs the overall strength of the boost or attenuation of luminance. Overdriving the 'Strength' parameter too much may make channel transitions too visible. In this case you may wish to pull back, or increase the 'Midtone Pull Filter' size to achieve a smoother blend.
The 'Dark/Light Enhance' parameter enables you to choose the balance between darkening and brightening of areas in the image. To only brighten the image (for example if you wish to bring out faint H-alpha, but nothing else), set this parameter to 0%/100%. To only darken the image (for example to better show a bright DSO core), bring the balance closer to 100%/0%.
The 'Channel Selection' parameter allows you to only target certain channels. For example, if you wish to make S-II more visible in a Hubble-palette image, set this parameter to red (to which S-II should be mapped). S-II will now be boosted, and H-alpha and O-III will be pushed back where needed to aid S-II's contrast. If you wish to avoid the other channels being pushed back, simply set 'Dark/Light Enhance' to 0%/100%.
The 'Midtone Pull Filter' and 'Midtone Pull Strength' parameters, assist in keeping any changes in the brightness of your image confined to the area where they are most effective and visible; the midtones. This feature can be turned off by setting 'Midtone Pull Strength' to 0%. When on, the filter selectively accepts or rejects changes to pixels, based on whether they are close to half unity (e.g. neutral gray) or not. This feature works analogous to creating a HDR composite from different exposure times. The transition boundaries between accepted and rejected pixels are smoothened out by increasing the 'Midtone Pull Filter' parameter.
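The midtone-pull behaviour described above can be sketched as a simple per-pixel weighting: changes are accepted in full near half unity (neutral gray) and progressively rejected toward the shadows and highlights. The triangular falloff and the function names here are assumptions for illustration, not the module's actual filter.

```python
def midtone_pull(original, processed, strength=1.0):
    """Blend changes in, weighted by each pixel's closeness to midtone."""
    out = []
    for o, p in zip(original, processed):
        w = 1.0 - 2.0 * abs(o - 0.5)   # 1 at half unity, 0 at the extremes
        w = max(0.0, w) * strength
        out.append(o + w * (p - o))    # accept the change by that weight
    return out
```

A pixel sitting exactly at neutral gray takes the full change, while a pure black or pure white pixel is left untouched; increasing a filter radius over the weight map (not modelled here) would smooth the transition boundaries.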
The Filter module allows for the modification of features in the image by their colour by simply clicking on them. It's as close to a post-capture colour filter wheel as you can get.
Filter can be used to bring out detail of a specific colour (such as faint Ha, Hb, OIII or S2 details), remove artefacts (such as halos, chromatic aberration) or isolate specific features. It functions as an interactive colour filter.
The Filter module is the result of the observation that many interesting features and objects in outer space have distinct colors, owing to their chemical make up and associated emission lines. Thanks to the Color Constancy feature in the Color module, colours still tend to correlate well to the original emission lines and features, despite any wideband RGB filtering and compositing. The Filter module was written to capitalise on this observation and allow for intuitive detail enhancement by simply clicking different parts of the image with a specific colour.
The Fractal Flux module allows for fully automated analysis and subsequent processing of astronomical images of DSOs.
The one-of-a-kind algorithm pin-points features in the image by looking for natural recurring fractal patterns that make up a DSO, such as gas flows and filaments. Once the algorithm has determined where these features are, it is then able to modify or augment them.
Knowing which features probably represent real DSO detail, the Fractal Flux is an effective de-noiser, sharpener (even for noisy images) and detail augmenter.
Detail augmentation through flux prediction can plausibly predict missing detail in seeing-limited data, introducing detail into an image that was not actually recorded but whose presence in the DSO can be inferred from its surroundings and gas flow characteristics. The detail introduced can be regarded as an educated guess.
It doesn't stop there however – the Fractal Flux module can use any output from any other module as input for the flux to modulate. You can use, for example, the Fractal Flux module to automatically modulate between a non-deconvolved and deconvolved copy of your image – the Fractal Flux module will know where to apply the deconvolved data and where to refrain from using it.
The HDR (High Dynamic Range) module optimises local dynamic range, in order to bring out the maximum amount of detail that is hidden in your data.
An HDR optimisation tool is a virtual necessity in astrophotography, owing to the huge brightness differences (aka 'dynamic range') innate to various objects that exist in deep space.
As opposed to other approaches (for example wavelet-based ones), the HDR module enhances dynamic range allocation locally (not just globally). It further takes into account psycho-visual theory (i.e. the way human vision perceives and processes detail) in the way the controls operate on the image.
Finally, the HDR module does not exacerbate noise grain like simpler dynamic range algorithms, factoring in noise propagation into the size of the final detail enhancement.
The result is an artefact free, totally natural looking image with real detail that does not suffer from the problems that other approaches suffer from, such as looking 'flat', looking too busy, or blowing out highlights such as stars.
The HDR module optimises local dynamic range allocation for smaller details (i.e. on a more local level) than the Contrast module; the HDR module works primarily on medium-to-small features in the image.
The HDR module complements the Sharpen module and is generally a more flexible and powerful alternative that achieves artifact-free results. Examples of use cases are bright galaxy cores where small detail is still recoverable in the highlights.
The HDR module is meant to be used after your dataset has been non-linearly stretched, for example using the Develop or AutoDev modules.
As with most modules in StarTools, the HDR module comes with a number of presets;
•Optimise - accentuates detail
•Equalise - pulls detail into the midtones and out of the shadows and highlights
•Tame - pulls detail into the midtones and out of just the highlights
•Reveal - reveals latent structural detail in the highlights (set 'Algorithm' to 'Reveal All' to also reveal structural detail in the shadows)
Going beyond the presets, more detailed adjustments can be made, starting with the 'Detail Size Range' parameter. This parameter is highly influential on the end result. It governs the range of detail sizes HDR should concentrate on in order to bring out the most detail. Keeping this value small will see small detail accentuated, while larger values will see both small and large structural detail modified, progressively digging out larger scale structures; this can be quite effective in highlighting them.
A selection of different algorithms to bring out detail exists. These are chosen through the 'Algorithm' parameter;
•'Equalize', much like the preset, pulls detail into the midtones and out of the shadows and highlights.
•'Tame highlights' uses the 'Equalize' algorithm to enhance just the highlights. It is a great tool for reducing glare, very effectively negating brightness build-up in DSO cores and galaxies. It can yield similar results to the Contrast module, but on smaller scales.
•'Brighten Dark' uses the 'Equalize' algorithm to enhance just the shadows. It can be an extremely useful tool for bringing out latent detail in the shadows, such as faint, larger scale nebulosity. Because the module as a whole factors noise propagation into the size of the final detail enhancement, it does not tend to introduce much noise grain and will only bring out larger scale structures if detected.
•'Optimize soft' uses a fairly conservative detail enhancement strategy and is useful to give, for example, an image of a DSO a bit more 'punch' if it is mostly very wispy or shrouded in nebulosity.
•'Optimize hard' is a less conservative version of 'Optimize soft' and is a good general purpose structural detail enhancer.
•'Reveal DSO core' uses the 'Reveal' algorithm and applies it to just the highlights. It is a very aggressive, but also effective, structural detail hunter. Its aggressiveness can be controlled by the 'Strength' parameter. The 'Reveal' algorithm is a (very, very) distant cousin of the simple Contrast Limited Adaptive Histogram Equalisation (CLAHE) algorithm, but performs local histogram stretching rather than local histogram equalisation, thereby avoiding artifacts and noise grain exacerbation in areas with low signal-to-noise ratios. 'Reveal DSO core' only works on the highlights.
•'Reveal All' is similar in all aspects to the 'Reveal DSO core' algorithm, with the exception that it is also applied to the shadows, enhancing the totality of the local dynamic range.
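The distinction the 'Reveal' description draws between local histogram stretching and local histogram equalisation can be sketched on a single tile of pixel values. This plain-Python illustration uses simple tile-based processing as a stand-in for the real local algorithm; the function names are invented for the example.

```python
def stretch_tile(tile):
    """Local histogram stretching: remap the tile's min..max linearly."""
    lo, hi = min(tile), max(tile)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in tile]   # linear, ordering-preserving

def equalize_tile(tile, bins=16):
    """Local histogram equalisation: remap by the cumulative histogram."""
    hist = [0] * bins
    for v in tile:
        hist[min(bins - 1, int(v * bins))] += 1
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total / len(tile))
    return [cdf[min(bins - 1, int(v * bins))] for v in tile]
```

Stretching preserves the relative spacing of the tile's values (a linear remap), whereas equalisation redistributes them by rank, which is exactly what can exaggerate noise grain in low signal-to-noise areas.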
In order to throttle how much the shadows and highlights respond to the enhancements, a brightness mask is used, the power of which is controlled by the 'Dark/Bright Response' parameter.
The Heal module was created to provide a means of substituting unwanted pixels in a neutral way.
Cases in which healing pixels may be desirable may include the removal of stars, hot pixels, dead pixels, satellite trails and even dust donuts.
The Heal module incorporates a content-aware algorithm that is able to synthesise extremely plausible substitution pixels, even for large areas. The algorithm is very similar to those found in expensive photo editing packages, but has been specifically optimised for astrophotography purposes.
Getting started with the Heal module in StarTools is a fairly straightforward affair; simply put any unwanted pixels in a mask and let the module do its thing. The more pixels are in the mask, the more the Heal module will have to 'invent' and the longer the Heal module will take to produce a result.
By using the advanced parameters, the Heal module can be made useful in a number of advanced scenarios.
The 'New Must Be Darker Than' parameter lets you specify a brightness value that indicates the maximum brightness a 'new' (healed) pixel may have. This is useful if you are healing out areas that you later wish to replace with brighter objects, for example stars. By ensuring that the 'new' (healed) background is always darker than what you will be placing on top, you can simply use, for example, the Lighten mode in the Layer module.
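The workflow described above relies on the Lighten blend mode keeping, for every pixel, the brighter of the two layers. A minimal sketch makes clear why the healed background must be darker than whatever is placed on top:

```python
def lighten_blend(background, foreground):
    """Lighten blend: keep the brighter pixel of the two layers."""
    return [max(b, f) for b, f in zip(background, foreground)]
```

Because the healed background is guaranteed to be darker everywhere, the replacement objects (e.g. stars) win the `max()` comparison exactly where they exist, and the healed background shows through everywhere else.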
The 'Grow Mask' parameter is a quick way of temporarily growing the mask (see Grow button in the Mask editor). This is useful if your current mask did not quite get all pixels that needed removing.
The 'Quality' parameter influences how long the Heal module may look for substitutes for each pixel. Higher quality settings give marginally better results but are slower.
The 'Neighbourhood Area' parameter sets the size of the local area where the algorithm can look for good candidate seed pixels.
The 'Neighbourhood Samples' parameter is useful if you are looking to generate more 'interesting' areas, based on other parts of the image. It can be useful when healing a large area, to avoid small repeating patterns. This feature is useful for terrestrial photography; however, it is often not needed or desirable for astrophotographical images. If you do not wish to use this feature, keep this value at 0.
The 'New Darker Than Old' parameter sets whether newly created pixels should always be darker than the old pixels. This may be useful for manipulation of the image in the Layer module (for example subtracting the healed image from the original image).
This guide lets you create starless linear data using StarNet++ and the Heal module. Even if you wish to use StarNet++ on your final image, you will find that by using this guide to extract a star mask, the Heal module will achieve superior results when removing the stars that StarNet++ identified.
The Layer module is an extremely flexible pixel workbench for advanced image manipulation and pixel math, complementing StarTools' other modules.
It was created to provide you with a nearly unlimited arsenal of implicit functionality by combining, chaining and modulating different versions of the same image in new ways.
Features like selective layering, automated luminance masking, a vast array of filters (including Gaussian, Median, Mean of Median, Offset, Fractional Differentiation and many, many more) allow you to emulate complex algorithms such as SMI (Screen Mask Invert), PIP (Power of Inverse Pixels), star rounding, halo reduction, chromatic aberration removal, HDR integration, local histogram optimization or equalization, many types of noise reduction algorithms and much, much more.
The Lens module was created to digitally correct for lens distortions and some types of chromatic aberration in the more affordable lens systems, mirror systems and eyepieces.
One of the many uses of this module is to digitally emulate some aspects of a field flattener for those who are imaging without a physical field flattener.
While imaging with a hardware solution to this type of aberration is always preferable, the Lens module can achieve some very good results in cases where the distortion can be well modeled.
The Life module brings back 'life' into an image by remodelling uniform light diffraction, helping larger scale structures such as nebulae and galaxies stand out and (re)take center stage.
Throughout the various processing stages, light diffraction (a subtle 'glow' of very bright objects due to lens or mirror diffraction) tends to be distorted and suppressed through the various ways dynamic range is manipulated. This can sometimes leave an image 'flat' and 'lifeless'. The Life module attempts to restore the effects of uniform light diffraction by an optical system, throughout a processed image. It does so by means of modelling an Airy disk pattern and re-calculating what the image would look like if it were diffracted by this pattern. The resulting model is then used to modulate or enhance the source image in various ways. The resulting output image tends to have a re-established natural sense of depth and ambiance, with better visible super structures.
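The diffraction remodelling described above can be sketched in one dimension: pixels above a glow threshold are spread by a point spread function, and the resulting glow model is screen-blended back over the source. For brevity a small decaying kernel stands in for the real Airy-disk PSF, and all names and values here are illustrative assumptions, not the Life module's internals.

```python
def glow_model(pixels, threshold=0.8, psf=(0.05, 0.2, 1.0, 0.2, 0.05)):
    """Diffract bright pixels by a PSF, then screen-blend the glow back."""
    n, half = len(pixels), len(psf) // 2
    bright = [p if p >= threshold else 0.0 for p in pixels]
    glow = [0.0] * n
    for i, b in enumerate(bright):
        if not b:
            continue
        for k, w in enumerate(psf):          # spread brightness by the PSF
            j = i + k - half
            if 0 <= j < n:
                glow[j] += b * w
    glow = [min(1.0, g) for g in glow]
    # 'Screen' compositing: like projecting two images on the same screen.
    return [1 - (1 - p) * (1 - g) for p, g in zip(pixels, glow)]
```

Note how the glow "bleeds" from a bright pixel into its neighbours, which is what re-establishes the subtle ambiance around bright objects in the final image.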
For example, the Life module's Isolate preset, when applied to the whole image, is particularly adept at pushing back busy star fields and noisy backgrounds, refocusing the viewer's attention to the larger scale structures. As such it is a very powerful, yet easy to use tool to radically change the feel of an image.
The Life module may additionally be used locally by means of a mask. In this case the Life module can be used to isolate objects in an image and lift them from an otherwise noisy background. By having the Life module augment an object's super-structure, faint objects that were otherwise unsalvageable can be made to stand out from the background. Please note that, depending on the nature of the used selective mask, the super structures introduced by using the Life module in this particular way with a selective mask, should be regarded as an educated guess rather than documentary detail.
•Moderate - applies a moderate application of the 'life' algorithm.
•Heavy - a more aggressive application of the 'life' algorithm.
•Less=More - pushes back anything that is not a super structure, imparting depth to the image by manipulating brightness.
•Shroud - helps brighten an image without emphasising background noise or star fields.
•Isolate - pushes back anything that is not a super structure (similar to Less=More) while enhancing energy allocated to super structures.
Going beyond the presets, very detailed adjustments can be made, starting with the 'Glow Threshold' parameter. This parameter determines how bright a pixel needs to be before it is considered for diffraction by the Airy disk diffraction model.
To view just the model that Life is using to enhance the image, the 'Output Glow Only' parameter can be set to 'Yes'. Optionally this output can be used to manipulate the image later using the Layer module, or in a separate application.
The 'Strength' parameter governs the overall strength of the effect.
The 'Inherit Brightness, Color' parameter determines whether brightness or color information is inherited (and thus unchanged) from the source image.
The 'Saturation' parameter controls the colour saturation of the output model (viewable by setting 'Output Glow Only' to 'Yes'), before it is applied to the source image to generate the final output. This parameter can be quite effective for enhancing the color of nebulosity.
The 'Detail Preservation' parameter selects the detail preservation algorithm the Life module should use to merge the model with the source image to produce the output image;
•Off - does not attempt to preserve any detail.
•Min Distance to 1/2 Unity - uses the pixel that is closest to half unity (e.g. perfect gray).
•Max Contrast - uses whatever pixel maximises contrast with its neighbouring pixels.
•Linear Brightness Mask - uses a brightness mask that progressively masks out brighter values until it uses the original values instead.
•Linear Brightness Mask Darken - uses a brightness mask that progressively masks out brighter values. Only pixels that are darker than the original image are kept.
The 'Detail Preservation Radius' sets a filter radius that is used for smoothly blending processed and non-processed pixels, according to the algorithm specified by the 'Detail Preservation' parameter.
The 'Compositing Algorithm' parameter defines how the calculated diffraction model is to be generally combined with the original image:
•Screen - works like projecting two images on the same screen.
•Power of Inverse - Power of Inversed Pixels (PIP) function.
•Multiply, Gamma Correct - multiplies foreground and background and then takes the square root.
•Multiply, 2x Gamma Correct - similar to 'Multiply, Gamma Correct' but doubles the gamma correction.
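The formulas spelled out in the list above can be written down directly, assuming pixel values in [0, 1]. The PIP function is omitted here since its exact definition is not given in this text, and the "doubled gamma correction" is interpreted as applying the square root twice (an assumption).

```python
import math

def screen(f, b):
    """Screen: like projecting two images on the same screen."""
    return 1 - (1 - f) * (1 - b)

def multiply_gamma(f, b):
    """Multiply foreground and background, then take the square root."""
    return math.sqrt(f * b)

def multiply_gamma_2x(f, b):
    """As above, with the gamma correction doubled (assumed: sqrt twice)."""
    return math.sqrt(math.sqrt(f * b))
```

For example, screening two mid-gray (0.5) pixels yields 0.75, which is why Screen compositing always brightens: the result is never darker than either input.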
The 'Airy Disk Sampling' parameter controls the accuracy of the point spread function (PSF) that describes the diffraction model (an Airy disk).
•Default is 128 x 128 pixels. Range is 128 x 128, 256 x 256, 512 x 512 pixels.
•Increasing this value will give a more accurate simulation but will take longer.
The 'Airy Disk Radius' parameter sets the radius of the Airy disk point spread function (PSF) that is used to diffract the light. Just like in nature, you may spot some (very) subtle rings around the stars after processing. The way this looks can be adjusted using this setting.
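The Airy pattern used as the PSF follows, up to scaling, the classic form I(x) = (2·J1(x)/x)², where J1 is the Bessel function of the first kind. The sketch below evaluates this in pure Python via J1's power series; the mapping from the 'Airy Disk Radius' parameter to the radial coordinate x is an implementation detail of StarTools and is not modelled here.

```python
# Self-contained sketch of the Airy pattern intensity profile.
import math

def bessel_j1(x, terms=40):
    """J1(x) via its power series: sum of (-1)^m (x/2)^(2m+1) / (m! (m+1)!)."""
    total = 0.0
    for m in range(terms):
        total += ((-1) ** m) * (x / 2.0) ** (2 * m + 1) / (
            math.factorial(m) * math.factorial(m + 1))
    return total

def airy_intensity(x):
    """Normalised Airy pattern intensity; peaks at 1.0 in the centre."""
    if x == 0.0:
        return 1.0  # limit of (2*J1(x)/x)^2 as x -> 0
    return (2.0 * bessel_j1(x) / x) ** 2

# The first dark ring sits near x ~= 3.8317; the faint bright ring beyond it
# is the source of the (very) subtle rings visible around stars.
```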
Finally, as with most modules in StarTools that employ masks, a 'Mask Fuzz' parameter is available to smoothly blend the transition between masked and non-masked pixels.
The Repair module attempts to detect and automatically repair stars that have been affected by optical or guiding aberrations.
Repair is useful to correct the appearance of stars which have been adversely affected by guiding errors, incorrect polar alignment, coma, collimation issues or mirror defects such as astigmatism.
The Repair module allows for the correction of more complex aberrations than the much less sophisticated 'offset filter & darken layer' method, whilst retaining the star's exact appearance and color.
The Repair module comes with two different algorithms. The 'Warp' algorithm uses all pixels that make up a star and warps them into a circular shape. This algorithm is very effective on stars that are oval or otherwise have a convex shape. The 'Redistribution' algorithm uses all pixels that make up a star and redistributes them in such a way that the original star is reconstructed. This algorithm is very effective on stars that are concave and cannot be repaired using the 'Warp' algorithm.
StarTools' Detail-aware Wavelet Sharpening allows you to bring out faint structural detail in your images.
Other Wavelet Sharpening implementations can often drown out other fine detail because of different frequency ranges competing for the modification of the same pixel - in those implementations, the different scales (bands) interfere with each other and are not aware of the sort of detail you are trying to bring out.
Unlike other implementations, StarTools' Wavelet Sharpening gives you control over how detail enhancements across different scales and SNR areas interact. Apart from traditional parameters like controlling the strength of the detail enhancement per band, StarTools allows you to be the arbiter when two scales (bands) are competing to enhance detail in their band for the same pixel.
As of StarTools 1.6, you can control how Sharp enhances detail based on the per-pixel Signal-to-Noise Ratio (SNR) in your image. This ability lets you dig out larger scale faint detail without increasing noise.
As with all modules in StarTools, the Wavelet Sharpening module will never allow you to clip your data, always yielding useful results, no matter how outrageous the values you choose, while availing of the Tracking feature's data mining. The latter makes sure that, contrary to other implementations, only detail that has sufficient signal is emphasised, while noise grain propagation is kept to a minimum.
Using StarTools' Auto Mask Generator, stars are automatically left alone. And, best of all, the complete algorithm is so fast that results are calculated in virtually real-time, while the interface couldn't be more user friendly.
The Shrink module allows you to modify the appearance of stars in your image. It allows you to shrink stars, tighten stars and better color stars.
New as of StarTools 1.6 beta, is the Stereo 3D module. The Stereo 3D module can be used to synthesise depth information based on astronomical image feature characteristics.
The depth cues introduced are merely educated guesses by the software and user, and should not be confused with scientific accuracy. Nevertheless, these cues can serve as a helpful tool for drawing attention to processes or features in an image.
Depth cues can also be highly instrumental in lending a fresh perspective to astronomical features in an image. The Stereo 3D module is able to generate plausible depth information for most deep space objects, with the exception of some galaxies.
The module can output various popular 3D formats, including side-by-side (for cross eye viewing), anaglyphs, depth maps, self-contained web content HTML, self-contained WebVR experiences and Facebook 3D photos.
Using the Stereo 3D module effectively starts with choosing a depth perception method that is most comfortable or convenient.
By default, the Side-by-side Right/Left (Cross) Mode is used, which allows for seeing 3D using the cross-viewing technique. If you are more comfortable with the parallel-viewing technique, you may select Side-by-side Left/Right (Parallel). The benefit of the two aforementioned techniques is that they do not require any visual aids, while keeping coloring intact. The downside of these methods is that the entire image must fit on half of the screen; e.g. zooming in breaks the 3D effect.
If you have a pair of red/cyan filter glasses, you may wish to use one of the three anaglyph Modes. The two monochromatic anaglyph modes render anaglyphs for printing and viewing on a screen. The screen-specific anaglyph will exhibit reduced cross-talk (aka "ghosting") in most cases. An "optimized" Color mode is also available, which retains some coloring. Visual spectrum astrophotography tends to contain few colors that are retained in this way, however narrowband composites can benefit. Finally, a Depth Map mode is available to inspect (or save) the z-axis depth information that was generated by the current model.
The depth information generated by the Stereo 3D module is entirely synthetic and should not be ascribed any scientific accuracy. However, the modelling performed by the module is based on a number of assumptions that tend to hold true for many Deep Space Objects and can hence be used for making educated guesses about objects. Fundamentally, these assumptions are;
•Dark detail is visible by virtue of a brighter background. Dust clouds and Bok globules are good examples of matter obstructing other matter and hence being in the foreground of the matter they are obstructing.
•Brighter areas (for example due to emissions or reflection nebulosity) correlate well with voluminous areas.
•Bright objects within brighter areas tend to drive the (bright) emissions in their immediate neighborhoods. Therefore these objects should preferably be shown as embedded within these bright areas.
•Bright objects (such as bright blue O and B-class stars) drive emissions in their immediate neighborhood and tend to generate cavities due to radiation pressure.
•Stark edges such as shockfronts tend to speed away from their origin. Therefore these objects should preferably be shown as veering off.
Depth information is created between two planes; the near plane (closest to the viewer) and the far plane (furthest away from the viewer). The distance between the two planes is governed by the 'Depth' parameter.
The 'Protrude' parameter governs the location of the near and far planes with respect to distance from the viewer. At 50% protrusion, half the scene will be going into the screen (or print), while the other half will appear to 'jut out' of the screen (or print). At 100% protrusion, the entire scene will appear to float in front of the screen (or print). At 0% protrusion the entire scene will appear to be inside the screen (or print).
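The interaction of 'Depth' and 'Protrude' can be sketched as a simple linear placement of depth values relative to the screen plane. The function name, the convention that d runs from 0.0 (near plane) to 1.0 (far plane), and the linear mapping are illustrative assumptions, not StarTools' actual code.

```python
# Hedged sketch: negative results appear in front of the screen (or print),
# positive results behind it.

def screen_depth(d, depth=1.0, protrude=0.5):
    return (d - protrude) * depth

print(screen_depth(0.0, protrude=0.5))  # near plane juts out of the screen
print(screen_depth(1.0, protrude=0.5))  # far plane recedes into the screen
print(screen_depth(1.0, protrude=1.0))  # at 100%, nothing sits behind the screen
```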
The 'Luma to Volume' parameter controls whether large bright or dark structures should be given volume. Objects that primarily stand out against a bright background (for example, the iconic Hubble 'Pillars of Creation' image) benefit from a shadow dominant setting. Conversely, objects that stand out against a dark background (for example M20) benefit from a highlight dominant setting.
The 'Simple L to Depth' parameter naively maps a measure of brightness directly to depth information. This is a somewhat crude tool, and using the 'Luma to Volume' parameter is often sufficient.
The 'Highlight Embedding' parameter controls how much bright highlights should be embedded within larger structures and context. Bright objects such as energetic stars are often the cause of the visible emissions around them. Given they radiate in all directions, embedding them within these emission areas is the most logical course of action.
The 'Structure Embedding' parameter controls how small-scale structures should behave in the presence of larger scale structures. At low values for this parameter, they tend to float in front of the larger scale structures. At higher values, smaller scale structures tend to intersect larger scale structures more often.
The 'Min. Structure Size' parameter controls the smallest detail size the module may use to construct a model. Smaller values generate models suitable for widefields with small scale detail. Larger values may yield more plausible results for narrowfields with many larger scale structures. Please note that larger values may cause the model to take longer to compute.
The 'Intricacy' parameter controls how much smaller scale detail should prevail over larger scale detail. Higher values will yield models that show more fine, small scale changes in undulation and depth change. Lower values leave more of the depth changes to the larger scale structures.
The 'Depth Non-linearity' parameter controls how matter is distributed across the depth field. Values higher than 1.0 progressively skew detail distribution towards the near plane. Values lower than 1.0 progressively skew detail distribution towards the far plane.
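A power curve is the simplest mapping consistent with this description; whether StarTools uses exactly this curve is an assumption, and the function name below is hypothetical. Here d runs from 0.0 (near plane) to 1.0 (far plane).

```python
# Hedged sketch of depth redistribution under 'Depth Non-linearity'.

def redistribute_depth(d, non_linearity=1.0):
    return d ** non_linearity

# Values above 1.0 skew intermediate depths toward the near plane (0.0),
# values below 1.0 skew them toward the far plane (1.0):
print(redistribute_depth(0.5, 2.0))  # 0.25 -> closer to the near plane
print(redistribute_depth(0.5, 0.5))  # ~0.707 -> closer to the far plane
```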
Besides rendering images as anaglyphs or side-by-side 3D stereo content, the Stereo 3D module is also able to generate Facebook 3D photos, as well as interactive self-contained 2.5D and Virtual Reality experiences.
The 'WebVR' button in the module exports your image as a standalone HTML file. This file can be viewed locally in your web browser, or it can be hosted online.
It renders your image as an immersive VR experience, with a large screen wrapping around the viewer. The VR experience can be viewed in most popular headsets, including HTC Vive, Oculus, Windows Mixed Reality, GearVR, Google Daydream and even sub-$5 Google Cardboard devices.
To view an experience, put it in an accessible location (locally or online) and launch it from a WebVR/XR capable browser.
Please note that landscape images tend to be more immersive.
The 'Web2.5D' button in the module exports your image as a standalone HTML file. This file can be viewed locally in your web browser, or it can be hosted online.
Depth is conveyed by a subtle, configurable bobbing motion. This motion subtly changes the viewing angle to reveal more or less of the object, depending on the angle. The motion is configurable by both you and the viewer, in both the X and Y axes. The motion can also be configured to be mapped to mouse movements.
A so-called 'depth pulse' can be sent into the image, which travels through the image from the near plane to the far plane, highlighting pixels of equal depth as it travels. The 'depth pulse' is useful for re-calibrating the viewer's perspective if background and foreground appear swapped.
Hosting the file online allows for embedding the image as an IFRAME. The following is an example of the HTML required to insert an image in any website;
<iframe scrolling="auto" marginheight="0" marginwidth="0" style="border:none;max-width:100%;" src="https://download.startools.org/pillars_stereo.html?spdx=4&spdy=3&caption=StarTools%20exports%20self-contained,%20embeddable%20web%20content%20like%20this%20scene.%20This%20image%20was%20created%20in%20seconds.%20Configurable,%20subtle%20movement%20helps%20with%20conveying%20depth." frameborder="0"></iframe>
The following parameters can be set via the url;
•modex: 0=no movement, 1=positive sine wave modulation, 2=negative sine wave modulation, 3=positive sine wave modulation, 4=negative sine wave, 5=jump 3 frames only (left, middle, right), 6=mouse control
•modey: 0=no movement, 1=positive sine wave modulation, 2=negative sine wave modulation, 3=positive sine wave modulation, 4=negative sine wave, 5=mouse control
•spdx: speed of x-axis motion, range 1-5
•spdy: speed of y-axis motion, range 1-5
•caption: caption for the image
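The query string for such an embed can be assembled with the Python standard library; the base URL below is the one from the example iframe, and the chosen parameter values are arbitrary.

```python
# Build a Web2.5D embed URL from the documented query parameters.
from urllib.parse import urlencode

params = {
    "modex": 1,   # positive sine wave modulation on the x-axis
    "modey": 0,   # no movement on the y-axis
    "spdx": 4,    # x-axis motion speed (range 1-5)
    "spdy": 3,    # y-axis motion speed (range 1-5)
    "caption": "Configurable, subtle movement helps with conveying depth.",
}
url = "https://download.startools.org/pillars_stereo.html?" + urlencode(params)
print(url)
```

The `urlencode` call also takes care of percent-encoding the caption text.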
Example of 2.5D embeddable web content
The Stereo 3D module is able to export your images for use with Facebook's 3D photo feature.
The 'Facebook' button in the module saves your image as dual JPEGs; one image that ends in '.jpg' and one image that ends in '_depth.jpg' Uploading these images as photos at the same time will see Facebook detect and use the two images to generate a 3D photo.
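The dual-JPEG naming convention can be sketched with `pathlib`; the helper function name is hypothetical, but the '.jpg'/'_depth.jpg' pairing follows the description above.

```python
# Derive the image/depth-map file name pair used for Facebook 3D photos.
from pathlib import Path

def facebook_pair(base):
    image = Path(base).with_suffix(".jpg")
    depth = image.with_name(image.stem + "_depth.jpg")
    return image, depth

image, depth = facebook_pair("pillars")
print(image, depth)  # pillars.jpg pillars_depth.jpg
```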
Please note that, due to Facebook's algorithm being designed for terrestrial photography, the 3D reconstruction may be a bit odd in places, with artifacts appearing and stars detaching from their halos. Nevertheless, the result can look quite pleasing when simply browsing past the image in a Facebook feed.
TVs and projectors that are 3D-ready can - at minimum - usually be configured to render side-by-side images as 3D. Please consult your TV or projector's manual or in-built menu to access the correct settings.
The Synth module generates physically correct diffraction and diffusion of point lights (such as stars) in your image, based on a virtual telescope model.
Besides correcting and enhancing the appearance of point lights (such as stars), the Synth module may even be 'abused' for aesthetic purposes to endow stars with diffraction spikes where they originally had none.
Other tools on the market today simply approximate the visual likeness of such star spikes and 'paint' them on. The Synth module, however, can physically model and emulate most real optical systems and configurations to obtain a desired result.
The Wipe module detects, models and removes any source of unwanted light bias.
The Wipe module's main purpose is to eliminate unwanted light in an image and establish a neutral background.
Unwanted light may come in the form of gradients, colour cast or light pollution.
•Gradients are usually prevalent as gradual increases (or decreases) of background light levels from one corner of the image to another. Sources may include, for example, a nearby street light.
•Colour casts are a tint of a particular colour which, contrary to a gradient, affects the whole image evenly.
•Light pollution is the presence of a persistent haze of (often) coloured light, caused by urban street lighting.
Other issues that the Wipe module may ameliorate are vignetting and amp glow;
•Vignetting manifests itself as the gradual darkening of the image towards the corners and may be caused by a number of things.
•Amp glow is caused by circuitry heating up in close proximity to the CCD, causing localised heightened thermal noise (typically at the edges). On some older DSLRs and compact digital cameras, amp glow often manifests itself as a patch of purple fog near the edge of the image.
Strictly speaking, vignetting is not an additive light source, and the correct course of action is to apply flat frames during sub frame calibration. That said, reasonable results can be achieved using Wipe's "vignetting" preset.
Note that while part of Wipe's job description is 'establishing a neutral background', this doesn't necessarily mean the background is colourless. It simply means that the colour channels are now bias-less; colour calibration of the channels by the Color module is still required.
It is of the utmost importance that Wipe is given the best artefact-free, linear data you can muster.
Because Wipe tries to find the true (darkest) background level, any pixel reading that is mistakenly darker than the true background in your image (for example due to dead pixels on the CCD, or a dust speck on the sensor) will cause Wipe to acquire wrong readings for the background. When this happens, Wipe can be seen to "back off" around the area where the anomalous data was detected, resulting in localised patches where gradient (or light pollution) remnants remain. These can often look like halos. Often dark anomalous data can be found at the very centre of such a halo or remnant.
The reason Wipe backs off is that Wipe (as is the case with most modules in StarTools) refuses to clip your data. Instead Wipe allocates the dynamic range that the dark anomaly needs to display its 'features'. Of course, we don't care about the 'features' of an anomaly and would be happy for Wipe to clip the anomaly if it means the rest of the image will look correct.
Fortunately, there are various ways to help Wipe avoid anomalous data;
•A 'Dark anomaly filter' parameter can be set to filter out smaller dark anomalies, such as dead pixels or small clusters of dead pixels, before passing on the image to Wipe for analysis.
•Larger dark anomalies (such as dust specks on the sensor) can be excluded from analysis simply by creating a mask that excludes that particular area (for example by "drawing" a "gap" in the mask using the Lasso tool in the Mask editor).
•Stacking artefacts can be cropped using the Crop module.
Bright anomalies (such as satellite trails or hot pixels) do not affect Wipe.
Once any dark anomalies in the data have successfully been dealt with, operating the Wipe module is fairly straightforward.
By default, a setting is selected that performs well in the presence of moderate gradients, colour casts or bias levels.
If the gradient is found to undulate more strongly, a higher 'Aggressiveness' setting may be appropriate. When using a higher 'Aggressiveness', be mindful of Wipe not 'wiping' away any medium to larger scale nebulosity. To Wipe, larger scale nebulosity and a strong undulating gradient can look like the same thing!
If you're worried about Wipe removing any larger scale nebulosity, you can protect this nebulosity by masking it out, so that Wipe doesn't sample it.
Because Wipe's impact on the dynamic range in the image is typically very high, a global re-stretch using the AutoDev or Develop module is almost always required afterwards, so that the dynamic range freed up by removing the gradients and/or light pollution can be put to good use to show detail.
Having to 'Keep' the result and switch to 'AutoDev' or 'Develop' just to see the result would be tedious. Therefore, a courtesy 'Temporary AutoDev' option can be switched on to preview the result from within the module.
A number of controls for advanced use and special cases are available.
The 'Corner aggressiveness' parameter lets the user specify a different aggressiveness value for the corners of the image. This can be useful if gradients become stronger in just the corners, and can help ameliorate vignetting. The 'Drop off point' determines how far from the center of the image the 'Corner aggressiveness' starts taking over from the main 'Aggressiveness' parameter. At a 'Drop off point' of 100%, no effect is visible (e.g. only the main 'Aggressiveness' parameter is used), since the 'Corner aggressiveness' only comes into effect 100% of the way between the center of the image and the corners.
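One plausible reading of this blend is a linear ramp between the two aggressiveness values as a function of distance from the image centre. The function name and the linear ramp are illustrative assumptions, not StarTools' actual code; r is the normalised distance from centre (0.0) to corner (1.0).

```python
# Hedged sketch of blending 'Aggressiveness' and 'Corner aggressiveness'.

def local_aggressiveness(r, aggressiveness, corner_aggressiveness, drop_off):
    if r <= drop_off or drop_off >= 1.0:
        return aggressiveness
    w = (r - drop_off) / (1.0 - drop_off)  # 0 at the drop-off point, 1 at the corner
    return aggressiveness + w * (corner_aggressiveness - aggressiveness)

print(local_aggressiveness(1.0, 75, 95, 0.5))   # corner uses the corner value
print(local_aggressiveness(0.25, 75, 95, 0.5))  # inside the drop-off point: main value
```

Note that a drop-off point of 1.0 makes the corner value unreachable, matching the documented behaviour at 100%.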
The 'Precision' parameter can help when dealing with rapidly changing (e.g. undulating) gradients combined with high 'Aggressiveness' values.
The 'Mode' parameter allows for the selection of what aspect of the image should be corrected by Wipe;
•Correct color and brightness; removes both colour and brightness bias across the image.
•Correct color only; removes color casts but does not impact brightness bias.
•Correct brightness only; retains color but corrects brightness bias. This mode is useful when processing narrowband data, or data that was not acquired on earth (for example Hubble Space Telescope data).
StarTools' engine is built around a feature called "Tracking", which processes your signal in 3D (X, Y, t) space, rather than standard 2D (X, Y) space.
The result is less noise grain, finer detail, more flexibility, and unique functionality. You will not find this in any other software.
StarTools monitors your signal and its noise component, per-pixel, throughout your processing (time). It delivers image quality and unique functionality that far surpass other software. Big claim? Let us back it up.
If you have ever processed an astrophotographical image, you will have had to non-linearly stretch the image at some point, to make the darker parts with faint signal visible. Whether you used levels & curves, digital development, or some other tool, you will have noticed noise grain becoming visible quickly.
You may have also noticed that the noise grain always seems to be worse in the darker areas than in the brighter areas. The reason is simple; when you stretch the image to bring out the darker signal, you are also stretching the noise component of the signal along with it.
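This effect can be illustrated with a toy calculation: the same noise amplitude is amplified by the local slope of the stretch, which for a curve like f(x) = x^0.25 is far steeper near black. The specific stretch function is an arbitrary example, not StarTools' actual curve.

```python
# Toy illustration of why a global stretch makes grain worse in the shadows.

def stretch(x):
    return x ** 0.25

def local_gain(x, eps=1e-6):
    # Numerical slope of the stretch at brightness x; noise riding on the
    # signal at x is amplified by roughly this factor.
    return (stretch(x + eps) - stretch(x)) / eps

dark, bright = 0.01, 0.5
print(local_gain(dark) / local_gain(bright))  # grain amplified far more in the dark pixel
```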
And that is just a simple global stretch. Now consider that every pixel's noise component goes through many other transformations and changes as you process your image. Once you get into the more esoteric and advanced operations such as local contrast enhancements or wavelet sharpening, noise levels get distorted in all sorts of different ways in all sorts of different places.
The result? In your final image, noise is worse in some areas and milder in others. A "one-noise-reduction-pass-fits-all" approach no longer applies. Yet that's all other software packages - even the big names - offer.
Chances are you have used a noise reduction routine at some stage. In astrophotography, the problem with most noise reduction routines, is that they have no idea how much worse the noise grain has become in the darker parts. They have no idea how you stretched and processed your image earlier. And they certainly have no idea how you squashed and stretched the noise component locally with wavelet sharpening or local contrast optimisation.
In short, the big problem, is that separate image processing routines and filters have no idea what came before, nor what will come after when you invoke them. All pixels are treated the same, regardless of their history. Current image processing routines and filters are still as 'dumb' as they were in the early 90s. It's still "input, output, next".
Without knowing how signal and its noise component evolved to become your final image, trying to, for example, squash noise accurately is impossible. What's too much in one area, is too little in another, all because of the way prior filters have modified the noise component beforehand.
The separation of image processing into dumb filters and objects is one of the biggest problems for signal fidelity in astrophotographical image processing software today. It is the sole reason for poorer final images, with steeper learning curves than necessary. Without addressing this fundamental problem, "having more control with more filters and tools" is an illusion. The IKEA effect aside, long workflows with endless tweaking do not make for better images.
But what if every tool, every filter, every algorithm could work backwards from the finished image, and trace signal evolution, per-pixel, all the way back to the source signal? That's Tracking.
Tracking in StarTools makes sure that every module and algorithm can trace back how a pixel was modified at any point in time. It's the Tracking engine's job to allow modules and algorithms to "travel in time" to consult data and even change data (changing the past), and then forward-propagate the changes to the present.
The latter sees the Tracking module re-apply every operation made since that point in time, however with the changed data as a starting point; changing the past for a better future. This is effectively signal processing in three dimensions; X, Y and time (X, Y, t).
This remarkable feature is responsible for never-seen-before functionality that allows you to, for example, apply deconvolution to heavily processed data. The deconvolution module "simply" travels back in time to a point where the data was still linear (normally deconvolution can only correctly be applied to linear data!). Once travelled back in time, deconvolution is applied and then Tracking forward-propagates the changes. The result is exactly what your processed data would have looked like if you had applied deconvolution earlier and then processed it further.
Sequence doesn't matter any more, allowing you to process and evaluate your image as you see fit. But wait, there's more!
Time traveling like this is very useful and amazing in its own right, but there is another major, major difference in StarTools' deconvolution module.
The major difference is that, because you initiated deconvolution at a later stage, the deconvolution module can take into account how you processed the image after the moment deconvolution would normally have been invoked (e.g. when the data was still linear). The deconvolution module now has knowledge about a future it is not privy to in any other software. Specifically, that knowledge of the future tells it exactly how you stretched and modified every pixel - including its noise component - after the time its job should have been done.
You know what really loves per-pixel noise component statistics like these? Deconvolution regularization algorithms! A regularization algorithm suppresses the creation of artefacts caused by the deconvolution of - you guessed it - noise grain. Now that the deconvolution algorithm knows how noise grain will propagate in the "future", it can take that into account when applying deconvolution at the time when your data is still linear, thereby avoiding a grainy "future", while allowing you to gain more detail. It is like going back in time and telling yourself the lottery numbers for today's draw.
What does this look like in practice? It looks like a deconvolution routine that just "magically" brings into focus what it can. No local supports, luminance masks, or selective blending needed. No exaggerated noise grain, just enhanced detail.
And all this is just what Tracking does for the deconvolution module. There are many more modules that rely on Tracking in a similar manner, achieving objectively better results than any other software, simply by being smarter with your hard-won signal.
In StarTools, your signal is processed (read and written) in a time-fluid way. Being able to change the past for a better future not only gives you amazing new functionality, changing the past with knowledge of the future also means a cleaner signal. Tracking always knows how to accurately estimate the noise component in your signal, no matter how heavily modified.
For its unique engine to function, StarTools needs to be able to make mathematical sense of your signal flow. That's why it's simply unable to perform "nonsensical" operations. This is great if you're a beginner and saves you from bad habits or sub-optimal decisions.
Just like in real life, in astrophotographical image processing, some things need to be done in a particular order to get the correct result. Folding, drying, then washing your shirt will achieve a markedly different result to washing, drying and folding it. Similarly, deconvolution will not achieve correct results if it is done after stretching; ditto for light pollution removal and color calibration. In mathematics, such order-dependent operations are said to be non-commutative.
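A minimal numeric example makes the order-dependence concrete: removing a light pollution bias before a non-linear stretch is not the same as removing it afterwards. The values and the square-root stretch are made up purely for illustration.

```python
# Order of operations matters: bias removal and stretching do not commute.
import math

pixel, bias = 0.35, 0.15

correct = math.sqrt(pixel - bias)  # remove bias while linear, then stretch
wrong = math.sqrt(pixel) - bias    # stretch the biased data, then try to remove bias

print(correct, wrong)  # the two orders disagree
```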
The "Tracking" feature constantly backward-propagates and forward-propagates your signal through processing "time" as needed. This means that "nonsensical" signal paths (e.g. signal paths that get sequences wrong) would break Tracking's ability to do so. Therefore, such signal paths are closed off. For this reason, it is nigh-impossible in StarTools to perform catastrophically destructive operations on your data; it simply wouldn't be sound mathematics and the code would break.
For example, the notion of processing in the linear domain vs the non-linear (stretched) domain is completely abstracted away by the engine out of necessity. If you don't know the difference between the two yet, you can get away with learning about this later. Even without knowing the ins and outs of astronomical signal processing, you can still produce great images from the get-go; StarTools takes care of the correct sequence.
So, whereas other software will happily (and incorrectly!) allow you to perform light pollution removal, color calibration or deconvolution after stretching, StarTools will...
...actually also let you do that, but with a twist!
Tracking will rewind and/or fast-forward to the right point in time, so that the signal flow makes sense and is mathematically consistent. It inserts the operation in the correct order and recalculates what the result would have looked like if your decision had always been the case. It's time travelling for image processing, where you can change the past to affect the present and future.
For an in-depth explanation of Tracking, see the Tracking section.
StarTools is a 64-bit optimized application for multi-core processors, and needs at least 6GB of memory available. For larger datasets 16GB to 32GB may be required. Fast SSD access will greatly benefit the application. Always check for oversampling and bin down your dataset to a lower resolution where possible. Legacy 32-bit machines and operating systems are also still supported.
The single ZIP archive contains the executables for Windows, macOS and Linux. StarTools is a pure native application and does not rely on other frameworks.
Never download StarTools from anywhere else but startools.org. We do not allow distribution of StarTools by any other party, on-line or off-line. If you find a copy of StarTools not hosted on startools.org, please let us know.
Some macOS (e.g. Sierra and above) users may need to run;
xattr -dr com.apple.quarantine StarTools.app
to un-quarantine StarTools.
This command needs to be run from the folder where the StarTools application is located (you can use the 'cd' command to navigate to the right folder, while using the TAB key to auto-complete the path).
Alternatively StarTools can be launched via control + click on the application, Show Package Contents, navigating to Contents/MacOS and clicking on the application.
Apple has been making it increasingly difficult for independent developers to distribute applications; as of Sierra, the steps above may be needed to run StarTools.
Release Candidate versions are stable versions that are almost ready for release, and contain significantly enhanced functionality and features. They may still be subject to small fixes and tweaks, and documentation may not be 100% complete.
StarTools 1.6.394 for Windows 32-bit, Windows 64-bit, Windows 64-bit with AVX 2.0, MacOSX 64-bit, Linux 32-bit, Linux 64-bit, Linux 64-bit AVX 2.0 (6.8MB)
Latest version released 2020-05-02 (YYYY/MM/DD)
StarTools 1.5.369 Maintenance Release 3, for Windows 32-bit, Windows 64-bit, MacOSX 64-bit, Linux 32-bit and Linux 64-bit (3.8MB)
Latest version released 2019-10-31 (YYYY/MM/DD)
StarTools 1.3.204 for Android 1.6+ Technology Demo (1.5 MB)
NOTE: Put any file you want to load in /sdcard root and name it 'file.tiff'
StarTools uses AIFE.AI for content management and digital footprint. This means that the website content doubles as a printable manual and vice-versa. This content is also available as a smartphone/tablet app, virtual flipbook, virtual reality (VR) experience and more. This content will always be up-to-date with the latest information.
Unofficial English StarTools 1.6 Manual (96MB), last updated 2020-04-17, with tips, tricks and information from various sources.
Many thanks to J. Scharmann for putting together this excellent work, as well as its German translation.
Unofficial StarTools 1.6 Manual in German (96MB), last updated 2020-04-17.
Many thanks to J. Scharmann for the excellent translation.
Unofficial StarTools Manual in Spanish, based on version 1.6 (16B). Last updated 2020-05-05.
Many thanks to C. R. Guixé for the excellent translation.
These are some questions that get asked frequently.
StarTools works on all 32-bit (NT-based) and 64-bit versions of Windows. This means that StarTools runs on Windows NT 4, Windows 2000, Windows XP, Windows Vista, Windows 7, Windows 8 and Windows 10.
StarTools works on macOS versions from 10.7 onwards.
StarTools should work on 32-bit or 64-bit Linux distributions with X11, GLIBC 2.15 and Zenity.
StarTools is display-device agnostic, but can be configured to display its GUI at a 4x higher resolution to accommodate high-DPI devices and 4K displays.
To enable this mode, create an empty file called 'highdpi' (NOTE: without extension or file type) in the StarTools folder where the executable is launched from.
You may have to configure your operating system to not scale up StarTools. Wayland users may be interested in this link, while Windows 10 users may be interested in this link.
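For example, creating the marker file from a shell prompt in the folder containing the StarTools executable:

```shell
# Create the empty 'highdpi' marker file (no extension) that enables the
# 4x high-DPI GUI mode on the next launch of StarTools.
touch highdpi

# Removing the marker restores the standard-resolution GUI:
# rm highdpi
```

On Windows, the equivalent is an empty file named `highdpi` (with no `.txt` extension) in the same folder, created via Explorer or with `type nul > highdpi` on the command line.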
Some less reputable virus scanners such as BitDefender, Norton and SpyBot may falsely report StarTools as a Trojan or Potentially Unwanted Program (due to malware that carries a similar name). Despite multiple users going to the lengths of getting StarTools whitelisted, the same problem pops up every 6 months or so.
Please see this post in the forums for more information.
If despite the above information you feel your StarTools download does indeed contain malware, please contact us as soon as possible.
The minimum specifications for a computer to run StarTools successfully depend mostly on the resolution of the data you intend to process.
Low-resolution datasets (for example from a 1MP CCD or webcam) may be processed successfully on a Pentium 4 with 512MB of RAM.
High-resolution datasets, such as those from a DSLR, typically require at least 4GB of RAM.
For best results, 16GB and a modern 4-core CPU are recommended, in addition to running from a RAM disk (or alternatively a Solid State Drive).
Regardless of your machine's specification, consider binning your data if your data is oversampled.
StarTools uses all your CPU's cores to speed up processing in situations where it makes sense.
Please note that using multiple cores for tasks that are memory-bus constrained can actually have an adverse effect on performance, so you may find that not all algorithms and modules use all cores all of the time.
The 32-bit version is meant for older computers with less memory and/or a 32-bit Operating System.
The signal path is 32-bit for the 32-bit version, while the signal path is 64-bit for the 64-bit version, the latter being more precise but requiring twice the memory. Additionally, the 64-bit version makes use of the latest instruction sets (such as SSE) on the more modern CPUs to speed up processing tasks.
We endeavor to keep supporting older hardware.
StarTools is a completely native, self-contained application that does not require any further installation of helper libraries or run-time frameworks.
Everything in StarTools was written from the ground up and has been hand-optimised, from the image processing algorithms to the UI library, from the file importing to the font renderers, from the multi-platform framework to the decompression routines. Why? Because we feel it is important to be master of our own destiny (and make you master of your own destiny by extension) and fundamentally understand each and every ingredient that goes into the mix.
Fundamentally understanding the different algorithms, optimisation techniques and data structures gives us the ability to push the boundaries and create truly novel techniques and algorithm implementations.
Please note that Linux users will still need X11, GLIBC 2.15, Zenity and wmctrl installed on their system.
If you had bothered to read the 'buy' page, you would have learned that you could spare yourself the effort of writing a keygen or crack - if you can't afford the license fee and you are a genuine enthusiast, we're happy to work something out!
We're not some big evil company and we're not in it for the money. Heck, we make a loss on all this for the love of the hobby and are not even covering our costs as it is.
Besides, ST's release cycle is one of continuous updates - you'd be continuously waiting for the next crack or keygen in order to avail of the latest features and bug fixes (of which there can be several a month).
A StarTools license is currently priced at an affordable 65 AUD (~ 45 USD, 40 EUR, or 35 GBP). A 20% discount applies for group buys of 5 licenses or more.
Your license is yours to keep forever. It will never expire and entitles you to all updates released within 2 years of the purchase date. You do not need an Internet connection and you are free to install StarTools on as many systems as you like, provided you own those systems and are an individual. If you are any other entity (business, organization, club, etc.), please contact us. Please see the EULA included in the download for further details. We're not a fan of heavy handed DRM systems, complicated activation procedures or "renters" licenses. We trust our users to do the right thing – your license key uniquely identifies you and that's good enough for us.
Please use the FREE trial version before you buy. It offers full functionality, with the exception of being able to save your work. This way you can be sure StarTools performs adequately on your system and suits your needs.
StarTools aims to be as affordable as it is powerful. The StarTools project is about enabling astrophotography for as many people as possible, no matter how limited or advanced their means and equipment - we just try to cover our costs. If the pricing is an issue for you (self supported student, minor, pensioner, veteran, hard times, COVID-19 related income difficulties, etc.), contact us and we'll try to work something out; we understand - we've been there. No need for cracks, keygens, etc.
Please allow 48 hours for us to process your order, as we manually generate the keys from your billing details and e-mail them to you as an attachment via your nominated PayPal e-mail address. Please make sure the e-mail address you have nominated for PayPal transactions is correct.
Please make sure your e-mail inbox is not full. If, despite repeated efforts, our e-mail with the license key attachment cannot be delivered, the full amount will be refunded. If we have not responded within 48 hours after payment, please check your Junk mail folder and contact us via e-mail or the contact form on the website.
Thank you for considering renewal of your StarTools update entitlement license!
Your continued support helps us improve StarTools with new tools and new algorithms, opening up your (and our!) wonderful hobby to more people around the world, regardless of their means.
A StarTools license renewal is currently priced at 29 AUD (approximately 20 USD, 18 EUR, or 17 GBP).
Renewals are checked against previous purchases. If your previous purchase cannot be found, renewal will fail and your renewal purchase will be refunded.
Please do contact us if you have special requirements, or if the pricing is an issue for you.
If you received a voucher for a StarTools license from a third party vendor, you can apply for your StarTools license by filling out this form.
For terms, conditions and processing times, please refer to the information under "buy".
We use PayPal as it automatically provides the verified details we require for license generation. However, we understand not everyone has (or wants) a PayPal account.
An international bank transfer is also possible, for example through Transferwise. Please contact us if you wish to avail of this option, as the details you need may vary by bank.
Visit our friendly forum, full of hints, tips and tutorials at https://forum.startools.org
These are some helpful links and tutorials related to StarTools and other image processing resources.
You may also find it helpful to know that the icons in the top two panels roughly follow a recommended workflow.
Much of StarTools revolves around signal evolution Tracking from start to finish. As such, familiarising yourself with how it works is recommended to get the most out of your experience and your dataset.
If you have a correctly stacked dataset, this quick, 7-step guide will get you processing your first image with StarTools in no time at all.
StarTools will not work correctly (or will work poorly) with an incorrectly stacked dataset. Getting a suitable dataset from your free or paid stacking solution is extremely important.
There is an optimal ISO value for each DSLR, where your specific sensor provides the optimal balance between read noise and dynamic range.
ISO in the digital domain is unfortunately much misunderstood. The most important thing to understand is that picking an ISO value does not - in any way - make your digital camera's sensor more or less sensitive to light. A sensor's ability to convert incoming photons into electrons is fixed. This article by Chris van den Berge goes into more depth.
For the purpose of astrophotography, then, your camera will have an ISO value that is optimal for this type of photography. This section contains a number of suggested ISO values for popular DSLR models from popular vendors. These values are based on data from Photons to Photos, sensorgen.info (now defunct), DxOMark and dslr-astrophotography.com.
Please note that these are suggestions and you may wish to do more research and/or try one above the suggested setting.
There are a few simple, but important, do's and don'ts to prepare your dataset for post-processing in StarTools.
Learning how to use a new application is daunting at the best of times. And if you happen to be new to astrophotography (welcome!), you have many other things, acronyms and jargon to contend with too. Even if you consider yourself an image processing veteran, there are some important things you should know. That is because some techniques and best practices play a bigger role in StarTools than in other applications. By the same token, StarTools is also much more lenient in some areas than other applications.
Most advice boils down to making sure your dataset is as virgin as possible. Note that doesn't mean noise-free or even good, it just means you have adhered to all the conditions and best-practices outlined here, to the best of your abilities.
When learning how to process astrophotography images, the last thing you want to do is learn all sorts of post-processing tricks and techniques just to work around issues that are easily avoidable during acquisition or pre-processing. Fixing acquisition and pre-processing issues during post-processing will never look as good, and you will not learn much from it either; whatever you learn and do to fix a particular dataset is likely not applicable to the next.
Conversely, if your dataset is clean and well calibrated according to best practices, you will find workflows much more replicable and shorter. In short, it is just a much better use of your time and efforts! You will learn much quicker and you will start getting more confident in finding your personal vision for your datasets - and that is what astrophotography is all about.
If practical, try a divide & conquer strategy, focusing on areas of data acquisition, pre-processing, and post-processing separately and in that order. Be mindful that success in conquering one stage is important to be able to achieve success in the stage that immediately follows it.
When we say StarTools requires the most virgin dataset you can muster, we really mean it! It means no procedures or modifications must be done by any other software - no matter how well-meaning. It means no gradient or light pollution removal, no color balancing, not even normalization (if not strictly necessary), and no pre-compositing of the channels. Signal evolution Tracking - the reason why StarTools achieves objectively better results than other software - absolutely requires it.
•Make sure your dataset is as close to actual raw photon counts as possible.
•Make sure your dataset is linear and has not been stretched (no gamma correction, no digital development, no levels & curves).
•Make sure your dataset has not been normalised (no channel calibration or normalisation), unless unavoidable due to your chosen stacking algorithm.
•Make sure all frames in your dataset are of the same exposure length and same ISO (if applicable).
•Make sure your dataset is the result of stacking RAW files (CR2, CR3, NEF, ARW, FITS, etc.) and not lossily compressed or low bit depth formats (e.g. not JPEGs or PNGs).
•Make sure no other application has modified anything in your dataset; no stretching, no sharpening, no gradient reduction, no normalisation.
•If you can help it, make sure your dataset is not color balanced (aka "white balanced"), nor has had any camera matrix correction applied.
•Flats are really not optional; your dataset must be calibrated with flats to achieve a result that would generally be considered acceptable.
•Dithering between frames during acquisition is highly recommended (a spiraling fashion is recommended, and if your sensor is prone to banding, you will want to use larger movements).
•If you use an OSC or DSLR, choose a basic debayering algorithm (such as bilinear or VNG debayering) in your stacker. Avoid "sophisticated" debayering algorithms meant for single frames and terrestrial photography, like AHD or any other algorithm that attempts to reconstruct detail.
•If using a mono CCD/CMOS camera, make sure your channels are separated and not pre-composited by another program; use the Compose module to create the composite from within StarTools and specify exposure times where applicable.
•Make sure you use an appropriate ISO setting for your camera (see the Recommended ISO Settings for DSLR cameras section).
Some common problems in StarTools, caused by ignoring the checklist above:
•Achieving results that are not significantly better than from other software
•Trouble getting any coloring
•Trouble getting expected coloring
•Trouble getting a good global stretch
•Halos around dust specks, dead pixels or stacking artifacts
•Faint streaks (walking noise)
•Vertical banding
•Noise reduction or other modules do not work, or require extreme values to do anything
•Ringing artifacts around stars
•Color artifacts in highlights (such as star cores)
•Trouble replicating workflows as seen in tutorials and/or videos
•Uncorrelated noise grain (e.g. noise grain should be exactly one pixel in size)
•Light pollution
•Sky gradients
•Vignetting
•Gradients due to uneven lighting
•Dust specks, dust donuts
•Smudges
•Amp glow
•Dead pixels, dead sensor columns
•Satellite trails
•Trees or buildings
•Banding
•Walking noise and other correlated noise (e.g. noise that is not single-pixel speckles)
The above are all easily avoided by good acquisition techniques, correct stacker settings, and proper calibration with flats and - optionally - darks and/or bias frames.
•Process your dataset from start to finish in StarTools, including compositing (LRGB, LLRGB, SHO, HOO, etc.)
•Use simple workflows and familiarize yourself with the 'standard' suggested workflow outlined in the application itself, the many tutorials, the documentation and as roughly depicted in the home screen when reading the modules left-to-right, top-to-bottom
•Acquire and apply flats
•Dither between frames during acquisition as often as practical (ideally every frame)
•Bin your dataset if your dataset is oversampled
•Use deconvolution to restore detail if possible
•Use an outlier rejection algorithm in your stacker (Median if < ~20 frames, any other more sophisticated outlier rejection algorithm if more)
•Practice with some publicly available datasets that are of reasonable quality to get a feel for what a module is trying to do under normal circumstances
•Do not post-process any part of your image in any way in another application before opening it in StarTools
•Do not make composites in any other application but StarTools
•Do not process any part of your subs in any way in another application before stacking them
•Do not visit the same modules many times
•Do not process your dataset at a higher resolution than necessary
•Do not drizzle your dataset in your stacker if your dataset is already oversampled
•Do not try to hide issues by clipping the interstellar background to black (this is hard to do in StarTools as it is very bad practice, but is not impossible)
•Do not mix frames shot with different exposure times or ISOs in your stacker
Deep Sky Stacker (FREE) remains one of the most popular pre-processing applications for Windows. Stacking and saving your data with these settings is essential to getting good results from StarTools.
In line with the important pre-processing do's and don'ts that apply when using StarTools with any stacker, you will want to configure Deep Sky Stacker specifically in the following manner.
•Choose No White Balance Processing in the RAW/FITS dialog
•Choose Bilinear Interpolation for the Bayer Matrix Transformation algorithm
•Save your final stack as 32-bit/channel integer FITS files, with adjustments not applied
•Stack with Intersection mode; this reduces (but may not completely eliminate) stacking artifacts
•Do not choose Drizzling, unless you are 100% sure that your dataset is undersampled, you have shot many frames, and you dithered at the sub-pixel level between every frame
•Turn off any sort of Background Calibration
•Some users have reported that they need to check the 'Set black point to 0' checkbox in the 'RAW/FITS Digital Development Process Settings' dialog to get any workable image
With all the above settings made, you can then safely stack and (assuming you used a DSLR or OSC) import your dataset into StarTools as "Linear, from OSC/DSLR with Bayer matrix and not white balanced".
Please consult the "Important dataset preparation do's and don'ts" section for further advice on improving your datasets.
This is a basic workflow showing how real-world, imperfect data from a DSLR can be processed in StarTools. The workflow details data prep, bias / gradient / light pollution removal, stretching, deconvolution, color calibration and noise reduction. Please see video description on YouTube for the actual datasets and other resources.
This video shows how processing a complex Hubble Space Telescope SHO dataset is virtually just as easy as processing a simple DSLR dataset in StarTools 1.5. Aside from activating the Compose module, your workflow and processing considerations are virtually the same. Please see video description on YouTube for datasets and other resources.
This is a very basic workflow using defaults, showing how the new Compose module (replacing the LRGB module in StarTools 1.5) makes complex LLRGB compositing and processing incredibly easy. The workflow details the usual data prep, bias/gradient removal, stretching, deconvolution, color calibration and noise reduction. You will notice this workflow is substantially similar to any other StarTools workflow, even though we are dealing with a complex composite of luminance, synthetic luminance, and color data all at once. Please see video description on YouTube for datasets and other resources.
This is a small selection of StarTools tutorials and resources, created by StarTools users.
This very useful document, crafted by J. Scharmann, contains suggested workflow charts for beginners and advanced users.
A very popular, comprehensive tutorial titled "Processing a (noisy) DSLR image stack with StarTools" by Astro Blog Delta.
A brief tutorial on using Siril via the Sirilic front-end.
A great number of YouTube videos on StarTools are available from various users.
This guide lets you create starless linear data using StarNet++.
In-depth user notes, detailing modules, their parameters, use cases, hints and tips.
A utility to replay StarTools logs.
If you are looking for datasets from amateur astrophotographers to practice with, there are a number of useful resources.
Processing is meant to be fun! If you really need help with a particular dataset, jump on the forums or contact us directly for some pointers - even if you're just using the trial.
Please note: this resource seems to be temporarily unavailable. A great website with useful information and many datasets that are of a quality achievable by most people on a modest budget. Please note that most datasets will need to be converted to an uncompressed TIFF format.
If you have ImageMagick on your machine, you can use:
convert input.tiff -depth 16 -compress none output.tiff
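If you need to convert a whole folder of downloaded datasets rather than a single file, a small shell loop along these lines may help; it assumes ImageMagick's `convert` is on your PATH and that the files use a `.tiff` extension:

```shell
# Convert every TIFF in the current folder to uncompressed 16-bit TIFF,
# writing each result alongside the original with a 'converted_' prefix.
for f in *.tiff; do
    [ -e "$f" ] || continue    # skip cleanly if the glob matched nothing
    convert "$f" -depth 16 -compress none "converted_$f"
done
```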
A fantastic collection of various deep space objects, imaged in HaLRGB by Jim Misti. Working with just the L (luminance) frames, before delving into HaLRGB combining, is a great way to learn the ropes.
Results are free to publish, as long as they are credited "Image acquisition by Jim Misti".
This Yahoo group is for help and tips in processing images captured with DSLR and One Shot Color CCD cameras of all brands.
StarTools was created to complement the many freely available stacking and pre-processing solutions with unique, state-of-the art post-processing functionality.
Some of these solutions provide basic post-processing functions as well. Please note that only pre-processing and stacking should be performed in these applications in order for signal evolution Tracking to work and achieve optimal results; Tracking cannot track signal and noise propagation that happened in other applications. Do not stretch, color calibrate, perform gradient removal, or perform any other operations beyond initial calibration in these applications.
"Simple but powerful", is the core philosophy of this Windows-only application.
DeepSkyStacker is Windows-only freeware software for astrophotographers, which aims to simplify all the pre-processing steps of deep sky images.
ASTAP, the Astrometric STAcking Program, is an astrometric solver, stacker of images, and provides photometry and FITS viewing functionality. It is available for all platforms.
Regim makes some processing steps that are unique to astronomical images a bit easier. Regim is available for all platforms.
Siril is a feature-rich, free astronomical image processing suite with excellent pre-processing capabilities. It is available for all platforms.
Fitswork is a Windows image processing program, mainly designed for astronomical purposes.
You can convert everything you see to a format you find convenient. Give it a try!