Creates a single-star isochrone which you can overlay on your data. Use this before you do any fitting, to get a feel for where the correct parameters lie and to see whether there are any obvious outliers.
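If you want a quick look without any special tools, a few lines of Python will do the overlay. This is a minimal sketch, not part of CMDfit: the file names are made up, and it assumes both the isochrone and your catalogue have been written as whitespace-separated text with colour and magnitude columns; adjust the paths and column indices to match your own files.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical file names and column layout; adjust to your own output.
    iso = np.loadtxt("isochrone.iso")   # assumed columns: colour, magnitude
    cat = np.loadtxt("cluster.cat")     # assumed columns: colour, magnitude

    plt.scatter(cat[:, 0], cat[:, 1], s=4, label="data")
    plt.plot(iso[:, 0], iso[:, 1], "r-", label="isochrone")
    plt.gca().invert_yaxis()            # brighter stars at the top of a CMD
    plt.xlabel("B-V")
    plt.ylabel("V")
    plt.legend()
    plt.show()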
This creates the 2D isochrone models required by grid using a Monte Carlo simulation. You will be asked for a model number and the colour combination you want. The output files consist of the 2D isochrones (e.g. geneva_V_B-V_07.000.00.fit, where the number is the base-ten logarithm of the age in years) and a list of the first 10,000 simulated stars. The latter is useful if the isochrone does not stretch as far in the 2D CMD as you think it should. If you sort the file on the third column (mass) and then examine the last column (flag), you can get an idea of whether it is (say) the model isochrone which does not stretch to the mass you want, or the conversions from luminosity and temperature to magnitude and colour.
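That sort-and-inspect step is easy to script. A minimal Python sketch, assuming the star list is a whitespace-separated text table with mass in the third column and the flag in the last, as described above (the file name is made up):

    # Read the simulated-star list, skipping comments and blank lines.
    rows = []
    with open("simulated_stars.dat") as f:      # hypothetical file name
        for line in f:
            if line.startswith("#") or not line.split():
                continue
            rows.append(line.split())

    # Sort on the third column (mass) and inspect the flags of the most
    # massive stars to see where the simulation stopped, and why.
    rows.sort(key=lambda r: float(r[2]))
    for r in rows[-20:]:
        print("mass =", r[2], "  flag =", r[-1])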
This takes a catalogue of observations in cluster format and fits them to a model. It does this by a grid search of the parameter space. It assumes that the uncertainties you give for, say, V and B-V are correlated, unless the uncertainties in colour are smaller than those in magnitude. It then creates uncertainties in each individual magnitude, e.g. V and B. The final tau-squared it reports is calculated after removing data points whose tau-squared lies above the clipping threshold.
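To see why the correlation matters, note that if the colour uncertainty is built from independent uncertainties in each band then sigma(B-V)^2 = sigma(B)^2 + sigma(V)^2, so an uncertainty for B can be recovered as sqrt(sigma(B-V)^2 - sigma(V)^2); this is only possible when the colour uncertainty is at least as large as the magnitude uncertainty, which is presumably why the correlation assumption is dropped otherwise. A minimal sketch of that arithmetic (the numbers are made up):

    import math

    sig_V  = 0.02    # hypothetical uncertainty in V
    sig_BV = 0.03    # hypothetical uncertainty in B-V

    # Assuming sigma(B-V)^2 = sigma(B)^2 + sigma(V)^2 for independent bands,
    # the implied uncertainty in B is:
    sig_B = math.sqrt(sig_BV**2 - sig_V**2)
    print(f"sigma(B) = {sig_B:.3f}")    # ~0.022; only real if sig_BV >= sig_V

grid writes the following output files.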
unclipped.cat - All the data points that were actually used, with the
systematic uncertainty added.
grid.fit - The grid of tau-squared as a function of the parameters
searched (in FITS format).
The values of tau-squared in this file include the
contributions from all data points, though those with high tau-squared
will have had their values set to the maximum allowed.
Where the probability of any single data point becomes so small that it
would cause a numerical underflow (and hence taking its log would be
problematic), the tau-squared for the entire fit is set to a high
number. Hence the grid will have a sudden jump in tau-squared where the
underflow happens. (A short sketch for inspecting this grid appears
after this list.)
unclipped_abs.cat - As unclipped.cat, but in absolute magnitude.
best_model.fit - The best fitting model corrected to the appropriate
reddening and distance modulus.
distrib.tau - The histogram of the tau-squared from each data
point.
grid_npts.fit - An image representing the number of data points within
the magnitude range of the model images as a function of the
parameters searched.
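As an illustration of working with grid.fit, here is a minimal Python sketch, assuming astropy is available and that the grid is the FITS image described above. It reports the position of the best fit in pixel coordinates; the mapping from pixels to the parameter values searched lives in the FITS header, so consult that (or use uncer, below) for physical values.

    from astropy.io import fits
    import numpy as np

    # Read the tau-squared grid written by grid.
    with fits.open("grid.fit") as hdul:
        tau2 = hdul[0].data

    # Position and value of the best fit (lowest tau-squared).
    idx = np.unravel_index(np.nanargmin(tau2), tau2.shape)
    print("minimum tau^2 =", tau2[idx], "at pixel", idx)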
Calculates the expected value of tau-squared for you. For tau2 to give statistically meaningful answers you must have run grid to obtain a fit in which no data points were soft clipped. You can achieve this by running grid once with a soft clipping of 20, renaming unclipped.cat to something like fitme.cat, and then re-running grid with this as the data file and a negative value for the soft-clipping parameter (which switches clipping off). The input files are:
The best fitting model (e.g. best_model.fit)
unclipped.cat from grid. (In principle you can use
the original catalogue if you have not added an extra uncertainty when
running grid, but in practice it's best to always use unclipped.cat.)
Subtly, it takes the best-fitting tau-squared from grid.fit, which of course
does not include the effect of soft clipping. The output files are:
integ.tau - The cumulative distribution of tau-squared. Search through
this file to find the nearest value of tau-squared to the
one you have, and the number next to it is the
corresponding value of Pr(tau-squared).
This is also given in the output from the program (see the sketch after this list).
one.tau - The expected distribution of tau-squared amongst the data points.
If you have removed data points, compare this with distrib.tau from grid to
see if the remaining ones have a reasonable distribution of tau-squared.
tau.diff - The differential distribution of tau-squared (the density whose
cumulative form is given in integ.tau).
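The integ.tau lookup is easy to script. A minimal Python sketch, assuming integ.tau is a two-column whitespace-separated text file of tau-squared and Pr(tau-squared), with any comment lines starting with # (the best-fit value below is made up):

    import numpy as np

    tau2_fit = 123.4                 # hypothetical best-fit tau-squared from grid
    tab = np.loadtxt("integ.tau")    # assumed columns: tau-squared, Pr(tau-squared)
    i = np.argmin(np.abs(tab[:, 0] - tau2_fit))
    print("Pr(tau^2) ~", tab[i, 1])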
Derives uncertainty contours in tau-squared space. The 68 percent confidence limit is printed to the screen, as are some one-dimensional parameter limits. Use the latter with the same caution you would in the chi-squared case: they are correct if you have one free parameter, but with more than one they do not allow for any correlation between the parameters.
There is a brief description of the underlying idea behind uncer.
grid.fit - From grid. The tau-squared grid from the fitting process.
You are not prompted for this, it is read automatically.
uncer.out - The values of tau^2 appropriate for each confidence
limit. Read down the confidence limits to find the appropriate value
for a tau-squared contour.
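To draw a confidence contour over the grid, a minimal Python sketch (assuming astropy and matplotlib, a two-dimensional grid, and a made-up 68 percent tau-squared value taken from uncer.out):

    from astropy.io import fits
    import matplotlib.pyplot as plt

    tau2_68 = 130.0     # hypothetical 68 percent value read from uncer.out

    with fits.open("grid.fit") as hdul:
        tau2 = hdul[0].data

    # Axes here are in pixels; convert with the FITS header if you need
    # the physical parameter values.
    plt.imshow(tau2, origin="lower", cmap="gray")
    plt.contour(tau2, levels=[tau2_68], colors="red")
    plt.show()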
You can also supply your own isochrones. The simplest way to do this is to supply an isochrone at the age and extinction you require, with all the magnitudes supplied. The format of the file is as follows.
#
#
#
# log(age/yr) M_ini Mact logTe logG logL/Lo M_V M_B M_U
6.8 0.8 0.8 3.687 4.6518 -0.612 6.6713 7.5805 8.0755
6.8 0.81 0.81 3.6909 4.6498 -0.589 6.6005 7.4943 7.9698
The code uses the last header line to pick out the columns it wants, and is quite intelligent at doing so, so not all the columns are needed. The mass used is M_ini. You then use the 'user' option for the interior models, and you will be prompted for a file name. When prompted for the atmospheric models to use, you can either answer 0, in which case the ones from the file will be used, or you can choose from the list.
You may have spotted that the code asks you for the age. This is because you can have many isochrones in the file, and provided one of them is a precise match to the age you want, the code will select it. The obvious next stage, therefore, is to supply a set of isochrones and get the code to interpolate. A word of warning first. Interpolating in age for post-main-sequence isochrones is a subtle art best left to those who create the isochrones, since they can track the structural changes which lead to sharp changes in the rate at which the observable parameters vary. Thus supplying an isochrone at the correct age is probably best. This contrasts with the situation for pre-main-sequence models, where the variations are smooth and CMDfit's linear interpolation can cope. In that case you should follow the instructions below for adding new isochrones or tracks.
Most of the reddening is now done through the .ext files prepared by reddening the spectra before folding them through band passes. However, there are a few reddening vectors which are stored in the .rv files. They are paired with model atmospheres in the file setup.bc. The format is as many lines of comments prefaced by a # as you want, followed by a line like
V B-V const, red1, red2, col1, col2
where for a simple BV reddening law const is, say, Av/E(B-V), i.e. about 3.1. The other terms are first- and second-order terms in reddening and colour, such that for a given reddening E(B-V)=e the magnitude is increased by
(const + red1*e + red2*e^2 + col1*(B-V) + col2*(B-V)^2 )*e.
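As a worked example of this formula, a minimal Python sketch with made-up coefficients (not values from any real .rv file):

    def delta_mag(e, const, red1, red2, col1, col2, bv):
        # Increase in magnitude for a reddening E(B-V) = e, exactly as in
        # the formula above; bv is the (B-V) colour term.
        return (const + red1*e + red2*e**2 + col1*bv + col2*bv**2) * e

    # A plain Av = 3.1*E(B-V) law with no higher-order terms, for a star
    # with B-V = 0.5 and E(B-V) = 0.1.
    print(delta_mag(0.1, 3.1, 0.0, 0.0, 0.0, 0.0, 0.5))   # -> 0.31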
The programs use a unified file format for isochrones, tracks and bolometric corrections.
This is close to the format in which some of the theory groups provide the
data, so simple changes in the headers will make them compatible; it also
allows the files to be read by Topcat.
There are as many lines of header comments as you like, introduced with hash (#), but
the final line must be a list of the names of the columns.
There are various standard names which allow you to label columns in a way the software
will understand. These are as follows.
Mini - Initial mass.
logG - Log of gravity.
LogG - Log of gravity.
logg - Log of gravity.
logL/Lo - Log of ratio of bolometric luminosity to that of the Sun.
logTe - Log of effective temperature in Kelvin.
log(age/yr) - Log of age in years.
M_ - Is used to introduce an absolute magnitude. So M_V is the V band absolute magnitude.
BC_ - Is used to introduce a bolometric correction. So BC_V is the V band bolometric correction.
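For example, a table of bolometric corrections might end its header with a line like this (a hypothetical illustration using the standard names above):

    # logTe logg BC_V BC_B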