Observation of exoplanet transits (I)

A practical, step-by-step guide to making photometric observations of exoplanet transits.

  1. Introduction to photometry
  2. Necessary equipment

Observation of exoplanet transits (II)

  1. Measurements with CCD
    • 1.1. Bias Image Files
    • 1.2. Dark Image Files
    • 1.3. Flat Image Files
    • 1.4. CCD linearity
    • 1.5. Exposure time
  2. Observation planning
    • 2.1. Choice of objective
    • 2.2. Intentional blurring
  3. Photometry and aperture size
    • 3.1. Photometry with MaximDL
    • 3.2. Photometry with Fotodif
    • 3.3. Practical example
  4. Observation methodology

1. INTRODUCTION TO PHOTOMETRY

Photometry is the branch of astronomy dedicated to measuring the brightness of celestial bodies, whether stars, planets, satellites, asteroids, etc.
The brightness scale of the stars was established by the Greek astronomer Hipparchus of Nicaea, who divided these brightnesses into six degrees or magnitudes; later, with Galileo's introduction of the telescope in 1609, the scale was extended to include telescopic stars, invisible to the human eye because of their faintness.
In the nineteenth century Norman Pogson formalized the magnitude scale so that a step of one magnitude (from 1 to 2, from 2 to 3, and so on) corresponds to a brightness ratio of 100^(1/5) ≈ 2.512; that is, when one star is 100 times brighter than another, its magnitude is 5 units smaller.
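Pogson's ratio is easy to check numerically; the following minimal Python sketch (the function name is ours) verifies that five magnitude steps compound to a factor of 100:

```python
# Pogson's ratio: one magnitude step corresponds to a brightness
# ratio of 100**(1/5) ≈ 2.512, so 5 magnitudes are a factor of 100.
def brightness_ratio(delta_mag):
    """Brightness ratio corresponding to a magnitude difference."""
    return 100 ** (delta_mag / 5)

print(round(brightness_ratio(1), 3))  # 2.512
print(brightness_ratio(5))            # 100.0
```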

There are different methods: visual, photographic and photoelectric photometry (with a photoelectric photometer) and, in recent years, CCD photometry with CCD cameras; all allow working in different bands (V band, B band, etc.) depending on the filter used when making the measurements.
To make these measurements, photometric systems have been defined, the best known being the UBV of W.W. Morgan and Harold Johnson and the UBVRI of A. Cousins and J. Menzies.

If in the mid-twentieth century magnitudes were measured with a precision of one hundredth, with CCD photometry the precision has improved considerably, reaching one ten-thousandth of a magnitude.
Since the arrival of the new CCD cameras on the market, photoelectric photometry has been relegated to certain fields, since CCD photometry is faster and more accurate, reaching precisions of thousandths of a magnitude with any amateur telescope. The limiting magnitude has been pushed down to magnitude 18-19 with telescopes of only 20 cm aperture, and with dark skies and long exposures magnitude 22 can be reached. This allows a large number of photometric studies (light curves of variable stars, comets, supernovae or asteroids) and colorimetric studies (color indices B−V, V−Rc or even Ic photometry), previously reserved for large telescopes, to be carried out.

1.1 Absolute photometry

This is the most general case, in which the aim is to measure a few stars scattered across the sky over a considerable fraction of the night. To achieve a good transformation of the instrumental magnitudes to the standard system, a certain number of reference stars must be observed (at least 12 or 15, although the ideal would be around 120), spread throughout the night, at different heights above the horizon, and with a range of magnitudes and color indices that includes those presumed for the problem stars.

Comparing the standard-system magnitudes with the instrumental magnitudes obtained for the reference stars makes it possible to evaluate how the atmosphere and the instruments used affect the photometric measurements. This evaluation is carried out by determining a set of equations that convert the instrumental magnitudes into standard magnitudes with minimal error. The equations usually include terms that depend on the zenith distance of each observation, the color of each observed star and, sometimes, the time of night at which each star was measured. The coefficients of each term are determined using the mathematical method known as least squares.
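As an illustration of the least-squares step, the sketch below fits a simple transformation model, V − v = c0 + c1·(b − v) + c2·X (zero point, color term, extinction term), to synthetic standard-star data generated from known coefficients. The model form and all names are our assumptions; real transformations may include more terms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
# Synthetic standard-star data built from known coefficients,
# purely to show that least squares recovers them.
v = rng.uniform(10, 14, n)       # instrumental magnitudes
bv = rng.uniform(0.0, 1.5, n)    # instrumental color index (b - v)
X = rng.uniform(1.0, 2.0, n)     # airmass of each observation
c_true = (0.20, -0.05, -0.25)    # zero point, color term, extinction term
V = v + c_true[0] + c_true[1] * bv + c_true[2] * X   # "catalogue" magnitudes

# Model: V - v = c0 + c1*(b - v) + c2*X, solved by least squares.
A = np.column_stack([np.ones(n), bv, X])
coeffs, *_ = np.linalg.lstsq(A, V - v, rcond=None)
print(np.round(coeffs, 3))  # recovers c_true
```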

The errors associated with the transformation to the standard system in absolute photometry can be relatively large (0.05 mag), but they will be smaller, and freer of systematic trends, the greater the number of standard stars considered and the better they cover the range of color indices of the problem stars.

To achieve standard-quality photometry, observing suitable standard stars becomes as important as observing the problem stars themselves. The additional requirement for good results with this method is that observing conditions be good (good atmospheric transparency) and stable throughout the night.

1.2 Aperture synthesis photometry

Behind this pedantic name hides a concept of extraordinary simplicity. The aperture synthesis technique consists simply of adding up all the light arriving from the object under study and subtracting from that amount the contribution of the background light. Although the concept is simple, its practical realization is not, since, as we will see, we are exposed to a series of small errors that, when accumulated, can lead to incorrect measurements. We must therefore be extremely careful in each of the steps.
The schematic procedure is as follows:

1º – It is necessary to find the exact center of the star we want to measure. Once this point is determined, we take a sufficiently large aperture around it and add up all the light received; finally, we select another area of the image, close by but free of stars, to evaluate the contribution of the background and subtract it from the previous value. The precise determination of the center of a stellar image on the pixel grid is also of fundamental importance in the field of astrometry.

2º – The next step is the sum of the signal captured in an aperture of a certain radius centered on the star. It seems a simple operation, but what is the proper radius for measuring the total brightness of the star? The answer seems obvious: one that contains all the light of the star in question.
As always, things are not as simple as they seem: the profile of a star extends much farther than one might imagine, so an aperture containing all its light has to be very large. However, for faint stars too large a radius is not suitable, since background and readout noise will disturb the measurement excessively.
The choice of the optimal aperture therefore depends on the brightness of the star.

We must also bear in mind that the aperture cannot be arbitrarily large, because that would introduce the contribution of other nearby stars into the measurement.

There is no magic formula for the aperture with which we must work; it is a matter of practice and common sense. As a first approximation, however, it is customary to accept that the aperture should be around 4 or 5 times larger than the size of the star. The size of the star is usually estimated by taking a cut through the digital image across the center of the star and measuring the width of the stellar profile where its height is half the maximum intensity (the full width at half maximum, or FWHM).
We must bear in mind, however, that for bright stars, that is, those with a relatively high number of counts per pixel, large apertures are preferable, and that for faint stars it is appropriate to take smaller apertures, even at the risk of not collecting all the light received.
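The FWHM estimate described above can be sketched as follows. This is a simplified 1-D version with hypothetical names; real photometry programs fit a 2-D profile to the star:

```python
import numpy as np

def fwhm_of_profile(profile, background=0.0):
    """Full width at half maximum of a 1-D cut through the star center,
    using linear interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, dtype=float) - background
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]

    def crossing(i, j):
        # Interpolate where the profile crosses the half-maximum level
        # between pixel i (below half) and pixel j (above half).
        return i + (half - p[i]) / (p[j] - p[i]) * (j - i)

    x_left = crossing(left - 1, left) if left > 0 else float(left)
    x_right = crossing(right + 1, right) if right < len(p) - 1 else float(right)
    return x_right - x_left

# Gaussian star profile: true FWHM = 2.355 * sigma = 4.71 pixels.
x = np.arange(41)
sigma = 2.0
profile = 1000.0 * np.exp(-0.5 * ((x - 20) / sigma) ** 2)
fwhm = fwhm_of_profile(profile)
aperture_radius = 4.5 * fwhm   # the "4 to 5 times" rule of thumb
print(round(fwhm, 2))          # close to 2.355 * sigma
```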

Nor should we forget that apertures with a small number of pixels are subject to major errors, since they involve approximating a circle by an irregular polygon formed by pixels. In this case it is advisable to apply the so-called “partial pixel corrections”, which generally treat the pixels at the edge of the aperture as if they were composed of 4 sub-pixels, each receiving 1/4 of the pixel's light. In this way the area of the image considered bears a greater resemblance to a circle.

Some programs perform magnitude calculations by aperture synthesis using square rather than circular apertures, which might seem to remove the need for partial pixel corrections. In reality it does not, because the aperture used, whether circular or square, must be placed at the position of the center of the star, and this position is usually determined to a precision of a fraction of a pixel: when a square is placed at a point that does not coincide with the center of a pixel, its edge will necessarily intercept fractions of pixels.
Once the aperture has been chosen and the light inside it added up, we must then evaluate what part of it is due to the contribution of the background. The background is the signal we would receive in our aperture if the star were not there. Its main components are the light reflected inside the telescope or CCD camera and the diffuse light of the sky, whether of human origin or natural origin, such as the zodiacal light or the light of the Moon.

To minimize errors, the usual way to determine the background is to consider an annular region centered on the star, far enough from it to avoid its influence. In addition, the ring should contain a large number of pixels so that the statistical uncertainty of the value thus determined is small. Intuitively we are tempted to take the arithmetic mean of the intensities of the pixels contained in the ring as the most appropriate value of the background. But we must bear in mind that any contamination by faint stars or galaxies, by the wings of bright stars, by cosmic rays, etc. will tend to add a positive contribution. For this reason, the value of a statistical estimator called the “mode”, which is less affected by contamination than the arithmetic mean, is generally taken as the background.
The mode of a set of pixels is simply the most frequent intensity value. It is also common to determine the background not with a ring centered on the star, but by means of one or more circular apertures not too far from the star and free of other faint stars.
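A minimal sketch of the mode estimator shows why it resists contamination better than the mean; the binning choice and all names are our assumptions:

```python
import numpy as np

def background_mode(pixels, bin_width=1.0):
    """Sky background as the mode (most frequent intensity) of the
    annulus pixels -- less biased by faint stars than the mean."""
    pixels = np.asarray(pixels, dtype=float)
    bins = np.arange(pixels.min(), pixels.max() + bin_width, bin_width)
    hist, edges = np.histogram(pixels, bins=bins)
    i = np.argmax(hist)
    return 0.5 * (edges[i] + edges[i + 1])   # center of the fullest bin

rng = np.random.default_rng(1)
sky = rng.normal(100.0, 3.0, 2000)   # true background ~100 counts
sky[:50] += 400.0                    # a faint star contaminates 50 pixels
print(round(sky.mean(), 1))          # mean is pulled up toward ~110
print(round(background_mode(sky), 1))  # mode stays near 100
```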

Once all the steps described above have been completed, we can calculate the instrumental magnitude of the problem star in a few simple steps. First of all, we must add up the total intensity of the star, I. If Ixy represents the number of counts in the pixel located at row x and column y of the detector, the total intensity of the star is:

I = ∑xy Ixy

where the sum runs not over the whole image but only over the pixels contained within the aperture of the chosen dimensions centered on the star (here it may be necessary to apply the partial pixel correction). The next step is the subtraction of the background contribution. If Ifon is the estimated count level for a background pixel in the area where the star is located, the total background-corrected intensity I′ is:

I′ = I − npixIfon

where npix is the area, measured in pixels, of the aperture used to estimate the total intensity I of the star.

Next, the intensity I′ must be converted into the flux F by dividing by the exposure time t:

F = I′/t

Finally, from the flow F the instrumental magnitude m is obtained:

m = a − 2.5log(F)

where a is an arbitrary constant chosen to produce reasonable values, usually positive and around 10. The choice of this constant is at the discretion of the observer, but it must be exactly the same for all the stars that are going to be treated together.
All the above steps can be summarized in a single expression:

m = a − 2.5log[(∑xy Ixy − npixIfon)/t]

The instrumental magnitudes thus calculated cannot be compared with catalogues, as they do not refer to a standard system. However, the differences of instrumental magnitudes can be considered a good approximation to the differences of magnitudes in the standard system if one works with standardized filters corresponding to a given photometric system.
Manually performing all the steps described for the calculation of instrumental magnitudes is undoubtedly an instructive exercise, but so heavy and cumbersome that it makes no sense to proceed this way when treating data on a regular basis. The solution is to use any of the available image-processing programs: MaximDL, AstroImageJ, FoCAs, Astrometrica, HOPS, etc.
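The whole aperture-synthesis calculation can nevertheless be condensed into a short sketch. This uses synthetic data and hypothetical names; real programs do the job far more carefully, with centroiding and partial-pixel corrections:

```python
import numpy as np

def instrumental_magnitude(image, cx, cy, r, sky_per_pixel, t, a=10.0):
    """Instrumental magnitude by aperture synthesis:
    m = a - 2.5*log10((sum of I_xy in aperture - n_pix*I_fon) / t)."""
    yy, xx = np.indices(image.shape)
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2   # circular aperture
    total = image[inside].sum()        # sum of counts within the aperture
    n_pix = inside.sum()               # aperture area in pixels
    flux = (total - n_pix * sky_per_pixel) / t
    return a - 2.5 * np.log10(flux)

# Synthetic 64x64 frame: flat sky of 50 counts plus a Gaussian star.
img = np.full((64, 64), 50.0)
yy, xx = np.indices(img.shape)
img += 20000.0 * np.exp(-0.5 * ((xx - 32) ** 2 + (yy - 32) ** 2) / 2.0 ** 2)
m = instrumental_magnitude(img, cx=32, cy=32, r=10, sky_per_pixel=50.0, t=30.0)
print(round(m, 2))
```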

1.3 Differential photometry

Differential photometry is the appropriate technique for the study of exoplanets. To compute the light curves of stars that host extrasolar planets we will use photometry, which, as we have seen, is a technique that measures the luminous intensity emitted by a celestial body at a certain wavelength. Thanks to it, it is possible to establish the color index of stars, from which their spectral type, temperature, size and distance are obtained. Photometry measures the intensity of the energy flux that reaches us from an object, its apparent magnitude, which is given by the following equation:

m = −2.5log(Ia)

where m represents the apparent magnitude of the star, and Ia the perceived brightness.
However, the flux that reaches us from a star does not arrive unchanged: the atmosphere scatters it, decreasing its intensity differently at each wavelength. This phenomenon is called atmospheric extinction. Likewise, each instrument also introduces a variation in the measured flux. Introducing these effects into the above equation, we have:

mλObs = mλ + KλXz + C

where:
mλObs: the apparent magnitude measured by an observer at wavelength λ.
mλ: the actual apparent magnitude of the object in that band (as measured outside the atmosphere).
Kλ: the atmospheric extinction coefficient at wavelength λ.
Xz: the airmass at zenith angle z.
C: the instrumental constant contributed by the equipment involved in the measurement.

Knowing these parameters, we can solve for the term of interest and obtain:

mλ = mλObs − KλXz − C
mλ = −2.5log(Iobs) − KλXz − C

where:

mλ: the effective apparent magnitude of the star.
Iobs: the luminous intensity measured on the image taken.
Kλ: the atmospheric extinction coefficient.
Xz: the airmass at zenith angle z.
C: a specific instrumental constant.

However, in the images there are generally, in addition to the object we are studying, other stars with respect to which it is possible to determine the changes in brightness that our object presents. This is the basis of the technique of differential photometry.
The difference between magnitudes of two stars present in the same image can be expressed as follows:

ΔmλObs = m1λObs − m2λObs

Substituting in the previous equation:

ΔmλObs = (m1λ + K1λX1z + C1) − (m2λ + K2λX2z + C2)

As the two objects are in the same image, the atmospheric extinction coefficient Kλ, the airmass Xz and the instrumental constant C are the same for both apparent magnitudes. In this way the above expression reduces to:

Δmλ = m1λ − m2λ

Applying the apparent-magnitude equation to each term:

m1λ − m2λ = −2.5log(Ia1) + 2.5log(Ia2) = −2.5log(Ia1/Ia2)

This last expression is known as Pogson's equation.

With this we have theoretically demonstrated the following:

  1. Differential photometry does not require perfect atmospheric conditions such as the so-called “photometric nights” needed for absolute photometry.
  2. Nor does it require knowing the instrumental constant C or the extinction coefficient, since the variations in the luminous flux of a star are measured with respect to another object present in the same image, and therefore affected in equal measure by these effects.

Thus we justify the choice of this method as the appropriate one to obtain the light curves of stars orbited by exoplanets.
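A minimal sketch of the differential measurement (the names are ours): the transit depth appears directly as a magnitude difference between the target and a comparison star, with extinction and instrumental constants cancelled out.

```python
import math

def delta_magnitude(i_target, i_comparison):
    """Pogson's equation for two stars on the same frame:
    m1 - m2 = -2.5*log10(I1/I2). Extinction and the instrumental
    constant cancel because both stars share the same image."""
    return -2.5 * math.log10(i_target / i_comparison)

# During a transit the target dims by ~1% while the comparison does not:
depth = delta_magnitude(99.0, 100.0)
print(round(depth, 4))  # 0.0109 mag: a 1% flux drop
```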

2. NECESSARY EQUIPMENT

2.1 Telescope

As every amateur astronomer knows, the maxim “aperture matters” also holds when observing exoplanets. Even so, very good work can be done with telescopes of medium aperture, 8″-14″, and even smaller, although these will be more detections than measurements. Keep in mind that the first transits were detected with small-aperture instruments, as was the case of STARE, and that numerous search projects such as SuperWASP use lenses of up to 8 cm aperture. In other words, it can be done: you will record transits that are more or less deep, with more or less scatter and measurement error, but the transit will show itself.

2.2 Mount

The question of the mount is perhaps a little more complex, because everyone has their preferred brands, models, etc. A motorized mount with a Go-To system, controllable from a PC, will be necessary. When observing exoplanets an altazimuth mount on an equatorial wedge is advisable, because the meridian flips of German equatorial mounts will not help us much. If, in the middle of a transit and just when the airmass is lowest, we must flip the meridian, re-frame, re-guide, etc., the time spent will translate into lost exposures and therefore lost measurements.

Even so, many amateur astronomers use this type of mount to observe exoplanets because, although perhaps not the most suitable, in recent years the number of available brands and models has increased greatly.

2.3 CCD

Until a few years ago, the techniques available to non-professional students of the sky were limited to visual observation and, for the most skilled and dedicated, photography. From the seventies onward, amateurs with more training, time and money could enter the field of photoelectric photometry.

The nineties revolutionized amateur astronomy with the emergence of the famous CCD cameras, a type of high-performance detector. Without exaggerating their usefulness, it must be recognized that the rigorous use of these devices, priced similarly to a personal computer, makes it possible to achieve relevant results in photometry, astrometry and other disciplines.

The acronym CCD comes from the English charge-coupled device. This device consists of a solid light-sensitive surface, equipped with circuits that read and store electronically the images projected onto it. The assembly formed by the detector, the circuits, the housing and various other accessories (such as the cooling system) constitutes the CCD camera itself. We will explain in detail the operation of a CCD camera and its use in the following sections.

2.3.1 The photoelectric effect

The operation of CCDs is based on the physical phenomenon of the photoelectric effect. The description of the photoelectric effect was one of Albert Einstein's main contributions to quantum theory, and it was this work (and not the theory of relativity) that earned him the Nobel Prize in 1921. Certain substances have the property of absorbing quanta of light, or photons, and releasing an electron. This principle allows the construction of photovoltaic solar panels, in which the electrons generated when light strikes are collected and converted into electric current.

The same material usually used in solar panels, silicon, is the raw material for the manufacture of CCD detectors. A typical CCD consists of a rectangular silicon plate about 125, 250 or up to 500 micrometers thick and several millimeters on each side, on which a series of structures are implanted that capture and analyze the electrons generated in the silicon by the photoelectric effect.

2.3.2 The latent image

Microscopic circuits organized in several layers are stamped on the silicon plate. These additions to the surface of the silicon constitute a dense network of electrodes. Each trio of electrodes acts as an electrostatic trap that accumulates around it the electrons generated in the silicon. The central electrode of each trio is held at a slightly positive voltage, while the two on either side are kept at zero potential. Thus, as light strikes the detector, the electrons, whose charge is negative, accumulate around the central electrode of the nearest trio. The trios of electrodes are arranged in columns.

A certain number of columns located next to each other cover the entire CCD, separated by static barriers drawn on silicon with a substance that physicists call “p-type doped material”, a material that generates a permanent negative potential when in contact with silicon, which repels electrons and prevents them from migrating from one column to the adjacent one.

Electrode lines considered perpendicular to the columns are called rows. Each trio of electrodes is an elementary piece of the detector and corresponds to a point in the final digital image: a pixel. The physical size of the pixel is determined, then, by the spacing between trios of electrodes in one direction, and by the distance between the columns of doped material in the other. The two dimensions can be different, although it is very convenient that they are equal (that the pixels are square), as this simplifies the subsequent treatment of the data obtained. Typical linear pixel dimensions in today’s amateur cameras range from 4 to 25 μm.

To use the CCD camera, the detector is placed in the focal plane of a lens, just as you would with photographic film. The shutter is then opened and the light is allowed to strike the silicon surface for a certain period of time. Photons are converted into electrons that accumulate around the tiny electrodes. When the exposure is over, the image is latent, converted into electrons, inside the CCD. The next necessary step is to read and store it.

2.3.3 Reading the latent image

The reading of the latent image is carried out by a very ingenious process called charge transfer. The mechanism is based on playing with the voltages applied to the three electrodes that make up each pixel. In the starting situation, immediately after the end of the exposure, the central electrode of each pixel is at a positive voltage and the two on either side at zero, with the electrons stored around the central electrode. In the second phase of the process, the right-hand electrode of each pixel gradually raises its potential until it equals that of the central electrode, so that the electrons are free to move between the central electrode and the one on its right.

In the third and final phase, the potential of the central electrode is gradually reduced until it is cancelled. While that operation is being performed, electrons that previously had some freedom to choose one electrode or another are forced to accumulate around the right electrode. The overall result of the three phases described is that the charge has moved one electrode to the right. Next, the process is repeated, but playing not with the central and right electrodes of each pixel, but with the right and left electrodes.

The left electrode of each pixel increases its potential until it equals the voltage of the right electrode. Thus, electrons accumulated around the right electrode of a pixel can move freely between this location and the left electrode of the adjacent pixel. The potential of the right electrodes is then reduced until it is cancelled. Thus, the electrons that at the end of the image exposure lay in the central electrode of each pixel are now in the left electrode of the adjacent pixel.

In a final step, the voltages of the central electrodes are increased again and those of the left electrodes are cancelled. The electrons migrate to the central electrodes, and the result is that the entire charge of the CCD columns will have shifted an entire pixel to the right. The charge transfer mechanism, implemented in all columns at once, moves the entire image one row for every three elementary cycles. But what about the last row of the detector?

Indeed, the last electrodes in each column have no companions to transfer their contents to. On this side of the detector there is always an additional row of electrodes that does not receive light and is used in the process of reading the images. It is called the reading channel, and it is responsible for collecting the electrons from the last row in each charge-transfer cycle. When the reading channel contains the electrons from the last row, it receives the order to shift its content pixel by pixel, through the same charge-transfer process, and to pour it into the external measuring device, consisting of an amplifier and other electronic devices.

This electron-transfer operation, from the detector to the reading channel and from the reading channel to the output amplifier, is repeated as many times as necessary until all pixels in the image have been evaluated. The image is then encoded numerically in the memory of the computer that controls the CCD camera, and can be displayed on the monitor or saved to disk.
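The readout described in this section can be condensed into a toy model. This is a deliberate simplification (names are ours): a real CCD clocks three electrode phases per row shift, which we collapse into a single row move.

```python
import numpy as np

def read_out(latent):
    """Toy CCD readout: each cycle the last row falls into the reading
    channel, every other row shifts one step toward it, and the channel
    is clocked out serially through the output amplifier."""
    ccd = latent.astype(float).copy()
    rows = []
    for _ in range(ccd.shape[0]):
        read_channel = ccd[-1].copy()   # row next to the reading channel
        ccd[1:] = ccd[:-1].copy()       # charge transfer: shift one row
        ccd[0] = 0.0                    # vacated row is left empty
        rows.append(read_channel)       # channel content is clocked out
    return np.array(rows[::-1])         # reassemble in original order

img = np.arange(12).reshape(4, 3)
print(np.array_equal(read_out(img), img))  # True: readout preserves the image
```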

Sometimes it happens that the electrode of a pixel is defective and does not obey the potential-change orders sent by the camera electronics. A pixel affected by such a defect is called a dead pixel. A dead pixel may not accumulate electrons or, if it does accumulate them, it may be unable to transfer them to neighboring pixels during readout. Consequently, this pixel and all those behind it in the column are inaccessible to reading and produce no useful data. A CCD with a dead pixel shows the unmistakable signature of a permanently black column running from the pixel in question to the edge opposite the reading channel.

2.3.4. Linearity and saturation

One of the most notable characteristics of CCDs is their character as a linear detector. This means that the intensity recorded in each pixel in the form of electrons is proportional to the incident light. In other words, if in one image a pixel contains x electrons and in another image the same pixel contains 2x electrons, then the second time the intensity of the light striking the pixel was exactly double.

This property, which might seem a truism, is not at all trivial. In fact, there are few strictly linear detectors in astronomy. In photography, for example, if the intensity of light is doubled, the density of the dark spot on the negative does not double: the relationship is not linear. This greatly complicates photometric measurements with chemical photography. With CCDs, on the other hand, everything is easier. However, the linear behavior of a CCD has its limits, the most obvious of which is the saturation threshold.

When a lot of light hits the detector, the number of electrons generated can be so large that the electrodes are physically unable to retain them. From then on, more light does not add more detected electrons: the detector has saturated. Saturation, in fact, does not usually occur across the entire detector at once, but in the brightest pixels. When a pixel saturates, the electrons its electrodes cannot retain migrate along the column to the central electrodes of the contiguous pixels. That is why it is common in digital astronomical images to see bright stars spilling light in perfectly straight, long streaks.

The parameter that measures the charge-accumulation limit of the electrodes is called the “pixel capacity” (full-well capacity), and it should appear among the specifications given by the manufacturer. Typical values are a few hundred thousand electrons. Normally, linearity ceases to be perfect before the pixel capacity is reached, because the electrons already accumulated act as an electrostatic screen that reduces the effective positive charge of the electrode.

2.3.5. Gain and dynamic range

The digital image consists of a table of numbers indicating the intensity recorded in each pixel. But the numbers stored are not the number of electrons found at each electrode. The number of electrons at an electrode can be tens or hundreds of thousands, and reserving space for such a large number for each pixel would make the resulting computer files too large. What is done is to divide the number of electrons by a certain number, called the “gain” of the camera. Thus, what is recorded in the file is not the number of electrons but the number of counts obtained from this division.

Counts are also known as analog-to-digital units (ADUs). The gain, therefore, is measured in electrons per count, a unit we will represent with the symbol e⁻/count. Some camera parameters, such as readout noise or dark current, can be expressed interchangeably in counts or electrons. To convert electrons to counts, divide by the gain; to convert counts to electrons, multiply by the gain.

It should be noted that some authors specify the gain not in electrons per count but in counts per electron, and that some authors call the gain “sensitivity”. Giving the specifications of a camera in counts is much more practical for the user, who never works directly with numbers of electrons but with numbers of counts. However, a count level lacks physical meaning if it is not accompanied by the camera gain.

To compare one detector with another, it is preferable to translate the parameters into electrons by multiplying by the gain (or dividing by it if the gain is defined in counts per electron rather than electrons per count). Camera control programs typically store counts as integers, and each camera model has a limited range of intensities. The simplest cameras generate count levels between 0 and 255; professional cameras usually record intensities between 0 and 65535 counts.

The specific range of values for each camera is known as its “dynamic range”. Of course, the highest photometric accuracy calls for a large dynamic range, but this greatly increases the space occupied by image files. The electronics of each CCD camera allocate a certain memory space to store the intensity value measured in each pixel. Memory space is measured in bits, that is, in minimum logical units that can take the values 0 or 1.

If a camera is 8-bit, this means that it reserves 8 bits to record the intensity value of each pixel, and is therefore able to distinguish 2⁸ (= 256) intensity levels. Thus the dynamic range of a camera can be stated either as the number of intensity levels it distinguishes or as the number of bits it reserves per pixel. If n is the number of bits of a camera, its dynamic range is 2ⁿ. The amateur market offers cameras from 8 to 16 bits, whose dynamic ranges therefore vary from 256 to 65536 levels.

Knowing the pixel's electron capacity and the camera's dynamic range, it is worth checking whether the camera's gain makes good use of its potential. Some CCD cameras allow the user to choose the gain. Each observer can opt for a small value (to detect subtle nebular details) or a high one (to correctly measure stars of different brightness), but without ever exceeding the limit given by:

Maximum gain (e⁻/count) = pixel capacity (electrons) / dynamic range (counts)

Beyond this limit, saturation would occur at count levels below the top of the dynamic range. In practice it is convenient to be somewhat more conservative, because, as noted, the response of the camera ceases to be linear before the electrodes reach total saturation.

For most applications it is important that the maximum count level correspond to a number of electrons somewhat below the pixel capacity. For example, with a camera of 400,000 electrons pixel capacity and a dynamic range of 65536 counts, a higher gain, say 8 e⁻/count, would make the maximum count level (65536) correspond to 524288 electrons, well beyond the capacity of the pixel, thus rendering unusable the count levels from about 44000 (corresponding to the expected limit of linearity, around 350,000 electrons) up to the maximum.
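The arithmetic of this example is easy to verify (a small sketch; the function name is ours):

```python
# Gain sanity checks for the example camera in the text.
def max_gain(full_well_e, dynamic_range_counts):
    """Largest gain (e-/count) that still maps the full pixel capacity
    onto the available count levels: capacity / dynamic range."""
    return full_well_e / dynamic_range_counts

print(round(max_gain(400_000, 65_536), 2))  # ~6.1 e-/count at most
print(8 * 65_536)    # 524288 electrons at the ADC top: beyond capacity
print(350_000 // 8)  # 43750: count levels above this are non-linear
```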

When the aim is not quality photometry of stars of varied brightness, but recording subtle details of faint structures in nebulae or galaxies, a small gain, which packs a large number of count levels into a small range of intensities, may be of interest. In the example camera above, a gain of 3 e⁻/count would require more than 130,000 count levels to cover the whole interval between zero and the pixel capacity, which makes it impossible to measure the brightest objects detected. However, this gain allows a high resolution of brightness levels in areas where the illumination is very low.

2.3.6. Quantum efficiency and spectral curve

When photons hit the silicon of a CCD detector, they are only detected if they excite at least one electron that is also picked up by a nearby electrode. The efficiency of this process determines the light sensitivity of a detector and is one of the main characteristics of a CCD camera. This parameter is measured by the quantity called “quantum efficiency”. A detector that recorded all incident photons would have a quantum efficiency of 100%, while a dead CCD, detecting nothing, would have a quantum efficiency of 0%.

The quantum efficiency of a CCD includes contributions from different physical processes, such as reflection on the surface of silicon (a reflected photon has had an impact, but is not detected), the loss of electrons by recombination before being confined by the electrodes, etc. The best cameras incorporate certain refinements to counteract some of these effects and thus improve quantum efficiency. For example, it is possible to treat the surface exposed to light with an anti-reflective film.

The quantum efficiency of a CCD is not the same for all colors of incident radiation. Although modern professional cameras exceed these values, good commercial detectors reach efficiencies greater than 60% for red light, but similar to or less than half that value in blue. The sensitivity of CCDs drops to virtually zero in the near infrared (long wavelengths) and the ultraviolet (short wavelengths).

Commercial cameras for amateurs often have serious problems detecting very blue or violet (let alone ultraviolet) radiation. The curve that describes quantum efficiency as a function of the wavelength of light is known as the detector’s “spectral sensitivity curve”. The reasons for this behavior of the quantum efficiency of CCDs are different at the two ends of the visible spectrum.

The drop in sensitivity at long wavelengths is due to the physics of the interaction between silicon and light. Silicon simply becomes almost transparent to infrared light and does not interact with it, so no electrons are generated and no detection occurs. At the short-wavelength end the situation is different. Silicon is very effective at converting blue or ultraviolet light into electrons, but the simplest detectors receive the light on the face of the silicon wafer on which the rows of electrodes are implanted (front-illuminated CCDs). The electrodes let light of intermediate wavelengths pass through, but are virtually opaque to very blue or ultraviolet light.

Photometric measurements in the blue zone of the spectrum are very important, and manufacturers have therefore devised different resources to make their detectors sensitive to short wavelengths. One solution is CCDs treated with a phosphorescent coating, a substance that absorbs short-wavelength photons and re-emits them at a longer wavelength, to which the electrodes are transparent. These surface coatings are not perfect and have the disadvantage that the re-emitted longer-wavelength photons can be fired off in any direction, regardless of the direction of the absorbed blue photon. Even so, the results lead to useful measurements in these spectral bands.

Another solution, until now accessible only to professionals, is to use devices illuminated “from behind”, which receive the light on the face of the silicon not occupied by the electrodes. This idea entails a remarkable difficulty: at the usual thicknesses of silicon wafers, photons that enter the material from behind are absorbed too soon and produce electrons too far from the electrodes for detection to be effective.

This loss is all the more important the more strongly the incident photon interacts with silicon, and blue and ultraviolet light interact with silicon soonest. The only solution is to produce thinned silicon wafers (thinned back-side illuminated CCDs), reducing the thickness of the silicon to 15, 10 or even fewer micrometers. This increases the cost of the product astronomically, while reducing sensitivity at long wavelengths.

Even so, this seems to be the ideal solution, and it only remains to wait for the technology to progress enough for thinned, back-illuminated CCDs to become accessible to hobbyists.

2.3.7. Other specifications of a camera

Apart from the specifications discussed above, and in addition to the obvious ones (detector size, pixel size, etc.), there are several very important parameters that it is essential to know in order to evaluate the quality and delimit the applications of a camera.

In the process of reading the latent image, the amplification and counting of electrons is a step that, by its intrinsic nature, implies a certain margin of error. Electrons are therefore not converted into counts with absolute precision; every measurement extracted from a pixel is affected by some degree of uncertainty, called “read noise”.

Read noise is a stable property of each camera and must appear (in electrons or in counts) among the data provided by the manufacturer. It is a very important contribution to the total noise of the images, and it originates in the random, inevitable errors that occur during the readout of the image, in the process of amplifying and counting the electrons captured in each pixel.

The existence of read noise must always be taken into account, as it affects every step of obtaining and processing digital images. Each camera has its own read-noise level, which should be among the specifications given by the manufacturer. If you do not know the read-noise value of a camera, or want to check whether the manufacturer’s specification is reliable, it is possible to determine this parameter with very little effort by following the method described below. To determine the read-noise value, you have to start from two independent images obtained under exactly the same conditions.

The best option for this purpose is to record consecutive shots without any light hitting the detector and with zero integration time. These images will not be uniformly filled with zeros, for three reasons:

1. Thermal noise, which will be of little importance thanks to the short integration time. It might seem that with zero integration time thermal noise should be absent, but this is not the case, because thermal electrons accumulate during the readout of the image. Similarly, a zero-integration image may show impacts of cosmic rays that hit the detector while it is being read.

2. The overall positive level that each camera always adds to all images, called “bias current”.

3. Read noise, which manifests itself as a random oscillation around the average value of the image.

If the two images have been obtained under identical conditions, contributions 1 and 2 will be equal (except, perhaps, for cosmic rays), so that their difference yields an image whose average value is zero but which shows the oscillations due to the read noise of the original shots.

The next step is to analyze the image that results from the subtraction using an image processing program. This type of program usually allows the calculation of the average intensity of an image and, simultaneously, almost always offers the standard deviation or dispersion, σ, of the values around that average.

Dividing the dispersion by √2 gives the camera’s read noise, measured in counts:

r = σ / √2

It is advisable to repeat the calculation several times, from different pairs of zero-integration dark shots, and keep the average of the values obtained.
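The procedure just described can be checked numerically. Below is a minimal sketch using NumPy, with synthetic bias frames (a constant level plus Gaussian noise of known sigma) standing in for real zero-exposure shots:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate two zero-exposure ("bias") frames: a constant bias level
# plus Gaussian read noise of known sigma (here 10 counts).
bias_level = 1000.0
true_read_noise = 10.0
shape = (512, 512)
frame1 = bias_level + rng.normal(0.0, true_read_noise, shape)
frame2 = bias_level + rng.normal(0.0, true_read_noise, shape)

# Subtracting cancels the bias level (and any fixed pattern), while the
# two independent noise contributions add in quadrature:
diff = frame1 - frame2          # mean ~ 0, sigma = sqrt(2) x read noise

# Read noise in counts, as in the formula r = sigma / sqrt(2):
read_noise = np.std(diff) / np.sqrt(2)
print(round(read_noise, 2))     # recovers a value close to 10 counts
```

The √2 factor appears because the difference of two images with equal, independent noise has a dispersion √2 times larger than either image alone.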

The photoelectric effect is not the only way to extract electrons from silicon. Thermal agitation also lifts electrons out of this substance and is, therefore, a source of alterations in the measurements.

The thermal contribution to the electrons captured with the CCD is called “dark current” (because it occurs even when no light hits the detector). It depends largely on temperature, can be described in various ways (e.g. number of thermal electrons accumulated per second at a given temperature) and must be among the specifications given by the manufacturer.

Dark current

Photon absorption is unfortunately not the only way to release electrons in silicon semiconductor crystals. The material’s own thermal agitation causes electrons to jump nonstop. Thus, the electrodes of a CCD capture electrons even when the detector does not receive the impact of a single photon.

The production of these thermal electrons, the so-called “dark current” or “thermal noise”, grows exponentially with temperature. That is why, to limit it, it is very important to cool the CCD camera as much as possible. Amateur instruments usually use thermoelectric devices capable of lowering the temperature a few tens of degrees Celsius below ambient.

Professional cameras are usually cooled to 120 °C below zero by evaporation of liquid nitrogen. This drastic cooling, together with the technical refinements they incorporate, makes professional cameras almost free of thermal noise. This is not the case for commercial CCDs, which are coarser and less refrigerated.

Thermoelectric devices have the disadvantage that their effect is relative: they lower the temperature of the detector with respect to that of the environment, so that any change in ambient temperature has an immediate impact on the detector. Nitrogen evaporation, on the other hand, provides more stability, but involves significant logistical and technical difficulties.

The thermal noise is, in general, different in different areas of the detector. In addition, it accumulates over time, which makes its effect different for shots of different exposure. The quality of the silicon used in the manufacture of the CCD is a determining factor in the intensity of the thermal noise.
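As an illustration of this temperature and time dependence, here is a small sketch. The doubling rule (dark current roughly doubling every ~6 °C) and the reference rate are commonly quoted round numbers assumed for the example, not figures from the text or from any real camera:

```python
# Illustrative sketch of the exponential growth of dark current with
# temperature. The doubling interval (6 degC) and the reference rate
# (1 e-/pixel/s at 0 degC) are assumed round numbers, not the
# specifications of any real camera.

def dark_rate(temp_c, rate_at_0c=1.0, doubling_deg=6.0):
    """Dark current (e-/pixel/s), assuming it doubles every `doubling_deg` degC."""
    return rate_at_0c * 2.0 ** (temp_c / doubling_deg)

# Cooling the detector 30 degC below the reference divides the
# thermal signal by 2**5 = 32:
print(dark_rate(0.0))    # 1.0
print(dark_rate(-30.0))  # 0.03125

# Because dark current accumulates over time, the thermal signal in a
# 60 s exposure at -30 degC is:
print(dark_rate(-30.0) * 60)  # 1.875 electrons per pixel
```

This is why a few tens of degrees of thermoelectric cooling already reduces the thermal signal by one to two orders of magnitude.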

The area of the CCD most active in generating thermal electrons is the interface between the silicon and the electrodes, which produces a distribution of dark current across the whole image. Another source of thermal electrons are defects in the crystal lattice of the silicon, which give the observed pattern of the dark current a certain structure, with large-scale features and also “hot spots” located in certain areas, sometimes in isolated pixels.

The dark current has physical causes that depend on the environment and the nature of the charge-coupled device, so it is possible to estimate and correct it. The so-called “dark frames” are used for this: images obtained while the camera receives no illumination.

In our description of the dark current we are including a contribution of a different origin from thermal noise: the “bias current”. This is a uniform background level intentionally added by the camera’s electronics during the readout process, to prevent read noise from generating negative counts in areas of low light intensity. It is easy to check the existence and approximate value of the bias current by obtaining images with zero integration time and no light reaching the camera. Although professional cameras usually treat thermal noise and bias current separately, in amateur cameras it is more convenient to encompass both concepts in one.
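In practice the correction is a pixel-by-pixel subtraction. Here is a minimal NumPy sketch with synthetic frames, following the amateur-style convention just described, in which a single dark frame already contains the bias level:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)

# Synthetic components (in counts): a bias level, a thermal signal that
# varies from pixel to pixel, and the light signal we actually want.
bias = 100.0
thermal = rng.uniform(5.0, 15.0, shape)       # dark current x exposure time
sky_and_stars = rng.uniform(0.0, 50.0, shape)

# A "dark frame" is taken with no light reaching the detector,
# with the same exposure time and temperature as the science frame:
dark_frame = bias + thermal
# The science frame contains everything:
science_frame = bias + thermal + sky_and_stars

# Subtracting the dark frame removes bias and thermal signal in one step:
corrected = science_frame - dark_frame
print(np.allclose(corrected, sky_and_stars))  # True
```

In real data every frame is noisy, so the cancellation is never exact; several dark frames are usually averaged into a “master dark” before subtraction to keep from adding noise to the science image.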

To regulate the exposure times of the shots, the camera must have some type of shutter. Among simple cameras, “electronic” shutters are very common, which in essence amount to the absence of a shutter.

Simply, the surface of the detector is subjected to a constant charge-scanning process until the moment the exposure begins. This method of shuttering has several drawbacks. In the first place, light never stops hitting the detector, which continues to accumulate signal during the readout of the latent image, deteriorating the results. The impossibility of blocking the light’s access to the CCD (other than by covering the telescope) makes it difficult to obtain the dark frames needed to correct the dark current.

More advisable, but also more expensive, are mechanical shutters, which allow physical and safe control of light access to the interior of the CCD camera. In certain cases, mechanical shutters may also give rise to difficulties.

Electronic shutters are, today, the most widespread among amateur cameras. That is why it is worth briefly commenting on their operation.

There are two types of electronic shutters: frame-transfer (“matrix transfer”) shutters and interline-transfer (“line transfer”) shutters. In both cases, the exposure begins when the constant scanning of the detector’s charge ceases, but they differ in the way the shot is interrupted at the end of the integration time.

  1. Frame-transfer (matrix-transfer) CCD: at the end of the exposure, the entire latent image is transferred as quickly as possible (a fraction of a millisecond) to an adjacent region of the detector that is protected by a mask and therefore receives no light: the storage region. The latent image is then read out from the storage region; meanwhile, light continues to strike the sensitive area during the transfer, which deteriorates the resulting image when bright objects such as the Moon or planets are observed. In addition, the frame-transfer shutter involves sacrificing a certain surface of the CCD, which must be protected from light to serve as the storage region.
  2. Interline-transfer (line-transfer) CCD: detectors equipped with interline electronic shutters also sacrifice part of their sensitive surface. In this case, each column of sensitive pixels has an adjacent column covered with an opaque mask. At the end of the exposure, the stored charges are transferred almost instantaneously to the protected columns, and the readout is carried out from them, without the light still falling on the detector posing a problem. The interline-transfer electronic shutter does not cause difficulties when observing bright objects but, apart from wasting sensitive surface, it has the serious disadvantage of increasing the dead zones between pixels, which deteriorates the sampling of stellar images and therefore degrades the astrometric and photometric results.

Despite its higher cost and its specific problems, the mechanical shutter is the best option, as evidenced by the fact that it is used by all professional CCDs.

To finish the list of characteristics of CCD detectors, we will comment on “anti-blooming”, a feature incorporated by some amateur cameras that prevents saturated pixels from spilling electrons into adjacent pixels within the same column. It consists of placing, next to each sensitive row of the detector, a non-sensitive row polarized to a potential such that it drains the excess electrons from saturated pixels.

Anti-blooming provides more aesthetically satisfying results, but it has the disadvantage of limiting the range of illumination over which the detector behaves linearly. It also involves sacrificing a certain sensitive surface of the CCD, thereby increasing the dead space between pixels, which deteriorates the sampling of stellar images and, like the interline electronic shutter, affects the scientific results. No professional camera has anti-blooming. Some amateur cameras incorporate both an interline electronic shutter and anti-blooming: it is necessary to be aware of the scientific limitations involved in this design.

If you have made it this far, the interesting part comes next, now that the technical aspects are quite clear.
